Compute integral $\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x$ I want to solve $\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x$ but I get the wrong results: $$ \int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x = \int_{-6}^6 \! \frac{16e^{4x} + 16e^{2x} + 4}{e^{2x}} \, \mathrm{d} x $$ $$ = \left[ \frac{(4e^{4x} + 8e^{2x} + 4x)2}{e^{2x}} \right]_{-6}^6 = \left[ \frac{8e^{4x} + 16e^{2x} + 8x}{e^{2x}} \right]_{-6}^6 $$ $$ = (\frac{8e^{24} + 16e^{12} + 48}{e^{12}}) - (\frac{8e^{-24} + 16e^{-12} - 48}{e^{-12}}) $$ $$ = e^{-12}(8e^{24} + 16e^{12} + 48) - e^{12}(8e^{-24} + 16e^{-12} - 48) $$ $$ = 8e^{12} + 16 + 48e^{-12} - (8e^{-12} + 16 - 48e^{12}) $$ $$ = 8e^{12} + 16 + 48e^{-12} - 8e^{-12} - 16 + 48e^{12}) $$ $$ = 56e^{12} + 56e^{-12} $$ Where am I going wrong?
$$ \int_{-6}^6 \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x= \int_{-6}^6 \frac{16e^{4x} + 16e^{2x}+ 4}{e^{2x}} \, \mathrm{d} x= \int_{-6}^6 16e^{2x} + 16+ 4e^{-2x} \, \mathrm{d} x= \left[ 8e^{2x} + 16x-2e^{-2x} \right]_{-6}^6= 8(e^{12}-e^{-12}) + 16\cdot 12 -2(e^{-12}-e^{12})= 192+ 10 e^{12}-10 e^{-12} $$ You can check both the indefinite and the definite integral at WolframAlpha. I am not sure where the mistake in your solution is (since I do not understand exactly what you have done), but most probably you used $\int \frac{f(x)}{g(x)} \, \mathrm{d} x = \frac{\int f(x) \, \mathrm{d} x}{\int g(x)\, \mathrm{d} x}$, as suggested by Gerry's comment. This formula is incorrect.
{ "language": "en", "url": "https://math.stackexchange.com/questions/173571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Question about direct sum of Noetherian modules is Noetherian Here is a corollary from Atiyah-Macdonald: Question 1: The corollary states that finite direct sums of Noetherian modules are Noetherian. But they prove that countably infinite sums are Noetherian, right? (so they prove something stronger) Question 2: I have come up with the following proof of the statement in the corollary, can you tell me if it's correct? Thank you: Assume $M_i$ are Noetherian and let $(\bigoplus_i L_i)_n$ be an increasing sequence of submodules in $\bigoplus_i M_i$. Then in particular, $L_{in}$ is an increasing sequence in $M_i$ and hence stabilises, that is, for $n$ greater some $N_i$, $L_{in} = L_{in+1} = \dots $. Now set $N = \max_i N_i$. Then $(\bigoplus_i L_i)_n$ stabilises for $n> N$ and is equal to $\bigoplus_i L_i$, where $L_i = L_{iN_i}$. This proves that finite direct sums of Noetherian modules are Noetherian so it's a bit weaker. But if it's correct it proves the corollary.
You are getting mixed up: the proof says that given any finite number $n$ of Noetherian modules, the direct sum of these $n$ modules is Noetherian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/173614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Check if two 3D vectors are linearly dependent I would like to determine with code (C++ for example) if two 3D vectors are linearly dependent. I know that if I could determine that the expression $v_1 = k \cdot v_2$ is true then they are linearly dependent; they are linearly independent otherwise. I've tried to construct an equation system to determine that, but since there could be zeros anywhere it gets very tricky and could end with divisions by zero and similar. I've also thought about using some matrices/determinants, but since the matrix would look like: $$ \begin{matrix} x_1 & y_1 & z_1\\ x_2 & y_2 & z_2\\ \end{matrix} $$ I don't see an easy way to check for the linear dependency... any ideas? Thanks!
If $x_1/x_2 = y_1/y_2 = z_1/z_2$ then $v_1 = k v_2.$ Care needs to be taken with the zeros, though: if a component of one vector is zero, then the corresponding component of the other vector must also be zero.
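A division-free way to implement this check (my own sketch, not from the answer) is to cross-multiply the ratios: $x_1/x_2 = y_1/y_2$ becomes $x_1 y_2 = x_2 y_1$, which handles the zero cases uniformly. The epsilon tolerance is an assumption for floating-point input.

```cpp
#include <cmath>

// Two 3D vectors are linearly dependent exactly when all three
// cross-multiplied component pairs agree (equivalently, when their
// cross product vanishes). No divisions, so zeros need no special case.
bool linearlyDependent(double x1, double y1, double z1,
                       double x2, double y2, double z2,
                       double eps = 1e-12) {
    return std::fabs(x1 * y2 - x2 * y1) < eps &&
           std::fabs(y1 * z2 - y2 * z1) < eps &&
           std::fabs(x1 * z2 - x2 * z1) < eps;
}
```

With exact integer input one would compare against zero instead of `eps`.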
{ "language": "en", "url": "https://math.stackexchange.com/questions/173710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What real numbers are in the Mandelbrot set? The Mandelbrot set is defined over the complex numbers and is quite complicated. It's defined by the complex numbers $c$ that remain bounded under the recursion: $$ z_{n+1} = z_n^2 + c,$$ where $z_1 = 0$. If $c$ is real, then above recursion will remain real. So for what values of $c$ does the recursion remain bounded?
If $z$ is a complex number whose distance to the origin is bigger than both $|c|$ and $2$, then $z$ escapes under iteration of $z^2+c$. This is easy to demonstrate, so the recursion remains bounded only inside the closed ball of radius $2$; in fact, the Mandelbrot set is contained in $[-2,0.7]\times[-1.2,1.2]$.
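For real $c$ this escape criterion can be tested numerically; a minimal sketch (the iteration cap is a heuristic assumption, not a proof of boundedness):

```cpp
// Iterate z -> z^2 + c from z = 0 for a real parameter c, using the
// escape radius 2 from the answer: once |z| > 2 the orbit must diverge.
// A "bounded" verdict after maxIter steps is heuristic, not a proof.
bool staysBounded(double c, int maxIter = 1000) {
    double z = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (z > 2.0 || z < -2.0) return false;
    }
    return true;
}
```

Sampling real values of $c$ this way suggests that the bounded set on the real axis is the interval $[-2, 1/4]$.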
{ "language": "en", "url": "https://math.stackexchange.com/questions/173764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Countable union of countable sets (ZF) Let $\{E_n\}_{n\in \mathbb{N}}$ be a sequence such that every $E_n$ is countable. Let $g_n : \mathbb{N} \to E_n$ be a bijection for every $n\in \mathbb{N}$. Let $\alpha (n,k) = g_n(k)$. Let $A$ be the union of the $E_n$'s. Then $\alpha : \mathbb{N} \times \mathbb{N} \to A$ is a surjective function. Since $\mathbb{N} \times \mathbb{N}$ is equipotent with $\mathbb{N}$, there exists a surjective function $f: \mathbb{N}\to A$. Let $[n]=\{m\in \mathbb{N} \mid f(m)=f(n)\}$ for every $n\in \mathbb{N}$. Since $f$ is surjective, for every $n\in \mathbb{N}$, $[n]\ne \emptyset$. Since $[n] \subset \mathbb{N}$, $[n]$ is well-ordered. Let $l_n$ designate the least element of $[n]$. Let $B=\{l_n \in \mathbb{N} \mid n\in \mathbb{N}\}$. Then $f|_B : B\to A$ is a bijection. Since $B\subset \mathbb{N}$, $B$ is at most countable. Since $A$ is infinite, $B$ is countable, hence $A$ is countable. I don't know where I used AC in my argument. Help!
You used countable choice when you chose $g_n$. It is true that for every countable set there is an injection from the said set into $\mathbb N$, however to choose exactly one for every set requires choice. If, however, you are given the injections then the union is in fact countable, since there is no need to choose bijections. Note that if we only wish to take union over finitely many countable sets then we can choose finitely many injections and the argument follows. Similarly the finite product of countable sets is countable and non-empty, whereas infinite products could be empty even if all sets are finite. See also the last part in this answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/173852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Functional and Linear Functional May I know what the distinction is between functional analysis and linear functional analysis? I did a search online and came to the conclusion that linear functional analysis is not functional analysis, and am getting confused by them. When I look at the book Linear Functional Analysis by Joan Cerda, published by the AMS, there is not much difference in its content compared to other titles with just the heading Functional Analysis. Hope someone can clarify for me. Thank you.
They probably mean the same thing. It's a matter of emphasis. Cerda may be trying to distinguish ordinary functional analysis from nonlinear functional analysis, where nonlinear maps are studied. Bollobás does something similar; his book on functional analysis is entitled Linear Analysis and I think this is again to emphasize the linear nature of the subject.
{ "language": "en", "url": "https://math.stackexchange.com/questions/173912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Limit of a subsequence I studied a definition, and I didn't find it in any other book (but those I use). It's like a point of closure, but for sequences. We call $a$ a "value of closure" of $(x_n)$ when $a$ is the limit of a subsequence of $(x_n)$. The question is: for a real number $a$ to be a "value of closure" it is necessary and sufficient that for all $\epsilon >0$ and all $k \in \mathbb{N}$ there is $n > k$ such that $|x_n -a|< \epsilon$. I could do the first part ($a$ a value of closure $\Rightarrow |x_n - a|<\epsilon$) but not the $\Leftarrow$. Thanks for any help!
To expand on the comment, fix $n_1=1$ for example. We can find $n_2>n_1$ such that $|x_{n_2}-a|<\frac 12$. Assume that $n_1<n_2<\dots<n_k$ have been constructed. For $\varepsilon=2^{-(k+1)}$, we can find $n_{k+1}>n_k$ such that $$|x_{n_{k+1}}-a|\leq 2^{-(k+1)}.$$ Hence we have constructed a subsequence $\{x_{n_k}\}$ such that $|x_{n_k}-a|\leq 2^{-k}$ for all integers $k$. This proves that $a$ is a value of closure of $\{x_n\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/173973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Are there "one way" integrals? If we suppose that we can start with any function we like, can we work "backwards" and differentiate the function to create an integral that is hard to solve? To define the question better, let's say we start with a function of our choosing, $f(x)$. We can then differentiate the function with respect to $x$ do get $g(x)$: $$g(x) = f'(x)$$ This, in turn, implies, under appropriate conditions, that the integral of $g(x)$ is $f(x)$: $$\int_a^b { g(x) dx } = [f(x)]_a^b$$ I'm wondering what conditions are appropriate to allow one to easily get a $g(x)$ and $f(x)$ that assure that $f(x)$ can't be easily found from $g(x)$. SUMMARY OF THE QUESTION Can we get a function, $g(x)$, that is hard to integrate, yet we know the solution to? It's important that no one else should be able to find the solution, $f(x)$, given only $g(x)$. Please help! POSSIBLE EXAMPLE This question/integral seems like it has some potential. DEFINITION OF HARDNESS The solution to the definite integral can be returned with the most $n$ significant digits correct. Then it is hard to do this if the time it takes is an exponential number of this time. In other words, if we get the first $n$ digits correct, it would take roughly $O(e^n)$ seconds to do it.
Find two large primes $p$ and $q$, call their product $N$; then it's easy for you to compute $\int_{J(N)}1\,dx$, where $J(N)$ is the interval between the prime factors of $N$, but for anyone else to compute the integral, she would have to factor $N$ first.
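A toy illustration of this scheme (with primes far too small for any real "hardness"): the holder of $p$ and $q$ evaluates the integral as $q - p$ instantly, while anyone else must first factor $N$.

```cpp
// Returns the smallest prime factor of n by trial division -- the
// "hard" step an outsider faces. For cryptographic sizes this naive
// search (and every known factoring method) becomes infeasible.
long long smallestFactor(long long n) {
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0) return d;
    return n;
}

// With secret primes p = 101 and q = 113, the published value is
// N = p*q = 11413, and the integral over J(N) = [p, q] of 1 dx
// is simply q - p = 12 for anyone who knows the factorization.
```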
{ "language": "en", "url": "https://math.stackexchange.com/questions/174010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Eigenvalue of a matrix Let $A$ be an $n\times n$ matrix and let $I$ be the $n\times n$ identity matrix. Show that if $A^{2} = I$, and $A \neq I$, then $\lambda =-1$ is an eigenvalue of $A$. This problem doesn't seem that too hard to solve, but I am stuck near the end. Here is what I have done so far. Since $A^{2}=I$, then by definition $Ax=\lambda x$, where $x$ is an eigenvector of $A$ and $\lambda$ is an eigenvalue of $A$. It follows that $x=Ix=A^{2}x=A(Ax)=A(\lambda x)= \lambda(Ax)=\lambda^{2}x$. (Now I was going to use the fact that since $A \neq I$, that $x\neq 0$, so we get that $\lambda = 1$ or $-1$). This is where I am stuck.
If $A^2=I$, then $A$ satisfies the polynomial $t^2-1=(t-1)(t+1)$. Therefore, the minimal polynomial of $A$ divides $(t-1)(t+1)$. If the minimal polynomial is $t-1$, then that means that $A-I=0$, so $A=I$; but we are assuming that $A\neq I$, so this is not the case. That means that the minimal polynomial is divisible by $t+1$; since every irreducible factor of the minimal polynomial divides the characteristic polynomial, it follows that $t+1$ divides the characteristic polynomial of $A$, and hence that $-1$ is an eigenvalue of $A$, as desired.
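A quick numerical sanity check of the statement (my own example, not from the answer): the swap matrix $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ satisfies $A^2 = I$ and $A \neq I$, and $\det(A + I) = 0$ certifies that $-1$ is an eigenvalue.

```cpp
// For a 2x2 matrix, lambda is an eigenvalue iff det(A - lambda*I) = 0;
// here we evaluate det(A + I) (i.e. lambda = -1) for the swap matrix
// A = [[0, 1], [1, 0]], which satisfies A^2 = I and A != I.
double swapMatrixDetPlusI() {
    double a00 = 0.0, a01 = 1.0, a10 = 1.0, a11 = 0.0;  // A
    double m00 = a00 + 1.0, m01 = a01;                  // A + I
    double m10 = a10, m11 = a11 + 1.0;
    return m00 * m11 - m01 * m10;                       // det(A + I)
}
```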
{ "language": "en", "url": "https://math.stackexchange.com/questions/174122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
The order of the Galois group of a cyclotomic field over a finite prime field Possible Duplicate: For what $(n,k)$ there exists a polynomial $p(x) \in F_2[x]$ s.t. $\deg(p)=k$ and $p$ divides $x^n-1$? Galois group $\operatorname{Gal}(K(\mu_n) / K) \subseteq (\mathbb{Z} / (n) )^*$ Let $p$ be a prime number. Let $F = \mathbb{Z}/p\mathbb{Z}$. Let $l$ be an odd prime number such that $l \neq p$. Let $X^l - 1 \in F[X]$. Let $K$ be the splitting field of $X^l - 1$. Can we determine the degree of $K/F$? This is a related question.
Let $f$ be the smallest positive integer such that $p^f \equiv 1$ (mod $l$). Let $\Omega$ be the algebraic closure of $F$. Let $\omega \neq 1$ be a root of $X^l - 1$ in $\Omega$. Then, by my answer to this question, the minimal polynomial of $\omega$ over $F$ has degree $f$. Since $F(\omega)$ is the splitting field of $X^l - 1$, the degree of $K/F$ is $f$.
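The degree can be computed directly as the multiplicative order of $p$ modulo $l$; a small sketch (assuming $l \nmid p$, which holds here since $l \neq p$ are primes):

```cpp
// Computes f, the smallest positive integer with p^f = 1 (mod l);
// by the answer this is the degree [K : F] of the splitting field
// of X^l - 1 over F = Z/pZ. Precondition: l does not divide p.
int orderMod(int p, int l) {
    int f = 1;
    long long x = p % l;
    while (x != 1) {
        x = x * p % l;
        ++f;
    }
    return f;
}
```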
{ "language": "en", "url": "https://math.stackexchange.com/questions/174179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solutions to $e^z+z-a=0$ with $Re(z)>0$, where $a$ is a real number larger than 1 In an exercise I am required to show that the only solution to $e^z+z-a=0$ in the right half plane is a real one, where $a$ is a fixed real number larger than 1. I am not able to work it out. I tried to use the Argument Principle to compute the number of zeros of this function, but I cannot make my integral converge. Could anybody give me a hand? Thanks very much!
Let $a=\log(3\pi/2)$, so $a$ is real and $a\gt1$. Let $z=\log(3\pi/2)+(3\pi/2)i$, so the real part of $z$ is positive, and $z$ is not real. Then $$\eqalign{e^z+z&=e^{\log(3\pi/2)}e^{(3\pi/2)i}+z=(3\pi/2)(\cos(3\pi/2)+i\sin(3\pi/2))+z\cr&=-(3\pi/2)i+\log(3\pi/2)+(3\pi/2)i=a\cr}$$ So, for $a=\log(3\pi/2)$, there is a nonreal solution in the right half-plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/174257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
This classic from Euclid's Elements, is it accepted everywhere? I was reading about linear vector spaces. When doing some exercises to prove statements based on the properties defined for linear vector spaces, I suddenly noticed that, outside the things defined, I was using a common notion without proof. This notion also surfaced when I was studying group theory. After giving it a thought I came to the conclusion that I've used it in all systems which are modeled after Euclid. I mean, first I give some postulates, then I derive statements from those postulates. Euclid also has it as a "common notion": If equals are added to equals, the wholes are equal. So I started wondering whether anybody has challenged this or attempted to build a system without it.
If this were not the case, how could we say that the things were equal in a sense meaningful for our purposes? In different areas of mathematics, we have different notions of equality that generally mean 'the same as far as everything we're currently concerned with'. For example, if we have two Riemann surfaces and we were studying them from the point of view of topology, we might call them equal if they were homeomorphic (or maybe homotopy equivalent); from the point of view of differential geometry we would consider them equal if they were diffeomorphic to each other; whereas from the point of view of complex geometry they would need to be biholomorphic. We may not be interested in the structure at all, and consider them equal if their underlying sets have the same cardinality. If we had a notion of equality that did not guarantee equivalence for all the things we were interested in, it would not be a good notion of equality for studying those phenomena.
{ "language": "en", "url": "https://math.stackexchange.com/questions/174385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The residue field of a prime ideal of a cyclotomic number field Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$. Let $K = \mathbb{Q}(\zeta)$. Let $A$ be the ring of algebraic integers in $K$. Let $p \ne l$ be a prime number. Let $f$ be the order of $p$ modulo $l$, i.e. the smallest positive integer such that $p^f \equiv 1$ (mod $l$). Let $P$ be a prime ideal of $A$ lying over $p$. My question: Is the following proposition true? If yes, how would you prove this? Proposition Let $\alpha \in A$. Then there exist rational integers $a_0, ..., a_{f-1}$ such that $\alpha \equiv a_0 + a_1\zeta + ... + a_{f-1}\zeta^{f-1}$ (mod $P$). Here $a_0, ..., a_{f-1}$ are uniquely determined mod $p$.
By my answer to this question, the degree of $P$ is $f$. Let $\omega$ be the image of $\zeta$ by the canonical homomorphism $\mathbb{Z}[\zeta] \rightarrow \mathbb{Z}[\zeta]/P$. Clearly $\mathbb{Z}[\zeta]/P = (\mathbb{Z}/p\mathbb{Z})(\omega)$. Hence every element $x \in \mathbb{Z}[\zeta]/P$ can be uniquely written as $x = a_0 + a_1\omega + \cdots + a_{f-1} \omega^{f-1}$, where $a_i \in \mathbb{Z}/p\mathbb{Z}$. This completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/174509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
For a Maths Major, advice on the perfect book to learn Algorithms and Data Structures Purpose: Self-learning, NOT for a course or exam. Prerequisite: Done a course in basic data structures and algorithms, but it was too basic, not many things. Major: Bachelor, Mathematics. My opinion: I prefer a more compact, mathematical, rigorous book on algorithms and data structures. Since it's for long-term self-learning, not for an exam or course, related factors should not be considered (e.g. learning curve, time usage); only how well the book trains algorithms and data structures. After searching on Amazon, the following three are highly reputed. (1) Knuth, The Art of Computer Programming, Volume 1-4A This book-set contains the most important things in algorithms, and is very mathematically rigorous. I would prefer to use this to learn algorithms all in one step. (2) Cormen, Introduction to Algorithms It's more like in-between (1) and (3). (3) Skiena, The Algorithm Design Manual More introductory and practical compared with (1); is it OK to use this as a warm-up, then read (1), and skip (2)? Desirable answer: Advice or more recommended books
I would suggest you take (2) (Cormen et al., Introduction to Algorithms) and combine it with the online video lectures for this book. The book is very formal and offers a variety of exercises.
{ "language": "en", "url": "https://math.stackexchange.com/questions/174579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Ranking students from 2 separate exams on a single scale. Is there a way to rank 2 student groups who took 2 separate exams on a single scale using z-scores, given that there are enough students in each group to consider each score distribution a normal distribution? For instance, say 2 student groups answered 2 separate maths papers. Each of these groups has at least 2000 students in it. We want to give 100 scholarships to those students, so we need to rank them accordingly and give the scholarship to the top 100 of them. They now know their marks, and we need a way to rank them on a single scale.
It depends on your prior beliefs about these students. If they're all from the same population and just happened to take two different tests, you can compare their $z$ scores from the separate distributions. On the other hand, if there's reason to believe that the two groups differ statistically, then there's no way of knowing how to compare the tests. Simplifying assumptions might be to ignore the differences between the groups and use the separate $z$ scores anyway, or to ignore the differences in difficulty between the two tests and compare their test results directly, but you have no systematic way of knowing how good either of these approximations is.
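Under the answer's first simplifying assumption (both groups drawn from the same population), the comparison can be sketched as follows; the use of the population standard deviation is my own choice for the sketch:

```cpp
#include <cmath>
#include <vector>

// Mean of a group's exam marks.
double mean(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s / v.size();
}

// z-score of one student's mark relative to their own group's exam,
// using the group mean and (population) standard deviation. Students
// from the two different exams are then ranked by these z-scores.
double zscore(double mark, const std::vector<double>& group) {
    double m = mean(group), s = 0.0;
    for (double x : group) s += (x - m) * (x - m);
    return (mark - m) / std::sqrt(s / group.size());
}
```

Ranking then means sorting all students from both groups by z-score and taking the top 100.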
{ "language": "en", "url": "https://math.stackexchange.com/questions/174720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $\phi \in C^1_c(\mathbb R)$ then $ \lim_n \int_\mathbb R \frac{\sin(nx)}{x}\phi(x)\,dx = \pi\phi(0)$. Let $\phi \in C^1_c(\mathbb R)$. Prove that $$ \lim_{n \to +\infty} \int_\mathbb R \frac{\sin(nx)}{x}\phi(x) \, dx = \pi\phi(0). $$ Unfortunately, I didn't manage to give a complete proof. First of all, I fixed $\varepsilon>0$. Then there exists a $\delta >0$ s.t. $$ \vert x \vert < \delta \Rightarrow \vert \phi(x)-\phi(0) \vert < \frac{\varepsilon}{\pi}. $$ Now, I would use the well-known fact that $$ \int_\mathbb R \frac{\sin x}{x} \, dx = \pi. $$ On the other hand, by substitution rule, we have also $$ \int_\mathbb R \frac{\sin(nx)}{x} \, dx = \int_\mathbb R \frac{\sin x}{x} \, dx = \pi. $$ Indeed, I would like to estimate the quantity $$ \begin{split} & \left\vert \int_\mathbb R \frac{\sin(nx)}{x}\phi(x) \, dx - \pi \phi(0) \right\vert = \\ & = \left\vert \int_\mathbb R \frac{\sin(nx)}{x}\phi(x) \, dx - \phi(0)\int_\mathbb R \frac{\sin{(nx)}}{x}dx \right\vert \le \\ & \le \int_\mathbb R \left\vert \frac{\sin(nx)}{x}\right\vert \cdot \left\vert \phi(x)-\phi(0) \right\vert dx \end{split} $$ but the problem is that $x \mapsto \frac{\sin(nx)}{x}$ is not absolutely integrable over $\mathbb R$. Another big problem is that I don't see how to use the hypothesis $\phi$ has compact support. I think that I should use dominated convergence theorem, but I've never done exercises about this theorem. Would you please help me? Thank you very much indeed.
Assume that $\phi(x)$ is supported in $|x|< L$. Since $\phi$ is differentiable, $\frac{\phi(x)-\phi(0)}{x}$ is bounded and therefore integrable on $|x|<L$. $$ \begin{align} \int_{-\infty}^\infty\frac{\sin(nx)}{x}\phi(x)\,\mathrm{d}x &=\pi\,\phi(0)+\int_{-\infty}^\infty\sin(nx)\frac{\phi(x)-\phi(0)}{x}\,\mathrm{d}x\\ &=\pi\,\phi(0)+\color{#C00000}{\int_{-L}^L\sin(nx)\frac{\phi(x)-\phi(0)}{x}\,\mathrm{d}x}\\ &-2\,\phi(0)\color{#00A000}{\int_{nL}^\infty\frac{\sin(x)}{x}\,\mathrm{d}x}\\ \end{align} $$ As $n\to\infty$, the red integral vanishes by the Riemann-Lebesgue Lemma and the green integral vanishes because The Dirichlet Integral converges. This leaves us with $$ \lim_{n\to\infty}\int_{-\infty}^\infty\frac{\sin(nx)}{x}\phi(x)\,\mathrm{d}x=\pi\,\phi(0) $$
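A numerical sanity check of the limit (my own test setup, not part of the proof): take the $C^1$ bump $\phi(x) = (1-x^2)^2$ on $[-1,1]$, zero outside, so $\phi(0) = 1$ and the integrals should approach $\pi$.

```cpp
#include <cmath>

// Test function: phi(x) = (1 - x^2)^2 on [-1, 1], zero outside; this
// is C^1 with compact support, and phi(0) = 1.
double phi(double x) {
    if (std::fabs(x) >= 1.0) return 0.0;
    double u = 1.0 - x * x;
    return u * u;
}

// Simpson's rule for the integral of sin(n x)/x * phi(x) over [-1, 1]
// (= over all of R, since phi vanishes outside). steps must be even
// and large compared to n, since the integrand oscillates on scale 1/n.
double oscillatoryIntegral(double n, int steps) {
    double a = -1.0, h = 2.0 / steps, sum = 0.0;
    for (int i = 0; i <= steps; ++i) {
        double x = a + i * h;
        double f = (x == 0.0) ? n * phi(0.0)              // sinc limit at 0
                              : std::sin(n * x) / x * phi(x);
        double w = (i == 0 || i == steps) ? 1.0 : (i % 2 ? 4.0 : 2.0);
        sum += w * f;
    }
    return sum * h / 3.0;
}
```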
{ "language": "en", "url": "https://math.stackexchange.com/questions/174779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
What is the Psi(x) variable binding operator? In the Free Variable article on Wikipedia, it lists these: as variable-binding operators. I have seen all of them during my math studies, except for the psi operator. What does $\psi x$ mean in this context?
I don't know what was intended by $\psi$ here, but some Wikipedia archaeology reveals that it was introduced in this edit, and the same user tried to remove it again one minute later. They made a mess out of the removal, and Michael Hardy undid the mess, leaving in the $\psi x$ with no explanation. None is likely forthcoming, because the user who added the $\psi x$, back in August 2008, has not been back to Wikipedia since. In short, it is most likely a piece of Wikipedia nonsense. Unless someone posts a definitive answer here, I will shortly remove the $\psi x$ from the Wikipedia article. As a consolation prize, here are some other variable binding operators you may not be familiar with, which are not in the Wikipedia article: * *Robin Milner's $\pi$-calculus uses "$\nu x. F$" to denote an expression F in which $x$ is instantiated to a "new" variable that has never been used before within the scope of the current computation. *Whitehead and Russell use "$\iota x.\Phi(x)$" to denote the unique $x$ satisfying some description $\Phi(x)$. For example, "$\iota x. x$ is the King of Swaziland" is an expression denoting the King of Swaziland. *Hilbert used "$\epsilon x.\phi(x)$" to denote "some value for which $\phi$ is true". That is, for any property $\phi$, if $\exists v.\phi(x)$ is true,then so is $\phi(\epsilon x.\phi(x))$. *Similarly, Hilbert used $\mu x.\phi(x)$ to denote the smallest natural number for which $\phi$ is true. *Category theorists often use $\exists!x.\phi(x)$ as an abbreviation for $\exists x\forall y.\phi(x) \land (\phi(y)\implies y=x)$, or some equivalent variation of it. It says that there is exactly one $x$ for which $\phi(x)$ holds. Commonly-used quantifiers that do not seem to have any standard compact notation include "almost everywhere" and "all but finitely many".
{ "language": "en", "url": "https://math.stackexchange.com/questions/174853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Does $z ^k+ z^{-k}$ belong to $\Bbb Z[z + z^{-1}]$? Let $z$ be a non-zero element of $\mathbb{C}$. Does $z^k + z^{-k}$ belong to $\mathbb{Z}[z + z^{-1}]$ for every positive integer $k$? Motivation: I came up with this problem from the following question. Maximal real subfield of $\mathbb{Q}(\zeta )$
Let's go by induction as in @Arturo Magidin's answer. The result holds for $k=0,1$. Assume $z^k+z^{-k} \in \mathbb{Z}[z+z^{-1}]$ for $0\le k\le n$. But $$z^{n+1}+z^{-(n+1)} = (z^n+z^{-n})(z+z^{-1}) - (z^{n-1} + z^{-(n-1)}),$$ and so $z^{n+1}+z^{-(n+1)} \in \mathbb{Z}[z+z^{-1}]$.
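The induction is effectively a three-term recurrence: with $t = z + z^{-1}$, the sequence $p_0 = 2$, $p_1 = t$, $p_{k+1} = t\,p_k - p_{k-1}$ consists of integer polynomials in $t$ and satisfies $p_k = z^k + z^{-k}$. A quick numerical check of this identity (my own sketch):

```cpp
#include <complex>

// Evaluates z^k + z^{-k} using only t = z + 1/z and the recurrence
// p_{k+1} = t * p_k - p_{k-1}, which is the identity in the answer.
std::complex<double> powerSum(std::complex<double> z, int k) {
    std::complex<double> t = z + 1.0 / z;
    std::complex<double> prev = 2.0, cur = t;   // p_0 and p_1
    if (k == 0) return prev;
    for (int i = 1; i < k; ++i) {
        std::complex<double> next = t * cur - prev;
        prev = cur;
        cur = next;
    }
    return cur;
}
```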
{ "language": "en", "url": "https://math.stackexchange.com/questions/174911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
$\mathbb{CP}^1$ is compact? $\mathbb{CP}^1$ is the set of all one dimensional subspaces of $\mathbb{C}^2$; if $(z,w)\in \mathbb{C}^2$ is non-zero, then its span is a point in $\mathbb{CP}^1$. Let $U_0=\{[z:w]:z\neq 0\}$ and $U_1=\{[z:w]:w\neq 0\}$, where for non-zero $(z,w)\in \mathbb{C}^2$, $[z:w]=[\lambda z:\lambda w]$, $\lambda\in\mathbb{C}^{*}$, is a point in $\mathbb{CP}^1$. The map $\phi_0:U_0\rightarrow\mathbb{C}$ is defined by $$\phi_0([z:w])=w/z$$ and the map $\phi_1:U_1\rightarrow\mathbb{C}$ is defined by $$\phi_1([z:w])=z/w$$ Now, could anyone tell me why $\mathbb{CP}^1$ is the union of the two closed sets $\phi_0^{-1}(D)$ and $\phi_1^{-1}(D)$, where $D$ is the closed unit disk in $\mathbb{C}$, and why $\mathbb{CP}^1$ is compact?
The maps you gave are the coordinate charts on $\mathbb{C}\mathbb{P}^1$ that make it into a manifold. In particular, they are homeomorphisms onto $\mathbb{C}$. If we take $\phi_0^{-1}(D)$, we get all points $[1:z]$ with $z\in D$. Similarly, $\phi_1^{-1}(D)$ is all points of the form $[z:1]$ with $z\in D$. Together, these sets cover $\mathbb{C}\mathbb{P}^1$: for any $[z:w]$, at least one of $|w/z|\le 1$ or $|z/w|\le 1$ holds. Since the inverse maps $\phi_i^{-1}$ are continuous and $D$ is compact, $\phi_0^{-1}(D)$ and $\phi_1^{-1}(D)$ are compact, as the image of a compact set under a continuous map is compact. It's not hard to show that if a space is a union of two compact sets, it is compact, so we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/174987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Opposite Clifford-Algebra For a symmetric bilinear form $\beta$ on a $\mathbb{K}$-vector space $V$, the associated Clifford algebra $Cl(\beta)$ is the associative algebra with unit subject to the relations $$v\cdot v=\beta(v,v)\cdot 1\qquad\forall v\in V.$$ It is then often said that $Cl(-\beta)$ is isomorphic to the opposite algebra $Cl(\beta)^\text{op}$. Why is that? Cheers, Robert
There are different conventions for Clifford algebras, which is confusing. If I understand your question correctly: if the convention for the bilinear form is $v \cdot v=\beta(v,v)\cdot 1\quad\forall v\in V$, then the algebra is denoted $\mathcal{Cl}_{p,q}$. In the opposite convention, $v \cdot v= - \beta(v,v)\cdot 1\quad\forall v\in V$, it is equivalent to the algebra $\mathcal{Cl}_{q,p}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Question about $p$-adic numbers and $p$-adic integers I've been trying to understand what $p$-adic numbers and $p$-adic integers are today. Can you tell me if I have it right? Thanks. Let $p$ be a prime. Then we define the ring of $p$-adic integers to be $$ \mathbb Z_p = \{ \sum_{k=m}^\infty a_k p^k \mid m \in \mathbb Z, a_k \in \{0, \dots, p-1\} \} $$ That is, the $p$-adic integers are a bit like formal power series with the indeterminate $x$ replaced with $p$ and coefficients in $\mathbb Z / p \mathbb Z$. So for example, a $3$-adic integers could look like this: $1\cdot 1 + 2 \cdot 3 + 1 \cdot 9 = 16$ or $\frac{1}{9} + 1 $ and so on. Basically, we get all natural numbers, fractions of powers of $p$ and sums of those two. This is a ring (just like formal power series). Now we want to turn it into a field. To this end we take the field of fractions with elements of the form $$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$ for $\sum_{k=r}^\infty b_k p^k \neq 0$. We denote this field by $\mathbb Q_p$. Now as it turns out, $\mathbb Q_p$ is the same as what we get if we take the ring of fractions of $\mathbb Z_p$ for the set $S=\{p^k \mid k \in \mathbb Z \}$. This I don't see. Because then this would mean that every number $$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$ can also be written as $$ \frac{\sum_{k=m}^\infty a_k p^k}{p^r}$$ and I somehow don't believe that. So where's my mistake? Thanks for your help.
To define $\mathbb{Z}_p$ the summations should start at $k = 0$. In particular, it contains no negative powers of $p$. As for your second question, it suffices to show that the inverse of a $p$-adic integer of the form $1 + a_1 p^1 + a_2 p^2 + ...$ is a $p$-adic integer. I'll write this as $1 - pz$ where $z$ is another $p$-adic integer. Then $$\frac{1}{1 - pz} = 1 + pz + p^2 z^2 + p^3 z^3 + ...$$ and this is a $p$-adic integer because only finitely many terms contribute to the coefficient of $p^k$ for any particular $k$. (I really am allowed to take this infinite sum because it converges $p$-adically.)
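The truncated geometric series can be checked with ordinary modular arithmetic: working mod $p^n$, only the first $n$ terms of $1 + pz + p^2z^2 + \cdots$ contribute, and the result really does invert $1 - pz$. A small sketch (the small $p$, $z$, $n$ are assumptions so values fit in a `long long`):

```cpp
// Sums the first n terms of the geometric series for 1/(1 - p*z)
// modulo p^n; all later terms are divisible by p^n and so vanish there.
long long inverseOfOneMinusPZ(long long p, long long z, int n) {
    long long mod = 1;
    for (int i = 0; i < n; ++i) mod *= p;        // p^n
    long long sum = 0, term = 1;                 // term = (p*z)^i mod p^n
    for (int i = 0; i < n; ++i) {
        sum = (sum + term) % mod;
        term = term * (p * z % mod) % mod;
    }
    return sum;
}
```

For example, with $p=3$, $z=2$, $n=5$ this inverts $1 - 6 = -5$ modulo $3^5 = 243$.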
{ "language": "en", "url": "https://math.stackexchange.com/questions/175098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Direct products in the category Rel Please describe direct products in the category Rel.
As Dylan has already mentioned, the category-theoretic product in $\textbf{Rel}$ is the disjoint union of sets. We can verify this by hand: \begin{align} \textbf{Rel}(X, Y \amalg Z) & = \mathscr{P}(X \times (Y \amalg Z)) \\ & \cong \mathscr{P}((X \times Y) \amalg (X \times Z)) \\ & \cong \mathscr{P}(X \times Y) \times \mathscr{P}(X \times Z) = \textbf{Rel}(X, Y) \times \textbf{Rel}(X, Z) \end{align} This isn't too surprising, since $\textbf{Rel}$ is isomorphic to $\textbf{Rel}^\textrm{op}$ and behaves a bit like what one expects for the category of (free) vector spaces over the field of one element. There is a categorical description of the cartesian product of sets within $\textbf{Rel}$, however. To avoid confusion, let us now write $X \otimes Y$ for the cartesian product of $X$ and $Y$. It's not hard to check that this makes $\textbf{Rel}$ into a symmetric monoidal category. Moreover, $\textbf{Rel}(X, Y) = \mathscr{P}(X \otimes Y)$, hence, $$\textbf{Rel}(X \otimes Y, Z) \cong \textbf{Rel}(X, Y \otimes Z)$$ so $\textbf{Rel}$ is even a monoidal closed category! Of course, this means $\textbf{Rel}$ is enriched over itself, with the internal hom being given, confusingly, by the cartesian product. (Note that the representable functor $\textbf{Rel}(1, -)$ is not the forgetful functor!) Thus, we may characterise the cartesian product as follows: it is the unique monoidal product on $\textbf{Rel}$ that has unit $1$ and admits an internal hom. (This is because every set is a coproduct of copies of $1$.) [But how does one characterise $1$...? It's not the terminal object anymore!]
{ "language": "en", "url": "https://math.stackexchange.com/questions/175169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
optimality of 2 as a coefficient in a continued fraction theorem I'm giving some lectures on continued fractions to high school and college students, and I discussed the standard theorem that, for a real number $\alpha$ and integers $p$ and $q$ with $q \not= 0$, if $|\alpha-p/q| < 1/(2q^2)$ then $p/q$ is a convergent in the continued fraction expansion of $\alpha$. Someone in the audience asked if 2 is optimal: is there a positive number $c < 2$ such that, for every $\alpha$ (well, of course the case of real interest is irrational $\alpha$), when $|\alpha - p/q| < 1/(cq^2)$ it is guaranteed that $p/q$ is a convergent to the continued fraction expansion of $\alpha$? Please note this is not answered by the theorem of Hurwitz, which says that an irrational $\alpha$ has $|\alpha - p_k/q_k| < 1/(\sqrt{5}q_k^2)$ for infinitely many convergents $p_k/q_k$, and that $\sqrt{5}$ is optimal: all $\alpha$ whose cont. frac. expansion ends with an infinite string of repeating 1's fail to satisfy such a property if $\sqrt{5}$ is replaced by any larger number. For the question the student at my lecture is asking, an optimal parameter is at most 2, not at least 2.
Roth's theorem, for which he won the 1958 Fields Medal, implies that for all algebraic numbers the exponent limit of 2 is optimal. Theorem (Thue-Siegel-Roth). Suppose $x$ and $\alpha$ are real numbers, $\alpha > 2$. If there are infinitely many reduced fractions $p/q$ satisfying $$ \left|x - \frac{p}{q}\right| < \frac{1}{q^{\alpha}}$$ then $x$ is transcendental. As I understand it, this theorem was key in demonstrating the existence of many more (classes of) transcendental numbers (whose existence was first proposed by Leibniz in the 1600s), capping the line of work on rational approximation that was begun by Liouville in 1844, and marking a major milestone in the identification of classes of transcendental numbers that Euler began in 1748.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Limit of a complex valued integral The question is: Compute $$\lim_{p\rightarrow0^{+}}\int_{C_p}\frac{e^{3iz}}{z^{2}-1}dz$$ Where $$C_p: z = 1 + pe^{i\theta}$$ My initial thought was to use residues, yet the poles are -1 and 1, so they're on the real line (thus the Residue Theorem does not apply). My next thought was to find some way to make the integral work with the Cauchy Integral Formula, but I can't find a way to do that since a partial fraction decomposition won't work in this case. So, I am stuck.
Your contour $C_p$ is a circle of radius $p \to 0$ centred at the point $z=1$. This means that there is a singularity inside the contour (not on its path). Because of this, we may use the residue theorem (at the singularity $z=1$) to evaluate this integral. If $$f(z)=\frac{\exp(3iz)}{z^2-1}=\frac{\exp(3iz)}{(z+1)(z-1)}$$ we see that $$\operatorname{Res}_{z=1}f(z)=\lim_{z \to 1} (z-1) \frac{\exp(3iz)}{(z+1)(z-1)} = \frac{\exp(3i)}{2}$$ Then, by the residue theorem $$\oint_{C_p} f(z)\, dz=2\pi i \operatorname{Res}_{z=1} (f(z)) = 2 \pi i \frac{\exp(3i)}{2}=\pi i \exp(3 i)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/175267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\sum a_n$ converges absolutely, does $\sum (a_n + \cdots + a_n^n)$ converge Suppose $\sum_{n=1}^\infty a_n$ converges absolutely. Does this imply that the series $$\sum_{n=1}^\infty (a_n + \cdots + a_n^n)$$ converges? I believe the answer is yes, but I can't figure out how to prove it. Any help would be appreciated. Thanks.
Another way to see this is to notice that each term is at most $|a_n| + |a_n|^2 + |a_n|^3 + \cdots$ in absolute value, which, when $|a_n| < 1$, equals $\frac{|a_n|}{1-|a_n|}$. Since $\sum a_n$ converges absolutely, $a_n \to 0$, so $|a_n| < \frac{1}{2}$ for all large $n$, and then $\frac{|a_n|}{1-|a_n|} < 2|a_n|$. Comparison with $2\sum |a_n|$ finishes the argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
Determine the sum $T=a_0+a_1+a_2+\cdots+a_{2012}$ Let $\{a_n\}$, $n \ge 0$, be a sequence of positive real numbers, given by $a_0=1$ and $a_m<a_n$ for all $m,n \in \mathbb{N}$, $m<n$, with $a_n=\sqrt{a_{n+1}a_{n-1}}+1$ and $4\sqrt{a_n}=a_{n+1}-a_{n-1}$ for all $n \in \mathbb{N}$, $n\neq 0$. Help me determine the sum $T=a_0+a_1+a_2+\cdots+a_{2012}$.
Take $n=1$ in both equations; you get $a_1=\sqrt{a_2}+1$ and $4\sqrt{a_1}=a_2-1$ (here I substituted $a_0=1$). Solving these gives $a_1=4=2^2$ and $a_2=9=3^2$. Now you can calculate $a_3$ by putting $n=2$ in either of the given equations (they are consistent: you can check by using both), which gives $a_3=16=4^2$. Continuing in this manner, the further terms are $25, 36, 49, \ldots, 2013^2$; in other words, $a_n=(n+1)^2$. Therefore the sum is $T=1^2+2^2+3^2+\cdots+2013^2=\frac{2013\cdot 2014\cdot 4027}{6}=2721031819.$
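The pattern above, $a_n=(n+1)^2$, and the final sum can be sanity-checked in a few lines of Python (purely a verification; integer square roots keep everything exact):

```python
from math import isqrt

a = [(n + 1) ** 2 for n in range(2014)]   # conjectured closed form a_n = (n+1)^2

for n in range(1, 2013):
    # a_n = sqrt(a_{n+1} a_{n-1}) + 1 : the product n^2 (n+2)^2 is a perfect square
    assert a[n] == isqrt(a[n + 1] * a[n - 1]) + 1
    # 4 sqrt(a_n) = a_{n+1} - a_{n-1}
    assert 4 * isqrt(a[n]) == a[n + 1] - a[n - 1]

print(sum(a[:2013]))  # 2721031819
```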
{ "language": "en", "url": "https://math.stackexchange.com/questions/175419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If $f: \mathbb Q\to \mathbb Q$ is a homomorphism, prove that $f(x)=0$ for all $x\in\mathbb Q$ or $f(x)=x$ for all $x$ in $\mathbb Q$. If $f: \mathbb Q\to \mathbb Q$ is a homomorphism, prove that $f(x)=0$ for all $x\in\mathbb Q$ or $f(x)=x$ for all $x$ in $\mathbb Q$. I'm wondering if you can help me with this one?
If $f$ is a homomorphism of rings, we know the kernel of $f$ has to be an ideal of $\mathbb{Q}$, but as $\mathbb{Q}$ is a field, the only ideals of $\mathbb{Q}$ are $0$ and $\mathbb{Q}$ itself. If $\ker f = \mathbb{Q}$, then $f(x)=0$ for all $x$. If $\ker f = 0$, then $f$ is injective, so $f(1)\neq 0$; since $f(1)=f(1\cdot 1)=f(1)^2$, this forces $f(1)=1$. Then $f(n)=n$ for every integer $n$, and $f(p/q)=f(p)/f(q)=p/q$ for every rational, i.e. $f(x)=x$ for all $x \in \mathbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
The longest sum of consecutive primes that add to a prime less than 1,000,000 In Project Euler problem $50,$ the goal is to find the longest sum of consecutive primes that add to a prime less than $1,000,000. $ I have an efficient algorithm to generate a set of primes between $0$ and $N.$ My first algorithm to try this was to brute force it, but that was impossible. I tried creating a sliding window, but it took much too long to even begin to cover the problem space. I got to some primes that were summed by $100$+ consecutive primes, but had only run about $5$-$10$% of the problem space. I'm self-taught, with very little post-secondary education. Where can I read about or find about an algorithm for efficiently calculating these consecutive primes? I'm not looking for an answer, but indeed, more for pointers as to what I should be looking for and learning about in order to solve this myself.
I used the Mathematica code: list = {}; Do[ a = Sum[Prime[i], {i, k, j}]; If[900000 < a <= 1000000 && PrimeQ[a], AppendTo[list, {k, j, a}]], {k, 1, 1000}, {j, 1, 1000}] and then Sort[list, #1[[3]] < #2[[3]] &]. This seems very inefficient, but it was pretty quick. I found that among the sequences of the first 1000 primes, the greatest prime sum was from 459 to 695, with a value of 999749. This is probably not what you're looking for, since I don't really consider Mathematica to be programming. Btw, would it be interesting to ask how many ways a number can be written as a sum of consecutive primes?
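For the full problem, the standard trick is to precompute prefix sums of the primes so that every window sum is a single subtraction, and to only try windows longer than the best found so far. A Python sketch (variable names are my own; the small-limit answers 41 and 953 are the ones quoted in the Project Euler problem statement):

```python
def longest_consecutive_prime_sum(limit):
    """Longest run of consecutive primes whose sum is itself a prime below `limit`."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(sieve[i * i :: i])
    primes = [i for i in range(limit) if sieve[i]]

    # prefix[j] = sum of the first j primes, so sum(primes[i:j]) = prefix[j] - prefix[i]
    prefix = [0]
    for p in primes:
        prefix.append(prefix[-1] + p)

    best_sum, best_len = 0, 0
    for i in range(len(primes)):
        # only windows strictly longer than the current best can improve the answer
        for j in range(i + best_len + 1, len(prefix)):
            s = prefix[j] - prefix[i]
            if s >= limit:
                break
            if sieve[s]:
                best_sum, best_len = s, j - i
    return best_sum, best_len

print(longest_consecutive_prime_sum(100))    # (41, 6)
print(longest_consecutive_prime_sum(1000))   # (953, 21)
```

With a limit of 1,000,000 the same function finishes quickly and reports 997651 as a 543-term sum, which I believe is the known answer, though you should re-verify any hard-coded values yourself.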
{ "language": "en", "url": "https://math.stackexchange.com/questions/175523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
How to get a new point of a vector when rotated. I want to obtain the new endpoint of a vector after rotating it. I know the angle of rotation, and the rotation is about the reference point $(0,0)$. I want to find the new $x$ and $y$. Thanks
To give a general answer, you take your position vector $\vec{v}\in\mathbb{R}^{n}$, and you multiply it by the appropriate rotation matrix ${\bf M}\in\mathbb{R}^{n\times n}$. So we have: $$\vec{v}'={\bf M}\vec{v}$$ This will give you the position vector under the rotation described by ${\bf M}$. So let's take your example, of the vector $\vec{v}\in\mathbb{R}^{2}$, where $\vec{v}=\left[55,0\right]$, and multiply it by the matrix ${\bf M}\in\mathbb{R}^{2\times2}$, where ${\bf M}=\left[\begin{smallmatrix}\cos{30^{\circ}} & -\sin{30^{\circ}} \\ \sin{30^{\circ}} & \cos{30^{\circ}} \end{smallmatrix}\right]$. So we have: $$\vec{v}'=\underbrace{\begin{bmatrix}\frac{\sqrt{3}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{\sqrt{3}}{2}\end{bmatrix}}_{\bf M}\underbrace{\left[55\atop 0\right]}_{\vec{v}}\approx\left[47.63 \atop 27.5\right]$$ Hope this helps!
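In code, that matrix multiplication is two lines; here is a small Python sketch reproducing the example of the vector $(55, 0)$ rotated by $30^{\circ}$:

```python
import math

def rotate(point, degrees):
    """Rotate a 2-D point counter-clockwise about the origin (0, 0)."""
    x, y = point
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

x, y = rotate((55, 0), 30)
print(round(x, 2), round(y, 2))  # 47.63 27.5
```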
{ "language": "en", "url": "https://math.stackexchange.com/questions/175588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
$G$ be a group acting holomorphically on $X$ could any one tell me why such expression for $g(z)$, specially I dont get what and why is $a_n(g)$? notations confusing me and also I am not understanding properly, and why $g$ is automorphism? where does $a_i$ belong? what is the role of $G$ in that expression of power series? Thank you, I am self reading the subject, and extremely thank you to all the users who are responding while learning and posting my trivial doubts. Thank you again.
Let me add something to countinghaus' fine answer. You also asked "why $g$ is an automorphism", something that has not been satisfactorily answered yet. It is in fact true in much greater generality. Recall that one axiom of a group action is that for all $x \in X$ and $g,h \in G$ we have $$ g \cdot_{\text{a}} (h\cdot_{\text{a}} x) = (g \cdot_{\text{g}} h) \cdot_{\text{a}} x $$ where I labeled the group multiplication with g and the action operation with a. Applying this to $h = g^{-1}$ and using that the unit element of the group acts as the identity on $X$, we see that we have found an inverse for the holomorphic map given by $g$. Hence it is an automorphism. Note that one needs to be precise. Of course $g^{-1}$ is the inverse of $g$ in the group; that we all know. However, we've just shown something else: that the holomorphic map given by $g^{-1}$ is an inverse of the holomorphic map given by $g$. Lastly, it works in any category $\mathcal{C}$. If we say some group acts on some object in $\mathcal{C}$, we normally require that the elements of the group induce morphisms in $\mathcal{C}$, which have a two-sided inverse by the reasoning above. Hence they are isomorphisms in $\mathcal{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
On the class number of a cyclotomic number field of an odd prime order Is the following proposition true? If yes, how would you prove this? Proposition Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$. Let $K = \mathbb{Q}(\zeta)$. Let $k$ be a subfield of $K$. Let $h_0$ be the class number of $k$. Let $h$ be the class number of $K$. Then $h$ is divisible by $h_0$. Motivation Let $k$ be the unique quadratic subfield of $K$. The class number of $k$ can be relatively easily calculated if the discriminant of $k$ is small. Hence, by the proposition, we can get useful information of the class number of $K$. Effort I considered the Hilbert class field $L/k$ and tried to use this.
Let $k^1$ denote the Hilbert class field of $k$. Since $K/k$ is totally ramified (at the prime above $l$) while $k^1/k$ is unramified, the extension $k^1/k$ is linearly disjoint from $K/k$, so class field theory predicts that the norm is surjective on ideal classes: $N(Cl(K)) = Cl(k)$. Since the norm is then a surjective group homomorphism $Cl(K) \to Cl(k)$, we get $h_0 \mid h$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is this statement stronger than the Collatz conjecture? $n$,$k$, $m$, $u$ $\in$ $\Bbb N$; Let's see the following sequence: $x_0=n$; $x_m=3x_{m-1}+1$. I am afraid I am a complete noob, but I cannot (dis)prove that the following implies the Collatz conjecture: $\forall n\exists k,u:x_k=2^u$ Could you help me in this problem? Also, please do not (dis)prove the statement, just (dis)prove it is stronger than the Collatz conjecture. If it implies and it is true, then LOL. UPDATE Okay, let me reconfigure the question: let's consider my statement true. In this case, does it imply the Collatz conjecture? Please help me properly tagging this question, then delete this line.
Even if we make believe that we don't know that the statement is false, it is still not clear whether it says anything about Collatz. Collatz doesn't go by doing $3x+1$ until you get a power of 2; it goes by alternating doing $3x+1$ and dividing out powers of 2. Every time you divide out a power of 2, you get to a new odd number, and have to start the $3x+1$ bit all over again. There's no reason to think you'd ever hit a power of 2.
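A bounded search makes the distinction concrete (Python, illustrative only; a finite search of course proves nothing about "never"): under the pure $x\mapsto 3x+1$ iteration, $n=5$ hits $16=2^4$ immediately, while $n=7$ shows no power of two within many steps (and a mod-16 analysis of the cycle of residues rules one out entirely).

```python
def first_power_of_two(n, steps=200):
    """Return the first k <= steps with x_k a power of 2 under x -> 3x + 1, else None."""
    x = n
    for k in range(steps + 1):
        if x & (x - 1) == 0:  # power-of-two test for positive integers
            return k
        x = 3 * x + 1
    return None

print(first_power_of_two(5))  # 1     (5 -> 16)
print(first_power_of_two(7))  # None  (no hit within 200 steps; not a proof by itself)
```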
{ "language": "en", "url": "https://math.stackexchange.com/questions/175784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Question about proof of $A[X] \otimes_A A[Y] \cong A[X, Y] $ As far as I understand universal properties, one can prove $A[X] \otimes_A A[Y] \cong A[X, Y] $ where $A$ is a commutative unital ring in two ways: (i) by showing that $A[X,Y]$ satisfies the universal property of $A[X] \otimes_A A[Y] $ (ii) by using the universal property of $A[X] \otimes_A A[Y] $ to obtain an isomorphism $\ell: A[X] \otimes_A A[Y] \to A[X,Y]$ Now surely these two must be interchangeable, meaning I can use either of the two to prove it. So I tried to do (i) as follows: Define $b: A[X] \times A[Y] \to A[X,Y]$ as $(p(X), q(Y)) \mapsto p(X)q(Y)$. Then $b$ is bilinear. Now let $N$ be any $R$-module and $b^\prime: A[X] \times A[Y] \to N$ any bilinear map. I can't seem to define $\ell: A[X,Y] \to N$ suitably. The "usual" way to define it would've been $\ell: p(x,y) \mapsto b^\prime(1,p(x,y)) $ but that's not allowed in this case. Question: is it really not possible to prove the claim using (i) in this case?
Why not show that the tensor product has the universal property of the polynomial algebra? For me that is the more intuitive way of proving this fact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
How to prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m? m, t, k are Natural numbers. How can I prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m ?
We have $k=m-10t$ and so $67 | t - 20(m - 10t) = 201t - m$. But $201 = 3 \times 67$, and so $$67 | 67 \times 3t - m$$ And so $67 | m$. This can also be seen using modular arithmetic: we have $m=10t+k$ and $t \equiv 20k \pmod {67}$, so $m \equiv 201k \pmod {67}$. But $201 = 3 \times 67$, so $m \equiv 0 \pmod {67}$, and so $67|m$.
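The implication is easy to spot-check exhaustively over a small range (a throwaway verification):

```python
# If 67 | t - 20k, then m = 10t + k should satisfy 67 | m.
for t in range(1, 301):
    for k in range(1, 301):
        if (t - 20 * k) % 67 == 0:
            assert (10 * t + k) % 67 == 0, (t, k)
print("verified: 67 | t - 20k implies 67 | 10t + k for 1 <= t, k <= 300")
```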
{ "language": "en", "url": "https://math.stackexchange.com/questions/175870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Combinatorial Interpretation of a Certain Product of Factorials Let $\mu$ denote the Moebius function. What is a combinatorial interpretation of the following integer, \begin{align} \prod_{d \mid n} d!^{\,\mu(n/d)}, \end{align} where the product is taken over divisors of $n$? Does it have a simpler representation in terms of known functions? Note: The Online Encyclopedia of Integer Sequences does not have an entry containing the corresponding sequence.
This is an idea of an approach, and it does not completely answer the question. Define $$ \mathcal{R}_n(x) = \{r \le x: (r,n) = 1\}$$ and note that, by the Inclusion-Exclusion principle, $$ | \mathcal{R}_n(x) | = \sum_{d \mid n} \mu(d) \left \lfloor \frac{x}{d} \right \rfloor $$ Let $S_n$ be the product in the text. Now, for every prime $p \le n$, consider $ V_p(S_n) $, the exponent of $p$ in $S_n$. Recalling de Polignac's formula for $V_p(k!)$, we have $$ V_p(S_n) = \sum_{d \mid n} \mu(n/d) V_p(d!) = \sum_{d \mid n} \mu(n/d) \sum_{k=1}^{\infty} \left \lfloor \frac{d}{p^k} \right \rfloor = \sum_{k=1}^{\infty} \sum_{d \mid n} \mu(d) \left \lfloor \frac{n}{dp^k} \right \rfloor = \sum_{k=1}^{\infty} |\mathcal{R}_n(n/p^k) | $$ Take $r \in \mathcal{R}_n(n)$: the number of times it is counted in the last infinite sum is the number of positive integer solutions $k$ of $n/p^k \ge r$, i.e. $ \lfloor \log_p (n/r) \rfloor $. So $$ \sum_{k=1}^{\infty} |\mathcal{R}_n(n/p^k)| = \sum_{r \in \mathcal{R}_n(n)} \lfloor \log_p (n/r) \rfloor $$ Define $q_p(x) := \max \{ p^m : p^m \le x\}$, and note that $ q_p(x) = p^{ \lfloor \log_p x \rfloor} $. So, in conclusion: $$ S_n = \prod_{p \le n} p^{V_p(S_n)} = \prod_{\substack{p \le n \\ r \in \mathcal{R}_n(n) }} p^{ \lfloor \log_p (n/r) \rfloor } = \prod_{\substack{r,p \le n \\(r,n)=1 }} q_p(n/r) $$ I don't know, actually, if this can be useful; I think that it gives a combinatorial meaning to the quantities involved, even if not much to the formula itself. Hope it helps, Andrea EDIT: I found an error in an inequality I used, so I can't actually state two bounds (upper and lower) of the same order. I would like to know more about this: Second-order asymptotics for $\pi(n), \theta(n)$
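A brute-force numerical check of the final identity for small $n$ may be reassuring. Below, `S_moebius` computes the product $\prod_{d\mid n} d!^{\,\mu(n/d)}$ from the question and `S_product` the right-hand side, with $q_p(n/r)$ computed as the largest $p^m$ satisfying $p^m r \le n$. The function names are mine, and exact rational arithmetic avoids any floating-point issues:

```python
from fractions import Fraction
from math import factorial, gcd

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor: mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def S_moebius(n):
    # the product over divisors d of n of d!^mu(n/d), as an exact rational
    s = Fraction(1)
    for d in range(1, n + 1):
        if n % d == 0:
            s *= Fraction(factorial(d)) ** mobius(n // d)
    return s

def q(p, n, r):
    # largest power p^m with p^m <= n/r, i.e. p^m * r <= n
    val = 1
    while val * p * r <= n:
        val *= p
    return val

def S_product(n):
    primes = [p for p in range(2, n + 1) if all(p % d for d in range(2, p))]
    s = 1
    for r in range(1, n + 1):
        if gcd(r, n) == 1:
            for p in primes:
                s *= q(p, n, r)
    return s

for n in range(1, 13):
    assert S_moebius(n) == S_product(n)
print([S_product(n) for n in range(1, 13)])
```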
{ "language": "en", "url": "https://math.stackexchange.com/questions/175930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 1, "answer_id": 0 }
Calculate a vector that is perpendicular to Oy axis. Find a vector perpendicular to $Oy$ axis. Knowing that $v\cdot v_1=8$ and $v\cdot v_2=-3$, where $v_1=(3,1,-2)$ and $v_2=(-1,1,1)$
Let $v=(x,y,z)$. Perpendicular to the $Oy$ axis means that $v\cdot(0,1,0)=0$, i.e. $y=0$. The given conditions read $v\cdot v_1=3x+y-2z=8$ and $v\cdot v_2=-x+y+z=-3$. With $y=0$ these become $3x-2z=8$ and $-x+z=-3$. From the second equation, $z=x-3$; substituting into the first, $3x-2(x-3)=8$, so $x=2$ and then $z=-1$. So we have $v=(2,0,-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/175990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $p$ is a factor of $m^2$ then $p$ is a factor of $m$ I'm a complete beginner and not sure where to go with this proof of Euclid's lemma. Any help would be greatly appreciated. If $m$ is a positive integer and a prime number $p$ is a factor of $m^2,$ then $p$ is a factor of $m.$ So far I have: Since we know that $m$ is a positive integer, then $m^2$ must also be positive. We also know that $p$ is positive integer, since it is a prime number. So $m^2 = p*k$ where $k$ is positive since both $m^2$ and $p$ are positive. Therefore, $k$ is greater than or equal to $1.$ ...?
If $p$ is not a prime factor of $m$ then $\gcd(p,m)=1$ since $p$ is prime. So there exist integers $x$ and $y$ such that $$ xp + ym = 1 $$ Multiplying by $m$, $$ xmp = m - ym^2. $$ Since $p \mid m^2$, this implies $p \mid m$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Antisymmetric powers of $SO(n)$ representation. I am particularly interested in $SO(3)$. Let us say that I start with the natural/defining $3$-real-dimensional vectorial representation of $SO(3)$ and I choose the generator of rotation in the $1$-$2$ plane as my Cartan subalgebra. So I have $3$ weight vectors with weights $1$, $0$, $-1$ w.r.t. this generator. Then is there a way to see what the weights, under this chosen Cartan of $SO(3)$, of the weight vectors in the $m$-fold antisymmetrization of this $3$-dimensional vectorial representation will be? It would be great if someone could explain the corresponding general result for $SO(n)$, $U(n)$ and $SU(n)$, where I think the same question can be asked.
In general the multiset of weights of the $m$-th exterior power of a representation $V$ (I take it that this is what you call $m$-fold anti-symmetrization) is obtained by taking the sums of all $m$-subsets of weights of $V$. So in the $SO(3)$ case the second exterior power of the defining representation has weights $1+0=1$, $1+(-1)=0$, and $0+(-1)=-1$ (it is isomorphic to the defining representation itself), the third exterior power has weight $1+0+(-1)=0$ (it is the trivial representation), and all further exterior powers are the zero representation (no weights). You can similarly work out the other cases. For $U(n)$ and $SU(n)$, the exterior powers of the defining representation traverse all $n$ "fundamental representations" and then become zero. For $SO(n)$, one basically traverses the fundamental representations once and then again in reverse order (because the defining representation is self-dual), but there are some exceptions near the turning point, where the (fundamental) spin representations are not obtained, but others are in their place.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Function $f(x)$ similar to exp(x) where $-f(x)$ is approximately $f(-x)$ I am wondering if there is a function $f(x)$ "similar" to the exponential function $\exp(x)$ such that: $-f(x) \approx f(-x)$ I would also like $f(x)$ to have the following property: $\frac{{f(a)}}{{f(b)}} = f(a - b)$ Or alternately, $\frac{{f(a)}}{{f(b)}} \approx f(a - b)$
You might be interested in the hyperbolic sine "sinh". It is antisymmetric ($\sinh(-x) = -\sinh(x)$ exactly), and its asymptotic behaviour for $x\to\infty$ is similar to that of the exponential function, since $\sinh x = \frac{e^x - e^{-x}}{2} \sim \frac{e^x}{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of the sequence $f_{1}\left(x\right)=\sqrt{x} $ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $ Let $\left\{ f_{n}\right\} $ denote the set of functions on $[0,\infty) $ given by $f_{1}\left(x\right)=\sqrt{x} $ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $ for $n\ge1 $. Prove that this sequence is convergent and find the limit function. We can easily show that this sequence is is nondecreasing. Originally, I was trying to apply the fact that “every bounded monotonic sequence must converge” but then it hit me this is true for $\mathbb{R}^{n} $. Does this fact still apply on $C[0,\infty) $, the set of continuous functions on $[0,\infty) $. If yes, what bound would we use?
In a sense, the answer is "yes". You can apply the Monotone Convergence Theorem pointwise. That is, for a fixed value of $x$, show that the sequence of nonnegative numbers $\bigl(f_n(x)\bigr)$ is bounded above and increasing. Then of course for each $x$, the sequence $\bigl(f_n(x)\bigr)$ converges to some $f(x)$. (You can find $f(x)$ explicitly by taking the limits of both sides of your defining relation for $f_n$.) Towards showing that $\bigl(f_n(x)\bigr)$ is indeed bounded above, try using the bound $1+2\sqrt x$. Perhaps better, as suggested by Did, would be to formally find the limit fuction first, and then show that this serves as an upper bound.
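Taking limits in $f_{n+1}(x)=\sqrt{x+f_n(x)}$ gives $f(x)^2 = x + f(x)$, so for $x>0$ the limit is $f(x) = \frac{1+\sqrt{1+4x}}{2}$ (while $f_n(0)=0$ for every $n$, so $f(0)=0$). A quick numerical iteration in Python confirms the pointwise convergence:

```python
import math

def f(x, n):
    """n-th term of the recursively defined sequence, evaluated at the point x."""
    val = math.sqrt(x)
    for _ in range(n - 1):
        val = math.sqrt(x + val)
    return val

def limit(x):
    return (1 + math.sqrt(1 + 4 * x)) / 2  # positive root of t^2 = x + t

for x in (0.5, 1.0, 2.0, 10.0):
    assert abs(f(x, 60) - limit(x)) < 1e-12
assert f(0, 60) == 0.0  # at x = 0 every f_n vanishes, so the limit there is 0
print(f(2.0, 60))  # converges to 2, the fixed point of t -> sqrt(2 + t)
```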
{ "language": "en", "url": "https://math.stackexchange.com/questions/176260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Proof by contradiction that $n!$ is not $O(2^n)$ I am having issues with this proof: Prove by contradiction that $n! \ne O(2^n)$. From what I understand, we are supposed to use a previous proof (which successfully proved that $2^n = O(n!)$) to find the contradiction. Here is my working so far: Assume $n! = O(2^n)$. There must exist $c$, $n_{0}$ such that $n! \le c \cdot 2^n$. From the previous proof, we know that $2^n \le n!$ for $n \ge 4$. We pick a value, $m$, which is guaranteed to be $\ge n_{0}$ and $\ne 0$. I have chosen $m = n_{0} + 10 + c$. Since $m > n_0$: $$m! \le c \cdot 2^m\qquad (m > n \ge n_0)$$ $$\dfrac{m!}{c} \le 2^m$$ $$\dfrac{1}{c} m! \le 2^m$$ $$\dfrac{1}{m} m! \le 2^m\qquad (\text{as }m > c)$$ $$(m - 1)! \le 2^m$$ That's where I get up to; not sure which direction to head in to draw the contradiction.
It is quite easy to show that $n! \ge 3^n$ if $n\ge 7.$ If $n = 7$, we have $3^7 = 2187 < 5040 = 7!$. Now let $n > 7$. Then $$n! = n\cdot(n-1)! \ge 3\cdot(n-1)! \ge 3\cdot 3^{n-1} = 3^n, $$ where we invoke the induction hypothesis $(n-1)! \ge 3^{n-1}$. Then $${n!\over 2^n} \ge {3^n\over 2^n} = \left(\frac{3}{2}\right)^n \to \infty$$ as $n\to\infty$. This rules out $n! = O(2^n)$.
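Both inequalities in the argument are easy to watch numerically (a throwaway check):

```python
from math import factorial

assert factorial(7) == 5040 and 3 ** 7 == 2187   # base case: 3^7 < 7!

for n in range(7, 40):
    assert factorial(n) >= 3 ** n                # n! >= 3^n for all n >= 7

# n!/2^n is strictly increasing from n = 2 on (each step multiplies by (n+1)/2 > 1).
ratios = [factorial(n) / 2 ** n for n in range(2, 25)]
assert all(b > a for a, b in zip(ratios, ratios[1:]))
```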
{ "language": "en", "url": "https://math.stackexchange.com/questions/176334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 2 }
An example showing that a Skolem normal form of $A$ can fail to be logically equivalent to $A.$ I am trying to learn a little about Mathematical Logic. Right now I am reading about Prenex Normal Forms from E. Mendelson, Introduction to Mathematical Logic, 2nd Edition. I would like to know whether I have correctly worked out exercise 2.80 (which is Exercise 2.87 in the 4th Edition): Find a Skolem normal form $B$ for $\forall x\exists yA^2_1(x,y)$ and show that $\not\vdash B\leftrightarrow \forall x\exists yA^2_1(x,y).$ What is the context? * Mendelson is working with a pure predicate calculus, i.e. a predicate calculus without individual constants or function letters, such that for any positive integer $n$ there are infinitely many $n$-ary predicate letters. What have I done? * I have applied the described algorithm to find a Skolem normal form, and I have found $B:=\exists x \exists y \forall z[(A_1^2(x,y)\to P(x))\to P(z)],$ where $P$ is a unary predicate letter. * By Goedel's completeness theorem, I have to show that $B\leftrightarrow \forall x\exists yA^2_1(x,y)$ is not universally valid, i.e. I have to find an interpretation $\mathfrak{A}$ s.t. $\mathfrak A\not\models B\leftrightarrow \forall x\exists yA^2_1(x,y).$ * I have considered the interpretation, with domain $\mathbb N,$ which assigns to $A_1^2(x,y)$ the relation $x>y,$ and to $P(x)$ the relation "$x=1$". If I am not wrong then, for any $s\in\mathbb{N}^\omega,$ I have $\mathfrak A\not\models\forall x\exists y A_1^2(x,y)[s]$ while $\mathfrak A\models B[s].$ As always, any feedback is highly appreciated.
It looks alright to me. Note that a simpler counterexample interpretation would be to make $P({\cdot})$ always true and $A_1^2({\cdot},{\cdot})$ always false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find endomorphism of $\mathbb{R}^3$ such that $\operatorname{Im}(f) \subset \operatorname{Ker}(f)$ I have a problem: Let's find an endomorphism of $\mathbb{R}^3$ such that $\operatorname{Im}(f) \subset \operatorname{Ker}(f)$. How would you do it? The endomorphism must be not null.
Well, you could always take $f$ to be the null function... Not the only solution, but certainly the simplest :) If you want $f$ to be non-null, then you just need to make sure that $f^2=0$. Either try to find a $3 \times 3$ matrix for which this holds (look up nilpotent matrices) or look at this endomorphism and see how it can be adapted : $$f : (x,y,z) \mapsto (y,z,0)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/176504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Intuitive proof of Tucker's lemma or Borsuk-Ulam theorem I'm looking for an intuitive proof of Tucker's lemma and/or the Borsuk-Ulam theorem. The proof should not make use of topology, cohomology etc., as it should be understandable by undergraduates. Thanks in advance!
This is my intuition for the special case of $n=2$ (it cannot be easily generalized to higher dimensions). If $f: S^2\to \mathbb{R}^2$ is antipodal and nowhere zero, then $g:=f/|f|$ is an antipodal map from $S^2$ to the circle $S^1$. Take two antipodal points on the equator, $A$ and $B$. Their images $g(A)$ and $g(B)$ are antipodal on the circle, and the half-circle $AB$ of the equator is mapped to some curve on the circle that winds around the circle $n+1/2$ times for some $n$. So, the image of the whole equator (the half-circle $(AB)$ plus the antipodal half-circle $(BA)$) winds around the circle $2n+1$ times, a nonzero number. The equator can be slipped towards the north pole and contracted in $S^2$. However, for its image, you cannot contract a curve in the circle that has a nontrivial winding number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Alternating sum of binomial coefficients Calculate the sum: $$ \sum_{k=0}^n (-1)^k {n+1\choose k+1} $$ I don't know if I'm so tired or what, but I can't calculate this sum. The result is supposed to be $1$ but I always get something else...
$$\sum_{k=0}^n (-1)^k {n+1\choose k+1}=-\sum_{k=0}^n (-1)^{k+1} {n+1\choose k+1}=-\sum_{j=1}^{n+1} (-1)^{j} {n+1\choose j}$$ $$=-\left(\sum_{j=0}^{n+1} (-1)^{j} {n+1\choose j}-(-1)^{0} {n+1\choose 0}\right)=-(1-1)^{n+1}+1=1$$ Here we substituted $j=k+1$, then added and subtracted the $j=0$ term $(-1)^0{n+1\choose 0}=1$; by the binomial theorem $(1-1)^{n+1}=0$, so the sum equals $1$.
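The identity is easy to confirm for small $n$ with `math.comb` (verification only):

```python
from math import comb

for n in range(20):
    assert sum((-1) ** k * comb(n + 1, k + 1) for k in range(n + 1)) == 1
print("sum equals 1 for n = 0, ..., 19")
```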
{ "language": "en", "url": "https://math.stackexchange.com/questions/176622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Expected values of Binomial? A group of $d$ students tries to solve $k$ problems in the following way. Each student picks only one problem at random and tries to solve it. Assume that all the attempts have probability $p$ of success and are independent of each other and of the students' choices. Let $X$ denote the number of solved problems. Find $\Bbb E(X)$. I am not sure if I am just overestimating this, but would the answer simply be $E(X) = \dbinom{d}{k}p$, because there are exactly $\dbinom{d}{k}$ independent trials and $p$ is the success rate? Thank you
The probability that $\ge j$ problems are solved is the probability that: * At least one student solves one of the problems * At least one of the remaining students solves a second problem $\qquad \qquad \vdots$ * At least one of the remaining students solves a $j^{\text{th}}$ problem And these events are all independent. And the probability that any given student chooses and solves any given problem is $\dfrac{p}{k}$, so we obtain: $$\mathbb{P}(X \ge j) = dk\dfrac{p}{k} \cdot (d-1)(k-1)\dfrac{p}{k} \cdots (d-j+1)(k-j+1) \dfrac{p}{k}$$ which is precisely $\displaystyle \dfrac{p^j}{k^j} \, \frac{d!}{(d-j)!}\,\frac{k!}{(k-j)!} = \dfrac{p^j}{k^j}\,(j!)^2\binom{d}{j} \binom{k}{j}$. Hence, by the tail-sum formula for expectations, we have $$\displaystyle \mathbb{E}(X) = \sum_{j=1}^k \mathbb{P}(X \ge j) = \sum_{j=1}^k \dfrac{p^j}{k^j}\,(j!)^2 \binom{d}{j} \binom{k}{j}$$ I'm not sure how you could simplify this, but no doubt there is a way. Another way of visualising it is this: you have a $d \times k$ grid, where the $d$ rows represent students and the $k$ columns represent the problems. You select one square from each row, and you need to work out the expected number of columns which have at least one square chosen from them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Show $\iint_{D} f(x,y)(1 - x^2 - y^2) ~dx ~dy = \pi/2$ Suppose $f(x,y)$ is a bounded harmonic function in the unit disk $D = \{z = x + iy : |z| < 1 \}$ and $f(0,0) = 1$. Show that $$\iint_{D} f(x,y)(1 - x^2 - y^2) ~dx ~dy = \frac{\pi}{2}.$$ I'm studying for a prelim this August and I haven't taken Complex in a long time (two years ago). I don't know how to solve this problem or even where to look, unless it's just a game with Green's theorem. Any help? I don't need a complete solution, just a helpful hint and I can work the rest out on my own.
Begin with the mean value property of harmonic functions: $\int_0^{2 \pi} f(r \cos \theta, r \sin \theta) d\theta =f(0,0) 2 \pi $.
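In case it helps to see the hint carried through (this completes the computation, so skip it if you want only the hint): pass to polar coordinates, using the boundedness of $f$ to justify the iterated integral, and apply the mean value property at each radius $r < 1$:

$$\iint_{D} f(x,y)(1-x^2-y^2)\,dx\,dy = \int_0^1 \left(\int_0^{2\pi} f(r\cos\theta, r\sin\theta)\, d\theta\right)(1-r^2)\, r\, dr$$
$$= \int_0^1 2\pi f(0,0)\,(1-r^2)\, r\, dr = 2\pi\left(\frac{1}{2}-\frac{1}{4}\right) = \frac{\pi}{2}.$$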
{ "language": "en", "url": "https://math.stackexchange.com/questions/176714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What does the letter epsilon signify in mathematics? This letter "$\varepsilon$" is called epsilon, right? What does it signify in mathematics?
Hilbert's epsilon-calculus used the letter $\varepsilon$ to denote a value satisfying a predicate. If $\phi(x)$ is any property, then $\varepsilon x. \phi(x)$ is a term $t$ such that $\phi(t)$ is true, if such $t$ exists. One can define the usual existential and universal quantifiers $\exists$ and $\forall$ in terms of the $\varepsilon$ quantifier: $$\begin{eqnarray} \def\hil#1{#1(\varepsilon x. #1(x))} \exists x.\phi(x) & \equiv & \hil{\phi}\\ \forall x.\phi(x) & \equiv & \phi(\varepsilon x.\lnot\phi(x)) \end{eqnarray} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/176783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 3, "answer_id": 0 }
Fourier transform of the derivative - insufficient hypotheses? An exercise in Carlos ISNARD's Introdução à medida e integração: Show that if $f, f' \in\mathscr{L}^1(\mathbb{R},\lambda,\mathbb{C})$ and $\lim_{x\to\pm\infty}f(x)=0$ then $\hat{(f')}(\zeta)=i\zeta\hat{f}(\zeta)$. ($\lambda$ is the Lebesgue measure on $\mathbb{R}$.) I'm tempted to apply integration by parts on the integral from $-N$ to $N$ and then take the limit as $N\to\infty$. But to obtain the result I seemingly need $f'e^{-i\zeta x}$ to be Riemann-integrable so as to use the fundamental theorem of Calculus. What am I missing here? Thank you.
There is a measure-theoretic version of the fundamental theorem of Calculus, see here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
For which angles we know the $\sin$ value algebraically (exact)? For example:

* $\sin(15^\circ) = \frac{\sqrt{6}}{4} - \frac{\sqrt{2}}{4}$
* $\sin(18^\circ) = \frac{\sqrt{5}}{4} - \frac{1}{4}$
* $\sin(30^\circ) = \frac{1}{2}$
* $\sin(45^\circ) = \frac{1}{\sqrt{2}}$
* $\sin(67 \frac{1}{2}^\circ) = \sqrt{ \frac{\sqrt{2}}{4} + \frac{1}{2} }$
* $\sin(72^\circ) = \sqrt{ \frac{\sqrt{5}}{8} + \frac{5}{8} }$
* $\sin(75^\circ) = \frac{\sqrt{6}}{4} + \frac{\sqrt{2}}{4}$
* ?

Is there a list of known exact values of $\boldsymbol \sin$ somewhere? Found a related post here.
In January 2008 I posted several references published in the 1800s of tables that give exact values for the sine and cosine of $3$, $6$, $9$, …, $90$ degree angles. (Among the integer degree angles, only those that are multiples of $3$ can be expressed in real-radical form.) See google-groups archive for 1st post and google-groups archive for 2nd post. The best table I know of was prepared by the Belgium mathematician E. Gelin in the 1880s. His table gives a list of values, with rationalized denominators, for all six trig. functions evaluated at $3$, $6$, $9$, …, $90$ degree angles. I know of three places where his table has been published: Mathesis Recueil Mathematique (1) 8 (1888), Supplement 3. [See pp. 327-333 of the downloaded .pdf file.] Mathesis Recueil Mathematique (3) 6 (1906), Supplement 3. [See pp. 338-348 of the downloaded .pdf file.] E. Gelin, Éléments de Trigonométrie Plane et Sphérique (1888). [See pp. 59-62, which is equivalent to pp. 66-69 of the downloaded .pdf file.] I believe Johann Heinrich Lambert was the first person who published exact radical values for the sine of $3$, $6$, $9$, etc. degree angles. A table of values is in Volume 1 of his Collected Works. The table is in an item that was published in 1770. Lambert’s table was reprinted two or three times in the first half of the 1800s (e.g. one was in Crelle’s Journal [= Journal für die reine und angewandte Mathematik]), but I don’t have the exact references with me now.
{ "language": "en", "url": "https://math.stackexchange.com/questions/176889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 0 }
What is smallest possible integer k such that $1575 \times k$ is perfect square? I wanted to know how to solve this question: What is the smallest possible integer $k$ such that $1575 \times k$ is a perfect square? a) 7, b) 9, c) 15, d) 25, e) 63. The answer is 7. Since this was a multiple choice question, I guess I could just plug in and test the values from the given options, but I wanted to know how to do it without guessing and checking. Any suggestions?
If you take the prime factorisation of a perfect square, all the exponents come out even, because if $n=p_1^{a_1}\cdot p_2^{a_2} \cdots p_m^{a_m}$, then $n^2=p_1^{2a_1}\cdot p_2^{2a_2} \cdots p_m^{2a_m}$. Therefore, if you want to multiply $1575 = 3^2\cdot 5^2 \cdot 7$ by some number $k$ to make a perfect square, then the prime factorisation of $k$ will have to include an odd exponent for 7, and an even exponent for any other primes. Clearly, the smallest such $k$ (apart from the trivial $k=0$) would have no primes other than 7 in its prime factorisation, and an exponent of 1 for the prime 7; in other words, $k=7$ is the smallest possible.
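The same recipe — multiply by each prime that appears to an odd power — can be checked mechanically. Here is a small Python sketch (the function name is mine, not from the question):

```python
from math import isqrt

def smallest_square_multiplier(n):
    """Return the smallest k >= 1 such that n*k is a perfect square:
    the product of the primes appearing with an odd exponent in n."""
    k = 1
    p = 2
    while p * p <= n:
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        if exp % 2 == 1:
            k *= p
        p += 1
    if n > 1:  # a leftover prime factor, necessarily with exponent 1
        k *= n
    return k

k = smallest_square_multiplier(1575)
print(k)                                  # 7
print(isqrt(1575 * k) ** 2 == 1575 * k)   # True
```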
{ "language": "en", "url": "https://math.stackexchange.com/questions/176954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
I want to prove $ \int_0^\infty \frac{e^{-x}}{x} dx = \infty $ How can I prove this integral diverges? $$ \int_0^\infty \frac{e^{-x}}{x} dx = \infty $$
For $0\lt x\lt 1$, our function is $\gt e^{-1}\frac{1}{x}$. Thus $\int_\epsilon^1 \frac{e^{-x}}{x}\,dx \gt -e^{-1}\log(\epsilon)$. But $-\log(\epsilon)$ blows up as $\epsilon$ approaches $0$ from the right.
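A quick numerical illustration of this bound (a midpoint-rule sketch, purely a sanity check and not part of the proof; the function name is mine):

```python
import math

def integral(eps, steps=100000):
    """Midpoint-rule approximation of the integral of exp(-x)/x on [eps, 1]."""
    h = (1 - eps) / steps
    return h * sum(math.exp(-(eps + (i + 0.5) * h)) / (eps + (i + 0.5) * h)
                   for i in range(steps))

for eps in (1e-2, 1e-4, 1e-6):
    lower = -math.log(eps) / math.e
    # the integral exceeds the lower bound, which itself grows without limit
    print(eps, integral(eps), lower)
```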
{ "language": "en", "url": "https://math.stackexchange.com/questions/177010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What's the meaning of the unit bivector i? I'm reading the Oersted Medal Lecture by David Hestenes to improve my understanding of Geometric Algebra and its applications in Physics. I understand he does not start from a mathematical "clean slate", but I don't care for that. I want to understand what he's saying and what I can do with this geometric algebra. On page 10 he introduces the unit bivector i. I understand (I think) what unit vectors are: multiply by a scalar and get a scaled directional line. But a bivector is a(n oriented) parallelogram (plane). So if I multiply the unit bivector i with a scalar, I get a scaled parallelogram?
A bivector operating on a vector rotates the vector.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Does "nullity" have a potentially conflicting or confusing usage? In Linear Algebra and Its Applications, David Lay writes, "the dimension of the null space is sometimes called the nullity of A, though we will not use the term." He then goes on to specify "The Rank Theorem" as "rank A + dim Nul A = n" instead of calling it the the rank-nullity theorem and just writing "rank A + nullity A = n". Naturally, I wonder why he goes out of his way to avoid using the term "nullity." Maybe someone here can shed light....
Maybe he just doesn't like the idea of using jargon for the dimension of a specific subspace? It's not a particularly useful piece of jargon.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Who came up with the Euler-Lagrange equation first? Could someone explain who came up with the specific equation first? http://en.wikipedia.org/wiki/Euler-Lagrange makes it sound like Lagrange got it first, in 1755, then sent it to Euler. but: http://en.wikipedia.org/wiki/Calculus_of_variations sort of makes it sound like Euler got it first in the 1730s. It seems like a straightforward question, but I can't find an answer anywhere. Who came up with the equation, Euler or Lagrange? And what precisely did the other man contribute to get his name on there?
I'm writing this partially not to let this question go unanswered and partially to include some details that I didn't find at the MO post. There is a book by Herman Goldstine titled "A History of the Calculus of Variations from the 17th through the 19th Century" that covers the history and development of the Euler-Lagrange equation (and much more, naturally), and it covers this topic well. In it, Goldstine writes that Euler first discovered what we now call the Euler-Lagrange equation prior to April 15, 1743, which we know as a result of a letter from that date sent by Euler to Daniel Bernoulli containing his discovery. Euler then published this finding to a broader audience in his 1744 Methodus Inveniendi. Euler's derivation approximated a curve by $N$ points and then let $N$ go to infinity to find extremals. This method was somewhat tedious in its implementation and Euler himself was interested in finding a method that did not rely on any geometry as his method did. Eleven years later, in a letter dated August 12, 1755, Lagrange (at just 19 years old) sent Euler a letter in which he re-derived Euler's result using purely analytical methods. Lagrange's derivation was powerful enough to handle other types of problems and had Euler's earlier result as a nearly automatic consequence. Euler himself much preferred Lagrange's derivation and gave it the name "calculus of variations," and it is essentially Lagrange's technique that is used today. The name of the equation, then, is very reasonable given that Euler found it first and Lagrange refined his approach.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What digit appears in unit place when $2^{320}$ is multiplied out Is there a way to answer the following, preferably without a calculator? What digit appears in the units place when $2^{320}$ is multiplied out? a) $0$ b) $2$ c) $4$ d) $6$ e) $8$ (Answer: d)
As the ending digits for powers of two are $${2}^{0} \mapsto 1$$ $${2}^{1} \mapsto 2$$ $${2}^{2} \mapsto 4$$ $${2}^{3} \mapsto8$$ $${2}^{4} \mapsto6$$ $${2}^{5} \mapsto2$$ the last digit cycles with period $4$ once the exponent is at least $1$. Since $320 \equiv 0 \pmod 4$ and $320 \geq 4$, the units digit of $2^{320}$ is the same as that of $2^{4}$, namely $6$.
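This periodicity is easy to confirm with modular exponentiation. Note that the cycle $2, 4, 8, 6$ starts at the exponent $1$, so an exponent divisible by $4$ lands on $6$, not on $2^0 = 1$:

```python
# Units digit of 2**n is 2**n mod 10; Python's three-argument pow computes this fast.
print(pow(2, 320, 10))   # 6

# The cycle of last digits for n >= 1 has period 4: 2, 4, 8, 6, 2, 4, ...
print([pow(2, n, 10) for n in range(1, 9)])   # [2, 4, 8, 6, 2, 4, 8, 6]
```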
{ "language": "en", "url": "https://math.stackexchange.com/questions/177312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Prove $\frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \cdots$ converges to $\frac 1 2 $ Show that $$\frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \cdots = \frac{1}{2}.$$ I'm not exactly sure what to do here, it seems awfully similar to Zeno's paradox. If the series continues infinitely then each term is just going to get smaller and smaller. Is this an example where I should be making a Riemann sum and then taking the limit which would end up being $1/2$?
Here is an intuitive way to imagine this problem. Imagine a teacher gives you a test with an odd number of problems. You get the first problem incorrect. Then with the next two problems, you miss one and get one right. As the number of problems increases to infinity, your score will approach 1/2 (or 50%).

On a 1 problem test, your score is 0/1.
On a 3 problem test, your score is 1/3. The difference between 1/3 and 0/1 is 1/3.
On a 5 problem test, your score is 2/5. The difference between 2/5 and 1/3 is 1/15.
On a 7 problem test, your score is 3/7. The difference between 3/7 and 2/5 is 1/35.
On a (2n-1) problem test, your score is (n-1)/(2n-1).
On a (2n+1) problem test, your score is n/(2n+1).

The difference of your score between a (2n+1) test and a (2n-1) test is n/(2n+1) - (n-1)/(2n-1), which simplifies to 1/((2n-1)(2n+1)). The sum of this series (starting with n = 1) goes to 1/2.

PS: you can use this idea to prove that 1/2 + 1/6 + 1/12 + 1/20 + ... goes to 1. In this example, you miss the first problem on the test and then get the rest correct. As the number of problems goes to infinity, your score approaches 1 (or 100%).
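The limiting argument can be made rigorous with partial fractions, since the series telescopes:
$$\frac{1}{(2n-1)(2n+1)} = \frac{1}{2}\left(\frac{1}{2n-1} - \frac{1}{2n+1}\right),$$
so the partial sums are
$$\sum_{n=1}^{N} \frac{1}{(2n-1)(2n+1)} = \frac{1}{2}\left(1 - \frac{1}{2N+1}\right) \longrightarrow \frac{1}{2}.$$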
{ "language": "en", "url": "https://math.stackexchange.com/questions/177373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Approximating an $L^2$ function in the Riemann sense $\newcommand{\R}{\Bbb R}$ Consider the Lebesgue measure in $\R$ and the following proposition: P. For each representative of a function class $f\in L^2[0,1]$ there is a sequence of continuous functions $(f_n)_{n\in\Bbb N}$ such that: * *$|f_n-f|$ is Riemann integrable on $[0,1]$, for all $n\in\Bbb N$. *$\lim\limits_{n\to\infty} \int\limits_0^1 |f_n(x)-f(x)|^2\ \mathrm d x=0$. There is no reason why this proposition should be true, but I cannot find a counterexample.
This proposition is not true. Let $f:[0,1]\to\mathbb R$ be defined by $f(x)=1$ iff $x\in\mathbb Q$ and $f(x)=0$ otherwise. $f$ is Lebesgue integrable but not Riemann integrable. Suppose $|f_n-f|$ is Riemann integrable and $\int_0^1|f_n-f|dx<1/4$. Then there is a partition $0=t_0<t_1<\dots<t_n=1$ of the unit interval such that $$\sum_{i=1}^n(t_i-t_{i-1})\sup\{|f_n(x)-f(x)|:x\in[t_{i-1},t_i]\}<1/4.$$ Now there is $i\in\{1,\dots,n\}$ such that $\sup\{|f_n(x)-f(x)|:x\in[t_{i-1},t_i]\}<1/4$. The function $f_n$ is not continuous on $[t_{i-1},t_i]$ since on a dense subset of that interval, $f_n<1/4$ and on a dense set, $f_n>3/4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Subspaces of Hilbert Spaces of finite dimension Given a Hilbert space $H$ of finite dimension, why is any subspace of this space closed? I tried bashing out an answer using an arbitrary Cauchy sequence $\{ f_1 , f_2, \ldots \} \subset S \subset H $ and trying to show its limit $f \in S$. I keep getting stuck and suspect there's an easy answer that I'm missing. Could someone enlighten me on this? Thanks in advance!
Let $S$ be a subspace of $H$, and $\{e_1,\dots,e_d\}$ an orthonormal basis of $S$. We can complete it to a basis of $H$. By the Gram-Schmidt process, we can assume that this gives an orthonormal basis $\{e_1,\dots,e_d,f_1,\dots,f_N\}$ of $H$. Then we notice that $S=\operatorname{Span}(f_j,1\leq j\leq N)^{\perp}$, and the orthogonal of a set is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
How to solve $M=SU=US?$ This is from Berkeley Problems in Mathematics, Spring 1979: Let $M$ be a $3\times 3$ non-singular real matrix. Prove that there are real matrices $S$ and $U$ such that $M=SU=US$, all the eigenvalues of $U$ equal 1, and $S$ is diagonalizable over $\mathbb{C}$. I am thinking that since $U$'s eigenvalues are all 1, we have $(U-I)^{3}=0$. Thus, up to Jordan canonical form, $U$ should either be a whole Jordan block or have a rank-2 Jordan block. But I do not know how to use this to approach this problem. On the other hand $S$ can be expressed as $PDP^{-1}$, and we are supposed to solve $A=PDP^{-1}U=UPDP^{-1}$. Again no familiar formula can be obtained this way. So I venture to ask for a hint. This may be related to Schur decomposition but I do not know it well enough to solve it.
Here is a rather tedious proof: Since $M$ is real, either the eigenvalues are all real, or there is one real and a pair of conjugate eigenvalues. The key result here is that a real matrix has a real Jordan form, not quite upper triangular, but 'almost'. In the invertible $3 \times 3$ case, this means that there exists a real $V$ such that $J = V^{-1}MV$ has one of the following forms: $$J = \begin{bmatrix} \lambda_1 & M_{12} & 0 \\ 0 & \lambda_2 & M_{23} \\ 0 & 0 & \lambda_3\end{bmatrix} \ \ \ \ \ \ \ \ J = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & a & b \\ 0 & -b & a\end{bmatrix},$$ where all the entries are real, $M_{12}, M_{23} \in \{0,1\}$, $\lambda_i \neq 0$ and $b \neq 0$. In the second case, since all eigenvalues are distinct ($b \neq 0$), $S$ is diagonalizable over $\mathbb{C}$. We can take $S= V J V^{-1}$, $U= I$ and we are finished. Now for the first case. Since this is the Jordan form, we have $(\lambda_1 \neq \lambda_2) \Rightarrow M_{12} = 0$, and similarly $(\lambda_2 \neq \lambda_3) \Rightarrow M_{23} = 0$. Let $\Sigma = \operatorname{diag}(\lambda_1, \lambda_2, \lambda_3)$, and let $W = I + \alpha E_{12} + \beta E_{23}$, where $E_{12}$ is the matrix with zeros everywhere except a one in the $1,2$ position, and similarly for $E_{23}$. We want to determine constants $\alpha, \beta$ so that $J = \Sigma W = W \Sigma$. We have: $$ \Sigma W = \Sigma + \alpha \lambda_1 E_{12} + \beta \lambda_2 E_{23}\\ W \Sigma = \Sigma + \alpha \lambda_2 E_{12} + \beta \lambda_3 E_{23}$$ This shows that $\Sigma W = W \Sigma$ iff $\alpha( \lambda_1 - \lambda_2) = 0$ and $\beta( \lambda_2 - \lambda_3) = 0$. Furthermore, we want $J = \Sigma W$, and this will be true iff $M_{12} = \alpha \lambda_1$, and $M_{23} = \beta \lambda_2$. Let $\alpha = \frac{M_{12}}{\lambda_1}$ and $\beta = \frac{M_{23}}{\lambda_2}$, then it is easy to verify that all the conditions are satisfied. Then we can take $S = V \Sigma V^{-1}$, and $U = V W V^{-1}$, and we are finished.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If a player is 50% as good as I am at a game, how many games will it be before she finally wins one game? This is a real life problem. I play Foosball with my colleague who hasn't beaten me so far. I have won 18 in a row. She is about 50% as good as I am (the average margin of victory is 10-5 for me). Mathematically speaking, how many games should it take before she finally wins a game for the first time?
Here's one plausible model: each new point (independent of past history) has probability $2/3$ of being scored by you and $1/3$ of being scored by her. The game ends, I assume, when one player has $10$ points. The probability that she wins by a score of $10$ to $x$, i.e. that of the first $9+x$ points she scores exactly $9$ and then she wins the last point, is ${{9+x} \choose 9} (1/3)^{10} (2/3)^x$. The total probability that she wins the game is then $\displaystyle p = \sum_{x=0}^9 {{9+x} \choose 9} 2^x/3^{10+x} = \frac{75275227}{1162261467} \approx 0.0648$. The expected number of games until she wins is $1/p \approx 15.44$.
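Under the same independent-points model, the closed form can be confirmed with exact rational arithmetic (a quick check of the arithmetic, not part of the argument):

```python
from fractions import Fraction
from math import comb

# Probability she reaches 10 points first, when she wins each point with probability 1/3:
# she wins 10 to x by taking exactly 9 of the first 9+x points and then the next point.
p = sum(Fraction(comb(9 + x, 9) * 2**x, 3**(10 + x)) for x in range(10))
print(p)             # 75275227/1162261467
print(float(p))      # ≈ 0.0648
print(float(1 / p))  # ≈ 15.44 expected games until her first win
```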
{ "language": "en", "url": "https://math.stackexchange.com/questions/177725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Bounding the $l_1$ norm of a vector Let $x$ be a real vector with $\|x\|_1=x_1+\ldots +x_{2n}$. How can one bound from above $(x_1+\ldots+x_n)(x_{n+1}+\ldots+x_{2n})$ by the $l_2$ norm of the vector $x$? Of course, using $\|x\|_1\leq\sqrt {2n}\|x\|_2$ I can bound $$ (x_1+\ldots+x_n)(x_{n+1}+\ldots+x_{2n})\leq\|x\|^2_1\leq 2n\|x\|_2^2 $$ But I would like to get an upper bound not greater than $1/2\|x\|_2^2$. Is it possible to get such a bound?
$$(x_1 + \ldots + x_n)(x_{n+1} + \ldots x_{2n}) = \sum_{i=1}^n \sum_{j=n+1}^{2n} x_i x_j \le \sum_{i=1}^n \sum_{j=n+1}^{2n} \frac{x_i^2 + x_j^2}{2} = \frac{n}{2} \|x\|_2^2 $$ and this is an equality when all $x_i$ are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/177801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove whether a relation is an equivalence relation Define a relation $R$ on $\mathbb{Z}$ by $R = \{(a,b)|a≤b+2\}$. (a) Prove or disprove: $R$ is reflexive. (b) Prove or disprove: $R$ is symmetric. (c) Prove or disprove: $R$ is transitive. For (a), I know that $R$ is reflexive because if you substitute $\alpha$ into the $a$ and $b$ of the problem, it is very clear that $\alpha \leq \alpha + 2$ for all integers. For (b), I used a specific counterexample; for $\alpha,\beta$ in the integers, if you select $\alpha = 1$, and $\beta = 50$, it is clear that although $\alpha ≤ \beta + 2$, $\beta$ is certainly not less than $ \alpha + 2$. However, for (c), I am not sure whether the following proof is fallacious or not: Proof: Assume $a R b$ and $b R g$; Hence $a ≤ b + 2$ and $b ≤ g + 2$ Therefore $a - 2 ≤ b$ and $b ≤ g + 2$ So $a-2 ≤ b ≤ g + 2$ and clearly $a-2 ≤ g+2$ So then $a ≤ g+4$ We can see that although $a$ might be less than $ g+2$, it is not always true. Therefore we know that the relation $R$ is not transitive. QED. It feels wrong
Here's a counterexample: $a = 5, b = 3, c = 1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/177877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is this a function? Is the set $\theta=\{\big((x,y),(3y,2x,x+y)\big):x,y ∈ \mathbb{R}\}$ a function? If so, what is its domain, codomain, and range? This is probably a dumb question. I understand what a function is, but the three elements in the ordered pair got me confused.
Yes it is, presumably one from $\mathbb{R}^2$ to $\mathbb{R}^3$, although the domain and codomain could potentially be smaller. You have an ordered pair in which the first element is itself an ordered pair (of real numbers), and the second is an ordered triple (of real numbers). I'm used to codomain and range meaning the same thing. If you meant image for one of them, I can't think of a better description than $\{(3y,2x,x+y):x,y\in\mathbb{R}\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/178005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Are the two statements concerning number theory correct? Statement 1: any integer no less than four can be written as a linear combination of two and three. Statement 2: any integer no less than six can be written as a linear combination of three, four and five. I tried many numbers, and it seems the above two statements are correct. For example, 4=2+2; 5=2+3; 6=3+3; ... 6=3+3; 7=3+4; 8=4+4; 9=3+3+3; 10=5+5; ... Can they be proved?
Statement 1: Either $b$ is even or $b-3$ is even. Statement 2: Either $b$ is divisible by 3, or $b-4$ is or ... now complete the argument.
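Before completing the hints, both statements are easy to confirm by brute force for small cases (a sketch; the helper name is mine):

```python
def is_combination(n, parts):
    """Can n be written as a sum of (possibly repeated) elements of parts?"""
    can = [False] * (n + 1)
    can[0] = True
    for m in range(1, n + 1):
        can[m] = any(m >= p and can[m - p] for p in parts)
    return can[n]

# Statement 1: every integer >= 4 is a nonnegative combination of 2 and 3.
print(all(is_combination(n, (2, 3)) for n in range(4, 200)))      # True

# Statement 2: every integer >= 6 is a nonnegative combination of 3, 4 and 5.
print(all(is_combination(n, (3, 4, 5)) for n in range(6, 200)))   # True
```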
{ "language": "en", "url": "https://math.stackexchange.com/questions/178078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Accidents of small $n$ In studying mathematics, I sometimes come across examples of general facts that hold for all $n$ greater than some small number. One that comes to mind is the Abel–Ruffini theorem, which states that there is no general algebraic solution for polynomials of degree $n$ except when $n \leq 4$. It seems that there are many interesting examples of these special accidents of structure that occur when the objects in question are "small enough", and I'd be interested in seeing more of them.
I would include in this list a discussion of $G(n)$ and $g(n)$ in the Waring Problem. The point is that when it comes to representing integers as sums of powers of non-negative integers, it seems to happen that some (smallish) integers require more powers to represent them due to some (presumably unknown) peculiarity of small integers. For example, in the case of representing integers as sums of cubes, it has been proved that 9 cubes are sufficient, and some numbers require 9, so that $g(3)=9$. On the other hand, calculations suggest that almost all numbers from some point onwards are sums of at most 4 cubes (so that $G(3)$ might be 4, but this is not proved), and it appears that it is an "accident of small $n$" that there are some (smallish) numbers which require more than 4 cubes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/178183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 20, "answer_id": 5 }
Euler's theorem for powers of 2 According to Euler's theorem, $$x^{\varphi({2^k})} \equiv 1 \mod 2^k$$ for each $k>0$ and each odd $x$. Obviously, number of positive integers less than or equal to $2^k$ that are relatively prime to $2^k$ is $$\varphi({2^k}) = 2^{k-1}$$ so it follows that $$x^{{2^{k-1}}} \equiv 1 \mod 2^k$$ This is fine, but it seems like even $$x^{{2^{k-2}}} \equiv 1 \mod 2^k$$ holds, at least my computer didn't find any counterexample. Can you prove or disprove it?
This is basically the same as Will's answer, but the key to all this is the simple equation $x^{2^{m}}-1 = (x^{2^{m-1}}-1)(x^{2^{m-1}}+1)$ for $m \geq 1.$ When $x$ is an odd integer, both factors on the right side of the equation are even integers, but also, every odd integer is congruent to $\pm 1$ (mod 4), so at least one factor on the right side is divisible by $4$ (and the other factor is congruent to $2$ (mod $4$)). Hence the left hand side is divisible by $8$ whenever $x$ is odd and $m \geq 2.$ And, furthermore, the power of $2$ dividing $x^{2^{m}}-1$ is at least one higher than the power of $2$ dividing $x^{2^{m-1}}-1.$ When $m \geq 1,$ then, it follows by induction that $x^{2^{m}}-1$ is divisible by $2^{m+2}.$
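The congruence is also easy to confirm exhaustively for small moduli. Note it needs $k \geq 3$: for $k = 2$ the exponent is $2^{0} = 1$, and $3 \not\equiv 1 \pmod 4$.

```python
# For every odd x and every k >= 3, x^(2^(k-2)) == 1 (mod 2^k).
ok = all(pow(x, 2**(k - 2), 2**k) == 1
         for k in range(3, 12)
         for x in range(1, 2**k, 2))
print(ok)   # True
```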
{ "language": "en", "url": "https://math.stackexchange.com/questions/178242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Best constant in an integral inequality Which is the smallest constant $B_d$ such that the following inequality $$\left|\int_{0}^{1}t^d(1-t)\psi(t) dt\right|^2\le B_d\int_{0}^{1}t^d|\psi(t)|^2dt$$ holds, provided that $\psi(t)$ is a polynomial?
Presumably $d > -1$ so the integrals converge. Consider the Hilbert space $L^2(\mu)$ where $\mu$ is the measure $\mu(dt) = t^d \ dt$ on $[0,1]$. Your inequality says $|\langle 1-t, \psi\rangle|^2 \le B_d \|\psi\|^2$. The Cauchy-Schwarz inequality says the inequality is true for all $\psi \in L^2(\mu)$, in particular for polynomials, with $$B_d = \|1-t\|^2 = \int_0^1 t^d (1-t)^2\ dt = \dfrac{2}{(d+1)(d+2)(d+3)}$$ and this is best possible (since you could take $\psi(t) = 1-t$).
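As a numerical sanity check, the constant is the beta integral $\int_0^1 t^d(1-t)^2\,dt = B(d+1,3)$, which can be compared against the closed form (a sketch using the gamma function; the function name is mine):

```python
from math import gamma

def best_constant(d):
    """B(d+1, 3) = Gamma(d+1) Gamma(3) / Gamma(d+4), the squared norm of 1 - t."""
    return gamma(d + 1) * gamma(3) / gamma(d + 4)

for d in (0, 0.5, 1, 2, 10):
    closed = 2 / ((d + 1) * (d + 2) * (d + 3))
    print(d, best_constant(d), closed)   # the two columns agree
```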
{ "language": "en", "url": "https://math.stackexchange.com/questions/178312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Randomly selecting a natural number In the answer to these questions:

* Probability of picking a random natural number,
* Given two randomly chosen natural numbers, what is the probability that the second is greater than the first?

it is stated that one cannot pick a natural number randomly. However, in this question:

* What is the probability of randomly selecting $ n $ natural numbers, all pairwise coprime?

it is assumed that we can pick $n$ natural numbers randomly. A description is given in the last question as to how these numbers are randomly selected, to which there seems to be no objection (although the accepted answer is given by one of the people explaining that one cannot pick a random number in the first question). I know one can't pick a natural number randomly, so how come there doesn't seem to be a problem with randomly picking a number in the last question? NB: I am happy with some sort of measure-theoretic answer, hence the probability-theory tag, but I think for accessibility to other people a more basic description would be preferable.
Perhaps a justification is this. In the first question it is (correctly) claimed that it is impossible to have a uniform distribution on the natural numbers. Thus we can't develop a sensible way of choosing particular numbers at random, when each supposedly has the same probability. The last question though is dealing with the probability of picking a certain class of numbers. In that case the approach is to pick $N$ large, impose a uniform distribution on $[1,N]$ (which we can always do), work out the probability as a function of $N$, and then take the limit $N \rightarrow \infty.$ Example: what's the probability of randomly picking an even number? Fix $N$. If $N$ is even then the probability of picking an even number in the uniform distribution on $[1,N]$ is exactly $1/2$. If $N$ is odd then the probability is $$ \frac{\text{number of even numbers}}{\text{total count}} = \frac{ (N-1)/2 }{N} = \frac{1}{2} - \frac{1}{2N} $$ And so, as we take the limit $N \rightarrow \infty$, we say that the probability of choosing an even number is 1/2. This approach is really a "natural density" approach; see http://en.wikipedia.org/wiki/Natural_density. Natural density features in, for instance, Szemerédi's theorem: every set of natural numbers with positive natural density contains arbitrarily long arithmetic progressions. (The primes themselves have natural density zero; the Green-Tao theorem shows that they nevertheless contain arbitrarily long arithmetic progressions.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/178415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
recurrence solution to gambler's ruin From DeGroot 2.4.2, let $a_i$ be the conditional probability that the gambler wins all $k$ given gambler is at $i$. $a_i = pa_{i+1} + (1 - p)a_{i-1} $ It's not clear from the text what steps are taken to solve for the general form $a_i$ (maybe Gaussian elimination). How do you solve the recurrence equation for $a_i$? If this can be found through the characteristic equation of a matrix $A$, then how do you construct $A$? As the problem is described, the first row has $a_0 = 0$, and last with $a_k = 1$.
Hint: Rewrite our recurrence as $$pa_i+(1-p)a_i=pa_{i+1}+(1-p)a_{i-1}.$$ A little manipulation changes this to $$p(a_{i+1}-a_i)=(1-p)(a_i-a_{i-1}).$$ If $p \ne 0$, this can be rewritten as $$a_{i+1}-a_i=\frac{1-p}{p}(a_i-a_{i-1}).$$ Let $b_i=a_i-a_{i-1}$. Then the $b_i$ satisfy the simple recurrence $$b_{i+1}=\frac{1-p}{p}b_i.$$
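For completeness, here is how the hint closes out (the classical gambler's-ruin formula). Writing $q = 1-p$ and assuming $p \neq q$, the recurrence $b_{i+1} = \frac{q}{p}\, b_i$ gives
$$b_i = \left(\frac{q}{p}\right)^{i-1} b_1, \qquad a_i = \sum_{j=1}^{i} b_j = b_1\,\frac{(q/p)^i - 1}{(q/p) - 1}.$$
The boundary conditions $a_0 = 0$ and $a_k = 1$ then determine $b_1$, leaving
$$a_i = \frac{(q/p)^i - 1}{(q/p)^k - 1},$$
while for $p = q = \tfrac12$ all the $b_j$ are equal and $a_i = i/k$.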
{ "language": "en", "url": "https://math.stackexchange.com/questions/178500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is every monomial over the UNIT OPEN BALL bounded by its L^{2} norm? Let $m\geq 2$ and $B^{m}\subset \mathbb{R}^{m}$ be the unit OPEN ball. For any fixed multi-index $\alpha\in\mathbb{N}^{m}$ with $|\alpha|=n$ large and $x\in B^{m}$, is it true that $$|x^{\alpha}|^{2}\leq \int_{B^{m}}|y^{\alpha}|^{2}dy\,??$$
Using the Bergman inequality, for each compact $K\subset B^{m}$ there exists $M_{K}>0$ such that $$|x^{\alpha}|\leq M_{K}\|p_{\alpha}\|_{2},\quad \alpha\in\mathbb{N}^{m},\,x\in K,$$ where $p_{\alpha}(y)=y^{\alpha}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/178573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Question about proof that $C(X)$ is separable In my notes we prove Stone-Weierstrass which tells us that if we have a subalgebra $A$ of $C(X)$ such that it separates points and contains the constants then its closure (w.r.t. $\|\cdot\|_\infty$) is $C(X)$. A few chapters later there is a lemma that if $X$ is a compact metric space then $C(X)$ is separable. The proof constructs a subalgebra that separates points by taking a dense countable subset of $X$, $\{x_n\}$, and defining $f_n (x) = d(x,x_n)$. Question: could we treat this as a corollary of Stone-Weierstrass and say that polynomials with rational coefficients are a subalgebra containing $1$ and separating points? Thank you.
In the hope I did understand the question correctly by now: It is a corollary of Stone-Weierstrass. Add the constant functions to the algebra you constructed and you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/178716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simplifying Equivalent Functions Given two functions in closed form such that f(x) is the same for all x for both functions, is there always a way to manipulate either function to make it so they are written exactly the same or can you have two functions that can be proven equivalent yet neither can be simplified to look the same as the other?
Your question is equivalent to asking if there is some normal form for every closed form expression that we can reduce equivalent expressions to. Now the answer depends on the kind of formal system you use, i.e. what exactly you call closed form. Surely when we restrict ourselves to few enough operations, there is. Take e.g. arithmetic terms containing natural numbers, +, $\cdot$ and variables: we can completely simplify the terms, thus yielding a normal form. However, for some not-too-complicated systems like λ-calculus, which consists of just function creation and application, it has been proven that deciding whether two terms are equivalent is already undecidable, so there is no normal form.
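For the restricted system of arithmetic terms mentioned above (constants, $+$, $\cdot$, one variable), the normal form is just a list of coefficients. A small sketch (the helper names are mine, not standard): two differently built expressions normalize to the same list exactly when they denote the same polynomial.

```python
def add(p, q):
    # coefficient lists, index = power of x
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

X, ONE = [0, 1], [1]                          # the variable x and the constant 1
lhs = mul(add(X, ONE), add(X, ONE))           # (x + 1) * (x + 1)
rhs = add(mul(X, X), add(mul([2], X), ONE))   # x*x + 2*x + 1
```

Both sides reduce to the same coefficient list, which is the whole point of a normal form.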
{ "language": "en", "url": "https://math.stackexchange.com/questions/178838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof: $X\ge 0, r>0\Rightarrow E(X^r)=r\int_0^{\infty}x^{r-1}P(X>x)dx$ As the title states, the problem at hand is proving the following: $X\ge 0, r>0\Rightarrow E(X^r)=r\int_0^{\infty}x^{r-1}P(X>x)dx$ Attempt/thoughts on a solution I am guessing this is an application of Fubini's Theorem, but wouldn't that require writing $P(X>x)$ as an expectation? If so, how is this accomplished? Thoughts and help are appreciated.
Proof: Consider the expectation of the identity $$ X^r=r\int_0^{X}x^{r-1}\,\mathrm dx=r\int_0^{+\infty}x^{r-1}\mathbf 1_{X>x}\,\mathrm dx. $$
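As a concrete sanity check (my own example, not part of the proof): for $X\sim\mathrm{Exp}(1)$ we have $P(X>x)=e^{-x}$ and $E(X^r)=\Gamma(r+1)$, so the right-hand side of the identity should reproduce the Gamma value. A crude midpoint-rule integration confirms it for $r=3$, where $E(X^3)=3!=6$:

```python
import math

r = 3.0
total, dx = 0.0, 1e-3
x = dx / 2
while x < 40:                                      # tail beyond 40 is negligible
    total += r * x**(r - 1) * math.exp(-x) * dx    # r x^{r-1} P(X > x)
    x += dx
# total should be close to Gamma(r + 1) = 6
```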
{ "language": "en", "url": "https://math.stackexchange.com/questions/178896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proofs that every mathematician should know. There are mathematical proofs that have that "wow" factor in being elegant, simplifying one's view of mathematics, lifting one's perception into the light of knowledge, etc. So I'd like to know what mathematical proofs you've come across that you think other mathematicians should know, and why.
I really like the simple and nice proof of the 5-color theorem (i.e. that for every planar graph there exists a vertex coloring with no more than 5 colors) and how surprisingly difficult it is to prove the sharper 4-color theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/178940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "448", "answer_count": 23, "answer_id": 2 }
What is the units digit of $13^4\cdot17^2\cdot29^3$? What is the units digit of $13^4\cdot17^2\cdot29^3$? I saw this on a GMAT practice test and was wondering how to approach it without using a calculator. Thanks.
If you compute modulo $10,$ then you'll get $$13^4 17^2 29^3 \equiv 3^4 7^2 (-1)^3\equiv -81\cdot49\equiv (-1)^2\equiv 1 (\mathrm{mod}~10).$$ Thus the last digit is $1.$
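The same computation is easy to check mechanically with modular exponentiation (a quick verification of mine):

```python
# pow(b, e, 10) reduces at every step, so nothing large is ever computed
units = (pow(13, 4, 10) * pow(17, 2, 10) * pow(29, 3, 10)) % 10
# 3^4 -> 1, 7^2 -> 9, 9^3 -> 9; 1 * 9 * 9 = 81, so the units digit is 1
```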
{ "language": "en", "url": "https://math.stackexchange.com/questions/178982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
application of the maximum modulus theorem Let $f$ be holomorphic on the unit disc and continuous on the unit circle. Suppose there is an $M \in \mathbb{R}$ such that $|f(z)| \leq M$ on the unit circle and let $\alpha_1, \alpha_2, ..., \alpha_n$ be zeros of $f$ in the unit disc listed according to multiplicity. Show that $|f(z)| \leq M \frac{|z-\alpha_1| \cdots |z- \alpha_n|}{|1-z \overline{\alpha_1}| \cdots |1-z \overline{\alpha_n}|}$. Why can't I apply the Maximum Modulus theorem to $f$ directly? Is there something I am missing?
Such an $M$ exists as the unit circle is compact. Let $M:=\max_{|z|=1}|f(z)|$. The map $$g(z):=f(z)\prod_{j=1}^n\frac{1-z\bar{\alpha_j}}{z-\alpha_j}$$ is holomorphic: since $f$ vanishes at $\alpha_n$ we can write $f(z)=\frac{z-\alpha_n}{1-z\bar{\alpha_n}}g_n(z)$ with $g_n$ holomorphic, and continue this process (the cleanest way is to group the roots by multiplicity and remove each root with its full multiplicity in one step). This function is continuous on the unit circle as the roots are in the open unit disk. If $|z|=1$ and $|a|<1$, then $$\left|\frac{1-z\bar a}{z-a}\right|=\frac{|1-\frac 1za|}{|z-a|}=\frac 1{|z|}=1,$$ hence $g$ is bounded by $M$ on the unit circle. If $g$ is constant we are done; otherwise we conclude by the maximum modulus principle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/179050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is this CRC calculation correct? I am currently studying for an exam and trying to check a message (binary) for errors using a polynomial, I would like if somebody could verify that my results below are (in)valid. Thanks. Message: 11110101 11110101 Polynomial: X4 + x2 + 1 Divisor (Derived from polynomial): 10101 Remainder:111 Result: There is an error in the above message? Also, I had a link to an online calculator that would do the division but can't relocate it, any links to a calculator would be greatly appreciated. Thanks.
Mathematica gave me the same result: In[1]:=s=x^7+x^6+x^5+x^4+x^2+1; In[2]:=PolynomialRemainder[(1+x^8)s, 1+x^2+x^4,x] Out[2]:= 1+x+x^2 So there wasn't even any need to reduce the coefficients modulo two in the end. I don't know whether Wolfram Alpha supports this. Anyway, you got it right. In this case we can also find non-divisibility by observing that $$ \begin{aligned}s&=x^7+x^6+x^5+x^4+x^2+1=(x^2+x+1)x^5+(x^2+x+1)^2\\ &=(x^2+x+1)(x^5+x^2+x+1)=(x^2+x+1)(x+1)^2(x^3+x+1)\end{aligned}$$ implying that your message polynomial $$ m=(x^8+1)s=(x^2+x+1)(x^3+x+1)(x+1)^{10} $$ is not divisible by $x^4+x^2+1=(x^2+x+1)^2$. In general trying to factor the message polynomial is a very bad idea, though :-)
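The same remainder can be reproduced with bitwise polynomial division over GF(2) (a generic sketch of mine, not tied to any CRC library): dividing the message 11110101 11110101 by 10101 leaves 111, a nonzero remainder, so the check fails.

```python
def gf2_mod(msg, div):
    """Remainder of polynomial division over GF(2); bits are coefficients."""
    while msg.bit_length() >= div.bit_length():
        # XOR the divisor in, aligned to the current leading bit
        msg ^= div << (msg.bit_length() - div.bit_length())
    return msg

rem = gf2_mod(0b1111010111110101, 0b10101)   # message, divisor x^4 + x^2 + 1
# rem == 0b111: the message is flagged as corrupted
```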
{ "language": "en", "url": "https://math.stackexchange.com/questions/179113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Inclusion-exclusion principle problems I've got huge problems with the inclusion-exclusion principle. I know the formula, but I often don't know how to use it or how to denote all the things. Hope it will pass when I do some exercises. I'm stuck with these two: * *How many $8$-digit numbers are there (without $0$ at any position) that don't have the subsequence $121$? *Find the number of permutations of the set: $\left\{1,2,3,4,5,6,7\right\}$, that don't have four consecutive elements in ascending order. And here are my propositions for solutions: * *On the whole, there are $9^8$ numbers of this kind. Let's think about numbers that have at least one subsequence $121$. We choose a place for the first $1$ of this subsequence. There are six possibilities. After choosing a place for $1$ we set $21$ after this digit, and the rest of the digits are unrestricted. So we have $6\cdot 9^5$ numbers with at least one subsequence $121$, so there are $9^8-6\cdot 9^5$ numbers without this subsequence. Is that correct? *Let $X$ be the set of all permutations of the given set. $|X|=7!$. Let $A_i$ be the set of permutations that have the numbers $i, \ i+1, \ i+2, \ i+3$ consecutive in ascending order. In other words they have a subsequence of this form. Hence $|A_i|=4\cdot 3!$, because we choose one of $4$ places for $i$ and the remaining $3$ digits are unrestricted. Another observation is that for $i_1<...<i_k$ we have $\displaystyle |A_{i_1}\cap ... \cap A_{i_k} |=(3-i_k+i_1) \cdot (3-i_k+i_1)!$ which is that simple only because the set is $\left\{1,2,3,4,5,6,7 \right\}$. $A_{i_1}\cap ... \cap A_{i_k}$ is the set of permutations that have the subsequence $i_1,...,i_k,...,i_{k+3}$, so we choose a place for $i_1$, set this subsequence starting from this place, and permute the rest of the digits. By the way, I'm wondering whether it is possible to solve this problem in general, I mean if the set were of the form $\left\{1,..,n \right\}$ for any natural number $n$? Back to the problem.
Now, what I want to count is, by the inclusion-exclusion principle, this sum: $\displaystyle \sum_{k=0}^{4}(-1)^kS_k$, where $\displaystyle S_k=\sum_{i_1<...<i_k\le 4}|A_{i_1}\cap ... \cap A_{i_k}|$, and $S_0=|X|$. The last observation: $A_{i_1}\cap ... \cap A_{i_k}=A_{i_1}\cap A_{i_k}$ (which again wouldn't be so easy in general unfortunately) and let's do it: $$\sum_{k=0}^{4}(-1)^kS_k=|X|-|A_1|-|A_2|-|A_3|-|A_4|+|A_1\cap A_2|+|A_1\cap A_3|+|A_1\cap A_4|+|A_2\cap A_3|+|A_2\cap A_4|+|A_3\cap A_4|-|A_1\cap A_2\cap A_3|-|A_1\cap A_2\cap A_4|-|A_1\cap A_3\cap A_4|-|A_2\cap A_3\cap A_4|+|A_1\cap A_2\cap A_3\cap A_4|=\\=|X|-|A_1|-|A_2|-|A_3|-|A_4|+|A_1\cap A_2| + |A_2 \cap A_3|+|A_3\cap A_4|= \\ =7!-4\cdot 4\cdot 3! + 3\cdot 2\cdot 2!=4956$$ Is that correct? I'm afraid not :-( While waiting for an answer I'll write a program that counts these permutations and check whether it gives the same answer. I would be very grateful for any help, answers to my questions, and any advice or hints about this type of problem. I really want to finally understand this principle. Regards
Here's my take on the second problem. It doesn't really need the Inclusion-Exclusion Principle (here one only set is included in the other). I can't see how to choose the sets that would lead to a nice and simple solution based on it, but I too would be interested to see it. There are $7!$ permutations of the set $\{1, 2, 3, 4, 5, 6, 7\}$. We need to subtract from it the number of permutations that contain 4 consecutive elements in ascending order. Let's count it in 3 steps: * *How many sequences of 4 numbers from the given set are in ascending order, like $(2, 4, 5, 7)$? This is for instance how you would select this sequence: $$1\ 2\ 3\ 4\ 5\ 6\ 7\\ 0\ 1\ 0\ 1\ 1\ 0\ 1$$ So we just have to choose four numbers from left to right (1's), ignoring the others (0's). Every 7-bits binary string containing exactly 4 1's leads to a distinct ascending sequence. This gives us $\binom{7}{4}$ such sequences. * *For each of these sequences, how many spots are there to place it (as consecutive numbers) in a permutation of 7 numbers? Just 4: $$2\ 4\ 5\ 7\ X\ X\ X,\quad X\ 2\ 4\ 5\ 7\ X\ X,\quad X\ X\ 2\ 4\ 5\ 7\ X,\quad X\ X\ X\ 2\ 4\ 5\ 7$$ * *For each of these association sequence/spot, how many arrangements of the last 3 numbers to give us a full unique permutation? Of course: $3!$ Given all that, the final answer would be: $$7! - \binom{7}{4}\cdot4\cdot3! = 4200$$ Let me know if I over-counted (or under-counted) somewhere.
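Since the set is small, the answerer's closing question can be settled by brute force (this check is mine, not part of the answer). Counting (ascending 4-subset, spot, arrangement) triples counts some permutations more than once — for instance $1\,2\,3\,4\,7\,6\,5$ contains ascending runs both at positions 1–4 and at positions 2–5 — so $840$ is an overcount of the bad permutations, and the true answer is larger than $4200$:

```python
from itertools import permutations

def has_run4(p):
    # four consecutive positions holding increasing values
    return any(p[i] < p[i+1] < p[i+2] < p[i+3] for i in range(len(p) - 3))

bad = sum(1 for p in permutations(range(1, 8)) if has_run4(p))
good = 5040 - bad
```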
{ "language": "en", "url": "https://math.stackexchange.com/questions/179162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Roots of a cubic mod prime For which primes $p$ is there a root to the equation $x^3+x^2-2x-1$ mod $p$? I have no idea where to start, any help is appreciated! Thank you
This is a bit of a trick question, for the following reason. First, if we approach this question "honestly", and ask about generic cubics, there is not much one can say at an elementary level, in part (indirectly) because the Galois group over $\mathbb Q$ is probably not abelian (so, secretly, "classfield theory", the well-developed study of questions of this sort for abelian extensions would not apply). However, since the question is asked at all, one might suspect that the Galois group over $\mathbb Q$ is abelian. Both because one is disinclined to compute a discriminant of a cubic, and because one suspects that the polynomial is special, anyway, my reaction is to wonder whether it's the simplest cubic I know with abelian Galois group over $\mathbb Q$, namely, that for the cubic subfield of the field of seventh roots of unity (with cyclic Galois group of order $6$, so admitting a unique cubic subfield). Indeed, a standard trick going back at least 240 years: from $x^6+x^5+\ldots+x+1=0$, dividing through by $x^3$, gives $x^3+x^2+x+1+x^{-1}+x^{-2}+x^{-3}=0$. Letting $y=x+x^{-1}$, we find $y^3+y^2-2y-1=0$. [Edit: terrible typo: the $y^2$ term was earlier written just as $y$. Sorry!] Thus, that cubic factoring means there is a linear factor, so a seventh root of unity is at most quadratic over $\mathbb F_p$. That is, either there is a seventh root of $1$ in $\mathbb F_p$ already, which is $7|(p-1)$, or in the quadratic extension, so $7|(p^2-1)$. The latter condition subsumes the former, so the condition is $7|(p^2-1)$, which is $p=\pm 1\mod 7$, since $7$ is prime. Edit-edit: as in commments by Will Jagy, the cubic $x^3+x^2-4x+1$ apparently is a cubic with roots in the unique cubic subfield of 13th cyclotomic field. :) Edit-edit-edit: indeed, as Gerry M notes, the 9th roots of unity have an arguably even simpler cubic subfield. And/but we'd recognize that cubic, indeed. Maybe future generations will all recognize the cubic subfields of 7th and 13th roots. :)
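A brute-force check over small primes (my own verification) matches the conclusion: for $p\ne 7$ the cubic has a root mod $p$ exactly when $p\equiv\pm1\pmod 7$, while at the ramified prime $p=7$ the cubic reduces to $(x-2)^3$, so $x=2$ is a (triple) root there.

```python
def has_root(p):
    # exhaustive search is fine for small p
    return any((x**3 + x**2 - 2*x - 1) % p == 0 for x in range(p))

small_primes = [2, 3, 5, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
result = {p: has_root(p) for p in small_primes}
```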
{ "language": "en", "url": "https://math.stackexchange.com/questions/179221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
odds of picking exactly 2 women and 2 men out of 12 men and 12 women I understand the answer to be 12 choose 2 * 12 choose 2 over 24 choose 4. I don't really understand why, or what principle I can extract from the problem. I can understand that we are putting the total possible outcomes in the denominator, but not how the numerator represents the exact 2 by 2 requirement the problem states
There are ${12}\choose{2}$ ways to pick $2$ women out of $12,$ without regards to order (i.e. Mary & Lucy $=$ Lucy & Mary). For each of these ways, there are ${12}\choose{2}$ ways to choose $2$ men. So you must multiply to get the number of possibilities: ${12}\choose{2}$ $\cdot$ ${12}\choose{2}$. Assuming equal likelihood of any choice, divide this by the total number of outcomes ${24}\choose{4}$ to get the probability of any particular group of $4.$
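The numbers themselves, as a quick check with Python's `math.comb`:

```python
from math import comb

ways = comb(12, 2) * comb(12, 2)   # 66 * 66 favourable groups of 2 women + 2 men
total = comb(24, 4)                # all unordered groups of 4 people
prob = ways / total                # about 0.41
```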
{ "language": "en", "url": "https://math.stackexchange.com/questions/179318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Canonical Isomorphism Between $\mathbf{V}$ and $(\mathbf{V}^*)^*$ For the finite-dimensional case, we have a canonical isomorphism between $\mathbf{V}$, a vector space with the usual addition and scalar multiplication, and $(\mathbf{V}^*)^*$, the "dual of the dual of $\mathbf{V}$." This canonical isomorphism means that the isomorphism is always the same, independent of additional choices. We can define a map $I : \mathbf{V} \to (\mathbf{V}^*)^*$ by $$x \mapsto I(x) \in (\mathbf{V}^*)^* \ \text{ where } \ I(x)(f) = f(x) \ \text{for any } \ f \in \mathbf{V}^*$$ My Question: what can go wrong in the infinite-dimensional case? The notes I am studying remark that if $\mathbf{V}$ is finite-dimensional, then $I$ is an isomorphism, but in the infinite-dimensional case we can go wrong? How?
(This is just a bit too long to be a comment) It is important to remark that the fact that an infinite dimensional space $\bf V$ is not isomorphic to $\bf V^{**}$ requires the axiom of choice. Furthermore, the counterexample is not some pathological space. In some models without the axiom of choice a peculiar thing happens: every linear operator from a Banach space into a normed space is continuous. One example of such a model is Solovay's model in which all sets of reals are Lebesgue measurable. We remark that in this model the principle of Dependent Choice holds, which is enough to develop most classical analysis. In such a model $\ell_2$, a Banach space, has only continuous linear functionals, so the algebraic dual is the same as the topological dual. The fact that $\ell_2$ is self-dual (in the topological sense) does not require much of the axiom of choice, not more than we have in Solovay's model anyway. Now we need to verify that the evaluation map is a linear isomorphism; it is that and more: it is an isometry. This is not a very hard exercise in applying basic functional analysis theorems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/179367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 2 }
Enumerate certain special configurations - combinatorics. Consider the vertices of a regular n-gon, numbered 1 through n. (Only the vertices, not the sides). A "configuration" means some of these vertices are joined by edges. A "good" configuration is one with the following properties: 1) There is at least one edge. 2) There can be multiple edges from a single vertex. 3) If A and B are joined by an edge, then the degree of A equals the degree of B. 4) No two edges must intersect each other, except possibly at the endpoints. 5) The degree of each vertex is at most k. (where $0\leq k \leq n$ ) Find f(n, k), the number of good configurations. For example, f(3, 2) = 4 and f(n, 0) = 0.
From the various comments we have * *$f(n,0) = 0$ since we can have no edges, but need at least one *$f(n,1)=M_n-1$ where $M_n$ are the Motzkin numbers, the number of different ways of drawing non-intersecting chords on a circle between $n$ points, and $-1$ because we need at least one edge *$f(n,2)=C_n-1$ where $C_n$ are the Catalan numbers, the number of noncrossing partitions on a circle between $n$ points, and $-1$ because we need at least one edge *$f(n,k)=f(n,2)$ for $k\gt 2$ as we cannot have degrees greater than $2$ on a convex polygon without intersections There are many recurrence relations. Two pretty ones are $$M_{n+1}=M_n+\sum_{i=0}^{n-1}M_i M_{n-i-1}$$ and $$C_{n+1}=\sum_{i=0}^{n}C_i\,C_{n-i}$$ starting from $M_0=1$ and $C_0=1$. The Catalan numbers can be written in closed form as $C_n = \frac{1}{n+1}{2n\choose n}$ but there is no similarly simple form for the Motzkin numbers.
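Both recurrences are easy to run, and the values reproduce the example $f(3,2)=4$ from the question (this numerical check is mine):

```python
def motzkin(n):
    M = [1, 1]                                            # M_0, M_1
    for k in range(1, n):
        # M_{k+1} = M_k + sum_{i=0}^{k-1} M_i M_{k-1-i}
        M.append(M[k] + sum(M[i] * M[k - 1 - i] for i in range(k)))
    return M[n]

def catalan(n):
    C = [1]                                               # C_0
    for k in range(n):
        # C_{k+1} = sum_{i=0}^{k} C_i C_{k-i}
        C.append(sum(C[i] * C[k - i] for i in range(k + 1)))
    return C[n]
```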
{ "language": "en", "url": "https://math.stackexchange.com/questions/179452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Relation of compositum of fields Let $E/k$ be a finite field extension, $\operatorname{char}(k)=p>0$. Suppose that $E^p k = E$. Is it then true that $E^{p^n}k = E$ for any positive integer $n$? If yes, why? Thanks.
Yes, it is true. I will show that $E=E^{p^2}k$ and leave to you the proof of the general case $E=E^{p^n}k$. Since $E=E^{p}k$, any element $e\in E$ can be written as $e=\sum q_ie_i^p\;$ (for some $e_i\in E,\ q_i\in k$). [This is due to the fact that the ring formed by the sums on the right is already a field, because that ring is a $k$-subalgebra of the algebraic extension $E/k$] In the same way, each $e_i$ can be written as $e_i=\sum q_{ij}e_{ij}^p$. Substituting yields $$e=\sum q_i(\sum q_{ij}e_{ij}^p)^p=\sum q_iq_{ij}^pe_{ij}^{p^2}$$ which shows that indeed $E=E^{p^2}k$
{ "language": "en", "url": "https://math.stackexchange.com/questions/179523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Trigonometry proof involving sum difference and product formula How would I solve the following trig problem. $$\cos^5x = \frac{1}{16} \left( 10 \cos x + 5 \cos 3x + \cos 5x \right)$$ I am not sure what to really I know it involves the sum and difference identity but I know not what to do.
Applying the same approach as that of this, let $A\cos5x+B\cos3x+C\cos x=\cos^5x$ As $\cos 3x = 4\cos^3x-3\cos x$ and $\cos 5x = 16\cos^5 x-20\cos^3 x+5\cos x$ $A(16\cos^5 x-20\cos^3 x+5\cos x) + B( 4\cos^3x-3\cos x)+ C\cos x=\cos^5x$ Comparing the coefficients of different powers of $\cos x$, 5th power=>16A=1=>$A=\frac{1}{16}$ , 3rd power=>-20A+4B=0=>B=5A=$\frac{5}{16}$ and 1st power=>5A-3B+C=0=>C=3B-5A$=3\cdot\frac{5}{16}-5\cdot\frac{1}{16}=\frac{5}{8}$ Alternatively, observe that the 3rd power of $\cos x$ is absent in the given expression. But $A\cos5x+B\cos3x=A(16\cos^5 x-20\cos^3 x+5\cos x) + B( 4\cos^3x-3\cos x)$ $=16A\cdot\cos^5 x+ \cos^3x\cdot4(B-5A)+\cos x(5A-3B)$ So, B must be 5A to eliminate $\cos^3x$ $A\cos5x+B\cos3x=A(\cos5x+5\cdot\cos3x)=A(16\cdot\cos^5 x - 10\cdot\cos x)$ Putting A=1, $\cos5x+5\cdot\cos3x=16\cdot\cos^5 x - 10\cdot\cos x$ So, we just need a little rearrangement.
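A numeric spot-check of the identity at a few arbitrary points (my own sanity check):

```python
import math

def lhs(x):
    return math.cos(x) ** 5

def rhs(x):
    return (10 * math.cos(x) + 5 * math.cos(3 * x) + math.cos(5 * x)) / 16
```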
{ "language": "en", "url": "https://math.stackexchange.com/questions/179584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Convergence of integrals: if $g_n\ge 0$ and $\int_a^b g_n\to 0$, then $\int_a^b fg_n\to 0$ Suppose that $g_n \geqslant 0$ is a sequence of integrable functions which satisfies $\lim\limits_{n \to \infty} \int_a^b g_n(x) \mathrm{d} x = 0$. a) Show that if $f$ is an integrable function on $[a,b]$, then $\lim_{n\to \infty} \int_a^b f(x) g_n(x) \mathrm{d}x = 0$. b) Prove that if $f$ is integrable on $[0,1]$, then $\lim_{n\to \infty} \int_0^1 x^n f(x) \mathrm{d}x = 0$. Here is what I have so far; We are given that $g_n \geqslant 0$, with $\lim_{n\to\infty}\int^b_ag_n(x)=0$. We also have that $f$ is integrable on $[a,b]$. Then, by general form of Mean Value Theorem for Integrals (which I already proved in another problem), we know $\exists c\in[a,b]$ such that $\int^b_af(x)g_n(x)dx=f(c)\int^b_ag_n(x)dx$. Thus we have: $$\lim_{n\to\infty}\int^b_af(x)g_n(x)dx\Rightarrow \lim_{n\to\infty}f(c)\int^b_ag_n(x)dx\Rightarrow f(c)\lim_{n\to\infty}\int^b_ag_n(x)\Rightarrow f(c)⋅0=0. $$ (b) We are given that $f$ is integrable on $[0,1]$. Let $g_n(x)=x^n$. Our claim, is that $$g_n(x)\to g(x)=\left\{\begin{array}{ccc}0&,&x\ne1\\1&,&x=1\end{array}\right.$$ on $[0,1]$. Then let $x_0\in [0,1)$. Then $\lim_{n\to\infty}x^n_0=0$. Thus $g_n(x)\to g(x)$ pointwise on $[0,1]$. Further $\lim_{n\to\infty}\int^b_ag_n(x)=\int^b_ag(x)=0$. Thus, by (a), we have that $\lim_{n\to\infty}\int^b_ax^nf(x)dx=0$. Is this correct?
For part (a), your answer has an error. You can only apply the mean value theorem for integrals if $f$ is continuous. I would have suggested using Cauchy's inequality, but $f$ is not necessarily in $L^2$. Perhaps look at simple functions. Your answer to (b) is essentially correct, but you should justify moving the limit inside the integral sign. Or just compute the integrals of $g_n$ directly.
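For a concrete continuous $f$ (my example: $f(x)=e^x$) one can watch the integrals $\int_0^1 x^n f(x)\,dx$ shrink, in line with part (b); a midpoint Riemann sum suffices:

```python
import math

def I(n, N=20000):
    # midpoint rule for \int_0^1 x^n e^x dx
    h = 1.0 / N
    return sum(((i + 0.5) * h) ** n * math.exp((i + 0.5) * h) * h for i in range(N))

vals = [I(n) for n in (1, 5, 20, 50)]
# strictly decreasing, since x^n decreases pointwise in n on (0, 1)
```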
{ "language": "en", "url": "https://math.stackexchange.com/questions/179638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find the domain of $f(x)=\frac{3x+1}{\sqrt{x^2+x-2}}$ Find the domain of $f(x)=\dfrac{3x+1}{\sqrt{x^2+x-2}}$ This is my work so far: $$\dfrac{3x+1}{\sqrt{x^2+x-2}}\cdot \sqrt{\dfrac{x^2+x-2}{x^2+x-2}}$$ $$\dfrac{(3x+1)(\sqrt{x^2+x-2})}{x^2+x-2}$$ $(3x+1)(\sqrt{x^2+x-2})$ = $\alpha$ (Just because it's too much to type) $$\dfrac{\alpha}{\left[\dfrac{-1\pm \sqrt{1-4(1)(-2)}}{2}\right]}$$ $$\dfrac{\alpha}{\left[\dfrac{-1\pm \sqrt{9}}{2}\right]}$$ $$\dfrac{\alpha}{\left[\left(\dfrac{-1+3}{2}\right)\left(\dfrac{-1-3}{2}\right)\right]}$$ $$\dfrac{\alpha}{(1)(-2)}$$ Now, I checked on WolframAlpha and the domain is $x\in \mathbb R: x\lt -2$ or $x\gt 1$ But my question is, what do I do with the top of the problem? Or does it just not matter at all.
Let's look at the question: Find the domain of $f(x)=\dfrac{3x+1}{\sqrt{x^2+x-2}}$ Now to answer this, we need to know what the domain of a function is. Without getting into technical details, we can think of the domain as the values of $x$ that give us values of $y$. When answering a question to find the domain of a function, it is often more useful to try to find values of $x$ that do not give any value of $y$. There are three basic kinds of functions to pay attention to: rational (fractions), square roots, and logarithms. Here you have the first two. (I suspect that you haven't learned about logarithms, so we will ignore those for today.) For rational functions, the denominator cannot be zero. For square root functions, the radicand (stuff under the square root symbol) cannot be negative. We can use these two facts to answer this problem. First we set the denominator to zero: $$\sqrt{x^2+x-2}=0$$ You can square both sides of this equation and use the quadratic formula (or factor or complete the square) to find that $x = 1$ or $x = -2$ are solutions to the equation. This means that these two values of $x$ are not in the domain. The next step is to determine which values of $x$ make the radicand $x^2+x-2$ negative. You can use the solutions to the above step to determine this. To do this, we first note that $x^2+x-2$ is zero when $x = 1$ or $x = -2$. So we know that every other value of $x$ will make the quadratic expression either positive or negative. So let's look at values of $x$ less than $-2$, between $-2$ and $1$ and greater than $1$. Which of these "intervals" make our quadratic function positive?
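The sign analysis in the last step can be spot-checked numerically (my own snippet): the radicand is positive exactly on the two outer intervals, which gives the domain $x<-2$ or $x>1$.

```python
def in_domain(x):
    # need the radicand strictly positive: x^2 + x - 2 > 0
    return x * x + x - 2 > 0
```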
{ "language": "en", "url": "https://math.stackexchange.com/questions/179713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number Firstly, I give the definition of the epsilon number: $\alpha$ is called an epsilon number iff $\omega^\alpha=\alpha$. Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number and there are $\kappa$ epsilon numbers below $\kappa$; In particular, the first epsilon number, called $\in_0$, is countable. I've tried, however I have not any idea for this. Could anybody help me?
Here is a slightly easier way: Lemma: For every $\alpha,\beta$ the ordinal exponentiation $\alpha^\beta$ has cardinality of at most $\max\{|\alpha|,|\beta|\}$. Now use the definition of $\omega^\kappa=\sup\{\omega^\xi\mid\xi<\kappa\}$, since $|\xi|<\kappa$ we have that $\omega^\kappa\leq\kappa$, but since $\xi\leq\omega^\xi$ for all $\xi$, $\omega^\kappa=\kappa$. Here is an alternative way (a variation on the above suggestion): Lemma: If $\alpha$ is an infinite ordinal then there is some $\varepsilon_\gamma\geq\alpha$ such that $|\alpha|=|\varepsilon_\gamma|$ Hint for the proof: Use the fact that you need to close under countably many operations, and by the above Lemma none changes the cardinality. Now show that the limit of $\varepsilon$ numbers is itself an $\varepsilon$ number, this is quite simple: If $\beta=\sup\{\alpha_\gamma\mid\alpha_\gamma=\omega^{\alpha_\gamma}\text{ for }\gamma<\tau\}$ (for some $\tau$ that is) then by definition of ordinal exponentiation $$\omega^\beta=\sup\{\omega^{\alpha_\gamma}\mid\gamma<\tau\}=\sup\{\alpha_\gamma\mid\gamma<\tau\}=\beta$$ Now we have that below $\kappa$ there is a cofinal sequence of $\varepsilon$ numbers, therefore it is an $\varepsilon$ number itself; now by induction we show that there are $\kappa$ many of them: * *If $\kappa$ is regular then every cofinal subset has cardinality $\kappa$ and we are done; *if $\kappa$ is singular there is an increasing sequence of regular cardinals $\kappa_i$, such that $\kappa = \sup\{\kappa_i\mid i\in I\}$. Below each one there are $\kappa_i$ many $\varepsilon$ numbers, therefore below $\kappa$ there are $\sup\{\kappa_i\mid i\in I\}=\kappa$ many $\varepsilon$ numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/179792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why is the following language decidable? $L_{one\ right\ and\ never\ stops}$ I can't understand how the following language can ever be decidable: $L= \{ \langle M \rangle | M \ is \ a \ TM \ and\ there\ exists\ an\ input\ that\ in\ the\ computation\ $ $of\ M(w)\ the\ head\ only\ moves\ right\ and\ M\ never\ stops \}$, but apparently it is. We need to approve that there's at least one input as requested, so even if run on all inputs in parallel, we can't say something as "if it never stops, accept", it's paradox, and even if we could do that it was showing that the language is in $RE$- recursive enumerable. so how come it is recursive? Thanks a lot
As sdcvvc hints, this can be done because of how restricted the TMs in L are. The problem can be decided by examining the set of rules of the TM, which is finite. Firstly, check whether for some state $q_n$ the TM moves right on a blank input. This is the only way to go right forever, as the tape contains only finitely many non-blank symbols. Secondly, check that $q_n$ can be reached from $q_0$. For a normal TM this would not be decidable, but since this one can only go right, the only symbols it ever reads are the initial symbols of the tape (followed by blanks). Consider a graph representing this TM. The vertices are states and the edges are symbols on which the state changes. The direction is always right and the written symbol is irrelevant, so they do not need to be represented. This problem is equivalent to first finding a cycle that only uses blank-symbol edges and then a path from the starting vertex to this cycle in the graph.
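The two checks can be phrased as ordinary graph algorithms. The sketch below (the encoding and all names are mine, a simplification of a real TM description) detects whether a blank-move cycle is reachable from the start state:

```python
from collections import deque

def loops_right_forever(start, input_edges, blank_next):
    """input_edges: state -> set of states reached by a right-move on some input
    symbol; blank_next: state -> state for a right-move on blank (a missing key
    means the machine halts there instead of moving right)."""
    # 1. states lying on a cycle of the functional blank-move graph
    on_cycle = set()
    for s in blank_next:
        seen, cur = [], s
        while cur in blank_next and cur not in seen:
            seen.append(cur)
            cur = blank_next[cur]
        if cur in seen:                        # the walk closed into a cycle
            on_cycle.update(seen[seen.index(cur):])
    # 2. states reachable from the start via any right-move (BFS)
    reach, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        nxt = set(input_edges.get(s, ()))
        if s in blank_next:
            nxt.add(blank_next[s])
        for t in nxt:
            if t not in reach:
                reach.add(t)
                queue.append(t)
    return bool(on_cycle & reach)
```

For instance, with a blank-move cycle between states 1 and 2 that is reachable from state 0 on some input symbol, the machine runs right forever on that input; removing the input edge makes the cycle unreachable.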
{ "language": "en", "url": "https://math.stackexchange.com/questions/179856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Goldbach's conjecture and number of ways in which an even number can be expressed as a sum of two primes Is there a functon that counts the number of ways in which an even number can be expressed as a sum of two primes?
Yes, there is such a function and it has been studied for at least a century. See Sloane's A002375. But it doesn't have a set letter specified for it. Here I will use $g$. So, for example, $g(36) = 4$, $g(38) = 2$. (If you're looking in Sloane's, be sure to divide by 2 before looking up). There is also a function which requires the primes to be distinct, so $31 + 7 = 38$ counts but $19 + 19$ does not; that has also been studied for at least a century. Now, is there a formula that you can plug in an even number $2n$ and have it give you an answer without knowing the primes up to $n$? I think that if the Riemann hypothesis is proven, it could lead to such a formula. As a quick and dirty estimate, I suggest $g(2n) \approx \frac{n}{8}$; your mileage may vary. EDIT: Gerry Myerson rightly pointed out that $g(2n) < \pi(n)$, and that my quick and dirty estimate is quite inadequate for large numbers. From his comment, I revise my quick and dirty estimate to $g(2n) \approx \frac{n}{4 \log n}$. The point that I was getting at is that Bertrand's postulate tells us there is always at least one prime between $n$ and $2n$, and it seems unlikely to me that each and every prime in that interval would fail to "match" to a prime between 1 and $n$.
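The function is straightforward to compute for small inputs; this snippet (mine) reproduces the two values quoted above, $g(36)=4$ and $g(38)=2$:

```python
def g(even):
    # number of unordered prime pairs (a, b), a <= b, with a + b = even
    sieve = [False, False] + [True] * (even - 1)   # sieve of Eratosthenes
    for i in range(2, int(even ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(1 for a in range(2, even // 2 + 1) if sieve[a] and sieve[even - a])
```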
{ "language": "en", "url": "https://math.stackexchange.com/questions/179895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Local invariants of the discrete Galois module associated to a $p$-ordinary newform Let $f=\sum_{n=1}^\infty a_nq^n$ be a $p$-ordinary newform of weight $k\geq 2$, level $N$, and character $\chi$, and let $\rho_f:G_\mathbf{Q}\rightarrow\mathrm{GL}_2(K_f)$ be the associated $p$-adic Galois representation, where $K_f$ is the finite extension of $\mathbf{Q}_p$ obtained by adjoining the Fourier coefficients of $f$. Let $\mathscr{O}_f$ be the ring of integers of $K_f$, and $A_f$ a cofree $\mathscr{O}_f$-module of corank $2$, i.e., $(K_f/\mathscr{O}_f)^2$, on which $G_\mathbf{Q}$ acts by $\rho_f$ (so we've chosen an integral model of $\rho_f$). My question involves the local invariants of $A_f$. Specifically, let $F$ be a number field, and let $v$ be a finite prime of $F$ not dividing $p$ or the conductor of $\rho_f\vert_{G_F}$. Fix a decomposition group $G_v$ of $v$ in $G_F\leq G_\mathbf{Q}$. Is it true that $H^0(G_v,A_f)$ is finite? I'm really interested in whether or not $\ker(H^1(G_v,A_f)\rightarrow H^1(I_v,A_f))$ vanishes ($I_v\leq G_v$ the inertia group), but with my hypotheses on $v$, the vanishing of this kernel is equivalent to the finiteness of $H^0(G_v,A_f)$ (because the kernel in question is divisible of the same $\mathscr{O}$-corank as $H^0(G_v,A_f)$). This vanishing seems to be implicit in a couple papers I've been looking at, and I'm not sure why it's true.
If the invariants were infinite, they would be divisible, and so they would correspond to an invariant line in $V_f$ (the representation on $K_f^2$ attached to $\rho_f$). Let $\ell$ be the rational prime lying under $v$. The char. poly. of $\mathrm{Frob}_{\ell}$ acting on this rep'n is exactly the $\ell$th Hecke polynomial, so by Ramanujan--Petersson, the eigenvalues of $\mathrm{Frob}_{\ell}$ are Weil numbers of weight $(k-1)/2$. In particular, they are not roots of unity (provided the weight $k > 1$). The eigenvalues of $\mathrm{Frob}_v$ are powers of the eigenvalues of $\mathrm{Frob}_{\ell}$ (since $\mathrm{Frob}_v$ is a power of $\mathrm{Frob}_{\ell}$), and so they cannot be $1$. Consequently, $H^0(G_v,V_f) = 0$. QED (If $k = 1$ this argument breaks down, and of course the statement is false.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/180021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Approximating $\pi$ with least digits Do you know a digit-efficient way to approximate $\pi$? I mean representing many digits of $\pi$ using only a few numeric digits and some sort of equation. Maybe mathematical operations should also count as a penalty. For example, the well-known $\frac{355}{113}$ is an approximation, but it gives only 7 correct digits by using 6 digits (113355) in the approximation itself. Can you make a better digit ratio? EDIT: To clarify the "game", let's assume that each mathematical operation (+, sqrt, power, ...) also counts as one digit. Otherwise one could of course make artificial infinitely nested structures of operations only. And preferably let's stick to basic arithmetic and powers/roots only. EDIT: True, the logarithm of imaginary numbers provides an easy way; let's not use complex numbers, since that's not what I had in mind. Something you can present to non-mathematicians :)
Here is a site that focuses on numerical computation of rational approximations of $\pi$: http://www.isi.edu/~johnh/BLOG/1999/0728_RATIONAL_PI/ Also, using a truncated form of the continued fraction will give nice approximations. The first few fractions given are $3$, $22\over7$, $333\over106$, $355\over113$, $103993 \over33102$ and $104348\over33215$. These numerators and denominators are given by the OEIS sequences A002485 and A002486 respectively. You may be interested in this page. It states that $355 \over 113$ is the "best" (efficiency-wise) rational approximation with the denominator less than 30,000.
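To see where fractions like $355/113$ come from, here is a small Python sketch of my own (not part of the linked pages) that generates the continued-fraction convergents of $\pi$ using the standard recurrence $h_i = a_i h_{i-1} + h_{i-2}$, $k_i = a_i k_{i-1} + k_{i-2}$. It works from floating-point arithmetic, so only the first several convergents are trustworthy at double precision.

```python
from math import floor, pi

def convergents(x, count):
    """First `count` continued-fraction convergents (p, q) of x.
    Assumes x has no very short rational expansion (otherwise 1/frac
    can divide by zero); only reliable for a handful of terms in floats."""
    h_prev, h = 1, floor(x)   # numerators h_{-1}, h_0
    k_prev, k = 0, 1          # denominators k_{-1}, k_0
    out = [(h, k)]
    frac = x - floor(x)
    for _ in range(count - 1):
        x = 1.0 / frac
        a = floor(x)          # next partial quotient
        frac = x - a
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        out.append((h, k))
    return out

print(convergents(pi, 4))  # [(3, 1), (22, 7), (333, 106), (355, 113)]
```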
{ "language": "en", "url": "https://math.stackexchange.com/questions/180073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 2 }
Periodic solution of differential equation Let the ODE be $ -y''(x)+f(x)y(x)=0 $. If the function $f$ is periodic, $ f(x+T)=f(x) $, does it mean that the ODE has only periodic solutions? If all the solutions are periodic, can they all be determined by Fourier series?
No, it doesn't. But in special cases it has at least one periodic solution. The equation is known as Hill's equation, and the theory of its solutions, whether periodic or not, is known as Floquet theory. The referenced wikipedia page deals with first order systems; you'll have to rewrite your second order equation as a first order system to use the theory directly. If you want to learn more, the little book Hill's equation by Magnus and Winkler (in the Dover series) is an excellent resource.
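As a rough numerical illustration of the Floquet point of view (my own sketch, not from Magnus and Winkler): rewrite $y''=f(x)y$ as a first-order system, integrate it over one period from two independent initial conditions, and read off the monodromy matrix. Its eigenvalues, the Floquet multipliers, decide whether periodic solutions exist. For the constant coefficient $f\equiv-1$ with $T=2\pi$ every solution is $2\pi$-periodic and the monodromy matrix is the identity, while for $f\equiv 1$ the trace exceeds $2$ and no nonzero solution is periodic.

```python
import math

def monodromy(f, T, steps=20000):
    """Monodromy matrix of y'' = f(t) * y over one period T of f.
    Integrates two fundamental solutions with classical RK4; this is a
    rough sketch, not a production ODE solver."""
    def rk4(y, v):
        h = T / steps
        t = 0.0
        for _ in range(steps):
            k1y, k1v = v, f(t) * y
            k2y, k2v = v + h / 2 * k1v, f(t + h / 2) * (y + h / 2 * k1y)
            k3y, k3v = v + h / 2 * k2v, f(t + h / 2) * (y + h / 2 * k2y)
            k4y, k4v = v + h * k3v, f(t + h) * (y + h * k3y)
            y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return y, v

    y1, v1 = rk4(1.0, 0.0)   # solution with y(0) = 1, y'(0) = 0
    y2, v2 = rk4(0.0, 1.0)   # solution with y(0) = 0, y'(0) = 1
    return [[y1, y2], [v1, v2]]

# f == -1 gives y'' = -y, whose solutions all have period 2*pi,
# so the monodromy matrix over T = 2*pi should be (numerically) the identity.
M = monodromy(lambda t: -1.0, 2 * math.pi)
print(M)
```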
{ "language": "en", "url": "https://math.stackexchange.com/questions/180213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Need help with Unbiased estimator Let $X_1,X_2,X_3,\ldots,X_n$ be a random sample from a $\mathrm{Bernoulli}(\theta)$ distribution with probability function $P(X=x) = (\theta^x)(1 - \theta)^{(1 - x)}$, $x=0,1$; $0<\theta<1$. Is $\hat\theta(1 - \hat\theta)$ an unbiased estimator of $\theta(1 - \theta)$? Prove or disprove. I tried $x=\theta(1-\theta)$, $\bar x=\hat\theta(1-\hat\theta)$, $E[\bar x]=x$, $E[\bar x(1-\bar x)]=E[\bar x]-E[\bar x - 1]$, but I'm not sure what to do now or how to prove it. I have an exam tomorrow so any help is really appreciated! Hopefully this is the last stats question I'll have to ask!
$\newcommand{\var}{\operatorname{var}}$ $\newcommand{\E}{\mathbb{E}}$ Your notation is confusing: you use $x$ to refer to two different things, and you seem to use the lower-case $\bar x$ to refer to the sample mean after using capital letters to refer to random variables initially. Remember that the variance of a random variable is equal to the expected value of its square minus the square of its expected value. That enables us to find the expected value of its square if we know it variance and its expected value. I surmise that by $\hat\theta$ you mean $(X_1+\cdots+X_n)/n$. That makes $\hat\theta$ an unbiased estimator of $\theta$. So $\E(\hat\theta) = \theta$ and $$ \var(\hat\theta) = \var\left( \frac{X_1+\cdots+X_n}{n} \right) = \frac{1}{n^2}\var(X_1+\cdots+X_n) = \frac{1}{n^2}(\var(X_1)+\cdots+\var(X_n)) $$ $$ =\frac{1}{n^2}\cdot n\var(X_1) = \frac 1 n \var(X_1) = \frac 1 n \theta(1-\theta). $$ Now we want $\mathbb{E}(\hat\theta(1-\hat\theta))$: $$ \mathbb{E}(\hat\theta(1-\hat\theta)) = \mathbb{E}(\hat\theta) - \mathbb{E}(\hat\theta^2) = \theta - \Big( \var(\hat\theta) + \left(\E(\hat\theta)\right)^2 \Big) = \theta - \left( \frac{\theta(1-\theta)}{n} + \theta^2 \right) $$ $$ = \frac{n\theta - \theta(1-\theta) - n\theta^2}{n} = \frac{n-1}{n}\theta(1-\theta). $$ From this you can draw a conclusion about whether $\hat\theta(1-\hat\theta)$ is an unbiased estimator of $\theta(1-\theta)$. (By the way, $\hat\theta(1-\hat\theta)$ is the maximum-likelihood estimator of $\theta(1-\theta)$.)
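The $\frac{n-1}{n}$ factor is easy to confirm by simulation. Here is a small Python sketch of my own (the values $\theta=0.3$, $n=10$ are arbitrary choices) that averages $\hat\theta(1-\hat\theta)$ over many Bernoulli samples and compares it with both $\theta(1-\theta)$ and $\frac{n-1}{n}\theta(1-\theta)$:

```python
import random

def avg_plugin_variance(theta, n, trials, seed=0):
    """Monte Carlo average of theta_hat * (1 - theta_hat) over Bernoulli samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        successes = sum(1 for _ in range(n) if rng.random() < theta)
        theta_hat = successes / n
        total += theta_hat * (1 - theta_hat)
    return total / trials

theta, n = 0.3, 10
approx = avg_plugin_variance(theta, n, 20_000)
exact_biased = (n - 1) / n * theta * (1 - theta)   # = 0.9 * 0.21 = 0.189
print(approx, exact_biased, theta * (1 - theta))
# approx tracks 0.189, not theta*(1-theta) = 0.21
```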
{ "language": "en", "url": "https://math.stackexchange.com/questions/180270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving linear system of equations when one variable cancels I have the following linear system of equations with two unknown variables $x$ and $y$. There are two equations and two unknowns. However, when the second equation is solved for $y$ and substituted into the first equation, the $x$ cancels. Is there a way of re-writing this system or re-writing the problem so that I can solve for $x$ and $y$ using linear algebra or another type of numerical method? $2.6513 = \frac{3}{2}y + \frac{x}{2}$ $1.7675 = y + \frac{x}{3}$ In the two equations above, $x=3$ and $y=0.7675$, but I want to solve for $x$ and $y$, given the system above. If I subtract the second equation from the first, then: $2.6513 - 1.7675 = \frac{3}{2}y - y + \frac{x}{2} - \frac{x}{3}$ Can the equation in this alternate form be useful in solving for $x$ and $y$? Is there another procedure that I can use? In this alternate form, would it be possible to limit $x$ and $y$ in some way so that a solution for $x$ and $y$ can be found by numerical optimization?
It appears that the system of equations is linearly dependent, since $\det(A) = 0$ in Cramer's rule and one equation can be transformed into the other by multiplication (the second equation times $\frac 32$ gives the first). Since both equations describe the same line, there are infinitely many solutions rather than a unique intersection point. Perhaps there is a way to constrain $x$ and $y$ so that some sort of optimization procedure could be used to single out particular values of $x$ and $y$.
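Here is a small Python check of this dependence, using exact rational arithmetic so the zero determinant is not obscured by floating point (the variable names are mine):

```python
from fractions import Fraction as F

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Rows are (coefficient of x, coefficient of y) in the two equations:
A = [[F(1, 2), F(3, 2)],   # x/2 + (3/2) y = 2.6513
     [F(1, 3), F(1, 1)]]   # x/3 +       y = 1.7675
print(det2(A))  # 0, so Cramer's rule has a zero denominator
# and indeed 3/2 times the second row reproduces the first:
print([F(3, 2) * coeff for coeff in A[1]])
```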
{ "language": "en", "url": "https://math.stackexchange.com/questions/180324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculating $\lim_{n \to +\infty}\int_0^1 (n + 1)x^{n}(1 - x^3)^{1/5}\,dx$ This question is from a bank of past master's exams. I have been asked to evaluate $$\lim_{n \to +\infty}\int_0^1 (n + 1)x^{n}(1 - x^3)^{1/5}\,dx.$$ I did this problem in a hurried manner, but here's what I think. Since $x^n$ is decreasing in $n$ for fixed $x$ in the closed unit interval, it seems like the integrand, which we may denote by $f_n$, converges pointwise to zero. If I can show that the integrand in fact converges uniformly to zero, by showing $$M_n = \sup_{x\in[0,1]}|f_n(x)|\rightarrow 0,$$ then the question is simply a matter of commuting the limit with the integral. Now, $f_n(x)$ is continuous and differentiable on $[0 , 1]$, so it achieves its supremum, which can be found by differentiating and finding the critical points. I found this critical point to be $x=(\frac{5n}{5n+3})^{1/3}$. The denominator exceeds the numerator in this expression for all $n$, so the critical point is between 0 and 1. It also seems clear to me that this is a local maximum. At this point, $f_n$ achieves the value $$M_n = (n+1)(\frac{5n}{5n+3})^{n/3}(1 - \frac{5n}{5n+3})^{1/5}$$ which goes to infinity. So, my intuition failed me at some point. What is the proper solution to this limit? More importantly, is there a better approach to this type of problem?
First, integrate by parts: $$\small I_n:=\int_0^1(n+1)x^n(1-x^3)^{1/5}dx=[x^{n+1}(1-x^3)^{1/5}]_0^1-\int_0^1x^{n+1}\frac 15(1-x^3)^{-4/5}(-3x^2)dx,$$ hence $$I_n=\frac 35\int_0^1x^{n+3}(1-x^3)^{-4/5}dx.$$ The new integrand converges pointwise to $0$ on $[0,1)$ and is dominated by the integrable function $(1-x^3)^{-4/5}$, so dominated convergence gives $I_n\to 0$.
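For what it's worth, the limit can also be checked numerically. The following Python sketch (a crude midpoint rule of my own, not a rigorous argument) evaluates the original integral for increasing $n$; the values shrink toward $0$, though slowly:

```python
def f(x, n):
    """Integrand of the original problem."""
    return (n + 1) * x ** n * (1 - x ** 3) ** 0.2

def integral(n, steps=50_000):
    """Composite midpoint rule on [0, 1]; crude but adequate here."""
    h = 1.0 / steps
    return h * sum(f((k + 0.5) * h, n) for k in range(steps))

for n in (20, 200, 2000):
    print(n, integral(n))
# The values decrease toward 0, but only slowly (roughly like n**(-1/5)).
```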
{ "language": "en", "url": "https://math.stackexchange.com/questions/180388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Calculate Rotation Matrix to align Vector $A$ to Vector $B$ in $3D$? I have one triangle in $3D$ space that I am tracking in a simulation. Between time steps I have the previous normal of the triangle and the current normal of the triangle, along with both the current and previous $3D$ vertex positions of the triangles. Using the normals of the triangular plane I would like to determine a rotation matrix that would align the normals of the triangles, thereby setting the two triangles parallel to each other. I would then like to use a translation matrix to map the previous onto the current, however this is not my main concern right now. I have found this website http://forums.cgsociety.org/archive/index.php/t-741227.html that says I must

* determine the cross product of these two vectors (to determine a rotation axis)
* determine the dot product (to find the rotation angle)
* build a quaternion (not sure what this means)
* use the quaternion as a $3 \times 3$ transformation matrix (not sure)

Any help on how I can solve this problem would be appreciated.
Sadly I don't have enough points to comment on the accepted answer but as others have noted, the formula doesn't work when a == -b. To solve this edge case you have to create a normal vector of a by for example using the formula found here (a,b and c being the components of the vector): function (a,b,c) { return c<a ? (b,-a,0) : (0,-c,b) } then make the rotation matrix by rotating vector a around this normal by Pi.
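Putting the pieces together, here is a self-contained Python sketch of my own: Rodrigues' rotation formula for the generic case, plus the perpendicular-vector trick quoted above for the a == -b edge case. It assumes `a` and `b` are unit vectors, and I compare absolute values in the trick's condition so it also works for components that are negative.

```python
import math

def rotation_matrix_aligning(a, b):
    """Return a 3x3 matrix R (as nested lists) mapping a onto b,
    assuming a and b are unit-length 3-vectors.
    Generic case: Rodrigues' formula R = I + [v]x + [v]x^2 / (1 + c),
    where v = a x b and c = a . b.  When a is (nearly) opposite to b,
    fall back to a 180-degree rotation about an axis perpendicular to a."""
    ax, ay, az = a
    bx, by, bz = b
    vx, vy, vz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    c = ax * bx + ay * by + az * bz
    if c < -1.0 + 1e-10:
        # antiparallel case: any unit vector p perpendicular to a works;
        # the branch uses abs() so the perpendicular is never the zero vector
        px, py, pz = (ay, -ax, 0.0) if abs(az) < abs(ax) else (0.0, -az, ay)
        norm = math.sqrt(px * px + py * py + pz * pz)
        px, py, pz = px / norm, py / norm, pz / norm
        # rotation by pi about unit axis p: R = 2 p p^T - I
        return [[2 * px * px - 1, 2 * px * py,     2 * px * pz],
                [2 * py * px,     2 * py * py - 1, 2 * py * pz],
                [2 * pz * px,     2 * pz * py,     2 * pz * pz - 1]]
    k = 1.0 / (1.0 + c)
    return [[1 - k * (vy * vy + vz * vz), -vz + k * vx * vy,           vy + k * vx * vz],
            [vz + k * vx * vy,            1 - k * (vx * vx + vz * vz), -vx + k * vy * vz],
            [-vy + k * vx * vz,           vx + k * vy * vz,            1 - k * (vx * vx + vy * vy)]]
```

For the triangle-tracking problem above, `a` would be the previous (normalized) normal and `b` the current one; the returned matrix then rotates the old triangle parallel to the new one.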
{ "language": "en", "url": "https://math.stackexchange.com/questions/180418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "181", "answer_count": 19, "answer_id": 12 }
How to "project" a measure Given a Borel measure on $\mathbb{R}^2$, is there a canonical way, or simply some way, to obtain a measure on a line, for example $x=0$? (That is, a measure with support in the line I'm considering.) The question is very general; I explain it with this example because it is clearly easy to understand.
Given $\mu$ a Borel measure on $\Bbb R^2$, define $\nu:{\cal B}_{\Bbb R}\to [0,\infty]$ by $$\nu(E)=\mu(E\times [0,1]).$$ Then $\nu$ is a Borel measure on $\Bbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/180471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Logical question problem A boy is half as old as the girl will be when the boy’s age is twice the sum of their ages when the boy was the girl’s age. How many times older than the girl is the boy at their present age? This is a logical problem sum.
Let the boy's age (in years) be $b$ and the girl's age be $g$. When the boy was the girl's age, the girl was $g+(g-b)=2g-b$, so the sum of their ages then was $3g-b$, and twice that sum is $6g-2b$. The boy will be that age in $6g-3b$ years, at which point the girl will be $g+(6g-3b)=7g-3b$. The boy is half that old now, so $b=\frac12(7g-3b),$ from which we see (multiplying by $2$ and rearranging) that $5b=7g$. Thus, $b=\frac75g$, so the boy is $\frac75$ times the girl's age at present.
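A quick brute-force check in Python (my own, just to confirm the algebra above) searches small integer ages and verifies that every pair satisfying the riddle has $b = \frac75 g$:

```python
def riddle_holds(b, g):
    """True if boy's age b and girl's age g satisfy the riddle (needs b > g)."""
    girl_then = 2 * g - b               # girl's age when the boy was the girl's age
    target = 2 * (g + girl_then)        # twice the sum of their ages back then
    girl_when = g + (target - b)        # girl's age when the boy reaches `target`
    return b > g and 2 * b == girl_when

solutions = [(b, g) for b in range(1, 40) for g in range(1, 40) if riddle_holds(b, g)]
print(solutions)  # (7, 5), (14, 10), ...: always b/g = 7/5
```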
{ "language": "en", "url": "https://math.stackexchange.com/questions/180556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the limit of uniformly integrable functions integrable? If $\left\{f_n\right\}$ are uniformly integrable and $f_n\overset{a.e.}{\rightarrow}f$ ($f$ measurable), is $f$ integrable? Can "uniformly integrable" be weakened to "integrable"?
Notes on Additional Assumptions

* It should be noted that for this to be true, we need the hypothesis that $|f|<\infty$ a.e. For example, if $(X,\mathfrak{m},\mu)$ is defined by $X=\{x\},\mathfrak{m}=\{\emptyset,X\},\mu(X)=1$, take $f_n:=n$.
* It should also be noted that we need the hypothesis that $\mu(X)<\infty$. For example, if the measure space $X$ is $\mathbb{R}$ with Lebesgue measure $m$, take $f_n:=\chi_{[0,n]}$.

A proof assuming the hypotheses

* $|f|<\infty$ (or only a.e. is also ok)
* $\mu(X)<\infty$

can be given as follows. Proof: By uniform integrability, find $\delta>0$ such that $\mu(E)<\delta$ implies $\int_E|f_n|<1$. By Fatou, $\chi_Ef\in L^1(\mu)$. Since $\mu(X)<\infty$, by successively cutting away these (necessarily finitely many) $E$, we may now assume that every non-empty measurable subset of $X$ has measure $\geq\delta$. In this case, notice that for $x\in X$, $\mu(f^{-1}(f(x)))\geq\delta$, which shows that $f(X)$ is a finite set since $\mu(X)<\infty$; then the condition $|f|<\infty$ applied to each fiber of $f$ shows that $f\in L^1(\mu)$. $\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/180616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }