Ackermann Function primitive recursive I am reading the Wikipedia page on the Ackermann function, http://en.wikipedia.org/wiki/Ackermann_function and I am having trouble understanding WHY the Ackermann function is an example of a function which is not primitive recursive. I understand that the Ackermann function terminates, and thus is a total function. So why is it not primitive recursive? The only information that I can find on the Wikipedia page is [Ackermann's function] grows faster than any primitive recursive function and is therefore not primitive recursive which isn't a good explanation of why it is not primitive recursive. Could anyone explain to me how exactly Ackermann's function is NOT primitive recursive, please?
Here's a proof showing why Ackermann's function is not primitive recursive. The key to showing that $A$ is not primitive recursive is to find a property shared by all primitive recursive functions but not by $A$. One such property is growth rate: one shows that $A$ in some sense "grows" faster than any primitive recursive function. Also, here's a proof showing that Ackermann's function is both a total function and a recursive function.
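To get a feel for that growth, here is a small Python sketch (my own addition, not part of the answer) of the Ackermann-Peter function; already $A(3,n)=2^{n+3}-3$, and $A(4,n)$ is an iterated exponential tower:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Ackermann-Peter function: total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(3, n) already grows like 2^(n+3) - 3
for n in range(5):
    print(f"A(3, {n}) = {ackermann(3, n)}")
```

Even evaluating $A(4,2)$ this way is hopeless: it has almost 20,000 decimal digits.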
{ "language": "en", "url": "https://math.stackexchange.com/questions/96483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 4, "answer_id": 0 }
Mathematical places to visit There are certain buildings and places on this planet where mathematicians can find delight because of the history, the art, the architecture, and for other reasons. For example, the Alhambra with its heavy use of geometric patterns. Or, perhaps, going to the Leaning Tower of Pisa and playing games with calculating where its shadow will be cast. Clearly, there are many places with historical or related significance to those interested in mathematics. I would enjoy hearing about some from people here -- in particular, I would love to hear about any places in the Middle East.
For the sake of the pure history associated with Göttingen, it should be a Mecca for any mathematician. I also remember my Calculus professor once talking about an auditorium with the acoustic property that if you whisper at one focus, a friend standing at the other focus can hear you. (This blog discusses it a bit.) As far as architecture inspired by mathematics: * *The double spiral staircase in the Vatican, as mentioned by the visitor here *Islamic geometric designs, as well as the Alhambra *This site also poses the interesting question: Why are fortresses often pentagons? However, the go-to guide would be Jane Burry's The New Mathematics of Architecture (Google images of some of its content)
{ "language": "en", "url": "https://math.stackexchange.com/questions/96539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
What's the connection between derivatives and boundaries? The (second) fundamental theorem of calculus says that $$\int_a^b f'(x) dx = f(b) - f(a)$$ which can also be stated, if one knows enough about what's coming next, as: The integral of the derivative of a function over an interval is the same as the function evaluated at the (signed) boundary of the interval. where I had to insert the word 'signed' to make it clear that there's an implicit multiplication by $-1$ when you evaluate the function at the 'bottom' end of the integral. If we wrote the right-hand side of the expression as $$f(b) + (-1) f(a)$$ then even a high-school student could probably be persuaded that this is the same as 'integrating' $f$ over the two points $b$ and $a$, with a multiplication by $-1$ attached to the evaluation at $a$. The generalization of this is the generalized Stokes theorem: $$\int_C dw = \int_{\partial C} w$$ where $w$ is a differential form, $d$ is the exterior derivative, $C$ is a manifold on which $dw$ is defined, and $\partial$ is the boundary operator, which maps a manifold $C$ to its boundary. This can be made to look pretty suggestive by writing integration of a form over a manifold using inner product notation: $$\langle C, w \rangle \equiv \int_Cw$$ in which case Stokes' theorem becomes $$\langle C, dw \rangle = \langle \partial C, w \rangle$$ which looks suspiciously like $\partial$ is the Hermitian adjoint of $d$. But is that really the case? Differential forms and manifolds seem pretty different to me. If they are, in fact, related in this way, is there a theory which expounds upon this relation, generalizes it, or puts it in context with other areas of mathematics?
Maybe this is the theory that you mean: A manifold $M$ of dimension $m$ defines an $m$-current $[[M]]$, which is a functional on the space of smooth $m$-forms in the following sense: $$[[M]](\omega)=\int_M\omega.$$ If $M$ is a manifold with boundary $\partial M$, then by Stokes' theorem the $m$-current $[[M]]$ and the $(m-1)$-current $[[\partial M]]$ are related by $$[[M]](d\omega)=\int_Md\omega=\int_{\partial M}\omega=[[\partial M]](\omega)$$ for any smooth $(m-1)$-form $\omega$. Personally, I first learned the theory of currents from the lecture notes of Demailly, which are available here.
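The one-dimensional instance of this pairing is just the fundamental theorem of calculus, which is easy to check symbolically; a quick sketch with sympy (my own addition, with an arbitrary choice of $f$):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', real=True)
f = sp.exp(x) * sp.sin(x)  # any smooth function will do

# <[a,b], df> = integral of f' over [a,b] ...
lhs = sp.integrate(sp.diff(f, x), (x, a, b))
# ... equals <boundary, f> = f(b) - f(a), the signed evaluation on {a, b}
rhs = f.subs(x, b) - f.subs(x, a)

assert sp.simplify(lhs - rhs) == 0
print("1-dimensional Stokes pairing verified")
```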
{ "language": "en", "url": "https://math.stackexchange.com/questions/96646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 3, "answer_id": 0 }
Equivalence class on real numbers Call two real numbers equivalent if their binary expansions differ in a finite number of places. If $S$ is a set which contains an element of every equivalence class, must $S$ contain an interval? How does one show that every interval contains an (uncountable number of?) elements of every equivalence class?
First, a technical point: for terminating expansions in binary, such as 1 or ½, we must choose one of the two valid binary expansions for representing them: for instance, either 1.000... or 0.111... in the case of 1. This is necessary for the equivalence classes to be well-defined. We may establish the convention of always choosing the terminating expansion. Then, you can solve your problem by making the following observations: * *For a given equivalence class E and a fixed representative e ∈ E, how can you characterize an arbitrary x ∈ E relative to e? What are the implications for the cardinality of E? *Is there any particularly simple sort of interval which is guaranteed to contain at least one representative from each equivalence class? Can you show that any interval strictly contains such an interval (and therefore, also contains infinitely many)? *Consider a partitioning of ℝ into the rational and irrational numbers. Can you find a way of selecting representatives for each equivalence class of the rationals, and also for the irrationals, such that their union does not contain any intervals? I think that this approach should be fairly straightforward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/96693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
In a Boolean algebra $B$ and $Y\subseteq B$, $p$ is an upper bound for $Y$ but not the supremum. Is there $q<p$ which is an upper bound? I don't think that this is the case. I am reading over one of my professor's proofs, and he seems to use this fact. Here is the proof: Let $B$ be a Boolean algebra, and suppose that $X$ is a dense subset of $B$ in the sense that every nonzero element of $B$ is above a nonzero element of $X$. Let $p$ be an element in $B$. The proof is to show that $p$ is the supremum of the set of all elements in $X$ that are below $p$. Let $Y$ be the set of elements in $X$ that are below $p$. It is to be shown that $p$ is the supremum of $Y$. Clearly, $p$ is an upper bound of $Y$. If $p$ is not the least upper bound of $Y$, then there must be an element $q\in B$ such that $q<p$ and $q$ is an upper bound for $Y$ ...etc. I do not see how this last sentence follows. I do see that if $p$ is not the least upper bound of $Y$, then there is some upper bound $q$ of $Y$ such that $p$ is NOT less than or equal to $q$. But, since we have only a partial order, and our algebra is not necessarily complete, I do not see how we can show anything else. So, is my professor's proof wrong, or am I just missing something fundamental?
Let $p$ and $q$ be upper bounds of $Y$. Then $p\wedge q$ is an upper bound of $Y$ (every $y\in Y$ satisfies $y\leq p$ and $y\leq q$, hence $y\leq p\wedge q$), and $p\wedge q\leq p$ and $p\wedge q\leq q$. Now if $p\wedge q = p$ for every upper bound $q$ of $Y$, then $p\leq q$ for every upper bound $q$, so $p$ must be the least upper bound. Otherwise $p\wedge q<p$ for some upper bound $q$, and then $p\wedge q$ is an upper bound of $Y$ strictly below $p$.
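A concrete sanity check (my own sketch, not part of the answer): in the Boolean algebra of subsets of a finite set, where the meet is intersection and the order is inclusion, the meet of two upper bounds is again an upper bound:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets -- the Boolean algebra P(s)."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

B = powerset({0, 1, 2})
Y = [frozenset({0}), frozenset({1})]               # a subset of B
uppers = [p for p in B if all(y <= p for y in Y)]  # <= is the subset order

# the meet (intersection) of any two upper bounds is again an upper bound
for p in uppers:
    for q in uppers:
        meet = p & q
        assert all(y <= meet for y in Y)
        assert meet <= p and meet <= q
print("checked", len(uppers), "upper bounds")
```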
{ "language": "en", "url": "https://math.stackexchange.com/questions/96864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Centralizers in reductive Lie groups = unimodular? Let $G$ be a real reductive group. Why is the centralizer of an element unimodular? What is a reference?
As the OP pointed out, my original answer was wrong. In fact centralizers of elements are unimodular, and a reference can be found by following the link in the OP's comment below. Here is an example that I find interesting: Consider the element $\begin{pmatrix} 1 & 0 & 1 \\0 & 1 & 0 \\0 & 0 & 1 \end{pmatrix}$ in $GL_3(\mathbb R)$. Its centralizer is $\Big\{ \begin{pmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & a \end{pmatrix} \Big\} \subset GL_3(\mathbb R)$. Although this is a solvable group, and looks very similar to the Borel in $GL_3$, which is not unimodular, it is in fact a unimodular group (as the OP points out in a second comment below).
{ "language": "en", "url": "https://math.stackexchange.com/questions/96930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Chance for picking a series of numbers (with repetition, order doesn't matter) I want to calculate the chance for the following situation: You throw a die 5 times. How big is the chance to get the numbers "1,2,3,3,5" if the order does not matter (i.e. 12335 = 21335 =31235 etc.)? I have 4 different solutions here, so I won't include them to make it less confusing. I'm thankful for suggestions!
If you already have $4$ solutions, what is it that you seek? There is more than one way to skin a cat, and I would suggest that you use the one that strikes you as simplest. To add to the list, here's one (hopefully not already there in exact form): favorable ways $= \dfrac{5!}{2!} = A$, total ways $= 6^5 = B$, Pr $= A/B$
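As a brute-force check of this count (my own sketch, not from the answer), one can enumerate all $6^5$ ordered outcomes and count those whose sorted value is $(1,2,3,3,5)$:

```python
from itertools import product

target = (1, 2, 3, 3, 5)
rolls = list(product(range(1, 7), repeat=5))      # all 6^5 ordered outcomes
favorable = sum(1 for r in rolls if tuple(sorted(r)) == target)

# 5!/2! = 60 orderings of the multiset {1,2,3,3,5}, out of 6^5 = 7776
print(favorable, len(rolls), favorable / len(rolls))
```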
{ "language": "en", "url": "https://math.stackexchange.com/questions/97046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Lie Algebra of $SL_n(\mathbb H)$ The Lie algebra of $SL_n(\mathbb C)$ consists of the matrices whose trace is $0$. But what is the Lie algebra of $SL_n(\mathbb H)$, where $\mathbb H$ is the quaternions?
The group $\mathrm{SL}(n, \mathbb{H})$ is defined here: * *Wikipedia, GL(n,H) and SL(n,H). It works like this: The algebra of quaternions is a subalgebra of the $2 \times 2$ complex matrices in a standard way. This lets us regard $n \times n$ quaternionic matrices as certain special $2n \times 2n$ complex matrices. This in turn lets us define a complex-valued determinant of an $n \times n$ quaternionic matrix, called the Study determinant, and also a complex-valued trace. The $n \times n$ quaternionic matrices for which the Study determinant is 1 form the group $\mathrm{SL}(n,\mathbb{H})$. The Lie algebra of $\mathrm{SL}(n,\mathbb{H})$ then consists of $n \times n$ quaternionic matrices for which the complex-valued trace is 0. The Study determinant is the square of the Dieudonné determinant, as shown here: * *Helmer Aslaksen, Quaternionic determinants, The Mathematical Intelligencer 18 (1996), 57–65. But the Dieudonné determinant is nonnegative, so in fact the Study determinant also takes only nonnegative real values, and $\mathrm{SL}(n,\mathbb{H})$ is also the group of quaternionic $n \times n$ matrices with Dieudonné determinant 1.
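To make the embedding concrete, here is a small numpy sketch (my own illustration), using the standard identification $a+bi+cj+dk \mapsto \begin{pmatrix} a+bi & c+di \\ -c+di & a-bi \end{pmatrix}$; for a $1\times 1$ quaternionic matrix the Study determinant comes out as the squared norm $a^2+b^2+c^2+d^2$, which is indeed real and nonnegative:

```python
import numpy as np

def quat_to_c2(q):
    """Embed a quaternion (a, b, c, d) = a + bi + cj + dk as a 2x2 complex matrix."""
    a, b, c, d = q
    return np.array([[a + b * 1j,  c + d * 1j],
                     [-c + d * 1j, a - b * 1j]])

def study_det(Q):
    """Study determinant of an n x n quaternionic matrix Q (entries are 4-tuples):
    the ordinary determinant of the corresponding 2n x 2n complex matrix."""
    n = len(Q)
    M = np.block([[quat_to_c2(Q[i][j]) for j in range(n)] for i in range(n)])
    return np.linalg.det(M)

q = (1.0, 2.0, 3.0, 4.0)      # 1 + 2i + 3j + 4k, squared norm 1 + 4 + 9 + 16 = 30
det = study_det([[q]])
print(det)                    # 30, up to rounding
```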
{ "language": "en", "url": "https://math.stackexchange.com/questions/97107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Cauchy Sequence in $X$ on $[0,1]$ with norm $\int_{0}^{1} |x(t)|dt$ In Luenberger's Optimization book pg. 34 an example says "Let $X$ be the space of continuous functions on $[0,1]$ with norm defined as $\|x\| = \int_{0}^{1} |x(t)|dt$". In order to prove $X$ is incomplete, he defines a sequence of elements in $X$ by $$ x_n(t) = \left\{ \begin{array}{ll} 0 & 0 \le t \le \frac{1}{2} - \frac{1}{n} \\ \\ nt-\frac{n}{2} + 1 & \frac{1}{2} - \frac{1}{n} \le t \le \frac{1}{2} \\ \\ 1 & t \ge \frac{1}{2} \end{array} \right. $$ Each member of the sequence is a continuous function and thus a member of the space $X$. Then he says: the sequence is Cauchy since, as is easily verified, $\|x_n - x_m\| = \frac{1}{2}\left|\dfrac1n - \dfrac1m\right| \to 0$ as $n,m \to \infty$. I tried to verify the norm $\|x_n - x_m\|$ by computing the integral. The piecewise function does not depend on $n,m$ on the last piece (for $t \ge 1/2$), so that piece contributes $0$ to $\|x_n - x_m\|$. For the middle piece I calculated the integral; it comes up zero. That leaves the first piece, and I did not get the result Luenberger has. Is there something wrong in my approach?
He does not "define" that $X$ is incomplete, he proves it. The idea is that the function $|x_n - x_m|$ looks like this : assume $n < m$, so $$ (x_n - x_m)(t) = \begin{cases} 0 & \text{ if } t \le \frac 12 - \frac 1n \text{ or } t \ge \frac 12 \\ nt- \frac n2 + 1 & \text{ if } \frac 12 - \frac 1n \le t \le \frac 12 - \frac 1m \\ (n-m)t - \frac{n-m}2 & \text{ if } \frac 12 - \frac 1m \le t \le \frac 12. \end{cases} $$ Computing the integral gives you $$ \left( \left. \frac{nt^2}2 - \frac {nt}2 + t \right|_{\frac 12 - \frac 1n}^{\frac 12 - \frac 1m} \right) + \left( \left. \frac{(n-m)t^2}2 - \frac{(n-m)t}2 \right|_{\frac 12 - \frac 1m}^{\frac 12} \right) = \frac 12 \left( \frac 1n - \frac 1m \right). $$ The first parenthesis is the integral over the second part of the piecewise writing of $x_n - x_m$ and the second parenthesis is the integral over the third part. The integral over the first part is $0$. The sequence is Cauchy because of this. Hope that helps,
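A quick numerical check of Luenberger's claimed value of $\|x_n - x_m\|$ (my own sketch, using a plain trapezoidal rule on a fine grid):

```python
import numpy as np

def x(n, t):
    """Luenberger's sequence: 0, then a linear ramp on [1/2 - 1/n, 1/2], then 1."""
    return np.clip(n * t - n / 2 + 1, 0.0, 1.0)

def l1_norm(f, num=1_000_001):
    """Approximate the L1 norm of f on [0, 1] with the trapezoidal rule."""
    t = np.linspace(0.0, 1.0, num)
    y = np.abs(f(t))
    h = t[1] - t[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

n, m = 3, 5
norm = l1_norm(lambda t: x(n, t) - x(m, t))
print(norm, 0.5 * abs(1 / n - 1 / m))  # both about 1/15
```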
{ "language": "en", "url": "https://math.stackexchange.com/questions/97171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Unique symmetric covariant $k$-tensor satisfying $(\operatorname{Sym} T)(A,...,A)=T(A,...,A)$ for all $A \in V$ Let $T$ be a covariant $k$-tensor on a finite dimensional vector space $V$. I want to prove that the symmetrization of $T$ is the unique symmetric $k$-tensor satisfying the following condition: $(\operatorname{Sym} T)(A,\ldots,A)=T(A,\ldots,A)$ for all $A \in V$. Definition. Symmetrization of $T$ is defined as $$(\operatorname{Sym}T)(A_1,\ldots,A_k)=\frac{1}{k!} \sum_{\sigma \in S_k} T(A_{\sigma(1)},\ldots,A_{\sigma(k)})$$ where $S_k$ is the symmetric group on $k$ letters. I assumed that there exists another symmetric $k$-tensor $\tilde{T}$ which satisfies the condition. Since $\tilde{T}$ is symmetric, it is equal to its symmetrization $\operatorname{Sym} \tilde{T}$. Then I tried to show that $(\operatorname{Sym}T)(A_1,\ldots,A_k)=\tilde{T}(A_1,\ldots,A_k)$, or equivalently, $(\operatorname{Sym}T)(A_1,\ldots,A_k)=(\operatorname{Sym}\tilde{T})(A_1,\ldots,A_k)$ but I couldn't. Thanks in advance.
Here's an analysis of the case $k=2$. Suppose $\bar T$ is another symmetric tensor satisfying $\bar T(A,A)= T(A,A)$ for all $A$. Now $$T(x+y,x+y)=\bar T(x+y,x+y)= \bar T(x,x)+\bar T(x,y)+\bar T(y,x)+\bar T(y,y).$$ This equals $T(x,x)+2\bar T(x,y)+T(y,y)$. Thus $$\bar T(x,y)=\frac{1}{2}(T(x+y,x+y)-T(x,x)-T(y,y)).$$ So $\bar T$ is uniquely determined by $T$. Since $\operatorname{Sym}(T)$ is symmetric and also satisfies the formula $\operatorname{Sym}(T)(A,A)=T(A,A)$, $\bar T$ and $\operatorname{Sym}(T)$ must be equal. A similar, but more complicated, argument works in the case of $k>2$.
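The $k=2$ polarization identity is easy to test numerically (my own sketch): take a bilinear form $T(x,y)=x^\top A y$ for an arbitrary matrix $A$, whose symmetrization corresponds to $\frac12(A+A^\top)$, and check that the displayed formula recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # a (generally non-symmetric) bilinear form
S = 0.5 * (A + A.T)                      # matrix of its symmetrization

def T(u, v):
    return u @ A @ v

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# polarization: the symmetric part is determined by the quadratic form T(A, A)
polarized = 0.5 * (T(x + y, x + y) - T(x, x) - T(y, y))
assert np.isclose(polarized, x @ S @ y)
print("polarization identity verified")
```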
{ "language": "en", "url": "https://math.stackexchange.com/questions/97239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Solving $\int\frac{\ln(1+e^x)}{e^x} \space dx$ I'm trying to solve this integral. $$\int\frac{\ln(1+e^x)}{e^x} \space dx$$ I try to solve it using partial integration twice, but then I get to this point (where $t = e^x$ and $dx = \frac{1}{t} dt$): $$\int\frac{\ln(1+t)}{t^2} \space dt = \frac{1}{t} \cdot \ln(1+t) - \frac{1}{t} \cdot \ln(1+t) + \int\frac{\ln(1+t)}{t^2} \space dt$$ $$\cdots$$ $$0 = 0$$ What am I doing wrong?
Ok, I think I know what you did wrong. For a particular choice of variable substitutions, doing integration by parts twice will land you back where you started. Notice that you got a true statement ($0=0$ is indeed true), so you haven't done anything wrong in the sense of doing some step incorrectly, but you haven't made any progress toward your answer either. To illustrate, let me define two functions $f(x)$ and $g(x)$ and consider the integral: $$ \int f(x) g(x) dx $$ Integration by parts tells us: $$ \int u \, dv = u v - \int v \, du $$ which translates to: $$ \int f(x) g(x) dx = f(x) G(x) - \int G(x) f'(x) dx $$ where $G(x)$ is the antiderivative of $g(x)$ and $f'(x)$ is the derivative of $f(x)$. In the above, I used $u=f(x)$ and $dv = g(x) dx$. Let's do integration by parts again, but this time using the substitution $u = G(x)$ and $dv = f'(x) dx$. This will yield: $$ \int G(x) f'(x) dx = G(x) f(x) - \int f(x) g(x) dx$$ Substituting back in, we find: $$ \int f(x) g(x) dx = G(x) f(x) - G(x) f(x) + \int f(x) g(x) dx $$ $$ \to \int f(x) g(x) dx = \int f(x) g(x) dx $$ $$ \to 0 = 0 $$ Integration by parts represents a transformation; by choosing to do it a second time with the choice of variables above, you apply the inverse of the first transformation, yielding the identity transformation. To make progress, you must avoid applying integration by parts a second time in the manner you did. Since your question is likely homework, I won't go into how to actually solve your integral, but I will tell you that I solved it by first using integration by parts and then an application of partial fractions. Hope that helps!
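Once you have a candidate antiderivative, a computer algebra system can confirm it without spoiling the closed form; a sketch with sympy (my own addition), which checks that an antiderivative differentiates back to the integrand at a few sample points:

```python
import sympy as sp

x = sp.symbols('x', real=True)
integrand = sp.log(1 + sp.exp(x)) * sp.exp(-x)

# let sympy produce an antiderivative, then confirm by differentiating back
F = sp.integrate(integrand, x)
residual = sp.diff(F, x) - integrand
checks = [abs(complex(sp.N(residual.subs(x, v)))) for v in (-1, 0.5, 2)]
assert all(c < 1e-9 for c in checks)
print("antiderivative verified by differentiation")
```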
{ "language": "en", "url": "https://math.stackexchange.com/questions/97292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Number of knots possible with length L string What is the asymptotic growth in $L$ of the number of topologically different knots that can be tied in a closed string of length $L$ and radius $1$, in $3$-dimensional Euclidean space?
Let $Cr(K)$ be the crossing number of a knot $K$. If $L(K)$ is the minimal length of a knot $K$ with thickness 1, it has been shown that $L(K)=\Omega(Cr(K)^{3/4})$, first proved in this paper by Gregory Buck http://www.nature.com/nature/journal/v392/n6673/pdf/392238a0.pdf. Hopefully this is close enough to the answer that you needed. See also http://www.sciencedirect.com/science/article/pii/S0166864197002113.
{ "language": "en", "url": "https://math.stackexchange.com/questions/97343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Secret Number Problem Ten students are seated around a (circular) table. Each student selects his or her own secret number and tells the number to the person on his or her right side and to the person on his or her left side (without disclosing it to anyone else). Each student, upon hearing two numbers, then calculates the average and announces it aloud. In order, going around the table, the announced averages are 1,2,3,4,5,6,7,8,9 and 10. What was the secret number chosen by the person who announced a 6?
Extending pedja's analysis: Let us denote secret numbers as $x_i$ , where $i$ is announced number ,then we have following system of equations : $\begin{cases} x_1+x_3=4 \\ x_2+x_4=6 \\ x_3+x_5=8 \\ x_4+x_6=10 \\ x_5+x_7=12 \\ x_6+x_8=14 \\ x_7+x_9=16 \\ x_8+x_{10}=18 \\ x_9+x_1=20 \\ x_{10}+x_2=2 \end{cases}$ So we have $(x_6+x_8)-(x_8+x_{10})+(x_{10}+x_2)-(x_2+x_4)+(x_4+x_6) = \\ x_6+(x_8-x_8)+(-x_{10}+x_{10})+(x_2-x_2)+(-x_4+x_4)+x_6 = x_6 + x_6$ $14-18+2-6+10 = 2 = 2x_6$ $x_6 = 1$ Maple not required.
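The same answer drops out of solving the full $10\times 10$ cyclic system numerically (my own sketch): the person announcing $i$ averages the secrets of the two neighbors, i.e. $x_{i-1}+x_{i+1}=2i$ with indices taken mod 10:

```python
import numpy as np

n = 10
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):                 # row for the person who announced i + 1
    A[i, (i - 1) % n] = 1          # secret of the neighbor who announced i
    A[i, (i + 1) % n] = 1          # secret of the neighbor who announced i + 2
    b[i] = 2 * (i + 1)             # twice the announced average

x = np.linalg.solve(A, b)
print(x[5])                        # secret of the person who announced 6 (should be 1)
```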
{ "language": "en", "url": "https://math.stackexchange.com/questions/97416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Defining addition of supernatural numbers? In the comments on this question Bill Dubuque mentions the supernatural numbers. My curiosity was piqued by the statement on Wikipedia that "there is no natural way to add supernatural numbers" and I soon invented this example: Let $a$ be the supernatural product of all primes congruent to 1 mod 4, and let $b$ be the supernatural product of all primes congruent to 3 mod 4. Because GCD is defined for supernatural numbers, and the sum of two relatively prime numbers is relatively prime to each of them, we can say that $2a + b = 1$ and also that $a + 2b = 1$; adding these gives $3a + 3b = 2$ or $a + b = \frac{2}{3}$. The value $\frac{2}{3}$ can apparently be interpreted as a "super-rational" number, a supernatural-like number where negative exponents are permitted. So it seems that I can give a consistent definition of addition at least for some supernatural numbers (although the result in this case is "super-rational"). What is the basis of the claim that "there is no natural way to add supernatural numbers"? Do the assumptions underlying my idea lead to any contradiction? If not, to what extent can it be extended to allow the addition of more general forms? EDIT: I hadn't read the article closely enough to realize that supernatural numbers are allowed to have exponent values of $\infty$, and also it has been pointed out that my idea does not work in any case. What remains of this question I feel is too unfocused. I am accepting Greg Martin's answer.
You say "Because GCD is defined for supernatural numbers, and the sum of two relatively prime numbers is relatively prime to each of them, we can say that $2a+b=1$". This deduction seems hasty to me. I assume you're thinking "two natural numbers that are relatively prime have a sum that has no prime factors in common with either; and the only natural number that has no prime factors is 1". However, you're assuming to start with that $2a+b$ is a natural (or perhaps supernatural) number, but there's no reason that $2a+b$ has to be well-defined. In fact, your claim together with mjqxxxx's modification could probably be combined to give a proof that addition cannot be defined on the supernatural numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/97537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Graphing Inequalities in Two Variables When we have to graph an inequality in two variables, we usually graph the corresponding equality, i.e. the straight line on the coordinate plane, which divides the plane into two parts. Then, we use a so-called 'test point' to decide which of the two parts represents the inequality. So, here we clearly assume that one of the two parts formed by the straight line represents the '>' inequality, whereas the other part represents the '<' inequality. Is there any rigorous proof that can show that this is indeed the case? Thanks.
It's just the interpretation of the graph of a function: $$\text{ the point $(x,y)$ is on the graph of the function $f$ if and only if $y=f(x)$.} $$ As an example, consider the inequality $$y>2x-4$$ If you set $$f(x)=2x-4,$$ then the graph of $f$ is precisely the solution set to $y=2x-4$. Now, if you fix $x=x_0$ and draw a vertical line through $x=x_0$, then one portion of the line will be above the graph of $f$ and one below. We have for any point $(x_0,y_1)$ on the portion that's above that $y_1> 2x_0-4=f(x_0)$. So any point on the line $x=x_0$ above the graph of $f$ is a solution to the inequality. Similarly any point on the line $x=x_0$ below the graph of $f$ is not a solution to the inequality. The same can be said for other vertical lines. The $y$-coordinate of any point $(x,y)$ above the graph of $f$ will be greater than the $y$-coordinate of the corresponding point $(x, f(x))$ on the graph of $f$; and from this you can say any point above the graph of $f$ is a solution to the inequality. If $\color{maroon}{(x_0 ,y_1)}$ is on the maroon line segment, then $y_1>f(x_0 )=2x_0-4$. If $\color{darkgreen}{(x_0 ,y_2)}$ is on the green line segment, then $y_2<f(x_0 )=2x_0-4$. The above applies to any inequality in $x$ and $y$ that can be written as $y>f(x)$. The reason why the "pick a test point" method works in this case is that the graph of a function does divide the plane into three mutually disjoint and exhaustive sets: the graph of the function (where $y=f(x)$), the region "above the graph" (where $y\gt f(x)$), and the region "below the graph" (where $y\lt f(x)$). So if the test point is in one of these particular regions, then every point in that region will satisfy the same inequality that the test point satisfies. Of course, things go awry when you have an inequality not of the above form...
{ "language": "en", "url": "https://math.stackexchange.com/questions/97589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is $\sin^3 x=\frac{3}{4}\sin x - \frac{1}{4}\sin 3x$? $$\sin^3 x=\frac{3}{4}\sin x - \frac{1}{4}\sin 3x$$ Is there any formula that tells this or why is it like that?
\begin{align} \sin(3x) &= \sin(x+2x) \tag{1} \\ \sin(\alpha+\beta) &= \sin \alpha \cdot \cos \beta + \cos \alpha \cdot \sin \beta \tag{2} \\ \sin 2\alpha &= 2\cdot \sin \alpha \cdot \cos \alpha \tag{3} \\ \cos 2\alpha &= \cos^2 \alpha - \sin^2 \alpha \tag{4} \\ 1 &= \sin^2 \alpha + \cos^2 \alpha \tag{5} \end{align} If you apply all these formulas you should get: $$ \sin(3x)=3\cdot \sin x -4\cdot \sin^3 x $$ Solving this for $\sin^3 x$ gives exactly the identity in the question.
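A quick numerical spot check of the identity (my own addition):

```python
import math

for k in range(100):
    x = -5.0 + 0.1 * k                     # sample points across [-5, 5)
    lhs = math.sin(x) ** 3
    rhs = 0.75 * math.sin(x) - 0.25 * math.sin(3 * x)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
print("identity holds at all sampled points")
```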
{ "language": "en", "url": "https://math.stackexchange.com/questions/97654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 8, "answer_id": 5 }
difficult sequence Prove that the sequence $$a_n= \frac{1}{n}\left(e\cdot\sqrt e\cdot\sqrt[3]e \cdots \sqrt[n]e\right)$$ is decreasing to a finite limit. After having shown that the sequence $$b_n=\left(\sum_{k=1}^n\frac{1}{k}\right)-\log n$$ converges to a positive real number $b$, say what the limit of $a_n$ is.
First, notice that $$\log(a_n) = \log\frac{1}{n} + \log(e\cdot\sqrt{e}\cdots\sqrt[n]{e}) = \sum_{k = 1}^n\log \sqrt[k]{e} - \log n = \sum_{k=1}^n\frac{1}{k} - \log n = b_n$$ Assume that $\lim_{n\rightarrow\infty}b_n = b\in\mathbb{R}$. Then by the above equation, $$\lim_{n\rightarrow\infty}a_n = \lim_{n\rightarrow\infty}e^{b_n} = e^b$$ Hence it is enough to show that $b_n$ is decreasing to a finite limit. Now, notice that $$\int_1^{n+1}\frac{1}{x}dx \leq \sum_1^n \frac{1}{k}\leq 1 + \int_1^n\frac{1}{x}dx$$ In particular, $$\int_1^{n + 1}\frac{1}{x}dx - \log n \leq b_n \leq 1 + \int_1^n\frac{1}{x}dx - \log n$$ so that $$\log\frac{n+1}{n} \leq b_n \leq 1$$ So $\{b_n\}$ is bounded and positive. It remains to show that $b_n$ converges. Try to prove monotonicity of $b_n$.
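Numerically, $b_n$ decreases to the Euler-Mascheroni constant $\gamma \approx 0.5772$, so $a_n \to e^\gamma$; a quick sketch (my own addition):

```python
import math

def b(n):
    """Partial harmonic sum minus log n."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

terms = [b(n) for n in range(1, 200)]
assert all(s > t for s, t in zip(terms, terms[1:]))   # strictly decreasing
print(b(100000))   # close to the Euler-Mascheroni constant 0.5772...
```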
{ "language": "en", "url": "https://math.stackexchange.com/questions/97732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Changing the argument for a higher order derivative I start with the following: $$\frac{d^n}{dx^n} \left[(1-x^2)^{n+\alpha-1/2}\right]$$ Which is part of the Rodrigues definition of a Gegenbauer polynomial. Gegenbauer polynomials are also useful in terms of trigonometric functions so I want to use the substitution $x = \cos\theta$, which is the usual way of doing it. However, I'm stuck as to how this works for the Rodrigues definition, because it gives me a derivative with respect to $\cos\theta$ instead of a derivative with respect to $\theta$: $$\frac{d^n}{d(\cos\theta)^n} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]$$ QUESTION: Is there a way to write this as $\dfrac{d^n}{d\theta^n}[\text{something}]$? I have read some about Faa di Bruno's formula for the $n$-th order derivative of a composition of functions but it doesn't seem to do what I want to do. Also, for n=1 there is the identity, from the chain rule, $\dfrac{d}{d(\cos\theta)} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]=\frac{\frac{d}{d\theta} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]}{\frac{d}{d\theta} \left[\cos\theta\right]}$, but this doesn't hold for higher order derivatives. Any ideas?
I could be mistaken, but here goes nothing. So you're looking at making the substitution $x = \cos\theta$. When you do this, you will need to change $\frac{d}{dx}$ into $\frac{d}{d\theta}$ via chain rule. Let's let $f(x(\theta))$ be our function we wish to differentiate. From chain rule, $\frac{df}{d\theta} = \frac{df}{dx}\frac{dx}{d\theta}$. You know what $\frac{dx}{d\theta}$ is from your definition of $x$, so you can compute this. This comes out to be $(-\sin(\theta))$. Rewriting our expression, we see that $\frac{df}{dx} = \frac{1}{\frac{dx}{d\theta}} \frac{df}{d\theta}$, or if we drop the function $f$: $\frac{d}{dx} = \frac{1}{\frac{dx}{d\theta}} \frac{d}{d\theta}$. Now we'll make the appropriate substitution to get $\frac{d}{dx} = -\frac{1}{\sin(\theta)}\frac{d}{d\theta}$. From this, $\left(\frac{d}{dx}\right)^n = (-1)^n \left(\frac{1}{\sin(\theta)} \frac{d}{d\theta}\right)^n$. I'm not sure if a nice expression exists in general for this, but it's pretty easy to compute.
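One can sanity-check the operator identity $\frac{d}{dx} = -\frac{1}{\sin\theta}\frac{d}{d\theta}$ for $n=2$ with sympy (my own sketch, using $f(x)=x^3$ as an arbitrary test function):

```python
import sympy as sp

theta = sp.symbols('theta')
f = lambda x: x ** 3                      # arbitrary smooth test function

def L(expr):
    """Apply the operator -(1/sin(theta)) d/dtheta once."""
    return -sp.diff(expr, theta) / sp.sin(theta)

g = f(sp.cos(theta))                      # f evaluated at x = cos(theta)
second = L(L(g))                          # (d/dx)^2 f, expressed in theta

# direct second derivative: f''(x) = 6x, evaluated at x = cos(theta)
assert sp.simplify(second - 6 * sp.cos(theta)) == 0
print("operator identity verified for n = 2")
```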
{ "language": "en", "url": "https://math.stackexchange.com/questions/97929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Derivative of a function is odd, prove the function is even. $f:\mathbb{R} \rightarrow \mathbb{R}$ is such that $f'(x)$ exists $\forall x$, and $f'(-x)=-f'(x)$. I would like to show $f(-x)=f(x)$; in other words, a function with an odd derivative is even. If I could apply the fundamental theorem of calculus, then $\int_{-x}^{x}f'(t)dt = f(x)-f(-x)$, and since the integrand is odd we would have $f(x)-f(-x)=0 \Rightarrow f(x)=f(-x)$; but unfortunately I don't know that $f'$ is integrable.
Let $g(x)=f(-x)$. Then $g'(x)=-f'(-x)=f'(x)$. Since $g(0)=f(0)$ and $g'=f'$, it follows from the mean value theorem that $g=f$.
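A tiny numerical illustration (my own addition): $f(x) = x^2 + \cos x$ has the odd derivative $f'(x) = 2x - \sin x$, and is indeed even:

```python
import math

f = lambda x: x ** 2 + math.cos(x)        # derivative 2x - sin(x) is odd
df = lambda x: 2 * x - math.sin(x)

for k in range(1, 50):
    x = 0.1 * k
    assert math.isclose(df(-x), -df(x), abs_tol=1e-12)  # f' is odd
    assert math.isclose(f(-x), f(x), abs_tol=1e-12)     # f is even
print("odd derivative, even function")
```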
{ "language": "en", "url": "https://math.stackexchange.com/questions/98003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
How to prove this claim on $\omega$ How to prove this claim: if $T\subset \omega$ and, for every $n\in\omega$, $n \subset T$ implies $n\in T$, then $T=\omega$? Any help will be appreciated.
A hint: Argue by induction, or what is the same, show that there can be no smallest witness to the negation of the conclusion. In more detail: If $T\ne\omega$ then, since $T\subset\omega$, there must be some $n\in\omega$ with $n\notin T$. Then the set of elements of $\omega$ that are not in $T$ is nonempty, so it has a smallest element, say $m$. Then $m\notin T$ but any $k<m$ is in $T$. It should be clear how to finish from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/98073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the purpose of Stirling's approximation to a factorial? Stirling's approximation to a factorial is $$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n. $$ I wonder what benefit can be gained from it? From a computational perspective (I admit I don't know too much about how each arithmetic operation is implemented and which is cheaper than which), a factorial $n!$ contains $n-1$ multiplications. In Stirling's approximation, one also has to compute one division and $n$ multiplications for $\left(\frac{n}{e}\right)^n$, no? Plus two multiplications and one square root for $\sqrt{2 \pi n}$; how does the approximation reduce computation? There may be considerations from other perspectives. I also would like to know. Please point out your perspective if you can. Added: For the purpose of simplifying analysis by Stirling's approximation, for example in the reply by user1729, my concern is that it is an approximation after all, and even if the approximating expression converges, don't we need to show that the original expression also converges and converges to the same thing as its approximation? Thanks and regards!
Abraham de Moivre was the person who first introduced Stirling's formula. His friend James Stirling is the one who found that the constant is $\sqrt{2\pi}$; de Moivre only knew it numerically. De Moivre used it to approximate the probability that the number of heads you get when you toss a coin 1800 times is $x$, for $x$ not too many standard deviations away from 900. He wrote about this in his book titled The Doctrine of Chances (google the title!). The title of the book is in effect 18th-century English for "the theory of probability". The phrase appears again in Thomas Bayes' famous posthumous paper "An essay towards solving a problem in the doctrine of chances" (google that title too). De Moivre derived the bell-shaped curve $$ x\mapsto \text{constant} \cdot e^{-x^2/2} $$ from this formula.
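On the computational side of the original question, one payoff is that the approximation (or its logarithm) gives very large factorials in a constant number of floating-point operations, where the exact product needs $n-1$ multiplications. A minimal sketch checking how fast the ratio approaches $1$ (the relative error behaves like $1/(12n)$, a fact not stated above but easy to observe):

```python
import math

def stirling(n):
    """Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in [5, 10, 20, 50]:
    ratio = stirling(n) / math.factorial(n)
    print(n, ratio)   # ratio tends to 1 as n grows
```

Note that `(n / math.e) ** n` is a single exponentiation, not $n$ multiplications, which answers part of the cost concern in the question.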
{ "language": "en", "url": "https://math.stackexchange.com/questions/98171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 10, "answer_id": 7 }
Is there a simple formula for this simple question about a circle? What is the average distance of the points within a circle of radius $r$ from a point a distance $d$ from the centre of the circle (with $d>r$, though a general solution without this constraint would be nice)? The question arose as an operational research simplification of a real problem in telecoms networks and is easy to approximate for any particular case. But several of my colleagues thought that such a simple problem should have a simple formula as the solution, but our combined brains never found one despite titanic effort. It looks like it might involve calculus. I'm interested in both how to approach the problem, but also a final algebraic solution that could be used in a spreadsheet (that is, I don't want to have to integrate anything).
This solution definitely seems to have problem(s), but perhaps even though it's wrong, it'll help get someone to a complete/correct solution. First, suppose that the point is outside the circle. For each secant from the point through the circle, the distance from the given point to the midpoint of the part of the secant that is inside the circle is equal to the average of the distances to all of the points in the circle through which that secant passes. The locus of these midpoints is an arc of the circle that has a diameter with endpoints at the given point and the center of the given circle. If we place the given point at the origin and the center of the given circle at $d$ on the positive horizontal axis, the locus of points described above has polar equation $r=\frac{d}{2}\cos\theta$ for $-\arcsin\frac{r}{d}\le\theta\le\arcsin\frac{r}{d}$. For each $\theta$, the length of the line segment on the secant and inside the circle is $2\sqrt{r^2-d^2\sin^2\theta}$. So, the average distance should be: $$\frac{1}{2\arcsin\frac{r}{d}}\int_{-\arcsin\frac{r}{d}}^{\arcsin\frac{r}{d}}\left(2\sqrt{r^2-d^2\sin^2\theta}\cdot\frac{d}{2}\cos\theta\right)d\theta,$$ which I let Mathematica work on for a bit and it's telling me is 0, so I probably screwed something up. Now, if the point is inside the circle, then we have the entirety of the circle with diameter with endpoints at the given point and the center of the circle. Placing the given point and the circle as above, the locus-circle has the same equation, $r=\frac{d}{2}\cos\theta$, the length of the secant line segments inside the circle (now just chords) is the same $2\sqrt{r^2-d^2\sin^2\theta}$, but the limits of integration change to encompass the whole circle: $$\frac{1}{\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\left(2\sqrt{r^2-d^2\sin^2\theta}\cdot\frac{d}{2}\cos\theta\right)d\theta.$$
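Since the closed forms above are admittedly in doubt, an independent Monte Carlo sanity check is useful (a sketch, not derived from the integrals above): sample uniform points in a disk of radius $r$ centered at distance $d$ from the origin and average the distances. For $d=5$, $r=1$ the result comes out slightly above $5$, consistent with convexity of the distance function:

```python
import random, math

def avg_distance(d, r, n=100_000, seed=0):
    """Monte Carlo estimate of the mean distance from the origin to a
    uniform random point in the disk of radius r centered at (d, 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        while True:  # rejection-sample a uniform point in the disk
            x, y = rng.uniform(-r, r), rng.uniform(-r, r)
            if x * x + y * y <= r * r:
                break
        total += math.hypot(d + x, y)
    return total / n

print(avg_distance(d=5.0, r=1.0))   # a little above 5
```

Any candidate closed form can be compared against this estimate.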
{ "language": "en", "url": "https://math.stackexchange.com/questions/98231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Easy way to determine the primes for which $3$ is a cube in $\mathbb{Q}_p$? This is a qual problem from Princeton's website and I'm wondering if there's an easy way to solve it: For which $p$ is $3$ a cube root in $\mathbb{Q}_p$? The case $p=3$ for which $X^3-3$ is not separable modulo $p$ can easily be ruled out by checking that $3$ is not a cube modulo $9$. Is there an approach to this that does not use cubic reciprocity? If not, then I'd appreciate it if someone would show how it's done using cubic reciprocity. I haven't seen good concrete examples of it anywhere. EDIT: I should have been more explicit here. What I really meant to ask was how would one find all the primes $p\neq 3$ s.t. $x^3\equiv 3\,(\textrm{mod }p)$ has a solution? I know how to work with the quadratic case using quadratic reciprocity, but I'm not sure what should be done in the cubic case.
As noted in the comments, the question comes down to: For which primes $p\gt 3$ is $3$ a cubic residue modulo $p$? This is answered in detail in Franz Lemmermeyer's Reciprocity Laws, Chapter 7 ("Cubic Reciprocity"). If $p\equiv 2\pmod{3}$, then the order of the units modulo $p$ is prime to $3$, so every element is a cube; thus, $3$ is a cube modulo $p$ for all primes $p\equiv 2\pmod{3}$. If $p\equiv 1\pmod{3}$, then one can write $4p = L^2 + 27M^2$ for integers $L$ and $M$, and $3$ is a cubic residue modulo $p$ if and only if $M\equiv 0\pmod{3}$ (Proposition 7.2 in Lemmermeyer).
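The stated criterion is easy to check numerically. A small brute-force verification sketch: for each prime $p \equiv 1 \pmod 3$, find the representation $4p = L^2 + 27M^2$ and compare the condition $3 \mid M$ against a direct search for a cube root of $3$:

```python
import math

def is_cubic_residue(a, p):
    """Direct search: is a a cube modulo the prime p?"""
    return any(pow(x, 3, p) == a % p for x in range(1, p))

def lemmermeyer_criterion(p):
    """For p = 1 (mod 3): write 4p = L^2 + 27 M^2; 3 is a cube mod p iff 3 | M."""
    M = 0
    while 27 * M * M <= 4 * p:
        L2 = 4 * p - 27 * M * M
        L = math.isqrt(L2)
        if L * L == L2:
            return M % 3 == 0
        M += 1
    raise ValueError("no representation found")

for p in [7, 13, 19, 31, 37, 43, 61, 67, 73, 79]:   # primes = 1 (mod 3)
    assert is_cubic_residue(3, p) == lemmermeyer_criterion(p), p
print("criterion agrees with brute force")
```

For instance $4\cdot 61 = 1^2 + 27\cdot 3^2$ with $3 \mid M$, and indeed $3^5 \equiv -1 \pmod{61}$, so $3^{20} \equiv 1$ and $3$ is a cube mod $61$.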
{ "language": "en", "url": "https://math.stackexchange.com/questions/98298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Show that $D_{12}$ is isomorphic to $D_6\times C_2$ Show that $D_{12}$ is isomorphic to $D_6 \times C_2$, where $D_{2n}$ is the dihedral group of order $2n$ and $C_2$ is the cyclic group of order $2$. I'm somewhat in over my head with my first year groups course. This is a question from an example sheet which I think if someone answered for me could illuminate a few things about isomorphisms to me. In this situation, does one use some lemma (that the direct product of two subgroups being isomorphic to their supergroup(?) if certain conditions are satisfied)? Does $D_{12}$ have to be abelian for this? Do we just go right ahead and search for a fitting bijection? Can we show the isomorphism is there without doing any of the above? If someone could please answer the problem in the title and talk their way through, they would be being very helpful. Thank You.
Using the Direct Product Theorem: let $H_1, H_2 \leq G$ and suppose the following are true: (i) $H_1 \cap H_2 = \{e\}$; (ii) $a_1a_2 = a_2a_1$ for all $a_i \in H_i$; (iii) every $a \in G$ can be written as $a = a_1a_2$ with $a_i \in H_i$ (we also write this as $G = H_1H_2$). Then $G \cong H_1 \times H_2$. In this question, write $D_{12}$ in the usual way as $\{e,r,r^2,\dots,r^5,s,rs,\dots,r^5s\}$, and take $H_1 = \{e,r^2,r^4,s,r^2s,r^4s\} \cong D_6$ and $H_2 = \{e,r^3\} \cong C_2$. (i) is obvious. (ii) is a little bit tricky: products not involving $s$ obviously commute, and for those involving $s$, notice that $r^3s = sr^3$ (since $sr^3 = r^{-3}s = r^3s$). (iii) is easy to show.
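The three conditions can also be verified mechanically. A sketch (not part of the original answer), encoding $r^i s^f \in D_{12}$ as the pair $(i, f)$ with $i$ taken mod $6$ and using the relation $s r = r^{-1} s$:

```python
from itertools import product

N = 6  # D12 = <r, s | r^6 = s^2 = e, s r = r^-1 s>; element r^i s^f is (i, f)

def mul(a, b):
    (i, f), (j, g) = a, b
    return ((i + (-1) ** f * j) % N, (f + g) % 2)

G  = [(i, f) for i in range(N) for f in range(2)]        # all 12 elements
H1 = [(i, f) for i in range(0, N, 2) for f in range(2)]  # {e,r^2,r^4,s,r^2 s,r^4 s} ~ D6
H2 = [(0, 0), (3, 0)]                                    # {e, r^3} ~ C2
e  = (0, 0)

assert set(H1) & set(H2) == {e}                              # (i)
assert all(mul(a, b) == mul(b, a) for a in H1 for b in H2)   # (ii): r^3 is central
assert set(mul(a, b) for a, b in product(H1, H2)) == set(G)  # (iii): G = H1 H2
print("D12 ~ D6 x C2 verified")
```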
{ "language": "en", "url": "https://math.stackexchange.com/questions/98343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Integral of periodic function over the length of the period is the same everywhere I am stuck on a question that involves the intergral of a periodic function. The question is phrased as follows: Definition. A function is periodic with period $a$ if $f(x)=f(x+a)$ for all $x$. Question. If $f$ is continuous and periodic with period $a$, then show that $$\int_{0}^{a}f(t)dt=\int_{b}^{b+a}f(t)dt$$ for all $b\in \mathbb{R}$. I understand the equality, but I am having trouble showing that it is true for all $b$. I've tried writing it in different forms such as $F(a)=F(b+a)-F(b)$. This led me to the following, though I am not sure how this shows the equality is true for all $b$, $$\int_{0}^{a}f(t)dt-\int_{b}^{b+a}f(t)dt=0$$ $$=F(a)-F(0)-F(b+a)-F(b)$$ $$=(F(b+a)-F(a))-F(b)$$ $$=\int_{a}^{b+a}f(t)dt-\int_{0}^{b+a}f(t)dt=0$$ So, this leaves me with $$\int_{a}^{b+a}f(t)dt-\int_{0}^{b+a}f(t)dt=\int_{0}^{a}f(t)dt-\int_{b}^{b+a}f(t)dt$$ I feel I am close, and I've made myself a diagram of a sine function to visualize what each of the above integrals might describe, but the power to explain the above equality evades me.
You have made various false steps in your four line block and should have ended up with $$\int_{a}^{b+a}f(t)dt-\int_{0}^{b}f(t)dt=0$$ but this does not take you much further forward. Instead note that somewhere in the interval $[b, b+a]$ is an integer multiple of $a$, say $na$. Then using $f(t)=f(t+a)=f(t+na)$: $$\int_{b}^{b+a}f(t)dt = \int_{b}^{na}f(t)dt+\int_{na}^{b+a}f(t)dt = \int_{b+a}^{(n+1)a}f(t)dt+\int_{na}^{b+a}f(t)dt = \int_{na}^{(n+1)a}f(t)dt = \int_{0}^{a}f(t)dt.$$
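A quick numerical sanity check of the claim (a sketch using composite Simpson's rule and $f(t)=\sin^2 t$, which has period $a=\pi$): the integral over any interval of length $a$ is the same.

```python
import math

def integrate(f, a, b, n=10_000):
    """Composite Simpson's rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

f = lambda t: math.sin(t) ** 2          # period a = pi
a = math.pi
base = integrate(f, 0, a)               # equals pi/2
for b in [-2.7, 0.4, 1.0, 10.0]:
    assert abs(integrate(f, b, b + a) - base) < 1e-9
print("integral over any period:", base)
```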
{ "language": "en", "url": "https://math.stackexchange.com/questions/98409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 5, "answer_id": 0 }
Estimate probabilities from moments I want to estimate the probability $Pr(X \leq a)$, where $X$ is a continuous random variable and $a$ is given, based only on some moments of $X$ (e.g., the first four moments, but without knowing its distribution type).
The entire sequence of moments of a random variable $m_k = \mathbb{E}(X^k)$ determines the distribution function of $X$ uniquely, provided that $\sum_{k=0}^\infty \frac{m_k}{k!} t^k$ converges for all $t$ in an open neighborhood of $t=0$. See this. If you have two such sequences, which coincide up to order $r$ but differ afterwards, these sequences correspond to different distributions. You may, however, ask to approximate the distribution function $F_X(x) = \mathbb{P}(X \leq x)$ given the values of the low order moments, if some assumptions on the nature of the distribution are made. See method of moments estimation, for example. Knowledge of moments determines an upper bound on the tail of the distribution function. See the Chernoff bound and the Chebyshev inequality. You may also find the Pearson distribution, determined by the first 4 moments, useful.
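As a concrete sketch of the tail-bound point: for a nonnegative $X$, Markov's inequality applied to $X^k$ gives $\mathbb{P}(X \ge a) \le \mathbb{E}(X^k)/a^k$ for every $k$, so one can minimize over whichever moments are available. Trying this on $X \sim \mathrm{Exp}(1)$, where $\mathbb{E}(X^k)=k!$ and the exact tail is $e^{-a}$:

```python
import math

def moment_tail_bound(moments, a):
    """Best Markov-type bound P(X >= a) <= min_k E[X^k] / a^k
    for a nonnegative random variable, given moments[k] = E[X^k]."""
    return min(m / a ** k for k, m in enumerate(moments) if k >= 1)

moments = [math.factorial(k) for k in range(9)]   # E[X^k] = k! for Exp(1)
for a in [2.0, 4.0, 6.0]:
    bound = moment_tail_bound(moments, a)
    exact = math.exp(-a)
    assert exact <= bound                          # the bound is always valid...
    print(a, round(exact, 5), round(bound, 5))
```

The printed gap between `exact` and `bound` illustrates the first point above: finitely many moments bound the tail but cannot pin it down exactly.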
{ "language": "en", "url": "https://math.stackexchange.com/questions/98460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
Equation of a sphere as the determinant of its variables and sampled points Searching for an equation to find the center of a sphere given 4 points, one finds that taking the determinant of the four (non-coplanar) points together with the variables $x$, $y$, and $z$ arranged like so: $\left|\begin{array}{ccccc} x^2+y^2+z^2 & x & y&z&1\\ x_1^2 + y_1^2 + z_1^2 & x_1 & y_1 & z_1 & 1\\ x_2^2 + y_2^2 + z_2^2 & x_2 & y_2 & z_2 & 1\\ x_3^2 + y_3^2 + z_3^2 & x_3 & y_3 & z_3 & 1\\ x_4^2 + y_4^2 + z_4^2 & x_4 & y_4 & z_4 & 1\\ \end{array}\right| = 0$ yields the equation for the sphere. Then one need only re-arrange terms into the more familiar form to find the center and radius. This works fine. My question is why. This same approach also works for one or two dimensions. I'm guessing it also works for finding hyperspheres in higher-dimensional spaces as long as you have a corresponding number of points. But where did that determinant form come from? Is there an intuitive meaning for what that relationship is saying?
The equation of a sphere of radius $r$ centred at $(u,v,w)$ is $(x - u)^2 + (y-v)^2 + (z-w)^2 = r^2$, or $x^2 + y^2 + z^2 + U x + V y + W z + C = 0$ where $C = u^2 + v^2 + w^2 - r^2$, $U = -2u$, $V=-2v$, $W=-2w$. If $A$ is the matrix you're taking the determinant of, to have this equation satisfied for the four given points and an arbitrary point $(x,y,z)$ on the sphere means $$A \pmatrix{1\cr U\cr V\cr W\cr C\cr} = \pmatrix{0\cr 0\cr 0\cr 0\cr0\cr}$$ But for that to happen, $\det A$ must be $0$. Conversely, if $\det A = 0$, there exist $T,U,V,W,C$ not all $0$ for which $$A \pmatrix{T\cr U\cr V\cr W\cr C\cr} = \pmatrix{0\cr 0\cr 0\cr 0\cr0\cr}$$ If $T \ne 0$ we may divide by it and get an equation of a sphere going through the points. If $T = 0$ we have a degenerate case where the "sphere" is actually a plane. Yes, this generalizes to any number of dimensions.
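In practice one need not expand the $5\times 5$ determinant symbolically; a sketch of the equivalent computation solves the $4\times 4$ linear system for $U, V, W, C$ directly (plain Python, no libraries assumed):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sphere_through(points):
    """Center/radius of the sphere x^2+y^2+z^2+Ux+Vy+Wz+C=0 through 4 points."""
    A = [[x, y, z, 1.0] for x, y, z in points]
    b = [-(x * x + y * y + z * z) for x, y, z in points]
    U, V, W, C = solve(A, b)
    center = (-U / 2, -V / 2, -W / 2)
    radius = (sum(c * c for c in center) - C) ** 0.5
    return center, radius

# four points on the unit sphere centered at (1, 2, 3)
center, radius = sphere_through([(2, 2, 3), (0, 2, 3), (1, 3, 3), (1, 2, 4)])
print(center, radius)
```

The degenerate case described in the answer ($T = 0$, four coplanar points) shows up here as a singular system.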
{ "language": "en", "url": "https://math.stackexchange.com/questions/98530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
The limiting ratio of the angle and distance between two vectors Consider two unit vectors $u, v$ and name the angle between them as $\theta$. My claim is that \[ \lim_{\theta \to 0} \frac{\theta}{|u - v|} = 1. \]
Let us denote : $|u-v| = p$ . According to the Cosine Law we can write : $p^2=|u|^2+|v|^2-2\cdot |u|\cdot|v|\cdot \cos \theta \Rightarrow$ $\Rightarrow p^2=2 \cdot(1- \cos \theta)=2\cdot 2 \cdot \sin^2 {\frac{\theta}{2}}=4 \cdot \sin^2 {\frac{\theta}{2}} \Rightarrow$ $ \Rightarrow p=2\cdot \sin {\frac{\theta}{2}}$ So we have that : $\displaystyle \lim_{\theta \to 0} \frac{\theta}{|u-v|}=\displaystyle \lim_{\theta \to 0} \frac{\theta}{2\cdot \sin {\frac{\theta}{2}}}=\displaystyle \lim_{\theta \to 0} \frac{\frac{\theta}{2}}{ \sin {\frac{\theta}{2}}}=\left(\displaystyle \lim_{\theta \to 0} \frac{ \sin {\frac{\theta}{2}}}{\frac{\theta}{2}}\right)^{-1}=1^{-1}=1$
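A quick numerical check of the identity $|u-v| = 2\sin\frac{\theta}{2}$ and of the limit:

```python
import math

for theta in [1.0, 0.1, 0.01, 0.001]:
    u = (1.0, 0.0)
    v = (math.cos(theta), math.sin(theta))          # unit vectors at angle theta
    p = math.hypot(u[0] - v[0], u[1] - v[1])
    assert abs(p - 2 * math.sin(theta / 2)) < 1e-12  # chord length identity
    print(theta, theta / p)                          # ratio tends to 1
```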
{ "language": "en", "url": "https://math.stackexchange.com/questions/98724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving a function is not uniformly continuous I have a homework question which is: If $f(x)$ is differentiable in $(0,\infty)$ and $f'(x)>\frac{1}{x}$ for every $x>0$, then $f$ is not uniformly continuous in $(0,\infty)$. The question has a hint which asks me to prove that for $0<a<b$, $f(b)-f(a)>\frac{b-a}{b}$, which I have also not managed to prove. But even if I did, I can't see how to use that fact to solve the question. Can someone help me out? Thanks a lot :)
We'll first prove the hint. Let $0 <a<b$. For this, we use the mean value theorem in $[a,b]$. Why does $f$ satisfy the hypothesis of MVT on $[a,b]$? As $f$ is differentiable on $(0,\infty)$, $f$ is continuous on $[a,b]$. And, $f$ is, in fact, differentiable on $[a,b]$, if you define differentiability at endpoints using one-sided derivatives. So, there exists $c \in (a,b)$ such that $$\dfrac{f(b)-f(a)}{b-a}=f'(c) >\dfrac{1}{c}>\dfrac{1}{b}$$ Now, noting that $b-a>0$, we can take $b-a$ to the other side, arriving at $$f(b)-f(a)>\dfrac{b-a}{b}$$ This proves your hint. Now, to prove that $f$ is not uniformly continuous, we actually prove the contrapositive of the definition: Contrapositive of uniform continuity $f$ is NOT uniformly continuous if there exists $\epsilon>0$ such that for each $\delta >0$ there exist $x,y$ such that $|x-y|<\delta$ but $|f(x)-f(y)|\geq \epsilon$. So, how do we go about this? Set $b=2a$; note that this is consistent with our assumption on $a$ and $b$ (i.e. $a < b$). By the hint, $$f(2a)-f(a)>\dfrac{2a-a}{2a}=\dfrac{1}{2}$$ So take $\epsilon = \dfrac{1}{2}$: given any $\delta>0$, choose $a \in (0,\delta)$ and set $x=a$, $y=2a$. Then $|x-y|=a<\delta$, but $|f(x)-f(y)|>\dfrac{1}{2}=\epsilon$. So, we are through.
{ "language": "en", "url": "https://math.stackexchange.com/questions/98783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing $\mathbb{Q}$ does not have the L.U.B. property I am trying to prove in a different way than how it was already proved on this website (another question). So yes, this is sort of a duplicate. Claim: $\mathbb{Q}$ does not have the least upper bound property Let $S = \{r \in \mathbb{Q} : r > 0~~ \text {and} ~~r^2 < 2 \}$. Clearly $S$ is bounded above by 2 and $S$ is nonempty since $1 \in S$. To prove S has no least upper bound it suffices to prove that if $t$ is an upperbound for $S$ then there exists a upper bound $t' \in \mathbb Q$ such that $t' < t$. It can then be shown that for $n \in \mathbb{N}$ is sufficiently large then $$\left(t-\frac{1}{n} \right)^2 > 2$$ given that $t^2 > 2$ But I get stuck here. How would I go about proving this is true? any ideas would help. I appreciate it. Thankyou
If you substitute 4 for 2 in your proof, then the conclusion will be false: $\{r \in \mathbb{Q} : r > 0$ and $r^2 < 4 \}$ does have a least upper bound in $\mathbb Q$ (because 4 is a perfect square). So somewhere you have to use the fact that if $r \in \mathbb Q$, then $r^2 \ne 2$. So your proof should have two parts: * *Show that if $r \in \mathbb Q$, then $r^2 \ne 2$. *Show that if $r \in \mathbb Q$ and $r^2 > 2$, then $\exists t \in \mathbb Q$ such that $2 < t^2 < r^2$. If you can prove these (can you?), then the result follows: any upper bound $r$ will satisfy $r^2 > 2$ (because of 1), so it can't be a least upper bound (because of 2). Edited to reply to comment: OK, if $r^2 > 2$, then $r^2 = 2 + \epsilon$ for some rational $\epsilon > 0$. Now look at $t = r - \epsilon/2r$.
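The algebra in that final hint can be checked with exact rational arithmetic, and iterating it exhibits rational upper bounds of $S$ decreasing without a least one (a sketch using Python's `fractions`):

```python
from fractions import Fraction

def better_bound(r):
    """Given rational r > 0 with r^2 > 2, return rational t with 2 < t^2 < r^2."""
    eps = r * r - 2
    return r - eps / (2 * r)

r = Fraction(3, 2)                 # an upper bound: (3/2)^2 = 9/4 > 2
for _ in range(5):
    t = better_bound(r)
    assert 2 < t * t < r * r       # strictly smaller upper bound, still > sqrt(2)
    r = t
print(r, float(r))                 # rationals decreasing toward sqrt(2)
```

The assertion holds exactly because $t^2 = r^2 - \epsilon + \frac{\epsilon^2}{4r^2} = 2 + \frac{\epsilon^2}{4r^2}$, which sits strictly between $2$ and $r^2$.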
{ "language": "en", "url": "https://math.stackexchange.com/questions/98845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Complex Analysis: Correspondence between $H(\Omega_1)$ and $H(\Omega_2)$. If we let $$\Omega_1=\{z \in \mathbb{C} : 0<\operatorname{Im}(z)< \pi\}, \quad \Omega_2=\{z \in \mathbb{C}: 0<\operatorname{Im}(z)\},$$ can we establish a one-to-one correspondence between $H(\Omega_1)$ and $H(\Omega_2)$ where $H(\Omega)$ represents the set of real-valued harmonic functions on $\Omega \subset \mathbb{C}$? Thoughts: Can we appeal to simple conformal mapping and say that, if we can establish a conformal map between $\Omega_1$ and $\Omega_2$ then the correspondence exists as the harmonic nature of the functions is preserved under such a map i.e. the composition of a holomorphic and harmonic map is harmonic? If so, what could this conformal map be? If we apply $z \mapsto e^z$ initially, this transforms the 'strip' to a 'wedge', but how do we advance from here? If such a method is incorrect, how else can we demonstrate the existence of the correspondence? Any help would be greatly appreciated. Best, MM.
The Riemann Mapping theorem says there is a biholomorphic (hence conformal) map between these two simply connected domains, with a holomorphic inverse. This does the job.
{ "language": "en", "url": "https://math.stackexchange.com/questions/98910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lower bound on number of vertices given girth and chromatic number Is there a general way to define a lower bound on $|V(G)|$ given the girth $g(G)=g$ and chromatic number $\chi(G) = k$? I heard there is a result, telling that $|V(G)| \geq k^{\frac{g}{2}}$, but I can't find it. Thanks in advance.
Some such lower bound may exist, but the inequality $\chi(G)^{g(G)} \leq n$ is very far from being true in general. For one example, let $n \geq 3$ and let $G = K_n$. Then $\chi(G) = n$ and $g(G) = 3$, but $n^{3/2} > n$. For another, let $n \geq 3$ and let $G = C_{2n}$. Then $\chi(G) = 2$ and $g(G) = 2n$, but $2^n > 2n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/98978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
k×k grid has tree-width at least k I am looking for ideas how to solve the problem from Diestel's textbook Graph Theory. Chapter 12. Minors, Trees, and WQO. Problem 16. Apply Theorem 12.3.9 to show that the $k \times k$ grid has tree-width at least $k$, and find a tree-decomposition of width exactly $k$. Theorem 12.3.9. (Seymour & Thomas 1993). Let $k \geq 0$ be an integer. A graph has tree-width $\geq k$ if and only if it contains a bramble of order $> k$. (A bramble is a set of mutually touching connected vertex sets in $G$; its order is the minimum number of vertices in a set meeting each member of the bramble.) The problem is the proof of the theorem 12.3.9 was given in the terms of bramble, which is a bit confusing, at present I don't really see the way to solve the problem by using this theorem. If you familiar with the topic, please, help me out. Addendum: In Graphs & Algorithms: Advanced Topics on the slide 5. The $n\times n$ - grid on $\left \{(i,k) | 1 \leq i, j \leq n\right \}$ has treewidth $\leq n$: Consider the path on $X_{n(i-1)+j}=\left \{(i,k)|j\leq k\leq n\right \}\cup\left \{(i+1,k)|1\leq k\leq j\right \}, 1\leq i\leq n-1, 1\leq j\leq n$ How this is supposed to help me?
I didn't read Diestel's book, but a simple observation that may help is: if $G$ is a grid then we know $\operatorname{tw}(G) \ge BN(G)-1$, and if you take the sets $row_i \cup col_j$ as a bramble, any hitting set has size at least $n$, so the treewidth is at least $n-1$. Also, you can easily construct a tree decomposition of width $n$, so the treewidth is $n$ or $n-1$. (By $BN(G)$ I mean the bramble number of $G$.) Edit: As Braian mentioned, you can simply find good brambles, and a sample tree decomposition is given in your question. (My mistake was that I was thinking about $cross_{i,i}$, not $cross_{i,j}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/99037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How does one interpret the meaning of a stochastic derivative? My understanding of derivatives is in the difference quotient limit sense... How does one interpret the meaning of a stochastic derivative? How can one possibly differentiate a random variable? What is its physical meaning, if it has any?
As pointed out in the comments, there is some context missing in your question, so I'll just guess to fill it in: Let's talk about one-dimensional Brownian motion, which is a stochastic process: It is a family of random variables indexed by a continuous parameter, which is usually called "time" and is written as $B_t$. Another point of view is that Brownian motion is a probability measure on a suitable set of functions. Since it can be shown that Brownian motion has continuous sample paths with probability one, we can think of it as probability measure on the set $C[0, T]$, the set of of continuous functions $$ f: [0, T] \to \mathbb{R} $$ In addition, one can prove that Brownian motion has with probability one sample paths that are not differentiable and not even of bounded variation. This means it is not possible to define a Riemann-Stieltjes integral with respect to the sample paths. This is why one needs to develop a new concept of an integral with respect to Brownian motion, for example the Ito or the Stratonovich integral. It is possible to give precise meaning to the expressions like this one: $$ X_T = \int_0^T f(t, x) d B_t $$ and prove (with appropriate assumptions for $f$) that there is a unique stochastic process $X_T$ satisfying this relation. These integral equations are usually abbreviated, with an abuse of notation, as $$ d X_t = f \; d B_t $$ but one has to keep in mind that the symbol $d B_t$ is actually undefined. Only the integral with respect to Brownian motion is defined in the Ito- or the Stratonovich calculus. This means that there is no "stochastic derivative", and that the notion of "velocity" is undefined for Brownian motion. There just is no room for the interpretation of a "velocity" in physical terms in the theory.
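A quick simulation sketch of the non-differentiability: increments $B_{t+h}-B_t$ are distributed $N(0,h)$, so the root-mean-square difference quotient is $h^{-1/2}$, which diverges as $h \to 0$ instead of settling down to a "velocity":

```python
import random, math

def rms_difference_quotient(h, n=20_000, seed=1):
    """RMS of (B_{t+h} - B_t)/h over n simulated increments; ~ h**-0.5."""
    rng = random.Random(seed)
    mean_sq = sum(rng.gauss(0.0, math.sqrt(h)) ** 2 for _ in range(n)) / n
    return math.sqrt(mean_sq) / h

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, rms_difference_quotient(h), h ** -0.5)   # last two columns agree
```

This is only a statistical illustration of the scaling, not a proof; the almost-sure non-differentiability is the theorem cited above.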
{ "language": "en", "url": "https://math.stackexchange.com/questions/99184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
representing arcsinh as a logarithm I am trying to understand this equality: $$ \ln{\left|\frac{x}{2}+\sqrt{\frac{x^2}{4}+1}\right|} + C= \ln{|x+\sqrt{x^2+4}|} + C'$$ My teacher didn't really explain it, she just noted that "the difference between the two statements is a constant (This equality is an answer for an integral so she just changed $C$ to $C'$). Can anyone please explain it? Thanks!
$$\ln{\left|\frac{x}{2}+\sqrt{\frac{x^2}{4}+1}\right|} + C=\ln{\left|\frac{x}{2}+\sqrt{\frac{x^2+4}{4}}\right|}+C$$ $$=\ln{\left|\frac{x+\sqrt{x^2+4}}{2}\right|}+C$$ $$= \ln{|x+\sqrt{x^2+4}|} -\ln2+ C$$ $$= \ln{|x+\sqrt{x^2+4}|}+ C'.$$
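A numeric sanity check, which also shows where the title comes from: both sides equal $\operatorname{arcsinh}(x/2)$, since $\operatorname{arcsinh}(y)=\ln(y+\sqrt{y^2+1})$.

```python
import math

for x in [0.1, 1.0, 2.5, 10.0]:
    lhs = math.log(x / 2 + math.sqrt(x * x / 4 + 1))
    rhs = math.log(x + math.sqrt(x * x + 4)) - math.log(2)
    assert abs(lhs - rhs) < 1e-12                  # the two antiderivatives differ by ln 2
    assert abs(lhs - math.asinh(x / 2)) < 1e-12    # and both equal arcsinh(x/2)
print("identity checked")
```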
{ "language": "en", "url": "https://math.stackexchange.com/questions/99249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Physics undecidable problem in ZFC. Is there a physical problem that is undecidable in Zermelo-Fraenkel-Choice set theory? Something related with free abelian groups and Whitehead problem perhaps?
There are numerous questions about the nature of the solutions to specific differential equations that are computationally undecidable, and which therefore admit numerous specific instances whose solution has a nature independent of ZFC or of any other fixed consistent theory. For example, in the paper Boundedness of the domain of definition is undecidable for polynomial ODEs, the authors Graca, Buescu and Campagnolo prove that the question of whether the differential equation $\frac{dx}{dt}=p(t,x)$ with initial condition $x(t_0)=x_0$, where $p$ is a vector of polynomials, has a solution with unbounded domain or not, is computationally undecidable. My point is that whenever a problem like this is computationally undecidable, then it follows that infinitely many specific instances of it are also provably undecidable in any fixed consistent true theory, such as PA or ZFC (or ZFC + large cardinals). The reason is that if a consistent true theory were able to settle all but finitely many instances of the question, then the original problem would be decidable by the algorithm that simply searched for proofs. One can write down a very specific polynomial ODE, such that one cannot prove or refute in ZFC whether it has an unbounded solution or not. I think there are many other similar examples. I recall hearing in my graduate student days about similar examples, such as the fact that the question of whether a given dynamical system is chaotic or not is also undecidable in general. Therefore these other questions also admit numerous instances of ZFC independence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/99307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Maximum triangle area I have a small problem. Consider I have a triangle. What maximum area can it cover if two of its medians are 3 and 8? I think I'll need to use a derivative here, but first I need to find a function for the area it covers. I actually tried to use some sorts of formulas but didn't succeed. Could anyone give me a hint at least? Thanks
If the lengths of two medians of a triangle are $m_1$ and $m_2$ and the angle formed by these two medians is $\theta$, then the area of the triangle is $$K_\triangle=\frac{2}{3}m_1m_2\sin\theta.$$ Since the maximum value of $\sin\theta$ is $1$, the maximum area of your triangle is $\frac{2}{3}\cdot3\cdot8\cdot1=16$. edit The formula above is probably not obvious. Suppose we have $\triangle ADE$ with $B$ and $C$ being the midpoints of $\overline{AE}$ and $\overline{AD}$, respectively (more because that's what I happened to draw than anything else). The area of any quadrilateral with diagonals $d_1$ and $d_2$ and angle between them $\theta$ is $\frac{1}{2}d_1d_2\sin\theta$ (to derive this, the diagonals split the quadrilateral into 4 triangles, each with sides that are parts of the diagonals and included angles $\theta$ or $\pi-\theta$, the area of a triangle with sides $x$ and $y$ and included angle $\phi$ is $\frac{1}{2}xy\sin\phi$, and do some algebra). This gives the area of quadrilateral (trapezoid) $BCDE$ as $\frac{1}{2}m_1m_2\sin\theta$. Now, $\triangle ABC$ is a dilation image of $\triangle AED$ by a factor of $\frac{1}{2}$ centered at $A$ (because of the midpoints, etc.), so it has $\frac{1}{4}$ of the area of the larger triangle. That is, $K_{\triangle ABC}=\frac{1}{4}K_{\triangle ADE}$ and $$K_{\text{quad }BCDE}=\frac{3}{4}K_{\triangle ADE},$$ so $$K_{\triangle ADE}=\frac{4}{3}\frac{1}{2}m_1m_2\sin\theta=\frac{2}{3}m_1m_2\sin\theta.$$ edit 2 (The answer originally included a picture of a triangle with medians with lengths in the ratio $8:3$ that are perpendicular.)
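The identity $K_\triangle = \frac{2}{3}m_1m_2\sin\theta$ is also easy to confirm numerically on random triangles, using a 2D cross product for $m_1m_2\sin\theta$ (a sketch):

```python
import random

def median_area_identity(ax, ay, bx, by, cx, cy):
    """Check area = (2/3) * m1 * m2 * sin(theta) for the medians from A and B."""
    # medians from A and B (vertex to midpoint of the opposite side)
    m1 = ((bx + cx) / 2 - ax, (by + cy) / 2 - ay)
    m2 = ((ax + cx) / 2 - bx, (ay + cy) / 2 - by)
    cross_m = m1[0] * m2[1] - m1[1] * m2[0]          # = |m1||m2|sin(theta)
    area = abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) / 2
    return abs(area - (2 / 3) * abs(cross_m)) < 1e-9

rng = random.Random(0)
for _ in range(100):
    pts = [rng.uniform(-10, 10) for _ in range(6)]
    assert median_area_identity(*pts)
print("area = (2/3) m1 m2 sin(theta) confirmed numerically")
```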
{ "language": "en", "url": "https://math.stackexchange.com/questions/99376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Mathematics understood through poems? Along with Diophantus mathematics has been represented in form of poems often times. Bhaskara II composes in Lilavati: Whilst making love a necklace broke. A row of pearls mislaid. One sixth fell to the floor. One fifth upon the bed. The young woman saved one third of them. One tenth were caught by her lover. If six pearls remained upon the string How many pearls were there altogether? Or to cite from modern examples: Poetry inspired by mathematics which includes Tom Apostol's Where are the zeros of Zeta of s? to be sung to to the tune of "Sweet Betsy from Pike". Or Tom Lehrer's derivative poem here. Thus my motivation is to compile here a collection of poems that explain relatively obscure concepts. Rap culture welcome but only if it includes homological algebra or similar theory. (Please let us not degenerate it to memes...). Let us restrict it to only one poem by answer so as to others can vote on the richness of the concept.
To the tune of "The Barney Song/I Love You, You Love Me" (for an earlier generation, "This Old Man"): vee dee-yoo (plus) yoo dee-vee ... That's the "dee" of "yoo-times-vee". So, remember when the Product Rule you do: DON'T you say, "dee-vee dee-you"! See here. (c) Copyright, me!
{ "language": "en", "url": "https://math.stackexchange.com/questions/99406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 19, "answer_id": 14 }
What was done in this equation involving the fundamental theorem of calculus? All I know is that it uses the fundamental theorem of calculus. $$\large\frac{d}{dx}\int_{x^2}^{\sin x} e^{xt^2}dt = e^{x\;\sin^2 x}\cos x - e^{x^5}2x+\int_{x^2}^{\sin x} t^2e^{xt^2}dt$$
This is more an application of differentiation under the integral sign, which is a generalization of the fundamental theorem (and can be proved using it). $$ \frac{d}{dx}\,\int_{a(x)}^{b(x)}f(x,t)\,dt = f(x,b(x))\,b'(x) - f(x,a(x))\,a'(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x}\, f(x,t)\; dt $$ In this case, $a(x) = x^2$, $b(x) = \sin(x)$, and $f(x, t) = e^{x t^2}$. $$ a'(x) = 2x $$ $$ b'(x) = \cos(x) $$ $$ \frac{\partial}{\partial x} f(x, t) = t^2 e^{x t^2} $$ So $$ \large\frac{d}{dx}\int_{x^2}^{\sin x} e^{xt^2}dt = e^{x\;\sin^2 x}\cos x - 2xe^{x^5}+\int_{x^2}^{\sin x} t^2e^{xt^2}dt $$
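The rule can be checked numerically: compare a central-difference derivative of $F(x)=\int_{x^2}^{\sin x} e^{xt^2}\,dt$ against the right-hand side, with Simpson's rule for the integrals (a sketch):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def F(x):                                    # F(x) = int_{x^2}^{sin x} e^{x t^2} dt
    return simpson(lambda t: math.exp(x * t * t), x * x, math.sin(x))

def rhs(x):                                  # the Leibniz-rule expression above
    return (math.exp(x * math.sin(x) ** 2) * math.cos(x)
            - 2 * x * math.exp(x ** 5)
            + simpson(lambda t: t * t * math.exp(x * t * t), x * x, math.sin(x)))

x, h = 0.8, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)    # central-difference derivative
assert abs(numeric - rhs(x)) < 1e-5
print(numeric, rhs(x))
```

All three terms of the rule are exercised here, since both limits and the integrand depend on $x$.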
{ "language": "en", "url": "https://math.stackexchange.com/questions/99472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to interpret this vector multiplication? I have to solve the following problem (exact wording): Given is a Matrix $A_{m \times n} \in \mathbb{R}^{m \times n}$ and $r = \mathrm{rank}(A)$. The goal is to find the vectors $V=\{v_1,v_2,\ldots,v_n\}$ of the orthogonal basis of $\mathrm{im}(A)$ by using the Gram Schmidt algorithm. May $a_1,\ldots,a_n \in \mathbb{R}^m$ be the column vectors of A. We define $u_1,\dots,u_n \in \mathbb{R}^m$ by means of the Gram Schmidt algorithm, $$ u_r = a_r - \sum_{s=1}^{r\;-1}\;\mathrm{proj}_{u_s}(a_r)\quad \text{for r = 1,...,n} $$ where $$ \mathrm{proj}_{u_s}(a_r) = \begin{cases} \frac{u_s\cdot a_r}{u_s\cdot u_s}u_s & \text{for}~u_s \neq 0\\ \phantom| \\ \qquad0 & \text{for}~u_s=0 \end{cases} $$ a) What is the value of $u_r$, if $a_r\in \mathrm{span}(a_1,\dots,a_{r\,-1})$ with $r \leq n$? How can $\{v_1,\ldots,v_k\}$ be expressed in terms of $\{u_1,\ldots,u_n\}$? What I don't understand is how $\mathrm{proj}_{u_s}(a_r)$ is defined, as far as I understand $u_s$ and $a_r$ are column vectors of size $m$, thus how can they be multiplied with one another? My hunch is that $u_r$ is zero if $a_r \in \mathrm{span}(a_1,\dots,a_{r\,-1})$ as $a_r$ is not linearly independent in this case and can thus be expressed as sum of $(u_1,\dots,u_{r\,-1})$.
The vector product being described here is the dot product, which is an important way to take a product of vectors. For column vectors, the dot product can be computed as $$ \mathbf a \cdot \mathbf b = \mathbf a^{\mathsf T} \mathbf b = a_1 b_1 + a_2 b_2 + \cdots + a_m b_m\;.$$ It is possible to show that $\mathbf a \cdot \mathbf b = \| \mathbf a \| \| \mathbf b \| \cos(\theta) $, where $ \| \mathbf v \|$ is the length of a vector $\mathbf v$ (e.g. the distance of the point $\mathbf v$ from the origin in Euclidean space), and $\theta$ is the angle between the two vectors $\mathbf a$ and $\mathbf b$ (i.e. between the line segment connecting the origin to the point $\mathbf a$, and the similar segment for $\mathbf b$). As a result, for any vector $\mathbf a$ and any unit vector $\mathbf u$, the vector $(\mathbf a \cdot \mathbf u) \; \mathbf u$ represents the projection of $\mathbf a$ onto the line passing through the origin and pointing in the direction of $\mathbf u$. This property is implicitly the basis on which the Gram-Schmidt process operates.
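As a small illustration (my own addition, not part of the answer), the projection formula and one Gram-Schmidt step take only a few lines of Python:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def proj(u, a):
    # projection of a onto u, with the zero-vector convention from the question
    d = dot(u, u)
    if d == 0:
        return [0.0] * len(a)
    c = dot(u, a) / d
    return [c * x for x in u]

# one Gram-Schmidt step: u2 = a2 - proj_{u1}(a2)
a1 = [1.0, 1.0, 0.0]
a2 = [1.0, 0.0, 1.0]
u1 = a1
u2 = [x - y for x, y in zip(a2, proj(u1, a2))]
orth = dot(u1, u2)  # should be 0: u1 and u2 are orthogonal
```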
{ "language": "en", "url": "https://math.stackexchange.com/questions/99524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Essays on the real line? Are there any essays on real numbers (in general?). Specifically I want to learn more about: * *The history of (the system of) numbers; *their philosophical significance through history; *any good essays on their use in physics and the problems of modeling a 'physical' line. Cheers. I left this vague as google only supplied Dedekind theory of numbers which was quite interesting but not really what I was hoping for.
I'm not sure what you're looking for but try these books: * *Number Systems and the Foundations of Analysis by Mendelson *The Number System by Thurston *The Structure of Number Systems by Parker
{ "language": "en", "url": "https://math.stackexchange.com/questions/99659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
The square of an integer is congruent to 0 or 1 mod 4 This is a question from the free Harvard online abstract algebra lectures. I'm posting my solutions here to get some feedback on them. For a fuller explanation, see this post. This problem is from assignment 6. The notes from this lecture can be found here. a) Prove that the square $a^2$ of an integer $a$ is congruent to 0 or 1 modulo 4. b) What are the possible values of $a^2$ modulo 8? a) Let $a$ be an integer. Then $a=4q+r, 0\leq r<4$ with $\bar{a}=\bar{r}$. Then we have $a^2=a\cdot a=(4q+r)^2=16q^2+8qr+r^2=4(4q^2+2qr)+r^2, 0\leq r^2<4$ with $\bar{a^2}=\bar{r^2}$. So then the possible values for $r$ with $r^2<4$ are 0,1. Then $\bar{a^2}=\bar{0}$ or $\bar{1}$. b) Let $a$ be an integer. Then $a=8q+r, 0\leq r<8$ with $\bar{a}=\bar{r}$. Then we have $a^2=a\cdot a=(8q+r)^2=64q^2+16qr+r^2=8(8q^2+2qr)+r^2, 0\leq r^2<8$ with $\bar{a^2}=\bar{r^2}$. So then the possible values for $r$ with $r^2<8$ are 0,1,and 2. Then $\bar{a^2}=\bar{0}$, $\bar{1}$ or $\bar{4}$. Again, I welcome any critique of my reasoning and/or my style as well as alternative solutions to the problem. Thanks.
Hint $ $ Division $\rm\Rightarrow n = 4q\! +\! r,\ r\in \{0\ 1\ 2\ 3\}\Rightarrow n^2 = (4q\!+\!r)^2 =$ $\rm\: 4(4q^2\!+\!2qr)\!+\!r^2 = $ $\rm\:4Q\! +\! \color{#c00}{r^2}.\,$ For the remainder of $\rm\,r^2\,$ note: $\,\rm r\in \{0\ 1\ 2\ 3\}$ $\rm\Rightarrow\, \color{#c00}{r^2 = \,4 \bar q + \bar r},\ \bar r\in \{0\ 1\},\,$ so $\rm \, n^2 = 4Q\!+\!\color{#c00}{4\bar q + \bar r}$ It's simpler in modular language $\rm\ mod\ 4\!:\ n\equiv r\ \Rightarrow\ n^2 \equiv r^2\equiv \{0\ 1\ 2\ 3\}^2\equiv \{0\ 1\ \color{#c00}4\ \color{#0a0}9\}\equiv \{\color{#c00}0\ \color{#0a0}1\}\, $ by applying Congruence Product Rule, or working in the quotient / residue ring $\rm\ \mathbb Z/4\, =\, \mathbb Z\ mod\ 4.$ Optimization: work is halved employing a balanced residue system, e.g. $\rm\: \pm \{0\ 1\ 2\ 3\ 4\}\ mod\ 8.\ $ Using that we quickly compute $\rm\ mod\ 8\!:\ odd^2\equiv \{ 1 \ 3\}^2 \equiv 1,\ \: even^2 \equiv \{0\ 2\ 4\}^2 \equiv \{0\ 4\}.$ Or we can halve the cases using $\ \rm n\equiv r\pmod{\!2m}\Rightarrow n^2\equiv r^2\pmod{\!4m}\,$ by here.
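A brute-force check of both congruence claims (my own addition):

```python
# squares of a range of integers, reduced mod 4 and mod 8;
# a full residue system would suffice, but a range is just as easy
squares_mod4 = {(n * n) % 4 for n in range(-100, 101)}
squares_mod8 = {(n * n) % 8 for n in range(-100, 101)}
```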
{ "language": "en", "url": "https://math.stackexchange.com/questions/99716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 1 }
Prove\Refute: for every norm in $\mathbb{R}^n: \left \| x \right \|\leq \max (\left \| x+y \right \|,\left \| x-y \right \|)$ I need to prove or refute that for every norm in $\mathbb{R}^n$:$ \left \| x \right \|\leq \max (\left \| x+y \right \|,\left \| x-y \right \|)$. It's been quite a while since I studied Linear Algebra 1. I tried to look for vectors $x$ and $y$ that would refute the claim, but I didn't find any, so I tried to prove the claim by writing out the explicit sum for each norm and going on from there, but that didn't work either. (sorry if the question is too easy or silly) Any help? Thank you very much!
A geometric way to think of this is to note that balls in normed spaces are convex, and $x$ lies on the line segment between $x+y$ and $x-y$. If $\|x\|$ were greater than $\|x-y\|$ and $\|x+y\|$, then $\{z:\|z\|<\|x\|\}$ would be an open ball that contains $x-y$ and $x+y$, but not their midpoint $x$, and therefore it would not be convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/99788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What is an honest basis? In a comment to this question, the commentator stated that "the monomials form an honest basis for your vector space". To be honest, I never heard of that. Is this something elementary?
This was not a term of art, but rather an attempt at emphasis. The monomials form a basis for the vector space in the usual sense (every vector is a linear combination of elements of the basis in a unique way), as opposed to, for instance, a Hilbert basis (whose linear span is not necessarily equal to the entire space). (Reminds me of something that happened when I was taking Measure Theory in my final undergraduate year; the professor had his own very good notes, with a set of exercises. One of the problems asked us to prove that a function that satisfied a certain property "is automatically continuous"; we couldn't figure out what the definition of "automatically continuous" was, and asked the professor the next lecture. Of course, he meant that such a function would necessarily be continuous...)
{ "language": "en", "url": "https://math.stackexchange.com/questions/99856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the minimum reading requirements for understanding Newton's Principia Mathematica Naturalis Philosophae? For a long time, the self-contained nature of Newton's Principia has intrigued me. At a glance, it looks as if Euclid's Elements would be the only required reading for understanding his arguments. But it's still pretty tough going. Are there any lesser-known works from his time or before his time (I'm not looking for something to explain him to me, I want to read him and understand his arguments from first principles, the way he wrote them) that might have been obvious points of reference for people at the time he published, that simply haven't survived the way Euclid has?
See Newton Revisited: An excursion in Euclidean geometry by Greg Markowsky http://arxiv.org/abs/0910.4807
{ "language": "en", "url": "https://math.stackexchange.com/questions/99910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
If $xy$ is a unit, are $x$ and $y$ units? I know if $x$ and $y$ are units, in say a commutative ring, then $xy$ is a unit with $(xy)^{-1}=y^{-1}x^{-1}$. But if $xy$ is a unit, does it necessarily follow that $x$ and $y$ are units?
NO if $R$ is not commutative Definition: Suppose $R$ is any ring with unity $1$. An element $u$ is said to be a unit in $R$ if there exists $v\in R$ such that $uv=1$ and $vu=1$. Let $\mathbb{R}[x]$ be the (infinite dimensional) vector space of all polynomials over $\mathbb{R}$. Let $S$ denote the ring of all linear operators on $\mathbb{R}[x]$ with the usual addition and composition of operators. Let $D\colon\mathbb{R}[x]\rightarrow\mathbb{R}[x]$ denote the differential operator: $D(p(x))=p'(x)$. Let $J\colon\mathbb{R}[x]\rightarrow \mathbb{R}[x]$ denote the integral operator: $J(a_0+a_1x+\cdots + a_nx^n)=a_0x+a_1\frac{x^2}{2}+\cdots + a_n\frac{x^{n+1}}{n+1}$. Then $D\circ J$ is the identity operator, which is obviously a unit. But $D$ is not a unit: it is not one-to-one (why?), hence it cannot have a two-sided inverse.
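To see the asymmetry concretely, one can represent a polynomial by its coefficient list and code the two operators (my own sketch, using exact rational arithmetic; not part of the original answer):

```python
from fractions import Fraction

def D(p):
    # derivative of p = [a0, a1, a2, ...], the coefficients of 1, x, x^2, ...
    # a constant polynomial maps to the zero polynomial [0]
    return [Fraction(k) * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def J(p):
    # antiderivative with zero constant term
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

p = [Fraction(3), Fraction(0), Fraction(5)]  # the polynomial 3 + 5x^2
dj = D(J(p))   # D o J recovers p exactly
jd = J(D(p))   # J o D kills the constant term: only 5x^2 survives
```

So $D\circ J$ is the identity on coefficient lists, while $J\circ D$ loses the constant term, exactly as the answer says.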
{ "language": "en", "url": "https://math.stackexchange.com/questions/99949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 2 }
Higher Dimensional Generalization of Helmholtz Theorem We know that given the divergence and curl of a vector field (and appropriate boundary conditions) it is possible to construct a unique vector field in $\mathbb R^3$. The specific problem I am thinking about is related to the PDE $$\operatorname{div} F = g,$$ where $F \colon \mathbb R^n \to \mathbb R^n$ is a vector field and $g \colon \mathbb R^n \to \mathbb R$ is a scalar field, and $\operatorname{div}$ is the $n$-dimensional generalization of the divergence given by $$\operatorname{div} F = \frac{dF_{i}}{dx_{i}}$$ (summation implied). What additional pieces of information are necessary to uniquely specify $F$ given the function $g$ (we know the answer is the curl of $F$ in 3D)?
I read that W. Hauser generalized Helmholtz's theorem to R⁴ by proving it for second-rank tensors [1,2], but unfortunately I could not find his original papers for download (to see the details). Try to find these references yourself; you may be luckier than me: [1] W. Hauser, "On the Fundamental Equations of Electromagnetism," Am. J. Physics, vol. 38, no. 1, pp. 80-85, 1970. [2] W. Hauser, Introduction to the Principles of Electromagnetism. Addison-Wesley Educational Publishers Inc, 1971.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
If $B \ (\supseteq A)$ is a finitely-generated $A$-module, then $B$ is integral over $A$. I'm going through a proof of the statement: Let $A$ and $B$ be commutative rings. If $A \subseteq B$ and $B$ is a finitely generated $A$-module, then all $b \in B$ are integral over $A$. Proof: Let $\{c_1, ... , c_n\} \subseteq B$ be a set of generators for $B$ as an $A$-module, i.e $B = \sum_{i=1}^n Ac_i$. Let $b \in B$ and write $bc_i = \sum_{j=1}^n a_{ij}c_j $ with $a_{ij} \in A$, which says that $(bI_n - (a_{ij}))c_j = 0 $ for $ 1 \leq j \leq n$. Then we must have that $\mathrm{det}(bI_n - (a_{ij})) = 0 $. This is a monic polynomial in $b$ of degree $n$. Why are we not done here? The proof goes on to say: Write $1 = \alpha_1 c_1 + ... + \alpha_n c_n$, with the $\alpha_i \in A$. Then $\mathrm{det}(bI_n - (a_{ij})) = \alpha_1 (\mathrm{det}...) c_1 + \alpha_2 (\mathrm{det}...) c_2 + ... + \alpha_n (\mathrm{det}...) c_n = 0$. Hence every $b \in B$ is integral over $A$. I understand what is being done here on a technical level, but I don't understand why it's being done. I'd appreciate a hint/explanation. Thanks
You prematurely write "Then we must have that $\mathrm{det}(bI_n - (a_{ij})) = 0$". At that stage you can only deduce (by multiplying by the adjoint of your matrix on the left) that all the $det\cdot c_i =0$. However writing $1 = \alpha_1 c_1 + ... + \alpha_n c_n$ and multiplying by $det$ you do get $$det=det\cdot 1= \alpha_1\cdot det\cdot c_1+...+\alpha_n\cdot det\cdot c_n=\alpha_1\cdot 0+...+\alpha_n\cdot 0=0$$ (This is a variation on the Cayley-Hamilton theorem, according to which the characteristic polynomial of a square matrix annihilates that matrix.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/100124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Derivation of pmf from convolution Suppose that a discrete random variable (with finite support) $Y$ is given by $Y = X_1 - X_2$, where $X_1$ and $X_2$ are both discrete random variables with finite support and with the same probability mass function. Is it possible to determine the pmf of $X_1$ from the pmf of $Y$?
I will consider the case when $X_1$ and $X_2$ are independent, identically distributed and integer-valued. The following properties hold: * *Since $Y$ is discrete with finite support, its probability generating function is a polynomial in $z$ and $z^{-1}$ with non-negative coefficients. $$ Z_Y(z) = \sum_{k=m(Y)}^{n(Y)} \mathbb{P}(Y=k) z^k $$ *Let $X$ be the purported solution with $Z_X(z) = \sum_{k=m(X)}^{n(X)} \mathbb{P}(X=k) z^k$. *Since $Y = X_1-X_2$, with $X_1$, $X_2$ i.i.d., $Z_Y(z) = Z_{X}(z) Z_{X}(z^{-1})$. The problem you are posing is whether, given $Z_Y(z)$, one can determine $Z_X(z)$ such that $Z_Y(z) = Z_X(z) Z_X(z^{-1})$. Observe that the solution, if it exists, is not unique, since $Z_{X^\prime}(z) = z^k Z_{X}(z)$ would also be a solution for arbitrary $k \in \mathbb{Z}$. A necessary condition for the existence of a solution is that $Z_Y(z)$ satisfy $Z_{Y}(z) = Z_Y(z^{-1})$. Assuming that the given $Z_Y(z)$ satisfies this property, finding $Z_X(z)$ reduces to polynomial factorization.
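As a small illustration of the symmetry condition (my own addition; the uniform distribution on $\{0,1,2\}$ is just an example):

```python
# pmf of X on support {0, 1, 2}
px = [1 / 3, 1 / 3, 1 / 3]
n = len(px)

# pmf of Y = X1 - X2 on support {-(n-1), ..., n-1}, by direct convolution
py = [0.0] * (2 * n - 1)
for i, a in enumerate(px):
    for j, b in enumerate(px):
        py[i - j + (n - 1)] += a * b

# the condition Z_Y(z) = Z_Y(1/z) says the pmf of Y is symmetric around 0
symmetric = all(abs(py[k] - py[len(py) - 1 - k]) < 1e-12 for k in range(len(py)))
```

Here `py` comes out as $(1/9, 2/9, 3/9, 2/9, 1/9)$ on $\{-2,\dots,2\}$, which is indeed symmetric.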
{ "language": "en", "url": "https://math.stackexchange.com/questions/100191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
problem with continuous functions $f,g:\mathbb{R}\longrightarrow\mathbb{R}$ f,g are continuous functions. $\forall q\in\mathbb{Q}$ $f\left(q\right)\leq g\left(q\right)$ I need to prove that $\forall x\in\mathbb{R}$ $f\left(x\right)\leq g\left(x\right)$
Pick $x\in\mathbb{R}\setminus\mathbb{Q}$. Suppose $f(x)>g(x)$. Let $\varepsilon>0$ be less than the difference between $f(x)$ and $g(x)$. Then there exists $\delta>0$ such that if $w$ differs from $x$ by less than $\delta$, then $f(w)-g(w)$ differs from $f(x)-g(x)$ by less than $\varepsilon$. But some rational numbers are in the interval $(x-\delta,x+\delta)$, so we have a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The meaning of Implication in Logic How do I remember Implication Logic $(P \to Q)$ in simple English? I read some sentence like * *If $P$ then $Q$. *$P$ only if $Q$. *$Q$ if $P$. But I am unable to correlate these sentences with the following logic. Even though the truth table is very simple, I don't want to remember it without knowing its actual meaning. $$\begin{array}{ |c | c || c | } \hline P & Q & P\Rightarrow Q \\ \hline \text T & \text T & \text T \\ \text T & \text F & \text F \\ \text F & \text T & \text T \\ \text F & \text F & \text T \\ \hline \end{array}$$
Remember that "implies" is equivalent to "subset of". It works in exactly the same way: "if an element is in the subset (e.g A), it MUST also be in the superset (e.g. B)". By definition, it is impossible that an element is in the subset, but not in the superset. That's the P=1, Q=0; P=>Q = 0 case. In fact, "A ⊆ B" means that a ∈ A implies that a ∈ B. If a is not in subset A then you can't draw any conclusions on whether a is in the superset B. That's how I keep remembering it.
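For what it's worth (my addition), the table in the question is exactly the truth table of $\neg P \lor Q$, which a two-line script confirms:

```python
# enumerate all four truth assignments and evaluate (not P) or Q
rows = [(p, q, (not p) or q) for p in (True, False) for q in (True, False)]
# (not P) or Q is False only when P is True and Q is False,
# matching the single False row of the table in the question
false_rows = [(p, q) for p, q, implies in rows if not implies]
```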
{ "language": "en", "url": "https://math.stackexchange.com/questions/100286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 9, "answer_id": 1 }
Is there a formula for solving integrals of this form? I was wondering if there was a substitution formula to solve integrals of this form: $\int f(g(x))g''(x)dx$
What you can do depends a lot on the form of $g(x)$. I doubt very much that there is a general solution. However, there are two steps that I'd look at doing (these are only valid if $g(x)$ is smooth and monotonic): first an integration by parts, then a substitution. Taking $u = f(g(x))$ and $dv = g''(x)\;dx$ we'd get that: $$ \int f(g(x))g''(x)\;dx = f(g(x))g'(x) - \int f'(g(x))g'(x)^2\;dx $$ Now making the substitution $u = g(x)$ (which will only be valid for some $g(x)$), so that $du = g'(x)\;dx$, we get: $$ \int f(g(x))g''(x)\;dx = f(g(x))g'(x) - \int f'(u) g'( g^{-1}(u) )\;du $$ Whether or not this is an improvement will depend on what $g'(g^{-1}(u))$ is like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
About Self Numbers, which were discovered by D. R. Kaprekar. I'm trying to understand the algorithm to find self numbers, but I don't know what C, k, j, and b mean in this formula. What are they? How should I understand them, and what should I assign in order to solve it?
The formula you refer to is a recurrence. It generates some (not all) self-numbers. The first formula works in base 10, while the second works in base 2. The third generalizes to any base. $C_k$ refers to the $k^{th}$ generated self number ($C_1$ is the first, $C_2$ is the second, ..etc.). Let's use the first formula to generate some self-numbers: $C_1 = 9$ (as written between brackets) We get $C_2$ by replacing $k$ in the formula by 2. $C_2 = 8*10^{2-1} + C_{2-1} + 8 = 8*10 + C_1 + 8 = 8*10 + 9 + 8 = 97$ (by substituting $C_1 = 9$). You continue in this way and you get infinitely many self-numbers (but not all). If you really insist on generating all self-numbers, you iterate over all natural numbers and apply the test on each, or you find a cleverer way of doing it..
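To make this concrete, here is a short script (my own addition, not from the answer) that generates $C_1,\dots,C_4$ from the base-10 recurrence and checks that each really is a self number, i.e. that no $n$ satisfies $n + \mathrm{digitsum}(n) = C_k$:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def is_self(c):
    # c is a self number iff no generator n exists with n + digitsum(n) = c;
    # any generator must lie within 9 * (number of digits of c) below c
    return all(n + digit_sum(n) != c for n in range(max(0, c - 9 * len(str(c))), c))

# C_1 = 9, C_k = 8*10^(k-1) + C_(k-1) + 8
cs = [9]
for k in range(2, 5):
    cs.append(8 * 10 ** (k - 1) + cs[-1] + 8)
all_self = all(is_self(c) for c in cs)
```

This yields 9, 97, 905, 8913, each of which passes the self-number check.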
{ "language": "en", "url": "https://math.stackexchange.com/questions/100396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Every non zero commutative ring with identity has a minimal prime. Let $A$ be a non zero commutative ring with identity. Show that the set of prime ideals of $A$ has minimal elements with respect to inclusion. I don't know how to prove that. I can suppose that the ring is not an integral domain, since otherwise the ideal $(0)$ is a prime ideal (and hence the minimal one), but I don't know how to proceed. Probably it's a Zorn application.
Right, it's Zorn's lemma. Namely, show that the intersection of any downward chain of prime ideals is prime, and use Zorn's lemma to conclude that $\text{Spec}(A)$ has a minimal element. Just in case you're having difficulty proving the statement about the intersections suppose that $\Omega$ is a downward chain of prime ideals and let $\mathfrak{P}$ be the intersection of all the members of $\Omega$. Since the intersection of ideals are ideals, it suffices to show that $\mathfrak{P}$ is prime. To do this suppose that $ab\in\mathfrak{P}$ but neither $a$ nor $b$ was. Since $a$ nor $b$ is in $\mathfrak{P}$ we can find two prime ideals $\mathfrak{p},\mathfrak{p}'\in\Omega$ such that $a\notin\mathfrak{p}$ and $b\notin\mathfrak{p}'$. Since $\Omega$ is a downward chain we may assume without loss of generality that $\mathfrak{p}\subseteq\mathfrak{p}'$ so that $a,b\notin\mathfrak{p}$. That said, since $ab\in\mathfrak{P}$ we know that $ab\in\mathfrak{p}$ which contradicts that $\mathfrak{p}$ is prime. Thus, we see that $ab\in\mathfrak{P}$ implies either $a\in\mathfrak{P}$ or $b\in\mathfrak{P}$ and so $\mathfrak{P}$ is prime. Since $\Omega$ was arbitrary it follows that $\text{Spec}(A)$ has a minimal element, by Zorn's lemma. Remark: I left out a very small detail in the above proof that you should find and add.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Showing that $ \int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2 $ I would like to show that $$ \int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2 $$ What annoys me is that $ x-1 $ is the numerator so the geometric power series is useless. Any idea?
$\displaystyle \int_{0}^{1}\frac{x-1}{\log{x}}\;{dx} = \int_{0}^{1}\int_{0}^{1}x^{t}\;{dt}\;{dx} =\int_{0}^{1}\int_{0}^{1}x^{t}\;{dx}\;{dt} = \int_{0}^{1}\frac{1}{1+t}\;{dt} = \log(2). $
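Not part of the original answer, but the value $\ln 2$ is easy to confirm numerically; the sketch below uses a midpoint rule, which never evaluates the integrand at the endpoints, where the formula is indeterminate:

```python
import math

def integrand(x):
    # (x - 1)/ln(x), extended continuously: limit 0 at x -> 0, limit 1 at x -> 1
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return 1.0
    return (x - 1.0) / math.log(x)

n = 200000
h = 1.0 / n
approx = sum(integrand((i + 0.5) * h) for i in range(n)) * h  # midpoint rule
```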
{ "language": "en", "url": "https://math.stackexchange.com/questions/100495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 3, "answer_id": 1 }
One vs multiple servers - problem Consider the following problem: We have a simple queueing system with $\lambda$, the probabilistic intensity of queries per some predefined time interval. Now, we can arrange the system as a single high-end server ($M/M/1$, which can handle the queries with an intensity of $2\mu$) or as two low-end servers ($M/M/2$, each server working with an intensity of $\mu$). So, the question is - which variant is better in terms of overall performance? I suspect that it's the first one, but, unfortunately, my knowledge of queueing / probability theory isn't enough. Thank you.
If "overall performance" is the expected time a client/customer/query spends in the system, then the single-server system outperforms the second one. The reasoning is simple: the M/M/1 system works at "full" intensity even with a single query in the system; the M/M/2 system needs two queries present to reach the highest service intensity. So, queries arriving at an empty system spend less time in the M/M/1. [Queries arriving at a system with at least one query present spend the same time on average.]
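The intuition can be checked against textbook formulas: for the fast M/M/1, $W = 1/(2\mu-\lambda)$; for M/M/2, the Erlang-C formula. The sketch below (my own addition; I'm assuming the standard M/M/c results, which are not derived in this thread) compares the two for $\mu = 1$ and several arrival rates:

```python
from math import factorial

def w_mm1(lam, mu):
    # mean time in system for M/M/1 with service rate 2*mu
    return 1.0 / (2.0 * mu - lam)

def w_mmc(lam, mu, c):
    # mean time in system for M/M/c via the Erlang-C formula
    a = lam / mu        # offered load
    rho = a / c         # utilization, must be < 1 for stability
    p_wait = (a ** c / factorial(c)) / (
        (1 - rho) * sum(a ** k / factorial(k) for k in range(c))
        + a ** c / factorial(c)
    )
    return p_wait / (c * mu - lam) + 1.0 / mu

mu = 1.0
results = [(lam, w_mm1(lam, mu), w_mmc(lam, mu, 2)) for lam in (0.2, 1.0, 1.8)]
mm1_wins = all(w1 < w2 for _, w1, w2 in results)
```

For example at $\lambda = 1$, the fast single server gives $W = 1$ while the two slow servers give $W = 4/3$, consistent with the argument above.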
{ "language": "en", "url": "https://math.stackexchange.com/questions/100571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
What are the integers $n$ such that $\mathbb{Z}[\sqrt{n}]$ is integrally closed? I was recently reading about integral ring extensions. One of the first examples given is that $\mathbb{Z}$ is integrally closed in its quotient field $\mathbb{Q}$. Another is that $\mathbb{Z}[\sqrt{5}]$ is not integrally closed in $\mathbb{Q}(\sqrt{5})$ since for example $(1+\sqrt{5})/2\in\mathbb{Q}[\sqrt{5}]$ is integral over $\mathbb{Z}$ as a root of $X^2-X-1$, but $(1+\sqrt{5})/2\notin\mathbb{Z}[\sqrt{5}]$. Now I'm curious, can we find what are all integers $n$ such that $\mathbb{Z}[\sqrt{n}]$ is integrally closed (equal to its integral closure in its quotient field)? One thing I do know is that unique factorization domains are integrally closed, so I think rings like $\mathbb{Z}[\sqrt{-1}]$, $\mathbb{Z}[\sqrt{-2}]$, $\mathbb{Z}[\sqrt{2}]$ and $\mathbb{Z}[\sqrt{3}]$ are integrally closed, as they are Euclidean domains, and thus are UFDs. But can we say what all integers $n$ are such that $\mathbb{Z}[\sqrt{n}]$ is integrally closed? Thanks!
$\mathbb Z[\sqrt{n}]$ is integrally closed in $\mathbb Q(\sqrt{n})$ ($n\in\mathbb Z$, $n\neq1$) if and only if $n$ is square free and $n$ is not congruent to $1$ mod $4$ (or $n$ is a perfect square, in which case we have $\mathbb Z$ and $\mathbb Q$; thanks Arturo). Moreover, if $n\equiv1 \pmod 4$, then $\mathbb Z[\frac{1+\sqrt{n}}{2}]$ is integrally closed in $\mathbb Q(\sqrt{n})$. Sketch of proof of why $\mathbb Z[\sqrt{n}]$ is integrally closed in $\mathbb Q(\sqrt{n})$ for $n$ a square free number not congruent to $1$ modulo $4$. Let $\mathcal O$ be the set of algebraic integers in $\mathbb Q(\sqrt{n})$. We can see that $\mathbb Z[\sqrt{n}]\subseteq\mathcal O$ (by looking for suitable polynomials). Let $\alpha=p+q\sqrt{n}\in\mathcal O-\mathbb Z$ with $p,q\in\mathbb Q$. $\alpha$ is a root of the polynomial $$ f(x)=(x-\alpha)(x-\bar\alpha)=x^2-2px+(p^2-nq^2). $$ But $f(x)$ is monic and of minimal degree (with coefficients in $\mathbb Q$), so it has to divide the monic polynomial $g(x)\in\mathbb Z[x]$ with $g(\alpha)=0$. This implies that $f(x)\in\mathbb Z[x]$, thus $2p,\; p^2-nq^2\in\mathbb Z$. Now, you should prove that $p$ and $q$ are in $\mathbb Z$, using that $n\equiv 2,3\pmod4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 0 }
Solutions of some Diophantine equations Respected Mathematicians, The Diophantine equation $$2^x + 5^y = z^2$$ has solutions $$x = 3, y = 0, z = 3$$ and $$x = 2, y = 1, z = 3$$ I got these solutions by trial and error method. To be honest, these solutions are below the number $5$. So, I easily verified them by trial and error method. I would like to know the method which will give the solutions of the above equation, as well as the solutions of equations below. a) $$4^x + 7^y = z^2$$ b) $$4^x + 11^y = z^2$$ Looking forward to your solution and support. baba
I'll do a piece of it to show you some methods you can try on the other pieces. $2^x+5^y=z^2$. Let's do the case where $y=2s$ is even. $2^x=z^2-(5^s)^2=(z+5^s)(z-5^s)$, so $z+5^s=2^m$ and $z-5^s=2^n$ with $m+n=x$. Eliminating $z$, $2\times5^s=2^m-2^n$, so $5^s=2^{m-1}-2^{n-1}$. The left side is odd, so the right side is odd, so $n=1$, and $5^s=2^{m-1}-1$. Left side is 1 modulo 4, so right side is 1 modulo 4, so we must have $m=2$. So if there's a solution with $y$ even, then $x=3$, $y=0$, $z=3$.
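Not part of the answer above, but a brute-force search over small exponents (my own script) agrees that these are the only small solutions:

```python
from math import isqrt

# search 2^x + 5^y = z^2 for all x < 60, y < 30
sols = []
for x in range(0, 60):
    for y in range(0, 30):
        s = 2 ** x + 5 ** y
        z = isqrt(s)
        if z * z == s:
            sols.append((x, y, z))
```

Only $(x,y,z) = (2,1,3)$ and $(3,0,3)$ turn up, matching the trial-and-error solutions in the question.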
{ "language": "en", "url": "https://math.stackexchange.com/questions/100709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How do I find the projection of a point onto a plane Let's say I have the point $(x, y, z)$ and the plane with normal $(a, b, c)$ passing through the point $(d, e, f)$. I am trying to use this in $3D$ programming. Thank you!
Take the displacement vector from the point in the plane to the given point: $$ {\bf v}=(x-d , y-e, z-f) $$ and let ${\bf w}$ be the normal vector to the plane. We can describe ${\bf v}$ as a sum of two vectors; one that is perpendicular to the normal vector ${\bf w}$ (denoted by ${\bf v}_\perp$), and another that is parallel to the normal vector ${\bf w}$ (denoted by ${\bf v}_\parallel$). $$ {\bf v} = {\bf v}_\perp + {\bf v}_\parallel $$ ${\bf v}_\parallel$ is given by $$ {\bf v}_\parallel = {{\bf v}\cdot{\bf w}\over\Vert{\bf w}\Vert^2} {\bf w} $$ Then $$ {\bf v}_\perp = {\bf v} - {{\bf v}\cdot{\bf w}\over\Vert{\bf w}\Vert^2} {\bf w} $$ From this, the required point is $(d,e,f)+{\bf v}_\perp$.
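Since the question mentions 3D programming, here is the computation as a small function (my own sketch using plain tuples; the names are mine, not from the answer):

```python
def project_point_onto_plane(p, n, q):
    """Project point p onto the plane through q with normal n."""
    v = tuple(pi - qi for pi, qi in zip(p, q))         # displacement q -> p
    n_dot_n = sum(ni * ni for ni in n)
    scale = sum(vi * ni for vi, ni in zip(v, n)) / n_dot_n
    v_par = tuple(scale * ni for ni in n)              # component along the normal
    # subtract the normal component: q + v_perp
    return tuple(qi + vi - wi for qi, vi, wi in zip(q, v, v_par))

# example: plane z = 0 (normal (0, 0, 1) through the origin)
proj = project_point_onto_plane((3.0, -2.0, 5.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0))
```

For the example plane $z = 0$, the point $(3, -2, 5)$ lands at $(3, -2, 0)$, as expected.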
{ "language": "en", "url": "https://math.stackexchange.com/questions/100761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Local diffeomorphism from $\mathbb R^2$ onto $S^2$ Is there any local diffeomorphism from $\mathbb R^2$ onto $S^2$?
Yes, there are smooth onto functions $f : \mathbb R^2 \to S^2$ whose derivative is everywhere rank $2$. This does not contradict Paul's answer since what Paul's answer tells you is $f$ can't be a covering map. I think perhaps a good way to think about this is peeling an orange without "flaking" it: The orange peel you can think of as as being like a neighbourhood of an embedded arc in $S^2$. So let's see if there's a nice formula that does the job. Step 1: $\mathbb R^2$ is diffeomorphic to $\mathbb R \times (-1,1)$. The map is $(x,y) \longmapsto (x,\frac{y}{\sqrt{1+y^2}})$. Step 2: Let $\gamma : \mathbb R \to S^2$ be any immersion of bounded curvature whose image is dense in $S^2$. Step 3: The onto submersion $\mathbb R \times (-1,1) \to S^2$ is given by sending $(x,y)$ to a little push-off of $\gamma(x)$ where you push in the direction a 90-degree counter-clockwise direction to $\gamma'(x)$, some distance proportional to $y$. The proportionality constant will be something like the reciprocal of the maximum curvature of $\gamma$ -- this ensures the map from the geometric normal bundle to the curve $\gamma$ is a submersion. So I hope that gives you the idea. I imagine it's not too hard to cook up an explicit formula for such a $\gamma$ but it's too late for me to think-up one, apparently. umm... or maybe... $\gamma(t) = (\cos t \cos(t/a), \cos t \sin(t/a), \sin t)$ would appear to get the job done provided the vector subspace of $\mathbb R$ generated by $\pi$ and $a$ is $2$-dimensional over $\mathbb Q$. There are of course simpler, perhaps more explicit constructions. The idea would be to think of such an onto submersion $\mathbb R \times (-1,1) \to S^2$ as a describing a "brush stroke" where you are painting a sphere. The first parameter $x$ is time, and the 2nd $y$ is the parameter along the brush. 
So to construct an onto submersion, the game is to entirely paint the sphere in one stroke, where the only constraint is your direction of travel has to be independent of the line where the brush is contacting the sphere. Clearly you can do this, it's just a matter of writing it out explicitly as some function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Resource to improve ability to interpret and understand formulas and algorithms I wasn't sure whether this belonged here or on StackOverflow but here goes: I've been doing a lot of research into algorithms recently and I've found that my inability to properly interpret and understand some of the formulas and algorithms is starting to impede my ability to get anything meaningful enough to go off and write a program to implement the algorithms described. Is there any resource (online prefered but books are ok) you could recommend I use that is for total beginners? Even though I'm (mainly) a programmer, I don't think knowing 5+ languages are worth anything unless I can change the simple fact that my mathematical abilities are merely average. I'd prefer something that starts off utterly simple and builds up to the more complex/abstract stuff. If it had Pseudo code or an actual implementation, even better. Any recommendations?
I think "Introduction to Algorithms" by Cormen is the standard reference for this. Also, I found "Concrete Mathematics - A Foundation for Computer Science" to be very enjoyable. The first one really concentrates on the theory of algorithms while the second one is more about mathematics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/100964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
a simple example of a non-continuous function whose graph is homeomorphic to the domain There is a theorem that says that if $f$ is a continuous map, then its graph is homeomorphic to the domain. The converse is false. I'd like to find the simplest counterexample possible. I've found this $f:\mathbb{R}\longrightarrow \{0,1\}$, where $\mathbb R$ is endowed with the topology generated by the sets $\{1\},\{2\},\{3\},...$ and $f(x)=1$ for $x=0,1,2,...$ and $f(x)=0$ otherwise. Is there a simpler example? edit: In my example, $\{0,1\}$ has discrete topology.
Here's one that seems easy to visualize. Take $X = [0,1) \times \mathbb{N}$ with its usual topology; i.e. $X$ is a countable disjoint union of half-open "sticks". Define $f : X \to \mathbb{N}$ by $$f(x,n) = \begin{cases} 1, & n=1, x < 1/2 \\ 2, & n=1, x \ge 1/2 \\ n+1, & n > 1.\end{cases}$$ Then a homeomorphism from the graph of $f$ to $X$ is given by $$F((x,n),k) = \begin{cases} (2x,1), & k=1 \\ (2x-1, 2), & k=2 \\ (x,k), & k > 2. \end{cases}$$ (Hopefully I have written it correctly.) The idea is that the graph looks just the same, except that the first stick has been broken in half. By stretching the half-pieces we get two full-size sticks. But there were already infinitely many sticks, so adding one more doesn't produce an "extra" if we re-index.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute: $\int_{0}^{1}\frac{x^4+1}{x^6+1} dx$ I'm trying to compute: $$\int_{0}^{1}\frac{x^4+1}{x^6+1}dx.$$ I tried to change $x^4$ into $t^2$ or $t$, but it didn't work for me. Any suggestions? Thanks!
Edited Here is a much simpler version of the previous answer. $$\int_0^1 \frac{x^4+1}{x^6+1}dx =\int_0^1 \frac{x^4-x^2+1}{x^6+1}dx+ \int_0^1 \frac{x^2}{x^6+1}dx$$ After canceling the first fraction, and subbing $y=x^3$ in the second we get: $$\int_0^1 \frac{x^4+1}{x^6+1}dx =\int_0^1 \frac{1}{x^2+1}dx+ \frac{1}{3}\int_0^1 \frac{1}{y^2+1}dy = \frac{\pi}{4}+\frac{\pi}{12}=\frac{\pi}{3} \,.$$ P.S. Thanks to Zarrax for pointing out the stupid mistakes I made...
{ "language": "en", "url": "https://math.stackexchange.com/questions/101049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 3 }
Showing $C=\bigg\{ \sum_{n=1}^\infty a_n 3^{-n}: a_n=0,2 \bigg\}$ is uncountable Let us define the set $$C=\bigg\{ \sum_{n=1}^\infty a_n 3^{-n}: a_n=0,2 \bigg\}$$ This is the Cantor set, could anyone help me prove it is uncountable? I've been trying a couple of approaches, for instance assume it is countable, list the elements of $C$ as decimal expansions, then create a number not in the list, I am having trouble justifying this though. Secondly i've been trying to create a function $f$ such that $f(C)=[0,1]$. Many thanks
Why list the numbers as their decimal expansion? Given a countable list $((a_n^m)_n)_m$ of sequences with values $0$ and $2$, construct another sequence $(b_n)_n$ that does not appear in the list. You can do that just as in Cantor's proof that the unit interval is uncountable. For the surjection from $C$ onto $[0,1]$, notice that every element of $C$ uniquely determines the sequence $a_n$ it comes from. Now map every sequence $(a_n)_n$ to the sequence $(b_n)_n$ where $b_n=0$ if $a_n=0$ and $b_n=1$ if $a_n=2$. Now consider $\sum_{n=1}^\infty b_n2^{-n}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
primitive n-th roots of unity Show that the primitive n-th roots of unity have the form $e^{2ki\pi/n}$ for $k,n$ coprime for $0\leq k\leq n$. Since all primitive n-th roots of unity are n-th roots of unity by definition they all have that form, the question is, how to show $k$ and $n$ are coprime.
Primitivity means that no positive power of $\zeta=e^{2\pi i k/n}$ less than $n$ will achieve unity. If $k$ is not coprime to $n$ and $\gcd(k,n)=m$, then observe $\zeta^{(n/m)}=e^{2\pi i (k/m)}=1$ but $n/m<n$ if $m>1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Integral for an inverse with no closed-form solution I have a function that gives $t$ in terms of $y$ that has no closed-form solution for $y$ (W|A). I have that $$\frac{dy}{dt} = \sqrt{a - \frac{b}{y(t)}}.$$ Is there some way to set up an integral that I can evaluate that would output $y$ given $t$? Integrating $dt$: $$y = \int {\sqrt{a-\frac{b}{y(t)}}} dt$$ Is there anywhere I can go from here?
Have you heard of the cycloid? It is parametrized by $x = a\dfrac{\theta + \sin \theta}{2}$ $y = a\dfrac{1-\cos \theta}{2}$ and is the solution to $$\frac{{dx}}{{dy}} = \sqrt {\frac{{a - y}}{y}} $$ Since your equation is $$\frac{{dy}}{{dt}} = \sqrt {\frac{{ay - b}}{y}} $$ we can go like this: Put $$\frac{{dt}}{{dy}} = \sqrt {\frac{y}{{ay - b}}} $$ Now let $$y = \frac{b}{a}{\cosh ^2}\theta $$ We get $$dt = \frac{{2b}}{{{a^{3/2}}}}{\cosh ^2}\theta d\theta $$ So $$dt = \frac{{2b}}{{{a^{3/2}}}}\left( {\frac{1}{2} + \frac{{\cosh 2\theta }}{2}} \right)d\theta $$ and integrating gives $$t = \frac{{2b}}{{{a^{3/2}}}}\left( {\frac{{2\theta }}{4} + \frac{{\sinh 2\theta }}{4}} \right)+C$$ So your solution is parametrized by ($\phi = 2\theta$, suppose initial conditions make $C=0$) $$\eqalign{ & y = \frac{b}{{2a}}\left( {1 + \cosh \phi } \right) \cr & t = \frac{b}{{2{a^{3/2}}}}\left( {\phi + \sinh \phi } \right) \cr} $$ In the same way the cycloid is a "deformed" circle, your solution is a "deformed" hyperbola. The cycloid has a closed form for $(x,y)$ coordinates so you might be able to find one for the above curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find combinations of two digits I am working on a puzzle that was created by one of my friends. She gave me the last 4 digits of her mobile phone number and I want to find the first 6 digits. I guessed the first 4 digits. Assume that AAAA is the first 4 digits, MMMM is the last 4 digits, and XX is the 2 digits after the first 4. Her number is 10 digits, so it would be AAAAXXMMMM. I know AAAA and MMMM and now I have to find XX. I know getting the exact value is impossible, but I think I can get the combinations of XX [2 digits]. How can I get this? Thank you.
You should apply the formula for permutations with repetition. So the number of permutations in this particular case is given by: $N=10^2$
{ "language": "en", "url": "https://math.stackexchange.com/questions/101247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to perform a double integration Suppose you are trying to find the integral of $x^2 + y^2$ such that $(x^2 + y^2) \leq 1$. How would you do this? Attempt: I know the radius is 1 but I am stuck when trying to determine the limits of integration.
ncmathsadist's answer is perfect. I want to expand on it a bit (hinting at the parts where I think you are stuck). We do not need to do polar coordinates (although you'll likely find while doing the integral that you'll use a trig substitution to compute it, ironically doing the polar coordinate bit surreptitiously). Usually, the way to think of these problems is to find the boundary curves. We recognize $x^2 + y^2 = 1$ as the unit circle. So we are integrating over the unit circle. Now let's set up our boundary curves. We're going to write our integral as $\iint (\text{stuff}) \mathrm{d} 1 \mathrm{d} 2$, where the $1$ and the $2$ are $x$ and $y$ in some order. What order today? Usually, this is done by considering which direction is easier. Suppose we wanted to integrate with respect to $x$ first (so that the $1$ in the above integral was $x$). This seems good, because we note that for every $y$, the boundary curves are always the same (so we don't need to split up the integral or do anything fancy). What are the boundaries? Well, $x$ goes as far right as the right side of the circle and as far left as the left side. The right side has formula $x = \sqrt{1 - y^2}$, and the left side $-\sqrt{1 - y^2}$. So the inner integral in this case reads $$\int_{-\sqrt{1 - y^2}}^{\sqrt{1 - y^2}} (\text{stuff}) \;\mathrm{d}x$$ Now we have collapsed the $x$ direction. How far does $y$ extend over this boundary? It goes from $-1$ to $1$. That's how we get the second set of limits. Does that make sense? (I deliberately chose the opposite order from ncmathsadist, because they can both be done).
{ "language": "en", "url": "https://math.stackexchange.com/questions/101319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding how many terms of the harmonic series must be summed to exceed x? The harmonic series is the sum 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... + 1/n + ... It is known that this sum diverges, meaning (informally) that the sum is infinite and (more formally) that for any real number x, there is some number n such that the sum of the first n terms of the harmonic series is greater than x. For example, given x = 3, we have that 1 + 1/2 + 1/3 + ... + 1/11 = 83711/27720 ≈ 3.02 So eleven terms must be summed together to exceed 3. Consider the following question Given an integer x, find the smallest value of n such that the sum of the first n terms of the harmonic series exceeds x. Clearly we can compute this by just adding in more and more terms of the harmonic series, but this seems like it could be painfully slow. The best bound I'm aware of on the number of terms necessary is $2^{O(x)}$, which uses the fact that 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ... is greater than 1 + (1/2) + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ... which is in turn 1 + 1/2 + 1/2 + 1/2 + ... where each new 1/2 takes twice as many terms as the previous to accrue. This means that the brute-force solution is likely to be completely infeasible for any reasonably large choice of x. Is there a way to calculate the harmonic series which requires fewer operations than the brute-force solution?
If you replace the sum by an integral you get the logarithm function so that $\ln(n)$ is a first approximation. In fact the Euler $\gamma$ constant ($.577215664901532860606512090$) may be defined by the following formula : $\displaystyle \gamma=\lim_{n \to \infty} \left(H_n-\ln(n+1/2)\right)$ From this you may deduce the equivalence as $n \to \infty$ : $$H_n \thicksim \gamma + \ln(n+1/2) $$ (for $n=10^6$ we get about 14 digits of precision) And invert this (David Schwartz proposed a similar idea) to get the value $n$ required to get a sum $s$ : $$n(s) \approx e^{s-\gamma} -\frac12$$ The first integer to cross the $s$ should be given by $\lfloor e^{s-\gamma}+\frac12\rfloor\;$ ('should be' because of the little error made on $H_n$ compensated by the low probability of people testing values much higher than 20 :-)). Example : the sum will cross the value $20$ for $n$ evaluated at $\rm floor(\rm exp(20-gamma)+0.5)= \rm round(\rm exp(20-gamma))= 272400600$ and indeed (this is not a proof!) : $H_{272400599}=19.9999999977123$ $H_{272400600}=20.0000000013833$
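As a quick sanity check (my own sketch, not part of the original answer), the estimate $\lfloor e^{s-\gamma}+\frac12\rfloor$ can be compared against brute-force partial sums for small targets:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def n_estimate(s):
    """Closed-form estimate of the first n with H_n > s."""
    return math.floor(math.exp(s - GAMMA) + 0.5)

def n_bruteforce(s):
    """Add terms of the harmonic series until the partial sum exceeds s."""
    total, n = 0.0, 0
    while total <= s:
        n += 1
        total += 1.0 / n
    return n

for s in range(1, 6):
    print(s, n_estimate(s), n_bruteforce(s))  # the two columns agree
```

For targets up to around 15 the brute force is still feasible and keeps agreeing with the estimate; beyond that only the closed form is practical.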
{ "language": "en", "url": "https://math.stackexchange.com/questions/101371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 1 }
Calculate the derivate $dx/dy$ using $\int_0^x \sqrt{6+5\cos t} \, dt + \int_0^y \sin t^2 \, dt = 0$ I want to calculate $\frac{dx}{dy}$ using the equation below. $$\int_0^x \sqrt{6+5\cos t}\;dt + \int_0^y \sin t^2\;dt = 0$$ I don't even know from where to start. Well I think that I could first find the integrals and then try to find the derivative. The problem with this approach is that I cannot find the result of the first integral. Can someone give me a hand here?
First, differentiate both sides with respect to $x$: $$0=\frac{d}{dx} \left(\int_0^x \sqrt{6+5\cos t}\;dt + \int_0^y \sin t^2\;dt\right) = \sqrt{6+5\cos x} + (\sin y^2)\frac{dy}{dx}$$ Now we have a differential equation: $$ \sqrt{6+5\cos x} + (\sin y^2)\frac{dy}{dx} = 0. $$ Separate variables: $$ (\sin y^2)\;dy = -\sqrt{6+5\cos x}\;dx $$ Now the problem is to find two antiderivatives.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
generalized ordering of positive semi-definite matrix by eigenvalues I know positive semi-definite matrices are generalizations of non-negative numbers. So "ordering" of the two systems should be pretty much like each other. How to prove the following theorem? For two symmetric $X$ and $Y$, if $X \geq Y$, then $\lambda_i(X) \geq \lambda_i(Y)$, for every $i$. $\lambda_i(\cdot)$ denotes the $i$-th largest eigenvalue. And what about the converse statement? Is it true? Thanks a lot.
I checked Roger A. Horn's matrix analysis book. It provides a very thorough analysis of the ordering defined on positive semi-definite matrices. Here is the chain of implications used to prove the original statement: Courant-Fischer Theorem $\Rightarrow$ Weyl Theorem $\Rightarrow$ Monotonicity Theorem The original statement is a direct consequence of the Monotonicity Theorem. And the converse statement is not true. Here is a simple example: $A = \bigl(\begin{smallmatrix} 2&0\\ 0&4 \end{smallmatrix} \bigr)$ $B = \bigl(\begin{smallmatrix} 3&0\\ 0&1 \end{smallmatrix} \bigr)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/101548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Convergence of $\sum_{n=1}^\infty\frac{1}{2\cdot n}$ It is possible to deduce the value of the following (in my opinion) converging infinite series? If yes, then what is it? $$\sum_{n=1}^\infty\frac{1}{2\cdot n}$$ where n is an integer. Sorry if the notation is a bit off, I hope youse get the idea.
To me, the easiest way to see that the harmonic series diverges is to use the Integral test. Then you do not have to deal with coming up with a formula.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Equation of straight line I know, $Ax + By = C$ is the equation of straight line but a different resource says that: $y = mx + b$ is also an equation of straight line? Are they both same?
$Ax + By = C$ $By = -Ax + C$ $y = -(A/B)x + C/B$ Let $m = -\frac{A}{B}$. Let $b = \frac{C}{B}$. $y = mx + b$ So they are equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/101681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When is $a^k \pmod{m}$ a periodic sequence? Let $a$ and $m$ be a positive integers with $a < m$. Suppose that $p$ and $q$ are prime divisors of $m$. Suppose that $a$ is divisible by $p$ but not $q$. Is there necessarily an integer $k>1$ such that $a^k \equiv a \pmod{m}$? Or is it that the best we can do is say there are $n>0$ and $k>1$ such that $a^{n+k} \equiv a^n \pmod{m}$ What can be said about $n$ and $k$? EDIT: Corrected to have $k>1$ rather than $k>0$. EDIT: The following paper answers my questions about $n$ and $k$ very nicely. A. E. Livingston and M. L. Livingston, The congruence $a^{r+s} \equiv a^r \pmod{m}$, Amer. Math. Monthly $\textbf{85}$ (1978), no.2, 97-100. It is one of the references in the paper Math Gems cited. Arturo seems to say essentially the same thing in his answer.
(Assuming you meant $k\gt 1$) The best you can say is that there are $n$ and $k$ such that $a^{n+k}\equiv a^n\pmod{m}$. And of course, there is a least $n$ for which there exists such a $k$, and a least $k$ that makes this true; in the sense that if $r$ and $s$ are any distinct positive integers such that $a^r\equiv a^s\pmod{m}$, then $r,s\geq n$, and $r\equiv s\pmod{k}$. (These are the "cyclic monoids/semigroups".) For example, $m=12$, $a=2$. Then $a^2\equiv 4\pmod{12}$, $a^3\equiv 8\pmod{12}$, $a^4\equiv 4\pmod{12}$, and you never get back to $2$. The problem will arise whenever you have a prime $p$ that divides both $a$ and $m$, but the highest power of $p$ that divides $a$ is strictly smaller than the highest power of $p$ that divides $m$. Consider the situation one prime at a time. If $\gcd(p,a)=1$, then there is a $k_p$ such that $a^{k_p}\equiv a\pmod{p^{r_p}}$, where $p^{r_p}$ is the exact power of $p$ that divides $m$. We know that $k_p$ divides $p^{r_p-1}(p-1)$, but in general we don't know more than that. If $p|a$, let $p^{s_p}$ be the exact power of $p$ that divides $a$. If $n_p=\lceil \frac{r_p}{s_p}\rceil$ we have that $n_p$ is the smallest positive integer such that $a^{n_p+1}\equiv a^{n_p}\pmod{p^{r_p}}$, and moreover, $a$, $a^2,\ldots,a^{n_p}$ are pairwise distinct modulo $p^{r_p}$. By the Chinese Remainder Theorem, $a^{n+k}\equiv a^n\pmod{m}$ if and only if $a^{n+k}\equiv a^n\pmod{p^{r_p}}$ for each prime $p$ that divides $m$. For primes that do not divide $a$, this implies that $k$ is a multiple of $k_p$; for primes that do divide $a$, this implies that $n\geq n_p$ and $k$ is arbitrary. So you can say that $n\geq \max\{n_p \mid p|\gcd(a,m)\}$ and $\mathrm{lcm}\{k_p\mid p\text{ divides }m\text{ and }p\text{ does not divide }a\}$ divides $k$. 
Conversely, if $n$ and $k$ satisfy those conditions, then the value of $n$ guarantees that $a^{n+k}\equiv a^n\pmod{p^{r_p}}$ for all primes that divide $\gcd(a,m)$; and the value of $k$ guarantees that $a^{n+k}\equiv a^n\pmod{p^{r_p}}$ for all primes that divide $m$ but not $a$.
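The eventual periodicity described above can be illustrated with a short brute-force sketch (my own code, not from the answer): record the first exponent at which each residue occurs and read off the minimal $n$ and $k$:

```python
def tail_and_period(a, m):
    """Smallest n >= 1 and k >= 1 with a^(n+k) = a^n (mod m)."""
    seen = {}              # residue -> first exponent at which it appeared
    x, e = a % m, 1
    while x not in seen:
        seen[x] = e
        x = (x * a) % m
        e += 1
    n = seen[x]            # start of the cycle
    return n, e - n        # (pre-period, period)

print(tail_and_period(2, 12))  # (2, 2): 2, 4, 8, 4, 8, ... never returns to 2
print(tail_and_period(2, 7))   # (1, 3): 2, 4, 1, 2, ... cycles back to 2
```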
{ "language": "en", "url": "https://math.stackexchange.com/questions/101755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Polynomial of at least degree 3 for this Cauchy Problem I'm given this : $$y''(x)+\sin(x)y'(x)+x^3y(x)=1+x$$ $$y'(0)=0,\;y(0)=1$$ and I'm asked to calculate a polynomial of at least degree $3$ that approximates the solution of that problem. May I try a series for $y(x)$ and then just chop at degree $3$?. If this is correct, then $$y(x)=\sum_{k=0}^{\infty}c_{k}x^{k}, y'(x)=\sum_{k=1}^{\infty}kc_{k}x^{k-1},y''(x)=\sum_{k=2}^{\infty}k(k-1)c_{k}x^{k-2}$$ Substitution into the ODE gives $$\sum_{k=2}^{\infty}k(k-1)c_{k}x^{k-2}+\sin(x)\sum_{k=1}^{\infty}kc_{k}x^{k-1}+x^3\sum_{k=0}^{\infty}c_{k}x^{k}=1+x$$ and $$\sum_{k=2}^{\infty}k(k-1)c_{k}x^{k-2}+\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k+1}}{(2k+1)!}\sum_{k=1}^{\infty}kc_{k}x^{k-1}+\sum_{k=0}^{\infty}c_{k}x^{k+3}=1+x$$ Now, how do I match the powers to set the coefficients of $x^0=1$, the coefficients of $x^1=1$ , and the rest equal to zero? I'm stuck in the index change and in the product of $y'(x)$ and $sin(x)$ series. Thanks for your time.
Using $y'(0)=0, y(0)=1$, you can tell $c_0 = 1$ and $c_1=0$. Looking at constant terms in your final line, you have $2c_2=1$ so $c_2=1/2$. Looking at coefficients of $x$ in the final line you have $6 c_3 x +c_1 x = x$ so $c_3 = 1/6$ meaning $$y(x)=1+\frac{x^2}{2} + \frac{x^3}{6} + \cdots$$ If you wanted to go further and look at coefficients of $x^2$ in the final line you have $12 c_4 x^2 + 2c_2 x^2 =0$ so $c_4=-1/12$, and it is not difficult to continue doing this again and again.
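As a numerical cross-check (my own sketch, not part of the answer), one can integrate the initial value problem with a standard RK4 step and compare against the degree-3 polynomial near $x=0$:

```python
import math

def rhs(x, u, v):
    # y'' = 1 + x - sin(x) y' - x^3 y, as the first-order system (u, v) = (y, y')
    return v, 1 + x - math.sin(x) * v - x**3 * u

def solve(x_end, h=1e-3):
    x, u, v = 0.0, 1.0, 0.0          # y(0) = 1, y'(0) = 0
    while x < x_end - 1e-12:
        k1u, k1v = rhs(x, u, v)
        k2u, k2v = rhs(x + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = rhs(x + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = rhs(x + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return u

x = 0.1
print(solve(x), 1 + x**2/2 + x**3/6)  # agree to about the size of the x^4 term
```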
{ "language": "en", "url": "https://math.stackexchange.com/questions/101833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Numerical analysis textbooks and floating point numbers What are some recommended numerical analysis books on floating point numbers? I'd like the book to have the following * *In depth coverage on the representation of floating point numbers on modern hardware (the IEEE standard). *How to do arbitrary precision floating point calculations with a reasonably fast modern algorithm. *How to compute the closest 32-bit floating point representation of a dot product and cross product. And do this fast, so no relying on generic arbitrary precision calculations to get the bits of the 32-bit floating point number right. From what I can infer from doing some searches most books tend to focus on stuff like the runge kutta and not put much emphasis on how to make floating point calculations that are ultra precise.
Try these books: * *Numerical Computing with IEEE Floating Point Arithmetic by Overton *Accuracy and Stability of Numerical Algorithms by Higham *Modern Computer Arithmetic by Brent and Zimmermann
{ "language": "en", "url": "https://math.stackexchange.com/questions/101891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Easiest way to perform Euclid's division algorithm for polynomials Let's say I have the two polynomials $f(x) = x^3 + x + 1$ and $g(x) = x^2 + x$ over $\operatorname{GF}(2)$ and want to perform a polynomial division in $\operatorname{GF}(2)$. What's the easiest and most bullet proof way to find the quotient $q(x) = x + 1$ and the remainder $r(x)=1$ by hand? The proposal by the german edition of Wikipedia is rather awkward.
$f$ corresponds to the binary number $1011$ and $g$ to $110$ if you identify $x$ with $2$. Appending a $0$ (resp. multiplication by $2$) corresponds to multiplying with $x$, and $\oplus$ (exclusive or) is addition.

1011 : 110 = 11   i.e., the quotient is $x+1$
110
---
 111
 110
 ---
   1              i.e., the remainder is 1
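The same long division is easy to do with plain integer bit operations; here is a minimal sketch (my own code) where subtraction over $\operatorname{GF}(2)$ is XOR:

```python
def gf2_divmod(f, g):
    """Divide polynomial f by g over GF(2); both encoded as integer bit masks."""
    assert g != 0
    q = 0
    while f.bit_length() >= g.bit_length():
        shift = f.bit_length() - g.bit_length()
        q ^= 1 << shift          # record this quotient term
        f ^= g << shift          # subtract (= XOR) the shifted divisor
    return q, f                  # (quotient, remainder)

# x^3 + x + 1 divided by x^2 + x:
q, r = gf2_divmod(0b1011, 0b110)
print(bin(q), bin(r))  # 0b11 (= x + 1) and 0b1 (= 1)
```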
{ "language": "en", "url": "https://math.stackexchange.com/questions/101963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is it possible to compute the variance without computing the mean first? I have a list of values of a random variable $x \in \mathbb R$. Is it possible to find the varience $\overline{(x - \overline x)^2}$ without computing the mean $\overline x$ first? That is to process the list only once.
You can use that the variance is $\overline{x^2} - \overline {x}^2$, which takes only one pass (computing the mean and the mean of the squares simultaneously), but can be more prone to roundoff error if the variance is small compared with the mean.
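Both points can be sketched in a few lines (my own illustration, not from the answer): the sum-of-squares identity above, plus Welford's running update, which is the usual numerically stabler single-pass alternative:

```python
def var_sum_of_squares(xs):
    """Population variance via mean(x^2) - mean(x)^2, in one pass."""
    n = s = sq = 0
    for x in xs:
        n += 1
        s += x
        sq += x * x
    return sq / n - (s / n) ** 2

def var_welford(xs):
    """Population variance via Welford's running update, in one pass."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / n

data = [1.0, 2.0, 3.0, 4.0]
print(var_sum_of_squares(data), var_welford(data))  # both give 1.25
```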
{ "language": "en", "url": "https://math.stackexchange.com/questions/102006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is there a systematic way of finding the conjugacy class and/or centralizer of an element? Is there a systematic way of finding the conjugacy class and centralizer of an element? Could the task be simplified if we are working with "special groups" such as $S_n$ or $A_n$? Are there any intuitive approaches? Thanks.
For a finite group, there is a perfectly systematic way: to find the conjugacy class of $x$, just compute every element $g^{-1}xg$, and to find the centralizer, just compare $gx$ to $xg$ for all $g$ in the group. For $S_n$, two elements are conjugate if and only if they have the same cycle structure. So $(123)(45)$ is conjugate to $(396)(47)$ in $S_9$, but not to $(12)(45)$ or $(12345)$ or.... There are lots of ways to use other facts you may know about a group or about an element in a group to simplify the calculation of a conjugacy class or a centralizer, but I'm afraid there is no systematic way to list all these facts. They just come with doing examples or reading worked examples in textbooks.
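For small groups the "perfectly systematic way" is easy to carry out by machine; a brute-force sketch (my own code; a permutation is encoded as the tuple $(\sigma(0),\ldots,\sigma(n-1))$):

```python
from itertools import permutations

def compose(a, b):
    """(a o b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

n = 4
G = list(permutations(range(n)))   # all of S_4
x = (1, 2, 0, 3)                   # the 3-cycle (0 1 2)

conj_class = {compose(inverse(g), compose(x, g)) for g in G}
centralizer = [g for g in G if compose(g, x) == compose(x, g)]

print(len(conj_class), len(centralizer))  # 8 and 3, with 8 * 3 = |S_4| = 24
```

The product of the two sizes equals the group order, as the orbit-stabilizer theorem predicts.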
{ "language": "en", "url": "https://math.stackexchange.com/questions/102170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 2, "answer_id": 1 }
A theorem about inductive inference In the book 'Introduction of the theory of Statistics' by Mood,Graybill,Boes (third edition)on page 220 (Chapter 6 on Sampling) you can read: 'Inductive inference is well known to be a hazardous process.In fact,it is a theorem of logic that in inductive inference uncertainty is present.One simply cannot make absolutely certain generalization.' What theorem of logic do they refer to? Can you give me please some reference to this fundamental result ?
This is an elementary but I think reasonable explanation of the difference between deductive and inductive reasoning. The authors make clear that most arguments can be framed either way, but require different types of support. In their example, one can argue that a kicked ball will fall to the ground by appeal to Newton's law (deductive) or by reference to previous instances in which balls have fallen (inductive). This highlights the risk of induction. Absent a general rule, inference (induction) can lead one in the wrong direction. We would not want to say that a fair roulette wheel will land on black simply because it has done so several times in a row. Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/102245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Precision and performance of Euclidean distance The usual formula for euclidean distance that everybody uses is $$d(x,y):=\sqrt{\sum (x_i - y_i)^2}$$ Now as far as I know, the sum-of-squares usually come with some problems wrt. numerical precision. There is an obviously equivalent formula: $$d(x,y):= c \sqrt{\sum \left(\frac{x_i - y_i}{c}\right)^2}$$ Where it seems to be a common practice to choose $c = \max_i |x_i - y_i|$. For 2d, this simplifies to a formula of the form: $d(x,y):= c \sqrt{1 + \left(\frac{b}{c}\right)^2}$ Some questions here: * *How big is the gain in precision of doing this, in particular for high dimensionalities? *How much does it increase computational costs? *Is this choice of $c$ optimal? *To compute $c$, this needs two passes over the data. However, it should be possible in a single pass, by starting with $c_0=1$, and then adjusting it when necessary for optimal precision. E.g. let $c_0=1$, $c_i=\max_{j\leq i} |x_j-y_j|$. Then $$S_i:=\sum_{j\leq i} \left(\frac{x_j - y_j}{c_i}\right)^2 = \sum_{j\leq i-1} \left(\frac{x_j - y_j}{c_{i-1}}\right)^2 \cdot \frac{c_{i-1}^2}{c_i^2}+\left(\frac{x_i - y_i}{c_i}\right)^2 = S_{i-1} \cdot \left(\frac{c_{i-1}}{c_i}\right)^2+\left(\frac{x_i - y_i}{c_i}\right)^2$$ This should allow single-pass computation of this formula, right? Any comments in particular on the computational cost and precision benefits of computing Euclidean distance this way? Why is everybody using the naive way, is the gain in precision too small for low dimensionality and the associated computational cost too high? P.S. At least to my understanding, the usual formula should be precise up to the value range of sqrt(Double.MAX_VALUE) to sqrt(Double.MIN_NORMAL), which covers around e+-154, at most divided by the dimensionality - so even for 1000 dimensions, that should be fine for most uses of a distance function ...
I'm no numerics expert, but the only advantage you can possibly get from the rescaling is to avoid arithmetic overflow/underflow if the input values are extreme. If the straightforward formula can be evaluated without overflow or underflow, it is no less precise than your embellished one. In particular, if (a) you're doing the arithmetic in double precision, (b) you know that the true result cannot be larger than googol, and (c) you're willing to have the result be too small when the true result is less than a googolth, then bothering with rescaling will bring you no benefits. General-purpose libraries typically cannot afford the last two assumptions, so they do need to rescale. On the other hand, one situation where these assumptions are guaranteed to be true is if the input coordinates are given as single precision floats (which can represent neither googol nor googolth). If you do rescale, be sure to pick a $c$ that is a power of 2; then the scaling operations can be done with no rounding error (and possibly faster, if implemented with, say, ldexp() in C).
{ "language": "en", "url": "https://math.stackexchange.com/questions/102298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
How can one prove that the cube root of 9 is irrational? Of course, if you plug the cube root of 9 into a calculator, you get an endless stream of digits. However, how does one prove this on paper?
Suppose that $9^{1/3}=m/n$ with $m$, $n$ integers with ${\rm GCD}(m,n)=1$. This may be assumed because if $d$ is an integer divisor of both $m$ and $n$, then $m/n=(m/d)/(n/d)$. Then $$ 9n^3=m^3 $$ so that $3$ divides $m^3$, hence $m$ since $3$ is prime. Thus $3^3=27$ divides both sides of the equality so that $3$ divides $n^3$, hence $n$ for the same reason as above. This contradicts the assumption that $m$ and $n$ are coprime. This argument generalizes immediately to showing that the $n$-th root of an integer which is not an $n$-th power of an integer is not rational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/102348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Find $DF$ in a triangle $DEF$ Consider we have a triangle $ABC$ where there are three points $D$, $E$ & $F$ such as point $D$ lies on the segment $AE$, point $E$ lies on $BF$, point $F$ lies on $CD$. We also know that center of a circle over ABC is also a center of a circle inside $DEF$. $DFE$ angle is $90^\circ$, $DE/EF = 5/3$, radius of circle around $ABC$ is $14$ and $S$ (area of $ABC$), K (area of DEF), $S/K=9.8$. I need to find $DF$. Help me please, I'd be very grateful if you could do it as fast as you can. Sorry for inconvenience.
I refer to the diagram of Victor Liu's answer. This is an analytical verification that $DF=8$ (which means that $u=2$) but omits some details. Using the equation of the circle centered at $(0,0)$ with radius $14$, the equation of $AE$ (tangent to the small circle at the point $(x,y)=(-8/5,6/5)$) $$ y=\frac{4}{3}\left( x+\frac{8}{5}\right) +\frac{6}{5} $$ and the equations of $BF$ ($y=-2)$ and $CD$ ($x=2$), we get the coordinates of the vertices of triangle $ABC$: $$ A\left( -\frac{8}{5}+\frac{24}{5}\sqrt{3},\frac{32}{5}\sqrt{3}+\frac{6}{5} \right) ,\qquad B(-8\sqrt{3},-2),\qquad C(2,-8\sqrt{3}). $$ The coordinates of the vertices of the right triangle's $DEF$ are $$ D(2,8),\qquad E(-4,-2),\qquad F(2,-2). $$ The lengths of the sides of $ABC$ computed by the distance formula are $$ a =BC=14\sqrt{2}, \qquad b =AC=\frac{42}{5}\sqrt{10}, \qquad c =AB=\frac{56}{5} \sqrt{5}. $$ The semi-perimeter $p$ of $ABC$ is thus $$ p=\frac{a+b+c}{2}=7\sqrt{2}+\frac{21}{5}\sqrt{10}+\frac{28}{5}\sqrt{5}. $$ By Heron's formula the area of $ABC$ is $$ S=S_{ABC}=\sqrt{p(p-a)(p-b)(p-c)}. $$ Since $$ \begin{eqnarray*} &&p(p-a)(p-b)(p-c) \\ &=&\left( 7\sqrt{2}+\frac{21}{5}\sqrt{10}+\frac{28}{5}\sqrt{5}\right) \left( -7\sqrt{2}+\frac{21}{5}\sqrt{10}+\frac{28}{5}\sqrt{5}\right) \\ &&\times \left( 7\sqrt{2}-\frac{21}{5}\sqrt{10}+\frac{28}{5}\sqrt{5}\right) \left( 7\sqrt{2}+\frac{21}{5}\sqrt{10}-\frac{28}{5}\sqrt{5}\right) \\ &=&\frac{1382976}{25}, \end{eqnarray*} $$ we get $$ S=\sqrt{\frac{1382976}{25}}=\frac{1176}{5}. $$ The area of $DEF$ is $$K=S_{DEF}=\frac{EF\times DF}{2}=\frac{6\times 8}{2}=24$$ and the ratio $$\frac{S}{K}=\frac{1176/5}{24}=\frac{49}{5}=9.8,$$ as given. Added: The scale is uniform throughout the following diagram drawn with the calculated equations
{ "language": "en", "url": "https://math.stackexchange.com/questions/102406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Simple expressions for $\sum_{k=0}^n\cos(k\theta)$ and $\sum_{k=1}^n\sin(k\theta)$? Possible Duplicate: How can we sum up $\sin$ and $\cos$ series when the angles are in A.P? I'm curious if there is a simple expression for $$ 1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta $$ and $$ \sin\theta+\sin 2\theta+\cdots+\sin n\theta. $$ Using Euler's formula, I write $z=e^{i\theta}$, hence $z^k=e^{ik\theta}=\cos(k\theta)+i\sin(k\theta)$. So it should be that $$ \begin{align*} 1+\cos\theta+\cos 2\theta+\cdots+\cos n\theta &= \Re(1+z+\cdots+z^n)\\ &= \Re\left(\frac{1-z^{n+1}}{1-z}\right). \end{align*} $$ Similarly, $$ \begin{align*} \sin\theta+\sin 2\theta+\cdots+\sin n\theta &= \Im(z+\cdots+z^n)\\ &= \Im\left(\frac{z-z^{n+1}}{1-z}\right). \end{align*} $$ Can you pull out a simple expression from these, and if not, is there a better approach? Thanks!
The answer is "yes", but here are a few more details (absolutely not original with me): Substitute $z = \exp(i \theta)$ and $z^{n+1} = \exp(i (n+1) \theta)$, use Euler's (not his, but what the heck) formula to get quotients involving $\sin(\theta)$, $\cos(\theta)$, $\sin((n+1))\theta)$, and $\cos((n+1) \theta)$, and then separate the real and imaginary parts.
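Carrying out the real and imaginary parts and simplifying with half-angle identities yields the standard closed forms (valid when $\sin(\theta/2)\neq 0$; the derivation follows exactly the substitution outlined above): $$ 1+\cos\theta+\cdots+\cos n\theta = \frac12 + \frac{\sin\bigl(\bigl(n+\frac12\bigr)\theta\bigr)}{2\sin(\theta/2)}, \qquad \sin\theta+\cdots+\sin n\theta = \frac{\sin\frac{n\theta}{2}\,\sin\frac{(n+1)\theta}{2}}{\sin(\theta/2)}. $$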
{ "language": "en", "url": "https://math.stackexchange.com/questions/102477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A Lebesgue measure question involving a dense subset of R, translates of a measurable set, etc. Let $\{b_n\}_{n=1}^\infty$ be a dense subset of $\mathbb{R}$ and let $D \subseteq \mathbb{R}$ be a measurable set such that $m(D \triangle (D + b_n))=0$ for all $n \in \mathbb{N}$ (here, the $\triangle$ denotes the symmetric difference of the two sets, $D+ b_n = \{d + b_n : d \in D\}$, and $m$ stands for the Lebesgue measure). Prove that $m(D)=0$ or $m(D^c)=0$ (here $D^c$ is the complement of $D$ in $\mathbb{R}$). I am having trouble getting a proof off the ground! In particular, it is not clear to me how and where the dense hypothesis would come in. Any help would be greatly appreciated.
Hints: 1) $m(D \Delta (D + x))$ is a continuous function of $x$. 2) If $x$ and $y$ are Lebesgue points of $D$ and $D^c$ respectively, what can you say about $m(D \Delta (D + x - y))$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/102542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
100 Soldiers riddle One of my friends found this riddle. There are 100 soldiers. 85 lose a left leg, 80 lose a right leg, 75 lose a left arm, 70 lose a right arm. What is the minimum number of soldiers losing all 4 limbs? We can't seem to agree on a way to approach this. Right off the bat I said that: 85 lost a left leg, 80 lost a right leg, 75 lost a left arm, 70 lost a right arm. 100 - 85 = 15 100 - 80 = 20 100 - 75 = 25 100 - 70 = 30 15 + 20 + 25 + 30 = 90 100 - 90 = 10 My friend doesn't agree with my answer as he says not all subsets were taken into consideration. I am unable to defend my answer as this was just the first, and most logical, answer that sprang to mind.
Here is a way of rewriting your original argument that should convince your friend: Let $A,B,C,D\subset\{1,2,\dots,100\}$ be the four sets, with $|A|=85$, $|B|=80$, $|C|=75$, $|D|=70$. Then we want the minimum size of $A\cap B\cap C\cap D$. Combining the fact that $$|A\cap B\cap C\cap D|=100-|A^c\cup B^c\cup C^c\cup D^c|$$ where $A^c$ denotes the complement of $A$, with the fact that for any sets $|X\cup Y|\leq |X|+|Y|$, we see that $$|A\cap B\cap C\cap D|\geq 100-|A^c|-|B^c|-|C^c|-|D^c|=10.$$ You can then show this is optimal by taking any choice of $A^c$, $B^c$, $C^c$ and $D^c$ that are pairwise disjoint. (This is possible since the sum of their sizes is $90$, which is strictly less than $100$.)
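Not part of the original answer, but both halves of the argument are easy to check by brute force (a Python sketch; the pairwise-disjoint complements below are one arbitrary choice):

```python
soldiers = set(range(1, 101))

# One explicit worst case: make the four complements pairwise disjoint.
A = soldiers - set(range(1, 16))    # 85 lost a left leg
B = soldiers - set(range(16, 36))   # 80 lost a right leg
C = soldiers - set(range(36, 61))   # 75 lost a left arm
D = soldiers - set(range(61, 91))   # 70 lost a right arm

assert (len(A), len(B), len(C), len(D)) == (85, 80, 75, 70)
lost_all_four = A & B & C & D
```

Exactly soldiers $91$ through $100$ lose all four limbs here, matching the bound of $10$.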
{ "language": "en", "url": "https://math.stackexchange.com/questions/102598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84", "answer_count": 9, "answer_id": 6 }
Help to find the domain of this function $$ \sqrt{\log_\frac{1}{2}\left(\arctan\left(\frac{x-\pi}{x-4}\right)\right)} $$ Please, could someone show me the steps to find the domain of this function? It's the sixth time I've tried to solve it, and I'm going to burn everything...
Here is a detailed outline. There is an obvious problem at $x=4$. But things also go bad if $\frac{x-\pi}{x-4}\le 0$, for then the $\arctan$ is $\le 0$, so the log does not exist. As a first step to finding the bad places, solve the inequality $$\frac{x-\pi}{x-4} \le 0.$$ The only places where this expression can change sign are $x=\pi$ and $x=4$. Evaluate $\frac{x-\pi}{x-4}$ at three points: one less than $\pi$, one between $\pi$ and $4$, and one bigger than $4$. Now we need to deal with the square root part. What is inside the square root must be $\ge 0$. The $1/2$ as a base for the logarithm is a nuisance. It may make things easier to note that $\log_{1/2} u =-\log_2 u$. So we want the log to the base $2$ to be $\le 0$. That means that we want the $\arctan$ to be positive but $\le 1$. So we want $0<\frac{x-\pi}{x-4}\le \tan(1)$.
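Carrying the outline out in code (not part of the original answer; a Python sketch where evaluation fails exactly at the excluded points, using $\log_{1/2}u=-\log_2 u$):

```python
import math

def f(x):
    t = (x - math.pi) / (x - 4)      # undefined at x = 4
    u = math.atan(t)                 # log needs u > 0
    return math.sqrt(-math.log2(u))  # sqrt needs -log2(u) >= 0, i.e. u <= 1

def in_domain(x):
    try:
        f(x)
        return True
    except (ValueError, ZeroDivisionError):
        return False
```

Solving $0<\frac{x-\pi}{x-4}\le\tan(1)$ gives the domain $(-\infty,\pi)\cup\left[4+\frac{4-\pi}{\tan(1)-1},\infty\right)$, with the right-hand endpoint at roughly $5.54$; so for instance $x=0$ and $x=10$ are in the domain while $x=3.5$, $x=4$, and $x=5$ are not.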
{ "language": "en", "url": "https://math.stackexchange.com/questions/102654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Two converging sequences result in a third Can anyone help me with this? I got stuck on it. Let $a_n$ and $b_n$ (for $n$ a non-negative integer) be two sequences of real numbers such that $\lim_{n\to \infty}a_n=a$ and $\lim_{n\to \infty}b_n=b$. Prove that $\lim_{n\to \infty} \frac{a_0b_n+a_1b_{n-1}+\cdots+a_nb_0}{n}=ab$.
Write $\Delta_n=a_n-a$ and $\delta_n=b_n-b$; clearly these each converge to $0$. Then $$\frac{1}{n}\left(\sum_{k=0}^na_kb_{n-k}\right)-\frac{n+1}{n}\,ab=\frac{1}{n}\left(\sum_{k=0}^n \left((\Delta_k+a)b_{n-k}-ab\right)\right)=\frac{1}{n}\left(\sum_{k=0}^n\left(\Delta_k b_{n-k}+a\delta_{n-k}\right)\right).$$ Since $\frac{n+1}{n}\,ab\to ab$, it suffices to show that the right-hand side tends to $0$. Since $b_i$ converges we can say it is bounded in magnitude by $B$. All you have to prove now is that $$\lim_{m\to\infty}c_m=0\implies\frac{c_0+c_1+\cdots+c_m}{m}\to0.$$
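A numerical illustration of the statement being proved (not from the original; the particular sequences are an arbitrary choice of mine):

```python
def conv_avg(n, a_seq, b_seq):
    """(a_0*b_n + a_1*b_(n-1) + ... + a_n*b_0) / n."""
    return sum(a_seq(k) * b_seq(n - k) for k in range(n + 1)) / n

a = lambda n: 2 + 1 / (n + 1)   # a_n -> a = 2
b = lambda n: 3 - 1 / (n + 1)   # b_n -> b = 3
```

For $n=2000$ the average is already within about $0.01$ of $ab=6$, and the error shrinks as $n$ grows.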
{ "language": "en", "url": "https://math.stackexchange.com/questions/102727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Why do all circles passing through $a$ and $1/\bar{a}$ meet $|z|=1$ at right angles? In the complex plane, I write the equation for a circle centered at $x$ as $|z-x|=r$, so $(z-x)(\bar{z}-\bar{x})=r^2$. I suppose that both $a$ and $1/\bar{a}$ lie on this circle, so I get the equation $$ (z-a)(\bar{z}-\bar{a})=(z-1/\bar{a})(\bar{z}-1/a). $$ My idea to show that the circles intersect at right angles is to show that the radii at the point of intersection are at right angles, which is the case when the sum of the squares of the lengths of the radii of the circles is the square of the distance to the center of the circle passing through $a$ and $1/\bar{a}$. However, I'm having trouble finding a workable setup, since there is not a unique circle passing through $a$ and $1/\bar{a}$ to give a center to work with. What's the right way to do this?
You are correct that there isn't a unique circle passing through $a$ and $\frac{1}{\bar{a}}$, but the center of such a circle has to be equidistant from those two points, so on the perpendicular bisector of the segment between them. Such a point can be described by $$c=\frac{1}{2}\left(a+\frac{1}{\bar{a}}\right)+ki\left(a-\frac{1}{\bar{a}}\right)$$ for some $k\in\mathbb{R}$ (that's the midpoint of the segment plus a scalar multiple of a $\frac{\pi}{2}$-rotation of the vector along that segment). This can be simplified to $$c=\frac{1}{2}\left(a(1+2ik)+\frac{1}{\bar{a}}(1-2ik)\right).$$ The radius of the circle is $$\begin{align} r=|c-a|&=\left|\frac{1}{2}\left(a(1+2ik)+\frac{1}{\bar{a}}(1-2ik)\right)-a\right| \\ &=\frac{1}{2}\left|a(-1+2ik)+\frac{1}{\bar{a}}(1-2ik)\right| \\ &=\frac{1}{2}\left|\frac{1}{\bar{a}}\left(a\bar{a}(-1+2ik)+(1-2ik)\right)\right| \\ &=\frac{1}{2}\left|\frac{1}{\bar{a}}(1-a\bar{a})(1-2ik)\right| \\ &=\frac{1}{2}\left|\frac{1}{\bar{a}}\right|\cdot|1-a\bar{a}|\cdot|1-2ik| \\ &=\frac{|1-a\bar{a}|}{2|a|}\sqrt{4k^2+1} .\end{align}$$ So now, as you suggested, we can use the converse of the Pythagorean Theorem: if the sum of the squares of the radii of the unit circle and our new circle is equal to the square of the distance between their centers, then they meet at right angles. Mathematica tells me that the algebra works out and it's true, but I can't quite get there by hand. Below is what I've got so far. $$\begin{align} \text{sum of squares of radii}&=1^2+\left(\frac{|1-a\bar{a}|}{2|a|}\sqrt{4k^2+1}\right)^2 \\ &=1+\frac{(1-a\bar{a})^2}{4a\bar{a}}(4k^2+1) \end{align}$$ $$\begin{align} (\text{distance }&\text{between centers})^2=|c-0|^2 \\ &=\left|\frac{1}{2}\left(a(1+2ik)+\frac{1}{\bar{a}}(1-2ik)\right)\right|^2 \\ &=\frac{1}{4}\left|a(1+2ik)+\frac{1}{\bar{a}}(1-2ik)\right|^2 \\ &=\frac{1}{4|\bar{a}|^2}|a\bar{a}(1+2ik)+(1-2ik)|^2 \\ &=\frac{1}{4a\bar{a}}|a\bar{a}(1+2ik)+(1-2ik)|^2 \end{align}$$
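For what it's worth, the remaining algebra can be finished by hand (a sketch, not from the original answer; write $t=a\bar a=|a|^2$, which is a positive real number, with $t\ne 1$ so that $a$ and $1/\bar a$ are distinct):

```latex
\begin{align*}
|a\bar{a}(1+2ik)+(1-2ik)|^2 &= |(t+1)+2ik(t-1)|^2 \\
                            &= (t+1)^2+4k^2(t-1)^2,
\end{align*}
so the square of the distance between the centers is
\[
\frac{(t+1)^2+4k^2(t-1)^2}{4t},
\]
while the sum of the squares of the radii is
\[
1+\frac{(1-t)^2(4k^2+1)}{4t}
=\frac{4t+(1-t)^2+4k^2(1-t)^2}{4t}
=\frac{(1+t)^2+4k^2(1-t)^2}{4t},
\]
using $4t+(1-t)^2=(1+t)^2$.
```

The two expressions agree, so by the converse of the Pythagorean Theorem the circles meet at right angles.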
{ "language": "en", "url": "https://math.stackexchange.com/questions/102781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 4 }
How to find the basis for a vector space? I've been given the following as a homework problem: Find a basis for the following subspace of $F^5$: $$W = \{(a, b, c, d, e) \in F^5 \mid a - c - d = 0\}$$ At the moment, I've been just guessing at potential solutions. There must be a better method than guess and check. How do I solve this and similar problems?
Let's look at the following example: $$W = \{ (a,b,c,d)\in\mathbb{R}^4 \mid a+3b-2c = 0\}.$$ The vector space $W$ consists of all solutions $(x,y,z,w)$ to the equation $$x + 3y - 2z = 0.$$ How do we write all solutions? Well, first of all, $w$ can be anything and it doesn't affect any other variable. Then, if we let $y$ and $z$ be anything we want, that will force $x$ and give a solution. So we have three degrees of freedom: a free choice of $w$, a free choice of $z$, and a free choice of $y$. Then $x$ will be forced. This suggests dimension $3$. How does the choice of $w$ affect $x$, $y$, and $z$? In absolutely no way. Since choosing $w$ does not affect $x$, $y$, or $z$, this gives the vector $(0,0,0,1)$: the choice of $w$ (the $1$) does not affect the others. How does the choice of $z$ affect $x$, $y$, and $w$? It doesn't affect $y$ and $w$. But if $z=1$, then $x$ needs to be $2$: that is, we need to get two $x$s for every $z$. This gives the vector $(2,0,1,0)$. Finally, how does the choice of $y$ affect $x$, $z$, and $w$? It doesn't affect $z$ and $w$ (they are free), but for every $y$, we need to have $-3$ $x$s. That gives the vector $(-3,1,0,0)$. So a basis for my $W$ consists of $(-3,1,0,0)$, $(2,0,1,0)$, and $(0,0,0,1)$. You can verify that all of them lie in $W$, and that every vector in $W$ can be written as a linear combination of these three in a unique way.
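A quick check of that basis in code (not part of the original answer; pure Python, with the free choices $y=5$, $z=-2$, $w=7$ picked arbitrarily):

```python
basis = [(-3, 1, 0, 0), (2, 0, 1, 0), (0, 0, 0, 1)]

def in_W(v):
    """Membership test for W = solutions of x + 3y - 2z = 0."""
    x, y, z, w = v
    return x + 3 * y - 2 * z == 0

def combo(coeffs, vectors):
    """Componentwise linear combination sum(c_i * v_i)."""
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors)) for j in range(4))

# every basis vector lies in W
all_in = all(in_W(v) for v in basis)

# free choices y, z, w force x = -3*y + 2*z
y, z, w = 5, -2, 7
v = (-3 * y + 2 * z, y, z, w)
```

Every basis vector satisfies the equation, and the generic solution is recovered as $y\cdot(-3,1,0,0)+z\cdot(2,0,1,0)+w\cdot(0,0,0,1)$.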
{ "language": "en", "url": "https://math.stackexchange.com/questions/102834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 1, "answer_id": 0 }
Calculate area of a figure based on vertices Possible Duplicate: How quickly we forget - basic trig. Calculate the area of a polygon How to calculate the area of a polygon? If I know all the vertices of a particular polygon/figure, is there a generalized method/formula to calculate the area?
If you know the two-dimensional cartesian coordinates of the vertices of a (non-self-intersecting, but can be non-convex) polygon, the Shoelace method will find its area. (Let me know if that article is not sufficiently clear and I'll try to explain it in more detail).
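A minimal implementation of the shoelace formula (a sketch, names mine; the vertices must be listed in order around the polygon, in either orientation, and the polygon must not self-intersect):

```python
def shoelace_area(vertices):
    """Area of a simple (non-self-intersecting) polygon from ordered vertices."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

For example, the unit square gives $1$, and the non-convex L-shape with vertices $(0,0),(2,0),(2,1),(1,1),(1,2),(0,2)$ gives $3$.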
{ "language": "en", "url": "https://math.stackexchange.com/questions/102891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there a first-order-logic for calculus? I just finished a course in mathematical logic where the main theme was first-order-logic and little bit of second-order-logic. Now my question is, if we define calculus as the theory of the field of the real numbers (is it?) is there a (second- or) first-order-logic for calculus? In essence I ask if there is a countable model of calculus. I hope my question is clear, english is my third language.
The first order theory of the algebraic and order properties of the real numbers is the theory of real closed fields, and you will find various axiomatizations when you follow the link. A structure with the first order properties of the real numbers may not satisfy the completeness axiom, which is not first order. For example, the field of hyperreal numbers has the same first order properties as the field of real numbers, but the set of finite numbers is nonempty and bounded above by any infinite number, yet has no supremum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/102961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 3 }
Perfect squares always one root? I had an exam today and I was thinking about this task now, after the exam of course. $f(x)=a(x-b)^2 +c$ Now, the point was to find $c$ so that the function has only one root. Easy enough, I played with the calculator and found this. But I hate explanations like that, yes. You get a few points but far from full score. But overall I should still get an A, I hope. If $c=0$ then the expression is a perfect square and it has only one root? Is that far off? $a(x-b)^2= - c$ $\frac{a(x-b)^2}{a}= - \frac{c}{a}$ $(x-b)^2= - \frac{c}{a}$ This also argues that $c$ should be $0$ for there to be only one root?
We will explicitly assume that $a\ne 0$. Then, more or less as you wrote, $a(x-b)^2+c=0$ if and only if $(x-b)^2=-\frac{c}{a}$. Thus if $-\frac{c}{a} <0$, there is no root, since the square of a real number cannot be negative. If $-\frac{c}{a}>0$, there are two distinct roots, namely $x=b\pm\sqrt{-c/a}$. And finally, if $-\frac{c}{a}=0$, or equivalently $c=0$, there is exactly one root. So there is exactly one root if and only if $c=0$. The above is undoubtedly what you had in mind. What you actually wrote on the exam paper may not have been complete. Much of the time, a bunch of equations with little explanatory text means an incomplete solution. For absolute completeness let's deal with the silly case $a=0$. In that case our equation is equivalent to $c=0$. If $c\ne 0$, this has no solution. If $c=0$, the equation has infinitely many solutions, since $x$ can take on any value.
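The case analysis above, written as a small function (not part of the original answer; names are mine, and real coefficients with $a\ne 0$ are assumed):

```python
import math

def root_count(a, b, c):
    """Number of distinct real roots of a*(x - b)**2 + c = 0, assuming a != 0."""
    t = -c / a
    if t < 0:
        return 0          # a real square cannot be negative
    if t == 0:
        return 1          # the single root x = b
    return 2              # x = b +/- sqrt(-c/a)

def roots(a, b, c):
    """The real roots themselves (deduplicated and sorted)."""
    t = -c / a
    if t < 0:
        return []
    return sorted({b - math.sqrt(t), b + math.sqrt(t)})
```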
{ "language": "en", "url": "https://math.stackexchange.com/questions/102999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Negative Binomial Distribution Why is the negative binomial distribution defined as $$P(X=x|r,p)= \binom{x-1}{r-1}p^{r}(1-p)^{x-r}$$ Basically this is the probability that $x$ Bernoulli trials are needed for $r$ successes. So we need $r-1$ successes in the first $x-1$ trials. Then success on the $r^{th}$ trial happens with probability $p$. Why can't we write it as the following: $$P(X = x|r,p) = \binom{x}{r}p^{r} (1-p)^{x-r}$$ This means that you have $r$ successes in the first $x$ trials.
You wrote that $x$ trials are needed for $r$ successes. That means in particular that $x-1$ trials were not enough. So we hit our goal of $r$ successes at the $x$-th trial. Suppose for example that we are tossing a fair coin until we get the first head. What is the probability that $2$ tosses are needed for $1$ success? Here $x=2$ and $r=1$. Two tosses are needed precisely if we get TH. This has probability $1/4$. By way of contrast, the probability of exactly one head in two tosses is $1/2$.
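The coin example in code (a sketch using the pmf from the question; the function names are mine):

```python
from math import comb

def neg_binom_pmf(x, r, p):
    """P(exactly x trials are needed for the r-th success)."""
    return comb(x - 1, r - 1) * p**r * (1 - p) ** (x - r)

def binom_pmf(k, n, p):
    """P(exactly k successes in n trials) -- a different question."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)
```

With a fair coin, needing exactly $2$ tosses for the first head (the sequence TH) has probability $1/4$, while getting exactly one head in two tosses has probability $1/2$, which is precisely the distinction made above.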
{ "language": "en", "url": "https://math.stackexchange.com/questions/103071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
A metric on $\mathbb{C^n}$ Possible Duplicate: Show that $d$ is a metric on $\mathbb{C^n}$ On $\mathbb{C^n}$, define $||z||=(\sum_{j=1}^{n}|z_{j}|^{2})^{1/2}$ and for $z,w\in\mathbb{C^n}$ define $d(z,w)=||z-w||$. Show that $d$ is a metric on $\mathbb{C^n}$. My attempt: (1) (nonnegativity) It is clear that for any $z,w\in\mathbb{C^n}$, $d(z,w)=||z-w|| = (\sum_{j=1}^{n}|z_{j}-w_{j}|^{2})^{1/2}\geq 0$ since $|z_{j}-w_{j}|^{2}\geq0$. Also, $||z-w||=0$ iff $(\sum_{j=1}^{n}|z_{j}-w_{j}|^{2})^{1/2}= 0$ iff $z=w$. (2) (symmetry) $d(z,w)=||z-w|| = (\sum_{j=1}^{n}|z_{j}-w_{j}|^{2})^{1/2}=(\sum_{j=1}^{n}|w_{j}-z_{j}|^{2})^{1/2}=d(w,z)$ by properties of modulus in $\mathbb{C^n}$. (3) (triangle inequality) $\forall w,z,v\in\mathbb{C^n}$, $$\begin{align*}d(z,w)&=||z-w||\\ &= \left(\sum_{j=1}^{n}|z_{j}-w_{j}|^{2}\right)^{1/2}\\ &=\left(\sum_{j=1}^{n}|z_{j}+v_{j}-v_{j}-w_{j}|^{2}\right)^{1/2}\\ &=\left(\sum_{j=1}^{n}|(z_{j}-v_{j})+(v_{j}-w_{j})|^{2}\right)^{1/2}\leq... \end{align*}$$ I'm not sure how to split up the sum here.
If we consider a complex number as a pair of real numbers, then your metric is exactly the Euclidean distance on $\mathbb{R}^{2n}$.
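That identification is easy to check numerically (not part of the original answer; the vectors are arbitrary):

```python
import math

def d(z, w):
    """The metric from the question, on tuples of complex numbers."""
    return math.sqrt(sum(abs(zj - wj) ** 2 for zj, wj in zip(z, w)))

def flatten(z):
    """View C^n as R^(2n) by splitting each entry into real and imaginary parts."""
    return [c for zj in z for c in (zj.real, zj.imag)]

def euclid(u, v):
    """Ordinary Euclidean distance on real tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

z = (1 + 2j, 3 - 1j)
w = (0 + 0j, 1 + 1j)
```

Here $d(z,w)=\sqrt{5+8}=\sqrt{13}$, the same as the Euclidean distance between $(1,2,3,-1)$ and $(0,0,1,1)$ in $\mathbb{R}^4$.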
{ "language": "en", "url": "https://math.stackexchange.com/questions/103116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why does $\mathbb{C}$ have transcendence degree $\mathfrak{c}$ over $\mathbb{Q}$? It's pretty well known that $\text{trdeg}(\mathbb{C}/\mathbb{Q})=\mathfrak{c}=|\mathbb{C}|$. As a subset of $\mathbb{C}$, of course the degree cannot be any greater than $\mathfrak{c}$. I'm trying to understand the justification why it cannot be any smaller. The explanation in my book says that if $\mathbb{C}$ has an at most countable (i.e. finite or countable) transcendence basis $z_1,z_2,\dots$ over $\mathbb{Q}$, then $\mathbb{C}$ is algebraic over $\mathbb{Q}(z_1,z_2,\dots)$. Since a polynomial over $\mathbb{Q}$ can be identified as a finite sequence of rationals, it follows that $|\mathbb{C}|=|\mathbb{Q}|$, a contradiction. I don't see why the polynomial part comes in? I'm know things like a countable unions/products of countable sets is countable, but could someone please explain in more detail this part about the polynomial approach? Since $\mathbb{C}$ is algebraic over $\mathbb{Q}(z_1,z_2,\dots)$, does that just mean that any complex number can be written as a polynomial in the $z_i$ with coefficients in $\mathbb{Q}$? For example, $$ \alpha=q_1z_1^3z_4z_6^5+q_2z_{11}+q_3z^{12}_{19}+\cdots+q_nz_6z_8z^4_{51}? $$ Is the point just that the set of all such polynomials are countable? Thanks,
(Of course I assume the Axiom of Choice...) Choose a transcendence basis $X = \{x_i\}_{i \in I}$ for $\mathbb{C}$ over $\mathbb{Q}$. Then $\mathbb{C}$ is an algebraic extension of $\mathbb{Q}(X)$. Now here are two rather straightforward facts: 1: If $F$ is any infinite field and $K/F$ is an algebraic extension, then $\# K = \#F$. 2: For any infinite field $F$ and purely transcendental extension $F(X)$, we have $\# F(X) = \max (\#F, \# X)$. Putting these together we find $\mathfrak{c} = \# \mathbb{C} = \# \mathbb{Q}(X) = \max (\aleph_0, \# X)$. Since $\mathfrak{c} > \aleph_0$, we conclude $\mathfrak{c} = \# X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/103177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
How do I create a group action table with GAP? Background: Let $G$ be a group of size $k\cdot p^n$. Let $S$ be the set of all subsets of size $p^n$ of $G$. Define the map $f\colon G \times S \rightarrow S$ by $(g, s) \mapsto gs$ if $s \in S$. I would like to create the group action table of $f$ with GAP. So in the case of $S_3$ I would get a table with twenty rows ( elements of $S$ ) and six columns ( elements of $S_3$ ) containing $gs$. Question: How do I create a ( this ) group action table with GAP?
You'll find it easier to work with GAP if you switch to right actions.

```gap
G := SymmetricGroup( 3 );;
S := Combinations( AsSet( G ), 3 );;

myLeftAction := function( act, pnt ) # useless for Orbits, Stabilizer, etc.
  return AsSet( List( pnt, x -> act*x ) );
end;

myRightAction := function( pnt, act )
  return AsSet( List( pnt, x -> x*act ) );
end;;

table := List( S, s -> List( G, g -> myRightAction( s, g ) ) );;

PrintArray( table );  # If you have around 200 columns of screen
Browse( table );      # takes less screen space, but requires the Browse package

# Here is a way to display them in 72 columns using one-line notation
Display( JoinStringsWithSeparator( List( table, row ->
    JoinStringsWithSeparator( List( row, ent ->
        JoinStringsWithSeparator( List( ent, perm ->
            JoinStringsWithSeparator( ListPerm( perm, NrMovedPoints(G) ), "" )),
        "|")),
    " ")),
"\n" ));
```
{ "language": "en", "url": "https://math.stackexchange.com/questions/103220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Nomenclature of random variables $\{X=0, Y=0\}$ same as $\{X=0\}\cap \{Y=0\}$? Just a small doubt. My exercises keep oscillating their nomenclature on this small detail and I always have the other version. Let $X,Y$ be random variables. Is $\{X=0, Y=0\}$ the same as $\{X=0\}\cap \{Y=0\}$? Another example. Let $N$ be the number of users on a webpage. Two files are available for download, one with 200 kb and another with 400 kb size. $$ \begin{align} X_n(w) := w_n = \{ & 0:=\text{user downloads no file}, \\ & 1:=\text{user downloads the first file (200 kb)}, \\ & 2 :=\text{user downloads the second file (400 kb)}, \\ & 3:=\text{user downloads both files (600 kb)}\} \end{align} $$ I want to express: at least one user downloaded the 200 kb file. Here's how I expressed it: $\{X_1 + X_2 + \cdots + X_n \geq 1\}$. Would this be ok? The book expressed it as $\{X_1=1\}\cup\{X_1=3\}\cup \cdots \cup\{X_n=1\}\cup\{X_n=3\}$. Another thing to express: no user downloaded the 200 kb file. I expressed it as $|\{X_k=1, 1 \leq k \leq N\}|=0$. The book as $\{X_1 \neq 1\}\cap \cdots \cap \{X_n \neq 1\}$. Would my solution be ok? I'm always in doubt about when I'm allowed to use symbols like $+$ and the bars $|\cdot|$ (to get the number of elements). Is this generally always allowed? Many thanks in advance!
Your second example is incorrect: if no user downloaded the 200K file, but at least one user downloaded the 400K file, we will still have $X_1 + \dots + X_n \ge 1$.
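Here is that counterexample concretely (a sketch; $X_k\in\{0,1,2,3\}$ coded as in the question):

```python
def at_least_one_200k(xs):
    """At least one user downloaded the 200 kb file (alone or together with the other)."""
    return any(x in (1, 3) for x in xs)

# user 1 downloads nothing, user 2 downloads only the 400 kb file
xs = (0, 2)
```

Here $X_1+X_2\ge 1$ holds, yet nobody downloaded the 200 kb file, so $\{X_1+\cdots+X_n\ge 1\}$ contains outcomes the intended event does not.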
{ "language": "en", "url": "https://math.stackexchange.com/questions/103296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Unfoiling quadratic equation How can I convert $ax^2 + bx + c = 0$ to a FOIL-style $(x + d)(x - e) = 0$ equation? I have an equation in a computer program that I'm currently solving with the standard $\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. If the discriminant is negative I end up with an imaginary number solution, and I don't know how to deal with that. Using the FOIL method seems like a nice alternative for getting two real solutions to $x$. Update Based on the answers here, I realized that I probably made a mistake arriving at the values for $a$, $b$, and $c$. I went back and re-solved my equation, and ended up with values that are giving me real solutions. So in summary, getting imaginary results told me I had an incorrect equation. And now I know a lot more about quadratic equations. Thanks for the help everyone!
If a polynomial has a root $r$ (imaginary or real), then you can pull out a factor of $x-r$. For example, to factor $x^2 + 1$, we first find the roots. You can use the quadratic equation for this in general, but it's easy to see in this example that the roots are $i$ and $-i$. This means the polynomial $x^2 + 1$ factors as $(x - i)(x + i)$. Notice this polynomial has no real solutions. There is no amount of algebraic trickery you can do to get around that fact. Depending on your application, you will either have to accept the imaginary roots or regard the polynomial as having no (real) solutions.
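For the computer-program side of the question: one clean way to deal with a negative discriminant is to compute in complex arithmetic and then decide whether the roots are real (a sketch; Python's `cmath` is one way to do it, and the names are mine):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0; complex when the discriminant is negative."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def real_roots(a, b, c, eps=1e-12):
    """Only the real solutions (possibly none)."""
    return [r.real for r in quadratic_roots(a, b, c) if abs(r.imag) < eps]
```

For $x^2+1$ this returns the pair $\pm i$ and an empty list of real roots; for $x^2-3x+2$ it returns the real roots $1$ and $2$.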
{ "language": "en", "url": "https://math.stackexchange.com/questions/103344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Why doesn't this trigonometric integral evaluate? I'm trying to follow this Wolfram Alpha derivation (click "Show steps"...) of an integral question I posted earlier. My question is specifically, the second last line (before they use "an equivalent for restricted t values") is: $$ \left. -\frac{2}{3} \sqrt{ 1 - \cos{3\theta} }\,\, \right|_0^{2\pi} $$ But why doesn't the definite integral work out to the correct answer (listed under "Definite integral over a period:", near the bottom) of $ 4 \sqrt{2} $? When you plug in the limits, you get $ - \frac{2}{3} ( 0 - 0 ) $ which is just $0$. Edit: You have to click "Try again with more time" for this to appear, but here is the "definite integral over a period":
In the item you posted earlier, there was an absolute value. $0$ is just what you would get if that were neglected. So I suspect it was neglected.
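A numerical sanity check (not part of the original answer; I am reconstructing the integrand as $\left|\frac{\sin 3\theta}{\sqrt{1-\cos 3\theta}}\right|$, i.e. the derivative of $-\frac{2}{3}\sqrt{1-\cos 3\theta}$ with the absolute value restored, so this is an assumption about the linked question):

```python
import math

def integrand(t):
    # |sin(3t)| / sqrt(1 - cos(3t)) simplifies to sqrt(2)*|cos(3t/2)|,
    # so it stays bounded even at the apparent 0/0 points
    return abs(math.sin(3 * t)) / math.sqrt(1 - math.cos(3 * t))

def midpoint_integral(f, a, b, n=20000):
    """Midpoint rule; the sample points avoid the removable singularities."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

value = midpoint_integral(integrand, 0, 2 * math.pi)
```

The result is $4\sqrt2\approx 5.657$, while evaluating $-\frac{2}{3}\sqrt{1-\cos 3\theta}$ at the endpoints $0$ and $2\pi$ gives $0-0=0$: exactly the discrepancy caused by dropping the absolute value.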
{ "language": "en", "url": "https://math.stackexchange.com/questions/103389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }