Euler's formula for connected planar graphs Euler's formula for connected planar graphs (i.e. a single connected component) states that $v-e+f=2$. State the generalization of Euler's formula for planar graphs with $k$ connected components (where $k\geq1$). The correct answer is $v-e+f=1+k$, but I'm not understanding the reasoning behind it. Anyone care to share some insight?
Given $k$ components, you can connect the graph by adding $k-1$ edges (each new edge joins two components, so it creates no new face); say $e'=e+(k-1)$. Then $v-e'+f=2$ just says $v-e-(k-1)+f=2$, or $v-e+f=1+k$. Note that this agrees with the regular connected version, where $k$ just equals 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/12250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Suggest a tricky method for this problem Find the value of: $$ 1+ \biggl(\frac{1}{10}\biggr)^2 + \frac{1 \cdot 3}{1 \cdot 2} \biggl(\frac{1}{10}\biggr)^4 + \frac{1\cdot 3 \cdot 5}{1 \cdot 2 \cdot 3} \biggl(\frac{1}{10}\biggr)^6 + \cdots $$
Hint: What is the generalized binomial expansion of $\left( 1-2 \times \left(\frac{1}{10}\right)^2 \right) ^{-\frac{1}{2}}$?
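To check the hint numerically: the $k$-th term of the series is $\frac{1\cdot3\cdots(2k-1)}{k!}\left(\frac{1}{10}\right)^{2k}$, so consecutive terms are related by the factor $\frac{2k-1}{k}\cdot\frac{1}{100}$. A quick sketch (Python here purely for verification, not part of the original hint):

```python
import math

# accumulate the series term by term via the ratio of consecutive terms
total, term = 1.0, 1.0
for k in range(1, 30):
    term *= (2 * k - 1) / k * 0.01   # term_k / term_{k-1} = (2k-1)/k * (1/10)^2
    total += term

# the hint's closed form: (1 - 2*(1/10)^2)^(-1/2) = (0.98)^(-1/2)
closed_form = (1 - 2 * 0.1 ** 2) ** -0.5
```

The two values agree to machine precision, and the sum simplifies to $\frac{1}{\sqrt{0.98}} = \frac{5\sqrt{2}}{7}$.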
{ "language": "en", "url": "https://math.stackexchange.com/questions/12302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Embedding torus in Euclidean space For $n > 2$, is it possible to embed $\underbrace{S^1 \times \cdots \times S^1}_{n\text{ times}}$ into $\mathbb R^{n+1}$?
Let $e_0, \ldots, e_n$ be the standard basis of $\mathbb {R}^{n+1}$. Take $\epsilon$ small. Consider a vector $v_1$ of length 1 in the span of $e_0, e_1$, then a vector $v_2$ of length $\epsilon$ in the span of $v_1, e_2$, and in general a vector $v_i$ of length $\epsilon^{i-1}$ in the span of $v_{i-1}, e_i$. Now consider the vector $w=v_1+\cdots+v_n$. For small $\epsilon$, the set of all such $w$'s is a torus embedded in $\mathbb {R}^{n+1}$ (any $\epsilon < 1$ will do, actually).
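A quick way to sanity-check the construction is to compute $w$ numerically. The helper below (`torus_point` is a hypothetical name, not from the answer) builds $v_1, \ldots, v_n$ exactly as described, each $v_i$ of length $\epsilon^{i-1}$ in the span of the unit vector $\hat v_{i-1}$ and $e_i$; for $n=2$ this recovers the familiar torus of revolution in $\mathbb R^3$.

```python
import math

def torus_point(thetas, eps=0.3):
    """w = v_1 + ... + v_n for angles thetas = (t_1, ..., t_n).

    v_1 = (cos t_1, sin t_1, 0, ..., 0) in span(e_0, e_1);
    v_i has length eps**(i-1) and lies in span(v_{i-1}, e_i).
    """
    n = len(thetas)
    vhat = [0.0] * (n + 1)
    vhat[0], vhat[1] = math.cos(thetas[0]), math.sin(thetas[0])
    w = vhat[:]                      # v_1 itself has length 1
    length = 1.0
    for i in range(1, n):
        length *= eps                # |v_{i+1}| = eps ** i
        vi = [length * math.cos(thetas[i]) * c for c in vhat]
        vi[i + 1] += length * math.sin(thetas[i])
        w = [a + b for a, b in zip(w, vi)]
        vhat = [c / length for c in vi]
    return w

# n = 2: points satisfy (sqrt(x^2 + y^2) - 1)^2 + z^2 = eps^2, a torus of revolution
p = torus_point([0.7, 1.3])
```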
{ "language": "en", "url": "https://math.stackexchange.com/questions/12332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Particle moving at constant speed with Poisson setbacks Consider a particle starting at the origin and moving along the positive real line at a constant speed of 1. Suppose there is a counter which clicks at random time intervals following the exponential distribution with parameter $\lambda$, and whenever the counter clicks, the position $x > 0$ of the particle at that time instantaneously changes to $x/2$. We wish to calculate the expected average speed of the particle. I don't really have any idea of how to go about solving this. Here are a couple of related problems which seem even more difficult to me:

* Modify the puzzle so that when the counter clicks, the particle moves from $x$ to a point chosen uniformly at random from $[0,x]$.
* The particle starts moving as above, but whenever the counter clicks, its speed increases by 1 (the initial speed was 1). What is the expected time when the particle hits the position 1? What is the expected speed when the particle hits the position 1?

This is not a homework problem. Any solutions, hints, thoughts will be appreciated. Thanks,
Sketch of solution: Let $X_t$ denote the location of the particle at time $t$, and set $f(t) = {\rm E}(X_t)$. Consider the time $t + \Delta t$, $\Delta t \approx 0+$. With probability of about $1 - \lambda \Delta t$ the particle continues to position $X_t + \Delta t$, whereas with probability of about $\lambda \Delta t$ it moves to position of about $X_t/2$. This leads straightforwardly to an elementary differential equation in terms of $f(t)$, from which you obtain $f(t)$. The expected average speed is then $f(t)/t$.
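Carrying the sketch out: the two contributions give $f(t+\Delta t) \approx (1-\lambda\Delta t)\,(f(t)+\Delta t) + \lambda\Delta t \cdot f(t)/2$, i.e. $f'(t) = 1 - \frac{\lambda}{2}f(t)$, whose solution with $f(0)=0$ is $f(t) = \frac{2}{\lambda}\left(1 - e^{-\lambda t/2}\right)$. A Monte Carlo check (my own sketch; the parameter choices are arbitrary):

```python
import math
import random

def run_particle(lam, T, rng):
    """One path: move at speed 1, halve the position at Poisson(lam) click times."""
    x, t = 0.0, 0.0
    while True:
        gap = rng.expovariate(lam)      # waiting time to the next click
        if t + gap >= T:
            return x + (T - t)          # no more clicks before time T
        x = (x + gap) / 2.0             # travel for `gap`, then get halved
        t += gap

rng = random.Random(0)
lam, T, runs = 1.0, 4.0, 20000
mean_x = sum(run_particle(lam, T, rng) for _ in range(runs)) / runs
theory = (2.0 / lam) * (1.0 - math.exp(-lam * T / 2.0))   # f(T) from the ODE
```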
{ "language": "en", "url": "https://math.stackexchange.com/questions/12410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Typical applications of Fubini's theorem and Radon-Nikodym Can someone please share references (websites or books) where I can find problems related to Fubini's theorem and applications of the Radon-Nikodym theorem? I have googled, but don't find many problems. What are the "typical" problems (if there are any) related to these topics? [Yes, an exam is coming soon, so I don't know what to expect and don't have access to midterms from previous years]. Thank you
Radon-Nikodym is used to prove the existence of the conditional expectation in probability theory. Fubini's theorem is, among other things, a very useful device to compute integrals over product spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/12451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
For a Planar Graph, Find the Algorithm that Constructs A Cycle Basis, with each Edge Shared by At Most 2 Cycles In a planar graph $G$, one can easily find all the cycle basis by first finding the spanning tree ( any spanning tree would do), and then use the remaining edge to complete cycles. Given Vertex $V$, edge $E$, there are $C=E-V+1$ number of cycles, and there are $C$ number of edges that are inside the graph, but not inside the spanning tree. Now, there always exists a set of cycle basis such that each and every edge inside the $G$ is shared by at most 2 cycles. My question is, is there any algorithm that allows me to find such a set of cycle basis? The above procedure I outlined only guarantees to find a set of cycle basis, but doesn't guarantee that all the edges in the cycle basis is shared by at most two cycles. Note: Coordinates for each vertex are not known, even though we do know that the graph must be planar.
To supplement Aryabhata's comment: yes, using a planar embedding algorithm will do. One such algorithm is Boyer–Myrvold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/12488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
What is the spectral theorem for compact self-adjoint operators on a Hilbert space actually for? Please excuse the naive question. I have had two classes now in which this theorem was taught and proven, but I have only ever seen a single (indirect?) application involving the quantum harmonic oscillator. Even if this is not the strongest spectral theorem, it still seems useful enough that there should be many nice examples illustrating its utility. So... what are some of those examples? (I couldn't readily find any nice examples looking through a few functional analysis textbooks, either. Maybe I have the wrong books.)
The only way I know to prove "discreteness" of some piece of a spectrum is to find one or more compact operators on it, suitably separating points. That is, somehow the only tractable operators are those closely related to compact ones. Even to discuss the spectral theory of self-adjoint differential operators $T$, the happiest cases are where $T$ has compact resolvent $(T-\lambda)^{-1}$. In particular instances, the Schwartz kernel theorem depends on the compactness of the inclusions of Sobolev spaces into each other (Rellich's lemma). In automorphic forms: to prove the discreteness of spaces of cuspforms, one shows that the natural integral operators (after Selberg, Gelfand, Langlands et alia) restricted to the space of $L^2$ cuspforms are compact. One of Selberg's arguments, Bernstein's sketch, Colin de Verdiere's proof, and (apparently) the proof in Moeglin-Waldspurger's book (credited to Jacquet, credited to Colin de Verdiere!?) of meromorphic continuation of Eisenstein series of various sorts depends ultimately on proving compactness of an operator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/12547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43", "answer_count": 3, "answer_id": 0 }
How can I obtain from a differential equation a stochastic version? Suppose $\frac{dx}{dt}=ax+b$ and then assume that $a=c+g$ where $g$ is a Wiener process.
I think that 'Stochastic Differential Equations' by Bernt Oksendal is a good book about SDEs. As for your equation, let's rewrite it: $$dx = ax\,dt +b\,dt$$ and substitute $a = c + g$: $$dx = (cx+b)\,dt + xg\,dt$$ If $g$ is white noise (formally, the derivative of a Brownian motion $B_t$), then $g\,dt = dB_t$, so your SDE is $$dX_t(\omega) = (cX_t(\omega)+b)\,dt + X_t(\omega)dB_t(\omega)$$
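As a sanity check, here is a minimal Euler–Maruyama discretization of the resulting SDE (my own sketch, not from the answer; the parameter values are arbitrary). Taking expectations kills the $X\,dB$ term, so $m(t) = \mathrm{E}[X_t]$ satisfies the deterministic ODE $m' = cm + b$, which the simulated mean should track:

```python
import math
import random

def euler_maruyama(c, b, x0, T, steps, rng):
    """One sample path of dX = (cX + b) dt + X dB via the Euler-Maruyama scheme."""
    dt = T / steps
    x = x0
    for _ in range(steps):
        dB = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        x += (c * x + b) * dt + x * dB
    return x

rng = random.Random(0)
paths = [euler_maruyama(c=-1.0, b=1.0, x0=0.0, T=1.0, steps=200, rng=rng)
         for _ in range(4000)]
mean_x1 = sum(paths) / len(paths)
# with c = -1, b = 1, x0 = 0, the mean ODE gives m(1) = 1 - e^{-1}
```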
{ "language": "en", "url": "https://math.stackexchange.com/questions/12591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Partitioning an infinite set Can you partition an infinite set, into an infinite number of infinite sets?
An (imo) beautiful practical example from Elementary Number Theory. As you know, any equivalence relation partitions a set. Now, if you partition the natural numbers $\mathbb{N} = \{1, 2, 3, \ldots\}$ ($0$ of course NOT included) by the relation "has the same prime signature", you have what you are looking for. By the Fundamental Theorem of Arithmetic we can write any natural number as $$ n = \prod_{i=1}^{\omega(n)} p_i^{\alpha_i}. $$ For example: $12 = 2^2 3^1$, $30 = 2^1 3^1 5^1$, and so on. We call the ordered list of the $\alpha_i$'s the "prime signature"; $12$ belongs to the class $\{2,1\}$ and $30$ belongs to the class $\{1,1,1\}$. (Example: $1000$ has signature $\{3,3\}$.) What makes this infinite partition so 'beautiful' is that every subset of $\mathbb{N}$ in this infinite partition corresponds to a partition of a natural number.
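A short script (my own illustration, using plain trial-division factorization) that groups the first few natural numbers by prime signature:

```python
from collections import defaultdict

def prime_signature(n):
    """Sorted tuple of exponents in the prime factorization of n (n >= 2)."""
    sig, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            sig.append(e)
        d += 1
    if n > 1:
        sig.append(1)                      # leftover prime factor
    return tuple(sorted(sig, reverse=True))

classes = defaultdict(list)
for n in range(2, 50):
    classes[prime_signature(n)].append(n)
# classes[(1,)] are the primes, classes[(2,)] the prime squares, and so on;
# each class is infinite, and there are infinitely many classes
```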
{ "language": "en", "url": "https://math.stackexchange.com/questions/12629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 8, "answer_id": 7 }
Correlation between out of phase signals Say I have a numeric sequence A and a set of sequences B that vary with time. I suspect that there is a relationship between one or more of the B sequences and sequence A, that changes in Bn are largely or wholly caused by changes in sequence A. However, there is an unknown time delay between changes in A and their effect on each of the B sequences (they are each out of phase by varying amounts). I am looking for a means of finding the B most closely correlated with A regardless of the time delay. What options are available to me? ** EDIT ** The crux of the problem here is that I have millions of B sequences to test, and there are approx 2 million data points within the lag window that I would like to test over. Working out a correlation for each B for each possible lag scenario is just going to be too computationally expensive (especially as in reality there will be a more dynamic relationship than just lag between A and B, so I will be looking to test variations of relationships as well). So what I am looking for is a means of taking the lag out of the calculation.
You can find the lag more efficiently by using the Fourier transform. Cross-correlation of A and B in the time domain is equivalent to convolution of A with time-reversed B, which can be efficiently computed in the frequency domain using the convolution theorem. For real-valued signals the cross-correlation is given by $F^{-1} \left\{ F\left\{A\right\} \cdot \overline{F\left\{B\right\}} \right\}$, where $F$ represents the Fourier transform, $\cdot$ is pointwise multiplication, and the complex conjugate $\overline{F\{B\}}$ implements the time reversal of B. But, as you say, you're not really interested in the lag per se. Instead, you want to know how much of $B$ is (linearly) caused by $A$. The answer to this problem is Wiener filtering. You may also be interested in computing the transfer function and coherence in the frequency domain. A good book on such methods is Random Data: Analysis and Measurement Procedures by Bendat and Piersol, though this particular book might be overkill for your needs.
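A minimal sketch of the FFT approach (assuming NumPy is available; `estimate_delay` is a hypothetical helper name). Zero-padding to twice the length avoids circular wrap-around, and multiplying by the conjugate spectrum implements the time reversal:

```python
import numpy as np

def estimate_delay(a, b):
    """Estimate, in samples, how far b lags behind a, via FFT cross-correlation."""
    n = len(a)
    nfft = 2 * n                        # zero-pad to avoid circular wrap-around
    fa = np.fft.rfft(a, nfft)
    fb = np.fft.rfft(b, nfft)
    xcorr = np.fft.irfft(fb * np.conj(fa), nfft)
    lags = np.arange(nfft)
    lags[lags >= n] -= nfft             # indices past n represent negative lags
    return int(lags[np.argmax(xcorr)])

rng = np.random.default_rng(0)
a = rng.standard_normal(512)
b = np.concatenate([np.zeros(5), a[:-5]])   # b is a delayed by 5 samples
```

This is $O(N \log N)$ per pair instead of $O(N)$ per candidate lag, which is what makes scanning a large lag window feasible.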
{ "language": "en", "url": "https://math.stackexchange.com/questions/12674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
My Daughter's 4th grade math question got me thinking Given a number of 3in squares and 2in squares, how many of each are needed to get a total area of 35 in^2? Through quick trial and error (the method they wanted I believe) you find that you need 3 3in squares and 2 2in squares, but I got to thinking on how to solve this exactly. You have 2 unknowns and the following info: 4x + 9y = 35 x >= 0, y >= 0, x and y are both integers. It also follows then that x <= 8 and y <= 3 I'm not sure how to use the inequalities or the integer only info to form a direct 2nd equation in order to solve the system of equations. How would you do this without trial and error?
A quick way to see the answer is to convert both sides of the equation mod 4. So the left hand side is y (mod 4) (because 4=0, 9=1 mod 4), and the right hand side is 3 (mod 4). So y=3 mod 4. Since $y\le 3$ as you observed, the only solution (if there is any ) is $y=3$. Then you check that $4x+27=35$ and hence $x=2$.
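With $x$ the number of 2-inch squares and $y$ the number of 3-inch squares, the bounds $x \le 8$, $y \le 3$ make the search space tiny, so the modular argument can be confirmed by exhaustion (a one-liner, just for verification):

```python
# all non-negative integer solutions of 4x + 9y = 35 within the stated bounds
solutions = [(x, y) for x in range(9) for y in range(4) if 4 * x + 9 * y == 35]
```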
{ "language": "en", "url": "https://math.stackexchange.com/questions/12731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
20 years since I was in high school. How do I break this down? $-5 = 2x + 5$ I want to "re-start" my mathematical education. I am needing it more and more at work. A co-worker asked me to break that down (he couldn't remember either). Where should I start? What books? Thanks!
Hint: In general for such one variable problems, where you need to find the value of the variable from a given single equation, you collect terms in the variable on one side of the equation and terms not containing the variable on the other side. Then divide the equation by the coefficient of the variable ($x$ in this case) to solve for it.
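Applied to the equation in the question, the steps are:

$$-5 = 2x + 5 \;\Longrightarrow\; -5 - 5 = 2x \;\Longrightarrow\; -10 = 2x \;\Longrightarrow\; x = \frac{-10}{2} = -5.$$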
{ "language": "en", "url": "https://math.stackexchange.com/questions/12776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Examples of Class Group I am trying to better understand how unique factorization of algebraic integers in an algebraic number ring implies that the class number of that number ring is 1. I am asking for some examples of this to get me started.
This is actually a frequently used (e.g. in the theory of divisors) fact of commutative algebra. A noetherian domain $A$ is a UFD if and only if every prime ideal of height one (by Krull's principal ideal theorem, this is the same thing as being minimal over a nonzero element) is principal (which corresponds, with a little work, to the statement that the Weil divisor class group of a normal ring is trivial iff it is factorial). In the case of a Dedekind domain (of which a ring of integers in a number field is a paradigm example), this means that every prime ideal is principal (as $(0)$ obviously is). In a Dedekind domain, every ideal is a product of prime ideals. So if every prime ideal is principal, so is every ideal. This is the statement that the class number is one. So this answers your question by reducing it to another result. You can find the proof and discussion of this result as Theorem 18.6 in http://people.fas.harvard.edu/~amathew/CAnotes.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/12822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
The staircase paradox, or why $\pi\ne4$ What is wrong with this proof? Is $\pi=4?$
This problem illustrates the fact that two functions can be very close: $|f(x)-g(x)|<\epsilon$ for all $x\in [0,1]$, but their derivatives can still be far apart, $|f'(x)-g'(x)|>c$ for some constant $c>0$. In our case, let $x=a(t),y=b(t),0\le t\le 1$ and $x=c(t),y=d(t), 0\le t\le 1$ be the parametrizations of the two curves. By smoothing the corners, we may assume that both are smooth. $$ \|(a(t),b(t))\|\approx \|(c(t),d(t))\|$$ does not imply $$ \|(a'(t),b'(t))\|\approx \|(c'(t),d'(t))\|$$ Therefore $\int_0^1 \|(a'(t),b'(t))\| dt$ need not be close to $\int_0^1 \|(c'(t),d'(t))\| dt.$ Here $\|(x,y)\|$ denotes $\sqrt{x^2+y^2}$.
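The same phenomenon in its simplest form (a side illustration, using a diagonal rather than the circle): the $n$-step staircase from $(0,0)$ to $(1,1)$ converges uniformly to the diagonal, yet its length is $2$ for every $n$, while the diagonal has length $\sqrt 2$:

```python
import math

def staircase_length(n):
    """Total length of the n-step staircase from (0, 0) to (1, 1)."""
    return n * (1.0 / n + 1.0 / n)      # each step: right 1/n, then up 1/n

def max_gap(n):
    """Largest distance from the staircase to the diagonal y = x."""
    return (1.0 / n) / math.sqrt(2)

# max_gap(n) -> 0, so the curves converge uniformly, yet the lengths never converge
```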
{ "language": "en", "url": "https://math.stackexchange.com/questions/12906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "863", "answer_count": 22, "answer_id": 12 }
Is every CW complex homotopic to a Delta-Complex? Both answers to this question seem equally reasonable to me. If the answer is positive, I have no idea what the construction of such a space would look like.... If the answer is negative, I assume one would try to subdivide the cells somehow... but I don't really know how that would go. I guess this came up because I was trying to think of an example of a CW-complex that wasn't homeomorphic to a Delta-complex... and figured the easiest way to make such a thing would be to make one that is not homotopic. This, of course, doesn't seem much easier to build, but at least easier to prove once you're done building.
This question was answered in a comment: By "homotopic" you mean "homotopy-equivalent" yes? CW complexes all have the homotopy type of simplicial complexes, so also of delta complexes. You can make the argument inductively -- argue that if you attach a cell to a simplicial complex, you get something with the homotopy-type of a simplicial complex. Have you read (for example) the proof of excision for singular homology? – Ryan Budney Dec 4 '10 at 0:22
{ "language": "en", "url": "https://math.stackexchange.com/questions/12958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Generating function for modeling the number of ways to select five integers from 1 to n where no two are consecutive So, I guess divide it into six sections representing the difference between the numbers, so the first is $0\rightarrow n$, the second through fifth are $1\rightarrow n$ and the sixth is $0\rightarrow n$ again, and that becomes a generating function of $((1+x+x^2+\cdots+x^n)^2)((x^2+\cdots+x^n)^4)$. Does this make sense, and what coefficient represents what value of $n$?
I don't understand why this question is asked in the language of generating functions. The easy way to solve the problem is to take any subset $\{a,b,c,d,e\}$ of $\{1,\dots,n-4\}$, and choose the subset $\{a,b+1,c+2,d+3,e+4\}$. In particular, there are $\binom{n-4}{5}$ ways to do this. We can recover the generating function using basic generating function knowledge: $$\sum_{n\geq 0} \binom{n-4}{5}x^n =\sum_{n\geq 0} \binom{n-9+5}{5}x^n=x^9\sum_{n\geq 0} \binom{n-9+5}{5}x^{n-9}$$ Changing indices and recognizing the form of the generating function, we have $$=x^9\sum_{N\geq -9} \binom{N+5}{5}x^N=\frac{x^9}{(1-x)^6}.$$
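Both counts can be checked against each other by brute force for small $n$ (a quick verification script, not part of the answer):

```python
from itertools import combinations
from math import comb

def count_spread(n):
    """Count 5-subsets of {1, ..., n} with no two consecutive elements."""
    return sum(1 for s in combinations(range(1, n + 1), 5)
               if all(b - a >= 2 for a, b in zip(s, s[1:])))

# e.g. for n = 9 the only such subset is {1, 3, 5, 7, 9}
```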
{ "language": "en", "url": "https://math.stackexchange.com/questions/13013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Lyapunov exponent reference I know a little bit of probability; but I do not know anything from dynamical systems. I want to understand the notion of Lyapunov exponent for a certain sequence of random variables arising out of a dynamical system. But the wikipedia article on Lyapunov exponent is not giving any suitable references. I would be most grateful if someone can give some helpful pointers.
Nonlinear Dynamics and Chaos by Strogatz, page 322. I read this a long time ago, and the whole book is worth reading in my opinion.
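For a concrete feel for the definition (my own illustration, not from the reference): for a one-dimensional map $x_{k+1} = F(x_k)$, the Lyapunov exponent is $\lambda = \lim_{n\to\infty}\frac1n\sum_{k<n}\ln|F'(x_k)|$. For the logistic map $F(x) = rx(1-x)$ at $r=4$ the known value is $\ln 2$:

```python
import math

def lyapunov_logistic(r, n=200000, x0=0.3):
    """Numerical Lyapunov exponent of the logistic map x -> r x (1 - x)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # log |F'(x)|
        x = r * x * (1.0 - x)
    return total / n
```

At $r = 3.2$ the map has a stable 2-cycle, so the estimate comes out negative; at $r = 4$ it approaches $\ln 2 \approx 0.693$.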
{ "language": "en", "url": "https://math.stackexchange.com/questions/13063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What is the maximum number of primes generated consecutively by a polynomial of degree $a$? Let $p(n)$ be a polynomial of degree $a$. Start off by plugging in arguments from zero and go up one integer at a time. Go on until you reach an integer argument $n$ for which $p(n)$'s value is not prime, and count the number of distinct primes your polynomial has generated. Question: what is the maximum number of distinct primes a polynomial of degree $a$ can generate by the process described above? Furthermore, what is the general form of such a polynomial $p(n)$? This question was inspired by this article. Thanks, Max [Please note that your polynomial does not need to generate consecutive primes, only primes at consecutive positive integer arguments.]
The Green-Tao Theorem states that there are arbitrarily long arithmetic progressions of primes; that is, sequences of primes of the form $$ b , b+a, b+2a, b+3a,... ,b+na $$ Since such a progression consists of the values of the polynomial $ax+b$ at $x = 0, 1, \ldots, n$, this implies that even for degree 1, there is no upper bound to how many primes in a row a polynomial can generate.
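Small progressions like this are easy to exhibit by direct search (a quick check; the 10-term example $199 + 210x$ is a classical one):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# 6x + 5 is prime for x = 0..4:  5, 11, 17, 23, 29
five_term = [6 * x + 5 for x in range(5)]
# a classical 10-term progression of primes: 199 + 210x for x = 0..9
ten_term = [199 + 210 * x for x in range(10)]
```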
{ "language": "en", "url": "https://math.stackexchange.com/questions/13116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Integrating 2(100+t) A friend and I were doing some math together when we had to integrate $2(100 + t)$. I just multiplied it out and integrated $200 + 2t$, which should be $200t + t^2$. He did u-substitution and got $(100 + t)^2$. When I take the derivative of both of ours, I seem to get the same thing. But $(100 + t)^2 = 10000 + 200t + t^2$, which is not equal to just $200t + t^2$. What is going on here?
The difference is just the constant of integration: $(100+t)^2 = 200t + t^2 + 10000$, so your two answers differ by the constant $10000$. Both solutions are correct, as is any function of the form $200t + t^2 + C$ for a constant $C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Function Problem If $f(x)$ is a function satisfying $ \displaystyle f(x+y) = f(x) \cdot f(y) \text { for all } x,y \in \mathbb{N} \text{ such that } f(1) = 3 \text { and } $ $ \sum_{x=1}^{n} f(x) = 120 $, then find the value of $n$. How to approach this one?
Hint: Knowing the value of $f(1)$, you can get the value of $f(2)=f(1+1)$ from the defining equation. Knowing these two, you can get value of $f(3)=f(1+2)$ and so on.
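Following the hint to its end (a sketch of the computation, not the original answer): the relation forces $f(x) = 3^x$ on $\mathbb N$, so the partial sums are geometric and can be accumulated until they reach 120:

```python
def f(x):
    return 3 ** x   # f(x+y) = f(x) f(y) with f(1) = 3 forces f(x) = 3^x on N

total, n = 0, 0
while total < 120:
    n += 1
    total += f(n)   # 3 + 9 + 27 + 81 = 120
```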
{ "language": "en", "url": "https://math.stackexchange.com/questions/13213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to get a reflection vector? I'm doing a raytracing exercise. I have a vector representing the normal of a surface at an intersection point, and a vector of the ray to the surface. How can I determine what the reflection will be? In the below image, I have d and n. How can I get r? Thanks.
Let $\hat{n} = {n \over \|n\|}$. Then $\hat{n}$ is the vector of magnitude one in the same direction as $n$. The projection of $d$ in the $n$ direction is given by $\mathrm{proj}_{n}d = (d \cdot \hat{n})\hat{n}$, and the component of $d$ orthogonal to $n$ is therefore given by $d - (d \cdot \hat{n})\hat{n}$. Thus we have $$d = (d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$ Note that $r$ has $-1$ times the projection onto $n$ that $d$ has, while the component of $r$ orthogonal to $n$ is equal to the component of $d$ orthogonal to $n$, therefore $$r = -(d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$ Alternatively you may look at it as that $-r$ has the same projection onto $n$ that $d$ has, with its orthogonal component given by $-1$ times that of $d$: $$-r = (d \cdot \hat{n})\hat{n} - [d - (d \cdot \hat{n})\hat{n}]$$ The latter equation is exactly $$r = -(d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$ Hence one can get $r$ from $d$ via $$r = d - 2(d \cdot \hat{n})\hat{n}$$ Stated in terms of $n$ itself, this becomes $$r = d - {2 \, d \cdot n\over \|n\|^2}n$$
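In code the final formula is a one-liner. The sketch below (`reflect` is a hypothetical helper name) works in any dimension and does not require $n$ to be normalized:

```python
def reflect(d, n):
    """Reflect direction d about the surface normal n (n need not be unit length)."""
    nn = sum(c * c for c in n)                        # ||n||^2
    k = 2.0 * sum(a * b for a, b in zip(d, n)) / nn   # 2 (d . n) / ||n||^2
    return [a - k * b for a, b in zip(d, n)]

# a ray heading down-right bounces off a horizontal floor (normal points up)
r = reflect([1.0, -1.0], [0.0, 1.0])
```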
{ "language": "en", "url": "https://math.stackexchange.com/questions/13261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97", "answer_count": 5, "answer_id": 1 }
A stereographic projection related question This might be an easy question, but I haven't been able to up come up with a solution. The image of the map $$f : \mathbb{R} \to \mathbb{R}^2, a \mapsto (\frac{2a}{a^2+1}, \frac{a^2-1}{a^2+1})$$ is the unit circle take away the north pole. $f$ extends to a function $$g: \mathbb{C} \backslash \{i, -i \} \to \mathbb{C}^2. $$ Can anything be said about the image of $g$?
For real $a$, it is the unit circle minus the north pole: substituting $\tan t$ for $a$ and simplifying, you get $$ x=\sin 2t, \quad y=-\cos 2t. $$ Note that the identity $x^2+y^2=1$ holds for every $a$ (real or complex), so the image of $g$ lies in the complex quadric $\{(x,y)\in\mathbb C^2 : x^2+y^2=1\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the formula $A\lor (B\land C) = (A \lor B) \land (A \lor C)$ correct? I've read the formula written on the title on a mathematics' book and it doesn't seem correct for me: For the first part of the formula [A ∨ (B∧C)] I have the following possible values: A; B and C; A and B and C (the Or is not exclusive) For the second part of the formula [(A ∨ B) ∧ (A ∨ C)] I have the following values: A and A (A); A and C; B and A; B and C; ...; A and B and A and C (A and B and C) So I can have for the second part of the formula A and B; A and C which I can't obtain with the first part of the formula. If I'm mistaken can somebody please tell me how and give some examples. thanks, Bruno
Draw a Venn diagram and think of $A,B,C$ as sets (of the assignments making them true): $\lor$ corresponds to union and $\land$ to intersection. Then you'll see that the formula is correct: it is the distributivity of union over intersection. Or you could draw a truth table. http://en.wikipedia.org/wiki/Truth_table
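The truth-table check is easily mechanized; the snippet below evaluates both sides for all eight assignments:

```python
from itertools import product

# distributivity of "or" over "and", checked on all eight truth assignments
rows = [(A, B, C, A or (B and C), (A or B) and (A or C))
        for A, B, C in product([False, True], repeat=3)]
identity_holds = all(lhs == rhs for *_, lhs, rhs in rows)
```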
{ "language": "en", "url": "https://math.stackexchange.com/questions/13370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Estimate of the derivative of a function that is supremum of a function of two variables I tried to show the following estimation and am not sure if it holds: Given $U\subseteq\mathbb R$ and $f:\mathbb R\times \mathbb R\to\mathbb R$. Its derivative with respect to the second argument exists and is denoted by $f_b(a,b)$: $\displaystyle\frac{d}{db}\left(\sup_{a\in U}f(a,b)\right)\ge \inf_{a\in U}f_b(a,b)$ My proof: There exists a sequence $a_n$ in $U$ such that $\displaystyle\lim_{n\to\infty}f(a_n,b)=\sup_{a\in U}f(a,b)$ $\forall n\in\mathbb N$ the following holds: $\sup_{a\in U}f(a,b+h)-\sup_{a\in U}f(a,b)\ge f(a_n,b+h)-\sup_{a\in U}f(a,b)$ Using the differentiability of $f$: $=f(a_n,b)+h f_b(a_n,b)-\sup_{a\in U}f(a,b)+\mathcal O(h^2)$ This is always greater than $\displaystyle\ge f(a_n,b)+h\;\inf_{a\in U} f_b(a,b)-\sup_{a\in U}f(a,b)+\mathcal O(h^2)$ As this holds for an arbitrary $n$, and thus $f(a_n,b)$ comes arbitrarily close to $\sup_{a\in U}f(a,b)$, it holds: $\sup_{a\in U}f(a,b+h)-\sup_{a\in U}f(a,b)\ge h\;\inf_{a\in U} f_b(a,b)+\mathcal O(h^2)$ Dividing by $h$ and performing the limit yields the assertion.
Before estimating $\displaystyle\frac{d}{db}\left(\sup_{a\in U}f(a,b)\right)$ one should probably ask if it exists. For example, if $f(a,b)=ab$ and $U=(-1,1)$, then $\sup_{a\in U}f(a,b)=|b|$, which is not differentiable at $0$. But even assuming the differentiability of the envelope, the result is false as stated. For example, the zero function on $(-1,1)$ is the supremum of functions $\min(0, a^{-1}((b-a)^2-a^2))$ over $0<a<1$. At $b=0$ each of these has $b$-derivative equal to $1/2$; thus, the infimum of $b$-derivative is $1/2$ while the derivative of supremum is $0$. The problem with the proof is hidden by the $\mathcal{O}$ symbol, the uniformity of which with respect to $n$ was not discussed. One needs the assumption of uniform differentiability to make it work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
how do you solve $y''+2y'-3y=0$? I want to solve this equation: $y''+2y'-3y=0$ I did this: $y' = z$ $y'' = z\dfrac{dz}{dy}$ $z\dfrac{dz}{dy}+2z-3y=0$ $zdz+2zdy-3ydy=0$ $zdz=(3y-2z)dy$ $z=3y-2z$ $z=y$ $y=y'=y''$ ??? now, I'm pretty sure I did something wrong. could you please correct.
HINT $\rm\ \ \ 0\ =\ y'' + 2\ y' - 3\ y\ =\ (D^2 + 2\ D -3)\ y\ =\ (D+3)\ (D-1)\ y, \quad D = d/dx $ i.e. factor the differential operator as you would any polynomial.
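Carrying the hint through: the factorization corresponds to the characteristic equation

$$r^2 + 2r - 3 = (r+3)(r-1) = 0,$$

with roots $r = 1$ and $r = -3$, so the general solution is $y = C_1 e^{x} + C_2 e^{-3x}$.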
{ "language": "en", "url": "https://math.stackexchange.com/questions/13452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 9, "answer_id": 4 }
Does $R[x] \cong S[x]$ imply $R \cong S$? This is a very simple question but I believe it's nontrivial. I would like to know if the following is true: If $R$ and $S$ are rings and $R[x]$ and $S[x]$ are isomorphic as rings, then $R$ and $S$ are isomorphic. Thanks! If there isn't a proof (or disproof) of the general result, I would be interested to know if there are particular cases when this claim is true.
I found the paper Isomorphic polynomial rings by Brewer and Rutter that discusses related matters. They cite a forthcoming paper by Hochster which proves there are non-isomorphic commutative integral domains $R$ and $S$ with $R[x]\cong S[x]$. Added Hochster's paper is M. Hochster, Nonuniqueness of coefficient rings in a polynomial ring, Proc. Amer. Math. Soc. 34 (1972), 81-82, and is freely available.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "144", "answer_count": 4, "answer_id": 1 }
Is it possible to convert a polynomial into a recurrence relation? If so, how? I have been trying to do this for quite a while, but generally speaking the partially relevant information I could find on the internet only dealt with the question: "How does one convert a recurrence relation into... well, a non-recurrence relation?" Let's start off with a simple example: $f(n) = 34n^3+51n^2+27n+5$. How do we find $f_{n}$? I'd really like to see this solved in analogy with the following: Consider $g(n)=n^6$. We can then find the recursion formula: $g_n=((g_{n-1})^{1/6}+1)^6$. What about $f_{n}$? We could generalize this question in a number of ways. For instance, is it (also) possible to turn an infinite polynomial, like the Taylor series expansion of a trigonometric function, into a recursion formula? Furthermore, what happens when we allow the coefficients of the polynomial to be real or even complex? Thanks, Max Bonus side-question: How, if at all, are "generating functions" useful in this context?
Max, I'm just detailing Qiaochu's answer, using Pari/GP. First define the function $ f(x) = 34*x^3+51*x^2+27*x+5$ Then the first idea to decompose f into a 2-term-recursion is $ f(x)-1*f(x-1) $ Pari/GP: $ <out> = 102*x^2 + 10 $ So the result is already a reduced polynomial. Now subtract from this $f(x-1)-f(x-2) $ because this will be a polynomial of the same degree, only "shifted" by some coefficients. This gives $ f(x)-2*f(x-1)+f(x-2) $ Pari/GP: $ <out> = 204*x - 102 $ Again the resulting polynomial is of reduced order. Now we finish by subtracting $ f(x-1)-2*f(x-2)+f(x-3) $ We get: $ f(x)-3*f(x-1)+3*f(x-2)-f(x-3) $ Pari/GP: $ <out> = 204 $ Now we have a constant expression only. We are finished and write, reordering the terms: $ f(x) = 3*f(x-1)-3*f(x-2)+f(x-3)+204 $ [update] For completeness one can increase the degree. According to Qiaochu's answer/comment we can subtract again the x-1-shifted polynomial $ f(x-1)-3*f(x-2)+3*f(x-3)-f(x-4) $ and of course getting $ f(x)-4*f(x-1)+6*f(x-2)-4*f(x-3)+f(x-4) $ Pari/GP: $ <out> = 0 $ completing the answer $ f(x) = 4*f(x-1)-6*f(x-2)+4*f(x-3)-f(x-4) $ [end update]
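The two recursions derived above can be verified directly (a quick check in Python rather than Pari/GP):

```python
def f(x):
    return 34 * x**3 + 51 * x**2 + 27 * x + 5

# three-term recursion with a constant forcing term
three_term_ok = all(f(x) == 3 * f(x - 1) - 3 * f(x - 2) + f(x - 3) + 204
                    for x in range(3, 20))
# pure four-term recursion (fourth finite difference of a cubic is zero)
four_term_ok = all(f(x) == 4 * f(x - 1) - 6 * f(x - 2) + 4 * f(x - 3) - f(x - 4)
                   for x in range(4, 20))
```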
{ "language": "en", "url": "https://math.stackexchange.com/questions/13551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Show that a continuous function has a fixed point Question: Let $a, b \in \mathbb{R}$ with $a < b$ and let $f: [a,b] \rightarrow [a,b]$ continuous. Show: $f$ has a fixed point, that is, there is an $x \in [a,b]$ with $f(x)=x$. I suppose this has to do with the basic definition of continuity. The definition I am using is that $f$ is continuous at $a$ if $\displaystyle \lim_{x \to a} f(x)$ exists and if $\displaystyle \lim_{x \to a} f(x) = f(a)$. I must not be understanding it, since I am not sure how to begin showing this... Should I be trying to show that $x$ is both greater than or equal to and less than or equal to $\displaystyle \lim_{x \to a} f(x)$ ?
You could also nuke the mosquito: in particular, this is a special case of Kakutani's fixed point theorem. This works because: * *By the closed graph theorem, the compactness of the codomain of $f : [a,b] \rightarrow [a,b]$ is sufficient to deduce the closedness of its graph. *Given $x \in [a,b]$, the element $f(x)$ can be identified with the corresponding singleton set $\{f(x)\}$, and every singleton set is non-empty and convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 8, "answer_id": 6 }
Applications for Homology The Question: Are there any ways that "applied" mathematicians can use Homology theory? Have you seen any good applications of it to the "real world" either directly or indirectly? Why do I care? Topology has appealed to me since beginning it in undergrad where my university was more into pure math. I'm currently in a program where the mathematics program is geared towards more applied mathematics and I am constantly asked, "Yeah, that's cool, but what can you use it for in the real world?" I'd like to have some kind of a stock answer for this. Full Disclosure. I am a first year graduate student and have worked through most of Hatcher, though I am not by any means an expert at any topic in the book. This is also my first post on here, so if I've done something wrong just tell me and I'll try to fix it.
You may want to check out "Topological and Statistical Behavior Classifiers for Tracking Applications" abstract preprint This has the first unified theory for target tracking using Multiple Hypothesis Tracking, Topological Data Analysis, and machine learning. Our string of innovations are 1) robust topological features are used to encode behavioral information (from persistent homology), 2) statistical models are fitted to distributions over these topological features, and 3) the target type classification methods of Wigren and Bar Shalom et al. are employed to exploit the resulting likelihoods for topological features inside of the tracking procedure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 4, "answer_id": 3 }
Is a Gödel sentence logically valid? This might be an elementary question, but I am just beginning to learn logic theory. From the wikipedia article on Gödel's incompleteness theorems: Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory (Kleene 1967, p. 250). The true but unprovable statement referred to by the theorem is often referred to as “the Gödel sentence” for the theory. My question: Is a Gödel statement logically valid? Edit: As Carl answers below, if the Gödel statement is valid, then by the completeness theorem it is provable, which leads to a contradiction. So there exists a model in which the statement is false. Can we construct such a model?
No, a Gödel sentence is not logically valid. Because the Gödel sentence for a theory $T$ is unprovable from $T$, it follows from the completeness theorem for first-order logic that there is a model of $T$ in which the Gödel sentence is false. When the text you quoted says "true" you should read that as "true in the standard model of arithmetic". Logical validity would correspond to truth in all models. An example of a logically valid sentence is $(\forall x) (x=x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
How to solve the following system? I need to find the function c(k), knowing that $$\sum_{k=0}^{\infty} \frac{c(k)}{k!}=1$$ $$\sum_{k=0}^{\infty} \frac{c(2k)}{(2k)!}=0$$ $$\sum_{k=0}^{\infty} \frac{c(2k+1)}{(2k+1)!}=1$$ $$\sum_{k=0}^{\infty} \frac{(-1)^k c(2k+1)}{(2k+1)!}=-1$$ $$\sum_{k=0}^{\infty} \frac{(-1)^k c(2k)}{(2k)!}=0$$ Is it possible?
Following Rotwang, define $b(k)=c(k)/k!$. The second and fifth equations say $\sum b(2k)=0$ and $\sum (-1)^k b(2k)=0$; adding and subtracting, they hold exactly when $\sum b(4k)=0$ and $\sum b(4k+2)=0$. Then the first and third equations are redundant (they differ only by the even-index sum, which is zero), so define $d(k)=b(2k+1)$. Now we have $\sum{d(k)}=1, \sum{(-1)^kd(k)}=-1$. Adding and subtracting, $\sum{d(2k)}=0, \sum{d(2k+1)}=1.$ So the final conclusion is that we must have $$\begin{align} \sum\frac{c(4k)}{(4k)!}&=0\\ \sum\frac{c(4k+2)}{(4k+2)!}&=0\\ \sum\frac{c(4k+1)}{(4k+1)!}&=0\\ \sum\frac{c(4k+3)}{(4k+3)!}&=1 \end{align}$$ and any $c(k)$ that satisfies this will work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the value of $1^x$? I am trying to understand why $1^{x}=1$ for any $x\in\mathbb{R}$. Is it OK to write $1^{x}$, given that the base should not equal $1$ for $a^{x}$ to be an exponential function? Is $1^{x}=1$ just because it is defined to be so? If possible please refer me to a book or article that discusses this topic.
You don't have to specifically define $1^x$ to be something. It follows from the definition of exponentiation in general. The story is roughly as follows: first there was exponentiation with exponents being natural numbers. That is just repeated multiplication of the base with itself. Then people noticed that this exponentiation obeys the rule \begin{eqnarray} a^{n+m} = a^na^m. \end{eqnarray} This suggests that if we want to define $a^0$, then we want it to satisfy $a^m = a^{m+0} = a^ma^0$, so that it is natural to define $a^0 = 1$. Now, we can use the same rule to extend this operation to all integral exponents, i.e. positive and negative. Since we want $a^na^{-n} = a^{n-n} = a^0 = 1$, we are forced to define $a^{-n} = \frac{1}{a^n}$. So, this way you have defined exponentiation with arbitrary integral exponents. Now, going back to the original exponentiation with natural numbers, you also notice that \[(a^n)^m = a^{nm} \] and $a^1 = a$. Applying the same considerations as above, you are forced to define $a^{\frac{1}{n}} = \sqrt[n]{a}$. This way, you have now extended exponentiation to all rational exponents. Note that all these rules imply that $1^x = 1$ for $x\in \mathbb{Q}$. Now, you extend all this to all real exponents using continuity, so you still have $1^x = 1\;\forall x\in \mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Infinitely differentiable How can one find if a function $f$ is infinitely differentiable?
for functions: Each analytic function is infinitely differentiable. Each polynomial function is analytic. Each Elementary function is analytic almost everywhere. I assume this is valid also for the Liouvillian functions. $ $ for function terms: The set of the function terms of the Elementary functions is closed regarding differentiation. The set of the function terms of the Liouvillian functions is closed regarding differentiation. If $n_1,n_2\in\mathbb{N}_0$, $n_1\ne n_2$, $f\colon z\mapsto f(z)$, and $$\frac{d^{n_1}}{dz^{n_1}}f(z)=\frac{d^{n_2}}{dz^{n_2}}f(z),$$ then $f(z)$ is infinitely often differentiable. There are some general differentiation rules for calculating $n$-th derivatives, e.g. higher factor rule, higher sum rule, higher product rule, higher chain rule. $ $ A function with an infinitely differentiable function term is infinitely differentiable if each of its $n$-th derivatives is differentiable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/13815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 4 }
Are these transformations of the $\beta^\prime$ distribution from $\beta$ and to $F$ correct? Motivation I have a prior on a random variable $X\sim \beta(\alpha,\beta)$ but I need to transform the variable to $Y=\frac{X}{1-X}$, for use in an analysis and I would like to know the distribution of $Y$. Wikipedia states: if $X\sim\beta(\alpha,\beta)$ then $\frac{X}{1-X} \sim\beta^\prime(\alpha,\beta)$ Thus, the distribution is $Y\sim\beta^\prime(\alpha,\beta)$. The software that I am using, JAGS, does not support the $\beta^\prime$ distribution. So I would like to find an equivalent of a distribution that is supported by JAGS, such as the $F$ or $\beta$. In addition to the above relationship between the $\beta$ and $\beta^\prime$, Wikipedia states: if $X\sim\beta^\prime(\alpha,\beta)$ then $\frac{X\beta}{\alpha}\sim F(2\alpha, 2\beta)$ Unfortunately, neither of these statements are referenced. Questions * *1) Can I find $c$, $d$ for $Y\sim\beta^\prime(\alpha,\beta)$ where $Y\sim\beta(c,d)$ *2) Are these transformations correct? If so, are there limitations to using them, or a reason to use one versus the other (I presume $\beta$ is a more direct transformation, but why)? *3) Where can I find such a proof or how would one demonstrate the validity of these relatively simple transformations?
Answer to 1: There is no $c$, $d$ for which $Y\sim\beta^\prime(\alpha,\beta)$ and $Y\sim\beta(c,d)$. This is because $Y$ ranges over $(0,\infty)$, while a $\beta(c,d)$ variable lies in $(0,1)$. Use the $F$ distribution instead, which is defined for positive real numbers and so has support matching $Y$. Answer to 2 (simulation approach): although this is not a mathematical proof, it provides an example of how to sufficiently support the postulate that $X\frac{\beta}{\alpha}\sim F(2\alpha,2\beta)$. Feedback appreciated. Choose arbitrary parameter values alpha <- 2 beta <- 4 Create a vector of a $\beta(\alpha,\beta)$ variate and transform it into a $\beta^\prime(\alpha, \beta)$ variate X <- rbeta(1000000, alpha, beta) Y1 <- X/(1-X) Create a vector of an $F(2\alpha, 2\beta)$ variate and divide it by $\frac{\beta}{\alpha}$ Y2 <- rf(1000000, 2*alpha, 2*beta) / (beta / alpha) Compare the variables at decile intervals across their range. testY1 <- quantile(Y1, seq(0.1,0.9,0.1)) testY2 <- quantile(Y2, seq(0.1,0.9,0.1)) signif(testY1,2) == signif(testY2,2) Answer to 3: Leemis and McQuestion (2008) is an excellent peer-reviewed reference detailing the inter-relationships among distributions. Thanks Mike for providing the proper answer to 2 and 3. Leemis, L.M. and J.T. McQuestion. 2008. Univariate Distribution Relationships. The American Statistician 62(1) 45:53
{ "language": "en", "url": "https://math.stackexchange.com/questions/13844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
In every power of 3 the tens digit is an even number How to prove that in every power of $3$, with natural exponent, the tens digit is an even number? For example, $3 ^ 5 = 243$ and $4$ is even.
We know that $3^1=03$, $3^2=09$, $3^3=27$, $3^4=81$, and so on, and in each of these the tens digit is even. (Note that $81$ is the largest power of $3$ with two digits; after that the three-digit powers start.) Now argue by induction, splitting on the units digit of $3^n$, which is always one of $1, 3, 7, 9$. Case 1: the units digit of $3^n$ is $1$ or $3$. Multiplying by $3$ gives units digit $3$ or $9$ with no carry into the tens place, so the new tens digit is $(3\times\text{old tens digit}) \bmod 10$. An even number multiplied by $3$ stays even, so if the old tens digit was even the new one is too. Case 2: the units digit of $3^n$ is $7$ or $9$. Since $3\times 7=21$ and $3\times 9=27$, a carry of $2$ is added to the tens place, so the new tens digit is $(3\times\text{old tens digit}+2)\bmod 10$. An even digit times $3$ is even, and adding the even carry $2$ keeps it even. In both cases an even tens digit stays even after multiplying by $3$, so by induction every power of $3$ has an even tens digit. Here's what I thought.
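The casework above can also be confirmed by brute force; a short Python check (my addition, not part of the original answer):

```python
# The tens digit of 3**n should be even for every n >= 1.
for n in range(1, 500):
    tens = (3 ** n // 10) % 10
    assert tens % 2 == 0, (n, 3 ** n)

print("tens digit of 3^n is even for n = 1..499")
```
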
{ "language": "en", "url": "https://math.stackexchange.com/questions/13890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 8, "answer_id": 4 }
How to sum up this series How to sum up this series : $$2C_0 + \frac{2^2}{2}C_1 + \frac{2^3}{3}C_2 + \cdots + \frac{2^{n+1}}{n+1}C_n$$ Any hint that will lead me to the correct solution will be highly appreciated. EDIT: Here $C_i = {}^nC_i = \binom{n}{i}$
REMARK $\ $ The various approaches are all equivalent. Namely, suppose that we desire to prove without calculus the identity arising from integrating the binomial formula, viz. $$\rm (1 + x)^{n+1}\ =\ 1 + \sum_{k=0}^{n}\: \frac{n+1}{k+1} {n\choose k}\ x^{k+1}$$ Comparing coefficients reduces it to the identity $$\rm \quad\quad\ {n+1 \choose k+1}\ =\ \frac{n+1}{k+1} {n\choose k} $$ which is precisely the identity employed in Moron's "calculus free" approach.
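Setting $x=2$ in the integrated identity gives a closed form for the original sum: $\sum_{k=0}^n \frac{2^{k+1}}{k+1}\binom{n}{k} = \frac{3^{n+1}-1}{n+1}$. A quick check with exact rational arithmetic (my addition):

```python
from fractions import Fraction
from math import comb

def series(n):
    # 2*C(n,0) + (2^2/2)*C(n,1) + ... + (2^(n+1)/(n+1))*C(n,n)
    return sum(Fraction(2 ** (k + 1), k + 1) * comb(n, k) for k in range(n + 1))

for n in range(1, 12):
    assert series(n) == Fraction(3 ** (n + 1) - 1, n + 1)

print("sum equals (3^(n+1) - 1) / (n + 1)")
```
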
{ "language": "en", "url": "https://math.stackexchange.com/questions/13922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
$\left \{ 0,1 \right \}^{\mathbb{N}}\sim \left \{ 0,1,2,3 \right \}^{\mathbb{N}}$ bijection function Prove that $\left \{ 0,1 \right \}^{\mathbb{N}}\sim \left \{ 0,1,2,3 \right \}^{\mathbb{N}}$ and find a direct bijection function. I got the first part by showing that $\left \{ 0,1 \right \}^{\mathbb{N}} \subseteq \left \{ 0,1,2,3 \right \}^{\mathbb{N}} \subseteq {\mathbb{N}}^{\mathbb{N}}$, which implies that $|\left \{ 0,1 \right \}^{\mathbb{N}}| \leq |\left \{ 0,1,2,3 \right \}^{\mathbb{N}}| \leq |{\mathbb{N}}^{\mathbb{N}}|$ and since $|{\mathbb{N}}^{\mathbb{N}}| = |\left \{ 0,1 \right \}^{\mathbb{N}} | = 2^{\aleph_0} $ and Cantor-Bernstein you get that $\left \{ 0,1 \right \}^{\mathbb{N}}\sim \left \{ 0,1,2,3 \right \}^{\mathbb{N}}$. But I'm stuck with formulating a bijection function. More generally, what approach do you use when you need a formulate an exact function?
There are at least two ways to proceed: Either you start as you did, and then you follow the argument of Cantor-Bernstein, which explicitly gives you how to build a bijection from the two given injections. The other way is to directly argue in the case at hand. For example, identify the sequence $(a_0,a_1,a_2,a_3,...)$ in $\{0,1\}^{\mathbb N}$ with the sequence $(b_0,b_1,b_2,\dots)$ in $\{0,1,2,3\}^{\mathbb N}$ as follows: Replace $a_{2n},a_{2n+1}$ with $b_n$, where $0,0$ is replaced with $0$; $0,1$ is replaced with $1$; $1,0$ with $2$; and $1,1$ with $3$. [Edit: I see Jonas wrote the same explicit bijection as I was typing this.] As a slightly more challenging exercise, pick any two positive integers $n<m$, and build a "combinatorial" bijection between $\{0,1,\dots,n\}^{\mathbb N}$ and $\{0,1,\dots,m\}^{\mathbb N}$. Combinatorial meaning here something in the same spirit of the explicit bijection above.
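The explicit bijection is easy to make concrete on finite prefixes; a small Python sketch of the pair-fusing map (my illustration, coding the table above as $b_n = 2a_{2n} + a_{2n+1}$):

```python
# Fuse consecutive bit pairs into one base-4 digit:
# (0,0)->0, (0,1)->1, (1,0)->2, (1,1)->3
def fuse(a):
    return [2 * a[2 * n] + a[2 * n + 1] for n in range(len(a) // 2)]

# The inverse map: split each base-4 digit back into two bits.
def split(b):
    out = []
    for d in b:
        out += [d // 2, d % 2]
    return out

a = [0, 1, 1, 0, 1, 1, 0, 0]
assert fuse(a) == [1, 2, 3, 0]
assert split(fuse(a)) == a
print("fuse and split are mutually inverse on this prefix")
```

Extending this digit-by-digit over an infinite sequence gives exactly the bijection $\{0,1\}^{\mathbb N}\to\{0,1,2,3\}^{\mathbb N}$ described in the answer.
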
{ "language": "en", "url": "https://math.stackexchange.com/questions/13975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
integral of $\arcsin(\sin(x))$ I'm having trouble with this integral $$\int\arcsin(\sin x)\,\mathrm dx$$ The problem is with the intervals of definition for each function :/ if someone could dumb it down for me. * *$\arcsin\colon [-1, 1] \to [-\pi/2, \pi/2]$. *$\sin:\mathbb{R}\to [-1, 1]$. right? But what about $\arcsin(\sin x)$ ?
My previous hint was a bit misleading - hence here is a CORRECTED HINT (the comments are still appropriate). On $I= [-\pi/2,\pi/2]$ the sine function grows from $\sin(-\pi/2)=-1$ to $\sin(\pi/2)=1$, hence it is invertible on $I$. The inverse, $\arcsin$, is defined on $[-1,1]$ and has the property $\arcsin(\sin x) = x$ if $-\pi/2\le x\le \pi/2$. Since $\sin$ is $2\pi$-periodic we must have $\arcsin(\sin (x+2n\pi))=\arcsin(\sin(x))$ for all $x$ and integers $n$. Hence it is sufficient to also understand $\arcsin(\sin x)$ for $\pi/2\le x\le 3\pi/2$; to achieve that, use $\sin(x+\pi)=\sin (-x)$. Once you see what the function looks like, there will be no problem performing the integration.
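Concretely, the hint says that $\arcsin(\sin x)$ is the $2\pi$-periodic triangle wave: it equals $x$ on $[-\pi/2,\pi/2]$ and $\pi-x$ on $[\pi/2,3\pi/2]$. A quick numerical sanity check (my addition):

```python
import math

# On [-pi/2, pi/2]: arcsin(sin x) = x.
for x in [0.0, 0.3, -1.2, 1.5]:
    assert math.isclose(math.asin(math.sin(x)), x, abs_tol=1e-12)

# On [pi/2, 3pi/2]: arcsin(sin x) = pi - x.
for x in [2.0, 2.8, math.pi, 4.0]:
    assert math.isclose(math.asin(math.sin(x)), math.pi - x, abs_tol=1e-12)

print("arcsin(sin x) matches the triangle wave on both branches")
```

Once the triangle wave is drawn, the antiderivative is just a piecewise quadratic.
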
{ "language": "en", "url": "https://math.stackexchange.com/questions/14032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How do I tell if matrices are similar? I have two $2\times 2$ matrices, $A$ and $B$, with the same determinant. I want to know if they are similar or not. I solved this by using a matrix called $S$: $$\left(\begin{array}{cc} a& b\\ c& d \end{array}\right)$$ and its inverse in terms of $a$, $b$, $c$, and $d$, then showing that there was no solution to $A = SBS^{-1}$. That worked fine, but what will I do if I have $3\times 3$ or $9\times 9$ matrices? I can't possibly make system that complex and solve it. How can I know if any two matrices represent the "same" linear transformation with different bases? That is, how can I find $S$ that change of basis matrix? I tried making $A$ and $B$ into linear transformations... but without the bases for the linear transformations I had no way of comparing them. (I have read that similar matrices will have the same eigenvalues... and the same "trace" --but my class has not studied these yet. Also, it may be the case that some matrices with the same trace and eigenvalues are not similar so this will not solve my problem.) I have one idea. Maybe if I look at the reduced col. and row echelon forms that will tell me something about the basis for the linear transformation? I'm not really certain how this would work though? Please help.
If you have two specific matrices, A and B, here is a method that will work. It's messy, but it will work for any two matrices, regardless of size. First, rewrite the similarity equation in the form AS=SB, where S is a matrix of variables. Multiply out both matrices to obtain a set of n-squared linear equations in n-squared unknowns. Solve the system using Gauss-Jordan elimination. You need to use the form of the Gauss-Jordan algorithm that produces matrices in row-echelon form when there is no single solution to the system. The system of equations always has a solution, the zero matrix. If the Gauss-Jordan elimination produces only that solution, the matrices are not similar. If there is more than one solution, the row-echelon matrix can be solved for a set of basis vectors of the solution space. Since the constant terms are all zero, any linear combination of the basis vectors is again a solution of AS=SB; the matrices are similar exactly when some such combination is an invertible matrix S, and any invertible combination is a similarity matrix for A and B. (Note that a nonzero solution need not be invertible, so this final check cannot be skipped.)
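The procedure above can be automated with the "vec trick": stacking the rows of $S$ into a vector turns $AS=SB$ into the linear system $(A\otimes I - I\otimes B^{T})\,\mathrm{vec}(S)=0$. A NumPy sketch (my illustration; the random search for an invertible combination is a heuristic check, not a proof of non-similarity):

```python
import numpy as np

def similarity_matrix(A, B, tries=50, tol=1e-10, seed=0):
    """Look for an invertible S with A S = S B; return None if the search fails."""
    n = A.shape[0]
    # Row-major vec: vec(AS) = (A kron I) vec(S), vec(SB) = (I kron B^T) vec(S)
    M = np.kron(A, np.eye(n)) - np.kron(np.eye(n), B.T)
    _, s, Vh = np.linalg.svd(M)
    k = int(np.sum(s < tol))            # dimension of the solution space
    if k == 0:
        return None                     # only S = 0 solves AS = SB: not similar
    basis = Vh[-k:]                     # rows spanning the null space
    rng = np.random.default_rng(seed)
    for _ in range(tries):
        S = (rng.standard_normal(k) @ basis).reshape(n, n)
        if abs(np.linalg.det(S)) > tol:
            return S
    return None                         # no invertible solution found

A = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [3.0, 1.0]])  # same simple eigenvalues 1, 2: similar
S = similarity_matrix(A, B)
assert S is not None and np.allclose(A @ S, S @ B)

# Here AS = SB has nonzero solutions, but none invertible: not similar.
assert similarity_matrix(np.diag([1.0, 2.0]), np.diag([1.0, 3.0])) is None
```

The last line illustrates the caveat in the answer: a nonzero solution of AS=SB by itself does not certify similarity.
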
{ "language": "en", "url": "https://math.stackexchange.com/questions/14075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70", "answer_count": 6, "answer_id": 4 }
Motivation for Ramanujan's mysterious $\pi$ formula The following formula for $\pi$ was discovered by Ramanujan: $$\frac1{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^\infty \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}\!$$ Does anyone know how it works, or what the motivation for it is?
Here is a nice article entitled "Ramanujan's Series for $\displaystyle\frac{1}{\pi}$: A Survey", by Bruce C. Berndt. This article appeared in the American Mathematical Monthly, *August/September* 2009. You can see it here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.2533&rep=rep1&type=pdf
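To see "how it works" numerically: each term of the series contributes roughly eight more correct decimal digits of $\pi$, which is why the formula is used in high-precision computations. A Python sketch with the decimal module (my addition):

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50

def ramanujan_pi(terms):
    # 1/pi = (2*sqrt(2)/9801) * sum_k (4k)! (1103 + 26390 k) / ((k!)^4 396^(4k))
    s = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        s += num / den
    inv_pi = Decimal(2) * Decimal(2).sqrt() / 9801 * s
    return 1 / inv_pi

# The single k = 0 term already gives pi correct to about seven digits.
print(ramanujan_pi(1))
print(ramanujan_pi(4))   # roughly thirty correct digits, precision permitting
```
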
{ "language": "en", "url": "https://math.stackexchange.com/questions/14115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "113", "answer_count": 4, "answer_id": 2 }
Development of a specific hardware architecture for a particular algorithm How does one carry out a technical and theoretical study of a project that implements an algorithm on a specific hardware architecture? Function example: %Expression 1: y1 = exp(-(const1 + x)^2 / (const2^2)), y2 = y1 * const3 where x is the input variable, y2 is the output, and const1, const2 and const3 are constants. I need to determine the error I get in terms of the architecture I decide to develop; for example, suppose it is not the same as an architecture with 11 bits for the exponent and 52 bits for the mantissa. This is the concept of error I handle: Relative Error = (Real Data - Architecture Data) / (Real Data) * 100 I consider as 'Real Data' the output of my algorithm I get from Matlab (Matlab uses double-precision floating point, IEEE 754: 52 bits for the mantissa, 11 bits for the exponent, one bit for the sign) with expression 1, and I consider as 'Architecture Data' the output of my algorithm running on a particular architecture (for instance an architecture that uses 12 bits for the mantissa, 5 bits for the exponent and 1 bit for the sign). EDIT: NOTE: The kind of algorithms I am referring to are all those which use mathematical functions that can be decomposed in terms of additions, multiplications, subtractions and divisions. Thank you!
I especially recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. It should illuminate a lot of issues regarding floating-point computation errors. This paper is a tutorial on those aspects of floating-point arithmetic (floating-point hereafter) that have a direct connection to systems building. It consists of three loosely connected parts. The first section, Rounding Error, discusses the implications of using different rounding strategies for the basic operations of addition, subtraction, multiplication and division. It also contains background information on the two methods of measuring rounding error, ulps and relative error. The second part discusses the IEEE floating-point standard, which is becoming rapidly accepted by commercial hardware manufacturers. Included in the IEEE standard is the rounding method for basic operations. The discussion of the standard draws on the material in the section Rounding Error. The third part discusses the connections between floating-point and the design of various aspects of computer systems. I especially recommend designing your algorithm according to the IEEE 754 rounding methods. I also recommend reading the third part because it discusses the implications of floating-point arithmetic on the design of computer systems.
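As a concrete illustration of the question's error measure, one can emulate a reduced-precision architecture in software. NumPy's float16 (10-bit mantissa, 5-bit exponent) is close to the 12-bit-mantissa example in the question; the constants below are hypothetical placeholders, since the question leaves const1..const3 unspecified:

```python
import numpy as np

# Hypothetical constants; the question does not fix their values.
c1, c2, c3 = 0.5, 2.0, 3.0

def y2(x, dtype):
    # Expression 1: y1 = exp(-(const1 + x)^2 / const2^2), y2 = y1 * const3
    x = dtype(x)
    y1 = np.exp(-(dtype(c1) + x) ** 2 / dtype(c2) ** 2)
    return y1 * dtype(c3)

x = 1.25
real = y2(x, np.float64)   # IEEE 754 double, as produced by MATLAB
arch = y2(x, np.float16)   # emulated low-precision architecture
rel_err = (real - float(arch)) / real * 100
print(f"real={real:.10f}  arch={float(arch):.10f}  relative error={rel_err:.4f}%")
```

For a custom format (12-bit mantissa, 5-bit exponent) the same experiment would require a software rounding step after each operation, but float16 already shows the order of magnitude of the error.
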
{ "language": "en", "url": "https://math.stackexchange.com/questions/14206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Braid Group, B_4->> S_4 onto, do I know kernel is P_4, pure braid group? I have an epimorphism $f:B_4\longrightarrow S_4$, from the braid group on 4 strands onto the symmetric group on 4 elements. Is it possible the kernel is not isomorphic to $P_4$, the pure braid group on 4 strands?
By Ryan Budney's suggestion, I went ahead and proved the general case: for any epimorphism from $B_n$ onto $S_n$, the kernel is isomorphic to $P_n$. A proof sketch is this: relations for the Artin generators in $B_n$ must be satisfied in the image. The relations $b_ib_{i+1}b_i=b_{i+1}b_ib_{i+1}$ can be rewritten in terms of conjugation, so that every $b_i$ has an image with a fixed cycle structure. The relations which impose commutativity of non-adjacent generators imply that non-adjacent generators get sent to permutations with cycles either coincidental or disjoint. It can be shown that for $n>4$ the images of non-adjacent generators must actually be disjoint: mildly technical, but not difficult. (The $n=4$ case is easily solved by hand or with GAP.) Then counting every other generator $b_1,b_3,\ldots$, of which there are $\lceil \frac n 2\rceil$, we have that their images must be transpositions, since $3\lceil \frac n 2\rceil>n$. Essentially that's it: up to an automorphism of $S_n$ the images of the generators for $B_n$ are the usual transpositions they induce, so the kernel is $P_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/14264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Cardinality of Two Sets of Points The problem is stated: Do the following two sets of points have the same cardinality, and if so, establish a bijection: a line segment of length four and half of the circumference of a circle of radius one (including both endpoints). My reasoning is they do have the same cardinality and my bijection is a picture in which I drew a horizontal line and a semicircle under it (separated by approx. 2cm with the semicircle's open side facing down). At what would be the center of the circle made by the semi-circle I drew a point P. I then drew lines vertical from the original horizontal line to point P. This shows, goes my reasoning, that for every point on the line there is a corresponding point on the semi-circle. My question is: is this drawing enough to show a bijection or do I need to do more? Thank you for your thoughts.
I think your lines are not vertical, right? With your construction, there are points on the semi-circle that do not have a corresponding point on the line. But you are quite close. You can 1) modify the construction so that every line from P through the semicircle hits the line segment, and still have every line from the line segment hit the semi-circle (what does that tell you about the endpoints of each?); or 2) find another construction that injects the semi-circle into the line, then argue from the Cantor–Bernstein–Schroeder theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/14359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Removable sets for harmonic functions and Hardy spaces of general domains Let $\Omega$ be a domain of the complex plane. The Hardy space $H^p(\Omega)$ is defined, for $1 \leq p<\infty$, as the class of functions $f$ that are holomorphic on $\Omega$ such that $|f|^p$ has a harmonic majorant on $\Omega$, i.e. there is a function $u$ harmonic on $\Omega$ such that $$|f(z)|^p \leq u(z) $$ for all $z \in \Omega$. For $p=\infty$, $H^\infty(\Omega)$ is the class of bounded holomorphic functions on $\Omega$. I'm interested in cases when $H^p(\Omega)$ consist only of constant functions. For example, this is the case when $\Omega$ is the whole plane, because positive harmonic functions on $\mathbb{C}$ are constant. I came upon the following question : Let $E$ be a compact subset of the real line, and suppose that $E$ has zero length. Let $\Omega$ be the complement of $E$. Does $H^p(\Omega)$ consist only of the constant functions? For $p=\infty$, the answer is yes : one can use Cauchy's formula to extend any bounded holomorphic function on $\Omega$ to a bounded holomorphic function on $\mathbb{C}$, and that function is now constant by Liouville's theorem. For $1 \leq p<\infty$, I am pretty sure the answer is also yes. However, I can't seem to find a way to extend $f$ or the harmonic majorant of $|f|^p$ to the whole plane. Is there any way to do so? Thank you, Malik
An answer by Georges Lowther has been given at Math Overflow.
{ "language": "en", "url": "https://math.stackexchange.com/questions/14412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Intuition for not-so-smooth manifolds In standard textbooks on (smooth) manifolds, for example the well-known series by John M. Lee or Jeffrey Lee, one deals either with continuous manifolds or with smooth manifolds. However, neither in these books nor in lectures have I encountered real examples where a manifold may be $C^k$, but not $C^{k+1}$. Intuitively, I would suppose the $|\cdot|_\infty$-ball with radius $1$ to be a merely continuous, non-smooth manifold, because smoothness fails at the edges of the cube. In contrast to this, polar coordinates show the $|\cdot|_{2}$-ball with radius $1$ is in fact a smooth manifold. I'd be thankful for some examples, with clues to the basic techniques, of how the different degrees of smoothness manifest 'in real life'.
One can show that any $C^k$-manifold, for $k \geq 1$, has a unique enrichment to a $C^{\infty}$-manifold. (I.e. given $M$ with its $C^k$-atlas, we can find a $C^{\infty}$-atlas on $M$, compatible with the given $C^k$-atlas, and this $C^{\infty}$-atlas is unique up to equivalence; see wikipedia for more details.) So there is not much point in considering $C^k$-manifolds other than for $k = 0$ or $\infty$. With regard to your unit ball examples, note that the $| \cdot |_{\infty}$-unit ball, although it has corners, is homeomorphic to the $| \cdot |_2$-unit ball; one says that it can be smoothed. There are topological manifolds that cannot be smoothed (in dimension 4 and higher), in the sense that they are not homeomorphic to a smooth manifold. There are also smooth manifolds that are homeomorphic, but not diffeomorphic. (E.g. when $n \geq 7,$ one can find smooth manifolds that are homeomorphic to $S^n$, but not diffeomorphic to it; these are so-called exotic spheres.) Again, the wikipedia entry has more details.
{ "language": "en", "url": "https://math.stackexchange.com/questions/14612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Easy question about finite energy due to convergence The infinite-length sequence $x_1[n]$ defined by $$x_1[n]= \begin{cases} \dfrac{1}{n} & \text{if } n \geq 1,\\ 0 & \text{if } n \leq 0 \end{cases}$$ has an energy equal to $\mathcal{E}_{x_1} = \sum^\infty_{n=1}\left(\dfrac{1}{n}\right)^{2}$ which converges to $\pi^2/6$, indicating that $x_1[n]$ has finite energy. I don't get where we find $\pi^2/6$. It would be great if anyone can help me out.
The sum of the series $\displaystyle\sum^\infty_{n=1}(\dfrac{1}{n})^{2}=\dfrac{\pi^2}{6}$ is a classical result due to Euler. Several proofs are given in the answers to Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem). PS. Here Robin Chapman collects 14 proofs. PPS. The improper double integral $$\int_{0}^{1}\int_{0}^{1}\left(\dfrac{1}{1-xy}\right) \mathrm{d}x\mathrm{d}y=\int_{0}^{1}\int_{0}^{1}\left(\sum_{n=1}^{\infty }\left( xy\right)^{n-1}\right) \mathrm{d}x\mathrm{d}y=\sum^\infty_{n=1}\dfrac{1}{n^2} =\dfrac{\pi^2}{6}=\zeta(2)$$ is finite, as pointed out in Proofs from THE BOOK by Martin Aigner and Günter Ziegler. The original article by Tom Apostol is here.
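The convergence can also be observed directly; note that the tail $\sum_{n>N} 1/n^2$ is about $1/N$, so the partial sums approach $\pi^2/6$ rather slowly (a quick Python check, my addition):

```python
import math

# Partial sum of the Basel series with N = 100000 terms.
partial = sum(1 / n ** 2 for n in range(1, 100001))
print(partial, math.pi ** 2 / 6)
assert abs(partial - math.pi ** 2 / 6) < 1.1e-5   # tail after N terms ~ 1/N
```

Either way, the sum is finite, which is all the energy argument in the question needs.
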
{ "language": "en", "url": "https://math.stackexchange.com/questions/14658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The logic behind the rule of three on this calculation First, to understand my question, check out this one: Calculating percentages for taxes. Second, consider that I'm a layman in math. So, after trying to understand the logic used to get the final result, I was wondering: why multiply $20,000 by 100 and then divide by 83? I know this is the rule of three, but I can't understand the "internals" of this approach. It isn't as intuitive as thinking this way: taking 1% of a value is the same as dividing this value by 100. In other words: I have 100 separate parts of this whole. It's intuitive to think about the taxes like this: $$X - 17\% = \$20.000$$ So: $$\$20.000 = 83\%$$ For me, the easiest and most comprehensible way to solve this is: $$\$20.000 / 83 = 240.96$$ That is, if 100% is 100 parts of the whole, and $20.000 accounts for 83 of those parts, then one part is $20.000 divided by 83. And finally, to get the result: $$\$20.000 + 17 \cdot 240.96$$ My final question is: how can I think intuitively like this using the rule of three? In other words, why is multiplying 20.000 by 100 and then dividing by 83 a shortcut to get the result?
I think the easiest way to see this is as a proportion; that is: $$\frac{X}{100}=\frac{20000}{83}$$ (The two sides are equal because they express the same proportion.) Then you only need to solve the equation for $X$: $$X= \frac{(20000)(100)}{83}$$ The 100 passes over to multiply; remember this is solved by cross-multiplying. Arturo was clear about this in the other post.
{ "language": "en", "url": "https://math.stackexchange.com/questions/14715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why isn't there a uniform probability distribution over the positive real numbers? Apparently, the solution to the Card Doubling Paradox is that a uniform probability distribution over the positive real numbers doesn't exist. Can anyone explain why this is the case and what probability distributions can exist over the positive real numbers (it seems that this would be quite limited, given that such a simple distribution is impossible)?
For every probability density $f$, $\liminf_{x\to \pm \infty} f(x) = 0$ must hold. It is clear that for a uniform distribution, the density has to be constant on the considered interval. Combining both requirements, only $g(x) = 0$ remains as a choice for a uniform density on $(-\infty, \infty)$. But $\int_{-\infty}^\infty g(x)dx = 0 \neq 1$, that is, $g$ is not a proper probability density.
{ "language": "en", "url": "https://math.stackexchange.com/questions/14777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 5, "answer_id": 0 }
help confirm my answer for the following question I was given the question: what is 9+99+999+9999+...+999..99 (30 digits)? After noticing a trend, I came to the conclusion that the answer would be 28 1's followed by 080. Can anyone confirm my answer and give a reason as to why?
Note that $$\underbrace{99\cdots 9}_{k\text{ digits}} = 10^k - 1.$$ So your sum is the same as $$(10-1) + (10^2-1) + (10^3-1) + \cdots + (10^{30}-1),$$ which is equal to $$(10 + 10^2 + 10^3 + \cdots + 10^{30}) - 30.$$ The first sum is easy to do, the difference is easy to do, and it gives your answer.
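The closed form can be sanity-checked by direct computation; a quick Python sketch (nothing here beyond the formulas above):

```python
# Sum 9 + 99 + 999 + ... + (30 nines), directly and via the closed form.
direct = sum(10**k - 1 for k in range(1, 31))
closed_form = sum(10**k for k in range(1, 31)) - 30

assert direct == closed_form
print(direct)                             # 28 ones followed by 080
print(str(direct) == "1" * 28 + "080")    # True
```

This confirms the asker's conjectured pattern of 28 ones followed by 080.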
{ "language": "en", "url": "https://math.stackexchange.com/questions/14821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
an example of a continuous function whose Fourier series diverges at a dense set of points Please give me a link to a reference for an example of a continuous function whose Fourier series diverges at a dense set of points. (given by Du Bois-Reymond). I couldn't find this in Wikipedia.
Kolmogorov improved his result to a Fourier series diverging everywhere. Original papers, in French: Kolmogorov, A. N.: Une série de Fourier-Lebesgue divergente presque partout, Fund. Math., 4, 324-328 (1923). Kolmogorov, A. N.: Une série de Fourier-Lebesgue divergente partout, Comptes Rendus, 183, 1327-1328 (1926).
{ "language": "en", "url": "https://math.stackexchange.com/questions/14855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Probability that a random permutation has no fixed point among the first $k$ elements Is it true that $\frac1{n!} \int_0^\infty x^{n-k} (x-1)^k e^{-x}\,dx \approx e^{-k/n}$ when $k$ and $n$ are large integers with $k \le n$? This quantity is the probability that a random permutation of $n$ elements does not fix any of the first $k$ elements.
This is not an answer but an observation that the quantity is equal to $\sum_{i=0}^{k} \frac{{(-1)}^i}{n!}{k \choose i}(n-i)!$.
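Numerically, this sum is indeed close to $e^{-k/n}$ for moderately large values; a quick check (the function name is mine):

```python
import math

def prob_no_fixed_point(n, k):
    """P(a random permutation of n elements fixes none of the first k),
    via inclusion-exclusion: sum_i (-1)^i C(k, i) (n - i)! / n!."""
    return sum((-1)**i * math.comb(k, i) * math.factorial(n - i)
               for i in range(k + 1)) / math.factorial(n)

n, k = 50, 20
print(prob_no_fixed_point(n, k))   # about 0.6687
print(math.exp(-k / n))            # about 0.6703
```

The two values agree to a couple of decimal places already at this size.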
{ "language": "en", "url": "https://math.stackexchange.com/questions/14925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
What is the equation for finding a number that is X% less than another number? I know this seems like a juvenile question, but for some reason I can't recall how to do this. Let's say I want to find 87 reduced by 99.75%. What is the equation to do that? Thanks!
If you want $X$% of 87, multiply the latter by $\frac{X}{100}$. (So, for example, to get 25 percent, you multiply by $0.25$). If by "reduced by 99.75%" you mean 99.75% of the total, then multiply by $.9975$. If "reduced by 99.75%" means taking 99.75% off (so you are left with only 0.25%), then multiply by $0.0025$.
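In code, the two readings of "reduced by" look like this (a trivial illustration):

```python
x = 87

print(0.9975 * x)   # 99.75% of 87, i.e. 86.7825 up to float rounding
print(0.0025 * x)   # 87 reduced by 99.75%, i.e. 0.2175 up to float rounding
```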
{ "language": "en", "url": "https://math.stackexchange.com/questions/14971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Convolution of multiple probability density functions I have a series of tasks where when one task finishes the next task runs, until all of the tasks are done. I need to find the probability that everything will be finished at different points in time. How should I approach this? Is there a way to find this in polynomial time? The pdfs for how long individual tasks will run have been found experimentally and are not guaranteed to follow any particular type of distribution.
I don't know if that is what you are looking for, but if $X_1,\ldots,X_n$ are i.i.d. random variables with mean $\mu$ and (finite) variance $\sigma^2$, then $$ {\rm P}\bigg(\sum\limits_{i = 1}^n {X_i } \le t \bigg) = {\rm P}\bigg(\frac{{\sum\nolimits_{i = 1}^n {X_i } - n\mu }}{{\sigma \sqrt n }} \le \frac{{t - n\mu }}{{\sigma \sqrt n }}\bigg), $$ and the random variable $\frac{{\sum\nolimits_{i = 1}^n {X_i } - n\mu }}{{\sigma \sqrt n }}$ converges to the standard normal distribution as $n \to \infty$. Now, you can estimate the unknown parameters $\mu$ and $\sigma^2$ (assuming the variance is finite), so if $n$ is sufficiently large, you are actually done. For further details see this (note the subsection Density functions).
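As a sketch of how this could be used in practice (the task-duration distribution below is made up purely for illustration — in the asker's setting it would come from the experimental data):

```python
import math
import random

random.seed(0)

n = 40                        # number of tasks in the chain

def one_task():               # a deliberately non-normal duration
    return random.uniform(0.0, 2.0) ** 2

# Empirical distribution of the total duration
samples = [sum(one_task() for _ in range(n)) for _ in range(20000)]

mean_total = sum(samples) / len(samples)
var_total = sum((s - mean_total) ** 2 for s in samples) / len(samples)

t = 60.0
empirical = sum(s <= t for s in samples) / len(samples)
z = (t - mean_total) / math.sqrt(var_total)
normal_approx = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(empirical, normal_approx)   # the two should be close
```

Even with a skewed per-task distribution, the normal approximation of the total is already good at $n=40$.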
{ "language": "en", "url": "https://math.stackexchange.com/questions/15024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
A group of order 28 with a normal Sylow 2-subgroup is abelian How does one prove that a group of order 28 which has a normal subgroup of order 4 is an abelian group?
Let $G$ be the group of order $28$, $N$ the normal subgroup of order $4$, and let $K = G/N$ be the quotient group, which has order $7$ and is therefore cyclic. By the Schur-Zassenhaus Theorem, $G$ is a semidirect product of $N$ and $K$. It will be abelian iff the semidirect product is actually direct. In order to give a nontrivial semidirect product here, it is necessary and sufficient to find a nontrivial homomorphism from $K$ into the automorphism group $\operatorname{Aut}(N)$ of $N$, or equivalently an element of order $7$ in $\operatorname{Aut}(N)$. But $N$ is either cyclic of order $4$ -- in which case $\operatorname{Aut}(N)$ has order $2$ -- or is the Klein group $C_2 \times C_2$, in which case $\operatorname{Aut}(N)$ has order $6$. Either way there is no element of order $7$. Alternately, consider what Sylow theory has to say about the Sylow $7$-subgroups. Note: of course the second method, which is only hinted at, is probably what is intended. But the two methods interact in an interesting way with regard to the following more general problem. Suppose $G$ is a finite group of order $np$ with $p$ a prime number which does not divide $n$, and that there is an abelian normal subgroup $N$ of order $n$. Under what conditions on $N$ and $p$ is $G$ necessarily abelian? If you use Sylow theory, you get that it is enough for $p$ to be sufficiently large compared to $n$. (One can be explicit; I don't want to be yet to give the OP some time to think about his particular problem.) Now comparing to the approach of showing that any semidirect product of $N$ with the cyclic group of order $p$ is necessarily direct one gets information about the largest prime $p$ which divides the order of the automorphism group $\operatorname{Aut}(N)$. It could be fun to explore this further. (More fun than proving something about groups of order $28$, anyway. This type of overly specific question tends to bore me.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/15064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Good Book On Combinatorics What is your recommendation for an in-depth introductory combinatorics book? A book that doesn't just tell you about the multiplication principle, but rather shows the whole logic behind the questions with full proofs. The book should be for a first-year student in college. Do you know a good book on the subject? Thanks.
Try Principles and Techniques in Combinatorics by Chen Chuan Chong and Koh Khee Meng or Combinatorics by Peter Cameron. The latter is more advanced and has more topics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/15201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "125", "answer_count": 21, "answer_id": 8 }
Sum of $n$ consecutive numbers divided by $n$ I was trying to write a poker program yesterday, and trying to figure out a way to tell the computer how to detect a straight. I realized that if you add $5$ consecutive numbers and divide by $5$, you'll get no remainder. I tried for $6$ consecutive numbers, but it didn't work. It turns out it only works for odd numbers. Really silly question, but does this 'method' have a name? Has it been proven that for all odd $n$, the sum of any $n$ consecutive numbers divided by $n$ yields a remainder of zero?
Yes, If $n=2k+1$, write the numbers as $a - k, a-(k-1), \dots,a-1, a, a + 1, \dots, a+(k-1), a+k$ Adding gives you $na$ which is divisible by $n$. Try something similar for even $n =2m$ and you see that the sum is $m \mod n$, which is not divisible by $n$.
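A quick brute-force check of both cases (a throwaway sketch):

```python
# Sum of n consecutive integers mod n: 0 when n is odd, n/2 when n is even.
for n in range(2, 12):
    for start in range(-20, 21):
        r = sum(range(start, start + n)) % n
        assert r == (0 if n % 2 == 1 else n // 2)
print("verified for n = 2..11")
```

This matches the argument above: for even $n = 2m$ the remainder is always $m$, never $0$.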
{ "language": "en", "url": "https://math.stackexchange.com/questions/15225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
famous space curves in geometry history? For an university assignment I have to visualize some curves in 3 dimensional space. Until now I've implemented Bézier, helix and conical spiral. Could you give me some advice about some famous curves in geometry history?
Let $t \in [0,2\pi]$. And let $r(t)=\frac{1}{2-\sin (2t)}[\cos(3t),\sin(3t),\cos(2t)]$. Then $r(t)$ parametrizes a 3-dimensional curve with no three-tangent plane (I must admit I'm not sure of the English terminology here) - that is, every tangent plane of the curve meets the curve in no more than two points. (I have no clue how to prove such things, however)
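If it helps with the visualization assignment, here is a minimal sketch that samples points of this curve (no plotting library assumed; feed the points to whatever renderer you use):

```python
import math

def r(t):
    s = 1.0 / (2.0 - math.sin(2.0 * t))
    return (s * math.cos(3.0 * t),
            s * math.sin(3.0 * t),
            s * math.cos(2.0 * t))

# Sample 1000 segments over one period; the curve is closed.
points = [r(2.0 * math.pi * i / 1000) for i in range(1001)]
print(points[0])    # (0.5, 0.0, 0.5)
print(points[-1])   # the same point, up to floating-point noise
```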
{ "language": "en", "url": "https://math.stackexchange.com/questions/15260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 1 }
integral test to show that infinite product $ \prod \limits_{n=1}^\infty\left(1+\frac{2}{n}\right)$ diverges This is part of an assignment that I need to get a good mark for - I'd appreciate it if you guys could look over it and give some pointers where I've gone wrong. (apologies for the italics) $$\prod_{n=1}^\infty\left(1+\frac{2}{n}\right)\; \text{ converges when } \sum_1^\infty \ln\left(1+\frac{2}{n}\right)\; \text{ converges }.$$ $$\sum_1^\infty \ln\left(1+\frac{2}{n}\right)\;=\;\sum_1^\infty \ln(n+2)-\ln(n)$$ $$ \text{let } f(x)=\ln(x+2)-\ln(x) \rightarrow f'(x)=\frac{1}{x+2} - \frac{1}{x}$$ $$ = \frac{x-x-2}{x(x+2)} = \frac{-2}{x(x+2)}<0$$ $$f(x)\ \text{is a decreasing function}.$$ $$f(x) \; \text{is a positive function for} \;x\geq1$$ $$f(x)\;\text{is a continuous function for} \;x>=1$$ using integration test. $$\int_1^\infty \ln(x+2) - \ln(x) = \lim_{t \to \infty}\int_1^t \ln(x+2)dx - \lim_{t \to \infty}\int_1^t \ln x dx$$ $$\int \ln(x)dx = x \ln x - x + c \Rightarrow \int \ln(x+2) = (x+2)\ln(x+2) - (x+2) + c$$ Therefore $$\int \ln(x+2) - \ln(x)dx = (x+2)\ln(x+2)-x - 2 - x \ln(x) + x + c$$ $$ = x \ln(\frac{x+2}{x})+ 2\ln(x+2)-2 + c $$ Therefore, $$\int_1^\infty \ln(x+2) - \ln(x)dx = \lim_{t \to \infty}\left[x \ln(\frac{x+2}{x}) + 2 \ln(x+2) - 2\right]_1^t$$ $$ = \lim_{t \to \infty}\left[t \ln(\frac{t+2}{t}) + 2\ln(t+2) - 2\right] - \lim_{t \to \infty}\left[\ln(\frac{3}{1}) + 2\ln(3) - 2\right] $$ $$ =\lim_{t \to \infty}\left[t \ln(\frac{t+2}{t}) + 2\ln(t+2) - 3\ln(3)\right]$$ $$ As\; t\rightarrow\infty, \; \lim_{t \to \infty}t \ln\left(\frac{t+2}{t}\right) + 2\ln(t+2) = \infty. $$ Therefore the series $$\sum_1^\infty \ln\left(1+\frac{2}{n}\right) $$ is divergent. Similarly the infinite product $$\prod_{n=1}^\infty\left(1+\frac{2}{n}\right)$$ is also divergent.
If you don't have to use the integral test, then you can do something like this; it is much easier. $$\prod\limits_{n=1}^{k}(1+\frac{2}{n})=\frac{(k+1)(k+2)}{2}.$$ It is very easy to show, since the product is equal to: $\frac{3}{1}\cdot\frac{4}{2}\cdot\frac{5}{3}\cdot\frac{6}{4}\cdot\frac{7}{5}\cdots$ You can cancel almost all of these fractions: the numerator of the $n$th fraction cancels with the denominator of the $(n+2)$th fraction, leaving only $\frac{(k+1)(k+2)}{2}$, which grows without bound.
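The closed form is easy to confirm by computer (a throwaway check using exact rational arithmetic):

```python
from fractions import Fraction

prod = Fraction(1)
for k in range(1, 51):
    prod *= 1 + Fraction(2, k)
    assert prod == Fraction((k + 1) * (k + 2), 2)   # telescoped closed form
print(prod)   # 1326 = 51 * 52 / 2, growing without bound
```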
{ "language": "en", "url": "https://math.stackexchange.com/questions/15297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Using the second principle of finite induction to prove $a^n -1 = (a-1)(a^{n-1} + a^{n-2} + ... + a + 1)$ for all $n \geq 1$ The hint for this problem is $a^{n+1} - 1 = (a + 1)(a^n - 1) - a(a^{n-1} - 1)$ I see that the problem is true because if you distribute the $a$ and the $-1$ the terms cancel out to equal the left side. However, since it is telling me to use strong induction I am guessing there is more I am supposed to be doing. On the hint I can see that it is a true statement, but I am not sure how to use that to prove the equation or how the right side of the hint relates to the right side of the problem. Also, I do realize that in the case of the hint $n = 1$ would be the special case.
Sometimes the easiest way to figure out an induction argument like this is to prove a particular case. I'll take care of proving $n = 3$, assuming you've already proven $n = 1$ and $n = 2$. By the hint, we have $$a^{3} - 1 = (a + 1)(a^2 - 1) - a(a - 1)$$ But the cases $n = 1$ and $n = 2$ hold, so we rewrite this as $$a^{3} - 1 = (a + 1)(a - 1)(a + 1) - a(a - 1)$$ Now by factoring $(a - 1)$ on the right hand side, we have $$a^3 - 1 = (a - 1)(a^2 + 2a + 1 - a) = (a - 1)(a^2 + a + 1)$$ This is less interesting than the case $n = 4$; try to work that out on your own! After that, you should be able to work out a general $n + 1$ case.
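For peace of mind, both the identity and the hint can be checked for small values (a throwaway script, not part of the induction proof):

```python
# Check a^n - 1 = (a - 1)(a^{n-1} + ... + a + 1) and the induction hint.
for a in range(-5, 6):
    for n in range(1, 10):
        assert a**n - 1 == (a - 1) * sum(a**j for j in range(n))
        assert a**(n + 1) - 1 == (a + 1) * (a**n - 1) - a * (a**(n - 1) - 1)
print("both identities hold for the tested values")
```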
{ "language": "en", "url": "https://math.stackexchange.com/questions/15371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How to make a sphere-ish shape with triangle faces? I want to make an origami of a sphere, so I planned to print some net of a pentakis icosahedron, but I have an image of another sphere with more polygons: I would like to find the net of such a model (I know it will be very fun to cut). Do you know if it has a name?
This whitepaper on Geodesic Math may be helpful. Probably less helpful is this Ruby Quiz I hosted on writing a program to calculate Geodesic spheres.
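The construction behind such models is midpoint subdivision: split each face into four triangles and push the new vertices out onto the sphere. A minimal sketch (done on a single octahedron face for brevity; a real model would subdivide all faces of an icosahedron):

```python
import math

def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def subdivide(a, b, c, depth):
    """Split triangle abc into 4, pushing new vertices onto the unit sphere."""
    if depth == 0:
        return [(a, b, c)]
    ab = normalize(tuple((x + y) / 2 for x, y in zip(a, b)))
    bc = normalize(tuple((x + y) / 2 for x, y in zip(b, c)))
    ca = normalize(tuple((x + y) / 2 for x, y in zip(c, a)))
    return (subdivide(a, ab, ca, depth - 1)
            + subdivide(ab, b, bc, depth - 1)
            + subdivide(ca, bc, c, depth - 1)
            + subdivide(ab, bc, ca, depth - 1))

# One octahedron face refined twice gives 16 spherical triangles.
tris = subdivide((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), 2)
print(len(tris))   # 16
```

Each level of `depth` quadruples the number of triangles, which is why these spheres look smoother and smoother.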
{ "language": "en", "url": "https://math.stackexchange.com/questions/15438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
How to understand compactness? How can one understand compactness in a topological space in an intuitive way?
Maybe you should think about compactness, as something that takes local properties to global properties. For example, if $f:K\rightarrow \mathbb{R}$ is continuous, $K$ is compact, and $f(x)>t_x>0$ for all x, then you can find $t>0$ such that $f(x)>t>0$ for all x - so from $f(x)>t_x>0$ point wise, you know that $f>t>0$ as a function. (This is a simple consequence of Weierstrass theorem in $[a,b]\rightarrow \mathbb{R}$) Usually we find some property that is true for every "small" enough open sets, then use compactness to reduce the case to finitely many open sets and use induction to show that the property is true for all of the space. This is at least how I understand compactness. As the commenters below this message wrote (and I didn't emphasize enough), we usually use compactness to reduce infinite problems\conditions\restraints to a finite subset that cover the entire space, and then use some argument that works only for finite cases (like induction, taking max\min, take finite sums etc). In my example above, we wanted to find a minimum over all the lower bounds, but in the infinite case this is usually just an infimum (and can be zero), but when we reduce to a finite case there is a minimum. In this way we can think of compactness as something that let us use some finite argument on infinite covers (and many times to transfer some property from the cover to the entire space).
{ "language": "en", "url": "https://math.stackexchange.com/questions/15486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 8, "answer_id": 3 }
Riddle (simple arithmetic problem/illusion) I'm not sure how well known this "riddle" is, but here it goes. Three people go to a restaurant and each buy food worth 10.00. When they're done, they give 30.00 to the waitress. She gives the money to the manager; the manager says they paid too much and gives 5.00 back to the waitress. She comes back to the table, gives a dollar back to each person, and then puts the remaining 2.00 in her pocket as a tip. So now, each person has paid 9.00. 9.00 x 3 = 27.00. Plus the 2.00 in the waitress's pocket is 29.00. What happened to the 30th dollar? So what's the issue with this way of calculating (since if you work it backwards it works fine) that doesn't give us the correct result?
I am not sure how 'mathematical' the riddle part of this story is. :) Each guest paid \$9 because together they paid \$30, and got back \$27. Of those \$27, the manager got \$25, and the waitress got the other \$2 -- there is no sense/reason in adding \$27 to \$2 -- though you might have a gut feeling that you are headed in the 'right direction' of getting the initial \$30 that way. But take a look -- there really is no extra dollar left, that's all it should be -- \$25+\$2. In other words, each guest paid 9 dollars with the tip included into that. Another, possibly more insightful way to look at things: The guests initially paid 30 dollars. The waitress returned them 3 dollars. So the guests ended up paying \$27 (not thirty!). Of those 27 dollars, 2 dollars were pocketed as a tip by the waitress, so you could also say that they paid \$8.333... each and then added a two dollar tip. I think it's a question of order of operations (and using it with respect to appropriate quantities) that could be confusing here. Punchline: the waitress' 2 dollars were part of 27 dollars paid by the customers. They saved the other 3 dollars of the initial \$30 because of what the manager said, so if you are wondering 'what happened to \$30?' it's more of 27+3, or 25+2+3, where \$25 is what they paid without a tip, \$2 is the tip, and \$3 is the amount they saved.
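The accounting can be written out in a few lines to confirm there is no missing dollar (a trivial check):

```python
paid = 30        # handed to the waitress
returned = 3     # one dollar back to each guest
tip = 2          # pocketed by the waitress
manager = 25     # kept by the manager

assert paid - returned == manager + tip    # guests paid 27 = 25 + 2
assert manager + tip + returned == paid    # all 30 dollars accounted for
print("no missing dollar")
```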
{ "language": "en", "url": "https://math.stackexchange.com/questions/15524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Equation for a circle I'm reading a book about calculus on my own and am stuck at a problem. The problem is: There are two circles of radius $2$ that have centers on the line $x = 1$ and pass through the origin. Find their equations. The equation for a circle is $(x-h)^2 + (y-k)^2 = r^2$. Any hints will be really appreciated. EDIT: Here is what I did. I drew a triangle from the origin and applied the Pythagorean theorem to find the perpendicular, the hypotenuse being $2$ ($\text{radius} = 2$) and the base $1$ (because $x = 1$); the value of the y-coordinate is $\sqrt3$. Can anyone confirm if this is correct?
Note that you can write down the equation of a circle if you know the co-ordinates of the center $(h,k)$ and its radius $r$. You already know that the two circles in question have radius $2$. It remains to figure out where their centers lie. You are told that their centers lie on the line $x=1$ which is a line parallel to the $Y$-axis. So you know the $x$-coordinate of the centers. The only thing that remains to be figured out are the $y$-coordinates of the two circles. To figure this out, you are given an additional information that both circles pass through the origin. I would suggest drawing a picture of the Cartesian plane and of the line $x=1$ on it. You know the centers lie on this line and you know the centers have to be at a certain distance from the origin (why?). Given these two constraints, figure out what possible locations the centers can be at. If this isn't clear enough, post your work and indicate where you are getting stuck.
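Following the asker's own Pythagorean computation, the centers come out as $(1,\pm\sqrt3)$; a quick numerical confirmation:

```python
import math

r = 2
for k in (math.sqrt(3), -math.sqrt(3)):
    # center (1, k) must be at distance r from the origin
    assert abs(math.hypot(1, k) - r) < 1e-12
    # so (0, 0) satisfies (x - 1)^2 + (y - k)^2 = 4
    assert abs((0 - 1) ** 2 + (0 - k) ** 2 - r ** 2) < 1e-12
print("centers (1, +/-sqrt(3)) both work")
```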
{ "language": "en", "url": "https://math.stackexchange.com/questions/15564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Why is the Möbius strip not orientable? I am trying to understand the notion of an orientable manifold. Let M be a smooth n-manifold. We say that M is orientable if and only if there exists an atlas $A = \{(U_{\alpha}, \phi_{\alpha})\}$ such that $\textrm{det}(J(\phi_{\alpha} \circ \phi_{\beta}^{-1}))> 0$ (where defined). My question is: Using this definition of orientation, how can one prove that the Möbius strip is not orientable? Thank you!
Let $M:=\{(x,y)|x\in\mathbb R, -1<y<1\}$ be an infinite strip and choose an $L>0$. The equivalence relation $(x+L,-y)\sim(x,y)$ defines a Möbius strip $\hat M$. Let $\pi: M \to \hat M$ be the projection map. The Möbius strip $\hat M$ inherits the differentiable structure from ${\mathbb R}^2$. We have to prove that $\hat M$ does not admit an atlas of the described kind which is compatible with the differentiable structure on $\hat M$. Assume that there is such an atlas $(U_\alpha,\phi_\alpha)_{\alpha\in I}$. We then define a function $\sigma:{\mathbb R}\to\{-1,1\}$ as follows: For given $x\in{\mathbb R}$ the point $\pi(x,0)$ is in $\hat M$, so there is an $\alpha\in I$ with $\pi(x,0)\in U_\alpha$. The map $f:=\phi_\alpha^{-1}\circ\pi$ is a diffeomorphism in a neighbourhood $V$ of $(x,0)$. Put $\sigma(x):=\mathrm{sgn}\thinspace J_f(x,0)$, where $J_f$ denotes the Jacobian of $f$. One easily checks that $\sigma(\cdot)$ is well defined and is locally constant, whence it is constant on ${\mathbb R}$. On the other hand we have $f(x+L,y)\equiv f(x,-y)$ in $V$ which implies $\sigma(L)=-\sigma(0)$ -- a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/15602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84", "answer_count": 3, "answer_id": 1 }
What's stopping me from choosing the nth Eilenberg Mac Lane space to be the following simplicial abelian group? Given an abelian group $X$, let $F_n(X)$ denote the simplicial abelian group defined as follows: $F_n(X)_j=0$ for all $j<n$ and $F_n(X)_j=X$ for all $j\geq n$ with the appropriate zero and identity maps between them so the normalization (normalized moore complex) gives the appropriate homology (say, $d_i=0$ for every $i\geq 1$ (obviously for the parts higher than $n$) and $d_0=id_X$). Then by the Hurewicz theorem, this simplicial abelian group seems like it should have homotopy concentrated in degree $n$ with $n$th component isomorphic to $X$ by computing the homology of the normalization. Since this thing is a Kan complex (because it's a simplicial abelian group), isn't this a representative for the Eilenberg-Mac Lane space $\kappa(X,n)$?
The object you describe can't actually be made into a simplicial object because the boundary maps are incompatible with being able to define degeneracies. Let's examine $n=0$. Then you're going to need to define a degeneracy map $s_0: X \to X$ that satisfies the simplicial identities $d_0 s_0 = d_1 s_0 = id$. However, substituting in your value for $d_1$, this says $id = 0$. More generally, if you have a simplicial object which is zero in degrees less than $n$ and $X$ in degree $n$, then the degeneracies give rise in degree $m$ to - at least - one summand isomorphic to $X$ per surjection of ordered sets $\{0\ldots m\} \twoheadrightarrow \{0\ldots n\}$. However, you can define a simplicial object which, in degree $m$, is $$ \bigoplus_{\{0\ldots m\} \twoheadrightarrow \{0\ldots n\}} X $$ with appropriate boundary maps, and this does give you an Eilenberg-Mac Lane space for $X$. This is some kind of "direct sum of copies of $X$ indexed by the simplices of $S^n$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/15641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
For what functions $f(x)$ is $f(x)f(y)$ convex? For which functions $f\colon [0,1] \to [0,1]$ is the function $g(x,y)=f(x)f(y)$ convex over $(x,y) \in [0,1]\times [0,1]$ ? Is there a nice characterization of such functions $f$? The obvious examples are exponentials of the form $e^{ax+b}$ and their convex combinations. Anything else? EDIT: This is a simple observation summarizing the status of this question so far. The class of such functions f includes all log-convex functions, and is included in the class of convex functions. So now, the question becomes: are there any functions $f$ that are not log-convex yet $g(x,y)=f(x)f(y)$ is convex? EDIT: Jonas Meyer observed that, by setting $x=y$, the determinant of the hessian of $g(x,y)$ is positive if and only if $f$ is a log-convex. This resolves the problem for twice continuously differentiable $f$. Namely: if $f$ is $C^2$, then $g(x,y)$ is convex if and only if $f$ is log-convex.
This may just be a silly observation. But, let $\alpha=(a_1,a_2)$ and $\beta=(b_1,b_2)$. If $f$ is convex and the sum of $g$ at the four corners of the square formed by $\alpha$ and $\beta$ is non-positive, then $g(t\alpha + (1-t)\beta) \leq tg(\alpha)+(1-t)g(\beta)$, where $t \in [0,1]$
{ "language": "en", "url": "https://math.stackexchange.com/questions/15707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 1 }
Stochastic integral and Stieltjes integral My question is on the convergence of the Riemann sum when the values are square-integrable random variables. The convergence depends on the evaluation point we choose — why is this the case? Here is some background to make this clearer. Suppose $f\colon \Re \mapsto \Re $ is some continuous function on $[a,b]$; the Stieltjes integral of $f$ with respect to itself is $\int^{b}_{a} f(t)df(t)$. If we take a partition $ \Delta_n = \{t_0, t_1, \cdots, t_n \}$ of $[a,b]$, the Riemann sum is $$ L_{n} = \sum^{n}_{i=1} f(t_{i-1})(f(t_{i})-f(t_{i-1})) $$ Now if the limit exists, say $\lim \limits_{n\to\infty} L_{n}= A$, and we instead choose the evaluation point $t_{i}$, then the sum $$ R_{n} = \sum^{n}_{i=1} f(t_{i})(f(t_{i})-f(t_{i-1})) $$ will also converge to $A$, so $$\lim_{n\to\infty}L_{n} = \lim_{n\to\infty}R_{n} .$$ Now we apply the same idea to a stochastic integral. Here $W(t)$ is a Wiener process and we wish to find $$\int^{b}_{a}W(t)dW(t) $$ $$ L_{n} = \sum^{n}_{i=1} W(t_{i-1})(W(t_{i})-W(t_{i-1})) $$ $$ R_{n} = \sum^{n}_{i=1} W(t_{i})(W(t_{i})-W(t_{i-1})) $$ In the $L^2$ norm the limits of $L_{n}$ and $R_{n}$ exist but are different: $$\lim_{n\to\infty} \Vert R_{n}-L_{n}\Vert = b-a $$ Can someone explain why the limits are different, given that each limit exists (which in this case it does)? I would have expected $\lim_{n\to\infty} \Vert R_{n}-L_{n}\Vert = 0 $ in the $L^2$ norm.
The limits of $R_n$ and $L_n$ coincide when the Stieltjes integral exists. Existence of the Stieltjes integral does not follow from the existence of these limits. In general, the existence and definition of the Stieltjes integral can be a messy business, as Figure 2.1 on page 6 (page 10 of the ps file) of this document can attest.
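One concrete way to see why the limits differ: $R_n - L_n = \sum_i (W(t_i) - W(t_{i-1}))^2$, which is exactly the quadratic variation of the path and tends to $b-a$, not $0$. A simulation sketch (step count and seed are arbitrary):

```python
import math
import random

random.seed(1)

a, b, n = 0.0, 1.0, 100000
dt = (b - a) / n

# One Brownian path W(t_0), ..., W(t_n) on [a, b]
W = [0.0]
for _ in range(n):
    W.append(W[-1] + random.gauss(0.0, math.sqrt(dt)))

L = sum(W[i - 1] * (W[i] - W[i - 1]) for i in range(1, n + 1))
R = sum(W[i] * (W[i] - W[i - 1]) for i in range(1, n + 1))

print(R - L)   # close to b - a = 1: the quadratic variation of the path
```

For a smooth function $f$ this quadratic variation would vanish as the partition refines, which is why the two Riemann sums agree in the deterministic Stieltjes case but not for a Wiener path.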
{ "language": "en", "url": "https://math.stackexchange.com/questions/15749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Simplicity of $A_n$ I have seen two proofs of the simplicity of $A_n,~ n \geq 5$ (Dummit & Foote, Hungerford). But, neither of them are such that they 'stick' to the head (at least my head). In a sense, I still do not have the feeling that I know why they are simple and why should it be 5 and not any other number (perhaps this is only because 3-cycles become conjugate in $A_n$ after $n$ becomes greater than 4). What is the most illuminating proof of the simplicity of $A_n,~ n \geq 5$ that you know?
Here are the ingredients I think about: * *$A_n$ is $n-2$-transitive on $\{1,2,\ldots,n\}$. That is, if $1\leq i_1\lt i_2\lt\cdots \lt i_{n-2}\leq n$ are any $n-2$ integers, and $j_1,\ldots,j_{n-2}$ are any $n-2$ distinct integers between $1$ and $n$, then there is an element of $A_n$ that maps $i_k$ to $j_k$ (in fact, one and only one element of $A_n$ that achieves this). This is easy: just write out the corresponding permutation. If it is even, this gives you an element of $A_n$. If it is not even, then adding a transposition involving the elements that do not occur among the $j_k$ makes it even and does not change the image of the $i_k$. *In particular, if $\sigma\in A_n$ fixes at least two elements, then every element with the same cycle structure to $\sigma$ is conjugate to $\sigma$ in $A_n$. *If $N$ is a normal subgroup of $A_n$, and $\sigma\in N$ fixes at least two elements, then every element with the same cycle structure as $\sigma$ is in $N$. *In particular, if $N$ is normal, contains a $3$-cycle, and $n\geq 5$, then $N$ contains all $3$-cycles; and since the $3$-cycles generate $A_n$ when $n\geq 3$, then $N=A_n$. *If $N$ is nontrivial and normal in $A_n$, and $n\geq 5$, then it contains a $3$-cycle: this involves a bit of manipulation, but the fact that $n\geq 5$ is as essential here as in (4) above: it gives you enough room to maneuver, room that you do not have in $A_4$ ($A_3$ and $A_2$ are also simple, but for silly reasons). But I'm not sure if this particular line qualifies as "illuminating" in terms of giving you great insight into the structure of $A_n$; I feel they give me a good feel for what you can and cannot do in terms of playing with permutations (especially the transitivity, and the actual mechanics of point 5 above), and they remind me why $n\geq 5$ is important.
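Ingredient 4 — that the $3$-cycles generate $A_n$ — can be verified by brute force for $n=5$ (a throwaway check, not a substitute for the proof):

```python
from itertools import combinations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations stored as tuples of images
    return tuple(p[i] for i in q)

n = 5
identity = tuple(range(n))

# All 3-cycles on {0, ..., 4}: two per 3-element subset.
three_cycles = set()
for a, b, c in combinations(range(n), 3):
    for x, y, z in ((a, b, c), (a, c, b)):
        p = list(identity)
        p[x], p[y], p[z] = y, z, x      # the cycle x -> y -> z -> x
        three_cycles.add(tuple(p))

# Close the 3-cycles under composition.
generated = {identity}
frontier = {identity}
while frontier:
    frontier = {compose(g, t) for g in frontier for t in three_cycles} - generated
    generated |= frontier

print(len(three_cycles), len(generated))   # the 20 3-cycles generate all 60 elements of A_5
```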
{ "language": "en", "url": "https://math.stackexchange.com/questions/15773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 5, "answer_id": 3 }
What is the origin of the term "Differentiable"? I was wondering today about why the word differentiable is used for describing functions that have a derivative or are differentiable. Perhaps because originally one considered finite differences? But that seems somewhat not right, because roughly speaking a derivative measures not the difference $f(x+h)-f(x)$, but rather the ratio $(f(x+h)-f(x))/h$. So, could people here shed light on why we use "differentiable"? Any pointers to academic / historical / etymological explanations are also welcome. Thanks!
From the Earliest Known Uses of Some of the Words of Mathematics webpage: DIFFERENTIAL CALCULUS. The term calculus differentialis was introduced by Leibniz in 1684 in Acta Eruditorum 3. Before introducing this term, he used the expression methodus tangentium directa (Struik, page 271). The OED has a nice quotation from Joseph Raphson’s Mathematical Dictionary of 1702: “A different way....passes....in France under the Name of Leibnitz's [sic] Differential Calculus, or Calculus of Differences.”
{ "language": "en", "url": "https://math.stackexchange.com/questions/15846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is long division the most optimal "effective procedure" for doing division problems? (Motivation: I am going to be working with a high school student next week on long division, which is a subject I strongly dislike.) Consider: $\frac{1110}{56}=19\frac{46}{56}$. This is really a super easy problem, since once you realize $56*20=1120$ it's trivial to write out $1110=56*19+46$. You can work out the long division for yourself if you want; needless to say it makes an otherwise trivial problem into a tedious, multi-step process. Long division is an "effective procedure", in the sense that a Turing machine could do any division problem once it's given the instructions for the long division procedure. To put it another way, an effective procedure is one for which given any problem of a specific type, I can apply this procedure systematically to this type of problem, and always arrive at a correct solution. Here are my questions: 1) Are there other distinct effective procedures for doing division problems besides long division? 2) Is there a way to measure how efficient a given effective procedure is for doing division problems? 3) Does there exist an optimal effective procedure for division problems, in the sense that this procedure is the most efficient?
In binary computing multiplying by $2$ is a shift. But humans also like to double numbers. An efficient algorithm can be constructed that only multiplies by $2$, subtracts numbers, compares numbers and adds numbers. Here is this division algorithm applied to $\frac{1110}{56}$. Create the table: $56 \times 2^0 \quad \;\, \quad \quad\quad = \;\, 56;\quad 2^0 = 1$ $56 \times 2^1 = 56 \times 2 \;\,= 112;\quad 2^1 = 2$ $56 \times 2^2 = 112 \times 2 = 224;\quad 2^2 = 4$ $56 \times 2^3 = 224 \times 2 = 448;\quad 2^3 = 8$ $56 \times 2^4 = 448 \times 2 = 896;\quad 2^4 = 16$ $56 \times 2^5 = 896 \times 2 \gt 1110 \text{ == STOP == }$ From this point on all that is needed to complete the calculation are the subtraction, comparison and addition operators. $1110 = 56 \, 2^4 + (1110 - 896) = 56 \, 2^4 + 214 = 56 \, 2^4 +56 \, 2^1 + (214 - 112) = $ $\quad 56 \, 2^4 +56 \, 2^1 + 102 = 56 \, 2^4 + 56 \, 2^1 + 56 \, 2^0 + (102-56) =$ $\quad 56 \, 2^4 + 56 \, 2^1 + 56 \, 2^0 + 46 =$ $\quad (2^4 + 2^1 + 2^0)\,56 + 46 =$ $\quad (16 + 2 + 1)\,56 + 46 =$ $\quad 19\times 56 + 46 $
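The table-plus-greedy-subtraction procedure above is easy to mechanize; here is a minimal Python sketch of it (the function name is mine):

```python
def divide(dividend, divisor):
    """Division using only doubling, comparison, subtraction and
    addition, as in the table-based method above."""
    assert divisor > 0
    # Build the table of divisor * 2^i while it does not exceed the dividend.
    table = []  # pairs (divisor * 2^i, 2^i)
    d, p = divisor, 1
    while d <= dividend:
        table.append((d, p))
        d, p = d + d, p + p  # doubling only
    # Greedily subtract the largest entries, largest first.
    quotient, remainder = 0, dividend
    for d, p in reversed(table):
        if d <= remainder:
            remainder -= d
            quotient += p
    return quotient, remainder

print(divide(1110, 56))  # (19, 46), i.e. 1110 = 56*19 + 46
```

Running it on the worked example reproduces the table (56, 112, 224, 448, 896) and the same greedy subtractions $896, 112, 56$.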
{ "language": "en", "url": "https://math.stackexchange.com/questions/15881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 3 }
what is the usual topology on a vector space? I do not understand the topology of a Lie group clearly. Let $G$ be a Lie group and $T_eG$ be its tangent space at the identity $e \in G$. Why is $Aut(T_eG)$ an open subset of the vector space of endomorphisms of $T_eG$ (i.e. $End(T_eG)$)? What does "open" mean?
First, this has nothing to do with Lie groups or Lie algebras. The only important part is that $T_eG$ is a vector space. For any vector space $V$ (say, finite dimensional over the reals), the set $End(V)$ is naturally a finite dimensional vector space. Hence, $End(V)$ is isomorphic to $\mathbb{R}^N$ for some $N$ (in fact, $N = (\dim V)^2$). Use any choice of isomorphism to topologize $End(V)$. This choice of isomorphism is equivalent to choosing a basis of $V$. Now, one has the determinant $\det:End(V)\rightarrow\mathbb{R}$, which is given as a polynomial in the entries of the matrices in $End(V)$ (they are matrices after choosing a basis), and hence is continuous. Since $\det$ is continuous, $\det^{-1}(\mathbb{R}-\{0\})$ is an open subset of $End(V)$. But this subset is precisely $Aut(V)$, the invertible transformations from $V$ to $V$.
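A small numerical illustration of what "open" buys you here: every matrix close enough to an invertible one is still invertible, because the determinant is a polynomial (hence continuous) in the entries. A sketch for $V=\mathbb{R}^2$:

```python
def det2(m):
    # determinant of a 2x2 matrix [[a, b], [c, d]] -- a polynomial
    # in the entries, hence continuous
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1.0, 2.0], [3.0, 4.0]]            # det = -2, so A lies in Aut(V)
eps = 1e-3                              # a small perturbation of every entry
B = [[x + eps for x in row] for row in A]
print(det2(A), det2(B))                 # both nonzero: B is still invertible
```

Since $\det A \neq 0$ and $\det$ is continuous, a whole neighborhood of $A$ inside $End(V)$ consists of invertible maps — which is exactly the statement that $Aut(V)$ is open.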
{ "language": "en", "url": "https://math.stackexchange.com/questions/16088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Considering a combination of functions I am sorry for the vague title - its because I don't know what I am talking about. I have a function whose value is determined by considering the outputs of some other functions. For example, F(x) = G(a,b) and H(x) and T(y) * *As you can see, I am not sure how to write the "and" part. That is, I want to say: In order to calculate F(x), you first have to calculate G(a,b) and H(x) and T(y); and then "mix them up in a certain way". *How can I represent this idea mathematically? Can somone please point me to some resources that further explain this in baby-steps? Followup: Thank you for all your replies. Let me clarify what I want to do. Consider a scenario where I want to find the value of a car. This depends on factors such as mileage, model, number of accidents, and also owner-bias. The owner-bias itself is a function of gender, years of driving experience, and age So.. how can I come up with a symbolic way to represent this AND also define the interralations between all these factors? I am sorry for the newbie question, please feel free to direct me to the relevant literature/terminology
The issue appears to be one of "dummy variables" and maybe composition of functions. When you write, for example, q(x)=x^2, it really doesn't matter that the variable is x. You also have that q(y)=y^2 and the same for any variable you name. But we expect the same variables on both sides of the equation. If we could show F(x)=G(a,b) that would say that F(x) doesn't depend upon a and b, so F(x) and hence G(a,b) must be constant. If F really depends upon G(a,b), H(x), and T(y) we would expect to see F(a,b,x,y)= some expression involving G(a,b), H(x), and T(y). An example would be F(a,b,x,y)=G(a,b)*H(x)+T(y). Sometimes H(x) and T(y) are the inputs to G. That we would write F(x,y)=G(H(x),T(y)) Qiaochu Yuan and I are trying to get at the same thing using different language. I hope at least one is helpful.
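A hypothetical sketch of the car example from the follow-up, using the composition pattern $F(\ldots)=\text{mix}(\ldots, B(g,y,a))$ described above — every name and every "mixing" formula below is made up purely for illustration:

```python
def owner_bias(gender, years_driving, age):
    # hypothetical inner function B(g, y, a); gender is accepted but
    # unused in this toy formula
    return 1.0 + 0.01 * years_driving - 0.002 * age

def car_value(mileage, base_price, accidents, gender, years_driving, age):
    # F depends on its own inputs *and* on the output of owner_bias,
    # i.e. F(...) = mix(base_price, mileage, accidents, B(g, y, a))
    bias = owner_bias(gender, years_driving, age)
    return bias * (base_price - 0.05 * mileage - 500 * accidents)

print(car_value(mileage=40000, base_price=10000, accidents=1,
                gender="F", years_driving=12, age=40))
```

The point is the structure, not the numbers: the inner factors feed a sub-function whose output is then combined with the remaining inputs, which is the $F(x,y)=G(H(x),T(y))$-style composition from the answer.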
{ "language": "en", "url": "https://math.stackexchange.com/questions/16159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Conditional existence of real numbers Are there such numbers $a$ and $b$ that if $a < b$, then $a > b$ ? Thanks.
I don't really think so. The condition $a < b$ implies that $a \ngeq b$ because $<$ is a total order on $\mathbb{R}$. If you're not familiar with orderings, they are just relations that are reflexive, antisymmetric and transitive (see Wikipedia). A set is said to be totally ordered if every element of the set can be compared to any other with the ordering given.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to check if a subset is a generator of a vector space I have a very noob questions about generators: what algorithm do I have to follow so I can prove that a finite subset is a generator? Here is the background story (I'll tell it all because I suck at maths and I might have understood the whole concept wrongly): I want to know how to compute the image of a vector space morphism. I'll continue on a concrete example: $f\colon \mathbb{R}^2 \to \mathbb{R}^2$, $f(x,y) = (x-y, 2x + y)$ is the morphism. To find its image, I observe that $f(x,y)$ can be rewritten as $x(1,2) + y(-1,1)$. If I am correct, that means that $\mathrm{Im}(f) = \langle(1,2), (-1,1)\rangle$ (that's the notation we use for set generators). Now the question is: does the $\{(1,2), (-1,1)\}$ set generate $\mathbb{R}^2$ or not?
If you want to know whether the two vectors span the plane, just use the standard results in this case: they span the plane if and only if they are linearly independent, or the $2$-by-$2$ matrix is invertible, and so on. Alternatively you can just show that for every point in the plane there exists the required $x$ and $y$ by solving a pair of simultaneous equations.
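For the concrete vectors in the question, the determinant test is a one-liner; a quick check in Python, with the $2\times 2$ determinant and Cramer's rule written out by hand:

```python
def det2(v, w):
    # determinant of the 2x2 matrix with columns v and w
    return v[0] * w[1] - v[1] * w[0]

v, w = (1, 2), (-1, 1)
d = det2(v, w)            # 1*1 - 2*(-1) = 3, nonzero
print(d != 0)             # True: the vectors are independent and span R^2

# solving x*v + y*w = (c1, c2) by Cramer's rule, possible since d != 0
def coords(c1, c2):
    return (det2((c1, c2), w) / d, det2(v, (c1, c2)) / d)

print(coords(5, 7))       # the unique (x, y) with x*(1,2) + y*(-1,1) = (5,7)
```

For $(5,7)$ this gives $(x,y)=(4,-1)$, and indeed $4\,(1,2) - 1\,(-1,1) = (5,7)$ — the "solve a pair of simultaneous equations" route from the answer, automated.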
{ "language": "en", "url": "https://math.stackexchange.com/questions/16253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Replacing $\text{Expression}<\epsilon$ by $\text{Expression} \leq \epsilon$ As an exercise I was doing a proof about equicontinuity of a certain function. I noticed that I am always choosing the limits in a way that I finally get: $\text{Expression} < \epsilon$ However it wouldn't hurt showing that $\text{Expression} \leq \epsilon$ would it, since $\epsilon$ is getting infinitesimally small? I have been doing this type of $\epsilon$ proofs quite some time now, but never asked myself that question. Am I allowed to write $\text{Expression} \leq \epsilon$? If so, when?
Since $\varepsilon>0$ is arbitrary it does not matter. If you have the non-strict inequality $\text{Expression} \leq \varepsilon$ for each $\varepsilon>0$, you can get a strict one by adding an arbitrary $\eta>0$: $\text{Expression} \leq \varepsilon < \varepsilon + \eta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
What is the point of logarithms? How are they used? Why do you need logarithms? In what situations do you use them?
Logarithms are primarily used for two things: i) Representation of numbers that span many orders of magnitude. For example, hydrogen ion concentration (which pH encodes) involves numbers that differ by many powers of ten (up to 10 digits). To allow easier representation of these numbers, logarithms are used. For example, let's say the quantity is $10000000000$. This can be written as $10^{10}$. Or let the quantity for another substance be $1000000$. This can be written as $10^6$. Note the base is always the same, but the exponent is unique. Therefore the log of the quantity can be used to identify the substance. For example the first substance can be represented as $\log 10000000000$, or $10$, and the second substance can be represented as $\log 1000000$, or $6$. Note $6$ and $10$ are much easier to deal with. But what if you're not a chemist? How would you use logs? ii) Algebra. Let's say you have the equation $316 = 10^x$. How would you solve for $x$? You could find the log of $316$, which is approximately $2.5$. The equation would then be $10^{2.5} = 10^x$. Therefore $x$ is approximately $2.5$. Logs are therefore extremely useful when solving for exponents. Note that although I have restricted my examples to log base 10 for simplicity, logs can exist in other bases. For example $\log_2 32$ (log to the base 2 of 32) is $5$ since $2^5= 32$. Other important log bases include the natural log, which is commonly used in advanced mathematics. What other applications do logs have? Logs have a variety of real-life applications such as calculating half-lives and exponential growth/decay. In fact the inverse of an exponential function is a logarithmic function!
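Use (ii) — solving for exponents — is a one-liner in most languages; a quick Python check of the numbers quoted above:

```python
import math

# solving 316 = 10**x for x
x = math.log10(316)
print(x)                        # about 2.4997 -- roughly the 2.5 quoted above
print(10 ** x)                  # recovers 316 (up to rounding)

# logs in other bases: log base 2 of 32 is 5, since 2**5 == 32
print(math.log2(32))            # 5.0

# the inverse relationship: exp and log undo each other
print(math.exp(math.log(7.0)))  # 7.0 (up to rounding)
```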
{ "language": "en", "url": "https://math.stackexchange.com/questions/16342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 5, "answer_id": 4 }
Minkowski's inequality Minkowski's inequality says the following: For every sequence of scalars $a = (a_i)$ and $b = (b_i)$, and for $1 \leq p \leq \infty$ we have: $||a+b||_{p} \leq ||a||_{p}+ ||b||_{p}$. Note that $||x||_{p} = \left(\smash{\sum\limits_{i=1}^{\infty}} |x_i|^{p}\right)^{1/p}$. This is how I tried proving it: \begin{align*} ||a+b||^{p} &= \sum |a_k+b_k|^{p}\\\ &\leq \sum(|a_k|+|b_k|)^{p}\\\ &= \sum(|a_k|+|b_k|)^{p-1}|a_k|+ \sum(|a_k|+|b_k|)^{p-1}|b_k|. \end{align*} From here, how would you proceed? I know that you need to use Hölder's inequality. So maybe we can bound both the sums on the RHS since they are products.
Hölder's Inequality would say that $$\sum |x_ky_k| \leq \left(\sum |x_k|^r\right)^{1/r}\left(\sum|y_k|^s\right)^{1/s}$$ where $\frac{1}{r}+\frac{1}{s}=1$. Apply Hölder's twice, once to each sum, using $x_k = a_k$, $y_k = (|a_k|+|b_k|)^{p-1}$ in one, and similarly in the other, with $r=p$ and $\frac{1}{s}=1-\frac{1}{p}$.
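For completeness, here is how the hint finishes the proof — a sketch assuming $1 < p < \infty$ and that $\|a\|_p$ and $\|b\|_p$ are finite:

```latex
\begin{align*}
\sum_k (|a_k|+|b_k|)^{p-1}|a_k|
  &\leq \Big(\sum_k |a_k|^p\Big)^{1/p}
        \Big(\sum_k (|a_k|+|b_k|)^{(p-1)s}\Big)^{1/s}
   = \|a\|_p \Big(\sum_k (|a_k|+|b_k|)^{p}\Big)^{1-1/p},
\end{align*}
since $(p-1)s = p$ when $\tfrac{1}{s} = 1 - \tfrac{1}{p}$. Adding the analogous
bound with $|b_k|$ in place of $|a_k|$ gives
\begin{align*}
\sum_k (|a_k|+|b_k|)^p
  \leq \big(\|a\|_p + \|b\|_p\big)\Big(\sum_k (|a_k|+|b_k|)^{p}\Big)^{1-1/p}.
\end{align*}
If the left-hand sum is $0$ the claim is trivial; otherwise divide both sides by
$\big(\sum_k (|a_k|+|b_k|)^{p}\big)^{1-1/p}$ to obtain
$\big(\sum_k (|a_k|+|b_k|)^{p}\big)^{1/p} \leq \|a\|_p + \|b\|_p$,
and $\|a+b\|_p \leq \|a\|_p + \|b\|_p$ follows from $|a_k+b_k| \leq |a_k|+|b_k|$.
```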
{ "language": "en", "url": "https://math.stackexchange.com/questions/16391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Sum of series $2^{2m}$ How to sum $2^{2m}$ where $m$ varies from $0$ to $n$?
If you really mean $n$ varies from $0$ to $n$ then the answer is $\sum _{n=0}^n 2^{2 m}=2^{2 m} (1 + n)$. Otherwise, if $m$ goes from $0$ to $n$, it's just a geometric series ( http://mathworld.wolfram.com/GeometricSeries.html ): $\sum _{m=0}^n 2^{2 m}=\frac{1}{3} \left(2^{2 n+2}-1\right)$
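The geometric-series closed form is easy to sanity-check by brute force:

```python
# brute-force check of sum_{m=0}^{n} 2^(2m) = (2^(2n+2) - 1)/3
for n in range(10):
    lhs = sum(2 ** (2 * m) for m in range(n + 1))
    rhs = (2 ** (2 * n + 2) - 1) // 3
    print(n, lhs, rhs)
    assert lhs == rhs
```

For instance $n=1$ gives $1 + 4 = 5$ on both sides, and $n=4$ gives $341 = (2^{10}-1)/3$.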
{ "language": "en", "url": "https://math.stackexchange.com/questions/16452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Do values attached to integers have implicit parentheses? Given $5x/30x^2$ I was wondering which is the correct equivalent form. According to BEDMAS this expression is equivalent to $5*\cfrac{x}{30}*x^2$ but, intuitively, I believe that it could also look like: $\cfrac{5x}{30x^2}$ I asked this question on MathOverflow (which was "Off-topic" and closed) and was told it was ambiguous. I was wondering what the convention was or if such a convention exists. According to Wikipedia the order of operations can be different based on the mnemonic used.
Not even calculator manufacturers agree on the subject of precedence. [Images of calculators evaluating the same expression with different results.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/16502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 11, "answer_id": 0 }
Why do books titled "Abstract Algebra" mostly deal with groups/rings/fields? As a computer science graduate who had only a basic course in abstract algebra, I want to study some abstract algebra in my free time. I've been looking through some books on the topic, and most seem to 'only' cover groups, rings and fields. Why is this the case? It seems to me you'd want to study simpler structures like semigroups, too. Especially looking at Wikipedia, there seems to be a huge zoo of different kinds of semigroups.
Historical inertia. A relatively small number of people were responsible for more or less deciding the modern abstract algebra curriculum around the beginning of the 20th century, and their ideas were so influential that their choice of topics is rarely questioned, for better or for worse. See, for example, Section 9.7 of Reid's Undergraduate commutative algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79", "answer_count": 8, "answer_id": 4 }
Example of a ring with $x^3=x$ for all $x$ A ring $R$ is a Boolean ring if $x^2=x$ for all $x\in R$. By Stone representation theorem a Boolean ring is isomorphic to a subring of the ring of power set of some set. My question is what is an example of a ring $R$ with $x^3=x$ for all $x\in R$ that is not a Boolean ring? (Obviously every Boolean ring satisfies this condition.)
Actually, not only would $\mathbb{Z}_3$ work, but it's the only solution that's an integral domain not of characteristic 2 (since, in such a case, $x^3-x=0\,\Rightarrow\,x\in \{0,1,-1\}$). Another solution would be $R:=\mathbb{Z}_3[\mathbb{Z}_2]$, the group ring of $\mathbb{Z}_2$ over $\mathbb{Z}_3$ (i.e., the ring of "polynomials" over $\mathbb{Z}_3$, except that exponents are in $\mathbb{Z}_2$). To see why, take an element $f(x)\in R$, $f(x)=a_0+a_1 x^{b_1} + \cdots + a_n x^{b_n}$ with $a_i\in \mathbb{Z}_3$ and $b_j \in \mathbb{Z}_2$. Since $R$ is a ring of characteristic 3, the Freshman's Dream implies that $(f(x))^3=a_0^3+a_1^3 x^{3b_1} + \cdots + a_n^3 x^{3b_n} = a_0 + a_1 x^{b_1} + \cdots + a_n x^{b_n}=f(x)$, using $a_i^3 = a_i$ in $\mathbb{Z}_3$ and $3b_j = b_j$ in $\mathbb{Z}_2$. In fact, by the same argument, if you're given any ring $T$ of characteristic 3 with $x^3=x$ for all $x\in T$, then $T[\mathbb{Z}_2]$ satisfies this property as well. I can't think of another class of examples off the top of my head, but I'd be surprised if Boolean rings and the class of examples above were the only examples of rings of this type. EDIT: Yes, the solution of modding out a free algebra by an appropriate ideal would also work nicely.
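Since $R=\mathbb{Z}_3[\mathbb{Z}_2]$ has only $9$ elements $a+b\,x$ with $a,b\in\mathbb{Z}_3$ and $x^2=1$ (exponents live in $\mathbb{Z}_2$), the claim $f^3=f$ can be brute-force checked; a sketch:

```python
from itertools import product

# elements of Z_3[Z_2] as pairs (a, b) meaning a + b*x, with x^2 = 1
def mul(u, v):
    (a, b), (c, d) = u, v
    # (a + b x)(c + d x) = (ac + bd) + (ad + bc) x, coefficients mod 3
    return ((a * c + b * d) % 3, (a * d + b * c) % 3)

for u in product(range(3), repeat=2):
    cube = mul(mul(u, u), u)
    assert cube == u           # x^3 = x holds for all 9 elements

# and the ring is not Boolean: squaring is not the identity
print(mul((2, 0), (2, 0)))     # (1, 0): 2^2 = 4 = 1 in Z_3, not 2
```

The last line exhibits a witness that $R$ fails $x^2=x$, so it really is a non-Boolean example.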
{ "language": "en", "url": "https://math.stackexchange.com/questions/16597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Continuous bijections from the open unit disc to itself - existence of fixed points I'm wondering about the following: Let $f:D \mapsto D$ be a continuous real-valued bijection from the open unit disc $\{(x,y): x^2 + y^2 <1\}$ to itself. Does f necessarily have a fixed point? I am aware that without the bijective property, it is not necessarily true - indeed, I have constructed a counterexample without any trouble. However, I suspect with bijectivity it may be the case. I'm aware of the Brouwer Fixed Point Theorem and I imagine these two are intricately linked. However, i'm not certain where the bijectivity comes in - I believe we can argue something along the lines f now necessarily maps boundary to boundary - something about how if $x^2+y^2 \to 1$, $\|f(x,y)\| \to 1$ maybe. However, how does this help? Even if we could definitely define a limit to f(x,y) along the whole boundary and apply Brouwer, we can't guarantee the fixed points aren't all on the boundary anyway. Conversely however, I still can't construct a counterexample. Could anyone help me finish this off please? Thanks!
Let $D$ be open disk with center $0$ of radius $1$ in complex plane. Consider holomorphic automorphism of $D$ given by formula $z \mapsto \frac{z+a}{1+\bar a z}$ with $a\in D$. Does it have fixed point if $a\neq 0$ ? [You must solve $z=\frac{z+a}{1+\bar a z}$] You can also biholomorphically send $D$ to right half plane $H$ of $\mathbb C$ by $z \mapsto \frac{1+z}{1-z}$ and then apply Chris Eagle comment to $H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What is $\frac{d}{dx}\left(\frac{dx}{dt}\right)$? This question was inspired by the Lagrange equation, $\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0$. What happens if the partial derivatives are replaced by total derivatives, leading to a situation where a function's derivative with respect to one variable is differentiated by the original function?
Alternatively, the chain rule $\frac{dy}{dx} = \frac{dy}{dt}\frac{dt}{dx}$ gives $$\frac{d}{dx}(\frac{dx}{dt}) = \frac{d}{dt}(\frac{dx}{dt})\frac{dt}{dx} = \frac{d^2x}{dt^2}\frac{dt}{dx}.$$ Of course, this is the same as Ross' answer above. That is, we have the identity $$-\left(\frac{dx}{dt}\right)^2\frac{d^2t}{dx^2} = \frac{d^2x}{dt^2}\frac{dt}{dx},$$ which follows from differentiating the equation $\frac{dx}{dt}\frac{dt}{dx} = 1$ with respect to $t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
An approximation of an integral Is there any good way to approximate following integral? $$\int_0^{0.5}\frac{x^2}{\sqrt{2\pi}\sigma}\cdot \exp\left(-\frac{(x^2-\mu)^2}{2\sigma^2}\right)\mathrm dx$$ $\mu$ is between $0$ and $0.25$, the problem is in $\sigma$ which is always positive, but it can be arbitrarily small. I was trying to expand it using Taylor series, but terms looks more or less this $\pm a_n\cdot\frac{x^{2n+3}}{\sigma^{2n}}$ and that can be arbitrarily large, so the error is significant.
A standard way to get a good approximation for integrals that "look" Gaussian is to evaluate the Taylor series of the logarithms of their integrands through second order, expanding around the point of maximum value thus (continuing with @Ross Millikan's substitution): $$\eqalign{ &\log\left(\sqrt{y}\cdot \exp\left(-\frac{(y-\mu )^2}{2\sigma ^2}\right)\right) \cr = &\frac{-\mu ^2-\sigma ^2+\mu \sqrt{\mu ^2+2 \sigma ^2}+2 \sigma ^2 \log\left[\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right]}{4 \sigma ^2} \cr + &\left(-\frac{1}{2 \sigma ^2}-\frac{1}{\left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)^2}\right) \left(y-\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right)^2 \cr + &O\left[y-\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right]^3 \cr \equiv &\log(C) - (y - \nu)^2/(2\tau^2)\text{,} }$$ say, with the parameters $C$, $\nu$, and $\tau$ depending on $\mu$ and $\sigma$ as you can see. The resulting integral now is a Gaussian, which can be computed (or approximated or looked up) in the usual ways. The approximation is superb for small $\sigma$ or large $\mu$ and still ok otherwise. The plot shows the original integrand in red (dashed), this approximation in blue, and the simpler approximation afforded by replacing $\sqrt{y} \to \sqrt{\mu}$ in gold for $\sigma = \mu = 1/20$. (Added) Mathematica tells us the integral, when taken to $\infty$, can be expressed as a linear combination of modified Bessel Functions $I_\nu$ of orders $\nu = -1/4, 1/4, 3/4, 5/4$ with common argument $\mu^2/(4 \sigma^2)$. From the Taylor expansion we can see that when both $\mu$ and $\sigma$ are small w.r.t. $1/2$--specifically, $(1/4-\mu)/\sigma \gg 3$, the error made by including the entire right tail will be very small. (With a little algebra and some simple estimates we can even get good explicit bounds on the error as a function of $\mu$ and $\sigma$.) There are many ways to compute or approximate Bessel functions, including polynomial approximations. 
From looking at graphs of the integrand, it appears that the cases where the Bessel function approximation works extremely well more or less complement the cases where the preceding "saddlepoint approximation" works extremely well.
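If it helps to see the two routes agree numerically, here is a rough Python sketch comparing a brute-force trapezoid evaluation of the original integral with the Gaussian ("saddlepoint") approximation described above. It uses $\log C = \tfrac12\log\nu - (\nu-\mu)^2/(2\sigma^2)$, which is the same constant as in the expansion, just in a shorter form; the parameter values are arbitrary:

```python
import math

def integrand(x, mu, sigma):
    return x * x / (math.sqrt(2 * math.pi) * sigma) * \
        math.exp(-(x * x - mu) ** 2 / (2 * sigma ** 2))

def numeric(mu, sigma, n=20_000):
    # plain trapezoid rule on [0, 0.5] as a reference value
    h = 0.5 / n
    s = 0.5 * (integrand(0.0, mu, sigma) + integrand(0.5, mu, sigma))
    s += sum(integrand(i * h, mu, sigma) for i in range(1, n))
    return s * h

def saddlepoint(mu, sigma):
    # substitute y = x^2 (so the integral becomes
    # (1 / (2 sqrt(2 pi) sigma)) * int_0^{0.25} sqrt(y) e^{-(y-mu)^2/2s^2} dy),
    # then replace sqrt(y) e^{-(y-mu)^2/2s^2} by the matched Gaussian
    # C e^{-(y-nu)^2/2 tau^2} from the expansion above
    nu = 0.5 * (mu + math.sqrt(mu * mu + 2 * sigma ** 2))
    tau = 1 / math.sqrt(1 / sigma ** 2 + 1 / (2 * nu * nu))
    logC = 0.5 * math.log(nu) - (nu - mu) ** 2 / (2 * sigma ** 2)
    gauss = tau * math.sqrt(math.pi / 2) * (
        math.erf((0.25 - nu) / (tau * math.sqrt(2)))
        - math.erf((0.0 - nu) / (tau * math.sqrt(2))))
    return math.exp(logC) * gauss / (2 * math.sqrt(2 * math.pi) * sigma)

mu, sigma = 0.1, 0.02
print(numeric(mu, sigma), saddlepoint(mu, sigma))  # close agreement for small sigma
```

For this choice of parameters the two values agree to well under a percent, consistent with the claim that the approximation is superb for small $\sigma$.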
{ "language": "en", "url": "https://math.stackexchange.com/questions/16797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Got to learn matlab I have this circuits and signals course where i have been asked to learn matlab all by myself and that its treated as a basic necessity now i wanted some help as to where should i start from as i hardly have around less than a month before my practicals session start should i go with video lectures/e books or which one should i prefer over the other?
I am also learning MATLAB, and I am reading this guide; it may be useful for you. The guide has been very useful for me. Link: http://www.phy.ohiou.edu/computer/matlab/techdoc/pdfdocs/getstart.pdf I look forward to the other answers. :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/16907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Proving there is no natural number which is both even and odd I've run into a small problem while working through Enderton's Elements of Set Theory. I'm doing the following problem: Call a natural number even if it has the form $2\cdot m$ for some $m$. Call it odd if it has the form $(2\cdot p)+1$ for some $p$. Show that each natural number is either even or odd, but never both. I've shown most of this, and along the way I've derived many of the results found in Arturo Magidin's great post on addition, so any of the theorems there may be used. It is the 'never both' part with which I'm having trouble. This is some of what I have: Let $$ B=\{n\in\omega\ |\neg(\exists m(n=2\cdot m)\wedge\exists p(n=2\cdot p+1))\}, $$ the set of all natural numbers that are not both even and odd. Since $2\cdot 0=0$, $0$ is even. Also $0$ is not odd, for if $0=2\cdot p+1$, then $0=(2\cdot p)^+=\sigma(2\cdot p)$, but then $0\in\text{ran}\ \sigma$, contrary to the first Peano postulate. Hence $0\in B$. Suppose $k\in B$. Suppose $k$ is odd but not even, so $k=2\cdot p+1$ for some $p$. Earlier work of mine shows that $k^+$ is even. However, $k^+$ is not odd, for if $k^+=2\cdot m+1$ for some $m$, then since the successor function $\sigma$ is injective, we have $$ k^+=2\cdot m+1=(2\cdot m)^+\implies k=2\cdot m $$ contrary to the fact that $k$ is not even. Now suppose $k$ is even, but not odd. I have been able to show that $k^+$ is odd, but I can't figure out a way to show that $k^+$ is not even. I suppose it must be simple, but I'm just not seeing it. Could someone explain this little part? Thank you.
Suppose there exists some $n \in \mathbb{N}$ which is both even and odd. Then $n= 2m = 2p+1$. So $2m = 2p+1$ or $2(m-p) = 1$. Contradiction.
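The dichotomy (exactly one of the two forms holds) is easy to check mechanically on an initial segment of $\mathbb{N}$, directly from the defining forms in the exercise:

```python
# exhaustive sanity check of "even xor odd" on an initial segment of N;
# searching witnesses up to n suffices, since n = 2m forces m <= n
def is_even(n):
    return any(n == 2 * m for m in range(n + 1))

def is_odd(n):
    return any(n == 2 * p + 1 for p in range(n + 1))

for n in range(500):
    assert is_even(n) != is_odd(n)   # exactly one of the two forms
print("even/odd dichotomy holds up to 500")
```

Of course this checks only finitely many cases — the induction in the question (or the parity contradiction in the answer) is what settles all of $\mathbb{N}$.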
{ "language": "en", "url": "https://math.stackexchange.com/questions/16947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
What is the P versus NP question asking? Is the P versus NP question asking "P = NP" or "ZFC |- P = NP" (or "|- P = NP" for that matter)? Because if I say P = NP, then I will be asked to prove it. But if the goal is "ZFC |- P = NP" then the result will not be useful because of the set theoretic assumptions of ZFC. So you may say the third choice matches our intuition, but it's not a (complete) question. If P = NP, then we are asked to prove |- P = NP and if P != NP we are not asked ~(|- P = NP) but |- P != NP. So what is the P versus NP question asking? Actually this question can be asked for any question, it's not about P versus NP alone.
For the $1 million P vs NP prize, the Clay Mathematics Institute problem description does not even talk about proofs explicitly: Problem Statement. Does P = NP? http://www.claymath.org/millennium/P_vs_NP/pvsnp.pdf A proof is any completely convincing argument; set-theoretic foundations are just one tool that helps us study mathematical objects.
{ "language": "en", "url": "https://math.stackexchange.com/questions/16979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is complete induction, by example? $4(9^n) + 3(2^n)$ is divisible by 7 for all $n>0$ So, I've been revising for an exam and I came up against the question "prove $4(9^n) + 3(2^n)$ is divisible by 7 for all $n>0$". Now, I know how to do this. If I assume the statement holds for $n=k$, then I need to show that it holds for $n=k+1$. The easiest way to do this is to write $n_k$ for the value of the expression at $n=k$; then if $7|n_{k+1}-n_{k}$ and $7|n_{k}$, it follows that $7|n_{k+1}$. So, without further ado, $4(9^k)9 - 4(9^k) + 3(2^k)2 - 3(2^k) = 8\cdot4(9^k) + 3\cdot2^k = 8(4(9^k) + 3(2^k)) - 7\cdot 3(2^k)$. As required. Now clearly for $n=0$ this expression is $7$, so it is divisible for all $n\geq 0$. My question is, how would I go about proving this via complete induction? I asked because "proof by strong induction also accepted" was mentioned in the mark scheme. Now according to Wikipedia, my first assumption is that not only is $n=k$ true but so is $n=k-1$ and so on down to $n=0$. How do I improve my expression of that and how do I go from there to show a proof using this technique? Edit: The build up to the question is on the topic of induction, so that's why I proved it that way, but Yuval Filmus has pointed out that if we are simply asked to prove it, the fact that $9 \equiv 2 \pmod{7}$ means the proof is trivial.
One way you can use complete induction is to notice that $$9^{n+1} - 1 = 8(1 + 9 + 9^2 + \dots + 9^n)$$ and $$2^{n+1} - 1 = 1 + 2 + 2^2 + \dots + 2^n$$ Multiply the first by $\displaystyle 4$ and the second by $\displaystyle 3$ and add them up. Now, notice that $\displaystyle 32(9^k) + 3(2^k) = 28(9^k) + 4(9^k) + 3(2^k)$ If you assume $\displaystyle 4(9^k) + 3(2^k)$ is divisible by $\displaystyle 7$ for $k=0, 1, 2, \dots, n$, then the above shows that it is also true for $\displaystyle n+1$.
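Both the divisibility claim and the two series identities used above are easy to spot-check numerically:

```python
# check 7 | 4*9^n + 3*2^n for many n, plus the two identities used above
for n in range(200):
    assert (4 * 9 ** n + 3 * 2 ** n) % 7 == 0
    assert 9 ** (n + 1) - 1 == 8 * sum(9 ** i for i in range(n + 1))
    assert 2 ** (n + 1) - 1 == sum(2 ** i for i in range(n + 1))
print("all checks pass")
```

This is not a proof, of course — it only confirms that the statements the induction rests on hold for the first two hundred values.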
{ "language": "en", "url": "https://math.stackexchange.com/questions/17041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Different ways to represent functions other than Laurent and Fourier series? In the book "A Course of Modern Analysis", examples of expanding functions in terms of inverse factorials were given. I am not sure what subject that would come under in today's math, but besides the following: power series (Taylor series, Laurent series), expansions in terms of theta functions, expanding a function in terms of another function (powers of, inverse factorials, etc.), Fourier series, infinite products (complex analysis) and partial fractions (Eisenstein series), what other ways of representing functions have been studied? Is there a comprehensive list of representations of functions and the motivation behind each method? For example, power series are relatively easy to work with and establish the domain of convergence, e.g. for $ \sin , e^x \text { etc.}$, but the infinite product representation makes it trivial to see all the zeroes of $\sin, \cos \text{ etc.}$ Also, if anyone can point out the subject that they are studied under, that would be great. Thank you
There are literally dozens of ways to represent "arbitrary" (given or unknown) functions $f$ in terms of "special" functions. Each of these ways responds to the particular geometrical situation at hand, to ways of encoding the available information about $f$, to a-priori-conditions that $f$ must fulfill, etc. The special functions used in each particular case are "taken from a catalogue", they are well understood and usually have an algebraically describable behavior with respect to the natural operations (differentiation, shifts, rotations, etc.) present in the given environment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Torsion module over PID Suppose $p$ is irreducible and $M$ is a torsion module over a PID $R$ that can be written as a direct sum of cyclic submodules with annihilators of the form $p^{a_1}, \ldots, p^{a_s}$, where $p^{a_i} \mid p^{a_{i+1}}$. Let now $N$ be a submodule of $M$. How can I prove that $N$ can be written as a direct sum of cyclic modules with annihilators of the form $p^{b_1}, \ldots, p^{b_t}$, $t \leq s$, with $p^{b_i} \mid p^{a_{s-t+i}}$? I've already shown that $t\leq s$ by considering the epimorphism from a free module to $M$ and from its submodule to $N$.
Here's a proof: The result can be shown by strong induction on $\sum_{i=1}^{s}{a_i}$. Suppose that the result is true whenever the sum above is at most $k$. Suppose now that the sum is equal to $k+1$. Then for the induction step we use the invariant factors of $pM$, for which the sum is at most $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Apparent inconsistency between integral table and integration using trigonometric identity According to my textbook: $$\int_{-L}^{L} \cos\frac{n \pi x}{L} \cos\frac{m \pi x}{L} dx = \begin{cases} 0 & \mbox{if } n \neq m \\ L & \mbox{if } n = m \neq 0 \\ 2L& \mbox{if } n = m = 0 \end{cases} $$ According to the trig identity given on this cheat sheet: $$ \cos{\alpha}\cos{\beta} = \frac{1}{2}\left [ \cos \left (\alpha -\beta \right ) + \cos \left(\alpha +\beta \right ) \right ] $$ Substituting this trig identity in and integrating from $-L \mbox{ to } L$ gives: $$\int_{-L}^{L} \cos\frac{n \pi x}{L} \cos\frac{m \pi x}{L} dx = \frac{L}{\pi} \left [\frac{\sin \left ( \pi (n - m) \right )}{n - m} + \frac{\sin \left ( \pi (n+m) \right )}{n + m}\right ] $$ Evaluating the right side at $n = m$ gives a zero denominator, making the whole expression undefined. Evaluating the right hand side at $n \neq m$ gives $0$ because the sine function is always $0$ for all integer multiples of $\pi$ as can be clearly seen with the unit circle. None of these results jive with the first equation. Could you explain what mistakes I am making with my thinking?
What Arturo says is correct: the answer you get is not quite right on the diagonal ($m=n$) and the anti-diagonal ($m=-n$). In a certain sense, though, it is almost right. What do I mean? Take the limit: $$\lim_{n-m\rightarrow 0} \frac{L}{\pi} \left[\frac{\sin \left( \pi (n - m) \right)}{n - m} + \frac{\sin \left( \pi (n+m) \right)}{n + m}\right] = \frac{L}{\pi}\left[\pi+\frac{\sin(2\pi n)}{2n}\right].$$ Now if $n$ is a nonzero integer, you recover the answer $L$. If $n=0$, then taking the limit of the new expression as $n$ approaches zero gives the answer $2L$. Now a technical note: I took this limit in a particular way. To make it well defined, one must show that the two-dimensional limit $$\lim_{(x,y)\rightarrow (n,n)}\frac{\sin(x-y)}{x-y}=1$$ holds. Also note that the case where $m=-n$ is similar.
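As a sanity check on the textbook's table, the integral can also be evaluated numerically for a few integer pairs $(n,m)$. A minimal Python sketch (composite Simpson's rule; the function name is mine):

```python
import math

def inner(n, m, L=1.0, N=2000):
    """Simpson's rule for the integral of cos(n*pi*x/L)*cos(m*pi*x/L) over [-L, L]."""
    f = lambda x: math.cos(n * math.pi * x / L) * math.cos(m * math.pi * x / L)
    h = 2 * L / N
    s = f(-L) + f(L)
    for i in range(1, N):
        s += (4 if i % 2 else 2) * f(-L + i * h)  # Simpson weights: 4 odd, 2 even
    return s * h / 3

# reproduces the three cases of the table (with L = 1):
# inner(2, 3) ~ 0, inner(2, 2) ~ L, inner(0, 0) ~ 2L
```

This confirms the piecewise table directly, side-stepping the closed form that is undefined at $n=m$.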
{ "language": "en", "url": "https://math.stackexchange.com/questions/17166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Does this probability distribution have a name? We have a set of numbers of size $m$. We are going to pick $a$ numbers from that set uniformly at random, with replacement. Let $X$ be the random variable giving the number of distinct values among those picks. Motivation: I need to calculate this probability in order to calculate a more advanced distribution regarding Bloom filters, in particular the distribution of the number of bits set to 1 in a Bloom filter. Leaving that aside, I am having trouble formulating the PMF of $X$. I've tried to look for a multivariate binomial distribution, but I couldn't relate it to what I want to do. The question is whether there is such a probability distribution in the literature, and if not, how can I approach this problem? Thanks. Update: I have managed to make a formulation: the probability that we pick $x$ distinct values is $$ \frac{1}{m} \frac{1}{m-1} \cdots \frac{1}{m-x+1} $$ And the probability of making the rest of our $a-x$ picks within that set of $x$ values is $$ \left(\frac{x}{m}\right)^{a-x} $$ Finally, the number of such configurations is $\binom{m}{x}$. Multiplying all that together and simplifying gives us a PMF $$ P(X=x;a,m) = \frac{ \left( \frac{m}{x} \right) ^{x-a}}{x!} $$ Does that seem to make any sense?
Look up multiset coefficients in Wikipedia's Multiset article. The way you are arguing, your calculation would actually give $\frac{m!}{x!}\left(\frac{x}{m}\right)^{a-x}\binom{m}{x}$, but the approach is not correct and this will not sum to $1$. You are not counting the correct number of ways to get $x$ different items.
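For what it's worth, a corrected count can be checked empirically. The sketch below assumes (my own addition, not something this answer spells out) that the exact PMF is $\binom{m}{x}$ times the number of surjections from the $a$ draws onto the chosen $x$ values, counted by inclusion-exclusion, divided by the $m^a$ equally likely outcomes; a quick Monte Carlo run agrees with it:

```python
import random
from math import comb

def exact_pmf(x, a, m):
    """P(exactly x distinct values among a uniform draws from m, with replacement).

    Assumed formula: C(m, x) * #surjections(a -> x) / m**a, where the surjection
    count comes from inclusion-exclusion over the values that are missed.
    """
    surjections = sum((-1) ** j * comb(x, j) * (x - j) ** a for j in range(x + 1))
    return comb(m, x) * surjections / m ** a

def simulate_pmf(x, a, m, trials=200_000, seed=0):
    """Empirical estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(len({rng.randrange(m) for _ in range(a)}) == x for _ in range(trials))
    return hits / trials

# e.g. m = 6, a = 4: the exact probabilities over x = 1..4 sum to 1,
# and each one matches the simulated frequency
```

This is only a numerical cross-check, not a derivation; the point is that whatever closed form one settles on should survive such a test, and the asker's proposed PMF does not.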
{ "language": "en", "url": "https://math.stackexchange.com/questions/17241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
If $4^x + 4^{-x} = 34$, then $2^x + 2^{-x}$ is equal to...? I am having trouble with this: if $4^x + 4^{-x} = 34$, then what is $2^x + 2^{-x}$ equal to? I managed to find $4^x$: $$4^x = 17 \pm 12\sqrt{2}$$ so that means that $2^x$ is: $$2^x = \pm \sqrt{17 \pm 12\sqrt{2}}.$$ The correct answer is $6$ and I am not getting it :(. What am I doing wrong?
Let $a=2^x$. Then $2^{-x}=1/a$ and $$\left(a+\frac1a\right)^2=a^2+2+1/a^2=2+(2^x)^2+(2^{-x})^2=2+4^x+4^{-x}=2+34=36.$$ This means that $a+1/a=6$ (a priori it could be $6$ or $-6$, since these are the square roots of $36$, but since $a$ and $1/a$ are positive, it is $6$).
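A quick numerical sanity check of this, using the value $4^x = 17 + 12\sqrt{2}$ found in the question:

```python
import math

a_sq = 17 + 12 * math.sqrt(2)   # the value of 4^x from the question
a = math.sqrt(a_sq)             # so a = 2^x

# a^2 + 1/a^2 recovers 34, and a + 1/a is 6, as the squaring trick predicts
print(a_sq + 1 / a_sq, a + 1 / a)
```

(The other root $17 - 12\sqrt{2}$ gives the same sum, since it is the reciprocal of this one.)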
{ "language": "en", "url": "https://math.stackexchange.com/questions/17291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Angle for pointing at a certain point in 2d space Recently, I have been programming a simple game. Very simple: there is a tank, and the cannon will aim at whatever position the mouse is at. Now let's talk about the cannon graphic. The cannon graphic points to the north at zero rotation. Here are the variables I have for my game that might be important for solving my problem: Tx = The tank's X position in the world. Ty = The tank's Y position in the world. Mx = The mouse's X position in the world. My = The mouse's Y position in the world. Also, in this programming language, the greater the Y coordinate, the lower you are, and the smaller the Y coordinate, the higher you are; so Y = 0 is the top. My problem is: how do I calculate the rotation needed for my cannon graphic to "point" at the mouse's position? Thank you for your time.
Suppose this is the situation: $\displaystyle D_x$ is the difference of the $\displaystyle x$-coordinates and $\displaystyle D_y$ is the difference of the $\displaystyle y$-coordinates. Then angle $\displaystyle w$ is given by $\displaystyle \tan w = \frac{D_y}{D_x}$ and thus $\displaystyle w = \arctan (\frac{D_y}{D_x})$. The angle you will need to rotate would then be anti-clockwise $\displaystyle \frac{\pi}{2} + w$, if the tank is pointing "up". Note: the above assumes that $\displaystyle w$ is acute. I will leave it to you to work out the other cases (for different tank and mouse positions) and come up with a general formula. I would suggest reading up on atan or (to avoid a trap involving division by $0$) atan2. Most likely the Math package of your programming language will have both. Hope that helps.
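In code, atan2 handles all of those quadrant cases (and the division-by-zero trap) at once. A sketch, assuming screen coordinates with Y growing downward as the question describes, a sprite that points up at zero rotation, and a clockwise-positive rotation convention; the function name is mine, and you may need to flip signs for your particular framework:

```python
import math

def cannon_rotation(tx, ty, mx, my):
    """Clockwise rotation in radians from 'pointing up' to 'pointing at the mouse'."""
    dx = mx - tx
    dy = my - ty          # positive dy means the mouse is below the tank on screen
    # atan2(dx, -dy): 0 when the mouse is straight above, pi/2 to the right, pi below
    return math.atan2(dx, -dy)
```

Unlike the arctan formula, no separate case analysis is needed: atan2 looks at the signs of both arguments.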
{ "language": "en", "url": "https://math.stackexchange.com/questions/17356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are all homogeneous systems consistent? A linear system of the form $A\vec{x}=\vec{0}$ is called homogeneous. Why are all homogeneous systems consistent?
A system is defined as inconsistent if its row-reduced echelon form contains a row of the form $\begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & | & k \end{bmatrix}$ where $k \neq 0$ and | is the separator within the augmented matrix. Since your system equals $\vec{0}$, it is impossible to have $k \neq 0$, rendering the system consistent.
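This can be seen mechanically: row operations only take linear combinations of rows, so an all-zero augmented column stays all-zero no matter how the reduction goes. A small self-contained sketch (plain Gaussian elimination over floats; the helper names are mine), using a deliberately singular coefficient matrix:

```python
def row_reduce(M):
    """Reduce a list-of-lists matrix to reduced row echelon form (floats)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):                     # never pivot on the augmented column
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]               # swap pivot row into place
        M[r] = [v / M[r][c] for v in M[r]]        # scale pivot to 1
        for i in range(rows):
            if i != r and abs(M[i][c]) > 1e-12:   # clear the rest of the column
                f = M[i][c]
                M[i] = [v - f * p for v, p in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],   # a dependent row, so the coefficient matrix is singular
     [1.0, 0.0, 1.0]]
aug = [row + [0.0] for row in A]                  # homogeneous right-hand side
R = row_reduce(aug)
# no row of R ever looks like [0 0 0 | k] with k != 0: the last column is still zero
```

(Equivalently, $\vec{x}=\vec{0}$ is always a solution, so consistency never fails.)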
{ "language": "en", "url": "https://math.stackexchange.com/questions/17408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Creating a parametrized ellipse at an angle I'm creating a computer program where I need to calculate a parametrization of an ellipse, like this: x = r1 * cos(t), y = r2 * sin(t) Now, say I want this parametrized ellipse to be tilted at an arbitrary angle. How do I go about this? Any obvious simplifications for e.g. 30, 45 or 60 degrees?
If you rotate $(x=r_1\cos t,y=r_2\sin t)$ by $\theta$ about $(0,0)$, the resulting curve is given by $(x'=r_1\cos t\cos\theta-r_2\sin t\sin\theta, y'=r_1\cos t\sin\theta+r_2\sin t\cos\theta)$. (Using the fact that complex multiplication by $e^{i\theta}$ rotates by $\theta$ about $0$, the point $(x,y)=x+yi$ is mapped to $(x+iy)(\cos\theta+i\sin\theta)=$ $x\cos\theta-y\sin\theta+i(x\sin\theta+y\cos\theta)=$ $(x\cos\theta-y\sin\theta,x\sin\theta+y\cos\theta)$.)
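The rotated parametrization drops straight into code (a minimal Python sketch; the function name is mine). For the special angles in the question you can hard-code $\cos 45^\circ = \sin 45^\circ = \frac{\sqrt2}{2}$, $\cos 60^\circ = \sin 30^\circ = \frac12$, and $\cos 30^\circ = \sin 60^\circ = \frac{\sqrt3}{2}$:

```python
import math

def rotated_ellipse_point(r1, r2, theta, t):
    """Point at parameter t on an ellipse with semi-axes r1, r2, rotated by theta."""
    x = r1 * math.cos(t)                          # unrotated ellipse
    y = r2 * math.sin(t)
    return (x * math.cos(theta) - y * math.sin(theta),   # apply the rotation
            x * math.sin(theta) + y * math.cos(theta))

# t = 0 gives the end of the r1 semi-axis; rotating by 90 degrees moves it
# from the x-axis onto the y-axis
```

In a tight loop you would precompute cos(theta) and sin(theta) once, since theta is fixed while t varies.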
{ "language": "en", "url": "https://math.stackexchange.com/questions/17465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Are there rules of Logic dealing with the implies operator? I'm just beginning a course in discrete mathematics and I'm learning a few of the basic laws used to prove propositions. I understand how to work with propositions that use the logical connectives AND, OR, NOT. However, I'm not sure how to prove a proposition that involves the implies operator. For example, prove the following is a tautology: $(p\land (p\implies q))\implies q$
There are two possible answers. One, your language doesn't really contain the $\implies$ operator, and it is thought of as a shorthand (as in user3123's answer): $$a \implies b \text{ is a shorthand for } \lnot a \lor b.$$ Two, your language does include it, and then you have some axioms expressing its "meaning". For example, you could have a system with only $\lnot$ and $\implies$, having the following axioms $$ A \implies (B \implies A) $$ $$ (A \implies B) \implies ((B \implies C) \implies (A \implies C)) $$ $$ (A \implies B) \implies (\lnot B \implies \lnot A) $$ and Modus Ponens as the only inference rule: given $A$ and $A \implies B$, deduce $B$. This system is complete (can prove every true proposition). The other connectives are then shorthands: $$ A \lor B \text{ stands for } \lnot A \implies B $$ $$ A \land B \text{ stands for } \lnot(\lnot A \lor \lnot B) $$
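Either way, for a propositional formula you can also simply check every truth assignment by brute force. A quick sketch using the shorthand $a \implies b \equiv \lnot a \lor b$:

```python
from itertools import product

def implies(a, b):
    # a => b is shorthand for (not a) or b
    return (not a) or b

# (p and (p => q)) => q holds under all four assignments, so it is a tautology
is_tautology = all(implies(p and implies(p, q), q)
                   for p, q in product([False, True], repeat=2))

# by contrast, q => p fails when p is false and q is true
converse_holds = all(implies(q, p) for p, q in product([False, True], repeat=2))
```

This is exactly the truth-table method; the axiomatic systems above matter when you want a syntactic derivation rather than a semantic check.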
{ "language": "en", "url": "https://math.stackexchange.com/questions/17522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Going back from a correlation matrix to the original matrix I have N sensors which are sampled M times, so I have an N by M readout matrix. If I want to know the relations and dependencies among these sensors, the simplest thing is to compute Pearson's correlation, which gives me an N by N correlation matrix. Now suppose I have the correlation matrix and want to explore the possible readout space that can lead to such a correlation matrix; what can I do? So the question is: given an N by N correlation matrix, how can you get a matrix that would have such a correlation matrix? Any comment is appreciated.
May I tell you guys my silly solution to this problem? Only if you won't laugh at me! I thought: I need to explore this space, so I want to randomly sample it (the space of sensor readings). I know what the correlation matrix should look like, and I know that the sensor readings come from a Gaussian distribution. So I generated a random N by M matrix and started tweaking the values in small steps, checking whether each change moves the correlation matrix toward the target or away from it, and keeping the changes that move it toward the target. Concretely: I choose a random cell in the matrix, increase it by 10%, calculate the correlation matrix, and compare it to the target. If the difference is smaller than it was before the 10% increase, I keep the change; then I move to the next randomly selected cell, and continue until I get close enough to the target correlation matrix. This method, although silly, works well and I can get different samples of the sensor reading space. What do you guys think?! In practice I am working on rather large matrices, like N = 8000, M = 1000.
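For the record, the procedure described above can be sketched in a few lines. This uses a tiny N and M so it runs instantly, tweaks multiplicatively in both directions, and all names are mine; a real implementation at N = 8000 would want numpy and incremental correlation updates rather than recomputing the whole matrix each step:

```python
import math, random

def corr_matrix(X):
    """Pearson correlation between the rows of X (equal-length lists)."""
    cent = []
    for r in X:
        mean = sum(r) / len(r)
        cent.append([v - mean for v in r])
    norm = [math.sqrt(sum(v * v for v in r)) for r in cent]
    n = len(X)
    return [[sum(a * b for a, b in zip(cent[i], cent[j])) / (norm[i] * norm[j])
             for j in range(n)] for i in range(n)]

def sq_dist(A, B):
    """Squared entrywise distance between two matrices."""
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

random.seed(0)
target = [[1.0, 0.5, 0.2],
          [0.5, 1.0, 0.3],
          [0.2, 0.3, 1.0]]
X = [[random.gauss(0, 1) for _ in range(50)] for _ in range(3)]   # random start
err0 = err = sq_dist(corr_matrix(X), target)
for _ in range(5000):
    i, j = random.randrange(3), random.randrange(50)
    old = X[i][j]
    X[i][j] *= random.choice([1.10, 0.90])        # tweak one cell by ~10%
    trial = sq_dist(corr_matrix(X), target)
    if trial < err:
        err = trial                                # keep improving changes
    else:
        X[i][j] = old                              # revert the rest
```

Different random seeds give different readout matrices with (approximately) the same correlation matrix, which is exactly the sampling of the space the question asks about.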
{ "language": "en", "url": "https://math.stackexchange.com/questions/17575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
How do you integrate a Bessel function? I don't want to memorize answers or use a computer; is this possible? I am attempting to integrate a Bessel function of the first kind multiplied by a linear term: $\int xJ_n(x)\mathrm dx$ The textbooks I have open in front of me (Boas, Arfken, various Schaum's) are not useful for this problem. I would like to do this by hand. Is it possible? I have had no luck with expanding out $J_n(x)$ and integrating term by term, as I cannot collect the terms into something nice at the end. If it is possible and I just need to try harder (i.e. use other methods, or leave it alone for a few days and come back to it), that is useful information. Thanks to anyone with a clue.
I would do it numerically. For $x$ not too large, you could also use the power series expansion, but this will run into convergence issues as $x$ grows. Numerical integration can be done to high accuracy without too much computation using Gaussian quadrature. Good luck looking for an analytic solution (although maybe one exists?). Possibly you can use Bessel's equation, and by substituting your integral you can derive a new differential equation, but I think you'd be very lucky if this allowed an analytic solution. A numerical solution, however, should be straightforward (at least for fixed $n$).
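Both suggestions (power series for moderate $x$, plus a simple quadrature rule) can be combined in a short Python sketch. As a sanity check it uses the classical identity $\frac{d}{dx}\left[x\,J_1(x)\right] = x\,J_0(x)$, so that $\int_0^b x\,J_0(x)\,dx = b\,J_1(b)$; that identity is my own addition here, and for general $n$ I am not aware of a comparably simple antiderivative, which is why the numerical route is the practical one:

```python
import math

def bessel_j(n, x, terms=30):
    """Power series J_n(x) = sum_k (-1)^k / (k! (k+n)!) * (x/2)^(2k+n)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

def simpson(f, a, b, n=200):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# integrate x * J_0(x) from 0 to 2 and compare with the known value 2 * J_1(2)
numeric = simpson(lambda x: x * bessel_j(0, x), 0.0, 2.0)
exact = 2.0 * bessel_j(1, 2.0)
```

(Simpson's rule is used instead of Gaussian quadrature only to keep the sketch dependency-free; for smooth integrands like this either converges rapidly.)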
{ "language": "en", "url": "https://math.stackexchange.com/questions/17634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Why a complete graph has $\frac{n(n-1)}{2}$ edges? I'm studying graphs in algorithms and complexity (but I'm not very good at math). As in the title: why does a complete graph have $\frac{n(n-1)}{2}$ edges? And how is this related to combinatorics?
$\frac{n(n-1)}{2}$ comes from a simple counting argument. You could directly say that every edge is obtained by asking the question "how many pairs of vertices can I choose?", and this choice of vertices gives $C(n,2) = \frac{n(n-1)}{2}$. Or you could count the other way: label the vertices $1,2, \ldots ,n$. The first vertex is joined to $n-1$ other vertices. The second vertex has already been joined to vertex $1$ and hence has to be joined to the remaining $n-2$ vertices, and in general the $k^{th}$ vertex has already been joined to the previous $k-1$ vertices and hence has to be joined to the remaining $n-k$ vertices. So the total number of edges is $(n-1) + (n-2) + \ldots + 2 + 1 = \frac{n(n-1)}{2}$.
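The connection to combinatorics is direct enough to demonstrate in a couple of lines of Python: the edges of $K_n$ are exactly the 2-element subsets of the vertex set:

```python
from itertools import combinations

def complete_graph_edges(n):
    """Edges of K_n: one edge per unordered pair of distinct vertices."""
    return list(combinations(range(n), 2))

# len(complete_graph_edges(n)) equals n * (n - 1) // 2 for every n
```

Enumerating the pairs in order also reproduces the second argument: vertex $k$ appears first in exactly $n-k$ of the pairs.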
{ "language": "en", "url": "https://math.stackexchange.com/questions/17747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 4, "answer_id": 3 }