What does $\ll$ mean? I saw two less-than signs on this Wikipedia article and I was wondering what they meant mathematically. http://en.wikipedia.org/wiki/German_tank_problem EDIT: It looks like this site can use TeX commands. So I think this is the symbol: $\ll$
In the occurrence of "$\ll$" you are asking about, it means "much less than". If you look at the fourth entry here, this is the first meaning listed for $\ll$. As Charles has correctly pointed out, this symbol is also used in advanced mathematics to describe a certain relationship in the growth of two functions. That is the second meaning listed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 3 }
how to find the value of $\log_3 7$ Can I ask how to compute $\log_3 7$ using the change of base formula for logarithms?
If you mean, "How can I calculate $\log_3 7$ using the change of base formula?": I've never memorized the change of base formula, I always re-derive it as needed. The key is to remember what the expression means: $\log_3 7 = r$ means that $3^r = 7$. Taking logarithms base $b$ on both sides, we have $$\begin{align*} 3^r &= 7\\ \log_b(3^r) &= \log_b(7)\\ r\log_b 3&= \log_b 7\\ r &= \frac{\log_b 7}{\log_b 3}\\ \log_3 7 &= \frac{\log_b 7}{\log_b 3}. \end{align*}$$ So if you want to compute $\log_3 7$ using the natural log, you would have $$\log_3 7 = \frac{\ln 7}{\ln 3}.$$ If you want to compute them using the common logarithm (base 10), you would compute $$\log_3 7 = \frac{\log 7}{\log 3}.$$
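As a quick numerical sanity check of the formula (the variable names here are mine):

```python
import math

# log_3 7 = ln 7 / ln 3, by the change of base formula
r = math.log(7) / math.log(3)

# check the defining property: 3^r should recover 7
assert abs(3 ** r - 7) < 1e-9

# math.log takes an optional base argument that applies the same formula
assert abs(r - math.log(7, 3)) < 1e-12

# base 10 works equally well
assert abs(r - math.log10(7) / math.log10(3)) < 1e-12
```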
{ "language": "en", "url": "https://math.stackexchange.com/questions/36442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to compute homotopy classes of maps on the 2-torus? Let $\mathbb T^2$ be the 2-Torus and let $X$ be a topological space. Is there any way of computing $[\mathbb T^2,X]$, the set of homotopy class of continuous maps $\mathbb T^2\to X$ if I know, for instance, the homotopy groups of $X$? Actually, I am interested in the case $X=\mathbb{CP^\infty}$. I would like to classify $\mathbb T^1$-principal bundles over $\mathbb T^2$ (in fact $\mathbb T^2$-principal bundles, but this follows easily.)
If you want to calculate $[\mathbb T^2,\mathbb CP^\infty]$, perhaps, it's easier to use the classification of maps to $\mathbb CP^\infty$ instead: $[X,\mathbb CP^\infty]=H^2(X)$; so $[\mathbb T^2,\mathbb CP^\infty]=H^2(\mathbb T^2)=\mathbb Z$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Gram matrix invertible iff set of vectors linearly independent Given a set of vectors $v_1, \ldots, v_n$, the $n\times n$ Gram matrix $G$ is defined as $G_{i,j}=v_i \cdot v_j$. Due to symmetry of the dot product, $G$ is Hermitian. I'm trying to remember why $|G|=0$ iff the set of vectors is not linearly independent.
JasonMond's "only if" part is not as general as it should be, because s/he assumes that $A$ is square. In the following, I complete the proof so that it holds whether $A$ is square or not. Let $G = A^T A$. If the column vectors of $A$ are linearly dependent, there exists a vector $u \neq 0$ such that $$ A u = 0. $$ It follows that $$ 0 = A^T A u = G u. $$ Since $u \neq 0$, $G$ is not invertible. Conversely, if $G$ is not invertible, there exists a vector $v \neq 0$ such that $$ G v = 0. $$ It follows that $$ 0 = v^T G v = v^T A^T A v = (A v)^T A v = \lVert A v\rVert^2 $$ and therefore that $$ A v = 0. $$ Since $v \neq 0$, the column vectors of $A$ are linearly dependent. QED.
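A small numerical illustration of the "dependent columns $\Rightarrow$ singular Gram matrix" direction; the vectors and helper function are my own choices, not part of the proof:

```python
# three vectors in R^3 with v3 = v1 + v2, so the columns of A are dependent
v1 = [1.0, 0.0, 2.0]
v2 = [0.0, 1.0, 1.0]
v3 = [v1[i] + v2[i] for i in range(3)]
vs = [v1, v2, v3]

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

# Gram matrix G_ij = v_i . v_j
G = [[dot(vi, vj) for vj in vs] for vi in vs]

# the dependency u = (1, 1, -1) satisfies A u = 0, hence G u = 0
u = [1.0, 1.0, -1.0]
Gu = [dot(row, u) for row in G]
assert all(abs(x) < 1e-12 for x in Gu)  # G kills a nonzero vector: singular
```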
{ "language": "en", "url": "https://math.stackexchange.com/questions/36580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 2 }
Evenly distribute points along a path I have a path which a user has hand drawn - the distance between the points which make up the path is likely to be variable. I would like to find a set of points along this path which are equally separated. Any ideas how to do this?
You're looking for an arc-length parametrization that you can sample uniformly. See http://groups.google.com/group/comp.graphics.algorithms/msg/c7025fd53b18db94 . You probably need to smooth the input data before doing this. See for instance Efficient Curve Fitting. See also a summary.
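A minimal sketch of the chord-length approach, assuming the path is given as a list of 2D points and skipping the smoothing step (the function name is mine):

```python
import math

def resample(points, m):
    """Return m points equally spaced in arc length along the polyline."""
    # cumulative chord length along the input points
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]

    out = []
    seg = 0
    for i in range(m):
        target = total * i / (m - 1)
        # advance to the segment containing the target arc length
        while seg < len(cum) - 2 and cum[seg + 1] < target:
            seg += 1
        denom = cum[seg + 1] - cum[seg]
        t = (target - cum[seg]) / denom if denom else 0.0
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

# an L-shaped path of total length 2, resampled at spacing 0.5
pts = resample([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)], 5)
gaps = [math.hypot(bx - ax, by - ay)
        for (ax, ay), (bx, by) in zip(pts, pts[1:])]
assert all(abs(g - 0.5) < 1e-12 for g in gaps)
```

Note that the spacing is equal along the path; for a polyline with sharp corners, the straight-line distance between consecutive resampled points can be smaller than the arc-length spacing.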
{ "language": "en", "url": "https://math.stackexchange.com/questions/36652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Convergence in the mean of Fourier series I need to do some research on Fourier series for my analysis class, so I'm trying to find info (preferably a book or paper with the proof) on this: "If $f$ is Riemann integrable on $[-l,l]$ then its Fourier series converges in the mean to $f$ on $[-l,l]$" Or in other words that "the Fourier trigonometric system is complete over the Riemann integrable functions on $[-l,l]$" I'm talking about this specific Fourier series $$f \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}{a_n\cos\frac{n\pi}{l}x + b_n\sin\frac{n\pi}{l}x}$$ Where $$a_n = \frac{1}{l}\int_{-l}^l{f(x)\cos\frac{n\pi}{l}x \, dx}$$ and $$b_n = \frac{1}{l}\int_{-l}^l{f(x)\sin\frac{n\pi}{l}x \, dx}$$ Any info you can give me on this would be appreciated.
Fourier Analysis: An Introduction by Elias Stein and Rami Shakarchi.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing that a level set is not a submanifold Is there a criterion to show that a level set of some map is not an (embedded) submanifold? In particular, an exercise in Lee's smooth manifolds book asks to show that the sets defined by $x^3 - y^2 = 0$ and $x^2 - y^2 = 0$ are not embedded submanifolds. In general, is it possible that a level set of a map which does not have constant rank on the set still defines an embedded submanifold?
The set given by $(x^2+y^2)(x^2+y^2-1)=0$ is an embedded submanifold in $\mathbb R^2$ but it has components of different dimension and so I guess the map does not have constant rank on the set, but I haven't checked.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
What is the math behind the game Spot It? I just purchased the game Spot It. As per this site, the structure of the game is as follows: Game has 55 round playing cards. Each card has eight randomly placed symbols. There are a total of 50 different symbols through the deck. The most fascinating feature of this game is any two cards selected will always have ONE (and only one) matching symbol to be found on both cards. Is there a formula you can use to create a derivative of this game with different numbers of symbols displayed on each card. Assuming the following variables: * *S = total number of symbols *C = total number of cards *N = number of symbols per card Can you mathematically demonstrate the minimum number of cards (C) and symbols (S) you need based on the number of symbols per card (N)?
$n^2 -n + 1$ where $n$ is the number of images. This is the simplest formula to arrive at the number of both individual symbols and the total number of cards required to display them (these are the same). I derived this formula logically but not necessarily mathematically, as follows: I picked a random card and focused on a single image. Assuming eight images per card as are found in this game, this image can only be found $8$ times: once on the card you're holding and $7$ more times. The same holds true for the next image. It can only appear $8$ times if it is to remain unique - once on the card you are holding and once on each of $7$ more cards. I noticed the trend. Each image appears once on the card you're holding and requires $7$ more cards. So, you need the 1 card you are holding and 7 more per image. Mathematically, I guess that's: $1 \text{ card} + (7\text{ cards}\times 8\text{ images})$. That's $1+(7\times8)$ or $1+56 = 57$. Logical, so far. Then, I ran the same logic and considered a card with only $4$ images. Each card would require one base card and $3$ additional cards per image. Mathematically, that would be $1+(3\times4)$. That's $1+12$ or $13$ cards. Then, I tried to tie these observations together. I asked myself "Is there a formula that would arrive at the right answer no matter the number of images?" The answer is yes. I remembered that in the examples above I started with 1 card then added (one less than the number of images) $\times$ (the number of images). That's $1+ (n-1)(n)$ if $n$ is the number of images. Then I just kinda rearranged a little: $$\begin{eqnarray*}1+ (n-1)(n) \\ 1+ (n)(n) - n \\ 1+ n^2 - n \\ n^2 - n + 1 \end{eqnarray*}$$ I tested it and it works out every time. I was very happy before I got yelled at by my wife for taking so long on the computer.
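The closed form is easy to check against the two card sizes worked out above:

```python
def spot_it_size(n):
    # number of cards (= number of symbols) for n symbols per card
    return n * n - n + 1

assert spot_it_size(8) == 57   # the 8-images-per-card case
assert spot_it_size(4) == 13   # the 4-images-per-card case
# equivalent to the 1 + (n-1)*n "base card plus extras" count
assert all(spot_it_size(n) == 1 + (n - 1) * n for n in range(2, 20))
```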
{ "language": "en", "url": "https://math.stackexchange.com/questions/36798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76", "answer_count": 9, "answer_id": 1 }
Proving any infinite set has a denumerable subset with the Axiom of Choice Derive from the axiom of choice that any infinite set contains a denumerable subset
Here's a way to get started. Let $A$ be an infinite set. Let $F$ be a choice function on $\mathscr{P}(A)-\{\emptyset\}$. Now let $B$ be the collection of all finite subsets of $A$, and let $\emptyset\in B$ as well. Now let $f\colon B\to B$ be defined by $X\mapsto X\cup\{F(A-X)\}$. By the recursion theorem on $\omega$, we know there exists a function $h\colon\omega\to B$ such that $h(0)=\emptyset$ and $$ h(n^+)=f(h(n))=h(n)\cup\{F(A-h(n))\} $$ for every $n\in\omega$. Claim. For every $m\leq n$, we have $h(m)\subset h(n)$. To see this, use induction. Let $$ K=\{n\in\omega\ |\ m\leq n\implies h(m)\subset h(n)\}. $$ Clearly $0\in K$, for if $m\leq 0$, then $m=0$, and obviously $h(0)\subset h(0)$. So suppose $n\in K$. If $m\leq n^+$ then either $m\leq n$ or $m=n^+$. In the first case, $h(m)\subset h(n)\subset h(n^+)$. In the second, $h(m)=h(n^+)$, so the conclusion follows either way. Hence $n^+\in K$, so $K=\omega$. Now let $g\colon\omega\to A$ be defined as $n\mapsto F(A-h(n))\in A-h(n)$, which implies immediately that $g(n)\notin h(n)$. Try showing that $g$ is injective, which will prove that $g$ is surjective onto its range, which is a subset of $A$. Since $g$ is then a bijection from $\omega$ onto $\text{ran }g$, $\text{ran }g$ will be countable, and you'll have your result. Added: To show $g$ is injective, suppose $m\neq n$, and let's suppose $m<n$. This means $m^+\leq n$, so by the above claim, $h(m^+)\subset h(n)$. Then $$ h(m^+)=h(m)\cup\{F(A-h(m))\}=h(m)\cup\{g(m)\}. $$ What does this tell you about $g(m)$ in relation to $h(m^+)$ and $h(n)$? Is it then possible that $g(m)=g(n)$? Why not? This proves $g$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
A property of $J$-semisimple rings I'd like a little help on how to begin this problem. Show that a PID $R$ is Jacobson-semisimple $\Leftrightarrow$ $R$ is a field or $R$ contains infinitely many nonassociate irreducible elements. Thanks.
HINT $\ $ A PID has a nonzero element divisible by every prime iff it has finitely many primes. NOTE $\ $ This holds much more generally. We have the following generalization to Krull domains (e.g. UFDs and Noetherian integrally-closed domains, e.g. Dedekind domains, number rings) THEOREM $\ $ A Krull domain has a nonzero element in every nonzero prime ideal iff it is a PID with finitely many primes. For a proof see e.g. Theorem 1 in Gilmer: The pseudo-radical of a commutative ring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/36875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
About the factors of the product of prime numbers If a number is a product of unique prime numbers, are the factors of this number the used unique prime numbers ONLY? Example: 6 = 2 x 3, 15 = 3 x 5. But I don't know for large numbers. I will be using this in my code to speed up my checking on uniqueness of data. Thanks! :D Edit: I will be considering all unique PRIME factors only. For example, I will not generate 9 because its factors are both 3 (I don't consider 1 here), and also 24 (= 2 x 2 x 2 x 3). I want to know if it is TRUE that if unique PRIME numbers are multiplied, the product's PRIME factors are only those PRIME factors that we multiplied in the first place. Sorry for not clarifying it earlier.
Yes, the unique factorization of squarefree integers is simply a special case of the unique factorization of integers. More generally, in any integral domain, the same proof as for integers shows that prime factorizations are necessarily unique. However, generally an element needn't have a prime factorization since generally an irreducible element $\rm\:p\:$ needn't be prime, i.e. $\rm\:p\ |\ a\:b\ \Rightarrow\ p\ | a\:$ or $\rm\:p\ |\ b\ $ may not be true for all irreducible elements $\rm\:p\:.$ Additionally, the existence of factorizations into irreducibles may fail, e.g. there are no irreducible elements (hence no primes) in the domain of all algebraic integers since $\rm\ \ a\ =\ \sqrt{a}\ \sqrt{a}\:.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/36927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
A Curious Binomial Sum Identity without Calculus of Finite Differences Let $f$ be a polynomial of degree $m$ in $t$. The following curious identity holds for $n \geq m$, \begin{align} \binom{t}{n+1} \sum_{j = 0}^{n} (-1)^{j} \binom{n}{j} \frac{f(j)}{t - j} = (-1)^{n} \frac{f(t)}{n + 1}. \end{align} The proof follows by transforming it into the identity \begin{align} \sum_{j = 0}^{n} \sum_{k = j}^{n} (-1)^{k-j} \binom{k}{j} \binom{t}{k} f(j) = \sum_{k = 0}^{n} \binom{t}{k} (\Delta^{k} f)(0) = f(t), \end{align} where $\Delta^{k}$ is the $k^{\text{th}}$ forward difference operator. However, I'd like to prove the aforementioned identity directly, without recourse to the calculus of finite differences. Any hints are appreciated! Thanks.
I just ran across this question after working on this answer, and realized that the same method could be used here. Notice that your equation is equivalent to $$ \sum_{j=0}^n(-1)^{n-j}\binom{n}{j}\frac{f(j)}{t-j}=\frac{n!f(t)}{t(t-1)(t-2)\dots(t-n)} $$ As long as $f$ is a polynomial of degree $n$ or less, apply the Heaviside Method for Partial Fractions to the right hand side to get the left hand side. That is, to compute the coefficient of $\frac1{t-j}$ on the left hand side, multiply both sides by $t-j$ and set $t=j$. The right hand side becomes $$ \frac{n!f(j)}{j(j-1)(j-2)\dots1(-1)(-2)\dots(j-n)}=(-1)^{n-j}\binom{n}{j}f(j) $$
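The identity can be verified exactly for a particular case with `fractions`; the choices of $f$, $n$ and $t$ here are mine:

```python
from fractions import Fraction
from math import comb, factorial

def f(x):
    return x**3 + 2*x + 1  # any polynomial of degree <= n

n, t = 3, Fraction(7)

# left-hand side: sum of (-1)^(n-j) C(n,j) f(j) / (t - j)
lhs = sum(Fraction((-1)**(n - j) * comb(n, j) * f(j), 1) / (t - j)
          for j in range(n + 1))

# right-hand side: n! f(t) / (t (t-1) ... (t-n))
rhs = factorial(n) * Fraction(f(t))
for k in range(n + 1):
    rhs /= (t - k)

assert lhs == rhs  # exact rational equality, no rounding
```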
{ "language": "en", "url": "https://math.stackexchange.com/questions/36990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
What is boxcar averaging? This is an application in signal processing but what I don't understand is how it's done algorithmically. I've seen some stuff online but most of it is just pictures. I would like an example on some type of sample data such as [0 1 2 3 4 5 6 7 8 9 10] and if the width is 3 or 5. Also what is the purpose of smoothing? Thanks!
This is usually known as a moving average in my experience. You may have better search results using this term. Let's take your sequence of data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. We want a moving average with a window of 3. The resulting sequence is [0/1, (0+1)/2, (0+1+2)/3, (1+2+3)/3, (2+3+4)/3, (3+4+5)/3, (4+5+6)/3, (5+6+7)/3, (6+7+8)/3, (7+8+9)/3, (8+9+10)/3]. Performing all the arithmetic: [0,.5,1,2,3,4,5,6,7,8,9]. This procedure is used to try to get a more accurate picture of the trend of a time series, most notably financial time series. One can model a time series S(t)=T(t)+N(t) where S(t) is the series, T(t) is the trend, and N(t) is noise. Smoothing tries to get rid of N(t). And you should know that there are better ways to do smoothing than moving averages.
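The partial-window average described above can be written in a few lines (the function name is mine):

```python
def boxcar(data, width):
    # average over the last `width` samples, shrinking the window at the start
    return [sum(data[max(0, i - width + 1): i + 1]) / min(i + 1, width)
            for i in range(len(data))]

# reproduces the worked example: window of 3 over [0..10]
assert boxcar(list(range(11)), 3) == [0, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Note that other conventions exist: some definitions center the window on each sample, or emit only the full windows (dropping the first `width - 1` outputs).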
{ "language": "en", "url": "https://math.stackexchange.com/questions/37059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do we prove the existence of uncountably many transcendental numbers? I know how to prove the countability of sets using equivalence relations to other sets, but I'm not sure how to go about proving the uncountability of the transcendental numbers (i.e., numbers that are not algebraic).
If you accept the premise that $|\mathbb R| = |P(\mathbb N)|$ then you know it is uncountable, due to Cantor's theorem. Take all polynomials with rational coefficients; each has only finitely many roots in $\mathbb R$. Since the set of such polynomials is equivalent to $\bigcup_{n\in\mathbb N} \mathbb Q^n$ (the union over all degrees), which is countable, you have only countably many possible roots for all rational polynomials and thus only countably many algebraic numbers. Now consider $\mathbb R\setminus A$, where $A$ is the set of all algebraic numbers. If it were countable, then $\mathbb R = (\mathbb R\setminus A)\cup A$ would be a union of two countable sets, hence countable, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/37121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Combination of smartphones' pattern password Have you ever seen this interface? Nowadays, it is used for locking smartphones. If you haven't, here is a short video on it. The rules for creating a pattern are as follows. * *We must use at least four nodes to make a pattern. *Once a node is visited, it can't be visited again. *You can start at any node. *A pattern has to be connected. *Cycles are not allowed. How many distinct patterns are possible?
I don't have the answer as "how to mathematically demonstrate the number of combinations". Still, if that helps, I brute-forced it, and here are the results. * *$1$ dot: $9$ *$2$ dots: $56$ *$3$ dots: $320$ *$4$ dots: $1624$ *$5$ dots: $7152$ *$6$ dots: $26016$ *$7$ dots: $72912$ *$8$ dots: $140704$ *$9$ dots: $140704$ Total for $4$ to $9$ dots: $389,112$ combinations
{ "language": "en", "url": "https://math.stackexchange.com/questions/37167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 6, "answer_id": 2 }
Factorize $x^3-3x+2$ How can I factorize $x^3-3x+2$? The answer that I got on the internet is $x^3-2x^2+x+2x^2-4x+2=(x-1)^2(x+2)$ It would be nice if anyone could also tell me what these types of equations are called and where I can learn more.
They are called cubic functions / cubic equations. A closed formula for the solutions exists, but it is quite ugly, so the common method to factorize the term is to guess one root $x_0$ and then do long division by $(x-x_0)$. Method one would be to use the formula on $x^3-3x+2=0$, find that the roots are $1,1,-2$, and you are done. The trivial method is to guess $x_0=1$ and use long division.
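The guess-and-divide method can be mechanized for monic integer polynomials; a rough sketch (helper names are mine, and root candidates are restricted to integer divisors of the constant term):

```python
def int_roots(coeffs):
    """Integer roots (with multiplicity) of a monic integer polynomial,
    found by guess-a-root-then-divide; coeffs are highest degree first."""
    def eval_p(c, x):
        r = 0
        for a in c:          # Horner evaluation
            r = r * x + a
        return r

    def divide(c, r):
        # synthetic division of c by (x - r); remainder must be zero
        out = [c[0]]
        for a in c[1:]:
            out.append(a + r * out[-1])
        assert out.pop() == 0
        return out

    roots = []
    c = list(coeffs)
    while len(c) > 1:
        if c[-1] == 0:       # x itself divides the polynomial
            roots.append(0)
            c = c[:-1]
            continue
        const = abs(c[-1])
        for r in range(-const, const + 1):
            if r != 0 and const % abs(r) == 0 and eval_p(c, r) == 0:
                roots.append(r)
                c = divide(c, r)
                break
        else:
            break            # no integer root left
    return sorted(roots)

assert int_roots([1, 0, -3, 2]) == [-2, 1, 1]  # x^3 - 3x + 2 = (x-1)^2 (x+2)
```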
{ "language": "en", "url": "https://math.stackexchange.com/questions/37217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
If for every $v\in V$ $\langle v,v\rangle_{1} = \langle v,v \rangle_{2}$ then $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$ Let $V$ be a finite-dimensional vector space over $\mathbb{C}$ or $\mathbb{R}$. How does one prove that if $\langle\cdot,\cdot\rangle_{1}$ and $\langle \cdot, \cdot \rangle_{2}$ are two inner products and $\langle v,v\rangle_{1} = \langle v,v\rangle_{2}$ for every $v\in V$, then $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$? The idea is clear to me, I just can't understand how to formalize it. Thank you.
Hint: Note that the associated norms satisfy $\|v\|_1 = \sqrt{\langle v,v\rangle_1} = \sqrt{\langle v,v\rangle_2} = \|v\|_2$ and then use the polarization identity to recover the scalar products and see that they are equal.
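For the real case, the polarization identity reads $\langle u,v\rangle = \frac{1}{4}\left(\|u+v\|^2 - \|u-v\|^2\right)$, so the norm determines the inner product. A quick numerical check for the standard dot product on $\mathbb R^3$ (the test vectors are chosen arbitrarily):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm_sq(u):
    return dot(u, u)

u = [1.0, -2.0, 3.0]
v = [4.0, 0.5, -1.0]

plus = [a + b for a, b in zip(u, v)]
minus = [a - b for a, b in zip(u, v)]

# real polarization identity: <u,v> = (||u+v||^2 - ||u-v||^2) / 4
assert abs(dot(u, v) - (norm_sq(plus) - norm_sq(minus)) / 4) < 1e-12
```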
{ "language": "en", "url": "https://math.stackexchange.com/questions/37252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Will normal random variables form a normal random vector? If $X_1, ..., X_n$ are normal random variables, will $[X_1, ..., X_n]$ be a normal random vector? I know that if $X_1, ..., X_n$ are in addition independent, then $[X_1, ..., X_n]$ is a normal random vector. Is there a more relaxed condition on $X_1, ..., X_n$ than independence? Thanks in advance!
Saying that $[X_1,...,X_n]$ is a normal (or Gaussian) random vector is the same as saying that those variables are jointly normal, or that it is a multivariate Gaussian. That each $X_i$ is normal (i.e., the marginal distributions are normal) is necessary but not sufficient (it is easy to give counterexamples). That each $X_i$ is normal and they are independent is, as you say, sufficient but not necessary. A general (necessary and sufficient) condition can be expressed in terms of a linear combination of normal iid scalar variables. Specifically: given $Z_1, \ldots, Z_n$ iid standard normals (mean 0 and variance 1), then $X = A Z + b$ (with $A$ any square nonsingular matrix and $b$ any fixed vector) is jointly normal, and this is fully general. From this comes the formula of the (general) multivariate Gaussian variable. Conversely, if $X$ is a normal random vector then one can find a matrix $C$ and a vector $d$ such that $Z = C X + d$ is a vector of iid standard normal variables. This is a multivariate generalization of the well-known formula to standardize a Gaussian: $z = \frac{x-\mu}{\sigma}$
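A sampling sketch of the construction $X = AZ + b$, checking that the resulting mean is $b$ and the covariance is $AA^T$ (the matrix, seed, sample size and tolerances are my choices):

```python
import random

random.seed(42)

# X = A Z + b with Z iid standard normal; X is then jointly normal
A = [[2.0, 0.0],
     [1.0, 1.0]]
b = [3.0, -1.0]
N = 100_000

samples = []
for _ in range(N):
    z = [random.gauss(0, 1), random.gauss(0, 1)]
    x = [A[i][0] * z[0] + A[i][1] * z[1] + b[i] for i in range(2)]
    samples.append(x)

mean = [sum(s[i] for s in samples) / N for i in range(2)]
cov = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / N
        for j in range(2)] for i in range(2)]

# theory: mean of X is b, covariance of X is A A^T = [[4, 2], [2, 2]]
expected = [[4.0, 2.0], [2.0, 2.0]]
for i in range(2):
    assert abs(mean[i] - b[i]) < 0.1
    for j in range(2):
        assert abs(cov[i][j] - expected[i][j]) < 0.2
```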
{ "language": "en", "url": "https://math.stackexchange.com/questions/37318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Exhibit an integral domain $R$ and a non-zero non-unit element of $R$ that is not a product of irreducibles. Exhibit an integral domain $R$ and a non-zero non-unit element of $R$ that is not a product of irreducibles. My thoughts so far: I don't really have a clue. Could anyone direct me on how to think about this? I'm struggling to get my head round irreducibles. Thanks.
How about something like $\mathbb{C}[x_1,x_2,x_3,...]$ where $x_i^2=x_{i-1}$ for $i>1$? Then $x_1$ is a non-zero non-unit, yet it is not a product of irreducibles: any factorization $x_1 = x_2^2 = x_3^4 = \cdots$ can be refined indefinitely, since every $x_i$ is itself the square of a non-unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/37485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Counting trails in a triangular grid A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$. How many trails are there from $1$ to $N$ in this graph? A trail is allowed to visit a vertex more than once, but it cannot travel along the same edge twice. I wrote a program to count the trails, and I obtained the following results for $1 \le N \le 17$. $$1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044, 34989, 101695, 295642, 859566, 2499277$$ This sequence is not in the OEIS, but Superseeker reports that the sequence satisfies the fourth-order linear recurrence $$2 a(N) + 3 a(N + 1) - a(N + 2) - 3 a(N + 3) + a(N + 4) = 0.$$ Question: Can anyone prove that this equation holds for all $N$?
This is not a new answer, just an attempt to slightly demystify user9325's very elegant answer to make it easier to understand and apply to other problems. Of course this is based on what I myself find easier to understand; others may prefer user9325's original formulation. The crucial insight, in my view, is not the use of a variable weight and a polynomial (which serve as convenient bookkeeping devices), but that the problem becomes more tractable if we generalize it. This becomes apparent when we try a similar approach without this generalization: We might try to decompose $a(n)$ into two contributions corresponding to the two edges from $n-2$ and $n-1$ by which we can get to $n$, and in each case account for the new possibilities arising from the new vertices and edges. The contribution from $n-1$ is straightforward, but the contribution from $n-2$ causes a problem: We can now travel between $n-3$ and $n-2$ either directly or via $n-1$, and we can't just add a factor of $2$ to take this into account because there are trails using both of these possibilities. This is where the idea of an edge parallel to the final edge arises: Even though we're only interested in the final result without a parallel edge, the recurrence leads to parallel edges, so we need to include that possibility. We can do this without edge weights or polynomials by just counting the number $b(n)$ of trails that use the parallel edge separately from the number $a(n)$ of trails that don't. (I'm not saying we should; the polynomial, like a generating function, is an elegant and useful way to keep track of things; I'm just trying to emphasize that the polynomial isn't an essential part of the central idea of generalizing the original problem.) 
Counting the number $a(n)$ of trails that don't use the parallel edge, we have a contribution $a(n-1)$ from trails ending with the normal edge from $n-1$, and a contribution $a(n-2)+b(n-2)$ from trails ending with the edge from $n-2$, which may ($b$) or may not ($a$) go via $n-1$: $$a(n)=a(n-1)+a(n-2)+b(n-2)\;.$$ Counting the number $b(n)$ of trails that do use the parallel edge, we have a contribution $a(n-1)+b(n-1)$ from trails ending with the parallel edge, which may ($b$) or may not ($a$) go via $n$, a contribution $b(n-1)$ from trails ending with the normal edge from $n-1$, which have to go via $n$ (hence $b$), and a contribution $2b(n-2)$ from trails ending with the edge from $n-2$, which have to go via $n-1$ (hence $b$) and can use the normal edge from $n-1$ and the parallel edge in either order (hence the factor $2$): $$b(n)=a(n-1)+b(n-1)+b(n-1)+2b(n-2)\;.$$ This is precisely user9325's result, with $a(n)=d_n$ and $b(n)=c_n$. There was a tad more work in counting the possibilities, but then we didn't have to compare coefficients.
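The coupled recurrence can be checked against the brute-force counts from the question; the initial values $a(1)=1$, $a(2)=1$, $b(1)=0$, $b(2)=1$ are my reading of the small cases, not stated explicitly above:

```python
target = [1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044,
          34989, 101695, 295642, 859566, 2499277]

# a(n): trails not using the parallel edge; b(n): trails using it
a = {1: 1, 2: 1}
b = {1: 0, 2: 1}   # assumed initial values, chosen to match the small cases
for n in range(3, 18):
    a[n] = a[n - 1] + a[n - 2] + b[n - 2]
    b[n] = a[n - 1] + 2 * b[n - 1] + 2 * b[n - 2]

assert [a[n] for n in range(1, 18)] == target

# the sequence also satisfies Superseeker's fourth-order recurrence
for i in range(len(target) - 4):
    assert (2 * target[i] + 3 * target[i + 1] - target[i + 2]
            - 3 * target[i + 3] + target[i + 4]) == 0
```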
{ "language": "en", "url": "https://math.stackexchange.com/questions/37553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 2, "answer_id": 1 }
Expected Value for summing over distinct random integers? Let $L=\{a_1,a_2,\ldots,a_k\}$ be a random (uniformly chosen) subset of length $k$ of the numbers $\{1,2,\ldots,n\}$. I want to find $E(X)$ where $X$ is the random variable that sums all numbers. We might want that $k < n$ too. My main problem is that I cannot get the function $q(a,k,n)$ that gives me the number of ways to write the number $a$ as the sum of exactly $k$ distinct addends less or equal $n$. This seems related but it doesn't limit the size of the numbers.
Hint: (1) Compute $E(a_1)$. (2) Show that $E(a_k)=E(a_1)$ for every $k$. (3) Deduce $E(X)$. This is a good example where computing the distribution is messy but computing the expectation is easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/37614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Variance for summing over distinct random integers Let $L=\{a_1,a_2,\ldots,a_k\}$ be a random (uniformly chosen) subset of size $k$ of the numbers $\{1,2,\ldots,n\}$. I want to find $\operatorname{Var}(X)$ where $X$ is the random variable that sums all numbers, with $k < n$. Earlier today I asked about the expected value, which I noticed was easier than I thought. But now I have been stuck on the variance for several hours and cannot make any progress. I see that $E(X_i)=\frac{n+1}{2}$ and $E(X)=k \cdot \frac{n+1}{2}$. I tried to use $\operatorname{Var}\left(\sum_{i=1}^na_iX_i\right)=\sum_{i=1}^na_i^2\operatorname{Var}(X_i)+2\sum_{i=1}^{n-1}\sum_{j=i+1}^na_ia_j\operatorname{Cov}(X_i,X_j)$ but especially the second sum is hard to evaluate by hand (every time I do this I get a different result :-) ) and I have no idea how to simplify the covariance term. Furthermore I know that $\operatorname{Var}(X)=\operatorname{E}\left(\left(X-\operatorname{E}(X)\right)^2\right)=\operatorname{E}\left(X^2\right)-\left(\operatorname{E}(X)\right)^2$, so the main problem is getting $\operatorname{E}\left(X^2\right)$. Maybe there is also an easier way than to use those formulas. I think I got the correct result via trial and error: $\operatorname{Var}(X)=\frac{1}{12} k (n - k) (n + 1)$, but not the way to get there.
Let me (almost) quote myself: Hint: (1) Compute $E(a_1^2)$. (2) Show that $E(a_k^2)=E(a_1^2)$ for every $k$. (3) Compute $E(a_1a_2)$. (4) Show that $E(a_ka_i)=E(a_1a_2)$ for every $k\ne i$. (5) Deduce $E(X^2)$. You probably already know the distribution of $a_1$ hence (1) and (2) are easy. Now, you simply need to know the distribution of $(a_1,a_2)$. This is (still) a good example where computing the distribution is messy but computing expectations (and variances) is easy. Edit One finds $$ \mbox{var}(a_1)=\frac{1}{12}(n+1)(n-1),\quad \mbox{cov}(a_1,a_2)=-\frac{1}{12}(n+1). $$ Hence $$ \mbox{var}(X)=k\mbox{var}(a_1)+k(k-1)\mbox{cov}(a_1,a_2)=\frac{1}{12}k(n+1)(n-k), $$ as computed by the OP. Sanity check: if $k=n$, $\mbox{var}(X)=0$.
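A brute-force check of the variance formula for one small case (the values $n=6$, $k=3$ are my choice):

```python
from fractions import Fraction
from itertools import combinations

n, k = 6, 3

# sum of every k-subset of {1, ..., n}, each equally likely
sums = [sum(c) for c in combinations(range(1, n + 1), k)]
m = len(sums)

mean = Fraction(sum(sums), m)
var = Fraction(sum(s * s for s in sums), m) - mean ** 2

assert mean == Fraction(k * (n + 1), 2)           # E(X) = k (n+1) / 2
assert var == Fraction(k * (n + 1) * (n - k), 12)  # Var(X) = k (n+1)(n-k) / 12
```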
{ "language": "en", "url": "https://math.stackexchange.com/questions/37683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to evaluate $\lim\limits_{h \to 0} \frac {3^h-1} {h}=\ln3$? How is $$\lim_{h \to 0} \frac {3^h-1} {h}=\ln3$$ evaluated?
$\lim_{h\to 0}\frac{3^h-1}{h}=\lim_{h\to 0}\frac{e^{h\log 3}-1}{h}$. Now expansion of $e^{h\log 3}=1+\frac{h\log 3}{1!}+\frac{h^2(\log 3)^2}{2!}\cdots \implies \frac{e^{h\log 3}-1}{h}=\log 3+\frac{h(\log 3)^2}{2!}\cdots \implies \lim_{h\to 0}\frac{e^{h\log 3}-1}{h}=\log 3+0+0+\cdots = \log 3$ Hence, $\lim_{h\to 0}\frac{3^h-1}{h}=\log 3$.
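A quick numerical check of the limit (the step size is my choice):

```python
import math

h = 1e-6
approx = (3 ** h - 1) / h
assert abs(approx - math.log(3)) < 1e-4

# same computation via e^{h log 3}, matching the expansion argument above
assert abs(math.expm1(h * math.log(3)) / h - math.log(3)) < 1e-4
```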
{ "language": "en", "url": "https://math.stackexchange.com/questions/37796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 4 }
Computing the integral of $\log(\sin x)$ How to compute the following integral? $$\int\log(\sin x)\,dx$$ Motivation: Since $\log(\sin x)'=\cot x$, the antiderivative $\int\log(\sin x)\,dx$ has the nice property $F''(x)=\cot x$. Can we find $F$ explicitly? Failing that, can we find the definite integral over one of intervals where $\log (\sin x)$ is defined?
An excellent discussion of this topic can be found in the book The Gamma Function by James Bonnar. Consider just two of the provably equivalent definitions of the Beta function: $$ \begin{eqnarray} B(x,y)&=& 2\int_0^{\pi/2}\sin(t)^{2x-1}\cos(t)^{2y-1}\,dt\\ &=& \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}. \end{eqnarray} $$ Directly from this definition we have $$ B(n+\frac{1}{2},\frac{1}{2}): \int_0^{\pi/2}\sin^{2n}(x)\,dx=\frac{\sqrt{\pi} \cdot\Gamma(n+1/2)}{2(n!)} $$ $$ B(n+1,\frac{1}{2}): \int_0^{\pi/2}\sin^{2n+1}(x)\,dx=\frac{\sqrt{\pi} \cdot n!}{2 \Gamma(n+3/2)} $$ Hence the quotient of these two integrals is $$ \begin{eqnarray} \frac{ \int_0^{\pi/2}\sin^{2n}(x)\,dx}{\int_0^{\pi/2}\sin^{2n+1}(x)\,dx}&=& \frac{\Gamma(n+1/2)}{n!}\frac{\Gamma(n+3/2)}{n!}\\ &=& \frac{2n+1}{2n}\frac{2n-1}{2n}\frac{2n-1}{2n-2}\cdots\frac{3}{4}\frac{3}{2}\frac{1}{2}\frac{\pi}{2} \end{eqnarray} $$ where the quantitiy $\pi/2$ results from the fact that $$ \frac{\int_0^{\pi/2}\sin^{2\cdot 0}(x)\,dx}{\int_0^{\pi/2}\sin^{2\cdot 0+1}x\,dx}=\frac{\pi/2}{1}=\frac{\pi}{2}. $$ So we have that $$ \int_0^{\pi/2}\sin^{2n}(x)\,dx=\frac{2n-1}{2n}\frac{2n-3}{2n-2}\cdots\frac{1}{2}\frac{\pi}{2}=\frac{(2n)!}{4^n (n!)^2}\frac{\pi}{2}. $$ Hence an analytic continuation of $\int_0^{\pi/2}\sin^{2n}(x)\,dx $ is $$ \int_0^{\pi/2}\sin^{2z}(x)\,dx=\frac{\pi}{2}\frac{\Gamma(2z+1)}{4^z \Gamma^2(z+1)}=\frac{\pi}{2}\Gamma(2z+1)4^{-z}\Gamma^{-2}(z+1). $$ Now differentiate both sides with respect to $z$ which yields $$ \begin{eqnarray} 2\int_0^{\pi/2}\sin^{2z}(x)\log(\sin(x))\,dx =\frac{\pi}{2} \{2\Gamma'(2z+1)4^{-z}\Gamma^{-2}(z+1)\\ +2\Gamma(2z+1)4^{-z}\Gamma^{-3}(z+1)\Gamma'(z+1)\\ -\log(4)\Gamma(2z+1)4^{-z}\Gamma^{-2}(z+1)\}. \end{eqnarray} $$ Finally set $z=0$ and note that $\Gamma'(1)=-\gamma$ to complete the integration: $$ \begin{eqnarray} 2\int_0^{\pi/2}\log(\sin(x))\,dx&=&\frac{\pi}{2}(-2\gamma+2\gamma-\log(4))\\ &=& -\frac{\pi}{2}\log(4)=-\pi\log(2). 
\end{eqnarray} $$ We conclude that $$ \int_0^{\pi/2}\log(\sin(x))\,dx=-\frac{\pi}{2}\log(2). $$
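The closed form is easy to sanity-check numerically; here is a midpoint-rule sketch (my own illustration — the midpoint rule copes with the integrable logarithmic singularity at $0$, though convergence there is slow):

```python
import math

def log_sin_integral(n_steps=20000):
    """Midpoint-rule approximation of the integral of log(sin x) on (0, pi/2).

    The integrand blows up like log(x) near 0, but the singularity is
    integrable, so the midpoint rule still converges to the true value.
    """
    a, b = 0.0, math.pi / 2
    h = (b - a) / n_steps
    return h * sum(math.log(math.sin(a + (k + 0.5) * h)) for k in range(n_steps))

approx = log_sin_integral()
exact = -(math.pi / 2) * math.log(2)   # the value derived above
# approx and exact agree to a few decimal places
```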
{ "language": "en", "url": "https://math.stackexchange.com/questions/37829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 11, "answer_id": 8 }
Is there any mathematical operation on Integers that yields the same result as doing bitwise "AND"? I'll provide a little bit of background so you guys can better understand my question: Let's say I have two positive, non-zero binary numbers. (Which can, obviously, be mapped to integers.) I will then proceed to do an "AND" operation for each bit (I think that's called a bitwise operation), which will yield yet another binary number. Ok. Now this new binary number can, in turn, also be mapped to an Integer. My question is: Is there any Integer operation I can do on the mapped Integer values of the two original binary numbers that would yield the same result? Thanks in advance. EDIT: I forgot to mention that what I'm looking for is a mathematical expression using things like +, -, /, pow(base,exp) and the like. I'm not 100% sure (I'm a computer scientist) but I think what I'm looking for is an isomorphism. LAST EDIT: I think this will clear any doubts as to what sort of mathematical expression I'm looking for. I wanted something like: The bitwise AND of two Integers A and B is always equal to (AB)X(B)X(3). The general feeling I got is that it's not possible or extremely difficult to prove (either its validity or non-validity).
Note that this is an integer operation as you have defined it and it is a perfectly valid mathematical definition. So you need to refine your question: You want a formula? What kind of formula would you accept?
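For what it's worth, if sums, products, powers, integer division and remainder count as acceptable operations, one such formula does exist: extract each bit of both numbers arithmetically and recombine. A minimal sketch (my own illustration, with no claim about efficiency):

```python
def and_arith(a, b, bits=32):
    """Bitwise AND of two non-negative integers using only +, *, //, % and powers.

    Bit k of n is (n // 2**k) % 2, and the AND of two bits is their product.
    """
    return sum(((a // 2**k) % 2) * ((b // 2**k) % 2) * 2**k for k in range(bits))

print(and_arith(12, 10))  # 8, the same as 12 & 10
```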
{ "language": "en", "url": "https://math.stackexchange.com/questions/37877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 2 }
generators of the symplectic group In Masoud Kamgarpour's paper "Weil Representations" he uses a set of generators for the symplectic group, referring to a book by R. Steinberg which I do not have access to. If it matters at all, I am working in characteristic zero. After choosing a symplectic basis, the generators can be written \begin{equation} \left( \begin{array}{cc} A & 0 \newline 0 & (A^t)^{-1} \end{array} \right), \ \left( \begin{array}{cc} I & B \newline 0 & I \end{array} \right), \ \text{and} \ \left( \begin{array}{cc} 0 & I \newline -I & 0 \end{array} \right), \end{equation} where $A$ ranges through invertible matrices and $B$ ranges through symmetric matrices. Does anyone know of a reference or an explanation for this, especially a coordinate-free conceptual and/or geometric one?
This is done in Symplectic Groups by Onorato Timothy O'Meara and uses the fact that the symplectic group over a field is generated by symplectic transvections.
{ "language": "en", "url": "https://math.stackexchange.com/questions/37947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
$\mu$ measurable functions and separable metric spaces I was reading a math textbook and the author gives the following without proof. I have no clue on how to proceed. Let $(X, \mathcal{F}, \mu)$ be a measure space and $(Y,d)$ be a separable metric space ($d$ is the metric). If $f:(X,\mathcal{F}) \rightarrow (Y, d)$ is a $\mu$-measurable function, prove that there exists an $\mathcal{F}$-measurable function which coincides with $f$ everywhere except on a $\mu$-negligible set. Any help is greatly appreciated. EDIT: The textbook is "Functions of Bounded Variation and Free Discontinuity Problems" by Luigi Ambrosio et al.
Edit: I have just figured out a much easier way. So, I edited the answer. Let $\mathcal{V} = \{V_n : n = 1, 2, \dotsc\}$ be a countable base for the topology of $Y$. For each $V_n$, choose a negligible $E_n \subset X$ such that $f^{-1}(V_n) \setminus E_n \in \mathcal{F}$. It may happen that $\bigcup E_n \not \in \mathcal{F}$. But since it is a negligible set, there is a negligible $Z \in \mathcal{F}$ such that $\bigcup E_n \subset Z$. Fix some $y \in Y$, and then define $$ g(x) = \left\{ \begin{array}{ll} f(x), & x \not \in Z \\ y, & x \in Z \end{array} \right. $$ Notice that for any $V_n \in \mathcal{V}$, if $y \not \in V_n$, $$ \begin{align*} g^{-1}(V_n) &= f^{-1}(V_n) \setminus Z \\&= (f^{-1}(V_n) \setminus E_n) \setminus Z \in \mathcal{F}. \end{align*} $$ And if $y \in V_n$, $$ \begin{align*} g^{-1}(V_n) &= f^{-1}(V_n) \cup Z \\&= (f^{-1}(V_n) \setminus E_n) \cup Z \in \mathcal{F}. \end{align*} $$ That is, $g^{-1}(\mathcal{V}) \subset \mathcal{F}$. All open sets of $Y$ are (countable) unions of elements in $\mathcal{V}$. Therefore, $\mathcal{V}$ generates the $\sigma$-algebra of Borel sets $\mathcal{B}$. And so, $g$ is $\mathcal{F}$-measurable. In fact, $$ g^{-1}(\mathcal{B}) = g^{-1}(\sigma(\mathcal{V})) = \sigma \left(g^{-1}(\mathcal{V})\right) \subset \mathcal{F}. $$ Since it is evident that $g$ and $f$ are equal almost everywhere, the proof is complete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
How to evaluate the following stochastic integral? How to evaluate the following stochastic integral? $$\int_0^t M_{s^-}^2 dM_s$$ where $M_t = N_t - \lambda t$ is a compensated Poisson process. I tried to apply Ito's formula to $M_t^2$ but still cannot solve it. Any help appreciated. References * *http://en.wikipedia.org/wiki/Poisson_distribution *http://en.wikipedia.org/wiki/Itō's_lemma *http://en.wikipedia.org/wiki/Stochastic_calculus
You can see the book Introduction to Stochastic Integration, p. 109 and following; Example 7.6.3 answers your question. Sorry for my English.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to solve forward equation for a continuous-time Markov chain? Given the transition rate matrix of a CTMC as $G$, I was wondering how the forward equation $P'(t) = P(t) G$, $P(0)=I$, is usually solved for the transition matrix $P(t)$? Some books say the solution has the form $P(t) = \exp\{tG\}$. Since the exponential of a matrix is defined as a series, I don't know if such a form for the solution can be simplified, and be helpful in determining the distribution given the beginning/ending state, i.e. a row/column vector in $P(t)$. Thanks and regards!
For a generic matrix there is no simpler expression for the exponential. But there are many ways to calculate the exponential of a matrix. I suggest that you ask a new question with an explicit example (possibly with parameters) to get a feel for the method.
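To get a feel for it, here is one possible sketch for a two-state chain (my own example — the rates $a$, $b$ and the closed form $P(t)=\Pi+e^{-(a+b)t}(I-\Pi)$ are standard for this case), comparing the series definition of $\exp\{tG\}$ against that closed form:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(G, t, terms=30):
    """exp(tG) for a 2x2 matrix via the Taylor series sum_k (tG)^k / k!."""
    tG = [[t * g for g in row] for row in G]
    result = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at the identity
    term = [[1.0, 0.0], [0.0, 1.0]]    # current term (tG)^k / k!
    for k in range(1, terms):
        term = mat_mul(term, tG)
        term = [[x / k for x in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

# Two-state chain: rate a from state 0 to 1, rate b from 1 to 0.
a, b, t = 2.0, 1.0, 0.5
G = [[-a, a], [b, -b]]
P = expm_series(G, t)

# Known closed form for this chain.
e, s = math.exp(-(a + b) * t), a + b
P_exact = [[(b + a * e) / s, (a - a * e) / s],
           [(b - b * e) / s, (a + b * e) / s]]
```

Each row of $P(t)$ sums to $1$, as a transition matrix must.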
{ "language": "en", "url": "https://math.stackexchange.com/questions/38143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Good (Auto)Biographies of von Neumann and other physicists/mathematicians Which is the "best" biography of von Neumann available to the casual reader (math undergrad)? Also, other than the Ulam book, which other good biographies of physicists/mathematicians can be recommended?
A bit of a meta-answer: for biographical searches on mathematicians, a very good way it to visit the Mathematical Biographies maintained by University of St Andrews, find the guy, scroll to the bottom, and click on the link to the list of bibliographic references.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 10, "answer_id": 2 }
How to calculate hyperbola from data points? I have 4 data points, from which I want to calculate a hyperbola. It seems that the Excel trendline feature can't do it for me, so how do I find the relationship? The points are: (x,y) (3, 0.008) (6, 0.006) (10, 0.003) (13, 0.002) Thanks!
If other people coming across this question want to fit a general hyperbola of the form $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$ there is a slightly cheap way of getting an estimate. Note: The best way is to do this is to use an iterative least squares model or something like that, but this'll give you a rough idea. (Thanks to Claude Leibovici for pointing this out) You can rearrange the general formula to: $y^2 = \frac{b^2x^2}{a^2} - b^2$ and then: * *substitute $\theta_1 = b^2/a^2$ and $\theta_2 = -b^2$ *substitute $Y = y^2$ and $X = x^2$ and voila! you can now do a standard linear regression to find $\theta_1$ and $\theta_2$ from the linear equation: $Y = \theta_1X + \theta_2$ Example You convert your data to X and Y first: +----+------+ +-----+--------+ | x | y | | X | Y | +----+------+ -> +-----+--------+ | 4 | 0 | | 16 | 0 | | 5 | 2.3 | Y=y^2 | 25 | 5.29 | | 6 | 3.34 | X=x^2 | 36 | 11.16 | | 10 | 6.85 | | 100 | 46.92 | | 12 | 8.48 | | 144 | 71.91 | | 17 | 12.4 | | 289 | 153.76 | | 20 | 14.7 | | 400 | 216.09 | +----+------+ +-----+--------+ Then run a linear regression on $X$ and $Y$ to get $\theta_1 = 0.563$ and $\theta_2 = -9.054$ Which implies: $b = \pm \sqrt{- \theta_2} \approx \pm 3.01$ and $a = \pm\sqrt{\frac{b^2}{\theta_1}} \approx \pm 4.01$
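The recipe above is easy to script; a minimal sketch in Python (plain least squares on the transformed data, run on the example values above — `fit_hyperbola` is just an illustrative name):

```python
def fit_hyperbola(xs, ys):
    """Fit x^2/a^2 - y^2/b^2 = 1 by linear regression on X = x^2, Y = y^2."""
    X = [x * x for x in xs]
    Y = [y * y for y in ys]
    n = len(X)
    sx, sy = sum(X), sum(Y)
    sxx = sum(x * x for x in X)
    sxy = sum(x * y for x, y in zip(X, Y))
    theta1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope     = b^2/a^2
    theta2 = (sy - theta1 * sx) / n                     # intercept = -b^2
    b = (-theta2) ** 0.5
    a = (b * b / theta1) ** 0.5
    return a, b, theta1, theta2

xs = [4, 5, 6, 10, 12, 17, 20]
ys = [0, 2.3, 3.34, 6.85, 8.48, 12.4, 14.7]
a, b, t1, t2 = fit_hyperbola(xs, ys)
# recovers roughly a = 4.01, b = 3.01, as in the worked example
```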
{ "language": "en", "url": "https://math.stackexchange.com/questions/38219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Edge of factoring technology? Schneier in 1996's Applied Cryptography says: "Currently, a 129-decimal-digit modulus is at the edge of factoring technology" In the intervening 15 years has anything much changed?
For factoring RSA moduli (product of two large distinct primes), the Number Field Sieve is still king. It has two phases: a parallelizable sieving phase where we collect relations (smooth polynomial values), then a matrix reduction step that is hard to parallelize and is usually done on a large computer. If quantum computers ever become practical, they can factor integers via Shor's algorithm in polynomial time. In fact, this is the only known compelling application for them, but it's enough for the military to spend millions in the U.S. funding research on quantum computers. Note: Peter Shor is a frequent contributor on this site.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Double sampling distribution One bag contains $n_1$ white and $n_2$ red balls. Another bag contains $m_1$ white and $m_2$ red balls. $N$ balls are drawn at random from the first bag and transferred to the second. Then $M$ balls are drawn at random from second bag. What is the probability that * *exactly $x$ balls are white? *$x$ balls or more are white?
Let $n=n_1+n_2$ denote the total number of balls in the first bag and $m=m_1+m_2$ the total number of balls in the second bag. Let $N_1$ denote the number of white balls transferred to the second bag and $N_2$ the number of red balls transferred to the second bag. Let $M_1$ denote the number of white balls drawn from the second bag and $M_2$ the number of red balls drawn from the second bag. Hence $N_1$, $N_2$, $M_1$ and $M_2$ are random but $N=N_1+N_2$ and $M=M_1+M_2$ are deterministic. Conditionally on $(N_1,N_2)$, $M_1$ is the number of white balls in a subset of $M$ balls chosen from $m_1+N_1$ white balls and $m_2+N_2$ red balls, hence $$ P(M_1=x|N_1=y)={m_1+y\choose x}{m_2+N-y\choose M-x}{m+N\choose M}^{-1}. $$ Likewise, $$ P(N_1=y)={n_1 \choose y}{n_2\choose N-y}{n\choose N}^{-1}. $$ Hence $$ P(M_1=x)=\sum_y{n_1\choose y}{n_2\choose N-y}{m_1+y\choose x}{m_2+N-y\choose M-x}{n\choose N}^{-1}{m+N\choose M}^{-1}. $$ The expectation is simpler, since one can get directly $$ E(M_1)=\frac{M}{n}\frac{nm_1+n_1N}{m+N}. $$
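These formulas can be sanity-checked exactly for small numbers. A sketch (my own choice of parameters, using exact rational arithmetic so there is no rounding to worry about):

```python
from fractions import Fraction
from math import comb

def p_m1(x, n1, n2, m1, m2, N, M):
    """P(M_1 = x): sum over the number y of white balls transferred."""
    n, m = n1 + n2, m1 + m2
    total = Fraction(0)
    for y in range(0, N + 1):
        p_transfer = Fraction(comb(n1, y) * comb(n2, N - y), comb(n, N))
        p_draw = Fraction(comb(m1 + y, x) * comb(m2 + N - y, M - x),
                          comb(m + N, M))
        total += p_transfer * p_draw
    return total

n1, n2, m1, m2, N, M = 3, 4, 2, 5, 3, 4
dist = [p_m1(x, n1, n2, m1, m2, N, M) for x in range(M + 1)]
mean = sum(x * p for x, p in zip(range(M + 1), dist))

# Compare with E(M_1) = (M/n) * (n*m1 + n1*N) / (m + N).
n, m = n1 + n2, m1 + m2
expected = Fraction(M, n) * Fraction(n * m1 + n1 * N, m + N)
```

(`math.comb` conveniently returns $0$ when the lower index exceeds the upper, so impossible terms drop out of the sum automatically.)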
{ "language": "en", "url": "https://math.stackexchange.com/questions/38351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Exact values of $\cos(2\pi/7)$ and $\sin(2\pi/7)$ What are the exact values of $\cos(2\pi/7)$ and $\sin(2\pi/7)$ and how do I work it out? I know that $\cos(2\pi/7)$ and $\sin(2\pi/7)$ are the real and imaginary parts of $e^{2\pi i/7}$ but I am not sure if that helps me...
$\cos2\pi/7$ is a root of a cubic equation with integer coefficients. You can find that cubic by using $\cos\theta=(1/2)(e^{i\theta}+e^{-i\theta})$, computing the square and the cube, and looking for linear relations, bearing in mind that the seven $7$th roots of unity add up to zero. Then you can use Cardano's formula to solve the cubic. I don't know if I recommend actually doing all this - I'm sure you get a mess, although the discriminant will be a perfect square, so you'll get some simplification there.
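For reference, carrying the computation out yields the cubic $8x^3+4x^2-4x-1$ for $\cos(2\pi/7)$ — a standard fact quoted here rather than derived — and it is easy to verify numerically:

```python
import math

c = math.cos(2 * math.pi / 7)
# cos(2*pi/7) is a root of 8x^3 + 4x^2 - 4x - 1, its minimal polynomial over Q
residual = 8 * c**3 + 4 * c**2 - 4 * c - 1
print(residual)  # zero up to floating-point error
```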
{ "language": "en", "url": "https://math.stackexchange.com/questions/38414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 1 }
Distribution of N balls numbered 1 to N with replacement An urn contains $N$ balls numbered $1, 2, 3, \ldots, N$. I draw $n$ balls at random, one by one with replacement. Let $X$ be the smallest number drawn, $Y$ the largest, and $S$ the sum of all the $n$ numbers. How to compute: - the probability $P(X=x, Y=y)$ that $X=x$ AND $Y=y$ - the probability that $S=s$
You should count the number of the "good" instances and divide by the total number of instances. For the first question - if the smallest is $x$ and the largest is $y$, then you are writing words of length $n$ using only the symbols $x,x+1,...,y-1,y$ (why?). How many such words do you have? For the second question - it's equivalent to counting the number of solutions to $X_1+...+X_n=s$ where $1\leq X_i\leq N$, and then dividing by the total number of words of length $n$ over $1,...,N$ (why?). As to finding the number of solutions to this equation: it's the same as the number of solutions to $Y_1+...+Y_n=s-n$ where $0\leq Y_i\leq N-1$. This can be done with the inclusion-exclusion principle. The other question, in your second post, is very similar. You can use the same approach, but there each letter/value can appear only once.
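For the first question, once you also require that both $x$ and $y$ actually occur, inclusion-exclusion gives $(y-x+1)^n-2(y-x)^n+(y-x-1)^n$ favorable words when $x<y$ (dividing by $N^n$ then gives the probability). A brute-force check of that count, in one small case of my own choosing:

```python
from itertools import product

def count_min_max(N, n, x, y):
    """Brute force: words of length n over {1..N} with min == x and max == y."""
    return sum(1 for w in product(range(1, N + 1), repeat=n)
               if min(w) == x and max(w) == y)

def formula(n, x, y):
    """Inclusion-exclusion count, valid for x < y."""
    d = y - x
    return (d + 1) ** n - 2 * d ** n + (d - 1) ** n

N, n = 5, 3
print(count_min_max(N, n, 2, 4), formula(n, 2, 4))  # both 12
```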
{ "language": "en", "url": "https://math.stackexchange.com/questions/38462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Degree 2 Field extensions Are all degree $2$ field extensions Galois? I know that this is true over the rationals. But is it true in general?
It is true over any field of characteristic different from $2$. If $F$ has characteristic different from $2$, and $K$ is of degree $2$ over $F$, then let $\alpha\in K$, $\alpha\notin F$. Then $K=F(\alpha)$, since $2=[K:F]=[K:F(\alpha)][F(\alpha):F]$, and $[F(\alpha):F]\gt 1$. Let $p(x)$ be the minimal polynomial of $\alpha$ over $F$. Then $p(x) = x^2 + rx+t$ for some $r,t\in F$, and $\alpha = \frac{-r+\sqrt{r^2-4t}}{2}$ or $\alpha=\frac{-r-\sqrt{r^2-4t}}{2}$ (since the characteristic is not $2$). Moreover, the polynomial is irreducible and separable, and $\sqrt{r^2-4t}\notin F$. So $K = F(\sqrt{r^2-4t})$ and $K$ is a splitting field of an irreducible separable polynomial (namely, $x^2 - (r^2-4t)$), hence is Galois over $F$. If the characteristic is $2$, then the result is true for perfect fields, but not in general, as the examples by Zev Chonoles and Giovanni De Gaetano show.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Infinite area under a curve has finite volume of revolution? So I was thinking about the harmonic series, and how it diverges, even though every subsequent term tends toward zero. That meant that its integral from 1 to infinity should also diverge, but would the volume of revolution also diverge (for the function y=1/x)? I quickly realized that its volume is actually finite, because to find the volume of revolution the function being integrated has to be squared, which would give 1/x^2, and, as we all know, that converges. So, my question is, are there other functions that share this property? The only family of functions that I know that satisfy this is 1/x, 2/x, 3/x, etc.
User's answer is a good one - but I wanted to mention a related topic. You might also note that the surface area of your object is also infinite, despite its finite volume. Thus, if you were to 'hold' such an object, you could fill it with paint but never cover its walls. This has a name - it's Gabriel's Horn (or Torricelli's Trumpet), and you can read about it here.
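The contrast is visible directly in the truncated integrals: out to $x=L$ the volume is $\pi\int_1^L dx/x^2=\pi(1-1/L)\to\pi$, while $2\pi\int_1^L dx/x=2\pi\ln L\to\infty$ is a lower bound for the surface area. A quick numerical sketch of these two closed forms:

```python
import math

def volume_to(L):
    """pi * integral of (1/x)^2 from 1 to L  =  pi * (1 - 1/L)."""
    return math.pi * (1 - 1 / L)

def area_lower_bound_to(L):
    """2*pi * integral of 1/x from 1 to L  =  2*pi*ln(L), a lower bound
    for the surface area of revolution out to x = L."""
    return 2 * math.pi * math.log(L)

for L in (10, 1000, 10**6):
    print(L, volume_to(L), area_lower_bound_to(L))
# the volume approaches pi while the area bound grows without limit
```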
{ "language": "en", "url": "https://math.stackexchange.com/questions/38611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Help to understand material implication This question comes from from my algebra paper: $(p \rightarrow q)$ is logically equivalent to ... (then four options are given). The module states that the correct option is $(\sim p \lor q)$. That is: $$(p\rightarrow q) \iff (\sim p \lor q )$$ but I could not understand this problem or the solution. Could anybody help me?
As it was suggested in a comment above, drawing a truth-table, especially when there are only two or three variables (i.e. atomic sentences), can really help to illustrate exactly when two given expressions are equivalent. In this case, we see that $(\sim p \lor q)$ is true exactly when $(p\rightarrow q)$ is true, and it is false exactly when $(p\rightarrow q)$ is false. That is, $(p\rightarrow q) \equiv (\sim p \lor q)$. Alternatively, we can recognize the equivalence of the two expressions simply by comparing the column of truth-values corresponding to each expression and see that the two columns are identical, and hence, the expressions are logically equivalent. Logical equivalence: (p → q) $\equiv$ (¬p ∨ q):
+---+---+-------+--------+
| p | q | p → q | ¬p ∨ q |
+---+---+-------+--------+
| T | T |   T   |   T    |
| T | F |   F   |   F    |
| F | T |   T   |   T    |
| F | F |   T   |   T    |
+---+---+-------+--------+
Note that $"\equiv"$ is equivalent to $"\iff"$. Another way to use the truth-table above is to see that the implication $(p \rightarrow q)$ is false if and only if the truth-value of $p$ is true and the value of $q$ is false. Symbolically, we can express that fact by asserting that for the implication to be true, it cannot be the case that $(p \land \sim q)$; in other words, $\sim (p \land \sim q)$. This conveys exactly the same information as the material implication $(p \rightarrow q)$. Note that $\sim (p \land \sim q) \equiv (\sim p \lor q)$, by De Morgan. As for understanding that when $(p \rightarrow q)$, then if $p$ is true, we must have that $q$ is true: perhaps the following analogy will help. In many respects, the proper inclusion (proper "is a subset of") relation corresponds to material implication, where $\subset$ corresponds to the $\rightarrow$ relation. For example, suppose $A \subset B$. Then if it is true that $x \in A$, then it must be true that $x \in B$, since $B$ contains $A$. However, if $x \notin A$, that does not mean that $x \notin B$, since if $A \subset B$, then $B$ contains elements that $A$ does not contain.
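The same check can be done mechanically by enumerating all truth assignments — a small sketch of my own, encoding $p\rightarrow q$ as $\sim(p\land\sim q)$ and comparing it with $\sim p\lor q$:

```python
from itertools import product

rows = []
for p, q in product([True, False], repeat=2):
    implies = not (p and not q)   # p -> q, written as ~(p & ~q)
    disjunct = (not p) or q       # ~p | q
    rows.append((p, q, implies, disjunct))
    print(p, q, implies, disjunct)
# the last two columns agree on every row, so (p -> q) == (~p | q)
```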
{ "language": "en", "url": "https://math.stackexchange.com/questions/38713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 3 }
How to find a polynomial from a given root? I was asked to find a polynomial with integer coefficients from a given root/solution. Lets say for example that the root is: $\sqrt{5} + \sqrt{7}$. * *How do I go about finding a polynomial that has this number as a root? *Is there a specific way of finding a polynomial with integer coefficients? Any help would be appreciated. Thanks.
The simplest polynomial that has $r$ as a root is just $x-r$. But of course, that is not what you are dealing with here. This is really a question about finding the minimal polynomial of $\alpha=\sqrt{5}+\sqrt{7}$ over $\mathbb{Q}$; such polynomials exist for any algebraic number, by definition of algebraic. Galois Theory provides all the necessary tools to solve this problem. Basically: * *Consider the field $\mathbb{Q}(\sqrt{5},\sqrt{7})$; this field contains the number $\alpha=\sqrt{5}+\sqrt{7}$, so we can work there. Every element of this field can be written uniquely as $$a + b\sqrt{5} + c\sqrt{7} + d\sqrt{35}$$ for some $a,b,c,d\in\mathbb{Q}$. *The field $\mathbb{Q}(\sqrt{5},\sqrt{7})$ has four automorphisms (functions $f\colon\mathbb{Q}(\sqrt{5},\sqrt{7})\to\mathbb{Q}(\sqrt{5},\sqrt{7})$ that are additive, $f(a+b)=f(a)+f(b)$, multiplicative, $f(ab)=f(a)f(b)$, and invertible), and these automorphisms fix elements of $\mathbb{Q}$ (that is, $f(q)=q$ for all $q\in\mathbb{Q}$). These automorphisms are the following: $$\begin{align*} \sigma_1\colon a+b\sqrt{5}+c\sqrt{7}+d\sqrt{35} &\longmapsto a+b\sqrt{5}+c\sqrt{7}+d\sqrt{35} &\qquad&\text{(the identity)};\\ \sigma_2\colon a+b\sqrt{5}+c\sqrt{7}+d\sqrt{35} &\longmapsto a-b\sqrt{5}+c\sqrt{7}-d\sqrt{35}\\ \sigma_3\colon a+b\sqrt{5}+c\sqrt{7}+d\sqrt{35} &\longmapsto a+b\sqrt{5}-c\sqrt{7}-d\sqrt{35}\\ \sigma_4\colon a+b\sqrt{5}+c\sqrt{7}+d\sqrt{35} &\longmapsto a-b\sqrt{5}-c\sqrt{7}+d\sqrt{35} &&\text{(equal to }\sigma_3\circ\sigma_2\text{)} \end{align*}$$ The maps are induced by the conjugation maps $\sqrt{5}\mapsto-\sqrt{5}$ and $\sqrt{7}\mapsto-\sqrt{7}$. An important feature is that an element of $\mathbb{Q}(\sqrt{5},\sqrt{7})$ lies in $\mathbb{Q}$ if and only if it is fixed by all four maps. *Suppose $p(x)$ is a polynomial with coefficients in $\mathbb{Q}$, and that $a\in\mathbb{Q}(\sqrt{5},\sqrt{7})$. 
If $$p(x) = a_nx^n + \cdots + a_0$$ then $$\sigma_i(p(a)) = \sigma_i(a_na^n+\cdots+a_0) = a_n\sigma_i(a)^n+\cdots+a_0 = p(\sigma_i(a)).$$ In particular, if $p(a)\in\mathbb{Q}$, then $p(a)=\sigma_i(p(a)) = p(\sigma_i(a))$ for $i=1,2,3,4$. *Now suppose you find a polynomial $p(x)$ with coefficients in $\mathbb{Q}$ that has $\sqrt{5}+\sqrt{7}$ as a root. Then $p(\sqrt{5}+\sqrt{7})=0$, so by 3 above, it must also be true that $p(\sigma_i(\sqrt{5}+\sqrt{7}))=0$ for $i=1,2,3,4$. That means that $p(x)$ must also have $\sqrt{5}-\sqrt{7}$, $-\sqrt{5}+\sqrt{7}$, and $-\sqrt{5}-\sqrt{7}$ as roots. By unique factorization, we conclude that $p(x)$ must be divisible by $$\left(x - (\sqrt{5}+\sqrt{7})\right)\left(x - (-\sqrt{5}+\sqrt{7})\right)\left(x - (\sqrt{5}-\sqrt{7})\right)\left(x - (-\sqrt{5}-\sqrt{7})\right).$$ That is, any polynomial with coefficients in $\mathbb{Q}$ that has $\sqrt{5}+\sqrt{7}$ as a root must be a multiple of this product. But if you multiply out this product, you will discover that this polynomial already has coefficients in $\mathbb{Q}$. Once you have a polynomial with coefficients in $\mathbb{Q}$ that has $\sqrt{5}+\sqrt{7}$ as a root, you can get a polynomial with coefficients in $\mathbb{Z}$ by simply clearing denominators.
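Multiplying out the product numerically (a sketch of the last step; the exact expansion is $x^4-24x^2+4$, which one can also verify by hand from $\alpha^2=12+2\sqrt{35}$):

```python
import math

s5, s7 = math.sqrt(5), math.sqrt(7)
roots = [s5 + s7, -s5 + s7, s5 - s7, -s5 - s7]

# Expand (x - r1)(x - r2)(x - r3)(x - r4); coefficients listed from the
# leading term down.
coeffs = [1.0]
for r in roots:
    shifted = coeffs + [0.0]          # multiply the polynomial by x
    for i, c in enumerate(coeffs):
        shifted[i + 1] -= r * c       # subtract r * (old polynomial)
    coeffs = shifted

print([round(c) for c in coeffs])     # [1, 0, -24, 0, 4]
alpha = s5 + s7
print(alpha**4 - 24 * alpha**2 + 4)   # zero up to floating-point error
```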
{ "language": "en", "url": "https://math.stackexchange.com/questions/38763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
The set of differences for a set of positive Lebesgue measure Quite a while ago, I heard about a statement in measure theory, that goes as follows: Let $A \subset \mathbb R^n$ be a Lebesgue-measurable set of positive measure. Then we follow that $A-A = \{ x-y \mid x,y\in A\}$ is a neighborhood of zero, i.e. contains an open ball around zero. I now got reminded of that statement as I have the homework problem (Kolmogorov, Introductory Real Analysis, p. 268, Problem 5): Prove that every set of positive measure in the interval $[0,1]$ contains a pair of points whose distance apart is a rational number. The above statement would obviously prove the homework problem and I would like to prove the more general statement. I think that assuming the opposite and taking a sequence $\{x_n\}$ converging to zero such that none of the elements are contained in $A$, we might be able to define an ascending/descending chain $A_n$ such that the union/intersection is $A$ but the limit of its measures zero. I am in lack of ideas for the definition on those $A_n$. I am asking specifically not for an answer but a hint on the problem. Especially if my idea turns out to be fruitful for somebody, a notice would be great. Or if another well-known theorem is needed, I surely would want to know. Thank you for your help.
If you need to find information on the subject, the first proof of the fact that the set of differences contains a neighbourhood of the origin is (for Lebesgue measure on the line) due to Steinhaus. There is a substantial collection of generalizations of the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 4, "answer_id": 0 }
Does the cartesian product have a neutral element? Let $A$ be any set. Is there a set $E$ such that $A \times E = E \times A = A$? I thought of the empty set, but Wikipedia says otherwise. This operation changes dimension, so an isomorphism might be needed for such element to exist.
For "equality", no ... if $a \in A$ then it does not have the form $(a,e)$ of an element of $A \times E$, by the Axiom of Foundation. For "bijection", yes. I leave that to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/38940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Bijection with constraints between sets of polynomials How can I show the existence (or even better: construct explicitly) of a bijection (if one exists) between the set of all polynomials that have integer coefficients and an integer root and the set of all polynomials that have positive integer coefficients and positive integer roots? If I drop the additional "root" constraint, the problem is very simple to solve. But with the root constraint I have no idea how to solve it, since my knowledge of algebra is rather limited. For example if I use the function $h:\mathbb{Z} \rightarrow \mathbb{N}$, $h(t):=\begin{cases} 2t, & t\geqslant 0\\ -2t-1, & t<0\end{cases}$ to map the integer coefficients bijectively to positive integer coefficients, I can find counterexamples that have no integer roots whatsoever.
A polynomial with positive coefficients can't have a positive root - just think about what happens when you plug a positive number into such a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does $c^n(n!+c^n)\lt (n+c^2)^n$ hold for all positive integers $n$ and $c\gt 0$? I am not sure whether the following inequality is true. Checking some small $n$ suggests it is true. Let $n$ be a positive integer and $c\gt0$; then $$c^n(n!+c^n)\lt(n+c^2)^n.$$
For $n=1$, you have $c+c^2 < 1+c^2$, which is equivalent to $c < 1$; this fails for every $c \geq 1$, so the inequality does not hold for all $c>0$. For $n=2$, you have $2c^2 + c^4 < c^4 +4c^2 + 4$, which is equivalent to $2c^2 > -4$ (always true). Aryabhatta's example shows that we cannot conjecture that the inequality will always be true for $n > 2$.
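These case checks are easy to automate; a small sketch of my own, scanning $c$ over a grid:

```python
from math import factorial

def holds(n, c):
    """Does c^n * (n! + c^n) < (n + c^2)^n hold?"""
    return c**n * (factorial(n) + c**n) < (n + c**2) ** n

# n = 1: true exactly when c < 1
print(holds(1, 0.5), holds(1, 2.0))                   # True False
# n = 2: true for every c tested
print(all(holds(2, 0.1 * k) for k in range(1, 100)))  # True
```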
{ "language": "en", "url": "https://math.stackexchange.com/questions/39112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Dependence of Axioms of Equivalence Relation? This question is problem 11(a) in chapter 1 in 'Topics in Algebra' by I.N. Herstein. These are the properties of an equivalence relation given in this book. * *Prop 1 $a \sim a$ *Prop 2 $a \sim b$ implies $b \sim a$. *Prop 3 $(a \sim b$ and $b \sim c)$ imply $a \sim c$. Statement Property 2 of an equivalence relation states that if $a \sim b$ then $b \sim a$. By property 3, we have transitivity, i.e. if $a \sim b$ and $b \sim c$ then $a \sim c$. What is wrong with the following proof that properties 2 and 3 imply property 1? Let $a \sim b$; then $b \sim a$, whence by property 3 (using $a = c$), $a \sim a$. I think I can prove this to be wrong. Without proving the equivalence relation first, one cannot use $a = c$. Right? After all, equality satisfies both the 'equivalence relation' properties and the 'axiom of substitution'. If this is right, then I have trouble with the next part of this problem. Part 2 Can you suggest an alternative to property 1 which will ensure that prop 2 and prop 3 do imply 1? Can one give such a formulation without using the idea of '=' or otherwise? EDIT : Italics are my comments. Rest is as it appeared in the book. Notion of Equality I have read in Terry Tao's 'Analysis 1' in Appendix A.7, published by Hindustan Book Agency, that there are four axioms of 'equality'. The first 3 are the same as for an equivalence relation, with $\sim$ replaced by $=$. The fourth is known as the axiom of substitution. Given any two objects $x$ and $y$ of some type, if $ x = y $, then $f(x) = f(y) $ for all functions or operations $f$.
For the first part: The "proof" assumes that for $a$ there is a $b$ such that $a\sim b$. This of course is not necessarily given. The empty relation, i.e. for no $a,b$ do we have $a\sim b$, is transitive and symmetric but not reflexive. (This example seems to many students not very explanatory, although it is the simplest example of this situation. See the example of Arturo Magidin if that helps.) For the second part: I think the book might ask for something like this. Prop 1': For any $a$ for which there is some $b$ such that $a\sim b$, we also have $a\sim a$. This might seem to be a weird property, but we could also take "suffices Prop 2 and Prop 3", which wouldn't make much more sense. Or, as Arturo Magidin suggested, Prop 1'': For any $a$, there is a $b$ such that $a\sim b$; together with Prop 2 and Prop 3 this implies Prop 1.
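A nonempty variant of the same phenomenon can be checked mechanically: relate $1$ and $2$ in every possible way and leave $3$ isolated. The relation is symmetric and transitive, and satisfies Prop 1', yet it is not reflexive. A sketch (my own example, not the one referred to above):

```python
S = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 1), (2, 2)}   # 3 is related to nothing

symmetric = all((b, a) in R for (a, b) in R)
transitive = all((a, c) in R
                 for (a, b) in R for (b2, c) in R if b == b2)
reflexive = all((a, a) in R for a in S)
# Prop 1': every element related to *something* is related to itself
prop1_prime = all((a, a) in R for (a, _) in R)

print(symmetric, transitive, reflexive, prop1_prime)  # True True False True
```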
{ "language": "en", "url": "https://math.stackexchange.com/questions/39233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 0 }
How to apply Stokes' Theorem for manifolds with boundary Original motivation: How can I apply Stokes' Theorem to the annulus $1 < r < 2$ in $\mathbb{R}^2$? Concerns: * *Since the annulus is a manifold without boundary, it would seem that Stokes' Theorem would always return an answer of $\int_M d\omega = \int_{\partial M} \omega = 0$ for compactly supported forms $\omega$. Is this correct? *What about the annulus $1 < r \leq 2$? This seems like a manifold-with-boundary to me, yet an application of Stokes' Theorem will return a different answer. And what about $1 \leq r \leq 2$? For instance, consider $\omega = -y\,dx + x\,dy$ on the annulus $1 < r \leq 2$, so that $d\omega = 2\,dx\,dy$. Then $$\int_M d\omega = 2\,\text{Area}(M) = 6\pi,$$ whereas $$\int_{\partial M} \omega = \int_0^{2\pi} 4\,dt = 8\pi,$$ where $\partial M$ is the circle $r = 2$. What explains this discrepancy? A friend of mine has suggested that this can be explained by the fact that $\omega = -y\,dx + x\,dy$ is not compactly supported on $1 < r \leq 2$, and hence Stokes' Theorem can't really be applied. Is this correct? For reference, I am using the following version of the theorem: Stokes' Theorem: Let $M$ be a smooth, oriented $n$-manifold with boundary, and let $\omega$ be a compactly supported smooth $(n-1)$-form on $M$. Then $$\int_M d\omega = \int_{\partial M} \omega.$$
The annulus with $1 < r < 2$ does not have a boundary, and the form you picked is not compactly supported there. The form $\omega$ vanishes only at the origin, so its support within the annulus is the whole annulus, which is not compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Possible to calculate Yaw,Pitch,Roll from Quaternion without using tangent? I'm currently working on a project that involves using the Yaw, Pitch and Roll from a given Quaternion to calculate an objects orientation and acceleration. I've searched a lot about how to obtain the YPR from a Quaternion, but they all seem to involve using tangent - and this seems to be causing problems whenever the sensor is rotated anything near 90 degrees. So my question is: is it possible to obtain the YPR from a Quaternion without using tangent?
Your problem isn't that the tangent function is giving you bad results, it's that you have a singularity (see: gimbal lock) at a pitch of $\pm 90^\circ$. At those orientations, you can have roll or yaw, but not both.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A good book for learning mathematical trickery I've seen several question here on what book to read to learn writing and reading proofs. This question is not about that. I've been doing that for a while, and I'm quite comfortable with proofs. I am looking for resources (books, ideally) that can teach not the concept of proofs, but rather some of the specific mathematical tricks that are commonly employed in proofs: those that mostly include clever number manipulation, ad-hoc integration techniques, numerical methods and other thing you are likely never to learn in theory-oriented books. I come mainly from applied math and engineering, and when I look at proofs from Stochastic Processes, Digital Signal Processing, Non-Linear Systems and other applied subjects, I feel like I need to learn a new method to understand every proof I read. Is there any good literature on such mathematical tricks?
I don't know if you're interested in inequalities, but a very nice book which teaches lots of tricks is Steele's The Cauchy–Schwarz Master Class.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
Boolean algebras without atoms Why is the theory of Boolean algebras without atoms $\omega$-categoric?
It isn't true generally that all atomless Boolean algebras of the same cardinality are isomorphic, so we don't expect $\kappa$-categoricity, and the diversity of such Boolean algebras gives rise to all the various distinct forcing notions. But in the countable case, it turns out that there is just one atomless Boolean algebra up to isomorphism. There are evidently a variety of proofs.

* Here is a 1972 article by Abian that gives a brief topological proof, as well as a detailed proof for the case of atomless Boolean rings.
* This book by Givant seems to have an explanation of the proof using the back-and-forth technique, which I believe is probably how you would prefer to understand it. (Here is a link to [the Google Books version](https://books.google.com/books?id=ORILyf8sF2sC&pg=PA134&lpg=PA134&dq=countable%20atomless%20Boolean%20algebra&source=bl&ots=YL2jQrXiS9&sig=PTWu4vLDZO2TqSgHj_ksKjZ6oFU&hl=en&ei=fYHRTYCqKILe0QGK86yODg&sa=X&oi=book_result&ct=result#v=onepage&q=countable%20atomless%20Boolean%20algebra&f=false), where you can see a complete and fully detailed proof with exercises afterward.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/39478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Parameterizing Geodesics on the Sphere in Polar Coordinates I seem to have this seemingly trivial problem, but can't figure it out. Situation: I have my unit sphere, $S^2$, defined as a Riemannian manifold. Parameterizing geodesics (great circles) on this sphere is absolutely no problem, IF I embed it in $\mathbb{R}^3$. BUT, I don't want to do this. What I really want to do is use Riemannian polar coordinates. Now generally, this isn't a problem, as I could just take any point on the geodesic and call it $(0,0)$ (with $(r,\varphi)$ being my Riemannian polar coordinates). And all the lines going through the origin would be my great circles. However, the origin of the exponential map I'm using does not lie on the geodesic of interest. I do however know 2 points (say $P_1, P_2$) and their polar coordinates. So basically, I have $\text{exp}_p^{-1} : S^2\to \mathbb{R}^2$ and my two points in $\mathbb{R}^2$ defined by $(r_1,\varphi_1)$ and $(r_2,\varphi_2)$. I'm looking for some parameterisation to connect these two points via a great circle. And I just can't think of anything sensible. Any help would be highly appreciated. Edit: To clarify, I have a working method using an $\mathbb{R}^3$ embedding, but it's very bulky and "unpretty". Further, I'm using this geodesic to "bound" one of my integrals, and shifting the origin is therefore not an option.
Why don't you just use spherical trigonometry? You have a "base triangle" with vertices $O$, $A_1$ and $A_2$, sides $O A_i$ of length $r_i$ and an angle $\alpha:=|\phi_2-\phi_1|$ between them. Now consider an arbitrary point $P$ on the third side $A_1 A_2$. The "line" $OP\ $ has length $r$ and encloses angles $\alpha_i$ with the sides $OA_i$. The formulas of spherical trigonometry will give you an equation connecting $r$ and the $\alpha_i$. Let $s_i$ be the lengths of the sides $A_i P$. Then $$\cos s_i=\cos r_i \cos r +\sin r_i \sin r \cos\alpha_i\qquad(i=1,2)$$ and $$\cos(s_1+s_2)=\cos r_1\cos r_2+\sin r_1\sin r_2 \cos(\alpha_1+\alpha_2)\ .$$ Eliminate $s_1$ and $s_2$ from these three equations to get the desired result. Hint: Square the identity $\sin s_1 \sin s_2=\cos s_1 \cos s_2-\cos(s_1+s_2)$ and replace $\sin^2 s_i$ by $1-\cos^2 s_i$.
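The first displayed formula is just the spherical law of cosines for sides; here is a quick numerical check on the unit sphere (a sketch, placing $O$ at the north pole, with $r_1$, $r$ the colatitudes of the two points and $\alpha$ the azimuth between them):

```python
from math import sin, cos, acos, isclose

def sphere_point(colat, lon):
    """Unit-sphere point at the given colatitude (angle down from the north pole)."""
    return (sin(colat) * cos(lon), sin(colat) * sin(lon), cos(colat))

def angle(u, v):
    """Great-circle distance between two unit vectors."""
    return acos(sum(a * b for a, b in zip(u, v)))

# O at the north pole; A at colatitude r1; P at colatitude r,
# with azimuth alpha between the meridians O-A and O-P.
r1, r, alpha = 0.7, 1.2, 0.9
s = angle(sphere_point(r1, 0.0), sphere_point(r, alpha))
assert isclose(cos(s), cos(r1) * cos(r) + sin(r1) * sin(r) * cos(alpha))
```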
{ "language": "en", "url": "https://math.stackexchange.com/questions/39585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to calculate $\int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\;\mathrm d\theta$ How to calculate: $$ \int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\;\mathrm d\theta $$
\begin{align} \int_0^{2\pi} \sqrt{1 - \sin^2 \theta}\, d\theta &= \int_0^{2\pi} \sqrt{\cos^2 \theta}\, d\theta \\ &= \int_0^{2\pi} | \cos \theta |\, d\theta \\ &= 4 \int_0^{\pi/2} \cos \theta\, d\theta \\ &= 4 \end{align}
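A quick numerical check of the value (a sketch using a simple midpoint Riemann sum):

```python
from math import sin, sqrt, pi

n = 200_000
h = 2 * pi / n
# Midpoint rule for the integral of sqrt(1 - sin^2 theta) over [0, 2*pi].
total = sum(sqrt(1 - sin((i + 0.5) * h) ** 2) * h for i in range(n))
print(total)  # ~ 4.0
```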
{ "language": "en", "url": "https://math.stackexchange.com/questions/39643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Landmarks of subjects of mathematics In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it). For example in natural number theory it is good to know quadratic reciprocity and in linear algebra it's good to know the Cayley-Hamilton theorem (to give two examples). So, what is one (per post) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)
Differential Geometry: the Gauss-Bonnet theorem. I took a one-semester intro course on differential geometry class and we got to this towards the end of the semester, so I feel that a couple of months is an appropriate time frame for this theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 16, "answer_id": 0 }
Induced map on Eilenberg-MacLane space Let $X$ be an $(n-1)$-connected space. Why does a map $X\rightarrow K(\pi_n(X),n)$ that induces an isomorphism on $\pi_n$ exist, and what is this map?
If $X$ is a CW-complex, then you can build a $K(\pi_n(X),n)$ by attaching cells (of dimension $n+2$ and above) to $X$ to kill off the higher homotopy groups. The map you are thinking of is then an inclusion of $X$ into $K(\pi_n(X),n)$ as a sub-complex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
To sum $1+2+3+\cdots$ to $-\frac1{12}$ $$\sum_{n=1}^\infty\frac1{n^s}$$ only converges to $\zeta(s)$ if $\text{Re}(s)>1$. Why should analytically continuing to $\zeta(-1)$ give the right answer?
The notation "$1+2+3+\cdots$" is as meaningless as "$1/0$". If you treat such notation as though it defined a real number and conformed in its syntax to the rules of formation for genuine real numbers, you can easily "prove" it to equal any number you like, including $-1/12$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "449", "answer_count": 18, "answer_id": 5 }
Global conformally flat coordinates in 2d spacetimes Let $(M,g)$ be a 2 dimensional pseudo-Riemannian manifold that is topologically a disc. Is it possible to construct a global coordinate system in which the metric is conformally flat? I.e. coordinates $(t,x)$ which cover the whole manifold such that the line element takes the form $ds^2=\Omega^2(t,x)(-dt^2 + dx^2)$ for some conformal factor $\Omega$.
This is an old question but it deserves a correct answer. As it turns out, the open 2-dimensional disk admits a continuum of conformally inequivalent pseudo-Riemannian metrics; see Uncountably many $C^0$ conformally distinct Lorentz surfaces and a finiteness theorem, by Robert W. Smyth, Proc. Amer. Math. Soc. 124 (1996), 1559-1566. Edit. Furthermore, there are Lorentz metrics on the open 2-disk which do not embed conformally in the Lorentzian plane; see p. 117 of T. Weinstein, An Introduction to Lorentz Surfaces, de Gruyter, 1996. Last thing: in his answer Luboš Motl confused the Riemannian and pseudo-Riemannian cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Complete induction of $10^n \equiv (-1)^n \pmod{11}$ To prove $10^n \equiv (-1)^n\pmod{11}$, $n\geq 0$, I started an induction. It's $$11|((-1)^n - 10^n) \Longrightarrow (-1)^n -10^n = k*11,\quad k \in \mathbb{Z}. $$ For $n = 0$: $$ (-1)^0 - (10)^0 = 0*11 $$ $n\Rightarrow n+1$ $$\begin{align*} (-1) ^{n+1} - (10) ^{n+1} &= k*11\\ (-1)*(-1)^n - 10*(10)^n &= k*11 \end{align*}$$ But I don't get the next step.
Why don't you try this: $10 \equiv -1 \ (\text{mod} \ 11) \Longrightarrow (10)^{n} \equiv (-1)^{n} \ (\text{mod} \ 11)$. And if you want to use induction: the case $n=1$ clearly holds. Now assume it is true for $n=k$, that is, $(10)^{k} \equiv (-1)^{k} \ (\text{mod} \ 11)$, and get it for $n=k+1$ by multiplying both sides by $10 \equiv -1 \ (\text{mod} \ 11)$.
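A brute-force check of the congruence for small $n$ (a sketch):

```python
# Verify 10^n ≡ (-1)^n (mod 11) for n = 0..99.
for n in range(100):
    assert pow(10, n, 11) == (-1) ** n % 11
print("10^n ≡ (-1)^n (mod 11) holds for n = 0..99")
```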
{ "language": "en", "url": "https://math.stackexchange.com/questions/39882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 12, "answer_id": 2 }
Riemann integral question Suppose that $ f : [a,b] \rightarrow \mathbb{R}$ is Riemann integrable on $[a,b]$ and $g:[a,b] \rightarrow \mathbb{R}$ differs from $f$ at only one point $x_0 \in [a,b]$, that is, $g(x)=f(x)$ for $x \neq x_0$ and $g(x_0) \neq f(x_0)$. Show that $g$ is Riemann integrable on $[a,b]$. I'm having a little trouble, my thing was that maybe find a partition and look at how it behaves in the partition containing $x_0$ Appreciate any help
Here is an unnecessarily slick answer: There is a famous Lebesgue criterion for Riemann integrability of a function $f: [a,b] \rightarrow \mathbb{R}$ (my colleague Roy Smith informs me that it can actually be found already in the work of Riemann!): it is necessary and sufficient that $f$ be bounded and that its set of discontinuities have (Lebesgue!) measure zero. Given this: it is an easy exercise to show that modifying a function by changing its values at any finite set $S$ does not change its boundedness/unboundedness, and similarly could only create or destroy discontinuities at $x$ for $x \in S$. So the Lebesgue criterion applies here. (Beware: changing a function at a countable set of values only can change the continuity at every point: I leave it to the reader to supply the canonical example of this.) Of course one can -- and should -- also show this directly from the definition of Riemann integrability.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to prove the inequality $\Theta(x,y)\le \Theta(x,z)+\Theta(z,y)$? Let $x, y$ be two complex vectors, $$\cos\Theta(x,y):=\operatorname{Re} \frac{y^*x}{\|x\|\|y\|} .$$ Then I want to prove that $$\Theta(x,y)\le \Theta(x,z)+\Theta(z,y) .$$
You should think about distances on the unit sphere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/39972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A convexity inequality The function $\mathbb{R}^n\to \mathbb{R}$, $x\mapsto |x|^p$ (where $p>1$) is convex and thus the inequality $$|y|^p-|x|^p\ge p(y-x)\cdot x |x|^{p-2}$$ is valid. In some lecture notes of Peter Lindqvist, it is remarked that this inequality can be strengthened to $$|y|^p-|x|^p\ge p(y-x)\cdot x |x|^{p-2} + C(p) |y-x|^p$$ (of course $C(p)>0$) at least for $p>2$. Does anyone know a proof of this inequality?
(I'm expanding my comment.) This is a two-dimensional problem. One may assume $x=(1,0)$, $y=(1+\alpha, \beta)$ with $\sqrt{\alpha^2+\beta^2}=:r$ (so that $r\geq|\alpha|$). Then $$|y|^p-|x|^p=((1+\alpha)^2 +\beta^2)^{p/2}-1\ ,\qquad p(y-x)\cdot x |x|^{p-2}=p\alpha\ .$$ It follows that we have to prove an inequality of the form $$(1+2\alpha+r^2)^{p/2}\geq 1 + p\alpha + Cr^p \qquad\qquad (1)$$ for a suitable $C>0$, and we may assume $p\geq2$. On the region where $1+2\alpha\geq0$ one can take $C=1$. Indeed, putting $r:=0$ in (1) the statement reads $(1+2\alpha)^{p/2}\geq 1 + p\alpha$, and this is true for $p\geq2$ by Bernoulli's inequality. Moreover, the derivative of the left side of (1) with respect to $r$ is $${p\over 2}(1+2\alpha +r^2)^{p/2 -1}\ 2r\geq {p\over 2}r^{p-2}\ 2r=pr^{p-1}$$ (here $1+2\alpha\geq0$ is used to get $1+2\alpha+r^2\geq r^2$), while the derivative of the right side of (1) with respect to $r$ is $Cpr^{p-1}$. So with $C=1$ the left side of (1) grows at least as fast with $r$ as the right side, and (1) follows on this region. One cannot take $C=1$ in general, however: for $p=4$, $\alpha=-3$, $\beta=0$ (that is, $x=(1,0)$, $y=(-2,0)$) inequality (1) would read $16\geq 1-12+81$, which is false. The inequality does hold for all $x,y$ and $p\geq2$ with the constant $C(p)=2^{1-p}$; this is the standard estimate proved, for instance, in Lindqvist's notes on the $p$-Laplace equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/40044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Conformal structure of regions of the complex plane and the ring of holomorphic functions How is the conformal structure of regions of the complex plane determined by the integral domain of holomorphic functions defined on those regions? Thanks
The conformal structure on the plane domain is completely determined by the ring of holomorphic functions. More precisely, if two such rings are isomorphic, and the isomorphism is the identity on the constants, the regions are conformally equivalent. If the rings are only isomorphic as abstract rings, then the regions are either conformally equivalent or anti-conformally equivalent. This is due to Bers (BAMS 54, 1948) for plane domains and to Rudin (BAMS 61, 1955) for open Riemann surfaces. The idea is to consider the space of maximal ideals of the ring. They are in 1-1 correspondence with the points of the region. Remark. BAMS is free online.
{ "language": "en", "url": "https://math.stackexchange.com/questions/40189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
rank function on Spec (help with definition) One definition of a line bundle over a ring $A$ is: a finitely generated projective $A$-module such that the rank function $\operatorname{Spec} A \to \mathbb{N}$ is constant with value 1. We call $A$ itself the trivial line bundle. So here I think that $\operatorname{Spec} A$ is equipped with the Zariski topology and $\mathbb{N}$ with the discrete one. Does this mean that in general the rank function is not continuous? Does anyone know a basic example of a non-constant rank function, to illustrate the peculiarities implied by the definition above? Many thanks
I am only going to talk about the case that $A$ is Noetherian. Here is part of Theorem A3.2 of Eisenbud's "commutative algebra with a view toward Algebraic geometry": a finitely generated module $M$ over a noetherian ring $A$ is projective if and only if there exists a finite set of elements $x_1,\ldots, x_r$ in $A$ that generates the unit ideal of $R$ such that $M[x_i^{-1}]$ is free over $A[x_i^{-1}]$ for each $i$. It follows that the rank is a locally constant function. So if $\mathrm{Spec}(A)$ is connected, then the rank has to be constant. I'll leave it to you to construct an example of a projective module with nonconstant rank.
{ "language": "en", "url": "https://math.stackexchange.com/questions/40269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Primitive polynomials of finite fields There are two primitive polynomials which I can use to construct $GF(2^3)=GF(8)$: $p_1(x) = x^3+x+1$ $p_2(x) = x^3+x^2+1$ $GF(8)$ created with $p_1(x)$: 0 1 $\alpha$ $\alpha^2$ $\alpha^3 = \alpha + 1$ $\alpha^4 = \alpha^3 \cdot \alpha=(\alpha+1) \cdot \alpha=\alpha^2+\alpha$ $\alpha^5 = \alpha^4 \cdot \alpha = (\alpha^2+\alpha) \cdot \alpha=\alpha^3 + \alpha^2 = \alpha^2 + \alpha + 1$ $\alpha^6 = \alpha^5 \cdot \alpha=(\alpha^2+\alpha+1) \cdot \alpha=\alpha^3+\alpha^2+\alpha=\alpha+1+\alpha^2+\alpha=\alpha^2+1$ $GF(8)$ created with $p_2(x)$: 0 1 $\alpha$ $\alpha^2$ $\alpha^3=\alpha^2+1$ $\alpha^4=\alpha \cdot \alpha^3=\alpha \cdot (\alpha^2+1)=\alpha^3+\alpha=\alpha^2+\alpha+1$ $\alpha^5=\alpha \cdot \alpha^4=\alpha \cdot(\alpha^2+\alpha+1)=\alpha^3+\alpha^2+\alpha=\alpha^2+1+\alpha^2+\alpha=\alpha+1$ $\alpha^6=\alpha \cdot \alpha^5=\alpha \cdot (\alpha+1)=\alpha^2+\alpha$ So now let's say I want to add $\alpha^2 + \alpha^3$ in both fields. In field 1 I get $\alpha^2 + \alpha + 1$ and in field 2 I get $1$. Multiplication is the same in both fields ($\alpha^i \cdot \alpha^j = \alpha^{(i+j)\bmod(q-1)}$). So is it the case that when some $GF(q)$ is constructed with different primitive polynomials, the addition tables will vary while the multiplication tables stay the same? Or maybe one of the presented polynomials ($p_1(x)$, $p_2(x)$) is not valid for constructing the field (although both are primitive)?
To more directly answer the questions asked:

* Yes, in general using different primitive polynomials will change the operations. If one uses expressions such as $\alpha^i$ to refer to field elements (as is done in GAP and Magma, for instance), then the multiplication table stays the same and the addition table changes. However, if one uses expressions like $\alpha^2+\alpha+1$ (as is done in Macaulay2 and Maple, for instance), then the addition table stays the same and the multiplication table changes. Zech logarithms are used to efficiently convert between the two representations.
* Both of your polynomials $p_1$ and $p_2$ are good. This is proven in Charles Staats's answer.
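A small script makes this concrete (a sketch, with field elements encoded as 3-bit integers $b_2\alpha^2+b_1\alpha+b_0$): the power tables differ between the two polynomials, addition is XOR of bit patterns, and the sum $\alpha^2+\alpha^3$ indeed comes out differently in the two fields, exactly as computed in the question:

```python
def power_table(reduction):
    """Powers alpha^0..alpha^6 of GF(8), where `reduction` encodes the
    rule alpha^3 = <reduction>, e.g. 0b011 for x^3 + x + 1."""
    table, a = [1], 1
    for _ in range(6):
        a <<= 1                      # multiply by alpha
        if a & 0b1000:               # a degree-3 term appeared: reduce it
            a = (a ^ 0b1000) ^ reduction
        table.append(a)
    return table

t1 = power_table(0b011)  # p1 = x^3 + x   + 1: alpha^3 = alpha   + 1
t2 = power_table(0b101)  # p2 = x^3 + x^2 + 1: alpha^3 = alpha^2 + 1

print([format(v, '03b') for v in t1])  # ['001','010','100','011','110','111','101']
print([format(v, '03b') for v in t2])  # ['001','010','100','101','111','011','110']

# Addition is bitwise XOR; alpha^2 + alpha^3 in the two fields:
print(format(t1[2] ^ t1[3], '03b'))   # 111 = alpha^2 + alpha + 1
print(format(t2[2] ^ t2[3], '03b'))   # 001 = 1
```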
{ "language": "en", "url": "https://math.stackexchange.com/questions/40326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Finding a vector in Euclidean space that minimizes a loss function subject to some constraints I'm trying to solve the following minimization problem, and I'm sure there must be a standard methodology that I could use, but so far I couldn't find any good references. Please let me know if you have anything in mind that could help or any references that you think would be useful for tackling this problem. Suppose you are given $K$ points, $p_i \in \mathbb{R}^n$, for $i \in \{1,\ldots,K\}$. Assume also that we are given $K$ constants $\delta_i$, for $i \in \{1,\ldots,K\}$. We want to find the vector $x$ that minimizes: $\min_{x \in \mathbb{R}^n} \sum_{i=1,\ldots,K} || x - p_i ||^2$ subject to the following $K$ constraints: $\frac{ || x - p_i ||^2 } { \sum_{j=1,\ldots,K} ||x - p_j||^2} = \delta_i$ for all $i \in \{1,\ldots,K\}$. Any help is extremely welcome! Bruno edit: also, we know that $\sum_{i=1,\ldots,K} \delta_i = 1$.
Yes, this works. If $n \leq K-2,$ you have no guarantee of any legal solution, even when the $\delta_i$ sum to 1, as required. It may be that the sample points, your $v_j$ and $Y,$ were in a Euclidean space of much lower dimension, however, that does not guarantee you can repeat that piece of luck if the new $n$ in $\mathbf R^n$ is too small. If $n = K -1,$ there should be a single feasible point, "near" the simplex with the $K$ points as vertices. No need (or ability) to minimize anything. Actually, unless the $\delta$'s are all equal, it appears there is a second feasible point far away. If all angles in the simplex are acute, there is a feasible point in its interior. So, my advice is, figure out how to find a feasible point when $n=K-1.$ If circumstance forces $n \geq K,$ rotate so the hyperplane containing all the $p_i$ becomes the hyperplane $x_1, x_2, \ldots, x_{K-1}, 0,0,\ldots,0,$ solve the problem there, then rotate back. Meanwhile, I see nothing wrong with a numerical method for finding the single feasible point near the simplex when $n=K-1.$ Easier than finding the intersection of a large number of spheres and planes. Note that, when $n=K,$ the full set of all feasible points is either a straight line (if all $\delta_i$ are equal) or, in fact, an actual circle. Go figure. In either case, meeting the hyperplane that contains the $p_i$'s orthogonally. For that matter, your easiest program is just to solve the problem in the original $v_i, Y$ location, that is, a numerical method that finds the point $Z$ near the $v_i$ simplex with the correct $\delta$'s. Then you can just map $Z$ along with the $v_i.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/40401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Subset sum problem is NP-complete? If I know correctly, subset sum problem is NP-complete. Here you have an array of n integers and you are given a target sum t, you have to return the numbers from the array which can sum up to the target (if possible). But can't this problem be solved in polynomial time by dynamic programming method where we construct a table n X t and take cases like say last number is surely included in output and then the target becomes t- a[n]. In other case, the last number is not included, then the target remains same t but array becomes of size n-1. Hence this way we keep reducing size of problem. If this approach is correct, isn't the complexity of this n * t, which is polynomial? and so if this belongs to P and also NP-complete (from what I hear) then P=NP. Surely, I am missing something here. Where is the loophole in this reasoning? Thanks,
This is one of those fine points sometimes neglected when learning about the subject. The efficiency of an algorithm is always measured with respect to the size of the representation of the input - how many bits you need to encode it. In the case of numbers this distinction is crucial, since the number $n$ is usually represented by $\lg n$ (log base 2) bits. Hence, a solution that is $O(n)$ is exponential in the input size, and therefore extremely inefficient. The classic example for this distinction is primality checking; even the most naive algorithm is $O(n)$, but we can't think of something like this as truly efficient even if we adopt a real-life approach - we can (and do) work with numbers with hundreds of digits on a daily basis, and usual arithmetic with those numbers is quite fast (being polynomial in the number of digits), but naive primality testing methods will never finish in real life even for numbers with a hundred digits or so. The same danger lurks in any problem involving numbers, in particular subset sum.
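A sketch of the standard $O(n \cdot t)$ dynamic program the question alludes to, to make the pseudo-polynomial point concrete: the running time is proportional to the numeric value of $t$, which is exponential in the number of bits needed to write $t$ down:

```python
def subset_sum(nums, target):
    """Return a subset of nums summing to target, or None.
    Runs in O(len(nums) * target) time: polynomial in the *value* of
    target, hence exponential in its bit length (pseudo-polynomial)."""
    parent = {0: None}                    # reachable sum -> (previous sum, item used)
    for a in nums:
        for s in list(parent):            # snapshot: each item is used at most once
            if s + a <= target and s + a not in parent:
                parent[s + a] = (s, a)
    if target not in parent:
        return None
    subset, s = [], target
    while parent[s] is not None:          # walk back through the recorded choices
        s, a = parent[s]
        subset.append(a)
    return subset

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # a subset summing to 9, e.g. [5, 4]
print(subset_sum([1, 2], 5))                 # None
```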
{ "language": "en", "url": "https://math.stackexchange.com/questions/40454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Regarding limits and $1^\infty$ Possible Duplicate: Why is $1^{\infty}$ considered to be an indeterminate form I have some questions about limits and the undefinability of $1^\infty$. For example, is $\lim_{x\to\infty}1^x$ indefinite? Why is it not $1$? Or do mathematicians, when saying that $1^\infty$ is indefinite, actually refer to cases such as $lim_{x\to\infty} \left(1 + \frac{a}{x}\right)^x$ where even though at a first glace the result is $1$, this is actually a special case and it is equal to $e^a$?
When people say that $1^{\infty}$ is an indeterminate form, what they mean is that if $f(x)$ is a function such that $\lim_{x \to r} f(x) = 1$ and $g(x)$ is a function such that $\lim_{x \to r} g(x) = \infty$, the value of $\lim_{x \to r} f(x)^{g(x)}$ is not uniquely determined in general. It is determined in the special case that $f(x) = 1$, in which case the limit is obviously just $1$.
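A quick numerical illustration of both points (a sketch): the constant function $f(x)=1$ gives limit $1$, while $f(x)=1+a/x$, which also tends to $1$, gives $e^a$:

```python
from math import exp

x = 10**6
a = 3.0
print(1.0 ** x)            # exactly 1: the base is literally 1
print((1 + a / x) ** x)    # close to e^3: the base only *tends* to 1
print(exp(a))              # e^3 for comparison
```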
{ "language": "en", "url": "https://math.stackexchange.com/questions/40515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the symmetry between the definitions of the bounded universal/existential quantifiers? What is the symmetry between the definitions of the bounded universal/existential quantifiers? $\forall x \in A, B(x)$ means $\forall x (x \in A \rightarrow B(x))$ $\exists x \in A, B(x)$ means $\exists x (x \in A \land B(x))$ These make intuitive sense, but I would expect there to be some kind of symmetry between how the definitions of the bounded quantifiers work, and I can't see one. $A \rightarrow B$ means $\lnot A \lor B$ which doesn't seem to have a direct relationship with $A \land B$. What am I missing?
Is this more symmetrical? $\forall x \in A, B(x)$ means $\forall x (x \notin A \lor B(x))$ $\exists x \in A, B(x)$ means $\exists x (x \in A \land B(x))$
{ "language": "en", "url": "https://math.stackexchange.com/questions/40564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Diophantine equation : $N= \frac{x^2+y}{x+y^2}$ I am looking for information about the following diophantine equation : $N = \displaystyle\frac{x^2+y}{x+y^2}$ Has it been studied ? Is there any efficient algorithm to solve it? Any links? I have tried to solve it by myself this week-end, but haven't made any progress ... Thanks in advance Philippe P.S: My first post. Sorry for being unclear. Does this equation have solutions in integers x,y for all integer N > 0 ?
Follow user9325's suggestion about completing the squares, and then (look up and) apply the theory of Pell equations. Edit: OK, I guess you didn't get anything out of user9325's suggestion, so I'll take it up for you. $N=(x^2+y)/(x+y^2)$, $Ny^2-y=x^2-Nx$, $U^2-NV^2=1-N^3$ where $U=2Ny-1$, $V=2x-N$. This has the solution $U=-1$, $V=\pm N$. The solution $U=-1$, $V=N$ corresponds to $x=N$, $y=0$, which already shows that there's a solution for each $N$, but maybe that's too trivial. Then take any solution to $a^2-Nb^2=1$ and you get another solution, $U=-a\pm bN^2$, $V=-b\pm aN$. Now $a^2-Nb^2=1$ has lots of solutions - that's the Pellian I alluded to. For $y$ to be an integer, you need $a\equiv 1\pmod N$, so you have to study enough of the theory to see if that can be made to happen. If $N$ is a square, say, $N=m^2$, then the Pellian doesn't apply, but you have something simpler; $(U+mV)(U-mV)=1-m^6$. Now you'll get at most finitely many solutions, since there are only finitely many ways to factor $1-m^6$. Here's one example; take $N=4=2^2$ so $m=2$ and $1-m^6=-63$; take $U+2V=63$, $U-2V=-1$ to get $U=31$, $V=16$; then $x=10$, $y=4$.
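A quick check of the worked example at the end (a sketch; it also confirms the trivial family $x=N$, $y=0$):

```python
def N_of(x, y):
    """Return N = (x^2 + y) / (x + y^2) when it is an integer, else None."""
    num, den = x * x + y, x + y * y
    if den <= 0 or num % den != 0:
        return None
    return num // den

assert N_of(10, 4) == 4                              # the solution derived above for N = 4
assert all(N_of(N, 0) == N for N in range(1, 50))    # trivial solutions x = N, y = 0
```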
{ "language": "en", "url": "https://math.stackexchange.com/questions/40627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Constructing self-complementary graphs How does one go about systematically constructing a self-complementary graph, on say 8 vertices? [Added: Maybe everyone else knows this already, but I had to look up my guess to be sure it was correct: a self-complementary graph is a simple graph which is isomorphic to its complement. --PLC]
If you have a self-complementary graph of order 4n, half the vertices must each lie on fewer than 2n edges and the other half must lie on 2n or more edges. Add a vertex by connecting it to the 2n vertices lying on fewer than 2n edges. The result is a self-complementary graph of order 4n+1. You can create a second self-complementary graph of order 4n+1 by taking the self-complementary graph of order 4n and connecting the new point to the 2n vertices lying on 2n or more edges.
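To seed this construction one needs small self-complementary graphs; a brute-force isomorphism check (a sketch, feasible only for small orders) confirms the two standard small examples, the path $P_4$ and the cycle $C_5$:

```python
from itertools import combinations, permutations

def is_self_complementary(n, edges):
    """Check, by brute force over vertex permutations, whether the graph
    on vertices 0..n-1 is isomorphic to its complement."""
    edges = {frozenset(e) for e in edges}
    complement = {frozenset(e) for e in combinations(range(n), 2)} - edges
    return any(
        {frozenset((p[u], p[v])) for u, v in edges} == complement
        for p in permutations(range(n))
    )

assert is_self_complementary(4, [(0, 1), (1, 2), (2, 3)])                  # path P4
assert is_self_complementary(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])  # cycle C5
# A triangle is not: it has 3 edges and its complement has none.
assert not is_self_complementary(3, [(0, 1), (1, 2), (0, 2)])
```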
{ "language": "en", "url": "https://math.stackexchange.com/questions/40745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 3 }
Why are singular conics reducible? I'm currently working through Rick Miranda's book on Algebraic Geometry and Riemann Surfaces, and I've been stuck on a problem in the first chapter, and I can't seem to get anywhere. I think that for example Bezout's theorem would solve it, but I would want something more elementary, which I think there is. Let X be an affine plane curve of degree 2, that is, defined by a quadratic polynomial f(z,w). Suppose that f is singular. Show that f(z,w) then factors as the product of linear factors. UPDATE So far I've done the following, set $f(x,y) = ax^2+bxy+cx+dy+ey^2+f$. Say that $p=(m,n)$ is a root, and that it is singular. Set $z=(x+m)$, $w=(y+n)$. Then we have a polynomial: f(z,w) which will have a singular point at (0,0). Taking the partial derivatives, and further, we solve for some coefficients, and at the end we get that a conic, singular polynomial should be (after some transformations) of the form: $az^2+bzw+cw^2$, which is reducible into linear factors. However, I'm not completly sure this method is correct, so any tips would be helpful, as to whether I'm on the right path or not.
Hint: Make a linear change of coordinates so that (one of) the singular point(s) is located at $(0,0)$. (Check that this is okay in the context of this particular problem.) Now consider the Taylor series expansion of $f(z,w)$, i.e. write $f = f_0 + f_1 + f_2 + ...$, where $f_n$ is homogeneous of degree $n$ in $z$ and $w$. For which degrees $n$ is $f_n$ non-zero? What does this tell you?
{ "language": "en", "url": "https://math.stackexchange.com/questions/40779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Convergence of infinite/finite 'root series' Let $S_n=a_1+a_2+a_3+...$ be a series where $ {a}_{k}\in \mathbb{R}$ and let $P = \{m\;|\;m\;is\;a\;property\;of\;S_n\}$ based on this information what can be said of the corresponding root series: $R_n=\sqrt{a_1} + \sqrt{a_2} + \sqrt{a_3} + ...$ In particular, if $S_n$ is convergent/divergent then in what circumstances can we say that $R_n$ is also convergent/divergent? EDIT (1) Eg: $$S_n = \frac{1}{2}+\frac{1}{4}+\frac{1}{8}+...$$ we know that the series converges to $1$. While the corresponding root series $$R_n = \frac{\sqrt{1}}{\sqrt{2}}+\frac{\sqrt{1}}{\sqrt{4}}+\frac{\sqrt{1}}{\sqrt{8}}+...$$ also converges (which we know does to $1+\sqrt2$). We also know that the above convergence cannot generalised to all root series as, the series $\displaystyle \frac{1}{n^2}$ converges to $\displaystyle \frac{\pi^2}{6}$, while the corresponding root series $\displaystyle \sqrt{\frac{1}{n^2}}$ diverges. My Question is: Is there a way to determine which 'root series' diverges or converges based only on information about the parent series.
By Cauchy-Schwarz we have, for nonnegative $f$, $$\sum_{N\leq n\leq N+x}f(n)\leq\left(\sum_{N\leq n\leq N+x}\sqrt{f(n)}\right)^{2}\leq (x+1)\sum_{N\leq n\leq N+x}f(n)$$ (the sum has $x+1$ terms). Now, since these inequalities are best possible (that is, I can find $f$ with equality, or arbitrarily close to equality, at either end), nothing more can be said without additional conditions on $f$. Notice that in particular, the left-hand side gives: $f$ diverges $\Rightarrow$ $\sqrt{f}$ diverges. I mean, I flirted with the idea that $$\sum_{n=1}^\infty \frac{\sqrt{f(n)}}{\sqrt{n}}$$ converging implies that $\sum_{n=1}^\infty f(n)$ must as well. However, I think it is instructive to explain why this is not so: let $f(n)$ be the characteristic function of the fourth powers. If monotonicity is also required, then this implication is true, but for general $f$, little can be said.
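Numerical illustration of the two examples from the question (a sketch): the root series of the geometric series converges to $1+\sqrt2$, while the root series of $\sum 1/n^2$ is the harmonic series, whose partial sums grow without bound (roughly like $\ln N$):

```python
from math import sqrt, log

# Root series of the geometric series sum 1/2^k: converges to 1 + sqrt(2).
root_geometric = sum(1 / sqrt(2 ** k) for k in range(1, 60))
print(root_geometric, 1 + sqrt(2))   # both ~ 2.414213...

# Root series of sum 1/n^2 is the harmonic series: partial sums keep growing.
N = 10 ** 6
harmonic = sum(1 / n for n in range(1, N + 1))
print(harmonic, log(N))              # ~ 14.39 vs ~ 13.82
```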
{ "language": "en", "url": "https://math.stackexchange.com/questions/40834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Epanechnikov Kernel with Variable Width I have the Epanechnikov kernel that looks like this: $$K(x) = \frac{3}{4} * (1 - x^2)$$ I have added $\sigma$ to the equation so that I can adjust the width, very much like the Gaussian kernel. It looks like this: $$K(x) = \frac{3}{4} * \left(1 - \left(\frac{x}{\sigma^2}\right)^2\right)$$ Can someone verify that the above equation is correct?
A kernel is supposed to have a unit integral. You should think of it as a differential: $$K(x) = \frac{3}{4}(1-x^2) dx.$$ (Of course it is understood the values are $0$ in the complement of the interval $[-1,1]$.) To change the radius from $1$ to $\sigma$, re-express $x$ as $u / \sigma$: $$K(u) = \frac{3}{4}(1 - (u/\sigma)^2) d(u/\sigma) = \frac{3}{4\sigma}(1 - (u/\sigma)^2).$$ (Now it is understood $K$ is $0$ in the complement of the interval $[-\sigma, \sigma]$.) Geometrically this is clear: uniformly stretching the kernel out so it is supported on $[-\sigma, \sigma]$ would multiply its integral by $\sigma$, so you have to rescale its values by $1/\sigma$ to compensate.
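A numeric sanity check (my own, not part of the answer) that the rescaled kernel $K(u) = \frac{3}{4\sigma}(1 - (u/\sigma)^2)$ really integrates to $1$ for several widths, using a plain midpoint rule:

```python
# Midpoint-rule check that the rescaled Epanechnikov kernel has unit mass.
# The step count is an arbitrary choice for the demonstration.
def epanechnikov(u, sigma=1.0):
    if abs(u) > sigma:
        return 0.0
    return 0.75 / sigma * (1.0 - (u / sigma) ** 2)

def integrate(f, a, b, steps=100_000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

for sigma in (0.5, 1.0, 3.0):
    area = integrate(lambda u: epanechnikov(u, sigma), -sigma, sigma)
    print(f"sigma = {sigma}: integral = {area:.6f}")  # ~1.0 each time
```

By contrast, the questioner's version with $x/\sigma^2$ and no $1/\sigma$ prefactor does not integrate to $1$ unless $\sigma = 1$.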
{ "language": "en", "url": "https://math.stackexchange.com/questions/40922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conditional probability Given the events $A, B$ the conditional probability of $A$ supposing that $B$ happened is: $$P(A | B)=\frac{P(A\cap B )}{P(B)}$$ Can we write that for the events $A,B,C$, the following is true? $$P(A | B\cap C)=\frac{P(A\cap B\cap C )}{P(B\cap C)}$$ I have a couple of problems with the equation above; it doesn't always fit my logical solutions. If it's not true, I'll be happy to hear why. Thank you.
Yes, and this is also known as multiple conditioning. In general, $$P(A_1 \cap \cdots \cap A_n) = P(A_1) \prod_{i=2}^{n} P(A_{i}|A_{1} \cap \cdots \cap A_{i-1})$$
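A small finite check (my own example, not from the answer) of $P(A \mid B\cap C) = P(A\cap B\cap C)/P(B\cap C)$ on a uniform two-dice sample space, where the events $A$, $B$, $C$ are arbitrary choices for the demonstration:

```python
# Verify the double-conditioning formula by exhaustive counting.
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: (w[0] + w[1]) % 2 == 0   # sum is even
B = lambda w: w[0] > 3                 # first die shows 4, 5 or 6
C = lambda w: w[1] % 2 == 1            # second die is odd

BC  = lambda w: B(w) and C(w)
ABC = lambda w: A(w) and BC(w)

lhs = prob(ABC) / prob(BC)             # the formula from the question
# Direct count of A restricted to the outcomes satisfying B ∩ C:
rhs = Fraction(sum(1 for w in omega if ABC(w)), sum(1 for w in omega if BC(w)))
print(lhs, rhs)                        # both 1/3
```

Both computations give the same value, as the formula predicts.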
{ "language": "en", "url": "https://math.stackexchange.com/questions/41092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Compound angle formula confusion I'm working through my book, on the section about compound angle formulae. I've been made aware of the identity $\sin(A + B) \equiv \sin A\cos B + \cos A\sin B$. Next task was to replace B with -B to show $\sin(A - B) \equiv \sin A\cos B - \cos A \sin B$ which was fairly easy. I'm struggling with the following though: "In the identity $\sin(A - B) \equiv \sin A\cos B - \cos A\sin B$, replace A by $(\frac{1}{2}\pi - A)$ to show that $\cos(A + B) \equiv \cos A\cos B - \sin A\sin B$." I've got $\sin((\frac{\pi}{2} - A) - B) \equiv \cos A\cos B - \sin A\sin B$ by replacing $\sin(\frac{\pi}{2} - A)$ with $\cos A$ and $\cos(\frac{\pi}{2} - A)$ with $\sin A$ on the RHS of the identity. It's just the LHS I'm stuck with and don't know how to manipulate to make it $\cos(A + B)$. P.S. I know I'm asking assistance on extremely trivial stuff, but I've been staring at this for a while and don't have a tutor so hope someone will help!
This is the same as Henry's answer, only presented differently. $\left(\displaystyle \frac{\pi}{2}-A\right) - B = \displaystyle \frac{\pi}{2} - (A+B)$. Now you can use the fact that $\sin \left(\displaystyle \frac{\pi}{2} - C\right) = \cos(C)$...
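A numeric spot-check (my own, not part of the answer) that the chain of substitutions really lands on the compound-angle identity, sampling a few arbitrary angle pairs:

```python
# Check cos(A + B) = sin((pi/2 - A) - B) = cos A cos B - sin A sin B.
import math

def via_sine(a, b):
    # sin((pi/2 - A) - B) = sin(pi/2 - (A + B)) = cos(A + B)
    return math.sin((math.pi / 2 - a) - b)

for a, b in [(0.3, 1.1), (2.0, -0.7), (5.0, 4.0)]:
    direct = math.cos(a + b)
    assert math.isclose(direct, via_sine(a, b))
    assert math.isclose(direct, math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))
print("identity holds at the sampled angles")
```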
{ "language": "en", "url": "https://math.stackexchange.com/questions/41133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
questions about the paper: Affine quivers and canonical bases I am reading the paper Affine quivers and canonical bases. I have a question on page 114 of the paper. In the proof of property (b), line 6 of page 114, why "for each $\gamma \neq 1$, $tr(\gamma, M)=0$" implies "$M=n\mathbf{r}$, where $\mathbf{r}=\mathbf{C}[\Gamma]$"? Thank you.
A complex representation of a finite group $G$ whose character vanishes on all non-unit elements of $G$ is a multiple of the regular representation. That follows immediately by standard character theory, as set up in, for example, Serre's book on the representation theory of finite groups. Later: You asked for a proof of this... The hypothesis you have is that the character $\chi$ of your module $M$ is supported on $\{1_G\}\subseteq G$. Now, the character $\chi_{\mathrm{reg}}$ of the regular representation is also supported on $\{1_G\}$, and its value at $1_G$ is $|G|$. It follows that $\chi=p\chi_{\mathrm{reg}}$ for some $p\in\mathbb Q$, because the value of a character at $1_G$ is always an integer. Now $\chi$ is a linear combination of irreducible characters with integer coefficients, and since the trivial representation of $G$ occurs exactly once in the regular representation, it occurs with multiplicity $p$ in $M$. It follows immediately that $p$ must be an integer and, since a representation is determined by its character, that $M=p\mathbb C[G]$. This argument, again, assumes the basic character theory of finite groups, which you can find in lots of places (the book of Fulton and Harris on representation theory, Feit's or Huppert's books on characters, Serre's, and many others). Google should find other expositions—I happen to have just taught a minicourse on the subject, and you'll find the notes, with most proofs, here, but in Spanish. Aside: I think it would be extremely useful for you to study the basics of the representation theory of finite groups a bit. Lots of the things that one wants to do in representation theory, and which in particular Lusztig does, can be seen, in their simplest incarnation, in that context, so learning it has immense practical value!
{ "language": "en", "url": "https://math.stackexchange.com/questions/41179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Order of operations - why are they in the order they're in? I understand the order of operations, but why are they ordered the way they're ordered? Is there a particular reason why multiplication should have a higher precedence than subtraction, other than to prevent ambiguity? Edit: I'm a curious software developer that's relatively lousy at math. A simple explanation that your grandma could understand would be very welcome. :-)
I don't think there is any mathematical reason. The order of operations is only a matter of notation to save some brackets. Careful: a typical calculator does not have a different order of operations but none at all instead. So 5 - 4*3 on a calculator is actually (5-4)*3 while with our convention for algebra it is 5-(4*3). Both assumptions are valid, the latter one is just the more common form.
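The answer's example, run through Python, which parses arithmetic with the usual algebraic convention (multiplication binds tighter than subtraction), against the strictly left-to-right reading of a simple calculator:

```python
# Multiplication before subtraction saves the brackets in 5 - (4*3);
# a sequential calculator instead computes (5 - 4) * 3.
convention = 5 - 4 * 3        # parsed as 5 - (4*3)
left_to_right = (5 - 4) * 3   # the calculator's reading
print(convention, left_to_right)   # -7 3
```

Both readings are internally consistent; the convention just fixes which one is meant when no brackets are written.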
{ "language": "en", "url": "https://math.stackexchange.com/questions/41252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 10, "answer_id": 0 }
Limit of monotonic functions at infinity I understand that if a function is monotonic then the limit at infinity is either $\infty$,a finite number or $-\infty$. If I know the derivative is bigger than $0$ for every $x$ in $[0, \infty)$ then I know that $f$ is monotonically increasing but I don't know whether the limit is finite or infinite. If $f'(x) \geq c$ and $c \gt 0$ then I know the limit at infinity is infinity and not finite, but why? How do I say that if the limit of the derivative at infinity is greater than zero, then the limit is infinite?
In each interval $[n,n+1)$ the function increases by at least $c$: by the Mean Value Theorem, $f(n+1)-f(n)=f'(\xi)\geq c$ for some $\xi\in(n,n+1)$, so $f(n)\geq f(0)+cn\to\infty$ as $n\to\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/41290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Outer product of a vector with itself Is there a special name for an outer product of a vector with itself? Is it a special case of a Gramian? I've seen them a thousand times, but I have no idea if such product has a name. Update: The case of outer product I'm talking about is $\vec{u}\vec{u}^T$ where $\vec{u}$ is a column vector. Does is have a name in the form of something of $\vec{u}$? Cheers!
The result is a particular case of a dyadic tensor. Is that what you are looking for?
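To make the object concrete (my own illustration, not part of the answer), here is $\vec{u}\vec{u}^{\,T}$ written out in plain Python, together with the properties that make it special: it is symmetric, rank one, and its only nonzero eigenvalue is $\|\vec{u}\|^2$ with eigenvector $\vec{u}$.

```python
# The outer product u u^T of a column vector with itself.
u = [1.0, 2.0, 3.0]
G = [[ui * uj for uj in u] for ui in u]   # (u u^T)[i][j] = u_i * u_j
for row in G:
    print(row)

# u u^T is symmetric ...
assert all(G[i][j] == G[j][i] for i in range(3) for j in range(3))
# ... and rank one: every row is a multiple of u itself.
assert all(G[i] == [u[i] * uj for uj in u] for i in range(3))
# (u u^T) u = (u . u) u, so the only nonzero eigenvalue is ||u||^2.
Gu = [sum(G[i][j] * u[j] for j in range(3)) for i in range(3)]
uu = sum(x * x for x in u)
assert Gu == [uu * x for x in u]
print("eigenvalue ||u||^2 =", uu)
```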
{ "language": "en", "url": "https://math.stackexchange.com/questions/41329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Finite field, I don't quite understand the concept What is the concept behind AES's finite field? I understand most of you will laugh at my lack of understanding of the concept. I'm trying to learn a greater amount of higher math concepts and I am implementing AES to do so. I understand that, in basic terms, a polynomial is an expression involving only variables raised to non-negative integer powers, but what does it mean by treated as a polynomial over GF(2^8), and is 2^8 two to the power of 8, or 2 XOR 8? Corrected: from Wikipedia AES In more general sense, each column is treated as a polynomial over $GF(2^8)$ and is then multiplied modulo $x^4 + 1$ with a fixed polynomial $c(x) = 0x03*x^3 + x^2 + x + 0x02$. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from $GF(2)[x]$.
I will try to give a layman's terms description; hoping this helps clarify some of your confusions. But you should be warned that this is not a precise definition. $\mathbb{Z}$ is the set of all integers. $\mathbb{Z}_2$ is the set of all integers modulo 2. That is, $\{0,1\}.$ Arithmetic operations in $\mathbb{Z}_2$ are performed modulo $2.$ Consider a vector of length 8 whose entries are elements of $\mathbb{Z}_2$. In other words, it's a bit vector of length 8. A given vector $(b_1,b_2,\dots,b_8) \in (\mathbb{Z}_2)^8$ can be represented as a polynomial: $b_1 x^7 + b_2 x^6 + \dots + b_7 x + b_8.$ That is, a polynomial in $x$ with coefficients from $\mathbb{Z}_2.$ $\mathrm{GF}(2^8)$ is defined as a set of such vectors (equivalently, of such polynomials; this is what the article refers to when it says each column is treated as a polynomial..). In order to construct $\mathbb{Z}_2$ we need the element $2,$ and we perform all arithmetic modulo $2.$ If you want to construct $\mathbb{Z}_5,$ you will perform operations mod $5$, etc. Similarly, in order to construct $\mathrm{GF}(2^8)$, you need a specific (irreducible) polynomial $p$ of degree $8.$ All arithmetic operations in $\mathrm{GF}(2^8)$ are then performed modulo $p.$
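For AES specifically, the irreducible polynomial is $p(x) = x^8 + x^4 + x^3 + x + 1$ (bit pattern 0x11B). A minimal sketch of byte multiplication under that modulus (my own illustration, not part of the answer):

```python
# Multiply two elements of GF(2^8), each encoded as a byte of polynomial
# coefficients, reducing modulo the AES polynomial 0x11B.
def gf_mul(a: int, b: int, poly: int = 0x11B) -> int:
    result = 0
    while b:
        if b & 1:              # lowest bit of b set: add (XOR) the current a
            result ^= a
        a <<= 1                # multiply a by x
        if a & 0x100:          # degree reached 8: reduce modulo p(x)
            a ^= poly
        b >>= 1
    return result

# Worked example from the AES specification (FIPS-197): {57} * {83} = {c1}.
print(hex(gf_mul(0x57, 0x83)))   # 0xc1
```

Note that "addition" here is XOR (coefficients mod 2), which is why the sum of two bytes never carries.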
{ "language": "en", "url": "https://math.stackexchange.com/questions/41365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Easy Proof Adjoint(Compact)=Compact I am looking for an easy proof that the adjoint of a compact operator on a Hilbert space is again compact. This makes the big characterization theorem for compact operators (i.e. compact iff image of unit ball is relatively compact iff image of unit ball is compact iff norm limit of finite rank operators) much easier to prove, provided that you have already developed spectral theory for C*-algebras. By the way, I'm using the definition that an operator $T\colon H \to H$ is compact if and only if given any [bounded] sequence of vectors $(x_n)$, the image sequence $(Tx_n)$ has a convergent subsequence. edited for bounded
What you're asking about is called Schauder's theorem. An operator $T: X \to Y$ is compact if and only if $T^{\ast}: Y^{\ast} \to X^{\ast}$ is compact. I'm using the following definition of compactness: An operator $T: X \to Y$ between Banach spaces is compact if and only if every sequence $(x_{n}) \subset B_{X}$ in the unit ball of $X$ has a subsequence $(x_{n_{j}})$ such that $(Tx_{n_j})$ converges. This implies that $K = \overline{T(B_{X})} \subset Y$ is compact, as it is sequentially compact and metric. Now let $(\phi_{n}) \subset B_{Y^{\ast}}$ be any sequence and we want to show that $(T^{\ast}\phi_{n})$ has a convergent subsequence. Observe that the sequence $f_{n} = \phi_{n}|_{K}$ in $C(K)$ is bounded and equicontinuous, so by the theorem of Arzelà-Ascoli, the sequence $(f_{n})$ has a convergent subsequence $(f_{n_{j}})$ in $C(K)$. Now observe $$\|T^{\ast}\phi_{n_i} - T^{\ast}\phi_{n_{j}}\| = \sup_{x \in B_{X}} \|\phi_{n_i}(Tx) - \phi_{n_j}(Tx)\| = \sup_{k \in K} |f_{n_i}(k) - f_{n_j}(k)|$$ where the last equality follows from the fact that $T(B_{X})$ is dense in $K$. But this means that $(T^{\ast}\phi_{n_j})$ is a Cauchy sequence in $X^{\ast}$, hence it converges. I leave the other implication as well as the translation to the Hilbert adjoint to you as an exercise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/41432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Two-colourings of the complete graph on n vertices The question is: Show that there is a two-colouring of the complete graph $K_n$ on $n$ vertices with at most $\displaystyle {n \choose k} 2^{1-{k \choose 2}}$ monochromatic subgraphs $K_k$. (Hint: Compute expectation of the number of monochromatic $K_k$. ) I'm confused about what a two-colouring is. Diestel (Graph Theory) says a $k$-colouring is a vertex partition into $k$ independent sets. But for $n \geq 2$ I don't see how you can do this on $K_n$. Can someone work out what "two-colouring" means in this context? If I assume it means colouring each edge one of two colours randomly, I can prove the result. But I don't see why it should mean this. It's just me doing what's easy. Thanks!
This appears in the classic probabilistic proof of existence of the Ramsey number $R(k,k)$, due to Erdos, and is indeed a colouring of the edges. The wiki page on the probabilistic method has the proof here.
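To make the hint concrete, here is a Monte-Carlo sketch (my own code, not part of the answer) for the small case $n=5$, $k=3$: colouring each edge red or blue uniformly at random makes the expected number of monochromatic triangles $\binom{5}{3}2^{1-\binom{3}{2}} = 2.5$, so some colouring achieves at most that many.

```python
# Estimate E[# monochromatic K_3] under a uniform random 2-colouring of K_5.
import random
from itertools import combinations
from math import comb

def count_mono_triangles(n, rng):
    colour = {e: rng.randrange(2) for e in combinations(range(n), 2)}
    return sum(
        1 for t in combinations(range(n), 3)
        if len({colour[e] for e in combinations(t, 2)}) == 1
    )

rng = random.Random(0)
n, k, trials = 5, 3, 20_000
bound = comb(n, k) * 2 ** (1 - comb(k, 2))
estimate = sum(count_mono_triangles(n, rng) for _ in range(trials)) / trials
print(f"bound = {bound}, Monte-Carlo mean = {estimate:.3f}")   # mean near 2.5
```

The simulated mean matches the expectation by linearity: each of the $\binom{n}{k}$ candidate copies of $K_k$ is monochromatic with probability $2 \cdot 2^{-\binom{k}{2}}$.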
{ "language": "en", "url": "https://math.stackexchange.com/questions/41488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intuitive explanation of the tower property of conditional expectation I understand how to define conditional expectation and how to prove that it exists. Further, I think I understand what conditional expectation means intuitively. I can also prove the tower property, that is if $X$ and $Y$ are random variables (or $Y$ a $\sigma$-field) then we have that $$\mathbb E[X] = \mathbb{E}[\mathbb E [X | Y]].$$ My question is: What is the intuitive meaning of this? It seems quite puzzling to me. (I could find similar questions but not this one.)
The expected value of $X$ is still the expected value of $X$ when you take into account the possible values of $Y$.
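A small discrete check of the tower property (my own example, not from the answer): roll two fair dice, let $X$ be their sum and $Y$ the first die; averaging the conditional averages $\mathbb{E}[X \mid Y=y]$, weighted by $P(Y=y)$, recovers $\mathbb{E}[X]$.

```python
# Verify E[X] = E[E[X | Y]] by exhaustive counting over a uniform space.
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes
X = lambda w: w[0] + w[1]
Y = lambda w: w[0]

def expect(f, outcomes):
    return Fraction(sum(f(w) for w in outcomes), len(outcomes))

# E[X | Y = y] computed on the slice where Y = y, then averaged over y:
tower = Fraction(0)
for y in range(1, 7):
    slice_y = [w for w in omega if Y(w) == y]
    tower += Fraction(len(slice_y), len(omega)) * expect(X, slice_y)

print(expect(X, omega), tower)   # both 7
```

Here $\mathbb{E}[X \mid Y=y] = y + 3.5$, and averaging that over the six equally likely values of $y$ gives $7 = \mathbb{E}[X]$.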
{ "language": "en", "url": "https://math.stackexchange.com/questions/41536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 8, "answer_id": 3 }
Can it be shown that the limit of a bounded sequence is no greater than the bounding value? Let $(a_n)$ be a convergent sequence. Since $(a_n)$ converges it is bounded and therefore there exists a number $\alpha \geq 0$ such that $|a_n| \leq \alpha \; \forall \; n \in \mathbb{N}$. Is it true $\lim_{n \to \infty} |a_n| \leq \alpha$ ? I believe it is true and my proposed answer to this question will attempt to confirm this belief.
As a general tip, usually when I believe something is true, a proof by contradiction is in order. In this case if $a_{n} \to a$, but $a>\alpha$ then $a-\epsilon>\alpha$ for some $\epsilon>0$. By definition of convergence, there is some $a_N \in (a-\epsilon, a+\epsilon)$, but then $a_N > a-\epsilon >\alpha$, contrary to $\alpha$ being an upper bound for $(a_n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/41714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is an integer uniquely determined by its multiplicative order mod every prime Let $x$ and $y$ be nonzero integers and $\mathrm{ord}_p(w)$ be the multiplicative order of $w$ in $ \mathbb{Z} /p \mathbb{Z} $. If $\mathrm{ord}_p(x) = \mathrm{ord}_p(y)$ for all primes (Edit: not dividing $x$ or $y$), does this imply $x=y$?
Yes. It is a consequence of Chebotarev's density theorem that if two Galois extensions $L_1, L_2$ of a number field $K$ have the property that the same primes split in both of them (or even almost all the same primes), then in fact $L_1 = L_2$. The condition in the OP implies that the polynomials $t^k - x, t^k - y$ split at the same primes over $\mathbb{Q}$ for all $k$, hence that their splitting fields over $\mathbb{Q}$ are identical. Let $L_k$ denote this splitting field. Then $\mathbb{Q}(\zeta_k) \subset L_k$ where $\zeta_k$ is a primitive $k^{th}$ root of unity; moreover, $L_k$ is a Kummer extension of $\mathbb{Q}(\zeta_k)$ with Galois group $\text{Gal}(L_k/\mathbb{Q}(\zeta_k)) \cong C_k$ acting on a root of either $t^k - x$ or $t^k - y$ by multiplication by a primitive $k^{th}$ root of unity. Let $a^k = x, b^k = y$. WLOG a generator of $C_k$ acts on $a$ by multiplication by $\zeta_k$ and acts on $b$ by multiplication by $\zeta_k^m$ for some $m \in (\mathbb{Z}/k\mathbb{Z})^{\ast}$. It follows that $\frac{a^m}{b}$ is fixed under the action of $C_k$, hence $\frac{a^m}{b} \in \mathbb{Q}(\zeta_k)$ and $$\left( \frac{a^m}{b} \right)^k = \frac{x^m}{y}.$$ When $k = 2$ we conclude that $x, y$ have the same squarefree parts; in particular, $x, y$ have the same sign. In general, I want to conclude that For infinitely many $k$, we have $\frac{a^m}{b} \in \mathbb{Q}$ for some choice of $a, b$. Edit: This is true for all odd $k$. The above result allows us to finish. Given a prime $p$ let $\nu_p$ denote the function which, given an integer, returns the exponent of $p$ in the prime factorization of that integer. Given two primes $p_i, p_j$ and fixing $k$, the condition that $\frac{a^m}{b} \in \mathbb{Q}$ implies that $$\nu_{p_i}(x) \nu_{p_j}(y) \equiv \nu_{p_j}(x) \nu_{p_i}(y) \bmod k.$$ If this is true for infinitely many $k$, it follows that $\nu_{p_i}(x) \nu_{p_j}(y) = \nu_{p_j}(x) \nu_{p_i}(y)$ for all $i, j$. 
Together with the fact that $x, y$ have the same sign, it follows that one of $x, y$ is a power of the other, say an $n^{th}$ power. If $n$ has a nontrivial odd factor $o$, then this is a contradiction by taking $k$ a sufficiently large power of $o$; otherwise, $n$ is a power of $2$, and by taking $k$ a sufficiently large power of $2$ we get a contradiction using the more detailed result in the link.
{ "language": "en", "url": "https://math.stackexchange.com/questions/41774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 4, "answer_id": 2 }
how to solve the following inequality? I have the following inequality: $$n \ge \frac{K_n^2}{\epsilon^2} \frac{\log K_n}{\epsilon},\text{ where }K_n = (\log n)^3.$$ I would like to solve it, even numerically. I thought that numerically it can be solved by iteratively setting $K_n$ using a current value for $n$ and then $n$ using a current value for $K_n$. However, I get a weird behavior. For different $\epsilon$ (more specifically for $\epsilon = 3.3409202$) I get that the answer for $n$ is 2.0127e+06. If I increase $\epsilon$ by a bit, I get a complex number. For other similar inequalities, I would get a different, but similar interesting "boundary" behavior: there would be a certain epsilon under which the value of solved $n$ will be very large, and over which the $n$ will suddenly jump to a really small value. Is there any explanation for what's going on here? Is there a better way to solve this inequality? Thanks.
I assume that solving the inequality means replacing the inequality sign by an equality sign and solving for $n$. From your post, I understand that you run into problems for "large" $\epsilon$ ($\epsilon >3.6$). The equation written explicitly reads $$\epsilon^3 n = 3 \log^6 n \,\log \log n.$$ It turns out that the equation has two solutions for $\epsilon< \epsilon^*$: one in the region $(e,n^*)$ and the other in the interval $(n^*,\infty)$. For $\epsilon=\epsilon^*$, the only solution is at $n^*$ and for $\epsilon> \epsilon^*$ there is no (real) solution. The values for $\epsilon^*$ and $n^*$ and some explanation you find below: Let us assume $n\geq 1$ as the right hand side for $n \leq 1$ is in general complex (and also $\epsilon>0$). The right hand side is negative in the region $[1,e]$. As the left hand side is positive, the solution to the equation has to be for $n\geq e$. The right hand side grows slower than the left hand side (for $n \to \infty$). Furthermore, the left hand side at $n=e$ is $\epsilon^3 e$ whereas the right hand side is 0. Therefore, it is clear that there will be a maximal $\epsilon= \epsilon^*$ for which the equation has a solution (and for $\epsilon < \epsilon^*$ there will be two solutions). The condition for $\epsilon^*$ is that the left hand side is a tangent (at $n^*$) to the right hand side. In formulas $$ \begin{align} \text{LHS}(n^*) &= \text{RHS}(n^*)\\ \text{LHS}'(n^*) &= \text{RHS}'(n^*) \end{align} $$ with $\text{LHS}(n)= \epsilon^* n$ and $\text{RHS}(n) = 3 \log^6 n \,\log \log n$. The solution is $$ \begin{align} \epsilon^* &= 8.60323 &n^* &= 687.328 \end{align}. $$
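Dividing the two tangency conditions eliminates $\epsilon^*$: with $L = \ln n^*$, one gets $L \ln L = 6 \ln L + 1$, i.e. the fixed-point equation $L = 6 + 1/\ln L$. A short sketch (my own code) that recovers the critical values quoted in the answer:

```python
# Solve the tangency system for eps* and n* by fixed-point iteration.
# RHS(n) = 3 (ln n)^6 ln(ln n); eps* = (RHS(n*)/n*)^(1/3) at the tangent point.
import math

L = 6.5                       # any starting point with ln L > 0 works here
for _ in range(50):           # the map is strongly contracting near the root
    L = 6.0 + 1.0 / math.log(L)

n_star = math.exp(L)
rhs = 3.0 * L**6 * math.log(L)
eps_star = (rhs / n_star) ** (1.0 / 3.0)
print(f"n* = {n_star:.3f}, eps* = {eps_star:.5f}")   # ~687.328 and ~8.60323
```

This also explains the questioner's "boundary" behaviour: for $\epsilon < \epsilon^*$ there are two real branches of solutions, which merge at $n^*$ and disappear for $\epsilon > \epsilon^*$.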
{ "language": "en", "url": "https://math.stackexchange.com/questions/41895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Fixed-point-free permutations An $i \in [n]$ is called a fixed point of a permutation $\sigma \in S_n$ if $\sigma(i) = i$. Let $D(n)$ be the number of permutations $\sigma \in S_n$ without any fixed point. Prove that i) $D(n) = n \cdot D(n-1) + (-1)^n$ for $n \geq 2$ ii) $D(n) = (n-1)(D(n-1) + D(n-2))$ for $n \geq 3$ First I tried to write down all fixed point-less permutations (in one-line notation): $n \leq 1 \rightarrow$: none $n = 2$: (2 1) $n = 3$: (2 3 1) (3 1 2) $n = 4$ (2 1 4 3) (2 3 4 1) (2 4 1 3) (3 1 4 2) (3 4 1 2) (3 4 2 1) (4 1 2 3) (4 3 1 2) (4 3 2 1) Unfortunately I didn't find any way of constructing these permutations by using the ones from $(n-1)$. Could you please help me a little bit? Thank you in advance!
These are called derangements. For a discussion, please see for example http://en.wikipedia.org/wiki/Derangement
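As a cross-check of the two recurrences in the question (my own sketch, not part of the linked discussion), both can be run forward from the base cases $D(0)=1$, $D(1)=0$ and compared against a brute-force count over all permutations:

```python
# Verify D(n) = n*D(n-1) + (-1)^n and D(n) = (n-1)*(D(n-1) + D(n-2))
# against an exhaustive count of fixed-point-free permutations.
from itertools import permutations

def D_brute(n):
    return sum(
        all(p[i] != i for i in range(n))
        for p in permutations(range(n))
    )

def D_rec1(n):                       # D(n) = n*D(n-1) + (-1)^n for n >= 2
    d = 0                            # D(1) = 0
    for m in range(2, n + 1):
        d = m * d + (-1) ** m
    return d if n >= 1 else 1

def D_rec2(n):                       # D(n) = (n-1)*(D(n-1) + D(n-2)) for n >= 2
    a, b = 1, 0                      # D(0) = 1, D(1) = 0
    for m in range(2, n + 1):
        a, b = b, (m - 1) * (a + b)
    return b if n >= 1 else 1

for n in range(1, 8):
    assert D_brute(n) == D_rec1(n) == D_rec2(n)
print([D_rec2(n) for n in range(1, 8)])   # [0, 1, 2, 9, 44, 265, 1854]
```

The values $0, 1, 2, 9$ for $n = 1, \dots, 4$ match the hand-listed permutations in the question.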
{ "language": "en", "url": "https://math.stackexchange.com/questions/41949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proving an integer $3n+2$ is odd if and only if the integer $9n+5$ is even How can I prove that the integer $3n+2$ is odd if and only if the integer $9n+5$ is even, where n is an integer? I suppose I could set $9n+5 = 2k$, to prove it's even, and then do it again as $9n+5=2k+1$ Would this work?
(This bit of writing is organized by notes and proofs, as noted by headings and X)s) Proof 1 1) Assuming an even plus an even is an even, an odd plus an odd is an even, and an even plus an odd is an odd... -- (See end proofs) 2) Assuming n is even... 3) Anything even can be stated as 2x, where x equals any whole number, therefore the product 3*2x would be divisible by two, and even. Adding two to that would produce an even according to our original assumption. 4) Now assuming n is odd... 5) 3n would always result in an odd: since we're assuming n is odd, n-1 is even, so 3(n-1) is even, and adding 3 to 3(n-1) gives 3n, an even plus an odd, which is an odd. 6) Now adding two to this odd would result in another odd, due to an odd plus an even being an odd. Proof 2 Now to prove that 9n + 5 is an even if 3n + 2 is an odd... 1) n must be an odd for 3n + 2 to be an odd 2) Plugging any odd number into 9n results in an odd number (see note 5 of the previous proof) 3) An odd plus an odd is an even (see note 1 of the previous proof); five is an odd, 9n is an odd, so 9n + 5 is an even. Now, a quick proof on an odd plus an odd being an even, etc. Definitions: Even - 2n, or an odd +- 1, where n is any whole number Odd - 2n +- 1 where n is any whole number Proof 3 1) an odd plus an odd would be (2n +- 1) + (2x +- 1). 2) After simplification, this can become 3 things. 1 - 2n + 2x 2 - 2n + 2x + 2 3 - 2n + 2x - 2 3) To check evenness, divide by two and check for decimals. Doing so in the above three cases results in no decimals, therefore, all even. So an odd plus an odd is always an even. Proof 4 1) An even plus an odd would be 2x + (2n + 1) 2) Dividing by two gives us x + n + .5. .5 is a decimal, therefore, the result is odd. So an even plus an odd is an odd. Proof 5 1) an even plus an even would be 2x + 2n 2) Divide by two - x + n. 
Since there are no decimals in this result, it is an even. Therefore, an even plus an even is always an even.
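As a quick machine check of the claim itself (my own, not part of the answer), the biconditional can be tested exhaustively over a symmetric range of integers:

```python
# 3n + 2 is odd exactly when 9n + 5 is even (both reduce to "n is odd").
for n in range(-1000, 1001):
    assert ((3 * n + 2) % 2 == 1) == ((9 * n + 5) % 2 == 0)
print("3n + 2 odd  <=>  9n + 5 even, for |n| <= 1000")
```

Of course this is not a proof for all integers, only a sanity check of the parity argument above.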
{ "language": "en", "url": "https://math.stackexchange.com/questions/42059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Difference between "Show" and "Prove" In many mathematics problems you see the phrase "prove that..." or "show that..." something is. What's the difference between these two phrases? Is "showing" something different from "proving" something in mathematics? Thanks.
I remembered when I was in high school my maths teacher said specifically there was a difference between showing and proving. He said at our - high school - level, everything is generally considered a 'show' which are typically quite specific and do not involve abstract theories; whereas once you hit the higher levels, then you'd start with proofs which are considerably more rigorous. e.g. prove that 1+1 = 2.
{ "language": "en", "url": "https://math.stackexchange.com/questions/42106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 7, "answer_id": 5 }
Can I use Ravi Vakil's way of learning for elementary subjects? I mean you can't learn math in a linear order. Can I just read a paper first on a subject I haven't studied and just work backwards? For example, I have never studied combinatorics but I sort of have a fuzzy idea about the paper on alternating permutations by Stanley.
This may in the end, for an absolute beginner, be quite unproductive. Normally introductory texts are written in such a way as to be helpful to a newcomer by first building a knowledge-base of basic facts about a field of mathematics, and then perhaps having a few chapters on selected topics of moderate difficulty towards the back. I would recommend a compromise. First pick up an elementary text which is well-known and accepted among your peers, or accepted among the people who are writing the papers you wish to read. (This information is not very hard to find.) Then read and study the first few chapters with the basic material thoroughly. Then you may read the advanced paper, and be thoroughly confused, and work your way backwards as you describe. But without the first step I do believe you will be not only creating more problems for yourself in the beginning, but also in the end, where you may indeed miss the entire point of a paper. Very often top research papers replace a standard part of a big elementary machine with a clever alternative to attack a difficult problem and find a new solution. You will not understand or appreciate this without a basic knowledge of how the solution works in the standard case first.
{ "language": "en", "url": "https://math.stackexchange.com/questions/42152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 5, "answer_id": 2 }
How to rewrite logarithmic equation in exponential form? How would I rewrite this logarithmic equation: $\ln(37)= 3.6109$, in exponential form? -Thanks
The definition of $\ln(x)$ is that it is the number $y$ such that $e^y=x$. In other words, $$e^{\ln(x)}=x.$$ We have the equation $$\ln(37)=3.6109.$$ Because both sides are equal, we have that $$e^{\ln(37)}=e^{3.6109}.$$ By the definition of $\ln$, this simplifies to $$37=e^{3.6109}.$$
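A quick numeric confirmation of the answer (my own, not part of it): $\ln(37) \approx 3.6109$, and exponentiating the given value recovers $37$ up to the rounding in the problem statement.

```python
# ln(37) = 3.6109 (to 4 places)  <=>  e^3.6109 = 37 (approximately).
import math

print(round(math.log(37), 4))   # 3.6109
print(math.exp(3.6109))         # close to 37; off only by the rounding of ln(37)
```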
{ "language": "en", "url": "https://math.stackexchange.com/questions/42243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that $\lim\limits_{x\to\infty}f'(x) = 0$ when $\lim\limits_{x\to\infty}f(x)$ and $\lim\limits_{x\to\infty}f'(x)$ exist I've been trying to solve the following problem: Suppose that $f$ and $f'$ are continuous functions on $\mathbb{R}$, and that $\displaystyle\lim_{x\to\infty}f(x)$ and $\displaystyle\lim_{x\to\infty}f'(x)$ exist. Show that $\displaystyle\lim_{x\to\infty}f'(x) = 0$. I'm not entirely sure what to do. Since there's not a lot of information given, I guess there isn't very much one can do. I tried using the definition of the derivative and showing that it went to $0$ as $x$ went to $\infty$ but that didn't really work out. Now I'm thinking I should assume $\displaystyle\lim_{x\to\infty}f'(x) = L \neq 0$ and try to get a contradiction, but I'm not sure where the contradiction would come from. Could somebody point me in the right direction (e.g. a certain theorem or property I have to use?) Thanks
To expand a little on my comment, since $\lim_{x \to \infty} f(x) = L$, we get $$\lim_{x \to \infty} \frac{f(x)}{x} =0 \,.$$ But also, since $\lim_{x \to \infty} f'(x)$ exists, by L'Hospital we have $$\lim_{x \to \infty} \frac{f(x)}{x}= \lim_{x \to \infty} f'(x) \,.$$ Note that using the MVT is basically the same proof, since that's how one proves L'Hospital in this case.... P.S. I know that if $L \neq 0$ one cannot apply L'Hospital to $\frac{f(x)}{x}$, but one can cheat in this case: apply L'Hospital to $\frac{xf(x)}{x^2}$ ;)
{ "language": "en", "url": "https://math.stackexchange.com/questions/42277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55", "answer_count": 6, "answer_id": 5 }
Nowhere monotonic continuous function Does there exist a nowhere monotonic continuous function from some open subset of $\mathbb{R}$ to $\mathbb{R}$? Some nowhere differentiable function sort of object?
The Weierstrass function, mentioned in other answers, is indeed an example of a nowhere monotone function, meaning that $f$, even though continuous and bounded, is increasing at no point, decreasing at no point (and differentiable at no point as well). Details of this can be found in Example 7.16 in van Rooij, and Schikhof, A second course on real functions, Cambridge University Press, 1982. That $f$ is increasing at $a$ means that there is a neighborhood $I$ of $a$ such that if $t<a$ is in $I$, then $f(t)\le f(a)$, and if $t>a$ is in $I$, then $f(t)\ge f(a)$. Thus, $f$ not increasing at $a$ iff any neighborhood of $a$ has points $t$ such that $(f(t)-f(a))(t-a)<0$. Being decreasing at $a$ can be stated similarly. See here. We know that if $f$ is differentiable at $a$ and $f'(a)>0$ then $f$ is increasing at $a$, and if $f'(a)<0$, then $f$ is decreasing at $a$, so if a nowhere monotone function has a point $a$ in its domain where $f'(a)$ exists, then we must have $f'(a)=0$. It is indeed possible for a non-constant continuous increasing function $f$ to satisfy $f'(a)=0$ almost everywhere (we say that $f$ is singular). (Of course, if $f'(a)=0$ everywhere, then $f$ is constant.) The best known example of this phenomenon is Cantor's function, also known as the Devil's staircase (The link goes to O. Dovgoshey, O. Martio, V. Ryazanov, M. Vuorinen. The Cantor function, Expositiones Mathematicae, 24 (1), (2006), 1-37). The above being said, note anyway that being increasing at a point is far from being increasing in a neighborhood of the point. If we require that $f$ is differentiable (and not constant), then there will be points $a$ where $f'(a)>0$ (so $f$ is increasing at $a$) or $f'(a)<0$ (so $f$ is decreasing at $a$). Nevertheless, as shown for example in Katznelson and Stromberg (Everywhere differentiable, nowhere monotone, functions, The American Mathematical Monthly, 81, (1974), 349-353) we can still find differentiable functions $f$ that are monotone on no interval. 
(I briefly state some properties of their example here; there used to be an accessible link to the paper, but apparently that is no longer the case.)
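None of this can be verified exactly by computer, but a partial sum of a Weierstrass-type series $\sum a^n \cos(b^n \pi x)$ already shows the behavior at any fixed sampling scale. The sketch below is my own illustration, not from the cited references; the parameters $a=1/2$, $b=13$ satisfy the classical conditions $0<a<1$, $b$ odd, $ab>1+3\pi/2$. It checks that the sampled increments take both signs in every small window examined, i.e. the partial sum is monotone on none of them.

```python
import math

def weierstrass_partial(x, a=0.5, b=13, terms=5):
    """Partial sum of the Weierstrass-type series sum_{n} a^n cos(b^n pi x)."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

def window_has_both_signs(x0, width=5e-4, step=1e-6):
    """True if the sampled increments on [x0, x0+width] take both signs,
    so the partial sum is monotone on no sampled subwindow."""
    xs = [x0 + i * step for i in range(int(width / step) + 1)]
    ys = [weierstrass_partial(x) for x in xs]
    diffs = [y2 - y1 for y1, y2 in zip(ys, ys[1:])]
    return any(d > 0 for d in diffs) and any(d < 0 for d in diffs)

results = [window_has_both_signs(x0) for x0 in (0.1, 0.25, 0.7)]
```

Adding more terms only makes the oscillation persist at ever finer scales, which is the content of nowhere-monotonicity in the limit.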
{ "language": "en", "url": "https://math.stackexchange.com/questions/42326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 4 }
Are open sets required to be in Topology T over set X? Assume that we choose two sets A and B to be open sets, contained in the topology (X,T). A and B, together with the constructed sets that fulfill the requirements to make A and B open (unions and intersections of A, B, the empty set, and O), are represented by (X,O). Do all the constructed sets in (X,O) (sets other than A and B within (X,O)) have to be contained in the topology (X,T)?
As an example demonstrating that two topologies need not be comparable, let $X = \{a, b\}$ (a set with just two elements). Define on $X$ the topologies $$ \sigma = \{\emptyset, \{a\}, X\} $$ and $$ \tau = \{\emptyset, \{b\}, X\}. $$ (the sets listed are taken to be the open sets of the topology). Both $(X, \sigma)$ and $(X, \tau)$ are topological spaces (you can check that they satisfy the axioms), but neither one is contained in the other.
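This example can be verified mechanically. The following sketch (my own check, not part of the original answer) encodes each topology as a set of frozensets, tests the topology axioms (for a finite collection, closure under pairwise unions and intersections suffices), and confirms that neither topology contains the other.

```python
X = frozenset({'a', 'b'})
sigma = {frozenset(), frozenset({'a'}), X}
tau = {frozenset(), frozenset({'b'}), X}

def is_topology(opens, space):
    """Check the axioms for a topology on `space`; since `opens` is finite,
    pairwise closure under union/intersection is enough."""
    if frozenset() not in opens or space not in opens:
        return False
    for U in opens:
        for V in opens:
            if U | V not in opens or U & V not in opens:
                return False
    return True

both_are_topologies = is_topology(sigma, X) and is_topology(tau, X)
incomparable = not (sigma <= tau) and not (tau <= sigma)
```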
{ "language": "en", "url": "https://math.stackexchange.com/questions/42441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Eigenvalues of diff-system (can't understand) In this paper the authors have the dynamical system $$\begin{align} T_f \dot{y}_f & = -y_f + (1-\alpha(v))\varphi(z,d) \\ T_r \dot{y}_r & = -y_r + \alpha(v) \varphi(z,d) \\ \dot{z} & = -\varphi(z,d) + y_r + u \end{align}$$ and they state in eqns (8-10) that the eigenvalues of the linearization at the equilibrium points $(\overline{y}_f, \overline{y}_r, \overline{z})$ are $$\begin{align} \lambda_1 & = -T_f^{-1} \\ \lambda_2 + \lambda_3 & = -\varphi_z(\overline{z},d) - T_r^{-1} \\ \lambda_2 \lambda_3 & = T_r^{-1} \varphi_z(\overline{z},d)(1-\alpha(\overline{v})) \end{align}$$ Can someone explain to me how these are derived?
Do you know how to linearize a dynamical system around an equilibrium? The idea is that you have $x\in \mathbb{R}^n$ and $f:\mathbb{R}^n\rightarrow\mathbb{R}^n$ and you define the system $\dot{x}=f(x)$. Now to find the linearization of the system you have to expand to a Taylor polynomial around the equilibrium and keep only the linear terms. Practically you find the Jacobian matrix of $f$. In most cases you move the equilibrium to $0$ and you end up with $$\dot x=Jx$$ with $J$ the Jacobian of $f$. For example consider $$\dot{x_1}=x_1+x_1 x_2$$ $$\dot{x_2}=2x_1+x_1^2-x_2$$ The equilibrium is $(0,0)$ already and the Jacobian of $f$ here is $$\left[ \begin{array}{cc} 1+x_2 & x_1 \\ 2+2x_1 & -1 \end{array}\right]$$ at $(0,0)$ this becomes $$\left[ \begin{array}{cc} 1 & 0 \\ 2 & -1 \end{array}\right]$$ and the linearization of the system is $$\left[ \begin{array}{c} \dot x_1 \\ \dot x_2 \end{array}\right]=\left[ \begin{array}{cc} 1 & 0 \\ 2 & -1 \end{array}\right]\left[ \begin{array}{c} x_1 \\ x_2 \end{array}\right].$$ The eigenvalues of the equilibrium are the eigenvalues of the Jacobian at the equilibrium. I hope this helps.
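As a numeric cross-check of the worked example (my own sketch, not from the original answer): the Jacobian at the origin is $\left[\begin{smallmatrix}1 & 0\\ 2 & -1\end{smallmatrix}\right]$, and its eigenvalues drop out of the characteristic polynomial of the $2\times 2$ matrix.

```python
import math

# Jacobian of f(x1, x2) = (x1 + x1*x2, 2*x1 + x1**2 - x2)
def jacobian(x1, x2):
    return [[1 + x2, x1],
            [2 + 2 * x1, -1]]

def eigenvalues_2x2(m):
    """Eigenvalues of a 2x2 matrix from trace and determinant
    (assumes the discriminant is nonnegative, i.e. real eigenvalues)."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

eigs = eigenvalues_2x2(jacobian(0.0, 0.0))  # equilibrium at the origin
```

The same recipe, applied to the $3\times 3$ Jacobian of the paper's system, yields the stated $\lambda_1, \lambda_2, \lambda_3$.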
{ "language": "en", "url": "https://math.stackexchange.com/questions/42561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How many permutations are there if you have n+1 items, where the extra item can be repeated? This is a little different than the normal case of permutations with repetition. Basically, let's say we have $n$ numbered balls, and there are $n$ spots. However, we can leave a spot empty if we want. The solution I got was basically... $$ \sum_{i=0}^n {n \choose i} \frac{n!}{(n-i)!} $$ The idea being that for a given number of blank spots, you can calculate the permutations in the remainder...and the combination gives you the distribution of those blank spots for a given number of blank spots. But I'm wondering if there is a way to collapse this sum? Thanks! Edit: here is the clarification that was asked for (sorry for the delay). The answer for case 2 would be 7. You have 2 spaces, and three numbers. 012. 1 and 2 can only appear once, but 0 can repeat. The possibilities are as follows: 00,01,02,12,21,10,20 Make sense? For 3 balls, you have to do it out but it turns out to be 34. It follows the equation I posted. I hope that helps.
Your expression is OEIS A002720 and does not seem to have a simpler form, apart perhaps from $$n! \; L_n(-1)$$ where $L_n(x)$ is a Laguerre polynomial and so may not be simpler.
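As a sanity check (my own, not part of the original answer), the sketch below evaluates the sum from the question, compares it with a brute-force enumeration (sequences of $n$ slots over ball labels $1..n$ plus a repeatable "empty" symbol $0$, each ball used at most once), and matches the first terms of A002720: 1, 2, 7, 34, 209.

```python
from math import comb, factorial
from itertools import product

def by_formula(n):
    # the sum from the question; each term is an integer
    return sum(comb(n, i) * factorial(n) // factorial(n - i) for i in range(n + 1))

def by_brute_force(n):
    # slots take values in {0, 1, ..., n}; 0 ("empty") may repeat, balls may not
    count = 0
    for seq in product(range(n + 1), repeat=n):
        balls = [s for s in seq if s != 0]
        if len(balls) == len(set(balls)):
            count += 1
    return count

values = [by_formula(n) for n in range(5)]
```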
{ "language": "en", "url": "https://math.stackexchange.com/questions/42672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing $\exists~$ some $c$ such that $f(z)=cg(z)$ Possible Duplicate: Holomorphic functions and limits of a sequence Hi there, I was looking through an old text of mine, just refreshing myself on some material, and I came across an interesting exercise statement that looks promising to understand. It has just been some time, and I do not know how to approach the problem correctly. Some guidance toward a solution would be appreciated. The question is as stated: Suppose that $f$ and $g$ are analytic on the disk $A=\{z \text{ such that }|z| \lt 2 \}$ and that neither $f(z)$ nor $g(z)$ is ever $0$ for $z \in A.$ If $$ \frac{f^{\;'}(1/n)}{f(1/n)}=\frac{g'(1/n)}{g(1/n)} \quad {\text{ for }} \quad n=1,2,3,4 \ldots, $$ could it be shown that there is a constant $c$ such that $f(z)=cg(z)~~\forall ~~ z \in A$?
Denote $h(x)=\frac{f^{'}(x)}{f(x)}-\frac{g^{'}(x)}{g(x)}$, which is holomorphic in $A$, since $f,g$ are never zero and analytic on $A$. The statement says that $h(\frac{1}{n})=0,\ \forall n \geq 1$. There is a theorem in complex analysis: Suppose $f$ is a holomorphic function in a region $\Omega$ that vanishes on a sequence of distinct points with a limit point in $\Omega$. Then $f$ is identically $0$. Since this is the case for $h$, we have $h(x)=0,\ \forall x \in A$. This means that $f^{'}(x)g(x)-f(x)g^{'}(x)=0,\ \forall x \in A$, and therefore $(f/g)^\prime(x)=0,\forall x \in A$. Therefore $f/g=c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/42729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Inequality involving factorial $\binom nk<(en/k)^k$ I am trying to prove the following inequality: $$\binom{n}{k}<(en/k)^k$$ I tried Stirling's approximation but could not get any further than $$\binom{n}{k}\approx \frac{\sqrt{2\pi n}n^n}{2\pi \sqrt{k(n-k)}(n-k)^{n-k}k^k}$$
$$\binom{n}{k} \left( \frac{k}{en} \right)^k = \frac{n(n-1) \ldots (n-k+1)}{n^k} \frac{k^k}{k! e^k} \leq \frac{k^k}{k! e^k} \text{ and since } e^k = \sum_m \frac{k^m}{m!},\;\;\; \frac{k^k}{k! e^k} < 1.$$
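Not needed for the proof, but the strict bound is easy to spot-check numerically (my own sketch) over a modest range of $n$ and $k$:

```python
from math import comb, e

def bound_holds(n, k):
    # check the strict inequality C(n, k) < (e*n/k)**k
    return comb(n, k) < (e * n / k) ** k

all_hold = all(bound_holds(n, k) for n in range(1, 41) for k in range(1, n + 1))
```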
{ "language": "en", "url": "https://math.stackexchange.com/questions/42785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Probability, Discrete random variables Let $X$ and $Y$ be independent random variables, taking values in the positive integers and having the same mass function $f(x)=2^{-x}$ for $x=1,2,\ldots$. Find $P(X\geq kY)$, for a given positive integer $k$. I did: $\displaystyle P(X\geq kY)=P(Y\leq X/k)=\sum_{r=1}^{\infty}P(Y\leq r, X=rk)=\sum_{r=1}^{\infty}P(Y\leq r)P(X=rk)$ $P(Y\leq r)=\displaystyle\sum_{u=1}^{r}\frac{1}{2^{u}}= 1-\frac{1}{2^{r}}$ $P(X\geq kY)=\displaystyle \sum_{r=1}^{\infty} (1-\frac{1}{2^{r}})\frac{1}{2^{rk}}=\frac{2^k}{(2^k-1)(2^{k+1}-1)}$ But the solution says $P(X\geq kY)=\displaystyle\frac{2}{2^{k+1}-1}$ and it's solved by doing $\displaystyle P(X\geq kY)=\sum_{r=1}^{\infty}P(X\geq kr,Y=r)$ I think that this approach can't be so far from mine but the results are different. Am I missing something? Thanks in advance.
$$ P(X \geq kY) = \sum_{y=1}^{\infty} P(X \geq kY \ | \ Y = y) P(Y = y) = \sum_{y=1}^{\infty} P(X \geq ky) 2^{-y} $$ Now $$ P(X \geq ky) = 1 - P(X < ky) = 1 - \sum_{r=1}^{ky - 1} 2^{-r} = 1 - 1 + (1/2)^{ky-1} = (1/2)^{ky-1} $$ Therefore $$ P(X \geq kY) = \sum_{y=1}^{\infty} 2^{-ky+1} 2^{-y} = 2\sum_{y=1}^{\infty} 2^{-y(k+1)} = 2\frac{2^{-(k+1)}}{1 - 2^{-(k+1)}} = \frac{2}{2^{k+1} - 1} $$ which is the answer your solution gave.
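The closed form is easy to sanity-check numerically. Below is a quick sketch (my own check, not from the original solution) that truncates the series $\sum_y P(Y=y)\,P(X\ge ky)$, using $P(X\ge m)=2^{-(m-1)}$, and compares it with $2/(2^{k+1}-1)$ for several $k$.

```python
def prob_truncated(k, y_max=60):
    # sum over y of P(Y = y) * P(X >= k*y); the tail beyond y_max is negligible
    return sum(2.0 ** (-y) * 2.0 ** (-(k * y - 1)) for y in range(1, y_max + 1))

closed_form = {k: 2 / (2 ** (k + 1) - 1) for k in (1, 2, 3, 4)}
errors = {k: abs(prob_truncated(k) - closed_form[k]) for k in closed_form}
```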
{ "language": "en", "url": "https://math.stackexchange.com/questions/42839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Common term for differential equations and recurrence relations Recently I have been working with recurrence relations (mostly linear), and systems of coupled recurrence relations. I have noticed a lot of common ground with differential equations. In a way, you can generalize these types of equations to equations which involve different "states" or "versions" of a function. Differential equations involve a function and its derivative, and recurrence relations involve a function and itself in another "state". Is there a general term for these types of equations, and are there general results and theory?
One general approach to unifying differential and difference equations goes by the term time scale calculus. From Wikipedia: "Many results concerning differential equations carry over quite easily to corresponding results for difference equations, while other results seem to be completely different from their continuous counterparts. The study of dynamic equations on time scales reveals such discrepancies, and helps avoid proving results twice — once for differential equations and once again for difference equations. The general idea is to prove a result for a dynamic equation where the domain of the unknown function is a so-called time scale (also known as a time-set), which may be an arbitrary closed subset of the reals. In this way, results apply not only to the set of real numbers or set of integers but to more general time scales such as a Cantor set." Besides the Wikipedia article, a standard reference is Dynamic Equations on Time Scales, by Martin Bohner and Allan Peterson. The field is relatively new - Wikipedia says it was introduced in 1988 by Stefan Hilger in (I believe, although Wikipedia does not say this) his doctoral dissertation - and so it is not that well-known yet.
{ "language": "en", "url": "https://math.stackexchange.com/questions/42881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Counting outcomes of flipping coins I know this is an extremely basic question, but I have a slight misunderstanding (?) regarding this question: How many possible outcomes are there when flipping two coins? At first glance, this is a really easy problem: 2 x 2 = 4! But if I list all the possible outcomes: {H, H} {H, T} {T, H} {T, T} I noticed that there are really three "distinct" outcomes: {H, H} {H, T} {T, T} Is it wrong to consider {H, T} and {T, H} the same? Why or why not?
It depends on how you decide to count them. You could say either. But, if your three events are two heads, one heads and one tails, or two tails - they do not have the same probability. But if your events are HH, HT, TH, or TT - they all have the same probability. Ultimately, one can define one's events and sample space however one wants. But we usually design it with some sort of problem in mind. Does that make sense?
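The distinction is easy to see by enumeration. The sketch below (my own illustration, not part of the original answer) lists the four equally likely ordered outcomes and groups them into the three unordered ones, which is where the unequal probabilities come from.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

ordered = list(product('HT', repeat=2))                 # 4 equally likely outcomes
unordered = Counter(tuple(sorted(o)) for o in ordered)  # 3 distinct multisets

# probability of each unordered outcome
probs = {k: Fraction(v, len(ordered)) for k, v in unordered.items()}
```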
{ "language": "en", "url": "https://math.stackexchange.com/questions/42952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Compute the expression $(a^2 + 4b - 1)(b^2 + 4a - 1)$ without calculating the roots of $x^2 - x - 5 = 0$ Let $a$ and $b$ be the roots of this equation: $$x^2 - x - 5 = 0$$ Find the value of $$(a^2 + 4b - 1)(b^2 + 4a - 1)$$ Without calculating the values of a and b. I saw this on a problems site and tried it but I got 100 and I don't think that's correct.
Hint: $$ a^2 + 4b - 1 = a^2 - a - 5 + 4b + a + 4 = 4b + a + 4 $$
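If you want to verify the hint numerically (this of course uses the explicit roots $(1\pm\sqrt{21})/2$, which the hint is designed to avoid), here is a quick sketch of my own:

```python
from math import sqrt, isclose

# roots of x**2 - x - 5 = 0
a = (1 + sqrt(21)) / 2
b = (1 - sqrt(21)) / 2

# the hint's simplification: a**2 + 4b - 1 == a + 4b + 4, since a**2 - a - 5 = 0
hint_ok = isclose(a**2 + 4*b - 1, a + 4*b + 4)

value = (a**2 + 4*b - 1) * (b**2 + 4*a - 1)
```

Carrying the hint through with the symmetric functions $a+b=1$ and $ab=-5$ gives the same value without ever computing the roots.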
{ "language": "en", "url": "https://math.stackexchange.com/questions/42996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Derivative Line Equation question I am not sure how to go about solving this problem: I know the derivative of $f$ would be $2x$ but I am not sure where to go from there. If anyone could help out that would be great. Thanks! Edit: The answer is $y = 2x - 1$; see below for why. EDIT: For this problem: plugging in $x_1$ back into the function did not seem to yield the correct $y_1$ to use in the point-slope formula according to the solution software I use: I plugged in $x_1$ into the function $f(x)=3x^3$ to get 3 for $y_1$. So the final equation would be $y - 3 = 3(x-1)$ -> $y=3x$. I don't see any errors being made. What could be the issue?
Here's a hint to get started. Rewrite the equation of the given line as $y=2x+1$. This line has slope $2$, so the line you're trying to find should also have slope $2$, in order to be parallel to the given line. Also, any line tangent to $f$ has slope $2x$. If you set these two slopes equal to each other, you solve for the $x$-coordinate of a line tangent to $f$ with slope $2$, which is then parallel to the given line, as desired. Once you have that, you can find a point on your desired line. Since you already know the needed slope, you then have a slope and a point on the line, and you can use point-slope form to write your equation. Please ask if anything is unclear.
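To make this concrete, take $f(x)=x^2$ (this choice is an assumption on my part, since the original problem statement isn't shown, but it is consistent with $f'(x)=2x$ and the stated answer $y=2x-1$). The sketch below carries out the steps and confirms tangency via a zero discriminant.

```python
# parallel to y = 2x + 1 means slope 2; tangent slope to f(x) = x**2 is 2x
x0 = 2 / 2                      # solve 2*x0 = 2
y0 = x0 ** 2                    # point on the curve: (1, 1)

# point-slope form: y - y0 = 2*(x - x0)  =>  y = 2x - 1
slope, intercept = 2, y0 - 2 * x0

# tangency check: x**2 - (slope*x + intercept) = x**2 - 2x + 1
# has a single (double) root exactly when the discriminant is 0
discriminant = (-slope) ** 2 - 4 * 1 * (-intercept)
```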
{ "language": "en", "url": "https://math.stackexchange.com/questions/43108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }