Irrationality of "primes coded in binary" For fun, I have been considering the number
$$
\ell := \sum_{p} \frac{1}{2^p}
$$
It is clear that the sum converges and hence $\ell$ is finite. $\ell$ also has the binary expansion
$$
\ell = 0.01101010001\dots_2
$$
with a $1$ in the $p^{th}$ place and zeroes elsewhere. I have also computed a few terms (and with the help of Wolfram Alpha, Plouffe's Inverter, and this link from Plouffe's Inverter) I have found that $\ell$ has the decimal expansion
$$
\ell = .4146825098511116602481096221543077083657742381379169778682454144\dots.
$$
Based on the decimal expansion and the fact that $\ell$ can be well approximated by rationals, it seems exceedingly likely that $\ell$ is irrational. However, I have been unable to prove this.
Question: Can anyone provide a proof that $\ell$ is irrational?
| If it were rational, then the binary expansion would eventually repeat -- but the distribution of primes doesn't repeat.
To wit: if the repeat had period $n$ and $p$ was a prime large enough to be inside the repeating part, then $p+pn=(1+n)p$ would have to be prime, which obviously isn't the case. On the other hand, the repeating period cannot consist of all zeroes, because there are infinitely many primes.
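As a numerical sanity check, the decimal digits quoted in the question can be reproduced directly from the defining series (a Python sketch; the cutoff $p\le 200$ and the 20-digit comparison are arbitrary choices):

```python
from fractions import Fraction

def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, b in enumerate(sieve) if b]

# Partial sum of 1/2^p over primes p <= 200; the omitted tail is smaller
# than 2^-199, so the first ~50 decimal digits are unaffected.
ell = sum(Fraction(1, 2 ** p) for p in primes_upto(200))
digits = str(ell.numerator * 10 ** 20 // ell.denominator)  # first 20 decimal digits
assert digits == "41468250985111166024"
```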
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Give an example of a discontinuous bilinear form. I'm having a hard time with this, I asked a tutor and he couldn't think of an example either.
The bilinear form on a vector space $V$ is a map $f: V \times V \rightarrow F$ that satisfies linearity in both variables.
| In order to have such an example, you need $V$ to be infinite-dimensional. Here’s a concrete example.
Let $F=\mathbb{R}$, and let $V$ be the set of infinite sequences of real numbers that are eventually $0$ (i.e., that have only finitely many non-zero terms) with the uniform (or sup) norm. $V$ is a subspace of $\ell^\infty$, and in fact of $c_0$. Now define $$f:V\times V\to\mathbb{R}:\langle x,y\rangle\mapsto\sum_{n\in\mathbb{N}}2^nx(n)y(n)\;;$$ for each $\langle x,y\rangle\in V\times V$ only finitely many terms of the sum are non-zero, so $f$ is well-defined, and it’s obviously bilinear.
For $n\in\mathbb{N}$ define $x_n\in V$ by $$x_n(k)=\begin{cases}2^{-n},&\text{if }k=2n\\0,&\text{if }k\ne 2n\end{cases}\;,$$ and let $z$ be the zero vector. Clearly $$\Big\langle\langle x_n,x_n\rangle:n\in\mathbb{N}\Big\rangle\to\langle z,z\rangle$$ in $V\times V$, since $\|x_n\|_\infty=2^{-n}$, but $f\big(\langle x_n,x_n\rangle\big)=2^{2n}\cdot 2^{-n}\cdot 2^{-n}=1$ for every $n\in\mathbb{N}$, so $f$ is not continuous at $\langle z,z\rangle$.
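A small numerical illustration of the blow-up, assuming vectors supported at index $2n$ with entry $2^{-n}$ (a choice made so that the weight $2^{2n}$ exactly cancels both small factors):

```python
# sequences with finitely many nonzero entries, stored as dicts index -> value
def f(x, y):
    # the bilinear form sum_k 2^k x(k) y(k); finitely many nonzero terms
    return sum(2 ** k * v * y.get(k, 0.0) for k, v in x.items())

def x_vec(n):
    # single nonzero entry 2^{-n} at index 2n, so the sup norm is 2^{-n}
    return {2 * n: 2.0 ** (-n)}

for n in range(1, 20):
    xn = x_vec(n)
    sup_norm = max(abs(v) for v in xn.values())
    assert sup_norm == 2.0 ** (-n)   # x_n -> 0 in the sup norm
    assert f(xn, xn) == 1.0          # but f(x_n, x_n) stays at 1
```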
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Is $f(\operatorname{rad}A)\subseteq\operatorname{rad}B$ for $f\colon A\to B$ not necessarily surjective? If I have two $K$-algebras $A$ and $B$ (associative, with identity) and an algebra homomorphism $f\colon A\to B$, is it true that $f(\operatorname{rad}A)\subseteq\operatorname{rad}B$, where $\operatorname{rad}$ denotes the Jacobson radical, the intersection of all maximal right ideals?
I can think of two proofs in the case that $f$ is surjective, but both depend on this surjectivity in a crucial way. The first uses the formulation of $\operatorname{rad}A$ as the set of $a\in A$ such that $1-ab$ is invertible for all $b\in A$, and the second treats an algebra as a module over itself, and uses the fact that the radical of $A$ as a module agrees with the radical of $A$ as an algebra, and is the intersection of kernels of maps onto simple modules; here the surjectivity is needed to make $B$ into an $A$-module in such a way that the radical of $B$ as an $A$-module is contained in the radical of $B$ as a $B$-module.
If a counterexample exists, $A$ will have to be infinite-dimensional, as in the finite-dimensional case all elements of $\operatorname{rad}A$ are nilpotent, and (I think, although I don't remember a proof right now, so maybe I'm wrong) the radical always contains every nilpotent element.
This is my first question on here, so let me know if I should have done anything differently!
| This is pretty obviously wrong: take $A$ to be the ring of upper-triangular $2\times 2$ matrices $\begin{bmatrix} K & K \\ 0 & K \end{bmatrix}$, $B$ to be the ring of all $2\times 2$ matrices $\begin{bmatrix} K & K \\ K & K \end{bmatrix}$, and $f$ to be the canonical inclusion. Then $\operatorname{rad}A=\begin{bmatrix} 0 & K \\ 0 & 0 \end{bmatrix}$ and $\operatorname{rad}B=0$. However, when $f$ is a surjective homomorphism of $K$-algebras, the statement does hold. In this case the inclusion $f(\operatorname{rad}A)\subseteq\operatorname{rad}B$ is clear since, for any maximal left ideal $\mathfrak{m}$ of $B$, the inverse image $f^{-1}(\mathfrak{m})$ is a maximal left ideal of $A$.
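The counterexample can even be machine-checked over the two-element field $K=\mathbb{F}_2$. A brute-force sketch in Python, using the characterization from the question ($a\in\operatorname{rad}A$ iff $1-ab$ is invertible for every $b\in A$):

```python
from itertools import product

I = (1, 0, 0, 1)  # 2x2 identity over F_2, stored row-major as (a, b, c, d)

def mul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return ((a*e + b*g) % 2, (a*f + b*h) % 2, (c*e + d*g) % 2, (c*f + d*h) % 2)

def sub(x, y):
    return tuple((u - v) % 2 for u, v in zip(x, y))

def is_unit(x, ring):
    return any(mul(x, y) == I == mul(y, x) for y in ring)

def jacobson_radical(ring):
    # rad = { a : 1 - a*b is a unit for every b in the ring }
    return {a for a in ring if all(is_unit(sub(I, mul(a, b)), ring) for b in ring)}

A = [m for m in product(range(2), repeat=4) if m[2] == 0]  # upper triangular
B = list(product(range(2), repeat=4))                      # all 2x2 matrices

assert jacobson_radical(A) == {(0, 0, 0, 0), (0, 1, 0, 0)}  # strictly upper triangular
assert jacobson_radical(B) == {(0, 0, 0, 0)}                # full matrix ring: radical zero
```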
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
how does one lift a morphism from a reduction? Let $X$ be a scheme and let $f: X^\mathrm{red} \to Y$ be a morphism of schemes. Is it always possible to lift it to a morphism $f': X \to Y$ so that $f=f' \circ \mathrm{red}$ where $X^{red}$ is the reduction of $X$ and $\mathrm{red}: X^\mathrm{red} \to X$ is the natural closed embedding? Can one describe the set of such liftings, e.g. in terms of some cohomology group?
| Suppose all schemes are $S$-schemes, for some scheme $S$.
Then you can always lift $f: X^\mathrm{red} \to Y$ to $f': X \to Y$ if $X$ is affine and $Y\to S$ is smooth.
The most important case is of course when $S=Spec(k)$ is the spectrum of a field.
Then for closed subschemes $Y\subset \mathbb A^n_k$ or $Y\subset\mathbb P^n_k$, smoothness can be checked through the Jacobian criterion, just like in advanced calculus!
Over perfect fields (for example algebraically closed fields or fields of characteristic $0$) smoothness coincides with non-singularity.
Edit: Bibliography
Hartshorne has written a fine set of lecture notes on deformation theory freely downloadable here.
(The Springer GTM #257 book Deformation theory is exactly these notes plus some exercises.)
You will find a version of the lifting theorem for smooth schemes in Chapter 1, Section 4.
The most comprehensive treatment is EGA IV.4. §17, but as always cross-references make for more difficult reading.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Closed form for $ \int_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$ I've been looking at
$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$
It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:
$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$
$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$
$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$
So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi}{\sin \pi x}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.
UPDATE:
The integral reduces to finding
$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$
With $a =\dfrac{n+1}{m}$ which converges only if
$$0 < a < 1$$
Using series I find the solution is
$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$
Can this be put in terms of the Digamma Function or something of the sort?
| Contour Integration Approach
Assuming only that $m>0$ and $-1<n<m-1$, let $z=x^m$ with the exaggerated contour $\gamma$:
$\gamma$ is actually tight above and below the positive real axis and around the origin and circles back at an arbitrarily large distance from the origin.
The part just above the positive real axis captures the integral. The part just below the positive real axis gets $-e^{2\pi i\frac{n-m+1}{m}}$ times the integral. The residue at $z=-1$ of $\frac{z^{\frac{n-m+1}{m}}}{1+z}$ is $e^{\pi i\frac{n-m+1}{m}}$. Putting this all together yields
$$
\frac1m\int_\gamma\frac{z^{\frac{n-m+1}{m}}}{1+z}\,\mathrm{d}z=\left(1-e^{2\pi i\frac{n-m+1}{m}}\right)\int_0^\infty\frac{x^n}{1+x^m}\,\mathrm{d}x
$$
$$
\frac{2\pi i}{m}e^{\pi i\frac{n-m+1}{m}}=\left(1-e^{2\pi i\frac{n-m+1}{m}}\right)\int_0^\infty\frac{x^n}{1+x^m}\,\mathrm{d}x
$$
$$
\frac{\pi}{m}=\sin\left(\pi\frac{n+1}{m}\right)\int_0^\infty\frac{x^n}{1+x^m}\,\mathrm{d}x
$$
$$
\frac{\pi}{m}\csc\left(\pi\frac{n+1}{m}\right)=\int_0^\infty\frac{x^n}{1+x^m}\,\mathrm{d}x\tag1
$$
Relation to $\bf{\Gamma(\alpha)\Gamma(1-\alpha)}$
Setting $m=1$, $n=\alpha-1$, and using the substitution $s=tu$ yields Euler's Reflection Formula:
$$
\begin{align}
\Gamma(\alpha)\Gamma(1-\alpha)
&=\int_0^\infty s^{\alpha-1}e^{-s}\,\mathrm{d}s\int_0^\infty t^{-\alpha}e^{-t}\,\mathrm{d}t\\
&=\int_0^\infty\int_0^\infty u^{\alpha-1}e^{-(tu+t)}\,\mathrm{d}u\,\mathrm{d}t\\
&=\int_0^\infty\frac{u^{\alpha-1}}{1+u}\,\mathrm{d}u\\[6pt]
&=\pi\csc(\pi\alpha)\tag2
\end{align}
$$
Another Proof of $\bf{(1)}$
$$
\begin{align}
\int_0^\infty\frac{x^{\alpha-1}}{1+x}\,\mathrm{d}x
&=\int_0^1\frac{x^{-\alpha}+x^{\alpha-1}}{1+x}\,\mathrm{d}x\\
&=\sum_{k=0}^\infty(-1)^k\int_0^1\left(x^{k-\alpha}+x^{k+\alpha-1}\right)\mathrm{d}x\\
&=\sum_{k=0}^\infty(-1)^k\left(\frac1{k-\alpha+1}+\frac1{k+\alpha}\right)\\
&=\sum_{k\in\mathbb{Z}}\frac{(-1)^k}{k+\alpha}\\[9pt]
&=\pi\csc(\pi\alpha)\tag3
\end{align}
$$
where the last step of $(3)$ is proven in $(3)$ of this answer. Then apply $(3)$ to get
$$
\begin{align}
\int_0^\infty\frac{x^n}{1+x^m}\,\mathrm{d}x
&=\frac1m\int_0^\infty\frac{x^{\frac{n-m+1}m}}{1+x}\,\mathrm{d}x\\
&=\frac\pi{m}\csc\left(\pi\frac{n+1}m\right)\tag4
\end{align}
$$
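A quick numerical sanity check of $(1)$ (a Python sketch; it splits the integral at $x=1$ and uses the substitution $x\mapsto 1/x$ on the tail so that plain Simpson quadrature on $[0,1]$ suffices):

```python
import math

def simpson(f, a, b, steps=2000):
    # composite Simpson's rule (steps must be even)
    h = (b - a) / steps
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, steps // 2))
    return s * h / 3

def integral(n, m):
    # split at x = 1 and substitute x -> 1/x on [1, oo):
    #   int_1^oo x^n/(1+x^m) dx = int_0^1 x^(m-n-2)/(1+x^m) dx
    head = simpson(lambda x: x**n / (1 + x**m), 0.0, 1.0)
    tail = simpson(lambda x: x**(m - n - 2) / (1 + x**m), 0.0, 1.0)
    return head + tail

def closed_form(n, m):
    return math.pi / (m * math.sin(math.pi * (n + 1) / m))

for n, m in [(1, 3), (1, 4), (2, 5)]:
    assert abs(integral(n, m) - closed_form(n, m)) < 1e-8
```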
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "109",
"answer_count": 11,
"answer_id": 4
} |
Calculating the percentage difference of two numbers The basic problem is this: "I have this number x and I ask you to give me another number y. If the number you give me is some percentage c different than my number then I do not want it." Given that you will know x and c, how do you calculate whether or not I should take y?
The naive approach I came up with is to just check whether y / x < c, but this fails for obvious reasons (try y bigger than x).
The next approach I tried treats the percentage difference as a ratio of the smaller number to the larger number. Therefore we could try min(x, y) / max(x, y) < c. However, this does not work; here is an example:
x = 1.2129 y = 1.81935 c = 50%
If we do the above we get 1.2129 / 1.81935 = 0.67 which is greater than 0.50. The problem here is that I obtained y by multiplying 1.2129 by 1.5, therefore y is only 50% greater than x. Why? I still don't understand why the above formula doesn't work.
Eventually through some googling I stumbled across the percentage difference formula, but even this doesn't suit my needs. It is abs(x - y) / ((x + y) / 2). However, this does not yield the result I am looking for: abs(x - y) = abs(1.2129 - 1.81935) = 0.60645, (x + y) / 2 = 3.03225 / 2 = 1.516125, and 0.60645 / 1.516125 = 0.4.
Eventually I ended up writing some code to evaluate x * c < y < x * (1 + c). As the basic idea is that we don't want any y that is 50% less than my number, nor do we want any number that is 50% greater than my number.
Could someone please help me identify what I'm missing here? It seems like there ought to be another way that you can calculate the percentage difference of two arbitrary numbers and then compare it to c.
| As far as I understood, you want the difference between the two numbers (x - y) to be at most c% of your number (x).
When you use min(x, y) / max(x, y) < c, you are checking whether the minimum of the two numbers is less than c% of the maximum.
abs(x - y) / ((x + y) / 2) checks whether the difference between the two numbers is c% (or less) of the average of those two numbers.
Those are completely different problems.
For yours, abs(x - y) / x < c should work.
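A minimal sketch of this criterion in Python (the function name is mine, and I deliberately avoid testing the exact 50% boundary, where floating-point rounding can go either way):

```python
def within_tolerance(x, y, c):
    # True when y differs from x by less than a fraction c of x
    return abs(x - y) / abs(x) < c

# with x = 1.2129 and c = 50%:
assert within_tolerance(1.2129, 1.5, 0.50)       # ~24% above x: accept
assert not within_tolerance(1.2129, 1.9, 0.50)   # ~57% above x: reject
assert within_tolerance(100.0, 51.0, 0.50)       # 49% below x: accept
```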
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can you check my proof on piecewise linear approximations? Suppose that $f :[a,b]\rightarrow \mathbb{R}$ is continuous. Let $\epsilon>0$. Show that there exists a continuous, piecewise linear function $g: [a,b]\rightarrow \mathbb{R}$ such that $|f(x)-g(x)|<\epsilon$ for all $x$ in $[a,b]$.
Proof: Suppose that $f$ is continuous at $p$. Then for every $\epsilon>0$ there exists a $\delta>0$ such that
$ |f(x)-f(p)|<3\epsilon $ for all points $x$ in $[a,b]$ for which $|x-p|<\delta$.
let $|g(p)-f(p)|<\epsilon$ for some $\epsilon$, then
$|f(x)-f(p)|= |f(x)-g(x)+g(x)-g(p)+g(p)-f(p)|\leq|f(x)-g(x)|+|g(x)-g(p)|+|g(p)-f(p)|<3\epsilon$
we get $|g(x)-g(p)|<\epsilon$ since $|f(x)-g(x)|<\epsilon$ and $|g(p)-f(p)|<\epsilon$. Hence there exists a continuous, piecewise linear function $g$.
| Reading your proof, I wonder a few things. Firstly, you say 'let $|g(p) - f(p)| < \epsilon$' -- does such a $g$ exist? Is it linear? Piecewise linear? Is it even continuous?
These are the problems I have with the proof. But if you update your proof, I'll update this answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Evaluating the definite integral $\int_0^\infty \mathrm{e}^{\sum_ia_ix^{n_i}}\mathrm{d}x $ As a follow-up to a question on evaluating the definite integral $\int_0^\infty \mathrm{e}^{-x^n}\,\mathrm{d}x$, I wish to know if there is a general analytic solution to the related integral where $-x^n$ is replaced by a polynomial of arbitrary degree, namely $$ \int_0^\infty \mathrm{e}^{\sum_ia_ix^{n_i}}\mathrm{d}x $$ for $n_i\in\mathbb{Z}$ and where the individual coefficients $a_i\in\mathbb{R}$ can be positive or negative, but in a manner such that the argument of the exponent (i.e,. the polynomial) is negative.
| All the $a_i$ are negative, aren't they? Otherwise the integral would be divergent.
From the property of the exponential, $f(a+b)=f(a)f(b)$, and the fact that (substituting $u=x^n$)
$$\int_0^\infty e^{-x^n}\,dx = \frac{1}{n}\int_0^\infty u^{1/n-1}e^{-u}\,du = \frac{1}{n}\Gamma(1/n),$$
your integral will be something like the product $\prod_i \frac{1}{n_i\,(-a_i)^{1/n_i}}\Gamma(1/n_i)$, or similar. :)
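The one-term identity $\int_0^\infty e^{-x^n}\,dx=\frac1n\Gamma(1/n)=\Gamma(1+1/n)$ can be checked numerically (a Python sketch; the "or similar" product guess for a general polynomial exponent is not verified here, only the single-term fact):

```python
from math import exp, gamma

def simpson(f, a, b, steps=10000):
    # composite Simpson's rule (steps must be even)
    h = (b - a) / steps
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, steps // 2))
    return s * h / 3

# the upper limit 40 stands in for infinity: exp(-40) is already ~4e-18
for n in [1, 2, 3, 4]:
    val = simpson(lambda x: exp(-x**n), 0.0, 40.0)
    assert abs(val - gamma(1 + 1 / n)) < 1e-5   # Gamma(1 + 1/n) = (1/n) Gamma(1/n)
```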
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Simple algebra. How can you factor out a common factor if it is $\frac{1}{\text{factor}}$ in one of the cases? I'm sure this is simple. I want to pull out a factor as follows...
I have the expression
$$\frac{a(\sqrt{x}) - (b + c)(\frac{1}{\sqrt{x}})c}{x}.$$
It would be useful for me to pull out the $\sqrt{x}$ from the numerator and try to simplify to remove the denominator, but how can I pull out the $\sqrt{x}$ from the right-most statement $\frac{1}{\sqrt{x}}$.
Thanks for your help!
| In general, if you have an expression $a+\frac{1}{c}b$ and you want to factor out the $\frac{1}{c}$, multiply the expression by $\frac{c}{c}$ like this:
$$
a+\frac{1}{c}b \\
\frac{c}{c}\left(a+\frac{1}{c}b\right) \\
\frac{1}{c}\left(c\cdot a + c\cdot \frac{1}{c}b\right) \\
\frac{1}{c}\left( ca+b \right)
$$
I find multiplying the whole expression by $\frac{c}{c}$ the clearest way to pull out these fractions, especially if the expression is part of a larger fraction, like yours. Then multiplying by $\frac{c}{c}$ is the same as just multiplying the numerator and denominator by $c$. For example,
$$
\frac{a+\frac{1}{c}b}{d+e} \\
\frac{c}{c}\frac{a+\frac{1}{c}b}{d+e} \\
\frac{c\left(a+\frac{1}{c}b\right)}{c\left(d+e\right)} \\
\frac{c\cdot a+c\cdot \frac{1}{c}b}{c\cdot d+c\cdot e} \\
\frac{ca+b}{cd+ce}
$$
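A numeric spot-check of the manipulation applied to the asker's expression, multiplying numerator and denominator by $\sqrt{x}$ (a sketch; function names are mine):

```python
from math import sqrt, isclose

def original(a, b, c, x):
    return (a * sqrt(x) - (b + c) * (1 / sqrt(x)) * c) / x

def factored(a, b, c, x):
    # after multiplying numerator and denominator by sqrt(x)
    return (a * x - (b + c) * c) / x ** 1.5

for args in [(2.0, 3.0, 5.0, 7.0), (1.5, -2.0, 0.5, 4.0)]:
    assert isclose(original(*args), factored(*args))
```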
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
You throw a fair coin one million times. what is the expected number of strings of 6 heads followed by 6 tails. I tried to solve this problem, and I used something similar to the probability of the Bernoulli distribution to find how many tosses you need to obtain a probability of almost one for one occurrence. Then I just divide the number of tosses in the problem statement by the number of tosses I get.
Is there a better way to do this?
Edit: What I tried is
m = $12$
N = $10^6$
Probability of having X = "$6$ heads followed by $6$ tails" is $\frac{1}{2^m}$
Probability of X starting exactly at position i is $1/2^m\cdot(1-1/2^m)^i$
Probability of X after p tries is $\sum_{i=0}^{p}1/2^m\cdot(1-1/2^m)^i =1-(1-1/2^m)^{p+1}$
Now when p is big enough the probability is 1. And then I can just divide. But it depends on what you define as big enough and leads to speculative results. There must be a solution based on statistics, but it's been a while and I forgot quite a bit.
| Well, your question suggest that you want to calculate the expected value of the number of substrings of the form $h^6t^6$ in your million-character string. However your calculations seem like trying to calculate the expected value of an indicator saying if there is a $h^6t^6$ substring (one or more!) in the million-character string.
If I were to answer your question, then I would repeat the hints given above: the probability of $h^6t^6$ at $i^{th}$ position is $1/2^m$, so the expected value of such an indicator is $1/2^m$ and the result is (by linearity) the sum of expected values of all the indicators ($10^6-m+1$ of them).
The problem you are trying to solve is a little bit harder. First, you shouldn't write things like:
Probability of X starting exactly at position i is $1/2^m\cdot(1-1/2^m)^i$
because the events "no $h^6t^6$ at $k$" and "$h^6t^6$ at $k+1$" aren't independent, so you can't multiply their probabilities to compute the probability of the intersection.
If the numbers were smaller, then we could use the inclusion-exclusion principle to get a precise answer. In this case I can think of a Markov chain with states $nothing, h, hh, hhh, hhhh, h^5, h^{6+}, h^6t, h^6t^2, h^6t^3, h^6t^4, h^6t^5, success$. You can prepare a $13\times 13$ transition matrix $P$ (it is not hard, since it is very regular and consists mainly of zeros, some $1/2$'s and one $1$), use a computer to compute its $10^6$th power $R$, and multiply $e_1^T \cdot R$; then the last coordinate of the result vector is the answer. Maybe it is possible to avoid using a computer here because of the regularity of $P$, but I don't feel like checking it now :)
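The linearity-of-expectation answer is easy to confirm by simulation (a Python sketch; the seed, the 3 trials, and the loose tolerance are arbitrary choices):

```python
import random

m, N = 12, 10**6
pattern = "h" * 6 + "t" * 6

# Linearity of expectation: each of the N - m + 1 starting positions is the
# start of h^6 t^6 with probability 1/2^m, regardless of dependence.
expected = (N - m + 1) / 2**m          # about 244.14

random.seed(0)
trials = 3
total = 0
for _ in range(trials):
    s = "".join(random.choice("ht") for _ in range(N))
    total += s.count(pattern)          # h^6 t^6 cannot overlap itself
empirical = total / trials

assert abs(expected - 244.14) < 0.01
assert abs(empirical - expected) < 50  # loose tolerance for only 3 trials
```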
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
A minimization problem for a function involving maximum Let $a,b,c,d$ be constants in the interval $[-1,1]$. Define $$f(x,y)=\max\{|y-a|,1-b\}+\max\{1-x,1-y\}+\max\{|x-c|,1-d\}$$ for $ -1\le x\le 1, -1\le y\le 1.$
Prove, or disprove, that the minimum value of $f$ is $$\max\{2-b-c,2-a-d, 2-b-d\}.$$
Numerical evidence seems to show that this is true.
| That is true. Let $A=2-b-c, B=2-a-d, C=2-b-d$. Note that $f(x,y)\ge \max\{A,B,C\}$ for all $(x,y)$ in the domain. To prove that there exist $(x,y)$ in the domain such that $f(x,y)= \max\{A,B,C\}$ consider six cases: $A\le B\le C, A\le C\le B,\cdots$. For example in the case $C\le B\le A$, we have $$a\le b, c-d\le a-b.$$
If $1+c-d\le a+b-1$, we choose $x,y$ such that $$1+c-d\le x\le y \mbox{ and } a+b-1\le y\le 1+a-b,$$ and if $a+b-1\le 1+c-d$, we choose $x,y$ such that
$$1+c-d\le x\le y\le 1+a-b.$$
Direct checking shows that $f(x,y)= \max\{A,B,C\}=A.$ Other cases are similar; in some cases $x=y=1$ suffices.
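The claim can also be probed by a brute-force grid search (a Python sketch; the half-integer parameter values and the 0.05 grid step are arbitrary choices, so equality is only checked up to the grid resolution, while the lower bound is checked exactly):

```python
import itertools

def f(x, y, a, b, c, d):
    return (max(abs(y - a), 1 - b)
            + max(1 - x, 1 - y)
            + max(abs(x - c), 1 - d))

params = [-1.0, -0.5, 0.0, 0.5, 1.0]
grid = [i / 20.0 - 1.0 for i in range(41)]   # step 0.05 on [-1, 1]

for a, b, c, d in itertools.product(params, repeat=4):
    conjectured = max(2 - b - c, 2 - a - d, 2 - b - d)
    numeric = min(f(x, y, a, b, c, d) for x in grid for y in grid)
    # pointwise lower bound: picking one branch of each max yields A, B, or C
    assert numeric >= conjectured - 1e-9
    # f moves by at most 2 * 0.025 per coordinate from the nearest grid point,
    # so the grid minimum overshoots the true minimum by at most ~0.1
    assert numeric <= conjectured + 0.11
```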
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Combining Taylor expansions How do you taylor expand the function $F(x)={x\over \ln(x+1)}$ using standard results? (I know that WA offers the answer, but I want to know how to get it myself.) I know that $\ln(x+1)=x-{x^2\over 2}+{x^3\over 3}+…$ But I don't know how to take the reciprocal. In general, given a function $g(x)$ with a known Taylor series, how might I find $(g(x))^n$, for some $n\in \mathbb Q$?
Also, how might I evaluate expressions like $\ln(1+g(x))$ where I know the Taylor expansion of $g(x)$ (and $\ln x$). How do I combine them?
Thank you.
| You can do this by long division:
$\dfrac{x}{x-{x^2\over 2}+{x^3\over 3}-\cdots} = 1 + \dfrac{{x^2\over 2}-{x^3\over 3}+\cdots}{x-{x^2\over 2}+{x^3\over 3}\cdots} = 1 + \frac{x}{2} + \dfrac{-{x^3\over 12}+\cdots}{x-{x^2\over 2}+\cdots} = 1 + \frac{x}{2} - \frac{x^2}{12} + \cdots $
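The long division above can be mechanized with exact rational arithmetic (a self-contained Python sketch of formal power-series division; the coefficients agree with $x/\ln(1+x)=1+\frac x2-\frac{x^2}{12}+\frac{x^3}{24}-\cdots$):

```python
from fractions import Fraction

def series_div(num, den, order):
    # quotient of two power series given as coefficient lists (index = power);
    # den[0] must be nonzero
    q = []
    rem = list(num) + [Fraction(0)] * order
    for k in range(order):
        c = rem[k] / den[0]
        q.append(c)
        for j, d in enumerate(den):
            if k + j < len(rem):
                rem[k + j] -= c * d
    return q

order = 6
# log(1+x)/x = 1 - x/2 + x^2/3 - x^3/4 + ...
den = [Fraction((-1) ** k, k + 1) for k in range(order)]
num = [Fraction(1)] + [Fraction(0)] * (order - 1)
coeffs = series_div(num, den, order)   # x / log(1+x)
assert coeffs[:4] == [Fraction(1), Fraction(1, 2), Fraction(-1, 12), Fraction(1, 24)]
```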
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Area of a trapezoid from given the two bases and diagonals Find the area of trapezoid with bases $7$ cm and $20$ cm and diagonals $13$ cm and $5\sqrt{10} $ cm.
My approach:
Assuming that the bases of the trapezoid are the parallel sides, the solution I can think of is a bit ugly,
*
*Find the other two non-parallel sides of the trapezoid by using this formula.
*Find the height using this $$ h= \frac{\sqrt{(-a+b+c+d)(a-b+c+d)(a-b+c-d)(a-b-c+d)}}{2(b-a)}$$
Now, we can use $\frac12 \times$ sum of the parallel sides $\times$ height.
But, this is really messy and I am not sure if this is correct or feasible without electronic aid, so I was just wondering how else we could solve this problem?
| Here is a very short solution: let the trapezoid be $ABCD$ with $AB=20$ and $CD=7$, and consider a point $B'$ on the line $AB$, beyond $B$, such that $|BB'|=|DC|=7$. Then $DCB'B$ is a parallelogram, so $|CB'|=|DB|$. The area of triangle $ACB'$ equals the area of the trapezoid $ABCD$, since the area of triangle $ACD$ equals the area of triangle $BCB'$ (equal bases $DC$ and $BB'$, same height). For the triangle $ACB'$ it is easy to find the area from the Heron formula, since its three sides are known: $27$, $13$, and $5\sqrt{10}$. Plugging in these numbers we find that the area is $67.5$.
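The arithmetic can be double-checked in a few lines (a Python sketch):

```python
from math import sqrt, isclose

# triangle A C B' with |AB'| = 20 + 7 = 27, |AC| = 13, |CB'| = 5*sqrt(10)
a, b, c = 27.0, 13.0, 5 * sqrt(10)
s = (a + b + c) / 2
area = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
assert isclose(area, 67.5)

height = 2 * area / 27                          # common height of the trapezoid
assert isclose(height, 5.0)
assert isclose((7 + 20) / 2 * height, 67.5)     # (sum of bases)/2 * height
```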
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How to get from $a\sqrt{1 + \frac{b^2}{a^2}}$ to $\sqrt{a^2 + b^2}$ I have the following expression: $a\sqrt{1 + \frac{b^2}{a^2}}$. If I plug this into Wolfram Alpha, it tells me that, if $a, b$ are positive, this equals $\sqrt{a^2 + b^2}$.
How do I get that result? I can't see how that could be done. Thanks
| If $a\ge0$,
$$\begin{align}
a\sqrt{1 + \frac{b^2}{a^2}}
&=\sqrt{a^2}\sqrt{1 + \frac{b^2}{a^2}}
\\
&=\sqrt{a^2\left(1 + \frac{b^2}{a^2}\right)}
\\
&=\sqrt{a^2 + b^2}.
\end{align}$$
($\sqrt{a^2}=|a|$ for all $a\in\mathbb{R}$ and $|a|=a$ when $a\ge0$.)
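A quick numeric confirmation of the identity for a few positive $a$ (a Python sketch):

```python
from math import sqrt, isclose

# for a > 0 the factor a passes under the root as a^2
for a, b in [(3.0, 4.0), (0.5, 12.0), (7.0, 0.0)]:
    assert isclose(a * sqrt(1 + b**2 / a**2), sqrt(a**2 + b**2))
```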
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to deal with multiplication inside of an integral? I have an indefinite integral like this:
\begin{aligned}
\ \int x^3 \cdot \sin(4+9x^4)dx
\end{aligned}
I have to integrate it, but I have no idea where to start. I know the basic integration formulas, but it seems I need to split this expression somehow or do something else.
| Hint:
$$\begin{eqnarray*}
(\cos (u(x)))^{\prime } &=&-\sin (u(x))u^{\prime }(x)\qquad\text{(by the chain rule)} \\
&\Leftrightarrow &\int \sin (u(x))u^{\prime }(x)dx=-\cos (u(x))+\text{Const.}
\end{eqnarray*}$$
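Following the hint with $u(x)=4+9x^4$, $u'(x)=36x^3$, the antiderivative is $-\frac{1}{36}\cos(4+9x^4)+C$. A numeric sanity check (a Python sketch comparing a central difference of the antiderivative with the integrand):

```python
from math import sin, cos, isclose

def F(x):
    # antiderivative, from u = 4 + 9x^4, du = 36 x^3 dx
    return -cos(4 + 9 * x**4) / 36

def f(x):
    return x**3 * sin(4 + 9 * x**4)

h = 1e-6
for x in [0.3, 0.7, 1.1]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference of F
    assert isclose(numeric, f(x), rel_tol=1e-4)
```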
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove by induction that $n!>2^n$
Possible Duplicate:
Proof the inequality $n! \geq 2^n$ by induction
Prove by induction that $n!>2^n$ for all integers $n\ge4$.
I know that I have to start from the basic step, which is to confirm the above for $n=4$, being $4!>2^4$, which equals to $24>16$.
How do I continue though. I do not know how to develop the next step.
Thank you.
| Suppose that for $n=k$ $(k≥4)$ we have $k!>2^k$.
Now we have to prove that $(k+1)!>2^{k+1}$, i.e. the case $n=k+1$ $(k≥4)$.
$(k+1)! = (k+1)k! > (k+1)2^k$ (since $k!>2^k$)
That implies
$(k+1)!>2^k \cdot 2$ (since $k+1>2$, because $k$ is greater than or equal to $4$)
Therefore, $(k+1)!>2^{k+1}$.
Finally, we may conclude that $n!>2^n$ for all integers $n≥4$.
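The base case and the first several instances are trivial to confirm by direct computation (a Python sketch):

```python
from math import factorial

# the inequality holds from n = 4 onward...
assert all(factorial(n) > 2**n for n in range(4, 20))
# ...and genuinely fails just below the base case: 3! = 6 < 8 = 2^3
assert not factorial(3) > 2**3
```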
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 3
} |
Example of a module whose support is not closed?
Possible Duplicate:
The support of a module is closed?
Is there a simple example of a module $M$ of a Noetherian commutative ring $R$ such that $\operatorname{Supp}(M)\subset\operatorname{Spec}(R)$ is not closed?
When typing this question, this answer popped up.
So I think we can take $R=\mathbb{Z}$, and let $M=\bigoplus_{\mathfrak{p}\in S}\mathbb{Z}/\mathfrak{p}$ for a nonclosed subset $S$ of $\operatorname{Spec}(\mathbb{Z})$.
Is there an actual explanation as to why the support of such $M$ is not closed in $\operatorname{Spec}(\mathbb{Z})$? I didn't gather one from the original answer.
(I don't mind seeing a completely different example either, I just figured I'd ask about this one since it's already here.)
| Note that $(\mathbb{Z}/\mathfrak{p})_{\mathfrak{q}}=(0)$ whenever $\mathfrak{p}\neq \mathfrak{q}$, and $(\mathbb{Z}/\mathfrak{p})_{\mathfrak{p}} = \mathbb{Z}/\mathfrak{p}$. Moreover, localization commutes with direct sums, so for every $\mathfrak{q}$,
$$M_\mathfrak{q} = \left(\bigoplus_{\mathfrak{p}\in S}\mathbb{Z}/\mathfrak{p}\right)_{\mathfrak{q}} = \bigoplus_{\mathfrak{p}\in S}(\mathbb{Z}/\mathfrak{p})_{\mathfrak{q}}.$$
So the support of $M$, that is, the set of primes $\mathfrak{q}$ such that $M_{\mathfrak{q}}\neq 0$ is precisely $S$, which by assumption is not closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to find inverse Fourier transform I have the function
$$ \delta(f-2) $$
How can we inverse Fourier transform it? It's easy if $f$ is replaced with $w$. But based on my knowledge, $w = 2\pi f$.
The correct answer is
$$ e^{4\pi i t} $$
Can somebody explain to me what happened? Thanks.
| The inverse Fourier transform of $\delta(f-2)$ is
$$\mathcal F^{-1}[\delta(f-2)](t) = \int \delta(f-2) e^{i2\pi ft} \, df = e^{i2\pi\cdot 2t} = e^{i4\pi t}$$
The second equality holds by the sifting property of the delta function.
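One can also sanity-check this numerically by smearing the delta into a narrow Gaussian (a Python sketch; the width $\sigma=10^{-3}$, the $8\sigma$ truncation, and the test point $t=1/4$ are arbitrary choices):

```python
import math, cmath

def inverse_ft_of_delta(t, sigma=1e-3, steps=4001):
    # replace delta(f - 2) by a narrow unit-mass Gaussian, then integrate
    # g(f) * exp(i 2 pi f t) by the trapezoid rule over [2 - 8s, 2 + 8s]
    lo, hi = 2 - 8 * sigma, 2 + 8 * sigma
    h = (hi - lo) / (steps - 1)
    total = 0j
    for k in range(steps):
        f = lo + k * h
        g = math.exp(-((f - 2) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        w = 0.5 if k in (0, steps - 1) else 1.0   # trapezoid endpoint weights
        total += w * g * cmath.exp(2j * math.pi * f * t) * h
    return total

t = 0.25
# expected: e^{i 4 pi t} = e^{i pi} = -1 at t = 1/4
assert abs(inverse_ft_of_delta(t) - cmath.exp(4j * math.pi * t)) < 1e-3
```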
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Density function with absolute value Let $X$ be a random variable distributed with the following density function:
$$f(x)=\frac{1}{2} \exp(-|x-\theta|) \>.$$
Calculate: $$F(t)=\mathbb P[X\leq t], \mathbb E[X] , \mathrm{Var}[X]$$
I have problems calculating $F(t)$ because of the absolute value. I'm doing it by cases, but it just doesn't get me to the right answer.
So it gets to this:
$$
\int_{-\infty}^\infty\frac{1}{2} \exp(-|x-\theta|)\,\mathrm dx $$
| If $x\ge\theta$ then $|x-\theta|=x-\theta$.
If $x<\theta$ then $|x-\theta| = \theta-x$.
So
$$
\int_{-\infty}^\infty x \frac 1 2 \exp(-|x-\theta|)\,dx = \int_{-\infty}^\theta x\frac 1 2 \exp(\theta-x)\;dx + \frac 1 2 \int_\theta^\infty x \exp(\theta-x)\;dx.
$$
By symmetry, the expected value should be $\theta$ if there is an expected value at all. And, as it happens, there is. The only thing that would prevent that is if one of the integrals were $+\infty$ and the other $-\infty$.
If you use the substitution
$$
u = x-\theta, \qquad du = dx,
$$
then what you have above becomes
$$
\frac 1 2 \int_{-\infty}^0 (u+\theta) \exp(u)\;du + \frac 1 2\int_0^\infty (u+\theta)\exp(-u)\;du
$$
This is
$$
\begin{align}
& \frac 1 2 \int_{-\infty}^0 u \exp(u)\;du + \frac 1 2 \int_0^\infty u\exp(-u)\;du + \frac 1 2 \int_{-\infty}^0 \theta \exp(u)\;du + \frac 1 2 \int_0^\infty \theta\exp(-u)\;du \\ \\
& = \frac 1 2 \int_{-\infty}^\infty u \exp(-|u|)\;du + \theta\int_{-\infty}^\infty \frac 1 2 \exp(-|u|)\;du
\end{align}
$$
The first integral on the last line is $0$ because you're integrating an odd function over the whole line. The second is $1$ because you're integrating a probability density function over the whole line.
So you're left with $\theta$.
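The conclusions $\mathbb E[X]=\theta$ and (once computed the same way) $\mathrm{Var}[X]=2$ are easy to confirm by simulation, since this density is exactly the Laplace distribution with location $\theta$ and scale $1$ (a Monte Carlo sketch in Python; $\theta=1.7$ and the seed are arbitrary choices):

```python
import random

theta = 1.7
random.seed(1)

# X = theta + E with E a symmetrized exponential: this has density
# (1/2) exp(-|x - theta|), i.e. Laplace(theta, 1)
def sample():
    e = random.expovariate(1.0)
    return theta + (e if random.random() < 0.5 else -e)

n = 200_000
xs = [sample() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

assert abs(mean - theta) < 0.02   # E[X] = theta, as derived above
assert abs(var - 2.0) < 0.05      # Var[X] = 2 for this density
```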
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Self-avoiding walk on $\mathbb{Z}$ How many sequences $a_1,a_2,a_3,\dotsc$, satisfy:
i) $a_1=0$
ii) ($a_{n+1}=a_n-n$ or $a_{n+1}=a_n+n$)
iii) $a_i\neq a_j$ for $i\neq j$
iiii) $\mathbb{Z}=\{a_i\}_{i>0}$
Are the two alternating sequences the only solutions?
$a_1,a_2,a_3,..=0,1,-1,2,-2,3,-3,4,...$
or
$a_1,a_2,a_3,..=0,-1,1,-2,2,-3,...$
Is there a sequence satsifying i),iii),iiii) and ii) ($a_{n+1}=a_n-n^2$ or $a_{n+1}=a_n+n^2$) ?
| Only uncountably many. See this link to MO.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 2,
"answer_id": 0
} |
Examples of patterns that eventually fail Often, when I try to describe mathematics to the layman, I find myself struggling to convince them of the importance and consequence of "proof". I receive responses like: "surely if Collatz is true up to $20×2^{58}$, then it must always be true?"; and "the sequence of number of edges on a complete graph starts $0,1,3,6,10$, so the next term must be 15 etc."
Granted, this second statement is less logically unsound than the first since it's not difficult to see the reason why the sequence must continue as such; nevertheless, the statement was made on a premise that boils down to "interesting patterns must always continue".
I try to counter this logic by creating a ridiculous argument like "the numbers $1,2,3,4,5$ are less than $100$, so surely all numbers are", but this usually fails to be convincing.
So, are there any examples of non-trivial patterns that appear to be true for a large number of small cases, but then fail for some larger case? A good answer to this question should:
*
*be one which could be explained to the layman without having to subject them to a 24 lecture course of background material, and
*have as a minimal counterexample a case which cannot (feasibly) be checked without the use of a computer.
I believe conditions 1. and 2. make my question specific enough to have in some sense a "right" (or at least a "not wrong") answer; but I'd be happy to clarify if this is not the case. I suppose I'm expecting an answer to come from number theory, but can see that areas like graph theory, combinatorics more generally and set theory could potentially offer suitable answers.
| Let
$$\pi^{(4)}_1(N) = \text{ Number of primes }\leq N\text{ that are of the form } 1 \bmod 4$$ and $$\pi^{(4)}_3(N) = \text{ Number of primes }\leq N\text{ that are of the form } 3 \bmod 4$$
$$
\begin{array}{ccc}
N & \pi^{(4)}_1(N) & \pi^{(4)}_3(N) \\
100 & 11 & 13\\
200 & 21 & 24\\
300 & 29 & 32\\
400 & 37 & 40\\
500 & 44 & 50
\end{array}
$$
Looking at the pattern, one can wonder if $\pi^{(4)}_1(N) \leq \pi^{(4)}_3(N)$ is true for all $N$. In fact, this remains true for $N$ up to $26,860$.
$26,861$ is a prime $\equiv 1 \bmod 4$ and we find that $\pi^{(4)}_1(26,861) = \pi^{(4)}_3(26,861) + 1 > \pi^{(4)}_3(26,861)$. You can read more about this and similar questions on primes here.
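The table and the crossover can be reproduced with a short standard-library script (a sketch; the sieve bound 30000 is arbitrary, chosen just to reach past the crossover):

```python
def prime_counts_mod4(limit):
    """Sieve primes up to limit; return the (1 mod 4, 3 mod 4) counts at N = 100
    and the first N where the 1-mod-4 count overtakes the 3-mod-4 count
    (None if it never does). Assumes limit >= 100."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    c1 = c3 = 0
    crossover = None
    counts_at_100 = None
    for n in range(2, limit + 1):
        if sieve[n]:
            if n % 4 == 1:
                c1 += 1
            elif n % 4 == 3:
                c3 += 1
        if crossover is None and c1 > c3:
            crossover = n
        if n == 100:
            counts_at_100 = (c1, c3)
    return counts_at_100, crossover

counts, crossover = prime_counts_mod4(30000)
print(counts)     # (11, 13), matching the first row of the table
print(crossover)  # 26861
```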
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "612",
"answer_count": 41,
"answer_id": 25
} |
Is the sequence of continuous functions such that $\left\Vert f_{n}(x) -f_{m}(x) \right\Vert _{\infty}=1$ equicontinuous?
Consider a bounded sequence of continuous functions $f_n:\left[0,1\right]\to\mathbb{R}$ such that $$\left\Vert f_{n}-f_{m}\right\Vert _{\infty}=\sup_{x\in\left[0,1\right]}\left|f_{n}\left(x\right)-f_{m}\left(x\right)\right|=1$$ whenever $n\neq m$. Can such a sequence be equicontinuous?
I want to say no and my reasoning is that if you have such a sequence, then it must act like $\left\{x^n\right\}$ which can be shown to be not equicontinuous. However, I am not sure.
| Note, as Jose27 points out in the comments, that the boundedness assumption is not needed, as it is implied by the norm condition.
From the norm condition,
$$\tag{1}
\Vert f_n-f_m\Vert_\infty=1,\quad \text{whenever }\ n\ne m,
$$
it follows that
the sequence $\{f_n\}_{n=1}^\infty$ is uniformly bounded over $[0,1]$.
Equation (1) also implies that $\{f_n\}_{n=1}^\infty$ cannot be equicontinuous over $[0,1]$:
Suppose $\{f_n\}_{n=1}^\infty$ were equicontinuous over $[0,1]$. Then by the
Arzelà-Ascoli Theorem, there is a subsequence $\{f_{n_k}\}_{k=1}^\infty$ of $\{f_n\}_{n=1}^\infty$ that converges uniformly on $[0,1]$. In particular, $\{f_{n_k}\}_{k=1}^\infty$ is uniformly Cauchy. That is, for each $\epsilon>0$, there is a positive integer $N$ so that
$$
|f_{n_k}(x ) -f_{n_{l}}(x )|<\epsilon, \quad \text{for all }\ k,l\ge N\ \text{ and all }\ x\in[0,1].
$$
But, setting $\epsilon={1\over2}$ and fixing a positive integer $N$, we have by equation (1) the existence of some $x_{\scriptscriptstyle N}\in[0,1]$ with
$|f_{n_N}(x_{\scriptscriptstyle N}) -f_{n_{N+1}}(x_{\scriptscriptstyle N})|>{1\over2}$. As $N$ was arbitrary, this contradicts the fact that $\{f_{n_k}\}_{k=1}^\infty$ is uniformly Cauchy.
It follows that $\{f_n\}_{n=1}^\infty$ is not equicontinuous over $[0,1]$.
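For contrast, a concrete sequence satisfying the norm condition does exist (it is bounded and continuous, and by the argument above it is necessarily not equicontinuous). A sketch using tent functions of height $1$ with pairwise disjoint supports $[1/(n+1), 1/n]$:

```python
def tent(n):
    """Tent of height 1 supported on [1/(n+1), 1/n], zero elsewhere."""
    a, b = 1.0 / (n + 1), 1.0 / n
    m = (a + b) / 2.0
    def f(x):
        if x <= a or x >= b:
            return 0.0
        return (x - a) / (m - a) if x <= m else (b - x) / (b - m)
    return f

fs = {n: tent(n) for n in range(1, 6)}
peaks = {n: (1.0 / (n + 1) + 1.0 / n) / 2.0 for n in fs}
# Supports are disjoint, so sup|f_n - f_m| is attained at one of the two peaks.
for n in fs:
    for m in fs:
        if n != m:
            assert max(abs(fs[n](x) - fs[m](x)) for x in peaks.values()) == 1.0
print("all pairwise sup-norm distances equal 1")
```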
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
For $f$ Riemann integrable prove $\lim_{n\to\infty} \int_0^1x^nf(x)dx=0.$
Suppose $f$ is a Riemann integrable function on $[0,1]$. Prove that $\lim_{n\to\infty} \int_0^1x^nf(x)dx=0.$
This is what I am thinking: Fix $n$. Then by Jensen's Inequality we have $$0\leq\left(\int_0^1x^nf(x)dx\right)^2 \leq \left(\int_0^1x^{2n}dx\right)\left(\int_0^1f^2(x)dx\right)=\left(\frac{1}{2n+1}\right)\left(\int_0^1f^2(x)dx\right).$$Thus, if $n\to\infty$ then $$0\leq \lim_{n\to \infty}\left(\int_0^1x^nf(x)dx\right)^2 \leq 0$$ and hence we get what we want. How correct (or incorrect) is this?
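The displayed estimate (an instance of the Cauchy-Schwarz inequality) can be sanity-checked numerically; a sketch using a crude Riemann sum, with $f(x)=e^x$ as an arbitrary test function:

```python
import math

def riemann(g, n_pts=20_000):
    """Left Riemann sum of g over [0, 1]; crude but adequate here."""
    h = 1.0 / n_pts
    return sum(g(i * h) for i in range(n_pts)) * h

f = math.exp
vals = []
for n in (5, 50, 500):
    integral = riemann(lambda x: x ** n * f(x))
    bound = (riemann(lambda x: f(x) ** 2) / (2 * n + 1)) ** 0.5
    assert integral <= bound + 1e-6          # the Cauchy-Schwarz estimate
    vals.append((n, integral, bound))
    print(n, integral, bound)                # both columns shrink toward 0
```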
| Just so people can agree: Wikipedia states that a function is Riemann integrable if and only if it is bounded and continuous almost everywhere (just type Riemann integral on wiki). Since squaring is continuous, and composition with a continuous function preserves continuity at a point, $f^2$ is continuous almost everywhere as well; an obvious bound for $f^2$ is the bound for $f$, squared. The rest is taken care of by the OP's proof.
Hope that helps,
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Is $3^x \lt 1 + 2^x + 3^x \lt 3 \cdot 3^x$ right? Is $3^x \lt 1 + 2^x + 3^x \lt 3 \cdot 3^x$ right?
This is from my lecture notes which is used to solve:
But when $x = 0$, $(1 + 2^x + 3^x = 3) \gt (3^0 = 1)$? The thing is, how do I choose which expression should go on the left & right side?
| The inequality is, for $x>0$
$$3^x<1+2^x+3^x<3\cdot 3^x$$
When $x=0$, you obtain $1<1+1+1\le 3\cdot 1$ (I think you have a typo in the last part of the post).
But, this is of no concern. You are trying to find $\lim_{x\rightarrow\infty} (1+2^x+3^x)^{4/x}$. Since the limit is being taken at infinity, you are concerned only with values of $x$ that are large. Large in particular means eventually $x$ is positive, and you can use the inequality above (or, rather, the inequality following it in your post).
The Theorem being used is the following:
Suppose the inequality
$$\tag{1}
f(x)\le g(x)\le h(x)
$$
holds for all $x\ge a$, for some real number $a$.
If $\lim\limits_{x\rightarrow\infty} f(x)$ and $\lim\limits_{x\rightarrow\infty} h(x)$ exist and are equal with common value $L$, then $\lim\limits_{x\rightarrow\infty} g(x)$
exists and is equal to $L$.
So the Squeeze Theorem is valid whenever the required inequality holds for $x$ sufficiently large in the case where the limit is taken at infinity.
(If the limit is taken at $b$, then the inequality need only hold for $x$ near $b$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Smallest angle among two lines in $n \times n$ grid Given an $n \times n$ grid, what is the minimum angle between any two distinct lines, each going through some grid point $p$ and at least one other grid point?
My guess is the minimum is attained by the line through $(0,0), (n,1)$ and the line through $(0,0), (n,0)$ but how can I show that this is the optimal line pair?
| I think the smallest angle is between lines $(0,0)-(n-1, n)$ and $(0,0) - (n-2,n-1)$. For these two directional vectors we have
$$
\det \begin{pmatrix}
n-1 & n-2 \\
n & n-1
\end{pmatrix} = 1
$$
and so for the angle $\alpha$ we have
$$
\sin(\alpha) = \frac{1}{||(n-1, n)|| \cdot ||(n-2, n-1)||} < \frac{1}{2(n-1)^2}.
$$
Among all the vectors in the grid, only the following are longer than $(n-2, n-1)$:
$$
(n-3,n), (n, n-3), (n-2, n), (n, n-2), (n-1, n), (n, n-1), (n-1, n-1), (n,n)
$$
The pair given above has the smallest (positive) angle in this set.
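The claim can be checked by brute force for small $n$ (a sketch; it enumerates all integer direction vectors with entries in $[-n, n]$ and minimizes the angle over non-parallel pairs):

```python
import math
from itertools import product

def min_grid_angle(n):
    """Smallest positive angle between lines with integer direction vectors
    whose entries lie in [-n, n]."""
    dirs = [d for d in product(range(-n, n + 1), repeat=2) if d != (0, 0)]
    best = math.pi
    for i, (a, b) in enumerate(dirs):
        for c, d in dirs[i + 1:]:
            cross = a * d - b * c
            if cross == 0:
                continue                           # parallel directions
            ang = abs(math.atan2(cross, a * c + b * d))
            best = min(best, ang, math.pi - ang)   # lines are undirected
    return best

n = 5
expected = math.asin(1 / (math.hypot(n - 1, n) * math.hypot(n - 2, n - 1)))
assert abs(min_grid_angle(n) - expected) < 1e-12
print(min_grid_angle(n))
```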
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
If $A,B\in M(2,\mathbb{F})$ and $AB=I$, then $BA=I$ This is Exercise 7, page 21, from Hoffman and Kunze's book.
Let $A$ and $B$ be $2\times 2$ matrices such that $AB=I$. Prove that
$BA=I.$
I wrote $BA=C$ and I tried to prove that $C=I$, but I got stuck on that. I am supposed to use only elementary matrices to solve this question.
I know that there is this question, but in those answers they use more than I am allowed to use here.
I would appreciate your help.
| Here we explain how to derive entries of $B$ from the entries of $A$. As the book explains in the beginning of page 19 (section 1.5), if we let $B_1$ and $B_2$ denote the first and second columns of the matrix $B$ then one can write $AB = [AB_1,AB_2]$. Hence AB = I if and only if
$AB_1=\left[\begin{array}{c}
1\\0\\\end{array}\right]$
and
$AB_2=\left[\begin{array}{c}
0\\1\\\end{array}\right]$. So we know that the two systems of two linear equations in two unknowns $AX=\left[\begin{array}{c}
1\\0\\\end{array}\right]$ and $AY=\left[\begin{array}{c}
0\\1\\\end{array}\right]$ both have solutions. We would like to prove that the only solutions are $B_1$ and $B_2$, and we would like to find these solutions in terms of the entries of $A$.
If we let
$A=
\left[ \begin{array}{cc}
a & b \\
c & d \end{array} \right]$ then the two equations can be written as
$\left[ \begin{array}{ccc}
a & b & 1\\
c & d & 0 \end{array} \right]$
and
$\left[ \begin{array}{ccc}
a & b & 0\\
c & d & 1 \end{array} \right]$. Since both these equations have solutions ($B_1$ and $B_2$), then the equations
$\left[\begin{array}{ccc}
ad-bc & 0 & d\\
0 & ad-bc & -c \end{array}\right]$
and $\left[ \begin{array}{ccc}
ad-bc & 0 & -b\\
0 & ad-bc & a \end{array} \right]$, which are linear combinations of the original equations, also have solutions (section 1.2 of the book). Now, based on the explanation on page 14, which gives the conditions under which non-homogeneous systems of equations have solutions, if $ad-bc=0$ then $a=b=c=d=0$. This is a contradiction, since if $A=0$ then $AB=0B=0\neq I$. So we can assume that $ad-bc \neq 0$. This immediately proves that the two equations have unique solutions, so $B$ can only be of the following form
$$\left[ \begin{array}{cc}
\frac{d}{ad-bc} & \frac{-b}{ad-bc} \\
\frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{array} \right].$$
Now using this form for $B$, one can verify that $BA = I$.
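Both products can be verified with exact rational arithmetic; a sketch with an arbitrary sample matrix (any $a, b, c, d$ with $ad - bc \neq 0$ would do):

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = Fraction(2), Fraction(7), Fraction(1), Fraction(5)  # ad - bc = 3
det = a * d - b * c
A = [[a, b], [c, d]]
B = [[d / det, -b / det], [-c / det, a / det]]   # the form derived above
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

assert mat_mul(A, B) == I and mat_mul(B, A) == I
print("AB = BA = I")
```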
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
Independence of Axioms in an axiomatic system How do we show that we are using independent axioms in an axiomatic systems i.e
*
*$A\rightarrow (B \rightarrow A)$
*$(A\rightarrow (B\rightarrow C)) \rightarrow ((A\rightarrow B)\rightarrow (A\rightarrow C))$
*$(\lnot A\rightarrow \lnot B)\rightarrow (B\rightarrow A)$
I think I know how to show that the third is independent of the first two, we can take $\lnot \phi = \phi$ and then the first two are still valid but the third is not, but I'm not sure how to go about doing this for the other axioms.
Thanks for any help.
| OK so here goes,
To show that these three axioms are all independent we want to construct an interpretation that shows that two of the axioms are still valid but the third is not (as said in the comments). The first of these will just use two truth values (T,F) and the rest will use three (T,F,A).
To show that A3) is independent from A1) and A2)
This is the simplest case: we simply let our interpretation of $\lnot\phi$ be the same as that of $\phi$. In this case A3) is no longer valid (taking $A=F$ and $B=T$ falsifies it), but A1) and A2) obviously still are (they don't contain a negation).
To show that A2) is independent from A1) and A3)
$\begin{array}{|c|c|c|}
\hline
A & B & A\rightarrow B \\ \hline
T & T & T \\ \hline
T & A & A \\ \hline
T & F & F \\ \hline
A & T & T \\ \hline
A & A & T \\ \hline
A & F & A \\ \hline
F & T & T \\ \hline
F & A & T \\ \hline
F & F & T \\ \hline
\end{array}$
$\begin{array}{|c|c|}
\hline
A & \lnot A \\ \hline
T & F \\ \hline
A & A \\ \hline
F & T \\ \hline
\end{array}$
We can now see that under this new interpretation (the two tables above), A1) and A3) are still valid but A2) is not.
Showing that A1) is independent from A2) and A3)
We use the same argument as above but with the first table slightly different:
$\begin{array}{|c|c|c|}
\hline
A & B & A\rightarrow B \\ \hline
T & T & T \\ \hline
T & A & F \\ \hline
T & F & F \\ \hline
A & T & T \\ \hline
A & A & T \\ \hline
A & F & F \\ \hline
F & T & T \\ \hline
F & A & T \\ \hline
F & F & T \\ \hline
\end{array}$
$\begin{array}{|c|c|}
\hline
A & \lnot A \\ \hline
T & F \\ \hline
A & A \\ \hline
F & T \\ \hline
\end{array}$
So we can see that A1) is independent from A2) and A3), as A2) and A3) are still valid here but A1) is not.
We have now shown that these three axioms are independent from each other. (We should also notice that modus ponens is preserved under these two new interpretations)
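These table checks are mechanical, so they are easy to script. A sketch verifying the first pair of tables (the interpretation showing A2 independent of A1 and A3), with T as the designated value:

```python
from itertools import product

# Implication and negation tables of the first interpretation above.
IMP = {('T', 'T'): 'T', ('T', 'A'): 'A', ('T', 'F'): 'F',
       ('A', 'T'): 'T', ('A', 'A'): 'T', ('A', 'F'): 'A',
       ('F', 'T'): 'T', ('F', 'A'): 'T', ('F', 'F'): 'T'}
NEG = {'T': 'F', 'A': 'A', 'F': 'T'}

def imp(x, y):
    return IMP[(x, y)]

def valid(axiom, arity):
    """Valid = evaluates to the designated value T under every assignment."""
    return all(axiom(*v) == 'T' for v in product('TAF', repeat=arity))

A1 = lambda a, b: imp(a, imp(b, a))
A2 = lambda a, b, c: imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))
A3 = lambda a, b: imp(imp(NEG[a], NEG[b]), imp(b, a))

assert valid(A1, 2) and valid(A3, 2) and not valid(A2, 3)
print(A2('A', 'A', 'F'))  # 'A': a concrete counterexample to A2
```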
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/111838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Can there exist a non-constant continuous function that has a derivative of zero everywhere? Somebody told me that there exists a continuous function with a derivative of zero everywhere that is not constant.
I cannot imagine how that is possible and I am starting to doubt whether it's actually true. If it is true, could you show me an example? If it is not true, how would you go about disproving it?
| If it's differentiable at every point, then this can't happen. This follows from the mean value theorem:
If $f(x)$ is continuous on $[a,b]$ and differentiable on $(a,b)$, then for at least one point $c$ between $a$ and $b$, we have $$f'(c) = \dfrac{f(b)-f(a)}{b-a}.$$
If your $f(x)$ is not constant, but is differentiable everywhere, pick an $a$ and $b$ with $f(a)\neq f(b)$. By the MVT, we have $$f'(c) = \dfrac{f(b)-f(a)}{b-a} \neq 0$$ since $f(b) \neq f(a)$.
On the other hand, if you assert your function is differentiable only "almost everywhere" instead of "everywhere" (in a sense which can be made precise) and that the derivative "almost everywhere" is equal to $0$, then this can happen. The standard example is Cantor's function (also known as the Devil's Staircase). See http://en.wikipedia.org/wiki/Cantor_function.
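For the curious, the Cantor function can be approximated with the standard ternary-digit algorithm (a sketch; `depth` controls precision, and the clamp guards against floating-point edge cases):

```python
def cantor(x, depth=60):
    """Approximate Cantor function on [0, 1]: copy ternary digits (halved, with
    digit 2 mapped to binary digit 1) until the first digit 1, which adds 1/2."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    s, p = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        d = min(int(x), 2)   # clamp in case rounding pushes x*3 up to 3.0
        x -= d
        if d == 1:
            return s + p
        s += p * (d // 2)
        p /= 2.0
    return s

assert cantor(1 / 3) == 0.5 and cantor(2 / 3) == 0.5   # flat on the middle third
assert abs(cantor(0.25) - 1 / 3) < 1e-12                # known value c(1/4) = 1/3
```

On the removed middle-third intervals the function is locally constant, which is why its derivative vanishes almost everywhere.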
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Simplest Example of a Poset that is not a Lattice A partially ordered set $(X, \leq)$ is called a lattice if for every pair of elements $x,y \in X$ both the infimum and suprememum of the set $\{x,y\}$ exists. I'm trying to get an intuition for how a partially ordered set can fail to be a lattice. In $\mathbb{R}$, for example, once two elements are selected the completeness of the real numbers guarantees the existence of both the infimum and supremum. Now, if we restrict our attention to a nondegenerate interval $(a,b)$ it is clear that no two points in $(a,b)$ have either a suprememum or infimum in $(a,b)$.
Is this the right way to think of a poset that is not a lattice? Is there perhaps a more fundamental example that would yield further clarity?
| Here is a simple example: consider the set $\{0,1,2\}$ with the order
$$\{(0,0),(1,1),(2,2),(0,1),(0,2)\}$$
this is indeed a poset (the verification is simple), but it is not a lattice because the supremum of $1$ and $2$, i.e. $1 \lor 2$, does not exist.
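The failure is small enough to confirm by exhaustive search; a sketch storing the order relation as a set of pairs:

```python
X = {0, 1, 2}
LE = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}   # the order relation above

def supremum(a, b):
    """Least upper bound of {a, b} in (X, LE), or None if it does not exist."""
    ubs = {u for u in X if (a, u) in LE and (b, u) in LE}
    least = [u for u in ubs if all((u, v) in LE for v in ubs)]
    return least[0] if least else None

assert supremum(1, 2) is None   # 1 v 2 fails: {1, 2} has no upper bound at all
assert supremum(0, 1) == 1      # but some pairs do have suprema
print("not a lattice")
```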
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 6,
"answer_id": 2
} |
Find a maximum of difference without sorting. Suppose that I have an unsorted array of real numbers $\{a_i\}_{i=1}^n$. We sorted it in ascending order and now we have array $\{b_i\}_{i=1}^n$. Let $N=\max\limits_{i=\overline{1..n-1}} |b_{i+1}-b_{i}|$
I want to know $N$, but if I will sort the array $\{a_i\}_{i=1}^n$ and then will find $N$, the complexity of all operation will be $\mathcal{O}\left(n\log n \right)$. It isn't fast enough.
Is it possible to find better way, without sorting?
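For reference, the sort-based baseline the question hopes to beat is only a few lines (a sketch):

```python
def max_gap(a):
    """Largest gap between consecutive elements of sorted(a)."""
    if len(a) < 2:
        return 0.0
    b = sorted(a)                                   # O(n log n) dominates
    return max(b[i + 1] - b[i] for i in range(len(b) - 1))

assert max_gap([3.0, 1.0, 7.0, 2.0]) == 4.0         # sorted: 1, 2, 3, 7
```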
| In the Algebraic Decision Tree model, we can prove an $\Omega(n\log n)$ lower bound for your problem.
This follows because the Element Distinctness Problem has the same lower bound and can be reduced to your problem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Velocity, Radius and Angular Speed - Finding Units I have been trying to figure this one out for two days, and I really have no idea what to do.
To begin with, I need to know what units will be produced in the following situations:
*
*Velocity's units when angular speed and radius are rpm and feet, respectively. (I think it is feet/minute)
*Anuglar speed's units when radius and velocity are foot and feet per minute, respectively. (It stays as feet/minute?)
*Radius' units when velocity and angular speed are mph and rpm, respectively. (I am guessing it will come out as miles, according to what my teacher tried to communicate)
Those are just a few issues, but there are more. I believe that, if I am correct/find why I am incorrect, I will be able to apply the concept to the other problems.
The second problem I am having is that I have a problem where:
Velocity is 50 feet/second, Angular Speed is 100 revolutions/second and I need to find the radius in miles.
According to what I know, velocity is angular speed times theta, so I just have to divide by angular speed. However, I do not know where to go when there are different units.
I know it is a lot of text, but I appreciate anyone who can help me; my teacher said she is preoccupied and cannot answer any more of my questions today, so I'm stuck.
| Angular speed has to be some measure of angle divided by some measure of time. In your second bullet point it will probably be revolutions per minute (rpm).
You presumably have some formula saying velocity is related to the product of radius and angular speed. Here you probably have $$50 \text{ feet/second} = r \times 2\pi \text{ radians/revolution} \times 100 \text{ revolutions/second}$$ which gives a radius of $r = 0.25/\pi \text{ feet}$, not very much. If you must convert it into miles, then use $1 \text{ mile} = 5280 \text{ feet}$, though that will give you an even smaller number.
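The same computation, with the unit conversions carried explicitly (a sketch):

```python
import math

v = 50.0                                # feet / second
omega = 100.0 * 2 * math.pi             # rev/s -> rad/s (2*pi radians per revolution)

r_feet = v / omega                      # v = r * omega  =>  r = v / omega
r_miles = r_feet / 5280.0               # 1 mile = 5280 feet

assert abs(r_feet - 0.25 / math.pi) < 1e-12
print(r_feet, r_miles)                  # about 0.0796 feet, i.e. roughly 1.5e-5 miles
```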
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Translating intuition into rigor. The chain rule. When considering two functions $f(x)$ and $g(x)$, it is known that
$$\left(f\circ g(x)\right)' = f'\circ g(x)\cdot g'(x)$$
So my intuitive approach is:
$$\mathop {\lim }\limits_{\Delta x \to 0} \frac{{f\left( {g\left( {x + \Delta x} \right)} \right) - f\left( {g\left( x \right)} \right)}}{{\Delta x}}$$
$$\mathop {\lim }\limits_{\Delta x \to 0} \frac{{f\left( {g\left( {x + \Delta x} \right)} \right) - f\left( {g\left( x \right)} \right)}}{{g\left( {x + \Delta x} \right) - g\left( x \right)}}\frac{{g\left( {x + \Delta x} \right) - g\left( x \right)}}{{\Delta x}}$$
Put $g\left( {x + \Delta x} \right) - g\left( x \right) = \Delta g\left( x \right)$
$$\mathop {\lim }\limits_{\Delta x \to 0} \frac{{f\left( {g\left( x \right) + \Delta g\left( x \right)} \right) - f\left( {g\left( x \right)} \right)}}{{\Delta g\left( x \right)}}\frac{{g\left( {x + \Delta x} \right) - g\left( x \right)}}{{\Delta x}}$$
So I guess the problem boils down to translating how $\Delta x \to 0 \Rightarrow \Delta g\left( x \right) \to 0$ and to address ${\Delta g\left( x \right)}$'s behaviour.
The last intuition is to recklessly write
$$g\left( {x + \Delta x} \right) - g\left( x \right) = \Delta g$$
and put
$$\mathop {\lim }\limits_{\Delta x \to 0} \frac{{\Delta f\left( {g\left( x \right)} \right)}}{{\Delta g\left( x \right)}}\frac{{\Delta g\left( x \right)}}{{\Delta x}}$$
which is the idea behind
$$\frac{{df}}{{dx}} = \frac{{df}}{{dg}}\frac{{dg}}{{dx}}$$
| The chain rule is very simple, if you use the correct definition of the derivative. The derivative $f'(x)$ is a function such that
$$f(x + \epsilon) = f(x) + \epsilon f'(x) + o(\epsilon).$$
If you don't know what the "little oh" notation means, think of it as
$$f(x + \epsilon) \approx f(x) + \epsilon f'(x).$$
Similarly,
$$g(x + \epsilon) \approx g(x) + \epsilon g'(x).$$
Therefore, using continuity,
$$f(g(x+\epsilon)) \approx f(g(x) + \epsilon g'(x)) \approx f(g(x)) + \epsilon g'(x) f'(g(x)).$$
We get the chain rule:
$$(f \circ g)'(x) = f'(g(x)) g'(x).$$
The only non-trivial part is
$$y \approx z \Longrightarrow f(y) \approx f(z), $$
which is a statement of continuity.
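A finite-difference check of the resulting rule, with the arbitrary choices $f = \sin$ and $g(x) = x^2$ (a sketch):

```python
import math

f, fp = math.sin, math.cos                    # f and its derivative
g, gp = (lambda x: x * x), (lambda x: 2 * x)  # g and its derivative

def numeric_derivative(h, x, eps=1e-6):
    """Central difference approximation of h'(x)."""
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x = 1.3
chain = fp(g(x)) * gp(x)                          # f'(g(x)) * g'(x)
approx = numeric_derivative(lambda t: f(g(t)), x)
assert abs(chain - approx) < 1e-6
print(chain, approx)
```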
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Shorthand notation for "increases" and "decreases" I want to write out something like:
"As $x$ increases, $y$ decreases."
Is there a standard symbolic notation for this, such as an up arrow and a down arrow? (And if you can tell me how to write it in latex, that would be awesome, too).
Thanks!
| Inverse proportionality means that
$y=\frac{k}{x}$ for some constant $k$.
If (as usual) the constant $k$ is positive, then (if $x$ ranges over positive numbers), as $x$ increases, indeed $y$ decreases.
However, there are many other ways that $y$ can decrease as $x$ increases. For example, we could have
$$y=\frac{1}{\sqrt{x}},$$
or
$$y=e^{-x}.$$
There is no really standard symbolic notation for this, but sometimes arrows are used, as in "as $x\uparrow$, $y\downarrow$." I have also seen slanted arrows used instead, but the standard LaTeX slanted arrows are longer than the arrows I remember seeing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Solving the system $\sum \sin = \sum \cos = 0$. Can we solve the system of equations:
$$\sin \alpha + \sin \beta + \sin \gamma = 0$$
$$\cos \alpha + \cos \beta + \cos \gamma = 0$$
?
(i.e. find the possible values of $\alpha, \beta, \gamma$)
| Here's an algebraic proof: write $z_k = e^{i \alpha_k}$. Then your equations are equivalent to $z_1 + z_2 + z_3 = 0$. Write $\theta = \frac{\alpha_1 + \alpha_2 + \alpha_3}{3}$ and $z = e^{i \theta}$.
Expand the polynomial $P = (X-z_1)(X-z_2)(X-z_3)$. The $X^2$ term is $0$ by hypothesis, and the $X$ term can be written as $z_1 z_2 z_3 (z_1^* + z_2^* + z_3^*) = 0$. So $P = X^3 - z_1 z_2 z_3 = X^3 - z^3$, and the roots are $z \cdot e^{2i k \pi /3}$.
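Numerically, the conclusion says the three angles are $\theta$, $\theta + 2\pi/3$, $\theta + 4\pi/3$ for an arbitrary $\theta$; a quick check (a sketch):

```python
import math

theta = 0.7  # arbitrary
angles = [theta + 2 * k * math.pi / 3 for k in range(3)]

sin_sum = sum(math.sin(t) for t in angles)
cos_sum = sum(math.cos(t) for t in angles)
assert abs(sin_sum) < 1e-12 and abs(cos_sum) < 1e-12
print(sin_sum, cos_sum)
```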
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
$L^p$ and $L^\infty$ So I am trying to prove that for a set $E$ of finite measure, and for $1 \leq p < \infty$, $||f||_p \leq (m(E))^{1 - 1/p}||f||_{\infty}$. But I think I have proved the wrong thing. Can you help me see where I went wrong?
My proof is something like
$$||f||_p =\left(\int_E |f|^p\right)^{1/p} \leq \left(\int_E ||f||_{\infty}^p\right)^{1/p} = \left(||f||_{\infty}^p \int_E 1\right)^{1/p} = ||f||_{\infty} (m(E))^{1/p},$$
which is not what was asked for in the problem.
Thanks!
| What you get is true, but it is not the wanted inequality. However, assuming that $f\in L^{\infty}$, you can write $|f|^p=|f|^{p-1}|f|\leq ||f||_{\infty}^{p-1}|f|$ and then apply Hölder's inequality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What are the possible values for $\gcd(a^2, b)$ if $\gcd(a, b) = 3$? I was looking back at my notes on number theory and I came across this question.
Let $a$, $b$ be positive integers such that $\gcd(a, b) = 3$. What are the possible values for $\gcd(a^2, b)$?
I know it has to do with their prime factorization decomposition, but where do I go from here?
| Hint $\rm\: \ (a,b)\ |\ (a^2,b)\ |\ (a,b)^2\ $ since $\rm\ (a^2,b)\ |\ a^2,ab,b^2\ \Rightarrow\ (a^2,b)\ |\ (a^2,ab,b^2) = (a,b)^2 $
Therefore $\rm\:3\ |\ (a^2,b)\ |\ 9\:$ when $\rm\:(a,b) = 3\ \ $ QED
Or: $\:$ let $\rm\:a = 3^K A,\:\ b = 3^N B,\ \ 3\nmid A,B.\ \ $ $\rm(a,b)=3\ \Rightarrow \ (A,B) = 1\:$ and $\rm\:\min(K,N) = 1.\:$ Thus
$$\rm (a^2,b) = (3^{2K} A^2, 3^N B) = (3^{2K},3^N)(A^2,B) = (3^{2K},3^N) = 3\: \ if\: \ N\! =\! 1\: \ else \ 9\ ( N \ge 2\Rightarrow K = 1)$$
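A brute-force scan agrees that $3$ and $9$ are the only possible values (a sketch; the search bound 200 is arbitrary):

```python
from math import gcd

values = {gcd(a * a, b)
          for a in range(1, 200)
          for b in range(1, 200)
          if gcd(a, b) == 3}
assert values == {3, 9}
print(values)
```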
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
On the factorial equations $A! B! =C!$ and $A!B!C! = D!$ I was playing around with hypergeometric probabilities when I wound myself calculating the binomial coefficient $\binom{10}{3}$. I used the definition, and calculating in my head, I simplified to this expression before actually calculating anything
$$
\frac {8\cdot9\cdot10}{2\cdot3} = 120
$$
And then it hit me that $8\cdot9\cdot10 = 6!$ and I started thinking about something I feel like calling generalized factorials, which is just the product of a number of successive naturals, like this
$$
a!b = \prod_{n=b}^an = \frac{a!}{(b-1)!},\quad a, b \in \mathbb{Z}^+, \quad a\ge b
$$
so that $a! = a!1$ (the notation was invented just now, and inspired by the $nCr$-notation for binomial coefficients). Now, apart from the trivial examples $(n!)!(n!) = n!$ and $a!1 = a!2 = a!$, when is the generalized factorial a factorial number? When is it the product of two (non-trivial) factorial numbers? As seen above, $10!8$ is both.
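The observation that opens the question is quick to verify once $a!b$ is written as a function (a sketch of the just-invented notation):

```python
from math import factorial

def gfact(a, b):
    """The question's a!b: the product b * (b+1) * ... * a, i.e. a!/(b-1)!."""
    return factorial(a) // factorial(b - 1)

assert gfact(10, 8) == 8 * 9 * 10 == factorial(6)    # 10!8 = 720 = 6!
assert factorial(10) == factorial(7) * factorial(6)  # ... and 10! = 7! 6!
```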
| Consider the general equation $$a_1!a_2!\cdots a_n! = b!.$$
People like Paul Erdős worked on this kind of equation, so these are rather serious problems. Even for the simplest case, $n=2$, namely the equation $A!B!=C!$, we don't know whether there is any non-trivial solution other than $10!=7!6!$.
Erdős proved that in the equation $A!B!=C!$, if we assume $1< A \leq B$, then for large enough values of $C$, the difference of $C$ and $B$ does not exceed $5 \log \log C$. It is clear that $5 \log \log C$ grows to infinity as $C$ increases. But in fact, this is a very slowly growing function.
For instance, the first $C$ for which $5 \log \log C = 50$ has more than $9566$ digits. Caldwell proved (1994) that the only non-trivial solution with $C < 10^6$ is $10!=7!6!$. Florian Luca proved in this paper (2007) that, assuming the $abc$ conjecture, we have $C-B=1$ for large enough $C$. Read also a remark (2009) on this paper if you're interested.
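Caldwell-style verification is easy to reproduce for a modest bound; a sketch with $C \le 130$, which runs instantly yet catches the first members of the trivial family $n!\,(n!-1)! = (n!)!$:

```python
from math import factorial

def factorial_solutions(c_max):
    """All (A, B, C) with 1 < A <= B < C <= c_max and A! * B! = C!."""
    is_factorial = {factorial(k): k for k in range(c_max + 1)}
    sols = []
    for c in range(3, c_max + 1):
        fc = factorial(c)
        for b in range(2, c):
            q = fc // factorial(b)           # C!/B! is always an integer here
            a = is_factorial.get(q)
            if a is not None and 1 < a <= b:
                sols.append((a, b, c))
    return sols

sols = factorial_solutions(130)
print(sols)
# Trivial family (n, n!-1, n!) for n = 3, 4, 5, plus the lone non-trivial (6, 7, 10).
assert set(sols) == {(3, 5, 6), (4, 23, 24), (5, 119, 120), (6, 7, 10)}
```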
Suppose that $P(m)$ denotes the largest prime factor of $m$. Erdős also showed in one of his many papers that the assertion
$$
P(n(n+1)) > 4 \log n
$$
would imply that the general equation $(1)$ has finitely many non-trivial solutions.
The last paragraph is taken from a recent work of Hajdu, Papp, and Szakács, who proved in this paper (2018) that, writing $k=B-A$, for all non-trivial solutions of the equation $A!B!=C!$ different from $(A, B, C) =(6, 7, 10)$ we have $C<5k$. Further, if $k \leq 10^6$, then the only non-trivial solution is given by $(A, B, C) =(6, 7, 10)$. I presented their paper in our MathSciNet seminar at UBC and you can find the slides here.
More references:
*
*Discussion on ArtofProblemSolving
*Luca, Florian. "The Diophantine equation $P (x)= n!$ and a result of M. Overholt." Glasnik matematički 37.2 (2002): 269-273.
*Berend, Daniel, and Jørgen Harmse. "On polynomial-factorial diophantine equations." Transactions of the American Mathematical Society 358.4 (2006): 1741-1779.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 0
} |
Supersolvable groups I would like to find an exact sequence $1\rightarrow N\rightarrow G\rightarrow Q\rightarrow 1$ of groups where $N,Q$ are supersolvable and $G$ is not. I found one, but in my example $G$ is finite. I was wondering if you have a nice example with $G$ an infinite group; I tried with a lot of groups without finding one. Do you have any idea?
EDIT: Suppose that $G$ is a finite non-supersolvable group and we have an exact sequence $1\rightarrow K\rightarrow G\rightarrow G/K\rightarrow 1$ with $K$ and $G/K$ supersolvable. Define $G^\prime=G\times\mathbb{Z}$; then $G^\prime$ is not supersolvable, because if I had a normal-cyclic series for $G^\prime$ then I could project it onto $G$ and obtain a normal-cyclic series for $G$, a contradiction. So I can take $1\rightarrow K\times\{0\}\rightarrow G^\prime\rightarrow G^\prime/(K\times\{0\})\rightarrow1$. Now $K\times\{0\}$ is supersolvable, and $G^\prime/(K\times\{0\})\cong (G/K)\times\mathbb{Z}$ is supersolvable. Am I right?
| If you want a more essentially infinite example, you could take a semidirect product of ${\mathbb Z}^2$ by ${\mathbb Z}$ with an irreducible action. For example $\langle x,y,z \mid xy=yx, z^{-1}xz=y, z^{-1}yz=xy \rangle$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the significance of theoretical linear algebra in machine learning/computer vision research? I am a computer science research student working in application of Machine Learning to solve Computer Vision problems.
Since a lot of linear algebra (eigenvalues, SVD, etc.) comes up when reading Machine Learning/Vision literature, I decided to take a linear algebra course this semester.
Much to my surprise, the course didn't look at all like Gilbert Strang's Applied Linear algebra(on OCW) I had started taking earlier. The course textbook is Linear Algebra by Hoffman and Kunze. We started with concepts of Abstract algebra like groups, fields, rings, isomorphism, quotient groups etc. And then moved on to study "theoretical" linear algebra over finite fields, where we cover proofs for important theorms/lemmas in the following topics:
Vector spaces, linear span, linear independence, existence of basis.
Linear transformations. Solutions of linear equations, row reduced
echelon form, complete echelon form,rank. Minimal polynomial of a
linear transformation. Jordan canonical form. Determinants.
Characteristic polynomial, eigenvalues and eigenvectors. Inner product
space. Gram Schmidt orthogonalization. Unitary and Hermitian
transformations. Diagonalization of Hermitian transformations.
I wanted to understand if there is any significance/application of understanding these proofs in machine learning/computer vision research or should I be better off focusing on the applied Linear Algebra?
| Personally, I think linear algebra will be very important for your research in machine learning, because it provides not only the best interpretation of many real problems, but also easy solutions to them, such as linear regression and linear classification models. In addition, with a knowledge of linear algebra it will be easier to study other subjects, including statistics, which is important for machine learning.
Maybe, I can recommend a book for you on linear algebra, "Linear Algebra Done Right" edited by Sheldon Axler.
Best!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Galois covers of Riemann surfaces Let $G$ be a finite abelian group, and $C$ a compact Riemann surface (algebraic curve) of genus $g$. I am interested in topological Galois $G$-covers $X \to C$, aka \'etale $G$-principal bundles over $C$.
Let us write the abelian group as a direct sum of cyclic groups of order $d_1, \ldots, d_k$ with $d_1|d_2|\ldots|d_k$.
Is it true, and why, that if $k>2g$, then $X$ must be disconnected?
| By the Galois correspondence for covering spaces, you will have a connected topological $G$-cover $X \to C$ precisely if you can find a surjective morphism $$\pi_1 C = \langle a_1, b_1, \ldots, a_g, b_g \, | \, [a_1,b_1]\cdots[a_g,b_g]\rangle \to G.$$ (The complex/algebraic structure is irrelevant here.)
As $\pi_1 C$ is generated by $2g$ elements, any such $G$ (abelian or not) must also be generated by $2g$ elements. If $G$ is abelian, it is then a quotient of $\mathbb Z^{2g}$, and the classification of finitely generated abelian groups tells you that this can only happen if its canonical decomposition
$G = \oplus_{i=1}^r \mathbb Z/d_i$ (canonical means $d_i|d_{i+1}$, and we know that such a decomposition is unique) has at most $2g$ factors.
So yes, it is true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/112901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$f(x)$ is positive, continuous, monotone and integrable in (0,1]. Is $\lim_{x \rightarrow 0} xf(x) = 0$? I'm having trouble with this question from an example test.
We have a positive function $f(x)$ that's monotone, continuous and integrable in $(0,1]$. Is $\lim_{x \rightarrow 0} xf(x) = 0$?
Progress
The only problematic case seems to be when $f(x)$ is unbounded and monotonic decreasing. For that case, I found out that $xf(x)=\int_{0}^{x} f(x)dt$ and that $0\leq xf(x)\leq \int_{0}^{x} f(t)dt$. From here I'm not sure how to go on.
Thanks!
| If $f$ is improperly integrable over $[0,1]$, then $\lim\limits_{a\rightarrow0^+}\int_a^1 f(x)\,dx=L$ for some finite number $L$.
Note, for any $b$ in $(0,1)$:
$$\eqalign{
L =\int_0^1f(x)\,dx = \int_0^bf(x)\,dx +\int_b^1f(x)\,dx\cr
}
$$
Now, letting $b\rightarrow0^+$, we have that $ \int_b^1f(x)\,dx$ converges to $L$, which implies that
$$
\lim_{b\rightarrow0^+}\int_0^b f(x)\,dx=0.
$$
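Combining this with the bound $0\leq xf(x)\leq \int_{0}^{x} f(t)\,dt$ from the question gives the limit. As a numerical sanity check (not part of the proof), here is the mechanism at work for one monotone decreasing, improperly integrable example, $f(x)=x^{-1/2}$:

```python
import math

# A positive, decreasing, improperly integrable example on (0, 1]:
# f(x) = x**(-1/2), whose integral over (0, x) is exactly 2*sqrt(x).
def f(x):
    return x ** -0.5

for x in [1e-2, 1e-4, 1e-6, 1e-8]:
    head = 2 * math.sqrt(x)          # exact value of the integral of f over (0, x)
    assert 0 <= x * f(x) <= head     # the squeeze from the question
    print(x, x * f(x), head)         # both columns shrink to 0 with x
```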
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
Indian claims finding new cube root formula Indian claims finding new cube root formula
It has eluded experts for centuries, but now an Indian, following in the footsteps of Aryabhatt, one of the earliest Indian mathematicians, claims to have worked out a simple formula to find any number's cube root.
Nirbhay Singh Nahar, a retired chemical engineer and an amateur mathematician, claims he has found a formula that will help students and applied engineers to work out the cube roots of any number in a short time.
"Give me any number - even, odd, decimals, a fraction...and I will give you the cube root using a simple calculator to just add and subtract within a minute and a half. We do have methods and patterns, but no formula at the moment. Even the tables give cube roots of 1 to 1,000, not of fractions or of numbers beyond 1,000, for which people have to use scientific calculators," Nahar, who retired as an engineer from Hindustan Salts Ltd at Sambhar (Rajasthan), said.
Is there any sense to this claim? Is it possible to have an algorithm that gives the cube root which uses only additions and subtractions?
| Unless there is something lost in translation, the claims in the article are inconsistent...
Even the tables give cube roots of 1 to 1,000, not of fractions or of numbers beyond 1,000, for which people have to use scientific calculators," Nahar, said.
While Newton's formula arrives at an approximation, Nahar claims his formula leads to direct and perfect value, and no approximation.
So, he claims to be able to do this for any number, including fractions, and get the perfect value, not an approximation.... Great, so he is able to calculate all the digits of $\sqrt[3]{2}$....
Judging by this, I have big doubts.
Of course it matters what he means by a formula; technically a formula of the type $x=10^{\frac{\log_{10} x}{3}}$ is simple enough and easy to use if one uses a logarithmic scale for the real numbers...
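To illustrate that last point, here is a minimal sketch (my own, not Nahar's unpublished method) of the logarithm route: with a table of base-10 logarithms, computing a cube root reduces to one lookup, one division by 3, and one inverse lookup.

```python
import math

def cbrt_via_logs(x):
    """Cube root of a positive number via the identity x**(1/3) = 10**(log10(x)/3)."""
    return 10 ** (math.log10(x) / 3)

# works for any positive x, integer or fractional
for x in [8, 27, 2, 0.125, 1234.5]:
    r = cbrt_via_logs(x)
    assert abs(r ** 3 - x) < 1e-9 * max(1.0, x)
```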
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
$f(x)$ is linear if and only if $f(x) + f(x - a - b) = f(x - a) + f(x - b)$ I'm trying to show that a function $f:\mathbb{R} \rightarrow \mathbb{R}$ is linear if and only if $$f(x) + f(x - a - b) = f(x - a) + f(x - b)$$ for all $a, b, x\in\mathbb{R}$.
The forward direction is trivial and I'm having some problems with the backwards direction. I've thought of using the continuous Cauchy functional equation $$f(x + y) = f(x) + f(y) \Longrightarrow f(x) = cx$$
but I'm having trouble with getting the equation into a usable form and also I'm not sure how to show that the function is continuous.
Any help would be appreciated.
Edit Linear in the sense of $ax + b$ for some constants $a, b$.
| I assume by linear you mean "of the form ax+b", as otherwise the statement is false (see Brett's answer). Suppose that $f$ satisfies $$f(x) + f(x - a - b) = f(x - a) + f(x - b)$$ but is not linear, so we have some $x,y$ such that $f(x)+f(y)\neq f(x+y)+f(0)$. Then $$f(x+y)+f(0)\neq f(x)+f(x-x-(-y))= f(x-x)+f(x-(-y))=f(0)+f(x+y)$$
a contradiction, so $f$ must be linear in this sense.
Edit: Because of your mention of the continuous Cauchy functional equation, and because the statement you are trying to prove is false without the assumption of continuity, I assume that $f$ is continuous. To see why any nonlinear $f$ must have some $x,y$ such that $f(x)+f(y)\neq f(x+y)+f(0)$, suppose that $f$ is such that $f(x)+f(y)= f(x+y)+f(0)$ for all $x,y$. Let $a=f(1)-f(0)$ and $b=f(0)$. Then $f(1)=f(\frac{m}{m})=mf(\frac{1}{m})-(m-1)f(0)$ so $f(\frac{1}{m})=\frac{1}{m}(f(1)-f(0))+f(0)=a\frac{1}{m}+b$, which means $$f(\frac{n}{m})=nf(\frac{1}{m})-(n-1)f(0)=a\frac{n}{m}+nb-(n-1)b=a\frac{n}{m}+b$$ and so by continuity and the fact that the rationals are dense in the reals we see that $f(x)=ax+b$ for all real numbers $x$.
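A quick numeric sanity check of both directions (a sketch, using an arbitrary grid of sample points): affine functions satisfy the identity, while a nonlinear function fails it.

```python
def satisfies(f, samples):
    """Check f(x) + f(x-a-b) == f(x-a) + f(x-b) on a grid of (x, a, b)."""
    return all(
        abs(f(x) + f(x - a - b) - f(x - a) - f(x - b)) < 1e-9
        for x in samples for a in samples for b in samples
    )

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
assert satisfies(lambda x: 4 * x + 7, pts)      # affine: identity holds
assert not satisfies(lambda x: x * x, pts)      # x^2 fails: the defect is 2ab
```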
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Indefinite integral of $\cos^{3}(x) \cdot \ln(\sin(x))$ I need help. I have to integrate $\cos^{3}(x) \cdot \ln(\sin(x))$ and I don't know how to solve it. In our book it says that we have to solve it using the substitution method. If somebody knows how, please help me.
| If you mean $\cos^3 x\ln(\sin x)$, let $u=\sin x$. Then $du=\cos x dx$, and $$\begin{align*}
\cos^3 x\ln(\sin x)dx&=\cos^2 x\ln(\sin x)\Big(\cos x dx\Big)\\
&=\cos^2 x\ln u \,du\\
&=(1-\sin^2 x)\ln u\,du\\
&=(1-u^2)\ln u\,du\;,
\end{align*}$$
which can be integrated by parts.
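Finishing the integration by parts (my computation, worth double-checking) gives $\int(1-u^2)\ln u\,du=(u-\tfrac{u^3}{3})\ln u-u+\tfrac{u^3}{9}+C$. A numerical spot-check that its derivative in $x$, with $u=\sin x$, recovers the original integrand:

```python
import math

def F(x):
    """Candidate antiderivative: with u = sin x,
    (u - u^3/3)*ln(u) - u + u^3/9."""
    u = math.sin(x)
    return (u - u**3 / 3) * math.log(u) - u + u**3 / 9

def integrand(x):
    return math.cos(x) ** 3 * math.log(math.sin(x))

# compare F'(x) (central difference) with the integrand on (0, pi)
h = 1e-6
for x in [0.3, 0.7, 1.2, 2.0, 2.8]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-6
```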
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Two group theory questions I have two questions here about this topic that have absolutely baffled me. Assistance in solving these would be appreciated.
Q1:
Let $S := \{(123),(235)\} \subseteq \Sigma_5$. Determine the subgroup $\langle$$S$$\rangle$ $\leq$ $\Sigma_5$.
Q2:
Let $G$ be a finite group with subgroups $A,B \leq G$. Show that
$$[G : A\cap B] = [G : A] \centerdot [A : A\cap B] \leq [G : A] \centerdot [G : B].$$
| For question 1: As has been noted, since $4$ is fixed by both generators, we can imagine that we are actually working in $S_4$ (acting on $\{1,2,3,5\}$). Moreover, since $3$-cycles are even, we are actually inside of $A_4$; $A_4$ has $12$ elements, and since $\langle S\rangle$ contains at least $5$ elements (namely, $\{1\}$, $(123)$, $(132)$, $(235)$, and $(253)$), it must either have $6$ elements or must be all of $A_4$ (by Lagrange's Theorem). However, $A_4$ is famous for being the smallest counterexample to the converse of Lagrange's Theorem: $A_4$ does not have subgroups of order $6$. Thus, $\langle S\rangle$ must be all of $A_4$ (or rather, the even permutations in $S_5$ that fix $4$). If you want to produce the other seven elements explicitly in terms of the generators, we have (I compose my permutations right to left):
$$\begin{align*}
(123)(235) &= (12)(35)\\
(235)(123) &= (13)(25)\\
(123)(253) &= (125)\\
(253)(123) &= (153)\\
(132)(235) &= (135)\\
(235)(132) &= (152)\\
(152)(12)(35)(125) &= (15)(23)
\end{align*}$$
which gives all twelve elements of $A_4$.
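If you'd rather let a machine do the bookkeeping, a brute-force closure computation confirms this (a sketch; permutations are written in 0-based one-line notation, so the point $4$ becomes index $3$):

```python
def compose(p, q):
    """(p∘q)(i) = p(q(i)) for permutations given as tuples on {0,...,4}."""
    return tuple(p[q[i]] for i in range(len(q)))

# the generators (123) and (235), 0-based: 0->1->2->0 and 1->2->4->1
a = (1, 2, 0, 3, 4)
b = (0, 2, 4, 3, 1)

group = {tuple(range(5))}
frontier = {a, b}
while frontier:                       # closure of {a, b} under composition
    group |= frontier
    frontier = {compose(p, q) for p in group for q in group} - group

assert len(group) == 12               # the copy of A_4 described above
assert all(p[3] == 3 for p in group)  # every element fixes the point 4
```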
For Question 2: Both the equality and the inequality hold for infinite groups.
Theorem. Let $K\leq H\leq G$ be groups; then $$[G:K]=[G:H][H:K]$$ in the sense of cardinalities.
Proof. Let $\{h_i\}_{i\in I}$ be a set of left coset representatives of $K$ in $H$; and let $\{g_j\}_{j\in J}$ be a set of left coset representatives for $H$ in $G$. We claim that $\{g_jh_i\}_{(i,j)\in I\times J}$ is a set of left coset representatives for $K$ in $G$.
Indeed, let $g\in G$. Then there exists $j\in J$ such that $g\in g_jH$; let $h\in H$ be such that $g=g_jh$. Then there exists $i\in I$ such that $h\in h_iK$. Therefore, there exists $k\in K$ such that $h=h_ik$. Hence, $g=g_jh_ik\in g_jh_i K$. That is: any element of $G$ is equivalent, modulo $K$ on the right, to some $g_jh_i$.
Now assume that $g_jh_iK = g_rh_sK$; we need to show that $j=r$ and $i=s$. Indeed, there exists $k\in K$ such that $g_jh_i = g_rh_sk$. Since $k\in K\subseteq H$, then $g_jh_i\in g_jH$, and $g_rh_sk\in g_rH$. Hence, $g_jH\cap g_rH\neq\varnothing$, hence $j=r$ (since the $g_j$ are a set of coset representatives). Thus, $g_j=g_r$, so $g_jh_iK=g_rh_sK$ implies $h_iK = h_sK$; since the $h_i$ form a set of coset representatives for $K$ in $H$, it follows that $i=s$, as desired. $\Box$
Theorem. Let $G$ be a group, and let $A$ and $B$ be subgroups. Then
$$[A:A\cap B]\leq [G:B]$$
in the sense of cardinalities.
Proof. Let $\{a_i\}_{i\in I}$ be a set of coset representatives for $A\cap B$ in $A$. I claim that if $a_iB = a_jB$, then $i=j$. This will show that the number of left cosets of $B$ in $G$ is at least as large as $[A:A\cap B]$.
Assume that $a_iB=a_jB$. Then there exists $b\in B$ such that $a_i = a_jb$. Hence $b=a_ia_j^{-1}\in A\cap B$, so $a_i\in a_j(A\cap B)$; hence, $i=j$, as desired. $\Box$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does one solve this integral equation $1+ax=\int_{-\infty}^xf(x-t)dt$ I've run into having to solve this equation for $f(x)$:
$$1+ax=\int_{-\infty}^xf(x-t)dt$$
Unfortunately, I am not familiar with solving integral equations. Can anyone help? Is is even soluble?
Edit: Fixed a typo in the upper limit in the integral.
| Note that the rhs of this equation is constant. Indeed,
$$
\int\limits_{-\infty}^{x} f(x-t)dt=
\{\tau=x-t\}=
\int\limits_{0}^{+\infty}f(\tau)d\tau=
\int\limits_{0}^{+\infty}f(t)dt
$$
Therefore the lhs of this equation must be constant. But this is possible only if $a=0$. For the case when $a=0$, we have $1=\int\limits_{0}^{+\infty}f(t)dt$, otherwise there is no solution.
Finally, if $a=0$ the solution of this equation is any integrable function $f$ such that $\int\limits_{0}^{+\infty}f(t)dt=1$. If $a\neq 0$, no solution exists.
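A numerical illustration of the substitution step (a sketch, with the arbitrary sample choice $f(s)=e^{-s}$ for $s\ge 0$, which satisfies $\int_0^\infty f=1$): the left-hand integral really is independent of $x$.

```python
import math

def lhs_integral(x, steps=50000, lower=-40.0):
    """Midpoint-rule approximation of the integral of f(x - t) dt over (-inf, x],
    with f(s) = exp(-s); the tail below `lower` is dropped (it is ~exp(-37))."""
    h = (x - lower) / steps
    total = 0.0
    for i in range(steps):
        s = x - (lower + (i + 0.5) * h)   # s = x - t >= 0 on the whole range
        total += math.exp(-s) * h
    return total

vals = [lhs_integral(x) for x in (-3.0, 0.0, 5.0)]
assert all(abs(v - 1.0) < 1e-3 for v in vals)   # constant in x, equal to 1
```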
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Disprove uniform convergence of $\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}$ in $[0,\infty)$ How would I show that $\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}$ does not uniformly converge in $[0,\infty)$?
I don't know how to approach this problem.
Thank you.
| $$ \sup_{x \in [0,\infty)} \sum_{n=m}^{2m} \frac{x}{(1+x)^n} \geq \sup_{x \in [0,\infty)} \frac{mx}{(1+x)^{2m}} \geq \frac{mx}{(1+x)^{2m}} \bigg|_{x=1/m} = \frac1{(1+ \frac{1}{m} )^{2m}}\to \frac1{e^2}.$$
Thus the series is not uniformly Cauchy on $[0,\infty).$
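A numeric companion to the estimate (a sketch): the block of terms evaluated at the moving point $x=1/m$ stays bounded away from $0$ for every $m$, which is exactly the failure of uniform Cauchyness.

```python
import math

def tail(m, x):
    """The block of terms  sum_{n=m}^{2m} x/(1+x)^n  used in the estimate."""
    return sum(x / (1 + x) ** n for n in range(m, 2 * m + 1))

# (1 + 1/m)^(2m) < e^2 for every finite m, so the bound is strict
for m in [10, 100, 1000, 10000]:
    assert tail(m, 1 / m) > 1 / math.e ** 2
```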
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
What are the quadratic extensions of $\mathbb{Q}_2$? How do you classify the non-squares in $\mathbb{Q}_2$? I've tried writing down expansions for "odd" numbers in $\mathbb{Z}_2$, but unlike in $\mathbb{Z}_p$, the $n^{\text{th}}$ term in the expansion is not always uniquely determined once you know the first $n-1$ terms, and when you come across a non-square it's not obvious (to me at least) whether it is divisible by another non-square you've already found.
| As for any field $K$ of characteristic different from $2$, the quadratic extensions are all of the form $K(\sqrt{d})$ for $d \in K^{\times} \setminus K^{\times 2}$; moreover $K(\sqrt{d_1}) \cong K(\sqrt{d_2}) \iff d_1 = a^2 d_2$. Thus they are parameterized by the nontrivial elements of $K^{\times}/K^{\times 2}$, it is then sufficient to understand this quotient. Note that this is an $\mathbb{F}_2$-vector space, so it's enough to determine its dimension. In what follows I will denote this $\mathbb{F}_2$-dimension simply by "$\operatorname{dim}$".
If $K$ is the fraction field of a discrete valuation ring $R$, then from $K^{\times} \cong R^{\times} \oplus \mathbb{Z}$ it is easy to see that
$\dim K^{\times}/K^{\times 2} = 1+ \dim R^{\times}/R^{\times 2}$.
So, here, you want to know the square classes in $\mathbb{Z}_2^{\times}$. I claim that an element $u \in \mathbb{Z}_2^{\times}$ is a square iff its residue modulo $8$ is a square in $\mathbb{Z}/8\mathbb{Z}$: to see this, use Hensel's Lemma. From this it follows that
$\dim \mathbb{Z}_2^{\times} / \mathbb{Z}_2^{\times 2} = \dim (\mathbb{Z}/8\mathbb{Z})^{\times} / (\mathbb{Z}/8\mathbb{Z})^{\times 2} = 2$
and thus
$\dim \mathbb{Q}_2^{\times} / \mathbb{Q}_2^{\times 2} = 3$.
This argument should give you explicit representatives as well: that is, the $2^3-1 = 7$ quadratic extensions of $\mathbb{Q}_2$ are gotten by adjoining square roots of $3,5,7,2,6,10,14$.
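A quick mechanical check of that final list (a sketch, using the Hensel criterion quoted above — a positive integer $2^v u$ with $u$ odd is a square in $\mathbb{Q}_2$ iff $v$ is even and $u \equiv 1 \pmod 8$):

```python
def is_square_in_Q2(d):
    """Square test in Q_2 for a positive integer d = 2^v * u, u odd:
    square iff v is even and u = 1 (mod 8)."""
    v = 0
    while d % 2 == 0:
        d //= 2
        v += 1
    return v % 2 == 0 and d % 8 == 1

reps = [3, 5, 7, 2, 6, 10, 14]
# none of the representatives is itself a square in Q_2 ...
assert not any(is_square_in_Q2(d) for d in reps)
# ... and they lie in pairwise distinct square classes
# (d1 and d2 give the same class iff d1*d2 is a square):
assert all(not is_square_in_Q2(d1 * d2)
           for i, d1 in enumerate(reps) for d2 in reps[i + 1:])
```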
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Show that $\tan 3x =\frac{ \sin x + \sin 3x+ \sin 5x }{\cos x + \cos 3x + \cos 5x}$ I was able to prove this but it is too messy and very long. Is there a better way of proving the identity? Thanks.
| You want to prove
$$\frac{\sin 3x}{\cos 3x}=\frac{\sin x+\sin 3x+\sin 5x}{\cos x+\cos 3x+\cos 5x}$$
Or, in other words, that the two vectors $(\cos3x,\sin3x)$ and $(\cos x+\cos 3x+\cos 5x,\sin x+\sin 3x+\sin 5x)$ are parallel. The latter is the sum of $(\cos x,\sin x)$, $(\cos 3x,\sin 3x)$ and $(\cos 5x,\sin5x)$.
Now, $(\cos x,\sin x)$ and $(\cos 5x,\sin5x)$ both have unit length, so by the parallelogram rule, $(\cos x,\sin x)+(\cos 5x,\sin5x)$ is the diagonal of a rhombus, and by symmetry the direction of the diagonal must be halfway between the angles of the sides -- that is $\frac{x+5x}{2}=3x$. So $(\cos x,\sin x)+(\cos 5x,\sin5x)$ lies even with the $(\cos3x,\sin3x)$ term and the sum of all three vectors is parallel to $(\cos3x,\sin3x)$, as required.
This geometric argument mostly closes the case, but note (because that's how I wrote it at first) that it can be made to look slick and algebraic by moving to the complex plane. Then saying that the two vectors are parallel is the same as saying that $e^{3xi}$ and $e^{xi}+e^{3xi}+e^{5xi}$ are real multiples of each other.
But $e^{xi}+e^{5xi}=e^{3xi}(e^{-2xi}+e^{2xi})$ and the factor in the parenthesis is real because it is the sum of a number and its conjugate. In particular, by Euler's formula,
$$e^{xi}+e^{3xi}+e^{5xi} = e^{3xi}(1+2\cos 2x)$$
and the two vectors are indeed parallel and your identity holds -- except when $\cos 2x=-\frac 12$, in which case the fraction to the right of your desired identity is $0/0$.
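A numerical spot-check of the identity (away from the degenerate points $\cos 2x=-\tfrac12$ and the poles of $\tan 3x$):

```python
import math

def lhs(x):
    return math.tan(3 * x)

def rhs(x):
    num = math.sin(x) + math.sin(3 * x) + math.sin(5 * x)
    den = math.cos(x) + math.cos(3 * x) + math.cos(5 * x)
    return num / den

# sample points chosen to avoid cos(2x) = -1/2 and 3x = pi/2 + k*pi
for x in [0.1, 0.35, 0.6, 1.2, 2.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-9
```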
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 3
} |
Example of a subalgebra of infinite-generated algebra which cannot be extended to a maximal subalgebra First of all, sorry for my English, especially mathematical one. The problem is that I know how to call such things in Ukrainian but unfortunately did not manage to find proper translations to English. So, I will give some brief definitions of terms in order to not confuse you with the names.
So, an algebra (maybe, partial) is a set of elements together with a set of operations defined on these elements $ (A, \Omega) $.
Let $ B \subset A $. Then the closure $[B]_f$ of $B$ under an $n$-ary operation $f \in \Omega$ is the set defined by two rules:
*
*$ B \subseteq [B]_f$
*$ \forall\, a_1, a_2, \ldots, a_n \in [B]_f $, if $f(a_1, a_2, \ldots, a_n)$ is defined then $f(a_1, a_2, \ldots, a_n) \in [B]_f$
The closure $[B]$ of $B$ in $A$ is $ B^0 \cup B^1 \cup B^2 \cup \ldots $ where $B^0 = B$, $B^{i+1} = \bigcup_{f \in \Omega}[B^i]_f$.
The definition of subalgebra and therefore algebra extension is obvious, I think.
$B$ is a system of generators (I believe it is close to the notion of a basis) for the algebra $(A,\Omega)$ if $[B] = A$.
And, finally, a subalgebra $(B,\Omega)$ is called a maximal subalgebra of $(A,\Omega)$ if $B \neq A$ and there is no subalgebra $(C,\Omega)$ such that $B \subset C \subset A$ with $C \neq A$ and $C \neq B$.
It can be proved that for an algebra with a finite system of generators, each of its subalgebras can be extended to some maximal subalgebra.
But for algebras with infinite systems of generators this statement is not always true, and I need a counter-example. That is, I need an example of an algebra with an infinite system of generators and a subalgebra which cannot be extended to any maximal subalgebra.
| EDITED: Hint: Consider the operation $f(n) = n-1$ on the natural numbers (with $f(0)$ arbitrary).
The original hint (Consider an algebra without any operations, or with only a trivial operation) was completely wrong.
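To see the hint in action, here is a small closure computation (a sketch in a finite window of $\mathbb N$, choosing $f(0)=0$ for concreteness, as I read the hint): the closure of $\{n\}$ under $f$ is the initial segment $\{0,\dots,n\}$, so the proper subalgebras form an increasing chain with no maximal member.

```python
def closure(B, f, universe):
    """Closure of B under the unary operation f, inside the given universe."""
    out = set(B)
    frontier = set(B)
    while frontier:
        frontier = {f(x) for x in frontier if f(x) in universe} - out
        out |= frontier
    return out

N = set(range(50))                    # a finite window of the naturals
pred = lambda n: max(n - 1, 0)        # f(n) = n - 1, with f(0) = 0 chosen

# the closure of {n} is the initial segment {0, ..., n} ...
assert closure({7}, pred, N) == set(range(8))
# ... so the proper subalgebras form a strictly increasing chain:
assert closure({7}, pred, N) < closure({8}, pred, N)
```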
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Eulerian walk proof: If a connected graph has exactly two nodes with odd degree, then it has an Eulerian walk? Prove that: If a connected graph has exactly two nodes with odd degree, then it has an Eulerian walk. Every Eulerian walk must start at one of these and end at the other one. How shall I prove this?
| Define the "total degree" to be the sum of all vertex degrees. Do induction on $k$, the total degree:
Base case: If there are any edges at all, the smallest total degree is $2$. Since the graph is connected, there must be only two nodes, and we just walk down the one edge. Done.
Let $k>1$, suppose the theorem is true for every graph with total degree $\leq k$, and suppose we have a graph (satisfying the hypotheses) with total degree $k+1$. Label $v_1$ and $v_2$ the only two vertices with odd degree. Starting at $v_1$, there are two cases:
1) there is an edge at $v_1$ which doesn't connect to $v_2$. Walk down that edge, and remove it from the graph (also remove $v_1$ if it has been disconnected). What is left over will still satisfy the hypotheses, and the total degree is now $k-1$, so there is an Eulerian walk.
2) Every edge at $v_1$ connects to $v_2$. Since we are beyond the base case there will be at least two edges, so starting at $v_1$ walk down any two edges and remove them from the graph (again, remove $v_1$ if it has been disconnected). The result will still satisfy the hypotheses (something to check here) and have total degree $k-3$, so we can walk.
(not totally rigorous, but I think it's a good sketch)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Uncertainty when computing variance from class intervals In descriptive statistics, continuous variables are often presented using class intervals of uniform width, for instance:
Annual salary (€) Frequency
[20000, 40000[ 10
[40000, 60000[ 25
[60000, 80000[ 5
and one is told to compute the mean or variance by assuming that all data in a class are at the center of the class. For instance, one would compute the mean annual salary from the table above as
$$
\frac{10 \times 30000 + 25 \times 50000 + 5 \times 70000}{40}
= 47500.
$$
It is easy to show that the uncertainty (maximal error) when computing the mean this way is half the width of a class interval (in this case, $10000$). In other words, the true mean is in the interval $[37500, 57500[$.
My question is: are there simple bounds for the uncertainty of the variance?
| There are. We can regard the errors introduced by the bunching of the data as a second random variable $Y$ added to the original data $X$. Then the variance of the sum is
$$\sigma_{X+Y}^2=\sigma_X^2+\sigma_Y^2+2\sigma_{XY}\;,$$
where $\sigma_{XY}$ is the covariance of $X$ and $Y$. Since $|\sigma_{XY}|\le\sigma_X\sigma_Y$, we have
$$(\sigma_X-\sigma_Y)^2=\sigma_X^2+\sigma_Y^2-2\sigma_X\sigma_Y\le\sigma_{X+Y}^2\le\sigma_X^2+\sigma_Y^2+2\sigma_X\sigma_Y=(\sigma_X+\sigma_Y)^2\;.$$
Thus, while for uncorrelated variables the variances add, for correlated variables the standard deviation may lie anywhere between the difference and the sum of the standard deviations.
It remains to find the maximal value of $\sigma_Y$. The magnitude of each error is at most the half-width $\Delta$ of the intervals, and the maximal standard deviation is thus attained if half the errors are $\Delta$ and half are $-\Delta$, for a standard deviation of $\sigma_Y=\Delta$. Thus we have the simple result that the standard deviation can be off by up to $\Delta$, just like the mean.
In case you prefer to work with the variances, the corresponding bounds on the exact variance $\sigma^2$ in terms of the approximated variance $\sigma'^2$ are
$$(\sqrt{\sigma'^2}-\Delta)^2\le\sigma^2\le(\sqrt{\sigma'^2}+\Delta)^2\;.$$
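Putting numbers to this for the salary table above (a sketch; the midpoint convention and the $\pm\Delta$ bounds are exactly the ones derived in this answer):

```python
import math

# class midpoints and frequencies from the salary table
mids = [30000, 50000, 70000]
freqs = [10, 25, 5]
n = sum(freqs)
half_width = 10000                    # Delta: half the class width

mean = sum(f * m for f, m in zip(freqs, mids)) / n
var = sum(f * (m - mean) ** 2 for f, m in zip(freqs, mids)) / n
sd = math.sqrt(var)

assert mean == 47500
# the true mean and standard deviation each lie within Delta of these:
print("mean in [%.0f, %.0f]" % (mean - half_width, mean + half_width))
print("sd   in [%.2f, %.2f]" % (max(sd - half_width, 0), sd + half_width))
```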
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Extension and Self Injective Ring Let $R$ be a self injective ring. Then $R^n$ is an injective module. Let $M$ be a submodule of $R^n$ and let $f:M\to R^n$ be an $R$-module homomorphism. By injectivity of $R^n$ we know that we can extend $f$ to $\tilde{f}:R^n\to R^n$.
My question is that if $f$ is injective, can we also find an injective extension $\tilde{f}:R^n\to R^n$?
Thank you in advance for your help.
| Well, this is true if $R$ is commutative and noetherian; I don't know whether that's good enough for what you want. (This solution may be overcomplicated; I do not actually know how to prove all the facts I am using.)
If $R$ is commutative, noetherian, and self-injective, then it is Artinian, hence a finite product of local Artinian rings, and so we can reduce to the local case.
So say $R$ is commutative, noetherian, and local (and hence Artinian, but I won't use that). Take an injective hull of $M$ inside $R^n$; call it $Q$. So $f$ extends injectively to $f:Q\rightarrow R^n$, and we now need to extend it from $Q$ to all of $R^n$. Since $Q$ is injective, it is a direct summand of $R^n$, and hence is also projective. But we assumed $R$ was local, and hence $Q$ is free; say it is isomorphic to $R^m$, $m\le n$.
Then an injective function $R^k \rightarrow R^n$ is the same as an (ordered) linearly independent subset of $R^n$, of $k$ elements. So we have $m$ linearly independent elements of $R^n$ and we want to extend them to $n$ such. We can extend them to a maximal linearly independent set, certainly; the question then is just if such a set necessarily has $n$ elements.
Now, since we assumed $R$ was commutative and noetherian, we can apply the theorem of Lazarus quoted here, and say yes, a maximal linearly independent set of $R^n$ necessarily has $n$ elements, and so having extended $f$ to $Q\cong R^m$, we can further extend it to $R^n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is the product of symmetric positive semidefinite matrices positive definite? I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices?
My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...
| The product of two symmetric PSD matrices is PSD iff the product is also symmetric.
More generally, if $A$ and $B$ are PSD, $AB$ is PSD iff $AB$ is normal, ie, $(AB)^T AB = AB(AB)^T$.
Reference:
On a product of positive semidefinite matrices, A.R. Meenakshi, C. Rajian, Linear Algebra and its Applications, Volume 295, Issues 1–3, 1 July 1999, Pages 3–6.
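A concrete illustration of why the symmetry condition matters (my own small example, in plain Python): the product of two symmetric PSD matrices below has nonnegative eigenvalues ($0$ and $3$), but it is not symmetric, and its quadratic form takes negative values.

```python
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def quad_form(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

A = [[1, 1], [1, 1]]        # symmetric PSD (eigenvalues 0 and 2)
B = [[2, 0], [0, 1]]        # symmetric positive definite
AB = matmul2(A, B)          # [[2, 1], [2, 1]] -- not symmetric

assert AB[0][1] != AB[1][0]            # the product is not symmetric ...
assert quad_form(AB, (1, -1.5)) < 0    # ... and its quadratic form goes negative
```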
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68",
"answer_count": 4,
"answer_id": 1
} |
How can I evaluate an expression like $\sin(3\pi/2)$ on a calculator and get an answer in terms of $\pi$? I have an expression like this that I need to evaluate:
$$16\sin(2\pi/3)$$
According to my book the answer is $8\sqrt{3}$. However, when I'm using my calculator to get this I get an answer like $13.86$. What I want to know, is it possible to make a calculator give the answer without evaluating $\pi$, so that $\pi$ is kept separate in the answer? And the same for in this case, $\sqrt{3}$. If the answer involves a square root, I want my calculator to say that, I don't want it to be evaluated.
I am using the TI-83 Plus if that makes a difference.
| It's quite simple; simply enter your expression exactly as it's written into your TI-36X Pro ($18 at OfficeMax) while you're in RAD mode.
The TI-36X Pro returns 8√3 quite nicely, or, it can return any answer in terms of pi.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
} |
Taylor series expansion of arctan(x) around the point 0 This is actually a technical question about why I'm getting different results when trying to do the same thing using Maxima and WolframAlpha.
When I enter
expand arctan(x)
in WolframAlpha I get
$$x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}+\frac{x^9}{9}+O(x^{10})$$
This is what I enter into maxima to get the 5th degree expansion for example
ff:taylor(arctan(x),x,0,5);
The result is littered with terms containing derivative symbols, unlike the nice-looking polynomial WolframAlpha returns.
My question is how to get Maxima to evaluate the expansion at the point 0 and return the first polynomial?
| Maxima uses the name atan, not arctan. Thus taylor(atan(x), x, 0, 9) will give you the same result as Wolfram Alpha. When you use arctan, Maxima simply returns the generic Taylor expansion of an unknown function, hence all the derivatives.
You can execute alias(arctan, atan), if you wish to use the arctan as an alias to the built-in atan.
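Unrelated to the Maxima fix, but a quick way to convince yourself the expansion itself is right (a sketch in plain Python, comparing against the library arctangent):

```python
import math

def atan_partial(x, terms=5):
    """x - x^3/3 + x^5/5 - x^7/7 + x^9/9  (the first five odd terms)."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

for x in [0.05, 0.1, 0.2]:
    # the truncation error of the degree-9 partial sum is O(x^11)
    assert abs(atan_partial(x) - math.atan(x)) < abs(x) ** 11
```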
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/113993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the proof that covariance matrices are always semi-definite? Suppose that we have two different discrete signal vectors of dimension $N$, namely $\mathbf{x}[i]$ and $\mathbf{y}[i]$, each one having a total of $M$ sets of samples/vectors.
$\mathbf{x}[m] = [x_{m,1} \,\,\,\,\, x_{m,2} \,\,\,\,\, x_{m,3} \,\,\,\,\, ... \,\,\,\,\, x_{m,N}]^\text{T}; \,\,\,\,\,\,\, 1 \leq m \leq M$
$\mathbf{y}[m] = [y_{m,1} \,\,\,\,\, y_{m,2} \,\,\,\,\, y_{m,3} \,\,\,\,\, ... \,\,\,\,\, y_{m,N}]^\text{T}; \,\,\,\,\,\,\,\,\, 1 \leq m \leq M$
And, I build up a covariance matrix in-between these signals.
$\{C\}_{ij} = E\left\{(\mathbf{x}[i] - \bar{\mathbf{x}}[i])^\text{T}(\mathbf{y}[j] - \bar{\mathbf{y}}[j])\right\}; \,\,\,\,\,\,\,\,\,\,\,\, 1 \leq i,j \leq M $
Where, $E\{\}$ is the "expected value" operator.
What is the proof that, for all arbitrary values of the $\mathbf{x}$ and $\mathbf{y}$ vector sets, the covariance matrix $C$ is always positive semi-definite ($C \succeq 0$), i.e., all of its eigenvalues are non-negative?
| Covariance matrix C is calculated by the formula,
$$
\mathbf{C} \triangleq E\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}.
$$
We are going to use the definition of a positive semi-definite matrix, which says:
A real square matrix $\mathbf{A}$ is positive semi-definite if and only if
$\mathbf{b}^T\mathbf{A}\mathbf{b}\geq0$
holds for every real column vector $\mathbf{b}$ of appropriate size.
For an arbitrary real vector u, we can write,
$$
\begin{array}{rcl}
\mathbf{u}^T\mathbf{C}\mathbf{u} & = & \mathbf{u}^TE\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}\mathbf{u} \\
& = & E\{\mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}\} \\
& = & E\{s^2\} \\
& = & \sigma_s^2. \\
\end{array}
$$
Where $\sigma_s$ is the variance of the zero-mean scalar random variable $s$, that is,
$$
s = \mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}}) = (\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}.
$$
Square of any real number is equal to or greater than zero.
$$
\sigma_s^2 \ge 0
$$
Thus,
$$
\mathbf{u}^T\mathbf{C}\mathbf{u} = \sigma_s^2 \ge 0.
$$
Which implies that covariance matrix of any real random vector is always positive semi-definite.
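An empirical companion to the proof (a sketch on simulated 2-D data, not a substitute for the argument above): every sample covariance matrix passes the $\mathbf{u}^T\mathbf{C}\mathbf{u}\geq 0$ test in every direction $\mathbf{u}$.

```python
import random

random.seed(0)
# correlated 2-d samples: y = 0.5*x + noise
data = [(x, 0.5 * x + random.gauss(0, 1)) for x in
        (random.gauss(0, 1) for _ in range(500))]
n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
C = [[sum((p[i] - (mx, my)[i]) * (p[j] - (mx, my)[j]) for p in data) / n
      for j in range(2)] for i in range(2)]

# u^T C u is the sample variance of the projection u.(x - mean), hence >= 0
for _ in range(100):
    u = (random.uniform(-1, 1), random.uniform(-1, 1))
    q = sum(u[i] * C[i][j] * u[j] for i in range(2) for j in range(2))
    assert q >= -1e-12
```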
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 2,
"answer_id": 1
} |
Heronian triangle Generator I'm troubleshooting my code I wrote to generate all Heronian triangles (triangles with integer sides and integer area). I'm using the following algorithm
$$a=n(m^{2}+k^{2})$$
$$b=m(n^{2}+k^{2})$$
$$c=(m+n)(mn-k^{2})$$
$$\text{Semiperimeter}=mn(m+n)$$
$$\text{Area}=mnk(m+n)(mn-k^{2})$$
for integers $m$, $n$ and $k$ subject to the contraints:
$$\gcd{(m,n,k)}=1$$
$$mn > k^2 \ge m^2n/(2m+n)$$
$$m \ge n \ge 1$$
which I found here http://en.wikipedia.org/wiki/Integer_triangle#Heronian_triangles
The odd part is that this algorithm doesn't seem to ever generate 7, 24, 25 which is a Hero Triangle (a right triangle, in fact) with integer area 84.
I had originally assumed it was a breakdown in my code, but then I realized I can't seem to find any values of $m$, $n$, or $k$ within the constraints or even ignoring the constraints that generate this triangle.
I don't know which of 7, 24, or 25 equals which of $a$, $b$, or $c$, but I've tried manually solving for $m$, $n$, and $k$ using WolframAlpha. Since $a$ and $b$ are symmetric (because $m$ and $n$ are symmetric when ignoring constraints), I really only have 3 systems of equations to check:
$$7=n(m^{2}+k^{2})$$
$$24=m(n^{2}+k^{2})$$
$$25=(m+n)(mn-k^{2})$$
Wolframalpha - Linear equations 1
$$24=n(m^{2}+k^{2})$$
$$25=m(n^{2}+k^{2})$$
$$7=(m+n)(mn-k^{2})$$
Wolframalpha - Linear equations 2
$$25=n(m^{2}+k^{2})$$
$$7=m(n^{2}+k^{2})$$
$$24=(m+n)(mn-k^{2})$$
Wolframalpha - Linear equations 3
None of which have integer solutions.
Is my understanding of Hero Triangles wrong? Is the algorithm wrong? Is my implementation wrong?
| This algorithm doesn't always generate primitive triangles ($\text{gcd}(a,b,c)=1$), and in fact does NOT generate 7, 24, 25 (as you showed). It instead generates 14, 48, 50 when $m=7$, $n=1$, $k=1$.
Looks like when they say "All Heronian triangles can be generated as multiples of" this formula, they aren't just counting integer multiples (like 2*14, 2*48, 2*50); they are also counting rational multiples like .5*14, .5*48, .5*50, which is the triangle you were looking for, 7, 24, 25. To get to the primitive triangle in each generated case you can divide by the greatest common divisor of the 3 sides.
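A short check of this in code (a sketch, using the parametrization quoted in the question and Heron's formula):

```python
from math import gcd, isqrt

def heronian(m, n, k):
    """Sides from the parametrization in the question."""
    a = n * (m * m + k * k)
    b = m * (n * n + k * k)
    c = (m + n) * (m * n - k * k)
    return a, b, c

sides = heronian(7, 1, 1)
assert sorted(sides) == [14, 48, 50]          # the non-primitive triple

g = gcd(gcd(sides[0], sides[1]), sides[2])
a, b, c = (s // g for s in sides)
assert sorted((a, b, c)) == [7, 24, 25]       # dividing by gcd recovers it

# Heron's formula: 16*Area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c); Area = 84
area16 = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
assert isqrt(area16) == 4 * 84 and isqrt(area16) ** 2 == area16
```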
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Good introductory book on fluid dynamics I am interested in getting a good introductory book to fluid dynamics. I am a first year PhD student in Mathematics.
My project involves a simplification of the Navier-Stokes equations. But I don't have any background whatsoever on fluid dynamics or physics for the matter (at least beyond high school level).
Thank you for any recommendations! Please don't just post random books you found on Amazon (I can do that myself). I am interested in the opinion of people who have done some fluid dynamics, the more the better, and know about the field.
| "An Introduction to Fluid Dynamics" by G. K. Batchelor is a classic and is considered the Bhagavad Gita of fluid dynamics. I read this book as an undergrad, and hence the knowledge required is just high-school mathematics and physics.
Recent texts, in my opinion, are unfortunately biased too much towards computational fluid dynamics, than explaining the mathematical and physical underpinning of the fluid dynamics.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 5,
"answer_id": 0
} |
prove that $g\geq f^2$ The problem is this:
Let $(f_n)$ a sequence in $L^2(\mathbb R)$ and let $f\in L^2(\mathbb R)$ and $g\in L^1(\mathbb R)$. Suppose that $$f_n \rightharpoonup f\;\text{ weakly in } L^2(\mathbb R)$$ and $$f_n^2 \rightharpoonup g\;\text{ weakly in } L^1(\mathbb R).$$ Show that $$f^2\leq g$$ almost everywhere on $\mathbb R$.
I admit I'm having problems with this, since it's been quite a long time since I've dealt with these kinds of problems. Even a hint is welcome. Thank you very much.
| Here is another solution. The main idea is to obtain an integral estimate, and use an approximation to the identity to reduce it to a pointwise estimate.
Let $E$ be the set of points $x$ such that $x$ is a Lebesgue point of both $f$ and $g$. Since both $f$ and $g$ are locally integrable, $\mathbb{R}-E$ has measure zero.
Now choose $x\in E$ and consider a cube $C_x(r)$ of side length $r$ centered at $x$. Then
$$\phi_{\epsilon}=\frac{1}{|C_{x}(\epsilon)|}\chi_{C_{x}(\epsilon)}$$
lies in $L^2(\mathbb{R})\cap L^{\infty}(\mathbb{R})$, so
$$(f_n,\phi_{\epsilon})\to(f,\phi_{\epsilon})\quad\text{and}\quad(f_n^2,\phi_{\epsilon})\to(g,\phi_{\epsilon}).$$
By Cauchy-Schwarz inequality applied to $f_n\sqrt{\phi_{\epsilon}}\cdot\sqrt{\phi_{\epsilon}}$,
$$(f_n,\phi_{\epsilon})^2=\left( \int f_n\phi_{\epsilon}\right)^2\leq\int f_n^2\phi_{\epsilon}=(f_n^2,\phi_{\epsilon}).$$
Thus taking $n\to\infty$ we have
$$(f,\phi_{\epsilon})^2\leq(g,\phi_{\epsilon}).$$
Now taking $\epsilon\to 0$ gives $f^2(x)\leq g(x)$, completing the proof.
P.S. I saw a same question on AoPS. I hope that my answer is not a duplicate.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
A group of order $66$ has an element of order $33$ If I wanted to show that a group of order $66$ has an element of order $33$, could I just say that it has an element of order $3$ (by Cauchy's theorem, since $3$ is a prime and $3 \mid 66$), and similarly that there must be an element of order $11$, and then multiply these to get an element of order $33$? I'm pretty sure this is wrong, but if someone could help me out I would appreciate it. Thanks.
| Alternatively:
*
*show that the Sylow $11$-subgroup is normal.
*show that a cyclic group of order $11$ has no automorphisms of order $3$.
*pick an element of order $3$ in your group and conclude that it commutes with any element of order $11$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
what does following matrix says geometrically Let $M\subset \mathbb C^2$ be a hypersurface defined by $F(z,w)=0$. Then for some point $p\in M$, I've
$$\text{ rank of }\left(
\begin{array}{ccc}
0 &\frac{\partial F}{\partial z} &\frac{\partial F}{\partial w} \\
\frac{\partial F}{\partial z} &\frac{\partial^2 F}{\partial ^ 2z} &\frac{\partial^2 F}{\partial z\partial w} \\
\frac{\partial F}{\partial w} &\frac{\partial^2 F}{\partial w\partial z} & \frac{\partial^2 F}{\partial w^2} \\
\end{array}
\right)_{\text{ at p}}=2.$$
What does it mean geometrically? Can anyone give a geometric picture near $p$?
Any comment, suggestion, please.
Edit: Actually I was reading about Levi flat points and Pseudo-convex domains. I want to understand the relation between these two concepts. A point p for which the rank of the above matrix is 2 is called Levi flat. If the surface is everywhere Levi flat then it is locally equivalent to $(0,1)\times \mathbb{C}^n$, so I have many examples....but what will happen for others for example take the three sphere in $\mathbb{C}^2$ given by $F(z,w)=|z|^2+|w|^2−1=0$. This doesn't satisfy the rank 2 condition. Can I have precisely these two situations?
| Let $p=(z_0,w_0)$ and define
$G(z,w)=F(z,w)-F(z_0,w_0)$.
Then the matrix is
$$
\left(
\begin{matrix}
G & G_z & G_w \cr
G_z & (G_z)_z & (G_z)_w \cr
G_w & (G_w)_z & (G_w)_w \cr
\end{matrix}
\right)_{\text{at }p}
$$
Since $G(p)=0$.
Is that any help?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
For every $k \in {\mathbb Z}$ construct a continuous map $f: S^n \to S^n$ with $\deg(f) = k$. Suppose $S^n$ is an $n$-dimensional sphere.
Definition of the degree of a map:
Let $f:S^n \to S^n$ be a continuous map. Then $f$ induces a homomorphism $f_{*}:H_n(S^n) \to H_n(S^n)$ . Considering the fact that $H_n(S^n) = \mathbb {Z}$ , we see that $f_*$ must be of the form $f_*(n)=an$ for some fixed integer $a$. This $a$ is then called the degree of $f$.
Question: For every $k \in {\mathbb Z}$ how does one construct a continuous map $f: S^n \to S^n$ with $\deg(f) = k$?
| Here is another solution.
Claim 1: If $f:S^n \to S^n$ has degree $d$ then so does $\Sigma f: S^{n+1} \to S^{n+1}$
Proof: Use the Mayer–Vietoris sequence for $S^{n+1}$. Let $A$ be the complement of the North pole, and $B$ the complement of the South pole. Then $S^n \simeq A \cap B$ and the connecting map $\partial_*$ in the Mayer–Vietoris sequence is an isomorphism. We get the following commutative diagram
$$
\newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!}
\newcommand{\la}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{llllllllllll}
H_{n+1}(S^{n+1}) & \ra{\partial_*} & H_n\left(A \cap B\right) & \la{i_*} & H_n(S^n)\\
\da{\Sigma f_*} & & \da{} & & \da{f_*} \\
H_{n+1}(S^{n+1}) & \ra{\partial_*} & H_n\left(A \cap B\right) & \la{i_*} & H_n(S^n)\\
\end{array}
$$
in which each horizontal map is an isomorphism. Thus $\Sigma f_* = \partial_*^{-1} i_* f_* i_*^{-1}\partial_*$, and chasing a generator of $H_{n+1}(S^{n+1})$ through this composite shows that $\text{deg}(f) = \text{deg}(\Sigma f)$
Thus we are reduced to simply showing that there is a map $f:S^1 \to S^1$ of degree $k$. But this is just the winding number and it is (reasonably well known) that the map $z \mapsto z^k$ (where we view $S^1$ as the unit circle in $\mathbb{C}$) has degree $k$.
Finally I would direct you to have a look at Algebraic Topology by Hatcher:
*
*Example 2.31 gives a direct construction of a map of arbitrary degree;
*Example 2.32 works through the calculation of the map $f(z)=z^k$ proving it has degree $k$; and
*Prop 2.33 gives another proof of Claim 1 above (which basically takes a different route to the same commutative diagram).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 0
} |
How to pronounce "$\;\setminus\;$" (the symbol for set difference) A question for English speakers. When using (or reading) the symbol $\setminus$ to denote set difference —
$$A\setminus B=\{x\in A|x\notin B\}$$
— how do you pronounce it?
If you please, indicate in a comment on your answer what region you're from (what dialect you have).
This is a poll question. Please do not repeat answers! Rather, upvote an answer if you pronounce the symbol the same way the answerer does, and downvote it not at all. Please don't upvote answers for other reasons. Thanks!
| "Complement of $B$ in $A$" for $A-B$, or sometimes "$A$ difference $B$".
This is a more self-explanatory way to say it, I feel!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 10,
"answer_id": 5
} |
Coupon Problem generalized, or Birthday problem backward. I want to solve a variation on the Coupon Collector Problem, or (alternately) a slight variant on the standard Birthday Problem.
I have a slight variant on the standard birthday problem.
In the standard Coupon Collector problem, someone is choosing coupons at random (with replacement) from n different possible coupons. Duplicate coupons do us no good; we need a complete set. The standard question is "What is the expected number of coupons (or probability distribution in number of coupons) to collect them all?
In the standard birthday problem, we choose k items from n options with replacement (such as k people in a room, each with one of 365 possible birthdays) and try to determine the probability distribution for how many unique values there will be (will they have the same birthday?).
In my problem, someone has chosen k items from n options and I know that there were p distinct values, but I don't know what k was. If p=n this is the coupon problem, but I want to allow for values of p that are less than n. I want to determine the probability distribution for k (actually, all I need is the expected value of k, but the distribution would be interesting as well) as a function of p and n.
| To do this properly, you need some sort of underlying distribution for $k$. For example you could use Bayesian methods: start with a prior distribution for $k$, multiply it by the likelihood of seeing $p$ for each $k$ given $n$, and divide by some constant to get a posterior probability distribution for $k$. You might then take the mean, median or mode of that posterior distribution as your central estimate; with an improper uniform prior, taking the mode would give the maximum likelihood estimate.
Another approach based on the coupon collector's calculation might be to assume that the person was aiming to get $p$ distinct items and stopped when this was achieved. In that case the expected value of $k$ would be $$\sum_{j=n-p+1}^n \frac{n}{j} = n(H_n - H_{n-p}) \approx n \log_e \left(\frac{n}{n-p}\right) $$ though the dispersion would be relatively wide. $H_n$ is the $n$th harmonic number.
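The exact expectation and the logarithmic approximation are easy to compare numerically (a small Python sketch; `expected_draws` is my name for it):

```python
import math

def expected_draws(n, p):
    """E[k] when drawing with replacement from n options until p distinct values are seen."""
    return sum(n / j for j in range(n - p + 1, n + 1))

print(expected_draws(6, 6))               # 14.7 -- the coupon collector number for a die
print(expected_draws(365, 300))           # exact value of n*(H_n - H_{n-p})
print(365 * math.log(365 / 65))           # the logarithmic approximation, close to the exact sum
```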
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
If $f$ is differentiable on $[a,b]$, then it is also Lipschitz on it. He guys, I am trying to show that a differentiable function defined on a closed interval is also Lipschitz on it. I managed to weave the below proof, but I have a feeling that it may be just a tad too general for this purpose:
Theorem. If $f$ is differentiable on $[a,b]$, then it is also Lipschitz on it.
Recall that $f:A\to\mathbb{R}$ is Lipschitz on $A$ if there exists an $M>0$ such that$$\left|\frac{f(x)-f(y)}{x-y}\right|\leq M$$for all $x,y\in A$.
Proof. Let $f$ be differentiable on $[a,b]$. Because $f$ is continuous and $[a,b]$ is compact, by the Extreme Value Theorem, it follows that $f$ attains a maximum value $M$. Moreover, since $f$ is differentiable on $[a,b]$,$$f'(y)=\lim_{x\to y}\left|\frac{f(x)-f(y)}{x-y}\right|\leq M,$$for all $x,y\in[a,b]$, as required. $\square$
What do you guys think?
Edit: What if we were to add that $f'$ is also continuous on $[a,b]$?
| It is not true in general. Consider the function
$$
f(x)=
\begin{cases}
x^2 \sin\frac{1}{x^2}\qquad x\neq 0\\
0\qquad\quad\qquad x=0
\end{cases}
$$
It is differentiable on $[0,1]$ and
$$
f'(x)=
\begin{cases}
2 x \sin\frac{1}{x^2}-\frac{2}{x} \cos\frac{1}{x^2}, &x\neq 0\\
0, &x=0
\end{cases}
$$
But this derivative is unbounded, since
$$
\lim\limits_{n\to\infty} f'\left(\frac{1}{\sqrt{ {\pi} +2\pi n}}\right)=+\infty
$$
On the other hand, if we require $f'\in C([0,1])$, then by the Weierstrass theorem there exists
$$
M=\sup\limits_{t\in [0,1]}|f'(t)|<+\infty.
$$
This $M$ is a Lipschitz constant you are looking for.
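The blow-up along $x_n = 1/\sqrt{\pi+2\pi n}$ is easy to see numerically (Python sketch): at these points $\sin(1/x^2)=0$ and $\cos(1/x^2)=-1$, so $f'(x_n)=2/x_n=2\sqrt{\pi+2\pi n}$.

```python
import math

def fprime(x):
    """Derivative of f(x) = x^2 sin(1/x^2) for x != 0."""
    return 2 * x * math.sin(1 / x**2) - (2 / x) * math.cos(1 / x**2)

for n in (1, 10, 100, 1000):
    x = 1 / math.sqrt(math.pi + 2 * math.pi * n)
    print(n, fprime(x))   # grows like 2 * sqrt(pi + 2*pi*n), unbounded as n -> infinity
```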
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
How many combinations of 6 items are possible? I have 6 items and want to know how many combinations are possible in sets of any amount. (no duplicates)
e.g. It's possible to have any of the following:
1,2,3,4,5,6
1,3,5,6,2
1
1,3,4
there cannot be duplicate combinations:
1,2,3,4
4,3,2,1
Edit: for some reason I cannot add more comments. @miracle173 is correct. Also {1,1} is not acceptable
| $2^6$. Think of it like a garden of forking paths where at the first fork you have to decide whether or not to include 1, then 2, then 3... With each choice the number of possibilities doubles.
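A brute-force check with `itertools` (Python; note the $2^6=64$ total includes the empty selection — subtract one if choosing nothing doesn't count):

```python
from itertools import combinations

items = [1, 2, 3, 4, 5, 6]
subsets = [c for r in range(len(items) + 1) for c in combinations(items, r)]
print(len(subsets))        # 64 == 2**6 (includes the empty selection)
print(len(subsets) - 1)    # 63 if the empty set is excluded
# combinations() emits each subset once in sorted order, so no reordered duplicates:
print((1, 2, 3, 4) in subsets, (4, 3, 2, 1) in subsets)   # True False
```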
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
For $x_1,x_2,x_3\in\mathbb R$ that $x_1+x_2+x_3=0$ show that $\sum_{i=1}^{3}\frac{1}{x^2_i} =({\sum_{i=1}^{3}\frac{1}{x_i}})^2$ Show that if $ x_1,x_2,x_3 \in \mathbb{R}$ , and $x_1+x_2+x_3=0$ , we can say that:
$$\sum_{i=1}^{3}\frac{1}{x^2_i} = \left({\sum_{i=1}^{3}\frac{1}{x_i}}\right)^2.$$
| Take the equation $x_1+x_2+x_3=0$, divide by $x_1x_2x_3$, multiply by $2$, and add $x_1^{-2}+x_2^{-2}+x_3^{-2}$. This is essentially reverse-engineered from taking the suspected equality, multiplying out the right-hand side, subtracting out the left-hand side and multiplying by $x_1x_2x_3$...
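A quick exact spot-check with rational arithmetic (Python sketch; `identity_holds` is my name — any nonzero integers summing to zero will do):

```python
from fractions import Fraction

def identity_holds(x1, x2, x3):
    """Exact check of sum(1/x_i^2) == (sum(1/x_i))^2 when x1 + x2 + x3 == 0."""
    assert x1 + x2 + x3 == 0
    lhs = sum(Fraction(1, x * x) for x in (x1, x2, x3))
    rhs = sum(Fraction(1, x) for x in (x1, x2, x3)) ** 2
    return lhs == rhs

print(identity_holds(1, 2, -3), identity_holds(5, -2, -3))   # True True
```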
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
General characteristics of a normal distribution If the normal distribution curve is symmetrical about the vertical line through the mean, then the
mean = mode = median
This would mean that:
Prob( X < mean) = Prob( X > mean) = .....
What is the missing part of this concept?
| Since
Prob(X < mean) + Prob(X = mean) + Prob(X > mean) = 1
and Prob(X = mean) = 0 (continuous distribution),
you get
Prob(X < mean) = Prob(X > mean) = 0.5
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Rationality test for a rational power of a rational It has been known since Pythagoras that 2^(1/2) is irrational. It is also obvious that 4^(1/2) is rational. There is also a fun proof that even the power of two irrational numbers can be rational.
Can you, in general, compute whether the power of two rational numbers is rational?
The reason I am asking, besides curiosity, is that the Fraction type in Python always returns a float on exponentiation. If there were a quick way to tell whether the result could be accurately expressed as a fraction, the power function could conceivably return floats only when it has to.
EDIT:
By popular demand, I changed 0.5 to 1/2 to make it clearer that it is a fraction and not a float.
| Hmmm, I think that your assertion that 4^0.5 is rational is arguable. Bear in mind that in floating-point arithmetic 0.5 represents a range of real numbers, all of whose closest representation in the chosen version of f-p is closer to 0.5 than to either of its neighbouring f-p numbers. Only one of the numbers in that range leads to a rational result for the calculation of 4^0.5.
It is perfectly reasonable for Python (or any other computer) to, given f-p input, provide f-p output.
Perhaps you should have written it is obvious that 4^(1/2) is rational ?
And I see that you already have an answer to your question so I'll stop now.
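As for the general question: a positive rational $p/q$ in lowest terms raised to a reduced exponent $a/b$ (with $b>0$) is rational exactly when $p$ and $q$ are both perfect $b$-th powers. A Python sketch of that test (function names are mine; the float root guess is only a starting point and is verified with exact integer arithmetic, so it can at worst miss very large perfect powers):

```python
from fractions import Fraction

def exact_root(n, k):
    """Integer k-th root of n if n is a perfect k-th power, else None."""
    r = round(n ** (1.0 / k))          # float guess, checked exactly below
    for cand in (r - 1, r, r + 1):
        if cand >= 0 and cand ** k == n:
            return cand
    return None

def rational_pow(base, exp):
    """base ** exp as a Fraction when the result is rational, else None.
    Assumes base is a positive Fraction; Fraction keeps both in lowest terms."""
    p, q = base.numerator, base.denominator
    a, b = exp.numerator, exp.denominator
    if a < 0:
        p, q, a = q, p, -a
    rp, rq = exact_root(p, b), exact_root(q, b)
    if rp is None or rq is None:
        return None
    return Fraction(rp ** a, rq ** a)

print(rational_pow(Fraction(4), Fraction(1, 2)))       # 2
print(rational_pow(Fraction(2), Fraction(1, 2)))       # None (irrational)
print(rational_pow(Fraction(8, 27), Fraction(2, 3)))   # 4/9
```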
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
How to show that these two random number generating methods are equivalent? Let $U$, $U_1$ and $U_2$ be independent uniform random numbers between 0 and 1. Can we show that generating random number $X$ by $X = \sqrt{U}$ and $X = \max(U_1,U_2)$ are equivalent?
| Random variables $X = \sqrt{U}$ and $Y = \max(U_1, U_2)$ are equal in distribution. Indeed, both $0 \leqslant X(\omega) \leqslant 1$ and $0 \leqslant Y(\omega) \leqslant 1$. Furthermore, for $0 \leqslant x \leqslant 1$, we have
$$
F_X(x) = \mathbb{P}(X \leqslant x) = \mathbb{P}(\sqrt{U} \leqslant x) = \mathbb{P}(U \leqslant x^2) = x^2
$$
$$
F_Y(x) = \mathbb{P}(Y \leqslant x) = \mathbb{P}(\max(U_1,U_2) \leqslant x) = \mathbb{P}(U_1 \leqslant x, U_2 \leqslant x) \stackrel{\text{independence}}{=} \\\mathbb{P}(U_1 \leqslant x) \cdot \mathbb{P}(U_2 \leqslant x) = x^2
$$
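A quick empirical check of the equality in distribution (Python sketch): both samples should have empirical CDF close to $F(x)=x^2$ on $[0,1]$.

```python
import random

random.seed(42)
n = 100_000
sqrt_u = [random.random() ** 0.5 for _ in range(n)]
max_uu = [max(random.random(), random.random()) for _ in range(n)]

def ecdf(sample, q):
    """Empirical CDF of the sample evaluated at q."""
    return sum(1 for v in sample if v <= q) / len(sample)

for q in (0.2, 0.5, 0.8):
    # both empirical CDFs approximate F(q) = q**2
    print(q, round(ecdf(sqrt_u, q), 3), round(ecdf(max_uu, q), 3), q * q)
```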
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/114950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing $\int_0^{2\pi} \log|1-ae^{i\theta}|d\theta=0$ This is a homework problem for a second course in complex analysis. I've done a good bit of head-bashing and I'm still not sure how to solve it-- so I might just be missing something here. The task is to show that given $|a|<1$,
$$\int_0^{2\pi} \log|1-ae^{i\theta}|d\theta=0.$$
So right off the bat we can let $z=e^{i\theta}$ so that
$$\int_0^{2\pi} \log|1-ae^{i\theta}|d\theta=\int_{|z|=1} \log|1-az|\frac{dz}{iz}=-i\int_{|z|=1} \frac{\log|1-az|}{z}\,dz.$$
After that I'm not sure if using the residue theorem is the way to go?
| Hint:
Note that $\log|1-ae^{i\theta}|$ is the real part of $\log(1-ae^{i\theta})$. Then try differentiating with respect to $a$. Then notice that integrating around the unit circle
$$
\frac1i\oint\frac{\mathrm{d}z}{1-az}=0
$$
when $|a|<1$.
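A numerical sanity check (Python sketch): sampling at the $n$-th roots of unity, the Riemann sum is exactly $\frac{2\pi}{n}\log|1-a^n|$, since $\prod_{k}(1-a\omega_k)=1-a^n$, and this tends to $0$ for $|a|<1$.

```python
import cmath, math

def integral(a, n=4096):
    """Riemann sum for the integral of log|1 - a e^{i theta}| over [0, 2*pi]."""
    total = sum(math.log(abs(1 - a * cmath.exp(2j * math.pi * k / n)))
                for k in range(n))
    return 2 * math.pi * total / n

for a in (0.5, 0.9, 0.3 + 0.4j):
    print(a, integral(a))   # all essentially 0 for |a| < 1
```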
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
How should I generate this random variable? Suppose we now have an Erlang distribution $b(x;2,1) = x e^{-x}$. According to the definition of the Erlang distribution, we know that the variance of such a distribution is 2, and we want to reduce the variance of this model. So we hope that the modified density $\tilde{b}$ would fall faster than the original $b$. We're trying to use:
$\tilde{b} = \begin{cases} \frac{1}{c} x e^{-x} & 0 < x < t \\ \frac{1}{c} t e^{-x} & x \geq t\end{cases}$.
Integration tells us $c = 1 - e^{-t}$. And after we choose $c$, how should we generate such a random variable?
| In order to have an answer, here is something that will work. It is based on the idea of Rejection Sampling. Call your modified density function $f(x)$.
Use a pseudo-random number generator that simulates the values of a random variable uniformly distributed on $(0,1)$. Suppose that this generator produces the numbers $u$ and $v$.
Let $x=-\ln u$. If
$$c\,f(x) \ge vte^{-x},$$
accept $x$ as coming from sampling from our distribution. Else reject $x$. Repeat. (The factor $c$ is needed so that the envelope $\frac{t}{c}e^{-x}$ dominates $f$; the acceptance probability is then $cf(x)/(te^{-x})=\min(x,t)/t\le 1$.)
This is a close relative of the usual Monte Carlo method for finding the area of a complicated figure.
This procedure unfortunately could have a high rejection rate, making it time-consuming to generate a large pseudo-random sample. So modifications may need to be made if we are to have a practical method.
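A Python sketch of such a rejection sampler (names are mine; I write the acceptance probability as $\min(x,t)/t \le 1$, which corresponds to the dominating envelope $\frac{t}{c}e^{-x}\ge f(x)$):

```python
import math, random

def sample_modified(t, rng=random.random):
    """Draw one sample from f(x) = min(x, t) e^{-x} / (1 - e^{-t})
    by rejection from an Exp(1) proposal with envelope (t/c) e^{-x}."""
    while True:
        u, v = rng(), rng()
        x = -math.log(u)            # Exp(1) proposal
        if v * t <= min(x, t):      # accept with probability min(x, t)/t
            return x

random.seed(0)
t = 1.0
samples = [sample_modified(t) for _ in range(20_000)]
# theoretical mean is (2 - (t + 2) e^{-t}) / (1 - e^{-t}), about 1.418 for t = 1
print(sum(samples) / len(samples))
```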
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Arc Length Problem I am currently in the middle of the following problem.
Reparametrize the curve $\vec{\gamma } :\Bbb{R} \to \Bbb{R}^{2}$ defined by $\vec{\gamma}(t)=(t^{3}+1,t^{2}-1)$ with respect to arc length measured from $(1,-1)$ in the direction of increasing $t$.
By reparametrizing the curve, does this mean I should write the equation in cartesian form? If so, I carry on as follows.
$x=t^{3}+1$ and $y=t^{2}-1$
Solving for $t$
$$t=\sqrt[3]{x-1}$$
Thus,
$$y=(x-1)^{2/3}-1$$
Letting $y=f(x)$, the arclength can be found using the formula
$$s=\int_{a}^{b}\sqrt{1+[f'(x)]^{2}}\cdot dx$$
Finding the derivative yields
$$f'(x)=\frac{2}{3\sqrt[3]{x-1}}$$
and
$$[f'(x)]^{2}=\frac{4}{9(x-1)^{2/3}}.$$
Putting this into the arclength formula, and using the proper limits of integration (found by using $t=1,-1$ with $x=t^{3}+1$) yields
$$s=\int_{0}^{2}\sqrt{1+\frac{4}{9(x-1)^{2/3}}}\cdot dx$$
I am now unable to continue with the integration as it has me stumped. I cannot factor anything etc. Is there some general way to approach problems of this kind?
| Your integration could be done. However, there is a much easier way. Calculate the arc-length using the parametric version of the curve.
We have $x=u^3+1$ and $y=u^2-1$. (I changed the names of the parameters because I want to reserve $t$ for the parameter of the endpoint.) Then $\frac{dx}{du}=3u^2$ and $\frac{dy}{du}=2u$. Thus the arclength from $u=0$ to $u=t$ is given by
$$\int_0^t \sqrt{\left(\frac{dx}{du}\right)^2 +\left(\frac{dy}{du}\right)^2}\,du.$$
We have used the parametric arclength formula, much easier! The integration starts at $u=0$, since that is the value of the parameter that gives us the point $(1,-1)$.
We end up integrating $\sqrt{9u^4+4u^2}$. Since $u\ge 0$, we can replace this by $u\sqrt{9u^2+4}$. Integrate, making the substitution $w=9u^2+4$. We arrive at
$$\frac{1}{27}\left((9t^2+4)^{3/2}-8\right).\qquad (\ast)$$
This is the arclength $s$, expressed as a function of $t$.
We want to parametrize in terms of $s$. So solve for $t$ in terms of $s$, using $(\ast)$. When you solve, there will be two candidate values of $t$. Take the non-negative one, since we started at $t=0$ and were told that $t$ is increasing.
Finally, in the original parametrization, replace $t$ by its value in terms of $s$.
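A numerical cross-check of $(\ast)$ and its inverse (Python sketch; function names are mine):

```python
import math

def s_of_t(t):
    """Arc length from (*): s(t) = ((9 t^2 + 4)^{3/2} - 8) / 27."""
    return ((9 * t * t + 4) ** 1.5 - 8) / 27

def t_of_s(s):
    """The inverse: parameter value t at arc length s (non-negative root)."""
    return math.sqrt(((27 * s + 8) ** (2 / 3) - 4) / 9)

def s_numeric(t, n=200_000):
    """Midpoint rule for the integral of sqrt(9 u^4 + 4 u^2) from 0 to t."""
    h = t / n
    return h * sum(math.sqrt(9 * u**4 + 4 * u**2)
                   for u in (h * (k + 0.5) for k in range(n)))

print(s_of_t(1.0), s_numeric(1.0))   # both approximately 1.4397
print(t_of_s(s_of_t(1.0)))           # approximately 1.0 -- round trip
```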
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Symmetrical bilinear form $G\times G\to \mathbb{Z}_{2}$ Let $G$ be a finite abelian group such that $x+x=0$ for every $x\in G$ (i.e. $G=\mathbb{Z}_{2}^{\oplus k}$ for some $k\in\mathbb{N}$), and let $(\cdot,\cdot):G\times G\to \mathbb{Z}_{2}$ be a symmetric bilinear form.
We know that:
$$(a, m)=0,$$
$$(a, p)=1,$$
$$(b, m)=1,$$
$$(b, p)=0.$$
Is it true that
$$(a, b) = 1?$$
Thanks.
| No, this is not true: Take the canonical dot product, with $k=2$, $a=p=(0,1)$, $b=m=(1,0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Quick check on function composition notation Does $f^n(x)$ always mean $f(f(f(f(...f(x))))....)$ [n times]?
i.e. $f^3(x)$ always means $f(f(f(x)))$?
Does $f^0(x)$ mean $x$? [where $f\neq id$]
By always, I mean regardless of whether it's for proofs in computer science or for calculus.
Just want to be doubly sure so I don't make any unfounded leaps in my proofs by induction for computer science.
Apologies for this simian question.
Many thanks!
| No. Sometimes $f^n$ refers to multiplication, rather than composition of functions. This is especially true with trigonometric functions: for example, $\sin^2(x)$ always means $\sin(x) \cdot \sin(x)$, never $\sin(\sin(x))$. Outside trigonometry, composition is a more likely meaning, but multiplication is possible.
Do not confuse either of these with $f^{(n)}$, which means the $n$th derivative of $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to integrate this function? I heard that there are some nonintegrable functions, and I want to ask whether this one is one of them:
\begin{equation}
\large \int \frac{t}{t+1}dt
\end{equation}
Got this by trying to solve another function, and I need to check that this is not a dead end. If it is, I will just have to find another way. It is likely that I just lack the skill and knowledge to solve it despite its simple appearance.
| This integral is easy to do:
$$\begin{align*}
\int \frac{t}{t+1}\,dt &= \int\frac{t+1-1}{t+1}\,dt\\
&= \int\left(\frac{t+1}{t+1}-\frac{1}{t+1}\right)\,dt\\
&= \int1\,dt - \int\frac{dt}{t+1}\\
&= t - \ln|t+1| + C.
\end{align*}$$
You can verify this by differentiation:
$$\frac{d}{dt}\left(t - \ln|t+1| + C\right) = 1 - \frac{1}{t+1} = \frac{t+1-1}{t+1} = \frac{t}{t+1}.$$
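The same verification by differentiation can be done numerically (Python sketch) with a central difference:

```python
import math

def F(t):
    """Antiderivative found above, with C = 0."""
    return t - math.log(abs(t + 1))

h = 1e-6
for t in (-0.5, 0.5, 2.0, 10.0):
    numeric = (F(t + h) - F(t - h)) / (2 * h)
    print(t, numeric, t / (t + 1))   # the two columns agree
```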
Note. What you have here is the integral of a rational function (a polynomial divided by a polynomial). In principle, every rational function has an elementary integral. There's even an algorithm for finding them.
To find the integral of $\frac{p(t)}{q(t)}$, where $p$ and $q$ are polynomials:
*
*If $\deg(p)\geq \deg(q)$, then perform long division with remainder and rewrite the fraction as
$$\frac{p(t)}{q(t)} = P(t) + \frac{r(t)}{q(t)}$$
where $P(t)$ is a polynomial, and $r(t)$ is a polynomial with $r=0$ or $\deg(r)\lt\deg(q)$. $P(t)$ can be integrated easily, so we are left with the problem of integrating rational functions $\frac{p(t)}{q(t)}$ with $\deg(p)\lt\deg(q)$.
*Completely factor $q(t)$ into a product of linear and irreducible quadratic polynomials. This step can be hard to perform in practice! In fact, this is the only reason why I say "in principle" above, because actually factoring a polynomial can be very hard to do.
*Use Partial Fraction Decomposition to rewrite $\frac{p(t)}{q(t)}$ as a sum of rational functions in which the denominator is a power of a linear polynomial and the numerator is a constant; or the denominator is a power of an irreducible quadratic polynomial and the numerator is linear polynomial.
*To compute $\int\frac{A}{(at+b)^n}\,dt$, $A$ constant, $a\neq 0$, $n$ a positive integer, use the substitution $u=at+b$.
*To compute $\int\frac{At}{(at^2+bt+c)^n}\,dt$ where $at^2+bt+c$ is irreducible quadratic, use the substitution $u=at^2+bt+c$, adjusting the numerator by a constant.
*To compute $\int\frac{A}{at^2+bt+c}\,dt$, with $at^2+bt+c$ irreducible quadratic, complete the square, use a substitution, and use the arctangent.
*To compute $\int\frac{A}{(at^2+bt+c)^n}\,dt$ with $n\gt 1$, $at^2+bt+c$ irreducible quadratic, complete the square, use a change of variable, and use the reduction formula
$$
\int\frac{dx}{(c^2\pm x^2)^n} = \frac{1}{2c^2(n-1)}\left(\frac{x}{(c^2\pm x^2)^{n-1}} + (2n-3)\int\frac{dx}{(c^2\pm x^2)^{n-1}}\right).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
What on earth does "$r$ is not a root" even mean? Method of Undetermined Coeff Learning ODE now, and using method of Undetermined Coeff
$$y'' +3y' - 7y = t^4 e^t$$
The book said that $r = 1$ is not a root of the characteristic equation. The characteristic equation is $r^2 + 3r - 7 = 0$ and the roots are $r = \frac{-3 \pm \sqrt{37}}{2}$
Where on earth are they getting $r = 1$ from?
| It may be related to this:
For the equation: $y'' +3y' - 7y = e^{t}$. Using the method of undetermined coefficients, you would guess that $Ae^t$ is a particular solution of the equation. But this wouldn't work if $Ae^t$ were a solution to the homogeneous equation. Then, you'd guess $Ate^t$ for a particular solution (assuming that wasn't a solution to the homogeneous equation, in which case you'd try $At^2e^t$). To check if $Ae^t$ is a solution to the homogeneous equation, you'd check if $r=1$ is a solution to the c.e..
In your case, I think, the reason for mentioning that $r=1$ is not a solution of the c.e., is because that tells you that the guess for your particular solution should contain a term $Ae^t$ (the guess contains other terms because you have $t^4e^t$ on the right hand side).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Inner Product Spaces - Triangle Inequality I have to show that:
For an inner product space $V$, $\|x + y \| = \|x\| + \|y\|$, for all $x$, $y \in V$
if and only if one of the vectors $x$ or $y$ is a scalar multiple of the other.
I am thinking, if $x= cy$, for some scalar then the above equality holds.
I am unsure how to proceed the other way. Please help.
Edit: I guess I have to look at the proof of the triangle inequality, which holds as an equality if
$|\langle x, y\rangle | = \|x\|\,\|y\|.$
| We assume that $x$ and $y$ are not $0$. If we have a relationship of the form $x=cy$ where $c$ is a scalar then $\|x\|=|c|\cdot \|y\|$ so $|c|=\frac{\|x\|}{\|y\|}$. So we try to compute $\Bigl\| \|x\|y-\|y\|x\Bigr\|^2$:
$$\begin{align*}
\Bigl\lVert \|x\|y-\|y\|x\Bigr\rVert^2&=\|x\|^2\|y\|^2+\|y\|^2\|x\|^2-\|x\|\cdot \|y\|(\langle x,y\rangle+\langle y,x\rangle)\\
&=\|x\|\cdot \|y\|(2\|x\|\cdot \|y\|-(\|x+y\|^2-\|x\|^2-\|y\|^2))\\
&=\|x\|\cdot \|y\|(2\|x\|\cdot \|y\|-2\|x\|\cdot \|y\|)\\&=0\end{align*} $$
which gives the result.
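A concrete numerical illustration of the theorem (Python sketch with the standard dot product; helper names are mine): equality holds for parallel vectors and fails otherwise.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return dot(x, x) ** 0.5

def add(x, y):
    return [a + b for a, b in zip(x, y)]

x, y = [3.0, 4.0], [1.5, 2.0]               # x = 2y, parallel
print(norm(add(x, y)), norm(x) + norm(y))   # 7.5 7.5 -- equality

u, v = [1.0, 0.0], [0.0, 1.0]               # not parallel
print(norm(add(u, v)), norm(u) + norm(v))   # sqrt(2) < 2 -- strict inequality
```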
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Determining variance from sum of two random correlated variables I understand that the variance of the sum of two independent normally distributed random variables is the sum of the variances, but how does this change when the two random variables are correlated?
| For any two random variables:
$$\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y)+2\text{Cov}(X,Y).$$
If the variables are uncorrelated (that is, $\text{Cov}(X,Y)=0$), then
$$\tag{1}\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y).$$
In particular, if $X$ and $Y$ are independent, then equation $(1)$ holds.
In general
$$
\text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i)+
2\sum_{i< j} \text{Cov}(X_i,X_j).
$$
If for each $i\ne j$, $X_i$ and $X_j$ are uncorrelated, in particular if the $X_i$ are pairwise independent (that is, $X_i$ and $X_j$ are independent whenever $i\ne j$), then
$$
\text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i) .
$$
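A quick simulation (Python sketch; the correlation structure $Y = 0.6X + Z$ is just an example I chose) shows the covariance term at work — the sample moments satisfy the identity, and both sides are close to the theoretical value $1 + 1.36 + 2\cdot 0.6 = 3.56$:

```python
import random

random.seed(0)
n = 200_000
xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = 0.6 * x + random.gauss(0, 1)   # Cov(X, Y) = 0.6, Var(Y) = 1.36
    xs.append(x)
    ys.append(y)

def mean(a):
    return sum(a) / len(a)

def var(a):
    m = mean(a)
    return sum((v - m) ** 2 for v in a) / len(a)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

s = [p + q for p, q in zip(xs, ys)]
print(var(s))                                  # approximately 3.56
print(var(xs) + var(ys) + 2 * cov(xs, ys))     # identical up to rounding
```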
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 4,
"answer_id": 2
} |
In modular arithmetic is the concept of "increasing" well defined? If there is a function $f(x) = x\bmod n$ and, whenever $0\leq x_1\lt x_2\lt n$, we have $f(x_1)\lt f(x_2)$, can we say that $f$ is increasing?
Also, when finally I prove that $f(x)$ is increasing, how do I say it is doing so "modular arithmetically". I want to be able to specify that the function is increasing by the definition of increasing functions that is used for modular arithmetic.
Disclosure: I'm not a mathematician: I'm a student programmer who loves math. I'm working on proving that a given algorithm satisfies the requirements for being a solution to the critical section problem. This is homework, but the answer to that question is not homework.
Thanks!
z.
PS I post on stackoverflow but this is my first question here.
| Mathematicians don't usually put orderings on rings with positive characteristic, since we like to be able to say things like $x<x+1$ for all $x$.
But there's no reason you can't order the residue classes by setting $[x]<[y]$ whenever $0\leq x,y<n$. In other words, $[0]<[1]<\ldots <[n-1]$. Be careful though. For example, if this is your ordering mod 5, then $[-1]>[12]$ since $[-1]=[4]$ and $[12]=[2]$.
Just make sure that your ordering does not depend on which representative you pick from each class. (This is the definition of well-defined.)
As far as terminology goes, this is a nonstandard thing, so it's worth explaining your ordering. Then you can say "$f$ is increasing with respect to this ordering."
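A minimal sketch of such an ordering in Python (my own illustration): comparing canonical representatives in $[0,n)$ makes the comparison independent of which representative of each class you pick.

```python
def residue_lt(x, y, n):
    """Compare residue classes mod n via canonical representatives in [0, n)."""
    return x % n < y % n  # Python's % is always in [0, n) for n > 0

# [-1] = [4] and [12] = [2] mod 5, so [-1] > [12] under this ordering:
assert residue_lt(12, -1, 5)
assert not residue_lt(-1, 12, 5)

# Well-defined: the result is unchanged if we swap representatives.
assert residue_lt(7, 3, 5) == residue_lt(7 + 500, 3 - 35, 5)
```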
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Geometric intuition of tensor product Let $V$ and $W$ be two algebraic structures, $v\in V$, $w\in W$ be two arbitrary elements.
Then, what is the geometric intuition of $v\otimes w$, and more complex $V\otimes W$ ? Please explain for me in the most concrete way (for example, $v, w$ are two vectors in 2 dimensional vector spaces $V, W$)
Thanks
| The difference between the ordered pair $(v,w)$ of vectors and the tensor product $v\otimes w$ of vectors is that for a scalar $c\not\in\{0,1\}$, the pair $(cv,\;w/c)$ is different from the pair $(v,w)$, but the tensor product $(cv)\otimes(w/c)$ is the same as the tensor product $v\otimes w$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35",
"answer_count": 5,
"answer_id": 0
} |
What's the sum of $\sum_{k=1}^\infty \frac{2^{kx}}{e^{k^2}}$? I already asked a similar question on another post:
What's the sum of $\sum \limits_{k=1}^{\infty}\frac{t^{k}}{k^{k}}$?
There are no problems with establishing a convergence for this power series:
$$\sum_{k=1}^\infty \frac{2^{kx}}{e^{k^2}}$$
but I have problems in determining its sum.
| There is this Jacobi theta function:
$$
\vartheta_3 \biggl(\frac{i}{2} x \operatorname{ln} (2),\operatorname{e} ^{-1}\biggr) = \sum_{k = -\infty}^{\infty} \operatorname{e} ^{-k^{2}} 2^{k x}
$$
But you stopped half-way through, so yours is not such a common one. Yours is a "partial theta function"
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Finding an example of a discrete-time strict local martingale.
Find an example of a discrete-time local martingale that is not a true martingale.
I was thinking hard for some time about this fun problem.
I know that $\mathbb{E}[|M|_t]=\infty \text{ for some } t\geq0$ should hold. Moreover any non-negative local martingale in discrete time is a true martingale, so this restricts my choice even more. I played around with Cauchy distribution, doubling strategy.
| Let $X$ be a random variable with finite mean and infinite variance. Let $B$ be $1$ with probability half and $−1$ with probability half, independent of $X$. Fix a filtration $\mathcal{F}$ by $\mathcal{F}_0 = \sigma(X)$ and $\mathcal{F}_i = \sigma(X,B)$ for every $i\geq 1$.
Let $M_0=X$ and $M_i=M_0+BM_0^2$ for every $i\geq1$. Then $(M_i)$ is not a true martingale, since $M_i$ is not integrable when $i\geq1$. For every $n$, set $T_n=\inf\{k:|M_k|\ge n\}$.
Fix an $n$. Then $\mathbb{E}[|M^{T_n}_0|]=\mathbb{E}[|X|]<\infty$ and, for every $i\geq1$,
$\begin{align}
\mathbb{E}[|M^{T_n}_i|] &= \mathbb{E}[|M^{T_n}_1|\mathbf{1}(T_n=0)]+\mathbb{E}[|M^{T_n}_1|\mathbf{1}(T_n>0)]\\
&= \mathbb{E}[|M_0|\mathbf{1}(T_n=0)]+\mathbb{E}[|M_1|\mathbf{1}(T_n>0)]\\
&\leq \mathbb{E}[|M_0|]+\mathbb{E}[|M_1|\mathbf{1}(|M_0|< n)]\\
&\leq \mathbb{E}[|M_0|] + n+n^2\\
&<\infty
\end{align}$
So $M^{T_n}$ is integrable. We may also check that $\mathbb{E}[M^{T_n}_1\mid X]=M^{T_n}_0$, so $(T_n)$ localizes $M$. So $M$ is indeed a local martingale.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Notation of indexers with multiples in a series If $\sigma _{n}=1+\dfrac {1} {2}+\dfrac {1} {3}+\ldots +\dfrac {1} {n}$ what series is given by $\sigma _{2n}$ ? Does that mean we only take the even terms now or does it mean every term is multiplied by 2 ?
| Since
$$\sigma_{n} = 1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n-1}+\frac{1}{n}$$
is the sum of the reciprocals of $1$ up to $n$, we have that $\sigma_{2n}$ is
$$\sigma_{2n} = 1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n-1}+\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n-1}+\frac{1}{2n}$$
That is, we sum up to $2n$.
If we want to sum only even numbers, we'd have to change our notation and maybe write
$$\omega_n=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots+\frac{1}{2n-2}+\frac{1}{2n}=\frac{\sigma_{n}}{2}$$ and for odd numbers, put,
$$\kappa_n=\sigma_{2n}-\frac{\sigma_{n}}{2}=1+\frac{1}{3}+\frac{1}{5}+\cdots+\frac{1}{2n-3}+\frac{1}{2n-1}$$
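These identities are easy to sanity-check with exact rational arithmetic; here is a quick illustration I wrote using Python's `fractions`:

```python
from fractions import Fraction as F

def sigma(n):                     # 1 + 1/2 + ... + 1/n
    return sum(F(1, k) for k in range(1, n + 1))

def omega(n):                     # 1/2 + 1/4 + ... + 1/(2n)
    return sum(F(1, 2 * k) for k in range(1, n + 1))

def kappa(n):                     # 1 + 1/3 + ... + 1/(2n-1)
    return sum(F(1, 2 * k - 1) for k in range(1, n + 1))

for n in range(1, 30):
    assert omega(n) == sigma(n) / 2                    # even reciprocals halve sigma_n
    assert kappa(n) == sigma(2 * n) - sigma(n) / 2     # odds are the rest of sigma_{2n}
```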
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Concluding that a finitely generated module is free? Suppose $R$ is a local Noetherian domain, and $M$ is a finitely generated $R$-module. Furthermore, let's suppose there exists $k>0$ such that
$$
\dim_{R_\mathfrak{p}/\mathfrak{p}R_\mathfrak{p}}M_\mathfrak{p}/\mathfrak{p}M_\mathfrak{p}=k
$$
for any prime $\mathfrak{p}\in\operatorname{spec}(R)$.
I've been curious though, how does this imply that $M$ is in fact a free module? I figure you want to extract some basis for $M$ from a generating set $\{x_1,\dots,x_n\}$, and this is where the dimension condition comes in. However, after localizing at $\mathfrak p$ and taking quotients, I'm losing sight of how to connect to the two ideas.
Can someone explain why $M$ is free here? Thank you.
| Let $m \subset R$ be the unique maximal ideal of the local Noetherian ring $R$. By hypothesis (taking $\mathfrak p = m$), $M/mM$ has dimension $k$ as an $R/m$-vector space. So, let $x_1 + mM, \dots, x_k + mM$ be a basis for $M/mM$. Then one can show using Nakayama's lemma (see Proposition 2.8 in Atiyah-Macdonald) that $M = \langle x_1,\dots,x_k\rangle$.
Let $p \in \operatorname{Spec}(R)$. Then $M_p/pM_p$ is generated by $\{\frac{x_1}{1} + pM_p, \dots, \frac{x_k}{1} + pM_p\}$ as an $R_p/pR_p$-vector space; since this space has dimension $k$ by hypothesis, these $k$ generators form a basis for $M_p/pM_p$.
Let $r_1x_1 + ... + r_kx_k = 0$ in $M$ $(r_i \in R)$. Then, for all $i$, $\frac{r_i}{1} + pR_p = 0$ in $R_p/pR_p$. Thus, $\frac{r_i}{1} \in pR_p$. Hence, $r_i \in p$. But, $p$ was an arbitrary prime ideal, and since $R$ is a domain $(0)$ is prime. So, $r_i \in (0)$, and we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Notation for sequences I am trying to write a small article, and I just want to know how would be a good way to present the maths I have written so that it looks professional.
I am trying to define a sequence $x_n$ of real numbers. So what I wrote in my article is:
Let $x_n$, $n \in \mathbb{N}$ be a sequence of real numbers.
However, it does not look very professional. How would I write the above sentence into something that looks professional? Note: I need to include $n \in \mathbb{N}$ in my sentence, so I think thats where my trouble is as $x_n$, $n \in \mathbb{N}$ seems a bit messy.
Thanks.
| I write my sequences as $\langle x_n \mid n \in \mathbb{N} \rangle$. Looks pretty cool I think. So you could say let $\langle x_n \mid n \in \mathbb{N} \rangle$ be a sequence in $\mathbb{R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
} |
If we know the GCD and LCM of two integers, can we determine the possible values of these two integers? I know that $\gcd(a,b) \cdot \operatorname{lcm}(a,b) = ab$, so if we know $\gcd(a,b)$ and $\operatorname{lcm}(a,b)$ and we want to find out $a$ and $b$, besides factoring $ab$ and find possible values, can we find these two integers faster?
| If $a \neq b$ then $\gcd(a,b) \leq \min\{a,b\} < \max\{a,b\} \leq \mathrm{lcm}(a,b) = \gcd(a,b)$, contradiction...
Edit: This answer was posted when the question was finding $a,b$ when $\gcd(a,b) = \mathrm{lcm}(a,b)$. For a good answer to the new question, see Bill's answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Related rates and implicit differentiation I can get the proper answer, but I don't quite know why.
I am supposed to find $dy/dt$ for the function $y = \sqrt{2x +1}$ if $dx/dt = 3$ when $x=4$.
For the derivative I get $$ \frac {dy}{dt} = \frac {1}{2} (2x + 1)^{-1/2} \frac{dx}{dt},$$ which then gives me
$$ \frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 3 = \frac{1}{2}, $$
which is wrong. I can also do
$$ \frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 2 \frac {dx}{dt},$$
which gives me $1$, which is the proper answer, but I am not sure why I get that. I know that the derivative of the inner function will be $2$ but the problems defines it as being $3$, so do I just multiply the two?
| $$ \frac {dy}{dt} = \frac {1}{2} (2x + 1)^{-1/2} \cdot 2 \cdot \frac{dx}{dt} $$
$$ \frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 2 \cdot \frac {dx}{dt} $$
The 2 comes from the derivative of the inner function, and then I multiply that by the given rate $dx/dt = 3$, so I get 6.
$$ \frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 6 $$
$$ \frac {dy}{dt} = \frac {1}{2} \cdot \frac {1}{3} \cdot 6 $$
$$ \frac {dy}{dt} = 1 $$
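As a sanity check (assuming, purely for the computation, the linear parametrization $x(t)=4+3t$, so that $dx/dt=3$ and $x=4$ at $t=0$), a central finite difference reproduces the chain-rule answer:

```python
import math

# Assumed parametrization (for checking only): x(t) = 4 + 3t,
# so dx/dt = 3 and x = 4 at t = 0.
def y(t):
    return math.sqrt(2 * (4 + 3 * t) + 1)

h = 1e-6
dydt = (y(h) - y(-h)) / (2 * h)   # central difference at t = 0
assert abs(dydt - 1.0) < 1e-6     # matches dy/dt = 1
```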
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that this limit is equal to $\liminf a_{n}^{1/n}$ for positive terms.
Show that if $a_{n}$ is a sequence of positive terms such that $\lim\limits_{n\to\infty} (a_{n+1}/a_n) $ exists, then this limit is equal to $\liminf\limits_{n\to\infty} a_n^{1/n}$.
I am not event sure where to start from, any help would be much appreciated.
| Hint:
$$a_n ^{\frac{1}{n}}= e^{\frac{ \ln (a_n)}{n}}$$
$$(a_{n+1}/a_n)= e^{\ln (a_{n+1}) -\ln(a_n)} \,.$$
Try to prove instead the equality of the corresponding limits of the exponents.
P.S. Are you familiar with Stolz–Cesàro or Cesàro means?
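As a numerical illustration (my own example, $a_n = n^n/n!$, for which both the ratio limit and the $n$-th root limit equal $e$):

```python
import math

# a_n = n^n / n!  ->  a_{n+1}/a_n = (1 + 1/n)^n -> e, and a_n^(1/n) -> e.
# Work with log a_n = n*log(n) - log(n!) to avoid overflow.
def log_a(n):
    return n * math.log(n) - math.lgamma(n + 1)

n = 200_000
ratio = math.exp(log_a(n + 1) - log_a(n))   # a_{n+1}/a_n
root = math.exp(log_a(n) / n)               # a_n^(1/n)
assert abs(ratio - math.e) < 1e-4
assert abs(root - math.e) < 1e-3
```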
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
How do I find the Taylor polynomial of multivariable functions? I know the Taylor polynomial for single-variable functions, but I am having trouble understanding how to find Taylor polynomials for multivariable functions. I know how to find partial derivatives as well as the chain rule for multivariable functions.
Can someone please explain how I can find the Taylor polynomial of a multivariable function? For simplicity let's do $f(x,y)=x^5-y^4$.
Above is text from my textbook that I read, but it doesn't make any sense to me, especially steps 3 and 4.
If you know of a site or source that can better explain how to find a multivariable function's Taylor series I would appreciate that, so that you don't spend so much time typing an explanation for me.
| We can fix $y$ and then consider $f(x,y)$ to be strictly a function of $x$ (say $h(x)=f(x,y)$ if you want, which makes sense because $y$ is fixed somewhere). Doing this yields the Taylor expansion seen in equation $(1)$. We would say that, by the usual formula, we have
$$h(x)=h(x_0)+h'(x_0)(x-x_0)+\frac{h''(x_0)}{2!}(x-x_0)^2+\cdots \tag{0}$$
Now $h(x_0)=f(x_0,y)$ and $h'(x_0)=f_x(x_0,y)$ and $h''(x_0)=f_{xx}(x_0,y)$ and so forth. Now notice that this works regardless of what we fixed $y$ at to begin with, so it must hold for all available $y$.
We will switch gears now; since the formula holds for all $y$ we no longer need to consider $y$ fixed.
The Taylor expansion in the variable $x$ seen in $(0)$ involves a number of terms like $f_{xx}(x_0,y)$. Now we know that $x_0$ is fixed but $y$ is not, so this is a function of $y$! Say $g(y)=f_{xx}(x_0,y)$. Then we can speak of its Taylor expansion just as well, $g(y)=g(y_0)+g'(y_0)(y-y_0)+\frac{g''(y_0)}{2!}(y-y_0)^2+\cdots$.
It turns out that $g(y_0)=f_{xx}(x_0,y_0)$ and $g'(y_0)=f_{xxy}(x_0,y_0)$ and $g''(y_0)=f_{xxyy}(x_0,y_0)$ and so on, because taking partial derivatives commutes with "evaluating at $y=y_0$" (or $x=x_0$, as in the last part). This means that differentiation and plugging things in can be done in any order here.
Doing a Taylor series expansion in the variable $y$ for $f_{xx}(x_0,y)$, as seen in $(4)$, can be done with any of the terms in $(1)$, like $f(x_0,y)$ and $f_x(x_0,y)$ seen in $(2)$ and $(3)$.
I have assumed that $f$ is sufficiently nice in this answer.
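A quick numerical check of the two-variable expansion for the asker's $f(x,y)=x^5-y^4$ at the point $(1,1)$ (the partials were computed by hand; the base point and step size are my own choices):

```python
def f(x, y):
    return x**5 - y**4

# Hand-computed partials at (1,1): f=0, f_x=5, f_y=-4, f_xx=20, f_xy=0, f_yy=-12
def T2(x, y):
    """Second-order Taylor polynomial of f about (1, 1)."""
    dx, dy = x - 1.0, y - 1.0
    return 5*dx - 4*dy + 0.5*(20*dx**2 - 12*dy**2)   # f_xy term vanishes

err = abs(f(1.01, 1.01) - T2(1.01, 1.01))
assert err < 1e-4   # the error is third order in the step, so tiny here
```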
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Algorithm to find the second smallest element I am having trouble with the following homework assignment:
Give an algorithm that finds the second smallest of n elements in at most
$n + \lceil\log(n)\rceil - 2$ comparisons.
I have been trying to find an algorithm myself, and believe I have to check the elements in pairs and then recurse on the smallest elements. However, when I tried an algorithm like this, the number of comparisons it took was $n + \lceil\log(n)\rceil$ (according to me).
This is the algorithm in question:
secondSmallest(A[n])
    if A.length >= 2 then
        for i = 1 to A.length do
            if A[i] < A[i + 1] then
                B[(i+1)/2] = A[i]
                i++
            else
                B[(i+1)/2] = A[i + 1]
                i++
            endif
        endfor
        return secondSmallest(B)
    else
        if A[1] < A[2] then
            return A[2]
        else
            return A[1]
        endif
    endif
Note: this is pseudocode, and by no means accurate. And the logarithm is a BASE 2 logarithm.
| SKETCH: Assuming that you’re allowed to use extra storage, run the comparison as a single elimination tournament, keeping a record of the entries beaten by each winner along the way. (Low number wins.) When you get an overall winner, you have only to identify the winner among the $\lceil \lg n\rceil$ contestants beaten by the overall winner.
Added: On further thought, I see that one doesn’t actually need additional memory, if one rearranges the list. Implement the single elimination tournament as follows.
Suppose that the numbers are $a_0,a_1,\dots,a_{n-1}$. On the first pass compare $a_{2k}$ with $a_{2k+1}$ for $0\le k<(n-1)/2$; if $a_{2k}>a_{2k+1}$, interchange them. On the second pass compare $a_{4k}$ with $a_{4k+2}$ for $0\le k<(n-2)/4$; if $a_{4k}>a_{4k+2}$, interchange the pair $\langle a_{4k},a_{4k+1}\rangle$ with the pair $\langle a_{4k+2},a_{4k+3}\rangle$. On the $i$-th pass compare $a_{2^ik}$ with $a_{2^ik+2^{i-1}}$ for $0\le k<(n-2^{i-1})/2^i$; if $a_{2^ik}>a_{2^ik+2^{i-1}}$, interchange the blocks of length $2^{i-1}$ beginning at $a_{2^ik}$ and $a_{2^ik+2^{i-1}}$. This continues as long as $n-2^{i-1}>0$, i.e., until $n\le 2^{i-1}$; thus, if the last pass is the $m$-th pass, then $2^{m-1}<n\le 2^m$, or $\lceil\lg n\rceil=m$. The smallest number in the set is now $a_0$, and every other number in the set has lost exactly one comparison, so we’ve made $n-1$ comparisons.
The numbers that are now $a_{2^i}$ for $i=0,\dots,m-1$ are the numbers that lost to $a_0$ in direct comparisons; every number in the set that is neither $a_0$ nor one of these $m$ numbers lost a direct comparison to one of these $m$ numbers and therefore cannot be the second-smallest number in the set. Thus, the second-smallest number is the smallest of the $a_{2^i}$ for $i=0,\dots,m-1$. There are $m$ of these numbers, so it takes $m-1$ comparisons to find the smallest.
Altogether, then, this algorithm takes $$(n-1)+(m-1)=n-1+\lceil\lg n\rceil-1=n+\lceil\lg n\rceil-2$$ comparisons, as required.
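Here is one possible Python sketch of the tournament idea (my own rendering of the algorithm described above, with the comparison counter checked against the $n+\lceil\lg n\rceil-2$ bound):

```python
import math
import random

def second_smallest(a):
    """Single-elimination tournament; returns (min, 2nd min, #comparisons)."""
    comparisons = 0
    players = [(v, []) for v in a]        # (value, values it beat directly)
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (x, bx), (y, by) = players[i], players[i + 1]
            comparisons += 1
            if x < y:
                bx.append(y); nxt.append((x, bx))
            else:
                by.append(x); nxt.append((y, by))
        if len(players) % 2:              # odd one out gets a bye this round
            nxt.append(players[-1])
        players = nxt
    winner, beaten = players[0]
    second = beaten[0]                    # 2nd smallest lost only to the winner
    for v in beaten[1:]:                  # at most ceil(lg n) - 1 comparisons
        comparisons += 1
        if v < second:
            second = v
    return winner, second, comparisons

random.seed(0)
for n in [2, 3, 7, 16, 100, 1000]:
    a = random.sample(range(10 * n), n)
    lo, second, c = second_smallest(a)
    assert (lo, second) == tuple(sorted(a)[:2])
    assert c <= n + math.ceil(math.log2(n)) - 2
```

The tournament costs $n-1$ comparisons, and the winner's "beaten" list has at most $\lceil\lg n\rceil$ entries, matching the analysis above.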
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
What is the value of $\sin(x)$ if $x$ tends to infinity?
What is the value of $\sin(x)$ if $x$ tends to infinity?
As in wikipedia entry for "Sine", the domain of $\sin$ can be from $-\infty$ to $+\infty$. What is the value of $\sin(\infty)$?
| Mathematics has many different ways of talking about infinity. In particular, there is more than one way of adjoining infinite quantities to the real number line. One, which is similar in spirit to what the others have been talking about, is the extended real number line. Another is the hyperreals. In the hyperreals, we have many different sizes of infinity, functions such as the sine can be extended to the whole hyperreal line in a natural and uniquely defined way, and it makes sense to say that $\sin x$ has some value, where $x$ is a certain infinite number. There is also a notion of an infinite integer, and we can, for example, say that $\sin(\pi n)=0$ and $\sin(\pi (n+1/2))=\pm 1$ if $n$ is an infinite integer.
What this still does not allow is any conclusion about the value of $\sin x$ as $x$ tends to infinity: $\sin x$ takes on all values between $-1$ and $+1$ for various infinite values of $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
The position of a particle moving along a line is given by $2t^3 -24t^2+90t + 7$ for $t >0$. For what values of $t$ is the speed of the particle increasing?
I tried to find the first derivative and I get
$$6t^2-48t+90 = 0$$
$$ t^2-8t+15 = 0$$
Which is giving me $ t>5$ and $0 < t < 3$, but the book gives a different answer
| You need to keep in mind that speed is the absolute value of velocity. If the velocity is positive and increasing, the speed will increasing. But if the velocity is negative and decreasing (becoming more and more negative), the speed is increasing.
There are two other cases to consider, but I'll leave that to you.
For your problem, let's consider the motion of the point by
looking at the velocity function (whose graph is a parabola opening upwards with zeros at $t=3$ and $t=5$). With the standard conventions:
From $t=0$ to $t=3$: The particle is moving to the right. Its speed is decreasing over $(0,3)$. (Note the velocity is positive and decreasing here.)
At $t=3$, the particle reverses direction. (The velocity is 0 here.)
From $t=3$ to $t=4$: The particle is moving to the left. Its speed is increasing over $(3,4)$. (The velocity is negative and decreasing here.)
From $t=4$ to $t=5$: The particle is moving to the left. Its speed is decreasing over $(4,5)$. (The velocity is negative and increasing here)
At $t=5$, the particle reverses direction. (The velocity is 0 here.)
From $t=5$ onwards: The particle is moving to the right. Its speed is increasing over $(5,\infty)$. (The velocity is positive and increasing here)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Divide inside a Radical It has been so long since I have done division inside of radicals that I totally forget the "special rule" for doing it. -_-
For example, say I wanted to divide the 4 out of this expression:
$\sqrt{1 - 4x^2}$
Is this the right way to go about it?
$\frac{16}{16} \cdot \sqrt{1 - 4x^2}$
$16 \cdot \frac{\sqrt{1 - 4x^2}}{16}$
$16 \cdot \sqrt{\frac{1 - 4x^2}{4}} \Longleftarrow \text{Took the square root of 16 to get it in the radicand as the divisor}$
I know that this really a simple, question. Can't believe that I forgot how to do it. :(
| First divide everything inside the radical by what you will want to take out of it, factoring this out while keeping it inside the radical. Then square root what you want to take out of the radical, and take it out.
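Concretely, for the asker's example this procedure gives $1-4x^2 = 4\left(\tfrac14 - x^2\right)$, and taking $\sqrt{4}=2$ outside yields $\sqrt{1-4x^2} = 2\sqrt{\tfrac14-x^2}$. A quick numerical confirmation:

```python
import math

# Factor the 4 out inside: 1 - 4x^2 = 4*(1/4 - x^2), then sqrt(4) = 2 exits.
for x in [0.0, 0.1, 0.25, 0.4, 0.49]:   # keep 1 - 4x^2 >= 0, i.e. |x| <= 1/2
    lhs = math.sqrt(1 - 4 * x**2)
    rhs = 2 * math.sqrt(0.25 - x**2)
    assert abs(lhs - rhs) < 1e-12
```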
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
If $f(x)=f'(x)+f''(x)$ then show that $f(x)=0$
A real-valued function $f$ which is infinitely differentiable on $[a,b]$ has the following properties:
*
*$f(a)=f(b)=0$
*$f(x)=f'(x)+f''(x)$ $\forall x \in [a,b]$
Show that $f(x)=0$ $\forall x\in [a,b]$
I tried using Rolle's Theorem, but it only tells me that there exists a $c \in (a,b)$ for which $f'(c)=0$.
All I get is:
*
*$f'(a)=-f''(a)$
*$f'(b)=-f''(b)$
*$f(c)=f''(c)$
Somehow none of these direct me to the solution.
| Let $x=c$ be the $x$ coordinate of absolute max of $f(x)$ on $[a,b]$. (This point exists by the extreme value theorem). I will show that $f(c) = 0$. Since $f(a) = 0$ and $c$ is the absolute max, $f(c)\geq 0$. By Fermat's theorem, we know $f'(c) = 0$. Hence, we learn that $f(c) = f''(c)\geq 0$.
Now, assume for a contradiction that $f(c) > 0$, so $f''(c) > 0$ and $c\neq a$ and $c\neq b$. I claim that for $x$ close enough to $c$, but bigger than it, that $f(x) > f(c)$, contradicting maximality of $f(c)$.
Since $f''$ is continuous, for $x$ close enough to $c$, say within $\delta$, we have $f''(x) > 0$. On the interval $c<x<c+\delta$ we then have $f'(x) > 0$. This follows from the Mean Value Theorem applied to $f'$: if $f'(x)\leq 0 = f'(c)$ for some point $x\in(c,c+\delta)$, then by the MVT, $f''(d) \leq 0$ for some $d\in(c,x)$, giving a contradiction.
From this, it follows that $f(x)>f(c)$ for $x\in(c,c+\delta)$, because, again by the MVT, we have $\frac{f(x)-f(c)}{x-c} = f'(d) > 0$ for some $d\in(c,c+\delta)$, so, $f(x) - f(c) > 0$.
Thus, we contradict maximality of $f(c)$. From this contradiction, we deduce $f(c) = 0$ is the maximum of the function. Now, repeat a similar argument to $-f$ (changing the interval $(c,c+\delta)$ to $(c-\delta, c)$) to deduce the minimum value of $f$ is $0$. From this it follows that $f$ is identically $0$.
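An alternative route (not the argument above, just a cross-check I'm adding): since $f$ satisfies $f''+f'-f=0$ identically, its solution space is spanned by $e^{r_1x}$ and $e^{r_2x}$ with $r_{1,2}=(-1\pm\sqrt5)/2$, and the boundary conditions $f(a)=f(b)=0$ force $f\equiv0$ because the relevant $2\times2$ determinant is nonzero whenever $a\neq b$. Numerically, with arbitrary sample values $a=0.3$, $b=1.7$:

```python
import math

# Characteristic equation of f'' + f' - f = 0 (i.e. f = f' + f''): r^2 + r - 1 = 0
r1 = (-1 + math.sqrt(5)) / 2
r2 = (-1 - math.sqrt(5)) / 2
assert abs(r1**2 + r1 - 1) < 1e-12 and abs(r2**2 + r2 - 1) < 1e-12

# General solution f = c1*e^{r1 x} + c2*e^{r2 x}; f(a) = f(b) = 0 forces
# c1 = c2 = 0 because this 2x2 system has nonzero determinant when a != b:
a, b = 0.3, 1.7
det = math.exp(r1*a) * math.exp(r2*b) - math.exp(r2*a) * math.exp(r1*b)
assert abs(det) > 1e-12
```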
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 2
} |
Why are logarithms not defined for 0 and negatives? I can raise $0$ to the power of one, and I would get $0$. Also $-1$ to the power of $3$ would give me $-1$.
I think only some logarithms (e.g log to the base $10$) aren't defined for $0$ and negative numbers, is that right?
I'm confused because on all the websites I've seen they say "logs are not defined for $0$ and negative number". On one website it says "$\log_b(0)$ is not defined", then provides an example where the base is $10$.
| You can define everything you want, but will this newborn object satisfy properties you want, depends on your definition. Assume, we do have logarithms for negative numbers and zero and all the properties of logarithms are preserved. Then we immediately obtain a contradiction. Here it is
$$
0=\log 1=\log(-1)^2=2\log (-1)
$$
so $\log(-1)=0$ and from the definition of logarithms we have $-1=10^0=1$. This is one of the reasons.
But if you still want to take logarithms of negative numbers, you must relax some requirements. The most reasonable is to make logarithms multivalued with values in $\mathbb{C}$. For more detailed description of such logarithms look at Complex logarithm
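Python's `cmath` implements the principal branch of this complex logarithm, which makes the failure of the real-log rule concrete:

```python
import cmath

# Principal-branch complex logarithm: log(-1) = i*pi (neither undefined nor 0).
assert abs(cmath.log(-1) - 1j * cmath.pi) < 1e-12

# The rule log(z^2) = 2 log(z) fails on this branch, which is exactly
# the contradiction derived above:
z = -1
assert cmath.log(z**2) == 0            # log(1) = 0
assert abs(2 * cmath.log(z)) > 6       # 2 log(-1) = 2*pi*i, not 0
```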
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Can a basis for a vector space be made up of matrices instead of vectors? I'm sorry if this is a silly question. I'm new to the notion of bases and all the examples I've dealt with before have involved sets of vectors containing real numbers. This has led me to assume that bases, by definition, are made up of a number of $n$-tuples.
However, now I've been thinking about a basis for all $n\times n$ matrices and I keep coming back to the idea that the simplest basis would be $n^2$ matrices, each with a single $1$ in a unique position.
Is this a valid basis? Or should I be trying to get column vectors on their own somehow?
| Elements of a basis of a vector space always have to be elements of the vector space in the first place. Hence, if you are looking for a basis of the space of all $n\times n$ matrices, then matrices actually are your vectors and the only choice for what a basis element can be. In fact, the matrices you describe are a valid basis for the space of all $n\times n$ matrices. However, looking at matrices this way (as vectors of the vector space of all $n\times n$ matrices), it might help to realize that they are just tuples with $n^2$ many entries, arranged as a square.
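A small illustration (a toy $3\times3$ example of my own) that the single-$1$ matrices span: every matrix $A$ is the combination $\sum_{i,j} A_{ij}E_{ij}$.

```python
n = 3

def E(i, j):
    """The basis matrix with a single 1 at position (i, j)."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

A = [[2, -1, 0], [7, 3, 5], [0, 4, -6]]

# Reconstruct A as sum over i,j of A[i][j] * E(i,j):
recon = [[sum(A[i][j] * E(i, j)[r][c] for i in range(n) for j in range(n))
          for c in range(n)] for r in range(n)]
assert recon == A
```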
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Are all n-ary operators simply compositions of binary operators? Take for example $A \times B \cdot C$ = $(A \times B) \cdot C$ where $A, B, C$ are 3-component real vectors.
We can define a 3-nary operator $\times - \cdot$ that is a composition of the two common binary operators $\times$ and $\cdot$.
The same thing happens with most functions (operators) - the way we calculate them is by doing smaller binary problems and adding together.
Every time I try to come up with an $(n > 2)$-ary operator my mind automatically looks for binary operators.
So, the question is, do there exist operators (of some weird kind in some branch of math) that cannot be decomposed into 2-ary and 1-ary operators?
Thanks.
| There are two trivial senses in which the answer is "yes", you can always reduce it to binary functions.
One of them is the pairing operator -- the function that takes any two objects and returns the ordered pair containing them. So, for example, given any function $f$ of four variables, we can construct a new function $g$ (of 1 variable of type "ordered pair of ordered pairs of objects") by
$$ g( ((a,b), (c, d)) ) = f(a, b, c, d) $$
It might be instructive to see this restated in terms of an ordered-pair variable. If $x$ is an ordered pair, then the function $L(x)$ is the left coordinate, and $R(x)$ is the right coordinate. Then, $g$ is defined by
$$ g(x) = f(L(L(x)), R(L(x)), L(R(x)), R(R(x))) $$
(with a lot of pain, one could write this explicitly as composition of functions, but it is painful. We use the above notations for a reason!)
Dually, there is the transpose operator. Again, if $f$ is a function of four variables, then I can define a new function $h$ that is a function of one variable, whose values are themselves functions of 3 variables, by
$$ h(a)(b, c, d) = f(a, b, c, d)$$
($h(a)$ is a function, so it makes sense to evaluate it, as above. The above defines $h(a)$ pointwise, and thus $h$ pointwise)
This can be iterated: you can have a function of one variable whose values are of type "Function of one variable whose values are of type {Function of two variables}", defined by:
$$ k(a)(b)(c,d) = f(a, b, c, d)$$
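Both constructions are easy to write down concretely; here is an illustrative Python sketch (my own function $f$, chosen so the digits show which argument went where):

```python
def f(a, b, c, d):
    return a + 10*b + 100*c + 1000*d

# Pairing: reduce f to a unary function on nested ordered pairs.
def L(p): return p[0]   # left coordinate
def R(p): return p[1]   # right coordinate
def g(p): return f(L(L(p)), R(L(p)), L(R(p)), R(R(p)))

# Transpose / currying: a unary function whose values are functions.
def k(a):
    return lambda b: lambda c, d: f(a, b, c, d)

assert g(((1, 2), (3, 4))) == f(1, 2, 3, 4) == k(1)(2)(3, 4) == 4321
```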
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 0
} |
Hilbert function of ideal generated by linear forms. This is a slight extension of a remark a read a few days ago.
Let $K$ be a field, and let $A=K[X_0,\dots,X_N]$ be a polynomial ring, which is graded in the standard way (the elements of degree $n$ are the homogeneous polynomials of degree $n$). Let $\mathfrak{a}$ be a homogeneous prime ideal of $A$, and $d$ the dimension of the corresponding projective variety. Moreover, let $\chi(n,\mathfrak{a})=\dim_K A_n/\mathfrak{a}_n$ (also known as the Hilbert function). Here $A_n$ is the $K$-space of homogeneous elements of degree $n$ in $A$, and likewise for $\mathfrak{a}_n$.
I understand that there is some $c_d\in\mathbb{N}$ so that
$$
\chi(n,\mathfrak{a})=c_d\frac{n^d}{d!}+c_{d-1}\frac{n^{d-1}}{(d-1)!}+\cdots+c_0.
$$
Taking $d$ generic linear forms, say $f_1,\dots,f_d$, why does
$$
\chi(n,\mathfrak{a}+(f_1,\dots,f_d))=c_d?
$$
One theorem I have read is that if $F$ is a homogeneous polynomial of degree $j$ where $F$ is not a zero divisor modulo $\mathfrak a$, i.e., if $G\in A$ and $FG\in\mathfrak{a}$, then $G\in\mathfrak{a}$, then $\chi(n,\mathfrak{a}+(F))=\chi(n,\mathfrak{a})-\chi(n-j,\mathfrak{a})$. I'm wondering how it can be extended to an ideal generated by more than one linear form, to get the above equality. Thank you.
| By induction! We claim that $\chi(n,\mathfrak{a}+(f_1,\ldots,f_k))$ has the same leading coefficient as $\chi(n,\mathfrak{a})$ for any $k\le d$ and generic (linear) choice of the $f_i$. For $k=1$, the statement follows from the result you quoted:
$$\begin{align*}
\chi(n,\mathfrak{a}+(f))&=\chi(n,\mathfrak{a})-\chi(n-1,\mathfrak{a})
\\&= \sum_{k=0}^d \left(c_k \frac{n^k}{k!} - c_k\frac{(n-1)^k}{k!}\right)
\\&=\sum_{k=0}^d \frac{c_k}{k!} \cdot \left( n^k - \sum_{j=0}^k \binom{k}{j} (-1)^{k-j} n^j \right)
\\&=\sum_{k=0}^d \frac{c_k}{k!} \cdot\sum_{j=1}^{k} \binom{k}{j-1} (-1)^{k-j} n^{j-1}
\\&= n^{d-1}\cdot\frac{c_d}{(d-1)!} + \left<\mathrm{terms\ of\ lower\ degree}\right>
\end{align*}
$$
On the other hand, for $k> 1$, set $\mathfrak{b}:=\mathfrak{a}+(f_1,\ldots,f_{k-1})$. By induction hypothesis, the leading coefficients of $\chi(n,\mathfrak{a})$ and $\chi(n,\mathfrak{b})$ coincide. By the case $k=1$, the leading coefficients of $\chi(n,\mathfrak{b})$ and $\chi(n,\mathfrak{b}+(f_k))=\chi(n,\mathfrak{a}+(f_1,\ldots,f_k))$ also coincide, so we are done.
Remark: $d!$ times the leading coefficient of the Hilbert Polynomial is also referred to as the degree of the projective variety defined by $\mathfrak{a}$. It is a nice exercise to check that for $\mathfrak{a}=(f)$, it actually agrees with the degree of $f$. It is, thus, very reassuring to see that the degree remains invariant under intersection with a generic hyperplane - and that's precisely what it means to add a generic, linear polynomial to $\mathfrak{a}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |