Real-world applications of prime numbers? I am going through the problems from Project Euler and I notice a strong insistence on Primes and efficient algorithms to compute large primes efficiently. The problems are interesting per se, but I am still wondering what the real-world applications of primes would be. What real tasks require the use of prime numbers? Edit: A bit more context to the question: I am trying to improve myself as a programmer, and having learned a few good algorithms for calculating primes, I am trying to figure out where I could apply them. The explanations concerning cryptography are great, but is there nothing else that primes can be used for?
You can use prime numbers to plot this fine pattern :) The intensity of green for each pixel was calculated with a function that can be described by this pseudocode snippet: g_intensity = ((((y << 32) | x) ^ ((x << 32) | y)) * 15731 + 1376312589) % 256 where x and y are the pixel's coordinates in screen space, stored in 64-bit integer variables.
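A minimal Python rendering of that snippet (the 256x256 grid size is my own assumption, not part of the pseudocode; the multiplier 15731 and offset 1376312589 are the primes from the formula):

```python
# Render the prime-based hash pattern described above.  Python integers
# are arbitrary-precision, so the 64-bit packing works without overflow.
def green_intensity(x: int, y: int) -> int:
    # Pack the two coordinates into 64-bit-style integers, mix them with
    # an XOR, then scramble with two primes, as in the pseudocode.
    mixed = ((y << 32) | x) ^ ((x << 32) | y)
    return (mixed * 15731 + 1376312589) % 256

# Build a small grid of intensities (one row per y, one value per x).
pattern = [[green_intensity(x, y) for x in range(256)] for y in range(256)]
```

Feeding `pattern` to any image library as the green channel reproduces the effect.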
{ "language": "en", "url": "https://math.stackexchange.com/questions/43119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 19, "answer_id": 9 }
Does this equality always hold? Is it true in general that $\displaystyle\frac{\mathrm{d}}{\mathrm{d}x} \int_0^{x} f(u,x) \mathrm{d}u = \int_0^{x} \left( \frac{\mathrm{d}}{\mathrm{d}x} f(u,x) \right)\mathrm{d}u +f(x,x )$ ? Thank you for your help!
Yes, it is, under the conditions indicated below. Let $$I(x)=\displaystyle\int_{0}^{x}f(u,x)\; \mathrm{d}u.\qquad(\ast)$$ If $f(u,x)$ is a continuous function and $\partial f/\partial x$ exists and is continuous, then $$I^{\prime }(x)=\displaystyle\int_{0}^{x}\dfrac{\partial f(u,x)}{\partial x}\; \mathrm{d}u+f(x,x)\qquad(\ast\ast)$$ follows from the Leibniz rule and chain rule. Note: the integrand of $(\ast\ast)$ is a partial derivative. It generalizes to the integral $$I(x)=\displaystyle\int_{u(x)}^{v(x)}f(t,x)\; \mathrm{d}t.$$ Under suitable conditions ($u(x),v(x)$ are differentiable functions, $f(t,x)$ is a continuous function and $\partial f/\partial x$ exists and is continuous), we have $$I^{\prime }(x)=\displaystyle\int_{u(x)}^{v(x)}\dfrac{\partial f(t,x)}{\partial x}\; \mathrm{d}t+f(v(x),x)v^{\prime }(x)-f(u(x),x)u^{\prime }(x).$$
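The formula $(\ast\ast)$ can be spot-checked numerically. Below is a sketch using only the standard library; the test function $f(u,x)=\sin(ux)$, the evaluation point, and the step sizes are all my own choices:

```python
import math

# Spot-check I'(x) = ∫_0^x ∂f/∂x du + f(x,x) for f(u,x) = sin(u*x).

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

def f(u, x):
    return math.sin(u * x)

def df_dx(u, x):          # ∂f/∂x = u*cos(u*x)
    return u * math.cos(u * x)

def I(x):                 # I(x) = ∫_0^x f(u,x) du
    return simpson(lambda u: f(u, x), 0.0, x)

x0, h = 1.3, 1e-5
lhs = (I(x0 + h) - I(x0 - h)) / (2 * h)          # central difference for I'(x0)
rhs = simpson(lambda u: df_dx(u, x0), 0.0, x0) + f(x0, x0)
```

The two sides agree to many decimal places, as the theorem predicts for this smooth integrand.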
{ "language": "en", "url": "https://math.stackexchange.com/questions/43157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Reason why these two probabilities are equal (Picking Balls in exact order without Replacement) I notice that the probabilities of the following two events are equal. I am curious: is there a reason, besides coincidence, that the probabilities are equal? Suppose there are five balls in a bucket. 3 of the balls are labelled A, and 2 of the balls are labelled B. There is no way to distinguish between balls labelled A. There is no way to distinguish between balls labelled B. Suppose I draw balls at random, without replacement. The event $\{AAABB\}$ means I pick 3 balls labelled A, then 2 balls labelled B (in that exact order). Then, $$ P(\{AAABB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{1}{3} \times 1 = 0.1 $$ Also, $$ P(\{AABAB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{2}{3} \times \frac{1}{2} \times 1 = 0.1$$ As you can see, the probabilities of the events $\{AAABB\}$ and $\{AABAB\}$ are exactly the same. I have seen the claim that any possible order of 3 A's and 2 B's has the same probability. Why is this true (if it is indeed true)? If the claim is true, then I don't have to multiply out individually for every conceivable event. Thanks.
The claim is indeed true. Here is a way to see it. Consider the first probability $P(\{AAABB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{1}{3} \times 1 = 0.1$. This can actually be written as $P(\{AAABB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{1}{3} \times \frac{2}{2} \times \frac{1}{1} = 0.1$. Now notice the denominators: the denominator will always be $5 \times 4 \times 3 \times 2 \times 1$. The numerator will just be $3 \times 2 \times 1 \times 2 \times 1$, in some order: the $3$ appears when you pick the first A, one of the $2$'s appears when you pick the second A or the first B, and so on. Thus the probability is always $\frac{12}{120} = 0.1$.
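The claim can also be verified by exhaustive enumeration. A short sketch with exact rational arithmetic (the helper names are my own):

```python
from fractions import Fraction
from itertools import permutations

# Enumerate every distinct order of 3 A's and 2 B's and compute its exact
# probability when drawing without replacement from {3 A's, 2 B's}.
def order_probability(order):
    counts = {"A": 3, "B": 2}
    p = Fraction(1)
    remaining = 5
    for label in order:
        p *= Fraction(counts[label], remaining)
        counts[label] -= 1
        remaining -= 1
    return p

orders = sorted(set(permutations("AAABB")))
probs = {o: order_probability(o) for o in orders}
```

All ten distinct orders come out to exactly $\frac{1}{10}$, matching the numerator/denominator argument above.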
{ "language": "en", "url": "https://math.stackexchange.com/questions/43226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
generalized eigenspaces for many operators Let $\phi_1, \cdots, \phi_n$ be commuting linear operators on a vector space $V.\,\,$ Then we have $$V=\oplus V_{(a_i)}, \text{ where }\, V_{(a_i)} = \{x\in V \mid \exists p \,\,\text{ such that }\,\, (\phi_i-a_i)^px=0, \forall i\}.$$ How to show that $V_{(a_i)} \cap V_{(b_i)} = 0$ for $(a_i)\neq (b_i)$? If the $\phi_i$ do not commute, is $V=\oplus V_{(a_i)}$ still correct? Thank you very much.
It's hard to denote the fact that $(a_i)$ is different from $(a_j)$, so instead I'll use $(a_i)$ and $(b_i)$ to denote two different tuples. Note that $(a_i)\neq(b_i)$ if and only if $a_k\neq b_k$ for at least one $k$, $1\leq k\leq n$. If $\mathbf{x}\in V_{(a_i)}\cap V_{(b_i)}$, then $\mathbf{x}$ is either $\mathbf{0}$ or a generalized eigenvector for $\phi_i$ associated to $a_i$ and to $b_i$ for each $i$; in particular, $\mathbf{x}$ would have to be either $\mathbf{0}$ or a generalized eigenvector of $\phi_k$ corresponding to $a_k$ and to $b_k$. Since $a_k\neq b_k$ the latter cannot occur, so $V_{(a_i)}\cap V_{(b_i)} = \{\mathbf{0}\}$. If the operators don't commute with one another, then you don't necessarily have $V=\oplus V_{(a_i)}$. Take $V=\mathbb{R}^2$, $\phi_1$ to be the transformation that sends $(1,0)$ to $(2,0)$ and $(0,1)$ to $(0,3)$; take $\phi_2$ to be the transformation that sends $(1,1)$ to $(2,2)$ and sends $(-1,1)$ to $(-3,3)$. The eigenspaces corresponding to $\phi_1$ and those corresponding to $\phi_2$ have no nonzero vector in common, so all $V_{(a_i)}$ are equal to the zero vector.
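The counterexample in the last paragraph can be written out as explicit matrices and checked mechanically. A sketch (the matrix for $\phi_2$ is recovered from its two eigenpairs; exact rationals avoid rounding):

```python
from fractions import Fraction as F

# phi1 = diag(2, 3); phi2 has eigenvector (1,1) with eigenvalue 2 and
# eigenvector (-1,1) with eigenvalue 3, which forces the matrix below.
phi1 = [[F(2), F(0)], [F(0), F(3)]]
phi2 = [[F(5, 2), F(-1, 2)], [F(-1, 2), F(5, 2)]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# The two operators do not commute, and their eigenvector lines differ.
commute = matmul(phi1, phi2) == matmul(phi2, phi1)
```

Since $\phi_1$'s eigenvectors are the coordinate axes and $\phi_2$'s are the diagonals, no nonzero vector is a simultaneous (generalized) eigenvector, as the answer states.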
{ "language": "en", "url": "https://math.stackexchange.com/questions/43270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluate $\sum\limits_{k=1}^n k^2$ and $\sum\limits_{k=1}^n k(k+1)$ combinatorially $$\text{Evaluate } \sum_{k=1}^n k^2 \text{ and } \sum_{k=1}^{n}k(k+1) \text{ combinatorially.}$$ For the first one, I was able to express $k^2$ in terms of the binomial coefficients by considering a set $X$ of cardinality $2k$ and partitioning it into two subsets $A$ and $B$, each with cardinality $k$. Then, the number of ways of choosing 2-element subsets of $X$ is $$\binom{2k}{2} = 2\binom{k}{2}+k^2$$ So the sum is $$\sum_{k=1}^n k^2 =\sum_{k=1}^n \binom{2k}{2} -2\sum_{k=2}^n \binom{k}{2} $$ $$ \qquad\qquad = \color{red}{\sum_{k=1}^n \binom{2k}{2}} - 2 \binom{n+1}{3} $$ I am stuck at this point on evaluating the first of these sums. How do I evaluate it? I need to find a similar expression for $k(k+1)$ for the second sum highlighted above. I have been unsuccessful thus far. (If the previous problem is done then so is this, but it would be nice to know if there are better approaches or identities that can be used.) Update: I got the second one. Consider $$\displaystyle \binom{n+1}{r+1} = \binom{n}{r}+\binom{n-1}{r}+\cdots + \binom{r}{r}$$ This can be shown using the recursive definition. Now multiply by $r!$ and set $r=2$.
For $$\sum_{k=1}^n (k+1)(k),$$ let me tell a story. There are $n+2$ chairs in a row, and three people, Lefty, Alice, and Bob. They want to sit down. Lefty does not like to have anyone to his left. In how many ways can they sit? Lefty could be in the third seat from the right, leaving $2$ choices for Alice and then $1$ for Bob. Or else Lefty could be in the fourth seat from the right, leaving $3$ choices for Alice and then $2$ for Bob. And so on. Finally Lefty could be in the leftmost seat, giving Alice $n+1$ choices and Bob $n$. So the total number of seating arrangements is our sum. Or else choose $3$ seats from $n+2$, reserve the leftmost for Lefty. For each choice of $3$ seats there are then $2$ places for Alice to sit, for a total of $$2\binom{n+2}{3}.$$
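The two counts in the story can be checked against each other numerically. A quick sanity check (not a substitute for the bijective argument):

```python
from math import comb

# Check the seating-arrangement identity sum_{k=1}^n k(k+1) = 2*C(n+2, 3)
# for a range of n.
def lhs(n):
    return sum(k * (k + 1) for k in range(1, n + 1))

def rhs(n):
    return 2 * comb(n + 2, 3)
```

For example, `lhs(5)` and `rhs(5)` are both 70, and the two agree for every $n$ tested.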
{ "language": "en", "url": "https://math.stackexchange.com/questions/43317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 4 }
Ballistics of guns I am looking for mathematical texts or material on ballistics of guns. So far the only reference I have been able to locate is Ballistics: Theory and Design of Guns and Ammunition by Carlucci and Jacobson. Are there any more such books or online notes which explain the mathematical properties (equation of motion etc) in gun ballistics?
There is a steep learning curve in ballistics. The start is to assume only Newton's laws and no air resistance, etc. The next step is to include air resistance, which is incorporated in this assignment. Then it can become radically more complicated. For example, JSTOR has several articles on ballistics (e.g. here), so if you have access I would look there. These end up relying on a good working knowledge of PDEs and fluid dynamics, so I'm not quite sure if those are what you're looking for. But there are also programs that do a lot of this work for you if you are just calculating: there is gunsim and its blog, for example. Someone else also describes the math that they use in their own sim, found here. I do not know of any other 'texts' so to speak, only lots of articles. Once you look through JSTOR, you can look at their references and what referenced them to continue on the chain. I hope this is what you're looking for! (Note: if you have access to a university library, you might have access to JSTOR articles through them.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/43371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How can I compute the integral $\int_{0}^{\infty} \frac{dt}{1+t^4}$? I have to compute this integral $$\int_{0}^{\infty} \frac{dt}{1+t^4}$$ to solve a problem in a homework. I have tried in many ways, but I'm stuck. A search in the web reveals me that it can be do it by methods of complex analysis. But I have not taken this course yet. Thanks for any help.
$$ \begin{aligned} \int_0^{\infty} \frac{dt}{1+t^4}&=\int_0^{\infty} \frac{\frac{1}{t^2}}{t^2+\frac{1}{t^2}}\, dt \\ &= \frac{1}{2} \int_0^{\infty} \frac{\left(1+\frac{1}{t^2}\right)-\left(1-\frac{1}{t^2}\right)}{t^2+\frac{1}{t^2}}\, dt \\ &= \frac{1}{2}\left[\int_0^{\infty} \frac{d\left(t-\frac{1}{t}\right)}{\left(t-\frac{1}{t}\right)^2+2}-\int_0^{\infty} \frac{d\left(t+\frac{1}{t}\right)}{\left(t+\frac{1}{t}\right)^2-2}\right] \\ &= \frac{1}{2 \sqrt{2}}\left[\tan ^{-1}\left(\frac{t-\frac{1}{t}}{\sqrt{2}}\right)\right]_0^{\infty}-\frac{1}{4 \sqrt{2}}\left[\ln \left|\frac{t+\frac{1}{t}-\sqrt{2}}{t+\frac{1}{t}+\sqrt{2}} \right|\right]_0^{\infty}\\ &= \frac{\pi}{2 \sqrt{2}} \end{aligned} $$ The logarithmic term vanishes since $t+\frac{1}{t}\to\infty$ at both endpoints, so the value comes entirely from the arctangent term, which increases from $-\frac{\pi}{2}$ to $\frac{\pi}{2}$.
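A quick numerical confirmation of the result, using only the standard library (the substitution $t = u/(1-u)$, which maps $[0,1)$ onto $[0,\infty)$, is my own device for handling the infinite upper limit):

```python
import math

# Numerically confirm ∫_0^∞ dt/(1+t^4) = π/(2√2).
def integrand(t):
    return 1.0 / (1.0 + t ** 4)

def simpson(g, a, b, n=20000):
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# With t = u/(1-u) we have dt = du/(1-u)^2; the transformed integrand
# decays like (1-u)^2 near u = 1, so stopping just short of 1 is harmless.
value = simpson(lambda u: integrand(u / (1 - u)) / (1 - u) ** 2, 0.0, 1.0 - 1e-9)
exact = math.pi / (2 * math.sqrt(2))
```

The numerical value agrees with $\pi/(2\sqrt{2}) \approx 1.1107$ to well beyond six decimal places.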
{ "language": "en", "url": "https://math.stackexchange.com/questions/43457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 7, "answer_id": 6 }
Quick ways for approximating $\sum_{k=a_1}^{k=a_2}C_{100}^k(\frac{1}{2})^k(\frac{1}{2})^{100-k}$? Consider the following problem: A fair coin is to be tossed 100 times, with each toss resulting in a head or a tail. Let $$H:=\textrm{the total number of heads}$$ and $$T:=\textrm{the total number of tails},$$ which of the following events has the greatest probability? A. $H=50$ B. $T\geq 60$ C. $51\leq H\leq 55$ D. $H\geq 48$ and $T\geq 48$ E. $H\leq 5$ or $H\geq 95$ What I can think is the direct calculation: $$P(a_1\leq H\leq a_2)=\sum_{k=a_1}^{k=a_2}C_{100}^k(\frac{1}{2})^k(\frac{1}{2})^{100-k}$$ Here is my question: Is there any quick way to solve this problem except the direct calculation?
Here is a very elementary way of estimating these probabilities. Observe that the distribution of $H$ is very similar to a normal distribution with mean $50$ and standard deviation $\sigma = 5$. In particular, we should have $$ P (|H-50| \leq \sigma) \;\approx\; 68\% \qquad\text{and}\qquad P(|H-50| \leq 2\sigma) \;\approx\; 95\% $$ As mixedmath pointed out, the only viable answers are B, D, and E. We can estimate the probabilities of these events as follows: B. $P(H \geq 60) \;=\; P(H \geq 50 + 2\sigma)$, which should be on the order of 2.5%. D. $P(48\leq H \leq 52) \;=\; P(|H-50| \leq \sigma/2)$, so this should be something like 40%. E. $P(H\leq 5\text{ or }H \geq 95) \;=\; P(|H-50| \geq 9\sigma)$, so this should be really small. Thus (D) is the correct answer.
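When exact values are wanted (outside exam conditions), the five events can be computed directly from $P(H=k) = \binom{100}{k}/2^{100}$. A minimal sketch:

```python
from math import comb

# Exact probabilities for the five answer choices.
def p_range(lo, hi):
    return sum(comb(100, k) for k in range(lo, hi + 1)) / 2 ** 100

events = {
    "A": p_range(50, 50),                      # H = 50
    "B": p_range(0, 40),                       # T >= 60, i.e. H <= 40
    "C": p_range(51, 55),
    "D": p_range(48, 52),                      # H >= 48 and T >= 48
    "E": p_range(0, 5) + p_range(95, 100),
}
best = max(events, key=events.get)
```

The exact values (roughly A ≈ 0.080, B ≈ 0.028, C ≈ 0.325, D ≈ 0.383, E ≈ 0) confirm the normal-approximation reasoning: D wins.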
{ "language": "en", "url": "https://math.stackexchange.com/questions/43517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How can I solve this infinite sum? I calculated (with the help of Maple) that the following infinite sum is equal to the fraction on the right side. $$ \sum_{i=1}^\infty \frac{i}{\vartheta^{i}}=\frac{\vartheta}{(\vartheta-1)^2} $$ However I don't understand how to derive it correctly. I've tried numerous approaches but none of them have worked out so far. Could someone please give me a hint on how to evaluate the infinite sum above and understand the derivation? Thanks. :)
let $$S=\sum_{i=1}^\infty\frac{i}{\theta^i}, (\theta>1),$$ then $$ \begin{align} S-\frac{1}{\theta}S&=\sum_{i=1}^\infty\frac{i}{\theta^i}-\sum_{i=1}^\infty\frac{i}{\theta^{i+1}}\\ &=\sum_{i=1}^\infty\frac{i}{\theta^i}-\sum_{i=2}^\infty\frac{i-1}{\theta^{i}}\\ &=\frac{1}{\theta}+\sum_{i=2}^{\infty}\frac{1}{\theta^i}\\ &=\frac{1}{\theta}+\frac{1}{\theta^2-\theta}, \end{align} $$ which yields $$S=\frac{\theta}{(\theta-1)^2}$$
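The closed form is easy to confirm numerically for a few values of $\theta > 1$ (a sanity check on the telescoping derivation, not a proof):

```python
# Partial sums of sum_{i>=1} i/theta^i versus theta/(theta-1)^2.
def partial_sum(theta, terms=300):
    total, power = 0.0, 1.0
    for i in range(1, terms + 1):
        power *= theta          # power = theta^i, built incrementally
        total += i / power
    return total

def closed_form(theta):
    return theta / (theta - 1) ** 2
```

For instance, with $\theta = 2$ the partial sums converge to $2/(2-1)^2 = 2$.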
{ "language": "en", "url": "https://math.stackexchange.com/questions/43572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Bounding ${(2d-1)n-1\choose n-1}$ Claim: ${3n-1\choose n-1}\le 6.25^n$.

* Why?
* Can the proof be extended to obtain a bound on ${(2d-1)n-1\choose n-1}$, with the bound being $f(d)^n$ for some function $f$?

(These numbers count certain $d$-dimensional combinatorial objects; claim 1 is the case $d=2$, and is not my claim.)
In the meanwhile, you may consider the following. Suppose that $X$ is a binomial$((2d-1)n-1,p)$ random variable ($0 < p < 1$). Then, $$ {\rm P}(X = n - 1) = {(2d-1)n-1 \choose n-1} p^{n-1}(1-p)^{n(2d-2)} \leq 1. $$ Putting $z=1/p > 1$, we thus have $$ {(2d-1)n-1 \choose n-1} \leq z^{n-1} \bigg(\frac{z}{{z - 1}}\bigg)^{n(2d - 2)} = \frac{1}{z}\bigg[\frac{{z^{2d - 1} }}{{(z - 1)^{2d - 2} }}\bigg]^n . $$ So, we want to minimize $$ \psi(z) = \frac{{z^{2d - 1} }}{{(z - 1)^{2d - 2} }},\;\; z > 1. $$ In the case where $d=2$, we get $$ {3n-1 \choose n-1} \leq \frac{1}{z}\bigg[\frac{{z^3 }}{{(z - 1)^2 }}\bigg]^n. $$ Here, $\psi(z) = \frac{{z^3 }}{{(z - 1)^2 }}$ attains its minimum at $z=3$, where $\psi(z)=27/4$. Hence $$ {3n-1 \choose n-1} \leq \frac{1}{3}\bigg(\frac{{27}}{4}\bigg)^n = \frac{1}{3}6.75^n . $$ EDIT: For general $d$, the function $\psi(z)$ attains its minimum at $z=2d-1$ (indeed, $\psi(z) \to \infty$ as $z \downarrow 1$ or $z \to \infty$, and, as an elementary calculation shows, $\psi'(z)=0$ for $z=2d-1$). Hence, $$ {(2d-1)n-1 \choose n-1} \leq \frac{1}{{2d - 1}}\bigg[\frac{{(2d - 1)^{2d - 1} }}{{(2d - 2)^{2d - 2} }}\bigg]^n . $$
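The derived bound is easy to sanity-check for small $n$ and $d$ (a numerical check of the inequality above, not part of the proof):

```python
from math import comb

# Verify C((2d-1)n - 1, n - 1) <= (1/z) * (z^(2d-1)/(z-1)^(2d-2))^n
# at the minimizing choice z = 2d - 1.
def bound_holds(d, n):
    z = 2 * d - 1
    lhs = comb((2 * d - 1) * n - 1, n - 1)
    rhs = (z ** (2 * d - 1) / (z - 1) ** (2 * d - 2)) ** n / z
    return lhs <= rhs
```

For $d=2$ this is the $\frac{1}{3}6.75^n$ bound; e.g. $\binom{8}{2}=28 \le \frac{1}{3}6.75^3 \approx 102.5$.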
{ "language": "en", "url": "https://math.stackexchange.com/questions/43672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Probability of picking all elements in a set I was studying the birthday paradox and got curious about a related, but slightly different problem. Let's say I have a set $S$ that has $n$ unique elements. If I randomly pick $k$ elements from the set (assuming equal probability of picking any element), the probability that I'll have picked all the elements in the set when $k=n$ is $n!/n^n$. Now, what would be the value of $k$ for the probability of picking all elements in the set to be $> 0.9$ (or some other constant)? A simpler variation would be: how many times should I flip a coin to have a probability $> 0.9$ of having thrown heads and tails at least once?
Your first question is about a famous problem called the Coupon Collector's Problem. Look in the Wikipedia write-up, under tail estimates. That will give you enough information to estimate, with reasonable accuracy, the $k$ that gives you a $90$ percent chance of having seen a complete collection of coupons.
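For the simpler coin variation, no estimates are needed: the probability of having seen both faces in $k$ flips is exactly $1 - 2\cdot(1/2)^k = 1 - 2^{1-k}$, so the threshold $k$ can be found directly. A small illustration:

```python
# Smallest number of coin flips k with P(both faces seen) > 0.9,
# using the exact formula P(k) = 1 - 2^(1-k).
def p_both_faces(k):
    return 1.0 - 2.0 ** (1 - k)

k = 1
while p_both_faces(k) <= 0.9:
    k += 1
```

Here $k = 5$, since $1 - 2^{-3} = 0.875$ but $1 - 2^{-4} = 0.9375$. The general-$n$ version is exactly the coupon collector tail estimate referenced above.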
{ "language": "en", "url": "https://math.stackexchange.com/questions/43724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
equivalent definitions of orientation I know two definitions of an orientation of a smooth n-manifold $M$: 1) A continuous pointwise orientation for $M$. 2) A continuous choice of generators for the groups $H_n(M,M-\{x\})=\mathbb{Z}$. Why are these two definitions equivalent? In other words, why is a choice of basis of $\mathbb{R}^n$ equivalent to a choice of generator of $H_n(\mathbb{R}^n,\mathbb{R}^n-\{0\})=\mathbb{Z}$? See comments for precise definitions. Thanks!
Observe that in (1) there is no difference between using the tangent and cotangent bundle, and in (2) one can use $H^n$ instead of $H_n$. Now, the equivalence becomes especially clear if in (2) one uses de Rham cohomology (instead of, say, singular). Indeed, (1) is just the existence of a (non-vanishing) section $\omega$ of $\Lambda^{top} T^*M$. So $\omega$ is a differential form, and for any $x\in U$ one can take a function $f_U$ that is 1 near $x$ and 0 outside of $U$; then $\omega\cdot f_U$ is a generator of $H^n_{dR,c}(U)=H^n(M,M-\{x\})$. And using partitions of unity it's not hard to go in the opposite direction (i.e. reconstruct $\omega$ from local orientations).
{ "language": "en", "url": "https://math.stackexchange.com/questions/43779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
"Plotting" an equation I have an equation like $$ (x - a)^2 + (y - b)^2 = r^2 $$ that represents a circle. I need to "plot" it very basically with a programming language. Computer graphics coordinate generally use the "lower-right" quadrant of a Cartesian plane (0, 0 being top left). I want my circle to have its center in (200, 200) and a radius of, say, 100. All measures given in pixels. I need to do this step by step, within a loop. So, if the loop is 100 times, i need to know at each iteration at what x and y coordinate to draw a pixel. How to this? With my center and radius, the above equation is like this $$ (x - 200)^2 + (y - 200)^2 = 1000 $$ but still I can't figure out how to express any "advancement" of my virtual plotting device. Thanks EDIT My question specifically asked for the equation of a circle, but it was just an example. I have the same problem with parabolas, sins...
Any reasonably correct method of plotting involves interval arithmetic. At a very rough level, it involves keeping track, for a given resolution, of intervals in which no pixel is part of the plot, intervals in which every pixel is, intervals yet to be analysed, and so on, updating and colouring pixels only when sure. You can read this very readable paper:

* Jeff Tupper, Reliable Two-Dimensional Graphing Methods for Mathematical Formulae with Two Free Variables, SIGGRAPH 2001

Tupper is the author of the program GrafEq, which, when I used it many years ago, impressed me by getting graphs right that most other graphing programs would have trouble with. (See e.g. its rogue's gallery, or other links on the site.) It's a very nice paper. There have presumably been improvements since 2001, but it's shocking how many graphers even today don't do as much as this paper describes clearly.
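For the specific loop the question describes, a parametric sweep is usually sufficient (this is a simpler approach than the interval-arithmetic method above, and the step count, center, and radius are taken from the question):

```python
import math

# One pixel of the circle (x-200)^2 + (y-200)^2 = 100^2 per loop iteration.
def circle_pixel(i, steps=100, cx=200, cy=200, r=100):
    theta = 2 * math.pi * i / steps
    # Screen y grows downward, but a full circle looks the same either way.
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))

points = [circle_pixel(i) for i in range(100)]
```

Parabolas and sines work the same way: parametrize the curve (e.g. $x = t$, $y = f(t)$), advance the parameter each iteration, and plot the resulting point.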
{ "language": "en", "url": "https://math.stackexchange.com/questions/43842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
How to draw a triangle in the middle of a line? I need to draw a triangle, like an arrowhead, in the middle of a line. How can I calculate the triangle's coordinates in order to draw it in the middle of the line? ** UPDATE ** Here I have found out how to find the middle of the line, so now I have the coordinates for one of the three vertices of the triangle. I need to calculate the coordinates of the other two vertices.
First start with a triangle of the size you want, say directed to the right. The three corners are then $(0,0), (x_0,y_0), (x_0,-y_0)$ where $x_0 \lt 0$. You just multiply these by the rotation matrix $$\left(\begin {array}{c c}\cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array}\right)\left(\begin {array}{c}x_0 \\ y_0 \end{array}\right)$$ to get each point relative to the head of your arrow. To get $\theta$ you need the angle of your line. How is that represented?
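A concrete sketch of this recipe, assuming the line is given by its two endpoints (so $\theta$ comes from `atan2`); the template sizes $x_0, y_0$ are arbitrary:

```python
import math

# Two base vertices of an arrow head whose tip is the midpoint of the
# segment (x1,y1)-(x2,y2).  Template: tip at origin, base at (x0, ±y0),
# x0 < 0, as in the answer above.
def arrow_head(x1, y1, x2, y2, x0=-2.0, y0=1.0):
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2          # midpoint = tip
    theta = math.atan2(y2 - y1, x2 - x1)           # direction of the line
    c, s = math.cos(theta), math.sin(theta)
    # Rotate each template corner by theta, then translate to the tip.
    base = [(mx + c * x0 - s * dy, my + s * x0 + c * dy) for dy in (y0, -y0)]
    return (mx, my), base
```

For a horizontal line from (0, 0) to (10, 0) this gives the tip (5, 0) with base vertices (3, 1) and (3, -1), i.e. an arrowhead pointing right.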
{ "language": "en", "url": "https://math.stackexchange.com/questions/43899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$. I am not very experienced with proofs. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$ I looked up the hint and it says let $g(x) = \log f(a^x) $ The wikipedia page for functional equations only states the form of the solutions without proof. Attempt Using the hint (which was like pulling a rabbit out of a hat): restricting the codomain to $f:(0,+\infty)\rightarrow (0,+\infty)$ so that we can define the real function $g(x) = \log_a f(a^x)$, we have $$g(x+y) = g(x)+ g(y)$$ i.e. $g(x) = xg(1)$, as $g(x)$ is continuous (assuming $f$ is). Letting $\log_a f(a) = p$ we get $f(a^x) = a^{px}$. I do not have a rigorous argument but I think I can conclude that $f(x) = x^p$ (please fill in any holes or unspecified assumptions). Different solutions are invited
So, we assume $f$ is continuous. Letting $g(x) = \ln(f(a^x))$, we get $$ \begin{align*} g(x+y) &= \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x)f(a^y))\\ &= \ln(f(a^x)) + \ln(f(a^y))\\ &= g(x)+g(y). \end{align*}$$ So $g$ satisfies the Cauchy functional equation; if you assume $f$ is continuous, then so is $g$, hence $g(x) = xg(1)$ for all $x\gt 0$. Since $g(1) = \ln(f(a))$, we have $$f(a^x) = e^{g(x)} = e^{g(1)x} = (e^{x})^{g(1)}.$$ Given $r\in \mathbb{R}$, $r\gt 0$, we have $r = a^{\log_a(r)}$, hence $$\begin{align*} f(r) &= f\left(a^{\log_a(r)}\right)\\ &= \left(e^{\log_a(r)}\right)^{g(1)}\\ &= \left(e^{\ln(r)/\ln(a)}\right)^{g(1)}\\ &= \left(e^{\ln(r)}\right)^{g(1)/\ln(a)}\\ &= r^{g(1)/\ln(a)}, \end{align*}$$ where we have used the change-of-base formula for the logarithm, $$\log_a(r) = \frac{\ln r}{\ln a}.$$ Finally, since $g(1) = \ln(f(a))$, we have $$f(r) = r^{\ln(f(a))/\ln(a)}.$$ As this works for any positive $a$, $a\neq 1$, taking $a=e$ we get $$f(r) = r^{\ln(f(e))}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/43964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 3, "answer_id": 1 }
conversion of 2D Gaussian into polar coordinates Is it possible to convert the 2D Gaussian function in to polar coordinates? $$\frac{1}{2\pi\sigma^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\big(-({(x-\mu_x)^2+(y-\mu_y)^2})/{2\sigma^2}\big) \,\mathrm{d}x\,\mathrm{d}y $$
Hint: There are many symmetries at work here. What if $\mu _x = \mu _y = 0$?
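Spelling out the hint for the centered case $\mu_x=\mu_y=0$ (the general case reduces to this by shifting $x\to x-\mu_x$, $y\to y-\mu_y$): substituting $x=r\cos\phi$, $y=r\sin\phi$ with $\mathrm{d}x\,\mathrm{d}y = r\,\mathrm{d}r\,\mathrm{d}\phi$ gives $$\frac{1}{2\pi\sigma^2}\int_0^{2\pi}\!\int_0^{\infty} \exp\!\big(-r^2/(2\sigma^2)\big)\, r\,\mathrm{d}r\,\mathrm{d}\phi = \frac{1}{\sigma^2}\int_0^{\infty} r\,e^{-r^2/(2\sigma^2)}\,\mathrm{d}r = \frac{1}{\sigma^2}\Big[-\sigma^2 e^{-r^2/(2\sigma^2)}\Big]_0^{\infty} = 1,$$ so the angular integral contributes the factor $2\pi$ and the radial factor $r$ makes the remaining integral elementary.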
{ "language": "en", "url": "https://math.stackexchange.com/questions/44022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trouble with absolute value in limit proof As usual, I'm having trouble, not with the calculus, but the algebra. I'm using Calculus, 9th ed. by Larson and Edwards, which is somewhat known for racing through examples with little explanation of the algebra for those of us who are rusty. I'm trying to prove $$\lim_{x \to 1}(x^2+1)=2$$ but I get stuck when I get to $|f(x)-L| = |(x^2+1)-2| = |x^2-1| = |x+1||x-1|$. The solution I found says "We have, in the interval (0,2), |x+1|<3, so we choose $\delta=\frac{\epsilon}{3}$." I'm not sure where the interval (0,2) comes from. Incidentally, can anyone recommend any good supplemental material to go along with this book?
By the continuity of the function $f(x)=x^2+1$, $$\lim_{x\to 1}(x^2+1)=2$$ For the $\epsilon-\delta$ proof, one needs to show that $$\forall \epsilon>0, \exists \delta>0\quad \textrm{s.t.}\quad |x-1|<\delta\Rightarrow |(x^2+1)-2|<\epsilon$$ Now things boil down to finding a $\delta$ (depending on $\epsilon$) such that $$|(x^2+1)-2|<\epsilon,$$ i.e., $$|x+1||x-1|<\epsilon.\qquad (*)$$ Note that the choice of $\delta$ is entirely up to you. That is to say, given $\epsilon>0$, you can pick any value of $\delta$ such that (*) is satisfied when $|x-1|<\delta$. From $|x-1|<\delta$, we have $1-\delta<x<1+\delta$, which implies that $$|1+x|\leq 1+|x|< 2+\delta.$$ Now we have $$|1+x||1-x|<(2+\delta)\delta.$$ If you can make $$(2+\delta)\delta<\epsilon$$ for the given $\epsilon$, things are done. Hence, for example, you can choose $\delta$ such that $$0<\delta<\min\{1,\frac{\epsilon}{3}\}.$$ Since $\delta<1$, $|x-1|<\delta<1$ implies $x\in(0,2)$; this is where your $(0,2)$ comes from. For references on this topic, I would strongly recommend Terence Tao's Analysis I.
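The choice $\delta = \min\{1, \epsilon/3\}$ can also be spot-checked numerically. A quick sketch (a sampling check, not a proof; the sample count is arbitrary):

```python
# For each epsilon, sample x with |x - 1| < delta and confirm
# |f(x) - 2| < epsilon for f(x) = x^2 + 1.
def delta_for(eps):
    return min(1.0, eps / 3.0)

def works(eps, samples=1000):
    d = delta_for(eps)
    for i in range(samples):
        x = 1.0 - d + 2.0 * d * (i + 0.5) / samples   # x in (1 - d, 1 + d)
        if abs((x * x + 1.0) - 2.0) >= eps:
            return False
    return True
```

Since $(2+\delta)\delta < 3\delta \le \epsilon$ whenever $\delta \le \min\{1, \epsilon/3\}$, the check passes for any $\epsilon$.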
{ "language": "en", "url": "https://math.stackexchange.com/questions/44093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How many solutions are there to $F(n,m)=n^2+nm+m^2 = Q$? Let $n,m$ be two positive integers, and consider: $$F(n,m)=n^2+nm+m^2$$ Let $Q$ be a value attained by $F(n,m)$. How many different pairs $(n,m)$ satisfy $F(n,m)=Q$?
As it turns out, we can give a complete answer to this question. The exact number of solutions depends on the prime factorization of $Q$. Specifically, it is a function of the exponents of the prime factors congruent to $1$ mod $3$, subject to the condition that all prime factors congruent to $2$ mod $3$ appear with even multiplicity. Note: I have not provided a proof here; the result that primes of the form $1+3k$ can be represented is a theorem of Jacobi. Let $$Q=\prod_i q_i^{\alpha_i} \prod_i p_i^{\beta_i}$$ where the $q_i$ are $1$ mod $3$ and the $p_i$ are $2$ mod $3$. Our equation $n^2+nm+m^2=Q$ has solutions if and only if each $\beta_i$ is even. Proof: Take the equation modulo $3$. By case analysis on $n,m$, the form $n^2+nm+m^2$ cannot be congruent to $2$, and hence the statement follows. Remark: Notice the following similarity to the sum of squares problem (points on a circle). The Answer: Suppose that as before $$Q=\prod_i q_i^{\alpha_i} \prod_i p_i^{\beta_i}$$ where the $q_i$ are $1$ mod $3$ and the $p_i$ are $2$ mod $3$. Suppose as well that all of the $\beta_i$ are even. (Otherwise we can have no solutions.) Let $$B=(\alpha_1+1)(\alpha_2+1)\cdots(\alpha_n+1).$$ Then the number of non-negative integer solutions to $$m^2+mn+n^2=Q$$ is exactly $$\left\lceil\frac{B}{2}\right\rceil.$$ (Again notice the similarity to the sum of squares function.) In particular, if $l(Q)$ is the number of representations where $n,m$ may be any integers (that is, positive or negative), then $$l(Q)=2B.$$ Hope that helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/44139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How many points in the xy-plane do the graphs of $y=x^{12}$ and $y=2^x$ intersect at? The question in the title is equivalent to finding the number of zeros of the function $$f(x)=x^{12}-2^x$$ Geometrically, it is not hard to determine that there is one intersection in the second quadrant. And when $x>0$, $x^{12}=2^x$ is equivalent to $\log x=\frac{\log 2}{12}x$. There are two intersections for $x>0$ since $\frac{\log 2}{12}<\frac{1}{e}$ (the line $y=cx$ through the origin meets $y=\log x$ twice exactly when $0<c<\frac{1}{e}$, the slope of the tangent line from the origin). Is there any quicker way to show this? Edit: This question is motivated by a GRE math subject test problem, which is a multiple-choice one (A. None B. One C. Two D. Three E. Four). Usually, the ability of a student to solve such a problem as quickly as possible is valuable, at least for this kind of test. In this particular case, geometric intuition may be misleading if one simply sketches the curves of the two functions to find the possible intersections.
This is not really an independent answer, more a gathering in one place of what others have said. It's easy to see there's a solution between $-1$ and $0$ (since $x^{12}-2^x$ is positive at $-1$ and negative at $0$), and another solution between $1$ and $2$ (again, there's a change of sign), and another solution bigger than $2$ (by Theo Buehler's comment on Joseph Malkevitch's answer). Then there are any number of ways to show that there are no solutions other than these three.
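The three sign changes can be exhibited concretely by scanning integer points (each sign change brackets at least one root; the scan range is my own choice, wide enough to catch the third crossing near $x \approx 75$):

```python
# Count sign changes of f(x) = x^12 - 2^x at integer points.
def f(x):
    return x ** 12 - 2.0 ** x

signs = [f(x) > 0 for x in range(-2, 201)]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

The scan finds exactly three changes: between $-1$ and $0$, between $1$ and $2$, and between $74$ and $75$; beyond that, $2^x$ dominates for good.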
{ "language": "en", "url": "https://math.stackexchange.com/questions/44206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 2 }
Verifying the triangle inequality I am going through some practice problems in Abbott's Analysis text, and one of them is the following: Verify the triangle inequality in the special case where a) a and b have the same sign; b) $a \geq 0$, $b < 0$, and $a+b \geq 0$. I do not know where to begin because I don't know how exactly to go about the 'verification' given such conditions. Thanks!
For instance, in case (a), you could distinguish between two possibilities: * *$a,b \geq 0$. Then $a+b \geq 0 $ too and $\mid a+b\mid = a+b$ which is certainly equal to $\mid a \mid +\mid b\mid = a +b$. *$a,b \leq 0$. Then $a+b \leq 0$ too and $\mid a+b\mid = -(a+b)$ which is certainly equal to $\mid a \mid +\mid b\mid = -a -b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/44247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Finding $A^n$ for a matrix I have a matrix $$ A = \left[ {\begin{array}{cc} 1 & c \\ 0 & d \\ \end{array} } \right] $$ with $c$ and $d$ constant. I need to find $A^n$ ($n$ positive) and then need to prove that formula using induction. I would like to check that the formula I derived is correct: $$ A^n = \left[ {\begin{array}{cc} 1 & c^{n-2}(dc + c) \\ 0 & d^n \\ \end{array} } \right] $$ If this is correct, how can I prove this? I suppose I can write $A^{n+1} = A^n A$, which would be $$ \left[ {\begin{array}{cc} 1 & c^{n-2}(dc + c) \\ 0 & d^n \\ \end{array} } \right] \left[ {\begin{array}{cc} 1 & c \\ 0 & d \\ \end{array} } \right] $$ But then what would I do? Thanks.
Letting $a_n$ be the upper right hand corner of $A^n$, and assuming it is obvious that the lower right corner of $A^n$ is $d^n$, and the left column is $[1,0]^T$, we get: $$A^{n+1} = A^n A = \left[ {\begin{array}{cc} 1 & a_n \\ 0 & d^n \\ \end{array} } \right] \left[ {\begin{array}{cc} 1 & c \\ 0 & d \\ \end{array} } \right]$$ This gives us $a_{n+1} = c + d a_n$, with $a_1=c$. In particular, then, $a_n = c + cd + cd^2 + ... + cd^{n-1} = c\frac{d^n-1}{d-1}$ (assuming $d \neq 1$; when $d = 1$ the sum is simply $a_n = cn$). So: $$A^n = \left[ {\begin{array}{cc} 1 & c\frac{d^n-1}{d-1} \\ 0 & d^n \\ \end{array} } \right]$$
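The closed form is easy to check against repeated multiplication. A small sketch (the specific values of $c$, $d$, $n$ are arbitrary test choices with $d \neq 1$):

```python
# Compare A^n computed by repeated multiplication with the closed form
# [[1, c(d^n - 1)/(d - 1)], [0, d^n]].
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(A, n):
    R = [[1, 0], [0, 1]]            # 2x2 identity
    for _ in range(n):
        R = matmul(R, A)
    return R

c, d, n = 2, 3, 5
A = [[1, c], [0, d]]
An = power(A, n)
closed = [[1, c * (d ** n - 1) // (d - 1)], [0, d ** n]]
```

With $c=2$, $d=3$, $n=5$: the recursion gives $a_5 = 2 + 3\cdot80 = 242$, matching $2\frac{3^5-1}{2} = 242$.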
{ "language": "en", "url": "https://math.stackexchange.com/questions/44368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
How do I get the square root of a complex number? If I'm given a complex number (say $9 + 4i$), how do I calculate its square root?
SQUARE ROOT OF A COMPLEX NUMBER IN BINOMIAL FORM The number $\sqrt{a+bi}$ is a complex number $x+yi$ such that: $a+bi=(x+yi)^{2}$ So: $a+bi=(x^{2}-y^{2})+2xyi\rightarrow \left. x^{2}-y^{2}=a \atop 2xy=b \right\}$ Solving this system (to solve the biquadratic equation, note that $\sqrt{a^{2}+b^{2}}\geq |a|$): $$\left. y=\dfrac{b}{2x} \atop 4x^{4}-4ax^{2}-b^{2}=0 \right\}\rightarrow \left. x^{2}-y^{2}=a \atop x^{2}=\dfrac{a+\sqrt{a^{2}+b^{2}}}{2} \right\}\rightarrow \left. x^{2}=\dfrac{a+\sqrt{a^{2}+b^{2}}}{2} \atop y^{2}=\dfrac{-a+\sqrt{a^{2}+b^{2}}}{2} \right\}$$ Now, the equation $2xy=b$ tells us that the product $xy$ has the same sign as $b$. Therefore, if $b>0$, $x$ and $y$ have the same sign, and if $b<0$, they have different signs. $$b\geq 0\rightarrow \sqrt{a+bi}=\pm \left( \sqrt{\dfrac{a+\sqrt{a^{2}+b^{2}}}{2}}+i\sqrt{\dfrac{-a+\sqrt{a^{2}+b^{2}}}{2}}\right)$$ $$b<0\rightarrow \sqrt{a+bi}=\pm \left( \sqrt{\dfrac{a+\sqrt{a^{2}+b^{2}}}{2}}-i\sqrt{\dfrac{-a+\sqrt{a^{2}+b^{2}}}{2}}\right)$$ In practice, these formulas are not memorized; rather, the process above is carried out each time. It is also often convenient to convert to polar form.
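A short numerical illustration of these formulas for the $9+4i$ of the question (`complex_sqrt` is a made-up helper name implementing the sign rule above, not a library function):

```python
import math

def complex_sqrt(a, b):
    # one square root of a + bi via the closed formulas above;
    # the sign of y follows the sign of b, enforcing the 2xy = b condition
    m = math.hypot(a, b)                          # sqrt(a^2 + b^2)
    x = math.sqrt((m + a) / 2)
    y = math.copysign(math.sqrt((m - a) / 2), b)
    return complex(x, y)

root = complex_sqrt(9, 4)   # the other square root is -root
```

Squaring `root` should recover $9+4i$ up to rounding.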
{ "language": "en", "url": "https://math.stackexchange.com/questions/44406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121", "answer_count": 12, "answer_id": 4 }
Information about the reaction-diffusion equation We´re modeling the distribution of a population on a 2-dimensional plane with the reaction-diffusion equation: $$\frac{\partial P}{\partial t} = \nabla (D(x,y)\nabla P) + rP(1-\frac{P}{k(x,y)})$$ Where $P$ is a function of space $(x,y)$ and time($t$), $D$ and $K$ depend on space only, and $r$ is a constant. I was trying to find out if this equation has any stationary points, if/how I can find them, and if there is an analytical way to solve it, but I'm actually really new in differential equations. Does anyone know how to do this or could you please recommend a book or paper?! Thanks a lot. ALR
You could try separation of variables, but that method is for linear PDEs. So the most effective method to solve a non-linear PDE is the similarity method, where you propose a scaling (stretching) transformation as follows: $\bar{x}=\epsilon^{a}x, \bar{t}=\epsilon^{b}t, \bar{u}=\epsilon^{c}u$ and then substitute these equalities into the original PDE. You will then find another PDE (or, in most cases, you will reduce the PDE to an ODE) that is easier to solve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/44449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
a question about $E(\sum_{n=1}^{+\infty}X_{k}$)=$\sum_{n=1}^{+\infty}EX_{k}$ $X_{k}$ are random variables and they are not independent, I wonder if $E(\sum_{n=1}^{+\infty}X_{k}$)=$\sum_{n=1}^{+\infty}EX_{k}$. if the equation does not hold, what conditions are required in order to make it right.
If the $X_i$ are nonnegative, it is always true, by either monotone convergence or Tonelli's theorem (thinking of the sum as an integral over $\mathbb{N}$ with respect to counting measure). If we have $\sum_k E|X_k| < \infty$ or equivalently $E \sum_k |X_k| < \infty$, then it is also true, by either dominated convergence or Fubini's theorem. Otherwise it can fail: let $U \sim U(0,1)$, $Y_k = k 1_{\{U \le 1/k\}}$, so that $Y_k \to 0$ a.s. but $E Y_k = 1$ for each $k$. Let $Y_0 = 0$ and $X_k = Y_k - Y_{k-1}$ for $k \ge 1$. Then you can easily check that $$E \sum_{k=1}^\infty X_k = 0 \ne 1 = \sum_{k=1}^\infty E X_k.$$
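A small simulation makes the counterexample tangible (illustrative only — the exact computation above is the real argument): $E[Y_n] = 1$ for every $n$, yet for large $n$ almost every sample of $Y_n$ is $0$:

```python
import random

random.seed(0)
n, trials = 10_000, 100_000
# Y_n = n * 1{U <= 1/n}: expectation is exactly 1, but P(Y_n = 0) = 1 - 1/n
samples = [n if random.random() <= 1 / n else 0 for _ in range(trials)]
frac_zero = samples.count(0) / trials   # should be very close to 1 - 1/n
sample_mean = sum(samples) / trials     # hovers around 1, with huge variance
```

The huge variance of $Y_n$ (of order $n$) is exactly what lets the a.s. limit and the expectation disagree.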
{ "language": "en", "url": "https://math.stackexchange.com/questions/44517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Applications of systems of linear equations Sorry if this questions is overly simplistic. It's just something I haven't been able to figure out. I've been reading through quite a few linear algebra books and have gone through the various methods of solving linear systems of equations, in particular, $n$ systems in $n$ unknowns. While I understand the techniques used to solve these for the most part, I don't understand how these situations present themselves. I was wondering if anyone could provide a simple real-world example or two from data analysis, finance, economics, etc. in which the problem they were working on led to a system of $n$ equations in $n$ unknowns. I don't need the solution worked out. I just need to know the problem that resulted in the system.
Here are two neat examples I found while researching the topic for a class I'm teaching: http://www.ohiouniversityfaculty.com/youngt/IntNumMeth/lecture9.pdf (The example deals with systems of equations in statics problems in mechanical engineering; presumably, an introductory text on statics for mechanical engineers might contain more real-life examples by the truckload?) https://www.researchgate.net/publication/339843126_Balancing_Chemical_Equations_Using_Gauss-Jordan_Elimination_Aided_by_Matrix_Calculator (Just about every introductory [and/or advanced?] chemistry textbook should contain real-life examples by the truckload.) Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/44548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into? What is the smallest number of $45^\circ-60^\circ-75^\circ$ triangles that a square can be divided into? The image below is a flawed example, from http://www.mathpuzzle.com/flawed456075.gif Laczkovich gave a solution with many hundreds of triangles, but this was just an demonstration of existence, and not a minimal solution. ( Laczkovich, M. "Tilings of Polygons with Similar Triangles." Combinatorica 10, 281-306, 1990. ) I've offered a prize for this problem: In US dollars, (\$200-number of triangles). NEW: The prize is won, with a 50 triangle solution by Lew Baxter.
I improved on Laczkovich's solution by using a different orientation of the 4 small central triangles, by choosing better parameters (x, y) and using fewer triangles for a total of 64 triangles. The original Laczkovich solution uses about 7 trillion triangles. Here's one with 50 triangles:
{ "language": "en", "url": "https://math.stackexchange.com/questions/44684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108", "answer_count": 2, "answer_id": 1 }
Question about Holomorphic functions I try to show: Let $f: \mathbb{C} \longrightarrow \mathbb{C} $ be holomorphic with $$\Re(f)+\Im(f)=1 $$ then $ f $ is constant. ($\Re$ = Real Part, $\Im$ = Imaginary Part) I have certain ideas one is to use Liouvilles theorem about bounded functions but I am not sure if the given information already implies that $f$ is bounded. Consider for example $f(x)=1 + (1 - i) \Re(x)$ then $f$ is not bounded (here it fails because $f$ is not holomorphic). But I don't know how I could use the fact that $f$ is holomorphic to show that its bounded.
This is a special case of a very interesting Liouville-type theorem: Theorem Let $\Omega$ be an open subset of the complex plane and $f\colon \Omega \to \mathbb{C}$ be holomorphic. If $f(\Omega)$ is contained in a 1-dimensional manifold (i.e. a smooth curve) then $f$ is constant. The geometric intuition behind this is explained very well in Needham's Visual complex analysis. Locally $f$ acts like a rotation composed with a dilation (amplitwist) so it must be full rank or rank 0, it cannot be rank 1. If the range of $f$ lies in a curve then the rank of $f$ cannot exceed 1 and so it must be 0 everywhere. This means that $f$ must be constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/44755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }
How to prove correctness of a formula (differential calculus, integral)? How do I prove the correctness of the following formula relating to the fundamental theorem of calculus? $$\int \! x\cos{3x} \, \mathrm{d}x = \frac{\cos{3x}}{9}+\frac{x\sin{3x}}{3}+C$$
Differentiation is easy! Once you have calculated an integral, differentiate your "answer" and see whether you get the right thing. Of course you have to remember the $+{}C$: differentiating gives the same result whether or not you include it, so this check cannot catch a forgotten constant. You can even use the idea as a "technique of integration." Here is a simple example. We want $\int e^{3x}dx$. Guess that the answer is $e^{3x}+C$. Now differentiate your answer. If you remember to use the Chain Rule, you will get $3e^{3x}$. So the answer $e^{3x}+C$ was wrong. But that's easy to fix. Divide your "answer" $e^{3x}$ by $3$ to get rid of that nasty $3$ that came from the Chain Rule. You get a new improved answer $\frac{e^{3x}}{3}+C$. If you still have doubts, differentiate that. You will get $e^{3x}$, the thing you were trying to integrate. Even on a test, if there is time, you can check all your (indefinite) integral answers by differentiating. You probably have roughly $100$ percent mastery of differentiation, and can do it quickly, so a check is always possible, and quick.
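For the integral in the original question, the same check can even be scripted with a numerical derivative (a throwaway snippet, not part of the method):

```python
import math

def F(x):
    # the proposed antiderivative: cos(3x)/9 + x sin(3x)/3
    return math.cos(3 * x) / 9 + x * math.sin(3 * x) / 3

def f(x):
    # the original integrand: x cos(3x)
    return x * math.cos(3 * x)

# central difference F'(x) ≈ (F(x+h) - F(x-h)) / 2h should match f(x)
h = 1e-6
errors = [abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) for x in (0.3, 1.1, 2.7)]
```

If the proposed antiderivative were wrong, at least one of these errors would be far from zero.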
{ "language": "en", "url": "https://math.stackexchange.com/questions/44790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Introductory text for calculus of variations I am currently working on problems that require familiarity with calculus of variations. I am fairly new to this field. Please suggest a good introductory book for the same that could help me pick up the concepts quickly. edit: I would prefer books which are available in PDF format online.
If you check out Wikipedia's entry on "Calculus of Variations" here, and scroll down to the bottom where "References" are listed: * *You'll find a link to a pdf reference (Jon Fischer, Introduction to the Calculus of Variation, a quick and readable guide) that might be exactly what you're looking for, as well as some additional references (sample problems, guides, etc.). *In addition, you'll find a link to this site listed among the references. *There's also a chapter of a text that's available online: Chapter 8: Calculus of Variation from Optimization for Engineering Systems, by Ralph W. Pike. There are also some additional texts and resources listed in the linked Wikipedia entry, as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/44828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 6, "answer_id": 4 }
Interesting integral formula I looked around and found that integrals of the form $$\int_{0}^{\infty} \frac{x^{m-1}}{a+x^n}, a,m,n \in \mathbb{R}, 0<m<n, 0<a$$ seem to occur very often: Just to give a few examples (the formula given below would solve them all right away): How can I compute the integral $\int_{0}^{\infty} \frac{dt}{1+t^4}$? Simpler way to compute a definite integral without resorting to partial fractions? Is this definite integral really independent of a parameter? How can it be shown? Integrating $\int_0^\infty \frac{1}{x^2 + 2x + 2} \mathrm{d} x$ Even more surprising was that there seems to be a (quite beautiful) closed form, namely: $$\int_{0}^{\infty} \frac{x^{m-1}}{a+x^n}=\frac{\pi}{n} \left(\frac{1}{a}\right)^{1-\frac{m}{n}} \csc \left(m \cdot \frac{\pi }{n}\right)$$ (This result is from Mathematica). I tried to derive this result by integrating (brute force) but you get hypergeometric functions which I don't like. Therefore I would like to know if there is a straight-forward way to get this by hand.
Well there is actually a formula which I found in Gamelin's complex analysis book. This is problem number $7$, Exercise VII.4, page 208. It works for the case where the constant in the denominator is $1$ (the exponents are denoted $a$ and $b$ below): $$\int\limits_{0}^{\infty} \frac{x^{a-1}}{1+x^{b}} \ \text{dx} = \frac{\pi}{b \sin(\pi{a}/b)}, \quad 0 < a < b$$ Set $$I = \int\limits_{0}^{\infty} \frac{x^{a-1}}{1+x^{b}} \ \text{dx}$$ and integrate $$f(z) = \frac{z^{a-1}}{1+z^{b}} = \frac{|z|^{a-1} \cdot e^{i(a-1)\text{arg}(z)}}{1+|z|^{b}e^{ib\text{arg}(z)}}$$ There is a simple pole at $z_{1} = e^{\pi{i}/b}$ and hence $$\text{Res} \Biggl[\frac{z^{a-1}}{1+z^{b}}, e^{\pi{i}/b}\Biggr] = \frac{z^{a-1}}{bz^{b-1}}\Biggl|_{z =e^{\pi i / b}} = -\frac{1}{b}e^{\pi i a/b}$$ Integrate along $\gamma_{1}$, and let $R \to \infty$ and let $ \epsilon \to 0^{+}$. This gives, \begin{align*} \int\limits_{\gamma_{1}} f(z) \ dz & = \int\limits_{\gamma_{1}} \frac{|z|^{a-1} \cdot e^{i(a-1)\text{arg}(z)}}{1+|z|^{b}e^{ib\text{arg}(z)}} \ dz \\\ &= \int\limits_{\epsilon}^{R} \frac{x^{a-1}}{1+x^{b}} \ dx \to \int\limits_{0}^{\infty} \frac{x^{a-1}}{1+x^{b}} \ dx =I \end{align*} Integrate along $\gamma_{2}$, and let $R \to \infty$. This gives, for $0 < a < b$, $$\Biggl|\int\limits_{\gamma_{2}} f(z) dz \Biggr| \leq \frac{R^{a-1}}{R^{b}-1} \cdot \frac{2\pi R}{b} \sim \frac{2 \pi}{b R^{b-a}} \to 0$$ Integrate along $\gamma_{3}$ and let $R \to \infty$ and $\epsilon \to 0^{+}$. This gives \begin{align*} \int\limits_{\gamma_{3}} f(z) \ dz &= \int\limits_{\gamma_{3}} \frac{|z|^{a-1} \cdot e^{i(a-1)\text{arg}(z)}}{1+|z|^{b}e^{ib\text{arg}(z)}} = \Biggl[\begin{array}{c} z=x e^{2\pi i/b} \\\ dz=e^{2\pi i/b} \ dx \end{array}\Biggr] \\\ &= \int\limits_{R}^{\epsilon} \frac{x^{a-1}e^{2\pi i(a-1)/b}}{1+x^{b}} \cdot e^{2\pi i/b} \ dx \to \int\limits_{\infty}^{0} \frac{x^{a-1}e^{2\pi i(a-1)/b}}{1+x^{b}} \cdot e^{2\pi i/b} \ dx \\\ &= -e^{2\pi ia/b}I \end{align*} Integrate along $\gamma_{4}$ and let $\epsilon \to 0^{+}$.
This gives, for $0 < a < b$, $$\Biggl|\int\limits_{\gamma_{4}} f(z) \ dz \Biggr| \leq \frac{\epsilon^{a-1}}{1-\epsilon^{b}} \cdot \frac{2\pi\epsilon}{b} \sim \frac{2\pi\epsilon^{a}}{b} \to 0$$ Using the $\text{Residue Theorem}$ and letting $R \to \infty$ and $\epsilon \to 0^{+}$, we obtain that $$ I + 0 - e^{2\pi i a/b}I + 0 = 2\pi i \cdot \Bigl(-\frac{1}{b} e^{\pi ia/b}\Bigr)$$ This yields, $$(e^{-\pi i a/b} - e^{\pi i a/b})I= -\frac{2\pi i}{b}$$ and hence solving for $I$, we have $$I= \frac{2\pi i}{b \cdot (e^{\pi ia/b} - e^{-\pi i a/b})}=\frac{\pi}{b \sin(\pi a/b)}$$ Actually I have taken this answer from one of my posts; you can see that as well. * *Simpler way to compute a definite integral without resorting to partial fractions?
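The closed form is easy to confirm numerically. Folding the tail $[1,\infty)$ back onto $[0,1]$ with $x \mapsto 1/x$ gives $\int_0^1 \frac{x^{a-1}+x^{b-a-1}}{1+x^b}\,dx$, which a plain Simpson rule handles (a quick sanity script, not a proof):

```python
import math

def closed_form(a, b):
    # pi / (b sin(pi a / b)), valid for 0 < a < b
    return math.pi / (b * math.sin(math.pi * a / b))

def simpson(g, lo, hi, n):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += g(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b = 1, 4   # i.e. the integral of 1/(1 + x^4) over [0, infinity)
# note: in Python 0.0**0 == 1.0, which is the correct limit of x^(a-1) here
g = lambda x: (x ** (a - 1) + x ** (b - a - 1)) / (1 + x ** b)
numeric = simpson(g, 0.0, 1.0, 2000)
```

For $a=1$, $b=4$ both sides should agree to many digits with the value $\pi/(2\sqrt{2})$.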
{ "language": "en", "url": "https://math.stackexchange.com/questions/44928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 1, "answer_id": 0 }
$\tan(\frac{\pi}{2}) = \infty~$? Evaluate $\displaystyle \int\nolimits^{\pi}_{0} \frac{dx}{5 + 4\cos{x}}$ by using the substitution $t = \tan{\frac{x}{2}}$ For the question above, by changing variables, the integral can be rewritten as $\displaystyle \int \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}$, ignoring the upper and lower limits. However, after changing variables from $dx$ to $dt$, when $x = 0~$,$~t = \tan{0} = 0~$ but when $ x = \frac{\pi}{2}~$, $~t = \tan{\frac{\pi}{2}}~$, so can the integral technically be written as $\displaystyle \int^{\tan{\frac{\pi}{2}}}_{0} \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}~$, and if so, is it also reasonable to write it as $\displaystyle \int^{\infty}_{0} \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}$ EDIT: In response to confusion, my question is: Is it technically correct to write the above integral in the form with an upper limit of $\tan{\frac{\pi}{2}}$ and furthermore, is it is reasonable to equate $\tan{\frac{\pi}{2}}$ with $\infty$ and substitute it on the upper limit?
Well, in my opinion it is correct to write what you have stated, but for the sake of convenience we don't actually write it that way. For example, if your integral had some substitution in which the limits after substitution change to $t = \sin\frac{\pi}{3} = \frac{\sqrt{3}}{2}$, it is better to write the limit as its value instead of writing $\sin\frac{\pi}{3}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Proving $\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$ How do I show that: $$\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$$ This is actually problem B $4371$ given at this link. Looks like a very interesting problem. My attempts: Well, I have been thinking about this for the whole day, and I have got some insights. I don't believe my insights will lead me to a $\text{complete}$ solution. * *First, I wrote $\sin\frac{5\pi}{14}$ as $\sin\frac{9 \pi}{14}$ so that if I put $A = \frac{\pi}{14}$ so that the given equation becomes, $$\frac{1}{\sin^{2}{A}} + \frac{1}{\sin^{2}{3A}} + \frac{1}{\sin^{2}{9A}} =24$$ Then I tried working with this by taking $\text{lcm}$ and multiplying and doing something, which appeared futile. *Next, I actually didn't work it out, but I think we have to look for a equation which has roots as $\sin$ and then use $\text{sum of roots}$ formulas to get $24$. I think I haven't explained this clearly. * *$\text{Thirdly, is there a trick proving such type of identities using Gauss sums ?}$ One post related to this is: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$ I don't know how this will help as I haven't studied anything yet regarding Gauss sums.
Another method would involve the use of complex numbers. **Added:** OK, elaboration. Let $w = \exp(i \pi/14)$ so that $w^7 = i$. In (1) I factored $w^7-i$ and in (2) obtained the relation satisfied by $w$. (3) is what we want to compute. (4) is the relation of the trig functions to $w$. In (5) we wrote the thing to compute in terms of $w$. In (6) we took the denominator, and reduced it using the relation satisfied by $w$. In (7) the same thing for the numerator. So (8) is our answer, which is simplified in (9).
{ "language": "en", "url": "https://math.stackexchange.com/questions/45144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 1 }
Is the Notion of the Empty Set Relative or Absolute? Suppose we specify subsets of a reference set by pairs, where the first co-ordinate specifies a member of the universe of discourse, and the second co-ordinate specifies the value that the characteristic function yields for that member of the reference set. E. G., if we have $\{a, b\}$ as the universe of discourse, then we have subsets $\{(a, 0), (b, 0)\}, \{(a, 0), (b, 1)\}, \{(a, 1), (b, 0)\}, \{(a, 1), (b, 1)\}$, where $(x, 0)$ indicates that $x$ does not belong to the set, while $(x, 1)$ indicates that $x$ belongs to the set. Now, consider the empty set compared across different universes of discourses, for example $\{a, b\}$, and $\{a\}$. For $\{a, b\}$ we have $\{(a, 0), (b, 0)\}$ as the empty set under this specification, and for $\{a\}$ we have $\{(a, 0)\}$ as the empty set under this specification. So, it would seem that we have a relative notion of an empty set in this context, but oftentimes books talk of "the empty set" which suggests it as absolute. So, does the concept of the empty set come as absolute, or relative?
It is absolute. What you're seeing is an image of the empty set under a map and this can be different for different maps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
A notion of topology for computability A topology on a space $X$ is defined as a subset of the power-set of X, that is closed under arbitrary unions, finite intersections and includes the empty set and the full space. Is anybody aware of a modification of the notion of topology where closure is only under finite intersection and recursively enumerable collections of open sets? Example: Consider the set of natural numbers, and let the set of open sets be the set of recursively enumerable sets of the natural numbers. Now for any finite collection of r.e. sets their intersection is also r.e. Furthermore, for an r.e. collection of sets, their union is r.e. Has any work been done on a notion similar to this (if indeed that makes sense)?
In the setting of effective Polish spaces, you are describing the lightface $\Sigma^0_1$ pointclass. These are well-studied in effective descriptive set theory. The canonical reference on this subject is Moschovakis's Descriptive Set Theory, which is currently available online by the author.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Raising a square matrix to the k'th power: From real through complex to real again - how does the last step work? I am reading Applied linear algebra: the decoupling principle by Lorenzo Adlai Sadun (btw very recommendable!) On page 69 it gives an example where a real, square matrix $A=[(a,-b),(b,a)]$ is raised to the k'th power: $$A^k.(1,0)^T$$ The result must be a real vector. Nevertheless it seems easier to do the calculation via the complex numbers:$$=((a+bi)^k+(a-bi)^k).(1,0)^T/2-i((a+bi)^k-(a-bi)^k).(0,1)^T/2$$ At this stage the result seems to be complex. But then comes the magic step and everything gets real again:$$=Re[(a+bi)^k].(1,0)^T+Im[(a+bi)^k].(0,1)^T$$ Now I did some experiments and made two observations: First, this step seems to yield the correct results - yet I don't know why. Second, the raising of this matrix to the k'th power even confuses CAS (e.g. WolframAlpha aka Mathematica, see e.g. the plots here) because they most of the time seem to think that the results are complex. My question Could you please give me a proof/explanation for the correctness of the last step. Perhaps you will even know why CAS are confused too (perhaps it is because their algorithms also go through the complex numbers and they too have difficulties in seeing that the end result will be real?)
We have that \begin{align} A=\begin{pmatrix}a & -b\\ b &a \end{pmatrix} \end{align} with $a$ and $b$ real numbers. In the space $L(\mathbb{C}^2)$ this matrix has two eigenvalues $a+bi$ and $a-bi$ with eigenvectors $\mathbf{b}_1=(1,-i)$ and $\mathbf{b}_2=(1,i)$. We have $(1,0)=\mathbf{e}_1=(\mathbf{b}_1+\mathbf{b}_2)/2$. So \begin{align} A^k\mathbf{e}_1=A^k(\mathbf{b}_1+\mathbf{b}_2)/2=A^k\mathbf{b}_1/2+A^k\mathbf{b_2}/2 \end{align} Now we use the fact that $\mathbf{b}_1$ and $\mathbf{b}_2$ are eigenvectors, hence $$A^k\mathbf{b}_1=(a+bi)^k\mathbf{b}_1$$ $$A^k\mathbf{b}_2=(a-bi)^k\mathbf{b}_2$$ Now using the fact that $$\mathbf{b}_1=\mathbf{e}_1-i\mathbf{e}_2$$ $$\mathbf{b}_2=\mathbf{e}_1+i\mathbf{e}_2$$ we arrive at the equation in question: $$A^k\mathbf{e}_1=\frac{(a+bi)^k+(a-bi)^k}{2}\,\mathbf{e}_1-i\,\frac{(a+bi)^k-(a-bi)^k}{2}\,\mathbf{e}_2$$ We rewrite this as $$A^k\mathbf{e}_1=c_1\mathbf{e}_1+ic_2\mathbf{e}_2,$$ where $c_1$ and $c_2$ are complex numbers. The vector on the left-hand side is real, so the vector on the right-hand side should be real too. The complex vector $\mathbf{c}$ is real if $$\mathbf{c}=Re (\mathbf{c})$$ So the following must hold \begin{align} c_1\mathbf{e}_1+ic_2\mathbf{e}_2&=Re(c_1\mathbf{e}_1+ic_2\mathbf{e}_2)\\ &=Re(c_1)\mathbf{e_1}+Re(ic_2)\mathbf{e_2}\\ &=Re(c_1)\mathbf{e_1}-Im(c_2)\mathbf{e_2} \end{align} and we arrive at the required result. Update Actually we do not, since I missed the last step. @Dennis provides the ending in his answer. His answer contains the essential trick, my answer just gives the context to the question, i.e. what is the matrix $A$, etc.
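The identity underlying all of this — that $A$ is just multiplication by $a+bi$ in disguise — can be checked directly: the first column of $A^k$ should equal $(\Re[(a+bi)^k], \Im[(a+bi)^k])$. A minimal check in plain Python (arbitrary sample values):

```python
a, b, k = 2, 3, 6
w = complex(a, b) ** k                 # (a + bi)^k

A = [[a, -b], [b, a]]
M = [[1, 0], [0, 1]]
for _ in range(k):                     # M = A^k by repeated multiplication
    M = [[sum(M[i][t] * A[t][j] for t in range(2)) for j in range(2)]
         for i in range(2)]

col = (M[0][0], M[1][0])               # A^k applied to (1, 0)^T
```

Since the matrix power uses exact integer arithmetic, any mismatch beyond floating rounding in `w` would expose an error in the derivation.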
{ "language": "en", "url": "https://math.stackexchange.com/questions/45297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simple algebra question $$\dfrac{\qquad\dfrac{5p+10}{p^2-4}\qquad}{\dfrac{3p-6}{(p-2)^2}}$$ Im all confused about this question. Can someone go through it step by step. Can you please list when I can cancel numbers. Thanks.
$$\frac{{5p + 10}}{{p^2 - 4}}\bigg/\frac{{3p - 6}}{{(p - 2)^2 }} = \frac{{5p + 10}}{{p^2 - 4}}\frac{{(p - 2)^2 }}{{3p - 6}} = \frac{{5(p + 2)}}{{(p + 2)(p - 2)}}\frac{{(p - 2)(p - 2)}}{{3(p - 2)}} = \frac{5}{3}$$
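One can spot-check the simplification by plugging a few values of $p$ into the original expression (avoiding $p = \pm 2$, where the cancelled factors vanish); `original` is just a throwaway name:

```python
def original(p):
    # the unsimplified expression: ((5p+10)/(p^2-4)) divided by ((3p-6)/(p-2)^2)
    return ((5 * p + 10) / (p ** 2 - 4)) / ((3 * p - 6) / (p - 2) ** 2)

# every admissible p should give 5/3
checks = [abs(original(p) - 5 / 3) for p in (3.0, 5.5, -4.0, 100.0)]
```

This is no substitute for the algebra, but it quickly catches a wrong cancellation.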
{ "language": "en", "url": "https://math.stackexchange.com/questions/45357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Linear versus non-linear integral equations I'm having trouble solving an integral equation. It appears to me to be a homogeneous Fredholm equation of the second kind. However, I'm being told that this can't be a Fredholm equation, because it is non-linear. Could someone help me in trying to figure out how to classify an integral equation as linear or non-linear. Also, I'll post the equation I need to solve below, and it would be great if anyone could also give me some tips on how to try and solve it. Thank you to all who reply. The equation is $$\phi(x) = (x^2 - x)\int_0^1 \mathrm{d}y \frac{\phi(y)}{(y-x)^2}$$ Also, is this by chance related to an eigenvalue problem? I know that might sound like a strange question, but I've seen some people treating these as eigenvalue equations. By the way, I want to solve the equation for $\phi(x)$
This is a homogeneous Fredholm integral equation of the second kind. It certainly is linear in the function $\phi$, as already observed by Fabian (see comments). An important observation is that the kernel has a singularity in the integration domain, for $0 \le x \le 1$, which makes the equation a singular Fredholm integral equation of the second kind. I don't know how to solve your equation, but here is a reference that has a chapter dedicated to singular Fredholm integral equations of both the first and second kind: * *David Porter, David Stirling: "Integral Equations. A practical treatment, from spectral theory to applications". See chapter 9, "Some singular integral equations". I'm not quite sure, but I think they don't have examples with a quadratically divergent kernel; it may be useful to read that chapter nevertheless.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral with $\sqrt{2x^4 - 2x^2 + 1}$ in the denominator $$\int\frac{x^{2}-1}{x^{3}\sqrt{2x^{4}-2x^{2}+1}} \: \text{d}x$$ I tried to substitute $x^2=t$ but I am unable to solve it and I also tried to divide numerator and denominator by $x^2$ and do something but could not get anything.
The factor of $(2x^4 - 2x^2 + 1)^{-1/2}$ suggests that it might be profitable to look at solutions of the form $f(x)(2x^4 - 2x^2 + 1)^{1/2}$, and hope for a simplification. Indeed, by differentiating this expression we get the differential equation $$(2x^4 - 2x^2 + 1) \frac{\mathrm{d}f}{\mathrm{d}x} + 2x(2x^2 - 1) f = \frac{x^2-1}{x^3}$$ which can be solved with an integrating factor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 0 }
Probability inequality Let $X,Y,Z$ be non-negative independent r.v. I should find for which $u\geq 0$ holds $$ Zu\geq Y - X(1+Z) $$ with probability one. Clearly, if $P(Z = 0)>0$ then the value of $u$ doesn't matter and we should have $X\geq Y$ with probability one. The question is the following: what if $P(Z = 0) = 0$? Could we divide an inequality by $Z$ and continue working? If yes I expect an answer like $$ u\geq \alpha $$ for $$ \alpha = \inf\left(a:P\left[\frac{Y-X}{Z}-X\leq a \right]= 1\right) $$ with possibly $\alpha = \infty$.
Since $$ P[Zu \ge Y - X(1 + Z)] = P[Zu \ge Y - X(1 + Z),Z > 0] + P[Zu \ge Y - X(1 + Z),Z = 0], $$ $P(Z=0)=0$ yields $$ P[Zu \ge Y - X(1 + Z)] = P[Zu \ge Y - X(1 + Z),Z > 0] = P\bigg[u \ge \frac{{Y - X}}{Z} - X,Z > 0\bigg]. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/45534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Number of possible sets for given N How many possible valid collections are there for a given positive integer N given the following conditions: All the sums from 1 to N should be possible to be made by selecting some of the integers. Also this has to be done in way such that if any integer from 1 to N can be made in more than one way by combining other selected integers then that set of integers is not valid. For example, with N = 7, The valid collections are:{1,1,1,1,1,1,1},{1,1,1,4},{1,2,2,2},{1,2,4} Invalid collections are: {1,1,1,2,2} because the sum adds up to 7 but 2 can be made by {1,1} and {2}, 3 can be made by {1,1,1} and {1,2}, 4 can be made by {1,1,2} and {2,2} and similarly 5, 6 and 7 can also be made in multiple ways using the same set. {1,1,3,6} because all from 1 to 7 can be uniquely made but the sum is not 7 (its 11).
Suppose your collection contains a finite set of distinct numbers $\{n_1 ... n_k\}$ and that the collection contains the number $n_i$ $t_i$ times (you can also suppose that the $n_i$ are sorted). Then your condition is that every number $x$ between 0 and $N$ can be written as $x = \Sigma u_i n_i$ with $0 \leq u_i \leq t_i$ in exactly one way. This is possible if and only if $n_1 = 1$, for all $i > 1$, $n_i = 1 + \Sigma_{j\lt i} t_j n_j$, and $N = \Sigma t_i n_i$. You can prove this by induction on $k$, starting at $k=N=0$ for the base case. The base case is easy. The induction case is not difficult: if you have a valid collection containing $k$ distinct terms that can make all numbers up to $N_k$, then the only way to extend it into a bigger collection is by adding $n_{k+1} = N_k+1$. If you pick a smaller number, then $n_{k+1}$ can be written in two ways; if you pick a bigger number, then you cannot make $N_k+1$. And then, if you have a valid collection made of $k+1$ distinct terms, the sub-collection containing the first $k$ distinct terms is also valid and has to make every number up to $n_{k+1}-1$, for the same kind of reasons. Thus, a valid collection of $k$ distinct terms that makes every number up to $N$ is determined by the sequence $(n_1=1, \ldots n_k, n_{k+1}=N+1)$ where $n_i$ divides $n_{i+1}$: $$n_i = 1 + \Sigma_{j \lt i} t_j n_j = n_{i-1} + t_{i-1} n_{i-1} = (t_{i-1} + 1) n_{i-1}$$ So this shows why they are successive multiples of each other and how to recover the $t_i$ from the sequence. The 4 collections you gave as an example correspond respectively to the sequences $(1,8), (1,4,8), (1,2,8), (1,2,4,8)$. The number of valid sequences depends on the exponents in the prime decomposition of $N+1$: if $N+1$ is a prime power $p^a$, then there are exactly $2^{a-1}$ valid sequences, since you have to choose whether you pick $p^i$ or not for $0 \lt i \lt a$.
If $N+1$ has several prime divisors, I don't think there is a nice formula giving the number of valid collections from the multiset of exponents in the prime factorisation of $N+1$.
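Even without a closed formula, the count is easy to compute: valid collections biject with divisor chains from $1$ up to $N+1$, which a short recursion enumerates (an illustrative sketch; `count_chains` is a made-up name):

```python
def count_chains(n):
    # number of chains 1 = d_0 < d_1 < ... < d_m = n with each d_i dividing d_{i+1};
    # each such chain is one valid collection for N = n - 1
    if n == 1:
        return 1          # the empty chain
    return sum(count_chains(d) for d in range(1, n) if n % d == 0)

counts = {N: count_chains(N + 1) for N in (7, 11, 15)}
```

For $N = 7$ this recovers the four collections listed in the question, and for $N+1 = 2^4$ it matches the $2^{a-1}$ prime-power count.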
{ "language": "en", "url": "https://math.stackexchange.com/questions/45582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
How to find a positive semidefinite matrix $Y$ such that $YB =0$ where $B$ is given $B$ is an $n\times m$ matrix, $m\leq n$. I have to find an $n\times n$ positive semidefinite matrix $Y$ such that $YB = 0$. Please help me figure out how can I find the matrix $Y$.
I can give you the answer in the case $m=n$. * *if $\det(B)\neq 0$ then $B$ is invertible, and therefore the only matrix $Y$ is $Y=0$. *if $\det(B)=0$ then take $Y=adj(B)$, where $adj(B)$ is the adjugate matrix of $B$. Oops... I forgot about the fact that $Y$ must be positive semidefinite. Sorry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Interesting integral related to the Omega Constant/Lambert W Function I ran across an interesting integral and I am wondering if anyone knows where I may find its derivation or proof. I looked through the site. If it is here and I overlooked it, I am sorry. $$\displaystyle\frac{1}{\int_{-\infty}^{\infty}\frac{1}{(e^{x}-x)^{2}+{\pi}^{2}}dx}-1=W(1)=\Omega$$ $W(1)=\Omega$ is often referred to as the Omega Constant, which is the solution to $xe^{x}=1$, namely $x\approx .567$. Thanks much. EDIT: Sorry, I had the integral written incorrectly. Thanks for the catch. I had also seen this: $\displaystyle\int_{-\infty}^{\infty}\frac{dx}{(e^{x}-x)^{2}+{\pi}^{2}}=\frac{1}{1+W(1)}=\frac{1}{1+\Omega}\approx .638$ EDIT: I do not know what is wrong, but I am trying to respond and cannot. All the buttons are unresponsive but this one. I have been trying to leave a greenie and add a comment, but neither will respond. I just wanted you to know this before you thought I was an ingrate. Thank you. That is an interesting site.
I also considered this integral on another site, but only in an imperfect, non-rigorous way. It seems that the Formelsammlung Mathematik offers a complete solution. It is written in German, but your bona fide translator Google may read it for you.
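In the meantime, the claimed value is easy to confirm numerically — a sanity check, not a proof — using scipy's `quad` and `lambertw`:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

def integrand(x):
    with np.errstate(over="ignore"):            # exp overflows harmlessly to inf
        return 1.0 / ((np.exp(x) - x) ** 2 + np.pi ** 2)

val, _ = quad(integrand, -np.inf, np.inf)
omega = lambertw(1).real                        # Omega = W(1) ≈ 0.5671432904
print(val, 1.0 / (1.0 + omega))                 # both ≈ 0.63817
```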
{ "language": "en", "url": "https://math.stackexchange.com/questions/45745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 3, "answer_id": 1 }
Can an element other than the neutral element be its own inverse? Take the following operation $*$ on the set $\{a, b\}$: * *$a * b = a$ *$b * a = a$ *$a * a = b$ *$b * b = b$ $b$ is the neutral element. Can $a$ also be its own inverse, even though it's not the neutral element? Or does the inverse property require that only the neutral element may be its own inverse but all other elements must have another element be the inverse.
Yes, an element other than the identity can be its own inverse. A simple example is the numbers $0,1,2,3$ under addition modulo 4, where 0 is the identity, and 2 is its own inverse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
Subspace intersecting many other subspaces V is a vector space of dimension 7. There are 5 subspaces of dimension four. I want to find a two dimensional subspace such that it intersects at least once with all the 5 subspaces. Edit: All the 5 given subspaces are chosen randomly (with a very high probability, the intersection is a line). If I take any two of the 5 subspaces and find the intersection, it results in a line. Similarly, we can take another two planes and find another line. From these two lines we can form a 2 dimensional subspace which intersects 4 of the 5 subspaces. But can someone tell me how we can find a two dimensional subspace which intersects all the 5 subspaces? It would be very useful if you can tell what kind of concepts in mathematics I can look for to solve problems like this. Thanks in advance. Edit: the second paragraph is one way in which I tried the problem. But taking the intersection of the subspaces puts more constraints on the problem and the solution becomes infeasible.
By "intersects" do you mean that the intersection is nonzero? Then in general there won't be a solution: the two-dimensional subspace spanned by the intersection of subspaces 1 and 2 and the intersection of subspaces 3 and 4 will intersect subspace 5 only in the zero vector. For example, if $e_1, \ldots, e_7$ form a basis of $V$ take $U_1$ spanned by $e_1, e_2, e_3, e_4$, $U_2$ spanned by $e_1, e_5, e_6, e_7$, $U_3 = U_1$, $U_4$ spanned by $e_2, e_5, e_6, e_7$, and $U_5$ spanned by $e_3, e_4, e_5, e_6$. Edit: Oops, this example is bogus, please ignore.
{ "language": "en", "url": "https://math.stackexchange.com/questions/45899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Relation between Cholesky and SVD When we have a symmetric matrix $A = LL^*$, we can obtain L using Cholesky decomposition of $A$ ($L^*$ is $L$ transposed). Can anyone tell me how we can get this same $L$ using SVD or Eigen decomposition? Thank you.
There is an interesting relationship between the eigen-decomposition of a symmetric matrix and its Cholesky factor: Say $A = L L'$ with $L$ the Cholesky factor, and $A = E D E'$ the eigen-decomposition. Then $L$ factors as $L= E D^{\frac{1}{2}} F$, with $F$ some orthogonal matrix (this is in fact a singular value decomposition of $L$), i.e. the Cholesky factor is a rotated form of the matrix of eigenvectors scaled by the diagonal matrix of square-root eigenvalues. So you can get $L$ from $E D^{\frac{1}{2}}$ through a series of orthogonal rotations aimed at making the elements above the diagonal zero.
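A quick numerical check of this relationship (the random matrix and seed below are arbitrary): recover the rotation as $F = D^{-1/2}E'L$ and confirm it is orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
A = X @ X.T + 5 * np.eye(5)      # a symmetric positive-definite test matrix

L = np.linalg.cholesky(A)        # A = L L'
w, E = np.linalg.eigh(A)         # A = E diag(w) E'
S = np.diag(np.sqrt(w))          # D^(1/2)

F = np.linalg.inv(S) @ E.T @ L   # the rotation relating the two factorizations
assert np.allclose(F @ F.T, np.eye(5))   # F is orthogonal
assert np.allclose(E @ S @ F, L)         # L = E D^(1/2) F
```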
{ "language": "en", "url": "https://math.stackexchange.com/questions/45963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 8, "answer_id": 0 }
Programming Logic: - Splitting up Tasks Between Threads I asked this question at stackoverflow and instead of addressing the math required in the problem, they wanted to talk about why setting up 5 threads is no good, or question my intentions. I just want the math solved. Let's say you want 5 threads to process data simultaneously. Also assume you have 89 tasks to process. Off the bat you know 89 / 5 = 17 with a remainder of 4. The best way to split up tasks would be to have 4 (the remainder) threads process 18 (17+1) tasks each and then have 1 (# threads - remainder) thread process 17. This will eliminate the remainder. Just to verify:

    Thread 1: Tasks 1-18 (18 tasks)
    Thread 2: Tasks 19-36 (18 tasks)
    Thread 3: Tasks 37-54 (18 tasks)
    Thread 4: Tasks 55-72 (18 tasks)
    Thread 5: Tasks 73-89 (17 tasks)

Giving you a total of 89 tasks completed. I need a way of getting the start and ending range of each thread mathematically/programmatically; the following should print the exact thing I have listed above:

    $NumTasks = 89
    $NumThreads = 5
    $Remainder = $NumTasks % $NumThreads
    $DefaultNumTasksAssigned = floor($NumTasks / $NumThreads)
    For $i = 1 To $NumThreads
        if $i <= $Remainder Then
            $NumTasksAssigned = $DefaultNumTasksAssigned + 1
        else
            $NumTasksAssigned = $DefaultNumTasksAssigned
        endif
        $Start = ??????????
        $End = ??????????
        print Thread $i: Tasks $Start-$End ($NumTasksAssigned tasks)
    Next

This should also work for any number of $NumTasks. Note: Please stick to answering the math at hand and avoid suggesting or assuming the situation.
    $NumTasks = 89
    $NumThreads = 5
    $Remainder = $NumTasks % $NumThreads
    $DefaultNumTasksAssigned = floor($NumTasks / $NumThreads)
    $End = 0
    For $i = 1 To $NumThreads
        if $i <= $Remainder Then
            $NumTasksAssigned = $DefaultNumTasksAssigned + 1
        else
            $NumTasksAssigned = $DefaultNumTasksAssigned
        endif
        $Start = $End + 1
        $End = $End + $NumTasksAssigned
        print Thread $i: Tasks $Start-$End ($NumTasksAssigned tasks)
    Next
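The same logic in Python, in case a runnable version helps (the function name is mine):

```python
def split_tasks(num_tasks, num_threads):
    remainder = num_tasks % num_threads
    base = num_tasks // num_threads
    ranges, end = [], 0
    for i in range(1, num_threads + 1):
        # the first `remainder` threads each take one extra task
        assigned = base + 1 if i <= remainder else base
        start, end = end + 1, end + assigned
        ranges.append((start, end))
    return ranges

for i, (start, end) in enumerate(split_tasks(89, 5), 1):
    print(f"Thread {i}: Tasks {start}-{end} ({end - start + 1} tasks)")
```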
{ "language": "en", "url": "https://math.stackexchange.com/questions/46014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Finding a basis for a submodule Let $F$ be the $\mathbb{Z}$-module $\mathbb{Z}^{3}$ and let $N$ be the submodule generated by $\{(4,-4,4),(-4,4,8),(16,20,40)\}$. Find a basis $\left\{ f_{1},f_{2},f_{3}\right\}$ for $F$ and integers $d_{1},d_{2},d_{3}$ such that $d_{1}\mid d_{2}\mid d_{3}$ and $\left\{d_{1}f_{1},d_{2}f_{2},d_{3}f_{3}\right\}$ is a basis for $N$. Now I found $d_1 = 4 , d_2 = 12$ and $d_3 = 36$, but I don't know how to find $f_1 , f_2 , f_3$.
As this is a standard exercise I only outline the basic theory and steps. Let $A$ be a 3x3 matrix with your three generators as rows. If your preference is to use columns, that is very much ok, you simply need to transpose everything I say below. Do remember that taking the transpose reverses the order of matrix multiplication. The algorithm leading to Smith normal form gives you two unimodular matrices, $P$ and $Q$, such that $PAQ=D$ with $$D=\left(\begin{array}{ccc}d_1&0&0\\0&d_2&0\\0&0&d_3\end{array}\right).$$ The rows of the product matrix $PA$ are linear combinations of the rows of $A$, so as $P$ is unimodular, they form another basis of $N$. Here $PA=DQ^{-1}$, so the rows of $DQ^{-1}$ form a basis of $N$ as well. On the other hand $Q^{-1}$ is unimodular, so its rows form a basis of $\mathbf{Z}^3$. I leave it to you as an exercise to find how the rows of $DQ^{-1}$ are related to the rows $Q^{-1}$. Trust me, it is crucial to your question! How to find the unimodular matrices $P$ and $Q$? Remember that when you perform an elementary row operation (=swap two rows, add a scalar multiple of one to another,...) to a matrix, you are multiplying it with an elementary matrix from the left. Several such operations are carried out, when looking for the Smith normal form. All these operations amount to multiplying whatever you had before from the left. Therefore the row operations are the building blocks of the matrix $P$. Similarly elementary column operations are equivalent to multiplying a matrix with an elementary matrix from the right. Thus the column operations that you perform while computing the Smith normal form are the building blocks of the matrix $Q$.
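You can also check the $d_i$ you found without running the full algorithm: the $k$-th determinantal divisor $g_k$ (the gcd of all $k\times k$ minors) satisfies $d_1 = g_1$, $d_2 = g_2/g_1$, $d_3 = g_3/g_2$. A brute-force sketch for this particular matrix:

```python
from itertools import combinations
from functools import reduce
from math import gcd

B = [[4, -4, 4], [-4, 4, 8], [16, 20, 40]]

def det(m):
    """Determinant of a 1x1, 2x2 or 3x3 matrix by Laplace expansion."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def g(k):
    """gcd of all k x k minors of B (the k-th determinantal divisor)."""
    idx = list(combinations(range(3), k))
    return reduce(gcd, (abs(det([[B[r][c] for c in cols] for r in rows]))
                        for rows in idx for cols in idx))

d1, d2, d3 = g(1), g(2) // g(1), g(3) // g(2)
print(d1, d2, d3)   # 4 12 36
```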
{ "language": "en", "url": "https://math.stackexchange.com/questions/46055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Effect of adding a constant to both Numerator and Denominator I was reading a text book and came across the following: If a ratio $a/b$ is given such that $a \gt b$, and given $x$ is a positive integer, then $$\frac{a+x}{b+x} \lt\frac{a}{b}\quad\text{and}\quad \frac{a-x}{b-x}\gt \frac{a}{b}.$$ If a ratio $a/b$ is given such that $a \lt b$, $x$ a positive integer, then $$\frac{a+x}{b+x}\gt \frac{a}{b}\quad\text{and}\quad \frac{a-x}{b-x}\lt \frac{a}{b}.$$ I am looking for more of a logical deduction on why the above statements are true (than a mathematical "proof"). I also understand that I can always check the authenticity by assigning some values to a and b variables. Can someone please provide a logical explanation for the above? Thanks in advance!
Consider the function $$f(x)= \frac{a+x}{b+x},$$ where $a, b > 0$. The function $f$ is continuous and differentiable for all $x \in \mathbb{R}^{+}$ (positive real numbers), and $f(x) \to 1$ as $x\to \infty$. The key identity is $$f(x) - \frac{a}{b} = \frac{(b-a)x}{b(b+x)},$$ whose sign, for $x>0$, is the sign of $b-a$. So if $\frac{a}{b}<1$ (that is, $a<b$), then $f(x) > \frac{a}{b}$: adding $x$ pulls the ratio up toward $1$. If $\frac{a}{b} > 1$ (that is, $a>b$), then $f(x) < \frac{a}{b}$: adding $x$ pulls the ratio down toward $1$. The statements about subtracting $x$ follow the same way with $-x$ in place of $x$ (as long as $b-x>0$). Hope this helps!
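All four inequalities come down to the sign of one difference, $\frac{a+x}{b+x}-\frac{a}{b}=\frac{(b-a)x}{b(b+x)}$, which a computer algebra system confirms in a line (only the sign of $b-a$ matters for $x>0$):

```python
from sympy import symbols, simplify

a, b, x = symbols("a b x", positive=True)
diff = (a + x) / (b + x) - a / b
assert simplify(diff - (b - a) * x / (b * (b + x))) == 0
```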
{ "language": "en", "url": "https://math.stackexchange.com/questions/46156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 8, "answer_id": 7 }
Reference book on measure theory I post this question with some personal specifications. I hope it does not overlap with old posted questions. Recently I strongly feel that I have to review the knowledge of measure theory for the sake of starting my thesis. I am not totally new with measure theory, since I have taken and past one course at the graduate level. Unfortunately, because the lecturer was not so good at teaching, I followed the course by self-study. Now I feel that all the knowledge has gone after the exam and still don’t have a clear overview on the structure of measure theory. And here come my specified requirements for a reference book. * *I wish the book elaborates the proofs, since I will read it on my own again, sadly. And this is the most important criterion for the book. *I wish the book covers most of the topics in measure theory. Although the topic of my thesis is on stochastic integration, I do want to review measure theory at a more general level, which means it could emphasize on both aspects of analysis and probability. If such a condition cannot be achieved, I'd like to more focus on probability. *I wish the book could deal with convergences and uniform integrability carefully, as Chung’s probability book. My expectation is after thorough reading, I could have strong background to start a thesis on stochastic integration at an analytic level. Sorry for such a tedious question. P.S: the textbook I used is Schilling’s book: measures, integrals and martingales. It is a pretty good textbook, but misprints really ruin the fun of reading.
It seems unnecessary to add to this long list of great books, but Real Analysis and Probability by R.M. Dudley is wonderful. His book fits your need to emphasize on both aspects of analysis and probability.
{ "language": "en", "url": "https://math.stackexchange.com/questions/46213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86", "answer_count": 16, "answer_id": 2 }
PEDMAS in RPN needed? In RPN do we still have to take into account the PEDMAS rules? For example these questions:

    3 – 4 * 2
    3 * 4 – 2
    3 * (4 – 2)
    (3 – 4) * 2
    3 – 4 + 2

Answers

    342*-
    34*2-
    ---don't know this one
    34-2*
    342+-

Thanks.
Let's look at what you've got so far:

        Algebraic        RPN
    (a) 3 – 4 * 2        3 4 2 * -
    (b) 3 * 4 – 2        3 4 * 2 -
    (c) 3 * (4 – 2)      [don't know]
    (d) (3 – 4) * 2      3 4 - 2 *
    (e) 3 – 4 + 2        3 4 2 + -

Of these, (a), (b), and (d) are correct. Your expression for (e) will add the 4 and 2, then subtract the result from 3, so its algebraic equivalent would be $3-(4+2)$. To express $3-4+2$ in RPN, the subtraction should happen before the addition, so the - symbol should appear earlier than the + symbol. Does this give you enough information to finish it? For (c), start with the numbers in order, as you have them in all the rest of your expressions: 3 4 2. Now, what operation happens first? Where does that symbol need to go? What operation is next? Where does that symbol go? (Does this give you enough information?)
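If you want to check your RPN answers mechanically, a stack-based evaluator is only a few lines (a generic sketch, not tied to any particular calculator); below it verifies the three confirmed conversions:

```python
import operator

ops = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_rpn(expr):
    stack = []
    for tok in expr.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # note the order: a (op) b
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

assert eval_rpn("3 4 2 * -") == 3 - 4 * 2     # (a)
assert eval_rpn("3 4 * 2 -") == 3 * 4 - 2     # (b)
assert eval_rpn("3 4 - 2 *") == (3 - 4) * 2   # (d)
```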
{ "language": "en", "url": "https://math.stackexchange.com/questions/46247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Book/tutorial recommendations: acquiring math-oriented reading proficiency in German I'm interested in others' suggestions/recommendations for resources to help me acquire reading proficiency (of current math literature, as well as classic math texts) in German. I realize that German has evolved as a language, so ideally, the resource(s) I'm looking for take that into account, or else perhaps I'll need a number of resources to accomplish such proficiency. I suspect I'll need to include multiple resources (in multiple forms) in my efforts to acquire the level of reading proficiency I'd like to have. I do like "hard copy" material, at least in part, from which to study. But I'm also very open to suggested websites, multimedia packages, etc. In part, I'd like to acquire reading proficiency in German to meet a degree requirement, but as a native English speaker, I would also like to be able to study directly from significant original German sources. Finally, there's no doubt that a sound/solid reference/translation dictionary (or two or three!) will be indispensable, as well. Any recommendations for such will be greatly appreciated, keeping in mind that my aim is to be proficient in reading mathematically-oriented German literature (though I've no objections to expanding from this base!).
If your focus includes number theory, then Landau's classic Handbuch Der Lehre Von Der Verteilung Der Primzahlen is an excellent choice. You can easily find a hard copy and/or read it online, for example, at University of Michigan Historical Math Collection
{ "language": "en", "url": "https://math.stackexchange.com/questions/46313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 6, "answer_id": 3 }
Homeomorphism of the Disk I'm working through Massey's "Basic Course in AT." One of the problems is prove that a homeomorphism of the closed disk maps the boundary to the boundary and the interior to the interior. How would one prove this? I can't seem to get this one problem.
This answer expands on Chris Eagle's comment: Let $D^n \subset \mathbb R^n$ denote the $n$-dimensional closed unit disk, that is $D^n = \{ x \in \mathbb R^n \;|\; |x|\leq 1 \}$, with boundary $\partial D^n = S^{n-1} = \{ x \in \mathbb R^n \;|\; |x| = 1 \}$ the $(n-1)$-dimensional sphere. Let $f: D^n \to D^n$ be a homeomorphism that maps $x \in \partial D^n$ to $f(x) \in D^n \setminus \partial D^n$. Obviously $f$ induces a homeomorphism $\tilde{f}: D^n \setminus \{ x\} \to D^n \setminus \{ f(x) \}$. Since $x \in \partial D^n$, we have that $D^n \setminus \{ x\}$ is convex and therefore homotopy equivalent to a point. On the other hand we can construct a homotopy equivalence $D^n \setminus \{ f(x) \} \simeq \partial D^n = S^{n-1}$ since $D^n$ is compact and radially convex wrt. a neighborhood of $f(x)$. Thus we get $\{pt\} \simeq D^n \setminus \{ x\} \cong D^n \setminus \{ f(x)\} \simeq S^{n-1}$, which is a contradiction by your technique of choice. For example $\pi_{n-1}(\{pt\}) \not \cong \pi_{n-1}(S^{n-1})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/46353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
Solving an exponential equation Alright, here's the equation: $$‎1.08^x = 1.10^{x-1}$$ I know I need to use logarithms, but I can't figure how to do it. Thanks in advance!
Don't get hung up on the fact that the bases don't match. The so-called power rule (or exponent rule) for logarithms works for any base. $$ \begin{align*} 1.08^x &= 1.10^{x-1}\\ \ln\left(1.08^x\right) &= \ln\left(1.10^{x-1}\right)\\ x\ln(1.08) &= (x-1)\ln(1.10)\\ x\ln(1.08) &= x\ln(1.10) - \ln(1.10)\\ x\ln(1.08) - x\ln(1.10) &= - \ln(1.10)\\ x(\ln(1.08) - \ln(1.10)) &= - \ln(1.10)\\ x &= \frac{-\ln(1.10)}{\ln(1.08) - \ln(1.10)} \end{align*} $$
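A quick numerical check of the final expression, just to confirm no sign slipped along the way:

```python
from math import log, isclose

x = -log(1.10) / (log(1.08) - log(1.10))
print(x)                                        # ≈ 5.19
assert isclose(1.08 ** x, 1.10 ** (x - 1), rel_tol=1e-9)
```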
{ "language": "en", "url": "https://math.stackexchange.com/questions/46398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I learn de casteljau algorithm? (from calculus) I'm a high school graduate who is currently waiting for college. Meanwhile, I'm trying to do a little project by myself. (Computer stuff) And yesterday, I found that I needed to deal with something called "De Casteljau algorithm" I know calculus (single-variable, but I'm thinking about learning multi-) and I don't want an empty 'memorize-without-understanding-or-proving' approach. Which path will take me there? (I'm hoping for answers like: "Calculus -> Differential Eq -> ...") p.s: I would also appreciate book/video lecture recommendations :) Thank you!
What exactly do you need the De Casteljau algorithm for? You might be able to use other methods and get good results. Anyway, I believe you have the required mathematical tools to work with and understand De Casteljau's algorithm. Here is a similar question: https://stackoverflow.com/questions/6271817/casteljaus-algorithm-practical-example Search on computer graphics websites and you will get more information with examples. Here is one such example: http://www.cc.gatech.edu/classes/AY2007/cs3451_fall/bezier.pdf
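Since the algorithm itself is just repeated linear interpolation of control points, a minimal sketch may be the quickest way in (the control points below are an arbitrary example):

```python
def de_casteljau(points, t):
    """Evaluate the Bézier curve with the given control points at parameter t
    by repeatedly interpolating between consecutive points."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * u + t * v for u, v in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# quadratic Bézier with control points (0,0), (1,2), (2,0)
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))   # (1.0, 1.0)
```

Each pass replaces the point list by the interpolations between neighbours; after $k$ passes a list of $n$ control points has shrunk to a single point, the curve value at $t$.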
{ "language": "en", "url": "https://math.stackexchange.com/questions/46446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Connected planar simple Graph: number of edges a function of the number of vertices Suppose that a connected planar simple graph with $e$ edges and $v$ vertices contains no simple circuit with length less than or equal to $4$ (i.e., its girth is at least $5$).$\;$ Show that $$\frac 53 v -\frac{10}{3} \geq e$$ or, equivalently, $$5(v-2) \geq 3e$$
Try using Euler's "polyhedral formula" - If G is a connected plane graph then V + F - E = 2.
{ "language": "en", "url": "https://math.stackexchange.com/questions/46491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
order relation and numbers I want to prove the following: If $a$ and $b$ are elements of an ordered field such that $a \leq b+c$ for every $c >0$, then $a \leq b$. So suppose that $a>b$. Then $a-b > 0$, i.e. $b-a < 0$. We know that $a \leq b+c$, so that $b+c-a \geq 0$, i.e. $b-a \geq -c$. How would I get the contradiction?
First, note that in ordered field we know that $2$ is positive and in particular, we may divide by $2$ while preserving positivity. Now, suppose $a$ and $b$ have your property, but $b\lt a$. Let $c=(a-b)/2$, which is positive, but also $b+c+c=a$ and so $b+c\lt a$, contradicting your hypothesis. Meanwhile, observe that the fact is not necessarily true in ordered rings, such as the integers $\mathbb{Z}$, since $5\leq 4+c$ for any integer $c\gt 0$, but $5\not\leq 4$. So we should expect to divide or otherwise use the field axioms at some point in any proof of your fact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/46548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving Stewart's theorem without trig Stewart's theorem states that in the triangle shown below, $$ b^2 m + c^2 n = a (d^2 + mn). $$ Is there any good way to prove this without using any trigonometry? Every proof I can find uses the Law of Cosines.
First of all, see the Wikipedia article Stewart's Theorem. I personally prefer the symmetric formulation of the theorem, because it is easy to remember without any mnemonic trickery: Let $A$, $B$, $C$ be points on a directed line $l$ in the Euclidean plane, and $P$ be a point anywhere in the plane. Then $$ PA^2\cdot BC + PB^2\cdot CA + PC^2\cdot AB + BC\cdot CA\cdot AB = 0~, $$ where the length of a line segment $XY$ on the line $l$ is positive if the segment has the same direction as the line, and is negative if it has the opposite direction. Proof. The proof is by straightforward calculation. Choose a Cartesian coordinate system with the directed line $l$ as the first axis, and let the points involved have the coordinates $A=(a,0)$, $B=(b,0)$, $C=(c,0)$, $P=(x,y)$. Stewart's identity is essentially just the polynomial identity $$ (1) \quad (x-a)^2(c-b) + (x-b)^2(a-c) + (x-c)^2(b-a) + (c-b)(a-c)(b-a) = 0~, $$ which is easily verified. (Go and really do the calculation: it's fun to observe everything cancel out.) Now add to the above identity the obvious identity $$ y^2(c-b) + y^2(a-c) + y^2(b-a) = 0~, $$ and you have the full-blown Stewart's theorem. Done. Note that the theorem is true also in partly degenerate situations, where some of the points $A$, $B$, $C$ coincide, or the point $P$ is on the line $l$. A geometric reasoning has to discuss these degenerate cases separately, while the computational proof smoothly subsumes them in the general case. Remark. The identity $(1)$ is a generic polynomial identity, which means that it is an identity in the ring $\mathbb{Z}[a,b,c,x]$ of polynomials in (formal) variables $a$, $b$, $c$, $x$ with integer coefficients, and therefore instantiates to an identity when the variables are replaced by elements of an arbitrary commutative ring. This observation tells us that Stewart's theorem is valid in the 'Euclidean plane geometry' over any commutative ring, in particular over any field.
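The generic identity $(1)$ is also the kind of thing a computer algebra system verifies instantly, if you'd rather not expand by hand:

```python
from sympy import symbols, expand

a, b, c, x = symbols("a b c x")
identity = ((x - a) ** 2 * (c - b) + (x - b) ** 2 * (a - c)
            + (x - c) ** 2 * (b - a) + (c - b) * (a - c) * (b - a))
assert expand(identity) == 0
```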
{ "language": "en", "url": "https://math.stackexchange.com/questions/46616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 1 }
Explaining why a function is well-defined How can I explain that a function is well-defined, if it's defined recursively by specifying $f(1)$, and a rule for finding $f(n)$ from $f(n-1)$? My reasoning: If the function for $f(n)$ can be derived from $f(n-1)$, then the function must give a unique value for each input, which is part of what being well-defined is. And since I have $f(1)$, then I can prove any other function starting with that one, so it IS well-defined.
"Well-defined" is a somewhat fuzzy term. But here, it seems that you want to ensure that a function that is defined at $1$ and then you have a recursive definition explaining how to obtain $f(n)$ from $f(n-1)$ does in fact lead to a function whose domain is all the natural numbers. The answer is that this follows from the Recursion Theorem. Recursion Theorem. Let $X$ be a set, let $a\in X$, and let $g\colon X\to X$ be a function. Then there exists a unique function $u\colon\mathbb{N}\to X$ such that $u(1)=a$ and $u(n+1) = g(u(n))$ for all $n$. So here, $X$ is the codomain of $f$; $a$ is the value of $f(1)$; and $g$ is the function that corresponds to your rule for finding $f(n)$ from $f(n-1)$. The Recursion Theorem ensures the existence of a function whose domain is all of $\mathbb{N}$ that has the property you want. The proof of the recursion theorem is by induction. What follows is the argument in Halmos's Naive Set Theory, pages 48-49. Remember that we can consider a function $\mathbb{N}\to X$ as a set $u$ of ordered pairs $(n,x)$, where for each $n\in \mathbb{N}$ there is an $x$ in with $(n,x)\in u$; and if $(n,x)$ and $(n,y)$ are both in $u$, then $x=y$. Consider the collection of all relations $A$ between $\mathbb{N}$ and $X$ that contain $(1,a)$, and such that if $(n,x)\in A$, then $(n+1,g(x))\in A$. The collection is nonempty (the total relation satisfies the properties), so we may consider the intersection of all sets in the collection. Call this intersection $u$; we need to show that $u$ is a function defined on all natural numbers. Let $S$ be the set of all natural numbers $n$ for which there is exactly one element $x$ of $X$ such that $(n,x)\in u$. We show $S=\mathbb{N}$ by induction. If $1\notin S$, then there exists $b\neq a$ such that $(1,b)\in u$. 
But then $u-\{(1,b)\}$ still contains $(1,a)$, and if it contains $(n,x)$ then it contains $(n+1,g(x))$; so $u-\{(1,b)\}$ is one of the relations in our collection, which is impossible (since it is properly contained in the intersection of all such relations). So $1\in S$. Now suppose that $n\in S$; Then there is a unique $x\in X$ such that $(n,x)\in u$. By the properties of $u$, $(n+1,g(x))\in u$. If $n+1\notin S$, then there exists $y\in X$, $y\neq g(x)$, such that $(n+1,y)\in u$. Then consider $u-\{(n+1,y)\}$ and derive a similar contradiction. So $n+1\in S$. By induction $S=\mathbb{N}$, so $u$ is a function from $\mathbb{N}$ to $X$, as desired.
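The computational content of the Recursion Theorem is just iteration: given $a$ and $g$, the unique $u$ can be produced mechanically. A small illustration (my own arbitrary choice of $a$ and $g$):

```python
def recursively_defined(a, g):
    """Return the unique u with u(1) = a and u(n+1) = g(u(n))."""
    def u(n):
        val = a
        for _ in range(n - 1):
            val = g(val)
        return val
    return u

u = recursively_defined(1, lambda v: 2 * v)   # u(1) = 1, u(n+1) = 2 u(n)
print([u(n) for n in range(1, 6)])            # [1, 2, 4, 8, 16]
```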
{ "language": "en", "url": "https://math.stackexchange.com/questions/46756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Permutation/Combinations in bit Strings I have a bit string with 10 letters, which can be {a, b, c}. How many bit strings can be made that have exactly 3 a's, or exactly 4 b's? I thought that it would be C(7,2) + C(6,2), but that's wrong (the answer is 24,600).
First note that Number of ways of event $A$ or event $B = $ (Number of ways of event $A$) + (Number of ways of event $B$) - (Number of ways of event $A$ and $B$ occurring simultaneously) In our problem, event $A$ is the number of strings containing exactly $3$ $a$'s and event $B$ is the number of strings containing exactly $4$ $b$'s We are interested in the number of strings containing exactly $3$ $a$'s and in the number of strings containing exactly $4$ $b$'s. Number of strings containing exactly $3$ $a$'s is obtained as follows. Of the $10$ letters in our string, if we want exactly $3$ $a$'s, each of the remaining $7$ has $2$ choices, namely, it should be either $b$ or $c$ and hence we have $2^7$ choices for the remaining $7$. Once we have this, we can now arrange them in $\frac{10!}{3! \times 7!}$ ways. Hence, number of ways of this event is $\frac{10!}{3! \times 7!} \times 2^7$. Similarly, number of strings containing exactly $4$ $b$'s is obtained as follows. Of the $10$ letters in our string, if we want exactly $4$ $b$'s, each of the remaining $6$ has $2$ choices, namely, it should be either $a$ or $c$ and hence we have $2^6$ choices for the remaining $6$. Once we have this, we can now arrange them in $\frac{10!}{4! \times 6!}$ ways. Hence, number of ways of this event is $\frac{10!}{4! \times 6!} \times 2^6$. Number of strings containing exactly $3$ $a$'s and $4$ $b$'s is $\frac{10!}{3! 4! 3 !}$. Hence, the total number of strings containing exactly $3$ $a$'s or exactly $4$ $b$'s is $$\frac{10!}{3! \times 7!} \times 2^7 + \frac{10!}{4! \times 6!} \times 2^6 - \frac{10!}{3! 4! 3 !} = 24,600$$
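With only $3^{10} = 59049$ strings, the inclusion–exclusion count can also be confirmed by brute force:

```python
from itertools import product
from math import comb, factorial

brute = sum(1 for s in product("abc", repeat=10)
            if s.count("a") == 3 or s.count("b") == 4)

formula = (comb(10, 3) * 2**7 + comb(10, 4) * 2**6
           - factorial(10) // (factorial(3) * factorial(4) * factorial(3)))
print(brute, formula)   # 24600 24600
```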
{ "language": "en", "url": "https://math.stackexchange.com/questions/46808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Reference for Algebraic Geometry I tried to learn Algebraic Geometry through some texts, but by the time Commutative Algebra came up, I left the subject; many books give definitions and theorems in Commutative Algebra, but do not explain why it is needed. Can one suggest a good reference to learn this subject geometrically, which would also give ways to translate geometric ideas in an algebraic manner, possibly through examples? Particularly, I am interested in differentials on algebraic curves, the Riemann-Roch theorem, various definitions of genus and their equivalences, and mainly groups related to complex algebraic curves such as the group of automorphisms, the monodromy group, etc.
Fulton's Algebraic Curves: An Introduction to Algebraic Geometry which is freely available seems to fit your description. I haven't started reading it yet, but I'm planning to do so shortly: I'm in a situation similar to yours.
{ "language": "en", "url": "https://math.stackexchange.com/questions/46850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 6, "answer_id": 4 }
computing Moore-Penrose pseudoinverse when SVD computation does not converge I am writing a routine to return the Moore-Penrose inverse of a rectangular matrix. Currently am computing the Moore-Penrose inverse using SVD, i.e., if the SVD is given by $A = \sum_{i=1}^r \lambda_i u_i v_i^T$, with $\lambda_i > 0 $ then $A^{+} = \sum_{i=1}^{r} \frac{1}{\lambda_i} v_i u_i^T$. However, if SVD computation does not converge, I will have to throw an exception, I would prefer to compute the pseudo-inverse using some non-iterative method. I have come across the formula that $A^{+} = H(H^2)^-A^T$, where $H=A^TA$ and $(H^2)^-$ is any generalized inverse of $H^2$, and I can compute a generalized inverse using elementary row operations, this does provide a way. However, I am not fond of this method because $H^2$'s condition number is $A$'s condition number raised to the fourth power. Do I have any other alternative?
You can compute a complete orthogonal decomposition using two QR decompositions. See http://www.netlib.org/lapack/lug/node43.html
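For the common case where $A$ has full column rank, even a single QR factorization avoids the SVD entirely: from $A = QR$ one gets $A^{+} = R^{-1}Q^{T}$. A numpy sketch (the rank-deficient case needs the column-pivoted, two-QR complete orthogonal decomposition described at the link above):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))     # full column rank with probability 1

Q, R = np.linalg.qr(A)              # reduced QR: Q is 6x3, R is 3x3
A_pinv = np.linalg.solve(R, Q.T)    # R^{-1} Q^T, computed without an SVD

assert np.allclose(A_pinv, np.linalg.pinv(A))
assert np.allclose(A @ A_pinv @ A, A)   # one of the Moore-Penrose conditions
```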
{ "language": "en", "url": "https://math.stackexchange.com/questions/46897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
find area of darkened figure We know that the length of the diameter $AB=6$ and $CA=2\sqrt{3}$; we should find the area of the darkened figure. I have tried the following: since $AB$ is a diameter, we can say $ACB$ is a right triangle, so we can calculate the area of $ABC$; but my question is how to calculate the area of this triangle inside the circle? Please help me.
From the metric data you can compute the angle at $B$: the triangle is right-angled at $C$ (since $AB$ is a diameter), so $\sin B = CA/AB = 2\sqrt 3/6 = \sqrt 3/3$. Let $D$ be the intersection of $BC$ with the circle. Since the angle at $B$ is known and $OB=OD$, you can easily compute the area of the circular sector $AOD$ and of the triangle $BOD$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/46942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$F[a] \subseteq F(a)?$ I think this is probably an easy question, but I'd just like to check that I'm looking at it the right way. Let $F$ be a field, and let $f(x) \in F[x]$ have a zero $a$ in some extension field $E$ of $F$. Define $F[a] = \left\{ f(a)\ |\ f(x) \in F[x] \right\}$. Then $F[a]\subseteq F(a)$. The way I see this is that $F(a)$ contains all elements of the form $c_0 + c_1a + c_2a^2 + \cdots + c_na^n + \cdots$ ($c_i \in F$), hence it contains $F[a]$. Is that the "obvious" reason $F[a]$ is in $F(a)$? And by the way, is $F[a]$ standard notation for the set just defined?
Yes, these are standard notations: given some fields $E\supseteq F$ and some $a\in E$, $$F(a)=\left\{\left.\frac{f(a)}{g(a)}\right\vert\,\,\, f,g\in F[x], g(a)\neq0\right\}$$ $$F[a]=\left\{\left.f(a)\,\,\right\vert\,\,\, f\in F[x]\right\}$$ Because any $f(a)=\frac{f(a)}{g(a)}$ where $g=1$, we have that $F[a]\subset F(a)$ (this is just a restatement of your argument).
{ "language": "en", "url": "https://math.stackexchange.com/questions/47045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Is there a name for the matrix $X(X^tX)^{-1}X^{t}$? In my work, I have repeatedly stumbled across the matrix (with a generic matrix $X$ of dimensions $m\times n$ with $m>n$ given) $\Lambda=X(X^tX)^{-1}X^{t}$. It can be characterized by the following: (1) If $v$ is in the span of the column vectors of $X$, then $\Lambda v=v$. (2) If $v$ is orthogonal to the span of the column vectors of $X$, then $\Lambda v = 0$. (we assume that $X$ has full rank). I find this matrix neat, but for my work (in statistics) I need more intuition behind it. What does it mean in a probability context? We are deriving properties of linear regressions, where each row in $X$ is an observation. Is this matrix known, and if so in what context (statistics would be optimal but if it is a celebrated operation in differential geometry, I'd be curious to hear as well)?
It is also called hat matrix. The idea is that this matrix "gives the hat": transforms the dependent variable to its prediction in linear regression. The linear regression model is the following: $$y=X\beta+\varepsilon.$$ The least squares estimate of the $\beta$ is defined as $$\hat\beta=(X^TX)^{-1}X^Ty.$$ The prediction of the model is then: $$\hat{y}=X\hat\beta=X(X^TX)^{-1}X^Ty$$ So we get that matrix $X(X^TX)^{-1}X^T$ transforms $y$ to $\hat{y}$, hence the hat matrix.
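A quick numerical illustration (NumPy; the data here are made up) that this matrix really is the orthogonal projection onto the column space of $X$, and that applying it to $y$ reproduces the least-squares fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 3))
y = rng.standard_normal(10)

H = X @ np.linalg.solve(X.T @ X, X.T)   # X (X^T X)^{-1} X^T

# projection properties
assert np.allclose(H @ H, H)            # idempotent
assert np.allclose(H, H.T)              # symmetric
assert np.allclose(np.trace(H), 3)      # trace = number of regressors

# H y equals the least-squares fitted values X beta_hat
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(H @ y, X @ beta_hat)
```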
{ "language": "en", "url": "https://math.stackexchange.com/questions/47093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Question in solving $\phi(t)=\phi(2t)+\phi(2t-1)$, $\phi\ne0$ Actually one can resort to the two-scale equation in multiresolution analysis. Performing the Fourier transform on both sides of $\phi(t)=\phi(2t)+\phi(2t-1)$, it turns out that $$\hat\phi(\omega)=\frac{1}{2}\hat\phi(\frac{1}{2}\omega)+\frac{1}{2}\hat\phi(\frac{1}{2}\omega)e^{-\frac{i\omega}{2}}$$ that is, $$\frac{\hat{\phi}(2\omega)}{\hat{\phi}(\omega)}=\frac{1+e^{-i\omega}}{2}=m(\omega)$$ Therefore $$\hat\phi(\omega)=\hat\phi(0)\prod_{k=1}^{\infty}m\left(\frac{\omega}{2^k}\right)=\hat\phi(0)\prod_{k=1}^{\infty}\frac{1+e^{-i2^{-k}\omega}}{2}$$ My question is, how to calculate this limit? Thank you~
If $\omega=0$, then the product is $1$. Let $z=e^{-i\omega}$. Observe that $$ \prod_{k=1}^{K}\frac{1+z^{2^{-k}}}{2}=\frac{1}{2^K}\frac{1-z}{1-z^{2^{-K}}}.$$ Since $2^K\left(1-z^{2^{-K}}\right)=\dfrac{1-e^{-i\omega 2^{-K}}}{2^{-K}}\to i\omega$ as $K\to\infty$ (it is a difference quotient of $t\mapsto 1-e^{-i\omega t}$ at $t=0$), we get $$\prod_{k=1}^{\infty}\frac{1+e^{-i2^{-k}\omega}}{2}=\frac{1-e^{-i\omega}}{i\omega}.$$ In the limit as $\omega\to 0$, we also get $1$.
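As a sanity check, the partial products can be compared numerically against the closed form (any nonzero $\omega$ will do; this is just a rough verification script):

```python
import numpy as np

w = 2.7                      # an arbitrary nonzero frequency
prod = 1.0 + 0.0j
for k in range(1, 60):       # partial product up to k = 59
    prod *= (1 + np.exp(-1j * w / 2**k)) / 2

closed_form = (1 - np.exp(-1j * w)) / (1j * w)
assert abs(prod - closed_form) < 1e-12
```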
{ "language": "en", "url": "https://math.stackexchange.com/questions/47141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Exponents in the denominator? I'm having trouble understanding exponents in the denominator. For example: I have the expression: $\displaystyle 1 - \frac{1}{3^n} + \frac{2}{3^{n+1}}$. I know that this simplifies to $\displaystyle 1 - \frac{1}{3^{n+1}}$, but how/why? Can someone please list the steps? My understanding is that the exponent $(n+1)$ in the expression $x^{n+1}$ means that $x^{n+1} = x x^n$, but how does this fit with the above problem?
You should get a common denominator for the last two terms by multiplying the second term by $3/3$. You have $$ \begin{align*} 1-\frac{1}{3^n}+\frac{2}{3^{n+1}} &= 1-\frac{3}{3^{n+1}}+\frac{2}{3^{n+1}}\\ &= 1-\frac{1}{3^{n+1}} \end{align*} $$
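If it helps, you can sanity-check the identity numerically for a few values of $n$ (plain Python, nothing assumed beyond the formula itself):

```python
for n in range(0, 8):
    lhs = 1 - 1 / 3**n + 2 / 3**(n + 1)
    rhs = 1 - 1 / 3**(n + 1)
    assert abs(lhs - rhs) < 1e-12
```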
{ "language": "en", "url": "https://math.stackexchange.com/questions/47180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
totally disconnected orbit-stabilizer theorem So I'm aware that the orbit-stabilizer theorem does not hold for arbitrary spaces with a transitive action by a topological group, but I wonder if it works in the following situation. Let $G$ be a totally disconnected, locally compact Hausdorff topological group and $X$ a topological space satisfying the same conditions (I would call such things $\ell$-groups and $\ell$-spaces respectively). If $G$ acts transitively on $X$ and $x \in X$ is any point there is an obvious $G$-equivariant continuous bijection $G/G_x \to X$, where $G_x$ denotes the stabilizer of $x$ in $G$. Can we conclude, in this situation, that $G/G_x \to X$ is a homeomorphism? If not, what further conditions do we need to impose? Notice that this is true if $G/G_x$ is compact, since a continuous bijection of compact Hausdorff spaces is a homeomorphism.
This is true for G a locally compact, Hausdorff topological group, and X locally compact, Hausdorff, with a countable local basis. This "apocryphal lemma" appears many places, but is easily misplaced. I reproduced the usual argument in an appendix in the "Solenoids" class notes on my modular forms course page, here .
{ "language": "en", "url": "https://math.stackexchange.com/questions/47239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
"Resultant" of three polynomials The resultant $\operatorname{Res}(f,g)$ of two polynomials over a field $k$ is a polynomial in the coefficients of $f$ and $g$ which enjoys the property of being nonzero if and only if $f$ and $g$ have no common root in an algebraic closure $\overline{k}$ of $k$. Does there exist a similar construction for three polynomials? There seems to be none. I would like to suggest the following conjecture: Conjecture: there does not exist a function $\operatorname{Res}(f,g,h)$ of three polynomials $f,g,h \in k[x]$, which is a polynomial in the coefficients of $f,g,h$, having the property of being zero if and only if the polynomials $f,g,h$ have a common root in an algebraic closure $\overline{k}$ of $k$.
There is a resultant for three polynomials in two variables. It's considerably trickier than the resultant for two polynomials in one variable.
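For the two-polynomial, one-variable case, the defining property is easy to check with a computer algebra system; here is a small SymPy illustration (the polynomials are made up):

```python
from sympy import symbols, resultant

x = symbols('x')
f = x**2 - 3*x + 2        # roots 1 and 2
g = x**2 - 1              # roots 1 and -1: shares the root 1 with f

assert resultant(f, g, x) == 0       # common root  =>  resultant vanishes
assert resultant(f, g + 1, x) != 0   # g + 1 = x**2 has no root in common with f
```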
{ "language": "en", "url": "https://math.stackexchange.com/questions/47306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Are "$n$ by $n$ matrices with rank $k$" an affine algebraic variety? Identify the set of all complex $n$ by $n$ matrices with $\mathbb{C}^{n^2}$. We say a subset $S \subset \mathbb{C}^{n^2}$ is an affine algebraic variety if $S$ is the common zero set of a collection (possibly infinite or uncountable) of polynomials in $n^2$ complex variables. Some examples that satisfy this definition are "matrices with determinant 1" and "matrices with rank at most $k$". A non-example is "matrices with rank $n$", because it is the complement of the affine algebraic variety "matrices with rank at most $n - 1$"; any affine algebraic variety is necessarily closed in the Euclidean topology, so its complement (which is open in the Euclidean topology) cannot be an affine algebraic variety. However, I'm having trouble formulating a statement about "matrices with rank $k$" for $1 < k < n$. Is the set of complex $n$ by $n$ matrices of rank $k$, where $1 < k < n$, an affine algebraic variety?
At the end of his answer, Pete Clark asks whether the quasi-affine variety of $n \times n$ matrices of rank $k$ might be affine. The answer is no, except when $k=n$. This is a special case of a much more general fact. Let $X$ be an irreducible affine variety, and let $Y$ be a closed subvariety. If $X \setminus Y$ is affine, then $Y$ is codimension $1$ in $X$. Proof Sketch: (This argument is edited in response to points raised to me by Steven Sam.) Let $r$ be the codimension of some component of $Y$. Then the local cohomology group $H^r_Y(X, \mathcal{O})$ is nonzero. I earlier said that this was by the inequality between depth and codimension, but Steven points out to me that I am misusing this; that inequality implies that $H^s_Y(X, \mathcal{O})$ is nonzero for some $s \leq r$. I think the inequality I meant to cite is Grothendieck's nonvanishing theorem. Look at Hartshorne Exercise III.2.2.3.e . There is an exact sequence which, in part, is $$H^r(X \setminus Y, \mathcal{O}) \to H^{r+1}_Y(X, \mathcal{O}) \to H^{r+1}(X, \mathcal{O}).$$ Since $X$ is affine, the right hand side is zero, and we just claimed that the middle term is nonzero. So $H^r(X \setminus Y, \mathcal{O})$ is nonzero. If $r>1$, this implies that $X \setminus Y$ is non-affine. QED The rank $k$ matrices have dimension $n^2 - (n-k)^2$. One can check that this is only codimension $1$ for $(k, k-1) = (n,n-1)$. By the way, it is NOT true that $X$ affine, $Y$ codimension $1$ implies $X \setminus Y$ affine. The simplest counter-example I know is $X = \mathrm{Spec}\ k[w,x,y,z]/(wz-xy)$ and $Y = \{ w=x=0 \}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/47356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Finding the Laurent series While studying I got this exercise and would like some pointers on where I'm going wrong. Find the Laurent series at $z_0=2i$ $$f(z) = \frac{1+z}{z^2+4}+e^z$$ I've tried the following: $$f(z)=(1+z)\frac{1}{(z+2i)(z-2i)}+e^z$$ and the series for $$\frac{1}{(z+2i)(z-2i)} = \frac{1}{z-2i}\frac{1}{4i}\frac{1}{1+(\frac{z-2i}{4i})} = \displaystyle\sum\limits_{k=0}^{+\infty} \frac{(-1)^k}{(4i)^{k+1}}(z-2i)^{k-1}$$ After this I'm stumped trying to put everything as a power of $z-2i$: $$f(z) = (1+z)\displaystyle\sum\limits_{k=0}^{+\infty} \frac{(-1)^k}{(4i)^{k+1}}(z-2i)^{k-1}+\displaystyle\sum\limits_{k=0}^{+\infty}\frac{z^k}{k!}$$ I have several questions regarding this: 1 - should I try to put $\displaystyle \frac{1+z}{z^2+4}$ as a sum of simple fractions? 2 - is there even a Laurent series for $e^z$ around $z_0=b$, since $e^z$ doesn't have any singularities? 3 - what should I do with that $(1+z)$?
There are a couple of different ways that you can think about this. First of all, I suspect you already immediately know how to find the expansion of $e^z$ around whatever point you want. [as an aside, you ask if there exists a Laurent series for $e^z$ around a point even though it doesn't have any singularities - it does. I suspect you think that Laurent expansions must include terms of negative degree - but that's not the case]. So I'll not worry about that term, and I will instead consider only the $\dfrac{1+z}{z^2 + 4}$ term. But I see that you've already gone through the derivation of the series for $\dfrac{1}{(z+2i)(z-2i)}$ around $z - 2i$. That's the hard part. When you look at the $1+z$ term, you need to extract a $z-2i$ from it. So - add and subtract $2i$ (as Jyrki said). Finally, since you've noticed that $e^z$ doesn't have any singularities, there is no principal part. So there's only that Taylor-series style expansion...
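If you want to check your principal part at the end, note that only the rational term contributes a $(z-2i)^{-1}$ coefficient, and its residue can be verified with SymPy (a rough check, not part of the original answer):

```python
from sympy import symbols, I, exp, residue, simplify, Rational

z = symbols('z')
f = (1 + z) / (z**2 + 4) + exp(z)

r = residue(f, z, 2*I)                 # coefficient of (z - 2i)^(-1)
# by hand: (1 + 2i)/(4i) = 1/2 - i/4, and exp(z) contributes nothing
assert simplify(r - (Rational(1, 2) - I/4)) == 0
```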
{ "language": "en", "url": "https://math.stackexchange.com/questions/47418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Submanifold concerning the map of a function Let $f: \mathbb{R}^+ \times \mathbb{R} \rightarrow \mathbb{R}$ with $f \in C^1$ such that the Jacobian of $f$ has full rank ($1$) for all $z \in \mathbb{R}^+ \times \mathbb{R}$. Then $M=\{z=(z_1,z_2,z_3) \in \mathbb{R}^3|(z_1,z_2)\neq (0,0),f(\sqrt{z_1^2+z_2^2},z_3)=0\}$ is a 2-dimensional submanifold of $\mathbb{R}^3$. I am stuck on this task; I have some approach but I couldn't quite finish my proof (and I am especially not sure if I made any mistakes). My idea was to write $M$ as the zero set of a function $g=f\circ h$ with a Jacobian of maximal rank (from which it follows that $M$ is a submanifold). Let us take $h: \mathbb{R}^2\backslash\{0\} \times \mathbb{R} \rightarrow \mathbb{R}^+ \times \mathbb{R}$ with $h(z_1,z_2,z_3)=(\sqrt{z_1^2+z_2^2},z_3)$. We get that $h$ is continuous because both components are continuous, and therefore the composition $g=f\circ h$ is continuous. It follows that $$J(f\circ h)=\left( \begin{array}{ccc} \frac{z_1 \partial_1f\left(\sqrt{z_1^2+z_2^2},z_3\right)} {\sqrt{z_1^2+z_2^2}} & \frac{z_2 \partial_1f\left(\sqrt{z_1^2+z_2^2},z_3\right)} {\sqrt{z_1^2+z_2^2}} & \partial_2f\left(\sqrt{z_1^2+z_2^2},z_3\right) \end{array} \right)$$ Now I think I am done if I can show that at least one entry of that matrix is always unequal to zero. Thank you for your help.
You can do this using regular values. I think the necessary machinery comes from the Regular Value Theorem, but the idea is to show that $0$ is a regular value, that is, that $f^{-1}(0)$ contains no critical points. Seeing as we're told that the Jacobian has full rank (of 1), this must be true, and hence $M$ is a regular submanifold of codimension 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/47464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
"Counting Tricks": using combination to derive a general formula for $1^2 + 2^2 + \cdots + n^2$ I was reading an online article which confused me with the following. To find out $S(n)$, where $S(n) = 1^2 + 2^2 + \cdots + n^2$, one can first write out the first few terms: 0 1 5 14 30 55 91 140 204 285 Then, get the differences between adjacent terms until they're all zeroes: 0 1 5 14 30 55 91 140 204 285 1 4 9 16 25 36 49 64 81 3 5 7 9 11 13 15 17 2 2 2 2 2 2 2 all zeroes this row Then it says that therefore we can use the following method to achieve $S(n)$: $S(n) = 0 {n\choose 0} + 1 {n\choose 1} + 3 {n\choose 2} + 2 {n\choose 3}$. I don't understand the underlying mechanism. Someone cares to explain?
The underlying mechanism is known as the calculus of finite differences. Just like one can reconstruct an $n$th degree polynomial $f$ from $f(a),f'(a),\ldots,f^{(n)}(a)$, at any point $a$, we can also reconstruct it from $f(a),\Delta_h^1[f](a),\ldots,\Delta_h^n[f](a)$, where $$\Delta^n_h[f](x) = \sum_{i = 0}^{n} (-1)^i \binom{n}{i} f(x + (n - i) h)$$ is the $n$th forward difference of $f$ at $x$ (we usually take $h=1$). It is a common math parlor trick to ask someone to choose a low degree polynomial, ask them its values at $0,1,\ldots,\text{degree}$, and to their amazement tell them their polynomial. We only need to ask them about $\text{degree}+1$ points because we know that the $(\text{degree}+1)$th and higher forward differences of a polynomial will vanish (just like with normal derivatives). If one suspects that a function is a polynomial, but is unsure of the degree, one can ask for more and more values, and the method of finite differences will return polynomials of higher and higher degree that will approximate the function about $a$ (just like with Taylor series).
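Here is the mechanism made concrete for $S(n)$: build the difference table, read off the leading entries $0,1,3,2$, and check that Newton's forward-difference formula reproduces the sum (plain Python):

```python
from math import comb

def S(n):
    """1^2 + 2^2 + ... + n^2 by direct summation."""
    return sum(k * k for k in range(1, n + 1))

def leading_differences(values):
    """Leading entries of the forward-difference table of f(0), f(1), ..."""
    coeffs, row = [], list(values)
    while any(row):
        coeffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return coeffs

assert leading_differences([S(n) for n in range(10)]) == [0, 1, 3, 2]

# Newton's forward-difference formula with those leading entries
for n in range(20):
    assert S(n) == 0*comb(n, 0) + 1*comb(n, 1) + 3*comb(n, 2) + 2*comb(n, 3)
```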
{ "language": "en", "url": "https://math.stackexchange.com/questions/47509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
Is there a continuous bijection from $\mathbb{R}$ to $\mathbb{R}^2$ I need a hint. The problem is: is there a continuous bijection from $\mathbb{R}$ to $\mathbb{R}^2$ I'm pretty sure that there aren't any, but so far I couldn't find the proof. My best idea so far is to consider $f' = f|_{\mathbb{R}-\{*\}}: \mathbb{R} - \{*\} \to \mathbb{R}^2 - \{f(*)\}$, and then examine the de Rham cohomologies: $$H^1_{dR}(\mathbb{R}^2 - \{f(*)\}) = \mathbb{R} \ \xrightarrow{H^1_{dR}(f')} \ 0 = H^1_{dR}(\mathbb{R} - \{*\}),$$ but so far I failed to derive a contradiction here. Am I on the right path? Is it possible to complete the proof in this way e.g. by proving that $H^1_{dR}(f')$ must be a mono? Or is there another approach that I missed?
Here is a hint: What simple space is $\mathbb{R}^2 - \{ f(\ast) \}$ homotopic to? Edit: Just a small edit, to hopefully bump this up. I had read this as a homeomorphism, in which case it is easy. However we only have a continuous bijection from $\mathbb{R} \to \mathbb{R}^2$. There may be a way to argue from the fact that $\mathbb{R} - \{ \ast \}$ is disconnected and $\mathbb{R^2} - \{ f(\ast) \}$ is connected. This will work immediately to show there is no continuous bijection from $\mathbb{R}^2 \to \mathbb{R}$ as the continuous image of a connected set is connected. I am not sure about getting something out of the other direction however (perhaps the 'simplest' is Zarrax's explanation). Hopefully the experts will have something to add!
{ "language": "en", "url": "https://math.stackexchange.com/questions/47547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 3, "answer_id": 0 }
Binomial Distribution, finding $P$ of at least $x$ successes When calculating the probability of at least $x$ successes, one uses at most $x-1$ instead and then takes $1- P(\text{at most } x-1)$. This works, and I understand it: we use the complement to calculate it, because the calculator supports it. But what I do not understand is the following. When calculating a combination of these, $P(\text{max}\,\, x\,\,\, \text{and}\,\,\, \text{min}\,\, y)$, we can just forget about the $1 - P(\text{max}\,\, (x-1))$ part and use $\text{max}\,(x-1)$ directly. For example: $$P(\text{at least 150 sixes and at most 180 sixes}) = P(\text{max}\,\, 180 \,\,\text{sixes}) - P(\text{max}\,\,149\,\,\text{sixes}).$$ And then we don't have to do the $1-\dots$ part. Why is this?
In your formula $1 - P(\text{max } x)$, the $1$ represents $P(\text{max} = \text{theoretical maximum})$, which is certain. If your max has to be some $M < \text{theoretical maximum}$, it becomes: $P(\text{max } M)-P(\text{max } x)$.
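A concrete check of the identity $P(150 \le X \le 180) = P(X \le 180) - P(X \le 149)$, using a hypothetical experiment of 900 die rolls ($n=900$ is a made-up number, not from the question):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p = 900, 1 / 6   # hypothetical: 900 rolls, counting sixes

direct   = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(150, 181))
via_cdfs = binom_cdf(180, n, p) - binom_cdf(149, n, p)
assert abs(direct - via_cdfs) < 1e-10
```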
{ "language": "en", "url": "https://math.stackexchange.com/questions/47653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Fractions with radicals in the denominator I'm working my way through the videos on the Khan Academy, and have a hit a road block. I can't understand why the following is true: $$\frac{6}{\quad\frac{6\sqrt{85}}{85}\quad} = \sqrt{85}$$
Are you sure you entered that right? It looks like you have $$\frac{6}{\;\frac{6 \sqrt{85}}{85}\;}$$ which would be equal to $$6\cdot\frac{85}{6\sqrt{85}} = \frac{85}{\sqrt{85}} = \sqrt{85}.$$ Note: this answer was written before the original question was edited. The original question was asking why (6)/(6√85/85) was equal to 6.
{ "language": "en", "url": "https://math.stackexchange.com/questions/47748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Median of a set of integers The median of a set containing an odd number of integers is the middle element after sorting. What about the case of an even number of terms? Some places mention the average of the two middle values, but I have heard that the precise definition allows any number from the left middle number to the right middle number. For example, for {1 2 3 4}, any number between and including 2 and 3 is valid.
It's really a matter of custom-convention-convenience. "If there is an even number of observations, then there is no single middle value; the median is then usually defined to be the mean of the two middle values" (wikipedia). No definite argument (that I'm aware of) dictates that the average is to be preferred, but in many scenarios that seems a reasonable thing to do. Especially if we can assume that the underlying distribution is approximately symmetric around the median. On one side: to see that that definition cannot be the last word, consider this example: for a positive random variable $X$ the ("true") median $m_X$ is insensitive to a change $Y=X^2$, in the sense that $m_Y=m_X^2$. Imagine that $X$ is the length of some random squares, and $Y$ their area. If a statistician is given a sample $\mathbf{X}=\{98,99,101,102\}$, he will estimate that the median of $X$ is 100, so that his median area is 10000. If another statistician is given the same data, expressed as areas, $\mathbf{Y}=\{9604,9801,10201,10404\}$, he will get a median area of 10001 (if they had opted for a geometric mean instead of an arithmetic one, they would have agreed). On the other side, to see that the recipe "take any number between and including the two middle values" can also be justified, recall that the (probabilistic) median is the value that minimizes the expected value of the absolute prediction error $|x - a|$. Analogously, we'd expect that to find the sample median should be equivalent to finding the value that minimizes the average distance to all points, i.e. minimizing $\sum |x_i-a|$. And this indeed coincides with the traditional definition of the sample median for odd sample sizes; and, for even sizes, it's easily seen that any number "between and including the two middle values" attains this minimum, and hence is a reasonable sample median.
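The minimization characterization in the last paragraph is easy to see numerically; with the sample {1, 2, 3, 4}, every $a$ in $[2,3]$ attains the same minimal total absolute deviation (NumPy sketch):

```python
import numpy as np

data = np.array([1, 2, 3, 4])

def total_abs_dev(a):
    return np.abs(data - a).sum()

best = total_abs_dev(np.median(data))     # convention: mean of middle two = 2.5
for a in np.linspace(2, 3, 11):           # the whole interval [2, 3] is optimal
    assert np.isclose(total_abs_dev(a), best)
assert total_abs_dev(1.9) > best and total_abs_dev(3.1) > best
```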
{ "language": "en", "url": "https://math.stackexchange.com/questions/47800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Given $N$, count $\{(m,n) \mid 0\leq mI'm confused at exercise 4.49 on page 149 from the book "Concrete Mathematics: A Foundation for Computer Science": Let $R(N)$ be the number of pairs of integers $(m,n)$ such that $0\leq m < N$, $0\leq n<N$, and $m\perp n$. (a) Express $R(N)$ in terms of the $\Phi$ function. (b) Prove that $$R(N) = \displaystyle\sum_{d\geq 1}\left\lfloor\frac{N}{d}\right\rfloor^2 \mu(d)$$ * *$m\perp n$ means $m$ and $n$ are relatively prime *$\mu$ is the Möbius function *$\Phi(x)=\sum_{1\leq k\leq x}\phi(k)$ *$\phi$ is the totient function For question (a), my solution is $R(N) = 2 \cdot \Phi(N-1) + [N>1]$ (where $[\;\;]$ is the Iverson bracket, i.e. [True]=1, [False]=0) Clearly $R(1)$ has to be zero, because the only possibility of $(m,n)$ for testing is $(0,0)$, which doesn't qualify. This agrees with my answer. But here is the book's answer: Either $m<n$ ($\Phi(N−1)$ cases) or $m=n$ (one case) or $m>n$ ($\Phi(N−1)$ again). Hence $R(N) = 2\Phi(N−1) + 1$. $m=n$ is only counted when $m=n=1$, but how could that case appear when $N=1$? I thought the book assumed $R$ is only defined over $N≥2$. But their answer for question (b) relies on $R(N) = 2Φ(N−1) + 1$ and proves the proposition also for the case $N=1$. They actually prove $2Φ(N−1) + 1 = RHS$ for $N≥1$. And if my assumption about the $R(1)$ case is true, then the proposition in (b) cannot be valid for $N=1$, for $LHS=0$ and $RHS=1$. But the fact that it's invalid just for one value seems a little fishy to me. My question is, where am I confused? What is wrong in my understanding about the case $R(1)$? Thank you very much.
$R(1)=1$, because $m=n=1$ is a solution: they are coprime as the GCD is $1$. So the case $N=1$ works for (b), as both sides are $1$.
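A brute-force check of part (b) under the book's convention (so that $R(1)=1$), comparing the Mobius sum against a direct count of coprime pairs with $1\le m,n\le N$ (plain Python):

```python
from math import gcd

def mobius(d):
    result, m, p = 1, d, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if m > 1 else result

def R_brute(N):
    return sum(1 for m in range(1, N + 1)
                 for n in range(1, N + 1) if gcd(m, n) == 1)

for N in range(1, 30):
    mobius_sum = sum((N // d)**2 * mobius(d) for d in range(1, N + 1))
    assert mobius_sum == R_brute(N)
```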
{ "language": "en", "url": "https://math.stackexchange.com/questions/47986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Squares on a checkerboard How many squares of all sizes arise using an $n$-by-$n$ checkerboard? How many triangles of all sizes arise using a triangular grid with sides of length $n$ ?
Your answer for the squares is correct. For the triangles, you do something similar, with a twist. For the triangles in the original direction you have 1 big triangle, 3 triangles one size smaller, 6 another size smaller and you should be able to persuade yourself that these continue as the triangle numbers $1,3,6,10,15,21,\ldots$. For the triangles in the other direction, if you have an original triangle with an even length side you have 1 triangle with side half that of the original triangle, 6 triangles one size smaller, 15 another size smaller etc., while if you have an original triangle with an odd length side you have 3 triangles with side half that of the original triangle rounded down, 10 triangles one size smaller, 21 another size smaller etc. Adding all these up is not trivial, but you should get OEIS A002717 as your result.
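Adding up those two families does give a closed form; here is a quick check of the per-size counts $T(n-s+1)$ (point-up) and $T(n-2s+1)$ (point-down) from the argument above against OEIS A002717's formula $\lfloor n(n+2)(2n+1)/8\rfloor$:

```python
def T(k):
    """k-th triangular number (0 for k <= 0)."""
    return k * (k + 1) // 2 if k > 0 else 0

def triangles(n):
    up   = sum(T(n - s + 1)     for s in range(1, n + 1))       # point-up, size s
    down = sum(T(n - 2 * s + 1) for s in range(1, n // 2 + 1))  # point-down, size s
    return up + down

for n in range(1, 50):
    assert triangles(n) == n * (n + 2) * (2 * n + 1) // 8       # OEIS A002717
```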
{ "language": "en", "url": "https://math.stackexchange.com/questions/48047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$ I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals: $$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$ I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?
$\begin{aligned} & \hspace{0.5in} \begin{aligned}\displaystyle \sum_{1 \le k \le n}k^2 & = \sum_{1 \le k \le n}~\sum_{1 \le r \le k}k =\sum_{1 \le r \le n}~\sum_{r \le k \le n}k \\& = \sum_{1 \le r \le n}~\sum_{1 \le k \le n}k-\sum_{1 \le r \le n}~\sum_{1 \le k \le r-1}k \\& = n\sum_{1 \le k \le n}k-\frac{1}{2}\sum_{1 \le r \le n}r(r-1) \\& =\frac{1}{2}n^2(n+1)-\frac{1}{2}\sum_{1 \le r \le n}r^2+\frac{1}{2}\sum_{1 \le r \le n}r \\& =\frac{1}{2}n^2(n+1)-\frac{1}{2}\sum_{1 \le k \le n}k^2+\frac{1}{4}n(n+1) \end{aligned} \\& \begin{aligned}\implies\frac{3}{2}\sum_{1 \le k \le n}k^2 & = \frac{1}{2}n^2(n+1)+\frac{1}{4}n(n+1) \\& = \frac{1}{4}n(n+1)(2n+1) \end{aligned}\\& \implies \hspace{0.15in} \displaystyle \sum_{1 \le k \le n}k^2 = \frac{1}{6}n(n+1)(2n+1).\end{aligned}$
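A quick empirical confirmation of the closed form, if reassurance is wanted before working through the algebra (plain Python):

```python
for n in range(0, 200):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
```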
{ "language": "en", "url": "https://math.stackexchange.com/questions/48080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "145", "answer_count": 32, "answer_id": 7 }
How can I complexify the right hand side of this differential equation? I want to get a particular solution to the differential equation $$ y''+2y'+2y=2e^x \cos(x) $$ and therefore I would like to 'complexify' the right hand side. This means that I want to write the right hand side as $q(x)e^{\alpha x}$ with $q(x)$ a polynomial. How is this possible? The solution should be $(1/4)e^x(\sin(x)+\cos(x))$ but I cannot see that.
As $\cos x=\frac{e^{ix}+e^{-ix}}{2}$, $2e^x \cos x = e^{x+ix}+e^{x-ix}$. That is not a single term of the requested form, but it is a sum of two such terms (each with $q(x)=1$ and $\alpha = 1\pm i$), so you can find a particular solution for each exponential separately and add them. Is it close enough?
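And for reassurance that the quoted answer $(1/4)e^x(\sin x+\cos x)$ really is a particular solution, a SymPy check:

```python
from sympy import symbols, exp, cos, sin, Rational, diff, simplify

x = symbols('x')
y = Rational(1, 4) * exp(x) * (sin(x) + cos(x))   # claimed particular solution
lhs = diff(y, x, 2) + 2 * diff(y, x) + 2 * y
assert simplify(lhs - 2 * exp(x) * cos(x)) == 0
```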
{ "language": "en", "url": "https://math.stackexchange.com/questions/48099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Sufficient statistic for product of $\Gamma(x + a_i)$ Background Suppose that the national mint mints coins with bias $p_i \sim \rm{Beta}(A,B)$ for some unknown constants $A$, $B$. Given $n$ coins, you flip each coin a certain number of times. Coin $i$ comes up heads $a_i$ times and tails $b_i$ times. (The flips are clearly exchangeable, so we don't need the actual sequence of flips.) If I did the math right, I figured out that the likelihood of $A, B$ given the statistics $(a_i, b_i)$ is: $$L(A,B \mid a_i, b_i) = \frac{\prod_i\rm{Beta}(A + a_i, B + b_i)}{\rm{Beta}(A, B)^n}.$$ (Edit in 2012: This is the product of $n$ Pólya–Eggenberger urn schemes.) I would like to summarize the statistics $(a_i, b_i)$ even further, into a finite set of numbers if possible. Question Can $\prod_i\Gamma(A + a_i)$ be written using sufficient statistics calculated from the $a_i$? Example For example, $\prod_i\exp(A + a_i)$ can be written $\exp(nA + S)$ where $n, S$ are sufficient statistics calculated from the $a_i$.
I think I would agree with your expression for the likelihood, although this approach is not without problems (see below). Notice, that since $a_i$ and $b_i$ are integers, the likelihood is just a rational function in $A$ and $B$: $$ L(A,B \mid a,b) = \prod_{i=1}^n \left( \frac{ \prod\limits_{k=0}^{a_i-1} (A+k) \prod\limits_{m=0}^{b_i-1} (B+m) }{\prod\limits_{j=0}^{a_i+b_i-1} (A+B+j)}\right) $$ The maximum likelihood equations are easily seen to be $$ \begin{eqnarray} \sum_{i=1}^n \sum_{k=0}^{a_i-1} \frac{1}{A+k} &=& \sum_{i=1}^n \sum_{j=0}^{a_i+b_i-1} \frac{1}{A +B+j} \\ \sum_{i=1}^n \sum_{m=0}^{b_i-1} \frac{1}{B+m} &=& \sum_{i=1}^n \sum_{j=0}^{a_i+b_i-1} \frac{1}{A +B+j} \end{eqnarray} $$ Given that $A>0$ and $B>0$ these equations become polynomial equations in $A$ and $B$ with, hopefully, a unique positive root. However, checks in Mathematica for small values of $n$ and $a_i$, $b_i$ indicate that the resulting system admits no positive finite solutions. It is easily seen, that infinite $A$ and $B$ solve MLE equations. Let's fix $r=A/B$. Then, in the limit of infinite $A$ and $B$, equations reduce to $$ \begin{eqnarray} \sum_{i=1}^n a_i &=& \sum_{i=1}^n \frac{r}{r+1} (a_i+b_i) \\ \sum_{i=1}^n r b_i &=& \sum_{i=1}^n \frac{r}{r +1} \cdot (a_i+b_i) \end{eqnarray} $$ which gives $$ r = \frac{\sum_{i=1}^n a_i}{\sum_{i=1}^n b_i} $$ This result is confirmed numerically by the following heuristic. Use $a_i$ and $b_i$ to estimate head probability $p_i$ for each coin using ML estimator $p_i = \frac{a_i}{a_i+b_i}$. Then use ML estimation for beta distribution to determine $\alpha$ and $\beta$ from the data $\{p_1, \ldots, p_n \}$. I do not have a good explanation as to why ML estimation can not seem to determine both $\alpha$ and $\beta$. This may be worth a separate question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/48146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find an absorbing set in the table: fast algorithm Consider an $m\times n$, $m\leq n$, matrix $P = (p_{ij})$ such that $p_{ij}$ is either $0$ or $1$ and for each $i$ there is at least one $j$ such that $p_{ij} =1$. Denote $s_i = \{1\leq j\leq n:p_{ij} = 1\}$, so $s_i$ is always non-empty. We call a set $A\subseteq [1,m]$ absorbing if for all $i\in A$ it holds that $s_i\subset A$. If I apply my results directly then I will have an algorithm with a complexity of $\mathcal{O}(m^2n)$ which will find the largest absorbing set. On the other hand I was not focused on developing this algorithm, and hence I wonder if you could advise me of some algorithms which are faster? P.S. please retag if my tags are not relevant. Edited: I reformulate the question (otherwise it was trivial). I think this problem can be considered as searching for the largest loop in the graph (if we connect $i$ and $j$ iff $p_{ij} = 1$).
One way to think about this is that the largest candidate absorbing set is $A^\prime = [1, m]$. Then while $A^\prime$ contains an element which doesn't meet the condition, we remove that element. When there's nothing left to remove, we have the largest possible absorbing set. This can be rephrased by thinking of $P$ as a graph adjacency matrix. Then you want to remove all nodes which are reachable from a node $v \in [m+1, n]$. They are easily identified by depth-first search or breadth-first search. For clarity you might want to insert a pseudonode $n+1$ which has edges to each $v \in [m+1, n]$. The running time is $O(mn)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/48182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
different probability spaces for $P(X=k)=\binom{n}{k}p^k\big(1-p\big)^{ n-k}$? Consider the following formula, which is from the Binomial distribution: $$P(X=k)=\binom{n}{k}p^k\big(1-p\big)^{ n-k}$$ On the left hand side, it is the probability of one event: $\{\omega\in\Omega:X(\omega)=k\}$. On the right hand side, one can consider $n$ independent variables, using the addition and multiplication formulas. It seems that the underlying sample spaces can be different for the two sides of the equality. For the left hand side, as I understand, the sample space (up to set isomorphism) is $\Omega_1=\{(a_i)_{i=1}^{n}:a_i=0,1\}$. And the random variable $X:\Omega_1\to{\mathbb R}$, $$X(\omega)=\sum_{i=1}^{n}a_i$$ For the right hand side, it can be $\Omega_2=\{0,1\}$. And for the random variable $Y:\Omega_2\to{\mathbb R}$, $Y(0)=0,Y(1)=1$. Here is my question: How can one reach the equality by working with different sample spaces (and hence different probability spaces)? Edit: According to the comments and the answers to the question, I would like to modify the question as follows: Is it possible to interpret the formula $P(X=k)=\binom{n}{k}p^k\big(1-p\big)^{ n-k}$ in terms of the following probability space $(\Omega,\Sigma,P)$ where * *$\Omega=\{0,1\}$; *$\Sigma=2^{\Omega}$; *$P(0)=p,P(1)=1-p\quad 0\leq p\leq 1$.
On the right-hand side, all I can see is a number. That number could be found in several ways. There is no random variable mentioned on the right-hand side of the equation, and no sample space. And in any case, one can (and often does) show that two probabilities are equal when the underlying sample spaces are entirely different. Think of tossing a coin, event "Head", and tossing a die, event "even number."
{ "language": "en", "url": "https://math.stackexchange.com/questions/48243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Examples of results failing in higher dimensions A number of economists do not appreciate rigor in their usage of mathematics, and I find it very discouraging. One example of this rigor-lacking approach is proofs done via graphs or pictures without formalizing the reasoning. I would thus like to come up with a few examples of theorems (or other important results) which may be true in low dimensions (and are pretty intuitive graphically) but fail in higher dimensions. By the way, these examples are directed towards people who do not have a strong mathematical background (some linear algebra and calculus), so avoiding technical statements would be appreciated. The Jordan-Schoenflies theorem could be such an example (though most economists are unfamiliar with the notion of a homeomorphism). Could you point me to any others? Thanks.
Here's an example that doesn't require too much mathematical knowledge, and the low-dimensional result is intuitive graphically: We know that if a differentiable function $ f : \mathbb{R} \to \mathbb{R} $ has only one stationary point, which is a local minimum, then it must be a global minimum (this is intuitively obvious, and can be proved using Rolle's theorem). However, this result does not generalise to higher dimensions. An example would be $f : \mathbb{R}^2 \to \mathbb{R} $ with $ f(x,y) = x^2 + y^2(1-x)^3 $. This function has a unique stationary point at $ (0,0) $, which is a local minimum but not a global minimum (this can be seen by considering $ x \gg 1 $).
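One can sanity-check this counterexample numerically; here is a small stdlib-only sketch (the hand-computed gradient is mine, derived from the formula above):

```python
def f(x, y):
    """The counterexample f(x, y) = x^2 + y^2 (1 - x)^3."""
    return x**2 + y**2 * (1 - x)**3

def grad(x, y):
    """Gradient of f, computed by hand from the formula."""
    return (2*x - 3 * y**2 * (1 - x)**2, 2*y * (1 - x)**3)

# (0, 0) is stationary, and f > 0 on a small punctured neighbourhood of it
# (for |x| < 1 the factor (1 - x)^3 is positive), yet f takes arbitrarily
# large negative values, e.g. f(4, 2) = 16 + 4 * (-27) = -92.
```

So the point is a strict local minimum and the only stationary point, but the function is unbounded below.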
{ "language": "en", "url": "https://math.stackexchange.com/questions/48301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 12, "answer_id": 6 }
Similarity of Triangle problem Given: AD & PS are medians in ΔABC and ΔPQR respectively, $$\frac{AB}{PQ}=\frac{AD}{PS}=\frac{AC}{PR}$$ To Prove: ΔABC ~ ΔPQR Figure: Problem: In ΔABD & ΔPQS, or in ΔADC & ΔPSR, or in ΔABC & ΔPQR, I have only found that two sides are proportional, but I can't figure out the third condition needed to prove similarity. Please help me.
Extend $AD$ through $D$ to a point $A'$ such that $DA'=AD$. Then $D$ is the midpoint of both $BC$ and $AA'$, so the diagonals of $ABA'C$ bisect each other and $ABA'C$ is a parallelogram; in particular $BA'=AC$. Similarly, extend $PS$ to $P'$ with $SP'=PS$, so $PQP'R$ is a parallelogram and $QP'=PR$. Since $AA'=2AD$ and $PP'=2PS$, we have $AD/PS=AA'/PP'$, so $$\frac{AB}{PQ}=\frac{AA'}{PP'}=\frac{BA'}{QP'}\quad\text{(using }BA'=AC,\ QP'=PR\text{)}.$$ Therefore $\triangle ABA' \sim \triangle PQP'$ (SSS), which gives $\angle BAD=\angle QPS$. In the same way $\angle CAD=\angle RPS$, so $\angle BAC=\angle QPR$. Together with $AB/PQ=AC/PR$, this gives $\triangle ABC \sim \triangle PQR$ (SAS). (Q.E.D.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/48372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Equidistribution results vs transcendence degree Consider $\alpha =(\alpha_1, \dots, \alpha_n) \in \mathbb{R}^n$ linearly independent over $\mathbb{Q}$, then the map $q \mapsto q \alpha:= ( q \alpha_1, \dots, q \alpha_n)$ gives a dense embedding of $\mathbb{Z}$ in the $n$ torus, i.e. $\mathbb{R}^n / \mathbb{Z}^n$, actually even more: it becomes equidistributed in the following sense $$\frac{1}{N}\sum\limits_{k \leq N} f(k\alpha) \rightarrow \int\limits_{\mathbb{R}^n / \mathbb{Z}^n} f( x) \mathrm{d} x,$$ where $\mathrm{d} x$ denotes the Haar measure. A similar result holds for the $\infty$ torus as well. What can be said about the rate of convergence here? For $n=1$, can we distinguish whether $\alpha_1$ is transcendental or algebraic from the error term? Motivation: I am trying to understand effective versions of universality theorems of L-functions, see e.g. http://en.wikipedia.org/wiki/Zeta_function_universality.
The rate of convergence is related to the discrepancy. Koksma's inequality says that if $f$ is of bounded variation $V(f)$, and $x_1,\dots,x_n$ are points in $[0,1)$ with discrepancy $D_n$, then $$\left|{1\over n}\sum_1^nf(x_i)-\int_0^1f(t)\,dt\right|\le V(f)D_n$$ The discrepancy, in turn, is a measure of departure from uniformity of distribution. It is defined by $$D_n=\sup_{0\lt a\le1}\left|{1\over n}\#\lbrace i\le n:x_i\lt a\rbrace-a\right|$$ The discrepancy of the sequence of fractional parts of $\alpha,2\alpha,3\alpha,\dots,n\alpha$ is closely related to the partial quotients in the continued fraction expansion of $\alpha$. For algebraic irrationals it is known that $nD_n=O(n^{\epsilon})$ for every $\epsilon\gt0$. But we don't know enough about continued fractions to distinguish transcendentals from algebraics (in general), so, in general, I don't think you'll be able to distinguish the two classes by convergence rates. All of this generalizes to higher dimensions and has been studied in some detail. I've only presented the tip of the iceberg. For more, see Kuipers and Niederreiter, Uniform Distribution of Sequences.
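For readers who want to see these discrepancies concretely, here is a small illustrative sketch (function names are mine) that computes the star discrepancy of the Kronecker sequence $\{\alpha\},\{2\alpha\},\dots,\{n\alpha\}$ via the standard sorted-points formula; the star discrepancy differs from $D_n$ above by at most a factor of $2$, so it shows the same decay:

```python
from math import sqrt

def star_discrepancy(points):
    """Star discrepancy of points in [0, 1): sort them and compare the
    empirical distribution against the uniform one at each sorted point."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def kronecker_discrepancy(alpha, n):
    """Star discrepancy of the fractional parts of alpha, 2*alpha, ..., n*alpha."""
    pts = [(k * alpha) % 1.0 for k in range(1, n + 1)]
    return star_discrepancy(pts)
```

For $\alpha=\sqrt2$ (bounded partial quotients, so $nD_n = O(\log n)$), the discrepancy visibly shrinks as $n$ grows, e.g. from roughly $0.11$ at $n=10$ to below $0.01$ at $n=1000$.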
{ "language": "en", "url": "https://math.stackexchange.com/questions/48483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Problem finding zeros of complex polynomial I'm trying to solve this problem $$ z^2 + (\sqrt{3} + i)|z| \bar{z}^2 = 0 $$ So, I know $ |z^2| = |z|^2 = a^2 + b ^2 $ and $ \operatorname{Arg}(z^2) = 2 \operatorname{Arg} (z) - 2k \pi = 2 \arctan (\frac{b}{a} ) - 2 k\pi $ for a $ k \in \mathbb{Z} $. Regarding the other term, I know $ |(\sqrt{3} + i)|z| \bar{z}^2 | = |z|^3 |\sqrt{3} + i| = 2 |z|^3 = 2(a^2 + b^2)^{3/2} $ and because of de Moivre's theorem, I have $ \operatorname{Arg} [(\sqrt{3} + i ) |z|\bar{z}^2] = \frac{\pi}{6} + 2 \operatorname{Arg} (z) - 2Q\pi $. Using all of this I can rewrite the equation as follows $$\begin{align*} &|z|^2 \Bigl[ \cos (2 \operatorname{Arg} (z) - 2k \pi) + i \sin (2 \operatorname{Arg}(z) - 2k \pi)\Bigr]\\ &\qquad \mathop{+} 2|z|^3 \Biggl[\cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right) + i \sin \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right)\Biggr] = 0 \end{align*} $$ Which, assuming $ z \neq 0 $, can be simplified as $$\begin{align*} &\cos (2 \operatorname{Arg} (z) - 2k \pi) + i \sin (2 \operatorname{Arg} (z) - 2k \pi) \\ &\qquad\mathop{+} 2 |z|\Biggl[\cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q \pi \right) + i \sin \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right)\Biggr] = 0 \end{align*} $$ Now, from this I'm not sure how to go on. I tried a few things that got me nowhere like trying to solve $$ \cos (2 \operatorname{Arg}(z) - 2k \pi) = 2 |z| \cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right) $$ I'm really lost here, I don't know how to keep going and I've looked for error but can't find them. Any help would be greatly appreciated.
The relation is equivalent to $z^2=-(\sqrt{3}+i)|z|\overline{z}^2$. $z=0$ is a solution, so in the following $z \neq 0$. Take modulus of both sides and denote $r=|z|=|\overline{z}|$. Then $r^2=2r^3$, which means $r=\frac{1}{2}$. The relations turns to $z^2+\frac{1}{2}(\sqrt{3}+i)\overline{z}^2=0$. Multiply by $z^2$ and get $z^4+\frac{1}{2}(\sqrt{3}+i)\frac{1}{16}=0$. Write it in trigonometric form $$ z^4=\frac{1}{16}\left(-\frac{\sqrt{3}}{2}-i\frac{1}{2}\right)=\frac{1}{16}(\cos \frac{7\pi}{6}+\sin \frac{7\pi}{6})$$. From here on it is just the extraction of complex roots. [edit] I did not answer your question, as to how to continue your calculations, but I can say from experience that in most complex numbers problems the substitution $z=a+bi$ gets you in more troubles in the end, than working with the properties of complex conjugate, modulus and trigonometric form. You can see that in my solution no great computational problems were encountered.
{ "language": "en", "url": "https://math.stackexchange.com/questions/48528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Equality of outcomes in two Poisson events I have a Poisson process with a fixed (large) $\lambda$. If I run the process twice, what is the probability that the two runs have the same outcome? That is, how can I approximate $$f(\lambda)=e^{-2\lambda}\sum_{k=0}^\infty\frac{\lambda^{2k}}{k!^2}$$ for $\lambda\gg1$? If there's a simple expression about $+\infty$ that would be best, but I'm open to whatever can be suggested.
So I've come across this problem before, and ran into the Bessel-function exact solution; but I figured that there must be some heuristic explanation. Here's the best I was able to come up with. Let $X$ and $Y$ be independent Poisson with mean $\lambda$ (and therefore variance $\lambda$). You want $P(X=Y)$. Now, $X$ and $Y$ are both approximately normal with mean $\lambda$ and variance $\lambda$, if $\lambda$ is large. Let $U = X + \lambda$ and $V = 3 \lambda - Y$. Then $U$ and $V$ are both approximately normal with mean $2 \lambda$ and variance $\lambda$. But these are the mean and variance of a binomial distribution with parameters $4\lambda$ and $1/2$. So $U$ and $V$ are both approximately binomial with parameters $4\lambda$ and $1/2$; thus $U + V$ is approximately binomial with parameters $8\lambda$ and $1/2$. But by construction $U + V = 4 \lambda + X - Y$. So $X = Y$ if and only if $U + V = 4 \lambda$. The probability that $U + V$ is $4 \lambda$ is approximately ${8 \lambda \choose 4 \lambda} 2^{-8\lambda}$. And by Stirling's approximation this is of order $(4 \pi \lambda)^{-1/2}$, which is what we wanted. (The fact that the probability should decay like $\lambda^{-1/2}$ is fairly easy to see -- since $X$ and $Y$ have standard deviation $\lambda^{1/2}$, the number of values they take ``regularly'' is of order $\lambda^{1/2}$, so the chance of collision is of order $\lambda^{-1/2}$. The constant $(4\pi)^{-1/2}$ is in my opinion much harder to guess.)
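The $(4\pi\lambda)^{-1/2}$ rate can also be checked directly by summing the series $e^{-2\lambda}\sum_k \lambda^{2k}/(k!)^2$ with a term-ratio recurrence. Here is a rough stdlib-only sketch (function names mine); it is fine for moderate $\lambda$, but for $\lambda$ much beyond $350$ the leading factor $e^{-2\lambda}$ underflows in double precision and one should work with logarithms instead:

```python
from math import exp, pi, sqrt

def p_equal(lam, tol=1e-15):
    """P(X = Y) for independent Poisson(lam) X, Y:
    e^{-2 lam} * sum_k lam^{2k} / (k!)^2, summed until terms are negligible."""
    term = exp(-2 * lam)          # the k = 0 term
    total = term
    k = 1
    while True:
        term *= (lam / k) ** 2    # ratio of consecutive terms is (lam/k)^2
        total += term
        if k > lam and term < tol * total:
            break                 # terms decay geometrically once k > lam
        k += 1
    return total

def approx(lam):
    """The asymptotic value 1 / sqrt(4 * pi * lam)."""
    return 1 / sqrt(4 * pi * lam)
```

At $\lambda = 50$, for example, the exact sum and $1/\sqrt{4\pi\lambda}$ already agree to well within a percent, consistent with the heuristic above.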
{ "language": "en", "url": "https://math.stackexchange.com/questions/48593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Homology of the loop space Let $X$ be a nice space (manifold, CW-complex, what you prefer). I was wondering if there is a computable relation between the homology of $\Omega X$, the loop space of $X$, and the homology of $X$. I know that, almost by definition, the homotopy groups are the same (but shifted a dimension). Because the relation between homotopy groups and homology groups is very difficult, I expect that the homology of $\Omega X$ is very hard to compute in general. References would be great.
The general idea for the computation of $H(\Omega X)$ (due to Serre, AFAIK) is to consider the (Serre) fibration $\Omega X\to PX\cong pt\to X$ and use the Leray-Serre spectral sequence (it allows one, in particular, to compute $H(\Omega X;\mathbb Q)$ easily, at least in the simply-connected case; cohomology with integer coefficients is, indeed, more complicated). It's discussed, I believe, in any textbook covering LSSS — e.g. in Hatcher's.
{ "language": "en", "url": "https://math.stackexchange.com/questions/48637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Derive $\frac{d}{dx} \left[\sin^{-1} x\right] = \frac{1}{\sqrt{1-x^2}}$ Derive $\frac{d}{dx} \left[\sin^{-1} x\right] = \frac{1}{\sqrt{1-x^2}}$ (Hint: set $x = \sin y$ and use implicit differentiation) So, I tried to use the hint and I got: $x = \sin y$ $\frac{d}{dx}\left[x\right] = \sin y\frac{d}{dx}$ $\frac{dx}{dx} = \cos y \frac{dy}{dx}$ $\frac{dy}{dx} = \frac{1}{\cos y}$ $\frac{dy}{dx} = \sec y$ From here I need a little help. * *Did I do the implicit differentiation correctly? *How do I use this to help with the original question?
Edited in response to Aryabhata's comment. From $\cos ^{2}y+\sin ^{2}y=1$, we get $\cos y=\pm \sqrt{1-\sin ^{2}y}$. For $y\in \lbrack -\pi /2,\pi /2]$, $\cos y=\sqrt{1-\sin ^{2}y}\geq 0$, and $% y=\arcsin x\Leftrightarrow x=\sin y$ (see Inverse trigonometric functions). Then by the rule of the inverse function we have $$\dfrac{dy}{dx}=\dfrac{1}{\dfrac{dx}{dy}}=\dfrac{1}{\dfrac{d}{dy}\sin y}=\frac{1}{\cos y}=\dfrac{1}{\sqrt{1-\sin ^{2}y}}=\dfrac{1}{\sqrt{1-x^{2}}}.$$
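The closed form is easy to check numerically; here is a quick stdlib-only sketch (the helper name is mine) comparing a symmetric difference quotient of $\arcsin$ against $1/\sqrt{1-x^2}$ at a few sample points in $(-1,1)$:

```python
from math import asin, sqrt

def num_deriv(f, x, h=1e-6):
    """Symmetric difference quotient approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# compare against the closed form 1 / sqrt(1 - x^2) at a few sample points
checks = {x: (num_deriv(asin, x), 1 / sqrt(1 - x * x))
          for x in (-0.5, 0.0, 0.3, 0.7)}
```

The two columns agree to many decimal places away from the endpoints $\pm 1$ (near the endpoints the derivative blows up and the finite difference loses accuracy).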
{ "language": "en", "url": "https://math.stackexchange.com/questions/48752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 0 }
Differential Equations Flow-chart/genealogical diagram? There are many methods to solve differential equations. There are many kinds of equations, different orders, linear, non-linear, homogeneous, exact, the other kind of homogeneous etc. I would like to know of any diagrams that organize equations in to families by the best suited solution method. Or maybe a genealogical diagram, or even a flow chart. Something that gives the "big picture" on all these methods and types of equations. Is there any such digram? If not how would you organize it? (the goal is for the diagram to assist one in choosing a method for solving, while at the same time clarifying the similarities and difference between equations. Something like the diagram that shows the real, imaginary, natural and complex numbers, perhaps: Of course it would be more complex... and it might fill a wall, but I'm OK with that.
Maple's help page for the odeadvisor command contains a list of many different types of differential equations, each constituting a link to a page that describes that type of equation and (in many cases) its solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/48810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How to determine a number with the same amount of odd and even divisors Given a number $N$, how does one determine the first number after $N$ with the same number of odd and even divisors? For example, if we have $N=1$, then the next number we are searching for is $2$, because its divisors are: odd: $1$; even: $2$. I figured out that this special number can't be odd, and obviously it can't be prime. I can't find any formula for this; do I just have to compute numbers one by one and check whether each is such a special number? Obviously $1$ and the number itself are among the divisors. Cheers
If $2n$ is a divisor of $2m$,[1] then $$ 2n \mid 2m \Leftrightarrow n \mid m $$ In other words, $2n$ is a divisor of $2m$ if and only if $n$ is a divisor of $m$. So for every even divisor of $2m$ there is a divisor of $m$, and for every divisor of $m$ there is an even divisor of $2m$. Now we conclude that the number of even divisors of $2m$ is equal to the number of divisors of $m$ (#). Now if $2n - 1$ is a divisor of $2m$, then $$ 2n - 1 \mid 2m \Leftrightarrow 2n - 1 \mid m $$ In other words, $2n - 1$ is a divisor of $2m$ if and only if $2n - 1$ is a divisor of $m$. So for every odd divisor of $2m$ there is an odd divisor of $m$ and for every odd divisor of $m$ there is an odd divisor of $2m$. Now we conclude that the number of odd divisors of $2m$ is equal to the number of odd divisors of $m$ (##). If the number of even divisors of $2m$ is equal to the number of odd divisors of it, from (##) and (#) we conclude that the number of divisors of $m$ is equal to the number of odd divisors of it, which means all of the divisors of $m$ are odd, so $m$ is odd. So if the numbers of even and odd divisors of $2m$ are equal, then $m$ is odd. \begin{align} 2m &= 2(2n - 1) \\ &= 4n - 2 \\ &= 2 + 4(n - 1) \\ &= a_1 + d(n - 1), \quad n \in \mathbb{N} \end{align} Every term of the arithmetic sequence $2, 6, 10, 14, \dots$ (every other even number starting with $2$, i.e. every number $\equiv 2 \pmod 4$) has the property. [1] As you said, the number has to be even, so I wrote $2m$.
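This characterization is easy to confirm by brute force; here is a naive sketch (function names mine, and the $O(n)$ divisor count per candidate is deliberately simple rather than fast):

```python
def divisor_parity_counts(n):
    """Count the odd and even divisors of n by trial division."""
    odd = even = 0
    for d in range(1, n + 1):
        if n % d == 0:
            if d % 2:
                odd += 1
            else:
                even += 1
    return odd, even

def next_balanced(n):
    """First number after n with equally many odd and even divisors."""
    m = n + 1
    while True:
        odd, even = divisor_parity_counts(m)
        if odd == even:
            return m
        m += 1
```

Running this confirms that the qualifying numbers are exactly $2, 6, 10, 14, \dots$, i.e. those $\equiv 2 \pmod 4$, so the answer to the original question is just the next such term after $N$.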
{ "language": "en", "url": "https://math.stackexchange.com/questions/48876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Deriving the rest of trigonometric identities from the formulas for $\sin(A+B)$, $\sin(A-B)$, $\cos(A+B)$, and $\cos (A-B)$ I am trying to study for a test and the teacher suggest we memorize $\sin(A+B)$, $\sin(A-B)$, $\cos(A+B)$, $\cos (A-B)$, and then be able to derive the rest out of those. I have no idea how to get any of the other ones out of these, it seems almost impossible. I know the $\sin^2\theta + \cos^2\theta = 1$ stuff pretty well though. For example just knowing the above how do I express $\cot(2a)$ in terms of $\cot a$? That is one of my problems and I seem to get stuck half way through.
Since $\displaystyle\cot(2a) = \frac{\cos(2a)}{\sin(2a)}$, you would have (assuming you know the addition formulas for sines and cosines): $$\begin{align*} \cos(2a) &= \cos(a+a) = \cos(a)\cos(a) - \sin(a)\sin(a)\\ &= \cos^2(a) - \sin^2(a);\\ \sin(2a) &= \sin(a+a) = \sin(a)\cos(a) + \cos(a)\sin(a)\\ &= 2\sin(a)\cos(a), \end{align*}$$ and therefore $$\begin{align*} \cot(2a) &= \frac{\cos(2a)}{\sin(2a)} = \frac{\cos^2(a) - \sin^2(a)}{2\sin(a)\cos(a)}\\ &= \frac{1}{2}\left(\frac{\cos^2(a)}{\sin(a)\cos(a)}\right) - \frac{1}{2}\left(\frac{\sin^2(a)}{\sin(a)\cos(a)}\right)\\ &= \frac{1}{2}\left(\frac{\cos(a)}{\sin(a)} - \frac{\sin(a)}{\cos(a)}\right)\\ &= \frac{1}{2}\left(\cot(a) - \tan(a)\right)\\ &= \frac{1}{2}\left(\cot(a) - \frac{1}{\cot(a)}\right)\\ &= \frac{1}{2}\left(\frac{\cot^2(a)}{\cot(a)} - \frac{1}{\cot(a)}\right)\\ &= \frac{1}{2}\left(\frac{\cot^2(a) - 1}{\cot (a)}\right). \end{align*}$$ P.S. Now, as it happens, I don't know the formulas for double angles, nor most identities involving tangents, cotangents, etc. I never bothered to memorize them. What I know are: * *The definitions of tangent, cotangent, secant, and cosecant in terms of sine and cosine; *That sine is odd ($\sin(-x) = -\sin(x)$) and cosine is even ($\cos(-x)=\cos(x)$); *The addition formulas for sine and cosine; *The values of sine and cosine at $0^{\circ}$, $30^{\circ}$, $45^{\circ}$, $60^{\circ}$, and $90^{\circ}$. (I can derive $\sin^2\theta + \cos^2\theta = 1$ from the above, but in all honesty that one comes up so often that I do know it as well). I do not know the addition or double angle formulas for tangents nor cotangents, so the above derivation was done precisely "on the fly", as I was typing. 
I briefly considered rewriting $\cos(2a)$ with one of the following equivalent formulas: $$\cos^2(a)-\sin^2(a) = \cos^2(a) + \sin^2(a) - 2\sin^2(a) = 1 - 2\sin^2(a)$$ or $$\cos^2(a) - \sin^2(a) = 2\cos^2(a) - \cos^2(a) - \sin^2(a) = 2\cos^2(a) - 1,$$ but this turned out to be unnecessary, since the first attempt immediately led to a formula for $\cot(2a)$ that involved only $\cot(a)$ and $\tan(a) = \frac{1}{\cot(a)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/48938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$? How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?
Each column of $AB$ is a linear combination of the columns of $A$, so $\text{Col }AB \subseteq \text{Col }A$; similarly, each row of $AB$ is a linear combination of the rows of $B$, so $\text{Row }AB \subseteq \text{Row }B$. Therefore $\text{Rank }AB \leq \text{Rank }A$ and $\text{Rank }AB \leq \text{Rank }B$, and the result follows.
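As a computational illustration of the inequality (a sketch with names of my choosing; exact rational arithmetic is used so no floating-point rank threshold is needed):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    A = [[Fraction(x) for x in row] for row in M]
    pivots = 0
    ncols = len(A[0]) if A else 0
    for col in range(ncols):
        # find a pivot row at or below the current pivot count
        pr = next((r for r in range(pivots, len(A)) if A[r][col] != 0), None)
        if pr is None:
            continue
        A[pivots], A[pr] = A[pr], A[pivots]
        for r in range(len(A)):
            if r != pivots and A[r][col] != 0:
                f = A[r][col] / A[pivots][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[pivots])]
        pivots += 1
    return pivots

def matmul(A, B):
    """Naive matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]
```

For example, taking a rank-$2$ matrix $A$ and a rank-$1$ matrix $B$, the product has rank at most $1$, as the containments above predict.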
{ "language": "en", "url": "https://math.stackexchange.com/questions/48989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 5, "answer_id": 3 }
Using Horner's Method I'm trying to evaluate a polynomial recursively using Horner's method. It's rather simple when every power of $x$ is present (like $x+x^2+x^3\dots$), but what if some of them are missing? Example: $-6+20x-10x^2+2x^4-7x^5+6x^7$. I would also appreciate it if someone could explain the method in more detail; I've used the description listed here but would like some more explanation.
$6x^7-7x^5+2x^4-10x^2+20x-6$ = $6x^7+0x^6-7x^5+2x^4+0x^3-10x^2+20x-6$ The method is essentially this: start with the coefficient of the highest power, multiply by $x$, and add the next coefficient. Stop when you add the constant coefficient. So the steps in the iteration go: $6$ $6x[+0]$ $6x^2-7$ $6x^3-7x+2$ $6x^4-7x^2+2x[+0]$ $6x^5-7x^3+2x^2-10$ $6x^6-7x^4+2x^3-10x+20$ $6x^7-7x^5+2x^4-10x^2+20x-6$ Trust this helps
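In code, the same iteration is just a left-to-right fold over the coefficient list, with explicit zeros for the missing powers (a minimal Python sketch, names mine):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, given its coefficients from highest
    power down to the constant, with zeros for any missing powers."""
    result = 0
    for c in coeffs:
        result = result * x + c   # one multiply-add per coefficient
    return result

# 6x^7 + 0x^6 - 7x^5 + 2x^4 + 0x^3 - 10x^2 + 20x - 6
coeffs = [6, 0, -7, 2, 0, -10, 20, -6]
```

Each step of the loop corresponds to one line of the iteration shown above, and only one multiplication and one addition are done per coefficient.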
{ "language": "en", "url": "https://math.stackexchange.com/questions/49051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
integrals inequalities $$ \left( {\int\limits_0^1 {f^2(x)\ \text{d}x} }\right)^{\frac{1} {2}} \ \geqslant \quad \int\limits_0^1 {\left| {f(x)} \right|\ \text{d}x} $$ I can't prove it )=
Define $$ \mu = \int_0^1 {|f(x)|\,dx} $$ and $$ \sigma^2 = \int_0^1 {(|f(x)| - \mu )^2 \,dx} . $$ Then $$ \sigma^2 = \int_0^1 {f^2 (x)\,dx} - 2\mu \int_0^1 {|f(x)|\,dx} + \mu ^2 = \int_0^1 {f^2 (x)\,dx} - \mu ^2 . $$ Since $\sigma^2 \geq 0$, $$ \int_0^1 {f^2 (x)\,dx} \geq \mu ^2. $$ Taking square roots of both sides yields the desired result: $$ \bigg(\int_0^1 {f^2 (x)\,dx} \bigg)^{1/2} \ge \int_0^1 {|f(x)|\,dx}. $$ EDIT: The idea used here is that for a random variable $Y$ with finite second moment, $$ {\rm Var}(Y) := {\rm E}[Y - {\rm E}(Y)]^2 \geq 0. $$ So, $$ {\rm Var}(Y) = {\rm E}(Y^2) - 2{\rm E}(Y){\rm E}(Y)+ {\rm E}^2 (Y) = {\rm E}(Y^2) - {\rm E}^2 (Y), $$ and hence $$ {\rm E}(Y^2) \geq {\rm E}^2 (Y). $$ To relate this to the question at hand, let $X$ be a uniform$[0,1]$ random variable, and let $Y=|f(X)|$ (for $f$ a square-integrable function on $[0,1]$). Then $$ {\rm E}(Y^2) = {\rm E}[f^2 (X)] = \int_0^1 {f^2 (x)\,dx} $$ and $$ {\rm E}^2 (Y) = {\rm E}^2 (|f(X)|) = \bigg(\int_0^1 {|f(x)|\,dx} \bigg)^2 . $$ Hence $$ \int_0^1 {f^2 (x)\,dx} \geq \bigg(\int_0^1 {|f(x)|\,dx} \bigg)^2 , $$ which gives the desired result after taking square roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/49097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$? We know $$i^2=-1 $$then why does this happen? $$ i^2 = \sqrt{-1}\times\sqrt{-1} $$ $$ =\sqrt{-1\times-1} $$ $$ =\sqrt{1} $$ $$ = 1 $$ EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.
There are many ways to show that your second equality is incorrect. Just for the sake of argument and a refreshing change of pace, suppose your point is true. That is, suppose $i^{2} = 1$. Then \begin{align} (x + i)(x-i) = x^2 - i^{2} = x^{2} - 1. \end{align} Use Descartes' Rule of Signs to derive a contradiction. Hint: $i$ is not real. Can you finish the line of reasoning and derive an absurdity?
{ "language": "en", "url": "https://math.stackexchange.com/questions/49169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43", "answer_count": 9, "answer_id": 7 }
The number of bit strings with only two occurrences of 01 How many bit strings of length $n$, where $n \geq 4$, contain exactly two occurrences of $01$?
Any such sequence can be written as: $$1^k0^l1^m0^p1^q0^r$$ where $k+l+m+p+q+r=n$ and $l,m,p,q\geq 1$ and $k,r\geq 0$. But we can see that as the number of ways of writing $n-4$ as the sum of $6$ non-negative integers, which is ${n+1}\choose{5}$.
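The count $\binom{n+1}{5}$ is easy to confirm by brute force for small $n$ (a sketch, names mine; enumeration is exponential in $n$, so this is only for small lengths):

```python
from itertools import product
from math import comb

def count_two_01(n):
    """Brute force: bit strings of length n with exactly two '01' substrings."""
    total = 0
    for bits in product('01', repeat=n):
        s = ''.join(bits)
        if sum(1 for i in range(n - 1) if s[i:i + 2] == '01') == 2:
            total += 1
    return total
```

For $n=4$ the only such string is $0101$, matching $\binom{5}{5}=1$, and the brute-force counts agree with $\binom{n+1}{5}$ for every small $n$ one cares to try.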
{ "language": "en", "url": "https://math.stackexchange.com/questions/49252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Does the likelihood of an event increase with the number of times it does not occur? It would seem logical that the more times an event does not happen, the more likely it is to happen; for example: If a coin is flipped and it lands on tails 10 times in a row, it would seem more likely that the next flip will result in heads. The Infinite Monkey Theorem is one such idea that suggests this is true, http://en.wikipedia.org/wiki/Infinite_monkey_theorem It states that if some number of monkeys are left in a room with typewriters for an infinite amount of time then they will eventually compose all written texts ever produced. This seems to suggest that, since the chance of the monkeys writing a work, say Shakespeare's Romeo and Juliet, is very low, the more times they do not write it, the more likely they are to write it, until the chance becomes significant and it, the writing of the play, happens. However another idea, Gambler's Fallacy, states quite the opposite. http://en.wikipedia.org/wiki/Gambler%27s_fallacy It states that the chance of an event does not increase with the number of times it does not occur. So what is the answer? Does the likelihood of an event go up the more times it does not happen, or does it stay the same? And if it does stay the same, then how does one explain the Infinite Monkey Theorem?
The Infinite Monkey Theorem does not suggest that the "more times they do not write it, the more likely they are to write it, until the chance becomes significant and it, the writing of the play, happens." Rather, what it says informally is that the longer they have been writing, the more likely they are to have written a given string. The monkey is just as likely to start with the complete works of Shakespeare from keystroke 1 as from keystroke $10^{400,000}$. However, the longer the string of successive keystrokes, the more likely any given substring can be found there. Thus, for example, the complete works of Shakespeare are much more likely to be found in the string of the first $10^{400,000}$ keystrokes than in the string of the first $10^{300,000}$ keystrokes. That's because the former is $10^{100,000}$ times as long.
{ "language": "en", "url": "https://math.stackexchange.com/questions/49296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
An identity involving Lucas numbers Let $L_n$ be the Lucas numbers, defined by $L_n = F_{n-1} + F_{n+1}$ where $F_k$ are the Fibonacci numbers. How to prove that $$L_{2n+1} = \displaystyle \sum_{k=0}^{\lfloor n + 1/2\rfloor}\frac{2n+1}{2n+1 - k}{2n+1 - k \choose k} $$
The upper limit of the summation is simply $n$:$$L_{2n+1} = \sum\limits_{k=0}^n \frac{2n+1}{2n+1-k} {{2n+1-k}\choose{k}}.$$ Notice that as the index $k$ changes, the sum of the upper and lower numbers in the binomial coefficient remains constant at $2n+1$. In other words, ignoring for a moment the fractional coefficient, we're summing lower-left-to-upper-right diagonals in Pascal's triangle when it's written in rectangular form, as for instance shown at the top of this web page. It's well known that those sums are the Fibonacci numbers: $$F_{n+1} = \sum\limits_{k=0}^{\lfloor n/2 \rfloor} {{n-k}\choose{k}}.$$ (It's also not hard to prove this by induction, using the recursive definition of the Fibonacci numbers and the fact that ${{n}\choose{k}} = {{n-1}\choose{k}} + {{n-1}\choose{k-1}}$.) This suggests trying to manipulate the given summation into a form recognizable as the sum of two of these diagonal sums. This works: $$\frac{2n+1}{2n+1-k} = 1 + \frac{k}{2n+1-k}$$ and $$\frac{k}{2n+1-k} {{2n+1-k}\choose{k}} = {{2n-k}\choose{k-1}},$$ so $ \begin{align*} \sum\limits_{k=0}^n \frac{2n+1}{2n+1-k} {{2n+1-k}\choose{k}} &= \sum\limits_{k=0}^n \left(1 + \frac{k}{2n+1-k} \right) {{2n+1-k}\choose{k}}\\ &= \sum\limits_{k=0}^n \left[{{2n+1-k}\choose{k}} + {{2n-k}\choose{k-1}} \right]\\ &= \sum\limits_{k=0}^n {{2n+1-k}\choose{k}} + \sum\limits_{k=0}^n {{2n-k}\choose{k-1}}\\ &= \sum\limits_{k=0}^n {{2n+1-k}\choose{k}} + \sum\limits_{k=-1}^{n-1} {{2n-1-k}\choose{k}}\\ &= \sum\limits_{k=0}^n {{2n+1-k}\choose{k}} + \sum\limits_{k=0}^{n-1} {{2n-1-k}\choose{k}}\\ &= F_{2n+2} + F_{2n}\\ &=L_{2n+1}. \end{align*}$
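The identity checks out numerically; here is a quick sketch (names mine) using exact rational arithmetic for the fractional coefficient, with the upper limit taken to be $n$ as noted at the start of the answer:

```python
from fractions import Fraction
from math import comb

def lucas(n):
    """Lucas numbers: L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def rhs(n):
    """The sum on the right-hand side, with upper limit n."""
    return sum(Fraction(2 * n + 1, 2 * n + 1 - k) * comb(2 * n + 1 - k, k)
               for k in range(n + 1))
```

Each summand $\frac{2n+1}{2n+1-k}\binom{2n+1-k}{k}$ is in fact an integer, as the decomposition $\binom{2n+1-k}{k}+\binom{2n-k}{k-1}$ above shows, so `rhs` always returns a `Fraction` with denominator $1$.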
{ "language": "en", "url": "https://math.stackexchange.com/questions/49347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }