How long it takes to roll 6 6's in a row Someone is sick and extremely bored. To pass the time, he decides to roll a $6$-sided die until he gets $6$ $6$'s in a row. If it takes him $5$ seconds for every roll of the die, how long is he expected to be rolling dice until he gets his $6$ $6$'s? I believe I have a correct answer to this problem, but I'm more curious if there is a more efficient way of solving it than I did, so I'd appreciate it if you could attempt to solve it yourself and compare it with my approach. My approach to solving this problem was to first find the expected number of die tosses and multiply that result by $5$. Let's call the expected number of tosses $X$. We can say: $X = \frac1{6^6}(6) + \frac5{6^6}(X+6) + \frac5{6^5}(X+5) + \frac5{6^4}(X+4) + \frac5{6^3}(X+3) + \frac5{6^2}(X+2) + \frac5{6}(X+1)$ Solving for $X$ gives us $55981$ expected tosses resulting in a total of $279905$ seconds.
The only possible approach is to first find the expected number of die tosses and multiply the result by $5$, so I assume you're looking for different ways to find the expected number of die tosses. Here's a somewhat well-known, nice solution: You offer a roulette-style game where players can bet any amount on any number one through six, and then you roll a die and give a 5 to 1 payout when their number comes up (i.e. this is a fair game). Your customers are peculiar: they all play the exact same strategy. They bet on $6$ every time, and "let it ride" until they lose. In other words, if they win the first round and get paid $5$, plus their original bet back, they bet $6$ the following round. If they win again, they have $36$ in front of them and they bet it all on the following round, and so on. Also, each round, exactly one new customer shows up. You decide at the outset that you will shut down the game if and only if some player wins six times in a row. This is a fair game (and since your losses are bounded, the optional stopping theorem applies) so your expected earnings must be zero. If the game goes $T$ rounds, you collect one initial bet from each of the $T$ customers. Only the last six customers leave with anything. The last customer that arrived leaves with $6,$ the second to last with $36,$ the third to last with $6^3,$ and so on. So the expected value of $T$ must be $6+6^2+6^3+6^4+6^5+6^6,$ which is close to, but slightly off from, your answer. (Check your algebra.)
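As a quick numerical sanity check (a sketch, not part of the argument above), the closed-form sum can be computed directly and compared against a small Monte Carlo simulation. Simulating six-in-a-row directly is slow (each trial needs roughly 56k rolls), so the simulation checks the analogous three-in-a-row case, whose expectation by the same argument is $6+6^2+6^3=258$.

```python
import random

# Expected number of rolls for six 6's in a row, from the fair-game argument:
expected_rolls = sum(6**k for k in range(1, 7))   # 6 + 36 + ... + 6^6
print(expected_rolls, expected_rolls * 5)          # 55986 rolls, 279930 seconds

def rolls_until_streak(rng, streak_len):
    """Roll a fair die until `streak_len` consecutive 6's appear."""
    streak = rolls = 0
    while streak < streak_len:
        rolls += 1
        streak = streak + 1 if rng.randint(1, 6) == 6 else 0
    return rolls

# Monte Carlo check of the cheaper three-in-a-row case (expectation 258).
rng = random.Random(0)
trials = 5000
avg = sum(rolls_until_streak(rng, 3) for _ in range(trials)) / trials
print(avg)  # should be close to 258
```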
{ "language": "en", "url": "https://math.stackexchange.com/questions/4641439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that if $x,y>0$, $\left(\frac{x^3+y^3}{2}\right)^2≥\left(\frac{x^2+y^2}{2}\right)^3$ Through some rearrangement of the inequality and expansion, I have been able to show that the inequality is equivalent to $$x^6-3x^4y^2+4x^3y^3-3x^2y^4+y^6≥0$$ However, I am not sure how to prove the above or if expansion and rearrangement are even correct steps.
Another, longer way... Let $x=r\cos\theta$ and $y=r\sin\theta$ where $0<\theta<\frac\pi 2$ and $r>0$. The factor $r^6$ cancels and the inequality becomes $$(\cos^3\theta+\sin^3\theta)^2\ge\frac12.$$ Since $\cos^3\theta+\sin^3\theta=(\cos\theta+\sin\theta)(1-\sin\theta\cos\theta)$, writing $s=\sin\theta+\cos\theta=\sqrt2\sin\left(\theta+\frac\pi4\right)\in(1,\sqrt2]$ gives $\sin\theta\cos\theta=\frac{s^2-1}2$, so the inequality reads $$\left(\frac{s(3-s^2)}{2}\right)^2\ge\frac12,\quad\text{i.e.}\quad 3s-s^3\ge\sqrt2.$$ The function $s\mapsto 3s-s^3$ is decreasing on $(1,\sqrt2]$ (its derivative $3-3s^2$ is negative there) and equals $\sqrt2$ at $s=\sqrt2$, so the bound holds for all $0<\theta<\frac\pi 2$, with equality exactly at $\theta=\frac\pi4$, i.e. $x=y$.
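For what it's worth, the one-variable reduction can be checked numerically: after substituting $x=r\cos t$, $y=r\sin t$, the factor $r^6$ cancels and the original inequality is equivalent to $(\cos^3 t+\sin^3 t)^2\ge\frac12$ on $(0,\pi/2)$. A quick grid scan (just a sketch) confirms the minimum and where it is attained.

```python
import math

# Grid-search the minimum of (cos^3 t + sin^3 t)^2 on (0, pi/2).
n = 100000
m, argm = 2.0, 0.0
for k in range(1, n):
    t = (math.pi / 2) * k / n
    v = (math.cos(t) ** 3 + math.sin(t) ** 3) ** 2
    if v < m:
        m, argm = v, t
print(m, argm)  # minimum is 1/2, attained at t = pi/4
```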
{ "language": "en", "url": "https://math.stackexchange.com/questions/4641593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Elevator Disposition Proof of Impossibility Let the highest floor of a building be an integer L. There are 2 elevators A and B, where A starts at floor zero and B starts at floor L. If someone presses the elevator button on floor N, the closest lift to floor N will move to that floor. Neither one of the elevators can move if the other one is moving. If A and B are both at the same distance from floor N, the lift on the lower floor will move. If both elevators are on the same floor, lift A will move. How can it be proven that A will never be on a floor higher than B and B will never be on a floor lower than A? My train of thought right now is that this can be proven by showing that B will never be able to move by half or more of the distance between B and A. I don't really know how to prove that though. Maybe I could include arithmetic? I was working on a programming problem when I had this problem. At first, I miswrote something which resulted in some errors. I fixed it and tried experimenting with the code, then I realized that B will never be on a lower floor than A. I then made a sketch on paper to confirm it and thought about what a mathematical proof of something like this would look like, or what kinds of proofs can be used. Any kind of help or keywords to search on Google will be appreciated! Thanks!!
We need to prove that an invariant – that $A$'s floor is always strictly below $B$'s floor – holds after any button press if it holds before the press. These are all the cases: (1) If the button is pressed at or below $A$'s floor, or between $A$'s and $B$'s floors but closer to $A$ than to $B$, then $A$ moves down, or up by strictly less than half the floor gap between $A$ and $B$, respectively. Thus $A$ is still strictly below $B$. (2) By symmetry, the invariant holds if the button is pressed at or above $B$'s floor, or between the floors but closer to $B$ than to $A$. (3) If the button is pressed at a floor exactly halfway between $A$ and $B$, then since the lower lift is $A$, $A$ moves, and $A$ is still strictly below $B$ afterwards. (There must be at least two floors' distance between $A$ and $B$ before the press, and at least one afterwards, for this case to happen.) Assuming the natural $L>0$, the invariant thus holds under all conditions since it holds at the start.
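Since the question came out of a programming exercise, the case analysis can also be exercised mechanically. The `press` function below is an illustrative implementation of the stated rules (not part of the original problem), and the loop checks the invariant over many random presses.

```python
import random

def press(a, b, n):
    """Apply one button press at floor n under the stated rules."""
    da, db = abs(n - a), abs(n - b)
    if da < db:
        return n, b          # A is strictly closer: A moves
    if db < da:
        return a, n          # B is strictly closer: B moves
    # Equal distance: the lift on the lower floor moves;
    # if both are on the same floor, lift A moves.
    if a <= b:
        return n, b
    return a, n

rng = random.Random(1)
L = 10
for trial in range(200):
    a, b = 0, L
    for _ in range(500):
        a, b = press(a, b, rng.randint(0, L))
        assert a < b, (a, b)   # the invariant: A stays strictly below B
print("invariant held in all trials")
```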
{ "language": "en", "url": "https://math.stackexchange.com/questions/4641701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence in distribution under equivalent measures Say we have two probability measures $P$ and $Q$ on the same probability space and that they are equivalent (absolutely continuous with respect to each other). Then, one can show that convergence in probability under $P$ implies convergence under $Q$ and vice-versa (for example, see here). But does the same hold for convergence in distribution? Specifically, if we have a sequence of random variables $X_n$ and a limit $X$, is it true that $X_n \xrightarrow{D} X$ under $P$ if and only if $X_n \xrightarrow{D} X$ under $Q$? I would think it is since convergence in probability is generally stronger, but I can't seem to find a result like this anywhere.
Hint: On any probability space, $X,-X,X,-X,\cdots$ converges in distribution if and only if $X$ and $-X$ have the same distribution. For a counterexample, all you need is two probability measures $P$ and $Q$, absolutely continuous with respect to each other, such that $X$ has a symmetric distribution under $P$ but not under $Q$. I will leave the construction to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4642069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to add a line to a given diagram so that the resulting figure has rotational symmetry but not line symmetry? So this is a question I found in the math D book by David Rayner. The diagram is given below. How to add a line to this diagram so that the resulting figure has rotational symmetry but not line symmetry? I have seen a solution on the internet like this, but it doesn't work since the diagram still has line symmetry. Can anyone suggest a solution to this? It would be very helpful.
$$\mathsf{N}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4642229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve $\left(3x^2y-xy\right)dx+\left(2x^3y^2+x^3y^4\right)dy=0$ $\left(3x^2y-xy\right)dx+\left(2x^3y^2+x^3y^4\right)dy=0$ I'm trying to solve this first-order differential equation. I know it's not an exact equation, so I'm trying to use the method taught in class to solve it. I get stuck trying to find the integrating factor. Below is what I have: $\frac{\partial M}{\partial y}=\frac{\partial }{\partial y}\left(3x^2y-xy\right) = 3x^2-x$ $\frac{\partial N}{\partial x}=\frac{\partial }{\partial x}\left(2x^3y^2+x^3y^4\right) = 6x^2y^2+3x^2y^4$ And since $\frac{\partial M}{\partial y}\neq\frac{\partial N}{\partial x}$, we have that it is not exact. So we apply the formula $\xi(x) =\frac{M_y-N_x}{N}$ to get a function of $x$ alone, or the formula $\xi(y) =\frac{M_y-N_x}{-M}$ to get a function of $y$ alone. We then use $\xi$ to get an integrating factor $\mu(x)=e^{\int \xi(x)\, dx}$ or $\mu(y)=e^{\int \xi(y)\, dy}$. Now, when I apply either one of the formulas for $\xi$, I always get a result dependent on both $x$ and $y$, so I'm unable to get the integrating factor. Is there supposed to be a simpler way to solve this? I'm using this method because it's what was taught in class, but is there another simple way to solve this that I'm not seeing?
$$\left(3x^2y-xy\right)dx+\left(2x^3y^2+x^3y^4\right)dy=0$$ $$xy\Big(\left(3x-1\right)dx+x^2y\left(2+y^2\right)dy\Big)=0$$ First trivial solution: $$y(x)=0$$ Second trivial solution: $$x(y)=0$$ Remaining equation: $$\left(3x-1\right)dx+x^2y\left(2+y^2\right)dy=0$$ Changing $y$ into $-y$ doesn't change the equation. This suggests the change of function: $$Y(x)=y^2(x)$$ $$\left(3x-1\right)dx+\frac12 x^2\left(2+Y\right)dY=0$$ The equation is separable. Solving it for $Y(x)$ is straightforward.
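One way to sanity-check the separation numerically: dividing the remaining equation by $x^2$ gives $(3/x - 1/x^2)\,dx + (2y+y^3)\,dy = 0$, so $F(x,y)=3\ln x + 1/x + y^2 + y^4/4$ should be constant along any solution. The sketch below integrates the ODE with a hand-rolled RK4 step and watches $F$ (the starting point and step size are arbitrary choices for illustration).

```python
import math

def F(x, y):
    # Candidate conserved quantity from the separated equation.
    return 3 * math.log(x) + 1 / x + y * y + y**4 / 4

def slope(x, y):
    # dy/dx from (3x - 1)dx + x^2 y (2 + y^2) dy = 0
    return -(3 * x - 1) / (x * x * y * (2 + y * y))

x, y, h = 1.0, 1.0, 0.0005
F0 = F(x, y)
while x < 1.5 - 1e-12:
    k1 = slope(x, y)
    k2 = slope(x + h / 2, y + h * k1 / 2)
    k3 = slope(x + h / 2, y + h * k2 / 2)
    k4 = slope(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(abs(F(x, y) - F0))  # tiny: F is conserved along the solution
```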
{ "language": "en", "url": "https://math.stackexchange.com/questions/4642429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
No Identity Element For $x, y ∈$ $\mathbb{R}$, let $x△y = 2(x + y)$. Then $△$ is a binary operation on $\mathbb{R}$. Show that there is no identity element for $△$ on $\mathbb{R}$. I have tried $x△e = e△x=x$ I don't know what else to do.
Suppose we have an identity $e$. Then $$0 \triangle e =0\implies2(0+e)=0\implies e=0.$$ But $$2 \triangle e =2\implies2(2+e)=2\implies e=-1.$$ Thus $-1=e=0$, which is a contradiction. So there is no such identity $e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4642566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How can I show that this piecewise linear map $T$ is continuous? Let $V=L^1([0,2])$ be the space of integrable functions, equipped with the norm: $$\lVert f(x)\rVert=\int_0^2\vert f(x) \vert dx$$ Consider the linear mapping: $$T: V \to V, [Tf](x):=\begin{cases} f(x+1) & \text{for} &x\in(0,1] \\f(x-1) & \text{for} & \text{elsewhere} \end{cases}$$ Question: How can I show that this is continuous? For a piecewise function I would just check if the values of the two functions at the point $x=1$ are the same. However, I don't have explicit functions here. What do I need to do to show $T$ is continuous or not? Sample solution from an old exam (I am just not sure what's being done here): a) $$\lVert T\rVert=\sup_{\substack{f\neq0\\ f\in V}}\frac{\lVert Tf\rVert}{\lVert f\rVert}=\sup\frac{\int_0^1\left\lvert f\left(x+\frac{1}{2}\right)\right\rvert~\mathrm{d}x+\int_1^2\left\lvert f\left(x\right)\right\rvert~\mathrm{d}x}{\int_0^2\left\lvert f\left(x\right)\right\rvert~\mathrm{d}x}\leq2\cdot\frac{\int_0^2\left\lvert f\left(x\right)\right\rvert~\mathrm{d}x}{\int_0^2\left\lvert f\left(x\right)\right\rvert~\mathrm{d}x}=2<\infty$$ $\implies$ continuous b) $\sup$: find function s.t. $\lVert T\rVert$ is maximal: $$f_m(x)=\begin{cases}\lambda,&1\leq x\leq \frac{3}{2},\\ 0&\text{else},\end{cases}$$ $\lambda\in\mathbb{R}$, $\lambda\neq0$. $$\lVert T\rVert=\frac{\lVert Tf_m\rVert}{\lVert f_m\rVert}=\frac{\int_0^1\left\lvert f_m\left(x+\frac{1}{2}\right)\right\rvert~\mathrm{d}x+\int_1^2\left\lvert f_m\left(x\right)\right\rvert~\mathrm{d}x}{\int_0^2\left\lvert f_m\left(x\right)\right\rvert~\mathrm{d}x}=\frac{\lvert\lambda\rvert\cdot0.5+\lvert\lambda\rvert\cdot 0.5}{\lvert\lambda\rvert\cdot0.5}=2$$
You could just go down to the definition of continuity: $T$ is continuous if, for every $\epsilon > 0$, there exists some $\delta > 0$ such that if $\|f-g\| < \delta$, then $\|Tf - Tg\| < \epsilon$. To start off, I would just write out what $\|Tf - Tg\|$ is equal to, and see where that takes me. Notice that it is equal to $$\int_{0}^2 |(Tf)(x) - (Tg)(x)|dx = \int_{0}^1|(Tf)(x) - (Tg)(x)|dx + \int_{1}^2|(Tf)(x) - (Tg)(x)|dx$$ Now we know that on the first interval, $(Tf)(x) = f(x+1)$ and $(Tg)(x)=g(x+1)$, and similarly for the second interval, so that simplifies to $$\int_{0}^1|f(x+1) - g(x+1)|dx + \int_{1}^2|f(x-1) - g(x-1)|dx$$ The next steps I leave to you, with a hint that you might want to consider some sort of change of variables in the two integrals :). Hint 2: In the first integral, substitute $y=x+1$ and in the second, substitute $y=x-1$. Then take a look at what you get and compare it to $\|f - g\|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4642746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Two independent weak convergence sequences have dependent limit? Let $X_n\Rightarrow X$ and $Y_n\Rightarrow Y$ as $n\to\infty$, where all these $X$ and $Y$ are well defined random variables and $\Rightarrow$ is the convergence in distribution. Could you give me an example in which $X_n$ and $Y_n$ are independent, but $X$ and $Y$ are not independent? Thank you!
Let $a$ be such that $F_X(x)$ is continuous at $a$, and $b$ be such that $F_Y(x)$ is continuous at $b$. Then $F_{X_n}(a)\to F_X(a)$, $F_{Y_n}(b)\to F_Y(b)$. On the other hand, $$ \mathbb{P}(X_n\le a,Y_n\le b)=\mathbb{P}(X_n\le a)\ \mathbb{P}(Y_n\le b) $$ by independence. As a result, the left-hand side above satisfies $$ F_{X_n,Y_n}(a,b)\to F_X(a) F_Y(b) $$ so if we set $F_{X,Y}(a,b):=F_X(a)F_Y(b)$ then $$ \mathbb{P}(X_n\le a,Y_n\le b)\to F_{X,Y}(a,b)\quad\text{as }n\to \infty $$ and thus the pair $(X_n,Y_n)$ converges in distribution to a random vector whose CDF is given by $F_{X,Y}(a,b)$; given its product form, $X$ and $Y$ are independent. The only thing which remains to be considered is the case when $F_X(x)$ is not continuous at $a$ (and similarly for $Y$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4642920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Combinatorics question regarding arranging $13$ balls in a line but with a condition that neither of the ends has a blue ball. I have $13$ balls: $6$ red, $4$ blue, $3$ yellow. I want to arrange them in a line such that the right side and the left side do not have blue balls (balls of the same color are not distinct). I tried to calculate the options without any terms or conditions, which is $13!$ divided by $6!4!3!$ (! is factorial), and then subtract the options that contain having blue ball on the right side and the left side which is: ($4$ choose $2$ for the blue balls on each side) multiplied by ($11!$ divided by $6!2!3!$, which are the remaining $11$ balls left). But I'm not getting the right answer. What am I doing wrong in the process?
You write: "and then subtract the options that contain having blue ball on the right side and the left side". But what you need is to subtract off the number of arrangements that have a blue ball on the left or right side. You multiply by $\binom{4}{2}$, I guess to pick which blue balls go on the sides? But you don't need this factor since they're indistinguishable. As soon as you say a B goes on the left side, all you need is to specify the other 12 balls. Instead I would break it into 3 steps: (1) find the number of arrangements where B is on the left; (2) find the number of arrangements where B is on the right; (3) subtract the number of arrangements where B is on both sides, to avoid double counting.
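The three steps amount to inclusion–exclusion on multinomial coefficients, and the result can be cross-checked against a direct placement argument (a sketch in Python):

```python
from math import factorial, comb

def multinomial(*ks):
    """Number of distinct arrangements of a multiset with these multiplicities."""
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

total     = multinomial(6, 4, 3)   # all arrangements of 6R, 4B, 3Y
left_blue = multinomial(6, 3, 3)   # a B fixed on the left, arrange the other 12
both_blue = multinomial(6, 2, 3)   # a B on both ends, arrange the other 11
answer = total - 2 * left_blue + both_blue
print(answer)  # 27720

# Direct count for comparison: place the 4 B's in the 11 interior positions,
# then choose positions for the 6 R's among the 9 remaining (Y's fill the rest).
print(comb(11, 4) * comb(9, 6))  # 27720
```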
{ "language": "en", "url": "https://math.stackexchange.com/questions/4643057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Probability of Twisting a Phone Cord During a Call I invented this problem and am unable to solve it. It is not a homework problem. I make a phone call on a standard handset (with a coiled cord). I start with the phone on my right ear. With probability p I talk long enough that I transfer the phone to my left ear, putting a twist in the cord. If I hang up at that point the cord remains twisted. But at probability p^2 I talk even longer and transfer the phone back to my right ear, removing the twist. At p^3 I put the twist back, and at p^4 I remove the twist again, and so on. If the call can be unbounded, how do I compute the probability P(p) that a call will put a twist in the cord? Here is a diagram of the call, where the (possibly infinite) call is on the horizontal and the possible hang-ups on the vertical. 1 means a twist, and 0 means no twist. p^1 p^2 p^3 p^4 0---------1---------0---------1---------0... Phone call --> | | | | | 1-p^1 | 1-p^2 | 1-p^3 | 1-p^4 Hangups | | | | | | 0 1 0 1 v There are scenarios that result in a 1: Q1 = p^1 * (1 - p^2) Twist and no untwist Q3 = p^1 * p^2 * p^3 * (1 - p^4) Twist, untwist, twist again and no untwist Q5 = p^1 * p^2 * p^3 * p^4 * p^5 * (1 - p^6) Two aborted twists and a twist ... For any given call, there can be at most one Q. But that seems to mean the exclusive-or of an infinite number of Qs! How can that be done? Or is that the wrong approach? I'm looking for: How to calculate P(p)? What is the limit of P(p) as p approaches 1.0? (if it exists) (Graphs of P(p) and (P(p) - p) might be interesting) Update: When p is 1.0 my Q scenarios all go to 0 because of the last term. It is impossible to avoid untwisting.
You do not XOR an infinite number of probabilities, you add them. Given your probability tree: $$P(p)=p-p^3+p^6-p^{10}+\cdots=\sum_{n=1}^\infty(-1)^{n+1}p^{n(n+1)/2}$$ which does not have a closed-form solution. (Not even Jacobi theta functions as seen here will help.) However, $\lim_{p\to1}P(p)=\frac12$ as can be intuitively derived from the twisting and untwisting process on an infinite-length call. This is a plot of $P(p)$:
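A short Python sketch can compare the series with a direct simulation of the call process (stage $k$ is survived with probability $p^k$, each survival toggling the twist), and also illustrate the behaviour as $p\to1$:

```python
import random

def P_series(p, terms=200):
    # P(p) = sum_{n>=1} (-1)^(n+1) * p^(n(n+1)/2)
    return sum((-1) ** (n + 1) * p ** (n * (n + 1) // 2)
               for n in range(1, terms + 1))

def P_sim(p, trials, rng):
    twisted = 0
    for _ in range(trials):
        state, k = 0, 1
        while rng.random() < p**k:   # survive stage k with probability p^k
            state ^= 1               # each transfer toggles the twist
            k += 1
        twisted += state
    return twisted / trials

rng = random.Random(2)
print(P_series(0.5))              # about 0.3897
print(P_sim(0.5, 100000, rng))    # close to the series value
print(P_series(0.999))            # approaching 1/2 as p -> 1
```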
{ "language": "en", "url": "https://math.stackexchange.com/questions/4643214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Solving $|x-1|^{\log_2(4-x)} \le|x-1|^{\log_2 (1+x)}$ Let us solve $|x-1|^{\log_2(4-x)}\le|x-1|^{\log_2 (1+x)}......(*)$ Let $4-x>0 ~\& ~1+x>0$.......(1) Case 1: $|x-1|\le 1\implies 0\le x\le 2$........(2) We get $\log_2(4-x) \color{red}{\ge} \log_2(1+x)\implies 4-x \ge 1+x \implies x\le 3/2$....(3) The overlap of (1,2,3) gives $x\in [0,3/2]$........(4) Case 2: $|x-1|\ge 1 \implies x\le 0 ~or~ x\ge 2$....(5) We get $\log_2(4-x) \color{red}{\le} \log_2(1+x)\implies 4-x \le 1+x \implies x\ge 3/2$....(6) Taking overlap of (1,5,6), we get $x \in [2,4)$. So the final solution is: $[0,3/2] \cup [2,4).$ Now the question is whether this solution is complete and how else this (*) could be solved?
$|x-1|^{\log_2(4-x)}\le|x-1|^{\log_2 (1+x)}......(1)$ Note that $x=0,1,2$ are already solutions. Taking $\log_a$ of both sides (with $a\in (0,1)$ or $a\in(1,\infty)$), let us take $a=2$. We get $$\log_2(4-x) \log_2|x-1|\color{red}{\le} \log_2(1+x)\log_2|x-1|\quad(2)$$ Since $\log_2(4-x)$ and $\log_2(1+x)$ have to be real we declare $$(4-x)>0~\&~ (1+x)>0\quad(3)$$ Case 1: $|x-1|<1\implies 0<x< 2, \log_2|x-1|\color{red}{<}0\quad(4)$, From (2) we get $$\log_2(4-x) \color{red}{\ge} \log_2(1+x) \implies 4-x \ge 1+x \implies x \le 3/2\quad(5)$$ Taking the intersection of (3,4,5) and including $x=0,1$, we get the solution $x\in [0,3/2]\quad (6)$ Case 2: $|x-1|> 1 \implies x<0 ~or~x > 2, \log_2|x-1|\color{red}{>}0 \quad (7)$ This time from (2), we get $$\log_2(4-x) \color{red}{\le} \log_2(1+x) \implies 4-x \le 1+x \implies x\ge 3/2 \quad (8)$$ Finally, the intersection of (3,7,8) together with $x=2$ gives the solution $2\le x< 4 \quad (9)$ So the final solution is $[0,3/2] \cup [2,4)$. Note that choosing another base, e.g. $1/2$, won't change the final answer.
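A brute-force scan (just a sketch, with small tolerances to dodge floating-point edge cases at the boundary points $0$, $3/2$, $2$) agrees with the solution set $[0,3/2]\cup[2,4)$:

```python
import math

def holds(x):
    # The original inequality, valid for -1 < x < 4.
    lhs = abs(x - 1) ** math.log2(4 - x)
    rhs = abs(x - 1) ** math.log2(1 + x)
    return lhs <= rhs + 1e-12

def predicted(x):
    return (0 <= x <= 1.5) or (2 <= x < 4)

x = -0.995
while x < 4:
    # skip points too close to the boundary values 0, 3/2, 2
    if min(abs(x), abs(x - 1.5), abs(x - 2)) > 1e-6:
        assert holds(x) == predicted(x), x
    x += 0.005
print("scan agrees with [0, 3/2] U [2, 4)")
```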
{ "language": "en", "url": "https://math.stackexchange.com/questions/4643374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate the volume $λ_3(A)$ of $A := \{ (x, y, z) ∈\mathbb{R}^3 : x^2 ≤ y^2 + z^2 ≤ 1\} $ I have a lot of problems with exercises where I must calculate the volume of a set using integrals. Here is an example: Let the set $A$ be $A := \{ (x, y, z) ∈\mathbb{R}^3 : x^2 ≤ y^2 + z^2 ≤ 1\} $. Calculate its volume $λ_3(A)$. So what I thought is to do something like this: $λ_3(A)=λ_3(A_1)-λ_3(A_2)$ Where $A_1 := \{ (x, y, z) ∈\mathbb{R}^3 : y^2 + z^2 ≤ 1\}$ and $A_2 := \{ (x, y, z) ∈\mathbb{R}^3 : x^2 ≤ y^2 + z^2 \}$ But here comes my problem for every exercise like this: I don't know how to find the limits of my triple integral. Can someone help me?
In cylindrical coordinates with $x$ as the height axis the region becomes $$\{(x,r,\theta)\in\mathbb R^3:x^2\le r^2\le1\}$$ This can be simplified a bit since $r$ is positive: $$\{(x,r,\theta)\in\mathbb R^3:|x|\le r\le1\}$$ The region is a cylinder with radius $1$ and height $2$, minus two right circular cones with base radius $1$ and height $1$. The volume is thus $$\pi\cdot2-2\cdot\frac13\pi\cdot1=\frac43\pi$$ We can check this answer by integrating in the aforementioned cylindrical coordinates: $$\int_0^{2\pi}\int_0^1\int_{-r}^r1\cdot r\,dx\,dr\,d\theta$$ $$=2\pi\int_0^12r^2\,dr=4\pi[r^3/3]_0^1=\frac{4\pi}3$$
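The answer can also be cross-checked numerically: for each $(y,z)$ in the unit disk the $x$-extent has length $2\sqrt{y^2+z^2}$, so the volume is $\iint_{y^2+z^2\le1} 2\sqrt{y^2+z^2}\,dy\,dz$, which a midpoint-rule grid approximates well (a sketch; the grid size is arbitrary):

```python
import math

n = 800                 # grid resolution over [-1, 1] x [-1, 1]
h = 2.0 / n
vol = 0.0
for i in range(n):
    y = -1 + (i + 0.5) * h
    for j in range(n):
        z = -1 + (j + 0.5) * h
        r = math.hypot(y, z)
        if r <= 1:
            vol += 2 * r * h * h   # x runs over [-r, r], length 2r
print(vol)  # compare with the exact 4*pi/3 = 4.18879...
```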
{ "language": "en", "url": "https://math.stackexchange.com/questions/4643575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Problem with understanding how to scale a function I'm trying to scale the following function: $$y=\sqrt{1-\frac{x^2}{a+bx}\times c}$$ For example, I want to scale it proportionately by 2. Turns out I need to multiply a and c by 4 (2 squared), but b by just 2. I assume this is something about square root and it also has to do with whether the constant is standing alone or being multiplied by x, but I'm not sure of what's actually going on and how to explain this. Can someone pls help? Edit: for clarification, what I am trying to achieve is to keep the shape of the original function but scale the major axis of the half ellipse (domain 0, 4.2), not just scaling vertically or horizontally. Essentially trying to get the same shape with bigger size. As the screenshot shows, in order to scale it proportionately by 2, I had to multiply b by 2, but a and c had to be multiplied by 4.
In order to scale the graph of an equation in two variables $x,y$ by a constant factor $k$ one replaces each occurrence of $x$ with $\frac{x}{k}$ and each occurrence of $y$ with $\frac{y}{k}$. In your example, to double the scale for both $x$ and $y$, $$y=\sqrt{1-\frac{x^{2}}{a+bx}\times c}$$ must be replaced with $$ \frac{y}{2}=\sqrt{1-\frac{\left(\frac{x}{2}\right)^{2}}{a+b\left(\frac{x}{2}\right)}\cdot c}$$ Here is a desmos illustration.
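A quick check of the substitution rule with some hypothetical constants (the values of $a$, $b$, $c$ below are arbitrary, chosen only so the square root is defined at the sampled points): every point $(x,y)$ on the original curve should map to $(kx, ky)$ on the scaled curve.

```python
import math

a, b, c = 3.0, 1.0, 0.5   # hypothetical sample constants
k = 2.0                    # scale factor

def on_original(x, y, tol=1e-9):
    return abs(y - math.sqrt(1 - x * x * c / (a + b * x))) < tol

def on_scaled(x, y, tol=1e-9):
    # y/k = sqrt(1 - (x/k)^2 * c / (a + b*(x/k)))
    return abs(y / k - math.sqrt(1 - (x / k) ** 2 * c / (a + b * (x / k)))) < tol

for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    y = math.sqrt(1 - x * x * c / (a + b * x))
    assert on_original(x, y)
    assert on_scaled(k * x, k * y)   # the image point lies on the scaled curve
print("scaled curve is the original magnified by", k)
```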
{ "language": "en", "url": "https://math.stackexchange.com/questions/4643908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Let $a_n$ be a sequence of positive reals with $\lim na_n = a \in (0, \infty)$. Prove that $\lim_{x \to 1^-} f'(x)(1-x) = a$ for $f(x) = \sum a_nx^n$ Let $\{a_n\}$ be a sequence of positive numbers such that $\,a=\lim\limits_{n\to\infty}na_n\,$ exists with $a \in (0, \infty)$. Let $f(x) = \displaystyle\sum_{n=0}^\infty a_nx^n$ for $x \in (-1,1)$. Prove that $f(x)$ is convergent and continuously differentiable for all $x \in (-1,1)$. Also, prove that $\lim\limits_{x\to 1^-}f'(x)(1-x) = a$. Here is my work so far: let $s_N(x) := \displaystyle\sum_{n=0}^N a_nx^n$. Obviously $s_N(0)$ converges to $a_0$. Also, $s_N'(x)$ converges uniformly, since there exists $N^\sim$ such that for all $n \geq N^\sim$ we have, for any arbitrary $r \in (0, 1)$ and for any $x$ such that $x \in [-r,r]$, $|na_nx^{n-1}| \leq 2|a||x|^{n-1} \leq 2|a|r^{n-1}$. Thus, by the Weierstrass M-test, $s_N'(x)$ converges uniformly on $[-r,r]$ for all $r \in (0,1)$. Thus we have that, for all $x \in [-r,r]$, $f'(x) = \displaystyle\sum_{n=1}^\infty na_nx^{n-1}$. Furthermore, since $f'(x)$ is a uniform limit of continuous functions, it is itself continuous on $[-r,r]$. But since for any $x \in (-1, 1)$ there exists $r$ such that $|x| < r < 1$, we have that $f$ is continuously differentiable for all $x \in (-1,1)$. I am not sure how to prove that $f(x)$ is convergent and, more importantly, how to prove that $\displaystyle\lim_{x \to 1^-} f'(x)(1-x) = a$. I would appreciate any help. Thank you!
If you have proven that the series defining $f'$ converges in $(-1,1)$, then the convergence of the series defining $f$ readily follows. If you want a direct proof, I think you could use the root test: $$\lim_{n \to \infty} \sqrt[n]{a_n} = \lim_{n \to \infty} \frac{\sqrt[n]{n a_n}}{\sqrt[n]{n}}=1.$$ For the second part of your question, you want to use Abel's theorem. If you know it, it boils down to expressing $f'(x)(1-x)$ for $x\in(-1,1)$ as a series, and seeing what it gives you if you formally evaluate this series at $x=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4644046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The minimal partition of a triangle into pentagons The question about the existence of a cycle of a given length in a $3$-connected planar graph all of whose faces are pentagonal, and also attempts to solve it, led to the following problem. Insert into a triangle a planar graph with pentagonal faces only, so that the degree of each of its vertices is not less than three. This problem was solved here. The solution is obtained as follows. Consider a plane graph with $7$ vertices: Each face in it is a pentagon. We inscribe a dodecahedron graph into two of the three pentagons. As a result we obtain a graph with $37$ vertices, all of whose faces inside the triangle are pentagons. The degree of each vertex of this graph is at least $3$. Denote this graph by $Q$. We can see a plane image of graph $Q$ here and here. My question. Is graph $Q$ minimal in the number of vertices among planar graphs in which there is a single triangular face, all other faces are pentagons, and the degree of each vertex is at least $3$? This question is posed purely out of curiosity and because I could neither prove the minimality of $Q$ nor construct a smaller graph with this property. Addendum. The answers of Parcly Taxel and student91 show that $37$ was too rough an estimate for a graph with the specified property. The graph constructed by Parcly Taxel is symmetric and especially beautiful. But I am itching to clarify my question. Clarifying Question. After all, what is the minimal number of vertices in planar $3$-connected graphs all of whose faces are pentagons except for exactly one triangular face? As follows from Euler's formula for planar graphs, if a graph has a single triangular face, its other faces are pentagons, three vertices are of degree $4$ and the others are of degree $3$, then such a graph must have $25$ vertices, and Parcly Taxel constructed it.
Take the regular dodecahedron and open up the three faces around one vertex like petals of a flower. To each of the new degree-$2$ vertices attach a new edge and new vertex, then join these three latter new vertices by a triangle. The result is a $3$-connected partition of a triangle into $15$ pentagons using $25$ vertices. This graph was found using the plantri command `plantri_ad -F3_1^1F5F6 16` followed by a little processing in Sage. To show that this pentangulation is minimal, note that the dual of a $13$-pentagon example (the number of pentagons must be odd by Euler's polyhedron formula) would be a pure triangulation minus $2$ edges to leave $13$ vertices of degree $5$ and one of degree $3$. Consider all possible ways of adding these two edges: (1) The extra edges are both incident to the degree-$3$ vertex. This leaves just degree-$5$ and degree-$6$ vertices, and `plantri_ad -F5F6 14` will find all such triangulations. But the only graph returned is the Kleetope of the hexagonal antiprism, where the degree-$6$ vertices have no mutual neighbour that can correspond to the triangle face in the dual. (2) One extra edge is incident, giving a degree-$4$ vertex and either of $\{6,6,6\}$ or $\{7,6\}$ (`plantri_ad -F4_1^1F5F6F7 14`) as the other non-degree-$5$ vertices before edge removal. There is only one graph in this class with $\{4,6,6,6\}$ as the "odd" vertex degrees, so the removed edges would have to form a perfect matching on these vertices. But the graph induced by the "odd" vertices has no perfect matching. (3) The extra edges solely connect degree-$5$ vertices and leave "odd" degrees as $\{6,6,6,6\}$ or $\{7,6,6\}$ (`plantri_ad -F3_1^1F5F6F7 14`). There is only one graph in this class with $\{7,6,6\}$ as the "odd" degrees; the degree-$7$ vertex would have to be adjacent to both degree-$6$ vertices in order to correspond to a triangle pentangulation, which is unfortunately not the case here (the degree-$7$ vertex is only adjacent to one of the degree-$6$s). 
We can exclude the $11$- and $9$-pentagon examples similarly, the latter of which is the absolute minimum from considerations of the sum of degrees of the pentangulation's vertices. Thus the graph above constitutes a minimal pentangulation of the triangle. Again in a similar fashion, we can show that this partition of a square into $14$ pentagons is minimal:
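The face/edge/vertex counts used above (and quoted in the question's addendum) can be verified with a few lines of Euler's-formula bookkeeping. This is only an arithmetic consistency check, not a planarity or connectivity check.

```python
def counts(polygon_faces):
    """polygon_faces: {gon: count}, with the outer face included.
    Returns (V, E, F) forced by Euler's formula V - E + F = 2."""
    F = sum(polygon_faces.values())
    E = sum(g * c for g, c in polygon_faces.items()) // 2  # each edge borders 2 faces
    V = 2 - F + E
    return V, E, F

# Triangle split into 15 pentagons (the outer face is the triangle):
V, E, F = counts({5: 15, 3: 1})
print(V, E, F)  # 25 39 16
# 2E = sum of degrees; with minimum degree 3 the excess 2E - 3V = 3
# corresponds to exactly three degree-4 vertices:
assert 2 * E - 3 * V == 3

# Square split into 14 pentagons:
print(counts({5: 14, 4: 1}))  # (24, 37, 15)
```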
{ "language": "en", "url": "https://math.stackexchange.com/questions/4644257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Lemma 4.4 in Gilbarg-Trudinger In equation (4.12) of the proof of Lemma 4.4 (2nd inequality, 2nd term), Gilbarg-Trudinger uses the following inequality $$\frac{1}{\omega_n}\int_{B}|x-y|^{\alpha-n}dy\leq \frac{n}{\alpha}(3R)^\alpha,$$ where $B=B_x(2R)$ for some fixed radius $R>0$, $\alpha\in (0,1)$, and $\omega_n=\text{volume of unit ball in }\mathbb{R}^n$. I am confused as to why there's a "3" on the RHS, and would appreciate it if someone could tell me where the mistake in my computation is. I'm sure this is completely elementary. We compute \begin{align*} \int_B |x-y|^{\alpha-n}dy&=\frac{n\omega_n(2R)^{n-1}}{(2R)^{n-1}}\int_{0}^{2R}\rho^{\alpha-n}(\rho^{n-1}d\rho)\\ &=(n\omega_n)\frac{\rho^\alpha}{\alpha}\bigg\vert_0^{2R}\\ &=\frac{n\omega_n}{\alpha}(2R)^\alpha, \end{align*} where the first equality uses the co-area formula for the ball of radius 2R, along with a change of variables. The rest is obvious. This "3" appears later in the proof when estimating $|I_3|$, so I'm fairly sure I'm missing something.
The proof starts with "For any $x$ in $B_1$..", with $B_i = B_i(x_0)$ (as opposed to $B_2 = B_2(x)$, what you seem to be assuming in your calculation).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4644473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the maximum area of a triangle when the distances from a point to the three vertices are fixed (1) $P$ is inside $\Delta ABC$ and $$PA=2,PB=7,PC=11.$$ Let the area of the triangle be $S$; find the maximum value of $S$. (2) $P$ is inside $\Delta ABC$ and $$PA=x,PB=y,PC=z.$$ Let the area of the triangle be $S$; find the maximum value of $S$. My approach: for (1), with $B,C$ fixed, we can easily see that the triangle's area is largest when the extension of $PA$ is perpendicular to $BC$. By symmetry, $P$ must then be the orthocenter. Of course, this is not rigorous. How can we show that when the area reaches its maximum, $P$ must be the orthocenter, without using partial derivatives?
For the first case $PA = 2, PB = 7 , PC = 11 $ So we let $P$ be at the origin $(0,0)$ , and we draw three circles $A,B,C$ of radii $2, 7, 11$ respectively. Now we can select point $A$ to be at $(2, 0)$. If point $B$ is fixed at $(x_2, y_2)$ , and we vary point $C$ along the perimeter of circle $C$, then to obtain the maximum area of $\triangle ABC$, point $C$ must have the maximum possible distance from the line segment $AB$, and this can only happen if the extension of segment $PC$ is perpendicular to segment $AB$, which is what you stated in the question. Extending this result to all three vertices, we deduce that in the maximum area triangle, $PA$ is perpendicular to $BC$ , and $PB$ is perpendicular to $AC$ , and $PC$ is perpendicular to $AB$. Hence point $P$ must be the orthocenter of $\triangle ABC$. Now for the given values of radii, if $A$ is at $(2, 0)$, then $BC$ lies parallel to the $y$ axis, i.e. $B = (x, \sqrt{49 - x^2} )$ $ C = (x, -\sqrt{ 121 - x^2} )$ And we have to determine $x$ such that $PB$ is perpendicular to $AC$. Since $P$ is the origin, the vector $PB$ is just $B$, and $ AC = (x - 2, - \sqrt{121 - x^2} ) $ Hence, by using the dot product $ PB \cdot AC = x (x - 2) - \sqrt{ (49 - x^2) (121 - x^2) } = 0 $ Hence, $ x^2 - 2 x = \sqrt{ (49 - x^2)(121 - x^2) } $ Squaring $ x^4 + 4 x^2 - 4 x^3 = (49)(121) - (170 x^2) + x^4 $ So that, $ 4 x^3 - 174 x^2 + 5929 = 0 $ Solving gives the following solutions: $-5.5 , 6.31346652052679 , 42.6865334794732 $ The valid root is the negative one: $-5.5$ Hence, $ BC = \sqrt{ 49 - (-5.5)^2 } + \sqrt{ 121 - (-5.5)^2 } = \sqrt{18.75} + \sqrt{90.75} $ Therefore, the maximum area is $ \frac{1}{2} (7.5) ( \sqrt{18.75} + \sqrt{90.75} ) = 30 \sqrt{3} $
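As a quick numerical sanity check (not part of the original argument; a Python sketch using only the standard library), one can verify that $x=-5.5$ kills the cubic, that the orthocenter condition holds there, and that the resulting area is $30\sqrt3$:

```python
import math

x = -5.5
# x should satisfy 4x^3 - 174x^2 + 5929 = 0
residual = 4 * x**3 - 174 * x**2 + 5929

# P = (0,0), A = (2,0); B and C on the circles of radius 7 and 11 about P
A = (2.0, 0.0)
B = (x, math.sqrt(49 - x**2))
C = (x, -math.sqrt(121 - x**2))

# orthocenter condition used in the answer: PB . AC = 0 (PB is just B here)
AC = (C[0] - A[0], C[1] - A[1])
dot = B[0] * AC[0] + B[1] * AC[1]

# area of triangle ABC via the shoelace formula
area = 0.5 * abs(
    A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1])
)
```

Running this gives `area` agreeing with $30\sqrt3\approx51.96$ to machine precision.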
{ "language": "en", "url": "https://math.stackexchange.com/questions/4645024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculating line integral on a vector field, help me find the mistake Alright, so, I have vector field: $F=[p(x,y), q(x,y)]=[y^3+e^{x^2}, x^3+{\tan}^2y]$. I need to calculate $\oint_Lpdx+qdy$, where $L: x^2+y^2+4y=0$. I transform it to $x^2 + (y+2)^2 = 4$, i.e. a circle with $r=2$ with origin at $(0,-2)$. The circle is "positively oriented", so I guess the integral should be going in counterclockwise direction. The lecture from which this example assignment comes introduces line integrals and Green's theorem. Given the presence of $e^{x^2}$, using the theorem is a must. Thus: $$\oint_Lpdx+qdy = \iint_D (\frac{\partial{q}}{\partial{x}}-\frac{\partial{p}}{\partial{y}})dxdy = \iint_D(3x^2-3y^2)dxdy$$ I then try to change coordinates to polar. \begin{equation} \begin{cases} x &= R\cos{\varphi} \\ y + 2 &= R\sin{\varphi} \\ \end{cases} \end{equation} After substituting the circle equation, $R=2$. Then I define $D$ area: \begin{equation} D = \begin{cases} r \in (0, 4) \\ \varphi \in (\pi, 2\pi) \\ \end{cases} \end{equation} Back to double integral: $$\int_\pi^{2\pi}\int_0^4(3(2\cos\varphi)^2-3(2\sin\varphi-2)^2)rdrd\varphi$$ Aaaaand, apparently after verifying with WolframAlpha, I have already failed here. The answer after solving the double integral should be $72\pi$. I get something like $-382-288\pi$ (might be inaccurate, writing from memory). Can you help me find the problem? I suspect I'm being dumb about the polar coordinate substitution, or $r$ has wrong range, or the Jacobian is wrong
After your transformation $x=x'$ and $y=y'-2$ since $dxdy=dx'dy'$ the integral becomes $\int\int_{D'}3(x^2-(y-2)^2)dxdy$ where $D': 0\leq r\leq 2$, $0\leq\theta\leq 2\pi$. Hence, the integral is $$\int_0^{2\pi}\int_0^23(r^2\cos2\theta+4r\sin\theta-4)rdrd\theta=\int_0^{2\pi}\int_0^2(-12)rdrd\theta=(-12)(2\pi)(\frac{2^2}{2})=-48\pi$$ ... Another way: The circle in polar coordinates is $r=-4\sin\theta$ and so the domain is $D: 0\leq r\leq -4\sin\theta$, $\pi\leq\theta\leq 2\pi$. So, the integral is $$\int_\pi^{2\pi}\int_0^{-4\sin\theta}(3r^2\cos2\theta)rdrd\theta=192\int_\pi^{2\pi}\sin^4\theta\cos2\theta d\theta=192(-\frac\pi 4)=-48\pi$$ where I took some help.
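An independent numerical check of the value $-48\pi$ obtained above (a Python midpoint-rule sketch in the shifted polar coordinates, not part of the original answer):

```python
import math

# Midpoint-rule evaluation of the integral from the answer:
# ∬ 3(r²cos2θ + 4r sinθ − 4) r dr dθ  over  0 ≤ r ≤ 2, 0 ≤ θ ≤ 2π.
nr, nt = 400, 400
dr, dt = 2.0 / nr, 2.0 * math.pi / nt
total = 0.0
for i in range(nr):
    r = (i + 0.5) * dr
    for j in range(nt):
        th = (j + 0.5) * dt
        total += 3.0 * (r * r * math.cos(2 * th) + 4 * r * math.sin(th) - 4.0) * r
total *= dr * dt
```

The oscillatory terms sum to zero over a full period, so `total` matches $-48\pi\approx-150.796$ essentially to machine precision; this also confirms that the asker's expected $72\pi$ was off.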
{ "language": "en", "url": "https://math.stackexchange.com/questions/4645226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Norm inequality of positive element of $C^*$-Algebra with norm less than 1 We know that for two real positive numbers $a,b$, this property holds: $$ \begin{equation} A+B<1\rightarrow\dfrac{A}{1-B}<1 \end{equation} $$ If $x, y$ are two positive elements of unital $C^*$-Algebra with $\|x+y\|<1$, is true that $$ \begin{equation} \dfrac{\|x\|}{1-\|y\|}<1? \end{equation} $$ With inequality for positive element, it is obvious that $\|x\|,\|y\|<1$. I know it's sufficient to show that $\|x\|+\|y\|<1$, but it seems that I can't get there for some reason. Is there any clue to do this or maybe there has to be additional condition so that the inequality holds? Thank you for your help.
This fails in any C$^*$-algebra other than $\mathbb C$. In $\mathbb C^2$, fix $c\in(0,1)$ and take $$ x=(c,0),\qquad\qquad y=(0,c). $$ Then $\|x+y\|=c<1$, while $\|x\|+\|y\|=c+c=2c$. So as long as $\frac12<c<1$, we get $\|x+y\|<1$ and $\frac{\|x\|}{1-\|y\|}>1$. In an arbitrary C$^*$-algebra the same game can be played by taking a selfadjoint element with non-singleton spectrum and doing the above for two functions in $C(\sigma(a))$ such that their product is zero.
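A tiny numerical illustration of this counter-example (a Python sketch; $\mathbb C^2$ carries the sup norm, and $c=0.8$ is just one admissible choice of $c\in(\frac12,1)$):

```python
# C^2 with the sup norm; x and y are the positive elements from the answer.
c = 0.8
x = (c, 0.0)
y = (0.0, c)
sup = lambda v: max(abs(v[0]), abs(v[1]))  # the C*-norm on C^2

norm_sum = sup((x[0] + y[0], x[1] + y[1]))  # ||x + y|| = c < 1
lhs = sup(x) / (1.0 - sup(y))               # ||x|| / (1 - ||y||) = c/(1-c)
```

Here `norm_sum` is $0.8<1$ while `lhs` is $4>1$, so the proposed inequality fails.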
{ "language": "en", "url": "https://math.stackexchange.com/questions/4645811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the HM follow the same pattern as AM and GM of the particular MVT application/theorem? This seems like something which should be well known, but I don't know how to find it so I'm asking it here. Suppose $f:[a,b]\to \mathbb{R}$ (with $f(a)\neq f(b)$) is continuous on $[a,b]$ and differentiable on $(a, b)$ and let $n\in\mathbb{N}^+$. One can prove the following results having to do with each of AM and GM Arithmetic Mean: There exist numbers $a < \xi_1 <\cdots < \xi_n < b$ such that $$\frac{1}{n}\sum_{i=1}^n f'(\xi_i) = \frac{f(b)-f(a)}{b-a}$$ Geometric Mean: There exist numbers $a < \xi_1 <\cdots < \xi_n < b$ such that $$\prod_{i=1}^n f'(\xi_i)= \left(\frac{f(b)-f(a)}{b-a}\right)^n$$ or, if $f'$ is positive, $$\left(\prod_{i=1}^n f'(\xi_i)\right)^{1/n}= \frac{f(b)-f(a)}{b-a}$$ It only makes sense to ask whether the following holds Harmonic Mean Claim: There exist numbers $a < \xi_1 <\cdots < \xi_n < b$ such that $$\frac{n}{\sum_{i=1}^n \frac{1}{f'(\xi_i)}}= \frac{f(b)-f(a)}{b-a}$$ (Suppose that perhaps $f'(x_0)\neq 0$ for all $x_0\in(a,b)$) However I have absolutely no idea where to start. For the other two looking at the $n=2$ case helped, however I can't seem to think of anything here.
Here's a proof of the IVT version of this claim, for a very general class of means that includes AM, GM, and HM: Theorem. Let $g \colon [c,d] \to \mathbb R$ be a continuous function with $g(c) \ne g(d)$, and let $y$ lie between $g(c)$ and $g(d)$. Let $M$ be a continuous "mean" whose only real mean-like property we care about is that $M(x,x,\dots,x) = x$ for all $x$. Then we can choose $c < \xi_1 < \xi_2 < \dots < \xi_n < d$ such that $M(g(\xi_1), \dots, g(\xi_n)) = y$. Proof. For a parameter $t \in [0,1]$, define $$\xi_i(t) = (1 - t^{n+1-i}) c + t^{n+1-i} d.$$ This has three important properties: * *When $t = 0$, $\xi_1(t) = \xi_2(t) = \dots = \xi_n(t) = c$. *When $0 < t < 1$, $c < \xi_1(t) < \xi_2(t) < \dots < \xi_n(t) < d$. *When $t = 1$, $\xi_1(t) = \xi_2(t) = \dots = \xi_n(t) = d$. So the function $m(t) := M(g(\xi_1(t)), g(\xi_2(t)), \dots, g(\xi_n(t)))$ is continuous with $m(0) = g(c)$ and $m(1) = g(d)$. By the intermediate value theorem, there is some $t$ between $0$ and $1$ such that $m(t) = y$, and then $ \xi_1(t) < \xi_2(t) < \dots < \xi_n(t)$ is the solution we wanted. If $f'$ is continuous, then we can prove the claim we want by taking $g = f'$. First, we need to find values $c,d$ such that $a < c < d < b$ and $\frac{f(b)-f(a)}{b-a}$ lies strictly between $f'(c)$ and $f'(d)$. This is possible provided $f$ is not linear. (Or, if $f$ is linear, then any $\xi_1, \dots, \xi_n$ will work.) In that case, there is some $x \in (a,b)$ such that $\frac{f(x)-f(a)}{x-a} \ne \frac{f(b)-f(a)}{b-a}$. Without loss of generality, $\frac{f(x)-f(a)}{x-a} < \frac{f(b)-f(a)}{b-a}$; then we must have $\frac{f(b)-f(x)}{b-x} > \frac{f(b)-f(a)}{b-a}$ to compensate. Pick $c \in (a,x)$ so that $f'(c) = \frac{f(x)-f(a)}{x-a}$ and pick $d \in (x,b)$ so that $f'(d) = \frac{f(b)-f(x)}{b-x}$, both by the mean value theorem. Now apply the theorem proved earlier with $g = f'$ on the interval $[c,d]$ with $y = \frac{f(b)-f(a)}{b-a}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4645961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Difficulty with a Combinatorial Math problem I was doing this question from a national mathematical olympiad and although I couldn't solve it but I found something but don't know how to progress. For a positive integer $N$, let $T(N)$ denote the number of arrangements of the integers $1,2,\dots,N$ into a sequence $a_1,a_2\dots,a_n$ such that $a_i>a_{2i}$ for all $i$ with $1\le i<2i\le N$ and $a_i>a_{2i+1}$ for all $i$ with $1\le i<2i+1\le N$. For example, $T(3)$ is $2$, since the possible arrangements are 321 and 312. If $K$ is the largest non-negative integer so that $2^K$ divides $T(2^n-1)$, show that $K=2^n-n-1$. I was able to build this relationship. Is it correct? If yes how do I progress further? If not what is the explicit formula for $T(2^n-1)$? $$T(2^n-1)=2^{2^{(n-2)}}\binom{2^n-2}{2^{n-1}-1}$$ but in the step-by-step solution, they gave this recurrent relation $$T(2^n-1)=T(2^{n-1}-1)^2\binom{2^n-2}{2^{n-1}-1}$$
Your relationship is wrong; e.g. for $n = 2$ it suggests $T(3) = 4$, when as above the correct result is 2. The recurrence relation is formed by considering each sequence $a_1, \dots, a_{2^n-1}$ as the array representation of a heap. From the tree representation, we see that the number of possible heaps of size $2^n-1$ is equal to the number of ways to divide the numbers $1, \dots, 2^n-2$ between the left and right sub-trees, times the number of ways to arrange the numbers within the sub-trees. This directly gives the quoted recurrence $$T(2^n-1)=T(2^{n-1}-1)^2\binom{2^n-2}{2^{n-1}-1}.$$ The result then follows by induction, noting that $2^{2^n-1}$ is the largest power of two to divide $(2^n)!$. You don't need to find the explicit formula for $T$, but from the recurrence, you can see that it's $$T(2^n-1)=\prod_{i=1}^{n}\binom{2^{i}-2}{2^{i-1}-1}^{2^{n-i}}.$$
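As a sanity check on the recurrence and the claimed $2$-adic valuation (a brute-force Python sketch, not part of the original solution; it counts heap orderings directly for small $N$):

```python
from math import comb
from itertools import permutations

def T_rec(n):
    # T(2^n - 1) via the recurrence from the answer
    if n == 1:
        return 1
    return T_rec(n - 1) ** 2 * comb(2**n - 2, 2**(n - 1) - 1)

def T_brute(N):
    # count arrangements of 1..N satisfying the heap condition directly
    # (0-indexed: children of index i are 2i+1 and 2i+2)
    count = 0
    for p in permutations(range(1, N + 1)):
        if all(p[i] > p[2*i+1] for i in range(N) if 2*i+1 < N) and \
           all(p[i] > p[2*i+2] for i in range(N) if 2*i+2 < N):
            count += 1
    return count

def v2(m):
    # 2-adic valuation of m
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k
```

Brute force agrees with the recurrence ($T(3)=2$, $T(7)=80$), and $v_2(T(2^n-1))=2^n-n-1$ holds for the first several $n$.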
{ "language": "en", "url": "https://math.stackexchange.com/questions/4646145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About an integral of the MIT Integration Bee Finals (2023) I would like to solve the first problem of the MIT Integration Bee Finals, which is the following integral : $$\int_0^{\frac{\pi}{2}} \frac{\sqrt[3]{\tan(x)}}{(\cos(x) + \sin(x))^2}dx$$ I tried substitution $u=\tan(x)$, King Property, but nothing leads me to the solution which is apparently $\frac{2\sqrt{3}}{9} \pi$. If anybody knows how to solve it I would be grateful.
$$\int_0^{\frac{\pi}2} \frac{\sqrt[3]{\tan x}}{(1+ \tan x)^2}\frac{dx}{\cos^2x}$$ $$=\int_0^\infty\frac{t^{1/3}}{(1+t)^2}dt=\int_0^\infty\frac s{(1+s^3)^2}3s^2ds,$$ and you certainly know how to integrate a rational fraction.
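A quick numerical confirmation of the claimed value $\frac{2\sqrt3}{9}\pi$ (a Python midpoint-rule sketch, not part of the original answer; the substitution $t=\frac{u}{1-u}$ maps the half-line to $(0,1)$ and turns the integrand into $(u/(1-u))^{1/3}$, with an integrable singularity at $u=1$):

```python
import math

# Midpoint rule for ∫₀¹ (u/(1-u))^(1/3) du = ∫₀^∞ t^(1/3)/(1+t)² dt
n = 200000
h = 1.0 / n
total = 0.0
for i in range(n):
    u = (i + 0.5) * h
    total += (u / (1.0 - u)) ** (1.0 / 3.0)
total *= h

target = 2.0 * math.sqrt(3.0) * math.pi / 9.0  # ≈ 1.2092
```

The midpoint sum lands within about $10^{-4}$ of `target`, consistent with the Beta-function evaluation $B(4/3,2/3)=\frac{\pi}{3\sin(\pi/3)}$.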
{ "language": "en", "url": "https://math.stackexchange.com/questions/4646362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
calculating winding number around zero let $m, n \in \mathbb{Z}$ be fixed and let $ 0 < r \neq 1$, determine the winding number of the closed curve $\gamma(t) : = e^{imt} + re^{int}$ $\gamma: [0, 2\pi] \to \mathbb{C} \backslash 0$ around zero so we know that $w_\gamma (0) = \frac{1}{2\pi i} \int_{\gamma} \frac{1}{z-0} dz$ would I then be having $$w_\gamma (0) = \frac{1}{2\pi i} \int_{0}^{2\pi} \frac{ime^{imt} + in re^{int}}{e^{imt} + re^{int}} dt$$ ? is my approach correct until now and how could I proceed?
Here is a simpler approach that is easy to visualize. First consider the winding number around zero of $\sigma(t) = \exp\big(i\cdot t\big)$ for $t\in [0,2\pi]$. A standard calculation is that $n\big(\sigma,0\big)=1$; finish by using the fact that winding numbers (around zero) of products split into sums of winding numbers. Note this implies $n\big(\sigma^k,0\big)=k$ for $k\in \mathbb Z$. The original problem reads $n\big(\gamma,0\big)= n\big(\sigma^m\cdot(1+r\cdot \sigma^{n-m}),0\big)=n\big(\sigma^m,0\big)+n\big(1+r\cdot \sigma^{n-m},0\big)=m+n\big(r\cdot \sigma^{n-m},-1\big)$ (i) if $r\lt 1$ then $-1$ is in the unbounded component so $n\big(r\cdot \sigma^{n-m},-1\big)=0\implies n\big(\gamma,0\big)=m$ (ii) if $r\gt 1$, then $-1$ is in the same component as zero (they are path connected on the real line) so $n\big(r\cdot \sigma^{n-m},-1\big)=n\big(r\cdot \sigma^{n-m},0\big)=n\big(\sigma^{n-m},0\big)=n-m\implies n\big(\gamma,0\big)=n$
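The claimed winding numbers ($m$ when $r<1$, $n$ when $r>1$) can be checked numerically by accumulating the change of argument along the curve (a Python sketch, not part of the original answer; the sample values $m=2$, $n=5$ and $m=-1$, $n=3$ are arbitrary test cases):

```python
import cmath
import math

def winding(m, n, r, steps=20000):
    # accumulate the small angle increments of γ(t) = e^{imt} + r e^{int}
    total = 0.0
    prev = 1.0 + r  # γ(0)
    for k in range(1, steps + 1):
        t = 2 * math.pi * k / steps
        cur = cmath.exp(1j * m * t) + r * cmath.exp(1j * n * t)
        total += cmath.phase(cur / prev)  # increments stay well below π
        prev = cur
    return round(total / (2 * math.pi))
```

Since $|\gamma(t)|\ge|1-r|>0$, the increments are small and the rounded sum is the exact winding number.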
{ "language": "en", "url": "https://math.stackexchange.com/questions/4646506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Trigonometric inequalities. I was working through a question on limits where the author suggested the inequality: $ |\sin(x)| \leq x $ in some deleted neighbourhood around 0. I'm kind of unsure if the inequality is correct, because in a neighbourhood around 0 we can have $x<0$, but $|\sin(x)| > 0 $ for all $x$ in any deleted neighbourhood around 0. Am I missing anything? Edit: 1.) removed the line "which leads to: $-x \leq \sin x \leq x $ "
You're right, when $x<0$ the inequality $|\sin x| \le x$ is obviously false, and likewise for $-x \le \sin x \le x$. (But $|\sin x| \le |x|$ is true for all real $x$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4646750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Need help with determining if sequences $a_{n}=\frac{n+3}{n+2}$ and $a_{n}=\frac{n-1}{n+1}$ are increasing or decreasing I'm struggling with a problem that asks me to determine whether the following sequences are increasing or decreasing: a) $a_{n}=\frac{n+3}{n+2}$ b) $a_{n}=\frac{n-1}{n+1}$ For part (a), I tried to find $a_{n+1}-a_{n}$ and simplify it to see if it was positive or negative. After some algebra, I got: $a_{n+1}-a_{n}=\frac{1}{n+2}$ Since $n+2$ is always positive, $a_{n+1}-a_{n}$ is always positive. Therefore, the sequence is increasing. For part (b), I followed the same process and got: $a_{n+1}-a_{n}=-\frac{2}{(n+1)(n+3)}$ Since $(n+1)(n+3)$ is always positive, $a_{n+1}-a_{n}$ is always negative. Therefore, the sequence is decreasing. Could someone please verify if my answers are correct or not? If I made a mistake, I would appreciate any guidance on how to approach the problem correctly. Thank you in advance for your help!
Your computations are incorrect. In the first case, $a_{n+1}-a_n=-\frac{1}{(n+2)(n+3)}<0$ so $a_n$ is decreasing. In the second case, $a_{n+1}-a_n=\frac{2}{(n+2)(n+1)}>0$ so the sequence is increasing. Your general reasoning is good though. Alternatively, since $a_n>0$ for every $n$, you can also compute $\frac{a_{n+1}}{a_n}$. Then: $a_n$ is increasing if and only if $\frac{a_{n+1}}{a_n}>1$ for every $n$, and $a_n$ is decreasing if and only if $\frac{a_{n+1}}{a_n}<1$ for every $n$. For some sequences, this would lead to easier computations. Finally, in a), you can also write $a_n=1+\frac{1}{n+2}$ and since $\frac{1}{n+2}$ is decreasing, $a_n$ is decreasing.
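A quick empirical check (a Python sketch, not part of the original answer) that $a_n$ decreases, $b_n$ increases, and that the differences match the closed forms $a_{n+1}-a_n=-\frac{1}{(n+2)(n+3)}$ and $b_{n+1}-b_n=\frac{2}{(n+1)(n+2)}$:

```python
a = lambda n: (n + 3) / (n + 2)
b = lambda n: (n - 1) / (n + 1)

a_decreasing = all(a(n + 1) < a(n) for n in range(1, 1000))
b_increasing = all(b(n + 1) > b(n) for n in range(1, 1000))

# compare the numerical differences with the closed forms
a_diff_ok = all(
    abs((a(n + 1) - a(n)) + 1.0 / ((n + 2) * (n + 3))) < 1e-12
    for n in range(1, 1000)
)
b_diff_ok = all(
    abs((b(n + 1) - b(n)) - 2.0 / ((n + 1) * (n + 2))) < 1e-12
    for n in range(1, 1000)
)
```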
{ "language": "en", "url": "https://math.stackexchange.com/questions/4646863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding a basis of a subspace. Can you please help me with this task? Let $f_1=\cos 2x,f_2=\sin x,f_3=\sin^2x$. Consider the linear space $\mathbb{W}=\left\langle f_1,f_2,f_3\right\rangle$. Let $\mathbb{U}=\{f\in\mathbb{W}|f(0)=0\}$. Find a basis of this subspace. I have proved that $f_1, f_2, f_3$ are linearly independent by considering the equation $$a\cos(2x)+b\sin(x)+c\sin^2(x)=0$$ How do I find a basis? Should I substitute $0$ into $x$?
To summarize the discussion in the comments: This problem is somewhat easier than the general problem of this type because it happens that a good basis for $\mathbb U$ is a subset of the given basis for $\mathbb W$. Indeed, both $f_2$ and $f_3$ are in $\mathbb U$ so $\mathbb U$ contains the $2$ dimensional subspace spanned by $f_2,f_3$. Now that has to be all of $\mathbb U$ since if $\mathbb U$ contained anything else it would have dimension $3$ hence would be all of $\mathbb W$, but $f_1\not \in \mathbb U$ so this is not possible. In general, there is no reason for a basis of $\mathbb U$ to be a subset of the given basis, but it is the case here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4647046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that for $n∈N$ and $\epsilon>0$ there exists $R>0$ such that $P(|X_n|>R)<\epsilon$. Suppose $X_n$ is a sequence of random variables, taking values in ($-\infty,\infty$). Show that for each $n∈N$ and $\epsilon>0$, there exists $R>0$ such that $P(|X_n|>R)<\epsilon$. I mean, intuitively it is true since the value of $X_n$ is finite. We can just let $R$ be large enough and then $P(|X_n|>R)<\epsilon$. But I do not know how to prove it formally. Could anyone help me with it?
We want to show for $R$ large enough, $$\epsilon>P(|X_n|>R)=1-P(|X_n|\leq R)=1-F_{|X_n|}(R)\iff F_{|X_n|}(R)>1-\epsilon,$$ where $F_{|X_n|}$ is the CDF of $|X_n|.$ By properties of CDFs, the limit of a CDF is $1$ as its argument tends to $+\infty$, so the desired result follows immediately.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4647214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Question regarding Big O: Let $f$ be a positive function such that $$f(n) = a f\left( \left\lfloor \frac nb\right\rfloor\right) + c$$ holds for every integer n ≥ 1, where a ≥ 1, b is an integer larger than one, and c ∈ R+ . Prove that $f(n) = O(n^{\log_b a})$ if a > 1, and $f(n) = O(\ln n)$ if a = 1 I don't understand where to begin, I tried using the formal definition of big O notation but still couldn't progress any further
You may want to have a look at some proofs of the Master Theorem as your question seems extremely similar.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4647388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dimension of splitting field of $(x^3+x+1)(x^2+1) \in \mathbb{Q}[x]$ over $\mathbb{Q}$ I am having trouble finding $[\Omega_f : \mathbb{Q}]$, where $\Omega_f$ is the splitting field of $f = gh = (x^3+x+1)(x^2+1) \in \mathbb{Q}[x]$. Since $g$ only has one real root we can show $[\Omega_g : \mathbb{Q}] = 6$, and similarly $[\Omega_h : \mathbb{Q}] = [\mathbb{Q}(i) : \mathbb{Q}] = 2$. It can also be shown that $g \in \mathbb{Q}(i)[x]$ is irreducible $\implies [\Omega_f : \mathbb{Q}] \in \{6, 12\}$. However, how do I rule out that $[\Omega_f : \mathbb{Q}] = 6$ and not 12? Maybe some Galois theory could be useful but I am not sure.
Let $\alpha$ be the real root of $g$. Then $g=(x-\alpha)(x^2+\alpha x-\alpha^{-1})$, the discriminant of the quadratic is $\alpha^2+4\alpha^{-1}=-1+3\alpha^{-1}$ (using $\alpha^2=-1-\alpha^{-1}$, which follows from $\alpha^3+\alpha+1=0$), and the splitting field is $\Omega_g=\mathbb Q(\alpha,\beta)$ where $\beta^2=-1+3\alpha^{-1}$. Next, since $g(-1)=-1$ and $g(0)=1$ we know that $\alpha\in(-1,0)$, so $\alpha^{-1}<-1$ and hence $-1+3\alpha^{-1}<0$. Thus $\beta$ is purely imaginary. So, if $i\in\Omega_g$, then necessarily $i=u\beta$ for some $u\in\mathbb Q(\alpha)$ (write $i=p+q\beta$ with $p,q\in\mathbb Q(\alpha)\subset\mathbb R$ and compare real parts). Squaring gives $-1=u^2\beta^2=u^2(-1+3\alpha^{-1})$ inside $\mathbb Q(\alpha)$ We now take the field norm $N$ for $\mathbb Q(\alpha)/\mathbb Q$. For $v\in\mathbb Q(\alpha)$, multiplication by $v$ is $\mathbb Q$-linear, and $N(v)$ is the determinant of this linear map. Using the basis $1,\alpha,\alpha^2$ of $\mathbb Q(\alpha)$ we can quickly check that $N(\alpha)=-1$ (which is the negative of the constant term of $g$). We compute $N(-1+3\alpha^{-1})$. Since $\alpha^{-1}=-\alpha^2-1$, we have $-1+3\alpha^{-1}=-4-3\alpha^2$, and multiplication by this element is given by the matrix $$ \begin{pmatrix}-4&3&0\\0&-1&3\\-3&0&-1\end{pmatrix}, $$ so $N(-1+3\alpha^{-1})=-31$ (reassuringly, this is exactly the discriminant of the cubic $g$). Clearly $N(-1)=-1$, so $u^2=-1/\beta^2$ yields $N(u)^2=1/31$. This is a contradiction, since $N(u)\in\mathbb Q$. We have shown that $i\not\in\Omega_g$, and so $\Omega_f=\mathbb Q(\alpha,\beta,i)$ has dimension 12 over $\mathbb Q$.
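The norm computation can be double-checked numerically: the norm of the discriminant element $\alpha^2+4\alpha^{-1}$ is the product of its values over the three conjugate roots, and should come out as $-31$, the discriminant of $x^3+x+1$. A Python sketch (standard library only, not part of the original argument):

```python
import cmath

# bisection for the real root of x^3 + x + 1 on (-1, 0)
f = lambda x: x**3 + x + 1
lo, hi = -1.0, 0.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = (lo + hi) / 2

# the remaining two roots come from the quadratic factor x^2 + alpha*x - 1/alpha
disc = alpha**2 + 4.0 / alpha      # negative, so the other roots are complex
sq = cmath.sqrt(disc)
z1 = (-alpha + sq) / 2
z2 = (-alpha - sq) / 2

# N(a^2 + 4/a) = product of the element over the three conjugates
d = lambda a: a * a + 4.0 / a
norm_d = d(alpha) * d(z1) * d(z2)
```

Numerically `norm_d` is $-31$ up to rounding, and `disc` is negative, confirming that $\beta$ is purely imaginary.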
{ "language": "en", "url": "https://math.stackexchange.com/questions/4647563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that the following set is uncountable Good evening to everybody. Today I was trying to find a solution to the following exercise: Let $$ A_\epsilon= \bigcup_{n=1}^{\infty} ( q_n-\frac{\epsilon}{2^n} , q_n + \frac{\epsilon}{2^n} ) $$ where $ q_n$ are the rational numbers of $ [0,1]$ . Let $$A=\bigcap_{j=1}^{\infty} A_{1/j} $$ The question was to prove that : i) $ \lambda(A_{\epsilon}) \leq 2\epsilon $ ii) For all $\epsilon < \frac{1}{2}$ , it holds that $[0,1]\backslash A_\epsilon $ is non-empty and $A $ is a subset of $ [0,1]$ iii) it holds $ \lambda(A)=0$ iv) $ \mathbb{Q}\cap[0,1] \subset A $ and that the set $ A $ is uncountable. Ok, I have already proved quite easily the first 3 parts and the first relation of part iv) but I am stuck on the proof of the uncountability of this set. From what I have already thought, we can identify the rational number $q_1$ with the sequence $ 1, 1 , ... $ (meaning that $q_1$ belongs to the first set of the union in $A_1$ , to the first set of the union in $A_\frac{1}{2}$ etc..) and the rational number $q_2$ with the sequence $ 2, 2 , ... $ (meaning that $q_2$ belongs to the second set of the union in $A_1$ , to the second set of the union in $A_\frac{1}{2}$ etc..),so we can exclude all the rationals in the set $ A $ and, if we show that the set $ A $ contains also some irrational , let say it $ X $ , then we can pick the first integer $N_1 $ to be the natural number ( here $N_1\geq 0 $ ) such that $ X $ belongs to the first set of the union of $A_1 , A_\frac{1}{2} , A_\frac{1}{3},..., A_\frac{1}{N_1-1}$ but NOT in $A_\frac{1}{N_1}$ , then the integer $N_2$ to be the natural number (here we need also $N_2\geq0$) such that $ X $ belongs to the second set of the union of $A_1 , A_\frac{1}{2} , A_\frac{1}{3},..., A_\frac{1}{N_2-1}$ but NOT in $A_\frac{1}{N_2}$ , etc..., and thus identify each non rational number of $ A $ by the sequence $ N_1 , N_2 , ...$. 
Then assuming that $A$ is countable , say $ \phi_1 , \phi_2 , ...$ we can use the diagonal argument and take the element $ ( \phi_1(1)+1, \phi_2(2)+1 , \phi_3(3)+1 , ... ) $ . . This element is in $ A $ but it is not any of the sequences $ \phi_1 , \phi_2 , ... $ So I only need to prove that $ A $ does not contain ONLY the rationals $q_1 , q_2 ... $ Any ideas would be really helpful.
You almost got part iv), especially when you mentioned "diagonal argument". The crux of diagonal argument to prove a set is uncountable is assuming the set is countable and then picking a diagonal that * *avoids all elements in the set one by one and *identifies a particular element in the set, which is a contradiction. In order to satisfy requirement 1, for $i$-th diagonal element, we can select an interval in some $A_{1/j}$ that does not include $u_i$, where we assume elements in $A$ are listed as $u_1,u_2,\cdots$. Here we do not care whether $u_i$ is a rational number or not (which is a concern that might have distracted you). In order to satisfy requirement $2$, each interval that we will select should be contained in the previous interval we have selected. I hope the hints above are enough for you to make progress. Once you got a proof, you may add it to this answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4647688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Restriction of domains and inverse trigonometric functions Because I had to restrict the sine domain to $ \left[ -\frac{\pi }{2},\frac{\pi }{2}\right] $ is it correct to write, let say $$\arcsin \left( \sin \left( \frac{3\pi }{2}\right) \right) =-\frac{\pi }{2}\ ?$$ EDIT For example, if you change the name from "sin" to "Sin" when you restrict the domain, and then define "arcSin" or something like that, and use this new defined functions in the formula, it is clear that the argument of the "Sine" function is out of the domain. But here, the "sin" function is the original or the one I made with a restricted domain? Maybe I had to write $\arcsin (\sin (7\pi/2))$, for example, in my question. What I am trying to argue is that with the sine domain restriction, maybe is no longer correct to write, or to justified, the input of an angle out of the new domain of the new sine function.
Because I had to restrict the sine domain to $ \left[ -\frac{\pi }{2},\frac{\pi }{2}\right] $ is it correct to write, let say $$\arcsin \left( \sin \left( \frac{3\pi }{2}\right) \right) =-\frac{\pi }{2}\ ?$$ Reading through your comments, what you're trying to ask is this: "Since arcsin has principal range $\left[ -\dfrac{\pi }{2},\dfrac{\pi }{2}\right],$ then this means that in the given context $\sin$ accepts only $\left[ -\dfrac{\pi }{2},\dfrac{\pi }{2}\right],$ so how is $$\arcsin \left( \sin \left( \frac{3\pi }{2}\right) \right)$$ even a valid expression? Is it really correct to say that it equals $-\dfrac\pi2$ ?" The answer is No, Yes, Yes: the composed function $$\arcsin\left(\sin(x)\right)$$ has the same domain as $\sin$, which is $\mathbb R,$ and has range $\left[ -\frac{\pi }{2},\frac{\pi }{2}\right].$ To be clear: whatever restriction you apply to $\arcsin$ affects only $\arcsin$ itself, and has absolutely no impact on other trigonometric functions in the same context, or their domains or ranges. What is important to check is that the range of $\sin$ is a subset of the domain of $\arcsin;$ this is indeed satisfied. \begin{align}\forall x{\in}\left[-\frac\pi2,\frac\pi2\right]\;\arcsin\left(\sin(x)\right) &= x\\ \arcsin\left(\sin(x)\right) &\not\equiv x\\ \sin\left(\arcsin(x)\right)&\equiv x.\end{align}
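A quick numerical illustration (a Python sketch, not part of the original answer) that $\arcsin\circ\sin$ accepts any real input, always lands in $[-\pi/2,\pi/2]$, and gives $-\pi/2$ at both $3\pi/2$ and $7\pi/2$:

```python
import math

# expected values of arcsin(sin(x)) for a few inputs
checks = {
    3 * math.pi / 2: -math.pi / 2,
    7 * math.pi / 2: -math.pi / 2,
    0.3: 0.3,  # inside [-pi/2, pi/2], so the composition is the identity
}
results = {x: math.asin(math.sin(x)) for x in checks}
```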
{ "language": "en", "url": "https://math.stackexchange.com/questions/4647950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$F$ is a field such that for every $a\in F, a^4=a$, then what is the characteristic of $F$? $F$ is a field such that for every $a\in F, a^4=a$, then what is the characteristic of $F$? Take any $a,b \in F-\{0\}$. Then $(a+b)^4=a+b\implies a+b +4a^3b+6a^2b^2+4ab^3=a+b\implies 4a+4b+6a^2b^2=0$. Multiplying throughout by $ab$ and using $a^3=b^3=1$, we get $4a^2b+4ab^2+6=0$. I am not sure how to go from here. Please help. Thanks.
I think that I figured it out: We have $(-a)^4=a^4=a$, while the hypothesis applied to $-a$ gives $(-a)^4=-a$. Hence $a=-a$ for every $a$. It follows that $2a=0$, hence the field is of char $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4648052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$I+T$ is not bounded below Let $\mathcal{H}$ be a separable Hilbert space. I want to show that the operator $I+S \in L(\mathcal{H})$, with $S\in L(\mathcal{H})$ the left shift, is not bounded below; i.e., there exists a sequence $\{x_n\}_{n\in \mathbb{N}}\subset \mathcal{H}$ such that $||x_n||=1$ and $(I+S)(x_n)\underset{n \longrightarrow \infty}{\longrightarrow} 0$. My approach was: I couldn't find a suitable sequence, then I tried the following... $\sigma(S)=\mathbb{D} \implies S+I \notin \mathcal{G}_l(\mathcal{H})$. That implies $I+S$ isn't bounded below or $I+S^*$ isn't bounded below. I tried to find a lower bound for $I+S^*$, but I couldn't. Thanks for reading.
Assume $I+S$ is bounded below, i.e. $$\|(I+S)x\|\ge c\|x\|$$ Then the range of $I+S$ is closed. As $-1\in \sigma(S),$ the range of $I+S$ cannot be equal to $\mathcal{H}.$ Thus $\ker (I+S^*)={\rm Im}(I+S)^\perp\neq \{0\},$ a contradiction, as $-1$ is not an eigenvalue of $S^*.$ The same reasoning is valid for $I+S^*.$ If we are after a concrete sequence of vectors, let $v_t(k)=t^k$ for $|t|<1.$ Then $(I+S)v_t=(1+t)v_t.$ Hence $${\|(I+S)v_t\|\over \|v_t\|}=1+t\underset{t\to -1^+}{\longrightarrow}0$$ Concerning $I+S^*$ we have $$(I+S^*)v_t=(1,t+1,t^2+t,\ldots, t^n+t^{n-1},\ldots)$$ Thus $$\|(I+S^*)v_t\|^2=1+(1+t)^2\|v_t\|^2$$ Hence $${\|(I+S^*)v_t\|^2\over \|v_t\|^2}={1\over \|v_t\|^2}+(1+t)^2 \underset{t\to -1^+}{\longrightarrow}0$$ where the last limit holds because $\|v_t\|\to\infty$ as $t\to -1^+.$
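A finite-dimensional illustration of the concrete sequence above (a Python sketch, not part of the original answer; the vectors $v_t=(1,t,t^2,\ldots)$ are truncated to length $N$, which is harmless since $t^k$ decays geometrically):

```python
import math

def ratio(t, N=4000):
    # ||(I+S)v_t|| / ||v_t|| for the truncated vector v_t = (1, t, ..., t^(N-1)),
    # where S is the left shift: (Sv)_k = v_{k+1}
    v = [t**k for k in range(N)]
    w = [v[k] + (v[k + 1] if k + 1 < N else 0.0) for k in range(N)]  # (I+S)v
    return math.sqrt(sum(x * x for x in w)) / math.sqrt(sum(x * x for x in v))
```

As $t\to-1^+$ the ratio tracks $1+t$: `ratio(-0.9)` is about $0.1$ and `ratio(-0.99)` about $0.01$.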
{ "language": "en", "url": "https://math.stackexchange.com/questions/4648369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Functions without complex roots, but with quaternion roots Many introductions to complex numbers begin with the question "What are the roots of $x^2 + 1 = 0$?" This function does not have real roots, but does have complex roots. Are there functions which, in a similar vein, do not have complex roots but do have roots in the quaternions?
This is a good question with an important answer. The answer is no, because $\mathbb{C}$ has a property called algebraic closure. This means that any degree $n$ polynomial in $\mathbb{C}$ has $n$ factors (though some may be repeated) so the polynomial can always be fully factorised into linear terms. In particular, it means every polynomial has a root, and intuitively, there is nothing 'missing' from $\mathbb{C}$. This is a very important property about $\mathbb{C}$, which is what makes it so useful. The quaternions don't really have the same relationship to $\mathbb{C}$ as $\mathbb{C}$ has to $\mathbb{R}$, as $\mathbb{C}$ is adding to $\mathbb{R}$ things that are 'missing' in a sense, whereas $\mathbb{C}$ doesn't actually need anything added to it, and the quaternions, $\mathbb{H}$, just add extra roots to polynomials which already have roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4648515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 0 }
Would $x = x+1$ have infinite solutions? If $x = x+1$ and we have the equation $x = x+1$ but since $x = x+1$ would it equal $x+1 = x+1$? It is a dumb question but someone told me that and I want to make sure it's not false.
For an equation to have a solution, you need to be able to substitute all variables in the equation with your solution and obtain a true equality. For example, $x + 3 = 5$ has the one solution of $x = 2$, since $2 + 3 = 5$. For another example, $x + x = 2x$ has the solutions $x = 5$ and $x = 6$, since $5 + 5 = 10 = 2(5)$ and $6 + 6 = 12 = 2(6)$. In fact, this equation has infinitely many solutions, since this equation is an identity. In your case, $x = x + 1$ does not have any solutions. For example, $x = 4$ is not a solution, because $4 = 4 + 1$ implies that $4 = 5$, which is not true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4648668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Is double integral of Gaussian distribution over an area unimodal with respect to $\sigma$? What I already know: Say we have a Gaussian distribution $X \sim N(0,\sigma^2)$. We know that $\text{Pr}[a\le x\le b]$ is a unimodal function of $\sigma$. The reason is as follows. First, we define $f(\sigma) = \text{Pr}[a\le x\le b]$, and it can be written by $$f(\sigma) = \int_a^b \frac{1}{\sigma\sqrt{2 \pi} } e^{-\frac{1}{2}\left(\frac{x}{\sigma}\right)^2} \ dx=\frac{1}{2} \left(\text{erf}\left(\frac{b}{\sqrt{2}\sigma}\right)-\text{erf}\left(\frac{a}{\sqrt{2}\sigma}\right)\right),$$ where $\text{erf}(\cdot)$ is Error function. And the derivative of $f(\sigma)$ is $$f'(\sigma)=\sqrt{\frac{2}{\pi}} \frac{\left(a e^{-\frac{a^2}{2\sigma^2}} - b e^{-\frac{b^2}{2\sigma^2}}\right)}{2\sigma^2}.$$ Clearly, this derivate has only one zero point, so $f(\sigma)$ is unimodal. What I want to know: Can we extend the unimodality to bivariate (or multivariate if possible) Gaussian? More specifically, we have a bivariate Gaussian distribution $(X,Y) \sim N(0,\sigma^2)$. (I get the bivariate Gaussian function of variables $x$ and $y$ from this paper, i.e., $X$ and $Y$ are independent, and $\sigma_1 = \sigma_2 = \sigma$ in our problem.) I would like to know if the double integral over an area as a function of $\sigma$ is also unimodal. Define function $g(\sigma)$ as $$g(\sigma) = \iint_D \frac{1}{2\pi \sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}}dxdy ,$$ where $D$ is an area for point $(x,y)$. What I have tried: If area D is an annular sector whose center is origin, we can know $g(\sigma)$ is unimodal by the conclusion from univariate Gaussian. However, for other shapes, the double integral is not easy to get an expression consisting of elementary functions to analyze. I have tried some areas (e.g., $D_1:1\le x\le2 \wedge 1\le y\le2$, $D_2:(x-2)^2+y^2 \le 1$) via Mathematica, and the plots show that $g(\sigma)$ is unimodal. 
My guess is that $g(\sigma)$ is unimodal if the region $D$ is a connected set (or, more restrictively, a convex set). Does anyone have an idea how to prove it? Update: By the counter-example from @MathWonk, a connected region $D$ is not enough to make $g(\sigma)$ unimodal. What about a convex region $D$?
Consider this counter-example: a ball centered at the origin and a concentric annular ring. The annular subregion is defined by $2<r<3$ and the ball has radius $1/2$. The union of these two subregions is the total region of integration. The integral depends on the parameter $t=\sigma$. The graph is shown below: P.S. Note that you can connect the two subregions with a narrow neck that has negligible area to create a connected region that has similar properties. The counter-example is based on the idea that (i) the integral over the ball is initially close to 1 but is monotone decreasing, while (ii) the integral over the annulus is initially close to zero, then rises and falls. The sum of these two effects creates a double spike. The underlying intuition that explains this counter-example is that the probability distribution describes diffusion via a random walk, and (i) as time progresses the walker who is initially almost certainly near the origin will be progressively less likely to be in the ball, and (ii) the walker will initially not be in the annulus but will eventually reach it and then gradually wander farther out beyond it. The second picture shows the graphs of (i), (ii) and (i)+(ii). I still need to think about the convex case!
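Because both subregions are rotationally symmetric about the origin, the Gaussian mass inside radius $R$ is $1-e^{-R^2/(2\sigma^2)}$, so the counter-example can be checked numerically. A minimal sketch (the function name `g` and the sample values of $\sigma$ are my own choices):

```python
import math

def g(sigma):
    # Gaussian mass of the region: ball of radius 1/2 plus annulus 2 < r < 3.
    # For a rotationally symmetric set, the mass inside radius R is
    # 1 - exp(-R^2 / (2 sigma^2)).
    ball = 1 - math.exp(-0.5**2 / (2 * sigma**2))
    annulus = math.exp(-2**2 / (2 * sigma**2)) - math.exp(-3**2 / (2 * sigma**2))
    return ball + annulus

# g first falls (mass leaks out of the small ball), then rises again
# (mass reaches the annulus), so g cannot be unimodal in sigma:
falls_then_rises = g(0.3) > g(0.8) < g(1.6)
```

Evaluating at $\sigma=0.3,\,0.8,\,1.6$ gives roughly $0.75$, $0.22$, $0.33$ — a dip followed by a second peak.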
{ "language": "en", "url": "https://math.stackexchange.com/questions/4648955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing the de Rham cohomology for the torus. I'm computing the de Rham cohomology for the $2$-dimensional torus using the Mayer-Vietoris sequence with the following cover given on Wikipedia. What I currently have is that for $H^0$ we have the sequence $$0 \longrightarrow H^0(\mathbb{T}^2) \xrightarrow{i_0^*} H^0(U ) \oplus H^0 (V) \xrightarrow{j_0^*} H^0(U \cap V) \xrightarrow{d_0^*} \dots$$ which continues as $$\dots \xrightarrow{d_0^*} H^1(\mathbb{T}^2) \xrightarrow{i_1^*} H^1(U ) \oplus H^1 (V) \xrightarrow{j_1^*} H^1(U \cap V) \xrightarrow{d_1^*} \dots$$ up to $$\dots \xrightarrow{d_1^*} H^2(\mathbb{T}^2) \longrightarrow 0. $$ Now the first bit concering $H^0$ reduced to $$0 \longrightarrow \mathbb{R} \xrightarrow{i_0^*} \mathbb{R} \oplus \mathbb{R} \xrightarrow{j_0^*} \mathbb{R} \oplus \mathbb{R} \xrightarrow{d_0^*} \dots$$ due to connectedness and the fact that $U$ and $V$ are homotopy equivalent with $S^1$'s. The part concering $H^1$ reduced to $$\dots \xrightarrow{d_0^*} H^1(\mathbb{T}^2) \xrightarrow{i_1^*} \mathbb{R} \oplus \mathbb{R} \xrightarrow{j_1^*} \mathbb{R} \oplus \mathbb{R}\xrightarrow{d_1^*} \dots$$ again due to the homotopy equivalences. Now we only need to figure out $H^1(\mathbb{T}^2)$ and $H^2(\mathbb{T}^2)$. For the latter we have $$H^2(\mathbb{T}^2) = \operatorname{im}(d_1^*) = \frac{\mathbb{R} \oplus \mathbb{R}}{\ker(d_1^*)} = \frac{\mathbb{R} \oplus \mathbb{R}}{\operatorname{im}(j_1^*)}.$$ My question how do I figure out what $\operatorname{im}(j_1^*)$ should be? I think I'll get the same conclusion for $H^1(\mathbb{T}^2)$ where I would need to figure out what $\operatorname{im}(j_0^*)$ is. How can these be found?
When you have an exact sequence of finite-dimensional vector spaces, the alternating sum of the dimensions is zero. Here this gives $$1-2+2-\dim H^1+2-2+\dim H^2 = 0,$$ $$\dim H^1 = \dim H^2 + 1.$$ Since the torus is a compact connected orientable manifold, its top-dimensional cohomology group is one-dimensional, generated by the volume form. So $\dim H^2 = 1$ and $\dim H^1 = 2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4649163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Negative Log inside Negative Log Is there a name for a function which is like $f(x)=-\log(-\log(x))$ where $0<x<1$? Or, is there any name for this function $g(x)=x+\log(\frac{1}{x})$ where $0<x$? Exchanging $x=-\log k$ in $g(x)$ gives $f(k)$ and I would like to know about those functions any deeper, but I am having trouble searching about them. If there are any specific name or related function that I can search for, I will be very glad to know.
Your first function corresponds to the inverse of the standard cumulative Gumbel distribution and occurs in extreme value statistics. In statistics, the inverse of a cumulative distribution is also called a quantile function. So, that makes it the standard Gumbel quantile function. Other than that, I cannot see a use for devoting special attention to this function, in any case not at the analysis level, as it is just a composite function and the interesting function from that point of view is just the logarithm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4650371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Amann and Escher's analysis and Pugh's RMA. I started studying real analysis with Rudin's book, but it's really terse. I somehow made it through the first chapter (after reading it 3 times :D) and solved some problems, but here we are: I feel like I really didn't understand that chapter. So, I found two texts to study analysis from instead of Rudin's PMA. They are: * *Amann and Escher's 'Analysis' (3 volumes) *Pugh's 'Real Mathematical Analysis' Both are rigorous and intuitive. (Am I correct? That's what the reviews say.) I liked Amann and Escher's Analysis, but from reading reviews on Amazon and an answer on this platform, it seems they present the material in a generalized form, and I am afraid that I'll miss the usual special-case treatment and be at a disadvantage in that sense. My questions: 1) Should I just study Amann and Escher's Analysis alone? 2) Or study from Pugh's RMA? 3) Is reading Amann and Escher's Analysis enough, or am I going to miss some things needed for a usual analysis course?
I think Abbott, Understanding Analysis, or Bartle, Sherbert, Introduction to Real Analysis are two modern classics that are often used instead of Rudin. Amann is great, but I think it might be a bit too much if it's the first time you see this material, especially if you would like something that explains the context a bit more than baby Rudin.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4650583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Multinomial Distribution - Compute the probability of the sample containing 6 with grade Consider a class with 100 students enrolled. Suppose that 30 achieved a mark over 70%, 60 achieved between 50-69% and 10 achieved 0−49%. Let's take a randomly selected sample of 12 of these students for course moderation purposes such that they can be considered independent of each other. (a) Write down the probability distribution function of the number receiving each of the three types of grades $X=(x_1,x_2,x_3)$. Remember to note the constraints on the sample space. (b) Compute the probability of the sample containing 6 with grade A, 4 with grade B+ and 2 with grade C (c) Compute the probability of the sample containing 6 with grade A Could someone help with part (c) of this question? I was taking the following approach but am unsure if it's correct. It seems wrong to me and I feel as though I'm missing logic. $P(X_1 = 6) = \frac{12!}{6!\cdot 0!\cdot 6!} \cdot 0.3^6 \cdot 0.6^0 \cdot 0.1^6 = .00005$
You have to calculate the sum of all probabilities where $x_1=6$ and $x_1+x_2+x_3=12$. This is $$P(X_1=6)=\sum\limits_{x_2=0}^{6} \frac{12!}{6!\cdot x_2!\cdot (12-6-x_2)!}\cdot 0.3^6\cdot 0.6^{x_2}\cdot 0.1^{12-6-x_2}$$ For instance: if $x_2=2$ with $x_1=6$ given, then $x_3=12-6-2=4$. The three counts sum to $12$. You add up all combinations where $x_2+x_3=6$. $$=\frac{12!/6!}{6!}\cdot 0.3^6\cdot 0.1^{6}\cdot \sum\limits_{x_2=0}^{6}\frac{6!}{x_2!\cdot (12-6-x_2)!}\cdot 6^{x_2}$$ $$=\frac{12!}{6!\cdot 6!}\cdot 0.3^6\cdot 0.1^{6}\cdot \sum\limits_{x_2=0}^{6} \binom{6}{x_2}\cdot 6^{x_2}\cdot 1^{6-x_2}$$ For the sum you can use the binomial theorem. Update (more generally): For arbitrary $p_i,x_1$ and $\sum\limits_{i=1}^{3} x_i=n$ the probability is $$P(X_1=x_1)=\frac{n!}{x_1!\cdot (n-x_1)!}\cdot p_1^{x_1}\cdot (p_2+p_3)^{n-x_1},$$ where $p_2+p_3=1-p_1$. Therefore the marginal distribution is $X_1\sim \textrm{Bin}(n,p_1)$.
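The identity in the update can be spot-checked numerically for the numbers in the question ($n=12$, $p=(0.3,0.6,0.1)$, $x_1=6$); the variable names are my own:

```python
from math import comb, factorial

n, p1, p2, p3 = 12, 0.3, 0.6, 0.1   # numbers from the question
x1 = 6

# Sum the trinomial pmf over all (x2, x3) compatible with x1 = 6:
direct = sum(
    factorial(n) / (factorial(x1) * factorial(x2) * factorial(n - x1 - x2))
    * p1**x1 * p2**x2 * p3**(n - x1 - x2)
    for x2 in range(n - x1 + 1)
)

# The claimed closed form: the marginal is Binomial(n, p1).
binom = comb(n, x1) * p1**x1 * (1 - p1)**(n - x1)
```

Both come out to about $0.0792$, which also shows the $.00005$ in the question cannot be the marginal probability.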
{ "language": "en", "url": "https://math.stackexchange.com/questions/4650783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $\lfloor {2x} \rfloor + \lceil {x} \rceil - 4x = 0$ I need to find: $A=\{x \in \mathbb{R}\vert\,\lfloor {2x} \rfloor + \lceil {x} \rceil - 4x = 0\}$ Let $x \in \mathbb{R}$. Then $$\begin{align*} \lceil {x} \rceil =4x- \lfloor {2x} \rfloor &\iff 4x- \lfloor {2x} \rfloor -1 < x \leq 4x -\lfloor {2x} \rfloor\\ &\iff \bigg[ 4x- \lfloor {2x} \rfloor -1 < x \bigg] \land \bigg[ x \leq 4x -\lfloor {2x} \rfloor \bigg]\\ &\iff (3x-1 < \lfloor {2x} \rfloor ) \land (\lfloor {2x} \rfloor \leq 3x) \end{align*} $$ Now define $B=\{x\in \mathbb R\mid(3x-1<\lfloor {2x} \rfloor \}$ and $C=\{x\in\mathbb R\mid\lfloor {2x} \rfloor \leq 3x\}$. Then $A=B\cap C$. My confusion: $-\dfrac{2}{3} \in (B \cap C)$ but $-\dfrac{2}{3} \notin A$ In which step have I gone wrong?
"In which step have I gone wrong?" In the first one: the equality $\lceil x \rceil = 4x - \lfloor 2x \rfloor$ forces $4x$ to be an integer (since $\lceil x \rceil$ and $\lfloor 2x \rfloor$ are integers), while your chain of inequalities does not, so the first "$\iff$" is really only a "$\implies$". Here is how I would approach it. Notice that $\lfloor 2x \rfloor$ and $\lceil x\rceil$ are always whole numbers, so $\lfloor 2x \rfloor+\lceil x\rceil -4x$ is a whole number if and only if $x=n/4$ for some $n\in\mathbb Z$. Now assume $x=n/4$ is in $A$. Then $$\lfloor 2x \rfloor+\lceil x\rceil -4x=0$$ which for $n$ means $$\lfloor n/2 \rfloor+\lceil n/4\rceil =n.$$ Now notice $$ n/2-1+n/4\le\lfloor n/2 \rfloor+\lceil n/4\rceil \le n/2+ n/4+1 $$ and thus $$ n/2-1+n/4\le n\le n/2+ n/4+1 $$ which by multiplying with $4$ yields $$ (3n-4\le 4n\le 3n+4) \iff (-4 \le n \le +4). $$ Thus the only possible values for $n$ are $\{-4,\dots,4\}$, which are the values $\{-1,-1+1/4,-1+2/4,\dots,1\}$ for $x$. (You still have to check which of these values are in $A$. But after that you are done.)
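The final check left to the reader can be brute-forced exactly, using rational arithmetic to avoid floating-point rounding (the candidate range is the bound $-4\le n\le 4$ derived above):

```python
import math
from fractions import Fraction

# Candidates are x = n/4 for n in {-4, ..., 4}; Fraction keeps the test exact.
solutions = []
for n in range(-4, 5):
    x = Fraction(n, 4)
    if math.floor(2 * x) + math.ceil(x) - 4 * x == 0:
        solutions.append(x)
```

This leaves $A=\{-1/4,\,0,\,1/4,\,1/2\}$, and in particular confirms that $-2/3$ is not a solution.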
{ "language": "en", "url": "https://math.stackexchange.com/questions/4650931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Structural description for the order of $\mathrm{GL}_n(\mathbb F_q)$ This question is inspired by the cute answer to Order of general- and special linear groups over finite fields. The formula $$ |\mathrm{GL}_n(\mathbb F_q)|=q^{\frac{n(n-1)}2}(q-1)(q^2-1)\cdots(q^n-1). $$ is obtained in that answer by simply but cleverly counting the number of linearly independent $n$-tuples of vectors in $\mathbb F_q^n$. On the other hand, we know a lot about the structure of the group $\mathrm{GL}_n(\mathbb F_q)$: we can choose (in many ways) a maximal torus; let us take the one consisting of all invertible diagonal matrices, which is a subgroup of order $(q-1)^n$. We can next locate the unipotent radical of the corresponding Borel subgroup which in our case is the subgroup of all upper triangular matrices with $1$s along the main diagonal, thus has order $q^{\frac{n(n-1)}2}$. This accounts for $q^{\frac{n(n-1)}2}(q-1)^n$ elements. How to account for the remaining factors $\frac{q^2-1}{q-1}$, $\frac{q^3-1}{q-1}$, ..., $\frac{q^n-1}{q-1}$? Do they also correspond to some subgroups that can be named, or maybe some explicitly describable conjugacy classes?
Let $F_q$ be a finite field with $q$ elements. Let $T_n(F_q)$ be the group of invertible upper triangular matrices (the semi-direct product of the diagonal matrices and the unipotent radical). The quotient $GL(n,F_q)/ T_n(F_q)$ is the set of complete flags $0\subset E_1\subset E_2 \subset \cdots \subset E_{n-1}\subset E_n=F_q^n$, where each $E_k$ is a $k$-dimensional subspace, so that for every $i$, $E_{i-1}$ is a hyperplane in $E_{i}$. Then $q^i-1\over q-1$ appears as the number of $(i-1)$-dimensional subspaces contained in a given $i$-dimensional space: indeed, the number of hyperplanes in a $d$-dimensional vector space over $F_q$ is $q^d-1\over q-1$. The product $1\cdot{q^2-1\over q-1}\cdots {q^{n}-1\over q-1}$ is of course the number of complete flags.
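The identity $|GL_n(F_q)|=|T_n(F_q)|\cdot\#\{\text{complete flags}\}$ can be sanity-checked for small $n$ and $q$; the helper names below are arbitrary:

```python
def gl_order(n, q):
    # |GL_n(F_q)| = (q^n - 1)(q^n - q) ... (q^n - q^(n-1)),
    # counted by choosing linearly independent columns one at a time.
    out = 1
    for i in range(n):
        out *= q**n - q**i
    return out

def borel_order(n, q):
    # invertible upper triangular matrices: q^(n(n-1)/2) * (q - 1)^n
    return q**(n * (n - 1) // 2) * (q - 1)**n

def flag_count(n, q):
    # complete flags: product of (q^i - 1)/(q - 1) for i = 2..n
    out = 1
    for i in range(2, n + 1):
        out *= (q**i - 1) // (q - 1)
    return out

identity_holds = all(
    gl_order(n, q) == borel_order(n, q) * flag_count(n, q)
    for n in range(1, 6)
    for q in (2, 3, 4, 5)
)
```

For instance $|GL_2(F_2)|=6=2\cdot 3$: a Borel subgroup of order $2$ times the $3$ lines in $F_2^2$.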
{ "language": "en", "url": "https://math.stackexchange.com/questions/4651182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
${\rm Spec}(A/\mathfrak{a})={\rm Spec}(A)$ if and only if $\mathfrak{a}$ is generated by nilpotent elements I have been introduced to the Zariski topology and I can't solve this problem: Let $A$ be a commutative ring with unity and $\mathfrak{a}$ an ideal of $A$. We define ${\rm Spec}(A)$ as the set of prime ideals of $A$, and ${\rm Spec}(A/\mathfrak{a})$ as the set of prime ideals of $A/\mathfrak{a}$ (which is homeomorphic to the zero set of $\mathfrak{a}$). Knowing this, prove: ${\rm Spec}(A/\mathfrak{a})={\rm Spec}(A)$ if and only if $\mathfrak{a}$ is generated by nilpotent elements.
Here are a few hints: the (prime) ideals of $A/\mathfrak{a}$ correspond to the (prime) ideals of $A$ containing $\mathfrak{a}$. You can use this idea to get a continuous injective map $\mathrm{Spec}(A/\mathfrak{a}) \to \mathrm{Spec}(A)$. Now, this map is surjective (and in fact a homeomorphism) exactly when all of the prime ideals of $A$ contain $\mathfrak{a}$. This means $\mathfrak{a}\subset \bigcap_{\mathfrak{p}\in \mathrm{Spec}(A)} \mathfrak{p}$. However, there is a characterization of $\mathfrak{N}_A$ (the nilradical of $A$) as $$ \mathfrak{N}_A = \bigcap_{\mathfrak{p}\in \mathrm{Spec}(A)}\mathfrak{p}. $$ Putting these ideas together solves the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4651434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Permutations of colored balls with restrictions Consider $N$ balls, each in one of $K$ possible colors. We denote by $n_k$ the number of balls colored in the $k$-th color ($\sum_{k=1}^K n_k =N$). Is there a known formula for the total number of permutations of the $N$ balls subject to the restriction that concatenated copies of shorter sequences should be discounted? For example, if we have $3$ $b$lue-colored balls and $3$ $r$ed-colored balls then one should not count the permutations $brbrbr$ and $rbrbrb$.
Yes, there is such a formula. Using Möbius inversion, you can show that the number of sequences of length $N$, consisting of $n_k$ copies of color $k$ for each $k\in \{1,\dots,K\}$, is $$ \sum_{d\mid \gcd(n_1,\dots,n_K)} \mu(d)\frac{(N/d)!}{(n_1/d)!(n_2/d)!\cdots (n_K/d)!}, $$ where $\mu$ is the Möbius $\mu$ function. The sum ranges over positive integers $d$ such that $d$ is a common divisor of $n_1,\dots,n_K$. This is essentially the principle of inclusion-exclusion. You add in all of the sequences with the $d=1$ term, then for $d=2$, you subtract away sequences which are a double of a smaller sequence. Similarly, for $d=3$, you subtract triple repeats of smaller sequences. But for $d=6$, since $\mu(6)=+1$, you add back in the six-tuple repeats, because these were doubly subtracted in the previous two steps. The $\mu()$ function magically makes everything cancel out to count the number of sequences which are not repeats of a smaller one.
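The formula agrees with a brute-force count for small color multisets; here is a sketch (the naive `mobius` helper is only meant for tiny arguments, and all names are my own):

```python
from functools import reduce
from itertools import permutations
from math import factorial, gcd

def mobius(n):
    # naive Mobius function via trial division; fine for tiny n
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def formula(counts):
    # the Mobius-inversion formula from the answer
    N, g = sum(counts), reduce(gcd, counts)
    total = 0
    for d in range(1, g + 1):
        if g % d == 0:
            term = factorial(N // d)
            for c in counts:
                term //= factorial(c // d)
            total += mobius(d) * term
    return total

def brute_force(counts):
    # count distinct arrangements that are not a repeat of a shorter block
    word = "".join(chr(ord("a") + k) * c for k, c in enumerate(counts))
    def is_repeat(s):
        n = len(s)
        return any(n % d == 0 and s == s[:d] * (n // d) for d in range(1, n))
    return sum(not is_repeat(s) for s in set(permutations(word)))

agree = all(formula(c) == brute_force(c)
            for c in [(2, 2), (3, 3), (2, 4), (4, 2), (2, 2, 2)])
```

For the example in the question, $n_b=n_r=3$ gives $\frac{6!}{3!3!}-\frac{2!}{1!1!}=20-2=18$, discounting exactly $brbrbr$ and $rbrbrb$.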
{ "language": "en", "url": "https://math.stackexchange.com/questions/4651639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Can you get an infinite number of derivatives from $\frac{d}{dx}\sin(x)$? I know that the first derivative of sine of x is cosine of x, but I'm really facing a problem trying to understand the other derivatives of the sine function The following 2 statements are both true: $$\frac{d}{dx}\sin(x) = \cos(x)$$ $$\frac{d}{dx}\cos(x) = -\sin(x)$$ And then, according to the constant multiple rule, we can see that the following is also true: $\frac{d}{dx}-\sin(x) = -1 * \frac{d}{dx}\sin(x) = -1 * \cos(x) = -\cos(x)$ Hence: $\frac{d}{dx}-\cos(x) = -1*\frac{d}{dx}\cos(x) = -1*-\sin(x) = \sin(x) ....$ can we keep repeating this process to infinity? and what does it really mean that the fourth derivative of sin(x) is sin(x), I've seen a visualisation that explains why the derivative of sin(x) is cos(x), so I know why the first derivative of sin(x) is cos(x), at least intuitively, but I can't really get the idea of the rest of the derivatives of sin(x), or why they are repeating, and can we really keep repeating this with no problems?
Yes, you get the derivative as $$\dfrac{d^n \sin(x)}{dx^n} = \sin \left(\frac{\pi n}{2}+x\right)$$ for $n \ge 0$ and $n$ integer. This just produces a repeating pattern of $$\sin (x),\cos (x),-\sin (x),-\cos (x) \ldots$$
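A quick numerical spot-check of the closed form and its period-4 pattern (the sample point $x=0.7$ and the function name are arbitrary choices):

```python
import math

def sin_nth_derivative(n, x):
    # the closed form above: d^n/dx^n sin(x) = sin(x + pi*n/2)
    return math.sin(x + math.pi * n / 2)

x = 0.7   # arbitrary sample point
cycle = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]

# the pattern repeats with period 4:
matches_cycle = all(
    abs(sin_nth_derivative(n, x) - cycle[n % 4]) < 1e-12 for n in range(12)
)

# and the n = 1 case agrees with a central finite-difference slope:
h = 1e-6
finite_diff = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
```

So the fourth derivative really is $\sin(x)$ again, and the cycle continues forever with no problems.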
{ "language": "en", "url": "https://math.stackexchange.com/questions/4651856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What does associativity mean for orders? I'm watching the class Category Theory for Programmers and it's said that an order (preorder, partial order, or total order) constitutes a category, and one of the conditions for this is that the relation is associative. I understand how and why $(x+y)+z = x+(y+z)$, in the sense that the order of applying the $+$ operator doesn't matter - after doing $x+y$ I get a number $w$, then I can do $w+z$. However I don't really understand what $(a\leq b)\leq c$ means. $(a\leq b)$ doesn't produce a number like $(x+y)$; it's just a true/false value of whether or not $a$ is less than or equal to $b$. What does it mean to ask "is $a$ is smaller than $b$ smaller than $c$"? What's a better interpretation of asking whether an order is associative?
I think part of your confusion is just the notation being used. To make a poset into a category, you have precisely one morphism from $a$ to $b$ exactly if $a \leq b$, but $a \leq b$ is not (usually) the "name" of that morphism. For the purposes of this answer, we'll call such a morphism (if it exists) $l_{a, b} \in \hom(a, b)$. The key idea that will affect everything below is that elements of $\hom(a, b)$ are unique if they exist (for this kind of category - this is not true for all categories). For any morphisms $f, g \in \hom(a, b)$, $f = g$ because both $f$ and $g$ are equal to $l_{a, b}$. The axioms of a category then require that there is an identity morphism $\mathrm{id}_a : a \to a$, indicating that the relation $\leq$ needs to be reflexive ($a \leq a$). Since there's exactly one morphism between two objects if it exists, $\mathrm{id}_a$ must be $l_{a, a}$. Next, we require composition: a map $\hom(b, c) \times \hom(a, b) \to \hom(a, c)$. Remembering that $\hom(a, b)$ is inhabited (by $l_{a, b}$) exactly if $a \leq b$, this translates to transitivity of the relation: $ b \leq c$ and $a \leq b$ implies $a \leq c$. But what is $l_{b, c} \circ l_{a, b}$? We know that it's an element of $\hom(a, c)$, but that set has at most one element, and if it has any element, it's $l_{a, c}$. So $l_{b, c} \circ l_{a, b} = l_{a, c}$. The other conditions talk about equality of morphisms. But remember that any morphisms with the same domain and codomain are equal, so these equalities are all trivial. Left identity is $\mathrm{id}_b \circ f = f$ for all morphisms $f \in \hom(a, b)$. Both sides of the equality are morphisms from $a$ to $b$, so they're automatically equal. Similarly, right identity, $f \circ \mathrm{id}_a = f$ is trivially true. Associativity says that for morphisms $f \in \hom(c, d)$, $g \in \hom(b, c)$ and $h \in \hom(a, b)$, $(f \circ g) \circ h = f \circ (g \circ h)$. 
Both sides of the equality are morphisms from $a$ to $d$, so they are trivially equal once again. Both sides would equal $l_{a, d}$ in this case.
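Here is a toy model of this construction for a small total order; all names (`hom`, `compose`, $l_{a,b}$ represented as the pair `(a, b)`) are my own choices, not a standard API:

```python
from itertools import product

# Objects are elements of a small total order; hom(a, b) holds the unique
# morphism l_{a,b}, represented as the pair (a, b), exactly when a <= b.
objects = [1, 2, 3, 4]

def hom(a, b):
    return [(a, b)] if a <= b else []

def compose(g, f):
    # g in hom(b, c), f in hom(a, b)  ->  the unique element of hom(a, c)
    assert f[1] == g[0], "morphisms are not composable"
    return (f[0], g[1])

identity = {a: (a, a) for a in objects}

# Unit and associativity laws hold automatically, because any two morphisms
# with the same endpoints are equal:
unit_holds = all(
    compose(identity[b], f) == f == compose(f, identity[a])
    for a in objects for b in objects for f in hom(a, b)
)
assoc_holds = all(
    compose(compose(h, g), f) == compose(h, compose(g, f))
    for a, b, c, d in product(objects, repeat=4)
    for f in hom(a, b) for g in hom(b, c) for h in hom(c, d)
)
```

Note that `compose` never had a choice to make: both composites from $a$ to $d$ must be $(a, d)$, which is the whole point of the answer.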
{ "language": "en", "url": "https://math.stackexchange.com/questions/4652045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How can you prove that the square root of two is irrational? I have read a few proofs that $\sqrt{2}$ is irrational. I have never, however, been able to really grasp what they were talking about. Is there a simplified proof that $\sqrt{2}$ is irrational?
Another one that is understandable by high schoolers and below. We will use the following lemma: If $n$ is an integer, $n^2$ is even (resp. odd) iff $n$ is even (resp. odd). For the high-schoolers, the proof is about writing $(2k)^2 = 2(2k^2)$ and $(2k+1)^2=2(2k^2+2k)+1$ ... Now, assume $\sqrt 2 = \frac{a}{b}$ with $a$ and $b$ strictly positive integers. Then $a^2=2b^2$, $\implies a^2$ is even ($=2b^2$), $\implies a$ is even (from the lemma), $\implies a=2a_1$ with $a_1 \in \mathbb N^*$, $\implies b^2=2a_1^2$. Repeat with $b$ to find that $b=2b_1$ with $b_1 \in \mathbb N^*$ and $(a_1,b_1)$ verifies $a_1^2=2b_1^2$. By repeating these two steps, we build two sequences $(a_n)_{n\in \mathbb N}$ and $(b_n)_{n\in \mathbb N}$ with values in $\mathbb N^*$ and strictly decreasing, which is impossible, ergo $\sqrt{2}$ is irrational. (Here of course we use the well-ordering principle which most high schoolers would not know about, but the intuition that the sequence would hit $0$ after at most $a_0=a$ steps is easy to get).
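This is no substitute for the proof, but the starting equation can be illustrated with an exact integer search showing $a^2=2b^2$ has no small solutions (the bound is an arbitrary choice):

```python
from math import isqrt

# Not the proof, just an exact integer search: a^2 = 2*b^2 has no solution
# in positive integers (checked here for every b up to an arbitrary bound).
LIMIT = 10_000
counterexamples = [b for b in range(1, LIMIT)
                   if isqrt(2 * b * b) ** 2 == 2 * b * b]
```

The descent argument above explains why the list must stay empty for every bound: any solution would generate a strictly smaller one forever.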
{ "language": "en", "url": "https://math.stackexchange.com/questions/5", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 17, "answer_id": 2 }
Is it true that $0.999999999\ldots=1$? I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
Another approach is the following: $$\begin{align} 0.\overline9 &=\lim_{n \to \infty} 0.\underbrace{99\dots 9}_{n\text{ times}} \\ &= \lim_{n \to \infty} \sum\limits_{k=1}^n \frac{9}{10^k} \\ &=\lim_{n \to \infty} 1-\frac{1}{10^n} \\ &=1-\lim_{n \to \infty} \frac{1}{10^n} \\ &=1. \end{align}$$
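The key identity in this computation, $\sum_{k=1}^{n} 9/10^k = 1 - 10^{-n}$, can be verified exactly with rational arithmetic (the range of $n$ tested is an arbitrary choice):

```python
from fractions import Fraction

# Exact partial sums: sum_{k=1}^{n} 9/10^k equals 1 - 1/10^n for every n,
# so the partial sums climb toward 1 and the limit is exactly 1.
identity_holds = all(
    sum(Fraction(9, 10**k) for k in range(1, n + 1)) == 1 - Fraction(1, 10**n)
    for n in range(1, 30)
)
```

Since $1/10^n \to 0$, the limit of the partial sums is $1$, which is the last step of the argument above.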
{ "language": "en", "url": "https://math.stackexchange.com/questions/11", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "348", "answer_count": 31, "answer_id": 1 }
What is a real number (also rational, decimal, integer, natural, cardinal, ordinal...)? In mathematics, there seem to be a lot of different types of numbers. What exactly are: * *Real numbers *Integers *Rational numbers *Decimals *Complex numbers *Natural numbers *Cardinals *Ordinals And as workmad3 points out, some more advanced types of numbers (I'd never heard of) * *Hyper-reals *Quaternions *Imaginary numbers Are there any other types of classifications of a number I missed?
Real numbers Real numbers are any numbers you can locate (even approximately) on an infinite number line. This is a theoretical number line with infinite "resolution" that extends infinitely in both positive and negative directions. One neat property about real numbers is that they are orderable -- that is, given any two real numbers, you can tell which one is "higher" and which one is "lower" than the other. Real numbers are closed under multiplication, addition, and subtraction. That is, if you perform any of these operations on two real numbers, their result will always be real as well. They are almost closed under division, except for the whole divide-by-zero issue. Not real numbers: * *infinity *the square root of -1 *1/0 Decimals There isn't a rigorous definition of "decimals", because depending on where you use it, you'll get different definitions. In the elementary sense, it means any number that has a "decimal part"; or a part after the radix (decimal point, etc.). In a more advanced sense, it means any number written in Base 10. Natural numbers Natural numbers are often also called "counting numbers", because they are the numbers you count with. (0,) 1, 2, 3, 4, etc. There is some disagreement in the mathematics community over whether or not 0 is a natural number. Cardinals In linguistics, this means the natural "numbers" themselves (1, 2, 3, etc.) But you probably don't want to know about linguistics. In Set Theory, two sets have the same cardinality if each element can be paired up with an element of the other set. {1,2,3} and {4,5,6} share the same cardinality because you can pair up 1&4, 2&5, 3&6. Ordinals In linguistics, this means 1st, 2nd, 3rd, etc. But you probably don't want to know about linguistics. In Set Theory, an ordinal represents the order type of a well-ordered set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 9, "answer_id": 0 }
Can you recommend a decent online or software calculator? I'm looking for an online or software calculator that can show me the history of items I typed in, much like an expensive Ti calculator. Can you recommend any?
If you want a quick RPN calculator with print capabilities for standard calculations, I use Free42 which is compatible with HP42S (RPN). There is a print key to print when you need and you can configure the file used to save those prints. You can use skins to make it look the way you want, including the original HP calculator. Free42 : An HP-42S Calculator Simulator
{ "language": "en", "url": "https://math.stackexchange.com/questions/29", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 23, "answer_id": 19 }
What is an elliptic curve, and how are they used in cryptography? I hear a lot about Elliptic Curve Cryptography these days, but I'm still not quite sure what they are or how they relate to crypto...
The technical definition is a nonsingular projective curve of genus 1, which is an abelian variety under the group law: basically, this means that you draw the line through two points on the curve -- which can be embedded in the projective plane -- and find where that line intersects the curve again (and call that the negative of the sum). We can always put elliptic curves in the (projectivization of the) form $y^2 = x^3 - Ax + B$. So, the meaning of "abelian variety" is that you can add points on the elliptic curve, which is really useful; there isn't a way to do this for most objects in algebraic geometry. Then one can study things like the torsion points on an elliptic curve, with respect to this abelian group structure: it's a theorem that there are $m^2$ torsion points of order dividing $m$ for instance, if you're working in an algebraically closed field. In fact, one way to think of this is that an elliptic curve is really--algebraically and topologically--a torus if you are working over the complex numbers, and the torsion points in the torus are easily determined. (Namely, a torus is algebraically the product of two copies of the unit circle.) This also yields the theorem about torsion points for algebraically closed fields of characteristic zero via the "Lefschetz principle." (For characteristic $p$, you need a different argument.) Other things one can consider include the group of points with coordinates in, say, the rational numbers (assuming the curve is defined by rational coefficients). One of the central theorems is that this group is finitely generated. The point is that the geometry of the elliptic curve leads to a rich algebraic structure. That's a bit about elliptic curves; I know nothing about cryptography and can't comment on that.
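To make the group law concrete, here is a minimal sketch of chord-and-tangent addition on a curve over a prime field (it needs Python 3.8+ for the modular inverse `pow(x, -1, p)`); the curve $y^2=x^3+2x+3$ over $\mathbb F_{97}$ and the sample points are arbitrary illustrative choices, not a cryptographic-strength curve:

```python
# Chord-and-tangent addition on y^2 = x^3 + a*x + b over F_p.
# Affine points are (x, y) pairs; None plays the role of the identity O.
p, a, b = 97, 2, 3

def on_curve(P):
    if P is None:
        return True
    x, y = P
    return (y * y - (x**3 + a * x + b)) % p == 0

def neg(P):
    if P is None:
        return None
    x, y = P
    return (x, (-y) % p)

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

P = (3, 6)    # 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
Q = (0, 10)   # 10^2 = 100 = 3 (mod 97)
```

Sums of points on the curve land back on the curve, and the operation is associative with identity `None` -- exactly the abelian group structure described above.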
{ "language": "en", "url": "https://math.stackexchange.com/questions/67", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 3, "answer_id": 1 }
Online resources for learning Mathematics Not sure if this is the place for it, but there are similar posts for podcasts and blogs, so I'll post this one. I'd be interested in seeing a list of online resources for mathematics learning. As someone doing a non-maths degree in college I'd be interested in finding some resources for learning more maths online, most resources I know of tend to either assume a working knowledge of maths beyond secondary school level, or only provide a brief summary of the topic at hand. I'll start off by posting MIT Open Courseware, which is a large collection of lecture notes, assignments and multimedia for the MIT mathematics courses, although in many places it's quite incomplete.
Andrea Feretti's MathOnline page.
{ "language": "en", "url": "https://math.stackexchange.com/questions/90", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 10, "answer_id": 3 }
How would you describe calculus in simple terms? I keep hearing about this weird type of math called calculus. I only have experience with geometry and algebra. Can you try to explain what it is to me?
Calculus is basically a way of calculating rates of change (similar to slopes, but called derivatives in calculus), and areas, volumes, and surface areas (for starters). It's easy to calculate these kinds of things with algebra and geometry if the shapes you're interested in are simple. For example, if you have a straight line you can calculate the slope easily. But if you want to know the slope at an arbitrary point (any random point) on the graph of some function like x-squared or some other polynomial, then you would need to use calculus. In this case, calculus gives you a way of "zooming in" on the point you're interested in to find the slope EXACTLY at that point. This is called a derivative. If you have a cube or a sphere, you can calculate the volume and surface area easily. If you have an odd shape, you need to use calculus. You use calculus to make an infinite number of really small slices of the object you're interested in, determine the sizes of the slices, and then add all those sizes up. This process is called integration. It turns out that integration is the reverse of differentiation (finding a derivative). In summary, calculus is a tool that lets you do calculations with complicated curves, shapes, etc., that you would normally not be able to do with just algebra and geometry.
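Both ideas can be illustrated numerically; a small sketch (the function name, the sample point $x=3$, and the slice count are arbitrary choices):

```python
def slope_near(x, h):
    # slope of f(x) = x^2 between x and x + h ("zooming in" as h shrinks)
    return ((x + h)**2 - x**2) / h

# As h shrinks, the slopes at x = 3 settle on 6 (calculus says the exact
# slope of x^2 at x is 2x):
slopes = [slope_near(3, 10**-k) for k in range(1, 7)]

# Area under x^2 on [0, 1] as a sum of many thin rectangular slices
# (a Riemann sum); calculus says the exact area is 1/3:
n = 100_000
area = sum((k / n)**2 * (1 / n) for k in range(n))
```

Calculus replaces these finite approximations with exact limits: the slopes converge to the derivative, and the slice sums converge to the integral.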
{ "language": "en", "url": "https://math.stackexchange.com/questions/118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 9, "answer_id": 1 }
Do complex numbers really exist? Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
I usually say: "Believe it or not, electricity and radio waves actually do behave like complex numbers. You don't see that in high school, but electric and electronics engineers do."
{ "language": "en", "url": "https://math.stackexchange.com/questions/154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "530", "answer_count": 37, "answer_id": 4 }
Do complex numbers really exist? Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
I posted a similar question recently about how complex analysis describes reality better than real analysis and I got very interesting answers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "530", "answer_count": 37, "answer_id": 36 }
Why is the volume of a sphere $\frac{4}{3}\pi r^3$? I learned that the volume of a sphere is $\frac{4}{3}\pi r^3$, but why? The $\pi$ kind of makes sense because it's round like a circle, and the $r^3$ because it's 3-D, but $\frac{4}{3}$ is so random! How could somebody guess something like this for the formula?
I'll add a Chinese version for fun. The ancient Chinese had another way to calculate this volume. The principle is the same as Cavalieri's Principle; the difference is using the intersection of two perpendicular cylinders, a "bicylinder", to pack the sphere. The Chinese name for this shape is 牟合方蓋 or "mouhefanggai" (meaning two square umbrellas). Every plane parallel to the cylinders' axes intersects the bicylinder in a square and the sphere in a circle (for more pictures and an animation, see http://phdfishman.blogspot.com/2010/02/blog-post_07.html ). This means that the volume ratio of the sphere to the bicylinder equals the constant ratio of the circle's area to the square's area: $\dfrac{\pi r^2}{(2r)^2}=\dfrac{\pi}{4}$. Now the question becomes calculating the volume of the bicylinder (white). It is also very difficult, so add a cube (red) packing the bicylinder (white). Now when the plane intersects the cube, it forms another larger square. The extra area in the large square (the big square from the cube minus the smaller square from the bicylinder) is the same as $4$ small squares (blue). As the plane cutting through the solids moves, these blue squares will form $4$ small pyramids in the corners of the cube with isosceles triangle sides and their apex at the edge of the cube. Moving through the whole bicylinder generates a total of $8$ pyramids. Now we can calculate the volume of the cube (red) minus the volume of the eight pyramids (blue) to get the volume of the bicylinder (white). The volume of the pyramids is: $$8\cdot \frac{1}{3}r^2\cdot r=\frac{8}{3}r^3,$$ and then we can calculate that the bicylinder volume is $(2r)^3-\dfrac{8}{3}r^3=\dfrac{16}{3}r^3$. Finally, using the ratio of the volumes of the bicylinder and the sphere from above, the sphere's volume is $\dfrac{\pi}{4}\dfrac{16}{3}r^3=\dfrac{4\pi}{3}r^3$. Now you can see that $3$ is from the pyramids and $4$ is from the cube! They are not random.
This picture shows the geometric relationships.
{ "language": "en", "url": "https://math.stackexchange.com/questions/164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108", "answer_count": 19, "answer_id": 11 }
Is there possibly a largest prime number? Prime numbers are numbers with no factors other than one and themselves. Factors of a number are always less than or equal to the number itself; so, the larger the number is, the larger the pool of "possible factors" that number might have. So the larger the number, the less likely it seems to be prime. Surely there must be a number where, simply, every number above it has some other factors. A "critical point" where every number larger than it simply will always have some factors other than one and itself. Has there been any research as to finding this critical point, or has it been proven not to exist? That for any $n$ there is always guaranteed to be a number higher than $n$ that has no factors other than one and itself?
Now that this question has been bumped up, I feel like posting the other somewhat famous proof that they are infinitely many primes: Consider the Fermat numbers $F_n = 2^{2^n} + 1$. It is an easy exercise to see that the gcd of any two Fermat numbers is $1$. As each number has at least one prime factor, picking for each $F_n$ a factor $p_n$ gives an infinite sequence of prime numbers. This proof is usually attributed to Pólya, but it may well be much older. See this discussion of its history on Math Overflow.
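To make the "easy exercise" concrete, here is a quick numerical check of the pairwise coprimality (a Python sketch added for illustration; the list length is an arbitrary choice of mine):

```python
from math import gcd

# Fermat numbers F_n = 2^(2^n) + 1
fermat = [2**(2**n) + 1 for n in range(7)]  # F_0, ..., F_6

# any two distinct Fermat numbers are coprime
assert all(gcd(a, b) == 1
           for i, a in enumerate(fermat)
           for b in fermat[i + 1:])
print(fermat[:5])  # → [3, 5, 17, 257, 65537]
```

The coprimality follows from the identity $F_0 F_1 \cdots F_{n-1} = F_n - 2$: a common divisor of two Fermat numbers would have to divide $2$, and every Fermat number is odd.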
{ "language": "en", "url": "https://math.stackexchange.com/questions/201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 9, "answer_id": 1 }
Proof that the sum of two Gaussian variables is another Gaussian The sum of two Gaussian variables is another Gaussian. It seems natural, but I could not find a proof using Google. What's a short way to prove this? Thanks! Edit: Provided the two variables are independent.
I prepared the following as an answer to a question which happened to close just as I was putting the finishing touches on my work. I posted it as a different (self-answered) question but following suggestions from Srivatsan Narayanan and Mike Spivey, I am putting it here and deleting my so-called question. If $X$ and $Y$ are independent standard Gaussian random variables, what is the cumulative distribution function of $\alpha X + \beta Y$? Let $Z = \alpha X + \beta Y$. We assume without loss of generality that $\alpha$ and $\beta$ are positive real numbers since if, say, $\alpha < 0$, then we can replace $X$ by $-X$ and $\alpha$ by $\vert\alpha\vert$. Then, the cumulative probability distribution function of $Z$ is $$ F_Z(z) = P\{Z \leq z\} = P\{\alpha X + \beta Y \leq z\} = \int\int_{\alpha x + \beta y \leq z} \phi(x)\phi(y) dx dy $$ where $\phi(\cdot)$ is the unit Gaussian density function. But, since the integrand $(2\pi)^{-1}\exp(-(x^2 + y^2)/2)$ has circular symmetry, the value of the integral depends only on the distance of the origin from the line $\alpha x + \beta y = z$. Indeed, by a rotation of coordinates, we can write the integral as $$ F_Z(z) = \int_{x=-\infty}^d \int_{y=-\infty}^{\infty}\phi(x)\phi(y) dx dy = \Phi(d) $$ where $\Phi(\cdot)$ is the standard Gaussian cumulative distribution function. But, $$d = \frac{z}{\sqrt{\alpha^2 + \beta^2}}$$ and thus the cumulative distribution function of $Z$ is that of a zero-mean Gaussian random variable with variance $\alpha^2 + \beta^2$.
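A quick Monte Carlo sanity check of the conclusion, using only the standard library (the coefficients, seed, and sample size below are arbitrary choices of mine, not part of the derivation):

```python
import random
import statistics

random.seed(0)
alpha, beta = 2.0, 3.0

# draw Z = alpha*X + beta*Y with X, Y independent standard Gaussians
z = [alpha * random.gauss(0, 1) + beta * random.gauss(0, 1)
     for _ in range(200_000)]

# the derivation predicts a zero-mean Gaussian with variance alpha^2 + beta^2 = 13
print(statistics.mean(z), statistics.variance(z))
```

A histogram of `z` would also look Gaussian, but checking the first two moments is enough for a smoke test.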
{ "language": "en", "url": "https://math.stackexchange.com/questions/228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 1 }
A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language The following is a quote from Surely you're joking, Mr. Feynman. The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.) Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false." It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?" "No holes." "Impossible! "Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!" Just when they think they've got me, I remind them, "But you said an orange! You can't cut the orange peel any thinner than the atoms." "But we have the condition of continuity: We can keep on cutting!" "No, you said an orange, so I assumed that you meant a real orange." So I always won. If I guessed it right, great. If I guessed it wrong, there was always something I could find in their simplification that they left out.
The Fold-and-Cut Theorem is pretty unintuitive. http://erikdemaine.org/foldcut/
{ "language": "en", "url": "https://math.stackexchange.com/questions/250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "340", "answer_count": 36, "answer_id": 14 }
Why does the series $\sum_{n=1}^\infty\frac1n$ not converge? Can someone give a simple explanation as to why the harmonic series $$\sum_{n=1}^\infty\frac1n=\frac 1 1 + \frac 12 + \frac 13 + \cdots $$ doesn't converge, on the other hand it grows very slowly? I'd prefer an easily comprehensible explanation rather than a rigorous proof regularly found in undergraduate textbooks.
This is not as good an answer as AgCl's, nonetheless people may find it interesting. If you're used to calculus then you might notice that the sum $$ 1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}$$ is very close to the integral from $1$ to $n$ of $\frac{1}{x}$. This definite integral is $\ln(n)$, so you should expect $1+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{n}$ to grow like $\ln(n)$. Although this argument can be made rigorous, it's still unsatisfying because it depends on the fact that the derivative of $\ln(x)$ is $\frac{1}{x}$, which is probably harder than the original question. Nonetheless it does illustrate a good general heuristic for quickly determining how sums behave if you already know calculus.
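The heuristic is easy to watch numerically; in fact the difference between the partial sum and $\ln(n)$ settles toward the Euler-Mascheroni constant $\gamma \approx 0.5772$ (a small standard-library sketch of mine, not part of the original answer):

```python
import math

for n in (10, 1_000, 100_000):
    h = sum(1 / k for k in range(1, n + 1))  # partial harmonic sum
    print(n, round(h, 5), round(h - math.log(n), 5))
```

The third column barely moves as $n$ grows by factors of 100, which is exactly the "grows like $\ln(n)$" claim.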
{ "language": "en", "url": "https://math.stackexchange.com/questions/255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "159", "answer_count": 25, "answer_id": 10 }
What is the single most influential book every mathematician should read? If you could go back in time and tell yourself to read a specific book at the beginning of your career as a mathematician, which book would it be?
Not a book, but an essay: "Politics and the English Language" by George Orwell. What? What? (I note that the original question doesn't say that the book has to help with mathematics. It also seems to conflate 'influential' with 'should be read'; as others have pointed out, there is no pressing reason for someone who wants to be a mathematician to read the influential books rather than the useful or the interesting ones.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "104", "answer_count": 30, "answer_id": 17 }
Simple numerical methods for calculating the digits of $\pi$ Are there any simple methods for calculating the digits of $\pi$? Computers are able to calculate billions of digits, so there must be an algorithm for computing them. Is there a simple algorithm that can be computed by hand in order to compute the first few digits?
There is also the option of approximating $\pi$ using Monte Carlo integration. The idea is this: If we agree that the area of a circle is $\pi r^2$, for simplicity we build a circle of area $\pi$ by setting $r=1$. Placing this circle wholly inside of another region of known area, preferably by inscribing it in a square of side length 2, then we have a ratio of the circle's area to the total area of the square (in this example that ratio is $\frac{\pi}{4}$). The Monte Carlo method works by approximating areas based on the ratio of the number of sample points lying within our region of interest and the total number of sample points we choose to try. If we spread a uniformly distributed sequence of $N$ points over our square of area 4, and call the number of points that land inside of the inscribed circle $p$, then we can say $\frac{\pi}{4} = \frac{p}{N}$. My implementation of this in Matlab requires tens of thousands of test points to achieve 3.14159xxxx, but I have not tried it for low-discrepancy sequences, or any other uniformly distributed point sets.
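A minimal Python version of the same experiment (the sample count and seed are my choices; as the answer notes, convergence is slow, with error shrinking only like $1/\sqrt{N}$):

```python
import random

random.seed(42)

def mc_pi(n):
    """Estimate pi as 4 * (fraction of uniform points in [-1,1]^2 inside the unit circle)."""
    inside = sum(
        1 for _ in range(n)
        if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1
    )
    return 4 * inside / n

print(mc_pi(1_000_000))  # about 3.14, good to only a few digits
```

Using a low-discrepancy (quasi-random) sequence instead of `random.uniform`, as the answer suggests, would improve the convergence rate.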
{ "language": "en", "url": "https://math.stackexchange.com/questions/297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 7, "answer_id": 1 }
Calculating the probability of two dice getting at least a $1$ or a $5$ So you have $2$ dice and you want at least one die to show a $1$ or a $5$ (the individual dice, not their sum). How do you go about calculating the answer to this question? This question comes from the game Farkle.
To visually see the answer given by balpha above, you could write out the entire set of dice rolls [1, 1], [1, 2], [1, 3], [1, 4], [1, 5], [1, 6] [2, 1], [2, 2], [2, 3], [2, 4], [2, 5], [2, 6] [3, 1], [3, 2], [3, 3], [3, 4], [3, 5], [3, 6] [4, 1], [4, 2], [4, 3], [4, 4], [4, 5], [4, 6] [5, 1], [5, 2], [5, 3], [5, 4], [5, 5], [5, 6] [6, 1], [6, 2], [6, 3], [6, 4], [6, 5], [6, 6] Total number of possible dice rolls: 36 Dice rolls that contain 1 or a 5: 20 20/36 = 5/9
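The same count can be reproduced by brute-force enumeration in a few lines of Python (added for illustration; exact fractions via the standard library):

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # all 36 ordered rolls
hits = [r for r in rolls if 1 in r or 5 in r]  # at least one 1 or 5

print(len(hits), len(rolls))              # → 20 36
print(Fraction(len(hits), len(rolls)))    # → 5/9
```

Equivalently, the complement (neither die shows a 1 or 5) has probability $(4/6)^2 = 16/36$, leaving $20/36 = 5/9$.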
{ "language": "en", "url": "https://math.stackexchange.com/questions/326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why $\sqrt{-1 \cdot {-1}} \neq \sqrt{-1}^2$? I know there must be something unmathematical in the following but I don't know where it is: \begin{align} \sqrt{-1} &= i \\\\ \frac1{\sqrt{-1}} &= \frac1i \\\\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\\\ \sqrt{\frac1{-1}} &= \frac1i \\\\ \sqrt{\frac{-1}1} &= \frac1i \\\\ \sqrt{-1} &= \frac1i \\\\ i &= \frac1i \\\\ i^2 &= 1 \\\\ -1 &= 1 \quad !!? \end{align}
Some of the confusion here is because there are two types of roots: roots and principal roots. The principal roots always return only one value. However, symbolically they are almost never distinguished. You can very, very rarely encounter $_+\sqrt{\;}\;$ for the principal root. So confusion is often created because people conflate these two types of roots, as they are represented by exactly the same symbols. $$_+\sqrt1=1$$ $$\sqrt1=-1,+1$$ $$\sqrt{-1}=-i,+i$$ If we are doing something like this $$\sqrt{-1}=\frac{1}{i}$$ it means we have to take the correct roots; otherwise, we can have even this: $$\sqrt{1}=-1\;\text{ and }\;\sqrt{1}=1$$ Hence, $1=-1\,.\;$ That's not the way to go. We have multiple roots when dealing with complex numbers; even square roots of real numbers have two values. To avoid this, the principal root is used, but it doesn't differ symbolically from the plain square root. People extremely rarely write $_+\sqrt{\;}\;$. Therefore there's a lot of confusion. The rules of extracting roots from complex numbers don't strictly follow the rules used with principal roots, so you may easily arrive at a wrong answer. So when we see, for example, the formula: $$\sqrt[n]{z}=\sqrt[n]{p}\, \bigg(\cos \frac{φ+2πk}{n}+i\sin\frac{φ+2πk}{n}\bigg)$$ We have to understand that what is meant is this $$\sqrt[n]{z}=\,_+\sqrt[n]{p}\, \bigg(\cos \frac{φ+2πk}{n}+i\sin{\frac{φ+2πk}{n}}\bigg)$$ Another example. We can't reduce a multiple root $\;\sqrt[nk]{z^k}\;$ to $\;\sqrt[n]{z}\;$ because the first one has $nk$ different root values and the second only $n\,$. Formulas such as $\sqrt[n]{ab}=\sqrt[n]{a}\sqrt[n]{b}$ will often work, but not always. Generally, we can't use rules for principal roots of real numbers when we are dealing with complex numbers.
To recap: First, $\sqrt{-1}=-i,+i\;\;$ ( NOT JUST $\;i\,$) Second, when dealing with complex numbers, $\sqrt1=-1,+1\;\;$ ( NOT JUST $\;1\;$) And third, $\sqrt{-1}\cdot\sqrt{-1}=1\;$ if we take $\;i,-i\;$ or $\;-i,i\;$ as roots, and $\sqrt{-1}\cdot\sqrt{-1}=-1\;$ if we take either $\;i,i$ or $\;-i,-i\;$ as roots. And even if we deal with real numbers and extract roots (not principal roots) we can still arrive at different values. In very old books $\sqrt{-1}$ is sometimes used instead of $i$. It can only add to confusion. Most crucial here: Roots must be distinguished from principal roots. When dealing with complex numbers we don't extract some principal root of them, but we have a set of different root values. So, almost every single line is wrong in the OP's 'proof'. Kevin Holt provided a very nice step by step illustration.
{ "language": "en", "url": "https://math.stackexchange.com/questions/438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "187", "answer_count": 14, "answer_id": 4 }
Good Physical Demonstrations of Abstract Mathematics I like to use physical demonstrations when teaching mathematics (putting physics in the service of mathematics, for once, instead of the other way around), and it'd be great to get some more ideas to use. I'm looking for nontrivial ideas in abstract mathematics that can be demonstrated with some contraption, construction or physical intuition. For example, one can restate Euler's proof that $\sum \frac{1}{n^2} = \frac{\pi^2}{6}$ in terms of the flow of an incompressible fluid with sources at the integer points in the plane. Or, consider the problem of showing that, for a convex polyhedron whose $i^{th}$ face has area $A_i$ and outward facing normal vector $n_i$, $\sum A_i \cdot n_i = 0$. One can intuitively show this by pretending the polyhedron is filled with gas at uniform pressure. The force the gas exerts on the $i^{th}$ face is proportional to $A_i \cdot n_i$, with the same proportionality for every face. But the sum of all the forces must be zero; otherwise this polyhedron (considered as a solid) could achieve perpetual motion. For an example showing less basic mathematics, consider "showing" the double cover of $SO(3)$ by $SU(2)$ by needing to rotate your hand 720 degrees to get it back to the same orientation. Anyone have more demonstrations of this kind?
How about probability & statistics? Not exactly physics, but lots of applications which can be demonstrated with empirical data. Any example where "taking an average" seems reasonable is amenable to finding a distribution. Many examples: frequencies of arrival (traffic, say) as Poisson or negative binomial; arrival times as geometric; insurance claims as lognormal or gamma (or other more complex skewed distributions, but no need to get that complicated); percentiles as beta; human physical characteristics as normal. Depending upon your course, you could even take empirical data and try fitting distributions using various techniques, which employ calculus, numerical methods, power series (e.g. moments), etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 19, "answer_id": 5 }
Conjectures that have been disproved with extremely large counterexamples? I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture. I'm sure that everyone here is familiar with it; it describes an operation on a natural number – $n/2$ if it is even, $3n+1$ if it is odd. The conjecture states that if this operation is repeated, all numbers will eventually wind up at $1$ (or rather, in an infinite loop of $1-4-2-1-4-2-1$). I fired up Python and ran a quick test on this for all numbers up to $5.76 \times 10^{18}$ (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at $1$. Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.) I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?" To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!" And he said, "It is my conjecture that there are none! (and if any, they are rare)". Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?
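For reference, the iteration being tested is only a few lines of Python (a minimal sketch of mine; the serious searches mentioned above use far more clever caching than this):

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under n -> n/2 (even), n -> 3n+1 (odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # → 8
print(collatz_steps(27))  # → 111 (27 is famously slow for its size)
```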
A case where you can "dial in" a large counterexample involves this theorem: "A natural number is square if and only if it is a $p$-adic square for all primes $p$." A $p$-adic square is a number $n$ for which $x^2\equiv n\bmod p^k$ has a solution for any positive $k$. If we include all primes $p$ we have no counterexamples, but suppose we are computationally testing for squares and we have only space and time to include finitely many primes. How high we can go before we get a non-square number that slips through the sieve depends on how many primes we include in our test. With $p=2$ as the only prime base, the first non-square we miss is $17$. Putting $p=3$ in addition to $p=2$ raises that threshold to $73$. Using $2,3,5,7$ gives $1009$ as the first "false positive". The numbers appear to be growing fast enough to make the first counterexample large with a fairly modest number of primes. See http://oeis.org/A002189 for more details.
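The thresholds quoted above can be reproduced using the standard characterization of $p$-adic squares: write $n = p^e m$ with $p \nmid m$; then $n$ is a $p$-adic square iff $e$ is even and $m$ is a quadratic residue mod $p$ (for odd $p$), or $m \equiv 1 \pmod 8$ (for $p = 2$). A Python sketch of mine, not part of the original answer:

```python
import math

def is_padic_square(n, p):
    """Exact test: n = p^e * m is a p-adic square iff e is even and m is a square unit."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    if e % 2:
        return False
    if p == 2:
        return n % 8 == 1                    # odd 2-adic squares are exactly 1 mod 8
    return pow(n, (p - 1) // 2, p) == 1      # Euler's criterion for odd p

def first_false_positive(primes):
    """Smallest non-square that is a p-adic square for every p in the list."""
    n = 2
    while True:
        if math.isqrt(n) ** 2 != n and all(is_padic_square(n, p) for p in primes):
            return n
        n += 1

print(first_false_positive([2]))           # → 17
print(first_false_positive([2, 3]))        # → 73
print(first_false_positive([2, 3, 5, 7]))  # → 1009
```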
{ "language": "en", "url": "https://math.stackexchange.com/questions/514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "260", "answer_count": 18, "answer_id": 7 }
What is "ultrafinitism" and why do people believe it? I know there's something called "ultrafinitism" which is a very radical form of constructivism that I've heard said means people don't believe that really large integers actually exist. Could someone make this a little bit more precise? Are there good reasons for taking this point of view? Can you actually get math done from that perspective?
Greg Egan has some fun with this idea in one of his best short stories, "Luminous" (published in the collection of the same name). A pair of researchers are exploring an apparent "defect" in mathematics: "You still don't get it, do you, Bruno? You're still thinking like a Platonist. The universe has only been around for fifteen billion years. It hasn't had time to create infinities. The far side can't go on forever-because somewhere beyond the defect, there are theorems that don't belong to any system. Theorems that have never been touched, never been tested, never been rendered true or false." Terrific stuff!
{ "language": "en", "url": "https://math.stackexchange.com/questions/531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90", "answer_count": 3, "answer_id": 0 }
Explanation of method for showing that $\frac{0}{0}$ is undefined (This was asked due to the comments and downvotes on this Stackoverflow answer. I am not that good at maths, so was wondering if I had made any basic mistakes) Ignoring limits, I would like to know if this is a valid explanation for why $\frac00$ is undefined: $x = \frac00$ $x \cdot 0 = 0$ Hence there are an infinite number of values for $x$, as anything multiplied by $0$ is $0$. However, it received comments with two general themes. One is that you lose the values of $x$ by multiplying by $0$. The other is that the last line should really be $x \cdot 0 = \frac00 \cdot 0$, which still involves a division by $0$. Is there any merit to either argument? More to the point, are there any major flaws in my explanation, and is there a better way of showing why $\frac00$ is undefined?
I think that ignoring limits is problematic. If there were a limit of the function $f(x,y)=x/y$ for $x,y \to 0$ regardless of how the limit is performed, then one would define that value to be $f(0,0)$, even if everything else is strange. Since the limiting value depends on the way the limit is done, choosing a value for $f(0,0)$ is counterproductive as it gives a non-continuous function. Better to have a continuous function over a slightly smaller domain. This also forces the point that if you do have a limit process that results in the evaluation of $f(0,0)$, you realize early on that you should examine the limit carefully rather than use $\lim g(x) = g(\lim x)$ (which is only true for continuous functions, of course.) By the way, this might be too trivial, but I'll give an example of how the limiting value depends on the limit: $\displaystyle \lim_{x \to 0} \frac{\sin(x)}{x} = 1$, (both $\sin(x)$ and $x$ go to 0) $\displaystyle \lim_{x \to 0} \frac{\cos x - 1}{x} = 0$ (again, both numerator and denominator go to 0, but numerator goes "faster") $\displaystyle \lim_{x \to 0^+} \frac{\sqrt{x}}{x} =+\infty$ (here $x$ must approach $0$ from the right for $\sqrt{x}$ to be defined).
{ "language": "en", "url": "https://math.stackexchange.com/questions/548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 6, "answer_id": 0 }
Varying definitions of cohomology So I know that given a chain complex we can define the $n$-th cohomology by taking $\ker d_n/\operatorname{im} d_{n+1}$. But I don't know how this corresponds to the idea of holes in topological spaces (maybe this is homology, I'm a tad confused).
Edited to clear some things up: Simplicial and singular (co)homology were invented to detect holes in spaces. To get an intuitive idea of how this works, consider subspaces of the plane. Here the 2-chains are formal sums of things homeomorphic to the closed disk, and 1-chains are formal sums of things homeomorphic to a line segment. The operator $d$ takes the boundary of a chain. For example, the boundary of the closed disk is a circle. If we take $d$ of the circle we get $0$ since a circle has no boundary. And in general it happens that $d^2 = 0$, that is, boundaries themselves have no boundary. Now suppose we remove the origin from the plane and take a circle around the origin. This circle is in the kernel of $d$ since it has no boundary. However, it does not bound any 2-chain in the space (since the origin is removed) and so it is not in the image of the boundary operator from 2-chains. Thus the circle represents a non-trivial element in the quotient space $\ker(d) / \operatorname{im}(d)$. The way I have defined things makes the above a homology theory simply because the operator $d$ decreases dimension. Cohomology is the same thing, except that the operator increases dimension (for example the exterior derivative on differential forms). Thus algebraically there really is no difference between cohomology and homology since we can just change the grading from $i$ to $-i$. From a homology we can get a corresponding cohomology theory by dualizing, that is by looking at maps from the group of chains to the underlying group (e.g. $\Bbb Z$ or $\Bbb R$). Then $d$ on the cohomology theory becomes the adjoint of the previous boundary operator and thus increases degrees.
{ "language": "en", "url": "https://math.stackexchange.com/questions/573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Why does the log-log scale on my Slide Rule work? For a long time I've eschewed bulky and inelegant calculators for the use of my trusty trig/log-log slide rule. For those unfamiliar, here is a simple slide rule simulator using Javascript. To demonstrate, find the $LL_3$ scale, which is on the back of the virtual one. Let's say we want to solve $3^n$. First, you would move the cursor (the red line) over where $3$ is on the $LL_3$ scale. Then, you would slide the middle slider until the $1$ on the $C$ scale is lined up to the cursor. And voila, your slide rule is set up to find $3^n$ for any arbitrary $n$. For example, to find $3^2$, move the cursor to $2$ on the $C$ scale, and your answer is what the cursor is on on the $LL_3$ scale ($9$). Move your cursor to $3$ on $C$, and it should be lined up with $27$ on $LL_3$. To $4$ on C, it is on $81$ on $LL_3$. You can even do this for non-integer exponents ($1.3,\cdots$ etc.) You can also do this for exponents less than one, by using the $LL_2$ scale. For example, to do $3^{0.5}$, you would find $5$ on the $C$ scale, and look where the cursor is lined up at on the $LL_2$ scale (which is about $1.732$). Anyways, I was wondering if anyone could explain to me how this all works? It works, but...why? What property of logarithms and exponents (and logarithms of logarithms?) allows this to work? I already understand how the basics of the Slide Rule works ($\ln(m) + \ln(n) = \ln(mn)$), with only multiplication, but this exponentiation eludes me.
If $x = 3^n$, then $\log x = n \log 3$. The C scale is logarithmic, which means that if the reading is $p$, then the distance along the scale is proportional to $\log p$. Similarly, on the LL3 scale the distance is proportional to $\log \log p$. Thus, when you align 1 to "3" on LL3, you introduce an offset of $\log \log 3$. Suppose you then get a reading of $n$ on the C scale; the corresponding value $p$ on LL3 satisfies $$\underbrace{\log \log p}_{\text{LL3}} = \underbrace{\log \log 3}_{\text{offset}} + \underbrace{\log n}_{\text{C}}.$$ Eliminating one level of log gives $\log p = n \log 3$, and eliminating one more level of log gives $p = 3^n$. LL2 is the same as LL3 except that it covers a different range.
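The bookkeeping above is easy to verify numerically. In the sketch below (the function names are mine), positions along the rule are measured in decades, so a reading $n$ on the C scale sits at $\log_{10} n$ and a reading $p$ on LL3 sits at $\log_{10} \ln p$:

```python
import math

def pos_C(n):
    return math.log10(n)            # distance along the C scale

def pos_LL3(p):
    return math.log10(math.log(p))  # distance along the LL3 scale

offset = pos_LL3(3)                 # align C's index (1) with 3 on LL3

for n in (2, 3, 4, 1.3):
    # invert the LL3 scale to read off p under the cursor at n on C
    p = math.exp(10 ** (offset + pos_C(n)))
    print(n, p)                     # p matches 3**n: 9, 27, 81, 4.17...
```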
{ "language": "en", "url": "https://math.stackexchange.com/questions/604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Balance chemical equations without trial and error? In my AP chemistry class, I often have to balance chemical equations like the following: $$ \mathrm{Al} + \mathrm O_2 \to \mathrm{Al}_2 \mathrm O_3 $$ The goal is to make both sides of the arrow have the same number of each kind of atom by choosing coefficients for the compounds on each side. A solution: $$ 4 \mathrm{Al} + 3 \mathrm{ O_2} \to 2 \mathrm{Al}_2 \mathrm{ O_3} $$ When the subscripts become really large, or there are a lot of atoms involved, trial and error is impossible unless performed by a computer. What if some chemical equation cannot be balanced? (Do such equations exist?) I tried one for a long time only to realize the problem was wrong. My teacher said trial and error is the only way. Are there other methods?
To solve this problem, create an ordered set of chemical elements $(Al,O)$ and an ordered set of chemical species $(Al,Al_2O_3,O_2)$. The ordering of the chemical elements can be used to create a vector for each chemical species. $Al = \begin{pmatrix}1\\0\end{pmatrix}$ $Al_2O_3 = \begin{pmatrix}2\\3\end{pmatrix}$ $O_2 = \begin{pmatrix}0\\2\end{pmatrix}$ The ordering of the chemical species (this ordering is arbitrary, though standard linear algebra algorithms will produce different, but equivalent, results for different orderings) can be used to create an element abundance matrix, $A$. $A = \begin{pmatrix}1&2&0\\0&3&2\end{pmatrix}$ The coefficients of the set of linearly independent stoichiometric equations can be represented by the stoichiometric matrix, $N$, where $N$ is a representation of the null space of $A$. $AN=0$ To put the stoichiometric matrix into 'canonical form' apply the operations: $(\mathrm{RREF}(N^T))^T$ The ordering of the species will preferentially give the species at the left side of matrix $A$ stoichiometric coefficients of $1$. This method will produce the maximal number of linearly independent stoichiometric equations. They can be added together or multiplied by scalars and still represent valid stoichiometric equations (they are linearly independent). If you are going to try and calculate thermodynamic equilibrium with a mixture of gas and condensed species, I recommend ordering the gas species first in the list. This will try and isolate 1 gas per independent stoichiometric equation (if possible) and will facilitate using the chemical equilibrium constant method $K_{eq}$ to calculate thermodynamic equilibrium.
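The null-space step can be carried out exactly with a short standard-library script. The function below is my own illustration, restricted to a one-dimensional null space, which is all this example needs:

```python
from fractions import Fraction
from math import lcm

def nullspace_vector(A):
    """Integer vector v with A v = 0, assuming the null space is one-dimensional."""
    rows = [[Fraction(x) for x in row] for row in A]
    ncols = len(rows[0])
    pivots, r = [], 0
    for c in range(ncols):  # Gauss-Jordan elimination to RREF
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots][0]
    v = [Fraction(0)] * ncols
    v[free] = Fraction(1)
    for row, c in zip(rows, pivots):
        v[c] = -row[free]
    scale = lcm(*[x.denominator for x in v])  # clear denominators
    return [int(x * scale) for x in v]

# element-by-species matrix for elements (Al, O) and species (Al, Al2O3, O2)
A = [[1, 2, 0],
     [0, 3, 2]]
print(nullspace_vector(A))  # → [4, -2, 3]
```

Opposite signs separate reactants from products: $4\,\mathrm{Al} - 2\,\mathrm{Al_2O_3} + 3\,\mathrm{O_2} = 0$ is exactly $4\,\mathrm{Al} + 3\,\mathrm{O_2} \to 2\,\mathrm{Al_2O_3}$.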
{ "language": "en", "url": "https://math.stackexchange.com/questions/624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 4, "answer_id": 2 }
What's an intuitive way to think about the determinant? In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $n \times n$ matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?
(I considered making this a comment, but I thought it might deserve more attention than a comment would receive. Upvotes and downvotes will tell if I am right or wrong.) Complement about the sign of the determinant: I loved the accepted answer by Jamie, but I was frustrated that it did not give more explanation about the sign of the determinant and the notion of "rotation" or "orientation" of a vector. The answer from Marc Van Leeuwen comments more on this, but maybe not enough for everyone (at least not for me) to understand what it means for a matrix to change the orientation of the space it transforms. So I googled the issue and ended up at the following explanation, which I find excellent and accessible: http://mathinsight.org/determinant_linear_transformation#lintrans3D
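To add one concrete data point to the orientation story: the determinant of any rotation is $+1$, while a reflection has determinant $-1$; this sign is exactly the "orientation flip" discussed at the link. A tiny sketch (my own example, not taken from the linked page):

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    return a * d - b * c

theta = math.pi / 5
rotation = ((math.cos(theta), -math.sin(theta)),
            (math.sin(theta),  math.cos(theta)))
reflection = ((1, 0),
              (0, -1))            # mirror across the x-axis

print(det2(rotation))    # ~1: rotations preserve orientation
print(det2(reflection))  # -1: reflections reverse it
```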
{ "language": "en", "url": "https://math.stackexchange.com/questions/668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "793", "answer_count": 17, "answer_id": 11 }
Sum of the alternating harmonic series $\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k} = \frac{1}{1} - \frac{1}{2} + \cdots $ I know that the harmonic series $$\sum_{k=1}^{\infty}\frac{1}{k} = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots + \frac{1}{n} + \cdots \tag{I}$$ diverges, but what about the alternating harmonic series $$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k} = \frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots + \frac{(-1)^{n+1}}{n} + \cdots \text{?} \tag{II}$$ Does it converge? If so, what is its sum?
A proof without words by Matt Hudelson:
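(The picture shows the series converging to $\ln 2$.) A quick numerical look at the partial sums, added here for concreteness:

```python
import math

def partial_sum(n):
    """n-th partial sum of the alternating harmonic series."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, partial_sum(n))
print("ln 2 =", math.log(2))  # the limit
```

The convergence is slow (the error after $n$ terms is roughly $1/(2n)$), which is typical of conditionally convergent series.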
{ "language": "en", "url": "https://math.stackexchange.com/questions/716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60", "answer_count": 12, "answer_id": 1 }
Online Math Degree Programs Are there any real online mathematics (applied math, statistics, ...) degree programs out there? I'm full-time employed, thus not having the flexibility of attending an on campus program. I also already have a MSc in Computer Science. My motivation for a math degree is that I like learning and am interested in the subject. I've studied through number of OCW courses on my own, but it would be nice if I could actually be able to have my studying count towards something. I've done my share of Googling for this, but searching for online degrees seems to bring up a lot of institutions that (at least superficially) seem a bit shady (diploma mills?).
I don't know if you're still looking, but I found this page in my own search for an online math program. http://www.onlinecollege.org/bachelors/mathematics/
{ "language": "en", "url": "https://math.stackexchange.com/questions/734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 10, "answer_id": 6 }
Least wasteful use of stamps to achieve a given postage You have sheets of $42$-cent stamps and $29$-cent stamps, but you need at least $\$3.20$ to mail a package. What is the least amount you can make with the $42$- and $29$-cent stamps that is sufficient to mail the package? A contest problem such as this is probably most easily solved by tabulating the possible combinations, using $0$ through ceiling(total/greater value) of the greater-value stamp and computing the necessary number of the smaller stamp and the total postage involved. The particular example above would be solved with a $9$-row table, showing the minimum to be $\$3.23$, made with seven $42$-cent stamps and one $29$-cent stamp. Is there a better algorithm for solving this kind of problem? What if you have more than two values of stamps?
This is a simple variation of the knapsack problem, which is NP-complete. Let n be the target value; the problem can be solved using knapsack's dynamic programming solutions. If you have a finite number of stamps, add them all up and subtract the target value, then run 0-1 knapsack on the stamps with the resulting number; the stamps left outside the knapsack are the solution. If you have an infinite supply of stamps, you can either make the supply finite (use $\lceil \frac{n}{t_i}\rceil$ stamps for a stamp of value $t_i$) and run the above process, or you can compute the result by running knapsack O(log n) times, similar to a binary search. It's no better than a naive backtracking solution when n is large, but I assume you are not going to pay $100 for stamps. ;)
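For the unbounded-supply version, a direct dynamic program over achievable totals is a simple concrete solution; the search can stop at target + max(stamps), since removing one stamp from any larger total would still meet the target. A sketch:

```python
def min_sufficient_postage(target, stamps):
    """Smallest total >= target expressible as a nonnegative integer
    combination of the stamp values (unlimited supply of each)."""
    limit = target + max(stamps)          # an optimal total is below this
    reachable = [False] * limit
    reachable[0] = True
    for t in range(1, limit):
        reachable[t] = any(t >= s and reachable[t - s] for s in stamps)
    return min(t for t in range(target, limit) if reachable[t])

print(min_sufficient_postage(320, [42, 29]))  # 323 = 7*42 + 1*29
```

This works for any number of stamp denominations, not just two, and runs in O(limit * len(stamps)) time.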
{ "language": "en", "url": "https://math.stackexchange.com/questions/742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Why does Benford's Law (or Zipf's Law) hold? Both Benford's Law (if you take a list of values, the distribution of the most significant digit is rougly proportional to the logarithm of the digit) and Zipf's Law (given a corpus of natural language utterances, the frequency of any word is roughly inversely proportional to its rank in the frequency table) are not theorems in a mathematical sense, but they work quite good in the real life. Does anyone have an idea why this happens? (see also this question)
Let's take two quantities that we're counting:

* Quantity L grows linearly. It starts at 100 and grows by 1 unit per day.
* Quantity E grows exponentially. It starts at 100 and grows by 1% per day.

If we record these quantities every day for a long time, we'd see that it takes L as much time to go from 1000 -> 2000 as it does to go from 8000 -> 9000, whereas E takes longer to go from 1000 -> 2000 than it does to go from 8000 -> 9000. This remains true for all orders of magnitude, and it generalises to all quantities that grow exponentially: any process that grows exponentially will spend longer with its leading digit at 1 than at 2, longer at 2 than at 3, and so on. As most processes in nature grow exponentially (size of a settlement, number of trees in a forest, salaries, number of covid cases, ...), we see that the leading digit of the things we measure will disproportionately be 1. Note: as a previous poster pointed out, this applies to anything growing at a 'rate', not just growing exponentially.
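This is easy to see empirically: iterate 1% daily growth, record the leading digit of the value each day, and compare with Benford's prediction $\log_{10}(1 + 1/d)$. A sketch:

```python
import math
from collections import Counter

value, counts, days = 100.0, Counter(), 20000
for _ in range(days):
    d = int(10 ** (math.log10(value) % 1))  # leading digit of value
    counts[d] += 1
    value *= 1.01                           # 1% growth per day

for d in range(1, 10):
    # empirical frequency vs. Benford's log10(1 + 1/d)
    print(d, round(counts[d] / days, 3), round(math.log10(1 + 1 / d), 3))
```

The two columns agree to within a fraction of a percent, since 20000 days covers many full decades of growth.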
{ "language": "en", "url": "https://math.stackexchange.com/questions/781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 10, "answer_id": 9 }
Separation of variables for partial differential equations What class of Partial Differential Equations can be solved using the method of separation of variables?
For example, for linear homogeneous PDEs with dependent variable $u$ and independent variables $x$ and $y$, the separability condition is that, on letting $u(x,y)=X(x)Y(y)$, the PDE can be rewritten in the form $\dfrac{\sum\limits_{a_1=0}^{b_1}M_{a_1}(x)X^{[a_1]}(x)}{\sum\limits_{a_2=0}^{b_2}N_{a_2}(x)X^{[a_2]}(x)}=\dfrac{\sum\limits_{a_3=0}^{b_3}P_{a_3}(y)Y^{[a_3]}(y)}{\sum\limits_{a_4=0}^{b_4}Q_{a_4}(y)Y^{[a_4]}(y)}$. For example, the PDE $x^2u_{xy}-yu_{yy}+u_x-4u=0$ mentioned in The canonical form of a nonlinear second order PDE is a non-separable example, while the PDE $u_{xy}-yu_{yy}+u_x-4u=0$ is a separable one. Starting from PDEs with three independent variables, the separability conditions are more difficult to describe: for example, a linear homogeneous PDE with dependent variable $u$ and independent variables $x$, $y$ and $z$ is separable not only when, on letting $u(x,y,z)=X(x)Y(y)Z(z)$, it can be rewritten in the form $\dfrac{\sum\limits_{a_1=0}^{b_1}M_{1,a_1}(x)X^{[a_1]}(x)}{\sum\limits_{a_2=0}^{b_2}M_{2,a_2}(x)X^{[a_2]}(x)}+\dfrac{\sum\limits_{a_3=0}^{b_3}M_{3,a_3}(y)Y^{[a_3]}(y)}{\sum\limits_{a_4=0}^{b_4}M_{4,a_4}(y)Y^{[a_4]}(y)}+\dfrac{\sum\limits_{a_5=0}^{b_5}M_{5,a_5}(z)Z^{[a_5]}(z)}{\sum\limits_{a_6=0}^{b_6}M_{6,a_6}(z)Z^{[a_6]}(z)}=0$, but also when it can be rewritten in the form $\dfrac{\sum\limits_{a_1=0}^{b_1}M_{1,a_1}(x)X^{[a_1]}(x)}{\sum\limits_{a_2=0}^{b_2}M_{2,a_2}(x)X^{[a_2]}(x)}+\dfrac{\sum\limits_{a_3=0}^{b_3}M_{3,a_3}(y)Y^{[a_3]}(y)}{\sum\limits_{a_4=0}^{b_4}M_{4,a_4}(y)Y^{[a_4]}(y)}+\dfrac{\sum\limits_{a_3=0}^{b_3}N_{3,a_3}(y)Y^{[a_3]}(y)\sum\limits_{a_5=0}^{b_5}M_{5,a_5}(z)Z^{[a_5]}(z)}{\sum\limits_{a_4=0}^{b_4}N_{4,a_4}(y)Y^{[a_4]}(y)\sum\limits_{a_6=0}^{b_6}M_{6,a_6}(z)Z^{[a_6]}(z)}=0$.
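To make the criterion concrete, here is the separation worked out for the separable example $u_{xy}-yu_{yy}+u_x-4u=0$ (a routine computation, included only for illustration). Substituting $u(x,y)=X(x)Y(y)$ gives $$X'Y' - yXY'' + X'Y - 4XY = 0 \implies X'(Y'+Y) = X(yY''+4Y) \implies \frac{X'}{X} = \frac{yY''+4Y}{Y'+Y}.$$ The left side depends only on $x$ and the right side only on $y$, so both equal a constant $\lambda$, yielding the two ODEs $X' = \lambda X$ and $yY'' - \lambda Y' + (4-\lambda)Y = 0$; this is exactly the ratio form displayed above.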
{ "language": "en", "url": "https://math.stackexchange.com/questions/796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Applications of class number There is the notion of class number from algebraic number theory. Why is such a notion defined and what good comes out of it? It is nice if it is $1$; we have unique factorization of all ideals; but otherwise?
We say that a prime $p$ is regular if it does not divide the class number of the $p$-th cyclotomic field. For regular primes, it is relatively easy to prove Fermat's last theorem, as outlined for instance in Milne's notes. Basically, everything would be easy if the class number were 1, in which case one could use unique factorization. If the class number is prime to $p$, you use the fact that every ideal whose $p$-th power is principal must itself be principal. This, together with unique factorization for ideals, turns out to be all that you need in the naive proof.
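For concreteness: by Kummer's criterion, $p$ is regular if and only if $p$ divides none of the numerators of the Bernoulli numbers $B_2, B_4, \ldots, B_{p-3}$, which makes regularity easy to test. A sketch in exact rational arithmetic (the recurrence $\sum_{k=0}^{m}\binom{m+1}{k}B_k=0$ is the standard one):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """B_0, ..., B_n (with B_1 = -1/2), from the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def is_regular(p):
    """Kummer's criterion: p is regular iff p divides no numerator
    of B_2, B_4, ..., B_{p-3}."""
    B = bernoulli_numbers(max(p - 3, 0))
    return all(B[k].numerator % p != 0 for k in range(2, p - 2, 2))

print([p for p in [3, 5, 7, 11, 13, 37, 59, 67] if not is_regular(p)])
# [37, 59, 67]: the first irregular primes
```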
{ "language": "en", "url": "https://math.stackexchange.com/questions/806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 0 }
Why is compactness in logic called compactness? In logic, a semantics is said to be compact iff if every finite subset of a set of sentences has a model, then so to does the entire set. Most logic texts either don't explain the terminology, or allude to the topological property of compactness. I see an analogy as, given a topological space X and a subset of it S, S is compact iff for every open cover of S, there is a finite subcover of S. But, it doesn't seem strong enough to justify the terminology. Is there more to the choice of the terminology in logic than this analogy?
As far as I know, the link comes from the syntactic theory. You are given a set of sentence symbols F = {f_i} and you are allowed to form complex statements using them. You can combine the elementary statements with AND, OR, NOT operators and parentheses, in the usual way. So you get a set X of composed sentences, like (f AND g) OR (NOT h) or something like that. A syntactic version of the compactness theorem states the following. Assume that for every finite subset Y of X you can assign truth values to the f_i in such a way that all sentences in Y are true. Then you can do the same for X. Proof. Consider the topological space A obtained by taking the product of {0, 1} over the set F. The topology on A is the product topology. By Tychonoff's theorem, A is compact. For every composed statement s, the set of truth-value assignments which make s true is a finite boolean combination of cylinders, hence it is a closed (indeed clopen) subset of A. The hypothesis says that every finite intersection of such closed sets, for s ranging over X, is nonempty. Hence the intersection of all such closed sets is nonempty, which means one can make all statements in X true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87", "answer_count": 7, "answer_id": 2 }
What is the best way to factor arbitrary polynomials? I am currently working on a Computer Algebra System and was wondering for suggestions on methods of finding roots/factors of polynomials. I am currently using the Numerical Durand-Kerner method but was wondering if there are any good non-numerical methods (primarily for simplifying fractions etc). Ideally this should work for equations in multiple variables.
If you're looking to factor exactly, then you'll need to use something that's not one of the fundamental operations of addition, subtraction, multiplication, division and extraction of roots. The Abel-Ruffini theorem says so for degree five and above. However, there are numerous other methods to find roots exactly, using more general functions, my favorite being theta functions, as explained in the appendix to Mumford's "Tata Lectures on Theta II"
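On the exact side, one cheap first pass for a CAS is the rational root theorem: it finds every rational root of an integer-coefficient polynomial, peeling off linear factors before you fall back to numerical methods. A minimal single-variable sketch (assumes a nonzero constant term; factor out powers of $x$ first):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """Rational roots of c[0]*x^n + ... + c[n] (integer coefficients,
    nonzero constant term), via the rational root theorem: any root
    p/q in lowest terms has p | c[n] and q | c[0]."""
    candidates = {Fraction(sign * p, q)
                  for p in divisors(coeffs[-1])
                  for q in divisors(coeffs[0])
                  for sign in (1, -1)}
    def value(x):                # Horner evaluation, exact arithmetic
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc
    return sorted(x for x in candidates if value(x) == 0)

print(rational_roots([1, -6, 11, -6]))  # x^3-6x^2+11x-6 has roots 1, 2, 3
print(rational_roots([2, -1, -2, 1]))   # 2x^3-x^2-2x+1 has roots -1, 1/2, 1
```

Whatever remains after dividing out these linear factors is guaranteed to have no rational roots, which is where the numerical (or theta-function) machinery takes over.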
{ "language": "en", "url": "https://math.stackexchange.com/questions/868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Correct usage of the phrase "In the sequel"? History? Alternatives? While I feel quite confident that I've inferred the correct meaning of "In the sequel" from context, I've never heard anyone explicitly tell me, so first off, to remove my niggling doubts: What does this phrase mean? (Someone recently argued to me that "sequel" was actually supposed to refer to a forthcoming second part of a paper, which I found highly unlikely, but I'd just like to make sure. ) My main questions: At what points in the text, and for what kinds of X, is it appropriate to use the phrase "In the sequel, X" in a paper? In a book? Is it ever acceptable to introduce definitions via "In the sequel, we introduce the concept of a "blah", which is a thing satisfying ..." at the start of a paper or book without a formal "Definition. A "blah" is a thing, satsifying ..." in the main text of the paper or book? Finally, out of curiosity, I'm wondering how long this phrase has been around, if it's considered out of date or if it's still a popular phrase, and what some good alternatives are.
Mathematics is generally seen as a precise and specific language. Surely, if there is ambiguity in the phrase "in the sequel", it should be avoided. There are far more accurate ways of expressing both of the meanings that phrase could carry, so why create needless confusion?
{ "language": "en", "url": "https://math.stackexchange.com/questions/907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 2 }
Probability to find connected pixels Say I have an image, with pixels that can be either $0$ or $1$. For simplicity, assume it's a $2D$ image (though I'd be interested in a $3D$ solution as well). A pixel has $8$ neighbors (if that's too complicated, we can drop to $4$-connectedness). Two neighboring pixels with value $1$ are considered to be connected. If I know the probability $p$ that an individual pixel is $1$, and if I can assume that all pixels are independent, how many groups of at least $k$ connected pixels should I expect to find in an image of size $n\times n$? What I really need is a good way of calculating the probability of $k$ pixels being connected given the individual pixel probabilities. I have started to write down a tree to cover all the possibilities up to $k=3$, but even then, it becomes really ugly really fast. Is there a more clever way to go about this?
I have to admit that I'm not very good at math, but I took a piece of paper, made some sketches, and got to the following idea: Say the probability of hitting a specific pixel is the same for every pixel; then you can write it as P(X=x1) = a. The probability of hitting that exact same pixel twice is much lower (it is squared), assuming independence: P(X=[x1;x1]) = P(X=x1)*P(X=x1) = a*a. Now if you count any of the 8 neighbours as a hit (a positive event), you increase the second factor by a factor of 8, because you no longer need the exact same pixel but any one of the 8 pixels around it. Since you assume every pixel has the same probability and that they are independent (which, I assume, is not the case for real images), you get P(X=[x1;[x2;x3;x4;x5;x6;x7;x8;x9]]) = a*(a+a+a+a+a+a+a+a), i.e. P(X=[x1;[x2,...,x9]]) = a*(8*a) = 8*a^2. This seems plausible, since hitting the exact same pixel again is clearly less probable than hitting any one of the 8 pixels around it.
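Since the exact case analysis blows up quickly (as the question notes), one practical cross-check is to estimate the expected number of groups by simulation. A sketch assuming 8-connectivity (grid size, $p$, $k$ and trial count are arbitrary choices):

```python
import random

def count_clusters_at_least(grid, k):
    """Number of 8-connected components of 1-pixels with size >= k."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    count = 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] and not seen[i][j]:
                stack, size = [(i, j)], 0        # flood-fill one component
                seen[i][j] = True
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nx, ny = x + dx, y + dy
                            if (0 <= nx < n and 0 <= ny < n
                                    and grid[nx][ny] and not seen[nx][ny]):
                                seen[nx][ny] = True
                                stack.append((nx, ny))
                if size >= k:
                    count += 1
    return count

# Monte Carlo estimate of E[number of groups of >= k connected pixels]
random.seed(0)
n, p, k, trials = 50, 0.1, 3, 100
avg = sum(count_clusters_at_least([[random.random() < p for _ in range(n)]
                                   for _ in range(n)], k)
          for _ in range(trials)) / trials
print(avg)
```

Switching to 4-connectivity only means replacing the 8-neighbour loop with the four axis-aligned offsets.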
{ "language": "en", "url": "https://math.stackexchange.com/questions/949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
What's the difference between open and closed sets? What's the difference between open and closed sets? Especially with relation to topology - rigorous definitions are appreciated, but just as important is the intuition!
I will not reiterate the very nice definitions found in the other answers; however, I think that these "practical" definitions might help you on an intuitive level. Open sets are typically used as domains for functions, as they are more useful for analysing "continuous" properties like differentiability. Also, they don't have nasty borders (hence you don't have to deal with functions which are well behaved only on one side of the edge). Closed sets are useful because, if they are also bounded (in Euclidean space), they are compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 4 }
Usefulness of Conic Sections Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where?
Conic sections should definitely be retained. If you don't cover conic sections, then what other examples can you cover? Lines? Too simple. General curves? Insufficiently concrete. Examples are very important for illustrating the general theory and techniques. Also, in a multivariable calculus course, typical examples will involve quadric surfaces. Here conic sections will come into play, since hyperplane sections (or "level curves") of quadric surfaces are conic sections.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Usage of dx in Integrals All the integrals I'm familiar with have the form: $\int f(x)\mathrm{d}x$. And I understand these as the sum of infinite tiny rectangles with an area of: $f(x_i)\cdot\mathrm{d}x$. Is it valid to have integrals that do not have a differential, such as $\mathrm{d}x$, or that have the differential elsewhere than as a factor ? Let me give couple of examples on what I'm thinking of: $\int 1$ If this is valid notation, I'd expect it to sum infinite ones together, thus to go inifinity. $\int e^{\mathrm{d}x}$ Again, I'd expect this to go to infinity as $e^0 = 1$, assuming the notation is valid. $\int (e^{\mathrm{d}x} - 1)$ This I could potentially imagine to have a finite value. Are any such integrals valid? If so, are there any interesting / enlightening examples of such integrals?
No, it's not valid. The $\mathrm{d}x$ in the integral is a representation of the fact that the integral is obtained as an area, by multiplying the function value at each point by an infinitesimal interval. As the manner in which we calculate the area does not change, the notation does not change. There are different notations that are used when the integral is over a curve, or over more than one variable (thus leading, for example, to volumes). The $\mathrm{d}(\text{variable})$ notation is also used as a reminder that the integral is taken against a specific variable and not another, e.g. that $\int x/y \,\mathrm{d}x$ differs from $\int x/y \,\mathrm{d}y$.
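One way to see that the differential is doing real work: integrating the same expression against different variables gives different answers, $$\int \frac{x}{y}\,\mathrm{d}x = \frac{x^2}{2y} + C \qquad \text{while} \qquad \int \frac{x}{y}\,\mathrm{d}y = x\ln|y| + C,$$ because $\mathrm{d}x$ and $\mathrm{d}y$ name different variables of integration (the other variable is held constant).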
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 1 }
Mandelbrot-like sets for functions other than $f(z)=z^2+c$? Are there any well-studied analogs to the Mandelbrot set using functions other than $f(z)= z^2+c$ in $\mathbb{C}$?
Starting with the very well known iteration formula that creates the Mandelbrot set, $z_{n+1}=z_n^2+c$, where $c=z_0=x_0+iy_0=\text{Re}(c)+i\text{Im}(c)$ is a complex constant which is the starting point of the trajectory generated in the complex plane by the iteration: in Doppelpot, from the German blog Fraktale Welten by Nachtwaechter, it is explained that when the recursion involves two complex exponentiations, the fractals it generates are usually beautiful, particularly those of Julia sets. I suggested to the author to try the following formulae: $z_{n+2}=z_{n+1}^{2}+z_{n}+c$ $z_{n+1}=z_{n}^{z_{n}c}$ $z_{n+1}=z_{n}^{z_{n}+c}$ $z_{n+2}=z_{n+1}^{3}+c^{z_{n}}$ $z_{n+1}=z_{n}^{c}$ In Fünf Formeln the fractals are shown. I reproduced some with the author's permission in this entry of my blog. One of the reproduced images is generated by $z_{n+2}=z_{n+1}^{2}+z_{n}+c$, and another by $z_{n+2}=z_{n+1}^{3}+c^{z_{n}}$.
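If you want to experiment with variants like these yourself, the Mandelbrot-style set of an iteration $z \mapsto f(z,c)$ consists of those $c$ for which the orbit of $0$ stays bounded, and a simple escape-time test is the usual way to draw it. A minimal sketch (the bailout radius $2$ is exact for $z^2+c$; for other maps it is only a common heuristic):

```python
def escape_time(c, f, max_iter=100, radius=2.0):
    """Iterations until |z| exceeds radius under z -> f(z, c),
    starting from z = 0; returns max_iter if it never escapes."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = f(z, c)
    return max_iter

mandel = lambda z, c: z * z + c        # the classic set
cubic = lambda z, c: z ** 3 + c        # a 'multibrot' variant

print(escape_time(0j, mandel))         # 100: c = 0 is in the set
print(escape_time(1 + 0j, mandel))     # 3: the orbit 0, 1, 2, 5 escapes
```

Mapping `escape_time` over a grid of $c$ values and coloring by the returned count produces the familiar pictures; only the lambda changes between variants.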
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 11, "answer_id": 7 }
If and only if, which direction is which? I can never figure out (because the English language is imprecise) which part of "if and only if" means which implication. ($A$ if and only if $B$) = $(A \iff B)$, but is the following correct: ($A$ only if $B$) = $(A \implies B)$ ($A$ if $B$) = $(A \impliedby B)$ The trouble is, one never comes into contact with "$A$ if $B$" or "$A$ only if $B$" using those constructions in everyday common speech.
It's easier to work out if you have a specific example. Let

A: I am a parent
B: I have a child

"I am a parent if and only if I have a child" has two parts:

"I am a parent if I have a child" can be rephrased: if I have a child, then I am a parent: B => A.

"I am a parent only if I have a child" can be understood to mean: if I do not have a child, then I am not a parent: ~B => ~A. But this is logically equivalent to: if I am a parent, then I have a child: A => B.

So the "if and only if" locution implicitly involves some grammatical transformations. The meaning may not be immediately obvious, but it can be worked out.
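The equivalence can also be checked mechanically: "A only if B" is $\lnot B \Rightarrow \lnot A$, and a four-row truth table shows this agrees with $A \Rightarrow B$. A tiny sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for a, b in product([False, True], repeat=2):
    # "A only if B"  =  not-B implies not-A  =  A implies B
    assert implies(not b, not a) == implies(a, b)
print("contrapositive agrees with the implication on all four rows")
```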
{ "language": "en", "url": "https://math.stackexchange.com/questions/1135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 4, "answer_id": 1 }
Proof that $n^3+2n$ is divisible by $3$ I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs! Problem: For any natural number $n , n^3 + 2n$ is divisible by $3.$ This makes sense Proof: Basis Step: If $n = 0,$ then $n^3 + 2n = 0^3 +$ $2 \times 0 = 0.$ So it is divisible by $3.$ Induction: Assume that for an arbitrary natural number $n$, $n^3+ 2n$ is divisible by $3.$ Induction Hypothesis: To prove this for $n+1,$ first try to express $( n + 1 )^3 + 2( n + 1 )$ in terms of $n^3 + 2n$ and use the induction hypothesis. Got it $$( n + 1 )^3+ 2( n + 1 ) = ( n^3 + 3n^2+ 3n + 1 ) + ( 2n + 2 ) \{\text{Just some simplifying}\}$$ $$ = ( n^3 + 2n ) + ( 3n^2+ 3n + 3 ) \{\text{simplifying and regrouping}\}$$ $$ = ( n^3 + 2n ) + 3( n^2 + n + 1 ) \{\text{factored out the 3}\}$$ which is divisible by $3$, because $(n^3 + 2n )$ is divisible by $3$ by the induction hypothesis. What? Can someone explain that last part? I don't see how you can claim $(n^3+ 2n ) + 3( n^2 + n + 1 )$ is divisible by $3.$
The driving force behind induction is this: first you show that a base case holds (when $n = 1$, for example). Then you show that the hypothesis being true at some $k$ implies that it holds at $k+1$. Then, since you have verified the hypothesis at $n = 1$, you have it at $n = 2$. Then, since it holds at $n = 2$, it holds at $n = 3$, and so on. Note that the domain over which the hypothesis holds should be defined in the hypothesis itself. Now, for your specific case, let's see how this works. First, for the base case ($n = 1$), we see the following: $$1^3 + 2 \cdot 1 = 3.$$ That is clearly divisible by $3$, so we have our base case. Now assume that for some $k \geq 1$, $k^3 + 2k$ is divisible by $3$. Then for $n = k + 1$, we have: $$(k+1)^3 + 2(k+1) = k^3 + 3k^2 + 3k + 1 + 2k + 2 = k^3 + 3k^2 + 5k + 3$$ The $3k^2 + 3$ portion is clearly divisible by $3$, so we need only show that $k^3 + 5k$ is divisible by $3$. From the assumption above, we know that $k^3 + 2k = 3m$ for some positive integer $m$. Then, $$k^3 + 5k = k^3 + 2k + 3k = 3m + 3k = 3(m + k),$$ so the hypothesis holds at $n = k+1$. Thus, we have for all $n \geq 1$, $n^3 + 2n$ is divisible by $3$. As the others have suggested, there are certainly other ways of showing the $(k+1)$th case, but hopefully this overall form helps you see how mathematical induction works.
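Not a substitute for the proof, but the claim is cheap to sanity-check numerically before writing the induction:

```python
# check that n^3 + 2n is divisible by 3 for the first few thousand n
assert all((n ** 3 + 2 * n) % 3 == 0 for n in range(5000))
print("holds for n = 0 .. 4999")
```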
{ "language": "en", "url": "https://math.stackexchange.com/questions/1196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 13, "answer_id": 8 }
Which average to use? (RMS vs. AM vs. GM vs. HM) The generalized mean (power mean) with exponent $p$ of $n$ numbers $x_1, x_2, \ldots, x_n$ is defined as $$ \bar x = \left(\frac{1}{n} \sum x_i^p\right)^{1/p}. $$ This is equivalent to the harmonic mean, arithmetic mean, and root mean square for $p = -1$, $p = 1$, and $p = 2$, respectively. Also its limit at $p = 0$ is equal to the geometric mean. When should the different means be used? I know harmonic mean is useful when averaging speeds and the plain arithmetic mean is certainly used most often, but I've never seen any uses explained for the geometric mean or root mean square. (Although standard deviation is the root mean square of the deviations from the arithmetic mean for a list of numbers.)
I admit I don't really know what type of answer you're looking for, so I'll say something that might very well be irrelevant for your purposes but which I enjoy. At least it'll provide some context for the power means you asked about. These generalized power means are basically the discrete (finitary) analogs of the $L^p$ norms. So, for instance, it's with these norms that you prove (using, say, elementary calculus) the finitary version of Hölder's inequality, which is really important in analysis, because it leads (via a limiting argument) to the more important fact that $L^p$ and $L^q$ spaces (which are continuous analogs of these finitary $\ell^p$ spaces) are dual for $p,q$ conjugate exponents. This duality is really important: one example is that if you are trying to prove something about the $L^p$ spaces that is preserved under duality, you just have to restrict yourself to the case $1 \leq p \leq 2$. The theory of singular integral operators provides examples of this: basically, it's easy to prove they are bounded (i.e., reasonably well-behaved) for $p=2$ by Fourier analysis; you prove that they're "weak-bounded" on $L^1$ (in some sense which I won't make precise); then you apply general results on interpolation to get boundedness in the range $1 \leq p \leq 2$; finally, this duality operation gives it for $p>2$ as well. Also, root-mean-square speed is used to define temperature in physics.
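Coming back to the power means themselves: all four named means are values of the same function, and the power mean is nondecreasing in $p$ (so HM $\le$ GM $\le$ AM $\le$ RMS, with equality only when all the $x_i$ coincide). A small sketch:

```python
import math

def power_mean(xs, p):
    """Generalized (power) mean; p = 0 is taken as the
    geometric-mean limit."""
    n = len(xs)
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** p for x in xs) / n) ** (1 / p)

xs = [1.0, 2.0, 4.0, 8.0]
hm, gm, am, rms = (power_mean(xs, p) for p in (-1, 0, 1, 2))
print(hm, gm, am, rms)          # increasing, since the data are not all equal
assert hm < gm < am < rms
```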
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
How can I understand and prove the "sum and difference formulas" in trigonometry? The "sum and difference" formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, * *How can I prove that these formulas are correct? *More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula, although such answers are still encouraged, for completeness.
Let $\alpha, \beta, \theta \in \mathbb{R}.$ Consider vectors $\vec{P} = (\cos(\alpha), \sin(\alpha)), \vec{Q} = (\cos(\beta), \sin(\beta) ),$ and their rotated versions $\vec{P'} = (\cos(\alpha + \theta), \sin(\alpha + \theta) ), \vec{Q'} = (\cos(\beta + \theta), \sin(\beta + \theta) ).$ Rotations preserve distances, so $PQ = P'Q',$ ie $(\cos(\alpha) - \cos(\beta)) ^2 + (\sin(\alpha) - \sin(\beta) ) ^2$ $= (\cos(\alpha + \theta) - \cos(\beta + \theta) ) ^2 + (\sin(\alpha + \theta) - \sin(\beta + \theta) ) ^2 ,$ ie ${\color{green}{\cos(\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta) = \cos(\alpha + \theta) \cos(\beta + \theta) + \sin(\alpha + \theta) \sin(\beta + \theta)}}.$ This holds for all values of $\alpha, \beta, \theta.$ Setting $\alpha = 0,$ $\cos(\beta) = \cos(\theta) \cos(\beta + \theta) + \sin(\theta) \sin(\theta + \beta).$ Further setting $\lbrace \beta = A+B, \theta = (-B) \rbrace$ gives $\cos(A+B) = \cos(A) \cos(B) - \sin(A) \sin(B).$ Similarly setting $\alpha = \frac{\pi}{2}$ gives $\sin(\beta) = -\sin(\theta) \cos(\beta + \theta) + \cos(\theta) \sin(\beta + \theta),$ and further setting ${ \lbrace \beta = A + B, \theta = (-B) \rbrace }$ gives ${ \sin(A+B) = \sin(A) \cos(B) + \cos(A) \sin(B) }.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119", "answer_count": 14, "answer_id": 11 }
Prove: $(a + b)^{n} \geq a^{n} + b^{n}$ Struggling with yet another proof: Prove that, for any positive integer $n: (a + b)^n \geq a^n + b^n$ for all $a, b > 0:$ I wasted $3$ pages of notebook paper on this problem, and I'm getting nowhere slowly. So I need some hints. $1.$ What technique would you use to prove this (e.g. induction, direct, counter example) $2.$ Are there any tricks to the proof? I've seen some crazy stuff pulled out of nowhere when it comes to proofs...
Hint: Use the binomial theorem. This states that $(a + b)^n = \sum \limits_{k = 0}^n {n \choose k} a^{n-k} b^k = a^n + b^n + \sum \limits_{k=1}^{n-1} {n \choose k} a^{n-k} b^k$. Now, note that every term in the second sum is positive; this is because a, b, and the binomial coefficients are all positive. Therefore, $(a+b)^n=a^n+b^n+\text{ (sum of positive terms) }\geqslant a^n+b^n\;.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 3 }
Why $PSL_3(\mathbb F_2)\cong PSL_2(\mathbb F_7)$? Why are groups $PSL_3(\mathbb{F}_2)$ and $PSL_2(\mathbb{F}_7)$ isomorphic? Update. There is a group-theoretic proof (see answer). But is there any geometric proof? Or some proof using octonions, maybe?
Both are simple groups of order 168, and each simple group of order $168$ is isomorphic to $PSL_2(7)$. An extended exercise, with hints. Let $G$ be a simple group of order $168$. Prove the following:

* $G$ has $8$ Sylow $7$-subgroups.
* $G$ can be identified with a subgroup of $A_8$.
* Labelling the objects it acts on as $\infty,0,1,\ldots,6$, one Sylow $7$-subgroup is generated by $g=(0\ 1\ 2\ 3\ 4\ 5\ 6)$.
* The group $G$ is $2$-transitive.
* The normalizer of $\langle g\rangle$ is generated by $g$ and $h=(1\ 2\ 4)(3\ 6\ 5)$.
* The setwise stabilizer $H$ of $\{\infty,0\}$ is generated by $h$ and another element $k$ which is the product of $(\infty\ 0)$ and three other disjoint transpositions.
* If $H$ were cyclic, the Sylow $2$-subgroup of $G$ would be unique, leading to a contradiction. So $H$ is nonabelian and we can take $k=(\infty\ 0)(1\ 6)(2\ 3)(4\ 5)$.
* Finally, $G$ is $PSL_2(7)$.
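For a quick sanity check of the last hint: the permutations $g$, $h$, $k$ above are exactly the Möbius maps $x \mapsto x+1$, $x \mapsto 2x$ and $x \mapsto -1/x$ on the projective line $\mathbb F_7 \cup \{\infty\}$, which is one concrete way to see $PSL_2(7)$ acting on $8$ points. A brute-force closure computation confirming they generate a group of order $168$ (encoding $\infty$ as index $7$ is my own convention):

```python
INF = 7   # encode the point at infinity as index 7; 0..6 are F_7

def perm(mapping):
    """Permutation of {0,...,7} as a tuple, from a partial dict."""
    return tuple(mapping.get(i, i) for i in range(8))

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(8))

g = perm({i: (i + 1) % 7 for i in range(7)})                # x -> x + 1
h = perm({i: (2 * i) % 7 for i in range(7)})                # x -> 2x
k = perm({INF: 0, 0: INF,
          **{i: -pow(i, -1, 7) % 7 for i in range(1, 7)}})  # x -> -1/x

group = {tuple(range(8))}              # start from the identity
frontier = [g, h, k]
while frontier:
    p = frontier.pop()
    if p not in group:
        group.add(p)
        frontier.extend(compose(p, s) for s in (g, h, k))

print(len(group))  # 168, the order of PSL(2, 7)
```

In finite groups inverses are positive powers of the generators, so closing under composition alone is enough.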
{ "language": "en", "url": "https://math.stackexchange.com/questions/1401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 7, "answer_id": 5 }
Symmetric nash equilibrium I was reading this paper on position auctions for web ads. Basically, there are N slots each with an expected number of clicks (in a particular time period) $x$. Each agent makes a bid $B_i$ of how much they are willing to pay per click. The bids are put in decreasing order and agent who makes the $i^\text{th}$ highest bid receives receives the slot with the $i^\text{th}$ highest click through rate for a price $P_i$ equal to $B_{i+1}$ (except for the last agent who pays nothing). To obtain the least price to win the $i^\text{th}$ slot, we note that we have to beat the $i^\text{th}$ agent's bid if we are moving up, but that we only have to beat the $i^\text{th}$ agents price if we are moving down. We easily obtain the following equations for Nash Equilibria: $(v_s-p_s)x_s\ge(v_s-p_t)x_t$ for $t>s$ $(v_s-p_s)x_s\ge(v_s-P_{t-1})x_t$ for $t < s$ The paper then defines the symmetric Nash equilibrium to be a set of prices with: $(v_s-p_s)x_s\ge(v_s-p_t)x_t$ for all $t$ and $s$ Basically, instead of having the second part of the previous conditions, the first part of the previous equations is valid anywhere. Is the symmetric Nash equilibrium defined more generally? In particular, is it the same as the symmetric equilibrium in this Wikipedia article
The notion of "symmetric equilibrium" (the one from the Wikipedia article) is not applicable here, because the game is not symmetric (different players have different "profits per click"). I've had a look at the paper, and I think that the "symmetric Nash equilibrium" in your case is nothing but a technically convenient special case of a Nash equilibrium (the latter is not unique in your case). Also, I've noted a probable misprint in the proof of "Fact 1". It should be $(v_s-p_s)x_s \geq (v_s-p_{s+1})x_{s+1}$ instead of $(v_s-p_s)x_s \geq (v_{s+1}-p_{s+1})x_{s+1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding an addition formula without trigonometry I'm trying to understand better the following addition formula: $$\int_0^a \frac{\mathrm{d}x}{\sqrt{1-x^2}} + \int_0^b \frac{\mathrm{d}x}{\sqrt{1-x^2}} = \int_0^{a\sqrt{1-b^2}+b\sqrt{1-a^2}} \frac{\mathrm{d}x}{\sqrt{1-x^2}}$$ The term $a\sqrt{1-b^2}+b\sqrt{1-a^2}$ can be derived from trigonometry via the sine addition formula (since $\cos(t) = \sqrt{1 - \sin^2(t)}$), but I have not been able to find any way to derive this formula without trigonometry. How could it be done? edit: fixed a mistake in my formula.
The formula is proved easily by setting $f(a) =\int_{0}^{a}(1-x^2)^{-1/2}\,dx$ and then defining $$u=f(a)+f(b), \qquad v=a\sqrt{1-b^2}+b\sqrt{1-a^2}$$ and showing that $u, v$ are functionally dependent, so that $u=g(v) $ for some function $g$. Putting $b=0$ we get $v=a$ and $u=f(a)=f(v) $, so that $f=g$ and hence $u=f(v) $, as desired. The functional dependence between $u, v$ is proved by noting that $$\frac{\partial u} {\partial a} \frac{\partial v} {\partial b} =\frac{\partial u} {\partial b} \frac{\partial v} {\partial a} $$ Using the same technique one can prove the more difficult formula $$\int_{0}^{a}\frac{dx}{\sqrt {1-x^4}}+\int_{0}^{b}\frac{dx}{\sqrt{1-x^4}}=\int_{0}^{c}\frac{dx}{\sqrt{1-x^4}}$$ where $$c=\frac{a\sqrt{1-b^4}+b\sqrt{1-a^4}} {1+a^2b^2} $$ (Euler and Fagnano established this, and it was one of the key results in the early development of elliptic function theory).
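Both addition formulas are easy to check numerically without invoking any trigonometric functions, by evaluating the integrals directly with a composite Simpson's rule (the helper names below are my own):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

a, b = 0.3, 0.4

# arcsine-type integral: f(a) + f(b) = f(a*sqrt(1-b^2) + b*sqrt(1-a^2))
f = lambda x: 1 / math.sqrt(1 - x**2)
c1 = a * math.sqrt(1 - b**2) + b * math.sqrt(1 - a**2)
print(abs(simpson(f, 0, a) + simpson(f, 0, b) - simpson(f, 0, c1)))

# lemniscatic (Euler-Fagnano) integral with the rational addition term c
g = lambda x: 1 / math.sqrt(1 - x**4)
c2 = (a * math.sqrt(1 - b**4) + b * math.sqrt(1 - a**4)) / (1 + a**2 * b**2)
print(abs(simpson(g, 0, a) + simpson(g, 0, b) - simpson(g, 0, c2)))
```

Both printed residuals are at the level of quadrature round-off, consistent with the two identities (the check is valid as long as the summed integrals stay below the quarter-period, which holds for these small $a, b$).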
{ "language": "en", "url": "https://math.stackexchange.com/questions/1555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Is the natural map $L^p(X) \otimes L^p(Y) \to L^p(X \times Y)$ injective? Let $X,Y$ be $\sigma$-finite measure spaces, and let $L^p(X) \otimes L^p(Y)$ be the algebraic tensor product. The product has a natural map into $L^p(X \times Y)$ which takes $\sum a_{ij} f_i \otimes g_j$ to the function $F(x,y) = \sum a_{ij} f_i(x) g_j(y)$. A moment's thought shows that this map is well-defined. Is it also injective? It seems that this should be true, but I can't see how to prove it. Intuitively, one needs to show that if $\sum a_{ij} f_i(x) g_j(y) = 0$ a.e., then one should be able to cancel all the terms in the sum using bilinearity. It is not quite clear how to do this without knowing anything about the terms.
EDIT: Here is a cleaned-up and corrected version of this answer, based on Pierre-Yves' suggestion (thanks!). His answer above contains a much more complete version. If $\sum_{i=1}^n a_{i} f_i \otimes g_i$ is not the zero element of $L^p(X) \otimes L^p(Y)$, we may assume without loss of generality that the $f_i$ are linearly independent. We can also assume that $a_1 \ne 0$ and $g_1 \ne 0$. Suppose that the corresponding function $F(x,y) = \sum_{i=1}^n a_{i} f_i(x) g_i(y) = 0$ a.e. Since $g_1 \ne 0$, there is a measurable $B \subset Y$ of positive finite measure such that $\int_B g_1 \ne 0$ (the integral is finite by Hölder). Then by Fubini's theorem, for a.e. $x$ we have $$ 0 = \int_{B} F(x,y)dy = \sum_{i=1}^n a_i \left(\int_{B} g_i\right) f_i(x). $$ This contradicts the assumed linear independence of the $f_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
How can I calculate this expected rate? In DotA, there is a character called "Axe". Every time he is attacked, he has a chance ($17\%$) to spin his blade and deal damage based on the skill's level: $100 / 125 / 150 / 175$ damage at levels $1, 2, 3$, and $4$. When the spin activates, it triggers a cooldown of $0.7 / 0.65 / 0.6 / 0.55$ seconds during which attacks on Axe do not generate a chance to spin. I was trying to calculate the average damage per second this skill generates, given that most of the time Axe takes around $5$ attacks per second (an average of $1.667$ attacks per second per creep, $3$ creeps per camp). I figured that the probability of him spinning in any second is $1 - 0.83^5$, so given the damage from earlier, we should expect him to deal $60.61 / 75.76 / 90.91 / 106.07$ damage per second. I got this far, but then I realized that I have to factor in the cooldown somewhere, and I have no idea where to start.
Let $r$ be the chance that an attack causes Axe to spin and $d$ the damage the spin does. Since we know the number of attacks on Axe per second, we just need to find the expected spin damage per attack; call this $x$. If someone attacks Axe once, we expect the retaliation damage to be $rd$. However, each proc blocks a certain number of subsequent attacks depending on the cooldown; call this number $b$. This reduces our expected value by $brx$. So we have: $$x = rd - brx \implies x(1+br) = rd \implies x = \frac{rd}{1+br}$$ Note that this assumes the combat goes on for an infinite number of attacks. When the combat is shorter, the expected damage will be higher. Given the speed at which combat happens in Dota, this effect will often be significant.
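A Monte Carlo sketch of the formula under one concrete scenario: level-1 spin ($d=100$, $r=0.17$, cooldown $0.7$ s) with evenly spaced attacks at $5$ per second, so each proc blocks the next $b=3$ attacks. The attack timing and trial count are my own assumptions, not game data:

```python
import random

def simulate(r=0.17, d=100, cooldown=0.7, attack_interval=0.2,
             n_attacks=1_000_000, seed=0):
    # Expected spin damage per incoming attack, with procs disabled during cooldown.
    rng = random.Random(seed)
    damage = 0.0
    cooldown_until = -1.0
    for i in range(n_attacks):
        t = i * attack_interval
        if t >= cooldown_until and rng.random() < r:
            damage += d
            cooldown_until = t + cooldown
    return damage / n_attacks

r, d, b = 0.17, 100, 3          # b = 3 attacks blocked per 0.7 s cooldown at 0.2 s spacing
theory = r * d / (1 + b * r)    # expected damage per attack; ~56.3 damage per second at 5 attacks/s
print(simulate(), theory)
```

The simulated per-attack damage agrees closely with $rd/(1+br) \approx 11.26$, which at $5$ attacks per second gives roughly $56$ damage per second at level 1, noticeably below the $60.61$ obtained without accounting for the cooldown.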
{ "language": "en", "url": "https://math.stackexchange.com/questions/1675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Indefinite summation of polynomials I've been experimenting with the summation of polynomials. My line of attack is to treat the subject the way I would for calculus, but not using limits. By way of a very simple example, suppose I wish to add all the numbers between $10$ and $20$ inclusive, and find a polynomial which I can plug the numbers into to get my answer. I suspect it's some form of polynomial with degree $2$. So I do an integer 'differentiation': $$ \mathrm{diff}\left(x^{2}\right)=x^{2}-\left(x-1\right)^{2}=2x-1 $$ I can see from this that I nearly have my answer, so assuming an inverse 'integration' operation and re-arranging: $$ \frac{1}{2}\mathrm{diff}\left(x^{2}+\mathrm{int}\left(1\right)\right)=x $$ Now, I know that the 'indefinite integral' of 1 is just x, from 'differentiating' $x-(x-1) = 1$. So ultimately: $$ \frac{1}{2}\left(x^{2}+x\right)=\mathrm{int}\left(x\right) $$ So to get my answer I take the 'definite' integral: $$ \mathrm{int}\left(x\right):10,20=\frac{1}{2}\left(20^{2}+20\right)-\frac{1}{2}\left(9^{2}+9\right)=165 $$ (the lower bound needs decreasing by one) My question is, is there a general way I can 'integrate' any polynomial, in this way? Please excuse my lack of rigour and the odd notation.
For any particular polynomial there is an easier way to do indefinite summation than using the Bernoulli numbers, going off of Greg Graviton's answer. Here we'll use the forward difference $\Delta f(x) = f(x+1) - f(x)$. Then $\displaystyle \Delta {x \choose n} = {x \choose n-1}.$ This implies that we can perform a "Taylor expansion" on any polynomial to write it in the form $f(x) = \sum a_n {x \choose n}$ by evaluating the finite differences $\Delta^n f(0)$ at zero. For any particular polynomial $f$ it is easy to write these finite differences down by constructing a table. In general, the formula is $\displaystyle a_n = \Delta^n f(0) = \sum_{k=0}^{n} (-1)^{n-k} {n \choose k} f(k)$ as one can readily prove by writing $\Delta = S - I$ where $S$ is the shift operator $S f(x) = f(x+1)$ and $I$ is the identity operator $I f(x) = f(x)$. Then the indefinite sum of $f$ is just $\sum a_n {x \choose n+1}$. This is the easiest way I know how to do such computations by hand, and it also leads to a fairly easy method for polynomial interpolation given the values of a polynomial at consecutive integers.
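This method is easy to mechanize. A small sketch (with names of my own choosing) that computes the coefficients $a_n = \Delta^n f(0)$ by the formula above and then sums a polynomial over an integer range, using the convention $\sum_{k=M}^{N} f(k) = F(N+1) - F(M)$ where $F(x) = \sum_n a_n \binom{x}{n+1}$:

```python
from math import comb

def newton_coeffs(f, deg):
    # a_n = Δ^n f(0) = Σ_{k=0}^{n} (-1)^(n-k) C(n,k) f(k)
    return [sum((-1) ** (n - k) * comb(n, k) * f(k) for k in range(n + 1))
            for n in range(deg + 1)]

def poly_sum(f, deg, lo, hi):
    # Σ_{k=lo}^{hi} f(k) via the indefinite sum F(x) = Σ_n a_n C(x, n+1)
    a = newton_coeffs(f, deg)
    F = lambda x: sum(a[n] * comb(x, n + 1) for n in range(deg + 1))
    return F(hi + 1) - F(lo)

print(poly_sum(lambda x: x, 1, 10, 20))    # 165, as in the question
print(poly_sum(lambda x: x**2, 2, 0, 10))  # 385 = 0^2 + 1^2 + ... + 10^2
```

Note the shift by one in the bounds, $F(N+1)-F(M)$, which matches the questioner's observation that "the lower bound needs decreasing by one" (here pushed onto the upper bound instead, since this answer uses the forward difference $\Delta f(x)=f(x+1)-f(x)$).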
{ "language": "en", "url": "https://math.stackexchange.com/questions/1710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Fields that require both CS and pure math I'm mainly a CS major, but I also want to learn more advanced mathematics. I see lot of cross over between CS and applied math, same can't be said about pure math. Definition of 'advanced': Something a pure math major learns during and after late undergraduate years. What are the fields that combines advanced mathematical topics with computer science? Theoretical computer science and computational group theory are good examples.
Cryptography is a field which requires a lot of number theory. Formal languages and automata theory require some graph theory, and automata connect naturally to computer science as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 12, "answer_id": 0 }
Why is Gimbal Lock an issue? I understand what the problem with Gimbal Lock is, such that at the North Pole, all directions are south, there's no concept of east and west. But what I don't understand is why this is such an issue for navigation systems? Surely if you find you're in Gimbal Lock, you can simply move a small amount in any direction, and then all directions are right again? Why does this cause such a problem for navigation?
You seem to have a very idealised idea of what is likely to happen. In the vicinity of a pole the magnetic field lines are nearly vertical. Any compass that needs gimbals is not going to be reliable over a wide area.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }