Determining existence of a solution for a system of linear inequalities I have a set of vectors $\mathbf{v}_1,\mathbf{v}_2,...,\mathbf{v}_m\in\mathbb{R}^n$ and I want to know if there exists a nonzero vector $\mathbf{x}$ such that $\mathbf{x}\cdot\mathbf{v}_i\le0$ for all $i$. This is the same as saying the inequality $\mathbf{x}^TV\le\mathbf{0}$ has a nonzero solution, where $V$ is an $n\times m$ matrix whose columns are the $\mathbf{v}_i$. Geometrically, a solution exists if a hyperplane can be drawn through the origin such that all of the vectors are on one side (or contained within the hyperplane). One can show using Stiemke's Theorem that a solution exists if and only if $V\mathbf{y}=\mathbf{0}\ ,\ \mathbf{y}>\mathbf{0}$ does not have a solution. I have found a few sources on feasibility problems in linear programming, but none that have allowed me to find a solution to this problem. Any help would be appreciated.
An alternative approach based on Stiemke's theorem would involve the solution of a single somewhat larger LP. Solve the LP $\max z$ subject to $Vy=0$ and $y_{i} \geq z$ for $i=1, 2, \ldots, m$. If the optimal value is $z=0$, then $Vy=0$, $y > 0$ has no solution.
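For anyone who wants to run this check numerically, below is a minimal sketch using scipy.optimize.linprog. The matrix $V$ is made-up toy data, and $z$ is capped at $1$ so the LP stays bounded (any feasible $z>0$ could otherwise be scaled up indefinitely).

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: columns of V are the vectors v_i (n = 2, m = 3)
V = np.array([[1.0, 0.0, -2.0],
              [0.0, 1.0, -2.0]])
n, m = V.shape

# Variables: (y_1, ..., y_m, z).  Maximize z  <=>  minimize -z.
c = np.zeros(m + 1)
c[-1] = -1.0
A_eq = np.hstack([V, np.zeros((n, 1))])          # V y = 0
b_eq = np.zeros(n)
A_ub = np.hstack([-np.eye(m), np.ones((m, 1))])  # y_i >= z  <=>  -y_i + z <= 0
b_ub = np.zeros(m)
bounds = [(None, None)] * m + [(None, 1.0)]      # y free, z capped at 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("max z =", -res.fun)   # here 1.0 > 0: V y = 0 has a solution y > 0
```

With this toy $V$, $y=(2,2,1)$ is a strictly positive solution of $Vy=0$, so the LP reports $z>0$; by Stiemke's theorem the original system then has no nonzero solution $\mathbf{x}$.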
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Prove that a particular matrix is full column rank Let $A$ be an $m \times n$ matrix and $\operatorname{rank}(A) =r$. Let $P$ be an $m \times (m-r)$ matrix, such that $\mathbb{C}^m= \mathcal{R}(A) \oplus \mathcal{R}(P)$, then $P$ is of full column rank. $\mathcal{R}(A)$ : Range of $A$ in $\mathbb{C}^m$ I'm trying to prove that with such conditions on $P$, $P$ is of full column rank.
$$\begin{cases}\operatorname{rank}P=\dim\mathcal R(P) \\m=\dim\mathcal R(A)+\dim\mathcal R(P)=r+\dim\mathcal R(P)\\ \operatorname{rank}P\le \min(m,m-r)=m-r\end{cases}$$ What does this yield, in light of the definition of "being full-rank"?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $y = W^T x$, what is $\frac{\partial y}{\partial W}$? I would like to derive the derivative of a vector by a matrix, i.e. $y = W^Tx$, where $W$ is a matrix and $x,y$ are vectors. What is $\frac{\partial y}{\partial W} = \frac{\partial W^T x}{\partial W}$? Follow-up: Define another function $z = a^T y = a^T W^Tx$, so that $z$ is a scalar. We know that $\frac{\partial z}{\partial W}$ is a matrix with the same shape as $W$. At the same time, by the chain rule \begin{equation} \frac{\partial z}{\partial W} = \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial W} \end{equation} where $\frac{\partial z}{\partial y}$ is a $1\times N$ vector ($N$ is the dimension of $y$). So it seems the dimensions of the matrices on the left and right hand sides don't match. Any explanations?
Consider the full matrix version of this problem, written in terms of the Frobenius (:) Inner Product $$\eqalign{ Y & = W^TX \cr z &= A:Y = XA^T:W \cr }$$ The gradient of $z$ can be evaluated directly $$\eqalign{ dz &= XA^T:dW \cr \frac{\partial z}{\partial W} &= XA^T \cr\cr }$$ It can also be evaluated by the chain rule $$\eqalign{ dz &= A:dY \cr \frac{\partial z}{\partial Y} &= A \cr\cr dY &= dW^TX &= {\mathbb E}X^T:{\mathbb B}:dW \cr \frac{\partial Y}{\partial W} &= {\mathbb E}X^T:{\mathbb B} \cr }$$ Taking the inner product of these 2 gradients (note that the second one is a 4th order tensor) yields $$\eqalign{ \frac{\partial z}{\partial W} &= \frac{\partial z}{\partial Y} : \frac{\partial Y}{\partial W} \cr &= A:{\mathbb E}X^T:{\mathbb B} \cr &= AX^T:{\mathbb B} \cr &= XA^T \cr }$$ The 4th order isotropic tensors have components $$\eqalign{ {\mathbb E}_{ijkl} &= \delta_{ik}\,\delta_{jl} \cr {\mathbb B}_{ijkl} &= \delta_{il}\,\delta_{jk} \cr }$$ and have the interesting properties when multiplied by a matrix $M$, using the Frobenius product $$\eqalign{ {\mathbb E}:M = M:{\mathbb E} &= M \cr {\mathbb B}:M = M:{\mathbb B} &= M^T \cr }$$ Having derived this for the full matrix case, the result also holds when the matrices $(Y, X)$ are replaced by the vectors $(y, x)$.
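A quick numerical sanity check of the final identity $\frac{\partial z}{\partial W}=XA^T$ can't hurt; here is a sketch with made-up shapes, comparing a central finite-difference gradient against the formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 2                        # arbitrary test shapes
W = rng.standard_normal((n, m))
X = rng.standard_normal((n, p))
A = rng.standard_normal((m, p))

def z(W):
    return np.sum(A * (W.T @ X))         # z = A : (W^T X), Frobenius product

eps = 1e-6
G = np.zeros_like(W)
for i in range(n):
    for j in range(m):
        dW = np.zeros_like(W)
        dW[i, j] = eps
        G[i, j] = (z(W + dW) - z(W - dW)) / (2 * eps)  # central difference

print(np.allclose(G, X @ A.T, atol=1e-5))   # True: dz/dW = X A^T
```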
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why is $P = I_N - \vec{1}\vec{1}^T/N$ a projection matrix, and $P^2=P$? Why is $P = I_N - \vec{1}\vec{1}^T/N$ a projection matrix, and $P^2=P$? For example, for N=3 : $$P = I_3 - \vec{1}\vec{1}^T/3 = \begin{pmatrix} 0.67 & 0.33 & 0.33 \\ 0.33 & 0.67 & 0.33 \\ 0.33 & 0.33 & 0.67 \\ \end{pmatrix} $$ and, $$P^2 = \begin{pmatrix} 0.44 & 0.11 & 0.11 \\ 0.11 & 0.44 & 0.11 \\ 0.11 & 0.11 & 0.44 \\ \end{pmatrix} $$ But the text I am reading says P is supposed to be a projection matrix, for which $P^2 = P$ ?
Your computation of the matrix is wrong. $$ P = I_3 - \frac{1}{3}\vec{1}\vec{1}^T = \begin{pmatrix} 0.67 & -0.33 & -0.33 \\ -0.33 & 0.67 & -0.33 \\ -0.33 & -0.33 & 0.67 \\ \end{pmatrix} $$ Note the negative signs. This is indeed a projection; it sends $(1,1,1)$ to $(0,0,0)$ and sends $(-1, 1, 0)$ and $(-1, 0, 1)$ to themselves, so in the basis consisting of those three vectors, it's projection onto the second two coordinates.
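A short numpy check of the corrected matrix, for anyone who wants to see the idempotence concretely:

```python
import numpy as np

N = 3
P = np.eye(N) - np.ones((N, N)) / N   # I_N - (1/N) * 1 1^T
print(np.allclose(P @ P, P))          # True: P is idempotent
print(P @ np.ones(N))                 # ~[0 0 0]: P kills the all-ones vector
```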
{ "language": "en", "url": "https://math.stackexchange.com/questions/1880010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Showing there's no maximum in the given interval Let $f$ be the function $f(x) = x^2$, $x \in [\frac{1}{2}, 3).$ If we want to show there's no maximum on the given interval, is this the right way to do it: Assume there's a maximum $y \in (8, 9)$, attained at some $x \in [\frac{1}{2}, 3)$. Then show that for every such $y \in (8, 9)$ there's some $y' \in (8, 9)$ with $y' > y$, attained at some $x' \in [\frac{1}{2}, 3)$, so that we get a contradiction. Does that make sense?
Note that $x^2$ is increasing on this interval; thus its supremum is at the right end of the interval, and it isn't attained because the right endpoint is excluded from the domain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1880147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
If $G\simeq K$ and $H\simeq M$ then is it true that $G/ H \simeq K / M$? Let $G$ and $K$ be groups. Let $H$ be a normal subgroup of $G$ and $M$ be a normal subgroup of $K$ such that $H\simeq M$. Question: is $ G/H \simeq K / M$? I am fairly certain that this is true if the groups are finite. For example, if the groups are cyclic, then the quotients are cyclic, and by the orders of the groups the quotients would have to be isomorphic. But what happens when the groups are not finite? From Does $\displaystyle \frac{G}{H}$ $\simeq$ $\displaystyle \frac{G}{K}$ $\Rightarrow$ $H$ $\simeq$ $K$? I see that the other direction is not true.
$$\frac{\mathbb Z}{2\mathbb Z}\not\cong \frac{\mathbb Z }{3\mathbb Z},$$ while $\mathbb Z\cong 2\mathbb Z \cong 3\mathbb Z$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1880247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Process of finding intersecting points between two functions I have to find the intersecting points of these two functions $$f(x)=e^x-5x+7$$ and $$g(x)=2x^2+16x+2$$ I know how to do this with two quadratic equations, by putting the two functions equal to each other, but $f(x)$ is confusing me, because I don't know what to do with the $e^x$.
I don't think you can get a "nice" number by solving this analytically. You may need to use root-finding methods such as Newton's method (or just use the "intersect" feature on your graphing calculator).
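For concreteness, here is a minimal Newton iteration on $h(x)=f(x)-g(x)=e^x-2x^2-21x+5$. The starting guesses are ad hoc, picked near the sign changes of $h$ (it has three real roots, near $-10.7$, $0.3$ and $5.0$).

```python
import math

h  = lambda x: math.exp(x) - 2*x**2 - 21*x + 5   # f(x) - g(x)
dh = lambda x: math.exp(x) - 4*x - 21            # derivative of h

for x in (-10.6, 0.5, 5.0):          # guesses near each sign change of h
    for _ in range(50):
        x = x - h(x) / dh(x)         # Newton step
    print(round(x, 6), "residual:", h(x))
```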
{ "language": "en", "url": "https://math.stackexchange.com/questions/1880349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Under which conditions there exists a solution in linear equations I am having some trouble with an exercise from "Finite Dimensional Vector Spaces" by Halmos. I am not sure if my proof is correct or if my answer is what the author asked for. Thanks for any help! Suppose that $m < n$ and that $y_{1}, \dots, y_{m}$ are linear functionals on an $n$-dimensional vector space $\mathbb{V}$. Under what conditions on the scalars $\alpha_{1}, \dots, \alpha_{m}$ is it true that there exists a vector $x$ in $\mathbb{V}$ such that $y_{j}(x) = \alpha_{j}$ for $j = 1, \dots, m$? What does this result say about the solutions of linear equations? We assume without loss of generality that at least two of the functionals $y_{1}, \dots, y_{m}$ are linearly dependent. More precisely, there exist $k, l \leq m$ with $k \neq l$ and $c \in \mathbb{K}$ such that $y_{k}(x_{0}) = cy_{l}(x_{0})$. Now we can write $y_{k}(x_{0}) - cy_{l}(x_{0}) = \alpha_{k} - c\alpha_{l} = 0$. From that we can state that such a vector $x$ in $\mathbb{V}$ exists under the condition that there exists a $c \in \mathbb{K}$ such that $\alpha_{k} = c\alpha_{l}$ whenever $y_{k}$ and $y_{l}$ are linearly dependent.
There is a loss of generality in your assumption: for example, the sum of all the functionals may vanish while pairwise they are independent. You need to find the kernel of the map $\Lambda: c\in {\Bbb C}^m \mapsto \sum_j c_j y_j$. Then clearly if $c\in \ker \Lambda$, a solution to $y_j(x)=\alpha_j$ must verify: $$ 0=\sum_j c_j y_j(x)= \sum_j c_j \alpha_j$$ So $\alpha$ should be orthogonal to the kernel of $\Lambda$. This happens also to be a sufficient condition. If $\Lambda^* : x\in V \mapsto (y_1(x),\ldots, y_m(x)) \in {\Bbb C}^m$ denotes the dual map to $\Lambda$ then a theorem from (finite dimensional) linear analysis states that $$ \mbox{Im } \Lambda^* = (\ker \Lambda)^\perp$$ and this translates into the above-mentioned condition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1880458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
LU Decomposition vs. Cholesky Decomposition What is the difference between LU Decomposition and Cholesky Decomposition when using these methods to solve linear equation systems? Could you explain the difference with a simple example? Also, could you explain the differences between these decomposition methods regarding:

*inverse of a matrix

*forward and backward substitution

*pivoting
Both LU and Cholesky decomposition are matrix factorization methods used on non-singular (invertible) matrices. The basic difference between the two methods: Cholesky applies only to symmetric positive definite matrices ($A = A^T$ with positive eigenvalues), whereas LU decomposition can be used for any invertible square matrix. For example, take the following system of three equations in three unknowns: 2x + y + 3z = 4, 2x - 2y - z = 1, -2x + 4y + z = 1. The coefficient matrix here is square but not symmetric, so Cholesky does not apply and we use LU to factorize it. It is basically a two-step process: first, row-reduce until everything below the main diagonal is zero; this gives the upper triangular matrix U. Save the multipliers used at each step: they fill in the lower triangular matrix L, which gets 1's on the main diagonal and 0's above it. Using L and U we can then solve the system by forward and backward substitution; the values of x, y and z are 3/2, 1, 0. Simple Cholesky example: the system 2x + y = 2, x + 2y = 4, whose coefficient matrix is symmetric positive definite. This method uses A = LL^T: take a 2x2 lower triangular matrix with unknown entries, multiply it by its transpose, match the resulting entries against the coefficient matrix A, and solve for the unknowns; this factors A as LL^T.
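To make the comparison concrete, here is a short sketch using scipy on the two example systems above (the matrices are typed in directly from the equations):

```python
import numpy as np
from scipy.linalg import lu, cholesky, solve_triangular

# General (square, invertible, non-symmetric) system: LU with pivoting
A = np.array([[2., 1., 3.], [2., -2., -1.], [-2., 4., 1.]])
b = np.array([4., 1., 1.])
P, L, U = lu(A)                                # A = P L U
y = solve_triangular(L, P.T @ b, lower=True)   # forward substitution
x = solve_triangular(U, y)                     # backward substitution
print(x)                                       # [1.5 1.  0. ]

# Symmetric positive definite system: Cholesky, A = L L^T
S = np.array([[2., 1.], [1., 2.]])
c = np.array([2., 4.])
Lc = cholesky(S, lower=True)
w = solve_triangular(Lc.T, solve_triangular(Lc, c, lower=True))
print(w)                                       # [0. 2.]
```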
{ "language": "en", "url": "https://math.stackexchange.com/questions/1880573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 2, "answer_id": 1 }
Prove "Two parallelograms on the same base and in the same parallels, are equal." I understand Euclid's way of proving this. But, the book also says that I can prove this by decomposing one parallelogram into pieces, and then forming another parallelogram by combining those pieces together. I was thinking of dividing one parallelogram into infinitely small rectangles, and then combine them again in the contour of another parallelogram, like Riemann sum. But I am also assuming that this is not what author wants because calculus is not yet covered in the book. By using basic properties of parallelograms, how can you prove this postulate?
Here's one way you can do it: you can use the "easier case" in which $e$ is between $b$ and $c$ and $f$ is to the right of $c$ repeatedly. In that case you can simply cut off a triangle from $abcd$ and then reattach it with a translation to get $aefd$. But once you've done this, you can do it again: if you have points $g$ and $h$ such that $g$ is between $e$ and $f$ and $h$ is to the right of $f$ with $gh=ef$, then repeating the same argument gives that $aefd$ and $aghd$ have the same area. And you can repeat this over and over. In particular, if you keep repeating this moving the top edge of the parallelogram to the right by a distance at least $bc/2$ each time, you can get it as far to the right as you want in only finitely many steps. Thus you can eventually get that your original parallelogram has the same area as any other parallelogram with base $ad$ and whose opposite edge is on the line $bc$. In each step of this construction, you cut the parallelogram into two pieces and then rearrange them. So the entire process can be done in one step by cutting the original parallelogram along all the cuts you would eventually end up making, and then rearranging them in the final form. It is complicated to see what this looks like explicitly (and the number of pieces depends on how many steps the construction takes), but it does not involve any subtraction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1881739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is travelling salesman problem with integer weight NP-hard? I wonder whether the travelling salesman problem remains NP-hard with the additional constraint that the edge weights are integers.
The travelling salesman problem already has integer edge weights! For example, in Garey & Johnson, Computers and Intractability, the problem is defined as follows: TRAVELLING SALESMAN INSTANCE: A finite set $C=\{c_1,c_2,\ldots,c_m\}$ of "cities", a "distance" $d(c_i,c_j)\in Z^+$ for each pair of cities $c_i,c_j\in C$, and a bound $B\in Z^+$ (where $Z^+$ denotes the positive integers). QUESTION: Is there a "tour" of all the cities in $C$ having total length no more than $B$? In order for a problem to be in NP, it must be possible to verify a solution to the problem in time that's polynomial in the size of the problem description. For integer distances, it's obvious that this can be done. For fractional distances, it's not obvious: if you try to convert them all to a common denominator, how big might the denominator get? I think it can be done (the common denominator has size that's polynomial in the size of the problem description), but it's an awkward complication and it's easy to see why the problem is usually defined with integer distances.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1881827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence/divergence of $\int_{0}^{\infty} \frac{\sqrt[3]x}{1+x}dx$ $$\int_{0}^{\infty} \frac{\sqrt[3]x}{1+x}dx$$ I have read an example in the book where they did the following: $$\frac{\sqrt[3]x}{1+x^2}<\frac{\sqrt[3]x}{x}=\frac{1}{x^{\frac{2}{3}}}$$ and we know that the integral of $\frac{1}{x^{\alpha}}$ converges for $\alpha>1$. So $$\int_{0}^{\infty} \frac{\sqrt[3]x}{1+x}dx$$ converges. But doesn't this hold just for $\int_{c}^{\infty}\frac{1}{x^{\alpha}}$ where $c>0$ and $\alpha>1$? Because $\int_{0}^{\infty}\frac{1}{x^2}$ diverges?
Your function $\frac{\sqrt[3]x}{1+x}\sim \frac{\sqrt[3]x}{x}=\frac{1}{x^{2/3}}$ as $x\rightarrow +\infty$. We know that the integral $\int_{c}^{\infty}\frac{1}{x^{\alpha}}$ converges for $\alpha>1$ (with $c>0$). Since $2/3 < 1$, your integral diverges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Find the number of integer solutions of $|x|+|y| \le 10$ Find the solutions of $|x|+|y| \le 10$, where $x$ and $y$ are integers. My solution: $$|x|+|y|+z=10$$ Now the number of solutions if there were no absolute values is: $$\binom{13-1}{10}=\frac{11\cdot 12}{2}=66$$ Now subtract the ones that contain a $0$, multiply the others by $4$, and multiply the ones that have exactly one zero by $2$. I get $221$. Am I right?
Pick's Theorem says that the area of a polygon whose vertices have integer coordinates is given by $$A=I+{B\over2}-1$$ where $I$ is the number of Interior points with integer coordinates and $B$ is the number of Boundary points with integer coordinates. For the given problem the polygon is a square with diagonals of length $20$, so $A=200$. The bounday points satisfy $|x|+|y|=10$, so it's easy to see that $B=40$, hence $I=200-{40\over2}+1=181$. The number we want is $$I+B=181+40=221$$
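The count is also easy to confirm by brute force:

```python
# Count lattice points with |x| + |y| <= 10
count = sum(1 for x in range(-10, 11) for y in range(-10, 11)
            if abs(x) + abs(y) <= 10)
print(count)  # 221
```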
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is $S_3 \oplus \Bbb Z_2$ isomorphic to $A_4$ or to $D_6$? I know $S_3 \oplus \Bbb Z_2$ is isomorphic either to $A_4$ or to $D_6$, where $S_3$ is the symmetric group of degree $3$, $A_4$ is the alternating group of degree $4$, $D_6$ is the dihedral group of order $12$, and $\oplus$ is the external direct product. Without writing out the tables for each group and comparing, is there an easier way to show which one of the two groups it is? I've tried comparing a few elements but haven't made any progress.
One way to easily see the isomorphism is to note that we may identify $ C_6 $ in $ S_3 \times C_2 $ as the normal subgroup $ N = A_3 \times C_2 $, and if we denote $ H = \langle ((12), e) \rangle \cong C_2 $ and let $ ((12), e) = h $ then $ NH = S_3 \times C_2 $ and $ N \cap H $ is trivial. It is easily checked that conjugation by $ h $ induces the automorphism $ x \to x^{-1} $ on $ N $, therefore $ S_3 \times C_2 = N \rtimes_\varphi H = C_6 \rtimes_\varphi C_2 $ where $ \varphi : C_2 \to \textrm{Aut}(C_6) $ is given by $ g \to (x \to x^{-1}) $ where $ g $ is the generator of $ C_2 $. This semidirect product is clearly the group $ D_6 $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Minkowski difference of two convex sets is convex? Hi, my question is quite straightforward: if we have two disjoint compact convex sets $A$ and $B$, is their Minkowski difference $A-B$ then convex again? Thanks!
Each point in $A-B$ is of the form $a-b$, where $a\in A$ and $b\in B$. Letting $a-b$ and $a^{\prime}-b^{\prime}$ be two points in $A-B$, we note that for any $\theta\in[0,1]$, $$ \theta\left(a-b\right)+\left(1-\theta\right)\left(a^{\prime}-b^{\prime}\right)=\left[\theta a+\left(1-\theta\right)a^{\prime}\right]-\left[\theta b+\left(1-\theta\right)b^{\prime}\right]. $$ What does this imply?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Help with the limit of an integral I am trying to evaluate this limit $$\lim \limits_{x \to \infty} \frac {1}{\ln x} \int_{0}^{x^2} \frac{t^5-t^2+8}{2t^6+t^2+4} dt=? $$ Any help will be appreciated.
Hint: Compare with the integral of just $t^5/(2 t^6)$ as $t$ goes to infinity. Show that the rest really doesn't matter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Sample mean: dependence I have a question that is possibly more about language than math, but still it concerns me a lot. I understand that this question may irritate many (because it's stupid, and apparently because I am stupid too), but still I ask not to hate me too much. We all remember the definition of sample mean: $ {\displaystyle A={\frac {1}{n}}\sum _{i=1}^{n}a_{i}.} $ It is sum divided by size. Okay. Note that we're not talking about population mean, we're talking about sample mean. So, today a colleague of mine uttered the following sentence: Sample mean does not depend on sample size. Now I'll try to explain my hesitation. As I see that there's some $ n $ that is not constant in this formula, I want to say that sample mean depends on $ n $. But as this $ n $ is fully defined by the sample itself, there is, seemingly, a sense in which sample mean does not depend on $ n $. So the question is: is my colleague's statement true? And what's that important thing that I don't get about sample mean that makes me so confused about such a basic thing?
I think that, without a more specific qualification of that statement, it is difficult to say what was really meant. For example, one interpretation could be that, for a simple random sample, the sample mean is an unbiased estimator of the population mean, and this property is independent of the sample size. But this interpretation goes well beyond what was literally said.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\operatorname{Hom}_k(k,V)$ is a vector space? Is it true that a vector space is just the set of maps from the underlying field to the space itself? I.e. if $V$ is a vector space over the field $k$ then $$ V\cong \operatorname{Hom}_k(k,V) $$ If so, then this would make an intuitive understanding of the dual space $V^*$ somewhat trivial since $$ V^{**}\cong V\cong \operatorname{Hom}_k(k,V)\implies V^*=\operatorname{Hom}_k(V,k) $$ If true, an explanation of why $V\cong \operatorname{Hom}_k(k,V)$ with a simple example or two would provide a lot of clarity for me since I could easily grasp the dual vector space idea from that point. Edit: I actually had to read two proposed answers a couple times for the idea to sink in, but I could only pick one answer.
This works even for infinite-dimensional vector spaces (or for that matter for general modules over unital rings): The map $$ f \in \operatorname{Hom}_k(k,V) \mapsto f(1) \in V $$ is always a vector space isomorphism. You don't need duals for that. This is clearly injective and a homomorphism; to see that it is surjective, note that $v\in V$ corresponds to the map $t\in k\mapsto t\cdot v\in V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Solve $\int_{0}^{1}\frac{1}{1+x^6} dx$ Let $$x^3 = \tan y\ \ \text{ so that }\ x^2 = \tan^{2/3}y$$ $$3x^2dx = \sec^2(y)dy$$ $$\int_{0}^{1}\frac{1}{1+x^6}dx = \int_{0}^{\pi/4}\frac{1}{1+\tan^2y}\cdot \frac{\sec^2y}{3\tan^{2/3}y}dy = \frac{1}{3}\int_{0}^{\pi/4} \cot^{2/3}y\ dy$$ How should I proceed after this? EDITED: Corrected the final integral and the limits (the lower limit is $\tan^{-1}(0)=0$, and the upper limit is $\pi/4$ rather than $45$)
Continuing where MK12 left off, we proceed as follows: $$\frac{2-x^2}{x^4-x^2+1} = \frac{1}{2} \times \frac{4-2x^2}{x^4-x^2+1} = \frac{1}{2} \times \frac{1+x^2 + 3(1-x^2)}{x^4-x^2+1}$$ Split the integral into two parts accordingly. For the $1+x^2$ part, make the substitution $v = x-\frac{1}{x}$ (so $dv=(1+\frac{1}{x^2})dx$), and for the $1-x^2$ part use $u=x+\frac{1}{x}$ (so $du=(1-\frac{1}{x^2})dx$). Then the rest should be fairly trivial, provided you are meticulously careful with the limits of integration.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
Proving that $\log_b(r^s) = s\log_b(r)$ The Question If $b,r,s \in \mathbb{R^+}$ prove that $\log_b(r^s) = s\log_b(r)$ My Work 1) $\log_b(r^s)$ 2) $s$ can be expressed as the sum of an integer part $n$ and a fractional part $m$: $s = m + n$ 3) $\log_b(r^{n+m})$ 4) $\log_b(r^nr^m)$ 5) $\log_b(r^n) + \log_b(r^m)$ 6) $\log_b(r\cdot r\cdot r \cdots r) + \log_b(r^m)$ 7) $\log_b(r) + \log_b(r) + \log_b(r) + \cdots + \log_b(r) + \log_b(r^m)$ 8) $n\log_b(r) + \log_b(r^m)$ Where I am Having Trouble I'm having trouble getting the $m$ in front of the second summand, which I feel is necessary for the theorem. How can I finish off this proof? I was given this rule as a secondary school student with no proof and would like to have it explained.
Take $\log_b (r^s):=F(s)$ ($r$ an arbitrary constant) and note that $F(0)=\log_b (1)=0$ Show that: $F(s)=F(s-1)+\log_b r$ Hence $\frac{F(s)-F(s-1)}{s-(s-1)}=\log_b (r)$ and $F$ is of slope $\log_b (r)$: $$F(s)=(\log_b r)s+c$$ But $F(0)=0$ gives $c=0$. If we proved it for all positive arbitrary constants $r>0$, then the equation must hold for all $r>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Proving a ring has only infinite dimensional modules. Let $R$ be the ring $\mathbb{C}\langle x,y\rangle/(xy-yx-1)$, a quotient of free associative algebra on two generators. (a) Show every nonzero $R$-module has infinite dimension as a complex vector space. (b) Let $M$ be an $R$-module with a nonzero element $z$ such that $xz=0$. Show that $z, yz, y^2z, \ldots$ are $\mathbb{C}$-linearly independent in $M$. I would love some good references to read about this stuff, because I couldn't find much. I'm probably not looking in the right places to be honest (not even sure what to tag this as). My gut says something along the lines of modules over this ring have to somehow contain the ring, and the ring, although quotiented, still has an infinite basis over $\mathbb{C}$ as there does not appear to be a relation between $x, x^2, x^3, \ldots$. But again, I might be way off base here.
(a) $A$ is the ring $C\langle u,v\rangle/(uv-vu-1)$, which means that $[u,v]=1$. Consider some nonzero $A$-module $M$. That is, $M$ is an abelian group under addition and we also have an operation $A\times M \rightarrow M$ satisfying all the module properties. Note that $C\subset A$, and thus we have an operation $C \times M\rightarrow M$. Thus $M$ can be thought of as a $C$-module, i.e. a complex vector space. From linear algebra we know that a vector space has a basis $B$. For the sake of contradiction assume that the basis is finite, $|B|=n >0$. Consider the action on the vector space of the elements $u, v \in A$. By the module properties these act as linear operators, and thus we can represent them by finite matrices $X, Y \in M_n(C)$. However, these matrices satisfy $XY-YX=I_n$, and computing the trace (which is defined in finite dimension) of both sides gives $0=Tr(XY)-Tr(YX)=Tr(XY-YX) = Tr(I_n)=n$, a contradiction. Thus every nonzero module is infinite dimensional, with no trace defined for the operators $u,v$. (b) Let $y\in M$ be a nonzero element with $uy=0$. We want to show that the elements $y, vy, v^2y, \ldots$ are $C$-linearly independent. This is equivalent to showing, for all $n>0$, that $c_0y+c_1vy+c_2v^2y+\ldots +c_nv^ny=0$ implies that all $c_i=0$, where the $c_i\in C$. To see this we consider operating on the sum by $u$. First we show that $uv^ky=kv^{k-1}y$ for all positive integers $k$. The base case is simply $uvy=(1+vu)y=y+vuy=y+0=y$. Assume the hypothesis is true up to $n-1$, i.e. $uv^{n-1}y=(n-1)v^{n-2}y$. The inductive step is simply $uv^ny=uvv^{n-1}y=(1+vu)v^{n-1}y=v^{n-1}y+vuv^{n-1}y = v^{n-1}y+(n-1)vv^{n-2}y= nv^{n-1}y.$ Thus we can think of $u$ as the differential operator in the variable $v$. Applying $u$ repeatedly ($n$ times) to $c_0y+c_1vy+c_2v^2y+\ldots +c_nv^ny=0$ gives $n!c_ny = 0$, which implies $c_n=0$ as neither $n!$ nor $y$ is zero. Continuing in this way we get that $c_{n-1}=0, \ldots, c_1=0, c_0=0$. Thus all the $c_i=0$ and so the set $\{v^ky\}$ is $C$-linearly independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1882944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Adapting the Simplex method to use the distance function as the target. Has anyone adapted the simplex method to use the distance from a point as the minimization criterion? E.g. given a target vector $\mathbf{t}$, matrix $A$ and limits $\mathbf{b}$: Minimize $\sqrt{\sum _{i=1}^m \left(\mathbf{x}_i-\mathbf{t}_i\right){}^2}$ where $A\mathbf{x}\leq \mathbf{b}$ and $x_i\geq 0$ for $1\leq i\leq m$. It seems to me there should be an efficient variant of the simplex method for a problem like this. In a linear minimization problem, the solution is either (1) one of the vertices of the feasible region or (2) an entire "edge" of the feasible region. (Where an "edge" may have any of $1, \ldots, m-1$ dimensions.) With a distance function, the solution is either (1) one of the vertices of the polytope or (2) a point somewhere on an edge, since for each edge there is one unique point that is closest to the target point $\mathbf{t}$. Is it possible to pivot towards these closest points as efficiently as you can pivot towards a vertex? Has anyone worked out the details on this?
See for instance: Philip Wolfe, The Simplex Method for Quadratic Programming, Econometrica, Vol. 27, No. 3, (Jul., 1959), pp. 382-398 This is an extension to the Simplex method for a standard Quadratic Programming (QP) problem, so a slightly more general problem than you are stating.
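If a numerical answer is enough (rather than a simplex-style algorithm), the problem is a small QP and any constrained solver handles it. A sketch with scipy, on made-up data $A$, $b$, $t$; minimizing the squared distance is smooth and has the same minimizer as the distance itself:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 2.0], [3.0, 1.0]])   # hypothetical constraint data
b = np.array([4.0, 6.0])
t = np.array([3.0, 3.0])                 # target point

res = minimize(
    lambda x: np.sum((x - t) ** 2),      # squared distance to t
    x0=np.zeros(2),
    constraints=[{'type': 'ineq', 'fun': lambda x: b - A @ x}],  # A x <= b
    bounds=[(0, None), (0, None)],       # x >= 0
    method='SLSQP')
print(res.x, np.linalg.norm(res.x - t))
```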
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about Symmetric groups Given permutations $$ \sigma= \begin{pmatrix} \text{1 2 3 4} \\ \text{2 3 4 1}\\ \end{pmatrix} \qquad\tau= \begin{pmatrix} \text{1 2 3 4} \\ \text{4 3 2 1}\\ \end{pmatrix}, $$ show that the subgroup $D_8:=\langle\tau,\sigma\rangle$ of $\operatorname{Sym}(4)$ has order 8 and write down its elements With the identity element, I got 9. The elements I got are $\sigma,\sigma^2\sigma^3,\tau,\tau\sigma,\sigma\tau,\tau\sigma^2,\tau\sigma^3,1$ It's important to note that $\tau\sigma^2=\sigma^2\tau$ and $\tau\sigma=\sigma^3\tau$, so I just chose one representative.
Note $\sigma$ and $\sigma^2\sigma^3$ are the same since $\sigma^4$ is the identity permutation. Anyway, if you label the points of a unit square $1,2,3,4$ counterclockwise, then $\sigma$ is a right angle rotation and $\tau$ is a reflection. Evidently $\sigma\tau=\tau\sigma^{-1}$ (this rule defines all dihedral groups $D_{2n}$, and even the orthogonal group $\mathrm{O}(2)$). The elements should be $\sigma^0,\sigma^1,\sigma^2,\sigma^3$ (rotations) and $\sigma^0\tau,\sigma^1\tau,\sigma^2\tau,\sigma^3\tau$ (reflections). The latter four can be relabelled $\tau\sigma^0,\tau\sigma^1,\tau\sigma^2,\tau\sigma^3$ if that floats your boat.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Geodesics on a sphere lie on a plane I have a question concerning closed geodesics on a sphere. I know that non-constant closed geodesics on a sphere are great circles. Hence if we choose one, its image lies on a plane. This is easily seen if we know that closed geodesics are great circles. But can one show this without knowing that? The geodesic equation is given by $\ddot\gamma \bot T_\gamma S^2$, which also yields $$ \ddot \gamma = -|\dot \gamma|^2 \gamma. $$ From the above equations we find that $\gamma \cdot \dot\gamma = 0$ and $\ddot \gamma \cdot \dot\gamma =0$. If we can show that $\dot\gamma \times \ddot\gamma =0$ then we're done (right?), as the torsion would be $0$. Could you help me out with this a little? Maybe my thoughts are also completely wrong. If you have another idea to show that a closed geodesic on $S^2$ lies on a plane, with the constraint that one can't use that it is a great circle, I would be very happy. Thanks in advance!
It suffices to show that $\dot{\gamma}$ is normal to a constant vector $n$. Then it has to move in a plane orthogonal to $n$. The vector $n=\gamma\times \dot{\gamma}$ is normal to $\dot{\gamma}$ and $\dot{n}=\gamma\times \ddot{\gamma}= 0$ by the geodesic equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
number of elements in subset sum problem I have the set of numbers $i=1,...,100$. How many combinations of length $8$ using numbers from this set sum to $100$? So, for example, these would be valid solutions: $(1, 2, 3, 4, 5, 6, 7, 72)$, $(10,11,22,1,5,8,9,34)$. Also, the order is important; that means $(1, 2, 3, 4, 5, 6, 7, 72) \neq (2,1, 3, 4, 5, 6, 7, 72)$. Also, zeros can be added in any amount, e.g. $(100,0,0,0,0,0,0,0)$. Numbers can also be used more than once, e.g. $(10,10,10,10,10,10,10,30)$.
If $(100,0,0,0,0,0,0,0) \neq (0,100,0,0,0,0,0,0)$, for instance, you're looking for the number of solutions $(x_1, x_2, ..., x_8)$ to the equation $x_1 + ... + x_8 = 100$, where each $x_i$ is a non-negative integer. The solution is given by a stars-and-bars argument. In your particular case, the answer is $$ \binom{107}{7}. $$ Otherwise, if the zeroes can only be added at the end, you're looking for the number of solutions to $x_1 + ... + x_k = 100$, where $k \le 8$ and each $x_i$ is a positive integer. A similar stars-and-bars argument gives $$ \sum_{k=1}^8 \binom{99}{k-1}. $$
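Both formulas are easy to sanity-check in a few lines; the brute-force comparison below uses a small case (length 3, target 5) so the enumeration stays cheap:

```python
from math import comb
from itertools import product

print(comb(107, 7))   # ordered solutions of x1+...+x8 = 100, zeros anywhere

# Small-case check of stars and bars: x1+x2+x3 = 5, nonnegative
n_formula = comb(5 + 3 - 1, 3 - 1)
n_brute = sum(1 for xs in product(range(6), repeat=3) if sum(xs) == 5)
print(n_formula, n_brute)   # both 21
```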
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Arithmetical function: How can I prove this? How can I show that the sum $\sum_{d|n} \mu(d) \log^kd$ is $0$, where $\mu(d)$ is the Möbius function? I expect that this question is solved by induction! Here $k$ is the integer power to which the $\log$ is raised.
For $k=1$ the identity should be the following $$S_1(n)=\sum_{d|n}\mu(d)\ln(d)=\begin{cases} 0& \mbox{if $n$ is not a power of a prime},\\ -\ln(p)&\mbox{if $n$ is a power of a prime $p$.} \end{cases}$$ Let $n=p_1^{a_1}\cdots p_r^{a_r}$ with $r>0$ then $$S_1(n)=-\sum_{1\leq i_1\leq r}\ln(p_{i_1})+\sum_{1\leq i_1<i_2\leq r}(\ln(p_{i_1})+\ln(p_{i_2}))-\sum_{1\leq i_1<i_2<i_3\leq r}(\ln(p_{i_1})+\ln(p_{i_2})+\ln(p_{i_3}))+\cdots+ (-1)^r(\ln(p_{i_1})+\dots+\ln(p_{i_r})).$$ Now we count the terms $\ln(p_i)$ in the above sum $$-1+(r-1)-\frac{1}{2}(r-1)(r-2)+\dots +(-1)^r=-\sum_{j=0}^{r-1} (-1)^{j}\binom{r-1}{j}\\=-(1-1)^{r-1}=\begin{cases} 0& \mbox{if $r>1$},\\ -1&\mbox{if $r=1$.} \end{cases}$$ and we are done.
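The $k=1$ case is easy to test numerically, e.g. with sympy (a quick sketch):

```python
from sympy import divisors, mobius, log

def S1(n):
    return sum(mobius(d) * log(d) for d in divisors(n))

print(S1(12).evalf())   # 12 = 2^2 * 3 has two prime factors -> ~0
print(S1(8).evalf())    # 8 = 2^3 is a prime power -> -log(2) ~ -0.693
```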
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A horrid-looking integral $\int_{0}^{5} \frac{\pi(1+\frac{1}{2\sqrt{x}} )}{\sqrt{10}\sqrt{\sqrt{x}+x}}\,dx$ $$ \mathbf{\mbox{Evaluate:}}\qquad \int_{0}^{5} \frac{\pi(1+\frac{1}{2\sqrt{x}} )}{\sqrt{10}\sqrt{\sqrt{x}+x}} \,\,\mathrm{d}x $$ This is a very ugly integral, but it appears to have a very simple closed form: $$\Gamma(\frac15)\Gamma(\frac45)$$ Mathematica can evaluate this integral, but WolframAlpha doesn't even give a correct numerical answer. I have tried many techniques on this integral but have not been able to crack it at all. Any help on this integral would be greatly appreciated. Thank you!
$u=\sqrt{x}$, we have $$ \int_{0}^{5} \frac{\pi(1+\frac{1}{2\sqrt{x}} )}{\sqrt{10}\sqrt{\sqrt{x}+x}} \,\,\mathrm{d}x=\frac{\pi}{\sqrt{10}}\int_{0}^{\sqrt{5}} \frac{2u+1} {\sqrt{u+u^2}}du=\frac{2\pi}{\sqrt{10}}\sqrt{u^2+u}\Big{|}_{0}^{\sqrt{5}} $$
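A quick numerical confirmation that this elementary antiderivative agrees with the claimed closed form $\Gamma(\frac15)\Gamma(\frac45)=\pi/\sin(\pi/5)\approx 5.3448$ (quad copes fine with the integrable singularity at $0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

f = lambda x: np.pi * (1 + 1/(2*np.sqrt(x))) / (np.sqrt(10) * np.sqrt(np.sqrt(x) + x))
val, err = quad(f, 0, 5)
print(val, gamma(1/5) * gamma(4/5))   # both ~ 5.34477
```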
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
How do I find out the particular solution for my differential equation? How do I find out the particular solution for my differential equation? For example $\ddot{x}+4\dot{x}+4x=t+1+\sin t$. Can someone explain to me why the particular solution here is $x_p(t)=At+B+C\sin t+D\cos t$?
Use Laplace transform: $$x''(t)+4x'(t)+4x(t)=1+t+\sin(t)\Longleftrightarrow$$ $$\mathcal{L}_t\left[x''(t)+4x'(t)+4x(t)\right]_{(s)}=\mathcal{L}_t\left[1+t+\sin(t)\right]_{(s)}\Longleftrightarrow$$ $$\mathcal{L}_t\left[x''(t)\right]_{(s)}+\mathcal{L}_t\left[4x'(t)\right]_{(s)}+\mathcal{L}_t\left[4x(t)\right]_{(s)}=\mathcal{L}_t\left[1\right]_{(s)}+\mathcal{L}_t\left[t\right]_{(s)}+\mathcal{L}_t\left[\sin(t)\right]_{(s)}\Longleftrightarrow$$ $$\mathcal{L}_t\left[x''(t)\right]_{(s)}+4\cdot\mathcal{L}_t\left[x'(t)\right]_{(s)}+4\cdot\mathcal{L}_t\left[x(t)\right]_{(s)}=\mathcal{L}_t\left[1\right]_{(s)}+\mathcal{L}_t\left[t\right]_{(s)}+\mathcal{L}_t\left[\sin(t)\right]_{(s)}\Longleftrightarrow$$ Now, use:

*$$\mathcal{L}_t\left[1\right]_{(s)}=\frac{1}{s}$$

*$$\mathcal{L}_t\left[t\right]_{(s)}=\frac{1}{s^2}$$

*$$\mathcal{L}_t\left[\sin(t)\right]_{(s)}=\frac{1}{1+s^2}$$

*$$\mathcal{L}_t\left[x(t)\right]_{(s)}=\text{X}(s)$$

*$$\mathcal{L}_t\left[x'(t)\right]_{(s)}=s\text{X}(s)-x(0)$$

*$$\mathcal{L}_t\left[x''(t)\right]_{(s)}=s^2\text{X}(s)-sx(0)-x'(0)$$

Then: $$\left(s^2\text{X}(s)-sx(0)-x'(0)\right)+\left(4s\text{X}(s)-4x(0)\right)+\left(4\text{X}(s)\right)=\left(\frac{1}{s}\right)+\left(\frac{1}{s^2}\right)+\left(\frac{1}{1+s^2}\right)\Longleftrightarrow$$ $$\text{X}(s)=\frac{\frac{1}{s}+\frac{1}{s^2}+\frac{1}{1+s^2}+4x(0)+sx(0)+x'(0)}{(s+2)^2}$$ Now, with the inverse Laplace transform we find: $$x(t)=\frac{25t-16\cos(t)+12\sin(t)+e^{-2t}\left(4(4+25x(0))+5t(40x(0)+20x'(0)-1)\right)}{100}$$
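If you just want the result without the hand computation, sympy's dsolve produces it directly (the printed form may vary between versions):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
ode = sp.Eq(x(t).diff(t, 2) + 4*x(t).diff(t) + 4*x(t), t + 1 + sp.sin(t))
print(sp.dsolve(ode))
# x(t) = (C1 + C2*t)*exp(-2*t) + t/4 + 3*sin(t)/25 - 4*cos(t)/25,
# i.e. A = 1/4, B = 0, C = 3/25, D = -4/25 in the ansatz At + B + C sin t + D cos t
```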
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Correction factor for Hyperbolic Curve I have generated several data sets under varying experimental conditions that are plotted as hyperbolic curves. I have two experiments that were done under identical conditions, but the curve is not the same. I'll call experiment A the "ideal". The equation for this curve is: y=(435.6*S)/(0.333+S). In experiment B, I would expect the same result but instead the equation I get is: y=(390.1*S)/(0.3176+S). I'd like to generate a correction factor to shift equation B to match equation A, and then apply that correction factor to other data sets within experiment B. Is this possible? How would I go about finding the correction factor?
First, to answer your question of how you shift a function like the one you've written, think of the function as a general function $f(x)$. You can shift this function horizontally by writing a new function $f(y)$ where $y=x+a$. For your function this substitution is on the variable $S$. If $a>0$ the shift is to the left; conversely, if $a<0$ the shift is to the right. To shift up and down, write a new function $g(x)=f(x)+b$, where $b>0$ will shift up and $b<0$ will shift down. You can also apply a gain factor so that $h(x)=c\cdot f(x)$. This will generally distort your curve, so rather than shifting, it will flare open. That said, from a statistics/experimental standpoint, warping data B to fit data A gives me pause. It sounds like there was some scaling difference between experiment A and experiment B that you are trying to eliminate (e.g., someone bumped a knob and all the data are off by a similar amount). That is a calibration question, and you can fit a nonlinear calibration curve between sets A and B that essentially provides a mapping from B to A. To do that you would want to plug your S-values from B into your fit from A. This would give you what you "should" have measured based on your A data. Plot the A-model predicted values versus your B measurements and do a nonlinear curve fit on that data set. The resulting equation converts a B measurement into the A measurement that it "should" have been.
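Here is a sketch of that last recipe in Python; all the numbers are stand-ins generated from your two fitted equations (with real data you would use the measured B values instead), and the quadratic calibration family is just one plausible choice; check the printed residual to judge whether it is adequate.

```python
import numpy as np
from scipy.optimize import curve_fit

mm = lambda S, a, b: a * S / (b + S)      # the hyperbolic model from your fits

S  = np.linspace(0.1, 5, 30)              # hypothetical substrate levels
yB = mm(S, 390.1, 0.3176)                 # what experiment B measures
yA = mm(S, 435.6, 0.333)                  # what A's fit says it "should" be

# Calibration curve mapping a B reading onto the A scale
cal = lambda y, c0, c1, c2: c0 + c1*y + c2*y**2
p, _ = curve_fit(cal, yB, yA)

print(p)
print(np.max(np.abs(cal(yB, *p) - yA)))   # worst-case calibration residual
```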
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $A_1, A_2,\dots$ be a sequence of disjoint, finite subsets of $\mathbb{N}$. How can $\bigcup_{n=1}^\infty A_n$ be either finite or infinite? Let the $A_n$ be finite subsets of $\mathbb{N}$ that are not $\emptyset$, and for all $i,j$ with $i\not = j$, let $A_i$ and $A_j$ be disjoint. Then must $$\bigcup_{n=1}^\infty A_n = \mathbb{N}$$ and thus be (countably) infinite? So basically I am looking for an example of $A_1,A_2,\dots$ such that $$\bigcup_{n=1}^\infty A_n$$ is finite. Or is the only way for the infinite union to be finite to have $A_i = \emptyset$ for all $i$? Thanks. Edit: A comment has answered how $\bigcup_{n=1}^\infty A_n \not = \mathbb{N}$ is possible. I am not sure if it must be infinite, though, when $A_n \not = \emptyset$
It must be countable (i.e. in bijection with $\Bbb N$), but not necessarily $=\Bbb N$. Let $f: \Bbb N \to \bigcup_n A_n$ be given by $f(n) = \min A_n$. It is clear that $f$ is injective, so $\bigcup_n A_n$ is at least as large as $\Bbb N$. Now since the countable union of finite sets is at most countable, it follows that $\cup_n A_n$ is countable. As suggested in the comments, the sequence $(A_n)$ where $A_n := \{3n\}$ satisfies the requirement, yet the union is not $\Bbb N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Are two compact Hausdorff spaces homeomorphic if their algebras of continuous functions are isomorphic? Suppose that $X$ and $Y$ are two compact Hausdorff spaces and $F\colon C(X) → C(Y)$ is a continuous isomorphism of algebras. Can I say $X$ and $Y$ are homeomorphic? The key words always lead me to other questions. Can anyone give me some reference? Thanks a lot!
This is the so-called Gelfand-Kolmogorov theorem. It says: Let $X$ and $Y$ be compact, Hausdorff spaces. Suppose that there exists a ring isomorphism $T\colon C(X)\to C(Y)$. Then there exists a homeomorphism $h\colon Y\to X$ such that $$Tf = f\circ h\text{ for all }f\in C(X).$$ In particular, $T$ is a continuous algebra isomorphism. Actually there is a more general result due to Milgram which asserts that two compact Hausdorff spaces $X$ and $Y$ are homeomorphic if and only if there exists a multiplicative bijection between $C(X)$ and $C(Y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1883920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
When to use the modulus symbol and when not to use the modulus symbol in integration and differentiation? I have been facing this conceptual doubt for quite some time now. We know $$\frac{d}{dx}{(\sec^{-1}{x})}=\frac{1}{|x|\sqrt{x^2-1}}$$ whereas $$\frac{d}{dx}{(\csc^{-1}{x})}=\frac{-1}{|x|\sqrt{x^2-1}}$$ Now suppose I need to find the integral $$\int\frac{1}{x\sqrt{x^2-1}}dx$$ then will the answer be $\sec^{-1}{x}$ or $\csc^{-1}{x}$ in case the modulus function is not used for $x$ in the denominator? Why? Another similar doubt I have is whether $$\int{\frac{1}{x^2-a^2}} \, dx$$ equals $$\frac{1}{2a}\ln\left(\frac{x-a}{x+a}\right)+C$$ or $$\frac{1}{2a}\ln\left|\frac{x-a}{x+a}\right|+C \text{?}$$ Some books use the former formula and some use the latter. Which one is correct and why? Pardon me if you find this question too trivial. But really, I've been confused about this for the past few months!
The functions $\sec^{-1}$ and $-\csc^{-1}$ only differ by a constant on the intervals where they are both defined. The same is true for $\sin^{-1}$ and $-\cos^{-1}$. So, where they are both defined, you can choose one or the other as you please, as long as you add an additive constant. This happens because of the formula $\cos(x) = \sin(\pi/2-x)$, which says that $\sin(x)$ and $\cos(-x)$ are obtained one from the other by a horizontal translation, so their inverse functions correspond to one another up to a vertical translation. To be more precise (and to discuss the case of $\log x$ versus $\log |x|$), if the domain of the integrand function is not an interval, it is not enough to add a constant: you need to add a different constant on each connected component of the domain. Example. Many books say that $$ \int \frac{1}{x}\, dx = \log |x| + C. $$ This is not completely true. All the antiderivatives of $1/x$ are given by a family of functions with two parameters, not one: $$ F(x) = \begin{cases} \log x + C_1 & \text{if $x>0$,}\\ \log (-x) + C_2 & \text{if $x < 0$.} \end{cases} $$ It is true that in most applications you only need to find an antiderivative on one fixed interval (for example if you are solving differential equations). So, given the interval, there can be a preferred choice for the solution. If you are integrating $1/x$ for $x<0$, the solution can be written as $\log(-x)+C$. If you are integrating $\frac{1}{x\sqrt{x^2-1}}$ on $(-\infty,-1)$, the solution can be written as $\csc^{-1}(x) + C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How can I prove: $\sum_{n\leq x}\frac{\phi(n)}{n^2} = \frac{\log x}{\zeta(2)}+\frac{C}{\zeta(2)} + A + O\left(\frac{\log x}{x}\right)$ The problem is to prove that $$\sum_{n\leq x}\frac{\phi(n)}{n^2} = \frac{\log x}{\zeta(2)}+\frac{C}{\zeta(2)} + A + O\left(\frac{\log x}{x}\right)$$ where $C$ is Euler's constant and $A = \sum_{n \geq 1}\frac{\mu(n)\log n}{n^2}$ The following is what I tried: $$\sum_{n\leq x}\frac{\phi(n)}{n^2} = \sum_{n\leq x}\frac{1}{n}\cdot\frac{\phi(n)}{n} =\\ \sum_{n\leq x}\frac{1}{n}\sum_{d\mid n} \frac{\mu(d)}{d} = \sum_{q\leq x}\frac{1}{q} \sum_{d\leq x/q} \frac{\mu(d)}{d^2} =\\ \sum_{q\leq x}\frac{1}{q}\left( \frac{1}{\zeta(2)} + O\left(\frac{q}{x}\right) \right) = \frac{\log x}{\zeta(2)} + \frac{C}{\zeta(2)} + O\left(\frac{1}{x}\right) + \sum_{q\leq x}\frac{1}{q} O\left(\frac{q}{x}\right) $$ Here $\mu$ is the Möbius function and $\phi$ is the Euler totient function.
Let us implement the approach suggested by Winther in the comments, with a minor variation. From $$ \sum_{n\leq x}\varphi(n) = \frac{x^2}{2\zeta(2)}+O(x\log x) \tag{1}$$ and Abel's summation formula we get: $$ \sum_{n\leq x}\frac{\varphi(n)}{n^2}=\frac{1}{2\zeta(2)}+O\left(\frac{\log x}{x}\right)+2\int_{1}^{x}\left( \frac{u^2}{2\zeta(2)}+O(u\log u)\right)\frac{du}{u^3}\tag{2} $$ and the claim readily follows by rearranging terms.
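For what it's worth, the asymptotic is easy to check numerically (a rough sketch; the constant $A$ is truncated, so only expect agreement at the level of the error term):

```python
import math
from sympy import totient, mobius

x = 2000
lhs = sum(int(totient(n)) / n**2 for n in range(1, x + 1))

z2 = math.pi**2 / 6                       # zeta(2)
C = 0.5772156649015329                    # Euler's constant
A = sum(int(mobius(n)) * math.log(n) / n**2 for n in range(2, 5000))
rhs = math.log(x) / z2 + C / z2 + A

print(lhs, rhs)   # agree up to O(log x / x)
```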
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Integration of trigonometric function $\int\frac{\sin(2x)}{\sin(x)-\cos(x)}dx$ $$\int\frac{\sin(2x)}{\sin(x)-\cos(x)}dx$$ My attempt: Firstly, $\sin(2x)=2\sin(x)\cos(x)$. After that, divide the numerator and denominator by $\cos(x)$ to get $$2\int\frac{\sin(x)}{\tan(x)-1}\ dx.$$ From here onwards, should I convert $\sin(x)$, $\tan(x)$ to half-angles and use $\tan(x/2)=t$? But this would be a time-consuming method. Any suggestions?
HINT: $$\int\frac{\sin(2x)}{\sin(x)-\cos(x)}\space\text{d}x=$$ Use $\sin(2x)=2\sin(x)\cos(x)$: $$2\int\frac{\sin(x)\cos(x)}{\sin(x)-\cos(x)}\space\text{d}x=$$ Substitute $u=\tan\left(\frac{x}{2}\right)$ and $\text{d}u=\frac{1}{2}\sec^2\left(\frac{x}{2}\right)\space\text{d}x$: $$-8\int\frac{u(u^2-1)}{(u^2+1)^2(u^2+2u-1)}\space\text{d}u$$ Now, use partial fractions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Does $2764976 + 3734045\,\sqrt[3]{7} -2707603\,\sqrt[3]{7^2} = 0$? In the process of my numerical computations I have found a very special identity:

*$\;\;1264483 + 1707789 \,\sqrt[3]{7} - 1238313\,\sqrt[3]{7^2} = 9.313225746154785 \times 10^{-10}$

*$ -1500493 - 2026256\,\sqrt[3]{7} + 1469290\,\sqrt[3]{7^2} = 9.313225746154785 \times 10^{-10}$

Therefore if we subtract these two expressions we should find the difference is zero. On my computer I found: $$ 2764976 + 3734045\,\sqrt[3]{7} -2707603\,\sqrt[3]{7^2} \stackrel{?}{=} \left\{ \begin{array}{cl} 0 & \text{by hand} \\ -1.862645149230957 \times 10^{-9} & \text{by computer} \end{array}\right. $$ Subtracting these two numbers - which might be the same - we have gotten twice the number!
No. Mathematica tells us that its value is about $-2.0876013027695663896 \times 10^{-9}$. $$1264483 + 1707789 \times 7^{1/3} - 1238313 \times 7^{2/3} = -3.1767789172657775703*10^{-10}$$ according to Mathematica. $$-1500493 - 2026256 \times 7^{1/3} + 1469290 \times 7^{2/3} = 1.7699234110429886326 \times 10^{-9}$$ similarly. These are calculated with Mathematica's arbitrary-precision arithmetic, so it is "guaranteed" not to have the rounding errors that floating-point arithmetic can often introduce.
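The floating-point issue is easy to sidestep with arbitrary-precision arithmetic, e.g. mpmath:

```python
from mpmath import mp, cbrt

mp.dps = 50   # 50 significant digits, far beyond double precision
r = cbrt(7)
v1 = 1264483 + 1707789*r - 1238313*r**2
v2 = -1500493 - 2026256*r + 1469290*r**2
print(v1)        # ~ -3.1768e-10
print(v2)        # ~  1.7699e-9
print(v1 - v2)   # ~ -2.0876e-9: the combination is small but not zero
```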
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 4 }
Help calculating two integrals-- generalized and definite The first one is to calculate $$\int_{-\infty}^{\infty} \frac{1}{(1+4x^2)^2} dx.$$ I think this one should be solvable with the method of substitution, but I tried using $t=4x^2$, which didn't work well; or maybe I miscalculated something. The second one is, for every $n\in\mathbb N$, to calculate $$\int_{-1}^{1} (1-x^2)^n dx.$$ I really don't know how to solve the second integral. Maybe try $n=1,2,3,\cdots$ and look for a pattern?
The first integral can also be evaluated using Glaisher's theorem, which says that if: $$f(x)=\sum_{n=0}^{\infty}(-1)^n c_n x^{2n}$$ then we have: $$\int_{0}^{\infty}f(x) dx = \frac{\pi}{2}c_{-\frac{1}{2}}$$ if the integral converges and where an appropriate analytic continuation of the series expansion coefficients has to be used (e.g. factorials replaced by gamma functions). This is a special case of Ramanujan's master theorem. In this case we can easily obtain the series expansion. Differentiating the the geometric series: $$\frac{1}{1+u} = \sum_{n=0}^{\infty}(-1)^n u^n$$ yields: $$\frac{1}{(1+u)^2} = \sum_{n=0}^{\infty}(-1)^n (n+1)u^n$$ We thus have: $$\frac{1}{(1+4 x^2)^2} = \sum_{n=0}^{\infty}(-1)^n (n+1)4^n x^{2n}$$ This means that $c_n = (n+1)4^n$ and $c_{-\frac{1}{2}} = \frac{1}{4}$, the integral from minus to plus infinity is thus $\frac{\pi}{4}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Computing $\int \frac{\ln(x)}{(1+x)^2}dx$ I am trying to solve the following integral $$\int \frac{\ln(x)}{(1+x)^2}dx.$$ I have found a few similar questions on stackexchange but usually involving boundaries, so the answers have involved methods that don't seem applicable when the integral is unbounded. I have tried using partial integration $\int fg dx = Fg - \int Fg'dx$ with $f=ln(x)$ and $g = \frac{1}{(1+x)^2}$ and the other way around but I couldn't make it work.
Partial integration actually works: $$\int \frac{\log(x)}{(1+x)^2}dx=-\frac{\log(x)}{(1+x)}+\int\frac{1}{x(1+x)}dx$$ and this last one you can easily solve by writing $$\frac{1}{x(1+x)}=\frac1{x}-\frac{1}{1+x},$$ thus one obtains $$\int \frac{\log(x)}{(1+x)^2}dx=-\frac{\log(x)}{(1+x)}+\log(x)-\log(1+x)+C$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
support on a triangle and uniform marginal distributions Suppose $X$ and $Y$ are jointly distributed on the support $\operatorname{conv} \{(0,0),(0,1),(1,0)\}$ with the joint PDF $f>0$ everywhere on the support. Is it possible to find $f$ such that the marginal PDFs are given by $f_X(x)= 1$ for all $x\in[0,1]$ and $f_Y(y)=1$ for all $y\in [0,1]$? Thanks
You cannot have an everywhere positive density on the triangular support with the given marginal distributions. To see this, suppose such a distribution exists, and then consider the small triangle to the right of $x=1/2$. The $x$ marginal distribution implies that the probability that $x$ is greater than $1/2$ is $1/2$. Similarly, the probability that $y$ is greater than $1/2$ is $1/2$. These two small triangles are disjoint, so the probability of being in one or the other is $1$. But that means that on the square where $x$ and $y$ are between $0$ and $1/2$, the density must be zero almost everywhere. So such a distribution does not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Counting strings, need to understand how this worked Question Let $N_4 = \{1, 2, ..., 4\}$. Calculate the number of strings on the set $N_4$ that are of length $8$. Calculate the number of strings on the set $N_4$ that are of length $8$ and contain exactly five ones. Answer For the first part of the question, for each of the $8$ positions in the string we have a choice of $4$ digits. Using the product rule, the number of different possible strings is $4^8 = 65536$. The task of creating a string from the set $N_4$ of length $8$ with $5$ ones may be partitioned into two tasks: Select the positions of the ones; this can be done in $C(8,5) = 56$ different ways. (THIS IS THE STEP I DON'T UNDERSTAND) Select the digits that go into the remaining $3$ positions: $3^3 = 27$ different ways. (AND THIS TOO) We now apply the product rule to obtain the solution $56 \cdot 27 = 1512$. Hence, the solutions are: $65536$ and $1512$.
Placing the ones: You have 8 positions to place the first '1'; after that 7 positions remain available for the second '1', and so on till the fifth '1' (4 positions left for it). In total you would have 8×7×6×5×4 options; but because the '1's are indistinguishable from each other, each placement is counted 5×4×3×2×1 times (this is the number of ways you can order the '1's among themselves; imagine them having different colors, for example). So the number of distinguishable ways to place the '1's is 8×7×6×5×4 divided by 5×4×3×2×1, which equals 56. Placing the other digits: You are now left with 3 vacant positions, where you can place 2, 3 or 4. So you have 3 options for each position, and in total 3×3×3 = 27 combinations.
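Both numbers can be confirmed by brute force in a couple of lines:

```python
from itertools import product

strings = list(product(range(1, 5), repeat=8))        # all strings on {1,2,3,4}
print(len(strings))                                   # 4**8 = 65536
print(sum(1 for s in strings if s.count(1) == 5))     # 56 * 27 = 1512
```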
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
concept of canonical forms in PDEs - general question Why is the concept of deriving the canonical forms in PDEs useful? Is it because it gives a way to solve some problems numerically which otherwise would be difficult to solve using geometry?
"The concept of deriving the canonical forms..." I think what is meant is, why is it useful manipulating a general 2nd order PDE into one of the canonical forms useful? To answer my own question, because we can easily solve these canonical form PDE's analytically (at least on reasonable domains). This then allows us to undo whatever transformations we made to get an analytical solution to the original PDE in question. For example, it is easier solving $\nabla^2 u = F(x)$ for $x \in \mathbb{R^n}$ than it is solving the general elliptic PDE. If the domain is complicated and one wishes to use a computer, numerical methods are more developed for efficiently solving the canonical equations as opposed to a general PDE.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f(x+1)+f(x-1)=\sqrt 3 f(x), \forall x$ then $f$ is periodic. If $f: \mathbb{R} \rightarrow \mathbb{R}$ such that $f(x+1)+f(x-1)=\sqrt 3 f(x), \forall x$ then $f$ is periodic. I tried to replace $x$ by $x+1, x-1$ in the equality, to get something like $f(x + k)=f(x)$ but without success. Any help is appreciated.
Fix some $x \in \Bbb R$. Define $a_n = f(x+n)$, for $n \in \Bbb N$. Note that $a_n$ satisfies the recurrence relation $a_{n+1} = \sqrt{3} a_n - a_{n-1}$. By the general theory of recurrence relations it follows that $a_n=c_1 r_1^n + c_2 r_2^n$ where $r_1,r_2$ are the roots of the quadratic equation $r^2-\sqrt{3}r+1=0$. But note that $r_1,r_2$ are complex numbers with norm 1 (specifically, $e^{\pm i\pi/6}$). Hence it follows that $a_n = A\cos (\pi n/6) + B \sin(\pi n /6)$. Thus you can conclude that $a_{n+12}=a_n$ for all $n$, i.e, $f(x+12)=f(x)$.
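As a quick numerical illustration (a sketch of mine; the starting values are arbitrary), iterating the recurrence shows the period-12 behaviour directly:

import math

# a_{n+1} = sqrt(3)*a_n - a_{n-1}, starting from arbitrary real values
a_prev, a_curr = 0.7, -1.3
seq = [a_prev, a_curr]
for _ in range(30):
    a_prev, a_curr = a_curr, math.sqrt(3) * a_curr - a_prev
    seq.append(a_curr)

# check a_{n+12} = a_n up to floating-point error
print(all(abs(seq[n + 12] - seq[n]) < 1e-9 for n in range(len(seq) - 12)))  # True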
{ "language": "en", "url": "https://math.stackexchange.com/questions/1884934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
Skew lines and what's between them Is it always possible to find a line perpendicular to two skew lines in space? And how can we visualise the proof geometrically? And if anyone could present the proof that a line perpendicular to both skew lines always exists, please elaborate.
Think about how you would find the shortest distance between the two lines. You know that the shortest distance from a line to a point not on the line is along a segment that’s perpendicular to the line. By symmetry, this means that the shortest distance between two skew lines must be along a segment that’s perpendicular to both of them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
$G$ is an abelian group of order $n$ with the property that $G$ has at most $d$ elements of order $d$, for any $d$ dividing $n$. Then $G$ is cyclic. $G$ is an abelian group of order $n$ with the property that $G$ has at most $d$ elements of order $d$, for any $d$ dividing $n$. Then $G$ is cyclic. I am not getting any clue how to start. Please Help.
First, decompose $G=C_1\times\cdots\times C_n$, via the Fundamental theorem of finitely generated abelian groups, where the $C_i$ are cyclic. Let's show that the orders of the $C_i$ are pairwise coprime. Let's start, say, with $C_1$ and $C_2$. Suppose $d$ divides both $|C_1|$ and $|C_2|$. Then $C_1$ has an element $\alpha$ of order $d$, and $C_2$ also has an element $\beta$ of order $d$. Then the elements of the form $(\alpha,\beta^i)$ and $(\alpha^i,\beta)$, $i=0,\ldots,d-1$ have order $d$. Thus we've found $2d-1$ elements of order $d$, so the hypothesis implies $2d-1\leq d$ and thus $d=1$. Thus, $|C_1|$ and $|C_2|$ are coprime, and therefore $D_1:=C_1\times C_2$ is cyclic (Product of two cyclic groups is cyclic iff their orders are co-prime). But now we have $G=D_1\times C_3\times\cdots C_n$, a product of $n-1$ cyclic groups. Proceeding by induction on the number of terms, we conclude that $G$ is cyclic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What is the layman's meaning of "sampling from a binomial distribution"? I know what a binomial distribution is, but I am not able to grasp the sampling associated with it. Further, a formal statement and applications of sampling from a binomial distribution would be of great help. Thanks.
A Binomial Distribution is that of the count of successes among a certain number of success/failure trials,† each with an identical and independent rate of success. So if you have a known number of trials, $n$, all with the same probability of success, $p$, and each independent of every other, then the count of successes is a $\mathcal {Bin}(n, p)$ random variable. Examples: *tossing the same coin a certain number of times and counting the heads. *rolling a certain number of identically balanced dice and counting those showing 6. *drawing balls from a jar with replacement (shaking between draws) a certain number of times, and counting those of a particular colour. *drawing cards from a standard deck with replacement, shuffling between draws, a certain number of times and counting the aces drawn. *and so forth. Additionally, in some cases taking a representative sample from a significantly large population and counting matches of a particular criterion may be approximated as having a binomial distribution. Though more accurately the count will have a hypergeometric distribution. † By the way, these are called Bernoulli trials.
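In software, "sampling from a binomial distribution" is literally a one-line call; for instance with NumPy (an illustrative sketch, assuming NumPy is available):

import numpy as np

rng = np.random.default_rng(seed=0)

# one sample from Bin(n=10, p=0.5): the number of heads in 10 fair coin tosses
print(rng.binomial(n=10, p=0.5))

# many samples; their average approximates the mean n*p = 5
samples = rng.binomial(n=10, p=0.5, size=100_000)
print(samples.mean())  # close to 5.0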
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Check convergence and find the sum $\sum_{n=1}^{\infty} \frac{1}{9n^2+3n-2}$ $$\sum_{n=1}^{\infty} \frac{1}{9n^2+3n-2}$$ I have started an overview of series; the book starts with geometric series, emphasizing that for each series there is a corresponding infinite sequence. For convergence I can look at the partial sums, but how can I find the sum of the series?
It is a telescopic series: $$\sum_{n=1}^{N} \frac{1}{9n^2+3n-2}=\sum_{n=1}^{N}\frac{1}{(3n+2)(3n-1)}=\sum_{n=1}^{N}\left(\frac{1/3}{3n-1}-\frac{1/3}{3n+2}\right)\\=\sum_{n=1}^{N}\frac{1/3}{3n-1}-\sum_{m=2}^{N+1}\frac{1/3}{3m-1}=\frac{1/3}{3\cdot 1-1}-\frac{1/3}{3(N+1)-1}\to \frac{1}{6}\quad \mbox{as $N\to+\infty$}.$$
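A quick numerical sanity check of the value $\frac16$ (an added sketch, not part of the proof):

s = 0.0
for n in range(1, 100001):
    s += 1.0 / (9*n*n + 3*n - 2)
print(s, 1/6)  # ~0.1666656 vs 0.1666666...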
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What is the nth term of the sequence? $0, 0, 1, 3, 6, 10, \ldots$ I am trying to find the relation between the number of nodes and the number of connections possible. So if there are $0$ nodes, that means $0$ connections possible; $1$ node still means $0$ connections possible; $2$ nodes, $1$ connection possible; $3$ nodes, $3$ connections; and so on. How can I find the relation between $n$ nodes and the number of possible connections?
Hint: If you have graph on $n$ nodes, an edge in the graph corresponds to a choice of $2$ of these nodes (the $2$ which the edge connects). Hence the number of possible edges in the graph is the number of ways you can choose $2$ nodes. Can you continue from here?
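Carrying the hint one step further (this part is added for completeness): the number of ways to choose $2$ nodes out of $n$ is $$\binom{n}{2}=\frac{n(n-1)}{2},$$ which reproduces the sequence $0,0,1,3,6,10,\dots$ for $n=0,1,2,3,4,5$.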
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Geometry: Show that two lines are perpendicular I have a homework problem telling me the following: $A, B, C,$ and $D$ are points on a circle. $A_1, B_1, C_1$ and $D_1$ are the midpoints of the arcs $AB, BC, CD$ and $DA$. Show that $A_1C_1$ is perpendicular to $B_1D_1$. Here's what I drew real quick with Geogebra: (figure omitted) The angles $\angle AMB \cong \angle DMC$ and $\angle BMC \cong \angle DMA$ because vertical angles are always equal. We also know that the angles $$\angle AMA_1 \cong \angle BMA_1$$ and $$\angle BMB_1 \cong \angle CMB_1$$ because the arcs $AA_1 \cong BA_1$ and $BB_1 \cong CB_1$. If I somehow could show that $\angle A_1MB$ and $\angle B_1MB$ are congruent I might be able to show that the sum of the two angles, $|\angle A_1MB| + |\angle B_1MB| = 90^{\circ}$. So what I've done is draw the line $A_1B_1$, as in the picture above; we know that $A_1M \cong B_1M$ because they're radii of the circle. This gives us that $\angle A_1B_1M \cong \angle B_1A_1M$ by the base angle theorem. But to my problem: if I somehow could show that $A_1B_1 \parallel AC$ then we have the vertical angles $$\angle AMA_1 \cong \angle B_1A_1M \cong \angle A_1B_1M \cong \angle CMB_1$$ which proves that all these angles have equal measure, giving us $|\angle A_1MB| + |\angle B_1MB| = 90^{\circ}$. How do I show that they're parallel? Or is there any other way to show that $A_1C_1$ and $B_1D_1$ are perpendicular? Update Since my figure is very misleading, I still can't solve it. This is what I get from this: (figure omitted) I know the triangles are similar but I can't figure out how to actually show that they're perpendicular.
Call the intersection of $A_1C_1$ and $B_1D_1$ point $P$. By the fact that the angle between two chords equals half the sum of the central angles of the intercepted arcs, $$\measuredangle A_1PB_1=\frac{\measuredangle A_1MB_1+\measuredangle C_1MD_1}{2}$$ We can prove that $\measuredangle A_1MB_1+\measuredangle C_1MD_1=180^{\circ}$ using the fact that each midpoint angle is the average of the two adjacent central angles (e.g. $\measuredangle A_1MB_1=\frac{\measuredangle AMB+\measuredangle BMC}{2}$, since $A_1$ and $B_1$ bisect their arcs): $$\measuredangle A_1MB_1+\measuredangle C_1MD_1=\frac{\measuredangle AMB+\measuredangle BMC+\measuredangle CMD+\measuredangle DMA}{2}=\frac{360^{\circ}}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is it possible that the fraction of variance unexplained becomes greater than 1? I read this definition of fraction of variance unexplained from https://en.wikipedia.org/wiki/Fraction_of_variance_unexplained: $$FVU = \frac{VAR_\text{err}}{VAR_\text{tot}}$$ Suppose the real data is $x_i$ and the regression estimate is $y_i$; then if $x_i$ and $y_i$ are completely uncorrelated, $VAR_\text{err} = VAR(x - y) = VAR(x) + VAR(y)$, but $VAR_\text{tot} = VAR(x) \le VAR_\text{err}$. Therefore if $VAR(y)\ne 0$ then $FVU > 1$. I am wondering if I got anything wrong here? Thanks!
Let's assume a simple model, $Y=\beta_0 + \beta_1X+\epsilon$. The absolute value of Pearson's correlation coefficient is bounded by $1$, $$ |\rho_{X,Y}| = \left|\frac{cov(X,Y)}{\sigma_X \sigma_Y}\right| \le 1; $$ you can easily show this by using the Cauchy–Schwarz inequality. To estimate $\rho$ you use the sample equivalent $r$, with $r^2=S_{XY}^2/(S_{XX}S_{YY})$. By using the LS estimators, you can show that for the simple model $r^2 = R^2$, that is, the square of Pearson's statistic equals the "proportion of explained variance", which has to lie in $[0,1]$. The key term here is the use of the Least Squares estimators. If you are using some other, unrelated method to predict $Y$, then you definitely can get "anomalies" like $SSE>SST$, but in this case the very use of $R^2$ is questionable. Sure, you can construct some generalized or alternative measure of $R^2$ that would satisfy its initial mathematical properties, but it won't be the basic $SSR/SST$ proportion from the LS simple linear model.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Application of 'Total variation of a complex measure is finite.' I have a copy of Rudin's 'Real and Complex Analysis'. Theorem 6.4 says 'Total variation of a complex measure is finite.' Q: What is an application of this property? I've read all the exercises in the book and I couldn't find any application of the theorem. Thanks.
I don't know of any exercises that make use of this result, but I can tell you of a point in the text where he makes crucial use of it: the proof of the theorem of the Brothers Riesz (17.13). Briefly, to show that the measure is absolutely continuous (with respect to normalized Lebesgue measure on $\mathbb T$), he obtains a Radon-Nikodym derivative that is in $L^1(\mathbb T)$ as it is the boundary values of a function in $H^1(\mathbb D)$. Showing that the function is in $H^1(\mathbb D)$ rests upon being able to show that it is analytic, which comes from the main hypothesis of the theorem, and that its $H^1$ norm is finite, which comes from the fact that it is bounded above by $\lvert \mu \rvert$ and $\mu$ is a complex Borel measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
0/0 in limit evaluation When evaluating limits we often come across the form 0/0 and we can't get a limiting value. As stated in many textbooks and online resources, the method to still get a limiting value as the input approaches a specific value is to first factorize or simplify the function to cancel out the factors/variables responsible for the form 0/0. My question is: doesn't it change the function altogether if we simplify it? Because if we had used the original function it would not have been defined at that particular input. Can anyone provide the intuition or concept behind this?
The main thing is: Taking the limit does not mean evaluating the expression at the point, but rather infinitely close to that point (i.e. approaching the point). Simplifying an expression does change the behaviour at the point (for example getting a value where it previously was indeterminate because of division by 0 or something else), but it does not change the behaviour close to the point, i.e. as you approach the point, which is what matters for the limit.
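A standard concrete example (added for illustration): $$\frac{x^2-1}{x-1}=x+1\quad\text{for all }x\neq 1.$$ The left-hand side is undefined at $x=1$, but the two expressions agree at every point near $1$, so they have the same limit there: $$\lim_{x\to 1}\frac{x^2-1}{x-1}=\lim_{x\to 1}(x+1)=2.$$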
{ "language": "en", "url": "https://math.stackexchange.com/questions/1885968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to show that $\mu$ is $\sigma$-finite This is part of a problem from an old prelim exam in analysis. I am studying to prepare for my own prelim. Let $\{q_n\}=\mathbb Q$, and let $f_n : \mathbb R \to [0,\infty)$ be a Borel measurable function with $\int f_n d\lambda=1$ and with support $[q_n-2^{-n-1},q_n+2^{-n-1}]$. $\lambda$ is the Lebesgue measure. We furthermore define a Borel measure $\mu$ by $\mu(A) := \int_A \sum_{n=1}^\infty f_n d\lambda$. For the first part of this problem, I have already proven that $\sum_{n=1}^\infty f_n(x) < \infty\ \mathrm{a.e}$, that $\mu <\!\!<\lambda$, and that every open subset of $\mathbb R$ has infinite $\mu$-measure. Now we are asked to show that $\mu$ is $\sigma$-finite, and a I find myself fully at a loss to see how to do this. It seems very difficult to see how this could even be true, since again, every open subset of $\mathbb R$ has infinite $\mu$-measure.
If you've shown $f = \sum f_n <\infty$ on a set $A$ with $\lambda (\mathbb R\setminus A)=0,$ we can do this: For $j,k=1,2, \dots,$ let $B_{jk} = \{x\in [-j,j]\cap A: f(x)\le k\}.$ Then $A\cap [-j,j] = \cup_k B_{jk},$ and $\mu(B_{jk}) = \int_{B_{jk}} f \,d\lambda\le k\cdot 2j<\infty$ for all $j,k.$ We then have $A= \cup _{j,k} B_{jk}.$ Finally, since $\mu \ll \lambda$ and $\lambda(\mathbb R\setminus A)=0,$ we get $\mu(\mathbb R\setminus A)=0,$ so $\mathbb R = (\mathbb R\setminus A)\cup\bigcup_{j,k} B_{jk}$ exhibits $\mathbb R$ as a countable union of sets of finite $\mu$-measure, which shows $\mu$ is $\sigma$-finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Simplifying the product of multiple binomial expansions I have a tricky product I'm trying to expand out into a summation, and I'm not sure how to go forward. I have two sets of numbers, each containing $n$ elements total. The first set, $\{x_k\}$ are real and positive: $$ x_k \in \mathbb{R}, \; x_k > 0 \; \forall \; k$$ The second set, $\{y_k\}$, are all naturals: $$ y_k \in \mathbb{N} \; \forall \; k$$ The expression I'm trying to expand out is a product of different binomial expansions: $$ f(c) = \prod_{k=1}^{n} (c x_k +1)^{y_k} = \prod_{k=1}^{n} \left( \sum_{m=0}^{y_k} \binom{y_k}{m}c^{m} x_{k}^{m} \right)$$ where $c>0$. When fully expanded, we can only group by powers of $c$, which must run between zero and $\sum_{k}^{n} y_k$. Letting $$ Y = \sum_{k=1}^{n} y_k $$ we should be able to express $f(c)$ as $$ f(c) = \sum_{m=0}^{Y} \alpha_m c^{\beta_m}$$ Now I'm stuck for how to calculate the coefficients $\alpha_m$ and exponents $\beta_m$. Anyone have any advice? Thanks!
Using the same index of summation $m$ in all the sums might be a bit confusing, as these different indices could all take different values in the product. It therefore might be better to put subscripts on those indices, so we would have $$ f(c) = \prod_{k=1}^n \left( \sum_{m_k = 0}^{y_k} \binom{y_k}{m_k} c^{m_k} x_k^{m_k} \right). $$ When we exchange the summation and the product, the sum will now be over all possible choices of the indices $m_k$. Hence it will be a sum over the vectors $\vec{m} = (m_1, m_2, \ldots, m_n)$, where each $m_k$ satisfies $0 \le m_k \le y_k$. Hence we have $$ f(c) = \sum_{\vec{m} \in \Lambda} \prod_{k=1}^n \left( \binom{y_k}{m_k} c^{m_k} x_k^{m_k} \right), $$ where $\Lambda = \mathbb{Z}^n \cap \prod_{k=1}^n [0, y_k]$. We can now collect the factors of $c$, and we would get $$ f(c) = \sum_{\vec{m} \in \Lambda} c^{\sum_{k=1}^n m_k} \prod_{k=1}^n \left( \binom{y_k}{m_k} x_k^{m_k} \right). $$ Now different vectors $\vec{m} \in \Lambda$ could result in the same power $p = \sum_{k=1}^n m_k$ of $c$. If you would like, you could group them together; let $\Lambda_p = \left\{ \vec{m} \in \Lambda: \sum_{k=1}^n m_k = p \right\}$. As you pointed out, the possible powers are the integers between $0$ and $Y = \sum_{k=1}^n y_k$. We would then have $$ f(c) = \sum_{p = 0}^Y \alpha_p c^p, $$ where the coefficients $\alpha_p$ are given by collecting the terms from $\Lambda_p$: $$ \alpha_p = \sum_{\vec{m} \in \Lambda_p} \prod_{k=1}^n \binom{y_k}{m_k} x_k^{m_k}. $$
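Computationally, the coefficients $\alpha_p$ are easiest to obtain by repeated polynomial multiplication rather than by enumerating $\Lambda_p$; here is an illustrative Python sketch (the sample values of $x_k$ and $y_k$ are made up):

import numpy as np

def coefficients(xs, ys):
    # multiply out prod_k (x_k*c + 1)^{y_k}; np.polymul stores
    # coefficients highest degree first
    poly = np.array([1.0])
    for x_k, y_k in zip(xs, ys):
        for _ in range(y_k):
            poly = np.polymul(poly, np.array([x_k, 1.0]))  # times (x_k*c + 1)
    return poly[::-1]  # reversed, so that index p holds alpha_p

print(coefficients([2.0, 0.5], [1, 2]))
# f(c) = (2c+1)(0.5c+1)^2 = 0.5c^3 + 2.25c^2 + 3c + 1  ->  [1.  3.  2.25  0.5]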
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why aren't vacuous truths just undefined? I am struggling to understand this. According to truth tables, if $P$ is false, it doesn't matter whether $Q$ is true or not: Either way, $P \implies Q$ is true. Usually when I see examples of this people make up some crazy premise for $P$ as a way of showing that $Q$ can be true or false when $P$ is something outrageous and obviously untrue, such as "If the moon is made of bacon-wrapped apple-monkey carburetors, then I am a better wakeborder than Gauss." $P$ is clearly false, but $P \implies Q$ is true no matter what the state of $Q$ is, and I don't understand why. Are we saying "If $P$ is false, then all bets are off and $Q$ can be anything, either true or false, and not contradict our earlier claim, and if it isn't false, it must be true"? Otherwise why can't we say that if $P$ is false, then we can't make any claims one way or the other on whether or not it implies anything at all?
Consider the statement: All multiples of 4 are even. You would say that statement is true, right? So let's formulate that in formal logic language: $\forall x: 4|x \implies 2|x$ (Here "$a|b$" means "$a$ divides $b$", that is, $b$ is a multiple of $a$.) Now a $\forall$ statement is true if it is true whatever you insert for the quantified variable (after all,that's "for all" means). So let's try to insert $3$: $4|3 \implies 2|3$ But wait, $4|3$ is false! Moreover, $2|3$ is also false. So the only way for the original statement to be true is that the implication $\text{false}\implies\text{false}$ gives true. A similar argument can be done for $\text{false}\implies\text{true}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 12, "answer_id": 0 }
How does $\left\{f\mid f\colon\mathbb{N}\to\mathbb{R}\right\}$ mean 'the set of all real-valued functions of one natural number variable'? On pg. 83 of Hefferon's Linear Algebra, it says this: The set $\left \{ f\mid f\colon\mathbb{N}\rightarrow \mathbb{R} \right \}$ of all real-valued functions of one natural number variable is a vector space under the operations. How do I read the notation to mean what the sentence says? I thought the "|" and ":" mean the same thing--"such that...". So I thought someone would write it $\left \{ f\mid \mathbb{N}\rightarrow \mathbb{R} \right \}$. Also, I don't understand the "$\mathbb{N}\rightarrow \mathbb{R}$" part. I saw from Wikipedia that "$\rightarrow$" means "implies". It makes the notation sound like "$\mathbb{N}$ implies $\mathbb{R}$", which doesn't make sense to me.
If we read it out loud we would say: The set of functions $f$ such that the domain of $f$ is $\mathbb N$ and the co-domain of $f$ is $\mathbb R$. As for the arrow: in "$f\colon\mathbb N\rightarrow \mathbb R$" the symbol "$\rightarrow$" is function notation, not logical implication; it simply records that $f$ maps natural numbers to real numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How does one find the integrals of piecewise functions? Of course, I am interested in finding out how one would find the result of a piecewise function if I simply integrated it. In other words, how does one integrate a function like $$f(x)=\begin{cases}x^2 & x\le 0\\ x & x>0\end{cases}$$ or $g(x) = \operatorname{sgn}(\sin x)$, where $\operatorname{sgn}(x)$ is the signum function? Simply put, how does one find the antiderivative? Not evaluated between points $a$ and $b$, but rather for the whole function. Please, if you can, include some sources or the steps to getting the result.
The first thing to recognize is that "the" antiderivative is a misnomer, because if it exists, it is not unique: we can add any constant and the result will be another antiderivative. In particular, assuming $f$ is continuous, any integral of the form $$\int_c^x f(t) dt$$ is an antiderivative of $f$. (This is one of the fundamental theorems of calculus.) Here, $c$ is any constant. A particularly easy constant to work with is $c = 0$. In this case, an antiderivative of $f$ is $$I(x) = \int_0^x f(t) dt$$ To evaluate this, consider two cases. Case 1 $x \leq 0$ In this case, $f(t) = t^2$ in the interval $[x,0]$, so $$I(x) = \int_0^x f(t) dt = \int_0^x t^2 dt = \frac{x^3}{3}$$ Case 2 $x > 0$ In this case, $f(t) = t$ in the interval $[0,x]$, so $$I(x) = \int_0^x f(t) dt = \int_0^x t dt = \frac{x^2}{2}$$ Putting the pieces together, we see that an antiderivative of $f$ is given by $$I(x) = \begin{cases} \displaystyle \frac{x^3}{3} & \text{if }x \leq 0 \\ \displaystyle \frac{x^2}{2} & \text{if }x > 0 \\ \end{cases}$$ More generally, $I(x) + C$ is an antiderivative for any constant $C$. For the second example, $$g(x) = \text{sgn}(\sin x),$$ note that $g$ is not continuous at $x=0$. To see this, observe that for small positive $x$, we have $\sin x > 0$, so $g(x) = 1$, whereas for small negative $x$, we have $\sin x < 0$, so $g(x) = -1$. Finally, $g(0)$ is either $0$, $1$, or $-1$, depending on how you define $\text{sgn}$. In any case, $\lim_{x \to 0^+}g(x) = 1$ whereas $\lim_{x \to 0^-}g(x) = -1$, so $\lim_{x \to 0}g(x)$ does not exist, hence $g$ cannot be continuous at $x=0$ no matter how we define $g(0)$. For the same reason, $g$ is not continuous at any $x$ where $\sin(x) = 0$, in other words, at any multiple of $\pi$. Now, this means that there cannot be any function $I$ which is differentiable everywhere such that $I' = g$. This is because the derivative of a differentiable function must satisfy the intermediate value property, so $I'$ cannot have a step discontinuity. Of course, we can find a function $I$ which is differentiable everywhere except at multiples of $\pi$, such that $I'(x) = g(x)$ for all $x$ that are not multiples of $\pi$. To do this, take an arbitrary interval $(k\pi, (k+1)\pi)$ (where $k$ is an integer), and find an antiderivative of $g$ in this interval. Thus for $x \in (k\pi, (k+1)\pi)$, we have $\sin(x) > 0$ if $k$ is even, and $\sin(x) < 0$ if $k$ is odd, so in either case, $g(x) = \text{sgn}(\sin(x)) = (-1)^k$. Then $$\int_{k\pi}^x g(t) dt = \int_{k\pi}^x (-1)^k dt = (-1)^k (x - k\pi)$$ Therefore, the function $I$ defined piecewise by $$I(x) = (-1)^k(x - k\pi), \qquad x \in (k\pi, (k+1)\pi)$$ satisfies $I'(x) = g(x)$ for all $x$ except multiples of $\pi$. Note that $I$ has slope $+1$ or $-1$ on the intervals $(k\pi, (k+1)\pi)$ where $k$ is even or odd, respectively, though with these particular pieces it jumps at the multiples of $\pi$. More generally, if for each integer $k$, we let $c_k$ be any constant, then $$J(x) = (-1)^k(x - k\pi) + c_k, \qquad x \in (k\pi, (k+1)\pi)$$ satisfies $J'(x) = g(x)$ for all non-multiples of $\pi$. Choosing, say, $c_k = 0$ for even $k$ and $c_k = \pi$ for odd $k$ makes the pieces match up at the endpoints, so that $J$ extends to a continuous periodic "triangle" wave.
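For what it's worth, a computer algebra system reproduces the first computation; a small SymPy sketch (assuming SymPy is installed; the exact printed form may vary by version):

from sympy import symbols, Piecewise, integrate

x = symbols('x')
f = Piecewise((x**2, x <= 0), (x, True))
print(integrate(f, x))
# expected (up to a constant): Piecewise((x**3/3, x <= 0), (x**2/2, True))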
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examples of problems that are easier in the infinite case than in the finite case. I am looking for examples of problems that are easier in the infinite case than in the finite case. I really can't think of any good ones for now, but I'll be sure to add some when I do.
A Markov chain is characterized by a discrete probability distribution of the initial state of a system, $\pi_0$, and a transition matrix $P$ such that $p_{ij}$ is the probability of going from state $i$ to state $j$ after any one transition. Under certain assumptions, the probability distribution of states after the first transition is $\pi_1 = \pi_0 P$, after two transitions is $\pi_2 = \pi_1 P = \pi_0 P^2$, and after $n$ transitions, $\pi_n = \pi_0 P^n$. By contrast, the steady-state (as $n \to \infty$) probability distribution of states, $\pi$, must satisfy $\pi = \pi P$ and $\pi \mathbf{1} = 1$, where $\mathbf{1}$ is a column vector of 1's. This is an over-determined (by one) system of equations, but the upshot is that the steady-state probability distribution of states is obtained by inverting a matrix. See, for example, the Markov chain Wiki for details.
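To make the last point concrete, here is a small NumPy sketch (the transition matrix is a made-up two-state example) solving the over-determined system $\pi = \pi P$, $\pi\mathbf{1}=1$ by least squares:

import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# stack (P^T - I) pi = 0 with the normalization row sum(pi) = 1
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # [0.8333... 0.1666...]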
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119", "answer_count": 26, "answer_id": 8 }
All real values of $m$ for which $x^2-(m-3)x+m>0\forall x\in \left[1,2\right]$ All real values of $m$ for which $x^2-(m-3)x+m>0\forall x\in \left[1,2\right]$ $\bf{My\; Try::}$ We can write it as $$x^2-(m-3)x+\left(\frac{m-3}{2}\right)^2+m-\left(\frac{m-3}{2}\right)^2>0$$ So $$\left[x-\left(\frac{m-3}{2}\right)\right]^2-\left[\frac{\sqrt{m^2-10m+9}}{2}\right]^2>0$$ So $$\left[x-\left(\frac{(m-3)+\sqrt{m^2-10m+9}}{2}\right)\right]\cdot \left[x-\left(\frac{m-3-\sqrt{m^2-10m+9}}{2}\right)\right]>0 $$ Now how can I solve it from here? Help required, thanks.
I am sorry but I don't agree with your curves @Roman83. Here is how some of them look: (figure omitted) for values of the parameter $m=0,1,\cdots 10$. If $m=0$ we get the parabola passing through the origin; if $m=10$ we get the parabola which passes through the point $(0,10)$, with increasing order between all of them. Let us show how it gives us graphical evidence (I don't say a complete proof) of the solution. Setting apart the fact that all these parabolas pass through a common point $(1,4)$ (a fact that is easy to establish), what do we see in this graphic? That the case $m=10$ is a limit case; if $m \ge 10$, we will have values of $x \in [1,2]$ such that the expression $x^2-(m-3)x+m$ fails to be positive (for $m=10$ it vanishes at $x=2$). The condition is then visibly fulfilled iff $m < 10$. It remains to prove it (by algebraic means)...
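To finish it off algebraically (a sketch added here, following the graphical argument above): write $f_m(x)=x^2-(m-3)x+m$, whose vertex is at $x_v=\frac{m-3}{2}$, and note $f_m(1)=4$ and $f_m(2)=10-m$. * If $x_v<1$, i.e. $m<5$: $f_m$ is increasing on $[1,2]$, so $\min_{[1,2]}f_m=f_m(1)=4>0$ always. * If $x_v>2$, i.e. $m>7$: $f_m$ is decreasing on $[1,2]$, so $\min_{[1,2]}f_m=f_m(2)=10-m>0 \iff m<10$. * If $1\le x_v\le 2$, i.e. $5\le m\le 7$: $\min_{[1,2]}f_m=m-\frac{(m-3)^2}{4}>0 \iff m^2-10m+9<0 \iff 1<m<9$, which holds for all $m\in[5,7]$. Combining the three cases, the inequality holds for all $x\in[1,2]$ exactly when $m<10$.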
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A possible Property of Euler's totient function: $n$ such that $\varphi(n)$ and $\varphi(n+1)$ are both powers of two $n$ is an odd positive integer such that $\varphi(n)$ and $\varphi(n+1)$ are both powers of two. Here, $\varphi(n)$ denotes Euler's totient function. Is it true that $(n+1)$ is either $6$ or a power of $2$? Please help me to prove or disprove this statement. (I was randomly flipping through some values attained by the $\varphi$ function when I observed this pattern) :)
This is not an answer, but might be useful when someone smarter than me tries to prove this. Write $n=\prod_{i}p_i^{n_i}$ and $n+1=\prod_{i}(p_i')^{m_i}$. Then $\phi(n)=\prod_{i} p_i^{n_i-1}(p_i-1)$ being a power of two implies that if $p_i\neq 2$, then $n_i=1$ and $(p_i-1)$ is a power of two. The same holds for $\phi(n+1)$. Let $I$ be an enumeration of the prime numbers such that the prime number minus one is a power of two, i.e. $0$ corresponds to $2$, $1$ corresponds to $3$, $2$ corresponds to $5$, $3$ corresponds to $17$, and so on (apart from $2$, these are the Fermat primes). Then $$n=2^{n_0}\prod_{i\in I, i\geq 1}p_i^{n_i}$$ where $n_i\in \left\{0,1\right\}$ and $$n+1=2^{m_0}\prod_{i\in I,i\geq 1}p_i^{m_i}$$ where $m_i\in \left\{0,1\right\}$. Here $p_i$ denotes the $i$-th prime number such that $p_i-1$ is a power of $2$ (and we start counting from zero). From this we get that $$2^{n_0}\prod_{i\in I, i\geq 1}p_i^{n_i}+1=2^{m_0}\prod_{i\in I, i\geq 1}p_i^{m_i}.$$ From this equation, we should be able to find something; however, I do not even know whether the index set $I$ is finite or not, making further manipulations difficult.
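A small empirical check of the conjecture (an added sketch using SymPy's totient; it only tests a finite range, so it proves nothing, and it may take a few seconds):

from sympy import totient

def is_power_of_two(m):
    return m >= 1 and (m & (m - 1)) == 0

for n in range(3, 20000, 2):
    if is_power_of_two(int(totient(n))) and is_power_of_two(int(totient(n + 1))):
        print(n + 1)  # prints 4, 6, 16, 256; each is either 6 or a power of 2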
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Properties of convergent sequences theorem help I am reviewing some material about sequences and I ran across the following theorem. Theorem: Let $(x_{n})$ and $(y_{n})$ be two convergent sequences with limits $x$ and $y$, respectively. Then $(x_n + y_n)$ converges to $x+y$. The proof that the text uses is as follows. Proof: Since $x_n$ and $y_n$ are convergent, given $\epsilon > 0$, $\exists N_1,N_2$ such that $x_n \rightarrow x$ and $y_n \rightarrow y$ for all $n \geq N_1$, $n \geq N_2$, respectively. Taking $N=\max\{N_1,N_2\}$, and with the use of the triangle inequality it follows that $$|(x_n+y_n)-(x+y)| \leq |x_n-x| + |y_n-y| \leq 2\epsilon.$$ The question that I have is with the last inequality. If $|x_n-x| + |y_n-y| \leq 2\epsilon$ is true then how does this imply the limit exists? Attempt at understanding If $|x_n-x| + |y_n-y| \leq 2\epsilon$, then $2\epsilon$ is the largest neighborhood that contains the limit. Thus we can choose another epsilon $\epsilon'\neq\epsilon$ with $\epsilon'<\epsilon$ such that if the limit, in this case $(x+y) \in (x-2\epsilon,x+2\epsilon)$, then $(x+y) \in (x-\frac{\epsilon'}{2},x+\frac{\epsilon'}{2}) \in (x-2\epsilon,x+2\epsilon)$.
I would prove it in this way and this proof would stick to the definition of limit: If $x_n$ and $y_n$ are convergent, let $\lim_{n\to\infty} x_n=L_1$ and $\lim_{n\to\infty} y_n=L_2$. Let $\epsilon$ be any given positive number. Then for the positive number $\frac{\epsilon}{2}$, we can find a positive integer $N_1$ such that $$|x_n-L_1|\lt \frac{\epsilon}{2}\text{ , for all }n>N_1\text{ ...(1)}$$ Again, for the positive number $\frac{\epsilon}{2}$, we can find a positive integer $N_2$ such that $$|y_n-L_2|\lt \frac{\epsilon}{2}\text{ , for all }n>N_2\text{ ...(2)}$$ Take $N=\max\{N_1,N_2\}$. When $n>N$, both inequalities (1) and (2) hold. Hence $$|(x_n+y_n)-(L_1+L_2)|=|(x_n-L_1)+(y_n-L_2)|\le |x_n-L_1|+|y_n-L_2|\lt \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$$for all $n>N$. Therefore $\lim_{n\to\infty}(x_n+y_n)=L_1+L_2$ Perhaps this proof would help you understand it because it is derived directly from the definition of limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1886950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A property of dual norm In the convex optimization textbook, page 475, Stephen Boyd defines the normalized steepest descent direction with respect to the norm $||.||$ as $$ \Delta x_{nsd} = argmin \{ \bigtriangledown f(x)^T v \; | \; ||v|| \leq 1 \}$$ In Appendix A of this book, the dual norm is defined as $$ || \bigtriangledown f(x) ||_{\ast} = \sup \{ \bigtriangledown f(x)^T v \; | \; ||v|| \leq 1 \}$$ I tried many ways to figure out the following conclusion $$\bigtriangledown f(x)^T \Delta x_{nsd} = - || \bigtriangledown f(x) ||_{\ast} $$ Unfortunately, I have not figured out any potential solution. Do you have any solution or suggestion to help me? Thank you very much!
HINT It boils down to understanding the dual norm. This dual norm has the nice property that $$ |\bigtriangledown f(x)^Tv|\leq\|\bigtriangledown f(x)^T\|_*\|v\| $$ for any vector $v$. So when $\|v\|\leq1$ we have $$ |\bigtriangledown f(x)^Tv|\leq\|\bigtriangledown f(x)^T\|_* $$ Then you need to figure out when equality is achieved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1887185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$f$ is differentiable everywhere, does that imply $f'$ is bounded in small neighborhoods The motivation of this question is that when I am looking at Taylor's remainder theorem, supposing $f$ is $(n+1)$-times differentiable, the remainder can be written as $$R_{n+1}(x) = \frac{f^{(n+1)}(c)}{(n+1)!} x^{n+1} \quad \text{ where } c\in [0,x],$$ but I don't see why $R_{n+1}(x) \in O(x^{n+1})$ if we have no control of $f^{(n+1)}(c)$ for $c\in [0,x]$. I know that since $f'$ is a pointwise limit of continuous functions, it is continuous on a dense set; would this help?
A counterexample to the question in the title is yielded by the function defined by $f(x)=x^2\sin(1/x^2)$ if $x\ne 0$ and $f(0)=0$. This is differentiable everywhere with $f'(0)=0$ (use the definition of the derivative). But if $m\in\mathbb{N}$, then $f'(m^{-1})=-2m\cos(m^2)+2\sin(m^2)/m$ and as $m\to\infty$, $\sin(m^2)/m\to 0$ and $\left|2m\cos(m^2)\right|$ oscillates wildly between $0$ and $2m$. So on any nbhd of $0$, the derivative is not bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1887294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Induced isomorphisms on cohomology of double differential complexes Let $K^{\bullet,\bullet}=\bigoplus_{p,q\ge0}K^{p,q}$ be a double differential complex, i.e. we have differential operators $$\cdots\stackrel{d}{\to} K^{p,q-1}\stackrel{d}{\to} K^{p,q}\stackrel{d}{\to} K^{p,q+1}\stackrel{d}{\to}\cdots $$ and $$\cdots\stackrel{\delta}{\to} K^{p-1,q}\stackrel{\delta}{\to} K^{p,q}\stackrel{\delta}{\to} K^{p+1,q}\stackrel{\delta}{\to}\cdots.$$ We may define a single differential complex $$K^\bullet=\bigoplus_n\left(\bigoplus_{p+q=n}K^{p,q}\right)$$ with differential operator $D=\delta+(-1)^pd$. Let $L^{\bullet,\bullet}$ be a second double complex, also with differential operators $d$ and $\delta$. Let $f:K^{\bullet,\bullet}\to L^{\bullet,\bullet}$ be a complex homomorphism in the following sense: $f$ is a vector space homomorphism, $f(K^{p,q})\subset L^{p,q}$, and $f$ commutes with $d$ and $\delta$. Since $f$ commutes with $d$, it descends to a map $f^*:H_d(K^{p,\bullet})\to H_d(L^{p,\bullet})$ of cohomologies, where $$K^{p,\bullet}=\bigoplus_q K^{p,q},$$ with differential $d$ and similarly for $L^{p,\bullet}$. Suppose that $f$ induces an isomorphism $H_d(K^{p,\bullet})\cong H_d(L^{p,\bullet})$ for all $p$ in this manner. Does it follow that this $f^*$ also gives an isomorphism between $H_D(K^\bullet)$ and $H_D(L^\bullet)$? I know this to be true when the $\delta$-cohomologies are zero, i.e. the $q$ rows are all exact. In Bott & Tu, this general case is claimed, but not proven.
My earlier answer gives a counterexample that ignores your condition that $K$ and $L$ live in the first quadrant. Given that condition, what you want is true: Let $C^{\bullet,\bullet}$ be the mapping cone of $f$. Then the long exact cohomology sequence for $f^{p,\bullet}$ shows that the horizontal cohomology of $C^{\bullet,\bullet}$ is zero. Therefore the first spectral sequence for the cohomology of $C^{\bullet,\bullet}$ collapses at $E^2$. But because $C^{\bullet,\bullet}$ lives in the first quadrant, the spectral sequence must converge, necessarily to zero, which gives your desired result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1887529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How would you show that the series $\sum_{n=1}^\infty \frac{(2n)!}{4^n (n!)^2}$ diverges? How would you show that the series $$\sum_{n=1}^\infty \frac{(2n)!}{4^n (n!)^2}$$ diverges? Wolfram Alpha says it diverges "by comparison", but I'd like to know to what you would compare it? I've tried some basic things to no avail: The ratio test is inconclusive, comparing with something "smaller" which I know diverges is proving difficult. If possible, I'd like to show this without using anything fancy (such as Stirling's approximation). Can you do this with a "basic" comparison?
After the great answer of Adam Hughes one likely needs no more... However, I like the following comparison test, at which I arrived after some fiddling. We compare $\sum a_n $ with $\sum \frac1{2n}$ and show by induction that always $a_n>\frac1{2n}$ (for $n>1$), so the series diverges by comparison with the harmonic series. Let's denote the series $$ s=\sum_{n=0}^\infty a_n = 1+ \frac12+\frac12\frac34+\frac12\frac34\frac56 + \frac12\frac34\frac56\frac78 + ...\\ $$ Rewritten in the spirit of the said comparison this is $$ s =1+ \frac12\left[(1)1 +\left({3\over2}\right)\frac12+\left({ 3 \cdot 5\over 2\cdot 4}\right)\frac13+\left({ 3 \cdot 5 \cdot 7\over 2\cdot 4 \cdot 6}\right)\frac14 + ... \right] \\ $$ and we want to show that all parentheses have value greater than 1. Induction as follows: we observe that this is true for the parenthesis $p_2= \left( \frac32\right)$ at $a_2$: $\left({ 3\over2}\right) \gt 1$. The next parenthesis is the previous one postmultiplied by $\frac54$, so it too must be greater than 1. And this is obviously the case for all following parentheses. Formally: * Let $\displaystyle \qquad a_n = {1 \cdot 3 \cdot ... \cdot (2n-1)\over 2^n n!} $ and $\displaystyle \qquad p_n = {1 \cdot 3 \cdot ... \cdot (2n-1)\over 2^{n-1} (n-1)!} $ (so that $a_n = \frac{p_n}{2n}$) * Then $ \displaystyle \qquad p_n\gt1 \implies p_{n+1}=p_n \cdot {2n+1\over2n} \gt 1$ * and because we know that $ \displaystyle \qquad p_2>1 $, all parentheses $p_{n \gt 1} \gt 1$ and all terms $a_{n \gt 1} \gt \frac1{2n}$, and the series diverges. After @Cameron Williams showed that even $a_n \gt {1\over \sqrt{\pi} n} \approx {1\over 1.7724} \frac1n $ I used regression to find an even tighter estimate for the $a_n$ in relation to the harmonic numbers. If I did not have a stupid bug with this I found something like $$ p_n ^2 \gt -0.317894293187 +1.27323781105n \approx {4n-1\over \pi} \\ \implies a_n \gt \sqrt{\small {4n-1\over \pi }} \cdot \frac1{2n}$$ (The estimate ${4n-1\over \pi}$ is due to the comments of @Claude Leibovici)
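A quick numeric comparison of $a_n$ with the two lower bounds above (a small added sketch):

import math

# a_n = (2n)!/(4^n (n!)^2) via a_n = a_{n-1} * (2n-1)/(2n), a_0 = 1
a = 1.0
for n in range(1, 21):
    a *= (2*n - 1) / (2*n)
    print(n, round(a, 6),
          round(1/(2*n), 6),                             # lower bound for n > 1
          round(math.sqrt((4*n - 1)/math.pi)/(2*n), 6))  # regression-based bound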
{ "language": "en", "url": "https://math.stackexchange.com/questions/1887601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 13, "answer_id": 9 }
Permutations of words from original word. How many distinct 4-letter arrangements can be made with the letters in the word "PARALLEL"? My approach: Because we are only looking at how many different permutations there are and not the frequency at which these permutations exist, we can delete the repeated letters and leave only one. This leaves us with the following set of letters: $\{P, A, R, L, E\}$ So $5$ permute $4$ is $120$. It was only then that I realized that deleting repeats will prevent words such as $LLLE$ from existing. At this point, I do not know how to add these possibilities back into my approach. I would appreciate help, and as always, bash me whenever you see a typical blunder of mine.
There are a few cases: * *All $4$ letters distinct. There are $\binom{5}{4} = 5$ ways to pick the letters and $4!$ ways to order them. *Two letters distinct, one letter repeated twice. Either two A's or two L's. If we have two A's, then we have $\binom{4}{2} = 6$ ways to pick the other letters. $4!$ ways to order them, but divide by $2!$ to account for the doubled letter. Then do the same thing again for the double-L case. *Two letters repeated twice. Two A's, two L's. $4!$ ways to order the letters, divide by $2!2!$ to account for the repetitions. *One letter by itself, one letter repeated three times. Just the L here. Once we have the triple L, there are $4$ choices for the final letter. $4!$ ways to arrange the letters, divided by $3!$ for the triple L. $$5 \cdot 4! + \frac{6 \cdot 4! \cdot 2}{2!} + \frac{4!}{2! \cdot 2!} + \frac{4 \cdot 4!}{3!} = 286$$
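As a sanity check, the count $286$ can be brute-forced (an added Python sketch; the set collapses arrangements that read the same):

from itertools import permutations

arrangements = set(permutations("PARALLEL", 4))
print(len(arrangements))  # 286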
{ "language": "en", "url": "https://math.stackexchange.com/questions/1887765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Partial Derivatives Continuous does not guarantee Gradient Function continuous Just want to check my understanding that partial derivatives continuous does not mean that the gradient function $\nabla f$ is continuous. Is that correct? E.g. $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ continuous does not necessarily mean that $\nabla f=(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y})$ is continuous. I concluded that based on "Continuity in each argument is not sufficient for multivariate continuity". Question 2) What conditions guarantee that $\nabla f$ is continuous? Thanks for any help.
We have that $\nabla f$ is continuous. Note that: \begin{align*} || \nabla f(x,y) - \nabla f(x_0,y_0)||^2 = (\frac{\partial f}{\partial x}(x,y)- \frac{\partial f}{\partial x}(x_0,y_0))^2 + (\frac{\partial f}{\partial y}(x,y)- \frac{\partial f}{\partial y}(x_0,y_0))^2 \end{align*} Thus continuity of the partial derivatives implies that the gradient is continuous. Your quote says something different too. It says that continuity in $x$ and $y$ separately does not imply continuity in $(x,y)\in \mathbb R^2$. This makes perfect sense: in $\mathbb R^2$ you can approach a point from many more directions than just parallel to the $x$- and $y$-axes. DETAILS: Let $\epsilon >0$ be fixed. Then by continuity of the partial derivatives we know there exist $\delta_1 >0$ and $\delta_2>0$ so that: \begin{align*} ||(x,y)-(x_0,y_0)|| <\delta_1 \quad \Rightarrow \quad|\frac{\partial f}{\partial x}(x,y)- \frac{\partial f}{\partial x}(x_0,y_0)| < \frac{\epsilon}{\sqrt{8}}\\ ||(x,y)-(x_0,y_0)|| <\delta_2 \quad \Rightarrow \quad|\frac{\partial f}{\partial y}(x,y)- \frac{\partial f}{\partial y}(x_0,y_0)| < \frac{\epsilon}{\sqrt{8}} \end{align*} Now let $\delta = \min \{\delta_1, \delta_2 \}$. Now we have that, for $||(x,y)-(x_0,y_0)||<\delta$: \begin{align*} || \nabla f(x,y) - \nabla f(x_0,y_0)||^2 &= (\frac{\partial f}{\partial x}(x,y)- \frac{\partial f}{\partial x}(x_0,y_0))^2 + (\frac{\partial f}{\partial y}(x,y)- \frac{\partial f}{\partial y}(x_0,y_0))^2\\ & < \frac{2\epsilon^2}{8} = \frac{\epsilon^2}{4}. \end{align*} Thus we have: \begin{align*} || \nabla f(x,y) - \nabla f(x_0,y_0)|| < \frac{\epsilon}{2}, \end{align*} as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Which of the following facts are true of a sequence satisfying $\lim a_n^{\frac{1}{n}}=1$? Let $a_n$ be a sequence of non-negative numbers such that $$\lim a_n^{\frac{1}{n}}=1$$ Which of the following are correct? *$\sum a_n$ converges *$\sum a_nx^n$ converges uniformly on $[-\frac{1}{2},\frac{1}{2}]$ *$\sum a_nx^n$ converges uniformly on $[-1,1]$ *$\limsup\frac{a_{n+1}}{a_n}=1$ My effort: 1. False; consider $a_n=n$. 2. True; the radius of convergence $R$ of the power series satisfies $\frac{1}{R}=\lim a_n^{\frac{1}{n}}=1\implies R=1$. Hence the series converges uniformly on compact sets inside $|x|<1$, and in particular converges uniformly on $|x|\leq 0.5$. 3. False; putting $x=1$ gives the same as in case 1. 4. I am unable to prove this fact. How do I solve this?
First of all, way to go for your efforts. As far as I can see, your answers to the first three questions are correct. To refute the last one consider the sequence $$\{a_n\}=\{1,1,2,1,3,1,4,1,5,1,6,1,....\}=\begin{cases}k,&n=2k-1\\{}\\1,&\text{otherwise}\end{cases}$$ Observe that $$\left\{\frac{a_{n+1}}{a_n}\right\}=\left\{1,2,\frac12,3,\frac13,4,...\right\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Proof of definite integral Let $f$ be a continuous function. Without taking an anti-derivative, prove that $$\lim_{a\rightarrow 0^{+}}\int_{0}^{a}f(t)dt=0$$
There exists $c_a\in [0,a]$ such that $\int_0^af(t)dt=af(c_a)$ (by the mean value theorem for integrals). Since $f$ is continuous at $0$, there exist $c>0$ and $M>0$ such that for every $x\in [-c,c]$, $|f(x)|<M$. For $a\in (0,c]$, $|\int_0^af(t)dt|\leq |af(c_a)|\leq |a|M$. This implies that $|\lim_{a\rightarrow 0^{+}}\int_0^af(t)dt|\leq \lim_{a\rightarrow 0^{+}}M|a|=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solution of $ydx-xdy+3x^2y^2e^{x^2}dx=0$ Find the solution of the given differential equation: $$ydx-xdy+3x^2y^2e^{x^2}dx=0$$ I am not able to solve this because of $e^{x^2}$. Could someone help me with this one?
By dividing both sides by $y^2$ we get $$ydx-xdy+3x^{2}y^{2}e^{x^{2}}dx=0\\ \frac{ydx-xdy}{y^{2}}+3x^{2}e^{x^{2}}dx=0\\ d\left(\frac{x}{y}\right)=-3x^{2}e^{x^{2}}dx\\ \int d\left(\frac{x}{y}\right)=-\int 3x^{2}e^{x^{2}}dx=-\frac{3}{2}\int x\,de^{x^{2}}=-\frac{3}{2}\left(xe^{x^{2}}-\int e^{x^{2}}dx\right)\\ \frac{x}{y}=-\frac{3}{2}\left(xe^{x^{2}}-\int e^{x^{2}}dx\right)+C\\ y=\frac{2x}{3\left(\int e^{x^{2}}dx-xe^{x^{2}}\right)+C'}$$ where $C'=2C$. Note that $\int e^{x^{2}}dx$ is not elementary; it can be expressed via the imaginary error function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inequality involving rearrangement: $ \sum_{i=1}^n |x_i - y_{\sigma(i)}| \ge \sum_{i=1}^n |x_i - y_i|. $ If $x_1 \ge x_2 \ge \cdots \ge x_n$ and $y_1 \ge y_2 \ge \cdots \ge y_n$ are real numbers, and $\sigma$ is any permutation, then $$ \sum_{i=1}^n |x_i - y_{\sigma(i)}| \ge \sum_{i=1}^n |x_i - y_i|. $$ This must be a known inequality. What is it called, and how is it proven? (Just a reference is OK.) The conditions are similar to rearrangement inequality. The inequality is a simple statement about minimizing the $\ell^1$ distance between a finite sequence and any rearrangement of another finite sequence. I searched around and clicked through various pages but couldn't find something relevant. If it is true, perhaps a proof could be constructed by decomposing the permutation into a sequence of transpositions.
We can prove it in a similar way as the rearrangement inequality. There are only finitely many possibilities for $\sigma$, so a minimum is achieved; pick $\sigma$ so that it has the least possible number of inversions among all the permutations that minimize the expression. Suppose by way of contradiction there are $i<j$ with $\sigma(i)>\sigma(j)$. Since $x_i\ge x_j$ and $y_{\sigma(j)}\ge y_{\sigma(i)}$, notice $|x_i-y_{\sigma(i)}|+|x_j-y_{\sigma(j)}|\geq |x_i-y_{\sigma(j)}|+|x_j-y_{\sigma(i)}|$. So the permutation that swaps the values at $i$ and $j$ must also minimize the expression, and has fewer inversions, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Construct a vector orthogonal to a given vector in $\mathbb{R}^3$ without singularities I have a unit vector $v = (x_v, y_v, z_v)$ in $\mathbb{R}^3$ and I want to construct another vector $u$ which is orthogonal to $v$. The construction process should be a straightforward formula without singularities (divisions by zero, roots of negative numbers, etc.) and special cases. For example, we can take the plane equation $$ A(x-x_0) + B(y-y_0) + C(z-z_0) = 0 $$ for a plane which contains the point $(x_0, y_0, z_0)$, where $(A,B,C)$ is a normal vector for it. In our case it boils down to $$ x_vx + y_vy + z_vz = 0 $$ and finally $$ x = \frac{- y_vy - z_vz}{x_v} $$ so we can choose any $y$ and $z$ and we'll get $x$ such that $(x,y,z)$ is orthogonal to $v$. But what if $x_v$ is zero? In this case the plane contains the $Ox$ axis, thus the choice of $x$ does not matter for any $y$ and $z$ from the plane. But I want to implement the construction of such a vector in a GPU shader program, so processing special cases (branching) is unwanted because: * It breaks performance of the program * $x_v$ may be arbitrarily close but not equal to zero * $x_v$ may smoothly vary with time, so a special case may introduce some animation artifacts. Another approach is to take another predefined (i.e. hard-coded) vector $a$ and make $u = v \times a$, but the only restriction on $v$ is its unit length, so this approach faces the same problems if $a$ and $v$ are collinear or close to that. I also tried to represent $v$ as some rotation of $(1,0,0)$, but I ended up with $$ \beta = \arcsin(-z), $$ $$ \cos{\gamma} = \frac{x}{\cos{\beta}}. $$ For $z = \pm{1}$ it faces the same problem as above, and it corresponds to the case when we rotate $(0,0,1)$ about the $Oz$ axis, which does not make sense and always equals the identity rotation. Could you give me a clue how to construct an orthogonal vector without singularities, or an outline of a proof if it's impossible?
There is no continuous map $f\colon S^2\to S^2$ such that $f(x)\perp x$ for all $x$. This is a consequence of the hairy ball theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In the expression $3x$, what is the $3$? So this is quite a simple question. I KNOW I learnt this before, but can't for the life of me figure out or find anywhere that refers to the definition I'm looking for. Look at the expression $$ 3x $$ In this expression, what is the $3$ in this context? The $3$ 'prefixes' the $x$, but I don't believe that it is called a prefix of $x$.
A "scalar" in contex of Algebra. Could be also a "coefficient".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1888965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Cases in which the limiting value of a function $f(x)$ (as $x\to c$) is not equal to $f(c)$? Can anyone state some cases such as mentioned in the title. I have tried to look for them on Google and in my textbook, but can't find any examples. And incidentally, how can this fact ever be true? Does it have something to do with the function not being defined at the limiting point $c$?
There are artificial examples created by piecewise definitions. (Richard Feynman once expressed surprise that anybody thought such things are functions.) I'll try to give a more serious example: You are walking past a building with a conventional rectangular shape. It is on your left. The distance from you to the rightmost point on the building that you can see changes in a continuous manner until you pass the corner, and then it abruptly increases; it has a jump discontinuity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A functor which takes quasi-isomorphisms to isomorphisms factors through a triangulated functor? Let $\mathcal A$ be an abelian category. Let $\mathbb K(\mathcal A)$ be the homotopy category of complexes of $\mathcal A$. Now consider a functor $F$ from $\mathbb K(\mathcal A)$ to $\mathbb E$, which is also a triangulated category. Suppose that $F$ takes quasi-isomorphisms to isomorphisms. Now by the universal property of $\mathbb D(\mathcal A)$, the derived category of $\mathcal A$, we know there is a functor $F'$ from $\mathbb D(\mathcal A)$ to $\mathbb E$ induced by the functor $F$. Is $F'$ a triangulated functor? By triangulated functor, I mean a functor which commutes with the translation functor and takes distinguished triangles to distinguished triangles.
If you don't assume that $F$ is a triangulated functor then this may not be true. For example, let $F:\mathbb{K}(\mathcal{A})\to\mathbb{K}(\mathcal{A})$ be the obvious functor sending a complex to its degree zero homology considered as a complex concentrated in degree zero. Then $F'$ doesn't commute with the translation functor. But if you intended to specify that $F$ is triangulated, then it is true, as explained in Pierre-Guy Plamondon's answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Triangular numbers and Wilson's Theorem To explain the process, I was working on some ProjectEuler.net problems and I had just figured out a formula to solve for triangle numbers efficiently. The next problem needed me to solve for factorials. At the time (before I found out there is no known closed-form equation where you can input $n$ and get the $n$th factorial) I thought maybe I would work on relating the $n$th triangle number, and $n$, with the $n$th factorial. After some time playing around with this, I found that: If $F_n = n!$ and $T_n$ is the $n^{th}$ triangular number then $F_n \bmod T_n = n$ if and only if $n+1$ is prime. After some research I found that this is very closely related to Wilson's Theorem: $(n-1)! \equiv -1 \pmod{n}$. I also understand that triangle numbers are very closely related to $n$ given the formula mentioned earlier, $T_n = \frac{n(n+1)}{2}$. But these two statements do not seem to explain away the relation $F_n \bmod T_n \equiv F_n \bmod (n+1)$, at least not in a way that I am grasping. Admittedly, I feel like I'm missing something very basic. So, I have spent my free time for about a week now trying to prove a lack of significance here, mostly by factoring out various numbers and seeing how $n$ and $T_n$ relate. They obviously do achieve similar results when dividing other numbers besides $F_n$, but the reason that it lines up for every $F_n$ still seems to escape me. So I guess my question is: is this completely insignificant? And if not, are there other materials or proofs that could help me dive into this problem further? I cannot find many papers written about the relationship between triangles, primes, and factorials, other than this one: http://www.integers-ejcnt.org/l50/l50.pdf but this does not seem to be congruent with my problem. Python code:

def get_fact_num(n):
    # n! by iterated multiplication
    fact = 1
    for x in range(1, n + 1):
        fact = fact * x
    return fact

def get_tri_num(n):
    # n-th triangular number; integer division keeps the result exact
    return n * (n + 1) // 2

def prime_test(n):
    tri = get_tri_num(n)
    fact = get_fact_num(n)
    mod = fact % tri        # F_n mod T_n
    mod2 = fact % (n + 1)   # F_n mod (n+1)
    print(n, "::", tri, ":", mod, ":", mod2)

for x in range(4, 2000):
    prime_test(x)
Your two statements are equivalent to saying the following: $n! = -1 + (n+1)j$ for odd prime $n+1$ and integer $j$. $n! = n + \frac{n(n+1)k}{2}$ for odd prime $n+1$ and integer $k$. Let $m=j-1$ and $q = \frac{nk}{2}$ (which will be an integer since $n+1$ is prime so $n$ is an even number and will divide by the denominator of $2$), then these equations become $n! = n + (n+1)m$ for odd prime $n+1$ and integer $m$. $n! = n + (n+1)q$ for odd prime $n+1$ and integer $q$. Exact same expressions. The result isn't all that significant, it just happens to give the same result because of Wilson's Theorem and the fact that triangle numbers also involve $(n+1)$. Since we peel off $n+1$ from the modulus of the first equation and change it to $(n+1) + (-1) = n$, this is why it happens to be the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Do we still have a category of sets if the inverse image has more than one element? I'm kinda lost at one example in Awodey's Category Theory. I am trying to check this example; for that, I made a simple example function: (diagram omitted) With this, I mean that there is one function $f$ which associates each element of the codomain. In this case, I can't have an inverse function - I'm not sure if this invalidates it being a category, but I faintly believe it does, because it seems that in this universe of discourse, the arrows are functions. But I guess I could counter this by creating two pseudo-inverse functions (?!) in the following way: (diagram omitted) It doesn't seem to violate the rules, except for a little doubt about one of them: the definitions of composition. If I compose $f_{1}^{-1} \circ f$, I have: $$cod(f)=dom(f_{1}^{-1})$$ $$cod(f_{1}^{-1})=dom(f)$$ I believe this is valid because he speaks about codomain instead of image. But it feels weird; $f_{1}^{-1}$ doesn't go back entirely to $dom(f)$, that is: it doesn't take all the elements of $dom(f)$. Or perhaps I'm utterly confused/lost/stupid/in_need_of_help_for_mental_illness and completely missed the point.
The main problem is that the composition of functions with at most two preimages is not necessarily a function with at most two preimages. In this wanna-be category, there does exist an arrow $\{1,2,3,4\}\to\{1,2\}$ (in fact several) and an arrow $\{1,2\}\to\{1\}$, but no arrow from $\{1,2,3,4\}$ to $\{1\}$. For the restriction to finite preimages, this problem does not occur, and you can verify that we indeed obtain a category.
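To make the failure of closure under composition concrete, here is an explicit pair of such functions (the particular values are just one possible choice): take $$f:\{1,2,3,4\}\to\{1,2\},\qquad f(1)=f(2)=1,\quad f(3)=f(4)=2,$$ $$g:\{1,2\}\to\{1\},\qquad g(1)=g(2)=1.$$ Each of $f$ and $g$ has at most two preimages per point, but the composite $g\circ f$ sends all four elements of $\{1,2,3,4\}$ to $1$, so $1$ acquires four preimages. Hence the "at most two preimages" maps cannot serve as the arrows of a category.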
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are $AB$ and $BA$ not equal for these matrices? What is the logic behind this? I thought that for matrices $AB=BA$. Why, for the matrices $A=\begin{bmatrix} -1 & 3\\ 2 & 0\end{bmatrix}$, $B=\begin{bmatrix} 1 & 2\\ -3 & -5\end{bmatrix}$, is $AB \neq BA$? Both $A$ and $B$ are matrices of order $2\times 2$.
Matrix multiplication is generally not commutative; $AB=BA$ holds only in special cases — for instance when $B=A$, when $B$ is the inverse of $A$ (in which case both products are the identity), or when $A$ and $B$ are both diagonal. For two matrices chosen at random, you should expect $AB \neq BA$.
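Carrying out the multiplication for the matrices in the question makes this explicit: $$AB=\begin{bmatrix} -1 & 3\\ 2 & 0\end{bmatrix}\begin{bmatrix} 1 & 2\\ -3 & -5\end{bmatrix}=\begin{bmatrix} -10 & -17\\ 2 & 4\end{bmatrix},\qquad BA=\begin{bmatrix} 1 & 2\\ -3 & -5\end{bmatrix}\begin{bmatrix} -1 & 3\\ 2 & 0\end{bmatrix}=\begin{bmatrix} 3 & 3\\ -7 & -9\end{bmatrix},$$ so $AB\neq BA$ already in the $(1,1)$ entry.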
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Proving $S^3$ and $\mathbb{R}^3$ are not homeomorphic Prove $S^3$ and $\mathbb{R}^3$ are not homeomorphic. I've encountered this question on a PhD exam in topology. This is at a level where we are expected to understand cohomology already, so there are a lot of obvious one-line proofs I could give (e.g. they don't have the same homology groups). But this appears among the "give a detailed answer" questions, as opposed to the more computational questions in the latter half. (The heading says to show all work and support all statements to the best of my ability.) So I am confused about the level of detail I would have to include here. Is there an obvious choice for how to prove this directly without relying on one-liners that assume higher-level machinery? I realize this question is a little opinion-based, but maybe the answer will be unambiguous to those with more experience in topology. How would you answer this question?
$S^3$ is compact, while $\mathbb{R}^3$ is not. Since any continuous function $f:S^3\rightarrow \mathbb{R}^3$ maps compact subsets of $S^3$ to compact subsets of $\mathbb{R}^3$, it can't be surjective (or else $f(S^3)=\mathbb{R}^3$ is also compact).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to visualize the one-point compactification of a plane with finitely many points removed, and of $\mathbb{R}_{\text{discrete}}$ I wish to visualize the one-point compactification of $\mathbb{R}^2$ with finitely many points removed, and that of $\mathbb{R}_{\text{discrete}}$. For the first question, I can picture it with one point removed. The resulting shape is a hollow sphere containing the removed point. The sphere is a closed and bounded subset of $\mathbb{R}^3$, so it is compact. But I cannot for the life of me picture what would happen if finitely many points were removed. Is it some sort of nested sphere? I am also not sure what the one-point compactification of $\mathbb{R}_{\text{discrete}}$ would be. It seems clear that for the usual topology, we simply wrap the line into a circle. But what about the discrete topology? Thanks for your help!!
Judging from your description, you're probably not visualizing the one-point compactification of $\Bbb R^2\setminus\{p\}$ correctly. The one-point compactification of $\Bbb R^2\setminus\{p\}$ can be visualized as a horn torus, a torus with inner radius $0$. It's what you get if you start with $S^2$ and identify the north and south poles. If $F\subseteq\Bbb R^2$ is finite, the one-point compactification of $\Bbb R^2\setminus F$ is what you get if you start with a sphere $S^2$ and identify $|F|+1$ points. As for $\Bbb R_{\text{discrete}}$, remember that the open nbhds of the new point are the complements of the compact sets in $\Bbb R_{\text{discrete}}$. The compact sets in $\Bbb R_{\text{discrete}}$ are precisely the finite sets, so the open nbhds of the new point are the cofinite sets containing it. If $p$ is the new point, the topology is $$\big\{\{p\}\cup(\Bbb R\setminus F):F\text{ is finite}\big\}\cup\wp(\Bbb R)\;.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1889841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Multiplying sparse matrices If I have two sparse matrices, $A$ and $B$. Let's say $A$ has $k$ non-zero entries and $B$ has $j$ non-zero entries. Let's assume all I know is the number of non-zero entries each matrix has; I don't know where they are or what their values are. The dimensions of the matrices are known and are compatible, so it cannot be assumed that they are always square matrices (although they could be in some cases). So, let's assume $A$ is an $M\times N$ matrix and $B$ is an $N\times P$ matrix - making $AB$ an $M\times P$ matrix. If I multiply these two sparse matrices together ($AB$), what is the maximum number of non-zero values the product of the matrices could have? I have a feeling it must just be $k+j$ but I can't define it mathematically. I'm not after a completely formal proof.
It cannot be $k+j$, as the following counterexample shows. We define $k = 3$ and $j = 2$ and create the next two matrices, with dimensions that agree for the multiplication as you stated: $A=\left( \begin{array}{ccc} 1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0\\ 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots &\ddots &\vdots\\ 0 & 0 & 0 & 0\end{array} \right),B=\left( \begin{array}{ccc} 1 & 1 & 0 &\cdots & 0 \\ 0 & 0 & 0 &\cdots & 0\\ 0 & 0 & 0 &\cdots & 0 \\ \vdots & \vdots& \vdots &\ddots &\vdots\\ 0 & 0 & 0 & 0 & 0\end{array} \right)$ Multiplying them gives the matrix: $AB=\left( \begin{array}{ccc} 1 & 1 & 0 &\cdots & 0 \\ 1 & 1 & 0 &\cdots & 0\\ 1 & 1 & 0 &\cdots & 0\\ 0 & 0 & 0 &\cdots & 0 \\ \vdots & \vdots& \vdots &\ddots &\vdots\\ 0 & 0 & 0 & 0 & 0\end{array} \right) $ As can be seen, the result is a matrix with six non-zero entries, which is a higher value than $k+j=5$. This gives the intuition that the maximum number of non-zero entries is $k\cdot j$.
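A quick numerical check of this counterexample (a sketch assuming NumPy is available; the $6\times 6$ size is an arbitrary choice):

import numpy as np

A = np.zeros((6, 6))
A[0:3, 0] = 1          # k = 3 non-zero entries, all in column 0
B = np.zeros((6, 6))
B[0, 0:2] = 1          # j = 2 non-zero entries, all in row 0
C = A @ B
print(np.count_nonzero(A), np.count_nonzero(B), np.count_nonzero(C))  # prints: 3 2 6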
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Evaluate the reciprocal of the following infinite product I have to evaluate the reciprocal of the following infinite product: $$\frac{1 \cdot 3 \cdot 5 \cdot 7 \cdot 9 \cdot 11 \cdot 13 \cdot 15 \cdot 17 \cdot 19}{2 \cdot 2 \cdot 6 \cdot 6 \cdot 10 \cdot 10 \cdot 14 \cdot 14 \cdot 18 \cdot 18}\cdot\ldots $$ I am guessing you express each factor as a sum. For example, the first $\frac{1}{2}$ can be expressed as $\frac{2-1}{2}$ and the next $\frac{3}{2}$ as $\frac{2+1}{2}$, hence their product equals $1-\frac{1}{2^2}$.
Note that $$1-\frac{1}{\left(4n-2\right)^{2}}=\frac{\left(4n-3\right)\left(4n-1\right)}{\left(4n-2\right)\left(4n-2\right)} $$ and $$P=\prod_{n\geq1}\left(1-\frac{1}{\left(4n-2\right)^{2}}\right)^{-1}=\prod_{n\geq1}\frac{\left(4n-2\right)\left(4n-2\right)}{\left(4n-3\right)\left(4n-1\right)} $$ $$=\prod_{n\geq0}\frac{\left(4n+2\right)\left(4n+2\right)}{\left(4n+1\right)\left(4n+3\right)}=\prod_{n\geq0}\frac{\left(n+1/2\right)\left(n+1/2\right)}{\left(n+1/4\right)\left(n+3/4\right)}$$ and now we can use the well known identity $$\prod_{n\geq0}\frac{\left(n+a\right)\left(n+b\right)}{\left(n+c\right)\left(n+d\right)}=\frac{\Gamma\left(c\right)\Gamma\left(d\right)}{\Gamma\left(a\right)\Gamma\left(b\right)},\, a+b=c+d $$ which follows from the representation of the Gamma function in the form $$\Gamma\left(z\right)=\lim_{n\rightarrow\infty}\frac{n^{z-1}n!}{z\left(z+1\right)\cdots\left(z+n-1\right)}. $$ Hence $$P=\frac{\Gamma\left(1/4\right)\Gamma\left(3/4\right)}{\Gamma\left(1/2\right)^{2}}=\color{red}{\sqrt{2}}.$$
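One can sanity-check the closed form numerically with a short Python sketch (the cutoff of $2\cdot 10^5$ terms is arbitrary; convergence is slow but steady):

import math

p = 1.0
for n in range(1, 200000):
    p /= 1.0 - 1.0 / (4 * n - 2) ** 2   # multiply by the reciprocal of each factor
print(p, math.sqrt(2))   # the partial products approach sqrt(2) = 1.41421...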
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Perpendicular Medians Medians $\overline{AX}$ and $\overline{BY}$ of $\triangle ABC$ are perpendicular at point $G$. Prove that $AB = CG$. In your diagram, $\angle AGB$ should appear to be a right angle. I've drawn the diagram, but I don't have anything in mind.
Let $M$ be the midpoint of $AB$. The median to the hypotenuse of a right triangle equals one-half the hypotenuse. In triangle $AGB$, $\angle AGB=90^{\circ}$, so $$GM=MA=MB=\frac12AB \Rightarrow AB=2GM.$$ The centroid divides each median in the ratio $2:1$: the segment from the vertex to the centroid is twice the segment from the centroid to the midpoint of the opposite side. Hence $CG:GM=2:1 \Rightarrow CG=2GM$, and therefore $$CG=AB$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\int_0^1 \frac{\ln^3(x)}{1+x} dx = \sum_{k=0}^{\infty}(-1)^{k+1} \frac6{(k+1)^4}$ Show that $$ \int_0^1 \frac{\ln^3(x)}{1+x} dx = \sum_{k=0}^{\infty}(-1)^{k+1} \frac6{(k+1)^4}.$$ I arrived at the fact that $$ \int_0^1 \frac{\ln^3(x)}{1+x} dx = \sum_{k=0}^{\infty}(-1)^k\int_0^1 x^k \ln^3(x)dx,$$ but I am unable to continue further.
Just integrate by parts: \begin{align*} \int_0^1 x^k \ln^3(x) \, dx &= \frac{x^{k + 1}}{k + 1} \ln^3 x \big|_0^1 - \int_0^1 \frac{x^{k + 1}}{k + 1} 3 \ln^2(x) \frac 1 x \, dx \\ &= -\frac{3}{k + 1} \int_0^1 x^k \ln^2(x) \, dx \end{align*} The next application reverses the sign, picks up factor $2/(k + 1)$, and drops the power on the log by $1$. Continue until the log disappears.
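Writing out the steps the answer describes (each pass is the same integration by parts, ending with $\int_0^1 x^k\,dx=\frac1{k+1}$): $$\int_0^1 x^k \ln^3(x)\,dx=\left(-\frac{3}{k+1}\right)\left(-\frac{2}{k+1}\right)\left(-\frac{1}{k+1}\right)\frac{1}{k+1}=-\frac{6}{(k+1)^4},$$ and therefore $$\int_0^1 \frac{\ln^3(x)}{1+x}\,dx=\sum_{k=0}^{\infty}(-1)^k\left(-\frac{6}{(k+1)^4}\right)=\sum_{k=0}^{\infty}(-1)^{k+1}\frac{6}{(k+1)^4}.$$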
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 1 }
Intuitively understanding Fatou's lemma I learnt Fatou's lemma a while ago. I am able to prove it and use it. I know examples showing that the inequality may be strict. But I don't really have an intuitive way to understand it. Any good thoughts?
Fatou's lemma tells you that in the limit "mass" can only be lost but not generated. Let's recall the statement. If $f_n,f\geq 0$ are measurable and $f_n\to f$ pointwise a.e., then we have $\int f \leq \liminf_{n\to\infty} \int f_n$. A classical example is $f_n= n \chi_{[0,1/n]}$ where $\int f_n=1$ for all $n$, but in the limit the mass escapes to "vertical" infinity, so it is lost, and we have that $f_n\to 0=:f$ a.e., with $\int f=0$. The other example, where mass escapes to "horizontal" infinity, is $f_n= \chi_{[n,n+1]}$. Again $f_n$ has mass $1$, but the limit has mass $0$. If we shut down these escape possibilities, then mass is preserved, i.e. $\int f=\lim_{n\to\infty} \int f_n$. For example, one way to do that is to assume that $f_n$ are bounded and all of them are supported on a large interval $[-M,M]$. This follows from the Dominated Convergence Theorem, which gives a fairly general criterion for convergence of the integral: if $|f_n|\leq g$ where $\int g<\infty$, then the mass is preserved under the limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 1 }
Let $B_n = H_{n+1} \cap \dots \cap H_{n+ [\log_2\log_2 n]}$, $H_n$ probability $n$th coin heads. How are $\{B_n\}_n^\infty$ not independent? I think the title is pretty self-explanatory. Consider infinite fair coin tossing. Let $H_n$ be the event that the $n^\text{th}$ coin comes up heads. Define $$B_n = H_{n+1} \cap H_{n+2} \cap \dots \cap H_{n+ [\log_2\log_2 n]}$$ Why are the $B_n$ not independent of each other? Also, related to this (and perhaps why I am misunderstanding), does it make sense to consider values of $n$ where $\log_2\log_2 n$ is not an integer? If so, how is this interpreted? I mean, I think I get why: The $B_n$ overlap each other (to varying extents). But the overlap is not apparent to me right now. More specifically, $\log_2(\log_2 n)$ tends to be pretty small, even for large $n$. If we consider only values of $n$ where $\log_2(\log_2 n)$ is a whole number, the dependence of $\{ B_n\}$ is not obvious to me, because such values of $n$ seem to be very far apart (far enough to prevent overlap, I think?). If we allow $\log_2(\log_2 n)$ to not be an integer, then I can see how the $B_n$ are not independent (because then we have $n+1, n+2,\dotsc, n+4$ for one value of $n$ and then $(n+1) +1 = n+2,\dotsc, (n+1) +4.xxxx$ for the next), but then I don't know how to interpret, for example, $H_{n+4.90689}$ (when $n=2^{30}$). Thanks.
I'll assume the $H_n$'s are independent with $0 < P(H_n) = p < 1$. You can show the $B_n$'s are not independent by showing that they are not pairwise independent. $$B_n = H_{n+1} \cap H_{n+2} \cap \cdots \cap H_{n+ [\log_2\log_2 n]}$$ $$B_{n+1} = H_{n+2} \cap H_{n+3} \cap \cdots \cap H_{n+1+ [\log_2\log_2 (n+1)]}$$ $$P(B_n) = p^{n+ [\log_2\log_2 n] - (n+1) + 1} = p^{[\log_2\log_2 n]}$$ $$P(B_{n+1}) = p^{n+1+ [\log_2\log_2 (n+1)] - (n+2) + 1} = p^{[\log_2\log_2 (n+1)]}$$ Now just show that $$P(B_n \cap B_{n+1}) \ne p^{[\log_2\log_2 n]}\, p^{[\log_2\log_2 (n+1)]}.$$ Indeed, the two runs of required heads overlap, and $$B_n \cap B_{n+1} = H_{n+1} \cap \cdots \cap H_{n+1+[\log_2\log_2 (n+1)]},$$ so $P(B_n \cap B_{n+1}) = p^{1+[\log_2\log_2 (n+1)]}$, which differs from the product above whenever $[\log_2\log_2 n] \ge 2$, i.e. for $n \ge 16$.
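A concrete instance: for $n=16$ we have $[\log_2\log_2 16]=2$ and $[\log_2\log_2 17]=2$, so $B_{16}=H_{17}\cap H_{18}$ and $B_{17}=H_{18}\cap H_{19}$ share the coin $H_{18}$, giving $$P(B_{16}\cap B_{17})=P(H_{17}\cap H_{18}\cap H_{19})=p^3 \ne p^2\cdot p^2 = P(B_{16})\,P(B_{17}).$$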
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to approach this complex numbers question? I have started to approach this problem by writing the point on the perpendicular bisector as $(z_2+z_3)/2 + ix(z_2-z_3)$, and was then thinking of equating distances or something similar to find $x$. But I am unable to proceed; I am new to complex numbers, and don't know the standard tricks for approaching such questions. What basic theorems/formulas should be used here?
Since the circumcircle is $|z|=1$, the orthocenter is given by $z_1+z_2+z_3$. If $z_4$ represents the point $P$, then \begin{align*} \frac{z_1+z_2+z_3 - z_1}{\overline{z_1+z_2+z_3} - \bar{z_1}} &= \frac{z_4-z_1}{\bar{z_4}-\bar{z_1}}\\ z_2z_3 &= -z_4z_1 \end{align*} where we have used $\bar{z} = \frac{1}{z}$ when $|z|=1$ in the last step. Thus $z_4 = -\frac{z_2z_3}{z_1}$. Since $D$ is the midpoint of $HP$, it follows that $D$ is given by \begin{align*} \frac{1}{2}\left(z_1+z_2+z_3 -\frac{z_2z_3}{z_1}\right) \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving or refuting the convergence of a numerical series Consider the following numerical series: $$\sum_{n=2}^\infty \dfrac{\sin(nx)}{\log(n)}.$$ Dirichlet's criterion says that if $b_n$ decreases with $\lim b_n =0$, and the partial sums of a sequence $a_n \in \mathbb{R}$ are bounded, then $\sum_{n=1}^\infty a_n b_n$ converges. * It's obvious that $b_n$ in this case is $1/\log(n)$; it does indeed decrease and converge to $0$. * What I have trouble with is showing that the partial sums of $a_n$ (in this case $\sin(nx)$) are bounded. I know that this is equivalent to showing that $\exists M>0,\ \forall n:\ \vert S_n\vert \leq M$, with $S_n := \sum_{k=1}^n a_k$. I found a proof (that the partial sums are bounded) but I can't seem to understand the first step. It goes like this: $$\left\vert \sum_{k=1}^n \sin(kx) \right\vert = \left\vert \dfrac{\sin\left(\dfrac{nx}{2}\right) \sin\left((n+1)\dfrac{x}{2}\right)}{\sin\dfrac{x}{2}} \right\vert.$$ I really don't know where that result comes from.
In order to apply Dirichlet's Test you still have to show that the sequence of partial sums $S_n=\sum_{k=1}^n \sin(kx)$ is bounded. Now by the addition formula $\cos(x+y)=\cos x \cos y -\sin x \sin y$, for $x\not=2 m\pi$ with $m\in\mathbb{Z}$ (otherwise $S_n=0$), it is easy to obtain $$\sin(kx)=\frac{\cos\bigl((k-\frac{1}{2})x\bigr)-\cos\bigl((k+\frac{1}{2})x\bigr)}{2\sin\frac{x}{2}}.$$ Hence the sum telescopes: $$\sum_{k=1}^{n} \sin(kx)=\frac{1}{2\sin\frac{x}{2}}\sum_{k=1}^n \left[\cos\bigl((k-\frac{1}{2})x\bigr)-\cos\bigl((k+\frac{1}{2})x\bigr)\right]=\frac{\cos\bigl(\frac{x}{2}\bigr)-\cos\bigl((n+\frac{1}{2})x\bigr)}{2\sin\frac{x}{2}}$$ and $$|S_n|=\left|\sum_{k=1}^{n} \sin(kx)\right|\leq \frac{1}{\left|\sin\frac{x}{2}\right|}.$$ So the partial sums are bounded, and by Dirichlet's Test your series is convergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1890927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is Rayo's number really that big? I was reading about large numbers, and came across Rayo's Number which is defined to be the smallest integer that is not nameable by any expression in the language of set theory that contains less than $10^{100}$ symbols. Now, my question is: Is this number really that large? If we pick some "random" number $2091580284...384901284021$ with $10^{100}$ digits, wouldn't it be non-nameable with less than a googol symbols? Wouldn't this number be bigger or equal to Rayo's Number?
No. A number like $10^{10^{100}}$ is much smaller than Rayo($10^{100}$), because the definition allows not a googol digits but a googol symbols of first-order set theory, and such symbols are extremely efficient at naming big numbers. A number such as TREE(3) can be expressed with far fewer than a googol symbols — nothing close to a googol symbols is needed. And if we can say that Rayo's Number $>$ TREE(3), then we can certainly say that Rayo's Number $>$ any number with $10^{100}$ digits. So Rayo's Number really is that big.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
How to show $\log \cosh(\sqrt x)$ is concave? I know the definition of convex and concave functions and the second order condition to justify convexity (concavity). But still, I do not know how to show $\log \cosh(\sqrt x)$ is concave. Thanks for your help.
For $x>0$ the second derivative is: $$\frac{\text{sech}^2\left(\sqrt{x}\right)}{4x}-\frac{\tanh \left(\sqrt{x}\right)}{4x^{3/2}}=\frac{1}{4x\cosh(\sqrt{x})} \left(\frac{1}{\cosh(\sqrt{x})}-\frac{\sinh \left(\sqrt{x}\right)}{\sqrt{x}}\right)$$ Hence the second derivative is negative (and our function is concave) as soon as we show that for $t>0$, $$f(t):=\sinh(t)\cosh(t)-t=\frac{\sinh(2t)}{2}-t>0$$ which holds because $f(0)=0$ and $f$ is strictly increasing in $[0,+\infty)$ since $f'(t)=\cosh(2t)-1>0$ for $t>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $\sin(54°)\sin(66°) = \sin(48°)\sin(96°)$ I'm trying to prove that $\sin(54°)\sin(66°) = \sin(48°)\sin(96°)$ but I don't really have a way to approach it. Most of what I tried was replacing $\sin(2x)$ with $2\sin(x)\cos(x)$ or changing sines with cosines but none of that has really simplified it. Would appreciate any solution.
As $\sin54^\circ=\cos36^\circ$, $\sin66^\circ=\cos24^\circ$ and $$\sin48^\circ=2\cos24^\circ\sin24^\circ,$$ the common factor $\cos24^\circ$ cancels and the proposition reduces to $$\cos36^\circ=2\sin24^\circ\sin96^\circ=\cos(96-24)^\circ-\cos(96+24)^\circ=\cos72^\circ-\left(-\dfrac12\right)$$ Now use How do we prove $\cos(\pi/5) - \cos(2\pi/5) = 0.5$ ?.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Number of bit strings of length 8 that do not contain "$100$"? I am thinking the total number of possible strings is $2^8$, and the number of strings with $100$ at the beginning would be $2^{8-3} = 2^5$. Now "$100$" can shift across the string $5$ times going to the right. Is the answer then $2^8 - 2^5 \times 5$?
As discussed in the comments, the straight forward approach as proposed in the question won't work because it multiply counts the bad strings in which $100$ appears more than once (indeed, it counts bad strings once for each appearance of $100$). For short strings (like length $8$) a more careful count via the principle of Inclusion/Exclusion isn't impossible but it's not exactly easy and, as the length increases, this method gets harder and harder. I think it's easier to attack the problem recursively. Toward that end, define some sub-types of the "good" strings of length $n$. Specifically, let $A_n$ denote those good strings that end in $1$ and let $B_n$ denote those that end in $10$. Note that the total $T_n$ is then given by $$T_n=A_n+B_n+1$$ where the $1$ comes from the good string $0^n$ which ends in neither $1$ nor $10$. Recursive, we note that $$A_n=A_{n-1}+B_{n-1}+1=T_{n-1}$$ since you get a good string of length $n$ by appending a $1$ to any good string of length $n-1$. Similarly $$B_n=A_{n-1}=T_{n-2}$$ Thus $$T_n=T_{n-1}+T_{n-2}+1$$ It is easy to see that $A_1=1$, $A_2=2$, $B_1=0$, $B_2=1$ whence $$\{T_n\}=\{2,4,7,12,20,33,54,88,\cdots\}$$ Consistency Check: Let's count $T_4,\;T_5,\;T_6$ directly. There are $16$ strings of length $4$ and the bad ones are $x100$ and $100x$, thus there are $4$ bad strings so $T_4=16-4=12$ as desired. Similarly the bad strings of length $5$ are $100xx$, $x100x$, $xx100$ so $T_5=32-12=20$ as desired. To count the bad strings of length $6$ we have to be a little careful...the patterns are $100xxx$, $x100xx$, $xx100x$, $xxx100$ but we have to add back $1$ for the double counted string $100100$. Thus $T_6=64-8\times 4+1=33$ as desired. Induction shows that, in fact, $T_n=F_{n+3}-1$ where $F_i$ denotes the Fibonacci numbers $\{F_i\}_{i=1}^{\infty}=\{1,1,2,3,5,8,13,21,\cdots\}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Solve $2\ddot{y}y - 3(\dot{y})^2 + 8x^2 = 0$ Solve the differential equation $$2\ddot{y}y - 3(\dot{y})^2 + 8x^2 = 0$$ I know that we have to use some smart substitution here, so that the equation becomes linear. The only thing I came up with is a smartly guessed particular solution: $y = x^2$. If we plug this function in, we get: $$2\cdot2\cdot x^2 - 3(2x)^2 +8x^2 = 4x^2 - 12x^2 + 8x^2= 0$$ I made a mistake: the coefficients were different in the exam: $$ \begin{cases} 3\ddot{y}y + 3(\dot{y})^2 - 2x^2 = 0, \\ y(0) = 1, \\ \dot{y}(0) = 0. \end{cases} $$ Does it make the solution easier?
Hint: Let $y=\dfrac{1}{u^2}$. Then $y'=-\dfrac{2u'}{u^3}$ $y''=\dfrac{6(u')^2}{u^4}-\dfrac{2u''}{u^3}$ $\therefore\dfrac{2}{u^2}\left(\dfrac{6(u')^2}{u^4}-\dfrac{2u''}{u^3}\right)-3\left(-\dfrac{2u'}{u^3}\right)^2+8x^2=0$ $\dfrac{12(u')^2}{u^6}-\dfrac{4u''}{u^5}-\dfrac{12(u')^2}{u^6}=-8x^2$ $\dfrac{4u''}{u^5}=8x^2$ $u''=2x^2u^5$ This reduces to a special case of the Emden–Fowler equation.
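As for the edited exam version with initial conditions (an observation going beyond the hint above): there the equation integrates directly, since $$3\ddot yy+3(\dot y)^2=3(y\dot y)'=\tfrac32\,(y^2)'',$$ so $3\ddot yy+3(\dot y)^2-2x^2=0$ reads $(y^2)''=\tfrac43 x^2$. Integrating twice gives $y^2=\tfrac19 x^4+C_1x+C_2$, and the initial conditions $y(0)=1$, $\dot y(0)=0$ force $C_2=1$, $C_1=0$, hence $$y=\sqrt{1+\tfrac{x^4}{9}}.$$ So yes, the new coefficients make the problem much easier.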
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Decompose rotation matrix to plane of rotation and angle I would like to decompose an $n$-dimensional orthogonal rotation matrix (restricting to simple rotation with a single plane of rotation) to the two basis vectors of the plane of rotation, and an angle of rotation. The common method is decomposing the rotation matrix to an axis and angle, but this doesn't work in higher dimensions. For example in $\mathbb{R}^3$ given the rotation matrix $R_{xy}=\begin{bmatrix}\cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$ it's obvious that the plane of rotation is the $xy$-plane spanned by basis vectors $b_0 = (1,0,0)$ and $b_1=(0,1,0)$ and the angle of rotation is $\theta$. However decomposing it mathematically is rather challenging. What is the solution for a general (restricted to a single, but arbitrary plane) rotation matrix in $\mathbb{R}^n$?
This is the same answer as given by "arctic tern," but expressed differently. If $R$ is an orthogonal rotation matrix (i.e. $R^{-1} = R^T$ and $\det(R) = 1$), then it can be diagonalized via a unitary similarity matrix. The eigenvalues have absolute value $1$, and come in conjugate pairs. And their product must be $1$. Hence there are $k$ (where $2k \le n$) eigenvalues of the form $\cos(\theta_m) \pm i \sin(\theta_m)$ for $1 \le m \le k$, and $n-2k$ eigenvalues $1$. Thus $R$ is the product $\prod_{m=1}^{k} R(\Pi_m,\theta_m)$ (as in "arctic tern's" answer), where $\Pi_m$ is the space generated by the eigenvectors corresponding to the eigenvalues $\cos(\theta_m) \pm i \sin(\theta_m)$. (The eigenvectors also come in conjugate pairs, unless $\theta_m = \pi$, so you can take the two real vectors to be the real and imaginary parts of one of them. For the case where the eigenvalues are $-1$ twice, just pick an eigenvector corresponding to each eigenvalue. If any of the $\theta_m$'s are repeated, this representation won't be unique.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Triangle inequality in complex numbers Let $|z|=R$. By using the triangle inequality, find a lower bound for $$|z^4+5z^2+4|$$ My approach: $$|z^4+5z^2+4|\geq|z^4|-|5z^2+4| \geq |z^4|-(|5z^2|+4)=|z^4|-|5z^2|-4=R^4-5R^2-4$$ but the solution is $$|z^4+5z^2+4| \geq R^4-5R^2+4$$ What went wrong?
You get a sharper lower bound if you render $z^4+5z^2+4=(z^2+1)(z^2+4)$ and find the lower bounds for the absolute values of the factors on the right side.
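Explicitly, carrying the hint through with the reverse triangle inequality applied to each factor (valid for $R>2$, so that both lower bounds are non-negative): $$|z^4+5z^2+4|=|z^2+1|\,|z^2+4|\geq (R^2-1)(R^2-4)=R^4-5R^2+4.$$ Nothing "went wrong" in the original estimate — it is a valid lower bound — it is simply weaker: bounding $|5z^2+4|$ from above in one piece discards the structure of the factorization, giving $R^4-5R^2-4$ instead of the sharper $R^4-5R^2+4$.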
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Negative/Positive Index Numbers I came across a question that I was unable to solve; it involves positive and negative indices in the same fraction, and I'm not sure how to solve that. Could anyone help me, please? $$\dfrac{\left(\frac{7a^5b^3}{5a^6b^2}\right)}{\left(\frac{7b^3a^2}{5b^5a^4}\right)}$$ Also, some working out would be great so I can see how to work it out properly. Thanks!
$$\dfrac{\left(\frac{7a^5b^3}{5a^6b^2}\right)}{\left(\frac{7b^3a^2}{5b^5a^4}\right)}=\frac{7a^5b^3}{5a^6b^2}\times\frac{5b^5a^4}{7b^3a^2} =\frac{a^5b^3b^5a^4}{a^6b^2b^3a^2}=\frac{a^9b^8}{a^8b^5}=ab^3.$$ Division by a fraction is equivalent to multiplication by its reciprocal, which is obtained by swapping the numerator and denominator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does there exist such a subgroup of a linear group? Could anybody give an example of the following situation: $F$ is a field of nonzero characteristic $p$, $V$ is a finite-dimensional $F$-vector space, $G$ is a subgroup of $GL(V)$, and $G$ is infinite but has finite exponent, with its (least) exponent not divisible by $p$? Thanks in advance.
This does not exist. With no loss of generality, you can suppose that the field is algebraically closed. Let $H$ be the Zariski closure of $G$ and $H_0$ its unit component. Then $H$ is a positive-dimensional algebraic group of finite exponent, and so is its connected unit component $H_0$. Since tori have infinite exponent, $H_0$ has to be unipotent. Hence its exponent has the form $p^n$ with $n\ge 1$. Hence $p$ divides the exponent of $G$. Added: To avoid the torus argument (but still with a little smaller amount of Zariski topology), one observes that since elements of $H$ have bounded order, they achieve only finitely many characteristic polynomials, and hence by connectedness the only characteristic polynomial achieved on $H_0$ is $(X-1)^k$, where $k$ is the ambient dimension. So all elements of $H_0$ are unipotent, and hence their order is a power of $p$. Thus the exponent of $H_0$ is $p^m$ for some $m\ge 0$, and actually $m>0$ since $H_0\neq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Approximation using the Taylor series $$\sqrt{1+2t\sigma\cos\theta+\frac12t^2\sigma^2(3+\cos(2\theta))}=1+t\sigma\cos\theta+\frac12t^2\sigma^2+O(t^3)$$ What is the omitted step between these two expressions? The parameter $t$ lies between $0$ and $1$, and it is said that a Taylor-series approximation is used.
Hint. One may use the Taylor series expansion, as $u \to 0$, $$ \sqrt{1+u}=1+\frac u2-\frac{u^2}{8}+O(u^3) $$ applying it to $u=2t\sigma \cos \theta+\dfrac 12t^2\sigma^2 (3+ \cos 2\theta) $ as $t \to 0$, observing that $$ \frac u2-\frac{u^2}{8}=t\sigma \cos \theta+\dfrac 14t^2\sigma^2 (3+ \cos 2\theta)-\frac{4t^2\sigma^2 \cos^2 \theta}{8}+O(t^3). $$
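The remaining (omitted) step is the simplification of the $t^2$ terms using $\cos 2\theta = 2\cos^2\theta-1$: $$\dfrac 14t^2\sigma^2 (3+ \cos 2\theta)-\frac{4t^2\sigma^2 \cos^2 \theta}{8}=\dfrac 14t^2\sigma^2\left(2+2\cos^2\theta\right)-\dfrac12 t^2\sigma^2\cos^2\theta=\dfrac12 t^2\sigma^2,$$ which yields $1+t\sigma\cos\theta+\frac12t^2\sigma^2+O(t^3)$, as claimed.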
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transitive sets: problem in proof of Lemma I.8.6 of Kunen's 'Foundations of Mathematics' I've been studying Kunen's notes titled 'The Foundations of Mathematics'. Definition I.8.1 in Kunen says $z$ is a transitive set iff $\forall y \in z\, [y \subseteq z]$ In the proof of Lemma I.8.6, $\alpha$ is a transitive set and $x,y,z \in \alpha$. Kunen claims that … we have $x \in y \in z \rightarrow x \in z$ because the $\in$ relation is transitive on $\alpha$ … But this does not seem to follow from Kunen's definiton of transitivity. Consider $\alpha=\{\emptyset, \{\emptyset\},\{\{\emptyset\}\}\}$. Then it is transitive by Kunen's definition but if we take $x$, $y$ and $z$ to be the three elements in the order given then $x \in y \in z$ but $x \notin z$. Am I reasoning correctly? I also have the print edition of the notes and the definition and proof are identical there.
I agree with you that the end of the proof of Lemma I.8.6 in Kunen is misleading and incomplete. It can be fixed as below. Let $\alpha$ be an ordinal, that is, $\alpha$ is a transitive set (every element of it is a subset of it) such that the relation $\in$ on $\alpha$ is a well-order (i.e., a (strict) total order for which every nonempty subset has a minimal element). At the end of the proof of I.8.6 we have $x\in y\in z\in\alpha$ with $x,y,z\in\alpha$. We have to show that $x\in z$. Because $\in$ is a total order, we either have (1) $z=x$, or (2) $z\in x$, or (3) $x\in z$. Case (1) is impossible because that would give a cycle $x\in y\in x$ with no minimal element for the set $\{x,y\}$. Case (2) is impossible because that would give a cycle $x\in y\in z\in x$ with no minimal element for the set $\{x,y,z\}$. So case (3) $x\in z$ must hold. Your example $A=\{\emptyset, \{\emptyset\},\{\{\emptyset\}\}\}$ is a transitive set, but the $\in$ relation on $A$ is not a well-order, as it is not even a total order: the elements $\emptyset$ and $\{\{\emptyset\}\}$ are not comparable, as neither is a member of the other. As you correctly observed, $A$ being a transitive set is not enough to conclude that the relation $\in$ on $A$ is a transitive relation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1891943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can the probability of getting a ball from a box be greater than $1$? A box has three black balls and three red balls. Therefore, the probability of getting a black ball is $\dfrac{1}{2}$ (likewise for a red ball). Suppose one person takes a ball and puts it back in the box. Then a second person takes a ball and puts it back in the box. Then a third person takes a ball and puts it back in the box. What is the probability that at least one person gets a black ball? (note: I created this question myself to clear up a doubt) The probability of the first person getting a black ball is $\dfrac{1}{2}$. The probability of the second person getting a black ball is $\dfrac{1}{2}$. These three are independent events. So, it appears that $\dfrac{1}{2}+\dfrac{1}{2}+\dfrac{1}{2}=\dfrac{3}{2}$ is the probability that at least one person gets a black ball. I know this is wrong, as I studied in my book that a probability cannot be greater than $1$. Then I came up with this contradictory argument as well: the probability that all get a black ball $=\left(\dfrac{1}{2}\right)^3=\dfrac{1}{8}$. The probability that exactly two of them get a black ball $=\dbinom{3}{2}\left(\dfrac{1}{2}\right)^3=\dfrac{3}{8}$. The probability that exactly one of them gets a black ball $=\dbinom{3}{1}\left(\dfrac{1}{2}\right)^3=\dfrac{3}{8}$. In this way, the probability that at least one of them gets a black ball $=\dfrac{1}{8}+\dfrac{3}{8}+\dfrac{3}{8}=\dfrac{7}{8}$. I have one more argument for this: the probability that none of them gets a black ball $=\left(\dfrac{1}{2}\right)^3=\dfrac{1}{8}$, so the probability that at least one of them gets a black ball $=1-\dfrac{1}{8}=\dfrac{7}{8}$. Could you please help me understand which approach is right? And, more importantly, why does the first argument, which also seems convincing to me, give a probability greater than $1$?
In your first (wrong) approach you use the rule: $\Pr(A\cup B\cup C)=\Pr(A)+\Pr(B)+\Pr(C)\tag1$ However, this rule can only be used if the events $A,B,C$ are mutually exclusive, which is not the case here. A rule that always works is: $\Pr(A\cup B)=\Pr(A)+\Pr(B)-\Pr(A\cap B)\tag2$ $A,B$ are by definition mutually exclusive if $A\cap B=\varnothing$, and if that is the case then $\Pr(A\cap B)=0$, so that $(2)$ becomes: $\Pr(A\cup B)=\Pr(A)+\Pr(B)\tag3$ corresponding with $(1)$ applied to $2$ events. Usage of the general $(2)$ in order to solve the problem would lead to: $$\Pr(A\cup B\cup C)=\Pr(A)+\Pr(B)+\Pr(C)-\Pr(A\cap B)-\Pr(A\cap C)-\Pr(B\cap C)+\Pr(A\cap B\cap C)$$ $$=\frac12+\frac12+\frac12-\frac14-\frac14-\frac14+\frac18=\frac78$$ That answer agrees with your (correct) second and (correct and most efficient) third approaches.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If $f:X\to X$ is an epimorphism, which of these statements are true? $X$ is a linear space, $\dim X = n < \infty$, and $f:X\to X$ is an epimorphism. Then: a. $f$ is a monomorphism. b. there exists a basis in which the matrix of $f$ is diagonal. c. there exists a basis in which the matrix of $f$ is symmetric. a. is true, because: Let $A$ be the matrix of $f$. Since $A$ is a change-of-basis matrix, it is non-singular, so it is invertible. Then $Ax=0\Leftrightarrow A^{-1}Ax=A^{-1}0 \Leftrightarrow x = 0$, so $f$ is a monomorphism. b. No idea. c. No idea. What about a.? Is it correct? Can you help me with b. and c.?
For (a), your argument isn't really complete; I'd say you haven't fully justified how you know $A$ is a change-of-basis matrix. But there's actually no need to talk about matrices at all, just use the rank-nullity theorem, the fact that $X$ is finite-dimensional, and the fact that * *$f$ is an epimorphism $\iff$ $\operatorname{im}(f)=X$ *$f$ is a monomorphism $\iff$ $\ker(f)=0$ For (b), remember that * *$f$ is diagonalizable $\iff$ $\mathrm{algmult}(\lambda)=\mathrm{geomult}(\lambda)$ for every eigenvalue $\lambda$ of $f$ Take a look at a linear map like $f\left(\begin{bmatrix}x\\y\end{bmatrix}\right)=\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$ from $\mathbb{R}^2$ to $\mathbb{R}^2$, which (you can check) is indeed an epimorphism. The above map is also a good example to think about for part (c). Can there be a basis $\left\{\bigl\lbrack\begin{smallmatrix}a\\c\end{smallmatrix}\bigr\rbrack ,\bigl\lbrack\begin{smallmatrix}b\\d\end{smallmatrix}\bigr\rbrack\right\}$ of $\mathbb{R}^2$ in which the linear map given above is symmetric? Equivalently, can there be an invertible matrix $\bigl\lbrack\begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\bigr\rbrack$ (which remember, is equivalent to $ad-bc\neq 0$) such that $$\begin{align*} \begin{bmatrix}a & b\\ c & d\end{bmatrix}\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}\begin{bmatrix}a & b\\ c & d\end{bmatrix}^{-1}&=\begin{bmatrix}a & a+b\\ c & c+d\end{bmatrix}\begin{bmatrix}a & b\\ c & d\end{bmatrix}^{-1}\\[0.1in] &=\frac{1}{ad-bc}\cdot\begin{bmatrix}a & a+b\\ c & c+d\end{bmatrix}\begin{bmatrix}\hphantom{-}d & -b\\ -c & \hphantom{-}a\end{bmatrix}\\[0.1in] &=\frac{1}{ad-bc}\cdot\begin{bmatrix}ad-ac-bc & a^2\\ -c^2 & -bc+ac+ad\end{bmatrix} \end{align*}$$ is symmetric?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What applications require the study of nonhomogeneous partial differential equations I've been working on a bit of abstract calculus which allows me to solve various PDEs in a somewhat novel fashion. It occurs to me, I can also solve nonhomogeneous PDEs by my method. For example, $$ u_{xx}+u_{yy} = x^2-y^2$$ I could probably solve. But, before I get carried away with my investigation, it occurs to me I should ask: Is there a good exposition of the physical or mathematical significance of nonhomogeneous PDEs? Or, if possible, can you offer one here? I know for ODEs the significance of forcing terms is, well, force. The idea of superposition and the response of linear systems to external sources is one of the more satisfying chapters in the study of ODEs. I wonder, is there some such story for PDEs? In my example, I guess I know the answer: the inhomogeneity could be viewed as charge which fills the plane with a certain density. In other words, a nonhomogeneous Laplace equation is a Poisson equation. What about other PDEs? For example, the wave equation, or the heat equation? I've found the nonhomogeneous problems listed in various documents, but my search thus far has not turned up anything which satisfies my big-picture curiosity. Incidentally, I am also interested in corresponding questions for systems of PDEs, so if you have something to say in that arena, feel free. Thanks in advance for your insight!
The first application I can think of would be geometric and/or optical modeling. Multivariate polynomials describe or approximate geometries, and if you differentiate and impose constraints on their level sets, you can obtain (systems of) differential equations of the type you have there. And since such equations can describe geometries, they can also describe the geometries of boundary conditions used for other differential equations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }