In how many different ways can we place N identical rooks on a chess board WITHOUT CORNERS so that no two of them attack each other (for N > 3)? I've been trying, unsuccessfully, to figure it out for a while now. For N = 8, the chessboard is the usual $8\times 8$ board with its four corner squares removed (figure omitted). Just for reference, here is a similar question without the corner restriction and with N = 8: In how many different ways can we place $8$ identical rooks on a chess board so that no two of them attack each other?
This problem can be conveniently solved using rook polynomials. We can pairwise exchange rows and columns without changing the number of possible arrangements of non-attacking rooks. An equivalent board has the four forbidden squares forming a $2\times 2$ block in the top left corner (figure omitted). The rook polynomial of the four forbidden squares in the top left corner is \begin{align*} 1+4x+2x^2\tag{1} \end{align*} where the coefficient of $x^k$ gives the number of ways to place $k$ non-attacking rooks on the forbidden squares. The number of valid arrangements of eight non-attacking rooks on the $(8\times 8)$ board respecting the forbidden squares is given via (1) and the inclusion-exclusion principle as \begin{align*} 1\cdot 8!-4\cdot 7!+2\cdot 6!\color{blue}{=21\,600} \end{align*} We can place eight non-attacking rooks in $8!$ ways; we subtract the $4\cdot7!$ ways having at least one rook on one of the four forbidden squares and, to compensate for double counting, add back the $2\cdot6!$ ways having two non-attacking rooks on the four forbidden squares.
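As a sanity check on the inclusion-exclusion count, here is a minimal brute-force sketch in Python (my addition; it relies on the standard identification of non-attacking placements with permutations, rook $i$ in row $i$ and column $p_i$):

```python
from itertools import permutations

# Rook i sits at (row i, column p[i]); a permutation p <=> a non-attacking placement.
corners = {(0, 0), (0, 7), (7, 0), (7, 7)}  # the four removed squares

count = sum(
    all((r, c) not in corners for r, c in enumerate(p))
    for p in permutations(range(8))
)
print(count)  # 21600, matching 8! - 4*7! + 2*6!
```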
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proving the truth of an inequality (by finding a value) as part of a homework question I came across the following logarithmic inequality: $$ \log ^4n<n^{1-\epsilon}$$ I need to find a value of epsilon for which the inequality remains true. I checked with Wolfram Alpha and the value is around 0.3 (it can be less, but not greater). I thought the best way would be to use L'Hopital's Rule on this expression: $$ \lim_{n\rightarrow \infty}\frac {\log n}{n^\frac{1-\epsilon}{4}}=0 $$ If that holds, then there is an $n_0\in \mathbb{R}$ such that for all $n\ge n_0$ the inequality $\frac {\log n}{n^\frac{1-\epsilon}{4}}<1$ is true, but this does not give me any information on epsilon (except that it needs to be less than 1).
Hint: instead of worrying about limits at infinity, find where the function $$f(x) = \frac{\log~x}{x^\alpha}$$ takes on its maximum value.
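To make the hint concrete (a worked completion, assuming, consistently with the asker's numerical value $\epsilon\approx 0.3$, that $\log$ denotes the base-$10$ logarithm): set $f(x)=\frac{\log x}{x^\alpha}$ with $\alpha=\frac{1-\epsilon}{4}$. Then $$f'(x)=\frac{1-\alpha\ln x}{x^{\alpha+1}\ln 10}=0 \iff x=e^{1/\alpha},\qquad f\!\left(e^{1/\alpha}\right)=\frac{1}{\alpha e\ln 10},$$ so $\log n<n^{\alpha}$ for every $n\ge 1$ as soon as $\frac{1}{\alpha e\ln 10}<1$, i.e. $\epsilon<1-\frac{4}{e\ln 10}\approx 0.36$, which is consistent with the numerical observation.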
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $f'(x)=0$ true at a cusp? If I am given a graph that has a cusp and I am asked to find every point $x$ where $f'(x)=0$ is satisfied, does this include the cusp? I know that when the derivative of a function equals zero this means that there is a horizontal tangent at that point, I also know that the derivative does not exist at a cusp. However, a cusp does have a horizontal tangent but is not differentiable at that point, so do we include the point $x$ where the cusp is when the question is asking us where $f'(x)=0$? By cusp I mean when $$\lim_{x \to a^{+}} f'(x)=+ \infty \text{ and } \lim_{x \to a^{-}} f'(x)=- \infty$$
If the derivative does not exist at the cusp, say at the point $x_o$, then $f'(x_o)$ does not exist: it has no unique value. The one-sided limits are different: $$ \lim_{h \rightarrow 0^{+}} \frac{ f(x_o + h) - f(x_o) }{h} \ne \lim_{h \rightarrow 0^{-}} \frac{ f(x_o + h) - f(x_o) }{h}$$ A tangent line at a point $x$ is a straight line with gradient $f'(x)$. Since there is no derivative at the cusp, there is no tangent at the cusp. To be specific for your question: $f'(x_o)$ cannot be zero, because it does not have any value at all. A tangent line represents the rate of change at the considered point, and at the cusp $\textbf{there is a sudden change}$: no single tangent direction exists there.
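For a concrete instance of the cusp described in the question (an illustrative example, not part of the original answer), take $f(x)=\sqrt{|x|}$ and $x_o=0$: $$\frac{f(0+h)-f(0)}{h}=\frac{\sqrt{|h|}}{h}=\frac{\operatorname{sign}(h)}{\sqrt{|h|}}\longrightarrow\pm\infty\quad\text{as }h\to 0^{\pm},$$ so the one-sided difference quotients diverge to $+\infty$ and $-\infty$, and no number, zero or otherwise, can serve as $f'(0)$.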
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
From the given figure, prove that $[\triangle ABC]=[\triangle CDE]$ From the given figure, prove that $\triangle ABC=\triangle CDE$ in area. In $\triangle ABC$ and $\triangle CDE$ 1. $AB=CD$ 2. $BC=DE$ 3.??
It is easy to see with a picture: simply glue the triangles together along a pair of sides of equal length (in the omitted figure, the two green sides are equal and the two blue sides are equal). It is now clear that the areas are equal, since the triangles have equal bases and equal heights.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What makes a function not defined? I have started studying precalculus and will then move on to calculus. While studying functions I wondered whether this function is defined at $a$ or not. Take a look at it: $$ f(x) = \frac{(x-a)(x-b)(x-c)...(x-n)}{(x-a)} $$ If we simplify it, the term $\left( x-a\right)$ cancels out, making the function defined at $a$; but if we leave it as it is, it is undefined at that point. I ask because I found in some sources that the graph of such a function has an open dot at that point, indicating a discontinuity there. But I couldn't explain it. Are the expressions before and after cancelling different, or is it something else? I would be highly obliged for your help, thanks.
A function $f$ is a special kind of relation between two sets $A$ and $B$, written $f:A\to B$. The relation consists in this: to every element of $A$ (called the domain of $f$) the function $f$ assigns a unique element of $B$ (called the codomain of $f$). In your example, if the codomain of $f$ is $\Bbb R$, then $f(a)\notin \Bbb R$ because division by zero is not defined in $\Bbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Show such a function has a maximum Let $f:[0, \infty)\to\mathbb{R}$ be a continuous function with $f(0) = 1$ and $f(x)\leq \frac{x+2}{x+1}$ for all $x \in [0, \infty)$. Show that $f$ attains a maximum value on $[0, \infty)$. My intuition: if $f(0)$ is the maximum, I'm done. If not, since the bounding function converges to $1$, I want to show that from a certain point on, $f$ will be lower than $1$. Can you help me formalize it?
(1). If $f(x)\leq 1$ for all $x\in [0,\infty)$ then $1=f(0)=\max_{x\geq 0}f(x).$ (2). If $f(x_0)>1$ for some $x_0>0$ then there exists $x_1>x_0$ such that $$\forall x\geq x_1\;\left( f(x)<\frac {1}{2}(1+f(x_0))\right)$$ because $\lim_{x\to \infty}\sup_{y\geq x}f(y)\leq 1.$ There exists $x_2\in [0,x_1]$ with $f(x_2)=\max \{f(x): x\in [0,x_1]\}$. Since $x_0\in [0,x_1]$ we have $f(x_0)\leq f(x_2)$, so for $x>x_1$ we have $$f(x)<\frac {1}{2}(1+f(x_0))<f(x_0)\leq f(x_2).$$ Therefore $f(x_2)= \max_{x\geq 0}f(x).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Find the number of ways you can put fruits into drawers We have 5 fruits and 3 drawers. I need to find the number of ways I can put the fruits in them so that there is at least one fruit in the lowest drawer and the first and the second fruit are not in the same drawer. I tried to do it the complementary way: I took all of the options, $3^5$, and subtracted the bad options ($2^5$ where there are no fruits in the lowest drawer and $3^4$ where the first and the second fruit are in the same drawer). I got $3^5-2^5-3^4 =130$, but the answer is 146. Thanks in advance!
5 fruits can go into any of 3 drawers: $3^5$ placements in total. Count directly those with fruit 2 not in the same drawer as fruit 1: fruit 1 anywhere, fruit 2 in one of the other two drawers, the remaining three fruits anywhere, giving $3^4\cdot 2=162$. From these, subtract the cases where all of the fruit sit in the top two drawers, so the lowest drawer is empty: $2^4\cdot 1=16$. Hence $162-16 = 146$.
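Since there are only $3^5=243$ assignments, the count is easy to confirm exhaustively; a minimal sketch (the encoding of drawer 0 as the lowest is an assumption of the example):

```python
from itertools import product

count = sum(
    1
    for d in product(range(3), repeat=5)  # d[i] = drawer of fruit i; 0 = lowest drawer
    if 0 in d and d[0] != d[1]            # lowest drawer nonempty; fruits 1 and 2 apart
)
print(count)  # 146
```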
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Series Convergence / Divergence: $\sum \frac{3^{1/\sqrt{n}}-1}{n}$ Does the following series converge or diverge: $$\sum_{n=1}^\infty \frac{3^{1/\sqrt{n}}-1}{n}$$ Many thanks!
By the MVT, $3^x-1=3^{\epsilon}\ln{3}\cdot (x-0)$ for some $\epsilon \in (0,x)$, so $$0<\frac{3^{\frac{1}{\sqrt{n}}}-1}{n}=3^{\epsilon}\ln{3}\frac{1}{n\sqrt{n}}<3\ln{3}\frac{1}{n\sqrt{n}}$$ because $\epsilon \in (0,\frac{1}{\sqrt{n}})$, hence $0<\epsilon<1$, and $f(x)=3^x$ is increasing. As a result, $$\sum_{n=1}^{\infty}\frac{3^{\frac{1}{\sqrt{n}}}-1}{n}<3\ln{3}\sum_{n=1}^{\infty}\frac{1}{n^{1+\frac{1}{2}}},$$ which converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2201912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Smallest odd-deficient odd number and its family. Let $o$ be an odd positive integer. We say that $o$ is deficient if its sum of positive divisors, denoted by $\sigma(o)$, satisfies the inequality $\sigma(o)<2o$. In this case $\sigma(o)=2o-d$, where $d$ is called the deficiency of $o$. If $d$ is odd, we say that $o$ is an odd odd-deficient number. The list of odd deficient numbers is in the OEIS and begins $1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59, 61, 63, 65, 67, 69, 71, 73, 75, 77, 79, 81, 83, 85, 87, 89, 91, 93, 95, 97, 99, 101, 103, 105, 107, 109, 111, 113, 115, 117, 119, 121, 123, 125$ Computing the deficiency of some of the given numbers on the list, I found that their deficiencies $d$ are mostly even. I have no computing tool at the moment for computing the deficiency of the larger numbers on the list. My questions are: * *Among the list, what is the smallest integer whose deficiency is odd? (Answered already: 9) *If there are such integers, can we characterize them? or *Is there no integer in the list with odd deficiency $d$? (Answer: there is.) So question 2 is the one that needs to be answered. Thanks in advance.
The odd deficient numbers which have odd deficiency are the odd deficient squares. We have $d=2o-\sigma(o)$, so $d$ will be odd when $\sigma(o)$ is odd. All the divisors of $o$ are odd, so $\sigma(o)$ will be odd when $o$ has an odd number of divisors. But given a divisor $k$, $\frac ok$ is also a divisor, so the divisors come in pairs unless $o$ is a square.
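A short exhaustive check of the claim for small odd numbers, using a naive divisor-sum helper (illustrative and unoptimized):

```python
from math import isqrt

def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d + (n // d if d * d != n else 0)
               for d in range(1, isqrt(n) + 1) if n % d == 0)

odd_deficient = [o for o in range(1, 2000, 2) if sigma(o) < 2 * o]
odd_deficiency = [o for o in odd_deficient if (2 * o - sigma(o)) % 2 == 1]
print(odd_deficiency[:8])                               # [1, 9, 25, 49, 81, 121, 169, 225]
print(all(isqrt(o) ** 2 == o for o in odd_deficiency))  # True: all of them are squares
```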
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Residue for quotient of functions Let $f, g$ be holomorphic functions on a disk $\mathbb{D}(z_0,r)$ centered at $z_0$ and of radius $r>0$. Suppose $f$ has a simple zero at $z_0$. I want to find an expression for $Res(g/f, z_0)$. But I'm not sure what this expression should look like. Here's my guess: Since $f$ has a simple zero at $z_0$, $\exists h(z)$ holomorphic on $\mathbb{D}(z_0,r)$ such that $f(z)=(z-z_0)h(z)$ and $h(z_0)\ne 0$. So that we can represent $g/f = \frac{g(z)}{(z-z_0)h(z)}$, where we observe that $z_0$ is a pole of order 1 of $g/f$. This implies that we can express $$g/f=\frac{a_{-1}}{z-z_0}+\sum\limits_{n=0}^\infty a_n(z-z_0)^n$$ Hence, $$a_{-1}=\frac{g(z)}{f(z)}(z-z_0)-\sum\limits_{n=0}^\infty a_n(z-z_0)^{n+1}$$ Does this look like a correct approach? I think that this expression is too general because of the infinite series on the right-hand side. Is there a clue I'm missing? Update: Another approach might be this: $$g/f = \frac{g(z)}{(z-z_0)h(z)}=\frac{c_0+c_1(z-z_0)+\dots}{d_1(z-z_0)+\dots}=\frac{c_0}{d_1(z-z_0)+\dots}\\ +\frac{c_1(z-z_0)+\dots}{d_1(z-z_0)+\dots}=\frac{a_{-1}}{z-z_0}+\sum a_n(z-z_0)^n$$ But what next?
Suppose that $z_0$ is a simple zero of $f$, hence a simple pole of the fraction. Then $$\mathrm {Res}(g/f,z_0) = \lim_{z\to z_0}(z-z_0) \frac {g(z)}{f(z)}=\frac {g(z_0)}{f'(z_0)} \tag{1}$$ Alternatively, since $f$ has a simple zero at $z_0$, $$\frac{1}{f} = \frac{a_{-1}}{(z-z_0)}+\sum_{k=0}^\infty a_k (z-z_0)^k$$ Suppose that $g$ is analytic in a neighborhood of $z_0$: $$g(z) = \sum_{k=0}^\infty b_k(z-z_0)^k$$ Then $$G(z)=\frac{g(z)}{f(z)} = \left( \sum_{k=0}^\infty b_k(z-z_0)^k \right)\left( \frac{a_{-1}}{(z-z_0)}+\sum_{k=0}^\infty a_k (z-z_0)^k\right)$$ Finally, we have the residue at $z_0$: $$\mathrm {Res}(G(z),z_0) = a_{-1}b_0 \tag{2}$$
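A quick symbolic check of formula (1) on a concrete pair; the choice $f(z)=z^2-1$, $g(z)=e^z$ is only an illustration, and SymPy is assumed available:

```python
import sympy as sp

z = sp.symbols('z')
g, f = sp.exp(z), z**2 - 1   # f has a simple zero at z0 = 1
z0 = 1

print(sp.residue(g / f, z, z0))                   # E/2
print(g.subs(z, z0) / sp.diff(f, z).subs(z, z0))  # E/2, i.e. g(z0)/f'(z0)
```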
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Understanding the cohomology ring of the Grassmannian Some background first: I'm trying to understand the solution of some enumerative geometry problems, such as proving that a smooth cubic surface contains $27$ lines. I know that this becomes easier once one understands the cohomology of the Grassmannian. I know that the Grassmannian can be given a CW-complex structure, but I don't understand how to compute the actual cohomology ring. I think that is the subject of Schubert calculus, and names like Pieri's or Giambelli's formulas often pop up. But I have also read elsewhere, such as in Hatcher's book Vector Bundles and K-Theory, that one can use Chern classes to describe the cohomology ring. My question is, how are the two approaches related, and, most importantly, what is a comprehensive textbook on the subject?
The two texts I have most often been referred to for studying the cohomology of the Grassmannian are Young Tableaux by Fulton and Symmetric Functions, Schubert Polynomials, and Degeneracy Loci by Manivel. From my perspective, Manivel takes a fairly combinatorial perspective while Part III of Fulton gives an excellent treatment of this question from a more geometric perspective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find the constant terms in two quadratic equations Let $\alpha$ and $\beta$ be the roots of the equation $x^2 - x + p=0$ and let $\gamma$ and $\delta$ be the roots of the equation $x^2 -4x+q=0$. If $\alpha , \beta , \gamma , \delta$ are in geometric progression, then what are the values of $p$ and $q$? My approach: From the two equations, $$\alpha + \beta = 1, \qquad \alpha \beta = p, \qquad \gamma + \delta = 4, \qquad \gamma \delta = q.$$ Since $\alpha , \beta , \gamma , \delta$ are in G.P., let $\alpha = \frac{a}{r^3}$, $\beta = \frac{a}{r}$, $\gamma = ar$, $\delta = ar^3$. $$\therefore \alpha \beta \gamma \delta = a^4 = pq$$ Now, $$\frac{\alpha + \beta}{\gamma + \delta} = \frac{1}{r^4}$$ $$\frac{1}{4} = \frac{1}{r^4}$$ $$\therefore r = \sqrt{2}$$ From here I don't know how to proceed. Am I unnecessarily complicating the problem?
Let $\beta=\alpha y$, $\gamma=\alpha y^2$, $\delta=\alpha y^3$. Then from the first equation we get $$\alpha^2-\alpha=\alpha^2y^2-\alpha y,$$ and from here $$\alpha=\frac{1}{1+y}.$$ Analogously, from the second equation we get $$\alpha=\frac{4}{y^2(1+y)}.$$ Combining these equations we have $$y^2=4.$$ Can you finish now?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Find interval solutions of $y'=2y\cos^2t-\sin t$ I need to find the solutions of the following equation on an interval: $y'=2y\cos^2t-\sin t$. It looks like an ordinary linear equation, so we can first solve the homogeneous equation (dropping the $-\sin t$ term) and obtain $$y=ce^{\sin t\cos t+t}$$ Then, by variation of the constant, assume this is a solution of the "full" equation and find $c$, which, I believe, is $$c=-\int\frac{\sin t}{e^{\sin t\cos t+t}}\,dt$$ But what to do next?
You have an equation of the form $y'+p(t)y=q(t)$. It can be solved as the sum of the solution of the homogeneous equation and a particular solution obtained with an integrating factor. The homogeneous solution you have just obtained: $$\int\frac{dy_{h}}{y_{h}}=2\int\cos^{2}(t)dt$$ $$y_{h}(t)=c_{1}e^{(\frac{1}{2}\sin(2t)+t)}$$ The integrating factor is $$I=e^{\int{p(t)}dt}=e^{(-\frac{1}{2}\sin(2t)-t)}$$ So the particular solution is $$y_{p}(t)=-e^{(\frac{1}{2}\sin(2t)+t)}\int{e^{(-\frac{1}{2}\sin(2t)-t)}}\sin(t)dt$$ and the solution of the equation is $$y(t)=y_{h}(t)+y_{p}(t)=e^{\frac{1}{2}\sin(2t)+t}\Big[c_{1}-\int{e^{(-\frac{1}{2}\sin(2t)-t)}}\sin(t)dt\Big]$$
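If one wants to cross-check the computation symbolically, SymPy's dsolve can be applied; a minimal sketch (the exact printed form of the remaining unevaluated integral may differ from the hand computation):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t), 2 * y(t) * sp.cos(t)**2 - sp.sin(t))
print(sp.dsolve(ode, y(t)))  # expect a C1*exp(sin(2t)/2 + t) term plus a particular part
```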
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Countability of Gaussian Integers I am attempting to show that Gaussian Integers are countable. Is it valid to map $a + bi$ to an ordered pair $(a, b)$ and then map this to the set of rationals $a/b$ ? I am unsure if this works since $a/b$ is not defined at $b = 0$ and am unsure of a different way to go about this. Any hints welcome, thanks!
We have $\mathbb{Z}[i]=\{a+bi\mid a,b\in \mathbb{Z}\}$ and thus a bijection to pairs $(a,b)\in \mathbb{Z}\times \mathbb{Z}$. Since the product of two countable sets is countable, this is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Differentiating $ x^{a}y^{b} = c $, in its simplest form. $$ x^{a}y^{b} = c, $$ where a, b and c are constants. My attempts so far $$ \frac{dy}{dx} = ax^{a - 1}by^{b - 1}$$ $$ \frac{d^2y}{dx^2} = (a^2 - a)x^{a-2}(b^2 - b)y^{b - 2} $$ I think that these first and second derivatives are correct, however my issue is, are these the derivatives in their simplest form? Any hints or inputs are welcomed.
Hint: $$\frac{d}{dx}f(x)g(x)=f'(x)g(x)+f(x)g'(x)$$ Then differentiate the resulting expression in the same way to get the second derivative. If you are dealing with implicit differentiation (as the equation $x^ay^b=c$ suggests), you will need the chain rule as well as the product rule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Using Weierstrass approximation to show $f(x) = 0.$ (i). Let $f: [0,1] \to \mathbb{R}$ be continuous such that $$\int_{0}^{1} x^kf(x)dx = 0, \ \forall k \geq 5.$$ Show that $f(x) = 0$, for all $x \in [0,1]$. (ii). Let $f: [0,1] \to \mathbb{R}$ be continuous such that $$\int_{0}^{1} x^{2k}f(x)dx = 0, \ \forall k \geq 1.$$ Show that $f(x) = 0$, for all $x \in [0,1]$. For part (i), by the Weierstrass approximation theorem, we know that there exists a sequence of polynomials such that $|x^5f(x) - p_n(x)| < \frac{1}{n}$ for large enough $n$. By the linearity of integrals, we can conclude that $$\int_{0}^{1} x^{5}f(x)p_n(x)dx = 0.$$ Furthermore, $$0 \leq \int_{0}^{1} [x^{5}(f(x))]^2 dx \leq \int_{0}^{1} x^5f(x)[p_n(x)+\frac{1}{n}]dx = 0.$$ By the continuity of $f(x)$ it follows that $f(x) = 0.$ For part (ii), I tried to take a similar approach by also using the Weierstrass approximation theorem. However, the sequence of polynomials that approximate $f(x)$ may potentially have odd-powered terms, which leaves me unable to conclude that $$\int_{0}^{1} x^{2k}f(x)p_n(x)dx = 0.$$ Can I have a hint on how to proceed?
I cannot really follow what you are trying to do in your argument, nor where your inequalities come from. The way it is usually done is that because you can approximate $x^5f(x)$ uniformly with polynomials, you get $\int_0^1(x^5f(x))^2dx=0$, and then $f(x)=0$ by continuity. In more detail: let $\{p_n\}$ be polynomials with $p_n(x)\to x^5f(x)$ uniformly. Each integral $\int_0^1 x^5 f(x)p_n(x)\,dx$ is $0$ by hypothesis, and the uniform convergence makes these integrals converge to $\int_0^1x^5f(x)\cdot x^5f(x)\,dx$, so $\int_0^1 (x^5f(x))^2\,dx=0$. In part (ii), you can show that $\int_0^1 p(x^2)\,x^2f(x)\,dx=0$ for every polynomial $p$. Now choose polynomials $\{p_n\}$, with zero constant term, that approximate uniformly the function $g(x)=x\,f (\sqrt x)$. Then $$ 0=\int_0^1p_n(x^2)x^2f(x)\,dx \to \int_0^1g (x^2)x^2f (x)\,dx=\int_0^1 x^4(f (x))^2\,dx. $$ So $x^2f(x)=0$; this immediately implies that $f(x)=0$ for $x\ne0$, and we also get $f(0)=0$ by continuity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A book on Banach space valued random variables. I have recently become interested in probability theory that takes place in a Banach space setting. What are some good books for a beginner like me? The topic I am especially interested in is the Banach space valued $L^p(\Omega;X)$, i.e. the space of all measurable functions $f:\Omega\to X$, where $\Omega$ is a probability space and $X$ a Banach space, such that $\int\|f(\omega)\|_X^p\ \text{d}\omega < \infty$.
I suggest Martingales in Banach spaces by Gilles Pisier. A very preliminary version of the book (242 pages vs 580 in final version) is available on the author's website. The book begins with an introduction to Banach-space-valued $L^p$ spaces (i.e., Lebesgue-Bochner spaces). It's not long but clearly written and hits important points like the structure of the dual of $L^p(\Omega;X)$ with and without the Radon-Nikodým property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2202978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do you construct a function that is continuous over $(0,1)$ whose image is the entire real line? How do you construct a continuous function over the interval $(0,1)$ whose image is the entire real line? When I first saw this problem, I thought $\frac{1}{x(1-x)}$ might work since it is continuous on $(0,1)$, but when I graphed it, I saw that there is a minimum at $(1/2,4)$, so the image is $[4,\infty)$ and not $(-\infty,\infty)$. Apparently, one answer to this question is: $$\frac{2x-1}{x(x-1)}$$ But how is one supposed to arrive at this answer without using a graphing calculator?
Keeping in mind that (cumulative) distribution functions in statistics take the real line to the unit interval, inverse cdfs for variables defined over the whole real line are a rich and convenient source of these. An example of this would be the $\log(\frac{x}{1-x})$, which is the inverse cdf for the standard logistic distribution (this particular one is sometimes called the logit function). If you scroll to the bottom of the page at that link, you'll see a couple of dozen possibilities in the table at the bottom, mostly under "Continuous univariate supported on the whole real line". Some of those will not have convenient inverse cdfs, but a number of them do. Once you have a few of them you can combine them with functions from the real line to the real line or the unit interval to the unit interval to get a very wide range of easily-obtained functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2203081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 8, "answer_id": 5 }
Could you help with the concept of ratio and income/expenditure? Incomes of $A$ and $B$ are in the ratio $4:5$ and expenditures are also in the ratio $4:5$. Who saves more? Options: I) A II) B III) both save equally IV) cannot be determined on the basis of the information provided We've tried solving this by taking income of A = 4x and expenditure of A = 4y, income of B as 5x and expenditure of B = 5y. But these are all ratios, so there's no way of actually determining the values. We tried hypothetically taking A:B actual values as 40:50, and savings in the ratio 4:5, and here we were getting that B saves more. But we have no way of knowing if that would always apply. There are also different variations of the question where the expenditure is in a different ratio, 5:6 as an example. The popular opinion of my group seems to be that the answer would be CBD (cannot be determined) regardless.
You've started in the right direction, but you've kept your equations separate; the key part is figuring out how to combine the equations you've gotten to represent the information you've been given. In this case, you have $$Inc(A)=4x,\quad Inc(B)=5x,\quad Exp(A)=4y, \quad Exp(B)=5y.$$ Alright, but the problem is asking about the savings $A$ and $B$ make - how does savings relate to income and expenditure? Well, this is just: $$Sav=Inc-Exp.$$ So we have $$Sav(A)=Inc(A)-Exp(A)=4x-4y,\quad Sav(B)=Inc(B)-Exp(B)=5x-5y.$$ That's step one. Now, we want to compare these two quantities. That is, we're asking: Which is larger, $4x-4y$ or $5x-5y$? So let's subtract the first from the second; if the difference is positive, the second is bigger, if it's negative the first is bigger, and if it's zero they're equal. This difference is $$(5x-5y)-(4x-4y)=x-y.$$ So now the entire problem boils down to: Is $x-y$ positive or negative? Do you think this is a question that you have enough information to answer, or does it depend on what exactly $x$ and $y$ are? What does this tell you about the answer to the whole problem?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2203210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A submodule can be both essential and superfluous I am reading Rings and Categories of Modules by Frank W. Anderson, and on page 73 I can't understand the statement quoted there (image omitted). I can't find a submodule that is both essential and superfluous. I hope someone can help me, thanks!
Your excerpt from Anderson and Fuller consists of two sentences: the first is the claim, and the second gives a module in which every nontrivial submodule is an example. So it is hard to understand what you are asking, unless you simply don't know how to find the submodules of $\mathbb Z_{p^\infty}$; if that's the case, you should say so explicitly. It is not hard to prove, or to look up, what the submodules look like. It turns out they are linearly ordered, and that is why each nontrivial submodule is both superfluous and essential. If you need a smaller example, just use the quotient ring $F_2[X]/(X^2)$. This ring has four elements and exactly three ideals (linearly ordered), and its one nontrivial ideal is also an example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2203424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does this graph look like? $y = \log_x{2}$ The equation is $y = \log_x{2}$, where x is the variable and the base of the logarithm. What does the graph look like? In general, what does $y = \log_x{k}$ look like, where k is some real constant? I cannot plug this into online graphers like fooplot.com because they don't seem to have a notation that allows putting x in the base of a logarithm.
$$y(x)=\log_x(2)=\frac{\ln(2)}{\ln(x)} \qquad \text{and more generally}\quad \log_x(k)=\frac{\ln(k)}{\ln(x)},$$ valid for $x>0$, $x\neq 1$.
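Since the asker's difficulty was with plotting, here is a minimal matplotlib sketch of $y=\log_x 2$ (the mask around $x=1$ just avoids the pole; all names are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.01, 5, 2000)
x = x[np.abs(x - 1) > 1e-3]              # exclude x = 1, where log_x is undefined
plt.plot(x, np.log(2) / np.log(x), '.', markersize=1)
plt.axvline(1, color='gray', linestyle='--')
plt.ylim(-10, 10)
plt.xlabel('x'); plt.ylabel('$\\log_x 2$')
plt.show()
```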
{ "language": "en", "url": "https://math.stackexchange.com/questions/2203536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A one-tape Turing machine which operates in linear time can only recognize a regular language Show that a one-tape Turing machine which operates in linear time can only recognize a regular language. I have no idea how to solve it. Can you show me how to solve it? I am a beginner in this subject, so I ask for your indulgence.
The proof is not straightforward. The article Theory of one-tape linear-time Turing machines describes how it was proved. I quote it here: Hennie [18] made the first major contribution to the theory of one-tape linear-time Turing machines in the mid 1960s. He demonstrated that no one-tape linear-time deterministic Turing machine can be more powerful than deterministic finite state automata. To prove his result, Hennie described the behaviors of a Turing machine in terms of the sequential changes of the machine’s internal states at the time when the tape head crosses a boundary of two adjacent tape cells. Such a sequence of state changes is known as a crossing sequence generated at this boundary. Using this technical tool, he argued that (i) any one-tape linear-time deterministic Turing machine has short crossing sequences at every boundary and (ii) if any crossing sequence of the machine is short, then this machine recognizes only a regular language. Using the non-regularity measure of Dwork and Stockmeyer [13], the second claim asserts that any language accepted by a machine with short crossing sequences has constantly-bounded non-regularity. Extending Hennie’s argument, Kobayashi [25] later showed that any language recognized by one-tape o(n log n)-time deterministic Turing machines should be regular as well. This time bound o(n log n) is actually optimal since certain one-tape O(n log n)-time deterministic Turing machines can recognize non-regular languages.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2203692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to find the invertible elements of $\Bbb{Q}[X]$ mod $X^2$ I am looking for some hints for finding all the invertible elements of $\Bbb{Q}[X]$ mod $X^2$. Thank you very much in advance.
The ring $\mathbb{Q}[X]/(X^2)$ is local, that is, it has a unique maximal ideal. An element of the maximal ideal is not invertible (prove it). What about the elements not in the maximal ideal?
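As a complement to the hint (one possible direct verification, not the local-ring route the hint suggests), note that for $a\neq 0$ $$(a+bX)\left(\tfrac1a-\tfrac{b}{a^2}X\right)=1-\tfrac{b^2}{a^2}X^2\equiv 1 \pmod{X^2},$$ so every class with nonzero constant term is invertible, while a class $bX$ satisfies $(bX)^2\equiv 0$ and, being nilpotent, cannot be a unit.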
{ "language": "en", "url": "https://math.stackexchange.com/questions/2203920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Definition of the gamma function for non-integer negative values The gamma function is defined as $$\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}dt$$ for $x>0$. Through integration by parts, it can be shown that for $x>0$, $$\Gamma(x)=\frac{1}{x}\Gamma(x+1).$$ Now, my textbook says we can use this identity to define $\Gamma(x)$ for non-integer negative values. I don't understand why. The identity was derived by assuming $x>0$, so shouldn't it be invalid for any $x$ less than zero? P.S. I have read other mathematical sources and most of them explain things in mathematical terms that are beyond my level. It would be appreciated if things could be kept in relatively simple terms.
To add to the other answers: this is one of the first examples of analytic continuation. It is clear that $$\Gamma(z) = \frac{\Gamma(z+1)}{z}=\frac{\Gamma(z+2)}{z(z+1)}=\frac{\Gamma(z+n+1)}{z(z+1)\ldots(z+n)}$$ makes $\Gamma(z)$ well-defined for $z \in \mathbb{C}$ with $-z \not \in \mathbb{N}$. But it is not so obvious (without a number of theorems from complex analysis) that this continuation is the only analytic one, in the same way that $\frac{1}{1-z}$ is the only analytic continuation of $\sum_{n=0}^\infty z^n$ beyond $|z|< 1$.
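The continuation is easy to exercise numerically; a minimal sketch using the recurrence (Python's math.gamma already implements the continuation at negative non-integers, so it serves as a cross-check):

```python
import math

def gamma_via_recurrence(z):
    """Gamma at negative non-integers via Gamma(z) = Gamma(z + 1) / z."""
    if z > 0:
        return math.gamma(z)
    return gamma_via_recurrence(z + 1) / z

print(gamma_via_recurrence(-1.5))  # 2.3632718012073544
print(math.gamma(-1.5))            # same value
```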
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Name of Octonions With Biquaternion Coefficients? Ordinary biquaternions are quaternions $(\mathbb{H})$ whose coefficients are complex $(\mathbb{C})$. What is the name, analogous to "biquaternions", for octonions $(\mathbb{O})$ whose coefficients are ordinary biquaternions? If there is no such name, what is the most concise descriptive phrase?
The two things you are talking about are $\mathbb H \otimes \mathbb C$ and $\mathbb O \otimes (\mathbb H \otimes \mathbb C)$ (tensor products over $\mathbb R$). I would call the former the complexified quaternions or the complex quaternion algebra (there is a unique quaternion algebra over $\mathbb C$ up to isomorphism, namely $M_2(\mathbb C)$). (I have never heard the term "biquaternion" for this, and I am reasonably well acquainted with modern terminology regarding quaternion algebras, though I have not read the old works of Hamilton, Cayley, etc.) By analogy, I would call $\mathbb O \otimes (\mathbb H \otimes \mathbb C) \simeq \mathbb O \otimes M_2(\mathbb C)$ something like the complexified quaternionic octonions, or possibly the two-by-two complex octonionic matrices, depending on how I represented them. (The latter would be as $M_2(\mathbb O_{\mathbb C})$, where $\mathbb O_{\mathbb C} = \mathbb O \otimes \mathbb C$ is the (split) octonion algebra over $\mathbb C$.) Such tensor products have been examined (in greater generality) in this paper, but I did not see a specific name for these objects, so I would venture that there is no name in common modern usage. (Admittedly, I just skimmed that paper quickly, so you might want to take a closer look.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Zero sets intersect in a line A line in $\mathbb{P}^3$ through points $[u_1:u_2:u_3:u_4]$ and $[v_1:v_2:v_3:v_4]$ is $\{[ku_1+lv_1:\dots:ku_4+lv_4]\}$ where either $k\ne 0$ or $l\ne 0$. But there's something worrying me, and it comes from the following problem. Let $Z_1=Z(X_1^2-X_0X_2)$, $Z_2=Z(X_1X_2-X_0X_3)$, $Z_3=Z(X_2^2-X_1X_3)$. I want to show that $Z_1$ intersects $Z_2$ in the union of $Z_1\cap Z_2 \cap Z_3$ and a line in $\mathbb{P}^3(\mathbb{C})$. I consider 3 cases: take $[X_0:X_1:X_2:X_3]\in Z_1\cap Z_2$ * *if $X_1\ne 0$, then $X_0,X_2,X_3\ne 0$ and therefore $X_0=X_1X_2/X_3\implies X_3X_1^2=X_1X_2^2\implies X_3X_1=X_2^2$, so $[X_0:X_1:X_2:X_3]\in Z_1\cap Z_2 \cap Z_3$ *if $X_1=X_2=0$, then clearly $[X_0:X_1:X_2:X_3]\in Z_1\cap Z_2 \cap Z_3$ *if $X_1=0, X_2\ne 0$, then $X_0=0$ and I want to conclude that $[X_0:X_1:X_2:X_3]$ belongs to the line $\{[0:0:X_2:X_3]\}$ But in the latter case $X_3$ may be zero or nonzero. If it is nonzero, then by what I said at the beginning, $\{[0:0:X_2:X_3]\}$ is the line through the points $[0:0:1:0]$ and $[0:0:0:1]$. However if $X_3$ is zero then $\{[0:0:X_2:X_3]\}=\{[0:0:X_2:0]\}$, and this is not a line because $[0:0:0:0]$ is not a point of $\mathbb{P}^3$. So I have 2 questions: * *What should I do with the case $X_3=0$? *Is the rest of my reasoning correct?
So...you're right in how you're thinking about this problem. (I actually just did this problem last semester in my first algebraic geometry class in grad school and pulled up my solution set.) You have the line consisting of the points $[0:0:1:a]$ together with the point at infinity $[0:0:0:1]$, a point lying in the set $Z_1\cap Z_2$. The point here is that your decomposition of $Z_1\cap Z_2$ must contain both the line and the variety $Z_1\cap Z_2\cap Z_3$, and the variety and the line don't have trivial intersection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Assume that $T$ is a linear transformation $T: V → ℝ$ and $T(\mathbf{v}_1)=1$ , $T(\mathbf{v}_2)=-1$, find $T(3\mathbf{v}_1-5\mathbf{v}_2)$ Assume that $T$ is a linear transformation $T: V\to\mathbb{R}$ $T(\mathbf{v}_1)=1$, $T(\mathbf{v}_2)=-1$ find $T(3\mathbf{v}_1-5\mathbf{v}_2)$ Not sure how to go about this question
As $T$ is a linear transformation, we have: $$T(3v_1-5v_2)=T(3v_1)-T(5v_2)=3T(v_1)-5T(v_2)=3\cdot1-5\cdot(-1)=3+5=8$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I rotate a line segment about a specific point on the line? If I have two points $A$ and $B$, where $AB$ is a line segment, how can I rotate $AB$ about another point $C$ which is a point on the line $AB$? Thank you in advance.
Using complex numbers: in the complex plane, multiplying by $e^{i\theta}$ amounts to a rotation around the origin by the angle $\theta$. To rotate around another point, translate so that the center moves to the origin, rotate, and translate back. Hence, for any point $a$ in the plane, $$a'=(a-c)e^{i\theta}+c.$$ This expands as $$a'_x=(a_x-c_x)\cos\theta-(a_y-c_y)\sin\theta+c_x,\\ a'_y=(a_x-c_x)\sin\theta+(a_y-c_y)\cos\theta+c_y. $$
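The formula translates directly into code; a minimal sketch using complex numbers (the sample points are illustrative):

```python
import cmath

def rotate_about(a, c, theta):
    """Rotate point a about center c by theta radians; points as complex numbers."""
    return (a - c) * cmath.exp(1j * theta) + c

# Rotating B = (2, 0) about C = (1, 0) by 90 degrees lands at (1, 1).
print(rotate_about(2 + 0j, 1 + 0j, cmath.pi / 2))  # approximately (1+1j)
```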
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding the cluster points of sequences So, I've been doing these questions and I've done 3 out of 4 (I think :/) but I'm not sure how to do the last one. Here's the question: Find the cluster points and name one convergent sub-sequence of each of the following sequences. For this problem you don't need to prove your statements. (a) $1, 1/2, 1, 1/3, 1, 1/4, 1, 1/5$,... So I said: $a_{n_k} = 1,1,1,1,1,1,\ldots$ with $\lim_{k\to\infty} a_{n_k} = 1$, and $a_{n_k+1} = 1/1, 1/2, 1/3, 1/4,\ldots$ with $\lim_{k\to\infty} a_{n_k+1} = 0$. Therefore the two subsequences converge to 1 and 0, and consequently the cluster points of the sequence $(a_n)$ are 1 and 0. (b) $(a_n)$, where $a_n = 1+\frac{1}{n^2}$ for all $n \in \mathbb{N}$ For this one I just took the limit of $a_n$ and got 1, so I concluded that the cluster point is 1. (c) $(a_n)$, where $a_1 = 5$, $a_{2k} = 2+\frac{1}{2k}$ and $a_{2k+1} = 6-\frac{1}{2k+1}$ for $k \in \mathbb{N}.$ Thus, the sequence is: $5, 2\frac{1}{2}$, $5\frac{2}{3}$, $2\frac{1}{4}$, $5\frac{4}{5}$,... For this one I said there were three subsequences ($a_1, a_{2k}, a_{2k+1}$), found the limits of each, and got that the cluster points are 6, 2 and 5. (d) $1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,1...$ I'm stuck on this one so I'm unsure what to do; any help on this would be GREATLY appreciated. Also, could you check my previous solutions to see if I've done them correctly, or if there are any errors? Thanks!! :)
Your answers to $a$ and $b$ are correct, good job. In the answer to $c$, $5$ is not a cluster point. Which subsequence converges to $5$? Remember: the cluster points of a sequence do not change if we remove finitely many terms from the start of the sequence, because the definition of a cluster point (or limit point) involves quantifiers of the form $\forall \epsilon > 0, \forall N \in \mathbb N, \exists n > N$ etc., so the points under scrutiny are the $a_n$ with $n>N$, where $N$ may be arbitrarily large. In short, the tail of the sequence matters as far as cluster points are concerned, not the head. So the answer is only that $2,6$ are the cluster points. Furthermore, a subsequence is itself a sequence, so $a_1$ is a term of the sequence $(a_n)$, not a subsequence. To give a subsequence, you have to specify infinitely many terms with distinct indices, which is not the case with $a_1$. Similarly, you can approach $d$. This is a little more subtle problem, but is nevertheless very interesting. Claim: Every positive integer is a cluster point of this sequence. Proof: Every positive integer appears infinitely many times in the sequence, because $k$ appears first when we write down the list $1,2, ...,k$, then again when we write down $1,2, ..., k+1$, and similarly for any $n>k$: since the list of terms $1,2,...,n$ is contained in the sequence, $k$ appears in the sequence infinitely many times. So just take the constant subsequence $k,k,k,\ldots$, and you will see that $k$ is a cluster point of the sequence for every positive integer $k$. Now, this I leave you to see. Use the $\epsilon-\delta$ definition, try playing the contradiction game, but the end result is: A sequence of integers can only have limit points which are integers. In case you do not wish to solve this, here is a spoiler (hover over the yellow box): Suppose $x$ is not an integer and is a limit point of the sequence, and let $\tau$ be the distance of $x$ from the nearest integer; then $\tau$ is greater than zero. If $x$ is a cluster point, then for $\epsilon = \frac \tau 4$, there should exist infinitely many $n$ with $|a_n - x| < \epsilon$. But the $a_n$ are integers, and by the definition of $\tau$ no integer is within $\epsilon$ of $x$, so this cannot happen! These two points together show that the set of limit points is exactly the set of positive integers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is $5^{n+1}+2\cdot 3^n+1$ a multiple of $8$ for every natural number $n$? I have to show by induction that this expression is a multiple of 8. I have tried everything, but I can only show that it is a multiple of 4. Some hints? The expression is $$5^{n+1}+2\cdot 3^n+1 \hspace{1cm}\forall n\ge 0,$$ and since it is a multiple of 8, one can write $$5^{n+1}+2\cdot 3^n+1=8\cdot m \hspace{1cm}\text{for some } m\in\mathbb{N}.$$
HINT: If $a_n=(2b+1)^n$, then $$a_{m+2}-a_m=8(2b+1)^m\cdot\dfrac{b(b+1)}2,$$ which is a multiple of $8$ since $b(b+1)$ is even.
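A one-line empirical check of the original claim (no substitute for the induction, but reassuring):

```python
# 5^(n+1) + 2*3^n + 1 is divisible by 8 for all tested n.
print(all((5**(n + 1) + 2 * 3**n + 1) % 8 == 0 for n in range(500)))  # True
```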
{ "language": "en", "url": "https://math.stackexchange.com/questions/2204834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Finding $ \int \frac{5x^2-x-4}{x^5+x^4+1}dx$ Finding $\displaystyle \int \frac{5x^2-x-4}{x^5+x^4+1}dx$ Attempt: $\displaystyle I = \int\frac{5x^2-x-4}{x^5+x^4+1}dx = \int\frac{5x^2-x-4}{(x^2+x+1)(x^3-x+1)}dx$ because $\omega,\omega^2$ (where $\omega$ is a primitive cube root of unity) are roots of $x^5+x^4+1 = 0$, so one factor is $(x-\omega)(x-\omega^2) = (x^2+x+1)$. Could someone help me solve it? Thanks.
HINT: Using partial fraction decomposition, $$\frac{5x^2-x-4}{(x^2+x+1)(x^3-x+1)}=\frac{Ax+B}{x^2+x+1}+\frac{Cx^2+Dx+E}{x^3-x+1}$$ $${5x^2-x-4}=(Ax+B)(x^3-x+1)+(Cx^2+Dx+E)(x^2+x+1)$$ Solving gives $$\frac{5x^2-x-4}{(x^2+x+1)(x^3-x+1)}=\frac{-3x-3}{x^2+x+1}+\frac{3x^2-1}{x^3-x+1}$$ Another hint: at some point you may require $$\int\frac{1}{x^2+x+1}dx.$$ Complete the square and use $$\int\frac{1}{a^2+x^2}dx=\frac{\arctan(\frac xa)}{a}+C.$$ Can you do it now?
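The decomposition can be verified symbolically; a short sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
expr = (5*x**2 - x - 4) / ((x**2 + x + 1) * (x**3 - x + 1))
claimed = (-3*x - 3) / (x**2 + x + 1) + (3*x**2 - 1) / (x**3 - x + 1)

print(sp.simplify(expr - claimed))  # 0, confirming the decomposition
print(sp.apart(expr, x))            # reproduces the same partial fractions
```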
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it possible to argue that $\nabla F(x^{*})\neq \textbf{0}$? Let $F:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be differentiable. For every affine subset $L$ of $\mathbb{R}^{n}$ of the form \begin{eqnarray} L=\{{x}\in \mathbb{R}^{n}:A{x}=b\} \end{eqnarray} for some $m\times n$ matrix $A$ having full rank and $b\in \mathbb{R}^{m}$, with $m\leq n$, it is known that $F$ attains a $\textbf{unique minimum}$ (let's call it $x^{*}$) over $L$. Further, it is known that $F$ attains a unique minimum on $\mathbb{R}^{n}$ itself. Let's call this point as $x^{0}$. Given $L\subset \mathbb{R}^{n}$ of the above form satisfying * *$x^{0}\notin L$, and *$x^{*}\in L$ is the unique point where $F$ attains a minimum in $L$, we know, from Lagrange's theorem, that there exists $\lambda^{*}\in \mathbb{R}^{m}$ such that \begin{eqnarray} \nabla F(x^{*})=A^{T}\lambda^{*}. \end{eqnarray} Is it possible to argue that $\nabla F(x^{*})=A^{T}\lambda^{*}\neq \textbf{0}$, the all-zero vector? If so, can someone provide a proof of the same? Since $F$ attains a global minimum at $x^{0}$, it is clear that \begin{eqnarray} \nabla F(x^{0})=\textbf{0}. \end{eqnarray} Since $x^{0}$ is the global minimum, my question is if it is possible that $\nabla F(x^{*})=\textbf{0}$ for $x^{*}\neq x^{0}$ that is the unique point of minimum in $L$. Nothing more is known about the function $F$, except that it is differentiable. Can imposing more constraints on $F$ (such as requiring $F$ to be strictly convex) pave way for arguing that $\nabla F(x^{*})$ should be a non-zero vector? Add 1: If we are told that $\nabla F(x^{*})=\textbf{0}$ for some $x^{*}\in \mathbb{R}^{n}$, then we may not be able to conclude if $x^{*}$ is a point of minimum or maximum (or even saddle). However, knowing apriori the fact that $x^{*}$ is the unique point in $L$ where $F$ attains a minimum, and that $x^{0}$ is the unique point in $\mathbb{R}^{n}$ where $F$ attains a minimum, isn't it reasonable to say $\nabla F(x^{*})\neq \textbf{0}$? For, if it were $\textbf{0}$, there would be two points (namely $x^{*}$ and $x^{0}$) where $F$ attains minima in $\mathbb{R}^{n}$? This is just a thought.
It is possible that $\nabla F(\mathbf x^*)=0$ for $\mathbf x^*\ne \mathbf x^0$. Example: take $f(t)=(t-1)^3$ and consider $F(x,y)=f(x^2+y^2)$. Basically, the graph of $F$ is the surface of revolution of the graph of $f$ (figure omitted). The level sets of $F$ are circles; hence, the minimum on any line is unique (a circle and a tangent line meet in a unique point). However, the equation $$ \nabla F(x,y)=2 f'(x^2+y^2)\begin{bmatrix}x\\y\end{bmatrix}=0 $$ has several solutions: the origin (the global minimum of $F$) and the circle $x^2+y^2=1$ where $f'=0$. Hence, taking $L=\{y=1\}$, for example, will give $\mathbf x^*=(0,1)\ne \mathbf x^0=(0,0)$ with $\nabla F(\mathbf x^*)=0$. The condition that $F$ attains a unique minimum on all linear manifolds is equivalent to the level subsets $\{F(\mathbf x)\le C\}$ being strictly convex, which makes $F$ necessarily strictly quasiconvex. If we strengthen this to pseudoconvexity (or, in particular, convexity) then $$ \nabla F(a)=0\quad\Rightarrow\quad a\text{ is the global minimum} $$ and by uniqueness it can happen only at $x^0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding the asymptotics of an integral I need to find the asymptotic behaviour as $n\to\infty$ of $$\int\limits_{0}^{\infty} \frac{\ln(1 + \frac{x}{\sqrt{n}})}{x + x^3} dx$$ I've taken $b_{n} = n^{-1/2}$, which reduces the problem to finding $$\lim_{n\to\infty}\ \sqrt{n}\int\limits_{0}^{\infty} \frac{\ln(1 + \frac{x}{\sqrt{n}})}{x + x^3} dx,$$ but now I'm stuck. Can I bring the limit inside the integral? Am I supposed to show that the limit is equal to 1, so that $\lim_{n\to\infty} \frac{a_n}{b_n} = 1$?
I would use squeezing, with the inequalities $$ \frac{x}{x+1}<\ln(1+x)<x. $$ Inserting and calculating, you will find that $$ \frac{\sqrt{n}\pi-\ln n}{2+2n}\leq \int_0^{+\infty}\frac{\ln(1+x/\sqrt{n})}{x+x^3}\,dx\leq \frac{1}{\sqrt{n}}\frac{\pi}{2}. $$ It follows that $$ \lim_{n\to+\infty}\sqrt{n}\int_0^{+\infty}\frac{\ln(1+x/\sqrt{n})}{x+x^3}\,dx=\frac{\pi}{2}. $$
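For a numerical confirmation of the limit, here is a minimal sketch assuming SciPy is available (the sample values of $n$ are illustrative; the integrand is bounded near $0$, so quad handles the infinite range directly):

```python
from math import log, pi, sqrt
from scipy.integrate import quad

for n in (10, 1000, 100000):
    val, _ = quad(lambda x: log(1 + x / sqrt(n)) / (x + x**3), 0, float('inf'))
    print(n, sqrt(n) * val)  # approaches pi/2
print(pi / 2)                # 1.5707963...
```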
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the integral $\int_0^1 \frac{x^{1/2}}{1+x^{1/3}}dx$? How to find $$\int_0^1 \frac{x^{1/2}}{1+x^{1/3}}dx$$ ? My attempt: I made $t^6=x$ and got $\displaystyle \int_0^1 \frac{6t^8}{1+t^2}dt$ and got stuck.
Try writing the numerator as $$6t^8 + 6t^6 - 6t^6 - 6t^4 + 6t^4 + 6t^2 - 6t^2 - 6 + 6.$$
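Carrying the hint through (a worked completion along the hint's grouping, added for concreteness): the numerator telescopes into a polynomial plus a remainder, $$\frac{6t^8}{1+t^2}=6t^6-6t^4+6t^2-6+\frac{6}{1+t^2},$$ so $$\int_0^1\frac{6t^8}{1+t^2}\,dt=\frac67-\frac65+2-6+6\cdot\frac{\pi}{4}=\frac{3\pi}{2}-\frac{152}{35}.$$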
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
For sheaves, $f_*g_*=(fg)_*$ is an equality, but $g^*f^* \cong (fg)^*$ only a canonical isomorphism I wonder why one cares that the pushforward of quasicoherent sheaves satisfies the equality $f_*g_*=(fg)_*$, while for the pullback there is only a canonical isomorphism $g^*f^* \cong (fg)^*$. I believe the fact follows because $A \otimes_B C \otimes_C D \cong A \otimes_B D$ is only a canonical isomorphism, not an equality. The fact is mentioned in Vistoli's Grothendieck's FGA explained, 3.2.1, where $QCoh$ is given as a natural example of a pseudo-functor. Thus the other side of the question: why is it only a pseudo-functor, and does one really care about this (or is it like set-theoretic problems: one can always solve them, unless one is doing something really stupid)?
It seems to me that the question essentially contains its answer. One would care about the differences, if one cares to know what things are! Having the equalities $(f\circ g)_\ast=f_\ast\circ g_\ast$ and ${id_X}_\ast=id_{\mathbf{QCoh}(X)}$, for every composable pair of morphisms $f$ and $g$ in $\mathbf{Sch}/S$, and for every $X\in \mathbf{Sch}/S$, implies that there is a functor $$ \mathbf{QCoh}:\mathbf{Sch}/S\to \mathbf{CAT}, $$ where $\mathbf{CAT}$ is the category of large categories and functors between them, with $\mathbf{QCoh}(f)=f_\ast$. On the other hand, in general, ${()}^\ast$ is not a functor between categories; it is rather a pseudofunctor $$ \mathbf{QCoh}:(\mathbf{Sch}/S)^{op}\to \mathbf{CAT}_2, $$ where $\mathbf{CAT}_2$ is the strict $2$-category of large categories, functors between them, and natural transformations between the latter, with $\mathbf{QCoh}(f)=f^\ast$. The question whether one needs to distinguish between an equality and a canonical isomorphism is essentially the same as asking if, in a group, one needs to distinguish between the identity element and a choice of an element of the group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the Law of Quadratic Reciprocity necessary or just convenient for calculation? I'm rather confused. Based on my understanding, you can solve a quadratic in some ring if the discriminant is a square in that ring. So if I have: $$ax^2 + bx + c \equiv 0 \pmod{n}$$ Then all I need to do is determine is where $b^2 - 4ac$ is a square modulo $n$. So if I calculate: $$\left( \frac{b^2 - 4ac}{p} \right)$$ Where $\left( \frac{p}{q} \right)$ is the Legendre symbol (not sure how to actually write it). If I get $1$, then I can solve it, and if I get $-1$, then I can't. Why then do I need the Law of Quadratic Reciprocity, which, as I understand, is simply that: $$\left( \frac{p}{q} \right)\left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}$$ But I could also simply calculate it by trying every number modulo $n$, no? So is the Law of Quadratic Reciprocity necessary for determining whether a given quadratic can be solved, or does it just speed up the calculation?
It just speeds up the calculation, but it speeds up calculation rather dramatically. To compute $\left(\frac ap\right)$ by checking each value $0 \le b < p$ to see if $a \equiv b^2 \pmod p$, we need $O(p)$ steps. On the other hand, we have an $O(\log a \log p)$ algorithm by doing two things: using quadratic reciprocity, and extending our calculations to the Jacobi symbol when the bottom number is composite. (It's essentially the Euclidean algorithm, where at every step we also track the sign changes that follow from QR.) If we just use quadratic reciprocity, but stick to the Legendre symbol, it's still slow, since for something like $\left(\frac{15}{17}\right)$ we first have to do the hard computational work of factoring $15$ into $3\cdot 5$. (But occasionally, the Legendre symbol is also useful in proofs, not just for computation.) Note that all this will just tell you whether a square root exists, not find it for you. But even if you want to do the latter, either the Tonelli–Shanks algorithm or Cipolla's algorithm is still faster than brute force for large primes $p$.
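To make the speed-up concrete, here is a standard binary Jacobi-symbol sketch built on exactly the two ingredients named above (the supplementary law for 2 and the reciprocity flip):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0; equals the Legendre symbol for prime n."""
    assert n > 0 and n % 2 == 1
    a %= n
    t = 1
    while a:
        while a % 2 == 0:          # pull out 2s: (2/n) = (-1)^((n^2 - 1)/8)
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a                # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

print(jacobi(3, 7), jacobi(2, 7))  # -1 1  (3 is not, 2 is a QR mod 7)
```

Note that for composite $n$ a Jacobi value of $1$ does not by itself certify that $a$ is a square modulo $n$; the point is that the symbol can be computed without factoring, which is what makes evaluating Legendre symbols fast.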
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Having trouble understanding Taylor Series I'm having trouble interpreting the Taylor series formula. The nth term of the Taylor series looks like the nth integral of f(x). Is this correct? If so, I don't quite understand the meaning of the nth integral, and how it is able approximate f(x) at higher values of n. Edit: I was looking at an example where f(x) = e^x, which looked like the nth integral as n increased. Specifically, I'm trying to make sense of the division by n! and how this helps approximating the original function.
From a purely symbol manipulation point of view, you can easily obtain the Taylor series formula in the following way. Start by assuming $$f(x) \; = \; a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5 + a_6x^6 + \; \cdots $$ Plugging $x=0$ into this tells us that $f(0) = a_0.$ Now differentiate both sides, assuming we can differentiate a sum of infinitely many terms like we can differentiate a sum of finitely many terms: $$f'(x) \; = \; a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + 5a_5x^4 + 6a_6x^5 + \; \cdots $$ Plugging $x=0$ into this tells us that $f'(0) = a_1.$ Differentiate again: $$f''(x) \; = \; 2a_2 + (3)(2)a_3x + (4)(3)a_4x^2 + (5)(4)a_5x^3 + (6)(5)a_6x^4 + \; \cdots $$ Plugging $x=0$ into this tells us that $f''(0) = 2a_2,\;$ or $\;a_2 = \frac{1}{2}f''(0).$ Differentiate again: $$f^{(3)}(x) \; = \; (3)(2)a_3 + (4)(3)(2)a_4x + (5)(4)(3)a_5x^2 + (6)(5)(4)a_6x^3 + \; \cdots $$ Plugging $x=0$ into this tells us that $f^{(3)}(0) = (3)(2)a_3,\;$ or $\;a_3 = \frac{1}{3!}f^{(3)}(0).$ Differentiate again: $$f^{(4)}(x) \; = \; (4)(3)(2)a_4 + (5)(4)(3)(2)a_5x + (6)(5)(4)(3)a_6x^2 + \; \cdots $$ Plugging $x=0$ into this tells us that $f^{(4)}(0) = (4)(3)(2)a_4,\;$ or $\;a_4 = \frac{1}{4!}f^{(4)}(0).$ Keep going in this manner.
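The coefficient-matching argument is easy to watch in action; a minimal sketch assuming SymPy, with an arbitrary smooth test function:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.cos(x)   # any smooth test function

# a_n = f^(n)(0) / n!, exactly as derived by repeated differentiation above
coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(6)]
print(coeffs)                 # [1, 1, 0, -1/3, -1/6, -1/30]
print(sp.series(f, x, 0, 6))  # the same coefficients appear in the series
```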
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Semantic Consequence Definition "What is the difference between ⊨ (semantic consequence) and ⊢ (syntactic consequence)?" is a question that has been posted before, but I want a more specific answer. For example, this video explains what a syntactic consequence is. After watching it, I take it that we say p ⊢ q when p -> q is a tautology, where p and q are given propositions forming the tautology. What is an easy way to explain what a semantic consequence is? I have been obsessing over this question for a while. Any help would be greatly appreciated.
$\vdash$ is used to make statement about formal proof systems, which include rules of inference, that say: "If you have a (or two) statement(s) that look like such-and-so, then you can write down a new statement that looks like this-and-that". For example, many formal proof systems include the following rule of inference called Modus Ponens: $$\varphi$$ $$\varphi \rightarrow \psi$$ $$\therefore \psi$$ So with this rule, I can, for example, infer $B \land C$ from $A$ and $A \rightarrow (B \land C)$. The fact that I can do this within the proof system we write as: $A, A \rightarrow (B \land C) \vdash B \land C$. Now, as it so happens, $B \land C$ does in fact logically follow from $A$ and $A \rightarrow (B \land C)$. That is, the way we defined the formal semantics (think truth-tables) is such that whenever $A$ and $A \rightarrow (B \land C)$ are true, $B \land C$ will have to be true as well. And that we write as $A, A \rightarrow (B \land C) \vDash B \land C$. But maybe the best way to illustrate the difference between $\vdash$ and $\vDash$ is to consider a case where they don't both hold at the same time. So, suppose I write a new logic textbook, and suppose that I develop a very simple system for making formal proofs, in that it has a single rule of inference: Hokus Ponens $$\therefore \varphi$$ Now, with Hokus Ponens, I can derive anything from nothing. Thus, for example, it will be true that $P \vdash Q$. Here is the derivation/formal proof: * *$P$ Premise *$Q$ Hokus Ponens! But obviously, $Q$ does not logically follow from $P$. That is: $P \not \vDash Q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Apparently sometimes $1/2 < 1/4$? My son brought this home today from his 3rd-grade class. It is from an official Montgomery County, Maryland mathematics assessment test: True or false? $1/2$ is always greater than $1/4$. Official answer: false. Where has he gone wrong? Addendum, at the risk of making the post no longer appropriate for this forum: questions about context are fair. This seems to have been a one-page (front and back) assessment. Here is the front (image omitted); notice the date and title. Based on the title, it seems to me that this is an assessment about the number line, in which case my son's picture and written proof are inappropriate and it would have been better to locate $1/2$ and $1/4$ on the line and state something like "No matter how many times you check, 1/2 is always to the right of 1/4." However, based on the teacher's response it seems the class has entered into a quagmire and is mixing up numbers with portions.
'What about this?' is ABSURD. Fractions are real numbers and $1/2$ is NOT smaller than $1/4$. Period. If the teacher wants to compare half of a biscuit to a quarter of a pizza, then these are no longer numbers but physical quantities (masses or volumes), which have units, and one needs to bring them to an appropriate common unit of measure before comparing. One should also consider whether the quantity types correspond at all (say, not to compare half an hour to a quarter of a mile!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2205890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "146", "answer_count": 7, "answer_id": 5 }
How can one show that $\sum_{i=1}^{\infty}\sum_{j=1}^{2k}(-1)^{j-1}{2\over i+j}=H_k?$ Given the double sum $(1)$ $$\sum_{i=1}^{\infty}\sum_{j=1}^{2k}(-1)^{j-1}{2\over i+j}=\color{blue}{H_k}\tag1$$ where $H_k$ is the $k$-th harmonic number. How can one prove $(1)$? Rewrite $(1)$ as $$\sum_{i=1}^{\infty}\left({2\over i+1}-{2\over i+2}+{2\over i+3}-\cdots-{2\over i+2k}\right)\tag2$$ Rewrite $(2)$ as $$\sum_{i=1}^{\infty}\left({2\over (i+1)(i+2)}+{2\over (i+3)(i+4)}+{2\over (i+5)(i+6)}+\cdots+{2\over (i+2k-1)(i+2k)}\right)\tag3$$ Help required, not sure what is the next step. Thank you.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{i = 1}^{\infty}\sum_{j = 1}^{2k}\pars{-1}^{\,j - 1}{2 \over i + j} & = 2\sum_{i = 1}^{\infty} \sum_{j = 1}^{2k}\pars{-1}^{\,j - 1}\int_{0}^{1}x^{i + j - 1}\,\dd x = 2\sum_{i = 1}^{\infty}\int_{0}^{1}x^{i}\sum_{j = 1}^{2k}\pars{-x}^{\,j - 1}\,\dd x \\[5mm] & = 2\sum_{i = 1}^{\infty}\int_{0}^{1}x^{i}\,{\pars{-x}^{2k} - 1 \over -x - 1}\,\dd x = 2\int_{0}^{1}{1 - x^{2k} \over 1 + x}\sum_{i = 1}^{\infty}x^{i}\,\dd x \\[5mm] & = 2\int_{0}^{1}{x - x^{2k + 1} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, \int_{0}^{1}{x^{k} - 1 \over x - 1}\,\dd x = \int_{0}^{1} \sum_{n = 1}^{k}x^{n - 1}\,\dd x \\[5mm] & = \sum_{n = 1}^{k}{1 \over n} = \bbx{\ds{H_{k}}} \end{align}
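For anyone who wants to double-check the identity numerically, here is a small Python sketch (the outer sum is truncated at $N$ terms, so expect agreement only to roughly $2k/N$):

```python
from math import fsum

def H(k):
    return fsum(1.0 / m for m in range(1, k + 1))

def lhs(k, N=100000):
    # truncate the outer sum at N; the tail is O(k/N)
    return fsum(fsum((-1) ** (j - 1) * 2.0 / (i + j) for j in range(1, 2 * k + 1))
                for i in range(1, N + 1))

for k in range(1, 5):
    print(k, round(lhs(k), 4), round(H(k), 4))   # the two columns agree to ~1e-4
```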
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
For which values of $z$ does the sequence $(e^{nz})$ converge? Where $z$ is a complex number. For which values of $z$ does the sequence $(e^{nz})$ converge? Where $z$ is a complex number. I was able to figure it out for numbers with null imaginary part, but I get troubled when considering the imaginary part. Being $z = a + bi$. With null imaginary part what we have is: $$ e^{nz} = e^{n\cdot(a+bi)} $$ As $b=0$, $$ e^{n\cdot(a+bi)} = e^{na}.$$ It converges if $a\leq 0$, and diverges if $a > 0$. I could not develop the same argument when $b\neq 0$. What we have is: $$ e^{n\cdot(a+bi)} = e^{na}\cdot e^{nbi}$$ Which can be rewritten as: $$ e^{na} \cdot [\cos (nb) + i\cdot \sin(nb)]$$ At this part I could not go any further. I tried thinking about the second term as being bounded but couldn't quite figure out if it really is. Is this the right track? How can I show that $[\cos (nb) + i\cdot \sin(nb)]$ is bounded?
Put $e^z=:w$. Then $$e^{nz}=w^n$$ for all $n\geq0$. The complex sequence $\bigl(w^n\bigr)_{n\geq0}$ converges iff either $|w|<1$ or $w=1$. The first is the case if ${\rm Re}(z)<0$ and the second if $z=2k\pi i$ for some $k\in{\mathbb Z}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Differentiation of $g(x)=f(x+c)$ Suppose $g(x)=f(x+c)$. Prove $g'(x)=f'(x+c)$ I know that $$g'(x)=\lim_{h\to 0} \frac{g(x+h)-g(x)}{h}.$$ That is all I have; please help.
Fix $x$ and let $\tilde{x} = x+c$. Then $$\lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = \lim_{h \to 0} \frac{f((x+h)+c) - f(x+c)}{h} $$ $$= \lim_{h \to 0} \frac{f(\tilde{x}+h) - f(\tilde{x})}{h} = f'(\tilde{x}) = f'(x+c).$$ You could also have used the chain rule: letting $u(x) = x+ c$, we have that $g = f \circ u$, so that $g'(x) = f'(u(x)) \cdot u'(x) = f'(x+c) \cdot 1 = f'(x+c)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
About irreducible representations of C* algebra I have been thinking about what we can say about decomposing any representation as a direct sum of irreducible representations. I know that every cyclic representation corresponds to a representation coming from a state, which is the essence of the GNS construction. Moreover such a representation is irreducible $\textbf{iff}$ the state is a pure state. I also know that every representation can be decomposed into a direct sum of cyclic representations. Now my question is whether we can decompose any representation of a C* algebra $A$ into a direct sum of irreducible representations. It seems to me that not every representation can be decomposed in this way. Is there a description/classification of those representations which can be decomposed in this way? I don't know much representation theory but I think such results are true for representations of some other class of objects.
It can be shown that if $A$ is separable and $\pi:A\to B(H)$ is a representation with $H$ separable, then $\pi$ is approximately unitarily equivalent to a direct sum of irreducible representations. But the "approximate" part is essential. For instance let $A=C_r^*(\mathbb F_2)$, the reduced C$^*$-algebra of the free group in two generators, and consider the identity representation $\pi:A\to B(\ell^2(\mathbb F_2))$ (where $A$ is the C$^*$-algebra generated by the image of the left regular representation of $\mathbb F_2$). If $\pi_1$ is irreducible and $\pi=\pi_1\oplus\pi_2$, then $p=\pi_1(I_A)$ is a projection in the commutant $C_r^*(\mathbb F_2)'=R(\mathbb F_2)$, which is known to be a II$_1$-factor. Because $A$ is simple, $\pi_1$ is faithful, and so $\pi_1(A)=pA$; and $pA$ cannot be dense in $B(pH)$, because $p$ has nontrivial subprojections in $R(\mathbb F_2)$, so $pA$ has nontrivial commutant in $B(pH)$, contradicting the irreducibility of $\pi_1$. With similar ideas as above, one can show that any representation of a II$_1$-factor into a separable Hilbert space cannot be a sum of irreducible representations, because any irreducible representation of a II$_1$-factor is uncountably-dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Use the method of characteristics to solve $u_t+uu_x+\frac{1}{2}u=0$. Use the method of characteristics to solve $$u_t+uu_x+\frac{1}{2}u=0, \quad t>0, \quad {-\infty}<x<\infty$$ $$u(x,0)=\sin(x)\quad {-\infty}<x<\infty$$ (solution may be expressed in implicit form). Show that a shock solution is possible if $t=t_c=2\ln(2)$. Notes: I have used the method of characteristics to solve the wave equation and have covered the basic theory in class, but I am a bit lost in the application. Any hints would be appreciated.
The shock between states $u_L,u_R$ moves with speed $(u_L+u_R)/2$. This follows from applying conservation to an infinitesimal rectangular control volume that has part of the shock path as a diagonal. The source term has no effect on this. Past the shock formation time, the solution is double-valued, and in this region the shock is governed by the ODE $$\frac{dx_s}{dt}=\frac{1}{2}(u_1+u_2)$$ There will not be an explicit solution to this because $u_{1,2}$ are only given implicitly (but probably that was not what you meant).
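As a complement, here is a sympy sketch of the shock-formation computation the question asks about: along characteristics $du/dt=-u/2$ and $dx/dt=u$, so with the given data $u(x,0)=\sin x$ one gets $u=\sin(x_0)e^{-t/2}$ and $x=x_0+2\sin(x_0)(1-e^{-t/2})$; the earliest crossing of characteristics (at $x_0=\pi$) gives the claimed $t_c$:

```python
import sympy as sp

t, x0 = sp.symbols('t x0', real=True)

# Characteristics of u_t + u u_x + u/2 = 0 with u(x,0) = sin(x):
#   du/dt = -u/2  =>  u = sin(x0) * exp(-t/2)
#   dx/dt = u     =>  x = x0 + 2*sin(x0)*(1 - exp(-t/2))
x = x0 + 2 * sp.sin(x0) * (1 - sp.exp(-t / 2))

# Characteristics first cross where dx/dx0 = 0; the minimum over x0 occurs
# at cos(x0) = -1, i.e. x0 = pi, which yields the critical time
jac = sp.diff(x, x0)
print(sp.solve(sp.Eq(jac.subs(x0, sp.pi), 0), t))   # [2*log(2)], i.e. t_c = 2 ln 2
```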
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Open set on $E^n$ (the $n$-dimensional euclidean space) Let $A$ be a countable set of $E^n$ (the $n$-dimensional euclidean space). Show that $A$ is not an open set of $E^n$. Definition of open set Let $(X, \mathcal T)$ be a topological space. A subset $U \subset X$ is called an open set of $X$ if $U \in \mathcal T$. For $x \in X$, if $U$ is an open set and if $x \in U$, then we call $U$ a neighborhood of $x$ How to show that $A$ is not an open set of $E^n$ ?
For $n>1$ and non-empty $A$: Let $x=(x_1,...,x_n)\in A.$ If $A$ is open then for some $r>0$ the open ball $B(x,r)$ of radius $r$, centered at $x$, is a subset of $A.$ The real interval $(-r+x_1,r+x_1)$ is uncountable and $$A\supset B(x,r)\supset (-r+x_1,r+x_1)\times (x_2,...,x_n)$$ which is uncountable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the domain and range of $f(x) = \sqrt {\frac{x+1}{x+2}}$ Find the domain and range of $$f(x) = \sqrt {\frac{x+1}{x+2}}$$ I got the domain $[-1, \infty)$ but the answer contains $(-\infty, -2)$ along with it. And how to calculate range?
We must have $$\frac{x+1}{x+2}\geq 0.$$ We consider the following cases: Case 1. Suppose that $x+1\geq 0$ and $x+2>0$. Then $x\geq -1$ and $x>-2$. Thus, $$SS_1=[-1,\infty).$$ Case 2. Suppose that $x+1\leq 0$ and $x+2<0$. Then $x\leq -1$ and $x<-2$. Thus, $$SS_2=(-\infty,-2).$$ Hence, domain$=SS_1\cup SS_2$. Let $y=f(x)=\sqrt{\frac{x+1}{x+2}}$. Note that $y\geq 0$. Now, $$y^2=\frac{x+1}{x+2}=1-\frac{1}{x+2}$$ and we get $$x=\frac{1}{1-y^2}-2\qquad;y\neq\pm 1$$ For $x\geq -1$ we have $\frac{1}{x+2}\in(0,1]$, so $y\in[0,1)$; for $x<-2$ we have $\frac{1}{x+2}<0$, so $y>1$. Because $y\geq 0$ and every such value is attained, it follows that the range is $[0,\infty)\smallsetminus\{1\}$.
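A quick numerical sanity check of the domain and range (a numpy sketch; the sampling windows are arbitrary):

```python
import numpy as np

f = lambda x: np.sqrt((x + 1) / (x + 2))

right = f(np.linspace(-1, 1000, 200001))       # branch x >= -1
left = f(np.linspace(-1000, -2.0001, 200001))  # branch x < -2

print(right.min(), right.max())   # values fill [0, 1): sup is 1, never attained
print(left.min(), left.max())     # values in (1, oo): blows up near x = -2
```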
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing that the hyperintegers are uncountable In class, we constructed the hyperintegers as follows: Let $N$ be a normal model of the natural numbers with domain $\mathbb{N}$ in the language $\{0, 1, +, \cdot, <, =\} $. Also let $F$ be a fixed nonprincipal ultrafilter on $\omega$. Then we have $N^*$ as the ultrapower $N^\omega /F$ with domain $\mathbb{N}^{\omega} / F$ . Now I need to show that $N^*$ is uncountable (this means that its cardinality is $2^{\aleph_0}$, I guess). The domain of $N^*$ consists of the equivalence classes of functions $\{f_F : f \mbox{ a function from }\mathbb{N} \to \mathbb{N}\}$ Further on, we have $N^{\sharp}$ which is the equivalence classes of all the constant functions. This is my intuition: I know that $N$ and $N^{\sharp}$ are isomorphic (hence have the same cardinality, which is $\aleph_0$ since $N$ is a model for the natural numbers). Furthermore I know that $N^{\sharp}$ is elementarily equivalent to $N^*$. But $N^*$ has an extra 'copy' of $N^\sharp$ on top of it, the hyperintegers. So then the cardinality of $N^*$ should be $2^{\aleph_0}$ (does that even exist? And is that uncountable?). I hope someone can help me with a formal proof.
Here's a diagonalization argument. Let $\left(\mathbf{x}^{(m)} \right)_{m \in \mathbb{N}}$ denote a sequence of natural sequences $\left(x_{n}^{(m)} \right)_{n \in \mathbb{N}} \in \mathbb{N}^{\mathbb{N}}$. Define a sequence $\left(y_n \right)_{n \in \mathbb{N}}$ by $$y_{n} = 1 + \max_{1 \leq m \leq n} \left(x_{n}^{(m)} \right) .$$ Let $\sim$ denote the ultrafilter equality, i.e. $(a_{k}) \sim (b_{k}) \iff \{ k \in \mathbb{N} : a_k = b_k \} \in F$. I claim that $(y_n) \not \sim \left( x_{n}^{(m)} \right)$ for all $m$. To see this, note that $y_n > x_{n}^{(m)}$ for all $n \geq m$, so $\left\{ n : y_n = x_{n}^{(m)} \right\} \subseteq \{ 1, \ldots , m - 1 \}$, which means the set is finite. But a nonprincipal ultrafilter contains no finite sets, so $\left\{ n : y_n = x_{n}^{(m)} \right\} \not \in F$. Thus $(y_n) \not \sim \left(x_{n}^{(m)} \right)$ for all $m$. Thus $^{*} \mathbb{N} \setminus S$ is non-empty for all countable $S$. Note that in fact we can say that $(y_n) > \left(x_{n}^{(m)} \right)$ as a nonstandard natural, so we can elaborate that every countable subset of $^{*} \mathbb{N}$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2206831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Showing that $\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}\sum_{j=0}^{k}{H_{j+1}\over j+1}={1\over (n+1)^3}$ Consider this double sum $(1)$ $$\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}\sum_{j=0}^{k}{H_{j+1}\over j+1}={1\over (n+1)^3}\tag1$$ where $H_n$ is the $n$-th harmonic number. An attempt: Rewrite $(1)$ as $$\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}\left(H_1+{H_2\over 2}+{H_3\over 3}+\cdots+{H_{k+1}\over k+1}\right)\tag2$$ Recall $$\sum_{k=0}^{n}(-1)^k{n\choose k}{1\over k+1}={1\over n+1}\tag3$$ Not sure how to continue.
We may notice that $$ \sum_{n\geq 1}\frac{x^n}{n}=-\log(1-x),\qquad \sum_{n\geq 1} H_n x^n = -\frac{\log(1-x)}{1-x}\tag{1} $$ hence $$ \sum_{n\geq 1}\frac{H_n}{n+1} x^{n}=\frac{\log^2(1-x)}{2x},\qquad \sum_{n\geq 1}\frac{H_{n+1}}{n+1} x^{n}=\frac{\log^2(1-x)+2\text{Li}_2(x)}{2x}-1 \tag{2}$$ and we may consider what the operator $$ T_n=\sum_{k=0}^{n}(-1)^k \binom{n}{k}\frac{[x^k]}{k+1} \tag{3} $$ does to an analytic function in a neighbourhood of zero. This is strictly related to the binomial transform (and with Stirling numbers of the first kind giving the Taylor series of $\log(1-x)^k$), hence to solve the question it is enough to compute a closed form for $$ \sum_{k=0}^{n}(-1)^k \binom{n}{k}\frac{1}{(k+1)^3} = \frac{1}{2}\int_{0}^{1}\sum_{k=0}^{n}(-1)^k \binom{n}{k} x^k \log^2(x)\,dx = \frac{1}{2}\int_{0}^{1}(1-x)^n \log^2(x)\,dx$$ where the last integral is $$ \frac{d^2}{d\alpha^2}\left.\int_{0}^{1}(1-x)^n x^{\alpha}\,dx\,\right|_{\alpha=0^+}=\frac{H_{n+1}^2+H_{n+1}^{(2)}}{n+1},\tag{4}$$ so that the binomial sum above equals $\frac{H_{n+1}^2+H_{n+1}^{(2)}}{2n+2}$. To finish the proof, it is enough to apply summation by parts to $$ \sum_{j=0}^{k}\frac{H_{j+1}}{j+1} = H_{k+1}^{2}-\sum_{j=0}^{k-1}\frac{H_{j+1}}{j+2}.\tag{5} $$ Long story short: OP's identity comes from applying the binomial transform to the series defining $\zeta(3)$ and exploiting my $(4)$.
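The identity in the question can also be confirmed in exact arithmetic; a short Python sketch using `fractions`:

```python
from fractions import Fraction
from math import comb

def H(n):
    return sum(Fraction(1, m) for m in range(1, n + 1))

def lhs(n):
    # the double sum on the left of (1), evaluated exactly
    return sum(Fraction((-1) ** k * comb(n, k), k + 1)
               * sum(H(j + 1) / (j + 1) for j in range(k + 1))
               for k in range(n + 1))

for n in range(6):
    assert lhs(n) == Fraction(1, (n + 1) ** 3)
print("identity holds for n = 0..5")
```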
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Let $E$ be a Banach space and $A\in L(E)$ such that $\parallel A \parallel<1$. Then, $I-A$ is invertible and $(I-A)^{-1}=\sum^\infty_{n=1}A^n$. I want to prove the following: Let $E$ be a Banach space and $A\in L(E)$ such that $\parallel A \parallel<1$. Then, $I-A$ is invertible and $(I-A)^{-1}=\sum^\infty_{n=1}A^n$. I already proved that the series $\sum^\infty_{n=1}A^n$ converges. Now, I just have to prove that $\sum^\infty_{n=1}A^n(I-A)=I$. I have $\sum^\infty_{n=1}A^n(I-A)=\lim_N\rightarrow\infty \sum^N_{n=1}(A^n-A^{n+1})$, so I have to check that this limit is $I$. Let $\epsilon>0$. I have $$\parallel \sum^N_{n=1}(A^n-A^{n+1})-I \parallel \le \sum^N_{n=1}\parallel A^n \parallel + \sum^N_{n=1}\parallel A^{n+1} \parallel +\parallel I \parallel \\ \le \sum^N_{n=1}(\parallel A\parallel^n + \parallel A\parallel^{n+1}) +1 $$ But now large $N$ does not help me getting arbitrarily small values (less than $\epsilon$)... quite the contrary. What to do?
Taking your notations, we already have $\sum A^n$ that converges in $L(E)$ to some element $B$ (also, note the comment that the series should begin at $n=0$). Now, we also have the following telescopic sum: $$ \left(\sum_{n=0}^N A^n\right) \circ (I - A) = I - A^{N+1} $$ The left hand side converges to $B \circ (I-A)$, the right hand side converges to $I$ (because $||A^{N+1}||\leq||A||^{N+1} \longrightarrow 0$ as $N \rightarrow \infty$). Uniqueness of the limit tells us that $B \circ (I - A) = I$. The same process can be done the other way around to show that $(I-A) \circ B = I$.
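If it helps to see the argument concretely, here is a finite-dimensional numpy sketch (a random $A$ rescaled to have operator norm $<1$, an arbitrary choice): the partial sums of $\sum_{n\geq 0}A^n$ converge to $(I-A)^{-1}$, with the series starting at $n=0$ as noted above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A *= 0.9 / np.linalg.norm(A, 2)        # enforce operator norm < 1

S, term = np.eye(5), np.eye(5)         # S accumulates sum_{n=0}^N A^n
for _ in range(300):
    term = term @ A
    S += term

print(np.linalg.norm(S @ (np.eye(5) - A) - np.eye(5)))   # ~ 1e-15
```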
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute integral $\int_0^a{f'(x)\cdot f(x)}dx$ what is the integral of the following function: $\int_0^a{f'(x)\cdot f(x)}dx$ Not quite sure how to integrate it.
Alternative hint: if you integrate by parts, you obtain $$\int_0^a f'(x)f(x)\,dx = -\int_0^af(x)f'(x)\,dx + \left[f(x)f(x) \right]_0^a $$
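Completing the hint: moving the integral to the left gives $2\int_0^a f'(x)f(x)\,dx = f(a)^2-f(0)^2$, so the integral equals $\tfrac12\big(f(a)^2-f(0)^2\big)$. A quick sympy check on a couple of concrete test functions (arbitrary choices):

```python
import sympy as sp

x, a = sp.symbols('x a')
for f in (sp.sin(x), sp.exp(x) + x**2):
    F = sp.integrate(sp.diff(f, x) * f, (x, 0, a))
    closed = (f.subs(x, a)**2 - f.subs(x, 0)**2) / 2
    print(sp.simplify(F - closed))   # 0 for each test function
```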
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is the moment map constant on the orbits of the action of the Lie algebra Given the action of a Lie group on a symplectic manifold, the moment map gives a mapping $\mu: M \rightarrow \mathfrak{g}^*$ to the dual of the Lie algebra $\mathfrak{g}^*$ defined by $d(\langle \mu,\eta\rangle)=i_{X_\eta}\omega$, where $X_\eta$ is the vector field generated by the action of $\eta \in \mathfrak{g}$ and $\langle \mu,\eta\rangle$ is just the pairing between elements of the Lie algebra and it's dual. I can compute simple examples but what I cannot see, intuitively, why the moment map is constant on the orbits of the action - which is of course why the moment map is important and useful in the first place. Can someone say an inspired sentence that will give me that "aha!" moment?
Since the flow is Hamiltonian we have $d(\langle \mu, \eta \rangle)=i_{X_\eta}\omega=dH_\eta$ for some function $H_\eta$. The orbits of the Hamiltonian vector field $X_\eta$ lie on the level sets of $H_\eta$, and hence on the level sets of $\langle \mu, \eta \rangle$, on which $\mu$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proof Verification: Eigenvalues of a Complex Vector Space I think I found a solution to question 16 from chapter 5.A of Axler's Linear Algebra book which states Suppose V is a complex vector space and $T\in L(V)$, and the matrix of T with respect to some basis of V only has real entries; show that if $\lambda$ is an eigenvalue then so is its complex conjugate. and wanted to verify that the solution I attempted is indeed correct. My Proof Attempt: First I claim that for this matrix with the particular basis $C\circ T\circ C = T$ where $C$ is the complex conjugate map. Given any vector $v=\sum_{k=1}^{n}b_kv_k$, where $v_k$ is the particular basis, $T\circ C(v)= \sum_{k=1}^{n}\overline{b_k}T(v_k)$. Hence $C\circ T\circ C(v)= \sum_{k=1}^{n}b_k\overline{T(v_k)}$. But since the entries are real, $T(v_k)=\overline{T(v_k)}$. This proves the first claim. Now let $v$ be the given eigenvector of $T$; then $C\circ T\circ C (v)= \lambda v$ for some complex number $\lambda$. Since $C$ is its own inverse it then follows that $T\circ C(v) = C(\lambda v) $. By definition the map $C$ takes the coefficients of the vector and gives the vector whose coefficients are the complex conjugates of the originals. Thus it is clear that $T\circ C(v) = \overline{\lambda}C(v)$ and the proof is complete. I was wondering if this is correct and is clear enough; any critiques and suggestions would be greatly appreciated, thank you!
Since the matrix of $T$ in the given basis has real entries, $T$ maps vectors with real coordinates to vectors with real coordinates. Write the eigenvector as $v = a+bi$, where $a$ and $b$ are real vectors (i.e. have real coordinates), and write the eigenvalue as $\lambda = c+di$ with $c,d$ real. Then $Tv = Ta + iTb$ with $Ta$ and $Tb$ real, so $Tv=\lambda v$ reads: $$Ta + iTb = (c+di)(a+bi) = (ca - db) + i(cb + da)$$ Equating real and imaginary parts gives $Ta = ca - db$ and $Tb = cb + da$. Now apply $T$ to the conjugate vector $\bar v = a - bi$: $$T(a-bi) = Ta - iTb = (ca-db) - i(cb+da) = (c-di)(a-bi) = \bar\lambda\,\bar v$$ Since $v\neq 0$ implies $\bar v \neq 0$, this shows $\bar\lambda = c - di$ is an eigenvalue of $T$, with eigenvector $\bar v$.
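A quick numerical illustration (numpy sketch with a random real matrix — the seed and size are arbitrary): the spectrum of a real matrix is closed under complex conjugation.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))        # real entries in the standard basis

eig = np.linalg.eigvals(A)
# the multiset of eigenvalues equals the multiset of their conjugates
print(np.allclose(np.sort_complex(eig), np.sort_complex(eig.conj())))  # True
```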
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Summing sines of different frequencies Is there a general formula for solving the following equation: $$A \sin(Bt+C) + D \sin(Et+F) = G \sin(Ht+I)$$ All constants on the left side of the equation are known (t is a variable). Is there a formula for calculating G, H and I? Is this even solvable in general? I searched the web and found some semi-relevant hits, like here, but it didn't really seem to answer my question.
I'm not sure about a general formula, but to answer whether it is solvable in general: the function $\sin(3x+4)+\sin(6x+7)$ is clearly not a single sinusoid, i.e. not of the form $G\sin(Hx+I)$ (you can see this by simply plotting it), providing a counterexample to the statement.
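For completeness: in the special case of equal frequencies $B=E$, a closed form does exist via phasor addition, $Ge^{iI}=Ae^{iC}+De^{iF}$ with $H=B$. A numpy sketch (all numeric values below are arbitrary choices):

```python
import numpy as np

A, C, D, F, B = 2.0, 0.3, 1.5, 1.1, 4.0   # same frequency B in both terms

# G and I come from adding the two complex phasors
phasor = A * np.exp(1j * C) + D * np.exp(1j * F)
G, I = np.abs(phasor), np.angle(phasor)

t = np.linspace(0, 10, 1000)
lhs = A * np.sin(B * t + C) + D * np.sin(B * t + F)
print(np.allclose(lhs, G * np.sin(B * t + I)))   # True
```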
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Statistics Hypothesis Testing finding out test stat and critical value For a) $$z =\frac{ \bar{x} - \mu }{ \frac{\sigma }{ \sqrt n}}$$ I have deciphered that sample mean is $$\frac{20 + 23 + 21 + 22}{ 4} = 21.5$$ I came up with $1.29099..$ for Standard Deviation. Sample size is $4$ since $4$ sharks $$z =\frac{ 21.5 - 20 }{ \frac{1.209 }{ \sqrt 4}}$$ I came up with $2.168870...$ for the test statistic. For critical value of $z$, I used the given alpha to find the value of $2.326$ from a confidence interval of 98%. What are the answers and what am I doing wrong ? (Thank you for the edit)
Your test statistic should be $2.3238$. I don't know how you got the value you got. You should calculate $(21.5-20)/(1.291/\sqrt{4}) = 2.32378...$ The critical value should be $4.54$. The question suggests doing a one-tailed test (because the biologist thinks that the sharks will be longer than $20$.) There are $4-1=3$ degrees of freedom. (For a two-tailed test, the critical value would be $5.841$.)
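For reference, a scipy sketch reproducing these numbers (assuming the significance level implied by the critical values, $\alpha=0.01$):

```python
import numpy as np
from scipy import stats

lengths = np.array([20, 23, 21, 22])    # the four sampled lengths
t_stat, p = stats.ttest_1samp(lengths, 20, alternative='greater')
print(t_stat)                            # 2.3238...

# critical values for 3 degrees of freedom at alpha = 0.01
print(stats.t.ppf(0.99, df=3))           # 4.5407 (one-tailed)
print(stats.t.ppf(0.995, df=3))          # 5.8409 (two-tailed)
```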
{ "language": "en", "url": "https://math.stackexchange.com/questions/2207953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Supremum of sets containing inequalities. Let $r \in \mathbb{R}$ be fixed. Prove that: a) $\sup\{q \in \mathbb{Q}: q \leq r\} = r$. b) $\inf\{q \in \mathbb{Q}: q \leq r\} = r$. It looks so strange to me that the supremum equals the infimum. Could anyone help me, or at least say that the problem contains a mistake?
I think you should have for (b): $\inf \{q \in \mathbb{Q}: q \geq r \} = r$ as $\inf \{q \in \mathbb{Q}: q \leq r \} = -\infty$ You might even say that the latter expression is undefined, regardless there is certainly no real $r = -\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2208049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the scalars If possible, find scalars c1, c2, and c3, so that the following is true. $$c_1(3, 2, -5) + c_2(-3, 3, 3) + c_3(-3, 8, 1) = (3, -3, 4)$$ I have no idea where to start. I think I have to make it into rref, but I am unsure. Can someone explain how to do this. Please.
It's possible the question is: $$c_1(3, 2, -5) + c_2(-3, 3, 3) + c_3(-3, 8, 1) = (3, -3, 4)$$ Then solve the system of equations: $$3c_1-3c_2-3c_3 =3,\quad 2c_1+3c_2+8c_3 =-3,\quad -5c_1+3c_2+c_3 =4.$$
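Interestingly, if you set this system up numerically, the coefficient matrix turns out to be singular and the system inconsistent — so under this reading the "if possible" in the question resolves to "not possible". A numpy rank-check sketch:

```python
import numpy as np

M = np.array([[3.0, -3.0, -3.0],
              [2.0,  3.0,  8.0],
              [-5.0, 3.0,  1.0]])
b = np.array([3.0, -3.0, 4.0])

aug = np.column_stack([M, b])
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(aug))   # 2 3
# rank(M) < rank([M|b]) means the system is inconsistent: no such scalars exist
```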
{ "language": "en", "url": "https://math.stackexchange.com/questions/2208170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is a positive cone defined the way it is? Let $K$ be a field. Consider $P \subseteq K$ satisfying the following properties: (1) $x \in P$ and $y \in P$ imply $x+y \in P$ and $xy \in P$; (2) $x \in K$ implies $x^2 \in P$; (3) ${-1} \notin P$. We call such $P$ a prepositive cone. We further call $P$ a positive cone if $K = P \cup (-P)$. I have a few questions regarding the notion of a positive cone (I have not found references for this concept elsewhere other than Wikipedia). (i) Why the name positive "cone"? I understand why positive is there, but not the "cone" part. (ii) If we keep Property $1$, delete Property $2$ and Property $3$, and add Property $2'$, which says that exactly one of $x \in P$, $x = 0$, or $-x \in P$ holds for any $x \in K$, and define a positive cone this way, what do we lose? Why not define positive cones this way instead?
Partial answer: A set $C$ is called a "cone" with vertex at the origin if for any $x\in C$ and any scalar $a\ge0$, $ax\in C$. Look familiar? There is a related concept of a cone for linear algebra. A cone $C$ is a convex cone if $ax + by$ belongs to $C$, for any positive scalars $a,b$ and any vectors $x, y$ in $C$. That should also look familiar. If you look at the Wikipedia page, you'll see these actually look like cones. (Figure from Wikipedia: a convex cone generated by the conic combination of three black vectors.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2208278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving Linear Congruences over Cryptography I have got some doubt in cryptography mainly related to Linear Congruences. Question Suppose that the most common letter and the second most common letter in a long ciphertext produced by encrypting a plaintext using an Affine Cipher $f\left(p \right)=\left (ap + b \right)\,\pmod{26} $are $Z$ and $J$ respectively. What are the most likely values of a and b? Converting Alphabets into Numerals we have $Z=25,J=9$ I was stuck. I looked into solution.In the they have assumed $f\left(4 \right)=25$ and $f\left(19 \right)=9$ I am not getting how they assuming to take $p=4$ for $f\left(p \right)=25$ and $p=19$ for $f\left(p \right)=9$. I closed the solution and solved the problem by taking $f\left(4 \right)=25$ and $f\left(19 \right)=9$ My Solution Affine Cipher is of the form $f\left(p \right) \equiv \left(ap+b \right)\,\pmod{26}$ $f\left(4 \right)=25$ and $f\left(19 \right)=9$ Equations-: * *$\Rightarrow 25 \equiv \left(4a+b \right)\,\pmod{26}$ *$\Rightarrow 9 \equiv \left(19a+b \right)\,\pmod{26}$ Subtracting 1 from 2, $\Rightarrow 10 \equiv \left(15a \right)\,\pmod{26}$ $\Rightarrow 2 \equiv \left(3a \right)\,\pmod{26}$ $\Rightarrow a \equiv 2*3^{-1}\,\pmod{26}$ $\Rightarrow 3^{-1}\Rightarrow$ inverse of $3\,\pmod{26}=9$ $\Rightarrow a \equiv 18\,\pmod{26}=18$ Now, $\Rightarrow 4a+b \equiv 25\,\pmod{26}$ $\Rightarrow 4*18+b \equiv 25\,\pmod{26}$ $\Rightarrow b \equiv 5\,\pmod{26}$ Answer is Correct.But Only thing i am not getting How $f\left(4 \right)=25$ and $f\left(19 \right)=9$ ???
The choice of $p$ comes from frequency analysis: the most common letter in English plaintext is E ($p=4$) and the second most common is T ($p=19$), so one guesses that E was encrypted to the most common ciphertext letter Z and T to the second most common, J. That gives $f(4)=25$ and $f(19)=9$. And indeed, with $a=18$, $b=5$: $$f(4) = 18\cdot 4 + 5 = 77 \equiv 25 \pmod{26}$$ and $$f(19) = 18\cdot 19 +5 = 347 \equiv 9 \pmod{26}.$$
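A small Python sketch of the whole computation (solving the two congruences and checking the result):

```python
def mod_inverse(x, m=26):
    return pow(x, -1, m)              # Python 3.8+: modular inverse

# 4a + b = 25 and 19a + b = 9 (mod 26)  =>  15a = -16 = 10  =>  3a = 2 (mod 26)
a = (2 * mod_inverse(3)) % 26         # 18
b = (25 - 4 * a) % 26                 # 5

print(a, b)                                     # 18 5
print((a * 4 + b) % 26, (a * 19 + b) % 26)      # 25 9, i.e. E -> Z and T -> J
# caveat: gcd(18, 26) = 2, so this key is not invertible as a cipher,
# but it is the unique solution of the two congruences
```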
{ "language": "en", "url": "https://math.stackexchange.com/questions/2208486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Consider the six dot products of four vectors $v_1,v_2,v_3,v_4$ on $\mathbb{R}^2$. Can all of them be negative? Consider the six dot products of four vectors $v_1,v_2,v_3,v_4$ on $\mathbb{R}^2$. Can all of them be negative? If all dot products are negative, then the angle between any two of the vectors is larger than $\pi/2$. Intuitively, I don't think it's possible. But I don't know how to give a formal proof.
You can do this with simple algebra. Can you first convince yourself that there is no loss of generality taking $v_1 = (1,0)$? Suppose all the inner products are negative. Write $v_2 = (a_2,b_2)$, $v_3 = (a_3,b_3)$, and $v_4 = (a_4,b_4)$. Compute the inner products with $v_1$ to find $a_2 < 0$, $a_3 < 0$, and $a_4 < 0$. Since $a_2 a_3 + b_2 b_3 < 0$ and $a_2 a_3 > 0$ you get $b_2 b_3 < 0$. Likewise $b_2 b_4 < 0$ and $b_3 b_4 < 0$. This is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2208716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Describing the solution to a nonlinear PDE Given the pde $$xu_x+yu_y+uu_z=0$$ where $$u(x,y,0)=xy$$ for $x>0$ and $y>0$, the solution, obtained from the method of characteristics, is $$u(x,y,z) = xye^{\frac{-2z}{u}}$$ My question is, how would one describe this solution? I have no idea how to even imagine it. Thanks. EDIT: Workings Characteristics : $$\frac{dx}{x} = \frac{dy}{y} = \frac{dz}{u} = \frac{du}{0}$$ $\frac{du}{dx} = 0 \implies u = K_1 = $ constant $\frac{dx}{dy} = \frac{x}{y} \implies \frac{y}{x} = K_2$ $\frac{dy}{dz} = \frac{y}{u} = \frac{y}{K_1} \implies \ln (y) - \frac{z}{u} = K_3$ So, $u = F(K_2, K_3) = F(\frac{y}{x},\ln (y) -\frac{z}{u}) = F(X,Y)$ Using the initial condition given above, $xy = F(\frac{y}{x},\ln(y))$. So, $X = \frac{y}{x}$ and $Y = \ln (y)$. Solving for $x$ and $y$, we get that $xy = \frac{e^{2Y}}{X}$ Then, $u(x,y,z) = \frac{e^{2Y}}{X} = \frac{xe^{2(\ln(y)-z/u)}}{y} = xye^{\frac{-2z}{u}}$
It's an unavoidably implicit function. A nice way to visualize some of its characteristics is using a 3D plotter. For a three-variable function, the surfaces of constant $u$ are interesting to see. It's easy to isolate $z$ as a function of $x$, $y$, and $u=k$. I've used GeoGebra to plot these surfaces; it has a slider to change the value of $k$: $$k=xye^{-2z/k}\implies z=-(k/2)\ln(k/(xy))$$ Isolating $z$ directly is problematic as a function of $(x,y)$, so you can instead isolate $x$ and swap variables $\bar x=z$, $\bar z=x$ and $\bar y=y$ so that the program can plot the surface (entered always as $z=f(x,y)$).
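If you prefer matplotlib to GeoGebra, here is a rough sketch plotting a few level surfaces $u=k$ over a region where $xy>0$ (the region and the $k$ values are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
X, Y = np.meshgrid(np.linspace(0.5, 3, 60), np.linspace(0.5, 3, 60))

for k in (0.5, 1.0, 2.0):
    Z = -(k / 2) * np.log(k / (X * Y))   # the level surface u = k
    ax.plot_surface(X, Y, Z, alpha=0.5)

ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()
```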
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $x$ does not belong to the span of $y$, find a functional on $X$ with $\phi x=1$ and $\phi y=0$. If $x$ does not belong to the span of $y$, find a linear functional on $X$ with $\phi x=1$ and $\phi y=0$, where $\phi: X \to F$, $F$ is the scalar field and $X$ is a Banach space.
Suppose $\;y\neq0\;$, then observe that $\;\{x,y\}\;$ is a linearly independent set. Complete it to a basis $\;\{x,y,x_i\}_{i\in I}\;$ of $\;X_F\;$ (you may need AC to do this. I am not sure...), and define $$\phi x=1\;,\;\;\phi y=\phi x_i=0$$ and extend the definition by linearity. If $\;y=0\;$ then, as $\;x\neq0\;$ , extend $\;\{x\}\;$ to a basis of $\;X_F\;$ as before and define in a similar way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Torsion points of an elliptic curve over infinite extension How can I see that, if $E$ is an elliptic curve over $\mathbb{Q}$, then $$E(\overline{\mathbb{Q}})_{tors}=E(\mathbb{C})_{tors}=(\mathbb{Q}/\mathbb{Z})^2,$$ and that $E(\overline{\mathbb{Q}})/E(\overline{\mathbb{Q}})_{tors}$ has infinite rank?
The infinitude of the rank is somewhat subtle. You can see a proof in this article of Frey and Jarden (see Section 2).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Simplest (smallest element) of non-cyclic abelian group I am looking for some examples of non-cyclic abelian groups. I found some of order 12, and other groups. Here I am looking for the simplest (smallest element) non-cyclic abelian groups. If you know something about this please let me know.
The smallest noncyclic group is the four element Klein four-group https://en.wikipedia.org/wiki/Klein_four-group . All finite abelian groups are products of cyclic groups. If the factors have orders that are not relatively prime the result won't be cyclic. It doesn't make sense (in general) to ask for "small elements" of groups.
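A tiny Python sketch of the Klein four-group realized as $\mathbb{Z}_2\times\mathbb{Z}_2$: every non-identity element has order $2$, so no single element generates the group, i.e. it is not cyclic.

```python
from itertools import product

elems = list(product((0, 1), repeat=2))     # Z2 x Z2, componentwise addition mod 2

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

for g in elems:
    print(g, '+', g, '=', add(g, g))        # every g satisfies g + g = identity
# no element has order 4, hence the group cannot be cyclic
```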
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is $\lim_{x\to\infty} xe^{-x^2}$? What is $\lim_{x\to\infty} xe^{-x^2}$? I am calculating $\int_0^\infty x^2e^{-x^2}\,dx$. By integration by parts, I have $$I = -\frac{1}{2}xe^{-x^2} |_{0}^\infty+\frac{1}{2}\int_0^\infty e^{-x^2}\,dx$$ The second integration is just $\frac{\sqrt{\pi}}{2}$. Now I want to know how to calculate $\lim_{x\to \infty} xe^{-x^2}$. It is in the form of $\infty \cdot 0$.
For the limit itself, it's useful to remember that exponentials will always win out over polynomials. Even for something absurd like: $$\lim_{x\to\infty} x^{100,000}e^{-x} = 0$$ Based on the nature of your question, I'm guessing you are currently in an introductory calculus course. You'll later see that you can express: $e^x = 1 + x + \frac{x^2}{2!}+\frac{x^3}{3!}+...$ which I think makes it a bit easier to see why exponentials always dominate polynomials. As for the direct calculation of the integral, there's a neat trick that might come in handy someday. Consider instead the integral: $\int_{0}^{\infty}x^2e^{-ax^2}dx$ This can be rewritten as: $-\frac{d}{da}\int_{0}^{\infty}e^{-ax^2}dx$ The integral is a standard Gaussian now, so if we remember that: $\int_{0}^{\infty}e^{-ax^2}dx = \frac{\sqrt{\pi}}{2\sqrt{a}}$ Thus, $\int_{0}^{\infty}x^2e^{-ax^2}dx = -\frac{d}{da}\frac{\sqrt{\pi}}{2\sqrt{a}} = \frac{\sqrt{\pi}}{4a^{3/2}}$ and setting $a = 1$ as we have in our problem yields the answer $\frac{\sqrt{\pi}}{4}$.
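Both the limit and the differentiation-under-the-integral computation are easy to confirm with sympy; a quick sketch:

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

print(sp.limit(x * sp.exp(-x**2), x, sp.oo))                 # 0

gauss = sp.integrate(sp.exp(-a * x**2), (x, 0, sp.oo))       # sqrt(pi)/(2*sqrt(a))
print(sp.simplify(-sp.diff(gauss, a)))                       # sqrt(pi)/(4*a**(3/2))
print(sp.integrate(x**2 * sp.exp(-x**2), (x, 0, sp.oo)))     # sqrt(pi)/4
```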
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that T is a linear transformation Does it matter that in the first line it's written $T(\alpha p+ \beta g)$ and not $T(\alpha p(t)+ \beta g(t))$ but at the end it is written with $\alpha T(p(t)) + \beta T(g(t)))$ with the $t$'s. Define $T : \mathbb{P}_3 \to \mathbb{R}^4$ by $$T(p) = \begin{bmatrix} p(-3) \\ p(-1) \\ p(1) \\ p(3) \end{bmatrix}$$ Show that $T$ is a linear transformation.
It doesn't really matter. But it doesn't look very good. I would've written $\alpha T(p) + \beta T(g)$ the last time. Also, why choose the letters $p$ and $g$? Why not $f$ and $g$, or $p$ and $q$? That would make the proof look nicer, at least to my eyes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
In induced von Neumann algebra, do we have $P(PMP) =P\cdot P(M)P$ and $U(PMP) =P\cdot U(M)P$? Let $M$ be a von Neumann algebra and $P\in M'$ be a projection. Then, $PMP$ is an induced von Neumann algebra. I know that $ P Z(M) P = Z(PMP)$ and $ (PMP)' = PM'P $ (here, $Z(M)$ is the center and $M'$ is the commutant). I would like to ask, do we have $$P(PMP) =P\cdot P(M)P$$ and $$U(PMP) =P\cdot U(M)P,$$ where $P(PMP)$ denotes the set of all projections in $PMP$ and $U(PMP)$ denotes all unitary elements of $PMP$. It looks that Kadison-Ringrose I, Proposition 5.5.5 implies what I want above.
Edit: *This argument works when $P\in Z(M)$. Together with Proposition 5.5.5 in Kadison-Ringrose, as mentioned by @user92646, this gives the desired proof. If $Q\in M$ is a projection, then $QP$ is also a projection, since $PQ=QP$. Conversely, if $PX\in PMP$ is a projection, we have $(PX)^2=PX$. Let $Q=PX+I-P$. Then $PQ=PX$, and $Q^2=(PX)^2+I-P=PX+I-P=Q$. Similarly, if $U\in M$ is a unitary, $(UP)^*UP=PU^*UP=P$, $(UP)(UP)^*=PUU^*P=P$, so $UP$ is a unitary in $PM$. If $XP$ is a unitary in $PM$, this means that $P=(XP)^*XP=X^*XP$; let $U=PX+I-P$. Then $U^*U=PX^*XP+I-P=P+I-P=I$, and $UU^*=I$. The fact that $P\in M'$ was essential above. When $P$ is arbitrary, neither the compression of a unitary nor a projection has to be such in the compressed algebra. You can always do the following: let $X\in B(H)$ be any contraction. Then $$ U=\begin{bmatrix} X& (I-XX^*)^{1/2} \\ (I-X^*X)^{1/2} &-X^*\end{bmatrix} $$ is a unitary in $B(H\oplus H)$ with $PUP=\begin{bmatrix} X&0\\0&0\end{bmatrix}$, where $P=\begin{bmatrix} I&0\\0&0\end{bmatrix}$. If $X$ is positive, $$ Q=\begin{bmatrix} X& (X-X^2)^{1/2} \\ (X-X^2)^{1/2} &I-X\end{bmatrix} $$ is a projection in $B(H\oplus H)$ with $PQP=\begin{bmatrix} X&0\\0&0\end{bmatrix}$.
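The block constructions at the end are easy to test numerically; a sketch checking that the $2\times 2$ block matrix $U$ built from a random contraction $X$ (arbitrary size and seed) is indeed unitary:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))
X *= 0.8 / np.linalg.norm(X, 2)        # rescale so X is a strict contraction
I = np.eye(3)

# the block unitary from the answer (X real, so X* = X^T here)
U = np.block([[X,                  sqrtm(I - X @ X.T)],
              [sqrtm(I - X.T @ X), -X.T]])
print(np.allclose(U.T @ U, np.eye(6)))   # True: U is unitary (real orthogonal)
```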
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $a,b$ be positive real numbers; how can we prove whether $a+\frac {b} {a+\frac {b} {a+\frac {b} {\ddots }}}$ has a limit or not? The given example is $$3+\dfrac {4} {3+\dfrac {4} {3+\dfrac {4} {\ddots }}}=?$$ I wonder if it has a limit or not, but I have an idea: if we show that this sequence is monotone and bounded, we can say that it has a limit. Therefore, firstly, we need to write it in sequence form: $$3+\dfrac4{x_n}=x_{n+1}$$ If we assume that this sequence has a limit, then by the convergence-of-subsequences theorem we can draw the following conclusion. Assume $x$ equals the limit of the sequence: $$3+\dfrac4{x}=x$$ The roots are $x_1=4$ and $x_2=-1$; the limit should be positive, so $x=4$. That is, $$3+\dfrac {4} {3+\dfrac {4} {3+\dfrac {4} {\ddots }}}=4$$ My intuition tells me that the sequence is increasing and has an upper bound, but I couldn't show it, so the above is not a valid proof yet. How can we show that the sequence is monotone and bounded?
One way to prove convergence is to start with the assumed limit and consider the distance to it. As you've seen, the equation for the assumed limits is a quadratic, so we could instead start at the other end and write the recurrence formula: $$x_{n+1} = f(x_n) = (p+q) - pq/x_n$$ where $p$ and $q$ are the assumed limits. Now, since the iteration is a solution method for the fixed-point problem, it will converge if the secant to the fixed point always has absolute slope less than $1$ (since that means the distance to the solution decreases in every step). The secant slope to the fixed point is: $${f(x) - f(p)\over x-p} = {p+q-pq/x-p\over x-p} = {q(x - p)\over x(x-p)} = {q\over x}$$ So if always $|q/x_n|<1-\epsilon$ the sequence will converge to $p$, and if always $|q/x_n|>1+\epsilon$ it will diverge (and do so strictly). This works on your example: with $p=4$ and $q=-1$, if $|x-p|\le 1$ then $x \ge 3$, which means that $|q/x| = 1/|x| \le 1/3$. As long as $|x-p|\le 1$ the iterates approach $p$ strictly, therefore $|x-p|\le 1$ remains valid and the iterates approach $p$ strictly indefinitely. However the sequence is not monotone (as you can easily see by testing). What I mean by strict approaching is that $|x_n-L|$ is strictly decreasing; strict convergence to $L$ is convergence to $L$ that strictly approaches $L$.
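A quick numerical illustration of the last remark — the iteration $x_{n+1}=3+4/x_n$ converges to $4$ with $|x_n-4|$ strictly decreasing, while the iterates themselves oscillate around the limit (the starting point below is an arbitrary choice inside the basin $|x-4|\le 1$):

```python
x = 3.2                      # any start with |x - 4| <= 1 works
for n in range(10):
    print(n, x, abs(x - 4))  # the distance shrinks; the iterates oscillate
    x = 3 + 4 / x
```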
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
A consequence of the Fundamental Theorem of Arithmetic. The book said that: "If $S=\{1,2,3,\dots,200\}$, then for each $x \in S$, we may write $x=2^{k}y$, with $k \geq 0$, and $\gcd(2,y)=1$." The book added that this result follows from the Fundamental Theorem of Arithmetic, but I do not know how. Could anyone explain this for me?
Write $x$ as a product of primes, where $2$ occurs exactly $k \ge 0$ times, say. Collect together the $k$ occurrences of $2$ to get $x = 2^{k} y$. Clearly $y$ is odd.
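This factorization is exactly the 2-adic valuation; a short Python sketch:

```python
def two_adic(x):
    """Return (k, y) with x = 2**k * y and y odd."""
    k = 0
    while x % 2 == 0:
        x //= 2
        k += 1
    return k, x

for x in (1, 2, 12, 96, 200):
    k, y = two_adic(x)
    print(f'{x} = 2^{k} * {y}')
```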
{ "language": "en", "url": "https://math.stackexchange.com/questions/2209987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Expansion of this expression Let $x$ be a real number in $\left[0,\frac{1}{2}\right].$ It is well known that $$\frac{1}{1-x}=\sum_{n=0}^{+\infty} x^n.$$ What is the expansion or the series of the expression $(\frac{1}{1-x})^2$? Many thanks.
By the Cauchy product, $$\left(\sum_{n=0}^{\infty} x^n\right)^2=\left(\sum_{n=0}^{\infty} x^n\right)\left(\sum_{m=0}^{\infty} x^m\right)=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}x^nx^m=\sum_{k=0}^{\infty}\sum_{m=0}^k x^k=\sum_{k=0}^{\infty}(k+1)x^k.$$
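One can confirm the coefficients $(k+1)$ with sympy, either by truncating the Cauchy product or by expanding $1/(1-x)^2$ directly:

```python
import sympy as sp

x = sp.symbols('x')
geom = sum(x**n for n in range(12))            # truncation of 1/(1-x)
prod = sp.expand(geom * geom)                  # truncated Cauchy product
print([prod.coeff(x, k) for k in range(6)])    # [1, 2, 3, 4, 5, 6]
print(sp.series(1/(1 - x)**2, x, 0, 6))        # 1 + 2x + 3x^2 + ... + O(x^6)
```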
{ "language": "en", "url": "https://math.stackexchange.com/questions/2210103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Linear Regression: units of intercept given $x$ and $y$ as log functions? Given a line $y = ax + b$, where $x = \log(w)$ and $y = \log(l)$. To me, $b$ should be unitless, as $y$ will be unitless. Is this right?
Yes. Formally speaking, you're not taking $y = \log (l)$ but $y = \log (l/l_0)$, where $l_0$ is one of whatever your unit of measure is - the argument of the log needs to be unitless. From the fact that you've chosen $l$ and $w$, probably "length" and "width", I'm guessing you mean for $l$ and $w$ to have units of length, so $l_0$ is one meter or foot or whatever. Similarly you really have $x = \log (w/w_0)$. But nobody ever spells this out. The slope $a$ is also unitless, since $y$ and $x$ are both unitless, and $y$ and $ax$ must have the same units.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2210215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find out if a number is a sum of 7 primes I have a number and I want to find out if I can get that number by adding 7 prime numbers. How can I find that out? Example Number: 14 Answer: 2 2 2 2 2 2 2
For $n$ sufficiently large ($n\ge 18$ certainly works, probably less) you can subtract four primes chosen from $\{2,3\}$ so that the remainder is an odd number $\ge 7$. Then, applying the weak Goldbach conjecture (proved by Helfgott in 2013), it is always possible to decompose that odd number into the sum of three prime numbers, giving seven primes in total. When $n$ is small you can run a brute-force algorithm to find every decomposition and check whether it is valid or not.
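For small $n$ the brute-force check mentioned above is easy; here is a Python sketch using a simple reachable-sums dynamic program (it also confirms the example $14=2+2+2+2+2+2+2$):

```python
from sympy import primerange

def is_sum_of_7_primes(n):
    primes = list(primerange(2, n + 1))
    reachable = {0}
    for _ in range(7):                 # add one more prime in each round
        reachable = {s + p for s in reachable for p in primes if s + p <= n}
    return n in reachable

print(is_sum_of_7_primes(14))                                    # True (2 seven times)
print([m for m in range(14, 60) if not is_sum_of_7_primes(m)])   # []
```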
{ "language": "en", "url": "https://math.stackexchange.com/questions/2210305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Linear independence of $x, x^3, |x^3|$ How do I check whether $x, x^3, |x^3|$ are linearly independent? I know I have to show that $$kx+lx^3+m|x^3|=0\quad\forall x\in\mathbb{R}\quad \Rightarrow k=l=m=0,$$ but I'm not sure how to do this. Should I plug in different values for $x$?
Yes, the idea is good! For $x=1$ you get $$ k+l+m=0 $$ For $x=-1$ you get $$ -k-l+m=0 $$ Can you finish?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2210554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof by induction, dont know how to represent range The question asks for me to prove the following through induction: $1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2^n} \geq 1 + \frac{n}{2}$ This is my proof thus far: Proving true for $n = 1$ \begin{align*} 1 + \frac{1}{2} &\geq 1 + \frac{1}{2}\\ \end{align*} Assuming true for $n = k$ \begin{align*} 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2^k} \geq 1 + \frac{k}{2} \end{align*} Proving true for $n = k + 1$ \begin{align*} 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2^k} + \frac{1}{2^{k+1}} &\geq 1 + \frac{k}{2} + \frac{1}{2}\\ (1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2^k}) + \frac{1}{2^{k+1}} &\geq (1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2^k}) + \frac{1}{2}\\ \frac{1}{2^{k+1}} &\geq \frac{1}{2}\\ \end{align*} I realized at the last statement that something was off, because the last statement contradicts the thing I'm trying to prove. I then realized that it was because the difference between $\frac{1}{2^k}$ and $\frac{1}{2^{k + 1}}$ sets was not simply the addition of $\frac{1}{2^{k + 1}}$, but rather, all numbers between $\frac{1}{2^k}$ and $\frac{1}{2^{k + 1}}$. For example n = 1 is $1 + \frac{1}{2}$, but $n = 2$ is $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4}$. (note the addition of $\frac{1}{3}$.) So how can I represent this range? I believe my proof works except for the fact that $\frac{1}{2^{k + 1}}$ needs to be replaced with something else, I just don't know what that is.
$$ \underbrace{ 1 + \frac 1 2 + \frac 1 3 + \cdots + \frac 1 {2^k} + \frac 1 {2^{k+1}} }_{\Large\text{This is wrong.}} $$ $$ \overbrace{ \underbrace{ \underbrace{1 + \frac 1 2 + \frac 1 3 + \cdots + \frac 1 {2^k}}_{\Large\text{This is the case }n=k.} + \frac 1 {2^k+1} + \frac 1 {2^k+2} + \frac 1 {2^k+3} + \cdots + \frac 1 {2^{k+1}} } }_{\Large\text{This is the case } n = k+1.}^{\Large\text{This is what you need instead.}} $$ For example, suppose $n=k=3.$ Then $$ 1+ \frac 1 2 + \frac 1 3 + \cdots + \frac 1 {2^k} = 1 + \frac 1 2 + \frac 1 3 + \frac 1 4 + \frac 1 5 + \frac 1 6 + \frac 1 7 + \frac 1 8. $$ And the next case, $n=k+1 = 3+1=4$ is this $$ \underbrace{ \underbrace{1 + \frac 1 2 + \frac 1 3 + \frac 1 4 + \frac 1 5 + \frac 1 6 + \frac 1 7 + \frac 1 8}_{\Large\text{This is the case } n=3.} + \frac 1 9 + \frac 1{10} + \frac 1 {11} + \frac 1 {12} + \frac 1 {13} + \frac 1 {14} + \frac 1 {15} + \frac 1 {16}.}_{\Large\text{This is the case } n = 4.} $$ You don't just add one more term when you increment $n$ from $k$ to $k+1;$ you add $2^k$ terms to get up to $2^{k+1}.$
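A three-line numerical check of the statement being proved, for the first several $n$ (a Python sketch):

```python
def H(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in range(1, 11):
    assert H(2 ** n) >= 1 + n / 2
print("H(2^n) >= 1 + n/2 checks out for n = 1..10")
```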
{ "language": "en", "url": "https://math.stackexchange.com/questions/2210649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
A book that includes the main types of manifolds and geometries Is there a book (or a few books) that gives the basic theory of the different types of manifolds and their geometries in an integrated manner? By the different types I primarily mean $C^k$ real manifolds, complex manifolds, real analytic manifolds, and non-archimedean (p-adic) ones.
As far as I know, the p-adic case is never discussed in conjunction with the "standard" geometric structures. Most people who care about p-adic structures come from the algebraic geometry side and they do not care about, say, conformal structures or contact structures or foliations on manifolds. If you want the basic theory of geometric structures on $C^\infty$-manifolds, consider reading Shoshichi Kobayashi , "Transformation Groups in Differential Geometry", (Classics in Mathematics) 1995th Edition, Springer Verlag. Kobayashi's main interest is in automorphism groups of structures (more precisely, when are they Lie groups), but he also discusses geometric structures in great detail.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2210779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the dimension of $V$ as a vector space over $\mathbb F$ Let $\mathbb F$ be a subfield of a field $\mathbb K$ satisfying the condition that the dimension of $\mathbb K$ as a vector space over $\mathbb F$ is finite and equal to $r$. Let $V$ be a vector space of finite dimension $n>0$ over $\mathbb K$. Find the dimension of $V$ as a vector space over $\mathbb F$. I am not sure how to even approach this problem. Looking around online I found $\dim_{\mathbb F}(V)=\dim_{\mathbb F}(\mathbb K)\dim_{\mathbb K}(V)$ for a field extension $\mathbb K/\mathbb F$. Where does this come from?
Well you gave the answer yourself. The reason that equation is true is as follows. Suppose $\{k_1,\dots,k_r\}$ is a basis for $K$ over $F$, and $\{v_1,\dots,v_n\}$ is a basis for $V$ over $K$. Then it suffices to show $\{k_iv_j : 1 \le i \le r, 1 \le j \le n\}$ is a basis for $V$ over $F$. But this is just unfolding definitions and is left as an exercise to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Continuity of function in Sobolev W1,1(R^2) Suppose $u(x)$ is in $W^{1,1}(R^2)$ (Sobolev space on $R^2$ with 1 weak derivative and $L_1$ norm) and is locally bounded. Must $u(x)$ be continuous? Motivation: In $R^1$, a weakly differentiable function is continuous, but that is not the case for $R^2$ because of $f(x) = |x|^{-0.5}$. However, this function is not bounded. So I am looking for a function that is bounded, weakly differentiable and not continuous.
As Jose27 mentioned, we look for functions of the form: $$f(x) = \sin(|x|^{-\alpha}), \alpha > 0$$ $$\displaystyle f_{x_i} = -\alpha \cos(|x|^{-\alpha})|x|^{-\alpha-1}\frac{x_i}{|x|}$$ Similar to the argument on Evan's page 260, we can prove $f$ is weakly differentiable. For any $\phi \in C^{\infty}_c(R^2)$ $$\int\int_{R^2 - B(0,\epsilon)} f\phi_{x_i} dx = -\int\int_{R^2 - B(0,\epsilon)} f_{x_i}\phi dx + \int_{\partial B(0,\epsilon)}f\phi \nu_i dS$$ As $\epsilon \rightarrow 0$, the last term goes to zero. The remaining equation is what is required by the weak derivative. The weak derivative also needs to be $L_1$ local. We have $$|Df| = |\cos(|x|^{-\alpha})| \alpha |x|^{-\alpha-1}$$ As long as $\alpha + 1 < 2$, $|Df|$ is $L_1$-local. Then $\alpha = 0.5$ should work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the limit or prove that the limit does not exist: $\lim_{x \to c}x^2 + x + 1$, for any $c \in \mathbb R$. This is what I tried. For $\epsilon > 0$ there must exist $\delta >0$ such that $0< | x - c | < \delta$ ($x \in \mathbb R$) implies $| x^2 + x + 1 - c^2 - c - 1 | = | x^2 + x - c^2 - c | < \epsilon$. After that I tried to use the triangle inequality $| x^2 - c^2 + x - c | \leq | x^2 - c^2 | + | x - c |$: since the left side is bounded by the right side, I was going to rewrite $| x^2 - c^2 | + | x - c |$ as $| x + c || x- c | +| x - c|$, that is $| x - c |\,(| x+c| +1)$, and then require $| x - c |\,(| x+c|+1) < \epsilon $. However, I am stuck now. I need help.
Here is a trick: Start by choosing $\delta \le |c|+1$, then $|x| \le |c|+|x-c| \le 2|c|+1$. Then $|x^2-c^2| = |x+c||x-c| \le (3|c|+1) |x-c|$. Then $|x^2+x+1 - (c^2+c+1)| \le (3|c|+2)|x-c|$. Now choose $\delta < \min (|c|+1, {\epsilon \over 3|c|+2})$, then you will obtain the required bound.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A little PDE solving Guys I really don't know what to do, I apply the method of characteristics, but the system of equations it gave me looks unfamiliar and complicated. Any hint on how to approach this? Find a solution of the first order quasi-linear partial differential equation: $$z(x +z)\frac{\partial z}{\partial x} + y(y +z)\frac{\partial z}{\partial y} =0$$ such that it passes through points of parabolic cylinder $z = \sqrt y$ and $x=1$. Thanks.
The given condition implies that the curve $\Gamma = \{ (1,s^2,s) \in \mathbb{R}^3 : s \ge 0 \} $ must be contained in the graph of our solution. We apply the characteristics method and transform the initial PDE into the ODE system: \begin{cases} \dot x = z(x+z) \\ \dot y=y(y+z) \\ \dot z=0 \end{cases} The last equation immediately gives $z(t)=c_1=c_1(s)$, hence we are left to solve: \begin{cases} \dot x =c_1 x + c_1^2 \\ \dot y = y^2 + c_1 y \end{cases} The first equation is a classic first order linear non-homogeneous, while the second is first-order separable. With standard ODEs integration we get (hope I didn't mess up with those constants): \begin{cases} x(t,s)=c_2(s) e^{c_1(s)t}-c_1(s) \\ y(t,s)=-\frac{c_1(s)c_3(s)e^{c_1(s)t}}{c_3(s)e^{c_1(s)t}-1} \\ z(t,s)=c_1(s) \end{cases} Now it suffices to impose the initial condition (i.e, the curve $\Gamma$) to determine$c_1,c_2,c_3$ and finally, if possible, eliminate the parameters $t,s$ to get a solution $z(x,y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Navier-Stokes equation, about pressure... I'm a computer science student writing a dissertation about fluid simulation in real-time applications. I'm trying to understand a few things regarding the pressure term: 1) When talking about the Helmholtz-Hodge decomposition, my books say that you can decompose any vector field into a divergence-free field, and a gradient of a scalar field, and that scalar field "turns out" to be the pressure. How do they get to that conclusion? (http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html chapter 38.2.4 section "The Helmholtz-Hodge Decomposition") 2) In many books or articles when computing pressure people say to take the divergence of the momentum equation, which ends up as a Poisson equation. I don't understand why they take the divergence of everything, probably due to my lack of understanding of how the Poisson equation works or something related. (http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html chapter 38.2.4 section "The Helmholtz-Hodge Decomposition - Second realization" , in this article they don't take the divergence of the momentum equation but it's similar and with the same purpose) Keep in mind that I'm not a physics student so I kinda had to learn everything myself because in my university we don't really see much of these things, so probably my confusion is due to the lack of some basic concept that I haven't seen yet. Any help is appreciated, thanks.
Here is a way to see (2). Assuming the fluid to be incompressible, mass conservation tells you that $\operatorname{div} u = 0.$ Newton's law for an inviscid fluid is $\frac{Du}{Dt} = -\frac 1\rho\nabla P,$ where $\frac{Du}{Dt}=\partial_t u + (u\cdot\nabla)u$ is the material derivative. Taking the divergence (dot product with $\nabla$) and using $\operatorname{div} u = 0$ gives you the Poisson equation $\Delta P = -\rho\,\nabla\cdot\big((u\cdot\nabla)u\big);$ if the advection term is neglected (or the flow is linearized), this reduces to $\Delta P = 0.$
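A sympy sketch illustrating the point on a concrete divergence-free field (built from a stream function, an arbitrary choice here): the divergence of the advection term is generically nonzero, and it is exactly this term that sources the pressure Poisson equation.

```python
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.sin(x) * sp.cos(y)                           # arbitrary stream function
u = sp.Matrix([sp.diff(psi, y), -sp.diff(psi, x)])    # guarantees div u = 0
print(sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y)))        # 0

# divergence of the advection term (u . grad) u
adv = sp.Matrix([u[0] * sp.diff(u[i], x) + u[1] * sp.diff(u[i], y)
                 for i in range(2)])
print(sp.simplify(sp.diff(adv[0], x) + sp.diff(adv[1], y)))    # nonzero in general
```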
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Number of q-ary strings of length m which do not contain k consecutive zeros A finite q-ary alphabet is given $$A_q = \{0,1,2,\dots,q-1\}.$$ Now I am considering the set of all finite strings over the alphabet $A_q$. I am interested in the number $$N(m,k)_{A_q}$$ of strings of length $m$ which do not contain $k$ consecutive zeros. Is there a general formula for this number? Or how can I determine $N(m,k)_{A_q}$?
Let me present another approach to this interesting problem, which allows us to get a closed expression for $N(m,k,q)$. Let's consider a word of length $m$ from the binary alphabet $\{0,X\}$, having a total of $s$ zeros. $$ \begin{array}{*{20}c} X &| & {0,} & {0,} & {0,} &| & X &| & 0 &| & {X,} & X &| & {0,} & 0 &| & X \\ 0 &| & {1,} & {2,} & {3,} &| & 0 &| & 1 &| & {0,} & {0,} &| & {1,} & 2 &| & 0 \\ \end{array} $$ Imagine sequentially scanning the word and counting the number of consecutive zeros, resetting the counter when the character is different from $0$, as in the example above. Then we can biject each word with a histogram which has: $j$ bars; sum of the bars equal to $s$; and $m-s$ Xs. Now, the number of ways to compose $j$ bars, summing to $s$, and with a height going from $1$ to max $r$ is given by $$ N_b (s - j,r - 1,j) = \text{No}\text{.}\,\text{of}\,\text{solutions}\,\text{to}\;\left\{ \begin{gathered} \text{0} \leqslant \text{integer}\;x_{\,j} \leqslant r - 1 \hfill \\ x_{\,1} + x_{\,2} + \; \cdots \; + x_{\,j} = s - j \hfill \\ \end{gathered} \right. $$ which is expressed by $$ \bbox[lightyellow] { \begin{gathered} N_b (s - j,r - 1,j)\quad \left| \begin{gathered} \;1 \leqslant \text{integer }r \hfill \\ \;0 \leqslant \text{integer}\;j \leqslant \text{integer }s \hfill \\ \end{gathered} \right.\quad = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,\,k\,\,\left( { \leqslant \,\frac{{s - j}} {{r - 1}}\, \leqslant \,j} \right)} {\left( { - 1} \right)^k \left( \begin{gathered} j \hfill \\ k \hfill \\ \end{gathered} \right)\left( \begin{gathered} s - 1 - k\,r \\ s - j - k\,r \\ \end{gathered} \right)} \hfill \\ \end{gathered} \tag{1} }$$ as explained in this post. Then we can arrange the $m-s$ (indistinguishable) X placeholders and the $j$ (distinguishable) bars by reserving $j-1$ X's as separators between consecutive $0$ blocks and putting the remaining X's in any of the $j+1$ interstices, thus in ${m-s+1} \choose{j}$ ways. Finally we can assign values to the Xs in $(q-1)^{m-s}$ ways.
Note that $j \leqslant \min \left( s,\; m - s + 1 \right)$, since the $j$ zero blocks need $j-1$ of the $m-s$ Xs as separators. Thus the number sought for, understood as the number of words which contain at most $k-1$ consecutive zeros (in one or more runs), will be
$$ \eqalign{
 & N(m,k,q)\quad \left|\; 1 \le {\rm integer}\ k,q,\quad 0 \le {\rm integer}\ m \right.\quad = \cr
 & = \sum\limits_{0\, \le \,s\, \le \,m} {\left( {q - 1} \right)^{\,m - s} \sum\limits_{0\, \le \,j\, \le \,s} {\binom{m - s + 1}{j}\sum\limits_{0\, \le \,i\, \le \,j} {\left( { - 1} \right)^i \binom{j}{i}\binom{s - 1 - i\,\left( {k - 1} \right)}{s - j - i\,\left( {k - 1} \right)}} } }  = \cr
 & = \sum\limits_{0\, \le \,s\, \le \,m} {\left( {q - 1} \right)^{\,m - s} \sum\limits_{0\, \le \,i\, \le \,m - s + 1} {\left( { - 1} \right)^i \binom{m - s + 1}{i}\sum\limits_{0\, \le \,j - i\, \le \,s - i} {\binom{m - s + 1 - i}{j - i}\binom{s - 1 - i\,\left( {k - 1} \right)}{s - i\,k - \left( {j - i} \right)}} } }  = \cr
 & = \sum\limits_{0\, \le \,s\, \le \,m} {\left( {q - 1} \right)^{\,m - s} \sum\limits_{0\, \le \,i\, \le \,m - s + 1} {\left( { - 1} \right)^i \binom{m - s + 1}{i}\binom{m - i\,k}{s - i\,k}} }  = \cr
 & = \sum\limits_{0\, \le \,s\, \le \,m} {\left( {q - 1} \right)^{\,m - s} \sum\limits_{0\, \le \,i\, \le \,m/k} {\left( { - 1} \right)^i \binom{m - s + 1}{i}\binom{m - i\,k}{m - s}} }  = \cr
 & = \sum\limits_{0\, \le \,i\, \le \,m/k} {\left( { - 1} \right)^{\,i} \sum\limits_{0\, \le \,m - s\, \le \,m} {\binom{m - i\,k}{m - s}\left( {\binom{m - s}{i} + \binom{m - s}{i - 1}} \right)\left( {q - 1} \right)^{\,m - s} } }  = \cr
 & = \sum\limits_{0\, \le \,i\, \le \,m/k} {\left( { - 1} \right)^{\,i} \left( {\binom{m - i\,k}{i}q^{\,m - i\,k - i} \left( {q - 1} \right)^{\,i}  + \binom{m - i\,k}{i - 1}q^{\,m - i\,k - i + 1} \left( {q - 1} \right)^{\,i - 1} } \right)}  = \cr
 & = q^{\,m} \sum\limits_{0\, \le \,i\, \le \,m/k} {\binom{m - i\,k}{i}\left( {{{1 - q} \over {q^{\,k + 1} }}} \right)^{\,i} }  - q^{\,m - k} \sum\limits_{0\, \le \,i\, \le \,\left( {m - k} \right)/k} {\binom{m - k - i\,k}{i}\left( {{{1 - q} \over {q^{\,k + 1} }}} \right)^{\,i} }  = \cr
 & = q^{\,m} \sum\limits_{0\, \le \,i\, \le \,m/k} {\binom{m - i\,k}{m - i\,\left( {k + 1} \right)}\left( {{{1 - q} \over {q^{\,k + 1} }}} \right)^{\,i} }  - q^{\,m - k} \sum\limits_{0\, \le \,i\, \le \,\left( {m - k} \right)/k} {\binom{\left( {m - k} \right) - i\,k}{\left( {m - k} \right) - i\,\left( {k + 1} \right)}\left( {{{1 - q} \over {q^{\,k + 1} }}} \right)^{\,i} }  = \cr
 & = M(m,\;k,\;q) - M(m - k,\;k,\;q) \cr} $$
where:
- in the first passage we use the trinomial revision $\binom{s}{j}\binom{j}{n} = \binom{s}{n}\binom{s - n}{j - n}$;
- in the sum in $s$ we use $\sum\limits_k {\binom{n}{k}\binom{k}{m}y^{\,k} }  = \binom{n}{m}\left( {1 + y} \right)^{\,n - m} y^{\,m}$, obtainable from
$$ \left( {1 + y + y} \right)^{\,n}  = \sum\limits_{0\, \le \,k\, \le \,n} {\binom{n}{k}\left( {1 + y} \right)^{\,k} y^{\,n - k} }  = \sum\limits_{0\, \le \,k\, \le \,n} {\sum\limits_{0\, \le \,j\, \le \,k} {\binom{n}{k}\binom{k}{j}\,y^{\,j} \,y^{\,n - k} } } $$
In conclusion
$$ \bbox[lightyellow] { \left\{ \matrix{ M(m,k,q) = q^{\,m} \sum\limits_{0\, \le \,i\, \le \,m/k} {\binom{m - i\,k}{m - i\,\left( {k + 1} \right)}\left( {{{1 - q} \over {q^{\,k + 1} }}} \right)^{\,i} } \hfill \cr N(m,k,q) = M(m,\;k,\;q) - M(m - k,\;k,\;q) \hfill \cr} \right.\quad \left| \matrix{ \;1 \le {\rm integer}\ k,q \hfill \cr \;0 \le {\rm integer}\;m \hfill \cr} \right. \tag{2} }$$
and it can be verified that the above expression
- satisfies the recursion provided in N. Shales's answer;
- gives the terms of the z-transform provided in Markus Scheuer's answer.
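Since the derivation above is long, here is a brute-force sanity check I would add (a test harness of my own, not part of the derivation). It counts the admissible words directly and compares with formula (2), using exact rational arithmetic for the $\left(\frac{1-q}{q^{k+1}}\right)^i$ factors.

```python
from fractions import Fraction
from itertools import product
from math import comb

def C(n, r):                       # binomial, zero outside the usual range
    return comb(n, r) if 0 <= r <= n else 0

def M(m, k, q):
    s = sum(C(m - i*k, m - i*(k+1)) * Fraction(1 - q, q**(k + 1))**i
            for i in range(m // k + 1))
    return q**m * s

def N_formula(m, k, q):            # words with no run of k zeros
    return M(m, k, q) - (M(m - k, k, q) if m >= k else 0)

def N_brute(m, k, q):
    zeros = (0,) * k
    def ok(w):
        return all(w[i:i+k] != zeros for i in range(m - k + 1))
    return sum(ok(w) for w in product(range(q), repeat=m))

for m, k, q in [(5, 2, 2), (6, 3, 3), (7, 2, 4), (4, 4, 2)]:
    assert N_formula(m, k, q) == N_brute(m, k, q), (m, k, q)
print("formula agrees with brute force on all test cases")
```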
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Prove: $z^{12}+3z^8+101z^4+1$ has a root on the unit circle $\newcommand{\cis}{\operatorname{cis}}$ Prove that $$f(z)=z^{12}+3z^8+101z^4+1$$ has a root on the unit circle or inside it, i.e. in $|z|\leq 1$. So I started by looking at $$z^{12}+3z^8+101z^4+1=0$$ Therefore $$z^{12}+3z^8+101z^4=-1$$ Looking at $z=r\cis\theta$ we get $$r^{12}\cis(12\theta)+3r^8\cis(8\theta)+101r^4\cis(4\theta)=-1$$ And I can see that with $x=r^4\cis(4\theta)$ we get $$x^3+3x^2+101x+1=0$$ I also know that if $z$ is a solution then so is $\overline{z}$. How should I continue? Moreover: can we say that a $12$th-degree polynomial has $12$ complex roots (in pairs $z$ and $\overline{z}$), but because we only have the powers $z^{12}$, $z^8$ and $z^4$, there will be fewer than $12$ distinct solutions?
You can apply Rouché's theorem. Let $f(z)=z^{12}+3z^8+101z^4+1$ and $g(z)=101z^4$. For $|z| = 1$: $$|f(z)-g(z)|=|z^{12}+3z^8+1|\leq|z^{12}|+|3z^8|+|1|=5$$ $$|g(z)|=101$$ We can apply the theorem because $5<101$ on the whole circle $|z|=1$. So $f$ has the same number of roots as $g$ in $\{z\in \mathbb{C}:|z|\leq1\}$, and therefore $f$ has $4$ roots in the domain considered.
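For reassurance, a quick numerical cross-check (my own, with NumPy): the twelve roots of $f$ can be computed directly, and exactly four of them lie inside the unit circle, as Rouché predicts.

```python
import numpy as np

# coefficients of z^12 + 3 z^8 + 101 z^4 + 1, highest degree first
coeffs = [1, 0, 0, 0, 3, 0, 0, 0, 101, 0, 0, 0, 1]
roots = np.roots(coeffs)
inside = np.sum(np.abs(roots) < 1)
print("roots with |z| < 1:", inside)            # expect 4
print("min |z| among all roots:", np.abs(roots).min())
```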
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Why solutions of 2nd order linear equations with constant coefficients goes to infinity, but solutions of 1st order only go asymptotic to 0? What I'm talking about is: why does the solution of the 2nd order equation $$y=k^2 y''$$ tend to blow up both as $x \rightarrow + \infty$ and as $x \rightarrow - \infty$, while the solution of the first order equation $y=k^2 y'$ only blows up as $x \rightarrow + \infty$? Also, why does $y=-k^2 y''$ oscillate, but $y=-k^2 y'$ just go to $0$? I know the math checks out, but what could be the mathematical/analytical intuition here? What exactly makes the 2nd order derivative so much stronger? I'm a Physics student and I understand them in a very physical way, which I believe is just reductive reasoning. Thanks a lot if someone could enlighten me on this! :D
What you say is not exactly true: $y=e^{-x}$ is a solution of $y=y''$ and goes to $0$ as $x\to+\infty$. As for the second question, an equation of the form $y''= a\,y$ represents a mechanical system in which $y''$ is the acceleration and $a\,y$ a force that is proportional to the displacement. If $a>0$ then the force points in the same direction as the displacement, the speed keeps increasing, and everything goes to $\infty$. However, if $a<0$, the direction of the force is opposite to the direction of the displacement. The further away you go, the stronger the force against you. This creates oscillation, as in a pendulum or a mass attached to a spring.
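If a numerical illustration helps, here is a small SciPy sketch (the initial data $y(0)=1$, $y'(0)=0$ are chosen arbitrarily) contrasting the two sign cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

def second_order(sign):
    # y'' = sign * y, written as a first-order system (y, y')
    return lambda t, s: [s[1], sign * s[0]]

t = np.linspace(0, 10, 5)
grow = solve_ivp(second_order(+1), (0, 10), [1, 0], t_eval=t)
osc  = solve_ivp(second_order(-1), (0, 10), [1, 0], t_eval=t)
print("y'' = +y :", grow.y[0])   # grows like cosh(t)
print("y'' = -y :", osc.y[0])    # stays bounded, oscillates like cos(t)
```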
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is this equality true? I'm working with a proof of the local and global truncation errors for the Euler method in numerical methods, and I can't seem to understand the equality and the inequality seen below. At first I thought it had to do with geometric sums, but I still don't understand it; any help would be appreciated: $$1+e^{Lh}+e^{2Lh}+...+e^{(i-1)Lh} = \frac{e^{iLh}-1}{e^{Lh}-1} \le \frac{e^{L(t_i-a)}-1}{Lh}$$
The formula $$\sum^i_{n=0} x^n = \frac{x^{i+1}-1}{x-1}$$ holds for any $x\ne 1$ [this is easily seen for $i=1$ and then proven by induction]. Just plug in $x=e^{Lh}$. The inequality follows since $e^{Lh} -1 \ge Lh$ and presumably $ih = t_i -a$.
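A throwaway numerical check of both the identity and the bound (the values of $L$, $h$, $i$, $a$ below are arbitrary):

```python
import math

L, h, i, a = 0.7, 0.01, 50, 0.0       # arbitrary test values; t_i = a + i*h
t_i = a + i * h
lhs = sum(math.exp(k * L * h) for k in range(i))          # 1 + ... + e^{(i-1)Lh}
closed = (math.exp(i * L * h) - 1) / (math.exp(L * h) - 1)
bound = (math.exp(L * (t_i - a)) - 1) / (L * h)
print(lhs, closed, bound)             # lhs == closed, and both <= bound
assert abs(lhs - closed) < 1e-9 and closed <= bound
```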
{ "language": "en", "url": "https://math.stackexchange.com/questions/2211959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solution for the integral $∫ \sin(4x)(1+\cos(4x))\mathrm dx$ Good day! I'm kind of stuck on this integral problem. I've tried multiple times, but I'm still not sure what the correct way of solving it is. The problem is to solve $$\int \sin(4x)(1+\cos(4x)) \mathrm dx$$ I've tried two methods: in the first one I didn't distribute $\sin (4x)$, and in the second I distributed $\sin (4x)$ to turn the integral into $\int \left(\sin(4x)+\sin(4x)\cos(4x)\right) \mathrm dx$. I'm not sure whether I'm supposed to use trig identities, or just integrate directly using $u = \cos (4x)$. Can someone help? Thanks in advance.
Method 1 - Let \begin{align} 1+\cos 4x &= u\\ -\sin 4x \cdot 4\ dx &= du\\ \sin 4x\ dx &= \frac 1{-4}\ du \end{align} Substituting these values, $$\int \frac 1{-4} \cdot u\ du\\ \frac 1{-4} \int u\ du\\ \frac 1{-4} \cdot \frac {u^2}2 + c\\ \frac {u^2}{-8} + c$$ Put the value of $u$ back: $\frac {(1+\cos 4x)^2}{-8} + c$ Method 2 - As you have already done, multiply and divide $\sin4x \cos4x$ by 2. We have $$\frac{2\sin4x\cos4x}{2}$$ $$\frac{\sin8x}{2}$$ since $2 \sin4x \cos4x = \sin8x$. You can then integrate each term easily.
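Both antiderivatives can be confirmed symbolically; the sketch below (my addition) also shows they differ only by a constant, which is why the two methods look different.

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(4*x) * (1 + sp.cos(4*x))
F1 = -(1 + sp.cos(4*x))**2 / 8                  # Method 1
F2 = -sp.cos(4*x)/4 - sp.cos(8*x)/16            # Method 2
print(sp.simplify(sp.diff(F1, x) - integrand))  # 0
print(sp.simplify(sp.diff(F2, x) - integrand))  # 0
print(sp.simplify(F1 - F2))                     # a constant (-3/16)
```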
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find all pairs of elements such that they are not covered by any given sets Given a universe $U$ and a family of sets $F$ over $U$, I want to find all pairs $(x, y)$ where $x, y \in U$, such that $x$ and $y$ are not in any set of $F$ at the same time, i.e., $\not\exists S \in F,\ x\in S \wedge y\in S$. Also, I try to avoid computing $U \times U$, because this is an abstract math problem from another scenario; if I use $U \times U$, that costs too much. What's the formal name of this problem? Is it possible to find all such pairs systematically, using the Cartesian product of two sets together with set algebra? For example, in the two-set case ($|F|=2$), say $F=\{A, B\}$, we can find all such pairs via $(A \setminus B) \times (B \setminus A)$ and $(U \setminus (A \cup B)) \times (A \cup B)$. For $|F|=3$, a Venn diagram gives the following possible solution, which is basically to pair up every two pieces that are not covered together by any single set: $(A \setminus (B \cup C)) \times (B \setminus (A \cup C))$ $\cup$ $(A \setminus (B \cup C)) \times ((B \cap C) \setminus A)$ $\cup$ $(B \setminus (A \cup C)) \times (C \setminus (A \cup B))$ $\cup$ $(B \setminus (A \cup C)) \times ((A \cap C) \setminus B)$ $\cup$ $(C \setminus (A \cup B)) \times (A \setminus (B \cup C))$ $\cup$ $(C \setminus (A \cup B)) \times ((A \cap B) \setminus C)$ $\cup$ $(U \setminus (A \cup B \cup C)) \times (A \cup B \cup C)$ However, there is another solution with far fewer set operations: $(A \setminus (B \cup C)) \times (B \setminus A)$ $\cup$ $(C \setminus (A \cup B)) \times (A \setminus C)$ $\cup$ $(B \setminus (A \cup C)) \times (C \setminus B)$ $\cup$ $(U \setminus (A \cup B \cup C)) \times (A \cup B \cup C)$ For the more general $|F| = k$ case, this gets complicated.
$$(U \times U) \setminus (\bigcup_{f \in F} f \times f)$$ EDIT: the idea is to consider all pairs of points and then for each set in $F$, subtract out the pairs of points such that both belong in it.
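A direct, non-optimized translation into Python sets (note that this does materialize $U\times U$, so it only demonstrates correctness, not the efficiency the question asks about; the universe and family below are made-up test data):

```python
def uncovered_pairs(U, F):
    """All ordered pairs (x, y) with no set in F containing both."""
    all_pairs = {(x, y) for x in U for y in U}
    covered = {(x, y) for f in F for x in f for y in f}
    return all_pairs - covered

U = set(range(6))
F = [{0, 1, 2}, {2, 3}, {4}]
pairs = uncovered_pairs(U, F)
# brute-force cross-check against the defining condition
check = {(x, y) for x in U for y in U
         if not any(x in f and y in f for f in F)}
assert pairs == check
print(sorted(pairs))
```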
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What constitutes a mathematical proof? My question is: what constitutes a mathematical proof? I ask because I am currently taking a Calculus course at university and I am constantly confused about what I'm allowed to assume within a proof. Here is an example: "Prove that $f(x) = \sqrt{|x| + x^2}$ is continuous on the real numbers." And the "proof" from our math book: "$f$ is continuous, because we may present it as a composition of the continuous functions $h(x) = |x| + x^2$ and $g(y) = \sqrt{y}$." Why are we allowed to assume that $h(x)$ and $g(y)$ are continuous? I know it's fairly obvious that they are, but it's also fairly obvious that $f(x)$ is continuous. Yet we cannot simply assume that $f(x)$ is continuous.
I've got a question for you. If we are given two functions $f$ and $g$ such that both $f$ and $g$ are continuous, can you prove that: 1) $f+g$ or $f-g$ is continuous? 2) $f*g$ is continuous? 3) $f\circ g$ is continuous ($\circ$ is composition, meaning $(f\circ g)(x) = f(g(x))$)? 4) $f/g$ is continuous where $g(x)\neq 0$? I ask because if you can prove these in the general case, then your question becomes rather trivial, as it is a direct corollary of these (that is, if they can be proven). Do you want to give that a try? A little warning: you will need to be careful with the domains of the functions $f$ and $g$ (that is, the image and preimage of the functions). However, that may be going beyond what you have covered so far.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $\gcd(a,b)=9,$ then what is $\gcd(a^2,b^3)\,?$ I know that by the Euclidean algorithm I can obtain the relevant equations, and I tried some algebraic manipulation, but I can't seem to determine the answer: $$\text{if }\;\gcd(a,b)=9,\text{ then what is }\gcd(a^2,b^3)\;?$$
Note $\ (A,B) = 9\,\Rightarrow A,B = 9a,9b,\ (a,b) = 1,\, $ so $\,\color{#c00}{(a^2,b^3) = 1}\,$ by Euler. Thus $\ (A^2,B^3) = ((9a)^2,(9b)^3) = 81(\color{#c00}{a^2},9\color{#c00}{b^3}) = 81(a^2,9) = 81(a,3)^2 = (A,27)^2$
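A quick exhaustive check (my own) that the closed form $\gcd(A^2,B^3)=\gcd(A,27)^2$ holds; the value is $81$ when $3\nmid a$ and $729$ when $3\mid a$:

```python
from math import gcd

for A in range(9, 2000, 9):
    for B in range(9, 2000, 9):
        if gcd(A, B) == 9:
            assert gcd(A**2, B**3) == gcd(A, 27)**2   # 81 or 729
print("gcd(A^2, B^3) == gcd(A, 27)^2 on all tested pairs")
```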
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
What is the differential of $X'X$? Let $X = (x_{ij})$ be a square matrix of $n\times n$ variables in $\mathbf{R}$. Could you tell me what $\text{d}(X'X)$ is when $X$ has full column rank? [Update] Honestly speaking, what I have got is: $$ \text{d} (X'X) = (\text{d} X')X + X'\text{d}X. \tag{1} $$ But the textbook Matrix Differential Calculus with Applications in Statistics and Econometrics seems to give a different result, in exercise 3 on page 171, as stated below: Show that $\text{d} \log |X'X| = 2 \text{ tr}(X'X)^{-1}X'\text{d}X$ at every point where $X$ has full column rank. As I know: $$ \begin{align} \text{d} \log |X'X| &= \text{ d} \text{ tr}(\log|X'X|) \\ & = \text{ tr}(\text{d}\log|X'X|) \\ & = \text{ tr}((X'X)^{-1}\text{d}(X'X)) \\ \end{align} $$ With the result in (1), I cannot obtain $\text{d} \log |X'X| = 2 \text{ tr}(X'X)^{-1}X'\text{d}X$ as stated above. Could you help me to solve this?
After some exploration, I intend to answer my own question. Let's assume $X$ is an $m\times n$ matrix of variables, where $m \leq n$. First we have $$ {\rm d} (XX') = ({\rm d}X)X' + X({\rm d}X'). \tag{1} $$ Then $$ \begin{align} {\rm d} |XX'| &= {\rm tr} \big( (XX')^\# {\rm d} (XX') \big) \\ & = {\rm tr} \bigg( (XX')^\# \big(({\rm d}X)X' + X({\rm d}X')\big)\bigg)\\ & = {\rm tr} \bigg( (XX')^\# ({\rm d}X)X' + (XX')^\#X{\rm d}X'\bigg)\\ & = {\rm tr} \bigg( X'(XX')^\# ({\rm d}X)\bigg) + {\rm tr}\bigg( (XX')^\#X{\rm d}X'\bigg)\\ & = {\rm tr} \bigg( X'(XX')^\# ({\rm d}X)\bigg) + {\rm tr}\bigg(X' (XX')^\#{\rm d}X \bigg)\\ & = 2 {\rm tr} \bigg( X'(XX')^\# ({\rm d}X)\bigg) \end{align} $$ Further $$ \begin{align} {\rm d} \log |XX'| &= \frac{1}{|XX'|} {\rm d} |XX'| \\ & = \frac{2}{|XX'|} {\rm tr} \bigg( X'(XX')^\# {\rm d}X\bigg) \\ & = 2\ {\rm tr} \bigg( X'\frac{(XX')^\#}{|XX'|} {\rm d}X\bigg) \\ & = 2\ {\rm tr} \bigg( X'(XX')^{-1} {\rm d}X\bigg) \\ \end{align} $$
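A finite-difference sanity check of the textbook identity for tall $X$ of full column rank (the computation above treats the transposed case $|XX'|$; the same argument with rows and columns swapped gives $\mathrm{d}\log|X'X| = 2\,\mathrm{tr}\bigl((X'X)^{-1}X'\,\mathrm{d}X\bigr)$). Sizes and the random seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 4                        # tall X, full column rank almost surely
X  = rng.normal(size=(m, n))
dX = rng.normal(size=(m, n))
eps = 1e-6

f = lambda A: np.log(np.linalg.det(A.T @ A))
numeric  = (f(X + eps * dX) - f(X - eps * dX)) / (2 * eps)
analytic = 2 * np.trace(np.linalg.inv(X.T @ X) @ X.T @ dX)
print(numeric, analytic)           # agree to ~1e-8
```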
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Reference request: Representation theory over fields of characteristic zero Many representation theory textbooks and online resources work with the field of complex numbers or more generally algebraically closed fields of characteristic zero. I am looking for a good textbook on representation theory which works with fields of characteristic zero directly from the beginning. (In case there are many such textbooks, I would prefer "modern" ones which emphasize the theory of non-commutative algebras; it should really be the opposite of Fulton-Harris.) In particular, it should mention the classification of irreducible representations of symmetric groups over $\mathbb{Q}$, and there should be more than just a side remark that those over $\mathbb{C}$ descend to $\mathbb{Q}$.
The book "The representation theory of the symmetric group" (Encyclopedia of Mathematics and its Applications, Vol. 16, Addison-Wesley, Reading, Mass., 1981), by Gordon James and Adalbert Kerber, develops the characteristic zero representation theory of symmetric groups over $\mathbb{Q}$. In particular it has a theorem (Theorem 2.1.12) explicitly stating that every field is a splitting field for $S_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Consider an isosceles triangle Consider an isosceles triangle. Let $r$ be the radius of its circumscribed circle and $p$ the radius of its inscribed circle. Prove that the distance $d$ between the centres of these two circles is $d =\sqrt {r(r-2p)}$. I could not come up with any idea for solving it. However, I have tried to make a figure (partially).
Consider the following figure: Using Euclid's theorem of sides in a right triangle one has $$2r(r+d+p)=b^2=4r^2-a^2=4r^2-\left({2r\over r+d}\>p\right)^2\ .$$ It follows that $$(r+d)^2(r+d+p)=2r\bigl((r+d)^2-p^2\bigr)=2r(r+d+p)(r+d-p)\ .$$ Removing the factor $r+d+p$ leads to $$r^2+2rd+d^2=2r(r+d-p)\ ,$$ from which the claim immediately follows.
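A numerical confirmation of the formula (this is Euler's triangle formula $d^2=r(r-2p)$, which in fact holds for every triangle, not just isosceles ones); the coordinates below are an arbitrary isosceles example:

```python
import numpy as np

def incenter_inradius(A, B, C):
    a, b, c = (np.linalg.norm(B - C), np.linalg.norm(C - A),
               np.linalg.norm(A - B))
    s = (a + b + c) / 2
    K = np.sqrt(s * (s - a) * (s - b) * (s - c))      # Heron's formula
    I = (a * A + b * B + c * C) / (a + b + c)         # incenter
    return I, K / s

def circumcenter_circumradius(A, B, C):
    # solve |O-A|^2 = |O-B|^2 = |O-C|^2, i.e. two linear equations in O
    M = 2 * np.array([B - A, C - A])
    v = np.array([B @ B - A @ A, C @ C - A @ A])
    O = np.linalg.solve(M, v)
    return O, np.linalg.norm(O - A)

A, B, C = np.array([0., 3.]), np.array([-2., 0.]), np.array([2., 0.])
I, p = incenter_inradius(A, B, C)
O, r = circumcenter_circumradius(A, B, C)
d = np.linalg.norm(O - I)
print(d, np.sqrt(r * (r - 2 * p)))    # the two numbers agree
```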
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Strong Induction to find all possible combinations of two numbers Full disclosure, this is a homework question, so I'm only looking for hints not full solutions please. There is a store which offers two denominations of gift certificates, \$25 and \$40. Determine the possible total amounts that can be formed using the certificates, using strong induction to prove your answer. My first approach was to write out something like this, let $S$ be a possible amount where $S = 25a + 40b$ and $a, b \in \mathbb{Z}^+$ ($0$ being included in the positive integers). The problem is that I'm not sure how to use this as a propositional statement for strong induction. Second, I tried to make a table to find a pattern, like this: $$ \begin{array}{c|ccccc} 0 & 0 & 1 & 2 & 3 & 4 \\ \hline 0 & 0 & 25 & 50 & 75 & 100 \\ 1 & 40 & 65 & 90 & 115 & 140 \\ 2 & 80 & 105 & 130 & 155 & 180 \\ 3 & 120 & 145 & 170 & 195 & 220 \\ 4 & 160 & 185 & 210 & 235 & 260 \\ \end{array} $$ The issue with this approach was that, once again, I could see no particular path towards using strong induction (or any discernible pattern). I'm not sure how best to move forward, and I haven't been able to find many helpful resources. I'd be very grateful for some help, thanks!
First off, if: $$S=25a+40b$$ then, since $2\cdot 40 - 3\cdot 25 = 5$, $$S+5=25(a-3)+40(b+2)$$ but only if $a\ge3$. If $a\le 2$, note that $5\cdot 25 - 3\cdot 40 = 5$, so $$S+5=25(a+5)+40(b-3)$$ provided $b \ge 3$. So, if $S$ is large enough, we can form $S+5$ if we can form $S$ (how large does $S$ have to be to rule out $a\le 2$ and $b\le 2$ simultaneously?). After you've got that figured out, use induction to prove you can reach all multiples of $5$ greater than a certain number (what number?) and prove that you can only reach multiples of $5$ (why?). Then you're left with the small cases, which you'll need to check by hand.
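If you want to see what threshold the induction needs, a brute-force sweep (my own) lists the multiples of $5$ that are not expressible as $25a+40b$:

```python
def representable(S, denoms=(25, 40)):
    d1, d2 = denoms
    return any((S - d2 * b) % d1 == 0 and S - d2 * b >= 0
               for b in range(S // d2 + 1))

gaps = [S for S in range(0, 500, 5) if not representable(S)]
print("non-representable multiples of 5 below 500:", gaps)
# the largest gap is 135, so every multiple of 5 >= 140 is representable
```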
{ "language": "en", "url": "https://math.stackexchange.com/questions/2212874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Relations between the second norms of matrices Consider two arbitrary complex matrices $L$ and $M$. Let $\|L\|_2 \le \|M\|_2$. Prove that $$ \Bigg\|\begin{bmatrix} I & 0 \\ L & I \\ \end{bmatrix}\Bigg\|_2 \le \Bigg\|\begin{bmatrix} I & 0 \\ M & I \\ \end{bmatrix}\Bigg\|_2 $$ My attempt was to reduce the problem. Namely, let $L = U\Sigma V^*$ be the SVD of $L$. Then it is easy to see that $$ \begin{bmatrix} I & 0 \\ L & I \\ \end{bmatrix} \begin{bmatrix} I & L^* \\ 0 & I \\ \end{bmatrix} = \begin{bmatrix} V & 0 \\ 0 & U \\ \end{bmatrix} \begin{bmatrix} I & \Sigma \\ \Sigma & \Sigma^2 + I \\ \end{bmatrix} \begin{bmatrix} V^* & 0 \\ 0 & U^* \\ \end{bmatrix} $$ It means that the eigenvalues of $ \begin{bmatrix} I & 0 \\ L & I \\ \end{bmatrix} \begin{bmatrix} I & L^* \\ 0 & I \\ \end{bmatrix} $ and $ \begin{bmatrix} I & 0 \\ \Sigma & I \\ \end{bmatrix} \begin{bmatrix} I & \Sigma \\ 0 & I \\ \end{bmatrix} $ are equal, and hence so are the singular values of $ \begin{bmatrix} I & 0 \\ L & I \\ \end{bmatrix} $ and $ \begin{bmatrix} I & 0 \\ \Sigma & I \\ \end{bmatrix} $. It follows that it is sufficient to prove the statement for diagonal matrices with nonnegative values on the diagonal. But here I got stuck. Thanks for any help or ideas!
I will handle only the square matrix case here. The other cases are similar, but require some care. Let's reframe the question: if $L$ and $M$ are such that the largest singular values satisfy $\sigma_1(L) \geq \sigma_1(M)$, then (it suffices to show that) the diagonal matrices $\Sigma_L,\Sigma_M$ of singular values are such that the largest eigenvalue of $ \pmatrix{I & \Sigma_L \\ \Sigma_L & \Sigma_L^2 + I} $ is at least the largest eigenvalue of $ \pmatrix{I & \Sigma_M \\ \Sigma_M & \Sigma_M^2 + I} $ To that end, suppose $\Sigma$ is a diagonal matrix with diagonal entries $\sigma_1 \geq \cdots \geq \sigma_n$. Let $$ A = \pmatrix{I & \Sigma \\ \Sigma & \Sigma^2 + I} = I + \pmatrix{0 & \Sigma \\ \Sigma & \Sigma^2} $$ Now, apply a permutation matrix similarity. We reach the block-diagonal matrix given by $$ PAP^* = I + \pmatrix{B_1 \\ & B_2 \\ && \ddots \\ &&& B_n} $$ where $B_i =\left[\begin{smallmatrix}0&\sigma_i\\ \sigma_i & \sigma_i^2\end{smallmatrix}\right]$. Each block has characteristic polynomial $\lambda^2 - \sigma_i^2\lambda - \sigma_i^2$. It follows that the eigenvalues of $B_i$ are $$ \lambda = \frac{\sigma_i^2 \pm \sqrt{\sigma_i^4 + 4\sigma_i^2}}{2} = \frac{\sigma_i^2 \pm \sigma_i\sqrt{\sigma_i^2 + 4}}{2} $$ So, the largest eigenvalue of $A$ is $1 + \frac 12\left(\sigma_1^2 + \sigma_1\sqrt{\sigma_1^2 + 4}\right)$. Now, it suffices to note that if $\sigma_1(L) \geq \sigma_1(M)$, then $$ 1 + \frac 12\left(\sigma_1^2(L) + \sigma_1(L)\sqrt{\sigma_1^2(L) + 4}\right) \geq 1 + \frac 12\left(\sigma_1^2(M) + \sigma_1(M)\sqrt{\sigma_1^2(M) + 4}\right) $$
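A randomized numerical check (mine, with NumPy) of both the monotonicity claim and the closed form for the norm derived above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
I = np.eye(n)

def block(L):
    return np.block([[I, np.zeros((n, n))], [L, I]])

for _ in range(200):
    L, M = rng.normal(size=(n, n)), rng.normal(size=(n, n))
    if np.linalg.norm(L, 2) > np.linalg.norm(M, 2):
        L, M = M, L                          # ensure ||L||_2 <= ||M||_2
    assert np.linalg.norm(block(L), 2) <= np.linalg.norm(block(M), 2) + 1e-12

# closed form from above: ||block(L)||_2^2 = 1 + (s^2 + s*sqrt(s^2+4))/2
s = np.linalg.norm(L, 2)
print(np.linalg.norm(block(L), 2)**2, 1 + (s**2 + s*np.sqrt(s**2 + 4))/2)
```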
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does it make sense to take the grad of this? I have been asked to calculate the grad of a function $V(r)$, where $r$ is a position vector and $\omega$ is the angular velocity. But surely, $V(r)$ is a scalar? So how is this possible? Context: I want to show that a force is conservative with this potential, the force is $F=-m\omega\land(\omega\land r)$ $$V(r)=-\frac12 m |\omega \land r|^2 $$
The gradient operator transforms a scalar field $\Phi(\vec r)$ into the vector field $\nabla \Phi(\vec r)$. It is written in Cartesian coordinates as $$\nabla \Phi(\vec r)=\hat x\frac{\partial \Phi(\vec r)}{\partial x}+\hat y\frac{\partial \Phi(\vec r)}{\partial y}+\hat z\frac{\partial \Phi(\vec r)}{\partial z}$$ Let $V(\vec r)=-\frac12m|\vec \omega \times\vec r|^2$. Then we can write $V(\vec r)$ as $$\begin{align} V(\vec r)&=-\frac m2 (\vec \omega \times \vec r)\cdot (\vec \omega \times \vec r)\\\\ &=-\frac m2 \vec \omega \cdot (\vec r\times (\vec \omega \times \vec r))\\\\ &=-\frac m2\vec\omega \cdot(r^2\vec \omega-(\vec r\cdot \vec\omega)\vec r)\\\\ &=\frac m2((\vec \omega\cdot \vec r)^2-\omega^2r^2) \end{align}$$ Then, the gradient of $V$ is given by $$\begin{align} \nabla V(\vec r)&=\frac m2 \left(2(\vec \omega \cdot \vec r)\nabla (\vec \omega \cdot \vec r)-2r\omega^2\nabla (r)\right)\\\\ &=m\left((\vec \omega \cdot \vec r)\vec \omega-\omega^2\vec r\right)\\\\ &=m(\vec\omega\times(\vec \omega\times\vec r)) \end{align}$$ so the force $F=-\nabla V=-m\,\vec\omega\times(\vec\omega\times\vec r)$ is indeed recovered.
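A symbolic verification with SymPy (my own check; $\vec\omega$ is treated as a constant vector of symbols):

```python
import sympy as sp

m = sp.symbols('m', positive=True)
x, y, z = sp.symbols('x y z')
w1, w2, w3 = sp.symbols('w1 w2 w3')

r = sp.Matrix([x, y, z])
w = sp.Matrix([w1, w2, w3])
cross = w.cross(r)
V = -sp.Rational(1, 2) * m * cross.dot(cross)

grad_V = sp.Matrix([sp.diff(V, v) for v in (x, y, z)])
target = m * w.cross(w.cross(r))
print(sp.simplify(grad_V - target))   # the zero vector
```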
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $G_n$ contains a Hamilton cycle. This is a past paper question on my graph theory course, and I am struggling even to know where to start. The question hasn't asked me to prove anything beforehand, so I don't think it warrants Dirac's theorem or anything else. I don't know where to start! Given $n$ is a natural number, the $n\times n$ grid is the graph $G_n$ whose vertices are the pairs $(i,j)$ for all $1\leq i,j\leq n$ and in which any two pairs $(i,j)$ and $(i',j')$ are joined by an edge if and only if either (i) $i=i'$ and $|j-j'|=1$, or (ii) $|i-i'|=1$ and $j=j'$. Show that $G_n$ contains a Hamilton cycle if and only if $n$ is even.
First, it's not hard to exhibit a Hamiltonian circuit on a $2n\times 2n$ grid graph: use for the vertices $(i,j)$, $1\le i, j\le 2n$. Then the path $$(1,1)\to (1,2n)\to (2,2n)\to (2,2)\to (3,2)\to \cdots \to (2n-1,2n)\to (2n,2n)\to (2n,1)\to (1,1)$$ is a Hamiltonian circuit (each arrow stands for the straight run through all the intermediate vertices). (Note, by the way, that this construction proves that there is a Hamiltonian circuit on an $m\times n$ grid graph if either $m$ or $n$ is even. In the construction above, only the horizontal coordinate needed to be even; the parity of the vertical coordinate was irrelevant.) Now, an $m\times n$ grid graph is a bipartite graph. If $mn$ is even, then there are the same number of vertices on each "side" of the graph; if $mn$ is odd, there are not. But if a bipartite graph has different numbers of vertices in its two sides, it's clear that there cannot be a Hamiltonian circuit (suppose the graph is $(A|B)$, with $|A|>|B|$; choose any vertex $x$ in $A$; a circuit must alternate sides, so when the vertices of $B$ are exhausted, those of $A$ are not, and the circuit cannot return to $x$).
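The boustrophedon construction is easy to test mechanically; here is a small Python sketch (mine) that builds the cycle, including all the intermediate vertices the arrows above skip over, and verifies that it is a Hamiltonian cycle:

```python
def grid_hamilton_cycle(n):            # n even; vertices are (row, col), 1-based
    cycle = [(1, j) for j in range(1, n + 1)]           # row 1, left to right
    for i in range(2, n):                               # snake rows 2..n-1
        cols = range(n, 1, -1) if i % 2 == 0 else range(2, n + 1)
        cycle += [(i, j) for j in cols]
    cycle += [(n, j) for j in range(n, 0, -1)]          # row n, right to left
    cycle += [(i, 1) for i in range(n - 1, 1, -1)]      # back up column 1
    return cycle

def is_hamilton_cycle(cycle, n):
    steps = zip(cycle, cycle[1:] + cycle[:1])
    adjacent = all(abs(a[0]-b[0]) + abs(a[1]-b[1]) == 1 for a, b in steps)
    return adjacent and len(set(cycle)) == n * n == len(cycle)

for n in (2, 4, 6, 8):
    assert is_hamilton_cycle(grid_hamilton_cycle(n), n)
print("valid Hamilton cycles for n = 2, 4, 6, 8")
```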
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I transform these triangles to $30-60-90$ triangles? I'm trying to transform the triangles below subject to a few constraints, and cannot figure out how (algebraically) to do so. For ease of notation, I'll refer to points like $E(C)$ as $C$. I want $CEF$ and $BEF$ to be $30-60-90$. I want to leave $E$ and $C$ fixed. Right now $\angle ECF < 30^\circ$ and $\angle EBF > 30^\circ$. How can I continuously transform the points of these triangles so the end result is to slide $F$ down (so that its $y$-coordinate is the same as $C$'s) and slide $B$ down and to the right so that $\angle EBF = 30^\circ$, while preserving line segments inside each triangle? Bonus points if you can preserve line segments that cross between the two triangles (I think it's impossible). Ideally the solution would look something like $x' = f(x,y)$, $y' = g(x,y)$. I rotated and translated to get to the orientation below, because I suspected polar coordinates would help, but if it's easier to use some other starting point, that's fine. Thanks!
In general you can map $\triangle EBC$ to $\triangle ECC^\prime$ with an affine transformation $f:\mathbb{R}^2\to\mathbb{R}^2$ which leaves every point of line $EC$ fixed and maps $B$ to $C^\prime$. However, it will not map $F$ to $F^\prime$ unless $F$ is the midpoint of segment $CB$. For each point $X$ in $\triangle CEB$ the ray $BX$ intersects side $EC$ at some unique point $P$ and there is a unique number $s\in[0,1]$ such that \begin{equation} X=(1-s)B+sP \tag{1} \end{equation} For each $P$ of segment $EC$ there is a unique number $t\in[0,1]$ such that \begin{equation} P=(1-t)E+tC \tag{2} \end{equation} Thus for each point $X\in\triangle CEB$ there are unique $\triangle CEB$ coordinates $(s,t)$ for $X$ satisfying \begin{equation} X=(1-s)B+s(1-t)E+stC \tag{3} \end{equation} In $\triangle CEC^\prime$, segment $CF^\prime$ is congruent to segment $F^\prime C^\prime$. The transformation \begin{equation} f(X(s,t))=(1-s)C^\prime+s(1-t)E+stC \tag{4} \end{equation} maps $\triangle CEB$ to $\triangle CEC^\prime$ and maps each line segment in $\triangle CEB$ to a line segment in $\triangle CEC^\prime$. However, it does not map $F$ to $F^\prime$ (unless $F$ is the midpoint of $CB$). Instead it maps $F$ to some point $F^{\prime\prime}$ on segment $CC^\prime$. There is no affine transformation $g\,:\triangle CEC^\prime\to\triangle CEC^\prime$ leaving the vertices of $\triangle CEC^\prime$ fixed while mapping $F^{\prime\prime}$ to $F^\prime$ (an affine map fixing three non-collinear points is the identity).
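For completeness, the map (4) can be implemented through barycentric coordinates with respect to $(B, E, C)$: the coefficients $\bigl((1-s),\,s(1-t),\,st\bigr)$ in (3) are exactly those coordinates. The points below are made-up test values:

```python
import numpy as np

def barycentric(X, P0, P1, P2):
    """Coefficients (a0, a1, a2), summing to 1, with X = a0*P0 + a1*P1 + a2*P2."""
    T = np.column_stack([P1 - P0, P2 - P0])
    a1, a2 = np.linalg.solve(T, X - P0)
    return 1 - a1 - a2, a1, a2

def f(X, E, B, C, Cp):
    """The map (4): fixes E and C, sends B to C', preserves line segments."""
    aB, aE, aC = barycentric(X, B, E, C)
    return aB * Cp + aE * E + aC * C

E, B, C = np.array([0., 0.]), np.array([4., 1.]), np.array([3., -2.])
Cp = np.array([2., -3.])
F = (B + C) / 2                  # e.g. the midpoint of CB
print(f(F, E, B, C, Cp))         # lands at the midpoint of C'C
print(f(E, E, B, C, Cp), f(C, E, B, C, Cp))   # E and C stay fixed
```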
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $x_n, y_n \in \mathbb{R}$ with $\lim x_n=a$ and $\lim y_n = b$. Show that if $|x_n-b|<r<|y_n-a|$ for all $n\in\mathbb N$, then $r = |a-b|$. What I intend to do is to show that $|a-b|-\epsilon<r<|a-b|+\epsilon, \forall \epsilon>0$. For all $n>N_0$ we have, $$|y_n-a| = |y_n-b+b-a| \leq |y_n-b|+|b-a| \leq \epsilon+|b-a| $$ Then $r\leq \epsilon +|b-a|$. Now, \begin{align*} |b-a|-\epsilon &= |b-y_n+y_n-a|-\epsilon\\ &\leq |b-y_n|+|y_n-a|-\epsilon\\ &\leq \epsilon+|y_n-a|-\epsilon = |y_n-a|<r \end{align*} $|b-a|-\epsilon<r<|b-a|+\epsilon \implies |r-|b-a||<\epsilon\implies r = |b-a|\quad\quad\quad \blacksquare$ Is it correct?
No, your calculation has a problem; it is in the very last inequality $\vert y_n -a \vert < r$ in the second chain. The correct procedure is as follows: We are given that $\left( x_n \right)_{n \in \mathbb{N}}$ and $\left( y_n \right)_{n \in \mathbb{N}}$ are sequences of real numbers and $a$, $b$, and $r$ are real numbers such that $$ \lim_{n\to\infty} x_n = a, \ \tag{1} $$ $$ \lim_{n\to\infty} y_n = b, \ \tag{2} $$ and $$ \left\vert x_n - b \right\vert < r < \left\vert y_n - a \right\vert \ \mbox{ for all } n \in \mathbb{N}. \ \tag{3} $$ Let $\varepsilon$ be any positive real number. Then, by the definition of the limit of a sequence, from (1) and (2) we can conclude that there exist natural numbers $M$ and $N$ such that $$ \left\vert x_n -a \right\vert < \varepsilon \ \mbox{ for all } n \in \left\{ M, M+1, M+2, \ldots \right\}, \ \tag{4} $$ and $$ \left\vert y_n - b \right\vert < \varepsilon \ \mbox{ for all } n \in \left\{ N, N+1, N+2, \ldots \right\}. \ \tag{5} $$ Now for any real numbers $\alpha$ and $\beta$, the following chain of inequalities holds: $$ \left\vert \ \lvert \alpha \rvert - \lvert \beta \rvert \ \right\vert \leq \left\vert \alpha \pm \beta \right\vert \leq \left\vert \alpha \right\vert + \left\vert \beta \right\vert. \ \tag{6} $$ So for all $n \in \mathbb{N}$ such that $n > \max \left\{ M, N \right\}$, we see that $$ \begin{align} \left\vert a-b \right\vert &= \left\vert a - x_n + x_n - b \right\vert \\ &\leq \left\vert a - x_n \right\vert + \left\vert x_n - b \right\vert \ \mbox{ by (6) above} \\ &< \varepsilon + r, \ \mbox{ by (3) and (4) above} \ \tag{7} \end{align} $$ and $$ \begin{align} \left\vert a-b \right\vert &= \left\vert a - y_n + y_n - b \right\vert \\ &\geq \left\vert a - y_n \right\vert - \left\vert y_n - b \right\vert \ \mbox{ by (6) above } \\ &> r - \varepsilon, \ \mbox{ by (3) and (5) above} \ \tag{8} \end{align} $$ Thus combining (7) and (8), we conclude that $$ r - \varepsilon < \left\vert a-b \right\vert < r + \varepsilon \ \mbox{ for every real number } \varepsilon > 0,$$ which is equivalent to $$ \left\vert \ \left\vert a-b \right\vert - r \ \right\vert < \varepsilon \ \mbox{ for every real number } \varepsilon > 0.$$ Hence $$\left\vert \ \left\vert a-b \right\vert - r \ \right\vert = 0,$$ which implies that $$\left\vert a-b \right\vert = r.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that $\sum_{n=1}^\infty \frac{n^2}{(n+1)!}=e-1$ Show that: $$\sum^\infty_{n=1} \frac{n^2}{(n+1)!}=e-1$$ First I will rewrite the sum: $$\sum^\infty_{n=1} \frac{n^2}{(n+1)!} = \sum^\infty_{n=1} \frac{n^2-1+1}{(n+1)!} = \sum^\infty_{n=1}\frac{n-1}{n!} + \sum^\infty_{n=1} \frac{1}{(n+1)!}$$ Now I will use the series for $e$: $$e^x = 1+ \frac{x}{1!} + \frac{x^2}{2!} + \cdots$$ $$e = 1 + \frac{1}{1!} + \frac{1}{2!} + \cdots$$ $$(e-2) = \sum^\infty_{n=1} \frac{1}{(n+1)!}$$ Now I need help.
$$\frac{n^2}{(n+1)!} = \frac{(n+1)(n-1) + 1}{(n+1)!} = \frac{(n-1)}{n!} + \frac{1}{(n+1)!}$$ Remembering that we're summing to infinity, evaluating the first terms and paying careful attention to the indices, $$ \begin{align} \sum_{n=1}^\infty \left( \frac{(n-1)}{n!} + \frac{1}{(n+1)!} \right) &= \sum_{n=2}^\infty \frac{(n-1)}{n!} + \sum_{n=2}^\infty \frac{1}{n!}\\ &= \sum_{n=2}^\infty \frac{n}{n!}\\ &=\left( \sum_{n=1}^\infty \frac{n}{n!} \right) - 1\\ &= e - 1 \end{align} $$
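The value is easy to corroborate numerically, since the factorial makes the tail tiny:

```python
import math

partial = sum(n**2 / math.factorial(n + 1) for n in range(1, 30))
print(partial, math.e - 1)      # both ~ 1.718281828...
```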
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
$y''+\frac{a}{y^3}=b$, with $a,b>0$. Solving a physics problem, this equation has arisen: $$y''-\frac{a}{y^3}=b$$ with $a,b>0$. Using a Lagrangian I can avoid solving this equation, but since the problem has a solution in terms of elementary functions, I want to know some way to solve it directly. I did $$y'y''-\frac{ay'}{y^{3}}=by' $$ and integrated, but eventually I get $y=0$.
Substitute $y'=p(y)$, with the new unknown function $p$ of the variable $y$. Then $y''=p\,\frac{dp}{dy}$, so the equation becomes $p\,\frac{dp}{dy}=b+\frac{a}{y^3}$, and you can separate variables: integrating once gives the first integral $\frac{p^2}{2}=by-\frac{a}{2y^2}+C$.
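A numerical check (mine, with arbitrary $a$, $b$ and initial data) that the resulting quantity really is conserved along solutions of the original equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 2.0
rhs = lambda x, s: [s[1], b + a / s[0]**3]     # y'' = b + a/y^3

sol = solve_ivp(rhs, (0, 2), [1.0, 0.0], dense_output=True, rtol=1e-10)
y, yp = sol.sol(np.linspace(0, 2, 5))
E = yp**2 / 2 - b * y + a / (2 * y**2)         # the first integral
print(E)                                        # constant along the solution
```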
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluate the integral I need some help in evaluating this integral: $$\int {\frac{{\cos x}}{{{{\sin }^3}x + \sin x + 4}}dx} $$ I've tried using the substitution $u=\sin{x}$ but I ended up with a cubic polynomial in the denominator.
$$\int\limits_0^\pi {\frac{{\cos x}}{{\sin^3 x + \sin x + 4}}}\, dx$$ I'm sorry, it was a definite integral, not an indefinite one, so there's no need to find an antiderivative: when we make the substitution $$u = \sin x,\qquad du = \cos x\, dx$$ the limits become $$x = 0 \;\Rightarrow\; u = \sin (0) = 0,\qquad x = \pi \;\Rightarrow\; u = \sin (\pi) = 0.$$ So, by the properties of definite integrals, the integral becomes $$\int\limits_0^0 {\frac{du}{u^3 + u + 4}} = 0.$$
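Numerical quadrature agrees (a one-line check of my own):

```python
from math import sin, cos, pi
from scipy.integrate import quad

val, err = quad(lambda x: cos(x) / (sin(x)**3 + sin(x) + 4), 0, pi)
print(val)   # ~ 0, up to quadrature error
```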
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Convergence of a sequence in $\mathbb R^n$ with a special property Hi folks, I'm trying to prove (or find a counterexample for) this statement: Let $(x_k)$ be a sequence in $\mathbb R^n$ such that $(x_k - x_{k+1})\to0$; show that $(x_k)$ converges.
Take $x_n = \sum_{k=1}^n\dfrac1{k}$: then $x_{n+1}-x_n = \dfrac1{n+1}\to 0$, yet the harmonic series diverges, so $(x_n)$ does not converge. Also, I think you should use different letters for the dimension ($\mathbb R^n$) and the index ($x_k - x_{k+1}$).
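A quick numeric illustration of why this is a counterexample:

```python
partial = 0.0
for k in range(1, 10**6 + 1):
    partial += 1.0 / k
print(partial)            # ~ 14.39, still growing (like log n)
print(1.0 / (10**6 + 1))  # the next difference x_{n+1} - x_n, already tiny
```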
{ "language": "en", "url": "https://math.stackexchange.com/questions/2213786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }