Problem with a proof where algebraic extensions are assumed to be finite extensions I'm reading the article "Integration in Finite Terms" by Maxwell Rosenlicht and I have a problem with one step in a proof. Rosenlicht wants to prove the following: If $F$ is a differential field of characteristic zero and $K$ an algebraic extension field of $F$, then the derivation on $F$ can be extended to a derivation on $K$ and this extension is unique. After proving uniqueness, Rosenlicht then continues as follows: "We now show that such a [differential field] structure on $K$ exists. Using the usual field-theoretic arguments, we may assume that $K$ is a finite extension of $F$, so that we can write $K=F(x)$, for a certain $x\in K$." Not being an expert in the theory of fields, I don't understand which "usual" arguments he's talking about. An algebraic extension isn't necessarily finite, so why can we assume this here?
Let $K/F$ be an algebraic extension and let $\mathscr F$ denote the set of finite subextensions of $K/F$, that is, the set of subfields $E$ of $K$ containing $F$ such that $E/F$ is a finite extension. For each $E\in\mathscr F$ your text proves the existence of one and only one derivation $d_E:E\to E$ extending the one given on $F$. These derivations are compatible: if $E\subseteq E'$ are both in $\mathscr F$, then $d_{E'}|_E$ is again a derivation on $E$ extending that of $F$, hence equals $d_E$ by the uniqueness already proved. Since $K=\bigcup\mathscr F$, there therefore exists one (and only one) function $d:K\to K$ such that $d|_E=d_E$ for all $E\in\mathscr F$, and this is the required derivation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to minimize $f(x) = \|Ax-b\|$ Solve the problem of minimizing $f(x) = \|Ax-b\|$. Consider all the cases and interpret geometrically. If we write $$\|Ax-b\|^2 = (a_{11}x_1 + \cdots + a_{1n}x_n - b_1)^2 + \cdots + (a_{n1}x_1 + \cdots + a_{nn}x_n - b_n)^2$$ then $$\frac{\partial \|Ax-b\|^2}{\partial x_j} = 2(a_{11}x_1 + \cdots + a_{1n}x_n - b_1)a_{1j} + \cdots + 2(a_{n1}x_1 + \cdots + a_{nn}x_n - b_n)a_{nj}$$ If I try to set $\frac{\partial \|Ax-b\|^2}{\partial x_j} = 0$ I get nothing useful. For $x$ to be a minimizer, I have to have gradient $0$ and Hessian positive definite. If we compute the Hessian just to see: $$\frac{\partial^2 \|Ax-b\|^2}{\partial x_k\partial x_j} = 2a_{1k}a_{1j} + \cdots + 2a_{nk}a_{nj}$$ I see nothing useful here. I think the geometric interpretation comes from the conditions for the gradient to be $0$ and the Hessian to be positive definite, but I don't find these conditions useful. Any ideas?
Write \begin{eqnarray*} \left \lVert Ax-y\right\rVert^{2} &=& x^{T} A^{T} A x -2 y^{T} A x + y^{T} y. \end{eqnarray*} Setting the gradient with respect to $x$ to $0$, \begin{eqnarray*} \nabla_{x} \left \lVert Ax-y\right\rVert^{2} &=& 2 A^{T} A x - 2A^{T} y =0, \end{eqnarray*} yields the normal equations. When $A^{T}A$ is invertible (i.e. $A$ has full column rank), the solution is the well-known least-squares estimate \begin{equation*} \hat{x}_{LS} = \left(A^{T} A \right)^{-1} A^{T} y. \end{equation*} Geometrically, the least-squares solution can be viewed as the (orthogonal) projection of the observation $y$ onto the image of $A$.
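Not part of the original answer, but here is a quick numerical sanity check of the normal equations, sketched in Python/NumPy with random sample data (it assumes $A$ has full column rank):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # tall matrix, full column rank (with probability 1)
y = rng.standard_normal(6)

# Normal-equations solution (assumes A^T A is invertible)
x_ne = np.linalg.solve(A.T @ A, A.T @ y)

# Library least-squares solution for comparison
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x_ne, x_ls)

# Orthogonality: the residual is perpendicular to the image of A,
# which is exactly the projection picture described above
residual = y - A @ x_ne
assert np.allclose(A.T @ residual, 0)
```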
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Show that the ring of polynomials with coefficients in a field, and in infinitely many variables, is not Noetherian Show that the ring of polynomials with coefficients in a field, and in infinitely many variables, is not Noetherian, that is, $R = k [x_i: i\geq1]$ is not Noetherian. I know that I need to exhibit an ideal of the ring that is not finitely generated, but what could this ideal be? Could it be $(x_1,x_2,\ldots)$? Or could I give the following chain of ideals that does not have a maximal element: $(x_1)\subset(x_1,x_2)\subset(x_1,x_2,x_3)\subset\cdots$? How can all ideals that are not finitely generated be classified? What to do in the case where the number of variables is uncountable?
A ring is Noetherian if and only if it satisfies the ascending chain condition, i.e. every increasing chain of ideals terminates. Now you have a chain $$(x_1)\subsetneq(x_1,x_2)\subsetneq(x_1,x_2,x_3)\subsetneq\cdots$$ that never terminates, so $k[x_i: i\ge 1]$ is not Noetherian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A "concrete" example of a left Hopf algebra I came to know from the paper Left Hopf Algebras by Green, Nichols and Taft that one may consider a Hopf algebra whose antipode satisfies only the left (resp. right) antipode condition. To be more precise, let $\Bbbk$ be a field and $(B,\mu,\eta,\Delta,\varepsilon)$ a $\Bbbk$-bialgebra. We say that $B$ is a left Hopf algebra if there exists a linear endomorphism $S:B\to B$ such that $$S(b_1)b_2=\varepsilon(b)1$$ for every $b\in B$ (i.e. $S$ is a left convolution inverse of the identity morphism). In Section 3 of Left Hopf Algebras an "artificial" (in my opinion) example of such an object is provided. Are there some more "concrete" or "natural" examples of this construction?
I learned this "obvious" example from Peter Schauenburg. It doesn't give you a direct answer but comes very close. Take $A=k\{a,b\}$, the free algebra on 2 generators $a$ and $b$. Declare them to be group-like elements, so $A$ is the semigroup algebra on the free monoid on 2 generators, which is a bialgebra. Consider the element $ab-1$. It is easy to check that it is skew-primitive (in general, the difference of two group-likes is skew-primitive); in particular, the ideal generated by it is a coideal. Define $B:=k\{a,b\}/(ab-1)$, the quotient bialgebra. Now it is easy to construct non-invertible endomorphisms with a left inverse on some (infinite-dimensional) vector space: for example, $V=k^{(\mathbb N)}$ and $f(a_1,a_2,\dots)=(0,a_1,a_2,\dots)$. This proves that the element $ba-1$ is not zero in $B$. Notice that $B$ is also a semigroup algebra. The left inverse of $b$ is $a$, but $b$ does not have a right inverse. Remark also that in a Hopf algebra the antipode of a group-like is group-like, so a semigroup algebra is Hopf if and only if the semigroup is a group. This proves that the equation $S(h_1)h_2=\epsilon(h)1$ cannot imply $h_1S(h_2)=\epsilon(h)1$.
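To make the shift-operator example concrete, here is a small Python sketch, with finite tuples standing in for finitely supported sequences in $k^{(\mathbb N)}$: the right shift has a left inverse but no right inverse.

```python
# Right shift f and "drop first coordinate" g on sequences modeled as tuples:
# g∘f = id, but f∘g ≠ id, mirroring ab = 1 while ba ≠ 1 in B.
def f(seq):
    # the injection (a1, a2, ...) -> (0, a1, a2, ...)
    return (0,) + seq

def g(seq):
    # its left inverse: drop the first coordinate
    return seq[1:]

v = (1, 2, 3)
assert g(f(v)) == v        # left inverse works on everything
assert f(g(v)) != v        # but f(g(v)) = (0, 2, 3): no right inverse
```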
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Integrating linearly I came across this question and just want to make sure my understanding is correct. I need to find the general solution of: $$ \frac{dx}{dt} = a(1 - x) $$ In this case, I'm finding how $x$ changes with respect to $t$, so I'm integrating with respect to $t$. Does that mean the answer is $at - xat + C$? Thanks :)
You can't say that, simply because the function $x$ depends on $t$! You cannot treat $x$ as a constant and integrate the right-hand side to get $at - xat + C$. Instead, separate the variables: $\frac{dx}{1-x}=a\,dt$, which integrates to $-\ln|1-x|=at+c$, i.e. $x(t)=1+Ce^{-at}$.
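A quick numerical check, in Python with arbitrary sample constants, that $x(t)=1+Ce^{-at}$ really satisfies $\frac{dx}{dt}=a(1-x)$ while $at-xat+C$ does not define a solution:

```python
import math

# Candidate general solution of dx/dt = a(1 - x): x(t) = 1 + C*exp(-a*t)
a, C = 0.7, 2.5                      # arbitrary sample constants
x = lambda t: 1 + C * math.exp(-a * t)

# Compare a centred finite-difference derivative with a*(1 - x(t))
h = 1e-6
for t in (0.0, 0.5, 2.0):
    dxdt = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(dxdt - a * (1 - x(t))) < 1e-6
```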
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Question about continuous functions. Question. Which of the following statements are true: 1. If $f \in C[0,2]$ is such that $f(0)=f(2)$, then there exist $x_1$ and $x_2$ in $[0,2]$ such that $x_1-x_2=1$ and $f(x_1)=f(x_2)$. 2. Let $f$ and $g$ be continuous real valued functions on $\mathbb{R}$ such that for all $x\in \mathbb{R}$, we have $f(g(x))=g(f(x))$. If there exists $x_0 \in \mathbb{R}$ such that $f(f(x_0))=g(g(x_0))$, then there exists $x_1\in \mathbb{R}$ such that $f(x_1)=g(x_1)$. My Attempts. 1. Here $f(0)=f(2)$. If $f(1) \ne f(0)$, say $f(1)>f(0)$, then for any $k$ such that $f(0)<k<f(1)$ there exist $x_1,x_2$ with $0<x_1<1$ and $1<x_2<2$ such that $f(x_1)=f(x_2)=k$ (by the Intermediate Value Property of $f$). But how can I prove $x_2-x_1=1$? Please help. 2. I don't have any guess here to start...
As always in this kind of problem, to use the intermediate value theorem you have to convert your "equation" into the form $g(x)=0$. Here you want $f(x+1)=f(x)$, so let $g:x\mapsto f(x+1)-f(x)$ be defined on the interval $[0,1]$. You have $g(0)=f(1)-f(0)$ and $g(1)=f(2)-f(1)$, so $g(0)+g(1)=f(2)-f(0)=0$. Either $g(0)$ and $g(1)$ are both $0$ (in which case you have found TWO solutions to your problem), or they are nonzero with opposite signs. As $g$ is continuous, you can apply the IVT to $g$ to prove the existence of a solution. A note: with a little bit of work, you can generalize the result: for every $n$, you can find two points at distance $\frac{2}{n}$ (a fraction $\frac1n$ of the complete interval) where $f$ takes the same value. For example, if you run $10$ miles in an hour, there is a quarter of an hour in which you ran exactly $2.5$ miles.
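To see the argument in action, here is a Python sketch with one sample $f$ satisfying $f(0)=f(2)$ (the choice of $f$ is arbitrary), finding the promised point by bisection on $g$:

```python
import math

# Sample f in C[0,2] with f(0) = f(2) (both are 0 here)
f = lambda x: x * (2 - x) * math.cos(3 * x)
g = lambda x: f(x + 1) - f(x)      # g(0) + g(1) = f(2) - f(0) = 0

lo, hi = 0.0, 1.0
assert g(lo) * g(hi) < 0           # opposite signs, so the IVT applies
for _ in range(60):                # bisection for a root of g
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
x1 = (lo + hi) / 2
# Two points one unit apart where f takes the same value
assert abs(f(x1 + 1) - f(x1)) < 1e-9
```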
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Calculate $f\bigg(\frac{1}{1997}\bigg)+f\bigg(\frac{2}{1997}\bigg)+f\bigg(\frac{3}{1997}\bigg)+\cdots+f\bigg(\frac{1996}{1997}\bigg)$ If $$f(x)=\frac{4^x}{4^x+2},$$ calculate $$f\bigg(\frac{1}{1997}\bigg)+f\bigg(\frac{2}{1997}\bigg)+f\bigg(\frac{3}{1997}\bigg)+\cdots+f\bigg(\frac{1996}{1997}\bigg)$$ My Attempt: I was not able to generalise the expression or get a solid pattern, so I started with smaller numbers and calculated $$f\bigg(\frac{1}{2}\bigg)=\frac{1}{2}$$ $$f\bigg(\frac{1}{3}\bigg)+f\bigg(\frac{2}{3}\bigg)=1$$ $$f\bigg(\frac{1}{4}\bigg)+f\bigg(\frac{2}{4}\bigg)+f\bigg(\frac{3}{4}\bigg)=\frac{3}{2}$$ I could see that $$f\bigg(\frac{1}{n}\bigg)+f\bigg(\frac{2}{n}\bigg)+f\bigg(\frac{3}{n}\bigg)+\cdots+f\bigg(\frac{n-1}{n}\bigg)=\frac{n-1}{2}$$ So, $$f\bigg(\frac{1}{1997}\bigg)+f\bigg(\frac{2}{1997}\bigg)+f\bigg(\frac{3}{1997}\bigg)+\cdots+f\bigg(\frac{1996}{1997}\bigg)=998,$$ which is indeed the right answer. But I am not satisfied with my method. How else can I solve it?
I would say your method is practically speaking what I would also do. Maybe I would rephrase it as follows: Claim: $f(a)+f(1-a)=1$. Then write $S$ for the sum in question; $2S$ can be written as $\left(f(1/1997)+f(1996/1997)\right)+\cdots$ by pairing the terms (the Gauss trick), which is $1996$ by the claim, so $S=998$.
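A quick numerical confirmation of both the claim and the final sum, in Python:

```python
f = lambda x: 4**x / (4**x + 2)

# The key identity f(a) + f(1-a) = 1, spot-checked at a few points
for a in (0.1, 0.37, 0.5, 0.9):
    assert abs(f(a) + f(1 - a) - 1) < 1e-12

# The full sum from the question
S = sum(f(k / 1997) for k in range(1, 1997))
assert abs(S - 998) < 1e-9
```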
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Sufficient Condition for Positivity of Matrix with Operator-valued Entries Let $\mathcal{H}$ be some Hilbert space, let $B(\mathcal{H})$ denote the bounded linear operators acting on $\mathcal{H}$, and let $M_n(B(\mathcal{H}))$ denote the $n \times n$ matrices with operator-valued entries. Let $A = [a_{ij}]$ be one such matrix. My question is: if $\sum_{i,j=1}^n u_i^* a_{ij} u_j$ is a positive semi-definite (PSD) operator in $B(\mathcal{H})$ for all $n$-tuples $(u_1,...,u_n)$ of unitary elements in $B(\mathcal{H})$, does this imply that $A$ is PSD in $M_n(B(\mathcal{H}))$?
Yes. Let $\xi\in \mathcal H^n$. Choose $r $ such that $\|\xi_r\|\geq\|\xi_j\|$ for all $j $. For each $j$, let $x_j\in B(\mathcal H)$ be a contraction such that $x_j\xi_r=\xi_j $. By the argument in this answer, there exists a unitary with $u_j\xi_r=\xi_j$. Then $$ \langle A\xi,\xi\rangle=\sum_{k,j}\langle a_{kj}\xi_j,\xi_k\rangle=\sum_{k,j}\langle a_{kj}u_j\xi_r,u_k\xi_r\rangle=\left\langle\left(\sum_{k,j}u_k^*a_{kj}u_j\right)\xi_r,\xi_r\right\rangle\geq0. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2889948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving a function $f:\mathbb{Z}_{nm} \rightarrow \mathbb{Z}_n \times \mathbb{Z}_m$ is an isomorphism Let $n,m$ be two coprime numbers. Prove that the function $f:\mathbb{Z}_{nm} \rightarrow \mathbb{Z}_n \times \mathbb{Z}_m$ such that $f(\overline{r})=(\overline{r},\overline{r})$ is an isomorphism between rings. I've already proven that $f$ is a homomorphism but I'm struggling with the bijectivity part. Surjectivity: We have to prove that for any $(\overline{a},\overline{b}) \in \mathbb{Z}_n \times \mathbb{Z}_m$ there exists $\overline{r} \in \mathbb{Z}_{nm}$ such that $(\overline{r},\overline{r})=(\overline{a},\overline{b})$. But I don't know how to find such an $\overline{r}$. However, there are two requirements that such an $\overline{r}$ has to meet: $\overline{a}=\overline{r} \iff a \equiv r \pmod n \iff a-r = c_1 n$ $\overline{b}=\overline{r} \iff b \equiv r \pmod m \iff b-r = c_2 m$ But I don't really know how to continue from here. Injectivity: We have to prove that $\overline{a} \neq \overline{b} \implies (\overline{a},\overline{a})\neq(\overline{b},\overline{b})$. However, I don't see why two elements of $\mathbb{Z}_{nm}$ being different implies they are different in both $\mathbb{Z}_{n}$ and $\mathbb{Z}_{m}$.
Surjectivity: since $n$ and $m$ are relatively prime, there are integers $u,v$ with $um+vn=1$. Then $r=aum+bvn$ works: modulo $n$ we have $um\equiv 1$, so $r\equiv a \pmod n$; modulo $m$ we have $vn\equiv 1$, so $r\equiv b \pmod m$. Injectivity: if $(\bar r,\bar r)=(\bar 0,\bar 0)$, then $n\mid r$ and $m\mid r$; since $\gcd(n,m)=1$, it follows that $nm\mid r$, i.e. $\bar r=\bar 0$ in $\mathbb{Z}_{nm}$. (Alternatively, injectivity follows from surjectivity because both rings have exactly $nm$ elements.)
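A brute-force check of both directions in Python, for hypothetical small coprime moduli $n=4$, $m=9$:

```python
from math import gcd

n, m = 4, 9                      # small coprime moduli, chosen for illustration
assert gcd(n, m) == 1

# Bezout coefficients u, v with u*m + v*n == 1, found by brute force here
u, v = next((u, v) for u in range(-m, m) for v in range(-n, n) if u*m + v*n == 1)

# Surjectivity: r = a*u*m + b*v*n hits every target pair of residues (a, b)
for a in range(n):
    for b in range(m):
        r = (a * u * m + b * v * n) % (n * m)
        assert r % n == a and r % m == b

# Injectivity: the map r -> (r mod n, r mod m) has no collisions
images = {(r % n, r % m) for r in range(n * m)}
assert len(images) == n * m
```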
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Calculus/Analysis books I am looking for some titles. Not looking for basic textbooks nor advanced ones; I am craving the real stuff. More in detail, I would like some book that covers calculus in one variable from a more mature perspective (such as the one a PhD student should have). Something that may be helpful in technical situations, not a good read, not for pleasure: not the book I deserve but the one I need. It should be a huge pile of tricks and inequalities. I must say that I am in my last year at university as a math student, and I have read, or at least am aware of, the classics such as Rudin, to name an example. They are great but not quite what I intend here. I don't need (I hope) to learn calculus; I need advanced tools, the kind of things that you run into by luck and keep using ever after. Thank you.
"An Introduction to Measure Theory" by Tao is great; it is also available online.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 5 }
Ultrafilters on finite boolean algebras I am asked to prove a special case of Stone duality, namely that $B\cong \mathcal{P}(\text{Ult}(B))$ by the map $\phi:B\to \mathcal{P}(\text{Ult}(B))$ given by the homomorphism $$ \phi(x)=\{V\in \text{Ult}(B) \;|\; x\in V\}, $$ where $B$ is a finite boolean algebra and $\text{Ult}(B)$ denotes the set of ultrafilters contained in $B.$ I have shown that $\phi$ is injective. Now I need that $\phi$ is surjective. I have considered two ways to do this. The first way is to simply take a collection of ultrafilters $\mathcal{U}=\{U_1,\ldots U_n\}$ and find some $b\in B$ with $\phi(b)=\mathcal{U}.$ If $U=U_1\cap\cdots\cap U_n$ and $b=\bigwedge_{x\in U}x,$ then it seems like $\phi(b)$ should be $\mathcal{U}.$ This amounts to showing that if $V$ is an ultrafilter and $V\supseteq U,$ then $V=U_k$ for some $k.$ However, I do not know how to show this. The second way is to simply compute the cardinalities of $B$ and $\mathcal{P}(\text{Ult}(B)).$ This requires knowing that $|B|=2^m$ for some $m$ and $|\text{Ult}(B)|=m.$ Again, I do not know how to show this. Maybe I'm missing something rather simple, but I'm stuck. Any help is appreciated.
Here's an easier way to show surjectivity: since every element of $\mathcal{P}(\text{Ult}(B))$ is a (finite) union of singletons and $\phi$ is a homomorphism, it suffices to show that every singleton is in the image of $\phi$. In other words, given an ultrafilter $U$ on $B$, we want to find $x\in B$ which is in $U$ and in no other ultrafilter. I encourage you to try to finish the proof from here on your own. One way to do so is hidden below. For each ultrafilter $V\neq U$, choose $x_V\in U\setminus V$. Now let $x$ be the meet of all of these elements $x_V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hint on a measure theory question Let $(X, \mathcal{A}, \mu)$ be a measure space, and define $\mu^\bullet: \mathcal{A} \to [0, +\infty]$ by $$ \mu^\bullet (A) = \sup \left\{ \mu(B) : B \subseteq A, B \in \mathcal{A}, \mu(B) < + \infty \right\} $$ I'm trying to show that $\mu^\bullet$ is countably additive. One thing is clear: if $A_n$ are disjoint sets in $\mathcal{A}$, then if $B \subseteq \cup_n A_n$, and $B_n = B \cap A_n$, then $$ \mu(B) = \sum_n \mu(B_n) \leq \sum_n \mu^\bullet(A_n), $$ thus $\mu^\bullet(\cup_n A_n) \leq \sum_n \mu^\bullet(A_n)$. This is the easy direction (I think). Could I get a small hint on the reverse inequality? (I don't want the solution...)
Hint: fix $\epsilon>0$ and for each $n$ take $B_n\subseteq A_n$ with $\mu(B_n)+\epsilon/2^n>\mu^\bullet(A_n)$ (if some $\mu^\bullet(A_n)$ is infinite, the desired inequality follows directly from monotonicity of $\mu^\bullet$), then try to use the fact that $\sum_n\epsilon/2^n=\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$\mathbf{H}(3)$ is diffeomorphic to $\mathbf{SL}\left( 2,\mathbf{C}\right)/\mathbf{SU}\left( 2\right)$ I'm reading Jensen's book "Surfaces in Classical Geometries". Could anyone help me understand why $\mathbf{H}(3)$ is diffeomorphic to $\mathbf{SL}\left( 2,\mathbf{C}\right)/\mathbf{SU}\left( 2\right)$? The following is a print.
This follows from the general statement that if a Lie group $G$ acts transitively and smoothly on a manifold $X$, and if given $x\in X$ we define the stabilizer $G_x:=\{g\in G \mid gx=x \}$, then $X\cong G/G_x$. This can be seen as a generalization of the orbit-stabilizer theorem: when $X$ is a finite set and $G$ is a finite group, $|G/G_x|=|G|/|G_x|$. We wish to produce a diffeomorphism between $G/G_x$ and $X$, and so we should start by having a map. We note that the map $G/G_x\to X$ sending $[h]$ to $hx$ is well defined, and one can prove that it is smooth and bijective. I'm not immediately seeing why the inverse map is smooth, but I will edit if I can find/think of a simple explanation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is it possible to assume the existence of “Dominating Turing Machines”? Consider three-tape (tape $1$ for the input, tape $2$ for the computation, tape $3$ for the output) two-symbol (blank symbol and non-blank symbol) Turing machines. Let $F(x, y)$ denote the minimal natural number greater than number of non-blank cells on the output tape when machine #$x$ halts given $y$ as the input in unary encoding ($0$ = “$1$”, $1$ = “$11$”, $2$ = “$111$” etc.), where $x$ is the natural number that identifies the corresponding Turing machine. For clarity, we assume that if machine #$x$ does not halt on $y$, then $F(x, y) = 0$. Now we can note that each Turing machine #$i$ corresponds to a particular infinite sequence of natural numbers: $$S_i = (F(i, 0), F(i, 1), F(i, 2), \ldots).$$ Then we can define that two Turing machines #$p$ and #$q$ are $F$-different if the sequence $S_p$ differs from the sequence $S_q$. The Dominating machine is defined as any $Z$-state Turing machine #$D$ such that there exist some minimal natural number $A$ and if you choose any natural number $B \geq A$, denote $F(D, B)$ by $a$, then choose any natural number $K$ that corresponds to any $z$-state machine (where $z \leq Z$ and $K \neq D$) and denote $F(K, B)$ by $b$, you will always observe that $a \geq b$. Do such machines exist? If no, then why? If yes, then let $V$ denote the minimal number of states in the table of instructions of the first Dominating machine. Can we assume that if we choose any number $W$ from the set $\{V+1, V+2, \ldots\}$ and explore all $W$-state Turing machines, then $W$ will correspond to its own family of Dominating machines (assuming that the family contains at least one Dominating machine) and any Dominating machine from this family is $F$-different from any $(W-1)$-state Dominating machine?
I can tell you that if Dominating machines exist, they have at most 14 states. Checking all small machines for a property as complicated as yours would be too laborious without some idea of why it is important or interesting. Let's look at the concept of a universal Turing machine. There is a 2-symbol, 1-tape universal Turing machine with 15 states. For a 3-tape machine the number of states might be fewer. A universal machine simulates a machine encoded in its input until halting. That means your function $F(u,y)$ for a universal machine #$u$ would have two key properties: 1. There are infinitely many numbers $y$ such that $F(u,y)=0$, as there are infinitely many non-halting machines. 2. Define the function $G(Y)=\max_{y\leq Y}F(u,y)$. $G(Y)$ grows on the order of Busy Beaver numbers. Any machine with property (1) cannot dominate a simple machine that always outputs a single 1, and so cannot be Dominating. Any machine that has property (2) must also have property (1); otherwise it could be used to solve the Halting problem. Any machine that dominates #$u$ must have property (2), therefore it must have property (1), therefore it cannot be Dominating. And if it doesn't dominate #$u$, it obviously cannot be Dominating either. Therefore any Dominating machine must have fewer states than the smallest universal machine for your particular model of computation, which has no more than 15.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given GCD of two numbers is 42 and their product is 15876. How many possible sets of numbers can be found? Given that the GCD of two numbers is 42 and their product is 15876, how many possible pairs of numbers can be found? I have no idea. I can only evaluate the LCM. I don't know how to get the answer.
Use $\gcd(x,y) \times \operatorname{lcm}(x,y) = xy$ \begin{array}{rrr} xy &= &15876 \\ \operatorname{lcm}(x,y) &= &378 \\ \hline \gcd(x,y) &= &42 \end{array} Assume $x < y$. Use $\gcd(p^a, p^b) = p^{\min(a,b)}$ and $\operatorname{lcm}(p^a, p^b) = p^{\max(a,b)}$ when $p$ is a prime number. \begin{array}{rcr|ccc} \gcd(x,y) &= &42 & 2^1 & 3^1 & 7^1 \\ \operatorname{lcm}(x,y) &= &378 & 2^1 & 3^3 & 7^1 \\ xy &= &15876 & 2^2 & 3^4 & 7^2 \\ \hline x &= &42 & 2^1 & 3^1 & 7^1 \\ y &= &378 & 2^1 & 3^3 & 7^1 \\ \end{array}
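A brute-force confirmation in Python that $\{42, 378\}$ is the only such pair:

```python
from math import gcd

# Enumerate all factorizations 15876 = x * y with x <= y and gcd(x, y) = 42
pairs = [(x, 15876 // x)
         for x in range(1, 15877)
         if 15876 % x == 0
         and x <= 15876 // x
         and gcd(x, 15876 // x) == 42]

assert pairs == [(42, 378)]      # exactly one pair, up to order
```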
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove that infinitely many pentagons exist satisfying the given requirements A given convex pentagon ABCDE has the property that the areas of the 5 triangles ABC, BCD, CDE, DEA and EAB are all equal. How can I prove that there exist infinitely many non-congruent pentagons having the above property? I tried to take one side of the pentagon as fixed and build upon it, but I couldn't do anything as I couldn't determine the angles of the pentagon. Any help please.
The regular pentagon has this property. Any invertible affine transformation of the plane has the property that equal areas are sent to equal areas. So for instance we can stretch the regular pentagon by a given scale factor in the $x$-direction (but leave the scale in the $y$-direction unchanged) to get an infinite family of non-congruent pentagons with your area property.
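A numerical check of this argument via the shoelace formula, sketched in Python (the stretch factor $3$ is an arbitrary choice):

```python
import math

def tri_area(p, q, r):
    # Shoelace (cross-product) formula for the area of a triangle
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

def five_areas(pts):
    # Areas of the triangles ABC, BCD, CDE, DEA, EAB
    return [tri_area(pts[i], pts[(i+1) % 5], pts[(i+2) % 5]) for i in range(5)]

# Regular pentagon, then the same pentagon stretched in the x-direction
regular = [(math.cos(2*math.pi*k/5), math.sin(2*math.pi*k/5)) for k in range(5)]
stretched = [(3*x, y) for x, y in regular]

for pts in (regular, stretched):
    areas = five_areas(pts)
    assert max(areas) - min(areas) < 1e-12   # all five areas still equal
```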
{ "language": "en", "url": "https://math.stackexchange.com/questions/2890977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve: $\lim_{x\to -\infty} (\sqrt {4x^2+7x}+2x)$ Solve: $$\lim_{x\to -\infty} (\sqrt {4x^2+7x}+2x)$$ My attempt: Rationalizing: $$\lim_{x\to -\infty} (\sqrt {4x^2+7x}+2x) *\frac{\sqrt {4x^2+7x}-2x}{\sqrt {4x^2+7x}-2x}$$ $$=\lim_{x\to -\infty} \frac{4x^2+7x-4x^2}{\sqrt {4x^2+7x}-2x}$$ $$=\lim_{x\to -\infty}\frac{7x}{\sqrt {4x^2+7x}-2x}$$ Dividing numerator and denominator by x: $$=\lim_{x\to -\infty} \frac{7}{\sqrt{4+\frac{7}{x}}-2}$$ $$= \frac{7}{\sqrt{4+\frac{7}{-\infty}}-2}$$ $$= \frac{7}{\sqrt{4+0}-2}$$ $$=\frac{7}{2-2}$$ $$=\infty$$ Conclusion: Limit does not exist. Why is my solution wrong? Correct answer: $\frac{-7}{4}$
Hint: when $x$ goes to $-\infty$, $x$ is negative. On the other hand, we have $$\boxed{\sqrt{x^2}=|x|}$$ and for $x<0$, $|x|=-x$. The mistake occurs when you divide the denominator by $x$: for negative $x$, $$\frac{\sqrt{4x^2+7x}}{x}=-\sqrt{4+\frac{7}{x}},$$ so the denominator tends to $-2-2=-4$ and the limit is $\frac{7}{-4}=-\frac{7}{4}$.
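A quick numerical check of the limit in Python:

```python
import math

f = lambda x: math.sqrt(4*x*x + 7*x) + 2*x

# As x -> -inf, sqrt(4x^2+7x) = -2x*sqrt(1 + 7/(4x)) = -2x - 7/4 + O(1/x),
# so f(x) -> -7/4 rather than diverging
for x in (-1e4, -1e6, -1e8):
    assert abs(f(x) - (-7/4)) < 1e-3
```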
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
How to solve the given integral avoiding infinite series sum? Question: How to solve the following integral? $$I = \int_0^\infty \dfrac{x^{N_a + N_b - 1}}{(p \Omega_1 + \Omega_2 x)^{N_a + 1}} \ln (1 + Qx) \, _2F_1\left( N_b + 1, N_b; N_b +1; \dfrac{-\Omega_3}{\Omega_4}x\right)dx, \tag{1}$$ where $N_a, N_b \in \mathbb Z_+$ and $\Omega_1, \Omega_2, \Omega_3, \Omega_4, p, Q \in \mathbb R_+$. I am looking for a closed-form solution for the above integral. A solution in terms of any special function will also be good enough. Any leads appreciated. My attempt: Representing $\log(1 + Qx)$ in terms of Meijer's $G$ function, we have \begin{align} I = {} & \, \Omega_2^{-(N_a + 1)}\int_0^{\infty}x^{N_d + N_b - 1}\left( x + \dfrac{p\Omega_1}{\Omega_2}\right)^{-(N_a + 1)}G_{2, 2}^{1, 2}\left( Qx \left\vert \begin{smallmatrix} 1, & 1\\ 1, & 0\end{smallmatrix}\right.\right) \\ & \hspace{6cm}\times\, _2F_1\left( N_b + 1, N_b; N_b +1; \dfrac{-\Omega_3}{\Omega_4} x\right) \, dx \tag{2} \end{align} A solution to (2) exists in [1, eqn. 2.2], resulting into infinite series summation. Can anyone suggest any alternate solution that doesn't contain infinite series sum?
Here is an alternate way to solve it. \begin{align} & \int_{0}^{\infty} \dfrac{x^{N_a + N_b - 1}}{(p \Omega_1 + \Omega_2 x)^{N_a + 1}} \ln(1 + Qx) \, _2F_1\left( N_b + 1, N_b; N_b +1; \dfrac{-\Omega_3}{\Omega_4}x\right)dx \\ = & (p\Omega_1)^{-(N_a + 1)} \int_{0}^{\infty} x^{N_a + N_b - 1} \left( 1 + \dfrac{\Omega_2}{p\Omega_1} x\right)^{-(N_a + 1)} \ln(1 + Qx) \, _2F_1\left( N_b + 1, N_b; N_b +1; \dfrac{-\Omega_3}{\Omega_4}x\right)dx \\ = & \dfrac{1}{(p\Omega_1)^{(N_a + 1)} \Gamma(N_a + 1)\Gamma(N_b)} \int_{0}^{\infty} x^{N_a + N_b - 1} G_{1, 1}^{1, 1} \left( \left.\dfrac{\Omega_2}{p\Omega_1} x\right\vert \begin{smallmatrix} -N_a \\ 0\end{smallmatrix}\right) G_{2, 2}^{1, 2} \left( Qx \left\vert \begin{smallmatrix} 1, & 1 \\ 1, & 0\end{smallmatrix} \right.\right) G_{2, 2}^{1, 2} \left( \left.\dfrac{\Omega_3}{\Omega_4}x \right\vert \begin{smallmatrix} -N_b, & 1 - N_b \\ 0, & -N_b\end{smallmatrix}\right)dx, \tag{1} \end{align} where the second term inside the integral is represented as a Meijer's G function using [1, Section IV-C], the log function is represented as a Meijer's G function using [2, below Fig. 1] and the Gauss hypergeometric function is represented as a Meijer's G function using [3, eqn. (17)]. The closed-form solution to (1) is now straightforward to represent using [4], in terms of the extended generalized bivariate Meijer's G function (EGBMGF). Note: Looking for some expert comments/opinions. [1]. I.S. Ansari, et al., "Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems". [2]. P.S. Bithas, "Digital Communications over Generalized-K Fading Channels". [3]. V.S. Adamchik, et al., "The algorithm for calculating integrals of Hypergeometric type functions and its realization in REDUCE system". [4]. http://functions.wolfram.com/07.34.21.0081.01
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding the range of a function without using inverse So I'm fairly close to beginner level in calculus and have usually used the inverse of a function to find its range however I'm not sure what to do when dealing with this particular function. $$ h(t) = \frac{t}{\sqrt{2-t}}$$ I found the domain to be $(-\infty, 2)$ but when I attempt to use the inverse to find the range, it ends up a mess because of the different powers of t. $$ y = \frac{t}{\sqrt{2-t}}$$ $$ \Rightarrow t = \frac{y}{\sqrt{2-y}}$$ $$ \Rightarrow t^2 = \frac{y^2}{2-y}$$ $$ \Rightarrow t^2(2-y) = y^2$$ $$ \Rightarrow 2t^2-t^2y = y^2$$... Maybe it's because I'm a beginner but I'm unsure where to go from here. Sorry if it's a really basic/easy question but I'd really like to learn how to deal with these types of questions. Any help would be appreciated!
We have that $h(t)$ is a continuous function defined for $t<2$ and $$\lim_{t \to -\infty} h(t)=-\infty$$ $$\lim_{t \to 2^-} h(t)=\infty$$ therefore by the IVT the range is $\mathbb{R}$. Moreover we have $$h'(t)=\frac{4-t}{2\sqrt{(2-t)^3}}>0$$ therefore $h(t)$ is also injective and the inverse exists as a function $\mathbb{R}\to (-\infty,2)$.
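A quick numerical sanity check in Python (the sample grid below is arbitrary):

```python
import math

h = lambda t: t / math.sqrt(2 - t)

# h takes arbitrarily large negative and positive values near the
# ends of the domain (-inf, 2) ...
assert h(-1e8) < -1e3 and h(2 - 1e-8) > 1e3

# ... and is strictly increasing on a sample grid, consistent with h' > 0
ts = [-100 + k * (101.9 / 1000) for k in range(1001)]   # grid in [-100, 1.9]
vals = [h(t) for t in ts]
assert all(a < b for a, b in zip(vals, vals[1:]))
```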
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Determining Whether the Number $11111$ is Prime. Used Divisibility Tests. I am asked to determine whether the number $11111$ is prime. Upon using the divisibility tests for the numbers 1 to 11, I couldn't find anything that divides it, so I assumed that it is prime. However, it apparently isn't prime. So what is the procedure to determine whether $11111$ is prime? Thank you for any help.
There are divisibility tests for each prime less than $50$; the relevant one here is the test for divisibility by $41$. Subtract four times the last digit from the remaining leading truncated number. If the result is divisible by $41$, then so was the original number. Apply this rule over and over again as necessary. $$1111-4(1)=1107$$ $$110-4(7)=110-28=82$$ $$8-4(2)=0$$ The number is divisible by $41$; in fact $11111=41\cdot 271$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Thinking of a cube as $\mathbb Z_6\times\mathbb Z_4$ I was wondering if it makes sense to think of a cube as an additive group in this way. For example $(4,1)$ corresponds to face $4$, edge $1$. I am a beginner in group theory and I hope this makes sense! If I add face $4$ edge $1$ to face $3$ edge $0$: $(4,1)+(3,0)=(1,1)$, which would correspond to face $1$ edge $1$?
I don't see how faces and edges are related here. It makes sense as a group, but it doesn't seem to have anything to do with the cube. For instance, you could take four types of pens in six different colors and define addition of (pen $i$, color $j$) pairs exactly as in the group $\mathbb{Z}_6 \times \mathbb{Z}_4$. If it helps you think about the definition of the group, then that is fine, but when one defines a group based on a geometric object like a cube, one is usually thinking about some sort of geometry related to it (its rotations, say), though sometimes a physical picture does help to visualize things. It might be better to think of this group as a bike lock with two dials: the first dial runs from 0 to 5 and the second from 0 to 3, and adding two combinations means rotating each dial by the corresponding amount. This naturally captures the group operation, and the definition is deeply tied to the movement of the object used to explain it. In the same way, a standard four-dial lock with digits 0-9 represents $\mathbb{Z}_{10} \times \mathbb{Z}_{10} \times \mathbb{Z}_{10} \times \mathbb{Z}_{10}$.
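For what it's worth, the asker's sample computation is easy to check with a two-line Python model of $\mathbb{Z}_6 \times \mathbb{Z}_4$:

```python
# Componentwise addition in Z_6 x Z_4, each "dial" wrapping around separately
def add(p, q):
    return ((p[0] + q[0]) % 6, (p[1] + q[1]) % 4)

# Matches the computation in the question: (4,1) + (3,0) = (1,1)
assert add((4, 1), (3, 0)) == (1, 1)
```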
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Factoring a quadratic polynomial (absolute beginner level), are both answers correct? I'm following video tutorials on factoring quadratic polynomials. So I'm given the polynomial: $$x^2 + 3x - 10$$ And I'm given the task of finding the values of $a$ and $b$ in: $$(x + a) (x + b)$$ Obviously the answer is: $$(x + 5)(x - 2)$$ However the answer can be also: $$(x - 2) (x + 5)$$ I just want to make sure if the question asks for the values of '$a$' and '$b$', then '$a$' can be either $5$ or $-2$, and '$b$' can be either $5$ or $-2$. Therefore if a question asks what are the values of '$a$' and '$b$' both the following answers are correct: Answer $1$ $a = -2$ $b = 5$ or Answer $2$ $a = 5$ $b = -2$ I'm sure this is a completely obvious question, but I'm just a beginner in this.
Yes, you are correct. Since $(x+5)(x-2) = (x-2)(x+5) = x^2 + 3x-10$, we note that $(a, b)$ may take either the values $(5,-2)$ or $(-2,5)$. I would consider providing just one of the two solutions to be insufficient, since the question asks for the values of $a$ and $b$ but nowhere mentions that they are unique. However, any question saying "find the values of $a$ and $b$" is wrong to use the word "the": it assumes uniqueness of $a$ and $b$, which is not the case. The question as quoted by you includes the word "the", and this is misleading.
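Since $(x+a)(x+b)=x^2+(a+b)x+ab$, both orderings can be checked by comparing coefficients; a small sketch (the helper `expand` is mine):

```python
def expand(a, b):
    """Coefficients of (x + a)(x + b) = x^2 + (a + b) x + a b,
    returned as (quadratic, linear, constant)."""
    return (1, a + b, a * b)

target = (1, 3, -10)            # x^2 + 3x - 10
assert expand(5, -2) == target
assert expand(-2, 5) == target  # the order of the factors does not matter
```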
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Prove or disprove that a basis of the space is a basis of a subspace Let the vectors $v_1,v_2,v_3,v_4$ be a basis of a space $V$, and let $W$ be a subspace of $V$ such that $v_1,v_2\in W$ and $v_3,v_4\not \in W$. Is $v_1,v_2$ then a basis of $W$? My professor said that you cannot make a basis of a subspace from a basis of the space, but you can make a basis of the space from a basis of a subspace. But here I do not know how to prove it; can you help me?
Since $v_1,v_2,v_3,v_4$ is a basis of the space $V$, $W$ is a subspace of $V$, and $v_1,v_2\in W$, we know that $\dim W\ge 2.$ But it is possible to have $\dim W=3.$ For example, assume $v_3+v_4\in W.$ That is, consider $W=\operatorname{span}\{v_1,v_2,v_3+v_4\}.$ Then $v_1,v_2\in W$ and $v_3,v_4\notin W$, but $v_1,v_2$ do not span $W$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2891938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Span of 3 linearly dependent vectors The vectors are $(1,1,1)$,$(1,2,0)$, and $(2,3,1)$. I have shown that they are linearly dependent but don't really know how to find their span. (Note: my lecturer just literally defined what a span is and didn't get to the part where we actually calculate spans, so I'm completely lost!). Any help will be appreciated.
Given a set of vectors, their span is the set of all linear combinations of those vectors. In this case the span is $$a(1,1,1)+b(1,2,0)+c(2,3,1).$$ Since the three vectors are linearly dependent but $(1,1,1)$ and $(1,2,0)$ are linearly independent, the span is also given by $$a(1,1,1)+b(1,2,0)$$ or by any other pair of the three vectors. Indeed, any such pair is a basis for the span of the three given vectors.
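To see concretely why any pair works here, note that $(2,3,1)=(1,1,1)+(1,2,0)$, and linear dependence of all three can be checked with a $3\times 3$ determinant; a small pure-Python sketch (helper names are mine):

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

v1, v2, v3 = (1, 1, 1), (1, 2, 0), (2, 3, 1)

assert det3(v1, v2, v3) == 0                       # the three vectors are dependent
assert v3 == tuple(x + y for x, y in zip(v1, v2))  # indeed v3 = v1 + v2
assert v1[0] * v2[1] - v1[1] * v2[0] != 0          # nonzero 2x2 minor: v1, v2 independent
```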
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Nature of infinite series $ \sum\limits_{n\geq 1}\left[\frac{1}{n} - \log(1 + \frac{1}{n})\right] $ $$\sum\limits_{n\geq 1}\left[\frac{1}{n} - \log\left(1 + \frac{1}{n}\right)\right]$$ Is it convergent or divergent? Wolfram suggests to use comparison test but I can't find an auxiliary series.
We may take the series $\sum_{n\geq 1}\left[\frac{1}{n}-\log\left(1+\frac{1}{n}\right)\right]$ as an equivalent definition of the Euler-Mascheroni constant $\gamma=\lim_{n\to +\infty}\left(H_n-\log n\right)$, where $H_n=\frac{1}{1}+\frac{1}{2}+\ldots+\frac{1}{n}$ is the $n$-th harmonic number. Over the interval $(0,1)$ we have that $\frac{x-\log(1+x)}{x^2}$ is a decreasing function, going from $\frac{1}{2}$ to $1-\log(2)$. In particular $\gamma$ is bounded between $(1-\log 2)\frac{\pi^2}{6}$ and $\frac{\pi^2}{12}$. More accurate approximations can be derived from creative telescoping, the integral representation $$ \gamma=\int_{0}^{1}-\log(-\log x)\,dx$$ or the Shafer-Fink inequality. Actually $\gamma$ is pretty close to $\frac{1}{\sqrt{3}}$. The irrationality of $\gamma$ is a long-standing open problem.
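Numerically, the partial sums do settle on $\gamma$; a quick sketch using only the standard library (the constant below is the usual decimal value of $\gamma$):

```python
import math

def partial_sum(N):
    """Partial sum of 1/n - log(1 + 1/n); telescopes to H_N - log(N + 1)."""
    return sum(1.0 / n - math.log(1.0 + 1.0 / n) for n in range(1, N + 1))

gamma = 0.5772156649015329  # Euler-Mascheroni constant
assert abs(partial_sum(100_000) - gamma) < 1e-4

# the terms are positive, so the partial sums increase monotonically to gamma
assert partial_sum(20_000) < partial_sum(40_000) < gamma
```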
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Prove $\lim_{n\to \infty} n\int_0^1x^nf(x) \,\text dx=0 $ Assume $f(x)$ is continuous on $[0,1]$ , and $f(1)=0$. Prove $$\lim_{n\to \infty} n\int_0^1x^nf(x)\,\text dx=0 $$ I already know that $\lim_{n\to \infty}\int_0^1x^nf(x) \, \text dx=0$. Is this helpful in the question above?
Fix some $\epsilon>0$. Then there is a $\delta>0$ (smaller one) so that on the interval $[1-\delta,1]$ we have $|f|<\epsilon$. Now we can easily estimate: $$ \begin{aligned} 0 &\le \left|n\int_0^1 x^n\; f(x)\; dx\right| \\ &\le \int_0^1 (n+1)x^n\; |f(x)|\; dx \\ &= \int_0^{1-\delta}(n+1)x^n\; |f(x)|\; dx + \int_{1-\delta}^1(n+1)x^n\; |f(x)|\; dx \\ &\le \int_0^{1-\delta}(n+1)x^n\; \|f\|\; dx + \int_{1-\delta}^1(n+1)x^n\; \epsilon\; dx \\ & \le (1-\delta)^{n+1} \|f\| + \epsilon \ . \end{aligned} $$ We pass to the limit (superior) w.r.t. $n$ now in the obtained inequality, getting $$ \limsup_n\left|n\int_0^1 x^n\; f(x)\; dx\right|\le \epsilon\ . $$ Now we let $\epsilon$ go to zero. So the limit exists, and is zero.
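For a concrete instance, take $f(x)=1-x$ (continuous, with $f(1)=0$): here $n\int_0^1 x^n(1-x)\,dx=\frac{n}{(n+1)(n+2)}\to 0$, and a crude trapezoidal quadrature agrees with the closed form; a sketch (helper names mine):

```python
def n_times_integral(n, steps=20_000):
    """Trapezoidal approximation of n * integral_0^1 x^n (1 - x) dx."""
    h = 1.0 / steps
    s = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * x**n * (1.0 - x)
    return n * s * h

def closed_form(n):
    # n * (1/(n+1) - 1/(n+2)) = n / ((n+1)(n+2))
    return n / ((n + 1) * (n + 2))

assert abs(n_times_integral(50) - closed_form(50)) < 1e-3
assert closed_form(10**6) < 1e-5   # the limit is 0, as the proof shows
```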
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Standard Coordinate Charts On A Sphere Below are excerpts from Lee's Introduction To Smooth Manifolds for the context of my question: What I am confused about is the part where he talks about $\phi_i^+ \circ (\phi_i^-)^{-1} = \phi_i^- \circ (\phi_i^+)^{-1} = Id_{\mathbb{B}}$. There seems to be a mismatch of domain and range. We have $\phi_i^-: U_i^- \cap \mathbb{S}^n \rightarrow \mathbb{B}^n$ so $(\phi_i^-)^{-1}: \mathbb{B}^n \rightarrow U_i^- \cap \mathbb{S}^n$. But $\phi_i^+: U_i^+ \cap \mathbb{S}^n \rightarrow \mathbb{B}^n$ so I don't understand how $\phi_i^+ \circ (\phi_i^-)^{-1}$ makes sense.
Ugh. Example 1.31 is entirely messed up. Someone pointed this out to me several months ago, but I was too busy at the time and forgot to get back to it. I've now added a correction to my errata list. Thanks for pointing it out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Asymptotic for $y'' + \frac{\epsilon y'}{y^2} - y' = 0$, $y(-\infty) = 1$, $y(+\infty) = \epsilon$. Asymptotic for $y'' + \frac{\epsilon y'}{y^2} - y' = 0$, $y(-\infty) = 1$, $y(+\infty) = \epsilon$. I started with a regular expansion for $$y^2y'' + \epsilon y' - y^2 y' = 0$$ and $$y = y_0 + \epsilon y_1 + O(\epsilon^2)$$ with $y_0(-\infty) = 1$, $y_0(+\infty) = 0$ and $y_1(-\infty) = 0$, $y_1 (+\infty) = 1$. The zero order ODE is $$ y_0'' - y_0' = 0, $$ which gives $y_0 = Ae^{x} + B$, but this cannot satisfy the boundary conditions. How should I approach this differently? So this problem is a singular perturbation problem. We cannot use WKB because the equation is not linear, and I am not familiar with explicit boundary layer calculations on an unbounded domain.
The differential equation has an exact solution in implicit form $$ x - x_0 = \frac{\ln \left( cy + y ^{2}+\epsilon \right)}{2} +{\frac {c}{\sqrt {{c}^{2}-4\, \epsilon}}{\rm arctanh} \left({\frac {2\,y +c}{\sqrt {{c}^{2}-4\, \epsilon}}}\right) } $$ Actually it's better (changing the constant $x_0$) to write this as $$ x - x_0 = \frac{\ln \left(- cy - y ^{2}-\epsilon \right)}{2} +{\frac {c}{\sqrt {{c}^{2}-4\, \epsilon}}{\rm arctanh} \left({\frac {2\,y +c}{\sqrt {{c}^{2}-4\, \epsilon}}}\right) }$$ If $c = -1-\epsilon$, it turns out that the right side will go to $-\infty$ as $y \to \epsilon+$ and $+\infty$ as $y \to 1-$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
If we restrict cosine to only where it satisfies a linear property, will it create ellipses? On another forum, someone asked if cosine was linear. I remarked, of course not! We know already that $$\cos(x+y) = \cos x \cos y - \sin x \sin y$$ If it were linear, we would need $$\cos(x+y) = \cos x + \cos y$$ So, I decided to graph it. Inside each $2\pi$ square, the graph looks very much like an ellipse. But, I am more of a combinatorist, and I do not have much intuition for how to check how "ellipse-like" it is. How would one check? Is there a substitution that could be used? Here is the link to the Wolframalpha plot: Wolframalpha Plot
The graph suggests that we may simplify the problem by rotating the coordinate axes in the anti-clockwise direction (or clockwise, but let's just choose to go anti-clockwise). (Indeed, note that for every solution $(a, b)$, $(b, a)$ is also a solution, so the graph is symmetric along the line $x=y$; rotating thus makes the solution set in the new basis symmetric about $x'=0$.) The forward transformations are: $$x' = x\cos(\frac{\pi }{4}) + y\sin(\frac{\pi }{4})$$ $$y' = -x\sin(\frac{\pi }{4}) + y\cos(\frac{\pi }{4})$$ This transforms the equation as you gave it to: $$\cos(\sqrt{2}y') = \cos\left ( \frac{1}{\sqrt{2}} \left ( x'+y' \right ) \right ) + \cos\left ( \frac{1}{\sqrt{2}} \left ( -x'+y' \right ) \right )$$ We derive the equation of the ellipse immediately above the origin, assuming it exists, and show that it contains points not in the solution set of the above equation. We first determine the semi-minor axis length by finding the first two positive solutions to the above equation where $x'=0$, i.e. to: $$\cos(\sqrt{2}y') = 2\cos\left ( \frac{1}{\sqrt{2}} y' \right )$$ Skipping the details, the two sought-for solutions are $(x'_{1}, y_{1}')= (0, \sqrt{2} \cos^{-1}\left ( \frac{1-\sqrt{3}}{2} \right ))$ and $(x'_{2}, y'_{2}) = (0, \sqrt{2}\left (2\pi - \cos^{-1}\left ( \frac{1-\sqrt{3}}{2} \right ) \right ))$. The semi-minor axis length is therefore $\sqrt{2}\left ( \pi - \cos^{-1}\left ( \frac{1-\sqrt{3}}{2} \right ) \right ) $. We continue the same process for the semi-major axis, along the horizontal line through the ellipse's center; this gives the endpoints $(x_{3}', y_{3}') = (\frac{2\sqrt{2}\pi }{3}, \sqrt{2}\pi)$ and $(x_{4}', y_{4}') = (-\frac{2\sqrt{2}\pi }{3}, \sqrt{2}\pi)$.
The equation of the ellipse above the origin, in the rotated coordinate system is: $$\left ( \frac{3x'}{2\sqrt{2}\pi } \right ) ^{2} + \left ( \frac{y'-\sqrt{2}\pi }{\sqrt{2}\left ( \cos^{-1}\left ( \frac{1-\sqrt{3}}{2} \right )- \pi \right) } \right )^{2} = 1$$ By trying different values of $x'$ and $y'$, one can check that the above does not always imply $$\cos(\sqrt{2}y') = \cos\left ( \frac{1}{\sqrt{2}} \left ( x'+y' \right ) \right ) + \cos\left ( \frac{1}{\sqrt{2}} \left ( -x'+y' \right ) \right )$$ So the curves you see in the picture aren't exactly rotated ellipses. But they sure are close.
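The final claim can be checked numerically: the axis endpoints derived above satisfy the original equation exactly, while a generic point on the candidate ellipse does not. A sketch of that check (all names are mine; constants as in the answer):

```python
import math

SQRT2 = math.sqrt(2)
theta = math.acos((1 - math.sqrt(3)) / 2)   # arccos((1 - sqrt 3)/2)
a = 2 * SQRT2 * math.pi / 3                 # semi-major axis length
b = SQRT2 * (math.pi - theta)               # semi-minor axis length
center = SQRT2 * math.pi                    # height of the ellipse's center

def residual(xp, yp):
    """cos(sqrt2 y') - cos((x'+y')/sqrt2) - cos((-x'+y')/sqrt2); zero on the true curve."""
    return (math.cos(SQRT2 * yp)
            - math.cos((xp + yp) / SQRT2)
            - math.cos((-xp + yp) / SQRT2))

# the bottom of the ellipse is an exact solution: cos(2t) = 2 cos(t) at t = theta
assert abs(residual(0.0, SQRT2 * theta)) < 1e-9

# but a generic point on the candidate ellipse misses the curve by a visible margin
xp = 1.5
yp = center + b * math.sqrt(1 - (xp / a) ** 2)
assert abs(residual(xp, yp)) > 1e-3
```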
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 2, "answer_id": 1 }
Are the sets of a power set considered "elements?" I'm trying to review some set theory. The question I'm encountering is "How many elements are in a power set of a set?" I know the answer if my interpretation of the question is correct. If the original set A has n elements, the power set of A will have 2^n new sets within it, but are these sets considered "elements" of the power set, or am I misinterpreting the question? If these sets are not considered elements of the power set, then is the question asking for the total number of elements of all the sets of the power set? I'm not entirely sure.
Short Answer: Yes Long Answer: I know why this is confusing, but always think of it this way: a set can contain any kind of objects, but the term elements exclusively refers to the objects that are members of the set. For example, if I say $x$ is an element of $y$, then $x\in y$, regardless of what $x$ is, even if it is another set.
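Concretely, a set with $n$ elements yields a power set with $2^n$ elements, each of which is itself a set; a small sketch with `itertools`:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, each returned as a set."""
    items = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))]

ps = power_set({1, 2, 3})
assert len(ps) == 2**3 == 8
assert set() in ps and {1, 2, 3} in ps   # the empty set and s itself are both elements

for n in range(6):
    assert len(power_set(range(n))) == 2**n
```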
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finite dimensional division algebras over the reals other than $\mathbb{R},\mathbb{C},\mathbb{H},$ or $\mathbb{O}$ Have all the finite-dimensional division algebras over the reals been discovered/classified? The are many layman accessible sources on the web describing different properties of such algebras, but all the ones I have come across seem to stop short of fully restricting them to $\mathbb{R},\mathbb{C},\mathbb{H},$ or $\mathbb{O}$, yet do not mention the existence of anything beyond those four. The wikipedia page on division algebras mentions that any finite-dimensional division algebra over the reals must be of dimension 1, 2, 4, or 8. It also mentions the only finite-dimensional division algebras over the real numbers which are alternative algebras are the real numbers themselves, the complex numbers, the quaternions, and the octonions. (And this claims we don't even need the finite-dimensional qualifier for the last statement.) Hurwitz's theorem tells us that these are also the only normed unital division algebras over the reals. So any finite dimensional division algebra over the reals other than $\mathbb{R},\mathbb{C},\mathbb{H},$ or $\mathbb{O}$ cannot have a norm if it is unital, nor have a matrix representation, nor even be alternative. Are there any known examples? Have all the possibilities been classified?
Real division algebras have not been classified. The best known result is the classification of flexible division algebras, which is found in this paper by Darpo. Some more details can be found in this survey paper (also by Darpo), where he states that the general classification is not known.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Improper integral $\int_{0}^{\infty}\left(\frac{1}{\sqrt{x^2+4}}-\frac{k}{x+2}\right)\text dx$ Given the improper integral $$\int \limits_{0}^{\infty}\left(\frac{1}{\sqrt{x^2+4}}-\frac{k}{x+2}\right)\text dx \, ,$$ there exists $k$ that makes this integral convergent. Find its value. The choices are $\ln 2$, $\ln 3$, $\ln 4$, and $\ln 5$. I've written down all the information from the problem, yet I'm not sure whether I should find the value of the integral or of $k$. What I've tried so far is $\int_{0}^{\infty} \frac{1}{\sqrt{x^2+4}} \, \text dx= \left[\sinh^{-1}{\frac{x}{2}}\right]_{0}^{\infty}$. How should I proceed?
There is no integration issue in a right neighbourhood of the origin, but when $x\to +\infty$ we have that the integrand function behaves like $\frac{1-k}{x}+O\left(\frac{1}{x^2}\right)$, so a necessary and sufficient condition for integrability is $k=1$. In such a case $$\begin{eqnarray*} \int_{0}^{+\infty}\left[\frac{1}{\sqrt{x^2+4}}-\frac{1}{x+2}\right]\,\text dx &\stackrel{x\mapsto 2z}{=}& \int_{0}^{+\infty}\left[\frac{1}{\sqrt{z^2+1}}-\frac{1}{z+1}\right]\,\text dz\\[0.3cm]&=&\left[\text{arcsinh}(z)-\log(z+1)\right]_{0}^{+\infty}\\[0.3cm]&=&\lim_{z\to +\infty}\text{arcsinh}(z)-\log(z+1)\\&\stackrel{z\mapsto\sinh t}{=}&\lim_{t\to +\infty} \log\left(\frac{e^t}{\sinh t+1}\right)=\color{red}{\log 2}.\end{eqnarray*}$$
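Numerically, with $k=1$ the antiderivative $\operatorname{arcsinh}(x/2)-\log(x+2)$ confirms the value; a short sketch (the function name is mine):

```python
import math

def integral_up_to(T):
    """arcsinh(T/2) - log(T+2) + log(2): the integral of
    1/sqrt(x^2+4) - 1/(x+2) from 0 to T, by the antiderivative."""
    return math.asinh(T / 2) - math.log(T + 2) + math.log(2)

assert abs(integral_up_to(1e8) - math.log(2)) < 1e-6
assert abs(integral_up_to(1e12) - math.log(2)) < 1e-10
```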
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If $ x,y ∈\Bbb{Z} $ find $x$ and $y$ given: $2x^2-3xy-2y^2=7$ We are given an equation: $$2x^2-3xy-2y^2=7$$ And we have to find $x,y$ where $x,y ∈\Bbb{Z}$. After we subtract $7$ from both sides, it's clear that this is a quadratic equation in standard form, where the coefficient $a$ equals $2$, $b=-3y$ and $c=-2y^2-7$. Thus the discriminant equals $9y^2+16y^2+56=25y^2+56$. Now $x = \frac{3y±\sqrt{25y^2+56}}{4}$; for $x$ to be an integer, $\sqrt{25y^2+56}$ has to be an integer too. I substituted nonnegative integers for $y$ first, because the answer wouldn't differ anyway as it is squared, and found that when $y=1$, $\sqrt{25y^2+56}=9$, so we get $x = \frac{3±9}{4}$, and with the plus sign we get $x = \frac{3+9}{4}=3$. So there we have it: $x=3, y=1$ is one of the solutions. But $y=-1$ will also work because $y$ is squared; again, we get $x = \frac{-3±9}{4}$, and with the minus sign we have $x=-3$. Thus another solution, leaving us with: $$ x=3, y=1$$ $$ x=-3, y=-1$$ I checked other integers for $y$ but none of them lead to a solution where $x$ is also an integer. But here's the problem: how do I know for sure that these two solutions are the only ones? I obviously can't keep substituting integers for $y$, as there are infinitely many. So that's why I came here for help.
Notice $$2x^2-3xy-2y^2=(2x+y)(x-2y)=7.$$ Therefore, we have the four cases: 1) $2x+y=1, x-2y=7.$ Thus $x=\dfrac{9}{5},y=-\dfrac{13}{5}$, which are not integers. 2) $2x+y=7, x-2y=1.$ Thus $x=3,y=1$, which is a valid integer solution. 3) $2x+y=-1, x-2y=-7.$ Thus $x=-\dfrac{9}{5},y=\dfrac{13}{5}$, which are not integers. 4) $2x+y=-7, x-2y=-1.$ Thus $x=-3,y=-1$, which is a second valid integer solution. As a result, we have found two integer solutions: $$x=3, y=1,$$ or $$x=-3,y=-1.$$
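A brute-force search over a box of integers agrees (and the factorization argument shows no solutions can lie outside such a box); a quick sketch:

```python
# exhaustive search in a box of integers
solutions = sorted((x, y)
                   for x in range(-50, 51)
                   for y in range(-50, 51)
                   if 2 * x * x - 3 * x * y - 2 * y * y == 7)
assert solutions == [(-3, -1), (3, 1)]

# sanity check of the factorization used above
for x in range(-10, 11):
    for y in range(-10, 11):
        assert (2 * x + y) * (x - 2 * y) == 2 * x * x - 3 * x * y - 2 * y * y
```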
{ "language": "en", "url": "https://math.stackexchange.com/questions/2892975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to reduce a polynomial congruence Consider the Legendre symbol $(2|p)$, which gives the congruence $2^{\frac{p-1}{2}} \equiv (-1)^{\frac{p^2-1}{8}} \pmod p$. Now ${\frac{p^2-1}{8}}$ is odd if it is equal to $2k+1$ with $k$ an integer, which gives $p^2 = 16 k + 9$ and leads to the polynomial congruence $p^2 \equiv 9 \pmod{16}$. Now the solution gives the congruence $p \equiv \pm 3 \pmod{8}$, so a reduction is possible. Do you know why? Thanks
If $2^{m+2}\mid(p-a)(p+a)$ with $m\ge1$ and $a$ odd: as $p+a,p-a$ have the same parity, both must be even, so $2^m\mid\dfrac{p-a}2\cdot\dfrac{p+a}2$. As the two factors have opposite parities, $2^m$ will divide exactly one of them.
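The resulting reduction can be verified by enumerating odd residues: $p^2\equiv 9\pmod{16}$ holds exactly when $p\equiv\pm3\pmod 8$. A short sketch:

```python
# odd residues mod 16 whose square is 9 mod 16, reduced mod 8
good = sorted({p % 8 for p in range(1, 16, 2) if (p * p) % 16 == 9})
assert good == [3, 5]   # 3 and 5 = -3 mod 8, i.e. p = +/- 3 (mod 8)

# cross-check on actual odd numbers
for p in range(1, 1000, 2):
    assert ((p * p) % 16 == 9) == (p % 8 in (3, 5))
```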
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$X$ is an admissible variation of $\mathbf{x}$ I'm reading this subject as a hobby. Could someone help me solve this problem, since I have been doing some geometry for some time? Let $\mathbf{e}_{3}$ be a smooth unit normal along the immersion $\mathbf{x} \colon M \to \mathbb{R}^3$ compatible with the orientation of $M$. If $g$ is a smooth function with compact support $S\subset M$ then there exists $\epsilon >0$ such that $$ X \colon M \times (-\epsilon, \epsilon) \to \mathbb{R}^3, \quad X(m,t) = \mathbf{x}(m) + t g(m) \mathbf{e}_3(m) $$ is an admissible variation of $\mathbf{x}$. If $a$ and $c$ are the principal curvatures of $\mathbf{x}$, then $\epsilon = \min_{S}\left\{ \frac{1}{|a|},\frac{1}{|c|}\right\}$ (the minimum taken over $S$) works. The definition of admissible variation is as follows: Definition 8.1. An admissible variation of $\mathbf{x}$ is any smooth map $$ X \colon M \times (-\epsilon, \epsilon) \to \mathbb{R}^3, $$ with compact support, such that for each $t \in (-\epsilon, \epsilon)$, the map $$ \mathbf{x}_t \colon M \to \mathbb{R}^3, \quad \mathbf{x}_t(m) = X(m,t), $$ is an immersion. The support of $X$ is the closure in $M$ of the set of points of $M$ where $\mathbf{x}_t(m) \neq \mathbf{x}(m)$, for some $t$. (Original scanned image here.)
This would be correct if you had no function $g$ in the variation, i.e., if $g=1$ everywhere. You'll need to divide your $\epsilon$ by the maximum of $|g|$ on $M$. Let's call that quantity $C$. Then the result is easy enough to prove. To simplify things, assume there are no umbilic points and let $\mathbf e_1,\mathbf e_2$ be a principal moving frame, with dual coframe $\omega_1,\omega_2$. Then $d\mathbf e_3 = -(k_1\omega_1\mathbf e_1 + k_2\omega_2\mathbf e_2)$, and \begin{align*} d\mathbf x_t &= d\mathbf x + tg d\mathbf e_3 + t\,dg\,\mathbf e_3 \\ &= (1-tgk_1)\omega_1\,\mathbf e_1 + (1-tgk_2)\omega_2\,\mathbf e_2 + t\,dg\,\mathbf e_3. \end{align*} This will have rank $2$ (independent of the nature of the function $g$) provided $|t|<\dfrac 1{C\sup|k_i|}$ for $i=1,2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Notation of the Taylor Polynomial with Lagrange Remainder I have this theorem in my book: Consider $f: \mathbb{R^n} \rightarrow \mathbb{R}$ a function of class $C^1$ and $\overline{x}, d \in \mathbb{R^n}$. If $f$ is twice differentiable in the segment $(\overline{x}, \overline{x}+d)$, then there exists $t \in (0,1)$ such that $$ f(\overline{x}+d) = f(\overline{x}) + \nabla f(\overline{x})^{T}d + \dfrac{1}{2}d^{T} \nabla^2f(\overline{x}+td)d. $$ I couldn't understand the notation of the last term, and I don't know why it holds for some $t \in (0,1)$. For example, I know that if I consider in $\mathbb{R^2}$ the pair $(x,y) = \overline{x} + d$ and $\overline{x} = (a,b)$, I have $$\begin{align} f(x,y) &= f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b) + \\ &+\dfrac{1}{2}[f_{xx}(a,b)(x-a)^2 + 2f_{xy}(a,b)(x-a)(y-b) + f_{yy}(a,b)(y-b)^2]. \end{align}$$ And the second term I can write as $$ f_x(a,b)(x-a) + f_y(a,b)(y-b) = \nabla f(a,b)^T \begin{bmatrix} x-a \\y -b \end{bmatrix} = \nabla f(a,b)^T \left( \begin{bmatrix} x \\y \end{bmatrix} - \begin{bmatrix} a \\b \end{bmatrix} \right). $$ So, if I set $x = \overline{x}+d$ I have $$ f(\overline{x} + d) = f(\overline{x}) + \nabla f(\overline{x})^T(x-\overline{x}) = f(\overline{x}) + \nabla f(\overline{x})^Td. $$ Now I can't understand the notation of the last term. Could someone please explain that notation, and the role of the $t$ in it, in plain language?
What you wrote doesn't make sense. I hope your book isn't writing the last term like that, or the author is using a strange notation. What you should have is something like this $${1\over 2} d^T H_f(\overline{x}) d$$ where $H_f(\overline{x})$ is the Hessian matrix, the matrix of all the second order partial derivatives of a function. For a function $$f:\mathbb{R}^2\rightarrow\mathbb{R}$$ the Hessian is of the form $$H_f(\overline{x}) = \begin{bmatrix}\frac{\partial^2 f}{\partial x^2}&\frac{\partial^2 f}{\partial x\partial y} \\ \frac{\partial^2 f}{\partial y\partial x}& \frac{\partial^2 f}{\partial y^2}\end{bmatrix}$$ Most of the time, by Schwarz's theorem, the off-diagonal elements are the same. Now you can see how this term makes sense. The notation $\nabla^2$ used by the book most often denotes the Laplace operator, which is a totally different thing.
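As a concrete numerical illustration of the quadratic form $\frac12 d^T H_f(\overline x)\,d$: for $f(x,y)=x^3+xy^2$ at $\overline x=(1,2)$ the gradient and Hessian are easy to write down, and adding the Hessian term shrinks the Taylor error from first order to second order. A sketch (the point, the function, and all names are chosen by me):

```python
def f(x, y):
    return x**3 + x * y**2

def grad(x, y):
    return (3 * x**2 + y**2, 2 * x * y)

def hess(x, y):
    # [[f_xx, f_xy], [f_yx, f_yy]]
    return ((6 * x, 2 * y), (2 * y, 2 * x))

px, py = 1.0, 2.0
d1, d2 = 0.01, 0.02

g = grad(px, py)
H = hess(px, py)
quad = 0.5 * (d1 * (H[0][0] * d1 + H[0][1] * d2) + d2 * (H[1][0] * d1 + H[1][1] * d2))

exact   = f(px + d1, py + d2)
taylor1 = f(px, py) + g[0] * d1 + g[1] * d2   # first-order (gradient) approximation
taylor2 = taylor1 + quad                      # add the Hessian quadratic form

assert abs(exact - taylor1) > 1e-3   # first-order error
assert abs(exact - taylor2) < 2e-5   # second-order error is much smaller
```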
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Truthtelling/Lying question (If B is lying then I'm lying) I'm stuck at a question about truthtelling/lying. Although similar problems have been posted I couldn't find a case similar to mine. So a person is either a truthteller or a liar. Let's say we have two persons A and B. If I ask A: Are any of you telling the truth and A responds "If B is lying then I'm lying" How do I know who if they, individually, are lying or telling the truth. The way I see it there are two solutions, but I'm guessing one of my answers is wrong. I think that either they are both telling the truth or that A is lying and B is telling the truth. I know that this can be set up with truth tables, but I'd like an explanation in words as to why and what I'm wrong about in my answer.
The last one is not true. If $B$ tells the truth, then the implication $A$ is stating is a true implication. Hence, $A$ told the truth. Do you understand?
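An exhaustive truth-table check confirms this: $A$'s statement is the implication $\lnot B \Rightarrow \lnot A$, and a speaker is consistent exactly when their truthfulness matches the truth value of what they said. A sketch:

```python
from itertools import product

def consistent(a_truthful, b_truthful):
    # A's statement: "if B is lying then I'm lying", i.e. (not B) -> (not A)
    statement = b_truthful or (not a_truthful)
    return a_truthful == statement   # truthtellers say true things, liars say false ones

solutions = [(a, b) for a, b in product([True, False], repeat=2) if consistent(a, b)]
assert solutions == [(True, True)]   # the only consistent scenario: both tell the truth
```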
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Walter Rudin Real and Complex Analysis Chapter 2 Walter Rudin Real and Complex Analysis Chapter 2 2.14 Riesz representation theorem the last step. Why did he put the absolute value of $a$ ? Is not it sufficient to assume $f$ is positive? Proof. Clearly, it is enough to prove this for real $f$. Also, it is enough to prove the inequality \begin{equation} \tag{16} \Lambda f \leq \int_X f \,\mathrm{d}\mu \end{equation} for every real $f \in C_c(X)$. For once $(16)$ is established, the linearity of $\Lambda$ shows that $$ -\Lambda f = \Lambda(-f) \leq \int_X (-f) \,\mathrm{d}\mu = - \int_X f \,\mathrm{d}\mu, $$ which, together with $(16)$ shows that equality holds in $(16)$. Let $K$ be the support of the real $f \in C_c(X)$, let $[a,b]$ be an interval which contains the range of $f$ (note the Corollary to Theorem 2.10), choose $\epsilon > 0$, and choose $y_i$, for $i = 0, 1, \dotsc, n$, so that $y_i - y_{i-1} < \epsilon$ and \begin{equation} \tag{17} y_0 < a < y_1 < \dotsb < y_n = b. \end{equation} Put \begin{equation} \tag{18} E_i = \{ x : y_{i-1} < f(x) \leq y_i \} \cap K \qquad (i = 1, \dotsc, n) \end{equation} Since $f$ is continuous, $f$ is Borel measurable, and the sets $E_i$ are therefore disjoint Borel sets whose union is $K$. There are open sets $V_i \supset E_i$ such that \begin{equation} \tag{19} \mu(V_i) < \mu(E_i) + \frac{\epsilon}{n} \qquad (i = 1, \dotsc, n) \end{equation} and such that $f(x) < y_i + \epsilon$ for all $x \in V_i$. By Theorem 2.13, there are functions $h_i \prec V_i$ such that $\sum h_i = 1$ on $K$. Hence $f = \sum h_i f$, and Step II shows that $$ \mu(K) \leq \Lambda\left( \sum h_i \right) = \sum \Lambda h_i. 
$$ Since $h_i f \leq (y_i + \epsilon) h_i$, and since $y_i - \epsilon < f(x)$ on $E_i$, we have \begin{align*} \Lambda f &= \sum_{i=1}^n \Lambda(h_i f) \leq \sum_{i=1}^n (y_i + \epsilon) \Lambda h_i \\ &= \sum_{i=1}^n (|a| + y_i + \epsilon) \Lambda h_i - |a| \sum_{i=1}^n \Lambda h_i \\ &\leq \sum_{i=1}^n (|a| + y_i + \epsilon)[ \mu(E_i) + \epsilon/n ] - |a| \mu(K) \\ &= \sum_{i=1}^n (y _i - \epsilon) \mu(E_i) + 2 \epsilon \mu(K) + \frac{\epsilon}{n} \sum_{i=1}^n (|a| + y_i + \epsilon) \\ &\leq \int_X f \,\mathrm{d}\mu + \epsilon[ 2\mu(K) + |a| + b + \epsilon ]. \end{align*} (Original scanned image here.)
If you use $a$ instead of $|a|$ you cannot go from the second to the third line, since not knowing the sign of $a$ precludes you from applying the inequality $\sum_i\Lambda h_i\geq\mu(K)$. And, if you do the proof just for $f\geq0$, you only get the inequality $\Lambda f\leq\int_Xf\,d\mu$, and not equality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of sequence where every subsequence of specific type converges Let $(x_n)$ be a sequence in $\mathbb{R}$. Suppose that every infinite subsequence of $(x_n)$ which omits infinitely many terms of $(x_n)$ converges. Does this imply that $(x_n)$ converges? I have failed so far to come up with a counterexample. But I can't see why this is true. It seems that there could be such a sequence where two of the subsequences converge to different points. Any hints are welcomed.
Suppose the sequence $x_n$ has this property, but does not converge. The subsequence $x_{2n}$ (omitting all the odd-numbered terms) must converge, let's say to $L$. But since $x_n$ does not converge to $L$, there is some $\epsilon > 0$ such that infinitely many $x_n$ have $|x_n - L| > \epsilon$. Those $x_n$ form a subsequence, say $x_{m_j}$, which by assumption must converge, say to $M$, where $M \ne L$. Now take a subsequence which alternates between members of the first subsequence and the second, but leaves out infinitely many terms (e.g. the sequence is $x_{r_j}$ where $r_1 = 2$ and for even $j$, $r_j$ is the first $m_k$ greater than $r_{j-1}$, while for odd $j$, $r_j$ is the second even number greater than $r_{j-1}$). Again this subsequence leaves out infinitely many terms, but it's clear that it can't converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can anyone prove for me $\ln(1+x) = \large{G}_{2,2}^{1,2}\left( x \left| \begin{array}{cc} 1,1 \\ 1,0 \end{array} \right. \right).$ I am a PhD student in Wireless Communications and recently I found a paper about the use of "The generalized upper incomplete Fox's H function". I think that in order to understand this function, I first need to understand Meijer's G-function. The thing is, I do not understand how to use this formula for Meijer's G-function. Could anyone show me how to prove this formula? $$\ln(1+x) = \large{G}_{2,2}^{1,2}\left( x \left| \begin{array}{cc} 1,1 \\ 1,0 \end{array} \right. \right).$$ I started but I could not continue... Let $a_1=a_2=1$, $b_1=1,b_2=0$, then \begin{align} \large{G}_{2,2}^{1,2}\left( x \left| \begin{array}{cc} 1,1 \\ 1,0 \end{array} \right. \right)=\frac{1}{2\pi i}\int_{}^{}\frac {\Gamma(1-s)\Gamma(s)^2} {\Gamma(1+s)\prod_{3}^{2}(a_j-s)}x^s ds \end{align} How is this possible: $$\prod_{3}^{2}(a_j-s)?$$ Also I apply: \begin{align}e^{-x} &= \large{G}_{0,1}^{1,0}\left( -x \left| \begin{array}{cc} - \\ 0 \end{array} \right. \right)\\ &=\frac{1}{2\pi i}\int_{}^{}\frac {\Gamma(-s)} {\prod_{2}^{1}(1-s)}(-x)^s ds \end{align} How are $$\Gamma(-s)$$ and $$\prod_{2}^{1}(1-s)$$ possible? Thanks.
We have to use certain conventions for these cases. $$ \prod_{j=3}^2 w_j = 1,\quad\text{known as an "empty product"} $$ and similarly $$ \prod_{j=2}^1 w_j = 1. $$ The $\Gamma$ function is defined by an integral for positive arguments, but may be extended to other arguments. The functional equation $$ \Gamma(z+1) = z\Gamma(z) $$ is used for that. When $-1<z<0$, we have $0<z+1<1$, where $\Gamma$ is known. So for $0<s<1$, apply this with $z=-s$: $$ \Gamma((-s)+1) = (-s)\Gamma(-s), \\ \Gamma(-s) = -\frac{\Gamma(1-s)}{s} $$ I note that you have not specified the integration path here; it is a path in the complex plane, and you will need $\Gamma(-s)$ on that path. Note: It may be an interesting exercise to evaluate the function this way. But I think in practice we would use the differential equation to do this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fermat's Last Theorem Resources Are there any resources which describe FLT in a very tangible way which will motivate students to be interested in this subject?
About ten years ago I wrote a monograph directed at students with a high school competency. It goes through many proofs of intermediate results that preceded the Wiles proof, but no higher analysis. It focuses on the mathematics, and not the history. I don't know whether it is still in print. See: https://www.amazon.com/Conversations-Fermat-Keith-Backman/dp/158909445X
{ "language": "en", "url": "https://math.stackexchange.com/questions/2893969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A holomorphic function with infinitely many zeros in the unit disc Prove that if $f$ is holomorphic in the unit disc, bounded and not identically zero, and $z_1, z_2, z_3, \dotsc, z_n, \dotsc$ are its zeros ($\vert z_k \vert$ $\lt1$ ),then $$\sum_{k=1}^\infty (1-\vert z_k \vert) \lt \infty$$ [Hint:Use Jensen's formula.] Since Jensen's formula can be used when $f$ vanishes nowhere on the circle $C_R$. I notice that there exist an increasing sequence $r_n$ for $\lim_{n\to \infty} r_n = 1$, and $f$ vanishes nowhere on each $C_{r_n}$. Suppose $f(0) \neq 0$, then use Jensen's formula on each circle $C_r$ and get $$ \sum_{k=1}^{n_r} \log \vert z_k \vert = \log \vert f(0) \vert + n_r \cdot \log r - \frac{1}{2\pi} \int_{0}^{2\pi} \log \vert f(re^{i\theta}) \vert \,\mathrm{d}\theta, $$ where $n_r$ denotes the numbers of zeros inside the disc $C_r$. But I don't know how to estimate the limit of $n_r \log r$ as $r$ tends to $1$.
Of course this is a theorem instead of an exercise in many complex books, so we may as well add MSE to the list of places one can look it up... Don't pull out the $r$ from $\log(|z_k|/r)$. Instead look at it this way: Define $$\log^+(t)=\begin{cases}\log(t),&(t>1), \\0,&(0<t\le1).\end{cases}$$ Note that $$\sum_k\log^+(r/|z_k|)=-\sum_{|z_k|<r}\log(|z_k|/r).$$ So (assuming wlog that $f(0)\ne0$) Jensen implies that $$\lim_{r\to1}\sum_k\log^+(r/|z_k|)<\infty.$$ Applying the Monotone Convergence Theorem to that sum shows that $$\sum_k\log(1/|z_k|)<\infty.$$ Or a more elementary version of the same argument: Say $|f|\le c$. Fix $N$. If $r$ is close enough to $1$ that $|z_k|<r$ for $k=1,\dots, N$ then Jensen shows that $$\sum_{k=1}^N\log(|z_k|/r) \ge\sum_{|z_k|<r}\log(|z_k|/r)\ge\log|f(0)|-\log(c).$$Since $N$ is fixed we can let $r\to1$: $$\sum_{k=1}^N\log|z_k|\ge\log|f(0)|-\log(c).$$So $\sum_{k=1}^\infty\log|z_k|>-\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Discontinuity - Unsure If Piecewise Equation(s) Have Them I have a question on whether the functions following have a discontinuity, and if not, what are the points where two functions meet. First, the piece wise equation : \begin{align*} f(x)= \begin{cases} \sin(x), &\text{ if } 0 \leq x \leq 2\pi;\\ 0, &\text{ if } x<0 \text{ or } x>2\pi. \end{cases} \end{align*} What are the points called at $0$ and $2\pi$ ?. Are they a discontinuity, as I expect that they are not since there is no jump. The second piece wise equation is : \begin{align*} f(x)= \begin{cases} \sin(x), &\text{ if }0 \leq x \leq \pi/2;\\ 0, &\text{ if } x<0;\\ 1, &\text{ if } x>\pi/2. \end{cases} \end{align*} What is the point called at $\pi/2$? Is it a discontinuity, and if not, how would they be described ? The reason for this question is that I am examining DSP filters – and I need to ensure that I use the correct terminology when documenting results, and also, to ensure that I understand the theory. There have been statements that the first equation does have discontinuities, but I examined the definition, and I am unsure. Thanks and regards, Code_X.
I don't know if the points where the components of piecewise functions join have a name, but checking whether they are discontinuities, or non-differentiable points, is not difficult. Let's begin with discontinuities. In order for the point $p$ not to be a discontinuity, $$\lim_{x\to p^+} f(x) = \lim_{x\to p^-} f(x) = f(p)$$ In the case of $p=0$, for your first piece-wise function, since $\sin(x)$ is continuous, $$f(0) = \lim_{x\to0^+} f(x)$$ So all that needs to be shown is that $\lim_{x\to0^-} f(x)=f(0)$, which is obvious since $$\forall x<0, f(x)=0$$ For the second piecewise function, the exact same logic applies since the functions at this point are the same. You can apply similar logic to the other joining point of the piece-wise function, but of course, the cases of both functions are a tad bit different there. See how far you can get! On the other hand, if we find a point to be continuous (e.g., $p=0$), we can also check whether we can find the derivative at the point, or if the point is a non-differentiable point on the function. We do that with the following formula: $$f'(p)=\lim_{h\to 0}\frac{f(p+h)-f(p)}{h}$$ If this exists, then it is differentiable. This exists iff $$\lim_{h\to 0^+}\frac{f(p+h)-f(p)}{h}=\lim_{h\to 0^-}\frac{f(p+h)-f(p)}{h}$$ In the case of $p=0$, when $h>0$, $f(p+h)=\sin(h)$ and $f(p)=\sin(p)=0$, so $$\lim_{h\to 0^+}\frac{f(p+h)-f(p)}{h} = \lim_{h\to 0^+}\frac{\sin(h)}h = 1$$ However, when $h<0$, $f(p+h)= 0$ and $f(p) = 0$, so $$\lim_{h\to 0^-}\frac{f(p+h)-f(p)}{h} = \frac{0-0}h = 0$$ Since these one-sided limits are not equal, the function is not differentiable at $p=0$. Again, you can use this logic on the other point where the piece-wise functions join.
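The one-sided difference quotients in the last step can also be checked numerically (a sketch for the first piecewise function at $p=0$):

```python
import math

def f(x):
    # first piecewise function from the question: sin(x) on [0, 2*pi], 0 elsewhere
    if 0 <= x <= 2 * math.pi:
        return math.sin(x)
    return 0.0

def one_sided(p, h):
    # difference quotient (f(p+h) - f(p)) / h; the sign of h picks the side
    return (f(p + h) - f(p)) / h

right = one_sided(0.0, 1e-8)   # approx 1: slope of sin at 0
left = one_sided(0.0, -1e-8)   # 0: the function is identically 0 on the left
print(right, left)
```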
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
G-principal bundle and homotopy retract Suppose that $f:X\rightarrow Y$ a continuous map between (connected) CW-complexes such that there exists a continuous map $g:Y\rightarrow X$ with the property that $g\circ f$ is homotopy equivalent to $id_{X}$ i.e., $X$ is homotopy retract of $Y.$ Let $P$ be a $G$-principal bundle over $Y$ ($G$ is a fixed connected topological group). We define the pullback $G$-principal bundle over $X$ given by $f^{\ast}P$. Is it true that $P$ is a homotopy retract of $f^{\ast}P$ ? Or more generally, can we prove that $\pi_{\ast}(f^{\ast}(P))\rightarrow \pi_{\ast}(P)$ is injective ?
I'm not sure if you have made a mistake with your question, but if you mean 'is $f^*P$ a retract of $P$' then the answer is false. For instance take $X=S^2$, $Y=S^2\times S^2$ and let $P=S^2\times S^3$ be the product of the trivial bundle and the Hopf bundle. Let $f=in_1:S^2\rightarrow S^2\times S^2$, $x\mapsto (x,\ast)$, and $g=pr_1:S^2\times S^2\rightarrow S^2$, $(x,y)\mapsto x$. Then $f^*P\cong S^2\times S^1$ is the trivial bundle. Clearly $$\pi_1(f^*P)=\pi_1(S^2\times S^1)=\mathbb{Z}\rightarrow \pi_1(S^2\times S^3)=0$$ cannot be injective. On the other hand, if you truly did mean, 'is $P$ a homotopy retract of $f^*P$', then the answer is still no. For example $$\pi_3(P)=\pi_3(S^2\times S^3)\cong \mathbb{Z}\oplus\mathbb{Z}\rightarrow \pi_3(S^2\times S^1)\cong\mathbb{Z}$$ is clearly not injective, forbidding such a retraction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A spherical snowball's radius is decreasing by 4% per second. Find the percentage rate at which its volume is decreasing. For this question I have to find the rate at which the volume of the sphere decreases, $\frac{dV}{dt}$. I already have $\frac{dr}{dt}$, the rate at which the radius decreases, which is $-\frac{4}{100}$. In order to be able to find $\frac{dV}{dt}$ I need to use the chain rule, seeing as I already know $\frac{dr}{dt}$. Thus we see that $$\frac{dV}{dt}=?*\frac{dr}{dt}$$ and deduce that we need to find $\frac{dV}{dr}$. Using the equation for the volume of a sphere ($\frac{4}{3}\pi r^3$) we can relate the two variables, $V$ and $r$. And so we see that we need to find $$=\frac{d}{dr}(\frac{4}{3}\pi r^3)$$ $$\frac{dV}{dr}=\frac{3*4\pi r^2}{3}=4\pi r^2$$ Thus we have $$\frac{dV}{dt}=-\frac{4}{100}4\pi r^2=-\frac{16\pi r^2}{100}$$ However, we need $\frac{dV}{dt}$ to be in terms of $V$. Any ideas?
After one second, $$\frac{V'-V}{V\cdot1}=\frac{(0.96r)^3-r^3}{r^3\cdot1}=-0.115264.$$ After one millisecond $$\frac{V'-V}{V\cdot0.001}=\frac{(0.99996r)^3-r^3}{r^3\cdot0.001}=-0.11999520\cdots.$$ After infinitesimal time, $$\frac{V'-V}{V\cdot \theta}=\frac{((1-0.04\theta)r)^3-r^3}{r^3\theta}=-0.12+0.0048\theta-0.000064\theta^2\to-0.12.$$
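The same computation is easy to replicate numerically; shrinking the time step $\theta$ drives the relative volume change per unit time toward $-0.12$, i.e. the volume shrinks at about $12\%$ per second, three times the $4\%$ radius rate:

```python
def rel_volume_rate(theta):
    # relative change in volume over a time step theta, per unit time:
    # ((1 - 0.04*theta)^3 - 1) / theta, since V scales with r^3
    return ((1 - 0.04 * theta) ** 3 - 1) / theta

for theta in (1.0, 1e-3, 1e-6):
    print(theta, rel_volume_rate(theta))
```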
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $X$ is exponentially distributed with parameter $1$, prove that $\exp(-X)$ is uniformly distributed on $[0,1]$. This is what I have so far: The PDF of $X$ is $$f_X(x)=e^{-x}$$ when $x\geq0$ and $0$ otherwise. The CDF of $X$ is $$P(X\leq x)=F_X(x)=1-e^{-x}$$ when $x\geq 0$ and $0$ otherwise. I know that I want to end up with the pdf of $Y=e^{-X}$ being $$f_Y=1$$ on $[0,1]$ and $0$ otherwise, hence a uniform distribution. So, \begin{align}F_Y(y)&=P(Y\leq y)\\ &=P(e^{-X}\leq y)\\ &=P(-\ln(y)\leq X) \end{align} I don't know how to proceed from here. Also, I know that $X=-\ln(Y)$, but I am not sure how to use it/ if I need to.
You're almost there. You only need to observe that $$\Pr (-\ln y \le X) =1 - \Pr(X < -\ln y) = 1 - \left( 1 - e^{\ln y}\right) = y.$$ Hence, $F_Y(y) = y$, and $f_Y(y) = 1$.
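If a numerical illustration helps, here is a standard-library simulation (a sketch): the empirical CDF of $Y=e^{-X}$ should track the line $F_Y(y)=y$.

```python
import math
import random

random.seed(1)
n = 100_000
# sample X ~ Exp(1), then transform to Y = exp(-X)
ys = [math.exp(-random.expovariate(1.0)) for _ in range(n)]

def ecdf(y):
    # empirical CDF of Y at the point y
    return sum(1 for v in ys if v <= y) / n

for y in (0.25, 0.5, 0.75):
    print(y, ecdf(y))
```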
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solve for a variable with a variable exponent. I'm working on a video-game and have come up with the following equation: $$B^S={B-1\over R} +1$$ I need to solve for B in terms of R and S (which will be supplied at runtime) but for the life of me I can't seem to simplify it. I even tried an online variable solver and it just gives up. Conceptually grasping what this is supposed to contribute to convinces me there has to be a solution, I'm hoping somebody has the knowledge to arrive at it. For what it's worth, the intended values for R are in the 0 < R < 1 range, and S is intended to be in the 1 <= S <= 2 range. Much gratitude! EDIT: In the end I found a different approach for my application which yielded a general solution. Thank you all who contributed, it did help!
Except for very few specific cases, you cannot get explicit solutions and you need numerical methods. Consider that you look for the zero(s) of function $$f(B)=B^S-\frac{B-1}{R}-1$$ $$f'(B)=S B^{S-1}-\frac{1}{R}$$ $$f''(B)=(S-1) S B^{S-2}$$ The first derivative cancels at a point $$B_*=\left(\frac{1}{R S}\right)^{\frac{1}{S-1}}$$ and the second derivative is always positive if $ 1 < S <2$ (I do not consider the cases $S=1$ or $S=2$ for which the problem is simple). Since we can bound the function, the solution is always between $\left(\frac 1R -1\right)$ and $1$. You can also notice that $B=1$ is a trivial solution for any $R,S$ in the provided ranges. Since $f(0)=\frac 1R >0$, if $f(B_*) <0$, then the solution is $> B_*$. So, what I should do is to compute $f(k B_*)$ for $k=2,3,4,\cdots$ until we find the smallest $k$ such that $f(k B_*)>0$. At this point, let $B_0=k_{min}B_*$ and start Newton method. Let us take one example using $S=1.234$ and $R=0.567$; this gives $B_*\approx 4.6$. Using the simplistic procedure, we find $k_{min}=2$; so start using $B_0=9.2$ and get the following iterates $$\left( \begin{array}{cc} n & B_n \\ 0 & 9.201489213 \\ 1 & 9.194700041 \\ 2 & 9.194696121 \end{array} \right)$$ Let us repeat using $S=1.357$ and $R=0.246$; this gives $B_*\approx 21.6$. Using the simplistic procedure, we find $k_{min}=3$; so start using $B_0=64.8$ and get the following iterates $$\left( \begin{array}{cc} n & B_n \\ 0 & 64.83555879 \\ 1 & 51.00444906 \\ 2 & 48.72198422 \\ 3 & 48.64769013 \\ 4 & 48.64760965 \end{array} \right)$$ If you want something more sophisticated, we could approximate $B_0$ expanding the function as a truncated Taylor series around $B_*$. This would give $$B_0=B_*+ \sqrt{-2\frac{f(B_*) }{f''(B_*) }}$$ For the worked examples, this would give as starting values $8.76$ and $46.05$.
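The recipe above is easy to turn into code. Here is a sketch (the function name and structure are my own choices), which reproduces both worked examples:

```python
def solve_B(S, R, tol=1e-12, max_iter=100):
    """Newton's method for B^S = (B-1)/R + 1 on the branch B > B*."""
    f = lambda B: B ** S - (B - 1) / R - 1
    fp = lambda B: S * B ** (S - 1) - 1 / R

    B_star = (1 / (R * S)) ** (1 / (S - 1))  # where f'(B) = 0

    # smallest integer k >= 2 with f(k * B_star) > 0, as in the text
    k = 2
    while f(k * B_star) <= 0:
        k += 1

    B = k * B_star
    for _ in range(max_iter):
        step = f(B) / fp(B)
        B -= step
        if abs(step) < tol:
            break
    return B

print(solve_B(1.234, 0.567))  # approximately 9.1947, as in the first table
print(solve_B(1.357, 0.246))  # approximately 48.6476, as in the second table
```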
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
permutation with repeated identical elements First of all, I do know the solution to the problem below; I'm asking about a different way! The problem is like this: consider the word $AABBB$; how many 3-letter words can be written using the given word? Clearly this is a permutation problem. My problem is: can we find the answer using only the $nPr$ formula? This is the way I'll do it (if you have a different way, please answer, thanks). (In my answer I used $nCr$; I'm asking for a way to do this using $nPr$.) When all 3 letters are equal, permutations: $1C1\times\frac{3!}{3!}=1$. When only 2 letters are equal, permutations: $2C2\times\frac{3!}{2!}=6$. Number of words: $7$.
In a case like this, it may be easier to start from all words that can be made using the letters A and B, then subtract those which don't work. So there are $2^3=8$ three-letter words using only As and Bs (since each letter independently has $2$ choices), and the only one which doesn't fit into AABBB is AAA, leaving you with $7$.
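The count is small enough to confirm by brute force:

```python
from itertools import product

# every 3-letter word over {A, B}; from AABBB we may use at most
# two A's and at most three B's
words = [w for w in product("AB", repeat=3)
         if w.count("A") <= 2 and w.count("B") <= 3]
print(len(words))  # 7: everything except AAA
```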
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is $\sup_{n \in \mathbb{N}}\Big|\frac{n-1}{n}z_n\Big|\le\sup_{n \in \mathbb{N}}\Big|\frac{n-1}{n}\Big|\sup_{n \in \mathbb{N}}\Big|z_n\Big|$ $\sup_{n \in \mathbb{N}}\Big|\frac{n-1}{n}z_n\Big|\le\sup_{n \in \mathbb{N}}\Big|\frac{n-1}{n}\Big|\sup_{n \in \mathbb{N}}\Big|z_n\Big|$ when $z_n$ is a bounded sequence $\in \mathbb{C}$.
Let $a_n:=\frac{n-1}{n}$ and $c_n:=a_n|z_n|$. We want to estimate $\sup c_n$. Now, since $a_n\ge 0$, $$ c_n\le (\sup_m a_m)|z_n|\le (\sup_m a_m)(\sup_k |z_k|), $$ so taking the $\sup$ of both sides yields $$ \sup_n c_n \le (\sup_m a_m)(\sup_k |z_k|),$$ as we wanted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the "greater than" or "less than" symbol referred to as operators? My understanding of operators is it works on elements of a set and produces another element of the same set. I don't see how or why the "$>,≥,<,≤$" would be referred to as "operators" on some pages as it doesn't map to another element. (I think I've also seen it on Wikipedia as well) I've always thought of it as a "relation" though. Can anyone shed some light?
In mathematics, you generally won't see inequality signs referred to as "operators" at all. In programming languages, "operator" means generally any syntactic construct that can be used to build expressions from other simpler expressions. In most programming languages, inequality signs count as operators, because they are used to build expressions with Boolean values. Note that most programming languages do not follow the tradition in mathematical logic of distinguishing syntactically between terms (which denote mathematical objects) and formulas (which have truth values). They're all just expressions, and whatever primitives they're built with are operators.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2894975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to solve simultaneous linear equations with only two possible values per variable? I am trying to solve a system of simultaneous linear equations whose unknowns have only two possible values. How do I approach this, or what area of mathematics do I employ in order to arrive at the exact solution? e.g. \begin{align} a+b+c+d+f+g+h+i &= 12, \tag{1} \\ b+c+d &= 6, \tag{2} \\ f+g &= 2, \tag{3} \end{align} where possible values for $a,b,c,d,f,g,h,i$ can only be $2$ or $0$. NB The example above is just an illustration of what I am trying to solve, which is a nested Venn diagram problem with about $10$ equations of $14$ unknowns ($a,b,c,\dotsc,n$) whose values can only be $4$ or $6$.
You have linear system $Ax = b$ and want to know if there is a solution such that all coordinates of $x$ are either $4$ or $6$. Instead of that, consider system $Ay = b/2$ where all coordinates of $y$ have to be either $2$ or $3$. If there exists a solution to that system, there must be a solution to the same system modulo $2$. Thus, reduce the system modulo $2$ and solve it as a linear system over $\mathbb Z/2\mathbb Z$ (linear algebra applies!) This will give you a finite number of possible solutions to the original system - it is easy to reconstruct them since $2$ is even and $3$ is odd. This will be very efficient in your case, since $10$ equations with $14$ unknowns means the solution space is $4$-dimensional. Over $\mathbb Z/2\mathbb Z$ it means $16$ solutions to check.
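For the toy system in the question, plain brute force over $\{0,2\}^8$ already confirms the structure of the solution set (a sketch; for the real $14$-unknown problem, the mod-$2$ reduction described above is the approach that scales):

```python
from itertools import product

# variables in the order (a, b, c, d, f, g, h, i), each 0 or 2;
# constraints: total = 12, b+c+d = 6, f+g = 2
solutions = [v for v in product((0, 2), repeat=8)
             if sum(v) == 12 and sum(v[1:4]) == 6 and v[4] + v[5] == 2]
print(len(solutions))  # 6: b=c=d=2 forced, one of f,g is 2, two of a,h,i are 2
```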
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
A statement following from the law of excluded middle Does the statement ~~$A\equiv A$ follow from the law of excluded middle? According to my book which is not on logic it does, but I do not know how to use the law of excluded middle for this simple tautology.
This question isn't clear. Is the question "Can (~~A≡A) follow from the law of the excluded middle" the answer is 'yes'. That follows immediately from every single tautology implying every other tautology. Or equivalently, "all tautologies imply every other tautology." However, if the question is "does the law of the excluded middle necessarily imply (~~A≡A)", where "imply" gets understood to mean that the law of the excluded middle will appear in any proof of (~~A≡A) where (~~A≡A) is not an axiom, then the answer is 'no'. As a simple example, (~~A -> A) and (A -> ~~A) along with modus ponens and ((A -> B) -> ((B -> A) -> (A ≡ B))) will imply that (~~A ≡ A). There is no law of the excluded middle there, and (~~A≡A) is not an axiom.
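On the semantic side, that $\lnot\lnot A\equiv A$ is a classical tautology can be confirmed by an exhaustive truth-table check, which is a separate matter from which axioms a particular syntactic proof uses:

```python
# exhaustive truth-table check that ~~A <-> A holds classically
def double_negation_equiv(a):
    return (not (not a)) == a

results = {a: double_negation_equiv(a) for a in (False, True)}
print(results)  # both rows of the truth table come out True
```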
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Differentiate $\frac{x^3}{{(x-1)}^2}$ Find $\frac{d}{dx}\frac{x^3}{{(x-1)}^2}$ I start by finding the derivative of the denominator, since I have to use the chain rule. Thus, I make $u=x-1$ and $g=u^{-2}$. I find that $u'=1$ and $g'=-2u^{-3}$. I then multiply the two together and substitute $u$ in to get: $$\frac{d}{dx}(x-1)^{2}=2(x-1)$$ After having found the derivative of the denominator I find the derivative of the numerator, which is $3x^2$. With the two derivatives found I apply the quotient rule, which states that $$\frac{d}{dx}(\frac{u(x)}{v(x)})=\frac{v'u-vu'}{v^2}$$ and substitute in the numbers $$\frac{d}{dx}\frac{x^3}{(x-1)^2}=\frac{3x^2(x-1)^2-2x^3(x-1)}{(x-1)^4}$$ Can I simplify this any further?Is the derivation correct?
It's better to use the quotient rule: $$\frac{d(\frac fg)}{dx}=\frac{f'g-g'f}{g^2}$$ $$f=x^3\to f'=3x^2$$ $$g=(x-1)^2\to g'=2(x-1)$$ $$\to\frac{d(\frac {x^3}{(x-1)^2})}{dx}=\frac{3x^2(x-1)^2-2x^3(x-1)}{(x-1)^4}=\frac{x^2(x-3)}{(x-1)^3}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Differentiate $\tan^3(x^2)$ Differentiate $\tan^3(x^2)$ I first applied the chain rule and made $u=x^2$ and $g=\tan^3u$. I then calculated the derivative of $u$, which is $$u'=2x$$ and the derivative of $g$, which is $$g'=3\tan^2u$$ I then applied the chain rule and multiplied them together, which gave me $$f'(x)=2x3\tan^2(x^2)$$ Is this correct? If not, any hints as to how to get the correct answer?
$$u'=2x$$ $$g'=3\tan^2u \cdot \sec^2u$$ $$f'(x)=2x \cdot 3\tan^2(x^2)\sec^2(x^2) = 6x\tan^2(x^2)\sec^2(x^2)$$
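A quick numerical cross-check of the result (a sketch comparing a central difference quotient against the closed form, with $\sec u = 1/\cos u$):

```python
import math

def f(x):
    return math.tan(x * x) ** 3

def f_prime(x):
    # 6x * tan^2(x^2) * sec^2(x^2), writing sec u = 1 / cos u
    t = math.tan(x * x)
    return 6 * x * t * t / math.cos(x * x) ** 2

for x in (0.3, 0.5, 0.9):
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    print(x, numeric, f_prime(x))
```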
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proof by Deduction $\sqrt{xy} ≤ \frac{x+y}{2}$ I want to ask a question about proof of deduction. I sat my Pure Mathematics Exam more than $3$ years ago but decided to return to the subject for a refresher. Proofs were not a requirement for my course but as my younger siblings are studying it, I decided to give it a go. This was the question: Prove that for all positive values of x and y $$\sqrt{xy} ≤ \frac{x+y}{2}$$ Now, I did some research on proofs of deduction and it involved a start point. My instinct was to work backwards from this inequality to something more meaningful towards this "start point" and work forwards. This is my working thus far: $$xy ≤ \frac{(x+y)^2}{4}$$ $$4xy ≤ (x+y)^2$$ $$4xy ≤ x^2 + 2xy + y^2$$ Unfortunately, I can't seem to see where I can go further to start this proof. Is this the correct approach? If so, is there a further step that I cannot see?
$\sqrt{xy} \le \frac{x+y}{2} \iff 2\sqrt{xy} \le x+y \iff x-2\sqrt{xy}+y \geq 0 \iff (\sqrt{x}-\sqrt{y})^2 \geq 0$, which is true, since a square is never negative. (Squaring in the first step is valid because both sides are positive for $x,y>0$.)
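For what it's worth, the algebra can be spot-checked numerically; the gap $\frac{x+y}{2}-\sqrt{xy}$ equals $\frac{(\sqrt{x}-\sqrt{y})^2}{2}$, so it is never negative (up to floating-point rounding):

```python
import math
import random

def amgm_gap(x, y):
    # (x+y)/2 - sqrt(xy); algebraically this is (sqrt(x)-sqrt(y))^2 / 2
    return (x + y) / 2 - math.sqrt(x * y)

random.seed(0)
samples = [(random.uniform(1e-6, 1e6), random.uniform(1e-6, 1e6))
           for _ in range(10_000)]
print(min(amgm_gap(x, y) for x, y in samples))  # essentially never below 0
```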
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Inequality for a function 4 Let $u:[0,+\infty)\to\mathbb R^+$ be a bounded positive function such that $$u(t)\leq \int_0^t\left(-\frac{1}{\sqrt N}u(s)+\frac{1}{N}\right)ds +\frac{1}{N^{\frac{1}{4}}}$$ for every $t\geq 0$, where $N\in\mathbb N$. Is it correct that $$u(t)\leq\frac{1}{\sqrt N}+\frac{1}{N^{\frac{1}{4}}}$$ for every $t\geq 0$? How could I prove that?
The inequality does not hold. As a counterexample, consider $$ u(t) = \begin{cases}1/2, &0\leq t < 8 \\ 3, &8\leq t<9 \\ 1/2, &9\leq t<\infty\end{cases} $$ Then one can show that $u$ satisfies the first inequality for $N = 1$: $$ \int_0^t\left(-\frac{1}{\sqrt N}u(s)+\frac{1}{N}\right)ds +\frac{1}{N^{\frac{1}{4}}} = 1 + \int_0^t(1-u(s))\,ds $$ and $$1 + \int_0^t(1-u(s))\,ds = \begin{cases}1 + \frac 1 2t, &0\leq t < 8 \\ 5 - 2(t-8), &8\leq t<9 \\ 3 + \frac{1}{2}(t-9), &9\leq t<\infty\end{cases}, $$ so $u(t) \leq 1 + \int_0^t(1-u(s))\, ds$. But $$u(8) = 3 > 2 = \frac 1 {\sqrt N} + \frac 1 {N^{1/4}}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Homomorphism: How do we get the equality? Let $Z = (\mathbb{Z},+)$ the additive group of integers and $G = (M,\star )$ an arbitrary group. I want to show that for all $a \in G$ the map $\phi_a : Z \rightarrow G$ defined by $\phi_a(k) = a^k$ is an homomorphism from $Z$ to $G$. Let $m,n\in Z$. Then we have that $\phi_a(m+n)=a^{m+n}$ and $\phi_a(m)\star \phi_a(n)=a^m\star a^n$. Do we consider $\star$ as a multiplication and so $a^{m+n}=a^m\star a^n$, or how do we get the equality?
Yes, we treat $\star$ as the multiplication between elements of the group. Since $a^m$ and $a^n$ are elements of your group, the identity $a^m \star a^n = a^{m+n}$ holds: it follows from the definition of integer powers of $a$ together with the associativity of the group operation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How does $f(x)= x \sin(\frac{\pi}{x})$ behave? I think this function is increasing for $x>1$ but wanted to find the reason. So I thought about taking the derivative: $f(x)= x \sin(\frac{\pi}{x})$ Aplying the chain an the product rule, we get: $f'(x)= \sin(\frac{\pi}{x})-\frac{\pi}{x} \cos (\frac{\pi}{x})$ The function is increasing if the derative is more than or equal to $0$, so: $\sin(\frac{\pi}{x})-\frac{\pi}{x} \cos (\frac{\pi}{x}) \ge 0$ $\sin(\frac{\pi}{x}) \ge \frac{\pi}{x} \cos (\frac{\pi}{x}) $ Since $ \cos ( x) > 0$, if $ 0< x < \pi$, $ \cos (\frac{\pi}{x}) > 0 $, because $ 0<\frac { \pi}{x}< \pi$. $ \tan (\frac{\pi}{x}) \ge \frac{\pi}{x}$ I get to this point and don't know how to continue. I'd like you to help me or give me a hint, or maybe see a different way of showing it. Anyway, thanks.
Try the Maclaurin series for tangent: $$\tan u = u + \frac{u^3}{3} + \frac{2 u^5}{15} + \ldots$$ All terms are positive when $0 < u < \frac{\pi}2$ (where the series converges), so $\tan u > u$ there. Substituting $u = \frac{\pi}{x}$, this proves that $\tan \frac{\pi}{x} > \frac{\pi}{x}$ whenever $0 < \frac{\pi}{x} < \frac{\pi}2$, i.e. for $x > 2$. Alternatively, define $g(u) = \tan u - u$. Then $g(0) = 0$ and $g'(u) = \sec^2 u - 1 > 0$ for $0 < u < \frac{\pi}2$, so $g$ is increasing, and $g(u) > 0$ on $0 < u < \frac{\pi}2$. Now, here's the catch: remember that tangent is not defined at $u = \frac{\pi}2$. However, the argument of your tangent function is $\frac{\pi}{x}$. For $x > 2$, this fraction stays in $(0, \frac{\pi}2)$, decreases as $x$ runs from $2$ to $\infty$, and in fact approaches zero. Therefore, you are safe with your argument showing that the original function is increasing for $x > 2$. For $1 < x \leq 2$, you need to be a bit more careful.
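As a numerical sanity check (a sketch), sampling $f(x)=x\sin(\pi/x)$ on a grid to the right of $x=2$ shows strictly increasing values approaching the limit $\pi$:

```python
import math

def f(x):
    return x * math.sin(math.pi / x)

xs = [2 + 0.01 * k for k in range(1, 10_000)]   # grid in (2, 102)
vals = [f(x) for x in xs]
increasing = all(b > a for a, b in zip(vals, vals[1:]))
print(increasing, vals[-1], math.pi)  # True, and the values approach pi
```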
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Determine the point where function output will go from positive to negative I have a function that is like: f(x) = c - x^2 (c = some constant positive integer, x = +ve integer >= 0) The output of this function, goes from positive to negative as x -> +infinity. * *Is there a way to directly figure out the x which produces last of the positive outputs before entering in the negative domain? *Also, if I use an iterative method to locate such a point, how should I find the right increment to go with so that I can reach that value of x as quick as possible? Right now, I am initializing x = floor(sqrt(c) * 0.9) and using dumb +1 increments to x to reach the point where f(x) enters the negative range. *An observation: f(x) seems to behave in a weird way for c > 10e40 (i.e. when we initiate x = floor(sqrt(c) * 0.9) it gives out a value way too far from the desired point) and you can imagine how long the +1 increment takes to get to the desired output with such large values of c.. ;-(. Please help. thanks.
If we solve the quadratic equation $c-x^2=0$ for a real number $x$, we get $x=\sqrt c$ or $x=-\sqrt c$. Since we are looking at the branch $x\ge 0$, the required solution is the largest integer not exceeding $\sqrt c$. For example, if $c=2$ then $\sqrt c=1.414\ldots$ and the solution would be $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2895874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Product of a decreasing sequence and a diverging series Suppose we have a monotonic decreasing sequence $a_n$ converging to $0$. Does there always exist a non-negative series $b_n$, such that $$\sum_{n=1}^\infty b_n = \infty$$ but $$\sum_{n=1}^\infty a_n b_n < \infty$$? Edit: yes, as answered below. What if we also insist $b_n$ is decreasing?
Choose $n_1<n_2<...$ such that $a_{n_{k}} <\frac 1 {2^{k}}$ and define $b_n=1$ if $n \in \{n_1,n_2,...\}$, $b_n=0$ otherwise. Note that $b_n=1$ for infinitely many $n$. Hence $b_n$ does not tend to $0$, which implies $\sum b_n =\infty$. On the other hand, $\sum_n a_n b_n = \sum_k a_{n_k} < \sum_k \frac 1 {2^k} < \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Last digit of sequence of numbers We define the sequence of natural numbers $$ a_1 = 3 \quad \text{and} \quad a_{n+1}=a_n^{a_n}, \quad \text{ for $n \geq 1$}. $$ I want to show that the last digit of the numbers of the sequence $a_n$ alternates between the numbers $3$ and $7$. Specifically, if we symbolize with $b_n$ the last digit of $a_n$, I want to show that $$ b_n = \begin{cases} 3, & \text{if $n$ is odd}, \\ 7, & \text{if $n$ is even}. \end{cases} $$ There is a hint to prove that for each $n \in \mathbb{N}$, if $a_n \equiv 3 \pmod{5}$ then $a_{n+1} \equiv 2 \pmod{5}$ and if $a_n \equiv 2 \pmod{5}$ then $a_{n+1} \equiv 3 \pmod{5}$. First of all, if we take $a_n \equiv 3 \pmod{5}$, then $a_{n+1}=3^3\pmod{5} \equiv 2 \pmod{5}$. If $a_n \equiv 2 \pmod{5}$, then $a_{n+1}=2^2 \pmod{5}=4$. Or am I doing something wrong? And also how does it follow, if we have shown the hint, that $b_n$ is $3$ when $n$ is odd, and $7$ if $n$ is even?
The mistake you are making is that if $a_n \equiv 2 \pmod 5$ it's not true that $a_n^{a_n} \equiv 2^2 \pmod 5$. The reason behind this is that the exponents aren't repeating in blocks of $5$, but instead in blocks of $\phi(5) = 4$, in your case. Indeed by Fermat's Little Theorem we have that $a_n^4 \equiv 1 \pmod 5$. Thus you need to find $a_n \pmod 4$ first. This isn't hard to do, as $a_1 = 3$. Thus whenever it's raised to an odd power we get that $a_n \equiv -1 \pmod 4$. Hence we have: $$a_n \equiv a_{n-1}^{a_{n-1}} \equiv a_{n-1}^{-1} \pmod 5$$ Now use the fact that $2$ is the modular inverse of $3$ modulo $5$ to conclude that: $$a_n \equiv \begin{cases} 3 \pmod 5, & \text{if $n$ is odd} \\ 2 \pmod 5, & \text{if $n$ is even} \end{cases}$$ Finally note that $a_n \equiv 1 \pmod 2$ and use Chinese remainder Theorem to conclude that: $$a_n \equiv \begin{cases} 3 \pmod{10}, & \text{if $n$ is odd} \\ 7 \pmod{10}, & \text{if $n$ is even} \end{cases}$$
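The argument can be checked computationally. Every $a_n$ is $\equiv 3 \pmod 4$ and coprime to $10$, so modulo $10$ the recursion reduces to cubing the last digit; Python's three-argument `pow` also lets us cross-check against the exact $a_3 = 27^{27}$ (already a $39$-digit number; $a_4$ itself is far too large to write down):

```python
def last_digits(count):
    # b_1 = 3; since every a_n is 3 (mod 4) and coprime to 10,
    # the next last digit is the cube of the current one, mod 10
    digits, x = [], 3
    for _ in range(count):
        digits.append(x)
        x = pow(x, 3, 10)
    return digits

print(last_digits(8))  # [3, 7, 3, 7, 3, 7, 3, 7]

# cross-check with the exact a_3 = 27**27; three-argument pow also
# handles the astronomically large a_4 = a_3**a_3 without computing it
a3 = 27 ** 27
print(a3 % 10, pow(a3, a3, 10))  # 3 7
```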
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Probability of events with retries? Let's say I want to roll $n$ 20-sided dice, and I want none of those dice to be a 1. I figure that the probability at least one die will be a 1 is $\frac{19}{20}^n$. But now let's say that we will re-roll each individual die that is a 1 up to $r$ times. I want to know 2 things: * *Given the above, what is the probability one or more of the dice will be a 1? *Suppose I play this game a million times. How many dice rolls will a given game make on average? In other words, for each game, I will make $n+t$ dice rolls, where $t$ is the number of retries I've made. What would $t$ be on average?
In order for a die to end up being $1$, it has to come up $1$ a total of $r+1$ times in a row. Therefore, each die ends up being one with probability $(1/20)^{r+1}$, so $$ P(\text{at least one die is }1)=1-P(\text{no dice are }1)=\boxed{1-\bigg(1-\frac1{20^{r+1}}\bigg)^n.} $$ To compute the expected number of rolls, we compute the expected number of rolls for each die and multiply by $n$. Let $X$ be the number of times a particular die is rolled, and let's compute $P(X> k)$. The die is rolled more than $k$ times if and only if its first $k$ rolls are all $1$ (and $k\le r$, since a die is never rolled more than $r+1$ times). Therefore, $$ P(X>k) = \frac{1}{20^k},\qquad k=0,1,\dots,r. $$ Then the expected value is $$ E[X]=\sum_{k=0}^r P(X>k)=\sum_{k=0}^r \frac{1}{20^k}=\frac{1-(1/20)^{r+1}}{1-1/20}. $$ Therefore, $$ E[\#\text{ of rolls}]=n\cdot E[X]=\boxed{n\cdot \frac{20}{19}\Big(1-\frac{1}{20^{r+1}}\Big).} $$ For example, when $r=0$, the expected number of rolls is $n$. As $r$ tends to infinity, the expected number of rolls tends to $n\cdot \frac{20}{19}$.
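Both boxed formulas can be checked by simulation (a sketch; the names are my own). With $n=5$ and $r=1$ the expected-roll formula gives $5\cdot\frac{20}{19}\cdot\frac{399}{400}=5.25$:

```python
import random

def play(n, r, rng):
    # roll n dice; reroll each die showing 1 up to r times;
    # return the total number of rolls made in this game
    rolls = 0
    for _ in range(n):
        for _ in range(r + 1):
            rolls += 1
            if rng.randint(1, 20) != 1:
                break
    return rolls

rng = random.Random(42)
n, r, games = 5, 1, 100_000
sim = sum(play(n, r, rng) for _ in range(games)) / games
formula = n * (20 / 19) * (1 - 1 / 20 ** (r + 1))
print(sim, formula)  # both close to 5.25
```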
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$C^1$ function with limit decay at infinity Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a continuously differentiable function (i.e. $f \in C^1(\mathbb{R})$). Assume that $$\lim_{x \rightarrow \infty} xf'(x) = 0 \ \mbox{and} \ \ \lim_{n \rightarrow \infty} f(2^n) = 0.$$ Then I would like to show that $\lim_{x \rightarrow \infty} f(x) = 0.$ So the picture is $f$ is a continuous function with some points, say $2, 4, 8 , 16 ,.. , 2^n, ...,$ approaching zero. Normally, with a continuous function $g$ defined by $$g(2^n) = 0$$ on each $n \in \mathbb{N}$ and $g(x) = 1$ all the time except a small interval around each $2^n$, $g(x)$ is just a line joining $1$ and $0$. Then this function $g$ does not have limit at infinity. What I see is there might be a sharp slope in each interval around $2^n$ (the line joining $1$ and $0$). This might where the condition $\lim_{x \rightarrow \infty} xf'(x) = 0$ comes to play (Roughly, it says that for all big enough $x$, the slope of the graph $f(x)$ cannot exceed $1/x$). So the slope of $g$ might exceed $1/x.$ From this I try to prove by contradiction by assumeing the limit at infinity of $f$ is not zero. But I seem to struck. Any suggestion to do the contradiction, or any kind of other proof which might seem simpler to do ?
Assume that $f(x)$ does not tend to zero as $x\to\infty$. Then there exists $\varepsilon>0$ and an infinite set of natural numbers $I$ such that for every $n\in I$ you can find a number $x_n$ in the interval $[2^n,2^{n+1})$ for which $|f(x_n)|>\varepsilon$. (If this were not the case, then for every $\varepsilon>0$ there would be some natural number $N$ such that for every $n>N$ and every $x$ in $[2^n,2^{n+1})$, the function $f$ would satisfy $|f(x)|<\varepsilon$, which implies that $\lim_{x\to\infty}f(x)=0$.) So take our $\varepsilon>0$ and our infinite set $I$ of natural numbers with the above property. We have our sequence $x_n\in [2^n,2^{n+1})$ for which $|f(x_n)|>\varepsilon$. Take $n\in I$ such that $|f(2^n)|<\varepsilon/2$. This is possible because $I$ is infinite and $\lim_{n\to\infty} f(2^n)=0$. Assume for the moment that $f(x_n)>\varepsilon$. By the mean value theorem, there exists $\xi_n\in (2^n,x_n)$ such that $f'(\xi_n)(x_n-2^n)=f(x_n)-f(2^n)$. Note that $x_n-2^n<2^n<\xi_n$. Also, $f(x_n)>\varepsilon$ and $f(2^n)<\varepsilon/2$, so the MVT equation implies $$f'(\xi_n)\xi_n>f'(\xi_n)(x_n-2^n)=f(x_n)-f(2^n)>\varepsilon/2$$ Thus we found a sequence $\xi_n$ tending to infinity for which $f'(\xi_n)\xi_n$ stays away from zero, contradicting the assumption that $xf'(x)\to 0$ as $x\to\infty$. A similar argument handles the case $f(x_n)<-\varepsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $p(x) = x^4-4x^3+6x^2-4x+1$ is the Taylor polynomial of $f$ around $x=1$, then $1$ is a local minimum Consider $f:\mathbb{R} \to \mathbb{R} \in C^4$. Show that $p(x) = x^4-4x^3+6x^2-4x+1$ is the Taylor polynomial of order $4$ of $f$ around $x=1$, then $1$ is a local minimum. I'm not sure how to proceed. I know that $p(x) = \sum\limits_{k=0}^{4} \dfrac{f^{(n)}(1)(x-1)^n}{n!} = f(1)+f'(1)(x-1)+\dfrac{f''(1)(x-1)^2}{2} + \dfrac{f'''(1)(x-1)^{3}}{6} + \dfrac{f^{(4)}(1)(x-1)^4}{24}=x^4-4x^3+6x^2-4x+1$ and I can see that $p(1) = 0 \implies f(1)=0$. But how can I obtain information about $f'(1)$ and $f''(1)$?
For $0 \le k \le 4$, let $f^{(k)}$ denote the $k$-th derivative of $f$. Since the Taylor polynomial of degree $4$ for $f$ at $x=1$ is $$x^4-4x^3+6x^2-4x+1=(x-1)^4$$ it follows that $f^{(k)}(1)=0$, for $0\le k\le 3$, and $f^{(4)}(1)=24$. For brevity, let * *"holds near $x = 1$ on the left" mean "holds in some open interval with right endpoint at $1$".$\\[4pt]$ *"holds near $x = 1$ on the right" mean "holds in some open interval with left endpoint at $1$".$\\[4pt]$ *"holds near $x = 1$" mean "holds near $x=1$ on the left" and "holds near $x=1$ on the right". Since $f^{(4)}$ is continuous and $f^{(4)}(1) > 0$, it follows that $f^{(4)}(x) > 0$ near $x=1$. Using the above result and applying the Mean Value Theorem, it follows that $f^{(3)}(x) < f^{(3)}(1) = 0$ near $x=1$ on the left, and $f^{(3)}(x) > f^{(3)}(1) = 0$ near $x=1$ on the right. Using the above result and applying the Mean Value Theorem, it follows that $f^{(2)}(x) > f^{(2)}(1)=0$ near $x=1$. Using the above result and applying the Mean Value Theorem, it follows that $f^{(1)}(x) < f^{(1)}(1) = 0$ near $x=1$ on the left, and $f^{(1)}(x) > f^{(1)}(1) = 0$ near $x=1$ on the right. Using the above result and applying the Mean Value Theorem, it follows that $f(x)=f^{(0)}(x) > f^{(0)}(1)=0$ near $x=1$. Therefore $f$ has a local minimum at $x=1$. To show how the Mean Value Theorem was applied, let's examine one of the cases . . . Since $f^{(4)}(x) > 0$ near $x=1$, we have $f^{(4)}(x) > 0$ for all $x\in (1,b)$, for some $b > 1$. Suppose $f^{(3)}(t) \le 0$, for some $t\in (1,b)$. Then by the MVT, it would follow that $$f^{(4)}(s)=\frac{f^{(3)}(t)-f^{(3)}(1)}{t-1}\le 0$$ for some $s\in (1,t)$, contradiction, since $s\in (1,t)$ implies $s\in (1,b)$. Therefore $f^{(3)}(x) > 0$ for all $x\in (1,b)$. The reasoning for the other cases is analogous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Binomial sum formula for $(n+1)^{n-1}$ Has anybody seen a proof for $$ (n+1)^{n-1}=\frac{1}{2^n}\sum_{k=0}^n C_n^k(2k+1)^{k-1}(2(n-k)+1)^{n-k-1} ? $$ There are lots of reasons to think that this is true. In particular the formula holds for $n=0,1,2,3,4,5$.
A (not straight) proof. Using the Bürmann–Lagrange formula we expand the regular at $z=0$ solution $w(z)$ of the equation $w=ze^{aw^2}$ as the following series $$ w(z)=\sum_{n=0}^{\infty}\frac{a^n(2n+1)^{n-1}} {n!}z^{2n+1} $$ (see problem 26.07 in A Collection of Problems in the Theory of Analytic Functions, edited by M. A. Evgrafov, in Russian; any other reference for this particular expansion, not for the Bürmann–Lagrange formula, will be greatly appreciated). Using the Cauchy formulas for the coefficients of series products, we get from this expansion $$ w^2=\sum_{n=0}^{\infty}\frac{a^{n}} {n!}z^{2n+2} \sum_{k=0}^{n}C_n^k(2k+1)^{k-1}(2(n-k)+1)^{n-k-1}. $$ On the other hand, since $w^2=z^2e^{2aw^2}$, applying the expansion in problem 26.06 (from the same problem book), we get that $$ w^2=\sum_{n=0}^{\infty}\frac{a^{n}2^n(n+1)^{n-1}} {n!}z^{2n+2}. $$ Equating the coefficients of $z^{2n+2}$ in both series we obtain the desired equality. The first mentioned problem 26.07 is for any $m\geq1$ and states that the regular at $z=0$ solution $w(z)$ of the equation $w=ze^{aw^m}$ has the expansion $$ w(z)=\sum_{n=0}^{\infty}\frac{a^n(mn+1)^{n-1}} {n!}z^{mn+1} $$ (above we have used it only for the value $m=2$). Based on the same two mentioned problems, the formula can be generalized for any $m\in\Bbb{N}$ as follows. Let $\alpha_k^{n}\in\Bbb{N}\cup\{0\}$ with $0\leq\alpha_k^{n}\leq n$ for $1\leq k\leq m.$ Denote by $A_{m}^{n}: =\{\alpha_1^{n},\alpha_2^{n},\ldots,\alpha_m^{n}\}$ an $m$-tuple of such numbers such that $\sum_{k=1}^m\alpha_k^{n}=n$. Further denote by $ (A_{m}^{n})!:=\alpha_1^{n}!\alpha_2^{n}!\cdots \alpha_m^{n}!$, and by $ C_n^{A_{m}^{n}}:=\binom{{n}}{A_m^n} := \frac{n!}{(A_{m}^{n})!}.$ Then, \begin{gather*} \boxed{\quad(n+1)^{n-1}=\frac{1}{m^n} \sum_{A_{m}^{n}} C_n^{A_{m}^{n}} (m\alpha_1^{n}+1)^{\alpha_1^{n}-1} \cdots (m\alpha_m^{n}+1)^{\alpha_m^{n}-1}\quad}. \end{gather*}
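Since the question notes the identity was only checked up to $n=5$, here is a quick exact-arithmetic check of the $m=2$ identity for larger $n$ (a sketch; the tested range is an arbitrary choice):

```python
from fractions import Fraction
from math import comb

def identity_holds(n):
    """Check (n+1)^(n-1) == 2^(-n) * sum_k C(n,k) (2k+1)^(k-1) (2(n-k)+1)^(n-k-1)
    exactly, using rational arithmetic (the exponents k-1 can be -1 when k=0)."""
    lhs = Fraction(n + 1) ** (n - 1)
    rhs = Fraction(1, 2 ** n) * sum(
        comb(n, k)
        * Fraction(2 * k + 1) ** (k - 1)
        * Fraction(2 * (n - k) + 1) ** (n - k - 1)
        for k in range(n + 1)
    )
    return lhs == rhs

results = [identity_holds(n) for n in range(13)]
```

Exact rationals avoid any floating-point doubt; the identity holds for every tested $n$.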
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On Infinite Limits I am currently learning about infinite limits in Calculus, basically determining the limit of a function as x approaches infinity. However, I am struggling to understand the method being used to find it. Let’s take the function above. The method above seems to be to ignore all the terms with a lower power except the terms with the highest power. Then, because both of the highest terms are x^5, we cross them out and then divide their coefficients, to get a limit of 2/3. I guess you can do this because at infinity, the value of x^5 would be so big that it would dominate all the other values However, I still have a few problems with this method: * *But on that logic, why should we care that the coefficient of the numerator of is 4 and the denominator’s coefficient is 6? The value of x^5 is so big that it would dominate both of them anyway? At this rate, because it would dominate everything, both the numerator and denominator of every function approaching infinity should be infinity over infinity! So wouldn’t the limit of all infinite functions be 1? *This is another example that confuses me. Apparently, when x is infinity, you can ignore the 10, because infinity would dominate the whole function, and therefore the limit would be 0. But this doesn’t make sense to me! Even at infinity, the difference between the 2 would be 10, not 0. No matter how large a number you sub in, the difference between the 2 functions will be 10, and therefore, how can the functions approach 0 as x approaches infinity? *I also I get that if you zoom out in the function above, it would truly seem like that function is approaching 0. But then that wouldn’t be the limit of the function would it? That would be zooming out! Once we zoom back in, we will be able see that the function is sticking at 10, not getting closer to 0! So how can we say that the limit of the function at infinity is 10? Can someone explain the above to me? 
Can you also not make the explanation too rigorous? I’m just learning Khan Academy Calculus, and still haven’t touched on things like epsilon delta proofs yet. Thank you!
For 2 and 3, it seems like you are making an algebra mistake. $$(\sqrt x+10)^2=x+20\sqrt x+100\ne x+100$$ For example, if $x=10000, \sqrt x=100$. $\sqrt{10100}<101$. Indeed, your problem shows $\sqrt{100+x}-\sqrt x=\dfrac{100}{\sqrt{100+x}+\sqrt x}$. As x increases, the denominator increases without bound while the numerator stays the same.
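The rationalized form $\sqrt{100+x}-\sqrt x=\frac{100}{\sqrt{100+x}+\sqrt x}$ also makes the decay easy to see numerically. A quick sketch (the sample points are arbitrary):

```python
import math

def gap(x):
    """The difference sqrt(100 + x) - sqrt(x)."""
    return math.sqrt(100 + x) - math.sqrt(x)

xs = [10 ** k for k in range(2, 9)]
gaps = [gap(x) for x in xs]
# the algebraically equivalent rationalized form
rationalized = [100 / (math.sqrt(100 + x) + math.sqrt(x)) for x in xs]
```

The gap shrinks steadily (about $0.005$ at $x=10^8$), exactly as the rationalized fraction predicts.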
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Question about limits. I am currently doing a physics project and have two expressions of two versions of a length $L$ of the form $$L_h=2\pi N\sqrt{\left\langle R \right\rangle^2 +\left(\dfrac{h}{2\pi N}\right)^2}$$ and $$L_v=\pi N \sqrt{\left\langle R \right\rangle^2+\left(\dfrac{h}{\pi N}\right)^2} +\frac{\pi^2 N^2\left\langle R \right\rangle^2}{h} \ln \left(\sqrt{1+\left(\dfrac{h}{\pi N\left\langle R \right\rangle}\right)^2}+ \dfrac{h}{\pi N\left\langle R \right\rangle} \right)$$ I am trying to find the limit of these expressions as $\dfrac{h}{N}\to0^+$. The issue is that I cannot isolate all $u=h/N$ out of the expressions (there's always one $N$ as a coefficient on each term.) When I use MATLAB to calculate the limit, I get $$\lim_{h\to0^+}L_f=2\pi N\langle R\rangle$$ and $$\lim_{h\to0^+}L_v=\frac32\pi N\langle R\rangle$$ However, I'm not sure about * *whether $\lim_{h\to0^+}L = \lim_{h/N\to0^+}L$, and *how to prove these results mathematically. Thank you very much.
$h$ seems to be some kind of length scale of your system, thus you may try to rescale $L_\nu$ by it, i.e. divide all your equations by $h$ and you have $u$-s all around. As for the rest: expand the root and the log consecutively in a power series, then you may easily carry out the limit. For instance: $$ \sqrt{\langle R\rangle^2 + \left(\frac{h}{\pi N}\right)^2} \to \langle R\rangle +\frac{1}{2\langle R\rangle}\left(\frac{h}{\pi N}\right)^2 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2896977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to take second derivative implicitly Let $$y^4 + 5x = 21.$$ What is the value of $d^2y/dx^2$ at the point $(2, 1)$? I’m stuck at trying to work out the second implicit derivative of this function. As far as I can work out, the first derivative implicitly is $$\dfrac {-5}{4y^{3}}$$ How do you take the second derivative implicitly with respect to $x$ when $x$ has vanished? There would be nowhere to plug in $x=2$. What am I missing here?
There's a typo in your problem statement. The point $(2,1)$ is not on the curve. They probably meant $(1,2)$. I would not have solved for the first derivative. Just differentiate the equation again, remembering that both $y$ and $y'$ are functions of $x$, and so the chain rule applies: $$y^4+5x=21$$ $$4y^3y'+5=0$$ $$4y^3y'' + 12y^2y'y' = 0.$$ Plug in $y=2$ in the second line to get $4\cdot 8 y'+5=0$ and solve to get $y' = -5/32.$ Then plug $y=2$ and $y'=-5/32$ into the third line and solve for $y''$. I get $-75/2048.$ (But I've had only one coffee this morning.)
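Solving $y^4+5x=21$ for the branch through $(1,2)$ gives $y=(21-5x)^{1/4}$, so the implicit answers can be double-checked with finite differences. A numerical sketch (the step size is an arbitrary choice):

```python
def y(x):
    # explicit branch of y^4 + 5x = 21 passing through (1, 2)
    return (21 - 5 * x) ** 0.25

x0, h = 1.0, 1e-4
first = (y(x0 + h) - y(x0 - h)) / (2 * h)               # central difference for y'
second = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h ** 2   # central difference for y''
```

Both differences match $y'(1)=-5/32$ and $y''(1)=-75/2048$ to many digits.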
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
$A^3 = A^2$ How can $A$'s minimal polynomial look like? Let $K$ be a field and $A \in K^{n \times n}$ a matrix with $A^3 = A^2$. How can $A$'s minimal polynomial $\mu_A$ look like? The only possibilities I could think of are * *$A = 0$. Then the characteristic polynomial is $P_A(t) = -t^n$. *$A = E$, where $E$ is the $n \times n$ identity matrix. Then the characteristic polynomial is $P_A(t) = (1 -t)^n$. I am surely missing some possibilities. How can I draw a conclusion about the minimal polynomial from characteristic polynomial, knowing that the exponents in the first are the sizes of the largest Jordan blocks.
Let $m(x)$ be the minimal polynomial. Since $A^3-A^2=0$ and the minimal polynomial divides every polynomial that annihilates $A$, we have $m(x) \mid x^3-x^2=x^2(x-1)$. So $m(x)$ can be $x$ or $x-1$ (as you said), but also * *$x(x-1)$ for example in $$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$ *$x^2$ for example in $$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$ *$x^2(x-1)$ for example in $$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$$ In general, if $p(A)=0$ then $m(x) \mid p(x)$; conversely, every monic divisor $m(x)$ of $p(x)$ is the minimal polynomial of some matrix $A$ satisfying $p(A)=0$ (for instance, the companion matrix of $m$).
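A quick check in plain Python (a sketch) that each example matrix above really satisfies $A^3=A^2$:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

examples = [
    [[1, 0], [0, 0]],                       # minimal polynomial x(x-1)
    [[0, 1], [0, 0]],                       # minimal polynomial x^2
    [[1, 0, 0], [0, 0, 1], [0, 0, 0]],      # minimal polynomial x^2(x-1)
]

ok = all(matmul(matmul(A, A), A) == matmul(A, A) for A in examples)
```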
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Multi-variable chain rule with multi-variable functions as arguments What is the chain rule of a multi-variable function with arguments that are also multi-variable functions? Suppose $x$, $y$, $z$ are independent variables. I mean changing $x$ won't change $y$ and $z$. Is the general form of multi-variable chain rule similar to the following? $$\frac{\partial w(u(x, y), v(y, z), t(y, z))}{\partial x} = {\frac{\partial w}{\partial u}}\cdot{\frac{\partial u}{\partial x}} + {\frac{\partial w}{\partial v}}\cdot{\frac{\partial v}{\partial x}} $$ Thanks.
That's nearly right, but you left off the third term that accounts for $t$. So you should have $$\frac{\partial w}{\partial x} = \frac{\partial w}{\partial u}\cdot\frac{\partial u}{\partial x} + \frac{\partial w}{\partial v}\cdot\frac{\partial v}{\partial x} + \frac{\partial w}{\partial t}\cdot\frac{\partial t}{\partial x}$$ where $w$ is a function of $u,v,t$ and $u,v,t$ are functions of $x,y,z$. One mnemonic device for remembering this is to think of it as summing over all products of "fractions" that will partially cancel and each give $\partial w$ in the numerator and $\partial x$ in the denominator. But be warned that that's not really what's happening, these aren't fractions and they aren't cancelling. Nonetheless, this helps you to remember it. Addendum: Of course, if $x$ doesn't actually appear in the formula for $u$, $v$, or $t$, then that partial is zero. What I wrote is the most general formula, not just the formula for your specific case. In other words, even if $v=v(y,z)$ , you can still think of it as $v(x,y,z)$; but then ${\partial v}/{\partial x} = 0$ , and similarly for the others. Not sure if you really wrote what you meant, but my answer is applicable to the general case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Number of integer solutions combinatorics problem what is the number of integer solutions to $$x_1+x_2+x_3+x_4+x_5=18$$ with $$x_1\ge1\;\;\;x_2\ge2\;\;\;x_3\ge3\;\;\;x_4\ge4\;\;\; x_5\ge5$$ I know I have to use this formula $$\frac{(n+r-1)!}{(n-1)!\;r!}= {{n+r-1}\choose r}$$ My instinct says that I should use $n=18-1-2-3-4=18-15=3$ and $r=5$ but I m not sure it makes sense? Anyone can help me please?
We can change the problem from its original form $$x_1+x_2+x_3+x_4+x_5=18$$ $$x_1\ge1,x_2\ge2,x_3\ge3,x_4\ge4,x_5\ge5$$ to $$y_1+y_2+y_3+y_4+y_5=(x_1-1)+(x_2-2)+(x_3-3)+(x_4-4)+(x_5-5)=3$$ $$y_1\ge0,y_2\ge0,y_3\ge0,y_4\ge0,y_5\ge0$$ and use generating functions approach, giving the following function $$(1+y+y^2+...+y^k+...)^5$$ and the coefficient of $y^3$ term is the answer, which is 35.
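The count of $35$ can be confirmed by direct enumeration of the original problem, and compared to the stars-and-bars value (a quick sketch):

```python
from itertools import product
from math import comb

# brute-force the original constrained equation x1+...+x5 = 18
count = sum(
    1
    for xs in product(range(1, 19), range(2, 19), range(3, 19),
                      range(4, 19), range(5, 19))
    if sum(xs) == 18
)

# stars and bars for y1+...+y5 = 3 with yi >= 0: C(3+5-1, 3)
stars_and_bars = comb(3 + 5 - 1, 3)
```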
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Vector Space with unusual addition? I'm studying before my class starts in a few weeks and I encountered this question in one of the practice problems: The addition it has given me is defined as, $(a,b)+(c,d)= (ac,bd)$ It's asking me if this is a vector of space and I am stuck after proving this, There is an element $0$ in $V$ so that $v + 0 = v$ for all $v$ in $V$. I did this -> $(a,b)+(1,1) = (1a,1b) = (a,b)$ Stuck right here, For each $v$ in $V$ there is an element $-v$ in $V$ so that $v+(-v) = 0$. $(a,b)+(0,0) = (0a,0b) = (0,0)$ Is $(0,0)$ $a$ $-v$ when there's no such thing as '$-0$'? Do I stop proving right at the step? So this is not a vector of space? Thank you for your time. Edit: Thank you everyone! The question is stated exactly like so, Show that the set of ordered pairs of positive real numbers is a vector space under the addition and scalar multiplication. $$(a,b)+(c,d) = (ac,bd),$$ $$c(a,b) = (a^c, b^c).$$ So the additive inverse is an element that, when added to $(a,b)$, will give me the additive identity, which in this case is $(1,1)$?
As you have the neutral element $o=(1,1)$ you need to make sure your inverses are relative to that. Assuming $V=\{(a,b): a,b\in\mathbb{R}, a,b>0\}$ or something of that kind you could use $(a,b)+(\frac1a,\frac1b)=(1,1)$. What you still need is to tell us how your base field acts on $V$.
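For the full problem stated in the edit (positive reals, with $c(a,b)=(a^c,b^c)$), the axioms can be spot-checked numerically, taking $(1,1)$ as the zero vector and $(1/a,1/b)$ as the additive inverse. This is only a sketch: the sample vectors and scalars are arbitrary, and floating-point comparisons use a tolerance.

```python
import math

add = lambda u, v: (u[0] * v[0], u[1] * v[1])      # (a,b)+(c,d) = (ac, bd)
smul = lambda c, u: (u[0] ** c, u[1] ** c)          # c(a,b) = (a^c, b^c)
zero = (1.0, 1.0)                                   # neutral element
neg = lambda u: (1 / u[0], 1 / u[1])                # additive inverse

close = lambda p, q: all(math.isclose(a, b, rel_tol=1e-12) for a, b in zip(p, q))

u, v = (2.0, 3.0), (0.5, 4.0)
c, d = 3.0, -2.0

checks = [
    close(add(u, zero), u),                                  # u + 0 = u
    close(add(u, neg(u)), zero),                             # u + (-u) = 0
    close(add(u, v), add(v, u)),                             # commutativity
    close(smul(c, add(u, v)), add(smul(c, u), smul(c, v))),  # c(u+v) = cu+cv
    close(smul(c + d, u), add(smul(c, u), smul(d, u))),      # (c+d)u = cu+du
    close(smul(c, smul(d, u)), smul(c * d, u)),              # c(du) = (cd)u
    close(smul(1.0, u), u),                                  # 1u = u
]
```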
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
$f(x)-2f(\frac{x}{2})+f(\frac{x}{4})=x^2$ , find $f(x)$ Find $f(x)$ if $f(x)-2f(\frac{x}{2})+f(\frac{x}{4})=x^2$, where $x, f(x)\in (-\infty , \infty)$ and $f(x)$ is continuous.
$y=mx^2+c$ is one solution where $c\in \mathbb{R}$ $$mx^2-2\cdot m \frac{x^2}{4}+m \frac{x^2}{16}=x^2$$ $$m=\frac{16}{9}$$ $$y=\frac{16}{9}x^2+c$$ As mentioned by @lulu, if $f(x)$ is a solution then $f(x)+c$ is also a solution.
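A numeric spot-check of the family $f(x)=\frac{16}{9}x^2+c$ (a sketch; the sample points and the value of $c$ are arbitrary choices):

```python
def f(x, c=7.0):
    # candidate solution: f(x) = (16/9) x^2 + c
    return 16.0 / 9.0 * x * x + c

# residual of f(x) - 2 f(x/2) + f(x/4) - x^2, which should vanish
residuals = [f(x) - 2 * f(x / 2) + f(x / 4) - x * x for x in (-4.0, 0.3, 1.0, 2.5)]
```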
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Question regarding basis and dimension of vector space of polynomials Let $V_n$ be the vector space consisting of all polynomials of the form $$f(x,y)=\sum_{i=0}^n\sum_{j=0}^n a_{i,j}x^iy^j$$ where $a_{i,j}\in\mathbb{R}$. (a) State the dimension of $V$, and give a basis for $V$. (b) Let $U\leq V$ be the subspace of $V$ consisting of all $f\in V$ such that $f(x,x)=0$ for all $x\in\mathbb{R}$. Compute a basis for $U$, and determine the dimension of $U$. Part (a) is relatively straightforward. The dimension is $(n+1)^2$. I am looking for assistance in how to express the basis. It is clear that a basis is composed of all monic polynomials of order $0$ to $n$ in $x$ and $y$. So, something along the lines of $\{1,x^i,y^j,x^iy^j:i,j=1,2,\cdots,n\}$, maybe? For part (b), without the $a_{i,j}$ coefficients, $U$ would just be the trivial subspace, but I'm not sure otherwise.
You're correct about $(a)$. For part $(b)$, note that $U$ is the kernel of the linear map $$V\to \Bbb R^{2n+1}:\sum_{i=0}^n\sum_{j=0}^n a_{i,j}x^iy^j\mapsto \left(\sum_{i+j=N} a_{i,j}\right)_{N=0,\ldots, 2n}$$ This map is surjective, which gives you the dimension of $U$. For a basis, fix the value of $N$, set $a_{i,j}=0$ when $i+j\neq N$ and you are left essentially to compute a basis of the kernel of $$\Bbb R^{N+1}\to \Bbb R:(x_0,\ldots ,x_{N})\mapsto \sum_{i=0}^N x_i$$ (when $N>n$ there are only $2n+1-N$ monomials with $i+j=N$, but the computation is the same).
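Since each monomial $x^iy^j$ is sent to the standard basis vector $e_{i+j}\in\Bbb R^{2n+1}$, the rank of the map is just the number of distinct values of $i+j$, namely $2n+1$, giving $\dim U=(n+1)^2-(2n+1)=n^2$. A small script to make that concrete (a sketch):

```python
from itertools import product

def dim_U(n):
    # each monomial x^i y^j maps to the standard basis vector e_{i+j},
    # so the rank equals the number of distinct sums i + j
    rank = len({i + j for i, j in product(range(n + 1), repeat=2)})
    return (n + 1) ** 2 - rank

dims = [dim_U(n) for n in range(1, 7)]
```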
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$I_n = \int_{0}^{\frac{\pi}{2}}(\cos t)^n \ dt$ converges to 0? How one can prove that the sequence $\left ( I_n \right )$ defined as $$ I_n = \int_{0}^{\frac{\pi}{2}}(\cos t)^n \ dt, $$ $n \in \{ 0,1,2,...\}$ converges to $0$? Is easy to show, by the way, that the sequence is decreasing because, for $t \in (0, \pi/2)$, $$(\cos t)^{n+1}<(\cos t)^n \Rightarrow I_{n+1} < I_{n}, \ \forall n $$
With integration by parts one may show that $$I_{n}=\dfrac{n-1}{n}I_{n-2}$$ then from $I_0=\dfrac{\pi}{2}$ and $I_1=1$, both the odd and the even subsequences go to $0$ as $n\to\infty$.
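A numerical sketch (midpoint-rule quadrature; the step count and the sampled values of $n$ are arbitrary choices) illustrating both the decrease toward $0$ and the recursion $I_n=\frac{n-1}{n}I_{n-2}$:

```python
import math

def I(n, steps=20_000):
    # composite midpoint rule for the integral of cos(t)^n over [0, pi/2]
    h = (math.pi / 2) / steps
    return h * sum(math.cos((k + 0.5) * h) ** n for k in range(steps))

vals = [I(n) for n in (0, 1, 2, 10, 50, 200)]
recursion_gap = abs(I(10) - (9 / 10) * I(8))
```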
{ "language": "en", "url": "https://math.stackexchange.com/questions/2897914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Probability of program equality based on samples Program p implements a side-effect-free function f that accepts k1 bytes as input and produces k2 bytes of output. Suppose we take N samples (tuples of input/output pairs where p(i) = o), where the inputs are perfectly random. Program q satisfies these samples (q(i) = o). Obviously, if N contains all input/output pairs, q = p. What is the probability that q = p, if the N is smaller, e.g. 10? What is the value of N to achieve 99.99% probability? Or 99.9999%? Is the domain size of the output important?
Suppose there are $n$ possible inputs to the function. In your case, assuming a byte can have $256$ values and the input consists of $k_1$ bytes, we have $n = 256^{k_1}$. The most difficult situation to detect is when only one of the inputs results in an error and all the other inputs are processed correctly. In this case, the probability of finding the error on any one test is $1/n$. If we perform $N$ tests, then the expected number of times we find the error case is $N/n$. Assuming $N/n$ is small, the total number of error cases found will approximately follow a Poisson distribution with parameter $\lambda = N/n$, and the probability the error is not found in $N$ tests is $e^{-\lambda} = e^{-N/n}$. If we want this probability to be small, say less than $0.0001$, then $e^{-N/n} < 0.0001$ requires $N > -n \ln(0.0001) \approx 9.21 n$. So to be $99.99\%$ sure with random testing that the error is detected requires a number of tests which is more than $9$ times the number of tests that would be required to test all possible inputs in a systematic, non-random, fashion.
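The formula can be sanity-checked by simulating the worst case (a single faulty input among $n$). A sketch: $n$, $N$, the trial count, and the seed are arbitrary choices.

```python
import math
import random

def detection_rate(n, N, trials=2_000, seed=1):
    """Fraction of runs in which N uniform random tests hit the single bad input."""
    rng = random.Random(seed)
    found = 0
    for _ in range(trials):
        bad = rng.randrange(n)                             # the one faulty input
        if any(rng.randrange(n) == bad for _ in range(N)):
            found += 1
    return found / trials

n, N = 1_000, 3_000
rate = detection_rate(n, N)
exact = 1 - (1 - 1 / n) ** N          # exact miss probability is (1 - 1/n)^N
poisson_approx = 1 - math.exp(-N / n)  # the approximation used in the answer
```

With $N/n=3$, both the exact value and the Poisson approximation give a detection probability near $0.95$, matching the simulation.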
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is this proof for if $0 < a < b$ then $a^2 < b^2$ correct? I'm reading the book 'How to prove it' from Daniel Velleman which he presents a proof for the following; if $0 < a < b$ then $a^2 < b^2$ as; Proof. Suppose $0 < a < b$. Multiplying the inequality $a < b$ by the positive number $a$ we can conclude that $a^2 < ab$, and similarly multiplying by $b$ we get $ab < b^2$. Therfore $a^2 < ab < b^2$, as required. Thus if $0 < a < b$ then $a^2 < b^2$. However, I was also wondering if the statement could be proved using the following method. Proof. Suppose that $0 < a < b$. Taking the square root of both sides of the inequality $\sqrt{a^2} < \sqrt{b^2}$ we get our original hypothesis $a < b$ . Thus if $0 < a < b$ then $a^2 < b^2$.
In the proof from the book you presented, you've assumed $0<a<b$ and deduced $a^2<b^2$. In your presented proof, you've essentially assumed $a^2<b^2$ and deduced $a<b$. Why? The hypothesis $0<a<b$ is never used and by taking the square root of $a^2<b^2$, you implicitly assume that statement, deducing $a<b$ from it. Thus, conceptually you have shown $B\Rightarrow A$ for the corresponding statements $A,B$, i.e. you have established $A\Leftrightarrow B$ using $A\Rightarrow B$ from the previous proof. EDIT: As discussed with Ennar in the comments, there are additionally some issues with deducing $a<b$ from $a^2<b^2$ as it requires the square root function to be monotone, a property that follows from the fact the the square function is monotone on $[0,\infty)$. Then of course, you can not establish $B\Rightarrow A$ without first establishing $A\Rightarrow B$, i.e. there is another circularity of reasoning. As you can see, there are even more implicit assumptions than I had initially pointed out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is isogonal family of a given family of curves? I searched in Wikipedia isogonal trajectories about the definition but I do not understand what does it mean by fixed "angle". Angle with the tangents of the curves? Clockwise angle? Orientated Angle? Thanks in advance.
I think that for two curves $y=f(x)$ and $y=g(x)$ which intersect at $(x_0,y_0)$, they are defining the angle between the curves to be $\ \mathrm{arctan}(f'(x_0)) - \mathrm{arctan}(g'(x_0))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
if $W$ is a subspace of an inner product space $V$, which of the following statements is true? if $W$ is a subspace of an inner product space $V$, which of the following statements are true? $1)$ there is a unique subspace $W'$ such that $W' + W = V$ $2)$ there is a unique subspace $W'$ such that $W'\oplus W = V$ $3)$ there is a unique subspace $ W'$ such that $W' + W = V$ and $\langle w, w '\rangle = 0 $ for all $w \in W $ and $w' \in W' $ $4) $ there is a unique subspace $W$' such that $W' \oplus W = V $ and $ \langle w,w '\rangle $ = $0$ for all $ w \in W$ and $w' \in W' $ I thinks all options $1,2,3,4 $ will be true because $W' \cap W = \{0\}$ Any Hints/solution Thanks u
(1) and (2) are certainly false; in fact they're false if $V=\Bbb R^2$ and $W=\{(x,0):x\in\Bbb R\}$. (3) and (4) are true if $V$ has finite dimension (or if $V$ is a Hilbert space and $W$ is a closed subspace), but they're also false in a general inner-product space. For example, let $V$ be the space of sequences $x=(x_1,\dots)$ such that all but finitely many of the $x_j$ vanish, with inner product $$(x,y)=\sum x_jy_j.$$ Let $W=\{x\in V:\sum x_j=0\}$. (Users who said (3) and (4) were true presumably had $W'=W^\perp$ in mind. But here it's easy to see that $W^\perp=\{0\}$.) Similarly if $V$ is a Hilbert space and $W$ is any non-closed subspace: $$W\oplus W^\perp\ne\overline W\oplus W^\perp =\overline W\oplus\overline W^\perp=V.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$A\in \mathbb{R}^{n\times n}$ has eigenvalues in $\mathbb{Z}$ with at least 3 different eigenvalues. $\det(A)^n = 5^4$, find $A$'s eigenvalues $\newcommand{\adj}{\text{adj}}$ The question as it appeared in the first place: $A\in \mathbb{R}^{n\times n}$ such that all $A$'s eigenvalues are in $\mathbb{Z}$ and $A$ has at least 3 different eigenvalues. Let $B$ be the matrix results in making the next row operations upon $A$ : $R_1 \leftrightarrow R_2$ , $R_2 \rightarrow 5R_2$ , $R_4 \rightarrow R_4-16R_2$. Given that $\det((B\cdot \adj(A))^{-1}) = -\frac{1}{5^5}$ Find $A's$ eigenvalues. So: $$\begin{align}\det((B\cdot \adj(A))^{-1}) = -\frac{1}{5^5} &\Rightarrow \det((B\cdot \adj(A))) = -5^5 \\&\Rightarrow \det(B)\cdot \det(A)^{n-1}= -5^5 \\&\Rightarrow -1\cdot 5\cdot \det(A)\cdot \det(A)^{n-1} = -5^5 \\&\Rightarrow \det(A)^n = 5^4\end{align}$$ This is where I got to the question represented in the title. I don't know anything else about $A$, I got few examples for $A$ which satisfies the conditions: $\text{diag}(1,-1,-1,5), \text{diag}(1,1,-1,-5)...$ I don't see how do the conditions determine $A's$ eigenvalues.
Since the eigenvalues are all integers, their product $\det(A)$ is an integer. What are the divisors of $5^4$? EDIT: Use the fact that $n \ge 4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Aristarchus' Inequality - algebraic proof While looking for trigonometric inequalities, I stumbled upon Aristarchus' inequality, which states that for $0<\alpha<\beta<\pi/2$ $$\frac{\sin(\beta)}{\sin(\alpha)}<\frac{\beta}{\alpha}<\frac{\tan(\beta)}{\tan(\alpha)}.$$ In this post (Proof of Aristarchus' Inequality) user141614 shows a completely algebraic proof of the first inequality using only $\sin(\alpha)<\alpha<\tan(\alpha)$. I tried for a long time to reproduce a similar proof for $\tan$ by trying to prove the equivalent inequality $$\frac{\tan(\beta)-\tan(\alpha)}{\beta-\alpha}>\frac{\tan(\alpha)}{\alpha},$$ by using user141614's same idea, combined with trigonometric identities for the sum and product, but without success. Does someone have hint on how to approach the problem? I really want an algebraic proof (which can potentially rely on easy-to-prove inequalities as the one above), no calculus. Thank you in advance
I managed to find an easy proof: For $0<\alpha<\beta<\pi/2$ we have $1>\cos(\alpha)>\cos(\beta)>0$ and thus (the orange chain was known) $$0<\sin(\alpha)\cos(\alpha)<\color{orange} {\sin(\alpha)<\alpha<\tan(\alpha)}=\frac{\sin(\alpha)}{\cos(\alpha)}<\color{red}{\frac{\sin(\alpha)}{\cos(\beta)}}.$$ Consequently, using that $0<\beta-\alpha<\beta$ in the previous equation (replacing $\alpha$ by $\beta-\alpha$) we obtain: $$\color{orange}{\beta-\alpha}<\color{red}{\frac{\sin(\beta-\alpha)}{\cos(\beta)}}=\tan(\beta)\cos(\alpha)-\sin(\alpha).$$ We conclude that \begin{align*} \frac{\beta}{\alpha}&=\frac{\beta-\alpha}{\alpha}+1\\ & <\frac{\tan(\beta)\cos(\alpha)-\sin(\alpha)}{\sin(\alpha)}+1\\ & = \frac{\tan(\beta)}{\tan(\alpha)}. \end{align*}
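A randomized numerical check of the full inequality chain (a sketch; the sampling range, minimum gap, and count are arbitrary choices):

```python
import math
import random

rng = random.Random(0)
ok = True
for _ in range(1_000):
    # sample 0 < a < b < pi/2 with a small minimum gap to avoid ties
    a, b = sorted((rng.uniform(0.01, 1.5), rng.uniform(0.01, 1.5)))
    if b - a < 1e-3:
        continue
    ok = ok and (math.sin(b) / math.sin(a) < b / a < math.tan(b) / math.tan(a))
```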
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that the directional derivative is the dot product of the gradient and the vector. I looked for few proofs online but was looking for alternate, more direct proofs. The one on Khan Academy used the Linear Approximation and one used the chain rule of multivariable functions. Are there any alternate methods to prove it? I'm looking for one which doesn't use much except the basic definition of the partial Derivative.
The property holds for differentiable functions, indeed by definition of differentiability we have that $$\lim_{\vec h\to \vec 0} \frac{ f(\vec x_0+\vec h)-f(\vec x_0)-\nabla f(\vec x_0)\cdot \vec h}{\| \vec h\|}=0 \iff f(\vec x_0+\vec h)-f(\vec x_0)=\nabla f(\vec x_0)\cdot \vec h+o(\| \vec h\|)$$ and assuming $\vec h = t\,\vec v$ we have $$\frac{\partial f}{\partial \vec v}(\vec x_0)=\lim_{t\to 0}\frac{f(\vec x_0+t\vec v)-f(\vec x_0)}{t}=\lim_{t\to 0}\frac{\nabla f(\vec x_0)\cdot t\vec v+o(\|t\vec v\|)}{t}=\nabla f(\vec x_0)\cdot \vec v$$
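A concrete numerical illustration of $\frac{\partial f}{\partial \vec v}(\vec x_0)=\nabla f(\vec x_0)\cdot\vec v$ (a sketch; the function, point, and unit direction are arbitrary choices):

```python
import math

f = lambda x, y: math.sin(x) * y + x * y ** 2
grad_f = lambda x, y: (math.cos(x) * y + y ** 2, math.sin(x) + 2 * x * y)

x0, y0 = 0.7, -1.2
v = (0.6, 0.8)                 # a unit direction vector
h = 1e-6

# central-difference estimate of the directional derivative
directional = (f(x0 + h * v[0], y0 + h * v[1])
               - f(x0 - h * v[0], y0 - h * v[1])) / (2 * h)

gx, gy = grad_f(x0, y0)
dot = gx * v[0] + gy * v[1]     # gradient dotted with the direction
```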
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a set of continuous functions with a certain property 2 I need help finding the set of continuous functions $f : \Bbb R \to \Bbb R$ such that for all $x \in \Bbb R$, the following integral converges: $$\int_0^1 \frac {f(x+t) - f(x)} {t^2} \ \mathrm dt$$ I think it might be the set of constant functions but i havent been able to prove it :( I was thinking that you can use the stone weiestrass theorem considering the set of continuous functions on a closed interval(non trivial) ,and a subset which contains the set of continuous functions whose integral above diverges in some point in that interval along with with the set of constant functions. So in order to solve the problem i need only to prove that if two functions do not meet the condition of the problem then their product does not as well . I hope you can provide some insight and thank you .
Although Rigel's answer solved the matter brilliantly, I would like to present an alternative solution to this: Consider the sets $A_{\varepsilon,x} =\{ u > x, \, |f(u)-f(x)| < \varepsilon |u-x|\}.$ Notice that these sets are clearly open, by the continuity of $f$. Also, these sets are nonempty for every $x \in \mathbb{R}$, and $x \in \overline{A}\setminus A$: indeed, as they are nested in $\varepsilon$, if one of them is empty/does not accumulate around $x$, every other one with $\eta < \varepsilon$ also is. Also, it means that we can assume without loss of generality that all points $y>x$ sufficiently close to $x$ satisfy $$ f(y) \ge f(x) + \varepsilon(y-x).$$ Plugging this back into the property satisfied by $f$ gives us then a contradiction. Claim: The set $A_{\varepsilon,x}$ is dense in $(x,+\infty).$ Proof: Suppose its intersection with an interval $(a,b)$ is empty, and consider $a' = \sup_{u<b} A_{\varepsilon,x} \le a$. It holds then for this $a'$ that $$ |f(a')-f(x)|\le \varepsilon (a'-x).$$ As the set $A_{\delta,a'}, \, \delta < \varepsilon,$ is nonempty and accumulates around $a'$, there is $b'\in (a',b)$ such that $$|f(b')-f(a')| < \delta(b'-a').$$ This implies that $$|f(b')-f(x)| \le |f(b')-f(a')| + |f(a')-f(x)| < \varepsilon (a'-x) + \delta(b'-a') < \varepsilon (b'-x),$$ a contradiction to the definition of $a'. \, \square$ Now we finish: as the set $A_{\varepsilon,x}$ is open and dense in $(x,+\infty),$ it means that every point $y \in (x,+\infty)$ satisfies $$ |f(y)-f(x)| \le \varepsilon(y-x).$$ This implies, in particular, that $f$ is differentiable at $x$ and that $f'(x) = 0.$ As this was valid for all $x \in \mathbb{R},$ we conclude that $f$ is differentiable and $f' =0,$ i.e., $f$ is constant, as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2898983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Limit as $(x,y,z)\to (0,0,0)$ of $f(x,y,z) = \dfrac{xy+yz+xz}{\sqrt{x^2+y^2+z^2}}$ To find this limit, I converted to spherical coordinates and rewrote: $$\lim_{r\to 0} \dfrac{r^2(\sin^2\theta \cos\phi \sin \phi + \sin\theta \cos \theta \sin \phi + \sin\theta \cos \theta \cos \phi)}{r} = 0$$ Is this method alright? Our teacher did it using an epsilon-delta proof, so how could we use something similar to spherical coordinates if, say, we had a four-variable limit of the kind: $$\lim_{(w,x,y,z) \to (0,0,0,0)} \frac{xy+yz+xz+wx}{ \sqrt{x^2+y^2+z^2+w^2}}$$
Using the full spherical coordinates is overkill here. Let $r=\sqrt{x^2+y^2+z^2}$. Then $|x|\le r$, $|y|\le r$, $|z|\le r$. So $$|xy+xz+yz|\le|xy|+|xz|+|yz|\le 3r^2$$ and so $$\left|\frac{xy+xz+yz}{\sqrt{x^2+y^2+z^2}}\right|\le 3r.$$ As $\lim_{(x,y,z)\to(0,0,0)}r= 0$ then $$\lim_{(x,y,z)\to(0,0,0)}\left|\frac{xy+xz+yz}{\sqrt{x^2+y^2+z^2}}\right|=0$$ also. This method works for your four-variable problem too, avoiding the minutiae of four-dimensional spherical coordinates.
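A small script (my own addition, not part of the answer) can spot-check the squeeze bound $|f| \le 3r$ at random points near the origin:

```python
import math
import random

def f(x, y, z):
    return (x * y + y * z + x * z) / math.sqrt(x * x + y * y + z * z)

random.seed(0)
max_val = 0.0
for _ in range(10_000):
    # random points within 1e-3 of the origin
    x, y, z = (random.uniform(-1e-3, 1e-3) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        continue
    assert abs(f(x, y, z)) <= 3 * r + 1e-15  # the bound from the answer
    max_val = max(max_val, abs(f(x, y, z)))

print(max_val)  # of order 1e-3, consistent with the limit being 0
```

The same check works verbatim in four variables, which is the point of avoiding spherical coordinates.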
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to compute the Levy path integral with zero potential? In quantum mechanics, if we have a quantum particle moving in the potential $V$ then the quantum-mechanical amplitude $K(x_b,t_b| x_a,t_a)$ can be written as $$K(x_b,t_b|x_a,t_a)=\int_{x_{t_a}=x_a,x_{t_b}=x_b}Dx(t)\exp\left\{-\frac{i}{h}\int_{t_a}^{t_b}dtV(x(t))\right\}$$ where $h$ is the Planck constant. It is known that with the Feynman functional measure (generated by the process of Brownian motion) and with zero potential ($V(x)=0$), the amplitude can be computed exactly, but what happens in the non-Gaussian case? The paper of Prof. Nikolai Laskin says it can be computed with the measure generated by the $\alpha$-stable Levy motion ($1<\alpha<2$), but in this case the probability density function is very different from the Brownian case, so how does one compute the amplitude?
It turns out that for the Levy path integral, the calculation of the amplitude of a quantum particle uses the Fourier transform of the probability density function, since this representation is an integral of an exponential function, which is easy to compute.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that a sum of degrees in a path between two vertices is smaller than $3n$ Let $G$ be a simple graph with $n$ vertices. Let $P$ be the shortest path between any two vertices. Prove that: $$\sum_{v\in P}\deg(v)\leq 3n$$ Suppose the sum of degrees is bigger than $3n$. Then there is a vertex on the path with degree bigger than $\frac{3n}{p}$, where $p$ is the number of vertices of the path. And we know that a vertex on a shortest path can't have more than $2$ neighbors on the path, so its degree is smaller than or equal to $n-p+2$. Unfortunately those two don't produce a contradiction. I think at least two non-adjacent vertices (with $\operatorname{dist}(x,y)>2$) on the path should have a common neighbor outside the path. This would contradict the path being the shortest. But I don't know how to show that.
Hint. Let $v_0,v_1,v_2,\dots,v_m$ be a path of minimum length from $v_0$ to $v_m.$ Can you show that $\deg v_0+\deg v_3+\deg v_6+\cdots+\deg v_{3\lfloor m/3\rfloor}\le n?$
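The claimed bound can be brute-force verified for small graphs. The sketch below (my own check, not part of the hint) examines one BFS shortest path per vertex pair over all simple graphs on $5$ vertices:

```python
from itertools import combinations
from collections import deque

def shortest_path(adj, s, t):
    # BFS from s; reconstruct one shortest path to t, or None if disconnected
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    return None

n = 5
edges = list(combinations(range(n), 2))
max_sum = 0
for mask in range(1 << len(edges)):  # every simple graph on n labeled vertices
    adj = {v: set() for v in range(n)}
    for i, (a, b) in enumerate(edges):
        if mask >> i & 1:
            adj[a].add(b)
            adj[b].add(a)
    for s, t in combinations(range(n), 2):
        path = shortest_path(adj, s, t)
        if path is not None:
            max_sum = max(max_sum, sum(len(adj[v]) for v in path))

print(max_sum, "<=", 3 * n)
```

This only samples one shortest path per pair, so it is a sanity check rather than a proof, but no graph on $5$ vertices comes close to violating the bound.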
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Finding sum of a geometric series I am asked to find the summation of $1/3^n$ from $n=5$ to infinity. I have done the calculation $1/(1-r)$ for $r=1/3$ and received $1.5$. As this summation starts from $n=5$, I subtracted $3^0, 3^{-1}, 3^{-2}, 3^{-3}$ and $3^{-4}$ from $1.5$ and got $6.17\times 10^{-3}$. However, apparently this answer is wrong, and so is the answer $0$. I appreciate any help, thank you!
Your method is fine indeed $$\sum_{k=5}^\infty \frac1{3^k}=\sum_{k=0}^\infty \frac1{3^k}-\sum_{k=0}^4 \frac1{3^k}=\frac32 - \sum_{k=0}^4 \frac1{3^k}=\frac32-1-\frac13-\frac19-\frac1{27}-\frac1{81}=\frac1{162}$$ As an alternative, following the clever method suggested by lulu in the comment, we have $$\sum_{k=5}^\infty \frac1{3^k}=\sum_{j=0}^\infty \frac1{3^{j+5}}=\frac1{3^5}\sum_{j=0}^\infty \frac1{3^{j}}=\frac1{3^5}\cdot\frac32=\frac1{2\cdot 3^4}=\frac1{162}$$
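Both computations can be confirmed with exact rational arithmetic; this snippet (an illustration, not part of the answer) reproduces $1/162$:

```python
from fractions import Fraction

# Exact value: the full sum 3/2 minus the first five terms (k = 0..4)
tail = Fraction(3, 2) - sum(Fraction(1, 3**k) for k in range(5))
print(tail)  # 1/162

# Floating-point sanity check by summing terms of the tail directly
approx = sum(3.0**-k for k in range(5, 60))
print(approx)  # ~0.0061728...
```

Note that $1/162 \approx 6.17\times 10^{-3}$, which matches the asker's subtraction approach.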
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Confusion with the proof that the Cantor set is closed I have encountered the definition of the Cantor set and its properties. The Cantor set is constructed by removing the open middle third of each interval, so at each step we get a union of closed sets: $F_0=[0,1]$ $F_1=[0,1/3]\cup [2/3,1]$ $F_2=[0,1/9]\cup [2/9,1/3]\cup [2/3,7/9]\cup [8/9,1]$ and so on, so that $F_n$ is the union of $2^n$ closed intervals. I know that a finite union of closed intervals is closed, but this is not true for arbitrary unions, as I have a counterexample: $\bigcup\{\{1/n\}\mid n \in \mathbb N\}$ is not closed, since $0$ is not in that union. So what is the argument here to say that $F_n$ is closed for any $n$? We use this fact to prove that the Cantor set is closed, as $F=\bigcap_{n=0}^{\infty} F_n$, i.e. an arbitrary intersection of closed sets is closed. Where am I misunderstanding? Any help will be appreciated.
For any $n$, $F_n$ is the union of finitely many closed intervals, so it is closed. As you mentioned, the intersection of an arbitrary family of closed sets is closed, so $F=\bigcap_{n=0}^{\infty} F_n$ is closed. Note that for each $n$ we have finitely many closed sets, no matter how large $n$ is, so the union is closed. Your example of the infinite union of closed sets $\{1/n\}$ not being closed does not contradict the Cantor set being closed, because there you are dealing with an infinite union of closed sets, not an infinite intersection of closed sets.
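The point that each $F_n$ is a finite union of closed intervals can be made concrete with a short script (my own illustration) that builds the stages and counts the pieces:

```python
def remove_middle_thirds(intervals):
    # replace each closed interval [a, b] by its two outer closed thirds
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

F = [(0.0, 1.0)]
for n in range(1, 5):
    F = remove_middle_thirds(F)
    print(n, len(F))  # 2, 4, 8, 16 -- always finitely many closed intervals
```

Every finite stage $F_n$ has exactly $2^n$ closed pieces; it is only the intersection over all $n$ that produces the Cantor set, and intersections of closed sets are always closed.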
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Lebesgue Dominated Convergence Application I want to compute the integral $$\lim_{n\to \infty}\int_0^\infty \frac{\sin\left(\frac{x}{n}\right)}{(1+x/n)^n}\,\mathrm{d}x$$ Since $$\left| \frac{\sin\left(\frac{x}{n}\right)}{\left(1+\frac{x}{n}\right)^n}\right | \le \frac{1}{\left|\left(1+\frac{x}{n}\right)^n\right|}\le \frac{1}{1+x}$$ for $x\in [0,\infty)$, where I used $(1+x)^n \ge 1+xn$ to obtain the last inequality, I have therefore found a Lebesgue integrable upper bound for the sequence of integrands. Since the integrand converges to zero, I would obtain the integral to be zero. Is my reasoning correct? EDIT: Corrected the inequality.
The reasoning is not correct because $$ \int_0^{+\infty} \frac{dx}{1+x} = + \infty $$ and thus $\frac{1}{1+x}$ is not integrable. Hint for a correct reasoning: for $n \geq 2$ we have $(1+y)^n \geq 1 + ny + \frac{n(n-1)}{2} y^2$ for all $y \geq 0$.
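Following the hint, with $y = x/n$ and $n \ge 2$ one gets $(1+x/n)^n \ge 1 + x + \frac{n-1}{2n}x^2 \ge 1 + \frac{x^2}{4}$, so all the integrands are dominated by the integrable function $1/(1+x^2/4)$. A quick numerical spot-check of this domination (my own addition):

```python
import math
import random

# Spot-check: for n >= 2 and x >= 0 the integrand is bounded by the
# integrable function 1/(1 + x^2/4), independently of n.
random.seed(1)
all_ok = True
for _ in range(10_000):
    n = random.randint(2, 50)
    x = random.uniform(0.0, 1000.0)
    integrand = abs(math.sin(x / n)) / (1 + x / n)**n
    assert integrand <= 1.0 / (1.0 + x * x / 4.0) + 1e-12

print("domination holds at all sampled points")
```

With that dominating function in hand, dominated convergence applies and the limit of the integrals is indeed $0$.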
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$C_{[a,b]} \rightarrow \mathbb{R}$, $x(t) \mapsto f(x) = \int_a^b x(t)dt$ continuous? Problem: Prove that $f:C_{[a,b]} \rightarrow \mathbb{R}$, $x(t) \mapsto f(x) = \displaystyle\int_a^b x(t)dt$ is continuous. My attempt: Let $x, x' \in C_{[a,b]}$; they are bounded, and we can choose a $\delta >0$ such that $d(x,x') < \delta$. To prove that $f$ is continuous we have to prove $\forall \epsilon > 0, \exists \delta > 0 : d(x,x') < \delta \Rightarrow d(f(x),f(x')) < \epsilon$ My question: given $f(x), f(x')$, how can we find a suitable $\epsilon > 0$? Thank you all!
You have to define $d(x,x')$ in order to be able to find your $\epsilon$ accordingly. For example if you define $$d(x,x')= \max _{t\in [a,b]} \{ |x(t)-x'(t)|\}$$ then $$|f(x)-f(x')|\le (b-a)d(x,x')$$ so you can find your $\delta$ if an $\epsilon$ is given.
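The estimate $|f(x)-f(x')| \le (b-a)\,d(x,x')$ can be sanity-checked numerically; the two sample functions below are my own arbitrary choices, with the integrals and the sup-metric approximated on a fine grid:

```python
import math

# Two arbitrary sample functions on [a, b] (my own choices)
a, b = 0.0, 2.0
x1 = math.sin
def x2(t):
    return math.sin(t) + 0.05 * math.cos(3 * t)

N = 20_000
h = (b - a) / N
# midpoint-rule approximations of f(x1) and f(x2)
int1 = sum(x1(a + (i + 0.5) * h) for i in range(N)) * h
int2 = sum(x2(a + (i + 0.5) * h) for i in range(N)) * h
# sup-metric distance, approximated on the same grid
d = max(abs(x1(a + i * h) - x2(a + i * h)) for i in range(N + 1))

print(abs(int1 - int2), "<=", (b - a) * d)
```

Here $d = 0.05$ (attained at $t = 0$), and the integral difference is far below $(b-a)\,d = 0.1$, as the estimate predicts.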
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Gauss Jordan elimination reduces to row-echelon form always? I am reading this text: and I'm wondering if Gauss-Jordan elimination always leads to an identity matrix on the left? If so, that helps me understand this passage: I'm trying to figure out why [A 0] can be rewritten as [I 0]. Why is this?
Gauss-Jordan elimination reduces a matrix to the identity if and only if an inverse exists. It doesn't work for the zero matrix, for example. The text says that [A 0] can be rewritten as [I 0] using elementary row operations, so there is a sequence of row operations that achieves this; this is possible precisely when $A$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Combinatorics distribution problem indistinguishable items in distinguishable boxes In how many ways can you put $10$ identical gold coins into four colored boxes so that at least $1$ goes into the blue box, at least $1$ into yellow, at most $2$ into red and at least $3$ into green? The way I solved this was by writing down all the restrictions, $$B \geq 1, \space G \geq 3, \space Y \geq 1, \space R \leq 2$$ Where each letter corresponds to the first letter of each colour box. Dividing the problem into 3 cases where there's no coin in red box, 1 coin and 2 coins in red box seems to be the most logical, giving the following results using bars and stars: C1: R box has zero coins $\binom{3 + 5-1}{5} = \binom{7}{5}$ C2: R box has one coins $\binom{3 + 4 - 1}{4} = \binom{6}{4}$ C3: R box has two coins $\binom{3 + 3 - 1}{3} = \binom{5}{3}$ Giving us a final result of $$\binom{7}{5}+\binom{6}{4}+\binom{5}{3}$$ I wanted to know if this is the correct way of solving a problem like this? What if we had bigger numbers including a box having at most say a 1000 coins, would we need to make a 10000 cases? what if there's multiple at most restrictions? like 2 boxes have to have at most 3 gold coins?
There is an easier way to solve this problem which generalizes better to when there are several boxes which may have at most $N$ coins, where $N$ is large. I will illustrate with your example. First, count the number of ways to put all the coins into the boxes without the $R\le 2$ restriction. This is $\binom{5+4-1}5$, because after the required coins are placed you are putting the remaining $5$ coins into $4$ boxes. Next, subtract out the "bad" distributions where the $R$ box has more than $2$ coins. This is equivalent to counting distributions satisfying $\{B\ge 1,G\ge 3, Y\ge 1, R\ge 3\}$, the number of which is $\binom{2+4-1}2$. The final answer is $\binom{8}5-\binom{5}2$. What if you had two boxes with an upper limit? Say you have $15$ balls, and you still have all four boxes with their restrictions from before, but now there is also a purple box which can have at most $5$ balls. * *Start by ignoring both upper limit restrictions, counting $\{B\ge 1,G\ge 3,Y\ge 1\}$. The number is $\binom{10+5-1}{10}$. *Subtract out the distributions where the $R$ box has more than $2$ balls. This is counting $\{B\ge 1,G\ge 3,Y\ge 1, R\ge 3\}$ The count is $\binom{7+5-1}{7}$ *Subtract out the distributions where the $P$ box has more than $5$ balls. This is counting $\{B\ge 1,G\ge 3,Y\ge 1, P\ge 6\}$ The count is $\binom{4+5-1}{4}$ *Now, the distributions where both the $R$ and $P$ boxes exceed their limits have been subtracted out twice, so these cases must be added back in. We are adding back in placements satisfying $\{B\ge 1,G\ge 3,Y\ge 1, R\ge 3,P\ge 6\}$, the number of which is $\binom{1+5-1}{1}$. The final answer is $$ \binom{14}{10}-\binom{11}{7}-\binom{8}{4}+\binom{5}1 $$ In general, if there are many boxes with an upper limit, you have to use the Principle of Inclusion-Exclusion to subtract out the bad distributions.
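Both the casework answer from the question and the complementary count here can be confirmed by brute-force enumeration (my own check, not part of the answer):

```python
from math import comb
from itertools import product

# Brute force: b + g + y + r = 10 with b >= 1, g >= 3, y >= 1, r <= 2
count = sum(
    1
    for b, g, y, r in product(range(11), repeat=4)
    if b + g + y + r == 10 and b >= 1 and g >= 3 and y >= 1 and r <= 2
)

casework = comb(7, 5) + comb(6, 4) + comb(5, 3)  # the question's three cases
complement = comb(8, 5) - comb(5, 2)             # this answer's approach
print(count, casework, complement)  # 46 46 46
```

All three agree, so the casework and the complementary-counting method give the same total.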
{ "language": "en", "url": "https://math.stackexchange.com/questions/2899966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How can I make the following change to this infinite series? $$ e^z - 1 = \sum_{n=1}^\infty \frac {z^n}{n!} $$ Given the above function and its corresponding series expansion, is there anything I could do to the left side of the equation so that the infinite series looks like this instead? $$ \sum_{n=1}^\infty \left(\frac {z^n}{n!}\right)^{a} $$ That is, each term raised to the power $a$, where $a$ is any constant. $$ (e^z-1)^a= \sum_{n=1}^\infty \left(\frac {z^n}{n!}\right)^{a} $$ Would it be just like this? Thank you very much for your time and help.
Raising a series to a power is not the same as raising each of its terms to that power, so the proposed equality does not hold. I don't think there is a general closed-form expression for this series except when $a$ is $0$ or $1$.
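A quick numerical check (my own, not part of the answer) shows how far apart the two sides are, e.g. for $a = 2$, $z = 1$:

```python
import math

# Compare (e^z - 1)^a with the sum of (z^n/n!)^a for a = 2, z = 1
z, a = 1.0, 2
lhs = (math.exp(z) - 1)**a
rhs = sum((z**n / math.factorial(n))**a for n in range(1, 50))
print(lhs, rhs)  # ~2.9525 vs ~1.2796 -- clearly not equal
```

Squaring the series would also produce all the cross terms $2\cdot\frac{z^m}{m!}\cdot\frac{z^n}{n!}$ with $m\neq n$, which is exactly what the term-by-term expression drops.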
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the following Area of Crescent all right? In the figure below. There are two overlapping circles and the area of Crescent in Red that I have found is $A_{C} = \frac{\pi rw}{2}$, where $w$ is the shift from center $'X'$ in blue to $'X'$ in red. Details: $$A_C = \frac{A_{elipse} - A_{circle}}{2} = \frac{[\pi r^2 + \pi r w] - \pi r^2}{2}$$
WLOG, assume both centers lie on the $x$-axis. You can use this diagram afterwards: Since the area of the circle is $A=\pi r^2$, then the area of the crescent should be: $$A_{\text{crescent}}=\pi r^2-(2A_{\text{sector }EAF}-2A_{\triangle AEF})$$ This is because $A_{\text{sector }EAF}=A_{\text{sector }ECF}$, and so does their corresponding triangle. Since the area of a sector is $A=\frac12r^2 \theta$, with $\theta$ in radians, and the area of the triangle is $A=\frac12ab\sin C$. Then the area of the crescent can be re-written as: $$A=\pi r^2-\alpha r^2+r^2\sin\alpha\\ \implies A=r^2(\pi-\alpha+\sin \alpha)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Qualitative inspection of solutions to $x^{4}-2x+1=0$ Consider the following polynomial $$ x^{4}-2x+1=0 $$ Is it possible to check if there is or there is not a solution in $x\in\left]0,1\right[$ without explicitly evaluating the expression? What other tests are there to qualitatively classify the solutions for this polynomial?
Since $x=1$ works, $$ x^4 - 2x + 1 = (x-1)p(x)\quad [p \in \mathbb R[x]_3]. $$ Now $$ x^4 - 2x +1 = x^2(x^2 -1) + (x-1)^2 = (x-1)(x-1+x^2(x+1)) = (x-1)(x^3 + x^2 + x - 1). $$ Then $p(x) = x^3 + x^2 + x-1$. Since $$ p(1)= 2 >0, p(0) = -1 < 0, $$ by intermediate value theorem, $p$ has a root in $(0,1)$, hence so does $x^4 - 2x + 1$.
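The factorization and the intermediate value argument can both be checked numerically; the bisection below (my own sketch) locates the root of $p$ in $(0,1)$:

```python
def p(x):
    return x**3 + x**2 + x - 1

def q(x):
    return x**4 - 2 * x + 1

# Check the factorization q(x) = (x - 1) * p(x) at sample points
for x in (-2.0, -0.5, 0.0, 0.3, 1.0, 2.5):
    assert abs(q(x) - (x - 1) * p(x)) < 1e-9

# p(0) < 0 < p(1), so by the IVT p has a root in (0, 1); bisect to find it
lo, hi = 0.0, 1.0
assert p(lo) < 0 < p(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2

print("root of p in (0, 1):", root)  # ~0.5437
```

This root of $p$ is also a root of $x^4-2x+1$ lying strictly inside $(0,1)$, as the answer concludes.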
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
When does a bijection of topologies induce a homeomorphism of spaces? If two topological spaces $(X, \tau)$ and $(Y, \tau')$ are homeomorphic, we have a bijective correspondence between $\tau$ and $\tau'$ via $U \in \tau \mapsto f(U) \in \tau'$ where $f: X \to Y$ is a homeomorphism. Are the reasonable conditions to be imposed to a bijection $\Gamma : \tau \to \tau'$ so that it implies that $X$ and $Y$ are homeomorphic? I'm not necessarily asking for a homeomorphism $g : X \to Y$ to verify $\Gamma(U) = g(U)$, but for conditions on $\Gamma$ that imply the existence of some homeomorphism.
So, let me first impress on you how outrageously weak the mere existence of such a bijection $\Gamma$ is. It means solely that $X$ and $Y$ have the same number of open sets. When dealing with infinite sets, a cardinality statement like this says extremely little. For instance, if $X$ is any infinite separable metric space, then there are exactly $2^{\aleph_0}$ open subsets of $X$. So for any two infinite separable metric spaces $X$ and $Y$, there exists such a bijection $\Gamma$. This includes, for instance, every infinite subspace of $\mathbb{R}^n$ for any $n$. One additional much stronger condition you can impose is that $\Gamma$ is an order-isomorphism (with respect to the inclusion order). That is, $\Gamma$ is a bijection and for any $U,V\in\tau$, $U\subseteq V$ iff $\Gamma(U)\subseteq \Gamma(V)$. For nice spaces, this implies that there is a homeomorphism $g:X\to Y$ such that $\Gamma(U)=g(U)$ for all $U\in\tau$. In particular, if $X$ and $Y$ are both $T_1$, you can recover $g$ by considering open subsets of the form $X\setminus\{x\}$. These subsets can be characterized in terms of the order relation (for instance, they are the elements of $\tau$ that have exactly one other element of $\tau$ which contains them). For each $x\in X$, $\Gamma(X\setminus\{x\})$ must therefore be of the form $Y\setminus\{y\}$ for some $y\in Y$. Defining $g(x)$ to be this $y$, it is then not hard to verify that $\Gamma(U)=g(U)$ for all $U$ and $g$ is a homeomorphism. (Alternatively, instead of assuming $X$ and $Y$ are both $T_1$, it would suffice to assume they are both sober.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Name convention for functor and natural transformation composition If there are functors $H: D \to C; F,G: C \to D$ and $K: B \to C$ and a natural transformation $\alpha: F \xrightarrow{.} G$ then we can construct 2 new natural transformations: the "left composition" $$H \alpha : H F \xrightarrow{.} H G $$ and the "right composition" $$ \alpha K : F K \xrightarrow{.} G K $$ I was not able to find the naming convention for these, so I called them left and right compositions, but I am not sure those are the correct names. Could anybody help me find the correct ones?
It's called whiskering; you can show that it is the same as the horizontal composition of $\alpha$ with $1_H:H\Rightarrow H$ (for your "left composition") or with $1_K:K\Rightarrow K$ (for your "right composition").
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$I$ is infinite, $A_k$ is countably infinite, and $A_i$ is countable for all $i \neq k$. Is $\prod\limits_{i\in I}A_i$ countable? The Cartesian product of a family $(A_i\mid i\in I)$ is defined as $$\prod\limits_{i\in I}A_i=\{f:I\to\bigcup A_i\mid f(i)\in A_i \text{ for all } i \in I\}$$ Let $(A_i \mid i \in I)$ be a family of non-empty indexed sets where $I$ is infinite, $A_k$ is countably infinite, and $A_i$ is countable for all $i \neq k$. Is $\prod\limits_{i\in I}A_i$ countable? I found that it's not too hard to conclude when $I$ is finite or when $A_k$ is uncountable. Please give me some hints in this case!
The Cartesian product will be countable if any of the sets $A_i$ is empty, since then the Cartesian product will also be empty. Let us assume that for all $i$, $A_i\neq \varnothing$. If there exists a finite subset $J$ of $I$ such that $A_i$ is a singleton for all $i\in I\setminus J$, then the Cartesian product will be countable. To see this, we can rearrange the terms to assume that $A_1, \ldots, A_n$ have more than one element, but $A_{n+1}, A_{n+2}, \ldots$ each have exactly one element. Then there is a bijection between $\prod_{i=1}^\infty A_i$ and $\prod_{i=1}^n A_i$. Now let us consider the case where infinitely many of the $A_i$ have at least two elements (still assuming each $A_i$ is non-empty). Then $\prod_{i=1}^\infty A_i$ will contain a subset which has the same cardinality as $\{0,1\}^\mathbb{N}$. We should be able to answer the question from here. I will leave the details to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Generally accepted notation for referencing function without defining it. Let $F\subseteq (\mathbb R \to\mathbb R)$ be some space of functions, and let $G:F\to \mathbb R$ be a functional. I have a statement of the following form: $$\begin{align}\text{Let } &f^*(x):=x^2. \quad\quad\quad \text{Then }\\ &f^*\in \arg\max_{f\in F} G(f) \end{align}$$ Rather than first defining a function and then referencing it, I'd like to compress this into one equation for brevity's sake. Something like: $$(x\mapsto x^2)\in \arg\max_{f\in F} G(f)$$ Is there a generally accepted notation like this? I'd prefer not to invent something new and unknown.
I don't understand either of your statements, so both of them are too concise to be readable. Do you mean that $f^*$, which is $\underset{f \in F}{\operatorname{argmax}} G(f)$, turns out to be the function defined by $f^*(x) = x^2$? If so, for the sake of comprehensibility rather than brevity, you should write this out in words in a complete sentence: for example, Let $f^* = \underset{f \in F}{\operatorname{argmax}} G(f)$. Then it turns out for mysterious reasons that $f^*(x) = x^2$. (Possibly with an explanation why this is the function that maximizes $G(f)$.) Or possibly (after the recent edits, this seems closer to the sort of emphasis you want): Let $f^* \in F$ be given by $f^*(x) = x^2$. Then $f^* \in \underset{f \in F}{\operatorname{argmax}} G(f)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does a proper class have arbitrary large subsets? Assume that we are working in ZFC, that we have a well-formed formula $P(x)$, that $x$ is the only free variable of $P(x)$, that there is no set $S$ such that $$ \forall x\ (x\in S\iff P(x)), $$ and that $\alpha$ is a cardinal. Is there necessarily a set $T$ of cardinality $\alpha$ such that $$ \forall x\ (x\in T\implies P(x))\ ? $$
This is true. Assume to the contrary that there is an ordinal $\alpha$ such that no set equinumerous to $\alpha$ is contained in $P$. Then there is a smallest such $\alpha$. However this means that for each $\beta<\alpha$ we can choose an $A_\beta$ such that $A_\beta \subseteq P$ and $|A_\beta|=|\beta|$ (employing Scott's trick to cut the space of candidate $A_\beta$s down to set size before choosing freely). But then $\bigcup_{\beta<\alpha} A_\beta \subseteq P$ and its cardinality is at least $\alpha$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }