Solutions to $z^3 - z^2 - z = 15$ Find, in the form $a+bi$, all the solutions to the equation $$z^3 - z^2- z =15 $$ I have no idea what to do; am I meant to factor out $z$ to get $z(z^2-z-1)=15$, or should I plug $a+bi$ in for $z$? Please help!
Hint: First find the one real root, say $r$ (it's not hard). Then you can factor $z-r$ out of $z^3-z^2-z-15=0$, and find the remaining roots using the quadratic formula.
The digit at the hundreds place of $33^{33}$ I would like to know how to start with this question. And in case you get stuck somewhere, here is the answer: it's $5$. Any help is appreciated, thanks. My approach was to look at the factors to somehow crack the nut, but still in vain. Any help, tip or approach is welcome, as I didn't get a clue beyond that. Also, do suggest a category if number systems isn't appropriate.
Here goes, to see how quickly it can be done. Jack's extraction of a factor $8$ from the modulus is helpful. Let's work with "the last three digits", using $\equiv$ when the thousands are dropped, but allow negative numbers if that simplifies things. $$33^2=1089$$ $$33^4\equiv89^2=(90-1)^2=7921\equiv-79$$ $$33^8\equiv (-79)^2=(80-1)^2=6241\equiv 241$$ It is here that being able to work mod $125$ would simplify things markedly, because you would have $33^{16}\equiv (-9)^2=81$ and $33^{32}\equiv 81^2=6561\equiv 61$ (modulo $125$) but I didn't spot that. So I'd do $$33^{16}\equiv 241^2=(250-9)^2=62500-4500+81\equiv81$$ which comes to the same thing and $$33^{32}\equiv 81^2=6561\equiv 561$$ $$561\times 33 =11 \times 1683 = \dots 513$$ There isn't a lot of thinking time, but done confidently and quickly, there is nothing there to take lots of time - and it helps to know two digit squares.
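For readers who want to double-check the arithmetic, here is a quick sanity check (an added note, not part of the original argument) using Python's built-in modular exponentiation:

```python
# Last three digits of 33**33 via fast modular exponentiation.
last_three = pow(33, 33, 1000)
print(last_three)         # 513
print(last_three // 100)  # 5, the hundreds digit
```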
What vector field property means “is the curl of another vector field?” I know that a vector field $\mathbf{F}$ is called irrotational if $\nabla \times \mathbf{F} = \mathbf{0}$ and conservative if there exists a function $g$ such that $\nabla g = \mathbf{F}$. Under suitable smoothness conditions on the component functions (so that Clairaut's theorem holds), conservative vector fields are irrotational, and under suitable topological conditions on the domain of $\mathbf{F}$, irrotational vector fields are conservative. Moving up one degree, $\mathbf{F}$ is called incompressible if $\nabla \cdot \mathbf{F} = 0$. If there exists a vector field $\mathbf{G}$ such that $\mathbf{F} = \nabla \times \mathbf{G}$, then (again, under suitable smoothness conditions), $\mathbf{F}$ is incompressible. And again, under suitable topological conditions (the second cohomology group of the domain must be trivial), if $\mathbf{F}$ is incompressible, there exists a vector field $\mathbf{G}$ such that $\nabla \times\mathbf{G} = \mathbf{F}$. It seems to me there ought to be a word to describe vector fields as shorthand for “is the curl of something” or “has a vector potential.” But a google search didn't turn anything up, and my colleagues couldn't think of a word either. Maybe I'm revealing the gap in my physics background. Does anybody know of such a word?
The concept of a pseudovector is quite similar, being associated with the curl of a vector. An apparent generalization is to the concept of a pseudovector field, for which I found a reference here. There is also the related notion of vorticity, arising as the curl of a vector field, but this is not generic.
Product of a character with an irreducible character a non-negative integer Why is the inner product of a character with an irreducible character a non-negative integer? I can see from the properties of the inner product that it will be non-negative, but I cannot see why it should be an integer.
I assume we're talking about finite groups and complex scalars (so, the semisimple situation). Note that if $U$ and $V$ are reps of $G$, then $\hom(U,V)$ is also a rep: $g$ acts as $T\mapsto gTg^{-1}$ for all linear maps $T:U\to V$. The invariant subspace $\hom(U,V)^G$ is precisely the space of equivariant maps (aka intertwining operators) $\hom_G(U,V)$. For any representation $V$, the linear map $\sigma=\sum_{g\in G}g$ satisfies $\sigma^2=|G|\sigma$, and we can deduce the linear map $P=\frac{1}{|G|}\sum_{g\in G}g$ is a projection $P=P^2$ whose image is $V^G$ (the image is clearly a subspace of the invariant space $V^G$, and $P$ acts as the identity on $V^G$). Taking traces, we conclude that ${\rm tr}(P)=\frac{1}{|G|}\sum_{g\in G}\chi_V(g)=\dim V^G$. There is an obvious map $U^*\otimes V\to\hom(U,V)$ which is an isomorphism of representations. The character of $U^*\otimes V$ equals $\chi_{U^*\otimes V}(g)=\chi_{U^*}(g)\chi_V(g)=\overline{\chi_U(g)}\chi_V(g)$. Therefore, $$\dim\hom_G(U,V)=\dim(U^*\otimes V)^G=\frac{1}{|G|}\sum_{g\in G}\overline{\chi_U(g)}\chi_V(g) $$ which is $\langle \chi_U,\chi_V\rangle$ (at least with the physicists' inner product).
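To make this concrete, here is a small numerical illustration (an added sketch, using the well-known character table of $S_3$; the variable names are my own). The inner products come out as non-negative integers because they are multiplicities:

```python
# Character table of S_3 on conjugacy classes (identity, transpositions, 3-cycles),
# with class sizes 1, 3, 2 and |G| = 6; all character values here are real.
class_sizes = [1, 3, 2]
trivial  = [1,  1,  1]
sign     = [1, -1,  1]
standard = [2,  0, -1]

def inner(chi, psi):
    # <chi, psi> = (1/|G|) * sum_g conj(chi(g)) * psi(g), grouped by class
    return sum(s * a * b for s, a, b in zip(class_sizes, chi, psi)) / 6

print(inner(standard, standard))  # 1.0, so the standard rep is irreducible
perm = [t + s for t, s in zip(trivial, standard)]  # permutation character (3,1,0)
print(inner(perm, trivial), inner(perm, standard))  # 1.0 1.0, the multiplicities
print(inner(perm, sign))          # 0.0, the sign rep does not occur
```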
What do I have to do to write and publish an article? Let's suppose that I have proved a theorem myself and I want to write an article about it. I have a few questions: 1) How do I have to write it? I mean, what characters should I use, what conventions should I follow, what format should I use, and so on... 2) How do I know that my result is original? I know that there are dozens of new theorems proven every day, so how can I be sure that the article I send is original (maybe whoever publishes it has to check this)? 3) To what magazine/site should I send it? Maybe my result is original but it isn't very interesting, so probably the Annals of Mathematics isn't interested, but are there any magazines/journals which publish mathematics at not too high a level?
I'd say: write a good introduction to your article, and the answers to these questions will come naturally. Since you are supposed to place your result in the already existing literature on the problem, you will: 1) See what notations/conventions/format are used by others to deal with the problem. Use them! People like what they already know (particularly for the notations) :). 2) Once you know the literature on your problem, it should start to become clear how innovative your result is. 3) Where are the references on your problem published? This could at least give you a vague idea of who (which journal) could be interested in your result.
arccos and arcsin integral contradiction: I am shown: $$f(x) = \arcsin x \implies f'(x) = \frac{1}{\sqrt{1-x^2}}$$ $$f(x) = \arccos x \implies f'(x) = -\frac{1}{\sqrt{1-x^2}}$$ These two derivatives can be very readily derived by a bit of implicit differentiation. However... I want to undo my differentiation for both expressions. I set up the integrals and now note that the only difference between the two derivatives is a negative sign. Suppose that for the arccos(x) derivative, I factor out the negative sign before integrating the expression, as $-1$ is a constant and the constant factor rule states that: $$\int k \frac{dy}{dx} dx = k \int \frac{dy}{dx} dx$$ However, due to this, I end up with the false equality $$-\arccos(x)=\arcsin(x)$$ which doesn't hold for any $x$ at all! What have I done wrong? Why can't one simply factor out the negative sign from the arccos derivative expression?
Two functions defined over an interval that have the same derivative differ by a constant. Since the derivatives of $\arcsin x$ and $-\arccos x$ are both equal to $\dfrac{1}{\sqrt{1-x^2}}$ on the interval $(-1,1)$, you can say that there exists $c$ such that $$ \arcsin x=c-\arccos x $$ If you evaluate at $0$, you get $$ 0=\arcsin 0=c-\arccos 0=c-\frac{\pi}{2} $$ Therefore, for all $x\in(-1,1)$, $$ \arcsin x=\frac{\pi}{2}-\arccos x $$ and this holds by continuity also for $x=-1$ and $x=1$. You get a similar “contradiction” if you do $$ \log x=\int\frac{1}{x}\,dx=\int\frac{2}{2x}\,dx=\int\frac{1}{2x}\,d(2x)=\log(2x) $$ without taking into account the “constant of integration”; indeed $$ \log x = \log(2x)-\log 2 $$
Number of digits function Let $\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}$ be the $10$ digits of the base $10$ system. The set of all finite sequences of those digits that don't start with a leading zero, together with the single one-element sequence $(0)$, can be bijectively mapped onto the natural numbers starting with 0, in the usual base $10$ way. Through this mapping, one can associate a function defined on one set with a function defined on another. For example, from the function of addition, $2+8 = 10$, so that means $(2) + (8) = (1, 0)$. The number of digits function is defined on the sequences, and it is simply the length of the sequence, and it can be associated, or "lifted", to the set of natural numbers themselves. So, does this mean the number of digits in $0$ is $1$, or $0$? I ask this question because I got contradictory answers in my previous question.
The number of digits in zero is $1$, since we have associated zero with the one-element sequence $(0)$, and the number of digits function is defined as the length of that sequence.
How to know if an equation can be solved by factorising before trying? So, I have a Core 1 test tomorrow, and there is a lot of solving quadratic equations without a calculator. My weakest point is the time I waste trying to factorise an equation which then turns out not to factorise, or the opposite way around, where I use the quadratic formula when I could have factorised. Is there any way to know whether an equation factorises or not?
For any quadratic equation $ax^2+bx+c=0$, $a,b,c\in \mathbb{R}$, there exist real solutions if and only if the discriminant $$D=b^2-4ac$$ is greater than or equal to zero. For factorising by hand there is a sharper test when $a,b,c$ are integers: the quadratic factors over the rationals exactly when $D$ is a non-negative perfect square.
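A quick computational companion to this (an added sketch, not part of the original answer): for integer coefficients, test whether the discriminant is a non-negative perfect square.

```python
from math import isqrt

def factors_nicely(a, b, c):
    # ax^2 + bx + c with integer coefficients has rational roots
    # iff D = b^2 - 4ac is a non-negative perfect square
    d = b * b - 4 * a * c
    return d >= 0 and isqrt(d) ** 2 == d

print(factors_nicely(1, -5, 6))  # True:  x^2 - 5x + 6 = (x - 2)(x - 3)
print(factors_nicely(1, 0, -2))  # False: roots are irrational
print(factors_nicely(1, 1, 1))   # False: no real roots
```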
What's the name for this mathematical device used by programmers? So a friend is trying to figure out what this is called so we can read more about it. The concept is/was used by database designers, who needed a compact way to store a list of selected options as a single numerical value, where that number would be unique for any given selection of options. This was done by assigning an ID value to each option that corresponded to a power of some base, and adding them all together. Since I'm not sure if anything I just said was correct, what I mean is (using base 2 as an example): Option 1 ID: 1 ($2^0$), Option 2 ID: 2 ($2^1$), Option 3 ID: 4 ($2^2$). So if your selection was both option 1 and 2, the value of that configuration would be 3 (1+2). If your selection was option 1 and 3, it would be 5 (1 + 4). The presumed property here is that the resulting sum of any given configuration would be unique to that configuration, so that given the value of the sum, one would be able to work backwards to determine which options were selected. This would be useful to, say, a database guy, who wants a compact way to store a list of selected options as a single value in a row of data. Can anyone enlighten me on what I would google to learn more about this database storage technique / mathematical principle?
This is normally called a "bit field"; the closely related terms "bitmask" and "bit flags" are also worth searching for.
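For illustration, here is a minimal sketch of the technique in Python (my own example, not from the answer above). The uniqueness the question asks about is exactly the uniqueness of binary representation:

```python
# Each option is assigned a distinct power of two.
OPTION_1, OPTION_2, OPTION_3 = 1, 2, 4  # 2**0, 2**1, 2**2

config = OPTION_1 | OPTION_3  # encode a selection: 1 + 4 = 5

# Decode: bitwise AND tests whether each option is present.
print(bool(config & OPTION_1))  # True
print(bool(config & OPTION_2))  # False
print(bool(config & OPTION_3))  # True
```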
Why is it true that any polynomial $f(x)$ with coefficients in a finite field $F_q$ is separable over that field? Why is it true that any polynomial $f(x)$ with coefficients in a finite field $F_q$ is separable over that field? I'm confused because the polynomial $x^2+1$ in $F_2[x]$ is inseparable ($f(x)$ and $f'(x)$ are not coprime), and yet apparently this property is true. Am I missing something?
The definition of a field $E$ being perfect is that every irreducible polynomial $p\in E[x]$ is separable (Wikipedia link). But the polynomial $x^2+1$ is equal to $(x+1)^2$ in $\mathbf{F}_2[x]$, so it's not irreducible. Similarly, the field $\mathbf{Q}$ being perfect doesn't conflict with $x^2+2x+1=(x+1)^2\in\mathbf{Q}[x]$ having repeated roots.
Find wave equation initial condition such that solution consists of right-going waves only Let $u(x,t)$ solve the wave equation $u_{tt}=c^2u_{xx}$ and let $u(x,0)=A(x)$ for some function $A(x)$. Find the function $B(x)=u_t(x,0)$ such that the solution consists of right going waves only. My attempt: By d'Alembert's formula, the general solution is given by $u(x,t) = \frac{1}{2}[A(x+ct)+A(x-ct)]+\frac{1}{2c}\int_{x-ct}^{x+ct}B(s)ds$ Since the solution should consist of right-going waves only, we want a solution of the form $u(x,t)=A(x-ct)$ which implies the sum of the remaining terms in d'Alembert's formula should equal $0$. \begin{equation} \frac{1}{2}A(x+ct)+ \frac{1}{2c}\int_{x-ct}^{x+ct}B(s)ds=0\\ \frac{1}{2c}\int_{x-ct}^{x+ct}B(s)ds = -\frac{1}{2}A(x+ct)\\ \int_{x-ct}^{x+ct}B(s)ds = -cA(x+ct)\\ \end{equation} And then taking the derivative of both sides, \begin{equation} B(x) = -cA'(x+ct) \end{equation} Is this the correct way to do this?
The general solution to the homogeneous wave equation $u_{tt}-c^2u_{xx}=0$ is $$u(x,t)=f(x-ct)+g(x+ct)$$ Applying the initial condition $u(x,0)=A(x)$ reveals that $$f(x)+g(x)=A(x) \tag 1$$ Applying the initial condition $u_t(x,0)=B(x)$ reveals that $$-cf'(x)+cg'(x)=B(x) \tag 2$$ Now, we differentiate $(1)$ to obtain $$f'(x)+g'(x)=A'(x) \tag 3$$ Solving $(2)$ and $(3)$ simultaneously for $f'$ and $g'$ yields $$f'(x)=\frac{cA'(x)-B(x)}{2c}$$ $$g'(x)=\frac{cA'(x)+B(x)}{2c}$$ If we impose that $g=0$, then we must have $B(x)=-cA'(x)$. Then, $$f(x)=A(x)$$ and $$u(x,t)=A(x-ct)$$ where the initial conditions are $u(x,0)=A(x)$ and $u_t(x,0)=B(x) = -cA'(x)$. Approaching the problem using d'Alembert's formula, we have $$u(x,t)=\frac12 (A(x-ct)+A(x+ct))+\frac{1}{2c}\int_{x-ct}^{0}B(u)du+\frac{1}{2c}\int_{0}^{x+ct}B(u)du$$ Then, for a solution with right-going waves only, we must enforce $$\frac12 A(x+ct)+\frac{1}{2c}\int_{0}^{x+ct}B(u)du=0 \tag 4$$ Differentiating $(4)$ yields $$\frac{c}{2} A'(x+ct)+\frac12 B(x+ct)=0 \Rightarrow B(x+ct)=-cA'(x+ct)$$ Finally, $$u(x,t)=\frac12 A(x-ct)-\frac{1}{2}\int_{x-ct}^{0}A'(u)du=A(x-ct)$$ where we tacitly assume that $A(0)=0$.
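As a quick symbolic check of the result (an added sketch, assuming the sympy library is available):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
A = sp.Function('A')

u = A(x - c*t)  # the claimed right-going solution
pde = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(pde))                 # 0, so the wave equation holds
print(sp.diff(u, t).subs(t, 0).doit())  # -c*Derivative(A(x), x), i.e. B(x) = -c A'(x)
```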
Unique factorization in fields Suppose $A$ is a commutative $R$-algebra that is also a field. Define: * *For $x,y \in A$, say that $x$ divides $y$ iff $xr = y$ for some $r \in R$. *Call $x,y \in A$ associates iff each divides the other. *Say that $I \subseteq A$ is an ideal iff $I$ is an additive subgroup of $A$ and $RI \subseteq I$ *etc. I think this is often the right context in which to view factorization problems in fields, and possibly in other situations where there are "too many units." For example, when $\mathbb{Q}$ is viewed as a $\mathbb{Z}$-algebra, it has a "unique factorization" property whereby every non-zero $q \in \mathbb{Q}$ can be uniquely expressed, up to associates, as a product of prime elements of $\mathbb{Z}$ and reciprocals of prime elements of $\mathbb{Z}.$ Essentially the same idea should work if we're thinking about factorization in abelian groups; given a distinguished submonoid, we get a corresponding divisibility relation, notion of associates, etc. and we can try to prove factorization results in this context. Where can I learn about this kind of thing?
There is a very nice book by Alfred Geroldinger and Franz Halter-Koch which studies factorization from the monoid standpoint. There is a survey article by the same two authors here: http://www.uni-graz.at/~geroldin/54-non-unique-fact-survey.pdf which I assume turned into the book (the book is a bit pricey, but your library might have it or be able to get it for you). The book is here: https://www.crcpress.com/product/isbn/9781584885764 You might look at the survey article to see if this is something you are interested in. They also have an extremely extensive bibliography which could help you out! Hope that helps!
How can $\frac{\mathrm{d}}{\mathrm{d}x}\left[y(u(x))\right] = \frac{\mathrm{d}y}{\mathrm{d}x}$? I just saw a video on the chain rule, and it stated that $$\frac{\mathrm{d}}{\mathrm{d}x}\left[y(u(x))\right] = \frac{\mathrm{d}y}{\mathrm{d}x}$$ I don't understand this; if I let $y(x) = x^2$ and $u(x) = \sqrt x$ then $$\frac{\mathrm{d}y}{\mathrm{d}x} = 2x$$ and $$\frac{\mathrm{d}}{\mathrm{d}x}\left[y(u(x))\right] = \frac{\mathrm{d}}{\mathrm{d}x} \left[x\right] = 1$$ Clearly, I am completely misunderstanding something. What is it? EDIT: It is my understanding right now that $y(u(x)) = (\sqrt x)^2 = x$. Is this wrong?
Your confusion is reading the abbreviation of $\frac{\mathrm d y}{\mathrm d x}$ as $\frac{\mathrm d y(x)}{\mathrm d x}$ instead of $\frac{\mathrm d y(u(x))}{\mathrm d x}$ as you ought. $$\begin{align} u(x) & \mathop{:=} \surd x \\ y(u) & \mathop{:=} u^2 \\ \therefore y(u(x)) & = y(\surd x) \\ & = (\surd x)^2 \\ & = x \end{align}$$ The dependent variable $y$ is defined as a function of the dependent variable $u$, which is in turn defined as a function of the independent variable $x$. Thus $y$ is a composite function when expressed with respect to $x$. It is to be understood that when we differentiate the dependent variable $y$ with respect to $x$ we are differentiating this composition. For clarity, let's use different letters for the dependent variables and the functions by which they are defined. Then we have an independent variable $x$, and dependent variables $u$ and $y$ defined by functions $f$ and $g$, such that $y=f(u)$ and $u=g(x)$. When differentiating $y$ with respect to $x$ we apply the chain rule to the composition of $f$ and $g$ (that is, $f\circ g$). $\begin{align} u & \mathop{:=}g(x) \\[1ex] y & \mathop{:=} f(u) \\[1ex] & = [f\circ g](x) \\[3ex] \therefore \dfrac{\mathrm d y}{\mathrm d x} & = \dfrac{\mathrm d [f\circ g](x)}{\mathrm d x} \\[1ex] & = \dfrac{\mathrm d f(g(x))}{\mathrm d g(x)} \cdot \dfrac{\mathrm d g(x)}{\mathrm d x} \\[1ex] & = \frac{\mathrm d y}{\mathrm d u}\frac{\mathrm d u}{\mathrm d x} \\[2ex] & = \frac{\mathrm d u^2}{\mathrm d u}\frac{\mathrm d \surd x}{\mathrm d x} \\[1ex] & = 2u\cdot\frac {1}{2\surd x} \\[1ex] & = \frac{2\surd x}{2\surd x} \\[1ex] & = 1 \end{align}$
Finding a $k$-clique in a graph with running time $|V|^{k-1}$ This is a homework problem. Let's say I have a graph $G$; how can I find a $k$-clique (i.e. a complete subgraph on $k$ vertices) inside $G$? So far I can think of a naive solution where, for each choice of $k$ vertices, I check whether they are all connected to each other, so the running time would be $|V|^k$. How can I reduce it by a factor of $|V|$, to $|V|^{k-1}$?
Nešetřil and Poljak give an algorithm running in time $O(n^{(\omega/3)k})$, for $k$ divisible by 3, which seems to be the state-of-the-art. This generalizes the well-known $O(n^\omega)$ algorithm for the case $k = 3$. Here $\omega$ is the matrix multiplication constant, currently known to be at most $2.373$, but conjectured to be $2$. See also Vassilevska (now Vassilevska-Williams), who gives an $o(n^k)$ algorithm which is space-efficient.
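For intuition on the $k=3$ base case mentioned above, here is a minimal sketch (my own, not from the cited papers) of the trace trick behind the $O(n^\omega)$ triangle algorithm:

```python
import numpy as np

def has_triangle(A):
    # For a 0/1 symmetric adjacency matrix with zero diagonal,
    # trace(A^3) counts closed walks of length 3: six per triangle.
    A = np.asarray(A)
    return np.trace(A @ A @ A) > 0

K3 = np.ones((3, 3), int) - np.eye(3, dtype=int)
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path on 3 vertices
print(has_triangle(K3))  # True
print(has_triangle(P3))  # False
```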
Show that the collection of finite-dimensional cylinder sets is an algebra but not a $\sigma$-algebra I am trying to prove that the collection of all finite-dimensional cylinder sets is an algebra but not a $\sigma$-algebra. Cylinder sets are defined as follows: $\mathcal{B}_n$ is defined as the smallest $\sigma$-algebra containing the rectangles $\{(x_1,x_2,\ldots,x_n): x_1 \in I_1,\ldots,x_n \in I_n \}$ where $I_1,I_2,\ldots,I_n$ are intervals in $\mathbb{R}$. An $n$-dimensional cylinder set in $\mathbb{R}^{\infty} := \mathbb{R}\times \mathbb{R}\times...$ is defined as the set $\{\textbf{x}: (x_{1},x_2,\ldots,x_n) \in B\},\ B \in \mathcal{B}_n$. I am unable to visualize how to prove that finite unions belong to the collection, and also why some infinite unions won't belong to the collection. Can anyone help me with this?
I presume that by a cylinder set you mean a set of the form $B \times \mathbb{R} \times \mathbb{R} \times \cdots$, with $B \in {\cal B_n}$ for some $n$. Let ${\cal C}_n$ denote the $n$ dimensional cylinder sets and ${\cal C} = \cup_n {\cal C}_n$. Note that if $C \in {\cal C}_n$ then $C \in {\cal C}_m$ for all $m \ge n$. If $C_1,...,C_k$ are cylinder sets, then all of the sets are in some ${\cal C}_p$ for some $p$, hence $C_1 \cup \cdots \cup C_k \in {\cal C}_p$, hence ${\cal C}$ is closed under finite unions. It might be easier to show that ${\cal C}$ is not closed under infinite intersections. Let $C_k = \{0\}\times \cdots\times\{0\} \times \mathbb{R} \times \cdots$ ($k$ zeros), and note that $C=\cap_k C_k = \{(0,0,0,\cdots) \}$. We see that $C \notin {\cal C}$.
$\lim_{x\rightarrow0}\frac{1-\left(\cos x\right)^{\ln\left(x+1\right)}}{x^{4}}$ Could you please check whether I derived the limit correctly? $$\lim_{x\rightarrow0}\frac{1-\left(\cos x\right)^{\ln\left(x+1\right)}}{x^{4}}=\lim_{x\rightarrow0}\frac{1-\left(O\left(1\right)\right)^{\left(O\left(1\right)\right)}}{x^{4}}=\lim_{x\rightarrow0}\frac{0}{x^{4}}=0$$
By your logic: $$\lim_{x\to 0}\frac{1 - \cos x}{x^2} = \lim_{x\to 0}\frac{1-O(1)}{x^2} = \lim_{x\to 0}\frac{0}{x^2} = 0$$ but in actual fact, the limit should be equal to $\frac12$
Angle of rotation of $w = \frac{1}{z}$ at the points $1$ and $i$ What angle of rotation is produced by the transformation $w=\frac{1}{z}$ at the point (a) $z_{0} = 1$ (b) $z_{0} = i$? I'm not sure what to do. At first I thought about representing $w=\frac{1}{z}$ in polar coordinates and then plugging in the point in question, but I know this is wrong... I also know I am supposed to do something with differentiation... do I take $zf'(z)$ or something like that, and then plug in $i$ and compare the change in angle from the point in question? I am quite confused.
Here, $f(z)=\frac{1}{z}$, so $f'(z)=-\frac{1}{z^2}$. The angle of rotation at the point $z_0=1$ is $\arg f'(1)=\arg(-1)=\pi$. The angle of rotation at the point $z_0=i$ is $\arg f'(i)=\arg(1)=0$.
$\lim_{x\rightarrow0}\frac{\ln\cos2x}{\left(2^{x}-1\right)\left(\left(x+1\right)^{5}-\left(x-1\right)^{5}\right)}$ Could you help me to find the limit: $$\lim_{x\rightarrow0}\frac{\ln\cos2x}{\left(2^{x}-1\right)\left(\left(x+1\right)^{5}-\left(x-1\right)^{5}\right)}$$
In a neighbourhood of the origin: $$\log(\cos 2x)=\log(1-2\sin^2 x)=-2x^2+o(x^3),$$ $$2^x-1 = e^{x\log 2}-1 = x\log 2+o(x),$$ $$ (x+1)^5-(x-1)^5 = 2+o(x),$$ hence the limit is zero.
Analyticity of $\dfrac{1}{z}$ vs. $\dfrac{1}{z^2}$ I am learning complex analysis on my own. I am familiar with the theorems, and I am able to compute by hand and get correct results. But there is something that escapes me. What is the criterion for analyticity? For example: the derivative of $\dfrac{1}{z}$ is $-\dfrac{1}{z^2}$, and the derivative of $\dfrac{1}{z^2}$ is $-\dfrac{2}{z^3}$. Both derivatives are undefined at $z=0$. Yet the closed curve integral of $1/z$ is $\neq 0$, while that of $1/z^2$ equals $0$. The Cauchy integral theorem states that closed curve integrals of all analytic functions $=0.$ http://mathworld.wolfram.com/ResidueTheorem.html It states that the integrals of all the terms in a Laurent series besides $a_{-1}$ are $0$ because of the Cauchy integral theorem. I get the same correct results computing these by hand. But I would like to understand why all $1/z^n$ with $n\neq 1$ have closed curve integrals $=0$ due to the Cauchy integral theorem. They all have poles at $z=0$. And the Cauchy integral theorem is supposed to apply only to curves not containing any poles (in the area they enclose). I hope I was clear enough in my wording; if not, please say so. Post-Acceptance-Edit: Confusion arose at their (sites') statement that Cauchy's theorem is the reason for the integral being zero, when that in fact is not true, as confirmed by other users.
If you change variables to $w=\frac{1}{z}$ then $\int z^{-n}dz$ becomes $-\int w^n \frac{dw}{w^2}=-\int w^{n-2}dw$, and hence for each $n\geq 2$ the integrand is analytic again.
Having a difficult time figuring out if linear inequalities are true or false. I'm having a very difficult time figuring out if these linear inequalities are true or false: $$ x + 2y > 6\\ x - y < 3 $$ How do you identify if these are true or false? At first I thought I was supposed to work out the solution, then once I got the answer I was supposed to say if it was true or false. Is that right? I created this graph identifying both inequalities, but I just don't know how to figure out whether it's true or not. If anyone could help, that would be great.
Hint: It would be helpful to sketch the lines $$ l_1\colon \quad y=\frac{6-x}{2}\\ l_2\colon \quad y=x-3 $$ and check when the following two conditions hold true jointly: $$ y>\frac{6-x}{2}\\ y>x-3 $$
Group of Order $5$ Let $G$ be a group of order $5$ with elements $a, b, c, d, 1$ where $1$ is the identity element. This is the definition of the group. We all know that this can't be a group, because any group of order $5$ is abelian, but according to my definition this group is not abelian. But my question is: why can't this be a group, when it satisfies all the criteria mentioned in the definition of a group? I wish to find a reason for its failure to be a group just from the definition of a group. For example, one can say that the diagonal elements are $1$, which means we have subgroups of order $2$, which is not possible for a group of order $5$. But this is not what I am seeking. Please explain to me using nothing more than just the four criteria of the definition of a group.
$$(ab)d=a \neq d= a(bd)$$ Associativity fails.
Equivalence of norms: $\|x\|$ and $\|x\|_1=\|x\|+\lvert\,f(x)\rvert$ Let $f: X\to \mathbb{R}$ be a linear functional, and let $\|\cdot\|_1$ be defined as follows: $\|x\|_1=\|x\|+|f(x)|$. Prove or disprove: $\|\cdot\|_1$ is equivalent to $\|\cdot\|$ iff $f$ is continuous. I've tested some examples, and the answer seems to be positive.
Hint. It suffices to prove the following: $$ \|x_n-x\|\to 0 \quad\text{iff}\quad \|x_n-x\|_1\to 0. $$
Book recommendation on infinitesimals I'm a sophomore mechanical engineering student and I'm looking for a book which completely describes the concept of "infinitesimals" (as used in thermodynamics, dynamics, ...). In fact, I was taught that $dy/dx$ is not a quotient, but in physics it seems infinitesimals are treated and manipulated like numbers.
Since differential geometry seems to be the flavour of this thread, my recommendations are as follows. Personally, I think the best text for differential forms is set out in R.W.R. Darling's Differential Forms and Connections: see link here. For a physics slant, try Nakahara's Geometry, Topology and Physics; the text can be found here. Personally, when I was a mathematics undergraduate, I found John McCleary's Geometry from a Differentiable Viewpoint (here) and Differential Geometry by Kreyszig (here) indispensable, but appreciate they may be more suited to a math major.
Counting binary strings of length n with no two adjacent 1's I need to calculate the total number of possible binary strings of length $n$ with no two adjacent 1's. E.g. for $n = 3$, $f(n) = 5$: 000, 001, 010, 100, 101. How do I solve it?
Hint: Use a recurrence relation. If the string with $k$ bits starts with a 1, how many possibilities does this give for strings with $k+1$ bits? If the string with $k$ bits starts with a 0, how many possibilities does this give for strings with $k+1$ bits?
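To see the resulting recurrence in action (an added sketch, meant as a check rather than the derivation the hint is after): with $f(1)=2$, $f(2)=3$ and $f(n)=f(n-1)+f(n-2)$, brute-force enumeration agrees.

```python
def f(n):
    # strings starting with '0' extend any valid (n-1)-string;
    # strings starting with '1' must continue as '10' + any valid (n-2)-string
    a, b = 2, 3  # f(1), f(2)
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b
    return b

def brute(n):
    return sum('11' not in format(s, f'0{n}b') for s in range(2 ** n))

print(f(3), brute(3))  # 5 5, matching the question's example
print(all(f(n) == brute(n) for n in range(1, 16)))  # True
```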
Find $x$ and $y$ If $\frac{\tan 8°}{1-3\tan^{2}8°}+\frac{3\tan 24°}{1-3\tan^{2}24°}+\frac{9\tan 72°}{1-3\tan^{2}72°}+\frac{27\tan 216°}{1-3\tan^{2}216°}=x\tan 108°+y\tan 8°$, find $x$ and $y$. I am unable to simplify the first and third terms; I am getting fourth-power expressions. Thanks.
HINT: $$\tan(3\cdot8^\circ)=\dfrac{3\tan 8^\circ-\tan^38^\circ}{1-3\tan^28^\circ}$$ Now, $$\frac{\tan 8^\circ}{1-3\tan^28^\circ}-y\tan8^\circ=\dfrac{(1-y)\tan 8^\circ-(-3y)\tan^38^\circ}{1-3\tan^28^\circ}$$ which will be a multiple of $\tan(3\cdot8^\circ)$ if $$\dfrac{1-y}{-3y}=\dfrac31\iff y=-\dfrac18$$ $$\implies\frac{\tan A}{1-3\tan^2A}-\left(-\dfrac18\right)\tan A=\dfrac38\tan3A$$ and I should leave it here.
Inductive proof that every term in a sequence is divisible by 16 I have this question: The $n$th member $a_n$ of a sequence is defined by $a_n = 5^n + 12n -1$. By considering $a_{k+1} - 5a_k$, prove that all terms of the sequence are divisible by 16. I can do the induction and have managed to rearrange the expression at the inductive step such that the expression must be divisible by 16. In other words, I can do the question fine. My question is: why must we consider $a_{k+1} - 5a_k$? Why can't we prove this by induction just by looking at $a_{k+1}$? Also, how can it be deduced that the expression we must consider is $a_{k+1} - 5a_k$?
A simple proof with some arithmetic modulo $16$ (all entries reduced mod $16$): $$\begin{array}{r|ccc} n \bmod 4 & 5^n & 12n-1& a_n \\ \hline 0 & 1 & -1 & 0 \\ 1 & 5 & 11 & 0 \\ 2 & 9 & 7 & 0 \\ 3 & 13 & 3 & 0 \end{array}$$
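And a one-line empirical confirmation (an added note):

```python
# a_n = 5**n + 12n - 1 is divisible by 16 for the first few hundred n
print(all((5**n + 12*n - 1) % 16 == 0 for n in range(1, 300)))  # True
```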
"Intermediate Value Theorem" for curves Let $f$ be a continuous real function defined on the unit square $[0,1]\times[0,1]$, such that $f(x,0)=-1$ and $f(x,1)=1$. If we walk from $y=0$ to $y=1$ on any path, we must cross some point where $f=0$. Intuitively, there is a connected "wall" (a curve) that separates $y=0$ from $y=1$, on which $f=0$, like this: My question is: is there a theorem that formalizes this intuition (if it is correct)? In particular: is there a theorem which implies that there exists a connected line that goes from $x=0$ to $x=1$, on which $f=0$, as in the picture?
Converting my comment to a loose idea of a proof:

Assume it's not true, i.e., the set of zeros of $f$ is not connected between $x=0$ and $x=1$. Then there are two disjoint open sets, $A$ and $B$, one containing the $x=0$ side and the other containing the $x=1$ side, which cover the entire set of zeros.

Now let's prove that the remaining piece of the square, not covered by either $A$ or $B$, has a connected component touching both $y=0$ and $y=1$. This would contain a path where $f$ is non-zero (well, not exactly a path, but it would still break the notion of "every connection between the two sides must equal zero somewhere"). Again, assume otherwise: then there are two disjoint open sets, $C$ and $D$, one containing $y=0$, the other containing $y=1$, which cover the remaining part of the square.

Now we prove that the boundaries of $A$ and $C$, $\partial A$ and $\partial C$, intersect. With some hand-waving: the connected component of $A$ containing the entire $x=0$ side has points in the interior of $C$, e.g. $(0, 0)$, and points in the interior of the complement of $C$, e.g. $(0, 1)$; since it is connected, it must also have points on the boundary of $C$, and limit points of these will still be on the boundary of $C$, but will also be on the boundary of $A$.

Now look at a single point $x \in \partial A \cap \partial C$. It is not inside $A$ or $C$, since they are open, but it also cannot be in $B$ or $D$. For example, if we assume $x \in B$, then there is an open neighborhood of $x$ fully contained in $B$, and therefore disjoint from $A$, which is a contradiction with $x \in \partial A$. So $x$ is not in $A \cup B \cup C \cup D$, which is a contradiction: we assumed that those 4 sets cover the square.
How do I compute $ \sum\limits_{n=0}^{\infty} \frac{1}{(n+2)7^n} $? I'm trying to evaluate the following sum: $$ \sum\limits_{n=0}^{\infty} \frac{1}{(n+2)7^n} $$ Probably the best way to do that is to try and find a closed formula for $$ \sum\limits_{n=0}^{\infty} \frac{1}{(n+2)x^n} $$ Probably it requires some integrating/differentiating. I've tried both, but I can't get it to look any more decent. I'd appreciate a hint.
$$\sum_{n\geq 0}\frac{1}{(n+2)7^n}=49\int_{0}^{1/7}\sum_{n\geq 0}x^{n+1}\,dx = 49\int_{0}^{1/7}\frac{x}{1-x}\,dx = \color{red}{-7+49\log\frac{7}{6}}.$$
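A quick numerical check of this closed form (an added note):

```python
from math import log

closed_form = -7 + 49 * log(7 / 6)
partial_sum = sum(1 / ((n + 2) * 7**n) for n in range(60))
print(closed_form, partial_sum)  # both ~0.5533833
```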
Intersection of Hilbert spaces Consider two Hilbert spaces $H_1$ and $H_2$ with inner products $\langle \cdot,\cdot\rangle_1$ and $\langle \cdot,\cdot\rangle_2$ generating norms $\Vert \cdot \Vert_1$ and $\Vert \cdot \Vert_2$ respectively. I am trying to understand when $H = H_1 \cap H_2$ becomes a Hilbert space with the inner product $\langle \cdot,\cdot\rangle_1 + \langle \cdot,\cdot\rangle_2$. Or does this always hold? I can see that a Cauchy sequence in $H$ is Cauchy in both $H_1$ and $H_2$, but a priori, it is not clear to me that a common convergent subsequence can be found. Thanks for any help! Edit: Let us assume that $H_1$ and $H_2$ are sitting inside a larger Hilbert space $H$. As pointed out below, otherwise the question doesn't make sense.
Asking questions about the intersection $H_1 \cap H_2$ of two arbitrary Hilbert spaces $H_1$ and $H_2$ is "evil" in the mathematical sense of the word. Unless the two Hilbert spaces are linear subspaces of some larger Hilbert space, this question is ill-posed. In general, it is "evil" to ask questions about structured objects such as groups, topological spaces, etc. whose answers depend on the specific set-theoretic construction of those objects. For example, asking if $1 \in 3$ is a bad question, because it depends on the specific set-theoretic construction of the number $1$ and the number $3$, and it may or may not be true depending on what sort of construction you have in mind.
How to integrate when integration by parts never ends? So I have the following integral: $\int{e^{-x}\sin{x}}\,dx$ I let $u = e^{-x}$ and $dv = \sin{x}\, dx$. After doing the integration by parts I get: $(e^{-x})(-\cos{x})-\int{-e^{-x}\cdot(-\cos{x})}\,dx$ From what it looks like, if I continue integration by parts, will it just continue going on forever?
If you continue (do integration by parts for $\int e^{-x}\cos x\,dx$) you will get $\int e^{-x}\sin x \,dx= f(x) - \int e^{-x}\sin x\, dx$, and therefore $\int e^{-x}\sin x\, dx= \frac12 f(x)$. Each integration by parts picks up a minus sign from $e^{-x}$, and integrating $\sin x$ twice contributes one more, so after two steps the original integral reappears with a minus sign; that's why in this case integrating by parts twice works. Edit: note that $\int u\,dv = uv -\int v\,du$, so your expression simplifies to $$\int e^{-x} \sin x \,dx = -e^{-x}\cos x - \int e^{-x} \cos x \, dx$$
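To spell out the "integrate twice" step (an added worked version of the argument above): writing $I=\int e^{-x}\sin x\,dx$, two integrations by parts give $$I = -e^{-x}\cos x - \int e^{-x}\cos x\,dx = -e^{-x}\cos x - \left(e^{-x}\sin x + I\right)$$ so $2I = -e^{-x}(\sin x + \cos x)$ and $$\int e^{-x}\sin x\,dx = -\frac{e^{-x}}{2}(\sin x + \cos x) + C,$$ which can be confirmed by differentiating.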
If I wanted to randomly find someone in an amusement park, would I be faster roaming around or standing still? Assumptions: * *The other person is constantly and randomly roaming *Foot traffic concentration is the same at all points of the park *Field of vision is always the same and unobstructed *Same walking speed for both parties *The other person is NOT looking for you. They are wandering around having the time of their life without you. *You could also assume that you and the other person are the only two people in the park to eliminate issues like others obstructing view etc. Bottom line: the theme park is just used to personify a general statistics problem. So things like popular rides, central locations, and crowds can be overlooked. I know this can be simulated, but can this be calculated analytically?
Suppose your friend moves around taking a step in a uniformly random direction each time step and suppose you can either stand still or move by the same rule as your friend. Then, if the step size is small compared to the visibility radius and the park is large (so boundary effects are to be ignored), I claim you'll find each other roughly twice as fast by both moving. Why is this? You each taking a step is equivalent to your friend taking two steps, because we're ignoring the boundary of the park and we're assuming your friend's direction is uniformly random. Then if the step size is small compared to the visibility radius, we can (approximately) ignore the possibility that you step into view just for one step and then out of view. So only checking if you can see each other every other step should be approximately equivalent. (This argument can be formalized further in the framework of brownian motion, where [if we put you on an infinite plane] it will work exactly because brownian motion corresponds to a limit in which the discretized step size goes to zero.)
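Since the question notes that this can be simulated, here is a minimal Monte Carlo sketch (entirely my own construction: the park is a wrapped square, the size, step and sight radius are arbitrary choices, and the distance check ignores the wrap as a simplification) comparing the two strategies:

```python
import math, random

def steps_to_meet(both_move, size=30.0, sight=3.0, step=1.0):
    p = [[random.uniform(0, size), random.uniform(0, size)] for _ in range(2)]
    t = 0
    while math.dist(p[0], p[1]) > sight:
        movers = (0, 1) if both_move else (0,)
        for i in movers:
            a = random.uniform(0, 2 * math.pi)
            # wrap around the edges to ignore boundary effects, as in the answer
            p[i][0] = (p[i][0] + step * math.cos(a)) % size
            p[i][1] = (p[i][1] + step * math.sin(a)) % size
        t += 1
    return t

random.seed(1)
for mode, label in [(True, 'both move'), (False, 'one stands still')]:
    avg = sum(steps_to_meet(mode) for _ in range(200)) / 200
    print(label, round(avg))  # 'both move' comes out roughly twice as fast
```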
Among the various subgraphs of $K_5$, how many are cycles? Among the various subgraphs of $K_5$, how many are cycles? I know the answer is $37$ because the number of $3$-cycles is $10$, the number of $4$-cycles is $15$, and $5$-cycles is $12$. Could anyone explain in detail how these numbers are reached?
First notice that from any $n \in \{3,4,5\}$ vertices you choose from $K_5$, you can make a cycle. When you are counting the $n$-cycles, you need to count the number of ways you can choose $n$ vertices out of the given $5$ vertices, and for each of those choices you need to count how many ways those $n$ vertices can be arranged into a cycle. So when counting the $3$-cycles, since there is only one way to have a cycle with $3$ vertices, it would be $$ \binom{5}{3} \times (1) = 10 \times (1) = 10 $$ The cases for $4$ and $5$ vertices are done similarly. You've just got to figure out how to count the number of ways $4$ and $5$ vertices can be connected into a cycle. Can you take it from here?
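For reference, the full count can be verified with the formula $\binom{5}{n}\cdot\frac{(n-1)!}{2}$ for the number of $n$-cycles (an added check, so only peek after trying the hint):

```python
from math import comb, factorial

# (n-1)!/2 distinct cycles on n labelled vertices: fix a starting vertex,
# order the rest ((n-1)! ways), and divide by 2 for the two directions.
counts = {n: comb(5, n) * factorial(n - 1) // 2 for n in (3, 4, 5)}
print(counts)                # {3: 10, 4: 15, 5: 12}
print(sum(counts.values()))  # 37
```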
First order differential equation integrating factor is $e^{\int\frac{2}{x^2-1}}$ So I have the first order ODE $$(x^2-1)\frac{dy}{dx}+2xy=x$$ I divided both sides by $x^2-1$: $$\frac{dy}{dx}+\frac{2x}{x^2-1}y=\frac{x}{x^2-1}$$ in the form $y' + p(x)y = q(x)$. So that means the integrating factor is... $$e^{\int\frac{2}{x^2-1}}$$ But I'm not sure what to do. I think $\int\frac{2}{x^2-1}$ = $-\log{(x-1)}+4\log{(x+1)}$ So it's $$e^{-\log{(x-1)}+4\log{(x+1)}}$$ $$e^{\log{(x-1)^{-1}}+\log{(x+1)^4}}$$ $$\frac{1}{x-1}+(x+1)^4$$ Is this right? And do I then just multiply both sides by this?
Your equation $$(x^2 - 1) \frac{dy}{dx} + 2x y = x $$ is an exact differential equation. The reason is that it can be written as $$\frac d{dx}\left((x^2 - 1) y\right) = x $$ which on integration gives you $$(x^2 - 1)y = \frac12 x^2 + c \to y = \frac{x^2}{2(x^2 - 1)} + \frac c{x^2 - 1}$$
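To tie this back to the integrating-factor route attempted in the question (an added remark): dividing by $x^2-1$ gives $p(x)=\frac{2x}{x^2-1}$, note the factor $x$, which the question's integral dropped. So $$\mu(x)=e^{\int\frac{2x}{x^2-1}\,dx}=e^{\log|x^2-1|}=|x^2-1|,$$ and multiplying through by $\pm(x^2-1)$ recovers exactly the exact form above.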
How to simplify this summation, or express as integral? $\sum_{t=-\infty}^\infty\frac1{\sqrt{(t+ax)^2+4}}$ How to simplify this summation, or express as integral? $$f(x)=\sum_{t=-\infty}^\infty\frac1{\sqrt{(t+ax)^2+4}}$$ $a$ is a constant, say, 24. $$f(x)=\sum_{t=-\infty}^\infty\frac1{\sqrt{(t+24x)^2+4}}$$
Just to make an answer from sos440's comment: From the integral test you get $$\sum_{t=1}^\infty\frac1{\sqrt{(t+24x)^2+4}} \ge \int_1^\infty \frac{dt}{\sqrt{(t+24x)^2+4}} = \infty$$ because $\frac1{\sqrt{(t+24x)^2+4}} \sim \frac 1t$ as $t\rightarrow\infty$ and $\int_1^\infty \frac {dt}{t}$ diverges. Because each summand of $f(x)$ is positive, also $$\sum_{t=-\infty}^\infty\frac1{\sqrt{(t+24x)^2+4}} = \infty.$$
Does one of these conditions on a sequence imply the other one? Let ${(r_n)}_{n \geq 0}$ be a sequence of integers $\geq 2$. Set $q_n=\prod_{i=0}^{n-1} r_i$ (agreeing with $q_0=1$). I want to know whether one of these two conditions implies the other one (I think the answer is no but I don't find counter-examples). The first condition is $$(\Delta)\colon\quad \sum \frac{r_n \log r_n}{q_{n+1}} < \infty.$$ The second condition is $$(\Theta)\colon\quad \limsup_{\epsilon\to 0}\,\epsilon \sum_{i=0}^{k(\epsilon)}r_i\log r_i \to 0$$ where $k(\epsilon) = \max\{k \mid q_{k+1}^{-1} \geq \epsilon\}$. For example: * *Any bounded sequence $(r_n)$ satisfies both conditions. This is obvious for $(\Delta)$, and not difficult for $(\Theta)$. *The case $r_n=n+2$ satisfies both conditions. This is not difficult for $(\Delta)$, and this can be shown for $(\Theta)$ by using the inequalities $k! \geq k^{\frac{k}{2}}$ and $\sum_{i=1}^n k \log k \leq {(n \log n)}^2$. Incidentally, if none of these conditions implies the other one, I would be interested in a simplified formulation of condition $\Delta\cap\Theta$.
@Adayah's answer provides the proof of this lemma: Lemma: Let ${(u_n)}_{n \geq 0}$ and ${(v_n)}_{n \geq 0}$ be two sequences of positive numbers such that $v_n \searrow 0$. If $\sum u_n v_n < \infty$ then $\epsilon \sum_{i=0}^{n(\epsilon)} u_i \to 0$ when $\epsilon \to 0^+$, where $n(\epsilon)=\min\{n \mid v_{n+1} < \epsilon\}$. Surely, @Adayah's proof of this lemma is the more straightforward one. As an addendum too long for a comment, I just want to explain how to derive this lemma from a general result about integrability (finite expectation) of random variables. We can derive it from the following two facts: 1) For any positive random variable $X$, $$E(X) = \int_0^1 F^{-1}(u)\mathrm{d}u,$$ where $F$ is the cumulative distribution function of $X$ and $F^{-1}$ is its left-continuous inverse (the inverse when it exists). Consequently, $$E(X) < \infty \implies \epsilon F^{-1}(1- \epsilon) \to_{\epsilon\to 0^+} 0.$$ And consequently, for any positive and right-continuous increasing function $H$, $$E(H(X)) < \infty \implies \epsilon H(F^{-1}(1- \epsilon)) \to_{\epsilon\to 0^+} 0.$$ (to prove this consequence, consider the right-continuous inverse of $H$ to check that $H\circ F^{-1}$ is the left-continuous inverse of $H(X)$). 2) When $X$ is a discrete random variable, $$\sum_{n \geq 0} u_n\Pr(X>n) = E(\sum_{k=0}^X u_k).$$ (continuous version: $E(U(X))=\int u(x) \Pr(X>x)\mathrm{d}x$) Then we can prove the lemma by firstly assuming without loss of generality that $v_0\leq 1$ and taking $X$ such that $\Pr(X>n)=v_n$. Then fact 2 gives $\sum u_n v_n = E(H(X))$ with $H(x) = \sum_{k=0}^x u_k$. Then it suffices to check that $n(\epsilon)=F^{-1}(1-\epsilon)$ and apply fact 1.
Prove that $\left\lceil \frac{n}{m} \right\rceil =\left \lfloor \frac{n+m-1}{m} \right\rfloor$ On a discrete mathematics past paper, I must prove that $$\left\lceil \frac{n}{m} \right\rceil = \left\lfloor \frac{n+m-1}{m} \right\rfloor$$ for all integers $n$ and all positive integers $m$. I have started a proof by induction thus: Let $P(m,n)$ be the statement that $$\left\lceil \frac{n}{m} \right\rceil = \left\lfloor \frac{n+m-1}{m} \right\rfloor$$ for all integers $n$ and all positive integers $m$. Then, according to the technique described here, I must prove the following: * *Base case: I have already proved that $P(a,b)$, where $a=1$ and where $b$ is the smallest value where $n$ is valid. (Note that this is equivalent to proving that $\lceil n \rceil = \lfloor n \rfloor \space \forall\space n\in\mathbb{Z}.$) *Induction over $m$: I must show that $P(k,b) \implies P(k+1,b)$ for some positive integer $k$. THIS IS THE STAGE AT WHICH I AM STUCK. *Induction over $n$: I must show that $P(h,k) \implies P(h,k+1)$ for some positive integer $m$ and for some integer $k$ (I think that I must account for the fact that $k$ could be negative OR non-negative). I will explain why I am stuck. My inductive hypothesis is that $P(k,b)$ - that is, $\left\lceil \frac{b}{k}\right\rceil = \left\lfloor \frac{b+k-1}{k}\right\rfloor$. I want to show that this implies $P(k+1,b)$. I have tried to do this by attempting to express $\lceil \frac{b}{k+1} \rceil$ in terms of $\left\lceil \frac{b}{k} \right\rceil$, but have not succeeded. Any hints would be appreciated.
We have $$ \lfloor \frac{n+m-1}{m}\rfloor=\lfloor \frac{n-1}m \rfloor+1 $$ Now you have two cases: $m\mid n$ or not. If $m\mid n$, then $\lceil \frac nm\rceil=\frac nm$ and $\lfloor\frac{n-1}m\rfloor=\frac nm-1$. Now if $m$ does not divide $n$, then $\lceil \frac nm\rceil=\lfloor\frac nm\rfloor+1$ and $\lfloor\frac nm\rfloor=\lfloor\frac{n-1}m\rfloor$.
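This identity is also easy to sanity-check mechanically (an added note), since Python's `//` is floor division and handles negative $n$ correctly:

```python
from math import ceil

# ceil(n/m) == floor((n + m - 1) / m) for all integers n and positive m
print(all(ceil(n / m) == (n + m - 1) // m
          for m in range(1, 30)
          for n in range(-100, 101)))  # True
```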
What is a branch point? I am really struggling with the concept of a "branch point". I understand that, for example, if we take the $\log$ function, by going around $2\pi$ we arrive at a different value, so therefore it is a multivalued function. However, surely this argument holds for all points in the complex plane, so I don't really understand how $z=0$ is the ONLY branch point. Additionally, the course I am revising for needs no Riemann surfaces or knowledge of that area of mathematics, just what a branch point is and how to find it. Thanks for any help.
A branch point of a "multi-valued function" $f$ is a point $z$ with this property: there does not exist an open neighbourhood $U$ of $z$ on which $f$ has a single-valued branch. In the case of $\log$, the only branch point is $0$: indeed, if $z \ne 0$, we could take $$U = \{ w \in \mathbb{C} : \lvert w - z \rvert < \lvert z \rvert \}$$ and define a single-valued branch of $\log$ on $U$. If you want to think in terms of paths, the point is that the value of $\log$ cannot jump if the path does not wind around $0$.
Difference between domain and range for relations and functions? What is the difference between the definition of domain and range for a relation, and that for a function?
There is no difference. Note that each function can be seen as a special relation. So $f:A \rightarrow B$ can be seen as a relation $R_f \subseteq A\times B$ such that $$\forall x \in A: \exists ! y \in B: xR_fy$$ The domain and the range are defined so that the relation $R_f$ has the same domain and range as $f$. Note: here by "range" I mean the "image of $f$".
Representation of the Orthogonal Projection We want to show that the orthogonal projection can be written as \begin{align} C\cdot C^+=\left(I_N-\frac{1}{N}\iota\iota'\right), \end{align} where $C=\left(\omega\iota'-I_N\right)$, $I_N$ is the identity matrix of dimension $N$, $\omega$ is an $N\times1$ vector, $\iota'\omega=1$, and $C^+$ denotes the Moore–Penrose pseudo-inverse of $C$. Numerically, we know that this holds for any vector $\omega$ fulfilling the summing-up constraint. Here, $C$ is a linear operator from $\mathbb{R}^{T\times N}$ onto $\mathbb{R}^{T\times N}$, and we want to show that regardless of the choice of $\omega$, the orthogonal projector $CC^+$ is always of the same form.
We finally find that $C^+$ has the following form: \begin{align} C^{+}= \omega \iota' - I \frac{1}{\omega'\omega} \omega \omega' - \left( \frac{1}{N} \frac{1}{\omega'\omega} + 1\right) \omega \iota' + \frac{1}{N} \iota \iota' \end{align}
$\sum a_n$ converges $\implies\ \sum a_n^2$ converges? If $\sum a_n$ with $a_n>0$ is convergent, then is $\sum {a_n}^2$ always convergent? Either prove it or give a counter-example. I'm trying it this way: Suppose $a_n \in [0,1] \ \forall\ n.\ $ Then ${a_n}^2\leq a_n\ \forall\ n.$ Therefore by the comparison test $\sum {a_n}^2$ converges. So if $a_n$ satisfies certain restrictions then the result is true. What about the general case? How to proceed further? Hints will be greatly appreciated.
If $\sum_{n=1}^\infty a_n$ is convergent, then $\lim_{n\to\infty}a_n=0$, hence from some index onward we have $0<a_n<1$; now use the comparison test...
Derivative of an Expectation over an Indicator function I have a small question on how to compute the derivative of an expectation when an indicator function gets in the way. Let $x$ be a random variable. We are interested in computing the derivative with respect to $A$ of $E_x [(A-x)1_{(x \leq A)}]$, where $1_{P}$ is the indicator function, equal to $1$ when the condition $P$ is true. A paper I am reading mentions that this is simply $F(A)$, but I am unsure how to get that result. Your intuitive explanations are most appreciated.
$$E_x [(A-x)1_{(x \leq A)}]$$ $$=\int_{-\infty}^\infty(A-x)1_{(x\leq A)}p_x(x)dx$$ $$=\int_{-\infty}^A(A-x)p_x(x)dx$$ $$=\int_{-\infty}^AAp_x(x)dx-\int_{-\infty}^Axp_x(x)dx$$ $$=A\int_{-\infty}^Ap_x(x)dx-\int_{-\infty}^Axp_x(x)dx$$ $$=AF(A)-\int_{-\infty}^Axp_x(x)dx$$ So taking the derivative with respect to $A$ we get $$F(A) + AF'(A)-\frac{d}{dA}\int_{-\infty}^Axp_x(x)dx$$ $$=F(A)+Ap_x(A)-Ap_x(A)$$ $$=F(A)$$
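A numerical sanity check of this identity (an added sketch; the standard normal distribution, the sample size, and the test point $A=0.7$ are all arbitrary choices of mine):

```python
import random, statistics
from math import erf, sqrt

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200_000)]

def g(A):
    # Monte Carlo estimate of E[(A - x) 1_{x <= A}] = E[max(A - x, 0)]
    return statistics.fmean(max(A - x, 0.0) for x in xs)

A, h = 0.7, 1e-3
finite_diff = (g(A + h) - g(A - h)) / (2 * h)  # numerical d/dA
F_A = 0.5 * (1 + erf(A / sqrt(2)))             # standard normal CDF at A
print(finite_diff, F_A)  # both ~0.758
```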
Show that if a Mobius transformation has 3 fixed points then it is the identity map. I know that any non-trivial Mobius transformation has at most 2 fixed points, since $f(z)-z=0$ has at most 2 roots. But I cannot deduce why it must then be the identity.
Suppose $Tz=(az+b)/(cz+d)$. The fixed points of $T$ must satisfy the equation: $$\begin{align} \frac{az+b}{cz+d}=z & \iff az+b=cz^2+dz\\ & \iff cz^2+(d-a)z+b=0 \end{align}$$ If this quadratic equation has three or more solutions, the polynomial must be identically zero. So $c=0$, $a=d$, $b=0$. Therefore $Tz=az/a=z$. Note that $a\ne0$ here: since $a=d$, if $a=0$ then $a=b=c=d=0$, which is impossible for a Möbius transformation.
Multivariable limit which should be simple! How to calculate the following limit WITHOUT using spherical coordinates? $$ \lim _{(x,y,z)\to (0,0,0) } \frac{x^3+y^3+z^3}{x^2+y^2+z^2} $$ Thanks in advance.
Let $\epsilon \gt 0$. If $(x,y,z)$ is close enough to $(0,0,0)$ but not equal to it, then $|x^3|\le \epsilon x^2$, with similar inequalities for $|y^3|$ and $|z^3|$. It follows that $$\frac{|x^3+y^3+z^3|}{x^2+y^2+z^2}\le \frac{|x^3|+|y^3|+|z^3|}{x^2+y^2+z^2}\le \frac{\epsilon (x^2+y^2+z^2)}{x^2+y^2+z^2}=\epsilon.$$ Since $\epsilon$ was arbitrary, the limit is $0$.
Proving UNIT INTERSECTION NP-complete I am working on some review problems right now and am extremely stuck on how to solve this problem; any help would be much appreciated. We are told to consider the following combinatorial problem: Unit Intersection: Let X = {1, 2,...,n}. Given a family of subsets $S_1,...,S_m$ of X, determine whether there is a subset T of X such that for all i, |T ∩ $S_i$| = 1. I am then trying to prove that Unit Intersection is NP-complete using a reduction from Exactly-One-3SAT. In the Exactly-One-3SAT problem, we are given a 3CNF formula, and need to decide whether there is an assignment to the variables such that every clause contains exactly one true literal. In a 3CNF formula, every clause has at most three literals. A clause in a 3CNF formula may contain repeated literals. I've been working for hours trying to find a way to solve this problem, but am so stuck right now. Thank you in advance for any help.
The reduction is straightforward. As an example, I'll reduce an instance of Exactly-One-in-3SAT to an instance of Unit Intersection. $$ (x_1 \lor x_4 \lor x_3) \land (\overline{x_4} \lor \overline{x_2} \lor x_3) \land (x_2 \lor x_1 \lor \overline{x_3}) $$ Let $n$ be the number of distinct (positive and negative) literals in the formula, and choose a bijection between the literals and the integers from $1$ to $n$: $$x_1 ↔ 1\\ x_2 ↔ 2\\ x_3 ↔ 3\\ x_4 ↔ 4\\ \overline{x_2} ↔ 5\\ \overline{x_3} ↔ 6\\ \overline{x_4} ↔ 7$$ The integers on the right are the set $X$ in the Unit Intersection instance: $X = \lbrace 1,2,3,4,5,6,7 \rbrace$. Convert each clause in the Exactly-One-in-3SAT instance into a set of integers by using the mapping created earlier to replace literals with integers. $$ (x_1 \lor x_4 \lor x_3) \rightarrow \lbrace 1, 4, 3 \rbrace \\ (\overline{x_4} \lor \overline{x_2} \lor x_3) \rightarrow \lbrace 7, 5, 3 \rbrace \\ (x_2 \lor x_1 \lor \overline{x_3}) \rightarrow \lbrace 2, 1, 6 \rbrace$$ These are the initial subsets $S_1 ... S_m$ in the Unit Intersection instance. If variables appear as both positive and negative literals in the Exactly-One-in-3SAT instance, for each such variable add a subset that contains the mapped integers for the positive and negative literals of that variable. For our instance we would add $$\lbrace 2, 5 \rbrace \\ \lbrace 3, 6 \rbrace \\ \lbrace 4, 7 \rbrace$$ The reduction is complete. If a subset $T$ can be found for the Unit Intersection instance, that solution can be mapped back to a satisfying assignment for the Exactly-One-in-3SAT instance.
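To make the reduction concrete, here is a small script (my own sketch, not from the original answer; encoding clauses as tuples of signed integers and the helper's name are my choices) that follows the construction above:

```python
def reduce_exactly_one_3sat(clauses):
    """Map Exactly-One-in-3SAT clauses to a Unit Intersection instance (X, S)."""
    literals = sorted({lit for clause in clauses for lit in clause}, key=abs)
    idx = {lit: i + 1 for i, lit in enumerate(literals)}  # any bijection works
    sets = [set(idx[lit] for lit in clause) for clause in clauses]
    # consistency sets for variables occurring both positively and negatively
    for lit in literals:
        if lit > 0 and -lit in idx:
            sets.append({idx[lit], idx[-lit]})
    return set(idx.values()), sets

# the example instance above: (x1 v x4 v x3) ^ (~x4 v ~x2 v x3) ^ (x2 v x1 v ~x3)
X, S = reduce_exactly_one_3sat([(1, 4, 3), (-4, -2, 3), (2, 1, -3)])
print(sorted(X))                   # 1..7 (the numbering may differ from the table)
print([sorted(s) for s in S])
```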
{ "language": "en", "url": "https://math.stackexchange.com/questions/1282506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The box has minimum surface area Show that a rectangular prism (box) of given volume has minimum surface area if the box is a cube. Could you give me some hints on what we are supposed to do? EDIT: Having found that for $z=\frac{V}{xy}$ the function $A_{\star}(x, y)=A(x, y, \frac{V}{xy})$ has its minimum at $(\sqrt[3]{V}, \sqrt[3]{V})$, how do we conclude that the box is a cube? We have that $x=y$. Shouldn't we have $x=y=z$ to have a cube?
we will keep the volume at $1.$ let the base have length $x$ and width $y.$ then the volume constraint makes the height of the box $\frac1{xy}.$ you need to minimize the surface area $$A = 2\left(xy+\frac 1x + \frac 1y\right), \quad x > 0, y > 0. $$ now you can use the am-gm inequality $\frac{a+b+c}3\ge (abc)^{1/3}$ to show that $$A \ge 6\left(xy\cdot\frac1x \cdot\frac 1y\right)^{1/3} = 6.$$ therefore the minimum surface area of the box is $6$ subject to the constraint that the volume is $1.$ equality in am-gm holds iff $xy = \frac1x = \frac1y,$ i.e. $x = y = 1,$ and then the height is $\frac1{xy} = 1$ as well, so the minimizing box is a cube. if you scale everything, you will get $$A \ge 6V^{2/3}. $$
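A numerical check (my own sketch; scipy is an assumed dependency) confirms both the minimum value $6$ and the asker's follow-up: the minimizer is $x=y=1$, and then $z=V/(xy)=1$ as well, so the box is a cube.

```python
import numpy as np
from scipy.optimize import minimize

V = 1.0
area = lambda v: 2 * (v[0] * v[1] + V / v[0] + V / v[1])  # A(x, y) with z = V/(x*y)
res = minimize(area, x0=[0.3, 2.0], bounds=[(1e-6, None)] * 2)
x, y = res.x
print(x, y, V / (x * y), res.fun)   # -> 1.0, 1.0, 1.0, 6.0 : a cube
```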
{ "language": "en", "url": "https://math.stackexchange.com/questions/1282622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 2 }
A reference for the Tannaka-Krein theorem I am looking for a reference for the Tannaka-Krein theorem on compact groups. By the Tannaka-Krein theorem which is also called (classic) Tannaka duality (because of the quantum theory), I mean the theory which is to reconstruct a compact (Lie) group from its representations.
I found it myself. It is in Hewitt, Edwin; Ross, Kenneth A. Abstract harmonic analysis. Vol. II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups. Die Grundlehren der mathematischen Wissenschaften, Band 152 Springer-Verlag, New York-Berlin 1970 ix+771 pp. Section 30. It is written with all the details and preliminaries that one may need.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1282690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove that $\lambda(s)=(1-2^{-s})\zeta(s)$? The Lambda function is defined as: $$\lambda(s)=\sum_{n=0}^{\infty} \frac{1}{(2n+1)^s},\; \mathfrak{Re}(s)>1$$ How to prove that $\lambda(s)=(1-2^{-s})\zeta(s)$? Basically, I was dealing with the functional equation of $\eta$ and while I was trying to prove its functional equation when $\mathfrak{Re}(s)>1$ I came across this series. I googled it and found that is a known function under the name "lambda dirichlet function" and it is reduced down to that formula. Trying to prove the formula I came across difficulties because I cannot manipulate the sum and split it apart somehow, unless I write it as an alternating sum but that is a deja vu and I'm sure it will lead me back to where I started. Of course contour integration over a square is out of the question since the residue of the function $\dfrac{\pi \cot \pi z}{(2z+1)^s}$ is very tedious to calculate. Any help?
I give the solution according to Lucian's comment. We start things off by using Riemann's $\zeta$ function, defined as: $$\zeta(s)= \sum_{n=1}^{\infty}\frac{1}{n^s}, \; \mathfrak{Re}(s)>1$$ which converges absolutely since all terms are positive. That the series converges if $\mathfrak{Re}(s)>1$ is an immediate consequence of the integral test. Now, splitting the series into even and odd terms we get that: $$\zeta(s)= \sum_{n=1}^{\infty}\frac{1}{(2n)^s}+ \sum_{n=0}^{\infty}\frac{1}{(2n+1)^s} \Leftrightarrow \zeta(s)= \frac{1}{2^s}\zeta(s) + \sum_{n=0}^{\infty}\frac{1}{(2n+1)^s} \Leftrightarrow$$ $$\Leftrightarrow \sum_{n=0}^{\infty}\frac{1}{(2n+1)^s} =(1-2^{-s})\zeta(s) \tag{1}$$ which is what we wanted. Going one step further, one can now prove the functional equation of the Dirichlet $\eta$ function. This function is defined as: $$\eta(s)= \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^s} , \; \mathfrak{Re}(s)>1$$ and converges absolutely since $\displaystyle \sum_{n=1}^{\infty}\left | \frac{(-1)^{n-1}}{n^s} \right |= \sum_{n=1}^{\infty}\frac{1}{n^s}=\zeta(s)$ converges, as shown above. Now, absolute convergence allows us to rearrange the terms. Hence: $$\begin{aligned} \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^s} &=\left ( 1+\frac{1}{3^s}+ \frac{1}{5^s}+\cdots \right )- \left ( \frac{1}{2^s}+ \frac{1}{4^s}+ \frac{1}{6^s}+\cdots \right ) \\ &= \sum_{n=0}^{\infty}\frac{1}{(2n+1)^s}- \sum_{n=1}^{\infty}\frac{1}{(2n)^s}\\ &=\sum_{n=0}^{\infty} \frac{1}{(2n+1)^s}- \frac{1}{2^s} \sum_{n=1}^{\infty}\frac{1}{n^s}\\ &\overset{(1)}{=}\left ( 1-2^{-s} \right )\zeta(s)- \frac{1}{2^s}\zeta(s)\\ &=\left ( 1-2^{1-s} \right )\zeta(s) \end{aligned}$$
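Both identities are easy to spot-check numerically; the snippet below is my addition (using mpmath), evaluating at $s=3$:

```python
from mpmath import mp, zeta, nsum, inf

mp.dps = 25
s = 3
lam = nsum(lambda n: 1 / (2 * n + 1) ** s, [0, inf])                     # lambda(s)
eta = nsum(lambda n: 1 / (2 * n - 1) ** s - 1 / (2 * n) ** s, [1, inf])  # eta(s)
print(lam, (1 - 2 ** -s) * zeta(s))        # both 1.0517997...
print(eta, (1 - 2 ** (1 - s)) * zeta(s))   # both 0.9015426...
```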
{ "language": "en", "url": "https://math.stackexchange.com/questions/1282794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Solving non-linear second order differential equation: radius of curvature $= k \theta$ I'm trying to find any curve where the radius of curvature increases linearly with angular displacement. So in polar coordinates radius of curvature $= k \theta$ $$ \frac{(r^2 + r'^2)^{3/2}}{r^2 + 2r'^2 - rr''} = k \theta$$ where k is a constant I just need a single curve that fits this equation, does not matter if it is defined as an integral or power series. But, best to have a closed form solution. Thank you.
I understand you are looking for any curve satisfying the property $$\rho=k\theta$$ Therefore choose a curve which has the property that the radius vector makes a constant angle $\alpha$ with the tangent vector; for such a curve the tangential angle is $\psi=\theta+\alpha$. In intrinsic form, we have $$\frac {dy}{dx}=\tan\psi$$ The radius of curvature is $$\rho=\frac{ds}{d\psi}$$ Your curve therefore has the property that $\rho=k\theta=k(\psi-\alpha)$. We can now use the well-known relationships $$x=\int\rho\cos\psi \,d\psi$$ and $$y=\int\rho\sin\psi \,d\psi$$ to obtain $x$ and $y$ in terms of $\psi$, hence of $\theta$, and then you can get the curve easily into polar form.
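If it helps, the two intrinsic integrals have elementary closed forms; the sketch below (my own, via sympy) produces the parametrization $x(\psi), y(\psi)$ for $\rho=k(\psi-\alpha)$:

```python
import sympy as sp

psi, k, alpha = sp.symbols('psi k alpha', real=True)
rho = k * (psi - alpha)                   # radius of curvature
x = sp.integrate(rho * sp.cos(psi), psi)  # k*((psi - alpha)*sin(psi) + cos(psi))
y = sp.integrate(rho * sp.sin(psi), psi)  # k*(sin(psi) - (psi - alpha)*cos(psi))
print(sp.simplify(x))
print(sp.simplify(y))
```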
{ "language": "en", "url": "https://math.stackexchange.com/questions/1282901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $n$ be an odd natural number; to find a continuous real valued function on $\mathbb R$ which takes every value exactly $n$ times Let $n$ be an odd natural number. We know $\mathbb R = \cup_{k \in \mathbb Z} [nk\pi , n(k+1)\pi]$. So for every $k \in \mathbb Z$, define $h(x):=2k+1-(-1)^k \cos x , \forall x \in [nk\pi , n(k+1)\pi ]$; then does the function $h : \mathbb R \to \mathbb R$ take every value exactly $n$ times? Can someone please provide any other continuous function(s) on $\mathbb R$ which take every value exactly $n$ times for odd $n$? Please help. Thanks in advance.
Yes, your function appears to meet the requirements; a sketch for $n=5$ makes this clear. Another approach without the need for intervals might be to take a sinusoidal curve and add a linear element to shift successive peaks and troughs up, perhaps something like $$f(x)= \frac{2}{n\pi}x +\cos(x),$$ which for $n=5$ is a cosine wave superimposed on a line of gentle slope, crossing each horizontal level five times (see the numeric spot check below).
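Here is the rough numeric spot check promised above (my own sketch): for a level $c$, count the sign changes of $f-c$ on a fine grid over the interval where solutions can live; generic levels give $5$ crossings for $n=5$.

```python
import numpy as np

def count_crossings(c, n=5, pts=400001):
    # solutions of 2x/(n*pi) + cos(x) = c satisfy |c - 2x/(n*pi)| <= 1
    lo = n * np.pi * (c - 1) / 2 - 1
    hi = n * np.pi * (c + 1) / 2 + 1
    x = np.linspace(lo, hi, pts)
    g = 2 * x / (n * np.pi) + np.cos(x) - c
    return int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))

for c in [-2.3, 0.0, 0.41, 1.7]:
    print(c, count_crossings(c))   # each prints 5
```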
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sum of variances of multinomial distribution. I have $k$ fair coins, and I would like to know the number of heads obtained in $n$ trials. But that is simple binomial distribution. But if I want to find out how much it varies from binomial expectation, then it becomes a multinomial distribution for the number of heads from 0 to $k$. $$ \text{pdf} = \frac{n!}{x_0!\cdots x_k!}p_0^{x_0}\cdots p_k^{x_k} $$ Where: $$ p_i = {k \choose i}\frac{1}{2^k}$$ I am interested in: $$ \text{Var}(\vec{X}) = \sum_{i=0}^k E(X_i - E[\vec{X}]_i)^2$$ Since we have: $$ \text{Var}(X_i) = E(X_i - E[X_i])^2 = np_i(1-p_i)$$ The wrong derivation is as follows, how do I correct this. This I think is wrong due to dependency or number of degrees problem. $$ \text{Var}(\vec{X}) = \sum_{i=0}^k\text{Var}(X_i) = \sum_{i=0}^k np_i(1-p_i)$$ on substitution we get: $$ \text{Var}(\vec{X}) = n\left(1-{2k \choose k}\frac{1}{4^k}\right) \approx n\left(1-\frac{1}{\sqrt{\pi~k}}\right)$$ How do I correct this? Experimental data (for $k=15$) seem to differ a bit. In the following graph RMS is defined as: $$ \text{RMS} = \frac{1}{n}\sqrt{\text{Var}(\vec{X})}$$ The IJulia code used to generate this is: here
There is nothing wrong with the derivation, only a problem with the experimental setup. The experiment is calculating the expected value of the standard deviation, not of the variance. Change RMSExptAvg(S, n) = mean([RMSExpt(S, n) for i in 1:10000]) to this: RMSExptAvg(S, n) = sqrt(mean([RMSExpt(S, n)^2 for i in 1:10000])) Then the equations match up exactly. Getting a closed form expression for the expectation of the standard deviation is pretty difficult.
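The gap between the two recipes is just Jensen's inequality, $E[\sqrt{V}] \le \sqrt{E[V]}$; a tiny simulation (mine, not from the thread) makes the difference visible:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.chisquare(df=5, size=100_000)  # stand-in for per-run variance estimates
print(np.mean(np.sqrt(v)))             # E[sqrt(V)]: what the buggy code averaged
print(np.sqrt(np.mean(v)))             # sqrt(E[V]): what the theory predicts (larger)
```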
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Existence of a metric space where each open ball is closed and has a limit point Show that there exists a metric space in which every open ball is closed and contains a limit point. I think that the space $\{\frac{1}{n}\mid n\in\mathbb{N},n>0\}\cup \{0\}$ with the standard Euclidean metric is an answer, but it is not true that the open ball with center $1$ is closed.
$\mathbb{Z}$ with the $p$-adic metric $d(m,n) = p^{-\nu_p (m-n)}$ has the property that every open ball is closed (to prove this, note that the set of positive distances forms a discrete space), and every integer $n$ is the limit point of the sequence $\{n+p^k\}_{k\geq 1}$.
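A few lines of code (my illustration) exhibit both claims for $p=3$: the distances take only the discrete values $3^{-k}$, and $n+3^k\to n$.

```python
def vp(m, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def d(m, n, p=3):
    return 0.0 if m == n else p ** (-vp(m - n, p))

n = 7
print([d(n, n + 3 ** k) for k in range(1, 6)])   # 1/3, 1/9, ... -> 0
print(sorted({d(a, b) for a in range(20) for b in range(20) if a != b}))
```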
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving vectors as a basis in $E^{m}$ Show that if the vectors $a_{1}$, $a_2$, $\cdots$, $a_m$, are a basis in $E^{m}$, the vectors $a_{1}$, $a_2$, $\cdots$, $a_{p-1}$, $a_{q}, a_{p+1}, \cdots,a_{m}$, also are a basis if and only if $y_{p,q} \neq 0$, where $y_{p,q}$ is defined by the following tableau: \begin{matrix} 1& 0& \cdots & 0& y_{1,m+1} & y_{1,m+2}& \cdots & y_{1n} & y_{10}\\ 0& 1& \cdot & 0&y_{2,m+1}& y_{2,m+2}& \cdots & y_{2n} & y_{20}\\ 0& 0& \cdot & 0& \cdot & \cdot & \cdot & \cdot & \cdot\\ \vdots& \vdots& \vdots & \vdots& \vdots& \vdots& \vdots & \vdots& \vdots\\ 0& 0& \cdot & 1& y_{m,m+1} & y_{m,m+2} & \cdots & y_{mn} & y_{m0} \end{matrix} Can the necessary and sufficient conditions be defined as follows. If $a_{1}$, $a_2$, $\cdots$, $a_{p-1}$, $a_{q}, a_{p+1}, \cdots,a_{m}$ are a basis in $E^{m}$ then $y_{p,q} \neq 0$ which implies to prove that they're LI (necessary condition) and if $y_{p,q} \neq 0$ then $a_{1}$, $a_2$, $\cdots$, $a_{p-1}$, $a_{q}, a_{p+1}, \cdots,a_{m}$ are a basis in $E^{m}$ (sufficient condition)? Does anyone have any idea to prove this? Any hint is welcome. Thanks.
Remark: Your problem has nothing to do with the rightmost column $(y_{1,0},y_{2,0},\dots,y_{m,0})^T$. I'll omit that column from the matrix. Settings Let $A = \begin{bmatrix}\mathbf{a}_1&\mathbf{a}_2&\cdots&\mathbf{a}_m&|\mathbf{a}_{m+1} & \cdots&\mathbf{a}_n\end{bmatrix}$ be the original coefficient matrix in $E^{m \times n}$, where $\mathbf{a}_j$ denotes the $j$-th column of $A$. By changing $A$ to the given matrix $\begin{bmatrix}I_m&|\mathbf{y}_{m+1} & \cdots&\mathbf{y}_n\end{bmatrix}$, where $\mathbf{y}_j$ denotes the $j$-th column of the given matrix (i.e. $\mathbf{y}_j = \mathbf{e}_j \;\forall j \in \{1,\dots,m\}$ and $\mathbf{y}_j^T = (y_{1,j},y_{2,j},\dots,y_{m,j}) \;\forall j \in \{m+1,\dots,n\}$), using row operations, the given matrix equals $B^{-1}A$, where $B := \begin{bmatrix}\mathbf{a}_1&\mathbf{a}_2&\cdots&\mathbf{a}_m\end{bmatrix}$ is formed by the given basis. To see this, use the fact that \begin{align} B^{-1} B &= I \\ B^{-1} \begin{bmatrix}\mathbf{a}_1&\mathbf{a}_2&\cdots&\mathbf{a}_m\end{bmatrix} &= \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_m \end{bmatrix} \end{align} Therefore, we have $B^{-1}\mathbf{a}_j = \mathbf{y}_j \;\forall j \in \{1,2,\dots,n\}$. This is our starting point. Sufficient condition Assume that $y_{pq} \ne 0$. \begin{align} B^{-1}\mathbf{a}_q &= \mathbf{y}_q \\ \mathbf{a}_q &= B \mathbf{y}_q \\ &= \sum_{i = 1}^m y_{i,q} \mathbf{a}_i \\ &= y_{p,q} \mathbf{a}_p + \sum_{\substack{i = 1 \\ i \ne p}}^m y_{i,q} \mathbf{a}_i \\ \frac{1}{y_{p,q}} \mathbf{a}_q &= \mathbf{a}_p + \sum_{\substack{i = 1 \\ i \ne p}}^m \frac{y_{i,q}}{y_{p,q}} \mathbf{a}_i \\ \mathbf{a}_p &= \frac{1}{y_{p,q}} \mathbf{a}_q - \sum_{\substack{i = 1 \\ i \ne p}}^m \frac{y_{i,q}}{y_{p,q}} \mathbf{a}_i \end{align} Since it's given that $\left\{ \mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m \right\}$ is a basis for $E^m$, then $$\left\{ \mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{p - 1}, \frac{1}{y_{p,q}} \mathbf{a}_q - \sum_{\substack{i = 1 \\ i \ne p}}^m \frac{y_{i,q}}{y_{p,q}} \mathbf{a}_i, \mathbf{a}_{p + 1}, \dots, \mathbf{a}_m \right\},$$ is a basis for $E^m$. Hence $\left\{ \mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{p - 1}, \mathbf{a}_q, \mathbf{a}_{p + 1}, \dots, \mathbf{a}_m \right\}$ is a basis for $E^m$. Necessary condition Suppose that $y_{pq} = 0$. Then from the section above, we have $$\mathbf{a}_q = y_{p,q} \mathbf{a}_p + \sum_{\substack{i = 1 \\ i \ne p}}^m y_{i,q} \mathbf{a}_i = \sum_{\substack{i = 1 \\ i \ne p}}^m y_{i,q} \mathbf{a}_i,$$ so $\mathbf{a}_q$ can be represented by $\left\{ \mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{p - 1}, \mathbf{a}_{p + 1}, \dots, \mathbf{a}_m \right\}$. In other words, the vector $\mathbf{a}_q$ has two different representations by the basis $\left\{ \mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{p - 1}, \mathbf{a}_q, \mathbf{a}_{p + 1}, \dots, \mathbf{a}_m \right\}$. This is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why can't I get this with Gram-Schmidt Can someone assist me with a very simple problem. I cannot get these two vectors to be orthogonal using Gram-Schmidt: $\{(1,-1,1),(2,1,1)\}$ What am I doing wrong? Let $v_1=(1,-1,1)$. $v_2 = > (2,1,1)-\frac{\langle(2,1,1),(1,-1,1)\rangle}{\lvert (1,-1,1) \rvert^2}(1,-1,1)$ =$(2,1,1)-\frac{1}{3}(1,-1,1)$ This gives me $(\frac{5}{3},\frac{4}{3},-\frac{2}{3})$ Which is not orthogonal to $(1,-1,1)$.
The numerator of the fraction should be $$ \langle(2,1,1),(1,-1,1)\rangle $$ Edit After your correction, I can see that there is still an error, given that $$ \frac{\langle(2,1,1),(1,-1,1)\rangle}{|(1,-1,1)|^2}=\frac{2}{3} $$
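For completeness, the corrected computation done numerically (my own check, not part of the original answer); the coefficient is $2/3$ and the result is orthogonal to $(1,-1,1)$:

```python
import numpy as np

u = np.array([1.0, -1.0, 1.0])
w = np.array([2.0, 1.0, 1.0])
coeff = (w @ u) / (u @ u)   # <(2,1,1),(1,-1,1)> / |(1,-1,1)|^2 = 2/3
v = w - coeff * u           # (4/3, 5/3, 1/3)
print(coeff, v, v @ u)      # v @ u == 0, so v is orthogonal to u
```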
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the set $A \cap B = \emptyset$ Let $A$ and $B$ be two sets for which the following applies: $A \cup B = (A \cap B^{C}) \cup (A^{C} \cap B)$. Show that $A \cap B = \emptyset$. How?! I am seriously stuck. One thought I had is to distribute the right part, so that: $(A \cap B^{C}) \cup (A^{C} \cap B) = (A \cap B^{C} \cup A^{C}) \cup (A \cap B^{C} \cup B^{C})$
I'll add an answer expanding your idea, since that works too! Only, you made a few mistakes. It should be: $(A \cap B^{C}) \cup (A^{C} \cap B) = ((A \cap B^{C}) \cup A^{C}) \cap ((A \cap B^{C}) \cup B)$ Then, $=((A\cup A^{C}) \cap (B^{C}\cup A^{C})) \cap ((A\cup B) \cap (B^{C}\cup B))=$ $=(B^{C}\cup A^{C}) \cap (A\cup B)$ Now $(B^{C}\cup A^{C}) \cap (A\cup B)=(A\cap B)^{C}\cap(A\cup B)=(A\cup B)\setminus (A\cap B)$. We are told this equals $A\cup B$, and since $A\cap B\subseteq A\cup B$, that forces $A \cap B = \emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Show that the closed unit ball $B[0,1]$ in $C[0,1]$ is not compact Show that the closed unit ball $B[0,1]$ in $C[0,1]$ is not compact under the following metrics: $1.\; d(f,g)=\sup_{x\in [0,1]}|f(x)-g(x)|$ $2.\; d(f,g)=\int _0^1 |f(x)-g(x)|\, dx$ My try: In order to show the ball is not compact, it suffices to find a sequence which has no convergent subsequence. How do I approach $1$ and $2$?
As mentioned in previous answers, an animation much aids comprehension. Here is an explicit construction for the closed unit ball with respect to the sup norm metric. For each $n\ge 1$ let $$f_n(x) = \text{the continuous tent function obtained by connecting the points } \begin{cases} (1/(n+1),\,0),\\ \left(\tfrac12\bigl(\tfrac1{n+1}+\tfrac1n\bigr),\,1\right),\\ (1/n,\,0), \end{cases}$$ extended by $0$ outside $[1/(n+1),1/n]$. The tents have disjoint supports and sup norm $1$, so $$d(f_n,f_m)=1\quad\text{whenever } n\ne m.$$ In particular $(f_n)$ is a sequence in the closed unit ball with no Cauchy (hence no convergent) subsequence, so the ball is not compact. To see the failure of compactness through open covers, take the pairwise disjoint balls $B(f_n,1/2)$ together with a ball $B(g,1/4)$ for every $g$ in the closed unit ball that lies in none of them. These open sets cover the closed unit ball, yet each of them contains at most one of the $f_n$: if $f_n,f_m\in B(h,1/2)$ then $d(f_n,f_m)<1$, and a ball $B(g,1/4)$ with $d(g,f_n)\ge 1/2$ for all $n$ contains none of them at all. Hence no finite subcollection covers the infinite family $\{f_n\}$, let alone the ball. An illustration of the disjoint tents can be created using Desmos.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Fourier series of cotangent I have found the Fourier series of $\cot(ax)$ and I get: $$\cot(ax) \sim \frac{ \sin(a \pi)}{a\pi}\left[ \left(\frac{1}{2a^2}\right)- \sum_1^\infty \frac{(-1)^n \cos(nx)}{n^2-a^2}\right]$$ How can I deduce the Fourier series of $\cot(x)$, where $x$ isn't a multiple of $\pi$? Any help please...
Expand the exponential form of $\cot x$: \begin{align}\cot x=\frac{i(e^{ix}+e^{-ix})}{e^{ix}-e^{-ix}}&=i+\frac{2ie^{-ix}}{e^{ix}-e^{-ix}}=i+2ie^{-2ix}(1+e^{-2ix}+e^{-4ix}+\cdots)\\[1ex] &=i+2i\sum_{k\ge1}(\cos2kx-i\sin2kx) \end{align} Since $|e^{-2ix}|=1$, the geometric series is to be understood formally (for instance as an Abel limit, replacing $e^{-2ix}$ by $re^{-2ix}$ and letting $r\to1^-$). Taking real parts then gives the expansion $\cot x \sim 2\sum_{k\ge1}\sin 2kx$; the remaining purely imaginary terms cancel in the same Abel sense, since $\sum_{k\ge1}\cos 2kx$ Abel-sums to $-\tfrac12$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Application of Taylor's Theorem in Number Theory I'm working through Alan Baker's book A Concise Introduction to the Theory of Numbers, and there's an assertion in there that confuses me. Here's the quote: It is easily seen that no polynomial $f(n)$ with integer coefficients can be prime for all $n$ in $\mathbb{N}$, or even for all sufficiently large $n$, unless $f$ is constant. Indeed, by Taylor's theorem, $f(mf(n)+n)$ is divisible by $f(n)$ for all $m$ in $\mathbb{N}$ How is this an application of Taylor's theorem? It's entirely mysterious to me. Thanks in advance for any insight on this.
By Taylor's Theorem $$f(x) = f(n) + f'(n)(x-n) + \frac{f''(n)}{2!}(x-n)^2 + \cdots + \frac{f^{(k)}(n)}{k!}(x-n)^k + h_k(x)(x-n)^k$$ Now take $k$ big enough so that $h_k(x)=0$ (possible since $f$ is a polynomial). So $$f(x) = f(n) + f'(n)(x-n) + \frac{f''(n)}{2!}(x-n)^2 + \cdots + \frac{f^{(k)}(n)}{k!}(x-n)^k$$ (note that the numbers $\frac{f^{(j)}(n)}{j!}$ are integers, since expanding a polynomial with integer coefficients about $x=n$ again gives integer coefficients). Now plug in $mf(n)+n$ for $x$. Every term then contains a factor $f(n)$: the first term is $f(n)$ itself, and each later term contains a power of $x-n = mf(n)$.
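The divisibility is easy to watch in action. In the snippet below (my own; Euler's polynomial $n^2+n+41$ is just a sample choice of $f$) we check $f(mf(n)+n)\equiv 0 \pmod{f(n)}$ for small $m,n$:

```python
import sympy as sp

n = sp.symbols('n')
f = sp.Lambda(n, n**2 + n + 41)   # Euler's prime-generating polynomial

for nn in range(6):
    for m in range(1, 5):
        assert f(m * f(nn) + nn) % f(nn) == 0
print("f(m*f(n) + n) is divisible by f(n) in all tested cases")
```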
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
Formula to get mean and standard deviation of this multi-variable equation $$ \binom n x \times\left(\frac1r\right)^x\times\left(\frac{r-1}r\right)^{n-x} $$ If you have $n$ boxes and have a $\frac1r$ chance to fill each one, this equation returns the chance that you fill exactly $x$ boxes. When $n$ and $r$ are given, the graph is a normal distribution. I read about the Box-Muller algorithm to generate random normally distributed numbers, but I need the mean and standard deviation of the bell curve given $n$ and $r$. I've been working for a while to try to make a formula for it, but to no avail. Help would be very welcome.
Let $X_i$ be the random variable which equals $1$ when box $i$ is filled, and $0$ otherwise. Then the variable you want to know about is $X = \sum_{i = 1}^{n}X_i$. We have: $$E(X) = E(\sum_{i = 1}^{n}X_i) = \sum_{i = 1}^{n}E(X_i)$$ And, since the $X_i$ are independent: $$\sigma^{2} = Var(X) = Var(\sum_{i = 1}^{n}X_i) = \sum_{i = 1}^{n}Var(X_i)$$ Since $X_i$ is $1$ with probability $1/r$ and $0$ with probability $1 - 1/r$, $E(X_i) = 1/r$. So your mean is: $$E(X) = \sum_{i = 1}^{n}\frac{1}{r} = \frac{n}{r}$$ Similarly, the variance of $X_i$ is $(1 - 1/r)(1/r)$, so: $$Var(X) = \sum_{i = 1}^{n}\left(1 - \frac{1}{r}\right)\frac{1}{r} = n\frac{r - 1}{r^2}$$ And so $\sigma = \frac{1}{r}\sqrt{n(r-1)}$
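A simulation (my addition) against these formulas, with $n=12$ boxes and $r=4$:

```python
import numpy as np

n, r = 12, 4
x = np.random.default_rng(0).binomial(n, 1 / r, size=10**6)
print(x.mean(), n / r)                       # both ~3.0
print(x.std(), np.sqrt(n * (r - 1)) / r)     # both ~1.5
```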
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Let $A$ be a subset of a topological space. Prove that $Cl(A) = Int(A) \cup Bd(A)$ Let $A$ be a subset of a topological space. Prove that $Cl(A) = Int(A) \cup Bd(A)$ Here are my defintions: Closure: Let $(X,\mathfrak T)$ be a topological space and let $ A \subseteq X$ . The closure of $A$ is $Cl(A) = \bigcap \{U \subseteq X: U$ is a closed set and $A \subseteq U\}$ Based on this I know $A \subseteq Cl(A)$ Interior:Let $(X, \mathfrak T)$ be a topological space and let $A \subset X$ is the set of all points $x \in X$ for which there exists an open set $U$ such that $x \in U \subseteq A$. My definition of boundary is: Let $(X,\mathfrak T)$ be a topological space and let $A \subseteq X$. A point $x \in X$ is in the boundary of A if every open set containing $x$ intersects both $A$ and $X−A$. My proofs start by picking an element to be in each side then showing it must be in the other side. I have tried to start that here. Let $x \in Cl(A)$ then Let $x \in Int(A) \cup Bd(A)$. Then $x\in Int(A)$ or $x\in Bd(A)$ If $x \in Int(A)$ then
The closure of $A$ is the smallest closed subset of $X$ containing $A$. The interior of $A$ is the largest open subset contained in $A$. So we have that $Int(A) \subseteq A \subseteq Cl(A)$. Let $x \in Bd(A)$. Suppose $ x \notin Cl(A)$. $X \setminus Cl(A)$ is open, and $x \in X \setminus Cl(A)$, so by your definition of boundary, $(X \setminus Cl(A)) \cap A \neq \emptyset$, which contradicts that $ A \subseteq Cl(A)$. So $Cl(A) \supseteq Int(A) \cup Bd(A)$. For the converse we show that $D=Int(A) \cup Bd(A)$ is closed and contains $A$. Suppose $a \in A$ but $ a \notin D$. In particular $a \notin Bd(A)$, so there is an open set $U_{a}$ containing $a$ which does not intersect $X \setminus A$. But then $U_{a}$ is an open set contained in $A$, so $a \in U_{a} \subseteq Int(A) \subseteq D$, a contradiction. So $A \subseteq D$. For each $x \in X \setminus D$, there is an open set $U_{x} \subseteq X\setminus D \subseteq X \setminus A$ containing $x$: since $x \notin Bd(A)$ and $x \notin Int(A)$, some open set $U_x \ni x$ is disjoint from $A$, and such a set also misses $Int(A)$ and $Bd(A)$. Taking the union of these open subsets for each $x \in X\setminus D$ gives us that $X \setminus D$ is open, so $D$ is closed. $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1283975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Which of the following collections are topologies for $\mathbb R$? Am I correct? Which of the following collections are topologies for $\mathbb R$? I think I have these correct; I just want to double check my answers. (a) $\{\mathbb R, \emptyset, (-\infty, 0), (0,\infty)\}$ - Yes, the empty set and the whole line are included, and so are intersections and unions. (b) $\{\mathbb R, \emptyset, (1,4), (2,5)\}$ - No, as the intersection of $(1,4)$ and $(2,5)$ is not included in the topology. (c) $\{U : U = \emptyset$ or $U = \mathbb R$ or $U = (-\infty, b]$ for some $b \in \mathbb R\}$ - I originally thought this was no; now I am going back and saying yes. The other two I feel confident about; this is the one I am shaky on.
You are correct in thinking that (b) is not a topology, for the reason that you gave. However, (a) is also not a topology: it doesn’t contain $(-\infty,0)\cup(0,\infty)=\Bbb R\setminus\{0\}$. Finally, (c) also fails to be a topology: $$\bigcup_{b<0}(-\infty,b]=(-\infty,0)\;,$$ which is not in the family.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of Cauchy-Schwarz inequality from Terry Tao's notes, meaning of "cancelling the phase" I was reading Tao's proof of the Cauchy-Schwarz inequality by exploiting certain inherent symmetries and making some transformations. He says that first we use the fact that the norm is positive, i.e. $$\|u-v\|^2>0$$ to conclude that $\operatorname{Re} (\langle u,v\rangle) \leq \frac{1}{2}\|u\|^2+\frac{1}{2}\|v\|^2$, and then he claims that we make a transformation $$v \mapsto ve^{\iota \theta}$$ and the RHS is clearly preserved under the transformation whereas the LHS changes. Now he says that we choose a $\theta$ to make the LHS as high as possible such that it cancels the phase of $\langle u,v\rangle$. The rest of the proof is clear. But I don't get what he means by cancelling the phase. I mean the transformation is such that $\operatorname{Re} (\langle u,v\rangle)e^{\iota \theta} \mapsto |\langle u,v\rangle|$ for the theta which cancels the phase and for this particular theta we are kind of recovering the imaginary part of the complex scalar and then take its length. But how does it work? I don't understand it.
Cancelling the phase means to choose $\theta$ so that $e^{i\theta}\langle u,v\rangle$ is real. A priori all we know about $\langle u,v\rangle$ is that it is some complex number $\langle u,v\rangle = re^{i\varphi}$, so choosing $\theta = -\varphi$ (i.e. "cancelling the phase" on this complex number) will make $e^{i\theta}\langle u,v\rangle$ a (nonnegative) real number. This optimizes the inequality because $|\text{Re}(e^{i\theta}\langle u,v\rangle)| \leq |\langle u,v\rangle|$, and equality holds precisely when $e^{i\theta}\langle u,v\rangle$ is real.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why 9 mod -7 = -5? Quotient and remainder with negative integers. Forgive me if this question does not belong on this site for it is simplistic and this is my first post, however I do not seem to understand the modulo function when it comes to negative numbers. I'd assume the process for calculating modulo would be the same as with positive numbers: * *9 % 7 = 2 because 9 - (7 * 9/7) = 9 - (7 * 1) = 2 So wouldn't 9 % (-7) = 9 - (-7 * -9/7) = 9 - (-7 * -1) = 2?
Recall that $a \equiv b \bmod n$ if and only if $n$ divides $a - b$. This is the definition. So is it true that $-5 \equiv 9 \bmod -7$?
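The sign convention is precisely what differs between programming languages, which is likely the source of the confusion. In Python the quotient is floored toward $-\infty$, so the remainder takes the divisor's sign; C, C++ and Java truncate toward zero instead, and there `9 % -7` gives `2`. A quick illustration (mine):

```python
print(9 % -7)                 # -5: in Python the remainder has the divisor's sign
print(9 // -7)                # -2 (floored quotient)
print(9 - (-7) * (9 // -7))   # 9 - 14 = -5, consistent with the definition
# both answers fit the definition above: -7 divides 9 - (-5) = 14 and 9 - 2 = 7
```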
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Proving an identity Given $a,b\in\mathbb{R}$ with $a < b$ and defining $F(z):=\int_0^z f(s) \, ds$ with $z \in \mathbb{R}$, how can one establish that $$F(a+b)=F(a)+f(a)b+ b^2\int_0^1 (1-s)f'(a+sb) \, ds,$$ which is part of a larger proof on page 508 of PDE Evans (2nd edition)? Should I perform a change of variable to the second term on the RHS?
Taylor's formula with integral remainder: $$ F(a+b)=F(a) + F'(a) b + \int^{a+b}_a (a+b-t) F''(t) \ dt $$ Make the substitution $t=a+bs,$ and we get $$ F(a+b)=F(a) + F'(a) b + b^2\int^{1}_0 (1-s) F''(a+bs) \ ds. $$ Using $F'=f$ (which holds for $F(x)=\int^x_0 f(w) \ dw$), so that $F''=f'$, we get exactly the required result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove a set of connectives is functionally complete What is the right way of proving that a set of logical connectives is or not functionally complete? For example, if I have {→,∨} how can I show it is or not functionally complete? Any ideas?
To prove that a set of connectives is functionally complete, you simply need to show that you can derive any other logical connective using only this restricted set. In your case, you want to show that you can obtain definitions for $\left\{\land,\leftrightarrow,\neg\right\}$ from $\left\{\to,\lor\right\}$, i.e. the question you have to ask yourself is: What combination of $\left\{\to,\lor\right\}$ and sentence symbols $A$ and $B$ is equivalent to... * $A\land B$ ? * $A\leftrightarrow B$ ? * $\neg A$ ? And by definition, if you cannot show that the set $\left\{\to,\lor\right\}$ is functionally complete, then it is not. Playing around with these questions should give you a good idea of how connectives are related to each other and what functional completeness really means. For a more formal approach, consider the following characterization (Post's criterion): a set of connectives is functionally complete if and only if, for each of the five properties below, it contains at least one connective lacking that property. * monotonic : changing the truth value of any connected variable from false to true never makes the return value change from true to false; * self-dual : reversing the truth values of the connected variables reverses the truth value of the return value; * affine : each connected variable either always or never affects the truth value of the return value; * truth-preserving : when all variables are true, the return value is also true; * falsity-preserving : when all variables are false, the return value is also false. Thus, if you can show that both connectives in $\left\{\to,\lor\right\}$ share one of the above properties, then the set is not functionally complete.
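A concrete way to see that $\{\to,\lor\}$ fails: both connectives are truth-preserving, and truth-preservation survives composition, so no formula built from them can behave like $\neg$. A short check (my illustration):

```python
imp = lambda a, b: (not a) or b   # ->
lor = lambda a, b: a or b         # v

# both connectives output True on the all-True row ...
print(imp(True, True), lor(True, True))   # True True
# ... hence so does any composition, e.g. (p -> q) v (q -> p):
p = q = True
print(lor(imp(p, q), imp(q, p)))          # True
# but negation would need "not True == False", so it cannot be derived
```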
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Prove every group of order less or equal to five is abelian Is it possible to prove that every group of order less or equal to five is abelian? We know that groups of prime order are cyclic and therefore commutative. As the number $4$ is the only composite number $\le5$, it basically remains to show this for groups of order four.
A group of order $1$ is trivial, and groups of order $2,3,5$ are cyclic by Lagrange's theorem, so they are abelian. For a group of order $4$: if it has an element of order $4$, it is cyclic (isomorphic to $\mathbb Z_4$), hence abelian. If every element other than the identity has order $2$, then each element is its own inverse, and any two elements commute: $abab=(ab)^2=e$ gives $ab=b^{-1}a^{-1}=ba$. So the group is abelian (it is the Klein four-group). You can also make the Cayley table (operation table) for a group of order $4$. You will find $4$ possibilities; $3$ of them are the same group with different names for the elements, all isomorphic to $\mathbb Z_4$, and the other one is the Klein four-group. All of these are abelian, as you can check from the table.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
$ \lim_{x\to 0} \frac{(1+x)^{\frac1x}-e+\frac{ex}{2}}{ex^2} $ $$ \lim_{x\to 0} \frac{(1+x)^{\frac1x}-e+\frac{ex}{2}}{ex^2} $$ (can this be a duplicate? I think not) I tried two methods. $1.$ Solving it conventionally as a $1^\infty$ form, with no luck. $2.$ Expanding ${(1+x)^{\frac1x}}$ using the binomial theorem: I grouped the coefficients of $x^0$, which cancelled with $e$, then took the coefficient of $x$, which cancelled with $\frac{ex}{2}$, and so on. Very messy, right? At last I got $\frac13$, but that's not the expected answer! I must have gone wrong somewhere; can anyone help me with this?
Use the Taylor expansion of $(1+x)^{1/x}$: $$(1+x)^{1/x}=e\left(1- \frac{x}{2} + \frac{11x^2}{24} + \cdots\right)$$ The other terms won't be required in this limit. The numerator simplifies to $\frac{11ex^2}{24}+O(x^3)$: the constant terms and the $x$ terms cancel out, so the limit is $\frac{11}{24}$.
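A direct numerical evaluation (my own check) confirms the value $\frac{11}{24}\approx 0.4583$:

```python
import numpy as np

f = lambda x: ((1 + x) ** (1 / x) - np.e + np.e * x / 2) / (np.e * x**2)
for x in [1e-2, 1e-3, 1e-4]:
    print(x, f(x))     # tends to 11/24 = 0.458333...
print(11 / 24)
```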
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Maclaurin Series. (Approximation) Given that $y=\ln \cos x$, show that the first non-zero terms of the Maclaurin series for $y$ are $-\frac{x^2}{2}-\frac{x^4}{12}$. Use this series to find an approximation in terms of $\pi$ for $\ln 2$. My question is how to determine a value of $x$ which is suitable?
We have \begin{align} \ln(\cos(x)) & = \dfrac{\ln(\cos^2(x))}2 = \dfrac12 \cdot \ln(1-\sin^2(x)) = - \dfrac12 \sum_{k=1}^{\infty} \dfrac{\sin^{2k}(x)}k\\ & = -\dfrac12 \sin^2(x) - \dfrac14 \sin^4(x) - \cdots\\ & = -\dfrac12 \left(x-\dfrac{x^3}{3!} + \mathcal{O}(x^5)\right)^2-\dfrac14 \left(x + \mathcal{O}(x^3)\right)^4 + \mathcal{O}(x^6)\\ & = -\dfrac12\left(x^2 - \dfrac{x^4}6\right) - \dfrac{x^4}4 + \mathcal{O}(x^6)\\ & = -\dfrac{x^2}2 - \dfrac{x^4}{12} + \mathcal{O}(x^6) \end{align} We need $\ln(2)$ to be expressed in terms of $\pi$, this means we need $\ln(\cos(x))$ to be some value related to $\ln(2)$. One such value is when $\cos(x) = 1/2$, which implies $$\ln(1/2) = \ln(\cos(2n\pi \pm \pi/3)) = -\dfrac{(2n\pi \pm \pi/3)^2}2-\dfrac{(2n\pi \pm \pi/3)^4}{12} + \mathcal{O}\left((2n\pi\pm \pi/3)^6\right)$$ Since typically we are after good approximation, we would like to have the least possible error term, which in turn forces $n=0$, which gives us that $$\ln(2) \approx \dfrac{\pi^2}{18} + \dfrac{\pi^4}{972}$$ Another way is to take $x=\pi/4$, which gives us that $$-\dfrac12\ln(2) = \ln(1/\sqrt{2}) = \ln(\cos(\pi/4))) \approx - \dfrac{\pi^2}{32} - \dfrac{\pi^4}{3072}$$ This gives us that $$\ln(2) \approx \dfrac{\pi^2}{16} + \dfrac{\pi^4}{1536} + \cdots$$
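Comparing the two approximations numerically (my addition) shows that the $x=\pi/4$ choice is the better one, as expected since the series is evaluated at a smaller $x$:

```python
import math

print(math.pi**2 / 18 + math.pi**4 / 972)    # 0.6485...  (from x = pi/3)
print(math.pi**2 / 16 + math.pi**4 / 1536)   # 0.6803...  (from x = pi/4)
print(math.log(2))                           # 0.6931...
```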
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why is it so hard to prove a number is transcendental? While reading on Wikipedia about transcendental numbers, i asked myself: Why is it so hard and difficult to prove that $e +\pi, \pi - e, \pi e, \frac{\pi}{e}$ etc. are transcendental numbers? Answer by @hardmath: It is generally more difficult to prove a number is transcendental than to prove it is not transcendental, i.e. that it is algebraic. Showing a number x is algebraic amounts to proving it is the root of a polynomial with rational coefficients, and so one often can just exhibit the polynomial and show by computation that a particular x is its root. Proving number x is transcendental amounts to proving no rational polynomial exists with that number as a root, and this requires more work (because we will be "proving the negative", i.e. exhausting all possible polynomials). We know that $\pi$ and $e$ are transcendental numbers, why we can't deduce that $e + \pi \approx 5.859874482048838473822930854632165381954416493075065395941912...$ or $\pi - e \approx 0.423310825130748003102355911926840386439922305675146246007976...$ are also transcendental numbers? Answer by @Matt Samuel: For example, $\pi$ and $1−\pi$ are transcendental, but $\pi+(1−\pi)=1$ is not. Since both of you gave imho a great answer, i don't know whom of you should become the credits...
For example, $\pi$ and $1-\pi$ are transcendental, but $\pi+(1-\pi)=1$ is not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1284948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 1 }
How to calculate a Modulo? I really can't get my head around this "modulo" thing. Can someone show me a general step-by-step procedure on how I would be able to find out the 5 modulo 10, or 10 modulo 5. Also, what does this mean: 1/17 = 113 modulo 120 ? Because when I calculate(using a calculator) 113 modulo 120, the result is 113. But what is the 1/17 standing for then? THANK YOU!
Modulo is counting when knowing only a limited amount of numbers. E.g. modulo three, instead of counting $$0,1,2,3,4,5,6,7,8,9,10,11,...$$ you count $$ \underset{\color{lightgray}0}0, \underset{\color{lightgray}1}1, \underset{\color{lightgray}2}2,\;\; \underset{\color{lightgray}3}0, \underset{\color{lightgray}4}1, \underset{\color{lightgray}5}2,\;\; \underset{\color{lightgray}6}0, \underset{\color{lightgray}7}1, \underset{\color{lightgray}8}2,\;\; \underset{\color{lightgray}9}0, \underset{\color{lightgray}{10}}1, \underset{\color{lightgray}{11}}2,...$$ As you can see, you will end up at zero whenever the actual number is divisible by three. So finding the nearest multiples of three will help you to find its value moldulo three. Example. $362$ is close to $360=3\times 120$. Since it is two larger than $360$, it is equivalent to $2$ modulo three. As explained in the other answers you could have found this using division with rest. The "$1/17$ modulo $120$" is just a short form of the question "what number $x\in\{1,...,119\}$ do I have to multiply by $17$ so that the result $17x$ is equivalent to $1$ modulo $120$?". In this case, it is $113$.
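The "$1/17$ modulo $120$" question can be answered in one line of Python (my illustration; `pow` with a negative exponent needs Python 3.8+):

```python
print(pow(17, -1, 120))   # 113: the x in {1,...,119} with 17*x equivalent to 1 mod 120
print(17 * 113 % 120)     # 1
```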
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 2 }
Prove that if $G$ is a finite group and $H$ is a proper normal subgroup of largest order, then $G/H$ is simple. We define the group action of $G$ on the set of left cosets of $H$ in $G$ by left multiplication. Let $\Phi$ be the associated permutation representation. Then by the Generalised Cayley's Theorem, $G/ \text{Ker}(\Phi)$ is isomorphic to a subgroup of the symmetric group of order equal to the index of $H$ in $G$. Would this mean that $H=\text{Ker}(\Phi)$ ($\text{Ker}(\Phi)$ is the largest normal subgroup of $G$ contained in $H$)? And if so, does this help prove this?
A normal subgroup of $G/H$ is of the form $K/H$ where $K \le G$ is a normal subgroup containing $H$. Since $H$ has the largest order among proper normal subgroups, either $K=H$ or $K=G$, so $K/H$ is either trivial or all of $G/H$. Hence $G/H$ has no nontrivial proper normal subgroups, i.e. it is simple.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does the Laplace operator include the second derivative with respect to time variable? Does the Laplacian of a function $f(x,z,t)$ equal $f_{xx} + f_{yy} + f_{tt}$? We aren't sure whether or not time is included in it or not.
It depends... the Laplacian is always defined with respect to some metric: $\Delta f = \nabla \cdot \nabla f$ and divergence requires a metric. Alternatively, you can define $\Delta$ as the gradient, in the sense of the calculus of variations, of the Dirichlet energy $\int \langle \nabla f, \nabla f\rangle\,dV$ and here again the metric is seen. My guess is that for classical fluid dynamics the metric is simply the Euclidean $dx^2+dy^2+dz^2$ on the spatial variables, in which case time is not included: $\Delta f(x,z,t) = f_{xx} + f_{zz}$ (the middle term $f_{yy}$ vanishes because $f$ does not depend on $y$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If the sum of two squares is divisible by $7$, both numbers are divisible by $7$ How do I prove that if $7\mid a^2+b^2$, then $7\mid a$ and $7\mid b$? I am not allowed to use modular arithmetic. Assuming $7$ divides $a^2+b^2$, how do I prove that the sum of the squares of the residuals is 0?
Since you said that you are not allowed to use modular arithmetic, argue by the contrapositive: suppose $7$ divides neither $a$ nor $b$. Then $a = 7k \pm 1,\ 7k \pm 2,$ or $7k \pm 3$ for some $k \in \Bbb Z$, so $$a^2 = 7(7k^2 \pm 2k) + 1,\quad 7(7k^2 \pm 4k) +4,\quad\text{or}\quad 7(7k^2 \pm 6k + 1)+2.$$ That is, $a^2$ leaves remainder $1$, $4$ or $2$ on division by $7$, and the same holds for $b^2$. Since no two numbers from $\{1,2,4\}$ add up to a multiple of $7$ (the possible sums are $2,3,4,5,6,8$, none divisible by $7$), $7$ never divides $a^2+b^2$.
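A two-line brute-force check (my own) of the key facts, namely that the possible remainders of $a^2$ modulo $7$ for $7\nmid a$ are $\{1,2,4\}$ and that no two of them sum to a multiple of $7$:

```python
residues = sorted({a * a % 7 for a in range(1, 7)})
print(residues)                                   # [1, 2, 4]
print([(i, j) for i in residues for j in residues
       if (i + j) % 7 == 0])                      # []: no bad pair exists
```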
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How to find cotangent? Need to find $3\cot(x+y)$ if $\tan(x)$ and $\tan(y)$ are the solutions of $x^2-3\sqrt{5}\,x +2 = 0$. I tried to solve this and got $3\sqrt{5}\cdot1/2$, but the answer is $-\sqrt{5}/5$.
Since by Vieta's formulas one has $$\tan x+\tan y=-\frac{-3\sqrt 5}{1}=3\sqrt 5,\ \ \ \tan x\tan y=\frac{2}{1}=2,$$ one has $$3\cot(x+y)=3\cdot\frac{1}{\tan(x+y)}=3\cdot\frac{1-\tan x\tan y}{\tan x+\tan y}=\frac{3(1-2)}{3\sqrt 5}=-\frac{\sqrt 5}{5}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Evaluate the complex integral $\int_{C_R}\frac{z^3}{(z-1)(z-4)^2}$ Let $C_R$ be the positively oriented circle with centre $3i$ and radius $R > 0$. Use the Cauchy Residue Theorem to evaluate the integral $$\int_{C_R}\frac{z^3}{(z-1)(z-4)^2}$$ Your answer should state any values of R for which the integral cannot be evaluated. Now I can find the residues of the function easily enough. They are $\frac{1}{9}$ at $z=1$ and $\frac{80}{9}$ at $z=4$. However I have no idea at what radius the points $z=1$ and $z=4$ are included in the region being integrated. Is there a way to calculate the radius at which these points need to be taken into account?
Sketch $3i$, $1$ and $4$ on paper and use the Pythagorean theorem: the distances from the centre $3i$ to the two poles are $|1-3i|=\sqrt{1^2+3^2}=\sqrt{10}$ and $|4-3i|=\sqrt{4^2+3^2}=5$. So the pole $z=1$ lies inside $C_R$ exactly when $R>\sqrt{10}$, the pole $z=4$ exactly when $R>5$, and the integral cannot be evaluated for $R=\sqrt{10}$ or $R=5$, when a pole sits on the contour.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving an Equation for y in terms of x How is it possible to solve the equation: $$ x = \frac{2-y}{y} \sqrt{1-y^2} $$ for $ y $ in terms of $ x $? I know that it is possible to do it, since WolframAlpha gives a complicated answer but entirely composed of elementary functions. However, I am stumped on how to even start. Thanks in advance!
The typical method would be to square both sides (possibly picking up extraneous solutions) and then arrange the terms into a polynomial equation in $y$: $$x = \frac{2-y}{y} \sqrt{1-y^2}$$ $$x^2 = \frac{(2-y)^2}{y^2} (1-y^2)$$ $$y^2 x^2 = (4-4y+y^2)(1-y^2) = (4-4y+y^2) - 4y^2 + 4y^3 - y^4$$ Which yields: $$y^4 - 4y^3 + (x^2 + 3)y^2 + 4y - 4 = 0$$ If this were simply a quadratic in $y$, you could use the quadratic equation. As it stands this is a fourth order polynomial. There is a procedure to find the roots of such a polynomial, and you can find it through the wiki page on Quartic Polynomials. It's a bit of a mess though.
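Numerically one can solve the quartic for a given $x$ and keep only the real roots that satisfy the original (unsquared) equation, discarding the extraneous ones introduced by squaring. A sketch (mine, with numpy; the sample value of $x$ is arbitrary):

```python
import numpy as np

x = 0.5
roots = np.roots([1, -4, x**2 + 3, 4, -4])   # y^4 - 4y^3 + (x^2+3)y^2 + 4y - 4
for y in roots[np.isreal(roots)].real:
    if 0 < abs(y) <= 1:                      # need 1 - y^2 >= 0 and y != 0
        print(y, (2 - y) / y * np.sqrt(1 - y**2))  # keep y only if this equals x
```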
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Some questions about notation in "$[T]_\alpha^\beta$" I just have a few questions about the general meaning of the notation "$[T]_\alpha^\beta$". I would really appreciate if someone would dumb it WAY down to the most basic level (no assumptions, no leaps of logic) because most of the literature I have read on this notation is very scattered. I want to mention that $\alpha$ and $\beta$ are the ordered bases for $R^n$ and $R^m$ respectively. $T$ is the linear transformation from $R^n \to R^m$. Questions: * *What is $[T]$? *What are the subscript and superscript? *Does the order of the subscript and superscript matter (which one is on top or bottom)? *What are the dimensions of $[T]_\alpha^\beta$? Thank you guys so much.
It means the matrix associated to $T$ with respect to the basis $V\subset\mathbb{R}^n$ and the basis $W\subset\mathbb{R}^m$. For example, if we have $T\colon\mathbb{R^2}\to\mathbb{R^3}$ such that $T(x,y)=(x,x+y,y)$, and $V=\{e_1=(1,0);e_2=(0,1)\}$ an ordered basis for $\mathbb{R^2}$ and $W=\{f_1=(1,0,0);f_2=(0,1,0);f_3=(0,0,1)\}$ an ordered basis for $\mathbb{R^3}$, then we have to do: $T(e_1)=T(1,0)=(1,1,0)=1f_1+1f_2+0f_3~~ \Rightarrow [T(e_1)]=[1,1,0]^\intercal$ $T(e_2)=T(0,1)=(0,1,1)=0f_1+1f_2+1f_3~~ \Rightarrow [T(e_2)]=[0,1,1]^\intercal$ So $[T]_V^W=[[T(e_1)][T(e_2)]]=\begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}$ Since $dim(V)=2$ and $dim(W)=3$, the matrix $[T]_V^W$ has order $3\times2$. (As for whether the domain basis goes in the subscript or the superscript: that is purely a notational convention and varies from book to book, so check your text; in every convention the columns record the images of the domain basis expressed in the codomain basis.)
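The recipe "apply $T$ to each basis vector and record the coordinates as columns" is easy to verify mechanically (my own check; with the standard bases, coordinates coincide with the vectors themselves):

```python
import numpy as np

T = lambda v: np.array([v[0], v[0] + v[1], v[1]])   # T(x, y) = (x, x+y, y)
M = np.column_stack([T(e) for e in np.eye(2)])      # columns are T(e1), T(e2)
print(M)                                            # [[1 0], [1 1], [0 1]]
print(M @ np.array([2.0, 5.0]), T([2.0, 5.0]))      # both give (2, 7, 5)
```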
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Q: Nowhere dense sets. Given $X$ a metric space, $A\subset X$ a nowhere dense set. Show that every open ball $B$ contains another open ball $B_1 \subset B$ such that $B_1 \cap A = \emptyset$. EDIT: I modify my proof with the help of a great hint! Given $B(x,\epsilon)$ lets suppose that for every open ball $B_1 \subset B(x,\epsilon)$, $B_1 \cap A \neq \emptyset$. Given $y \in B(x,\epsilon)$ we have that for every $n>N$ with $N$ sufficiently big, $B(y,\frac{1}{n})\cap A \neq \emptyset$ That is, for every $n>N$ there exists $y_{n} \in B(y,\frac{1}{n})\cap A \neq \emptyset$. If we consider the sequence $(y_{n})_{n>N}$ we have that for every $n$, $y_n \in A$ and $y_n \rightarrow y$. Therefore $y \in cl(A)$ which implies $B(x,\epsilon)\subset cl(A)$ with contradicts the fact that $A$ is nowhere dense.
That step is not legitimate. Consider the set $A=\left\{\frac1n:n\in\Bbb Z^+\right\}$ in $\Bbb R$. For each $\epsilon>0$ we have $B(0,\epsilon)\cap A\ne\varnothing$, but we can’t take the limit as $\epsilon\to 0^+$ to conclude that $\{0\}\cap A\ne\varnothing$: this is clearly false. Try this instead. Suppose that every open ball contained in $B(x,\epsilon)$ intersects $A$. Show that this implies that $B(x,\epsilon)\subseteq\operatorname{cl}A$, contradicting the hypothesis that $A$ is nowhere dense in $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
integrating something from a partial derivative $v=\int \frac{2x}{x^2+y^2}\,dy$ I am trying to learn harmonic analysis, and I have $$\frac{\partial u}{\partial x}=\frac{2x}{x^2+y^2}=\frac{\partial v}{\partial y}$$ and I want to get $v$. So what I do is: $$v=\int \frac{2x}{x^2+y^2}\,dy$$ $$=\int\frac{2x}{2x}\frac{1}{a}\,da$$ $$=\ln(a)+f(x)=\ln(x^2+y^2)+f(x)$$ Is that right? My problem is with the constant (which is just some function of $x$, since we integrated with respect to $y$): should it be $\ln(x^2+y^2+f(x))$, or my above $\ln(x^2+y^2)+f(x)$? This is from $\ln(x^2+y^2)$, but the harmonic conjugate I got was $\ln(x^2+y^2)$, which is weird! So my function is analytic at $f=\ln(x^2+y^2)+i\ln(x^2+y^2)$??
Remember that you can treat $x$ as a constant because we are integrating with respect to $y$. This motivates the following: $$\begin{aligned} v &= \int \frac{2x}{x^{2} + y^{2}} dy \\ &= \int \frac{2}{x \left( 1 + \left(\frac{y}{x} \right)^{2}\right)} dy \\ &= \frac{2}{x} \int \frac{dy}{\left( 1 + \left( \frac{y}{x} \right)^{2} \right)}. \end{aligned} $$ That last integral shouldn't be too hard, but let me know if you need more help. As for the constant that is troubling you, you should get $v = f(x,y) + g(x)$, where $f$ is the antiderivative above, because then we have $$\frac{\partial v}{\partial y} = \frac{\partial f(x,y)}{\partial y} + \frac{\partial g(x)}{\partial y}.$$ This gives us $\frac{\partial f(x,y)}{\partial y} = \frac{2x}{x^{2} + y^{2}}$ and $\frac{\partial g(x)}{\partial y} = 0$; that last equality holds because $g$ is not a function of $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Topics in Algebra I.N.Herstein Problem 7 Given that if $A$ and $B$ are cyclic of orders $m$ and $n$ with $\gcd(m,n)=1$, then $A\times B$ is cyclic. Using this, prove that if $u,v\in \mathbb Z$ then $\exists x$ such that $x\equiv u \pmod m$ and $x\equiv v \pmod n$. My try: Now $(u,v)\in \mathbb Z\times \mathbb Z\implies (u\bmod m,v\bmod n)\in \mathbb Z_m\times \mathbb Z_n$. By the above lemma $\mathbb Z_m\times \mathbb Z_n$ is cyclic, so $\exists (a,b)\in \mathbb Z_m\times \mathbb Z_n$ such that $(u\bmod m,v\bmod n)=x(a,b)$ for some $x\in \mathbb Z\implies u\bmod m=xa;\, v\bmod n=xb$. I am getting stuck here. Any help?
Consider the element $(1,1)$. What is the order of this element? It is the smallest $N$ that is both a multiple of $m$ and of $n$. Since $m$ and $n$ are relatively prime, $N=mn$. Thus the group has $mn$ elements and an element of order $mn$, hence it is cyclic. To show $x$ exists, note that $u,v$ determine an element in the product group, $(u,v)$. Since we have found a generator $(1,1)$, this means there exists an $x$ such that $x(1,1)=(x,x)=(u,v)$, hence $x\equiv u\pmod m$ and $x\equiv v\pmod n$.
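The existence proof can also be made constructive; a short helper (my own sketch, using Python's built-in modular inverse) produces the $x$ directly:

```python
from math import gcd

def crt(u, m, v, n):
    """Return x with x = u (mod m) and x = v (mod n), assuming gcd(m, n) = 1."""
    assert gcd(m, n) == 1
    t = (v - u) * pow(m, -1, n) % n   # solve u + m*t = v (mod n)
    return (u + m * t) % (m * n)

x = crt(2, 5, 3, 7)
print(x, x % 5, x % 7)   # 17 2 3
```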
{ "language": "en", "url": "https://math.stackexchange.com/questions/1285961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Proving simple trigonometric identity I need help with this one $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{1- \tan^2\alpha} - \cos\alpha = \sin \alpha $$ I tried moving $\sin\alpha$ to the other side of the equation $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{1- \tan^2\alpha} - \cos\alpha - \sin \alpha = 0 $$ These are the operations I was able to do $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{1- \tan^2\alpha} - \cos\alpha - \sin \alpha = 0 $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{1- \frac{\sin^2\alpha}{\cos^2\alpha}} - \cos\alpha - \sin \alpha = 0 $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{\frac{\cos^2\alpha- \sin^2\alpha}{\cos^2\alpha} } - \cos\alpha - \sin \alpha = 0 $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha\cdot\cos^2\alpha + \cos^3 \alpha}{\cos^2\alpha- \sin^2\alpha} - \cos\alpha - \sin \alpha = 0 $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\cos^3 \alpha}{\sin\alpha} - \cos\alpha - \sin \alpha = 0 $$ I don't see what else I can do with this, so I tried to work on the left part of the equation. $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{1- \tan^2\alpha} - \cos\alpha = \sin \alpha $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{1- \frac{\sin^2\alpha}{\cos^2\alpha}} - \cos\alpha = \sin \alpha $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha + \cos \alpha}{\frac{\cos^2\alpha- \sin^2\alpha}{\cos^2\alpha} } - \cos\alpha = \sin \alpha $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\sin\alpha\cdot\cos^2\alpha + \cos^3 \alpha}{\cos^2\alpha- \sin^2\alpha} - \cos\alpha = \sin \alpha $$ $$ \frac{\sin^2 \alpha}{\sin\alpha - \cos\alpha} + \frac{\cos^3 \alpha}{\sin\alpha} - \cos\alpha = \sin \alpha $$ And I get nowhere again. I have no other ideas; I didn't spot a formula or anything. Any help is appreciated.
$\left(\dfrac{\sin^2 \alpha}{\sin \alpha-\cos\alpha}-(\sin\alpha+\cos\alpha)\right)+\dfrac{\sin \alpha+\cos\alpha}{1-\tan^2\alpha}\\=\left(\dfrac{\sin^2 \alpha-\sin^2 \alpha+\cos^2\alpha}{\sin \alpha-\cos\alpha}\right)+\dfrac{\sin \alpha+\cos\alpha}{1-\tan^2\alpha}\\=\dfrac{\cos^2\alpha}{\sin \alpha-\cos\alpha}+\dfrac{\sin \alpha+\cos\alpha}{1-\tan^2\alpha}\\=\dfrac{\cos^2\alpha-\sin^2\alpha+\sin^2\alpha-\cos^2\alpha}{(\sin \alpha-\cos\alpha)(1-\tan^2\alpha)}\\=0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Show that the comb space has fixed-point property By "comb space", I mean the space $X=([0,1] \times \{0\}) \cup (K \times [0,1])$, where $K=\{ \frac{1}{n} : n \in \mathbb{N}^+ \}$, without the leftmost vertical line segment. How to prove that this space has the fixed-point property? I've tried to embed $X$ as a retract of some space with fixed-point property, but this seems impossible since it is not a closed subspace of $\mathbb{R}^2$.
Let $f:X\to X$ be an arbitrary continuous map. We shall prove that $f$ has a fixed point. Let $I=[0,1]\times\{0\}$ be the bottom interval and let $\pi:X\to I$ be the projection $\pi(x,y)=(x,0)$. The map $\pi\circ f$ maps $I$ to $I$ so it has a fixed point in $I$. Therefore, there exist $x_0$ and $y_0$ such that $$f(x_0,0)=(x_0,y_0).$$ Let $y_1$ be the largest number such that $f(\{x_0\}\times[0,y_1])\subseteq\{x_0\}\times[0,1]$. If $y_1=0$, we have $f(x_0,0)=(x_0,0)$ and we are done. If $y_1=1$, $f$ maps $\{x_0\}\times[0,1]$ to itself, and we again have a fixed point. So assume $0<y_1<1$. Let $$r(x_0,y)=(x_0,\min\{y,y_1\})$$ be the obvious retraction of $\{x_0\}\times[0,1]$ to $\{x_0\}\times[0,y_1]$. Then, $r\circ f$ maps $\{x_0\}\times[0,y_1]$ to itself, so it has a fixed point $(x_0,y)$ in $\{x_0\}\times[0,y_1]$. If this fixed point has $y<y_1$, we are done: the retraction moved nothing below $y_1$, so $f(x_0,y)=(x_0,y)$ as well. Otherwise, $r(f(x_0,y_1))=(x_0,y_1)$, so $f(x_0,y_1)\in\{x_0\}\times[y_1,1]$. But then, for some $\epsilon>0$, we must have $f(\{x_0\}\times(y_1-\epsilon,y_1+\epsilon))\subseteq\{x_0\}\times[0,1]$, contradicting the maximality of $y_1$. This completes the proof. (If you need some more details, feel free to ask.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
comparing MSE of estimations of binomial random variables $X$ is a binomial random variable defined over 12 Bernoulli trials with a success probability of $p$ in each (i.e. $X\sim\operatorname{Bin}(12,p)$). Consider $\hat p=\frac X{10}$. Determine the range for which the mean squared error of $\hat p =\frac X{10}$ is worse than the mean squared error of $\hat p=\frac X {12}$. I have calculated $\operatorname{MSE}\left(\frac X{12}\right)$ and $\operatorname{MSE}\left(\frac X{10}\right)$, which are equal to $\frac 1{12}\left(p-p^2\right)$ and $\frac{12p-8p^2}{100}$ respectively. But comparing them gives such a stupid result: $4p^2> -44p$. Where is my mistake? Do you have any idea? Thanks in advance.
I will write $\tilde p = \frac1{10}X$ and $\hat p=\frac1{12}X$ to avoid confusion. Recall that the mean-squared error of an estimator $\hat\theta$ of a parameter $\theta$ is $$\operatorname{MSE}\left(\hat\theta,\theta\right) = \mathbb E\left[(\hat\theta - \theta)^2\right] = \operatorname{Var}\left(\hat\theta\right) + \operatorname{Bias}\left(\hat\theta,\theta\right)^2, $$ where $$ \operatorname{Bias}(\hat\theta) = \mathbb E\left[\hat\theta\right] - \theta.$$ We can further simplify this as $$\begin{align*} \operatorname{MSE}(\hat\theta) &= \mathbb E\left[\hat\theta^2\right] - \mathbb E\left[\hat\theta\right]^2 + \left(\mathbb E\left[\hat\theta\right]-\theta\right)^2\\ &= E\left[\hat\theta^2\right] - \mathbb E\left[\hat\theta\right]^2 + \mathbb E\left[\hat\theta\right]^2-2\theta\mathbb E\left[\hat\theta\right]+\theta^2\\ &= E\left[\hat\theta^2\right] -2\theta\mathbb E\left[\hat\theta\right]+\theta^2. \end{align*}$$ So we compute $$ \begin{align*} \operatorname{MSE}(\tilde p) &= E\left[\tilde p^2\right] -2 p\mathbb E\left[\tilde p\right]+p^2\\ &= \left(\frac1{10}\right)^2\mathbb E[X^2] - 2p\cdot\frac1{10}\mathbb E[X] + p^2\\ &= \frac1{25}p(3-2p), \end{align*} $$ and $$\begin{align*} \operatorname{MSE}(\hat p) &= E\left[\hat p^2\right] -2 p\mathbb E\left[\hat p\right]+p^2\\ &= \left(\frac1{12}\right)^2\mathbb E[X^2] - 2p\cdot\frac1{12}\mathbb E[X] + p^2\\ &=\frac1{12}p(1-p). \end{align*} $$ The values of $p$ for which $\operatorname{MSE}\left(\tilde p\right)>\operatorname{MSE}\left(\hat p\right)$ satisfy $$ \frac1{25}p(3-2p) > \frac1{12}p(1-p),$$ which are $$(-\infty, -11)\cup(0,\infty).$$ Hence $\operatorname{MSE}\left(\tilde p\right)>\operatorname{MSE}\left(\hat p\right)$ for all $p\in(0,1)$.
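A Monte Carlo check (my addition) of both closed forms:

```python
import numpy as np

rng = np.random.default_rng(0)
for p in [0.1, 0.5, 0.9]:
    x = rng.binomial(12, p, size=10**6)
    print(p,
          np.mean((x / 10 - p) ** 2), p * (3 - 2 * p) / 25,   # MSE of X/10
          np.mean((x / 12 - p) ** 2), p * (1 - p) / 12)       # MSE of X/12
```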
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dual Bundles' Local Trivializations Confusion $$\newcommand{\R}{\mathbb R}$$ Let $\pi:E\to M$ be a rank $k$ smooth vector bundle over a smooth manifold $M$. I will below describe how to form the dual bundle, wherein lies my question. Let $E_p=\pi^{-1}(p)$ for each $p\in M$, and define $E^*=\bigsqcup_{p\in M}E_p^*$. Further define $\pi^*:E^*\to M$ by $\pi^*(E^*_p)=\{p\}$ for all $p\in M$. The main part is to define the local trivializations. For each local trivialization $\Phi:\pi^{-1}(U)\to U\times \R^k$ of $E$ over $U$, define $\Phi^*:\pi^{*-1}(U)\to U\times \R^{k*}$ as $$\Phi^*(\omega)=(p, (\Phi|_{E_p})^{-t}\omega)$$ for all $\omega\in E_p^*$, and all $p\in U$. (The '$t$' denotes the transpose). The problem with the above definition is that a local trivialization is supposed to map $\pi^{*-1}(U)$ to $U\times \R^k$ and not $U\times \R^{k*}$. Now I know that $\R^k$ can be canonically identified with $\R^{k*}$ provided we have an inner product on $\R^k$. But then different inner products will lead to different $\Phi^*$ and consequently different transition functions. Is there a preferred inner product with respect to which the identification of $\R^k$ with $\R^{k*}$ is made?
The identification of $V$ with $V^*$ via a scalar product is done for vector spaces in which there is no preferred basis. This is not the case for $\mathbb R ^n$, which has the usual basis (in terms of which it is constructed, after all) that allows you to define the identification $e_i \mapsto e^i$, where $e^i (v) = e^i (\sum v^j e_j) =v^i$. So the idea is that $\mathbb R^n$ identifies with $(\mathbb R^n)^*$ purely algebraically, with no need for a scalar product.
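As a small numerical illustration (a sketch, not part of the answer; assumes NumPy): under this coordinate identification, the inverse-transpose in the question's $\Phi^*$ is exactly the map that preserves the pairing between covector and vector components, which is why the dual transition functions glue consistently.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))   # stand-in for Phi restricted to one fibre
v = rng.standard_normal(3)        # components of a vector in E_p
w = rng.standard_normal(3)        # components of a covector in E_p^*

# pairing is preserved: <A^{-T} w, A v> = w^T A^{-1} A v = <w, v>
lhs = np.linalg.solve(A.T, w) @ (A @ v)
print(np.allclose(lhs, w @ v))    # True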
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Show that $f(x) = 0$ for all $x \in \mathbb{R}$ $f: \mathbb{R} \to \mathbb{R}$ is continuous with $f(x)=0$ for all $x \in \mathbb{Q}$. Show that $f(x) = 0$ for all $x \in \mathbb{R}$. Can anyone please point me in the right direction as to how I can go about answering this question?
Assume that $$ \exists q \in \mathbb{R} \setminus \mathbb{Q} : f(q) \not = 0$$ Now by trichotomy we have either $f(q) > 0$ or $f(q) < 0$. If we consider the first case, by continuity of $f$ we have $$ \exists \delta > 0 : f(x) > 0 : x \in (q - \delta, q + \delta)$$ (Proving this is a very useful exercise in itself). Can you see what the contradiction is now in the above statement? Remember the hypothesis, and remember that every open interval contains a rational number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Inf is not a stopping time in general If $\{\tau_n\}$, $n=1,2,3,\ldots$ are stopping times with respect to a given filtration $\mathcal F_t$, why is it in general not true that $\inf_n \tau_n$ is also a stopping time? Thanks
I guess we are talking about continuous time: a stochastic basis $(\mathcal F_t)_{t \in [0,\infty)}$, say. Write $\sigma = \inf_n \tau_n$. Can we verify that $\{\sigma \le 5\} \in \mathcal F_5$? Well, suppose $B \in \bigcap_{t>5}\mathcal F_t$ but $B \not\in \mathcal F_5$ (such a $B$ can exist when the filtration is not right-continuous). Then for each $n$, $$ \tau_n(\omega) = \begin{cases} 5+\frac{1}{n},\qquad \omega \in B \\ 17,\qquad \omega\not\in B \end{cases} $$ defines a stopping time, but $$ \sigma(\omega) = \begin{cases} 5,\qquad\omega \in B \\ 17,\qquad\omega\not\in B \end{cases}$$ is not a stopping time.
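To spell out the routine check hidden in "defines a stopping time" (a detail added for completeness): $$ \{\tau_n \le t\} = \begin{cases} \varnothing, & t < 5+\frac1n,\\ B, & 5+\frac1n \le t < 17,\\ \Omega, & t \ge 17, \end{cases} $$ and each of these sets lies in $\mathcal F_t$: trivially for $\varnothing$ and $\Omega$, and for $B$ because $B \in \bigcap_{s>5}\mathcal F_s \subseteq \mathcal F_{5+1/n} \subseteq \mathcal F_t$ whenever $t \ge 5+\frac1n$. The same case analysis for $\sigma$ gives $\{\sigma\le 5\} = B \notin \mathcal F_5$, so $\sigma$ fails the definition.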
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Another combined limit I've tried to get rid of those logarithms, but still, no result has come. $$\lim_{x\to 0^+} \frac{\ln(x+ \sqrt{x^2+1})}{\ln{(\cos{x})}}$$ Please help
It can be computed this way $$ \frac{\ln (x+\sqrt{x^{2}+1})}{\ln {(\cos {x})}}=\frac{\ln (1+(x-1)+\sqrt{x^{2}+1})}{(x-1)+\sqrt{x^{2}+1}}\cdot \frac{(x-1)+\sqrt{x^{2}+1}}{x^{2}}\cdot \frac{x^{2}}{(\cos x)-1}\cdot \frac{(\cos x)-1}{\ln (1+\cos x-1)} $$ $$ \lim_{x\rightarrow 0^{+}}\frac{\ln (1+(x-1)+\sqrt{x^{2}+1})}{(x-1)+\sqrt{x^{2}+1}}=\lim_{y\rightarrow 0}\frac{\ln (1+y)}{y}=1,\ \ \ \ \ \ y=(x-1)+\sqrt{x^{2}+1} $$ $$ \lim_{x\rightarrow 0^{+}}\frac{(x-1)+\sqrt{x^{2}+1}}{x^{2}}\overset{\text{H-Rule}}{=}\lim_{x\rightarrow 0^{+}}\frac{1+\frac{x}{\sqrt{x^{2}+1}}}{2x}=\frac{1}{0^{+}}=+\infty $$ $$ \lim_{x\rightarrow 0^{+}}\frac{x^{2}}{(\cos x)-1}=-2. $$ $$ \lim_{x\rightarrow 0^{+}}\frac{(\cos x)-1}{\ln (1+\cos x-1)}=\lim_{z\rightarrow 0}\frac{z}{\ln (1+z)}=1,\ \ \ \ \ \ z=(\cos x)-1 $$ Therefore $$ \lim_{x\rightarrow 0^{+}}\frac{\ln (x+\sqrt{x^{2}+1})}{\ln (\cos x)}=1\cdot (+\infty )\cdot (-2)\cdot 1=-\infty $$ EDIT. It is possible to compute the second limit without using L'Hospital's rule. Indeed, $$ \begin{eqnarray*} \lim_{x\rightarrow 0^{+}}\frac{(x-1)+\sqrt{x^{2}+1}}{x^{2}} &=&\lim_{x\rightarrow 0^{+}}\frac{\left( (x-1)+\sqrt{x^{2}+1}\right) \left( (x-1)-\sqrt{x^{2}+1}\right) }{x^{2}\left( (x-1)-\sqrt{x^{2}+1}\right) } \\ &=&\lim_{x\rightarrow 0^{+}}\frac{(x-1)^{2}-(x^{2}+1)}{x^{2}\left( (x-1)-\sqrt{x^{2}+1}\right) }=\lim_{x\rightarrow 0^{+}}\frac{-2x}{x^{2}\left( (x-1)-\sqrt{x^{2}+1}\right) } \\ &=&\lim_{x\rightarrow 0^{+}}\frac{-2}{x\left( (x-1)-\sqrt{x^{2}+1}\right) }=\frac{-2}{0^{+}\cdot (-2)}=\frac{1}{0^{+}}=+\infty . \end{eqnarray*} $$
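A quick numerical cross-check of the $-\infty$ conclusion (a sketch using only Python's standard library; for small $x$ the ratio behaves like $-2/x$, since $\ln(x+\sqrt{x^2+1})\approx x$ and $\ln\cos x\approx -x^2/2$):

import math

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    ratio = math.log(x + math.sqrt(x * x + 1)) / math.log(math.cos(x))
    print(x, ratio)   # roughly -20, -200, -2000, -20000: decreasing without bound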
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Find the integer x: $x \equiv 8^{38} \pmod {210}$ Find the integer x: $x \equiv 8^{38} \pmod {210}$ I broke the modulus into prime mods: $$x \equiv 8^{38} \pmod 3$$ $$x \equiv 8^{38} \pmod {70}$$ But $x \equiv 8^{38} \pmod {70}$ can be broken up more: $$x \equiv 8^{38} \pmod 7$$ $$x \equiv 8^{38} \pmod {10}$$ But $x \equiv 8^{38} \pmod {10}$ can be broken up more: $$x \equiv 8^{38} \pmod 5$$ $$x \equiv 8^{38} \pmod 2$$ In the end, I am left with: $$x \equiv 8^{38} \pmod 5$$ $$x \equiv 8^{38} \pmod 2$$ $$x \equiv 8^{38} \pmod 7$$ $$x \equiv 8^{38} \pmod 3$$ Solving each using Fermat's theorem: * *$x \equiv 8^{38}\equiv8^{4(9)}8^2\equiv64 \equiv 4 \pmod 5$ *$x \equiv 8^{38} \equiv 8^{1(38)}\equiv 1 \pmod 2$ *$x \equiv 8^{38} \equiv 8^{6(6)}8^2\equiv 64 \equiv 1 \pmod 7$ *$x \equiv 8^{38} \equiv 8^{2(19)}\equiv 1 \pmod 3$ So now, I have four congruences. How can I solve them?
You made a small slip-up when working $\bmod 2$: here $8\equiv 0\bmod 2$, and $0^n$ is always congruent to $0$, no matter what modulus you are working with. I repeat this step once again. $8^{38}\equiv 3^{38}\equiv3^{9\cdot4}3^{2}\equiv(3^{4})^{9}3^2\equiv1^9\cdot3^2\equiv9\equiv 4\bmod 5$ $8^{38}\equiv0 \bmod 2$ since it is clearly even. $8^{38}\equiv(1)^{38}\equiv 1\bmod 7$ $8^{38}\equiv(-1)^{38}\equiv 1 \bmod 3$ So you have the following system of equations: $x\equiv4 \bmod 5$ $x\equiv 0 \bmod 2$ $x\equiv 1 \bmod 7$ $x\equiv 1 \bmod 3$ There are general ways to solve this, but it is possible to solve it step by step using basic substitutions. We start by writing $x=2k$, since $x\equiv 0 \bmod 2$. We now have $2k\equiv 1 \bmod 3$. Multiplying by two we get $4k\equiv 2\bmod 3$; since $4\equiv 1$ we have $k\equiv 2 \bmod 3$, so $k=3j+2$ and so $x=2(3j+2)=6j+4$. We have $6j+4\equiv 1 \bmod 7$, so $6j\equiv-3\equiv 4 \bmod 7$. Multiplying by $6$ we get $36j\equiv 24\equiv 3 \bmod 7$. Since $36\equiv 1 \bmod 7$ we have $j\equiv 3 \bmod 7$. So $x=6(7l+3)+4=42l+22$. We have $42l+22\equiv 4 \bmod 5$, from here $42l\equiv -18\equiv2 \bmod 5$. Since $42\equiv 2 \bmod 5$ we have $2l\equiv 2 \bmod 5$, so $l\equiv 1 \bmod 5$. So $l=5s+1$ and $x=42(5s+1)+22=210s+64$. So $8^{38}\equiv 64\bmod 210$ Second solution: Instead of using Fermat's theorem, use Carmichael's theorem, which says that if $a$ and $n$ are relatively prime then $a^{\lambda(n)}\equiv 1 \bmod n$. Using this and the fact that $\lambda(105)=12$ we get $8^{38}\equiv 8^{36}\cdot 8^2\equiv (8^{12})^{3}\cdot 8^2\equiv 1^3\cdot 8^2\equiv 64\bmod 105$. Using this and the fact that $8^{38}$ is even we get $8^{38}\equiv 64\bmod 210$ Note: The theorem that tells us that we can separate congruences into congruences mod the power of primes dividing the number is called the Chinese Remainder theorem. This method assures us that the solution exists and that it is unique.
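As a machine check on both computations (an aside, using Python's built-in three-argument pow for modular exponentiation):

# confirms the hand computation above
print(pow(8, 38, 210))   # 64
print(pow(8, 38, 105))   # 64, matching the Carmichael shortcut mod 105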
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 8, "answer_id": 1 }
Electric field off axis inside a charged ring. If I have a charged ring of radius $a$, I'm trying to find the electric field at a point a distance $r$ from the axis, in the plane of the ring, for $r\ll a$. The example sheet hints to use a Gaussian surface of radius $r$, but I'm not sure what purpose a Gaussian surface that encloses no charge serves, so I've tried to do it directly. By symmetry the only component of the field will be radial. Using Coulomb's law in conjunction with the cosine rule I get $$E=\frac {\sigma} {4\pi \epsilon_0} \int_{0}^{2\pi} {\frac {r-a\cos(\theta)}{((a^2+r^2)-2ar \cos(\theta))^{3/2}} \ d\theta}$$ Is this integral possible using first year university maths? Is there an easier way to find the electric field off axis, for $r\ll a$, in the plane of a charged ring?
So if we take a very small Gaussian pillbox centred on the origin, of height $2z$ and radius $r$, then in the limit the flux out of each of the top and bottom faces is: $$\frac {Qz\pi r^2}{4\pi\epsilon_0(a^2+z^2)^{3/2}}$$ Therefore, as the total charge enclosed is zero and we know the field through the sides of our pillbox is radial and of constant magnitude, we arrive at: $$ \frac {2Qz\pi r^2}{4\pi\epsilon_0(a^2+z^2)^{3/2}} - 4\pi rzE = 0$$ (the minus sign records that the radial field points inward, toward the axis). This rearranges to: $$E=\frac{Qr}{8\pi\epsilon_0(a^2+z^2)^{3/2}}$$ which is: $$\frac{Qr}{8\pi\epsilon_0a^3}$$ if we take the Taylor series and drop the $\frac{z^2}{a^2}$ and higher terms in the limit $z\ll a$. Seems to be quite wishy washy but it does get the right answer. Thanks to wltrup for help!
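The off-axis Coulomb integral from the question can also be checked numerically against this small-$r$ formula. Here is a sketch assuming SciPy is available, in units where $Q = 1$ and $1/(4\pi\epsilon_0) = 1$ (so the prediction reads $E_r \approx -r/(2a^3)$, the sign showing the field points back toward the axis):

import numpy as np
from scipy.integrate import quad

a = 1.0   # ring radius

def E_r(r):
    # dq = (Q / 2 pi) dtheta along the ring; radial field in the ring's plane
    f = lambda t: (r - a * np.cos(t)) / (a**2 + r**2 - 2 * a * r * np.cos(t)) ** 1.5
    return quad(f, 0.0, 2.0 * np.pi)[0] / (2.0 * np.pi)

for r in [0.01, 0.05, 0.1]:
    print(r, E_r(r), -r / (2 * a**3))   # quadrature vs the small-r prediction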
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
any good way to approximate this non-convex function with convex function? There is a non-convex constraint in my optimization problem, which is given by $\displaystyle -xy\log\left(1+\frac{z}{xy}\right)$. Obviously, it is neither convex nor concave. Is there any good convex approximation method to approximate this function? Thank you for your help.
I don't know if this is particularly helpful in your situation, but you might consider the 'double' Legendre transform. What I mean is the following: Let $f$ represent your function. I assume $f: D \to \mathbb{R}$, where $D \subset \mathbb{R}^3$ is some suitable domain. The Legendre transform of $f$, denoted by $f^{*}$, is the function defined by \begin{equation} f^{*}(y) = \sup_{x \in D}(\langle y, x\rangle - f(x)) \end{equation} where $y$ is any point where the $\sup_{x \in D}(\langle y, x\rangle - f(x)) < \infty$. (The angled brackets $\langle\cdot,\cdot \rangle$ represent the inner-product on $\mathbb{R}^3$.) We can define $f^{**}$ as the Legendre transform of $f^*$. It has the following properties: * *$f^{**}$ is convex. *$f^{**} \leq f$. *If $f$ is finite, but not necessarily convex, then $f^{**}$ is its convex envelope (that is, the largest convex function which is smaller than $f$).
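A one-dimensional illustration of the biconjugate idea (a hedged sketch with NumPy; the question's function is three-dimensional, but the mechanics of the double transform are the same, and the double-well function here is just a stand-in):

import numpy as np

x = np.linspace(-2.0, 2.0, 401)
f = (x**2 - 1.0) ** 2                 # sample non-convex function

# discrete Legendre transform over a grid of slopes, then transform back
y = np.linspace(-40.0, 40.0, 801)
f_star = np.max(y[:, None] * x[None, :] - f[None, :], axis=1)
f_bc = np.max(x[:, None] * y[None, :] - f_star[None, :], axis=1)

print(np.max(f_bc - f) <= 1e-9)   # True: f** never exceeds f
print(f_bc[200])                  # ~0 at x = 0: the convex envelope fills the well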
{ "language": "en", "url": "https://math.stackexchange.com/questions/1286896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Discrete metric, countable basis? Give an example of a metric space which does not have a countable basis. I was thinking of some uncountable set, with a metric which results in an uncountable number of open subsets. Which resulted in this: $$(\mathbb{R}, d_{\text{discrete}})$$ Where $$d_\text{discrete}(x,y) = \begin{cases} 1 & \text{if } x\not = y\\ 0 & \text{if } x=y \end{cases}$$ If I understand this metric correctly then the induced topology $\tau = \{ U\subseteq \mathbb{R}: U \text{ is }d_\text{discrete}\text{-open}\}$ would consists of all subsets of $\mathbb{R}$. And then this would include an uncountable number of sets. Intuition tells me this cannot have a countable basis. But how do I prove this? Would the following be valid? It doesn't feel very formal... Proof? Say there is a countable basis $\mathscr{B}$. Then $(\forall U \in \tau)(U = \bigcup_i B_i)$ where $B_i\in \mathscr{B}$. But since $\mathscr{B}$ is countable, then $U$ must be countable which leads to a contradiction. ($U$ can be uncountable, like $U=[0,1]$)
Your example is fine, but your argument is not: $\Bbb R$ with the usual topology, for instance, is second countable but has uncountably many open sets. Let $\mathscr{B}$ be a base for the discrete topology on $\Bbb R$. For each $x\in\Bbb R$ the set $\{x\}$ is open, so for each $x\in\Bbb R$ there is a $B_x\in\mathscr{B}$ such that $x\in B_x\subseteq\{x\}$. Clearly this means that $B_x=\{x\}$, so $\{x\}\in\mathscr{B}$ for each $x\in\Bbb R$. Since these sets are distinct for distinct $x$ and $\Bbb R$ is uncountable, $\mathscr{B}$ must be uncountable; hence no countable base exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum of the digits of a perfect square Prove that the sum of the digits of a perfect square can't be 2, 3, 5, 6, or 8. I'm completely stumped on this one; how would I go about proving it?
If your square number had the sum of digits equal to 3 or 6, then it must be a multiple of 3. But a square that is a multiple of 3 must be a multiple of 9. Therefore the sum of its digits must be a multiple of 9. 3 and 6 are not multiples of 9. Therefore 3 and 6 are impossible. The remaining cases 2, 5 and 8 follow from the same circle of ideas: the sum of digits of any number is congruent to the number modulo 9, and a square is congruent to $0$, $1$, $4$ or $7 \pmod 9$, so a digit sum of 2, 3, 5, 6 or 8 is impossible.
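A brute-force check of the mod-9 claim (an aside in Python; a short search that finds no counterexamples):

# digit sums of squares, reduced mod 9, only ever hit {0, 1, 4, 7}
seen = {sum(map(int, str(n * n))) % 9 for n in range(100_000)}
print(sorted(seen))   # [0, 1, 4, 7] -- so digit sums 2, 3, 5, 6, 8 never occur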
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
clarification of a logic proof I am a bit confused on what this question is asking me to prove: Prove $$ \exists z\forall x\in\mathbb{R}^{+}[\exists y(y - x = y/x)\leftrightarrow x \neq z] $$ Am I asked to prove that there exists a z where the bi-conditional statement is true? Or is z a given and I should just prove the statement in the brackets for some z? I put that thought aside and showed that the statement in brackets is true when z = 1, but I just want to be sure that I did what the question asked.
It is conventional that quantifiers bind from left to right. Therefore, to prove this statement, you must first choose a $z$. Then, your adversary picks an $x$, and regardless of this choice, you must now prove that the biconditional holds. To do so you must prove that under certain circumstances $y$ exists, and under other circumstances it does not.
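For the record, the choice $z=1$ from the question does work (a short verification): for $x\in\mathbb{R}^{+}$, $$ y - x = \frac{y}{x} \iff xy - x^2 = y \iff y(x-1) = x^2, $$ so a witness $y = \dfrac{x^2}{x-1}$ exists precisely when $x \neq 1$, while for $x = 1$ the equation reads $y - 1 = y$, which no $y$ satisfies. Hence the biconditional holds for every $x\in\mathbb{R}^{+}$ with $z=1$.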
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The maximality of a family of pairwise disjoint meager open sets implies the denseness of its union Consider the following theorem from A. Kechris's Classical Descriptive Set Theory: (8.29) Theorem. Let $X$ be a topological space and $A \subseteq X$. Put $$U(A) = \bigcup \{ U\text{ open} : U \Vdash A \}.$$ Then $U(A) \setminus A$ is meager, and if $A$ has the [Baire Property], $A \setminus U(A)$, and thus $A \mathop{\Delta} U(A)$, is meager, so $A =^* U(A)$. (Here $U \Vdash A$ means that $A$ is comeager in $U$, or $U \setminus A$ is meager.) In following the proof for the case $A = \varnothing$, we begin with the family $$\mathscr U=\{\mathcal U: \mathcal U\text{ is a pairwise disjoint subfamily of meager open subsets of }X \}.$$ Then let $ \{U_i \}_{i\in I} $ be a maximal element of $\mathscr U$. (Such a family $ \{U_i \}_{i\in I} $ can be chosen by Zorn's Lemma.) Letting $W=\bigcup_{i\in I} U_i$, the maximality of $\{ U_i \}_{i \in I}$ implies that $W$ is dense in $U(\varnothing)$. My question is, why does the maximality of $\{ U_i \}_{i \in I}$ imply the denseness of the set $W$ in $U(\varnothing)$?
If $W$ is not dense in $U(\varnothing)$, then $U(\varnothing) \not\subseteq \overline{W}$, and so $U(\varnothing) \setminus \overline{W}$ is a nonempty open set. Picking any $x \in U(\varnothing) \setminus \overline{W}$, by the definition of $U(\varnothing)$ there must be an open meager $U$ containing $x$. In particular, $U \cap (U(\varnothing) \setminus \overline{W})$ is a nonempty open set. Now this set is meager because it is a subset of the meager $U$, and it is disjoint from each $U_i$ (being disjoint from $\overline W \supseteq W$), so adjoining it to $\{ U_i \}_{i \in I}$ would yield a strictly larger pairwise disjoint family of meager open sets, contradicting the maximality of $\{ U_i \}_{i \in I}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Matrices such that $M^2+M^T=I_n$ are invertible Let $M$ be an $n\times n$ real matrix such that $M^2+M^T=I_n$. Prove that $M$ is invertible Here is my progress: * *Playing with determinant: one has $\det(M^2)=\det(I_n-M^T)$ hence $\det(M)^2=\det(I_n-M)$ and $\det(M^T)=\det(I_n-M^2)$, hence $\det(M)=\det(I_n-M)\det(I_n+M)$ Combining both equalities yields $$\det(I_n-M)(\det(I_n-M)\det(I_n+M)-1)=0$$ * *Playing with the original assumption: transposing yields $(M^T)^2+M=I_n$, and combining gives $(M^2-I_n)^2=I_n-M$, that is $M^4-2M^2+M=0$. $M$ is therefore diagonalizable and its eigenvalues lie in the set $\{0,1,-\frac{1+\sqrt{5}}{2},\frac{\sqrt{5}-1}{2}\}$ * *Misc Multiplying $M^2+M^T=I_n$ by $M$ in two different ways, one has $MM^T=M^TM$ * *Looking for a contradiction ? Supposing $M$ is not invertible, there is some $X$ such that $MX=0$. This in turn implies $M^TX=X$... So what ?
as a footnote, just to look at the simplest cases: in dimension 1, $m^2 + m - 1=0$ so $m=\frac12(-1 \pm \sqrt{5})$, which is nonzero. in 2 dimensions, set $M=\begin{pmatrix} a & b \\c & d \end{pmatrix}$ so $$ M^{\mathrm{T}}=\begin{pmatrix} a & c \\b & d \end{pmatrix} $$ and $$ M^2 = \begin{pmatrix} a^2+bc & b(a+d) \\c(a+d) & d^2+bc \end{pmatrix} $$ so the matrix condition $M^2+M^{\mathrm{T}}=I$ gives the conditions: $$ a^2+bc +a =1 \\ d^2+bc +d = 1 \\ b(a+d)+c= 0 \\ c(a+d)+b=0 $$ Now $a$ and $d$ are both roots of $x^2+x+(bc-1)=0$. If $a\neq d$, they are the two distinct roots, so $a+d=-1$ and $ad=bc-1$, hence $$ |M|=ad-bc=-1 $$ and $M$ is non-singular. Since $a+d=-1$, the condition $b(a+d)+c=0$ forces $b=c$, and we obtain a 1-parameter family of solutions: $$ a,d=\frac12 \left(-1\pm\sqrt{5-4b^2} \right) $$ In the degenerate case $a=d$, the last two conditions form a linear system in $(b,c)$ with determinant $4a^2-1$. If $4a^2\neq1$, it forces $b=c=0$ and $a^2+a=1$, giving the scalar solutions $M=aI$ with $|M|=a^2=1-a\neq0$; the case $a=d=-\frac12$ leads back to the boundary member $b=c=\pm\frac{\sqrt5}2$ of the family above (again $|M|=-1$); and $a=d=\frac12$ is impossible (it would need $b^2=-\frac14$). In every case $M$ is invertible.
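A quick numerical spot-check of the family (a sketch with NumPy; $b=1/2$ is picked arbitrarily within $|b|\le\sqrt5/2$):

import numpy as np

b = 0.5
a = (-1 + np.sqrt(5 - 4 * b**2)) / 2   # a root of x^2 + x + (b^2 - 1) = 0
d = -1 - a                              # the other root, since a + d = -1
M = np.array([[a, b], [b, d]])

print(np.allclose(M @ M + M.T, np.eye(2)))   # True: M^2 + M^T = I
print(np.linalg.det(M))                      # -1.0, so M is invertible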
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Rolling two dice... Let $A_n$ be the number of fives, $B_n$ the number of sixes and $C_n$ the number of eights (as totals of the two dice) in $n$ rolls of two dice. For which $n$ do we have: $E(A_n) < E(\min(B_n,C_n))$ ?
I ran a program with PARI/GP. With two dice, the probabilities of rolling a total of five, six, eight, or anything else are $p_1=\frac4{36}$, $p_2=\frac5{36}$, $p_3=\frac5{36}$ and $p_4=\frac{22}{36}$, so $E(A_n)=n/9$, and we search for the first $n$ with $E(\min(B_n,C_n))>n/9$:

{
  p1 = 1/9; p2 = 5/36; p3 = 5/36; p4 = 11/18;  \\ P(total = 5), P(6), P(8), P(other)
  n = 0; gef = 0;
  while(gef == 0, s = 0; su = 0; n = n + 1;
    for(a = 0, n, for(b = 0, n, for(c = 0, n, for(d = 0, n,
      if(a + b + c + d == n, m = n!/a!/b!/c!/d! * p1^a * p2^b * p3^c * p4^d;
        su = su + m*min(b, c); s = s + m)))));  \\ su = E(min(B_n, C_n)); s should be 1
    if(su > n/9, gef = 1);                      \\ E(A_n) = n/9
    print(n, " ", s, " ", su*1.0, " ", n/9*1.0, " ", (su - n/9)*1.0))
}

The end of the output is 57 1 6.338345777056660390204361588 6.333333333333333333333333333 0.005012443723327056871028254579 So, the answer should be $n=57$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
show that $\frac{1}{F_{1}}+\frac{2}{F_{2}}+\cdots+\frac{n}{F_{n}}<13$ Let $F_{n}$ be the Fibonacci numbers, i.e. $F_{n}=F_{n-1}+F_{n-2}$, $F_{1}=F_{2}=1$. Show that $$\dfrac{1}{F_{1}}+\dfrac{2}{F_{2}}+\cdots+\dfrac{n}{F_{n}}<13$$ If we use the closed-form expression $$F_{n}=\dfrac{1}{\sqrt{5}}\left(\left(\dfrac{1+\sqrt{5}}{2}\right)^n-\left(\dfrac{1-\sqrt{5}}{2}\right)^n\right)$$ then $$\dfrac{n}{F_{n}}=\dfrac{\sqrt{5}n}{\left(\left(\dfrac{1+\sqrt{5}}{2}\right)^n-\left(\dfrac{1-\sqrt{5}}{2}\right)^n\right)}$$ Well, and now I'm stuck and don't know how to proceed
Here's one rough approach which works for $n$ sufficiently large. The idea is that for large $n$, we have that $$ F_n \approx \phi^n/\sqrt{5} $$ since the conjugate root is less than one, and so it tends to zero. So consider now the generating function $$ G(q) = \sum_{n=1}^\infty \frac{q^n}{F_n} $$ We note that your sum approaches $\big(q\frac{d}{dq}G(q)\big)_{q=1}$ and so we just need to evaluate that expression. Using the approximation above, we see that $$ G(q) \approx \sum_{n=1}^\infty \sqrt{5}\frac{q^n}{\phi^n} = \sqrt{5}\frac{q}{\phi - q} $$ and so taking the derivative we get $$ q\frac{d}{dq}G(q) \approx \frac{\sqrt{5}\phi q}{(\phi - q)^2} $$ which, when evaluated at $q = 1$, yields $9.4721\ldots < 13$ There are obviously some details that would probably need to be filled in here (how good are the approximations?), but this could give you a rough start. Edit Here is a full proof. We first define $G_n(q) = \sum_{k=1}^n \frac{kq^k}{F_k}$ and $G(q) = \sum_{k=1}^\infty \frac{kq^k}{F_k}$. Clearly, $G_n(q) < G(q)$ (if $q > 0$). So our goal is to bound $G(1)$. Define $D = q\frac{d}{dq}$. We have: $$ \begin{align} G(q) &= D\sum_{n=1}^\infty \frac{q^n}{F_n} \\ &= D\sum_{n=1}^\infty \frac{\sqrt{5}q^n}{\phi^n - \phi'^n} \\ &= D\sum_{n=1}^\infty \frac{\sqrt{5}q^n}{\phi^n\big(1 - (\phi'/\phi)^n\big)} \end{align} $$ The point now is that we always have that $$ \frac{1}{1 - (\phi'/\phi)^n} \le \frac{1}{1 - (\phi'/\phi)^2} $$ since this is equivalent to $(\phi'/\phi)^n \le (\phi'/\phi)^2$, which is vacuously true if $n$ is odd (the LHS is negative) and true if $n$ is even since $|\phi'/\phi| < 1$. Thus $$ G(q) = D\sum_{n=1}^\infty \frac{\sqrt{5}q^n}{\phi^n\big(1 - (\phi'/\phi)^n\big)} \le \frac{\sqrt{5}}{1 - (\phi'/\phi)^2}D\sum_{n=1}^\infty \frac{q^n}{\phi^n} $$ from which we have (looking at the previous argument) that $$ G_n(1) < G(1) \le \frac{\sqrt{5}}{1 - (\phi'/\phi)^2}\frac{\phi}{(\phi - 1)^2} \approx 11.08 < 13 $$
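To see how much slack the bound leaves, one can just sum the series numerically (a standard-library sketch; the terms decay geometrically, so 100 terms is plenty):

# partial sums of n / F_n settle quickly
a, b, s = 1, 1, 0.0
for n in range(1, 101):
    s += n / a          # a holds F_n
    a, b = b, a + b
print(s)                # about 9.32, comfortably below both 11.08 and 13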
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 1 }
Convergence of $\sum_{n=0}^{\infty} \left(\sqrt[3]{n^3+1} - n\right)$ I have encountered the following problem: Determine whether $$\sum \limits_{n=0}^{\infty} \left(\sqrt[3]{n^3+1} - n\right)$$ converges or diverges. What I have tried so far: Assume that $a_n = \sqrt[3]{n^3+1} - n$. * *$n$th term test: $\lim \limits_{n\to\infty} a_n = 0$ $\Rightarrow$ inconclusive. *$p$-series test: can't be used *geometric series test: can't be used *alternating series test: not alternating, can't use *telescoping series: does not look like it (after rewriting first few terms) *integral test: integrating the $a_n$ is outside the scope of my course (checked result with wolfram) *ratio test: simplification of the limit is not possible due to number of expanding terms *comparison / limit comparison test: no idea what I should compare it to I would be grateful for any help.
Hint We can apply a cubic analogue of the technique usually called multiplying by the conjugate, which itself is probably familiar from working with limits involving square roots. In this case, write $$u = \sqrt[3]{n^3 + 1},$$ so that $$u^3 - n^3 = (n^3 + 1) - (n^3) = 1.$$ On the other hand, factoring gives (in analogy to the difference of perfect squares factorization $a^2 - b^2 = (a - b)(a + b)$ used in the square root case) $$u^3 - n^3 = (u - n) (u^2 + un + n^2).$$ Explicitly, multiplying gives that we can write our sum as \begin{align} \sum_{n = 0}^{\infty} (u - n) &= \sum_{n = 0}^{\infty} (u - n) \cdot \frac{u^2 + un + n^2}{u^2 + un + n^2} \\ &= \sum_{n = 0}^{\infty} \frac{u^3 - n^3}{u^2 + un + n^2} \\ &= \sum_{n = 0}^{\infty} \frac{1}{u^2 + un + n^2} . \end{align} Now, $u > n$, so for $n \geq 1$ the summand satisfies $$\frac{1}{u^2 + un + n^2} < \frac{1}{3 n^2}.$$ Setting aside the $n = 0$ term (which equals $1$), the given series is bounded by $$1 + \frac{1}{3} \sum_{n = 1}^{\infty} \frac{1}{n^2},$$ which converges by, e.g., the $p$-test.
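Numerically, the rewritten form is also the stable way to sum the series (a quick sketch; the naive difference $u - n$ loses precision for large $n$, while $1/(u^2+un+n^2)$ does not):

def term(n):
    u = (n**3 + 1) ** (1.0 / 3.0)
    return 1.0 / (u * u + u * n + n * n)

print(sum(term(n) for n in range(100_000)))  # roughly 1.47: the series converges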
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Absolute Value Equation $$x^3+|x| = 0$$ One solution is $0.$ We have to find the other solution (i.e., $-1$). $$Solution:$$ CASE $1$: If $x<0,~|x| = -x$, we can write $x^3+|x| = 0$ as $-x^3-x=0$ $$x^3+x=0$$ $$x(x^2+1)=0$$ $$\Longrightarrow x=0, or, x=\sqrt{-1}$$ Please tell me where I've gone wrong.
Note first that for $x<0$ we have $x^3+|x| = x^3 - x$: only the $|x|$ term picks up a sign, not $x^3$, which is where your Case 1 went wrong. So consider the function $$f(x) = \begin{cases} x^3 - x & if \, x \le 0,\\x^3+x &if \, 0 \le x. \end{cases} $$ It has one negative zero at $x = -1$ and a zero at $x = 0.$ You can see from the function definition that $f(x) > 0$ for $x > 0.$ The graph of $y = f(x)$ has a corner at $(0,0)$, which is also a local minimum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the value of an indefinite integral Find $$\int \frac{dx}{(x+1)^{1/2}+(x+1)^{1/3}}$$ I have tried letting $u=(x+1)^{1/2}+(x+1)^{1/3}$, but it did not get me anywhere with this indefinite integral. Please give me a clue to solve it.
$$\begin{align*} \int \frac{dx}{\left ( x+1 \right )^{1/2}+ \left ( x+1 \right )^{1/3}} &\overset{u^6=x+1}{=\! =\! =\! =\!}\int \frac{6u^5}{u^3+ u^2}\, du \\ &= \int \left ( 6u^2 - 6u - \frac{6}{u+1} + 6 \right )\, du\\ &= u\left ( 2u^2-3u+6 \right )-6\log (u+1) \end{align*}$$ Now substitute $u$ with $\sqrt[6]{x+1}$ and add the constant, and you are done.
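The antiderivative is easy to double-check by differentiation, for instance with SymPy (a sketch; simplify should reduce the difference to zero):

import sympy as sp

x = sp.symbols('x', positive=True)
u = (x + 1) ** sp.Rational(1, 6)
antiderivative = u * (2 * u**2 - 3 * u + 6) - 6 * sp.log(u + 1)
integrand = 1 / (sp.sqrt(x + 1) + sp.cbrt(x + 1))
print(sp.simplify(sp.diff(antiderivative, x) - integrand))   # expected: 0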
{ "language": "en", "url": "https://math.stackexchange.com/questions/1287830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solve $|1 + x| < 1$ I'm trying to solve $|1 + x| < 1$. The answer should be $ -2 < x < 0$ which wolframalpha.com agrees with. My approach is to divide the equation into: $1+x < 1$ and $1-x < 1$ and then solve those two: $ 1+x < 1 $ $ x < 0 $ $ 1 - x < 1$ $ -x < 0$ $x > 0$ And this gives me $ x = 0$. Where am I going wrong?
Every exercise dealing with $| \cdot |$ can be solved as follows: \begin{align}|1+&x| < 1 \\ (\iff) -1 < 1&+x < 1 \\ (\iff) -2 < &x < 0 \end{align} As for your attempt: the second case of $|1+x|<1$ should be $-(1+x)<1$, i.e. $x>-2$; you wrote $1-x<1$ instead, which is why your two conditions came out as $x<0$ and $x>0$ and you lost the left endpoint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1288140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }