Verify the triple angle formula Verify that $$\tan(3x) = \frac{3 \tan(x) - \tan^3(x)}{1 - 3 \tan^2(x)}$$ I have tried simplifying the right side as follows: $$\tan(3x) = \frac{\tan(x)(3 - \tan^2(x))}{1 - 3 \tan^2(x)}$$ but then I get stuck trying to verify the equation.
$$\tan 3x =\tan(x+2x)= \frac{\tan x + \tan 2x}{1 - \tan x \tan 2x}=\frac{\tan x + \frac{2\tan x}{1-\tan^2x}}{1 - \tan x \cdot\frac{2\tan x}{1-\tan^2x}}=\frac{\tan x(1-\tan^2x)+2\tan x}{1-\tan^2x-2\tan^2x}=\frac{3\tan x- \tan ^3x}{1-3\tan^2x}$$
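Not part of the original answer, but the identity is easy to sanity-check numerically; here is a minimal Python sketch (the helper name `rhs` is just for illustration):

```python
import math

def rhs(x):
    # right-hand side of the triple angle formula
    t = math.tan(x)
    return (3*t - t**3) / (1 - 3*t**2)

# spot-check at points away from the poles of tan(3x)
for x in [0.1, 0.7, 1.0, -0.4]:
    assert math.isclose(math.tan(3*x), rhs(x), rel_tol=1e-9)
```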
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Limits at infinity by rationalizing I am trying to evaluate this limit for an assignment. $$\lim_{x \to \infty} \sqrt{x^2-6x +1}-x$$ I have tried to rationalize the function: $$=\lim_{x \to \infty} \frac{(\sqrt{x^2-6x +1}-x)(\sqrt{x^2-6x +1}+x)}{\sqrt{x^2-6x +1}+x}$$ $$=\lim_{x \to \infty} \frac{-6x+1}{\sqrt{x^2-6x +1}+x}$$ Then I multiply the function by $$\frac{(\frac{1}{x})}{(\frac{1}{x})}$$ Leading to $$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{(\frac{-6}{x})+(\frac{1}{x^2})}+1}$$ Taking the limit, I see that all x terms tend to zero, leaving -6 as the answer. But -6 is not the answer. Why is that?
It leads to $$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{1-(\frac{6}{x})+(\frac{1}{x^2})}+1}$$ And so the limit is $-3$
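As a quick numerical check of the corrected limit (a sketch, not part of the original answer), the expression settles near $-3$ for large $x$:

```python
import math

def f(x):
    return math.sqrt(x*x - 6*x + 1) - x

# the expression approaches -3 (not -6) as x grows
for x in [1e4, 1e6, 1e8]:
    assert abs(f(x) - (-3.0)) < 1e-3
```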
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Prove that $\max_{J\subset \{ 1,2,\dots,n \} } \left| \sum_{j \in J} z_j\right| = \max_{t\in[0,2\pi]} \sum_{j=1}^{n} \mathrm{Re}^{+}(e^{it}z_j)$. Consider a finite subset $A=\{z_1,z_2,\dots,z_n\}$ of $\mathbb{C}$ and the function $\mathrm{Re}^{+}(z) = \mathrm{Re}(z)$ if $\mathrm{Re}(z)>0$ and $0$ if $\mathrm{Re}(z)\leq 0$. I have to prove that: $\max_{J\subset \{ 1,2,\dots,n \} } \left| \sum_{j \in J} z_j\right| = \max_{t\in[0,2\pi]} \sum_{j=1}^{n} \mathrm{Re}^{+}(e^{it}z_j)$. Geometrically this means that if you have $n$ points in the plane, the maximum over rotations $t$ of the sum of the positive $x$-components of the rotated points equals the largest modulus of the sum over some subset of $A$.
One direction: $\max_{J\subset\{1,\dots,n\}} |\sum_{j\in J} z_j|\le\max_{t\in[0,2\pi]} \sum_{j=1}^n \Re^+(e^{it}z_j)$: Choose $J$ so that the max on the LHS is attained. Let $t=-\arg\sum_{j\in J} z_j$. Then $|\sum_{j\in J} z_j|=\sum_{j\in J} \Re(e^{it}z_j)$. I claim that for each $k\in J$, $\Re(e^{it}z_k)\ge0$ and for each $k\notin J$, $\Re(e^{it}z_k)\le0$. To see this, let $k\in J$. Then by the choice of $J$, we have $$\sum_{j\in J} \Re(e^{it}z_j)=|\sum_{j\in J} z_j|\ge|\sum_{j\in J-\{k\}} z_j|\ge\sum_{j\in J-\{k\}} \Re(e^{it}z_j),$$ which shows $\Re(e^{it}z_k)\ge0$. For $k\notin J$, a similar argument shows $\Re(e^{it}z_k)\le0$. Then $$ |\sum_{j\in J} z_j|=\sum_{j\in J} \Re(e^{it}z_j)=\sum_{j=1}^n \Re^+(e^{it}z_j). $$ The other direction is easier: for every $t\in[0,2\pi]$, let $J_t=\{j\in\{1,\dots,n\}:\Re(e^{it}z_j)\ge0\}$. Then $$ \sum_{j=1}^n \Re^+(e^{it}z_j)=\sum_{j\in J_t} \Re(e^{it}z_j)=\Re\Big(e^{it}\sum_{j\in J_t} z_j\Big)\le \Big|\sum_{j\in J_t} z_j\Big|. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
10 dice are rolled. What is the probability of getting 6 dice that are even numbers and the other four dice are 3's? Ten dice are rolled. What is the probability of getting six dice that are even numbers and the other four dice are $3$'s? My approach is that you have $6^{10}$ total outcomes, and

- $\Pr(\text{6 dice with even numbers}) = (1/2)^6$
- $\Pr(\text{the other 4 with 3's}) = (1/6)^4$

So, the probability of the question will be $\frac{(1/2)^6 \cdot (1/6)^4}{6^{10}}$ But I am not confident that this is correct.
Thank you Arthur for correcting me, I got a bit carried away. It's night-time here where I'm at, after all. Out of the 10 dice you roll, 4 need to be 3's, and the other 6 must each show an even number; but there are many ways in which this condition can be satisfied. For example, you can roll $(3, 3, 3, 3, 2, 2, 2, 2, 2, 2)$, or you can roll $(3, 2, 3, 2, 3, 2, 3, 2, 2, 2).$ All in all, there are $\binom{10}{4}$ such choices of positions for the 3's. The final solution, therefore, if we denote $A$ as the set of all outcomes where the conditions are satisfied, is $$P(A)={\binom{10}{4}(\frac{1}{2})^6(\frac{1}{6})^4}.$$ Edit 2: Again, thank you. I think it should be correct now, finally.
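The answer's formula can be verified against a direct count over equally likely outcomes (a sketch added for verification; exact arithmetic via `Fraction` avoids rounding questions):

```python
from fractions import Fraction
from math import comb

# direct count: choose the 4 positions that show a 3; each of the
# remaining 6 dice independently shows one of the 3 even values
favorable = comb(10, 4) * 3**6
p_count = Fraction(favorable, 6**10)

# the answer's formula
p_formula = comb(10, 4) * Fraction(1, 2)**6 * Fraction(1, 6)**4

assert p_count == p_formula
```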
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Induction Proof: $(1+x)^n = 1+x^n$ for even $n$ in $\mathbb{F}_2[x]$ I'm trying to work on a proof by induction. The statement is: Let n be even. Then, $(x+1)^n = x^n + 1$ for $n\in\mathbb{N}\cup\{0\}$ and $(x^n+1) \in \mathbb{F}_2[x]$ Base case: $n=0$ and $n=2$ both satisfy the condition, fairly trivially. I include the $n=2$ case specifically to draw upon later. We suppose true for $n=2k$ (as we require n even), so that we assume: $(x+1)^{2k} = x^{2k} + 1$ Then, if $n=2k+2$: $(x+1)^{2k+2} = (x+1)^{2k} (x+1)^2$ $\Rightarrow (x+1)^{2k+2} = (x^{2k}+1)(x^2+1)$, where the first RHS bracket follows from the inductive assumption, and the second bracket was shown to hold earlier. Then, expanding RHS gives: $(x+1)^{2k+2} = x^{2k+2} + 1 + x^2 + x^{2k}$ This is where I come unstuck. I'm hoping to show that $(x+1)^{2k+2} = x^{2k+2} + 1$ , but I can't see a way to get rid of the extra terms on the RHS to complete my proof. For what it's worth, the actual question I'm working on simply concerns the case $n=6$, and while I could do this by showing that $(x+1)$ is a factor, then divide it out, and then show that $(x+1)$ is still a factor, and repeat 6 times, that feels like an unusually ugly approach. I can't see any other obvious way to handle the problem, so hints appreciated.
$(1+x)^6=1+6x+15x^2+20x^3+15x^4+6x^5+x^6$ in $\mathbb{Z}[x]$, hence $$ (1+x)^6=1+x^2+x^4+x^6 $$ in $\mathbb{F}_2[x]$. So the claim isn't true. It is however the case that $(1+x)^n=1+x^n$ in $\mathbb{F}_2[x]$ if $n$ is a power of $2$.
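The counterexample and the power-of-two claim can both be checked by reducing binomial coefficients mod 2 (a verification sketch, not part of the original answer):

```python
from math import comb

def coeffs_mod2(n):
    # coefficients of (1+x)^n over F_2, constant term first
    return [comb(n, k) % 2 for k in range(n + 1)]

# n = 6: (1+x)^6 = 1 + x^2 + x^4 + x^6, so the claim for even n fails
assert coeffs_mod2(6) == [1, 0, 1, 0, 1, 0, 1]

# but (1+x)^n = 1 + x^n does hold when n is a power of 2
for n in [1, 2, 4, 8, 16, 32]:
    assert coeffs_mod2(n) == [1] + [0] * (n - 1) + [1]
```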
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving $ \overline{E \cup F} = \overline {E} \cup \overline{F}$ and $ \overline{E \cap F} \subset \overline {E} \cap \overline{F}$ Let $E, F \subset X$, prove that $ \overline{E \cup F} = \overline {E} \cup \overline{F}$. For further clarification: I'm referring to $\overline{E}$ as E closure. $E'$ is the set of limit points, defined as $E' = \{ p : ( E \cap N_r (p) ) \setminus \{p\} \neq \emptyset \ \ \forall r >0 \}$. This is also in a topological space. I proved it as follows: $\overline{E \cup F} = \overline{E \cup E' \cup F \cup F'}$ $\\ = \overline{(E \cup E') \cup (F \cup F')}$ $\\ = \overline{(E \cup E')} \cup \overline{(F \cup F')}$ $\\ = \overline{(E)} \cup \overline{F}$ The next one is a little tricky for me. Prove $ \overline{E \cap F} \subset \overline {E} \cap \overline{F}$. Here's my go at it: Let $x \in \overline{E \cap F}$. Then $x \in \overline{ (E \cup E') \cap (F \cup F')} \rightarrow x \in \overline{E} \cap \overline{F}. $ Feedback would be much appreciated! I've also been trying to come up with examples that would help me visualize these statements a little better.
To prove that $\overline{E \cap F} \subset \overline{E} \cap \overline{F}$: Let $x \in \overline{E \cap F}$. There exists $(x_n)_n \subset E \cap F$ such that $x_n \underset{n \rightarrow \infty}{\rightarrow} x$. $(x_n)_n \subset E \cap F$ means that $(x_n)_n \subset E$ and also that $(x_n)_n \subset F$. Can you finish this?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the limit of $\lim_{n\to\infty}{a^n/n!}$ $$\lim_{n\to\infty}\frac{a^n}{n!}$$ $a>0$, $n\in \mathbb{N}$. If possible, a solution through the squeeze theorem. Not sure how to solve it.
We know $$\mathrm{e}^x = \sum_{n=0}^{\infty} \frac{ x^n }{n!} $$ has radius of convergence $R=\infty$. Thus, it better converge for $x=a$. Since a convergent series has terms tending to $0$, it follows that $$ \lim_{n \to \infty} \frac{ a^n }{n!} = 0 $$
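A small numerical illustration of how fast $n!$ overtakes $a^n$ (added as a sketch; computing the term by repeated multiplication avoids overflow):

```python
a = 10.0
t = 1.0
terms = []
for n in range(1, 201):
    t *= a / n          # t == a**n / n!
    terms.append(t)

# once n exceeds a, the terms decrease; they tend to 0
assert terms[-1] < terms[-2] < terms[-3]
assert terms[-1] < 1e-100
```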
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
The need for the Gram–Schmidt process As far as I understood Gram–Schmidt orthogonalization starts with a set of linearly independent vectors and produces a set of mutually orthonormal vectors that spans the same space that starting vectors did. I have no problem understanding the algorithm, but here is a thing I fail to get. Why do I need to do all these calculations? For example, instead of doing the calculations provided in that wiki page in example section, why can't I just grab two basis vectors $w_1 = (1, 0)'$ and $w_2 = (0, 1)'$? They are clearly orthonormal and span the same subspace as the original vectors $v_1 = (3, 1)'$, $v_2 = (2, 2)'$. It is clear that I'm missing something important, but I can't see what exactly.
You can also get into function spaces where it's not clear what the basis you can just grab from is. The Legendre polynomials can be constructed by starting with the functions $1$ and $x$ on the interval $x \in [-1,1]$, and using Gram-Schmidt orthogonalization to construct the higher order ones. The second order polynomial is constructed by removing the component of $x^2$ that points in the direction of $1$, for example.
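The Legendre construction described above can be reproduced symbolically; this is a sketch (using SymPy, not part of the original answer) of Gram-Schmidt with the $L^2$ inner product on $[-1,1]$:

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # L^2 inner product on [-1, 1]
    return sp.integrate(f * g, (x, -1, 1))

# Gram-Schmidt on 1, x, x^2 (orthogonalized but not normalized)
basis = []
for v in [sp.Integer(1), x, x**2]:
    for u in basis:
        v = v - inner(v, u) / inner(u, u) * u
    basis.append(sp.expand(v))

# the third vector is proportional to the Legendre polynomial (3x^2 - 1)/2
assert basis[2] == sp.expand(x**2 - sp.Rational(1, 3))
```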
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 6, "answer_id": 0 }
Does the two-element set have a categorical description in the category of (finite) sets? So more or less what I ask in the title: is it possible to identify (uniquely up to bijection) the two-element set in the category of sets as an object that has a particular (categorical) property? EDIT: Following Hanno's answer, I would like to mention that I know $2$ is a subobject classifier. What I am instead searching for here is some other property that $2$ has (if there is one) and which is not intrinsic to toposes.
The two-element set, call it $\Omega$, represents the subset-functor: For any set $X$, there is a natural bijection $$(\ddagger):\qquad\text{Hom}_{\textsf{Set}}(X,\Omega)\ \ \cong\ \ \text{Subset}(X).$$ Because of this, it is called a Subobject Classifier. By the Yoneda-Lemma, an object is determined up to unique isomorphism by the datum of a natural isomorphism $(\ddagger)$. The two elements $0,1:\ast\to\Omega$ of $\Omega$ can be recovered from $(\ddagger)$ as those corresponding to $\emptyset,\ast\in\text{Subset}(\ast)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do I find the sum of a sequence whose common difference is in Arithmetic Progression? Like in the following series: $1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91$ And also, how do I find its $n^{th}$ term?
I enjoyed several of the solutions here. However, the idea of substituting values and solving for $a$, $b$, $c$ seems really long and inefficient. Below is a more efficient method. As codetalker pointed out: $a(n+1)-a(n)$ is some linear term, $1+n$ for example. So now what you do is make a telescoping series: $a(2)-a(1)=1+1$, $a(3)-a(2)=1+2$, ..., $a(n)-a(n-1)=1+(n-1)$. Adding all these equations, you see that $a(n) = a(1)+(n-1)\cdot 1+\frac{(n-1)n}{2}$ (I have used the formula for the sum of the first $n-1$ integers). You know $a(1)$, so now you have $a(n)$. For the summation you can apply the formulae the other answers specified for sums of numbers and squares. Hope this makes these problems more efficient to do.
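The telescoping formula is easy to check against the given series (a verification sketch; the closed form $a(n)=n(n+1)/2$ follows with $a(1)=1$):

```python
def a(n):
    # telescoping: a(n) = a(1) + sum_{k=1}^{n-1} (1 + k), with a(1) = 1
    return 1 + sum(1 + k for k in range(1, n))

seq = [1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91]
assert [a(n) for n in range(1, 14)] == seq

# closed form: a(n) = 1 + (n-1) + (n-1)n/2 = n(n+1)/2
assert all(a(n) == n * (n + 1) // 2 for n in range(1, 200))
```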
{ "language": "en", "url": "https://math.stackexchange.com/questions/1961952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 3 }
The tea bag problem: probability of extracting a single bag of tea Suppose you have a bunch of tea bags in a box, initially in pairs. Let us suppose the box initially contains only joined pairs of tea bags, say $N_0$ of them (thus making for a total of $2N_0$ tea bags). Every time you want to make yourself a tea, you put a hand in the box and randomly extract a tea bag. Sometimes you will find yourself with a joined pair, in which case you split it, take one for your tea, and put the other back into the box. If you instead extract a single tea bag (which was already split before), you just take it. Now if you ever happened to be in a similar situation, you will probably have noticed that after a while you will almost always extract single tea bags and seldom find doubles (which is not surprising of course). The question is, what exactly is the probability $p_k$ of extracting a single tea bag, after $k$ tea bags have already been picked? Suppose for this problem that each time there is an equal probability of extracting any of the tea bags, regardless of them being joined with another or not, so that after the first step (in which we necessarily extract and split a double) the probability of extracting a single bag is $p_1=\frac{1}{2N_0-1}$. It is relatively easy, just by computing the values of $p_k$ for the first $k$s, to see that the answer to the problem is quite nice: $$p_k = \frac{k}{2N_0 -1}.$$ How can we prove this? An interesting variation of the problem is asking what happens if we instead consider the picking of a pair as a single event (instead that as two, as in the above considered case). With this assumption the previous formula does not hold, as computing the first values of $p_k$ shows: $$ p_1 = \frac{1}{N_0}, \\ p_2 = \frac{2(N_0-1)}{N_0^2} .$$
(This is a condensed version of @lulu 's answer.) If on draw $k+1$ I pick a certain bag $b$, then any other bag, in particular the partner of $b$, is among the $k$ previously drawn bags with probability $${k\over 2N_0-1}\ .$$
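The formula $p_k=k/(2N_0-1)$ can also be confirmed by exact dynamic programming over box states, with $N_0$ initial pairs (the code's `N`); this is a verification sketch, not part of the original answer:

```python
from fractions import Fraction

def p_single(N, k):
    # state (d, s): d joined pairs and s single bags left in the box;
    # dist maps states to probabilities after k bags have been taken
    dist = {(N, 0): Fraction(1)}
    for _ in range(k):
        new = {}
        for (d, s), pr in dist.items():
            total = 2 * d + s
            if d:  # split a pair, take one bag, return the other
                key = (d - 1, s + 1)
                new[key] = new.get(key, 0) + pr * Fraction(2 * d, total)
            if s:  # take an already-split single bag
                key = (d, s - 1)
                new[key] = new.get(key, 0) + pr * Fraction(s, total)
        dist = new
    # probability that draw k+1 picks a single bag
    return sum(pr * Fraction(s, 2 * d + s) for (d, s), pr in dist.items())

N = 5
for k in range(2 * N):
    assert p_single(N, k) == Fraction(k, 2 * N - 1)
```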
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
If $p(x)$ is a cubic polynomial with $p(1)=3,p(0)=2,p(-1)=4$, then what is $\int_{-1}^{1} p(x)dx$? Q. If $p(x)$ is a cubic polynomial with $p(1)=3,p(0)=2,p(-1)=4$, then $\int_{-1}^{1} p(x)dx$=__? My attempt: Let $p(x)$ be $ax^3+bx^2+cx+d$ $p(0)=d=2$ $p(1)=a+b+c+d=3$ $p(-1)=-a+b-c+d=4$ From them, we get $b=1.5$, $d=2$ and $a+c=-0.5$ I could not progress any further.
This method will work. However, it is easier to apply Simpson's rule for approximating an integral (https://en.wikipedia.org/wiki/Simpson%27s_rule). You have been given all the required information about the polynomial, and fortunately Simpson's rule is exact for polynomials up to degree 3!
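Carrying out the suggestion (a sketch added for verification): Simpson's rule with nodes $-1,0,1$ gives $\frac{1}{3}(p(-1)+4p(0)+p(1)) = \frac{1}{3}(4+8+3) = 5$, which matches the exact integral $2b/3+2d$ with $b=1.5$, $d=2$ (the choice $a=1$, $c=-1.5$ below is one arbitrary cubic fitting the data):

```python
# Simpson's rule with nodes -1, 0, 1 (h = 1) is exact for cubics
simpson = (4 + 4 * 2 + 3) / 3
assert abs(simpson - 5.0) < 1e-12

# cross-check with an explicit cubic fitting the data:
# d = 2, b = 1.5, and any a, c with a + c = -0.5, e.g. a = 1, c = -1.5
a, b, c, d = 1.0, 1.5, -1.5, 2.0
assert a + b + c + d == 3.0       # p(1) = 3
assert -a + b - c + d == 4.0      # p(-1) = 4
exact = 2 * b / 3 + 2 * d         # odd powers integrate to 0 on [-1, 1]
assert abs(exact - 5.0) < 1e-12
```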
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Rigidity of surface groups I am interested in the following result: Theorem: A torsion-free group which contains the fundamental group of a closed surface as a finite-index subgroup must be the fundamental group of a closed surface. This is a consequence of a harder theorem stating that PD²-groups are surface groups; see Eckmann's survey, Poincaré duality groups of dimension two are surface groups. A similar kind of rigidity appears for free groups and free abelian groups. Do you know if alternative proofs exist?
You can also use Tukia's theorem, from the 1980's, which shows that a uniform convergence subgroup of the homeomorphism group of the circle is either conjugate to a Fuchsian group or is one of an explicitly described class of groups which are not torsion free. Your group does indeed have a uniform convergence action on the circle, and the kernel must be finite, and so the theorem does apply to your group. See Tukia's paper "Homeomorphic conjugates of Fuchsian groups" in J. Reine Angew. Math. There's a history behind these kinds of theorems which goes back to Nielsen, but early work had some errors. Zieschang was the first to work out some cases with full rigor (and the one who identified the errors in early works), and I believe that this paper of Tukia may be the first to rigorously cover your case of interest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Explain solution of this system of non-linear congruence equations So I have a system of non-linear congruence equations (I may be wrong with the terminology): \begin{cases} x^3 \equiv 21\ (\textrm{mod}\ 23) \\ x^5 \equiv 17\ (\textrm{mod}\ 23) \end{cases} Somewhere I've read that to solve this system one should:

1. Find the solution of $3\cdot a + 5\cdot b = 1$ with the Extended Euclidean Algorithm
2. Use the $a$ and $b$ values from the previous step in the formula $x \equiv 21^a\times17^b\ (\textrm{mod}\ 23)$
3. If $a$ or $b$ is negative, calculate the modular inverse of $21$ or $17$ and use it in the second formula with $-a$ or $-b$

And I don't understand why this works. I've tried to perform some calculations to derive the second formula but didn't succeed. :( Can you please explain this to me?
Since $3$ and $5$ are both coprime to $\varphi(23)=22$, both the maps $$ f:x\mapsto x^3,\qquad g:x\mapsto x^5 $$ are bijective on $\mathbb{F}_{23}$. In particular, since $3^{-1}\equiv 15\pmod{22}$ and $5^{-1}\equiv 9\pmod{22}$, $$ f^{-1}:x\mapsto x^{15},\qquad g^{-1}:x\mapsto x^9 $$ and $x^3\equiv 21\pmod{23}$ is equivalent to $x\equiv 7\pmod{23}$, as well as $x^5\equiv 17\pmod{23}$ is equivalent to $x\equiv 7\pmod{23}$. So $\color{red}{x\equiv 7\pmod{23}}$ is the wanted solution.
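These computations are quick to confirm with modular exponentiation (a verification sketch, not part of the original answer):

```python
# phi(23) = 22; the inverse exponents used in the answer
assert (3 * 15) % 22 == 1    # 3^{-1} = 15 (mod 22)
assert (5 * 9) % 22 == 1     # 5^{-1} = 9  (mod 22)

x = pow(21, 15, 23)          # inverting x -> x^3
assert x == 7
assert pow(17, 9, 23) == 7   # inverting x -> x^5 gives the same root
assert pow(7, 3, 23) == 21 and pow(7, 5, 23) == 17
```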
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Verifying compatibility of symplectic and metric structure of $\mathbb{R}^{2n}$ I was reading about on wikipedia under the Hermitian Manifold page that for a almost-complex structure on a manifold $M$ that we have the following: $$\omega(\cdot,\cdot) = g(J\cdot,\cdot).$$ I am having trouble with the explicit calculation of this relation between the symplectic form and the inner product on $\mathbb C^n$. I have tried using $\mathbb{R}^{2n}$ with its usual symplectic form $\omega_0$ as $\mathbb C^n$ where $z_j = x_j + iy_j$, and I am using $i = J$. Generally I start out with: $$\begin{align} g(J(u),v) &= \sum_{j=1}^n J(u_j)\bar v_j, \\ &= \langle J(\tilde u),\tilde v\rangle + i\omega_0(J(\tilde u),\tilde v), \end{align}$$ where that inner product is the usual one on $\mathbb R^{2n}$ and $\tilde\cdot$ indicates the change in coordinates from $z_j$ to $x_j,y_j$. But beyond there I am a bit fuzzy on what to do since this so far seems incorrect, getting $J$ in the /real/ inner product, and also in the /real/ symplectic form. Is it possible that someone can give me a hint/nudge in the right direction on this problem (and also perhaps correct me where I have been mistaken)? Thanks!
Using $z_j = x_j + \sqrt{-1} y_j$, the standard symplectic structure on $\mathbb C^n$ is $$ \omega = \sum_{i=1}^n dx_i \wedge dy_i.$$ Writing $(x, y) = \sum_{i=1}^nx^i e_i + y^i f_i$, $J$ is given by $$ J e_i = f_i,\ \ \ Jf _j = - e_j.$$ Then by checking directly, $$\begin{cases} \omega(e_i, e_j) = g(Je_i, e_j) = 0, \\ \omega(f_i, f_j) = g(Jf_i, f_j) = 0, \\ \omega (e_i, f_j) = g(Je_i, f_j) = \delta_{ij} \end{cases}$$ So $\omega$ and $g(J\cdot, \cdot)$ agree on basis and thus are the same. Note that in general the compatibility is part of the assumption.
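The compatibility can also be confirmed numerically; below is a sketch on $\mathbb{R}^6$ with coordinates ordered $(x_1,\dots,x_n,y_1,\dots,y_n)$, in which $J$ and the matrix of $\omega$ take block form (not part of the original answer):

```python
import numpy as np

n = 3
I, Z = np.eye(n), np.zeros((n, n))

# J e_i = f_i, J f_i = -e_i in the ordering (x_1..x_n, y_1..y_n)
J = np.block([[Z, -I], [I, Z]])
# omega = sum_i dx_i ^ dy_i, i.e. omega(u, v) = u^T Omega v
Omega = np.block([[Z, I], [-I, Z]])

rng = np.random.default_rng(1)
for _ in range(10):
    u = rng.normal(size=2 * n)
    v = rng.normal(size=2 * n)
    # compatibility: omega(u, v) = g(J u, v) = <J u, v>
    assert np.isclose(u @ Omega @ v, (J @ u) @ v)
```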
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Locally compact metric space I'm trying to prove that a metric space is locally compact iff every closed ball is compact, using the more general definition that applies to Hausdorff spaces, that every point has a compact neighbourhood. So call $X$ my space. The only non trivial thing to prove is that every closed ball is compact, assuming $X$ is locally compact. So consider $N$ a compact neighbourhood of some $x\in X$. Then as a neighbourhood, it contains $B(x,r)$ for some $r$. So it contains $\bar{B}(x,r/2)$. This is closed inside $N$ which is compact, so it's also compact. So I've proven that at any point there is a compact closed neighbourhood ball. Surely it's not too hard to prove all the bigger closed balls are compact ?
If $(X,d)$ is a metric space, then $d'(x,y) = d(x,y)/(1+d(x,y))$ defines a new metric inducing the same topology as $(X,d)$. But then, since $d'<1$ everywhere, $X$ itself is the closed ball of radius $1$ centered anywhere. Hence if, in addition, $X$ is locally compact but not compact, the metric space $(X,d')$ admits a closed non-compact ball.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
The probability of probability I am trapped by a logical premise. We know coins have no memory and each throw is an independent event. Thus, after 20 heads, it should not really matter which side you bet on. But there is also a normal distribution, and getting far from the mean is improbable. Isn't there some sort of statistical tendency to return to the mean? A black swan or something? The roulette record is 35 reds. If you happen to be at the 36th red in 2016, is it exactly the same betting on either color when you know the streak is probably not going to continue? Indeed, in empirical terms a series of 50 reds, in contrast to another random pattern, is so unusual that I'm sure any scientist would believe the roulette wheel is not working properly, yet all permutations are equally probable in the end. Isn't there some kind of marginal probability as soon as you get far from the mean? Is it equally probable to get heads in the next 5 tries when you are at throw 0 as when you are at 25? Is a particular random pattern of heads and tails as probable as 20000 heads in a row? Mathematically, an infinite series of heads is possible. But what's the probability of the outcomes not being equally distributed after 30, 3000, 30000 throws?
Dice and coins indeed have no memory; in fact, if I show you a coin, you don't even know how many tosses ago there was a "head". Is the coin smarter than you? Note that if you throw a die 5 times, then the outcome $(4,2,5,1,6)$ has exactly the same probability as $(6,6,6,6,6)$. The normal distribution can be derived purely combinatorially: you can just count how many 6-tuples contain "one six", how many contain "two sixes" etc. You can check all of your questions and concerns yourself, by running a simple simulation. Of course, with a real die, if you throw it 10 times and you get 10 times a "six", then it is very likely that the eleventh will be a "six" too, that is, the die is not fair.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Compute the gradient of mean square error Let $Y = \begin{pmatrix} y_1 \\ \vdots \\ y_N\end{pmatrix}$ and $X = \begin{pmatrix} x_{11} & \cdots & x_{1D} \\ \vdots & \ddots & \vdots \\ x_{N1} & \cdots &x_{ND}\end{pmatrix}$. Let also $e = y - Xw$ and let's write the mean square error as $L(w) = \frac{1}{2N} \sum_{n=1}^{N} (y_n - x_n^Tw)^2 = \frac{1}{2N} e^T e$. I want to prove that the gradient of $L(w)$ is $-\frac{1}{N} X^T e$. What would be a way of proving this?
Since $$ L(w) = \frac{1}{2N}\sum_{n=1}^N(y_n - (Xw)_n)^2 $$ it follows that $$ \frac{\partial L}{\partial w_j} = -\frac{1}{N}\sum_{n=1}^N x_{nj}(y_n - (Xw)_n) = -\frac{1}{N}x_j^Te, $$ where $x_j$ is the $j$th column of $X$. Therefore, $$ \nabla L(w) = -\frac{1}{N}X^Te $$
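The derived formula can be sanity-checked against finite differences on random data (a sketch added for verification; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 4
X = rng.normal(size=(N, D))
y = rng.normal(size=N)
w = rng.normal(size=D)

def L(w):
    e = y - X @ w
    return (e @ e) / (2 * N)

grad = -(X.T @ (y - X @ w)) / N   # the claimed formula -X^T e / N

# compare with central finite differences
eps = 1e-6
num = np.zeros(D)
for j in range(D):
    step = np.zeros(D)
    step[j] = eps
    num[j] = (L(w + step) - L(w - step)) / (2 * eps)

assert np.allclose(grad, num, atol=1e-6)
```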
{ "language": "en", "url": "https://math.stackexchange.com/questions/1962877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Existence of a local transverse embedded submanifold for a flow Let $M$ be a smooth manifold and $\Phi$ the flow of a non-vanishing vector field. There always exists around every point $x$ at least locally an embedded submanifold $S_x$ that is transversal to $\Phi$ at $x$, right? How can one show this? I thought that somehow, since $\Phi$ foliates $M$, there is a foliated atlas, so that for any $x$ there is a chart $(U,\varphi_\perp,\varphi_\parallel)$ with $x\in U$ and $\{y: \varphi_\parallel(y)=\varphi_\parallel(x)\}$ is naturally transverse to $\Phi$.Would this go in the right direction?
You can suppose that $x$ is in the domain of a chart $U$ that you identify with an open subset of $R^n$ and $x=0$. Suppose that $X$ is the vector field which does not vanish, $X(x)$ is a vector of $R^n$, consider an hyperplane (which contains the origin) $H$ defined by $\alpha(u)=0$ which does not contain $X(x)$. The function defined on $U$ by $f(y) =\alpha(X(y))$ is continue. Since $f(x)\neq 0$, there exists an open interval $f(x)\in I$ and $0$ is not in $I$. $f^{-1}(I)=V$ is open since $f$ is continue. If, $y\in V\cap H, f(y)=\alpha(X(y))\neq 0 $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of the two variable function $f(x,y).$ How to show that $$\lim_{(x,y)\to(0,0)}\frac{x^{2}y^{2}}{\sqrt{x^{2}+y^{2}}(x^{4}+y^{2})}=0$$ I tried different paths such as $x=0$, $y=0$, $y=x$; along each the limit comes out to zero, but I have no general idea. Please help. Thanks a lot.
You have \begin{align} \left| \frac{x^{2}y^{2}}{\sqrt{x^{2}+y^{2}}(x^{4}+y^{2})}\right| &=\frac{x^{2}y^{2}}{\sqrt{x^{2}+y^{2}}(x^{4}+y^{2})} \leq \frac{x^{2}y^{2}}{\sqrt{x^{2}+y^{2}}\,(y^{2})}\\ \ \\ &=\frac{x^2}{\sqrt{x^2+y^2}}\leq\frac{x^2}{\sqrt{x^2}}=|x|\\ \ \\ &\leq\sqrt{x^2+y^2} \end{align} Since $\sqrt{x^2+y^2}\to 0$ as $(x,y)\to(0,0)$, the squeeze theorem gives the limit $0$.
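The final bound $|f(x,y)|\le\sqrt{x^2+y^2}$ can be spot-checked at random points (a verification sketch, not part of the original answer):

```python
import math
import random

def f(x, y):
    r = math.hypot(x, y)
    return (x * x * y * y) / (r * (x**4 + y * y))

random.seed(0)
for _ in range(1000):
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    if x == 0.0 and y == 0.0:
        continue
    # the chain of bounds gives |f(x, y)| <= sqrt(x^2 + y^2)
    assert abs(f(x, y)) <= math.hypot(x, y) + 1e-12
```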
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Whether this set is a closed subset of a Hilbert space Is the set of sequences $\{x=(x_n) \in \ell_2: \sum_{n=1}^{\infty}\frac{x_n}{n}=1 \}$ closed? How do you find the limit points of a set of sequences? Moreover, what is the complement of this set, and is it open? (IIT-GATE 2015)
We have $f=(1,{1 \over 2},..., {1 \over n},...) \in l_2$, hence $f^*(x)= \langle f,x\rangle$ is a continuous linear functional, hence the inverse image of a closed set is closed. The complement is just the set $\{x | f^*(x) \neq 1 \}$ which is open because its complement is closed (or, indeed, because $\mathbb{R} \setminus \{1\}$ is open). Another way is to note that $H=\{x | f^*(x) = 1 \} = \{e_1\}+\ker f^*$, that is, $H$ is just a translate of the kernel, which is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Hints on the right-hand side of a combinatorial proof question $\sum_{k=0}^{n} {n \choose k} (n-k)^{n+1}(-1)^{k} = \frac {n(n+1)!} {2}$ So the left-hand side looks so much like inclusion-exclusion principle. The sign changes between - and + depending on whether k is even or odd (due to $(-1)^{k}$) It's like we are subtracting singles but then recounting doubles but then triples are over-counted so we subtract them etc. But the right-hand side is confusing to me. How does the right hand side have anything to do with the left-hand side? What question could I ask for the right-hand side for it to calculate the same thing as the left-hand side? I realize I could think of it as ${n \choose 1} \frac {(n+1)!} {2!}$ as well, but it hasn't helped me much. Any help would be appreciated. Thank you very much!
Consider sequences of length $n+1$ using letters from an alphabet of size $n$. Both sides count the number of such sequences in which one letter appears twice, and all other letters appear exactly once. Right-hand side: There are $n$ ways to choose the letter that appears twice. There are $(n+1)!$ ways to order the $n+1$ letters if they were distinct; we divide by $2$ to account for the letter that appears twice. Left-hand side: note that any sequence of length $n+1$ that does not satisfy the above property must not contain all $n$ letters of the alphabet. Let $A_j$ be the set of such sequences that are missing the letter $j$. Then by inclusion exclusion, the set of sequences that are missing at least one letter of the alphabet is $\left|\bigcup_{j=1}^n A_j\right| = -\sum_{k=1}^n \binom{n}{k} \left|\bigcap_{j=1}^k A_j\right| (-1)^k = - \sum_{k=1}^n \binom{n}{k} (n-k)^{n+1} (-1)^k$. Subtracting this from the total number of sequences $n^{n+1}$ gives $\sum_{k=0}^n \binom{n}{k} (n-k)^{n+1} (-1)^k$.
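Both the algebraic identity and the counting interpretation are easy to confirm for small $n$ (a verification sketch, not part of the original answer):

```python
from itertools import product
from math import comb, factorial

# check the identity for small n
for n in range(1, 9):
    lhs = sum(comb(n, k) * (n - k)**(n + 1) * (-1)**k for k in range(n + 1))
    assert lhs == n * factorial(n + 1) // 2

# brute-force the counting interpretation for n = 3: sequences of
# length 4 over a 3-letter alphabet in which every letter appears
n = 3
count = sum(1 for s in product(range(n), repeat=n + 1) if len(set(s)) == n)
assert count == n * factorial(n + 1) // 2
```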
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solutions of $\sin2x-\sin x>0$ with $x\in[0,2\pi]$ What are the solutions of this inequality with $x\in[0,2\pi]$? $$\sin2x-\sin x>0$$ I factored this as $$(\sin x)(2\cos x-1)>0$$ Now I need both factors to have the same sign. Can you please help me solve this?
When $\sin x>0$, i.e. $x\in(0,\pi)$, the inequation reduces to $\cos x>1/2$, i.e. $x\in(0,\pi/3)$. When $\sin x<0$, i.e. $x\in(\pi,2\pi)$, the inequation reduces to $\cos x<1/2$, i.e. $x\in(\pi,2\pi-\pi/3)$.
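The resulting solution set $(0,\pi/3)\cup(\pi,5\pi/3)$ can be spot-checked numerically (a sketch added for verification):

```python
import math

def holds(x):
    return math.sin(2 * x) - math.sin(x) > 0

# claimed solution set on [0, 2*pi]: (0, pi/3) union (pi, 5*pi/3)
for x in [0.1, 0.5, 1.0, 2.0, 3.5, 4.0, 5.0, 6.0]:
    expected = (0 < x < math.pi / 3) or (math.pi < x < 5 * math.pi / 3)
    assert holds(x) == expected
```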
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Ways of arranging of different nationality persons at a round table $2$ American, $2$ British, $2$ Chinese, $1$ Dutch, $1$ Egyptian, $1$ French and $1$ German people are to be seated for a round table conference. Then $(a)$ the number of ways in which no two persons of same nationality are seated together $(b)$ the number of ways in which only the American pair is adjacent $(c)$ the number of ways in which exactly two pairs of same nationality are together $\bf{My\; Try::}$ For $(a)$ Total number of ways of arranging the persons around the table is $(10-1)! = 9!$ Now the number of ways in which all persons of same nationality sit together is $ = 6!\cdot 2!\cdot 2!\cdot 2!$ So the number of ways in which no two persons of same nationality sit together is $ = 9!-6!\cdot 8$ Is my solution for first part right? If not, then how can I solve it? Also, help required in $(b)$ and $(c)$. Thanks
The first problem is an inclusion-exclusion problem. Your $9!-2^3\cdot6!=357,120$ is the number of arrangements that have at least one of the pairs separated, not the number that have all three pairs separated. The number of ways in which the two Americans sit together is $2\cdot 8!$: we treat them as a single individual, so we’re seating $9$ individuals, but that one individual has two ‘states’ that have to be distinguished, since the two can sit in either order. Similarly, there are $2\cdot 8!$ arrangements with the two British together and $2\cdot 8!$ with the two Chinese together. Thus, to a first approximation there are $9!-3\cdot2\cdot8!$ arrangements that have no two people of the same nationality seated together. However, the figure $3\cdot2\cdot8!$ counts twice every arrangement that has both the Americans and the British together. There are $2^2\cdot7!$ such arrangements (why?), so we have to subtract this number from our original approximation. The same goes for the two other pairs, American and Chinese, and British and Chinese: in each of those cases we’ve also counted $2^2\cdot 7!$ arrangements in the figure of $3\cdot2\cdot8!$. Thus, our second approximation is $9!-3\cdot2\cdot8!+3\cdot2^2\cdot7!$. Unfortunately, this still isn’t quite right: the $2^3\cdot6!$ arrangements that have the two Americans together, the two British together, and the two Chinese together were counted once each in the $9!$ term; subtracted $3$ times each in the $-3\cdot2\cdot8!$ term; and added back in $3$ times in the $3\cdot2^2\cdot7!$ term, so they have been counted a net total of one time each. But we don’t want to count them, so we have to subtract $1$ for each of them to reach the final answer: $$\begin{align*} 9!-3\cdot2\cdot8!+3\cdot2^2\cdot7!-2^3\cdot6!&=(9-6)\cdot8!+(84-8)\cdot6!\\ &=3\cdot8!+76\cdot6!\\ &=244\cdot720\\ &=175,680\;. 
\end{align*}$$ To count the arrangements that have just the two Americans together, start with the $2\cdot8!$ arrangements that have them together. Among these there are $2^2\cdot7!$ that also have the two British together (why?), and $2^2\cdot7!$ that also have the two Chinese together. Finally, there are (as we saw before) $2^3\cdot6!$ arrangements that have all three pairs together. The inclusion-exclusion calculation is simpler this time: $$2\cdot8!-2\cdot2^2\cdot7!+2^3\cdot6!=46,080\;.$$ I’ll leave the last one for you to try now that you’ve seen the first two.
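Both counts can be confirmed by brute force (a Python sketch; it assumes, as the $9!$ figure does, that rotations of the table are identified but reflections are not):

```python
from itertools import permutations

# Two each of (A)merican, (B)ritish, (C)hinese; four singletons D, E, F, G.
people = ["A1", "A2", "B1", "B2", "C1", "C2", "D", "E", "F", "G"]

def nationalities_together(arr):
    """Set of paired nationalities sitting adjacently in this circular arrangement."""
    together = set()
    n = len(arr)
    for i in range(n):
        a, b = arr[i], arr[(i + 1) % n]
        if a[0] == b[0]:          # singletons have distinct letters, so never match
            together.add(a[0])
    return together

# Fixing people[0] in one seat quotients out rotations, leaving 9! arrangements.
none_adjacent = 0     # part (a): no nationality pair adjacent
only_american = 0     # part (b): exactly the American pair adjacent
for rest in permutations(people[1:]):
    t = nationalities_together((people[0],) + rest)
    if not t:
        none_adjacent += 1
    elif t == {"A"}:
        only_american += 1
```

Fixing one person's seat is exactly what reduces the $10!$ seatings to the $9!$ rotation classes used in the answer.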
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the goal of harmonic analysis? I am taking a basic course in harmonic analysis right now. Going in, I thought it was about generalizing Fourier transform / series: finding an alternative representation of some function where something works out nicer than it did before. Now, having taken the first few weeks of this, it is not at all about Fourier analysis but about the Hardy-Littlewood-Maximal-operator, interpolation theorems, Stein's theorem/lemma and a lot of constants which we try to improve constantly in some bounds. We are following Stein's book on singular integrals, I guess. Can anyone tell me where this is leading? Why are we concerned with this kind of operators and in which other areas are the results helping?
Ultimately, it helps one to prove theorems (like existence and uniqueness results) for partial differential equations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 4, "answer_id": 2 }
Proof for Riemann's isolated singularity. Let $f$ be a complex function with an isolated singularity at $z_0$, and suppose $f$ is bounded on some deleted neighborhood of $z_0$; then $f$ is holomorphic and bounded on some deleted neighborhood of $z_0$. Let $h(z)= \begin{cases} (z-z_0)^2f(z), &\mbox{if } z \neq z_0\\ 0, &\mbox{if } z=z_0 \end{cases}$ Then, since $f$ is bounded on some deleted neighborhood of $z_0$, $$ \lim_{z \to z_0} h(z) =0 $$ $$\lim_{z\to z_0}\frac{h(z)-h(z_0)}{z-z_0} = 0.$$ So $h$ is analytic at $z_0$. Since $h(z_0)=h^\prime (z_0)=0$, $$h(z)=\sum_{n=2}^{\infty} a_n(z-z_0)^n $$ so $f(z)=\sum_{n=2}^{\infty}a_n(z-z_0)^{n-2}$ on some deleted neighborhood of $z_0$. By properties of power series, $f$ is defined at $z_0$, so that $f$ is analytic at $z_0$. [[I want you guys to check whether my proof is correct.]]
Your argument is correct, but your presentation of it may be considered incomplete. That depends on what you can use without mentioning it. After you've shown that $h$ is complex differentiable at $z_0$ with $h(z_0) = h'(z_0) = 0$, you assert that $h$ is analytic at $z_0$. That's true, but it doesn't follow from the differentiability at $z_0$ alone, you need the complex differentiability on a full neighbourhood of $z_0$. Probably you should mention that you have the complex differentiability of $h$ on a punctured neighbourhood of $z_0$ from the assumptions on $f$ and the definition of $h$. At the end, you mention some unspecified properties of power series. Are you in a position to leave them unspecified (does your audience know which properties these are, and do they know that you know which)? If not, you need to specify these properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1963851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving $x+x^3=5$ without using the cubic equation. In lessons, I get quite bored and recently throughout these lessons I have been trying to solve for x in: $$x+x^3=5$$ I've figured out how to do it for squares using the quadratic equation, but the cubic equation looks so dauntingly massive it actually makes my bladder hurt. So, is there a way to figure this out using a different process, and better so for $x^n$. Danke Chien
Sometimes it is difficult to find roots in closed form, so this answer is in the spirit of numerical values for the roots: $$x^3+x=5$$ $$x^2+1=\frac5x$$ $$x^2=\frac5x-1=\frac{5-x}x$$ $$x=\sqrt{\frac{5-x}x}$$ Once the equation is in this fixed-point form, you can employ fixed-point iteration. First, see that the root is near $x=1.5$: $$x_0=1.5$$ $$x_1=\sqrt{\frac{5-x_0}{x_0}}=\sqrt{\frac{5-1.5}{1.5}}=\sqrt{\frac{7}{3}}=1.527525231651947$$ $$x_2=\sqrt{\frac{5-{x_1}}{x_1}}=1.507736168412725$$ $$x_3=\sqrt{\frac{5-x_2}{x_2}}=1.521916573662357$$ And you can keep iterating until you have as many correct decimals as you want. A much better algorithm to use is Newton's method: $$x_{n+1}=x_n-\frac{(x_n)^3+x_n-5}{3(x_n)^2+1}$$ Starting with $x_0=1.5$ again, $$x_0=1.5$$ $$x_1=1.5-\frac{1.5^3+1.5-5}{3(1.5)^2+1}=1.516129032258065$$ $$x_2=1.51598024045$$ Admittedly more complicated, but the number of correct digits just about doubles each time, which is far faster than the fixed-point iteration method. Also note this already captures all of the digits Will Jagy gives us.
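Both iterations are easy to run; a minimal Python sketch (the iteration counts below are arbitrary choices):

```python
def f(x):
    return x**3 + x - 5

def fixed_point_step(x):
    # x = sqrt((5 - x)/x), the rearranged form above
    return ((5 - x) / x) ** 0.5

def newton_step(x):
    return x - f(x) / (3 * x**2 + 1)

x_fp = 1.5
for _ in range(200):          # linear convergence: many cheap steps
    x_fp = fixed_point_step(x_fp)

x_newton = 1.5
for _ in range(8):            # quadratic convergence: a handful of steps
    x_newton = newton_step(x_newton)
```

The fixed-point map has derivative of magnitude about $0.72$ at the root, which is why it converges but needs far more steps than Newton's method.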
{ "language": "en", "url": "https://math.stackexchange.com/questions/1964176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
The Nature of Differentials and Infinitesimals I have been wondering for some time what the limits of Leibniz notation is, and what exactly its meaning is. I learned limits and later learned (to some extent) infinitesimals, but there are some oddities which have me befuzzled. The one person I know who could answer the question gave me a reference so dense I couldn't make heads or tails of it. In any case, let's say you have a function $y = f(x)$. Now, the derivative is $\frac{dy}{dx} = f'(x)$ and the second derivative is $\frac{d^2y}{dx^2} = f''(x)$. Anyway, if you play around with these a bit, you can see that $\frac{dx}{dx} = 1$, which means that $x$ always changes in unity with itself. However, a very odd result happens if you look at the second derivative. Since $\frac{dx}{dx} = 1$, and 1 is a constant, that means that the second derivative, $\frac{d^2x}{dx^2} = 0$, which means that x never has any acceleration with respect to itself. However, algebraically, what this seems to mean to me is that $d^2x$ is always zero, but this is obviously not the case, as it could be put in ratio with $dy^2$ to produce a real-valued function. However, this seems to be at odds with an infinitesimal definition of $d^2x$ (or any other definition I have seen). It seems to imply that that $dx$ is more of a relational quantity than an infinitesimal or even a limit. I did not know if anyone had any specific knowledge about this, or knew of any books that dealt with this topic. I have a hard time finding any at all that approach this subject. On a side note (but related), I would also be interested in any books which discussed any possible meaning of quantities like $\frac{d^2y}{d^2x}$ (note that this is different from the Leibniz second derivative which is $\frac{d^2y}{dx^2}$). Anyway, if anyone has ideas or references, I would love to investigate this topic further.
The algebraic definition of the second derivative of $y=f(x)$ is $$ f''(x)=\frac{d\left[\frac{dy}{dx}\right]}{dx} $$ This can be expanded using the quotient rule: $$ f''(x)=\frac{d^2y}{dx^2}-\frac{dy\ d^2x}{dx^3} $$ Furthermore, if you desire to evaluate $d^2y/d^2x$, this can be found by taking algebraic differentials $$ dy=f'(x)dx $$ $$ d^2y=f''(x)dx^2+f'(x)d^2x $$ And finally $$ \frac{d^2y}{d^2x}=f''(x)\frac{dx^2}{d^2x}+f'(x) $$ Unfortunately, it is not possible to place a value on this expression because the quantity $dx^2/d^2x$ cannot be evaluated. In order to obtain a useful result, algebraic manipulation of the derivative quantities must be done to cancel out all of the differential terms ($dx$, $dy$, $d^2x$, etc.); only then can the expression be evaluated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1964301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Riemann (Darboux?) integrating $f: [2,3] \to \mathbb{R} \quad f(x)=\frac{1}{x^2}$? I have the function $$ f(x) = \frac{1}{x^2} $$ I want to Riemann (I don't know whether what I mean is actually Riemann or Darboux integration) integrate it on the interval $$ x \in \left[2,3\right] $$ What I could do is partition the interval into subintervals, first stating that $$ 2 = x_0 < x_1 < x_2 < \dots < x_{n-1} < x_n =3 \implies P=(x_0, \: \dots \:, x_n)$$ And $$ m_i = \inf_{x \in [x_{i-1},x_{i}]}{\left( f(x) \right)} $$ $$ M_i = \sup_{x \in [x_{i-1},x_{i}]}{\left( f(x) \right)} $$ So I can write the Darboux (??) sums: Lower: $L_{f,P} = \sum_{i=1}^{n}{(x_i-x_{i-1})m_i}$ Upper: $U_{f,P} = \sum_{i=1}^{n}{(x_i-x_{i-1})M_i}$ I could find $m_i$ and then, for example, $L_{f,P}$ is: $$L_{f,P} = \sum_{i=1}^{n}{(x_i-x_{i-1}) \left( \frac{1}{x_i^2} \right)}$$ How do I continue? I know that I should find what this value approaches as $n \to \infty$ and check whether $U_{f,P}$ approaches the same number to find the integral. But I can't take limits of sums and I don't know how I should simplify that... Please, if possible, I'd like simple and beginner-level answers.
Let $I \subset\mathbb{R}$ be a closed interval and $f:I\to\mathbb{R}$ be a bounded function. Let \begin{eqnarray} \mathrm{L}f := \sup_{P \text{ is a partition of }I}L_{f,P}\\ \mathrm{U}f := \inf_{P \text{ is a partition of }I}U_{f,P}. \end{eqnarray} and let $|P| > 0$ be the maximum length among the subintervals in $P$, where $P$ is a partition of $I$. Then there is a theorem that for all $\varepsilon > 0$ there exists $\delta > 0$ s.t. if a partition $P$ of $I$ satisfies $|P| < \delta$ then \begin{eqnarray} \left| L_{f,P}-Lf \right| < \varepsilon\\ \left| U_{f,P}-Uf \right| < \varepsilon. \end{eqnarray} Hence you can calculate the lower and upper (Darboux) integrals with partitions whose subintervals are of length $\frac{1}{n}$. In order to calculate the lower one, calculate \begin{eqnarray} \frac{1}{n}\sum_{i=1}^n\frac{1}{\left( 2 + \frac{i}{n} \right)\left( 2 + \frac{i-1}{n} \right)} \end{eqnarray} instead of \begin{eqnarray} \frac{1}{n}\sum_{i=1}^n\frac{1}{\left( 2 + \frac{i}{n} \right)^2} \end{eqnarray} and then estimate the difference between the two.
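A numerical sketch of this in Python (since $f(x)=1/x^2$ is decreasing on $[2,3]$, the infimum over each subinterval sits at its right endpoint and the supremum at its left; the choice $n=100000$ is arbitrary):

```python
n = 100000
h = 1.0 / n
f = lambda x: 1.0 / (x * x)

# f is decreasing on [2,3]: inf on [x_{i-1}, x_i] is f(x_i), sup is f(x_{i-1}).
lower = sum(h * f(2 + (i + 1) * h) for i in range(n))
upper = sum(h * f(2 + i * h) for i in range(n))

# The suggested replacement sum telescopes exactly to 1/2 - 1/3 = 1/6,
# since h / ((2 + (i-1)h)(2 + ih)) = 1/(2 + (i-1)h) - 1/(2 + ih):
telescoped = sum(h / ((2 + i * h) * (2 + (i - 1) * h)) for i in range(1, n + 1))
```

Both Darboux sums squeeze the telescoped value, which is why the integral comes out as $\int_2^3 x^{-2}\,dx = \tfrac12-\tfrac13=\tfrac16$.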
{ "language": "en", "url": "https://math.stackexchange.com/questions/1964422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $u, v, w$ are in the span of $\{u+v, 2u+3v, 4v+6w\}?$ I know this has to do with linear combinations, namely that you would set out to solve the following set of equations to show that $c_{1}, c_{2}$, and $c_{3}$ exist and are not all 0, but I'm unclear as to how I actually solve for those in this case. That is, I know I should have these equations: $u = c_{1}(u+v) + c_{2}(2u+3v) + c_{3}(4v+6w)$ $v = c_{1}(u+v) + c_{2}(2u+3v) + c_{3}(4v+6w)$ $w = c_{1}(u+v) + c_{2}(2u+3v) + c_{3}(4v+6w)$ Do you not need to solve for c to do this proof?
HINT: What is the rank of the following matrix? $$ \begin{pmatrix} 1 & 2 & 0 \\ 1 & 3 & 4 \\ 0 & 0 & 6 \end{pmatrix} $$
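Following the hint: the columns of the matrix express $u+v$, $2u+3v$, $4v+6w$ in terms of $u,v,w$, and since the matrix is invertible one can solve for explicit combinations, valid for arbitrary $u,v,w$. A small exact-arithmetic sketch:

```python
from fractions import Fraction as F

# Columns: coordinates of u+v, 2u+3v, 4v+6w with respect to u, v, w.
M = [[F(1), F(2), F(0)],
     [F(1), F(3), F(4)],
     [F(0), F(0), F(6)]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, rhs):
    """Cramer's rule: c with c1*(u+v) + c2*(2u+3v) + c3*(4v+6w) = target."""
    d = det3(m)
    sol = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = rhs[i]
        sol.append(det3(mj) / d)
    return sol

c_u = solve3(M, [F(1), F(0), F(0)])   # coefficients expressing u
c_v = solve3(M, [F(0), F(1), F(0)])   # coefficients expressing v
c_w = solve3(M, [F(0), F(0), F(1)])   # coefficients expressing w
```

For instance $c_u=(3,-1,0)$ says $u = 3(u+v) - (2u+3v)$, which checks directly.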
{ "language": "en", "url": "https://math.stackexchange.com/questions/1964527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Range perpendicular to Nullspace I'm stuck on this Linear Algebra problem: Let $A\in M_n(\mathbb{C})$ with $\mathrm{rank}(A)=k$. Prove that the following are equivalent: a) $R(A) \bot N(A)$ b) $N(A)=N(A^*)$ c) $R(A)=R(A^*)$ For a) implies b) I should prove double containment, i.e. $N(A)\subset N(A^*)$ and $N(A^*)\subset N(A)$. For the first containment, I took $x \in R(A)$, so $\exists \ u \in \mathbb{C}^n$ such that $Au=x$; then I took $y \in N(A)$. By hypothesis (I mean $R(A) \bot N(A)$), $(Au)^*y=0$, which is the same as $u^*A^*y=0$. I have to prove that $y$ is in $N(A^*)$, i.e. $A^*y=0$, but I've got this extra $u^*$ in the way; how can I deal with it? Or can I conclude from here that $y\in N(A^*)$? Thanks in advance.
If $A^*y = u \neq 0$ then $(y,Au)=(A^*y,u)=(u,u)>0$, where $(.,.)$ is the inner product. However $Au \in R(A)$ and $y \in N(A)$ by assumption, so it must be the case that $(y,Au)=0$, a contradiction. Hence $A^*y=0$, i.e. $y\in N(A^*)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1964635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How do I find the length between two circles that have the same tangent line? P and N are the centers of the two circles with radii 50 units and 5 units respectively. TS is the common tangent to the circles at points Z and R. TNP is a straight line and the distance between P and N is 170 units. Find the length of ZR. I'm stuck on this problem. Can someone show me how to solve it? Which theorems do I need to use?
$$\Delta TNR\sim\Delta TPZ$$ Hence, $$\frac{NT}{NR}=\frac{NT+PN}{PZ}$$ $$\frac{NT}{5}=\frac{NT+170}{50}$$ $$50NT=5NT+850$$ $$45NT=850$$ $$NT=\frac{850}{45}=\frac{170}{9}$$ $$PT=PN+NT=170+\frac{170}{9}=\frac{1700}{9}$$ Now, use the Pythagoras theorem on $\Delta TPZ$.
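The computation can be checked numerically; the closed form $ZR=\sqrt{d^2-(R-r)^2}$ used for comparison below comes from putting the tangent line on a coordinate axis, and is an independent derivation rather than part of the original answer:

```python
from math import sqrt, isclose

r, R, d = 5.0, 50.0, 170.0   # radius of N, radius of P, distance PN

# Similar triangles TNR ~ TPZ:  NT / r = (NT + d) / R
NT = r * d / (R - r)
PT = NT + d

# Radii are perpendicular to the tangent, so the triangles are right-angled
# at the points of tangency R and Z (Pythagoras):
RT = sqrt(NT**2 - r**2)
ZT = sqrt(PT**2 - R**2)
ZR = ZT - RT

# Coordinate check: with the tangent line as the x-axis, the centers sit at
# heights r and R above the tangent points, so ZR = sqrt(d^2 - (R - r)^2).
ZR_check = sqrt(d**2 - (R - r)**2)
```

Both routes give $ZR=\sqrt{170^2-45^2}=5\sqrt{1075}\approx 163.94$ units.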
{ "language": "en", "url": "https://math.stackexchange.com/questions/1964930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the sum $\sum_{n \geq 1} \frac{2^n\operatorname{mod} n}{n^2}$ converge? I am somewhat of a noob, and I don't recall my math preparation from college. I know that the sum $\displaystyle \sum_{n\geq 1}\frac{1}{n}$ is divergent, and my question is whether the sum$$\sum \limits _{n\geq 1}\frac{2^n\bmod n}{n^2}$$converges. I think it does not, but I do not know how to prove that! Thanks!
In this answer, we prove that $$ \sum_{n=1}^{\infty} \frac{2^n \text{ mod } n}{n^2} = \infty. \tag{*} $$ Idea. The intuition on $\text{(*)}$ comes from the belief that the sequence $(2^n \text{ mod } n)/n$ is equidistributed on $[0, 1]$, which is quite well supported by numerical computation. Proving this seems quite daunting, though, so we focus on a particular subsequence which is still good to give a diverging lower bound of $\text{(*)}$. To be precise, we focus on the indices of the form $n = 5p$ for some prime $p$ and prove that the corresponding sum is comparable to the harmonic series for primes $\sum_p \frac{1}{p}$, which also diverges. Proof. To this end, we consider the sequence $(a_k : k \geq 0)$ in $[0, 1)$ defined by $$ a_k = \left\{ \frac{2^{5p_k}}{5p_k} \right\},$$ where $\{ x \} = x - \lfloor x \rfloor$ is the fractional part of $x$ and $p_k$ is the $k$-th prime number. Now focusing only on the index $n = 5p_k$ for some $k$, we can bound the sum $\text{(*)}$ below by $$ \sum_{n=1}^{\infty} \frac{2^n \text{ mod } n}{n^2} = \sum_{n=1}^{\infty} \frac{1}{n}\left\{ \frac{2^n}{n} \right\} \geq \sum_{k=1}^{\infty} \frac{a_k}{5p_k}. $$ So it suffices to prove that this bound diverges. First, for any prime $p$ we have $$ 2^{5p} \equiv 2^5 \equiv 32 \pmod{p}. $$ This allows us to write $2^{5p} = mp + 32$ for some non-negative $m$. Next, notice that any prime $p$ other than $2$ and $5$ are either of the form $p = 4k+1$ or of the form $p = 4k+3$. Depending on which class $p$ falls in, we find that $$ 2^{5p} \equiv 2^p \equiv \begin{cases} 2, & \text{if } p =4k+1 \\ 3, & \text{if } p =4k+3 \end{cases} \pmod{5}. $$ What this tells about $m$ is as follows: $$ m \equiv \begin{cases} 0, & \text{if } p =4k+1 \\ p^{-1}, & \text{if } p =4k+3 \end{cases} \pmod{5}. $$ (Here, $p^{-1}$ is the multiplicative inverse of $p$ modulo $5$.) From this, for $p_k > 32$ we have the following estimate: $$ a_k \geq \frac{1}{5} \quad \text{if } p_k \equiv 3 \pmod{4}. 
$$ Consequently, by the PNT for arithmetic progression, $$ \frac{a_1 + \cdots + a_n}{n} \geq \frac{1}{5} \frac{\pi_{4,3}(p_n) + \mathcal{O}(1)}{\pi(p_n)} \xrightarrow[n\to\infty]{} \frac{1}{10}. $$ (The $\mathcal{O}(1)$-term appears by discarding terms with $p_k \leq 32$.) Finally, let $s_n = a_1 + \cdots + a_n$. Then by summation by parts, as $N \to \infty$ we have \begin{align*} \sum_{k=1}^{N} \frac{a_k}{5p_k} &= \frac{1}{5} \bigg( \frac{s_N}{p_N} + \sum_{k=1}^{N-1} \left( \frac{1}{p_k} - \frac{1}{p_{k+1}} \right) s_k \bigg) \\ &\geq \frac{1}{5} \sum_{k=1}^{N-1} \left( \frac{1}{p_k} - \frac{1}{p_{k+1}} \right) \frac{k}{11} + \mathcal{O}(1) \\ &\geq \frac{1}{55} \sum_{k=1}^{N} \frac{1}{p_k} + \mathcal{O}(1). \end{align*} Taking $N \to \infty$, this series diverges by the harmonic series for primes. Therefore the claim $\text{(*)}$ follows. //// Elaborating this argument, we find that $$ a_k \equiv \frac{2^{5p_k}}{5p_k} \equiv \tilde{a}_k + \frac{32}{5p_k} \pmod{1} $$ where $\tilde{a}_k$ satisfies $$ \tilde{a}_k = \begin{cases} 0, & \text{if } p_k \equiv 1, 9, 13, 17 \pmod{20} \\ 1/5, & \text{if } p_k \equiv 11 \pmod{20} \\ 2/5, & \text{if } p_k \equiv 3 \pmod{20} \\ 3/5, & \text{if } p_k \equiv 7 \pmod{20} \\ 4/5, & \text{if } p_k \equiv 19 \pmod{20} \end{cases}. $$ Thus by the PNT for arithmetic progression again, we have the following convergence in distribution: $$ \frac{1}{n} \sum_{k=1}^{n} \delta_{a_k} \xrightarrow{d} \frac{1}{2}\delta_{0} + \frac{1}{8}\sum_{j=1}^{4} \delta_{j/5} \quad \text{as } n \to \infty$$ The following numerical simulation using first 1,000,000 terms of $(a_k)$ clearly demonstrates this behavior:
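The residue-class analysis is easy to test empirically: for a prime $p>32$, the value $2^{5p}\bmod 5p$ must equal $mp+32$ with $m\in\{0,\dots,4\}$ determined by $p\bmod 20$ as in the table above (a Python sketch; the sieve bound $10^5$ is an arbitrary choice):

```python
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

# Predicted m (mod 5) in 2^(5p) = m*p + 32, keyed by p mod 20.
m_mod5 = {1: 0, 9: 0, 13: 0, 17: 0, 11: 1, 3: 2, 7: 3, 19: 4}

all_match = True
for p in primes_up_to(100_000):
    if p <= 32:
        continue
    # For p > 32, CRT pins down 2^(5p) mod 5p, hence a_k = frac(2^(5p)/(5p)).
    if pow(2, 5 * p, 5 * p) != m_mod5[p % 20] * p + 32:
        all_match = False
        break
```

The four classes with $m=0$ account for the mass at $0$ in the limiting distribution, and the other four give the atoms at $\tfrac15,\tfrac25,\tfrac35,\tfrac45$.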
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43", "answer_count": 3, "answer_id": 1 }
$\displaystyle \log_a(3)=q$ & $\displaystyle \log_a(2)=p$; express $\log_a 72$ in terms of $p$ & $q$. Currently I have tried nothing, as I cannot even figure out where to begin; a demonstration would help very much. Many thanks :)
$ \log_a72 = \log_a(2^3 \cdot 3^2)\\ =\log_a (2^3) + \log_a (3^2)\\ = 3 \log_a2 + 2\log_a3\\ = 3p + 2q$
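A quick numeric check of the identity (the base $a=7$ below is an arbitrary choice; any base $a>0$, $a\neq 1$ works):

```python
from math import log, isclose

a = 7.0                 # arbitrary base
p = log(2, a)           # log_a 2
q = log(3, a)           # log_a 3

# 72 = 2^3 * 3^2, hence log_a 72 = 3p + 2q
lhs = log(72, a)
rhs = 3 * p + 2 * q
```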
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$A$, $B$ and $C$ can do a ... $A$, $B$ and $C$ can do a piece of work in $30$, $40$ and $50$ days respectively. If $A$ and $B$ work in alternate days started by $A$ and they get the assistance of $C$ all the days, find in how many days the whole work will be finished? My Attempt: In $30$ days, $A$ does $1$ work. In $1$ day, $A$ does $\frac {1}{30}$ work. In $40$ days, $B$ does $1$ work. In $1$ day, $B$ does $\frac {1}{40}$ work. In $50$ days, $C$ does $1$ work. In $1$ day, $C$ does $\frac {1}{50}$ work. In $1$ day, $(A+C)$ do $\frac {4}{75}$ work. In $1$ day, $(B+C)$ do $\frac {9}{200}$ work. I could not solve from here. Please help.
A takes $30$ days, B takes $40$ days, C takes $50$ days. Take the total amount of work to be $600$ units (the LCM of $30, 40, 50$). Then A does $20$ units of work in $1$ day, B does $15$ units in $1$ day and C does $12$ units in $1$ day. On the 1st day A+C do $32$ units out of $600$; on the 2nd day B+C do the next $27$ units of the remaining work. So they do $59$ units of work every $2$ days, and in $20$ days $590$ units will be done. The remaining $10$ units are done by A+C (day $21$ is A's turn) in $\frac{10}{32}=\frac{5}{16}$ of a day. Total time taken to finish the work is $20+\frac{5}{16}=20\tfrac{5}{16}$ days.
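The alternating-day schedule can be simulated exactly with rational arithmetic (a sketch of the unit-counting scheme above, using work fractions rather than units):

```python
from fractions import Fraction as F

rate = {"A": F(1, 30), "B": F(1, 40), "C": F(1, 50)}   # work per day

done = F(0)
days = F(0)
turn = "A"                  # A starts; A and B alternate; C helps every day
while True:
    today = rate[turn] + rate["C"]
    if done + today >= 1:
        days += (1 - done) / today   # fraction of the last day that is needed
        break
    done += today
    days += 1
    turn = "B" if turn == "A" else "A"
```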
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Show that $2^{n} \geq (n +2)^{2}$ for all $n \geq 6$ Edit: If it is hard to read what I have written the essence of my question is: How come that $2 \times 2^{k} - (k+3)^{2} \geq 2^{k}$ from the assumption that $2^{k} \geq (k+2)^{2}$? Show that $2^{n} \geq (n +2)^{2}$ for all $n \geq 6$ I have excluded steps: Assumption: $\textsf{LHS}_{k} \geq \textsf{RHS}_{k} = 2^{k} \geq (k+2)^{2}$ We want to show that $\textsf{LHS}_{k+1} - \textsf{RHS}_{k+1} \geq 0$ So I start as follows, $\textsf{LHS}_{k+1} - \textsf{RHS}_{k+1} = 2^{k+1} - (k+3)^{2} = 2^{k} \times 2 - (k+3)^{2} = \textsf{LHS}_{k} \times 2 - (k+3)^{2} \geq \textsf{LHS}_{k} \geq \textsf{RHS}_{k}...$. (according to the assumption) Here I need to stop because I do not understand how that is the case. I do not understand how $\textsf{LHS}_{k} \times 2 - (k+3)^{2} \geq \textsf{LHS}_{k}$ which is the same as $\textsf{LHS}_{k} \times 2 - \textsf{RHS}_{k+1} \geq \textsf{LHS}_{k}$ I have no problem with $\textsf{LHS}_{k+1} > \textsf{LHS}_{k} \geq \text{RHS}_{k}$ nor $\textsf{LHS}_{k} \times 2 - \text{RHS}_{k} \geq \textsf{LHS}_{k} \geq \text{RHS}_{k}$ it is the $\textsf{RHS}_{k+1}$ I have a problem with.
Base Case: $$2^{6} =64\geq (6 +2)^{2}=64$$ Inductive Step: Assume true for some $k\geq 6$ $$2^{k} \geq (k +2)^{2}$$ Now show true for $n=k+1$ from assumed truth of $n=k$ case. $$2^{k+1} \geq ((k+1) +2)^{2}=k^2+6k+9$$ so \begin{align*} 2^{k+1}&=2^k\cdot 2 \geq 2(k+2)^{2}\\ &=2k^2+8k+8\geq ((k+1) +2)^{2} \end{align*} for $k\ge6$.
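Both the base case and the key step inequality $2(k+2)^2 \ge (k+3)^2$ (equivalent to $k^2+2k-1\ge 0$) can be checked directly; the upper bound $200$ is an arbitrary cutoff:

```python
# Base case of the induction:
base_holds = (2**6 == (6 + 2)**2)

# The statement itself over a range of n:
all_hold = all(2**n >= (n + 2)**2 for n in range(6, 200))

# The inequality used in the inductive step:
step_holds = all(2 * (k + 2)**2 >= (k + 3)**2 for k in range(6, 200))
```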
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
$4x^2y''-8x^2y'+(4x^2+1)y=0$: solve by the Frobenius method. I would like to ask if someone can explain to me how we can solve the following DE using this method: $4x^2y''-8x^2y'+(4x^2+1)y=0$
Given differential equation $4x^2y^{\prime\prime}-8x^2y^{\prime}+(4x^2+1)y=0$. Here $P(x)=-2$ and $Q(x)=\frac{1+4x^2}{4x^2}$; since $Q(x)$ is not analytic at $x=0$, we say $x=0$ is a singular point of this differential equation. Singular points are further classified into regular singular and irregular singular points. Since $xP(x)$ and $x^2Q(x)$ are analytic at $x=0$, we say $x=0$ is a regular singular point. So we can assume that $y=\displaystyle\sum_{n=0}^{\infty} a_n x^{n+m}$ for a real number $m$ which we will find out. Then $y^{\prime}=\displaystyle\sum_{n=0}^{\infty} a_n (n+m)x^{n+m-1}$ and $y^{\prime\prime}=\displaystyle\sum_{n=0}^{\infty} a_n(n+m)(n+m-1) x^{n+m-2}$. We substitute this in the given differential equation and get $$(4a_0m(m-1)+a_0)x^m+(4a_1m(m+1)-8a_0m+a_1)x^{m+1}+\sum_{n=0}^{\infty}\left[ 4a_{n+2}(n+m+1)(n+m+2)+a_{n+2}-8a_{n+1}(n+m+1)+4a_n\right]x^{n+m+2}=0$$ The indicial equation is $4m(m-1)+1=(2m-1)^2=0$, so $m=\frac{1}{2}$ is a double root; moreover $a_1=a_0$ and $a_{n+2}(4(n+m+2)(n+m+1)+1)-8a_{n+1}(n+m+1)+4a_n=0$. Putting $m=\frac{1}{2}$, the recurrence relation is $a_{n+2} = \dfrac{1}{(n+2)^2}\left((2n+3)a_{n+1}-a_n\right)$. Now by induction you can prove that $a_n = \dfrac{a_0}{n!}$. Therefore $y_1 = \sqrt{x}\displaystyle\sum_{n=0}^{\infty}\frac{x^n}{n!} = \sqrt{x}e^x$. Since the roots of the indicial equation are equal, we cannot find the second independent solution using this method; it can be found by reduction of order as $y_2 = y_1\displaystyle\int \dfrac{1}{y_1^2}e^{-\int P(x)dx}dx$. So $y_2 = \sqrt{x}e^x\displaystyle\int \dfrac{1}{xe^{2x}}e^{2x}dx = \sqrt{x}e^x\ln{|x|}$. Therefore our final answer will be $y=c_1y_1+c_2y_2 = c_1\sqrt{x}e^x+c_2\sqrt{x}e^x\ln{|x|}$, where $c_1$ and $c_2$ are arbitrary constants. Hope this helps :)
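The induction claim $a_n = a_0/n!$ for the recurrence can be checked in exact arithmetic (taking $a_0 = 1$, so $a_1 = a_0 = 1$):

```python
from fractions import Fraction as F
from math import factorial

# a_{n+2} = ((2n+3) a_{n+1} - a_n) / (n+2)^2, with a_0 = a_1 = 1.
a = [F(1), F(1)]
for n in range(40):
    a.append(((2 * n + 3) * a[n + 1] - a[n]) / (n + 2) ** 2)

claim_holds = all(a[n] == F(1, factorial(n)) for n in range(len(a)))
```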
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Haven't learned calculus yet but I need this for a proof for Fibonacci numbers and its limit In an infinite series of Fibonacci numbers, is this always true $$\frac{F_{n}}{F_{n+1}}>\frac{F_{n-1}}{F_n}$$? Can you make an argument that in an infinite convergent series, eventually that will be false?
Hint: We know $\color{red} {F_{n+1}=F_n+F_{n-1}}\\ \to \color{red} {F_{n}=F_{n+1}-F_{n-1}}$, so the inequality becomes $$\frac{F_{n+1}-F_{n-1}}{F_{n+1}}>\frac{F_{n-1}}{F_{n+1}-F_{n-1}} $$ or see this: $$\begin{bmatrix}f_{n+1} & f_n \\f_n & f_{n-1} \end{bmatrix}=\begin{bmatrix}1 & 1 \\1 & 0 \end{bmatrix}^n \to \det\left(\begin{bmatrix}f_{n+1} & f_n \\f_n & f_{n-1} \end{bmatrix}\right)=\det\left(\begin{bmatrix}1 & 1 \\1 & 0 \end{bmatrix}^n\right)\\f_{n+1}f_{n-1}-f_n^2=(-1)^n\\n=2k \to f_{n+1}f_{n-1}-f_n^2=+1 \\f_{n+1}f_{n-1}>f_n^2\\ \frac{f_{n+1}}{f_{n}}>\frac{f_{n}}{f_{n-1}}\\ n=2k+1 \to f_{n+1}f_{n-1}-f_n^2=-1 \to f_{n+1}f_{n-1}<f_n^2 \\ \to \frac{f_{n+1}}{f_{n}}<\frac{f_{n}}{f_{n-1}} \to \\ \color{red} {\frac{f_{n}}{f_{n+1}}>\frac{f_{n-1}}{f_{n}}}$$ This is your case, and it holds exactly when $n$ is odd.
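Both the determinant identity and the resulting parity pattern are easy to verify with exact integer and rational arithmetic (a short sketch):

```python
from fractions import Fraction as F

fib = [1, 1]                       # F_1, F_2
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])

# Determinant identity F_{n+1} F_{n-1} - F_n^2 = (-1)^n, with fib[k] = F_{k+1}:
identity = all(
    fib[n] * fib[n - 2] - fib[n - 1] ** 2 == (-1) ** n for n in range(2, len(fib))
)

# The asked inequality F_n/F_{n+1} > F_{n-1}/F_n holds exactly for odd n:
parity = all(
    (F(fib[n - 1], fib[n]) > F(fib[n - 2], fib[n - 1])) == (n % 2 == 1)
    for n in range(2, len(fib))
)
```

So the inequality is not always true; the consecutive ratios oscillate around their common limit, the golden ratio's reciprocal.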
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Does $\sum_{k=0}^\infty e^{-\sqrt{k}} (-1)^k$ converge faster than $e^{-\sqrt{k}}$? Does $\sum_{k=0}^\infty e^{-\sqrt{k}} (-1)^k$ converge faster than $e^{-\sqrt{k}}$? In particular, is $$ \lim_{N\to\infty} e^{\sqrt{N}} \sum_{k=N}^\infty e^{-\sqrt{k}} (-1)^k=0? $$ I know that when the $\sqrt{k}$ and $\sqrt{N}$ are replaced with $k$ and $N$ the expression in the limit oscillates around $0$ alongside the parity of $N$. The square root effectively makes the sign of that summand wiggle faster, so I suspect that the extra cancellation makes the series converge faster. Any help would be appreciated. Thanks!
$$e^{\sqrt{N}}\sum_{k\geq N}e^{-\sqrt{k}}(-1)^k = (-1)^N \sum_{h\geq 0}(-1)^h \exp\left(-\frac{h}{\sqrt{N+h}+\sqrt{N}}\right) $$ is well-approximated by $$ (-1)^N\sum_{h\geq 0}(-1)^h \exp\left(-\frac{h}{2\sqrt{N}}\right) = \frac{(-1)^N}{1+\exp\left(-\frac{1}{2\sqrt{N}}\right)}$$ hence the given limit does not exist.
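A numerical sketch of the oscillation (the truncation threshold and the sample values $N = 100, 101$ are arbitrary choices):

```python
from math import exp, sqrt

def scaled_tail(N):
    """e^{sqrt(N)} * sum_{k >= N} (-1)^k e^{-sqrt(k)}, truncated once terms are negligible."""
    s = 0.0
    k = N
    while True:
        t = exp(sqrt(N) - sqrt(k))
        if t < 1e-18:              # alternating tail beyond this is negligible
            break
        s += t if k % 2 == 0 else -t
        k += 1
    return s

even_val = scaled_tail(100)
odd_val = scaled_tail(101)
predicted = 1.0 / (1.0 + exp(-1.0 / (2 * sqrt(100.0))))
```

The two values sit near $\pm\tfrac12$ (close to $(-1)^N/(1+e^{-1/(2\sqrt N)})$), confirming that the scaled tail oscillates rather than tending to $0$.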
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of Uniqueness of Two Lines I am trying to show a proof of the uniqueness of the solution of two equations. If $a,b,c,d\in\mathbb R$ and $ad-bc\neq 0$, then for any $\alpha,\beta \in \mathbb R$ the pair of equations: $$ax + by = \alpha\\cx + dy = \beta$$ has a unique solution where $x = x_0$ and $y=y_0$ that depends on $a,b,c,d,\alpha,$ and $\beta$. I am going to find a solution first by setting $x=x_0$ and $y=y_0$, then dividing the first equation by $a$ and the second equation by $c$. I have: $$x_0+\frac{by_0}{a}=\frac{\alpha}{a}\\x_0 + \frac{dy_0}{c}=\frac{\beta}{c}$$ Subtract the two equations from each other: $$\left(\frac{b}{a}-\frac{d}{c}\right)y_0=\left(\frac{\alpha}{a}-\frac{\beta}{c}\right)$$ Multiply both sides by $ac$: $$(cb-ad)y_0=(c\alpha-a\beta)$$ Isolate $y_0$ by dividing by the coefficient: $$y_0=\frac{c\alpha-a\beta}{cb-ad}$$ I isolate $x_0$ by doing the same as for $y_0$: I divide the original top equation by $b$ and the bottom equation by $d$: $$\frac{ax_0}{b}+y_0=\frac{\alpha}{b}\\\frac{cx_0}{d}+y_0=\frac{\beta}{d}$$ Subtract the two equations from each other: $$\left(\frac{a}{b}-\frac{c}{d}\right)x_0=\left(\frac{\alpha}{b}-\frac{\beta}{d}\right)$$ Multiply both sides by $bd$: $$\left(da-bc\right)x_0=\left(d\alpha-b\beta\right)$$ Divide both sides by the coefficient of $x_0$: $$x_0=\frac{d\alpha-b\beta}{da-bc}$$ To show uniqueness I assume there is an alternative solution where $x=m_0$ and $y=n_0$. $$ax_0 + by_0 = \alpha\qquad am_0+bn_0=\alpha\\cx_0 + dy_0 = \beta\qquad cm_0+dn_0=\beta$$ Question: How do I show that $x_0=m_0$ and $y_0=n_0$? Do I do the same process as above to the new set of equations with $m_0$ and $n_0$ to show that they equal the same thing? Here is what I think I should do: Set the top equations equal to each other because they are both equal to the constant $\alpha$, and set the bottom equations equal to each other because they are both equal to the constant $\beta$.
$$ax_0+by_0=am_0+bn_0\\cx_0+dy_0=cm_0+dn_0$$ Divide the top equation by $a$ and the bottom equation by $c$: $$x_0+\frac{by_0}{a}=m_0+\frac{bn_0}{a}\\x_0+\frac{dy_0}{c}=m_0+\frac{dn_0}{c}$$ Subtract the two equations: $$\left(\frac{b}{a}-\frac{d}{c}\right)y_0=\left(\frac{b}{a}-\frac{d}{c}\right)n_0$$ And divide both sides by the coefficient in front of $y_0$ or $n_0$ to show that $$y_0=n_0$$ I can then do the same for $x_0$ and $m_0$. Ultimate Question: Is this valid as a proof of uniqueness?
It is simpler, and more correct, to prove the uniqueness as a consequence of the uniqueness of the solution of the equations: $$(cb-ad)y_0=(c\alpha-a\beta)$$ and $$\left(da-bc\right)x_0=\left(d\alpha-b\beta\right)$$ Solving these equations you cannot simply say ''Isolate $y_0$ by dividing by the coefficients'', but you have to distinguish the cases: $1) \quad (cb-ad) \ne 0$: in this case $(cb-ad)$ has an inverse and this inverse is unique, so we have the unique solution $y_0=(c\alpha-a\beta)(cb-ad)^{-1}$. $2) \quad (cb-ad)=0 $ and $(c\alpha-a\beta)\ne 0$: in this case the equation has no solutions. $3) \quad (cb-ad)=0 $ and $(c\alpha-a\beta)= 0$: in this case we have an identity with infinitely many solutions.
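Case $1)$ is just Cramer's rule; a small exact-arithmetic sketch that spot-checks it on random integer coefficients (the sample size and coefficient range are arbitrary):

```python
from fractions import Fraction as F
import random

def solve_2x2(a, b, c, d, alpha, beta):
    """Cramer's rule: the unique solution of ax+by=alpha, cx+dy=beta when ad-bc != 0."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: no unique solution")
    return (d * alpha - b * beta) / det, (a * beta - c * alpha) / det

random.seed(1)
all_ok = True
for _ in range(200):
    a, b, c, d, alpha, beta = (F(random.randint(-9, 9)) for _ in range(6))
    if a * d - b * c == 0:
        continue
    x, y = solve_2x2(a, b, c, d, alpha, beta)
    if a * x + b * y != alpha or c * x + d * y != beta:
        all_ok = False
```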
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When should I search for the covariance matrix instead of the variance? Suppose I have a random variable $X$ and $n$ realizations of this variable: $x_1, ..., x_n$. It seems clear to me in that case that if I am interested in knowing the variability I have in my data (realizations) then I should calculate the variance of $X$ i.e. $var(X)=E[(X-E(X))^2]$ where if I'm interested in its value I can compute its estimate (unbiased sample variance for example): $\frac{1}{n-1}\sum_{i=1}^n(x_i-\hat{\mu})^2$ where $\hat{\mu}=\frac{1}{n}\sum_{i=1}^n x_i$ is the sample mean. Suppose now that I just take the realizations $x_1, ..., x_n$ and put them inside a vector $\mathbf{x}$. Then if I'm again interested in knowing the variability I have in my data, what should I compute? Is it $var(\mathbf{x})$, and what does it give in that case? Or is it a covariance matrix I need to compute, i.e. $E[(\mathbf{x}-E(\mathbf{x}))(\mathbf{x}-E(\mathbf{x}))^H]$? If so, why?
Let $X$ be a univariate random variable, say the height of an individual. Then $x_{1},x_{2},\cdots,x_{n}$ is a realization of the variable height, and the variance is what we compute. In multivariate analysis, the components of the random vector $X$ are different variables. In this case we will be computing how the different components are related to each other. Suppose our random vector is $X=\left(\begin{array}{c} X_{1}\\ X_{2} \end{array}\right)$ where $X_{1}$ is height and $X_{2}$ is weight of an individual. Then a realization of $n$ values on $X$ will be $\left(\begin{array}{c} x_{11}\\ x_{21} \end{array}\right)$ $\left(\begin{array}{c} x_{12}\\ x_{22} \end{array}\right)$ $\left(\begin{array}{c} x_{13}\\ x_{23} \end{array}\right)$ $\cdots$ $\left(\begin{array}{c} x_{1n}\\ x_{2n} \end{array}\right)$ where the first component of each data value corresponds to height and the second component corresponds to weight. For such data, we compute a covariance matrix.
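A minimal pure-Python sketch of the unbiased sample covariance matrix for such height/weight data (the five data points below are made up for illustration):

```python
def sample_covariance(data):
    """Unbiased sample covariance matrix of rows of observations, pure Python."""
    n, p = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / (n - 1)
             for j in range(p)] for i in range(p)]

# (height cm, weight kg) for five hypothetical individuals
data = [(160.0, 55.0), (170.0, 65.0), (175.0, 72.0), (180.0, 80.0), (165.0, 60.0)]
C = sample_covariance(data)
```

The diagonal entries are the two sample variances, and the off-diagonal entry is the sample covariance of height with weight; the matrix is symmetric by construction.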
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Dimension of Kernel of Composition of Linear Transformations Take two linear transformations $T \colon U \to V$ and $S \colon V \to W$ where $U$, $V$ and $W$ are finite-dimensional. I want to show that $$ \dim \ker (S \circ T) \leq \dim \ker S + \dim \ker T. $$ Attempt: I've been using the rank–nullity theorem, since each vector space is finite-dimensional. So I got that \begin{align*} &\, \dim \ker T + \dim \ker S \\ =&\, \dim \ker (S \circ T) + \dim V + \dim \operatorname{im} (S \circ T) - \dim \operatorname{im} T - \dim \operatorname{im} S. \end{align*} I have been using the inequalities $\dim \operatorname{im} S \geq \dim \operatorname{im} (S \circ T)$ and $\dim V \geq \dim \operatorname{im} T$, and the fact that $\ker T \subseteq \ker (S \circ T)$. But with such inequalities I couldn’t conclude anything. Any tips or hints in order to progress or get the answer?
We do it like this: Lemma $1$: $T[\ker (S \circ T)] = \ker S \cap {\rm Im}\,T$. Proof: you do it, okay? Lemma $2$: If $F\colon V_1 \to V_2$ is linear and $Z \subseteq V_2$ is a subspace, then $F^{-1}[Z]$ is a subspace of $V_1$ and $\dim F^{-1}[Z] \leq \dim \ker F + \dim Z$. Proof: I'll check only the formula. Applying the rank-nullity theorem for the restriction of $F$ to $F^{-1}[Z]$, we get $$\dim F^{-1}[Z]= \dim \ker F\big|_{F^{-1}[Z]}+\dim {\rm Im}\,F\big|_{F^{-1}[Z]} \leq \dim \ker F + \dim Z.$$ Now to prove what you want we need only note that $$\begin{align} \dim \ker(S \circ T) &\stackrel{(1)}{\leq} \dim T^{-1}[T[\ker S \circ T]] \\ &\stackrel{(2)}{=} \dim T[\ker(S \circ T)] + \dim \ker T \\ &\stackrel{(3)}{=} \dim (\ker S \cap {\rm Im}\,T)+\dim \ker T \\ &\stackrel{(4)}{\leq} \dim \ker S + \dim \ker T,\end{align}$$where in (2) we apply lemma $2$, in (3) we apply lemma $1$, and (1) and (4) follow because of elementary set-theoretic considerations.
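The inequality itself (though not the proof) can be sanity-checked on small made-up matrices, with each nullity obtained through rank–nullity:

```python
def rank(M, eps=1e-9):
    """Rank of a small matrix via Gaussian elimination (illustrative only)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nullity(M):
    return len(M[0]) - rank(M)   # rank-nullity: dim ker = dim domain - rank

T = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]   # ker T = span{e3},  nullity 1
S = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]   # ker S = span{e1},  nullity 1
ST = matmul(S, T)                       # S o T kills e1 and e3: nullity 2
```

Here the bound is attained: $2 = \dim\ker(S\circ T) \leq \dim\ker S + \dim\ker T = 1 + 1$.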
{ "language": "en", "url": "https://math.stackexchange.com/questions/1965966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is $\lim\limits_{n\to\infty}\frac {1}{n^2}\sum\limits_{k=0}^{n}\ln\binom{n}{k} $? It was originally asked on another website but nobody has been able to prove the numerical result. The attempts usually go by Stirling's approximation or try to use the Silverman-Toeplitz theorem.
By the Stolz–Cesàro theorem (with $b_n=n^2$, so $b_{n+1}-b_n=2n+1$; the $k=n+1$ term can be dropped since $\ln\binom{n+1}{n+1}=0$) $$\lim\limits_{n\to\infty}\frac {1}{n^2}\sum\limits_{k=0}^{n}\ln\binom{n}{k}=\lim\limits_{n\to\infty}\frac {1}{2n+1} \left(\sum\limits_{k=0}^{n+1}\ln\binom{n+1}{k}- \sum\limits_{k=0}^{n}\ln\binom{n}{k} \right)\\ =\lim\limits_{n\to\infty}\frac {1}{2n+1} \sum\limits_{k=0}^{n}\left(\ln\binom{n+1}{k}- \ln\binom{n}{k} \right)\\ =\lim\limits_{n\to\infty}\frac {1}{2n+1} \sum\limits_{k=0}^{n}\ln\left(\frac{(n+1)!}{k!(n+1-k)!} \frac{k!(n-k)!}{n!} \right)\\ =\lim\limits_{n\to\infty}\frac { \sum\limits_{k=0}^{n}\ln\left(\frac{(n+1)}{(n+1-k)} \right)}{2n+1}\\ =\lim\limits_{n\to\infty}\frac { \ln\left(\prod\limits_{k=0}^{n}\frac{(n+1)}{(n+1-k)} \right)}{2n+1}\\ =\lim\limits_{n\to\infty}\frac { \ln\left(\frac{(n+1)^{n+1}}{(n+1)!} \right)}{2n+1}\\ $$ Applying Stolz–Cesàro again, we get $$ =\lim\limits_{n\to\infty}\frac { \ln\left(\frac{(n+2)^{n+2}}{(n+2)!} \right)-\ln\left(\frac{(n+1)^{n+1}}{(n+1)!} \right)}{2} \\ =\lim\limits_{n\to\infty}\frac { \ln\left(\frac{(n+2)^{n+2}}{(n+2)!} \frac{(n+1)!}{(n+1)^{n+1}} \right)}{2} \\ =\lim\limits_{n\to\infty}\frac { \ln\left(\frac{(n+2)^{n+1}} {(n+1)^{n+1}} \right)}{2}\\ =\lim\limits_{n\to\infty}\frac { \ln\left(1+\frac{1}{n+1} \right)^{n+1}}{2}\\ =\frac{\ln(e)}{2}=\frac{1}{2} $$
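A quick numerical check of the value $1/2$, using `math.lgamma` to evaluate $\ln\binom{n}{k}$ without overflow (the convergence is slow, with error on the order of $\log n/n$):

```python
import math

def avg_log_binom(n):
    """(1/n^2) * sum_{k=0}^{n} ln C(n,k), via
    ln C(n,k) = lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)."""
    total = sum(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        for k in range(n + 1)
    )
    return total / n**2

# the sequence increases toward 1/2 as n grows
```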
{ "language": "en", "url": "https://math.stackexchange.com/questions/1966057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Seeking an intuitive explanation of the Mapping Class Group For a surface $S$ the mapping class group $MCG(S)$ of $S$ is defined as the group of isotopy classes of orientation preserving diffeomorphisms of $S$: $$MCG(S)=Diff^+(S)/Diff_0(S).$$ I understand this definition as well as all of its component pieces. What I don't understand is why this quotient is a natural thing to study. Specifically, I can see why the full diffeomorphism group $Diff(S)$ would be natural to study, and if $S$ happens to be orientable, I can see why it would be reasonable to restrict ones attention to $Diff^+(S)$. However, I don't see why the quotient is a natural or intuitive next step. Is there a good explanation why diffeomorphisms that are isotopic to the identity are 'uninteresting'? Thanks!
If you view an isotopy as a path in the space of diffeomorphisms, each element of the mapping class group corresponds to a path component of the orientation-preserving diffeomorphism group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1966182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Prove $\operatorname{GL}_n(\Bbb{R})/\operatorname{SL}_n(\Bbb{R}) \cong \Bbb{R}^\times$ $$\operatorname{GL}_n(\Bbb{R})/\operatorname{SL}_n(\Bbb{R}) \cong \Bbb{R}^\times$$ This is trivial to prove with the first isomorphism theorem - by using the determinant as a endomorphism, then as $\operatorname{SL}_n(\Bbb{R})$ is $1$ under the determinant, it is the kernel, and by FIT, the above is proved. I was wondering how to prove this without the isomorphism theorem ?
Hint: consider the homomorphism $$\mathbf{R}^\times \ni \lambda \mapsto D(\lambda)\, SL_n \in GL_n/SL_n$$ where $D(\lambda) = \operatorname{diag}(\lambda, 1, \dots, 1)$ is the diagonal matrix with $\det D(\lambda) = \lambda$, and $gSL_n$ is the coset of $g$ in the quotient group. (The simpler-looking map $\lambda \mapsto (\lambda I_n) SL_n$, with $I_n$ the $n\times n$ identity matrix, works only for odd $n$, since $\det(\lambda I_n) = \lambda^n$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1966320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Multiplying two logarithms I've searched for some answer already, but couldn't find any solution to this problem. Apparently, there's no rule for the product of two logarithms. How would I then find the exact solution of this problem? $$ \log(x) = \log(100x) \, \log(2) $$
The question does not specify the base $B$ of the logarithm, but it will affect the solution, so we make it explicit: \begin{align} \log_B(x) &= \log_B(100\, x) \, \log_B(2) \\ &= (\log_B(100) + \log_B(x)) \, \log_B(2) \iff \\ (1 - \log_B(2)) \log_B(x) &= \log_B(2) \log_B(100) \\ \end{align} For $B = 2$ the LHS vanishes and we have no solution, as the logarithms on the RHS do not vanish. For $B \ne 2$ we can continue: \begin{align} \log_B(x) = \frac{\log_B(2) \, \log_B(100)}{1 - \log_B(2)} = f(B) \iff \\ x = B^{f(B)} = B^{(\log_B(2) \, \log_B(100))/(1 - \log_B(2))} \end{align} For $B=e$ one gets $$ x = e^{f(e)} = e^{10.4025\dotsb} = 32944.48\dotsb $$ (The original answer included graphs of $f(B)$ here.)
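The closed form for $B = e$ is easy to confirm numerically:

```python
import math

ln = math.log
x = math.exp(ln(2) * ln(100) / (1 - ln(2)))   # the closed form for base B = e

# x solves the original equation ln(x) = ln(100 x) * ln(2)
assert math.isclose(ln(x), ln(100 * x) * ln(2), rel_tol=1e-12)
assert abs(x - 32944.48) < 0.5
```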
{ "language": "en", "url": "https://math.stackexchange.com/questions/1966921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Diophantine equation $\frac{a+b}{c}+\frac{b+c}{a}+\frac{a+c}{b} = n$ Let $a,b,c$ and $n$ be natural numbers and $\gcd(a,b,c)=\gcd(\gcd(a,b),c)=1$. Does it possible to find all tuples $(a,b,c,n)$ such that: $$\frac{a+b}{c}+\frac{b+c}{a}+\frac{a+c}{b} = n?$$
First note that we can assume $a,b,c \in \mathbb{Q}$, without loss of generality. A rational solution can then be scaled to integers. Combining into a single fraction gives the quadratic in $a$ \begin{equation*} a^2+\frac{b^2-nbc+c^2}{b+c}a+bc=0 \end{equation*} and, for rational solutions, the discriminant must be a rational square. Thus, the quartic \begin{equation*} D^2=b^4-2(n+2)b^3c+(n^2-6)b^2c^2-2(n+2)bc^3+c^4 \end{equation*} must have rational solutions. This quartic is birationally equivalent to the elliptic curve \begin{equation*} V^2=U^3+(n^2-12)U^2+16(n+3)U \end{equation*} with \begin{equation*} \frac{b}{c}=\frac{V+(n+2)U}{2(U-4(n+3))} \end{equation*} The elliptic curve is singular when $n=-2,-3,6$. $n=6$, for example, corresponds to $a=b=c=K$ as a solution. If $n \ne -2,-3,6$, the curve has $5$ finite torsion points at $(0,0)$, $(4,\pm 4(n+2))$ and $(4n+12,\pm 4(n+2)(n+3))$ none of which give a solution. For $n=7$, there are a further $6$ torsion points which lead to the solutions $(a,b,c)=(1,1,2)$ and $(a,b,c)=(1,2,2)$. Thus, if $n \ne -3,-2,6,7$, we need the elliptic curve to have rank greater than zero to find a solution. Computations using the Birch and Swinnerton-Dyer conjecture suggest the first few solutions are with $n=8,11,12,15,\ldots$. For example, the $n=15$ curve has a generator $(-36,468)$ which gives the solution $a=2, b=3, c=15$. As $n$ gets larger, the size of solutions generally increases.
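The specific solutions quoted above — the $n=7$ torsion solutions, the $n=15$ generator solution, and the singular $a=b=c$ case for $n=6$ — check out with exact rational arithmetic:

```python
from fractions import Fraction as F

def lhs(a, b, c):
    a, b, c = F(a), F(b), F(c)
    return (a + b) / c + (b + c) / a + (a + c) / b

assert lhs(1, 1, 2) == 7      # torsion solution for n = 7
assert lhs(1, 2, 2) == 7      # torsion solution for n = 7
assert lhs(2, 3, 15) == 15    # from the generator (-36, 468) of the n = 15 curve
assert lhs(1, 1, 1) == 6      # singular case: a = b = c gives n = 6
```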
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
poker probability A pack contains 52 cards and we flip through them one by one. What is the probability of each of the following events? * *a king right after the first ace? *an ace right after the first ace? *the first ace is the 10-th card? *the next card is the ace of spades, if the first ace is the 30-th card? *the next card is the jack of diamonds, if the first ace is the 30-th card? I have tried (1), (2) and (3) but am not sure whether they are right, and I have no idea about (4) and (5). (1) $\frac{4}{52}*\frac{4}{51}$, which is the same probability as (2). (3) $\frac{48}{52}*\frac{47}{51}*\frac{46}{50}*\frac{45}{49}*\frac{44}{48}*\frac{43}{47}*\frac{42}{46}*\frac{41}{45}*\frac{40}{44}*\frac{4}{43}$, but is there a simpler way to write it out?
1) King follows First Ace. Let's not worry about suits or values of the other cards. Take a pack of four kings of hearts, four aces of spades, and forty-four jokers. There are $52!/(4!4!44!)$ equally probable ways to arrange this deck. Set aside one king, shuffle the remaining cards, stick that king after the first ace. There are $51!/(3!4!44!)$ ways to do this. So the probability of the event is $\quad\dfrac 1 {13}$ 2) Ace follows First Ace. As above, so below. Set aside one ace. There are $51!/(4!3!44!)$ ways to sort 4 kings, 3 aces, 44 jokers (and stick the set aside ace behind the first ace). Coincidentally yielding the same probability of $1/13$. 3) First Ace is tenth card. Take 4 aces of spades and 48 jokers. $52!/(4!48!)$ ways to arrange them in total. Now count ways to arrange 9 jokers, an ace in tenth place, and then all the rest (3 aces, 39 jokers). 4) Ace of spades follows first ace when that is in 30th place. The deck consists of three aces of hearts, one ace of spades, and 48 jokers. The deck is shuffled so we have 29 jokers, an ace, in that order, and then the rest (3 aces, 19 jokers) in any order. If the first ace is spades, it cannot follow itself. Multiply the probability that it is not, by the conditional probability of the event given that. 5) Jack of Diamonds follows First Ace when that is in thirtieth place. The deck consists of four aces of hearts, the jack of diamonds, and 47 jokers, shuffled so we have 29 cards, an ace, and then the rest. If the jack is among the 29 cards before the first ace, then it cannot follow it. Multiply the probability that it is not, by the conditional probability of the event given that.
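The multinomial counts in parts 1–3 can be confirmed with exact arithmetic: the ratio in part 1 really does collapse to $1/13$, and the product form from the question in part 3 agrees with the arrangement count:

```python
from fractions import Fraction
from math import factorial as fact

# part 1: arrangements of 4 kings, 4 aces, 44 jokers, vs. arrangements with one
# king pinned immediately after the first ace
total = Fraction(fact(52), fact(4) * fact(4) * fact(44))
favorable = Fraction(fact(51), fact(3) * fact(4) * fact(44))
p1 = favorable / total            # part 2 gives the same value

# part 3: the question's product form, vs. the arrangement count
# (9 jokers, then an ace, then 3 aces and 39 jokers in any order,
#  in a deck of 4 aces + 48 jokers)
p3 = Fraction(1)
for i in range(9):
    p3 *= Fraction(48 - i, 52 - i)
p3 *= Fraction(4, 43)
p3_count = Fraction(fact(42), fact(3) * fact(39)) / Fraction(fact(52), fact(4) * fact(48))
```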
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Probability that a number is divisible by 11 The digits $1, 2, \cdots, 9$ are written in random order to form a nine digit number. Then, the probability that the number is divisible by $11$ is $\ldots$ I know the condition for divisibility by $11$ but I couldn't guess how to apply it here. Please help me in this regard. Thanks.
The rule of divisibility by $11$ is as follows: The difference between the sum of digits at odd places and the sum of the digits at even places should be $0$ or a multiple of $11$. We also know that the sum of all the digits will be $45$ as $1 + 2 + ... + 9 = 45$. Let $x$ denote the sum of digits at even positions and $y$ denote the sum of digits at odd positions, or vice versa. Case 1 (difference is $0$): $$x + y = 45$$ $$x - y = 0$$ Thus, $2x = 45$, or $x = 22.5$ which cannot be obtained. Case 2 (difference is $11$): $$x + y = 45$$ $$x - y = 11$$ Thus, $2x = 56$, or $x = 28$ and $y = 17$. This is a valid possibility. Case 3 (difference is $22$): $$x + y = 45$$ $$x - y = 22$$ Thus, $2x = 67$, or $x = 33.5$, which cannot be obtained. As you can see, the difference between the sum of the digits at odd places and the sum of the digits at even places must be $11$. Now, imagine that there are $9$ placeholders (representing the $9$ digits of the $9$-digit number). Either the sum of the digits at odd places ($5$ odd places) should be $28$, or the sum of the digits at even places ($4$ even places) should be $28$. We write down the possibilities: $2$ ways to express $28$ as a sum of $4$ distinct numbers between $1$ and $9$. $9$ ways to express $28$ as a sum of $5$ distinct numbers between $1$ and $9$. In the first case, there are $4!$ ways of arranging the $4$ numbers (that add up to $28$) and $5!$ ways of arranging the $5$ other numbers (that add up to $17$). Hence, no. of ways$ = 2 * 4! * 5!$ In the second case, there are $5!$ ways of arranging the $5$ numbers (that add up to $28$) and $4!$ ways of arranging the $4$ other numbers (that add up to $17$). Hence, no. of ways$ = 9 * 5! * 4!$ Total favourable possibilities$$= 2 * 4! * 5! + 9 * 5! * 4!$$ $$= 4! * 5! * (2 + 9)$$ $$= 4! * 5! * 11$$ Also, total no. of ways of arranging $9$ numbers to form a $9$-digit number = $9!$ Hence, probability$=P= (4! * 5! * 11)/9!$ $$= 11/126$$
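Both the subset counts and the final probability can be confirmed by brute force:

```python
from fractions import Fraction
from itertools import combinations, permutations
from math import factorial

digits = range(1, 10)
ways4 = sum(1 for c in combinations(digits, 4) if sum(c) == 28)
ways5 = sum(1 for c in combinations(digits, 5) if sum(c) == 28)

favorable = ways4 * factorial(4) * factorial(5) + ways5 * factorial(5) * factorial(4)
prob = Fraction(favorable, factorial(9))

# full brute force over all 9! numbers: divisibility by 11 is equivalent to the
# alternating digit sum (odd positions minus even positions) being 0 mod 11
count = sum(
    1
    for p in permutations(digits)
    if (sum(p[0::2]) - sum(p[1::2])) % 11 == 0
)
```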
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 2 }
$\tau = \left(\sum_{n = 1}^\infty f_n\right) d\nu + \sum_{n = 1}^\infty \mu_n$ the Lebesgue decomposition of $\tau$? Assume $\tau_n$ is a sequence of positive measures on a measurable space $(X, \mathcal{F})$ with $\sup_n \tau_n(X) < \infty$ and $\nu$ is another finite positive measure on $(X, \mathcal{F})$. Suppose $\tau_n = f_n\,d\nu + \mu_n$ is the Lebesgue decomposition of $\tau_n$; in particular, $\mu_n \perp \nu$. If $\tau = \sum_{n = 1}^\infty \tau_n$ is a finite measure, is$$\tau = \left(\sum_{n = 1}^\infty f_n\right) d\nu + \sum_{n = 1}^\infty \mu_n$$the Lebesgue decomposition of $\tau$?
Yes. * *The decomposition. $$\tau (A) = \sum \tau_n (A) = \sum (\int_A f_n\, d \nu + \mu_n(A)).$$ By monotone convergence $\sum \int_A f_n d \nu = \int_A \sum f_n d\nu$. Let $f=\sum f_n$. Then $f$ is measurable and nonnegative. Also since $\tau$ is finite, $f\in L^1(\nu)$. Define the set function $\mu= \sum \mu_n$. We show this is a measure. It is clearly nonnegative. Next: a. $\mu(\emptyset)=0$. b. If $A_1,A_2,\dots$ are disjoint, then \begin{align*} \mu (\cup A_j)&= \lim_{N\to\infty}\sum_{n\le N} \mu_n (\cup A_j)\\ &= \lim_{N\to\infty} \sum_{n\le N} \sum_j \mu_n (A_j)\\ &= \lim_{N\to\infty} \sum_j \sum_{n\le N} \mu_n (A_j) \\ &= \sum_j \lim_{N\to\infty} \sum_{n\le N} \mu_n (A_j)\\ &= \sum_j \mu(A_j). \end{align*} We have used monotone convergence for the fourth equality, and the third equality is a statement about a finite sum of series of positive terms. Therefore $\mu$ is a measure. Since $\tau$ is finite, $\mu$ is also finite. Bottom line: $d\tau = f d\nu + d\mu$, where $f\in L^1(\nu)$ and $\mu$ is a finite measure. *By uniqueness of Lebesgue decomposition, all that remains to show is that $\mu\perp \nu$. Since for each $n$ $\mu_n \perp \nu$, there exists $A_n$ such that $\nu (A_n) =\mu_n (A_n^c) =0$. Let $A=\cup A_n$. Then $\nu(A)\le \sum_n \nu(A_n) =0$, and $$\mu(A^c) = \mu( \cap A_n^c) =\sum_n \mu_n (\cap A_n^c) \le \sum \mu_n (A_n^c)=0.$$ Therefore $\mu\perp \nu$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Three vertices of a parallelogram have coordinates (-2,2),(1,6) and (4,3). Find all possible coordinates of the fourth vertex. Do I use the distance formula for the two points and use them to add to each other to get two parallel sides?
If three points are $P,Q,R$ then $R+(P-Q)$ gives a fourth vertex for a parallelogram. So pick the ordered pair $(P,Q)$ in all six ways. Although this gives six candidates, there are actually only three such parallelograms, some choices in this description being repeats (same vertices). So if $P,Q,R$ are the three given points (which are not collinear) the three parallelograms are those formed by using the given three vertices along with any one of the three choices $$P+Q-R,\ P+R-Q,\ Q+R-P$$ as the fourth vertex of the parallelogram. Added note: In each case the subtracted point winds up being diagonally opposite the constructed point in that parallelogram. For example, if $X=P+Q-R,$ then also $X-P=Q-R$ as expected in a parallelogram labeled going around say counterclockwise in the order $X,P,R,Q.$ The equality of the vectors $X-P$ and $Q-R$ means they are parallel and point in the same direction, so that side $XP$ is parallel to side $QR.$ And also from $X=P+Q-R$ we get $X-Q=P-R$ showing the other pair $XQ,PR$ are parallel (the remaining pair of parallel sides of that parallelogram). I suggest drawing a picture to see why for this case each of $XR$ and $PQ$ wind up as "diagonals" of the parallelogram formed.
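Applying this to the vertices $(-2,2)$, $(1,6)$, $(4,3)$ from the question:

```python
P, Q, R = (-2, 2), (1, 6), (4, 3)

def add(u, v): return (u[0] + v[0], u[1] + v[1])
def sub(u, v): return (u[0] - v[0], u[1] - v[1])

# the three possible fourth vertices P+Q-R, P+R-Q, Q+R-P
candidates = {add(P, sub(Q, R)), add(P, sub(R, Q)), add(Q, sub(R, P))}
assert candidates == {(-5, 5), (1, -1), (7, 7)}

# e.g. with X = P + Q - R, the side X->P equals Q->R (and X->Q equals P->R),
# confirming the parallelogram
X = add(P, sub(Q, R))
assert sub(X, P) == sub(Q, R) and sub(X, Q) == sub(P, R)
```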
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Necessary and Sufficient Conditions for the pair of integers $\{m,n\}$ to generate $\mathbb{Z}$ Let $m,n \in \mathbb{Z}$ be two nonzero numbers. I need to find necessary and sufficient conditions on $m$ and $n$ for which the pair $\{m,n\}$ generates the additive group $\mathbb{Z}$. I made an attempt at a proof, and would like to know if 1) It's correct, and 2) if there's anything I can do to make it better. First of all, I asserted that a necessary and sufficient condition would be for $\mathbf{\gcd(m,n)=1}$. Now, here's my proof: $(\implies)$ Suppose that $\gcd(m,n)=1$. Then, by Bezout's Identity, $\exists$ integers $p,q$ such that $mp+nq = \gcd(m,n)=1$. Let $k$ be an arbitrary element of $\mathbb{Z}$. Then, we can produce $k$ by multiplying both sides of $mp+nq = 1$ through by $k$ to yield $m(pk)+n(qk)=k$. Defining $x=pk \in \mathbb{Z}$ and $y = qk \in \mathbb{Z}$, we see that $\forall k \in \mathbb{Z}$, $\exists x, y \in \mathbb{Z}$ such that $mx+ny=k$. So, $<m,n>=\mathbb{Z}$. $(\Longleftarrow)$ Suppose $<m,n>=\mathbb{Z}$. Then, $\forall k \in \mathbb{Z}$, $\exists x,y \in \mathbb{Z}$ such that $k=mx+ny$. Therefore, we must have $\gcd(m,n)|k$. (The part I'm really not sure about) However, $k$ is an arbitrary integer, and the only positive integer that divides every integer is $1$. Thus, $\gcd(m,n)=1$. Thank you!
Your proof of the first direction looks perfectly fine. Your proof of the second part is definitely on the right track, and as it stands it's not wrong, but it's a little unclear. Somewhat better is to note that, since $\gcd(m,n)$ divides $k$ for every integer $k$, it divides $1$ in particular. Knowing that $\gcd(m,n)$ divides $1$, you get $\gcd(m,n)=1$. That is, you can just immediately take $x$ and $y$ such that $mx+ny=1$ and conclude from there that they are coprime.
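For completeness, the Bézout coefficients used in the forward direction can be produced by the extended Euclidean algorithm; the pair $m=34$, $n=21$ below is just an arbitrary coprime example:

```python
def extended_gcd(a, b):
    """Return (g, p, q) with a*p + b*q == g == gcd(a, b), for nonnegative a, b."""
    if b == 0:
        return (a, 1, 0)
    g, p, q = extended_gcd(b, a % b)
    # from b*p + (a % b)*q = g, substitute a % b = a - (a // b)*b
    return (g, q, p - (a // b) * q)

m, n = 34, 21                       # coprime, so they generate Z
g, p, q = extended_gcd(m, n)
assert g == 1 and m * p + n * q == 1
# any k is then reached by scaling: m*(p*k) + n*(q*k) == k
k = -57
assert m * (p * k) + n * (q * k) == k
```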
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
The space $L^{\infty -}$ and showing it is an algebra. Let $$\mathcal{A}=L^{\infty -}(\Omega,P):=\bigcap_{1\le p<\infty}L^p(\Omega,P)$$ I've already shown it is a vector space over $\mathbb{C}$. So, with the usual multiplication of (complex-valued) functions operation I want to prove that $fg\in\mathcal{A}\quad\forall f,g\in\mathcal{A}\\\text{Which is the same as to prove that }\forall f,g\in\mathcal{A}\\ \int_{\Omega}|fg|^p\;dP<\infty,\quad 1\le p<\infty$ But I got stuck. I only get that $fg\in L^1(\Omega,P)$ thanks to Hölder's inequality: $$ \int_{\Omega}|fg|\;dP\le \Big(\int_{\Omega}|f|^p\,dP\Big)^{1/p}\Big( \int_{\Omega} |g|^q\;dP \Big)^{1/q},\quad\frac{1}{p}+\frac{1}{q}=1\quad 1\le p,q< \infty $$ for $p=q=2$. From here I can't see how $fg\in L^p(\Omega,P)$ for $p\ge2$. Any help or hints would be appreciated.
Hint: $$|fg|^p = |f|^p |g|^p\le \frac{|f|^{2p} + |g|^{2p}}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The difference between $\mathbb{Z}\times\mathbb{Z}$ and $\mathbb{Z}*\mathbb{Z}$? I have some questions regarding some notations being used here. I am still relatively new to algebraic topology, so I am a bit confused. I saw that $\pi_1(S^1\times S^1)\simeq\mathbb{Z}\times\mathbb{Z}$ and $\pi_1(S^1\vee S^1)\simeq\mathbb{Z}*\mathbb{Z}$. I know the difference between $\times$ and $\vee$. But what I am unsure is the difference between $\mathbb{Z}\times\mathbb{Z}$ and $\mathbb{Z}*\mathbb{Z}$. $\mathbb{Z}\times\mathbb{Z}$ is the product group am I right? But what is $\mathbb{Z}*\mathbb{Z}$? How do we call it? I could not search since I don't know the name. Could somebody please give some help? Thanks.
The notation $*$ denotes that you are considering the free product of the two groups. The free product of two groups consists, roughly, of words in an alphabet given by the elements of the two groups. However, we also require that each word is fully reduced (i.e. $aa^{-1} = e$, $aa = a^2$).
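The contrast is concrete: in $\mathbb{Z}\times\mathbb{Z}$ the two generators commute, while in $\mathbb{Z}*\mathbb{Z}$ the reduced words $ab$ and $ba$ are distinct elements. A minimal sketch of reduced-word arithmetic in the free product of two copies of $\mathbb{Z}$ (the generator labels `'a'`, `'b'` are just names chosen here):

```python
def reduce_word(word):
    """Fully reduce a word given as a list of (generator, exponent) pairs."""
    out = []
    for gen, exp in word:
        if exp == 0:
            continue
        if out and out[-1][0] == gen:
            total = out[-1][1] + exp      # merge adjacent powers: a^i a^j = a^(i+j)
            out.pop()
            if total != 0:
                out.append((gen, total))  # a^0 = e disappears, allowing cascades
        else:
            out.append((gen, exp))
    return out

def multiply(u, v):
    return reduce_word(u + v)

a, b = [("a", 1)], [("b", 1)]
a_inv = [("a", -1)]

assert multiply(a, a_inv) == []                # a * a^{-1} = e
assert multiply(a, a) == [("a", 2)]            # a * a = a^2
assert multiply(a, b) != multiply(b, a)        # ab != ba in Z * Z ...
# ... although their images in the abelianization Z x Z agree:
total_exps = lambda w: (sum(e for g, e in w if g == "a"),
                        sum(e for g, e in w if g == "b"))
assert total_exps(multiply(a, b)) == total_exps(multiply(b, a))
```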
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Sequence Question If $(x_n)$ is a sequence of positive values and $\lim_{n\to\infty} n x_n $ exists, prove that $(x_n) \rightarrow 0$. Since $\lim_{n\to\infty} n x_n $ exists, we know $(nx_n)$ converges to some positive number; call it $x$. Let $\varepsilon > 0$. Then we can find an $N \in \mathbb{N}$ such that $|nx_n - x| < \varepsilon$ for all $n \geq N$. This implies that $-\varepsilon < nx_n - x < \varepsilon$, which is equivalent to $\frac{x - \varepsilon}{n} < x_n < \frac{x + \varepsilon}{n}$. I am stuck here. How can I show $(x_n) \rightarrow 0$? Thanks.
Hint. We have that $\frac{x - \varepsilon}{n}\to 0$ and $\frac{x + \varepsilon}{n}\to 0$ as $n$ goes to infinity. Notice that the limit $x$ is greater than or equal to zero (it is not necessarily positive).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1967991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Ways of forming palindromic strings The following problem was given to us in the recruitment test of InMobi. Given a list of strings $\{a_1, a_2,..,a_n\}$, I want to count the number of ways of forming a PALINDROMIC string $S=s_1+s_2+..+s_n$, where $s_i$ represents a non-empty sub-sequence of string $a_i$. As the answer can be too large to fit in integer bounds, give the answer mod $10^9+7$ Example: Given strings $\{zz, a, zz\}$, there are $5$ ways of forming $S$. $zaz$ can be formed in $4$ ways and $zzazz$ in $1$ way.
Concatenate all strings and find the number of palindromic subsequences. Now you have to remove those ways in which there is at least one string that did not participate in palindrome formation. So run inclusion–exclusion over all $2^n$ subsets of strings forced not to participate: for each such subset, count the palindromic subsequences formed by the rest of the strings, and add or subtract this from the answer according to the parity of the subset. Counting palindromic subsequences in a string is a simple $O(n^2)$ dp. So the overall complexity would be $O(2^n \cdot n^2)$
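The $O(n^2)$ DP mentioned at the end counts the palindromic subsequences of a single string (counted by index positions, not as distinct strings); one standard form of the recurrence is sketched below:

```python
def count_palindromic_subsequences(s):
    """Number of non-empty palindromic subsequences of s, counted by positions.
    dp[i][j] = count within s[i..j]."""
    n = len(s)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = 1
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if s[i] == s[j]:
                # dp[i+1][j] + dp[i][j-1] double-counts dp[i+1][j-1]; that overcount
                # exactly matches the palindromes obtained by wrapping an inner
                # palindrome with s[i]...s[j]; +1 for the pair s[i] s[j] itself
                dp[i][j] = dp[i + 1][j] + dp[i][j - 1] + 1
            else:
                dp[i][j] = dp[i][j - 1] + dp[i + 1][j] - dp[i + 1][j - 1]
    return dp[0][n - 1]

assert count_palindromic_subsequences("aba") == 5   # a, b, a, aa, aba
assert count_palindromic_subsequences("aaa") == 7   # 3 singles, 3 pairs, 1 triple
```

For the original problem the counts would additionally be taken mod $10^9+7$ as stated.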
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is the Axiom of Choice equivalent to the Banach-Tarski paradox? I've seen many explanations that just state they are equivalent straight away however I don't understand why they're equivalent. As far as I understand, the axiom of choice states that for any indexed family of non empty sets, it is possible to choose one element from each set. But what does this have to do with the Banach-Tarski paradox? In one of my books on measure it says that if you assume the Axiom of Choice, then there is a subset $F$ of the unit sphere $S^2$ in $\mathbb{R}^3$ such that for $k \in [3,\infty)$, $S^2$ is the disjoint union of $k$ exact copies of $F$: $$S^2 = \bigcup_{i=1}^{k}\tau_i^{(k)}F$$ where each $\tau_i^{(k)}$ is a rotation. This then apparently leads to the conclusion that the 'area' of $F$ has to simultaneously equate to many different values. But at no point in this process do I understand where the axiom of choice is employed. Nor do I understand why $F$ has to take different values - once you've chosen the subset $F$ at the start then surely it's fixed and $F$ is just how big you chose it originally? Why does it have to have simultaneously many values?
Banach–Tarski is proved by picking a representative from each of infinitely many sets (the orbit equivalence classes). This requires the axiom of choice. However, Banach–Tarski does not imply the axiom of choice, so they are not equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can two integer polynomials touch in an irrational point? We define an integer polynomial as polynomial that has only integer coefficients. Here I am only interested in polynomials in two variables. Example: * *$P = 5x^4 + 7 x^3y^4 + 4y$ Note that each polynomial P defines a curve by considering the set of points where it evaluates to zero. We will speak about this curve. Example: The circle can be described by * *$x^2 + y^2 -1 = 0$ We say two polynomials $P,Q$ are touching in point $(a,b)$ if $P(a,b) = Q(a,b) = 0$ and the tangent at $(a,b)$ is the same. Or more geometrically, the curves of $P$ and $Q$ are not crossing. (The Figure was created with IPE - drawing editor.) We also need a further technical condition. For this let $D$ be a ''small enough'' disk around $(a,b)$. Then $Q$ and $P$ define two regions indicated green and yellow. Those regions must be interior disjoint. Without this condition for $P = y-x^3$ and $Q=y$ the point $(0,0)$ would be a touching point as well. See also the right side of the figure. (I know that I am not totally precise here, but I don't want to be too formal, so that I can reach a wide audience.) (Thanks for the comment from Jeppe Stig Nielsen.) Example: * *$P = y - x^2$ (Parabola) *$Q = y$ ($x$-axis) They touch at the origin $(0,0)$. My question: Does there exist two integer polynomials $P,Q$ that touch in an irrational point $(a,b)$? (It would be fine for me if either $a$ or $b$ is irrational) Many thanks for answers and comments. Till
Here's a general way to find such examples where both curves are of the form $y=f(x)$. Notice that $y=f(x)$ and $y=g(x)$ meet at a given value of $x$ iff that value of $x$ is a root of the polynomial $h(x)=f(x)-g(x)$, and they have the same tangent line iff that value is a root of $h(x)$ of multiplicity greater than $1$. So this means that to find an example, we just need a polynomial $h(x)$ with integer coefficients that has a double root at some irrational value of $x$ (we can then take $g(x)$ to be any polynomial with integer coefficients at all, and $f(x)=h(x)+g(x)$). This is easy to do: just take any polynomial $p(x)$ with integer coefficients and an irrational root, and let $h(x)=p(x)^2$.
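Following this recipe with the sample choice $p(x)=x^2-2$, so that $h(x)=(x^2-2)^2$ has integer coefficients and a double root at the irrational $x=\sqrt2$, and (arbitrarily) $g(x)=x$, $f(x)=h(x)+g(x)$: the curves $y=f(x)$ and $y=g(x)$ then touch at $(\sqrt2,\sqrt2)$.

```python
import math

def h(x): return (x**2 - 2) ** 2       # integer coefficients, double root at sqrt(2)
def dh(x): return 4 * x * (x**2 - 2)   # h'(x) = 2 p(x) p'(x)

def g(x): return x                     # any integer polynomial works here
def f(x): return h(x) + g(x)

r = math.sqrt(2)
assert abs(h(r)) < 1e-12 and abs(dh(r)) < 1e-12   # double root: h(r) = h'(r) = 0
assert abs(f(r) - g(r)) < 1e-12                   # curves meet at x = sqrt(2) ...
# ... with the same tangent slope, since f'(r) = h'(r) + g'(r) = g'(r)
df = lambda x: dh(x) + 1
assert abs(df(r) - 1) < 1e-12
```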
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 3, "answer_id": 1 }
Increasing $g$ where $g' = 0$ a.e. but $g$ not constant on any open interval? As the question title suggests, does there exist an increasing function $g$ such that $g' = 0$ almost everywhere but $g$ isn't constant on any open interval?
Yes. Let $\phi(x)$ be the Cantor-Lebesgue function on $[0,1]$ and continue it to a function on $\mathbb{R}$ by setting it to $1$ for $x>1$ and $0$ for $x<0$. Let $O_n = (a_n,b_n)$ be an enumeration of all open intervals in $\mathbb{R}$ with rational endpoints. Define $ \phi_n(x) = \phi(\frac{x-a_n}{b_n-a_n}) $ and define $$ g(x) = \sum_{n=1}^\infty \frac{1}{2^n}\phi_n(x)$$ Now, to differentiate $g$ we recall Fubini's theorem on termwise differentiation of series of monotone functions (whose hypotheses you can verify hold here). Then we have $g'(x)= \sum_{n=1}^\infty \frac{1}{2^n}\phi_n'(x) = 0 \quad (\text{a.e.})$ $g$ is strictly increasing since if $x>y$ then $\phi_n(x) \geq \phi_n(y)$ for all $n$. Moreover, there must exist some interval $O_k$ with rational endpoints contained in $(y,x)$, and hence for at least one $k$ we have $\phi_k(x)>\phi_k(y)=0$. A strictly increasing function is not constant on any open interval.
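A numerical sketch of the building block: evaluating the Cantor–Lebesgue function $\phi$ through ternary digits (the digit-reading scheme below is one standard way to do it). The classic value $\phi(1/4)=1/3$ and monotonicity come out as expected.

```python
def cantor(x, depth=48):
    """Approximate the Cantor-Lebesgue function on [0, 1):
    read ternary digits of x; a digit 1 terminates the expansion,
    and digits 0/2 become the binary bits 0/1 of the output."""
    y, bit = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        d = int(x)
        if d == 1:
            return y + bit
        x -= d
        y += bit * (d // 2)
        bit /= 2
    return y

assert cantor(0.5) == 0.5                      # 1/2 = 0.111..._3, first digit is 1
assert abs(cantor(0.25) - 1 / 3) < 1e-9        # classic value phi(1/4) = 1/3
grid = [cantor(k / 256) for k in range(256)]
assert all(u <= v + 1e-9 for u, v in zip(grid, grid[1:]))   # nondecreasing
```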
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Matrix calculus in multiple linear regression OLS estimate derivation The steps of the following derivation are from here Starting from $y= Xb +\epsilon $, which really is just the same as $\begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \end{bmatrix} = \begin{bmatrix} 1 & x_{21} & \cdots & x_{K1} \\ 1 & x_{22} & \cdots & x_{K2} \\ \vdots & \ddots & \ddots & \vdots \\ 1 & x_{2N} & \cdots & x_{KN} \end{bmatrix} * \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{K} \end{bmatrix} + \begin{bmatrix} \epsilon_{1} \\ \epsilon_{2} \\ \vdots \\ \epsilon_{N} \end{bmatrix} $ it all comes down to minimzing $e'e$: $\epsilon'\epsilon = \begin{bmatrix} e_{1} & e_{2} & \cdots & e_{N} \\ \end{bmatrix} \begin{bmatrix} e_{1} \\ e_{2} \\ \vdots \\ e_{N} \end{bmatrix} = \sum_{i=1}^{N}e_{i}^{2} $ So minimizing $e'e'$ gives us: $min_{b}$ $e'e = (y-Xb)'(y-Xb)$ $min_{b}$ $e'e = y'y - 2b'X'y + b'X'Xb$ (*) $\frac{\partial(e'e)}{\partial b} = -2X'y + 2X'Xb \stackrel{!}{=} 0$ $X'Xb=X'y$ $b=(X'X)^{-1}X'y$ I'm pretty new to matrix calculus, so I was a bit confused about (*). In step (*), $\frac{\partial(y'y)}{\partial b} = 0$, which makes sense. And then $\frac{\partial(-2b'X'y)}{\partial b} = -2X'y$, but why exactly is this true? If it were $\frac{\partial(-2b'X'y)}{\partial b'}$, then that would make perfect sense to me. Is taking the partial derivative with respect to $b$ the same as taking the partial derivative with respect to $b'$? Similarly, $\frac{\partial(b'X'Xb)}{\partial b} = X'Xb$ Why is this true? Shouldn't it be $= b'X'X$?
This is not exactly a proof but rather a way to think about it. You are trying to minimize a scalar function $F(b)$. Now use the implicit derivative: $$dF=d(y'y)-2d(b'X'y)+d(b'X'Xb)=-2db'X'y+db'X'Xb+b'X'Xdb.$$ Now transpose the last expression (which is a scalar) and factor $db'$. $$dF=2db'(-X'y+X'Xb)$$ So the gradient of $F(b)$ is $2(-X'y+X'Xb)$. Set this to zero and solve for $b$. This procedure is sometimes also called the external definition of the gradient.
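A small pure-Python check of the resulting normal equations $X'Xb = X'y$ on made-up data (one intercept plus one regressor, so $X'X$ is $2\times 2$ and can be inverted by hand):

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x, so b should be (1, 2)
n = len(xs)

# X'X and X'y for the design matrix X with rows [1, x_i]
sx, sxx = sum(xs), sum(x * x for x in xs)
XtX = [[n, sx], [sx, sxx]]
Xty = [sum(ys), sum(x * y for x, y in zip(xs, ys))]

# b = (X'X)^{-1} X'y via the explicit 2x2 inverse
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
b = [(XtX[1][1] * Xty[0] - XtX[0][1] * Xty[1]) / det,
     (-XtX[1][0] * Xty[0] + XtX[0][0] * Xty[1]) / det]
assert b == [1.0, 2.0]

# the gradient -2X'y + 2X'Xb vanishes at the solution
grad = [-2 * Xty[i] + 2 * sum(XtX[i][j] * b[j] for j in range(2)) for i in range(2)]
assert all(abs(gv) < 1e-9 for gv in grad)
```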
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Defining a norm in the quotient space $E/M$ Let $(E,\|\cdot\|)$ be a normed space and consider $M \subseteq E$ a closed vector subspace. Consider in $E$ the equivalence relation $x \equiv y \iff x-y \in M$, and let $E/M$ be the quotient set. The equivalence class of $x$ is the set $x+M = \{ x+m \mid m \in M \}$. Show that $E/M$ is a vector space with the operations $(x+M) +' (y+M) = (x+y) + M$ and $\lambda \cdot'(x+M) = \lambda x + M$. Show that the expression $\|x+M\| = \inf \{ \|x+m\| \mid m \in M\}$ defines a norm in $E/M$. Please verify my progress in proving $E/M$ is a vector space: * *Well-definedness of $+'$ and $\cdot'$: Suppose $x+M = x'+M$ and $y+M = y'+M$. Then, by definition of set $x+M$, $x=x'$ and $y=y'$, then $x+y = x'+y'$. Suppose $x+M = x'+M$. Then $x=x'$. Hence $\lambda (x+M) = \lambda x + M = \lambda x' + M = \lambda (x'+M)$. *Axioms of $+'$: $$(x+M) +' (y+M) = (x+y) +' M = (y+x) +' M = (y+M) +' (x+M)$$ $$((x+M) +' (y+M)) +' (z+M) = ((x+y)+M) +' (z+M) = (x+y+z)+M$$ $$(x+M)+'((y+M)+'(z+M)) = (x+M) +' ((y+z)+M) = (x+y+z)+M$$ $$M +' (x+M) = x+M = (x+M) +' M$$ $$(-x+M) +' (x+M) = M = (x+M) +' (-x+M)$$ *Axioms of $\cdot'$: $$0 \cdot' (x+M) = M$$ $$1 \cdot' (x+M) = x+M$$ $$(\lambda \mu)\cdot'(x+M) = \lambda \mu x + M = \lambda\cdot'(\mu x + M) = \lambda\cdot' ( \mu \cdot'(x+M))$$ *Axioms of Distributivity: $$\lambda \cdot'((x+M) +' (y+M)) = \lambda\cdot'((x+y)+M) = \lambda(x+y) + M = \lambda x + \lambda y + M = (\lambda x + M) +' (\lambda y + M) = \lambda \cdot' (x+M) +' \lambda \cdot' (y+M)$$ $$ (\lambda + \mu) \cdot' (x+M) = (\lambda + \mu)x + M = (\lambda x + \mu x) + M = (\lambda x + M) +' (\mu x + M) = \lambda \cdot'(x+M) +' \mu \cdot'(x+M)$$ Then I need to prove that the expression $\|x+M\| = \inf \{ \|x+m\|\mid m \in M\}$ defines a norm in $E/M$. I'm not sure how to conclude positive definiteness, nor whether what I did for homogeneity and the triangle inequality is right.
Positive definiteness: $\|x+M\| \geq 0$, since $\|x+m\| \geq 0$ and so the infimum has to be $0$ or bigger. If $\inf \{ \|x+m\| \mid m \in M \} = 0$, then $\forall \epsilon>0 \exists y \in \{ \|x+m\| \mid m \in M \} $ such that $ 0 \leq y < \epsilon$. I need to conclude that $x=0$, but I'm not sure how. Homogeneity: $||\lambda\cdot'(x+M)|| = ||\lambda x + M || = \inf \{ || \lambda x + m|| | m \in M \} = \inf \{ || \lambda x + \lambda m|| | m \in M \}$ (since $\lambda m \in M$) $ = \inf \{ || \lambda (x + m)|| | m \in M \} = |\lambda| \inf \{ || x + m|| | m \in M \} = |\lambda|||x+M||$ Triangle Inequality: $ || (x+M) +' (y+M)|| = \inf \{ || (x+y) + m|| | m \in M \} \leq \inf \{ || (x+y) + 2m|| | m \in M \} \leq \inf \{ ||x+m|| + ||y + m|| | m \in M \} = \inf \{ ||x+m|| | m \in M \} + \inf \{||y + m|| | m \in M \} $ Thanks.
For positive definiteness you should use that $M$ is assumed closed. So if $\|x+m_k\|\rightarrow 0$, $m_k\in M$ then show (1) that the sequence $(m_k)_k$ is Cauchy, (2) that it therefore converges to some $m\in M$ and (3) finally $x=-m\in M$. In homogeneity you should distinguish the case $\lambda=0$ and $\lambda\neq 0$. In the triangle inequality perhaps better to say for the last part: $$\inf\{\|x+y+m\| : m\in M\}=\inf\{\|x+m+y+n\| : m,n\in M\} \leq \inf\{\|x+m\|+\|y+n\| : m,n\in M\} = \inf\{\|x+m\| : m\in M\}+\inf\{\|y+n\| : n\in M\}$$
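To see the quotient norm in a concrete case (a numerical sketch, not part of the proof): take $E=\mathbb{R}^2$ with the Euclidean norm and $M$ the $x$-axis, which is closed. Then $\|(x_1,x_2)+M\| = \inf_m \|(x_1+m,\,x_2)\| = |x_2|$, and the norm axioms can be spot-checked by approximating the infimum over a grid (the grid range and sample points below are arbitrary choices for this sketch):

```python
import math

def quotient_norm(x1, x2):
    """Approximate inf over m in M of ||(x1 + m, x2)|| for M = the x-axis.
    The infimum is attained at m = -x1 and equals |x2| exactly."""
    grid = [i / 100.0 for i in range(-1000, 1001)]   # m in [-10, 10]
    return min(math.hypot(x1 + m, x2) for m in grid)

assert abs(quotient_norm(3.0, 4.0) - 4.0) < 1e-9     # equals |x2|
# Homogeneity with lambda = 2:
assert abs(quotient_norm(6.0, 8.0) - 2 * quotient_norm(3.0, 4.0)) < 1e-9
# Triangle inequality for (1,2) + M and (3,4) + M:
assert quotient_norm(4.0, 6.0) <= quotient_norm(1.0, 2.0) + quotient_norm(3.0, 4.0) + 1e-9
```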
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Understanding a proof that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ I'm having some trouble understanding a proof on Naoki Sato's notes on Number Theory and I was wondering if you guys could give me some help. The problem is that I don't understand the last implication on the proof for example 1.1 Example 1.1. Let x and y be integers. Prove that 2x + 3y is divisible by 17 iff 9x + 5y is divisible by 17. Solution. 17 | (2x + 3y) ⇒ 17 | [13(2x + 3y)], or 17 | (26x + 39y) ⇒ 17 | (9x + 5y) conversely, 17 | (9x + 5y) ⇒ 17 | [4(9x + 5y)], or 17 | (36x + 20y) ⇒ 17 | (2x + 3y). My problem is that I don't understand how does 17 | (26x + 39y) imply 17 | (9x + 5y). If you could elaborate on this step I would be most grateful. I'm sorry if this is an obvious question but I am a beginner and I just can't get it. Thanks for your help in advance.
If $17\mid (26x+39y)$, and $17\mid (-17x-34y)$, then we may add to get $17\mid 9x+5y$. In general the rule is, if $p\mid a$ and $p\mid b$, then $p\mid (a+b)$.
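As a quick sanity check of the equivalence (not a replacement for the proof), one can test it exhaustively on a grid of integers, negatives included:

```python
def both_or_neither(x, y):
    # True when 17 | (2x + 3y) and 17 | (9x + 5y) agree.
    return ((2*x + 3*y) % 17 == 0) == ((9*x + 5*y) % 17 == 0)

assert all(both_or_neither(x, y)
           for x in range(-50, 51) for y in range(-50, 51))
```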
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
This question concerns functions $f:\{A,B,C,D,E\}\rightarrow\{1,2,3,4,5,6,7\}$ (counting) Can someone guide me towards a way to count surjective functions of the below question? This question concerns functions $f:\{A,B,C,D,E\}\rightarrow\{1,2,3,4,5,6,7\}$. How many such functions are there? How many are injective? Surjective? Bijective? My answers with logic behind them: There are a total of $7^5$ functions since each $f(k)$ where $k\in\{A,B,C,D,E\}$ may map to $7$ elements in the set $\{1,2,3,4,5,6,7\}$. The number of injective functions is $7\cdot6\cdot5\cdot4\cdot3$ since once we select an element to map to we may not map to it again since injectivity means that if $x\neq y\Rightarrow f(x)\neq f(y)$. Not sure on surjective count...
There are no surjective functions from a finite set to a bigger finite one.
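All three counts can be confirmed by brute force over the $7^5$ functions, encoding a function as the tuple $(f(A),\dots,f(E))$:

```python
from itertools import product

codomain = range(1, 8)                              # {1, ..., 7}
all_functions = list(product(codomain, repeat=5))   # tuples (f(A), ..., f(E))

total = len(all_functions)
injective = sum(1 for f in all_functions if len(set(f)) == 5)
surjective = sum(1 for f in all_functions if set(f) == set(codomain))

assert total == 7 ** 5                  # 16807
assert injective == 7 * 6 * 5 * 4 * 3   # 2520
assert surjective == 0                  # 5 arguments cannot cover 7 values
```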
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What's the difference between continuous and piecewise continuous functions? A continuous function is a function where the limit exists everywhere, and the function at those points is defined to be the same as the limit. I was looking at the image of a piecewise continuous function on the following page: http://tutorial.math.lamar.edu/Classes/DE/LaplaceDefinition.aspx But the image of the function they've presented isn't continuous. As such, I'm confused by what a piecewise continuous function is and the difference between it and a normal continuous function. I'd appreciate it if someone could explain the difference between a continuous function and a piecewise continuous function. Also, please reference the image of the piecewise continuous function presented on this page http://tutorial.math.lamar.edu/Classes/DE/LaplaceDefinition.aspx . Thank you.
A piecewise continuous function may fail to be continuous at finitely many points in a finite interval, so long as you can split the interval into subintervals on each of which the function is continuous. A nice example of a piecewise continuous function is the floor function: the function itself is not continuous, but each little segment is in itself continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1968943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Ideals in $S^{-1}A$. I am studying localization of rings and got stuck at a problem. It states that if $S$ is a multiplicatively closed subset of a ring $A$ then fractional ideals of $\ S^{-1}A $ are in bijective correspondence with those of $A$ which do not meet $S$. However, prime ideals of $ S^{-1}A $ are in bijective correspondence with those of $A$ which do not meet $S$. My question is why can't we say that ideals of $A$ which do not meet $S$ are in bijective correspondence with those of $S^{-1}A$? We know that ideals of $S ^{-1}A$ are of the form $S^{-1}I$ where $I$ is an ideal of $A$. Why won't the correspondence $I\rightarrow S^{-1}I $ work?
The correspondence $I\mapsto S^{-1}I$ need not be injective. For instance, let $A=k[x,y]$ and $S=\{y^n:n\in\mathbb{N}\}$. Then if $I=(x)$ and $J=(xy)$, $S^{-1}I= S^{-1}J$ even though $I\neq J$ and neither intersects $S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A better way to evaluate a certain determinant Question Statement:- Evaluate the determinant: $$\begin{vmatrix} 1^2 & 2^2 & 3^2 \\ 2^2 & 3^2 & 4^2 \\ 3^2 & 4^2 & 5^2 \\ \end{vmatrix}$$ My Solution:- $$ \begin{align} \begin{vmatrix} 1^2 & 2^2 & 3^2 \\ 2^2 & 3^2 & 4^2 \\ 3^2 & 4^2 & 5^2 \\ \end{vmatrix} &= (1^2\times2^2\times3^2)\begin{vmatrix} 1 & 1 & 1 \\ 2^2 & \left(\dfrac{3}{2}\right)^2 & \left(\dfrac{4}{3}\right)^2 \\ 3^2 & \left(\dfrac{4}{2}\right)^2 & \left(\dfrac{5}{3}\right)^2 \\ \end{vmatrix}&\left[\begin{array}{l}C_1\rightarrow\dfrac{C_1}{1} \\ C_2\rightarrow\dfrac{C_2}{2^2}\\ C_3\rightarrow\dfrac{C_3}{3^2}\end{array}\right]\\ &= (1^2\times2^2\times3^2)\begin{vmatrix} 1 & 0 & 0 \\ 2^2 & \left(\dfrac{3}{2}\right)^2-2^2 & \left(\dfrac{4}{3}\right)^2-2^2 \\ 3^2 & 2^2-3^2 & \left(\dfrac{5}{3}\right)^2-3^2 \\ \end{vmatrix} &\left[\begin{array}{l}C_2\rightarrow C_2-C_1 \\ C_3\rightarrow C_3-C_1\end{array}\right]\\ &= (1^2\times2^2\times3^2) \begin{vmatrix} 1 & 0 & 0 \\ 2^2 & 2^2-\left(\dfrac{3}{2}\right)^2 & 2^2-\left(\dfrac{4}{3}\right)^2 \\ 3^2 & 3^2-2^2 & 3^2-\left(\dfrac{5}{3}\right)^2 \\ \end{vmatrix}\\ &=(1^2\times2^2\times3^2) \begin{vmatrix} 1 & 0 & 0 \\ 2^2 & \dfrac{7}{4} & \dfrac{20}{9} \\ 3^2 & 5 & \dfrac{56}{9} \\ \end{vmatrix}\\ &=(1^2\times2^2) \begin{vmatrix} 1 & 0 & 0 \\ 2^2 & \dfrac{7}{4} & 20 \\ 3^2 & 5 & 56 \\ \end{vmatrix}\\ &=4\times(-2)\\ &=-8 \end{align} $$ As you can see, my solution is not a very promising one. If I encounter such questions again, would you please suggest a better method which doesn't involve this ridiculous amount of calculation?
Using the rule of Sarrus, the computation is really not too long, and we get in general for all $n\ge 1$, $$ \det \begin{pmatrix} n^2 & (n+1)^2 & (n+2)^2\cr (n+1)^2& (n+2)^2 & (n+3)^2\cr (n+2)^2& (n+3)^2 & (n+4)^2\end{pmatrix}=-8. $$
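A short script (a sanity check, not a proof) confirms the value $-8$ for a range of $n$, using the rule of Sarrus directly:

```python
def det3(m):
    """3x3 determinant by the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

for n in range(1, 21):
    # Entry in row r, column c is (n + r + c)^2, matching the matrix above.
    m = [[(n + r + c) ** 2 for c in range(3)] for r in range(3)]
    assert det3(m) == -8
```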
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 4 }
Does $\mathbb{C}\setminus\{0\}$ with those operations constitute a vector space over $\mathbb{C}$? If given set $V = \mathbb{C}\setminus\{0\}$ and field $\mathcal{F} = \mathbb{C}$ with operations defined for all $\vec{x}, \vec{y} \in V, \vec{x} = x \in \mathbb{C}\setminus\{0\}, \vec{y} = y \in \mathbb{C}\setminus\{0\}$ and $\alpha \in \mathbb{C}$ as: $$\vec{x} \oplus \vec{y} = x\cdot y$$ $$\alpha \odot \vec{x} = \alpha\cdot x$$ Is it really enough to take $\alpha = 0$, which gives $\alpha\cdot \vec{x} \notin V$ to prove that this is not a vector space over $\mathbb{C}$? I.e. just one counterexample is enough, right?
The answer is yes. The set $\mathbb{C}\backslash \{0\}$ with usual multiplication is an abelian group, so $V$ with $\oplus$ is an abelian group. The problems arise with "scalar multiplication", and although one example is enough, it's also true that distributive laws fail for (almost) any $\alpha,x,y$ $$\alpha\odot (x\oplus y) \neq \alpha \odot x \oplus \alpha\odot y$$
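A two-line numerical illustration of the failure, with arbitrarily chosen sample values:

```python
# V = C \ {0} with x (+) y = x*y and alpha (.) x = alpha*x.
alpha, x, y = 2.0, 1 + 1j, 2 - 1j

lhs = alpha * (x * y)              # alpha (.) (x (+) y)  =  alpha * x * y
rhs = (alpha * x) * (alpha * y)    # (alpha (.) x) (+) (alpha (.) y)  =  alpha^2 * x * y
assert lhs != rhs                  # distributivity fails

assert 0 * x == 0                  # 0 (.) x = 0, which is not an element of V
```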
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equation of tangent to a circle Find an equation of the tangent to the circle with equation $x^2+y^2-10x+4y+4=0$ at the point $(2,2)$ I have solved up to $4y - 8 = 3x - 6$, but I am not sure whether the final answer should be $3x-4y+2=0$ OR whether it should be $y=\frac{3}{4}x+\frac{1}{2}$. The solutions say $3x-4y+2=0$ should be the answer; however, it doesn't ask for a specific form of the equation. Could it be both?
Actually I'd use neither! The important parts here are that it passes through a particular point and goes in a particular direction, which means that point-slope form is best: $$y-2 = \frac{3}{4}(x-2)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Natural number divisible by $42$? There is a natural number divisible by $42$. Among the decimal digits $0,1,\dots,9$, the sum of those digits that do not appear anywhere in the number is $25$. Prove that some digit must appear at least twice in the number.
Hint: The sum of the digits of the number has to be a multiple of $3$, because it is divisible by $3$. What is the sum of all the digits there are? If every digit in the number is used only once, what is the sum of its digits?
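The arithmetic behind the hint, as a small check (the loop at the end is just a sanity test of the divisibility fact on actual multiples of $42$):

```python
ALL_DIGITS_SUM = sum(range(10))            # 0 + 1 + ... + 9 = 45
missing_sum = 25
used_sum = ALL_DIGITS_SUM - missing_sum    # 20, if no digit repeats

# A multiple of 42 is a multiple of 3, so its digit sum is divisible by 3;
# 20 is not, so some digit has to appear twice.
assert used_sum % 3 != 0

# Sanity check on actual multiples of 42:
for k in range(1, 2000):
    assert sum(int(d) for d in str(42 * k)) % 3 == 0
```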
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is it possible to cover an $8 \times8$ board with $2 \times 1$ pieces? We have an $8\times 8$ board, colored with two colors like a typical chessboard. Now, we remove two squares of different colour. Is it possible to cover the new board with dominoes, i.e. $2\times 1$ pieces each covering one square of each colour? I think we can, as after the removal of the two squares, we are left with $64-2=62$ squares with $31$ squares of each colour, and - since a domino piece covers two colours - we can cover the new board with domino pieces. But how should one justify it mathematically?
Hint: A promising strategy is to prove that the claim If we remove two opposite-colored squares from a $2m\times 2m$ chessboard, we may tile the remaining part with $2\times 1$ dominoes. by induction on $m$. The case $m=1$ is trivial. Assume that the claim holds for some $m\geq 1$ and consider a $(2m+2)\times (2m+2)$ chessboard. If both the removed squares do not lie on the boundary of the chessboard, there is nothing to prove. Hence we may assume that at least one of the removed squares lies on the boundary. And we may also start tiling by following a spiral, starting next to the removed square on the boundary. Another interesting idea is just to place $31$ non-overlapping dominoes on an $8\times 8$ chessboard and start playing Sokoban with the placed dominoes, in order to free the wanted squares.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Integrals of a function which has finitely many discontinuities are not differentiable at the discontinuities I've tried solving an exercise stated below. $$\text{Suppose that $f\in\mathcal R$ on $[a,b]$ and define $F(x)=\int_{a}^{x}{f(t)dt}$.}\\\text{If $x$ is a point at which $f$ is not continuous, is it still possible that $F'(x)=f(x)$?}$$ So I approached in two ways. Let $k\in\{\text{discontinuities of $f$}\}$. One is to show $F$ is continuous at $x=k$. And the other is to show that $\lim\limits_{h\to0}\frac{F(k+h)-F(k)}{h}$ exists. First step, $\exists M\gt0$ satisfying $\vert f(x)\vert\leq M$ for all $x\in [a,b]$, since $f\in\mathcal R$. (I'll assume that $f$ is defined at all points of $[a,b]$.) So, $\vert F(y)-F(x)\vert=\vert\int_{x}^{y}f\left(t\right)dt\vert\leq M\left(y-x\right)$ for $a\leq x\leq y\leq b$. For any $\epsilon\gt0$, $\vert F(y)-F(x)\vert\lt\epsilon$ where $\vert y-x\vert\lt\delta=\epsilon/M$ . Hence, the continuity of $F$ is clear. Now I'm struggling to show that the second step is false. For a jump discontinuity case, I've tried to consider $f(x)= \begin{cases} x^2/2+2, &\text{if $x\lt 2$} \\ x^2/2+6, &\text{if $x\geq 2$} \end{cases}$. In this case $k=2$. Then, $$\left\vert \frac{F(2+h)-F(2-h)}{2h}-f(2)\right\vert=\left\vert\frac{\int_{2-h}^{2}\left(t^2/2+2\right)dt+\int_{2}^{2+h}\left(t^2/2+6\right)dt}{2h}-8\right\vert\gt\epsilon=1\\\text{for any $0\lt h\lt\delta=g(\epsilon).$ (I'll skip the part of evaluating the integrals.)}$$ Thus $F$ is not differentiable at $x=k$ where $k$ is a jump discontinuity, and it says roughly that $F$ is changing sharply at $x=k$, right? But how about removable discontinuities? Any hint will be helpful.
Hint: Consider the function $f(x) = \sin(1/x), x\ne 0, f(0)=0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that every graph with diameter d and girth 2d+1 is regular How to prove that every graph with diameter $d$ and girth $2d+1$ is regular? I only know the relation between diameter and girth, which is given by the formula $girth(G)\leq 2\,diam(G)+1$
The proof of this fact that I know goes as follows. Suppose we have the following claim Claim 1. If $G$ is a graph of diameter $d$ and girth $2d+1$ then any two vertices $u,v$ at distance $d$ have the same degree. Once you establish Claim 1 your claim follows easily. If $C$ is a $2d+1$ cycle in $G$ then by Claim 1 all vertices on $C$ have the same degree. For any vertex not on $C$ you can find a path of length $d$ to a vertex on $C$ and hence by Claim 1 you are done. Hence it only remains to prove Claim 1, which I think you can do yourself. If you need any hints on that as well post a comment.
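As an illustration of the statement (not of the proof), the Petersen graph has diameter $d=2$, girth $5=2d+1$, and is indeed regular; this can be checked with a short script (the vertex labelling below is one common convention):

```python
from collections import deque

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
edges = [(i, (i + 1) % 5) for i in range(5)]
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
edges += [(i, i + 5) for i in range(5)]

adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def bfs_dist(source, blocked=frozenset()):
    """Distances from source; 'blocked' is a set of directed edges to skip."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if (u, w) not in blocked and w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

diameter = max(max(bfs_dist(v).values()) for v in range(10))

# Shortest cycle through edge (u, v): remove that edge, then 1 + dist(u, v).
girth = min(1 + bfs_dist(u, {(u, v), (v, u)})[v] for u, v in edges)

degrees = {len(adj[v]) for v in range(10)}

assert diameter == 2
assert girth == 2 * diameter + 1   # girth 5
assert degrees == {3}              # regular, as the theorem predicts
```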
{ "language": "en", "url": "https://math.stackexchange.com/questions/1969932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Entire function with image contained in "slit plane" is constant Let S be the "slit plane" $S = \mathbb{C} - \{t \in \mathbb{R} : t \leq 0\}$ and deduce that if $f$ is an entire function whose image is contained in $S$, then $f$ is constant. Clearly $S$ is simply-connected and does not contain zero, so I attempted to go through the analytic branch of $\log(f(z))$ but was not successful. The closest I've gotten seemed rather roundabout: Let $w: S \to \{z: Im(z) > 0\}$, for instance $w(z) = e^{i\pi /2} \sqrt z = i\sqrt z$. Then the image of $w(f(z))= (w \circ f)(z)$ is contained in the upper half plane. Since $f$ is entire and $f(z) \neq 0$ for any $z$, $\sqrt{f(z)} = e^{\frac{1}{2}\log(f(z))}$ is an entire function, thus $w \circ f = i\sqrt f$ is entire. So $e^{i w \circ f}$ is an entire function whose image lies in the unit disk. In other words, $e^{i w \circ f}$ is an entire bounded function and thus is constant by Liouville's theorem. But now $$e^{i w \circ f}=c \iff i w \circ f = \log c \iff w \circ f = -i\log c \iff w(f(z)) = -i \log c$$ and so we conclude that $f$ must be constant. Does this solution seem valid?
Let $g(z) = {1 \over \log f(z) + i (\pi +1) }$, then $|g(z)| \le 1$ and $g$ is entire, hence constant. Since $\log f(z) = {1 \over g(0)} -i(\pi+1)$, we see that $f$ is constant too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derivatives and Differentiation rules I am currently encountering a math problem that I can't seem to solve on my own and I think it is because I missed the last math lecture. Usually I am pretty good when it comes to derivatives but this one seems to be my nemesis. Can somebody maybe help me out? Thank you guys! PS I'm trying to use the formatting system, I hope I'm getting it right. Let $F(x)=f(x^3)$ and $G(x)=(f(x))^3$. You also know that $a^2=4$, $f(a)=2$, $f'(a)=7$, $f'(a^3)=4$ Find $F'(a)=$ and $G''(a)=$
From the chain rule $$F'(a) = \left.(f(x^3))'\right|_a = f'(a^3)\cdot \left.(x^3)'\right|_a = f'(a^3)3a^2 = (4)(3)(4) = 48$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
George's imagined numbers George pictured 4 natural numbers. He multiplied each of those numbers by three and wrote all four results on a blackboard. He also calculated all possible products of the pairs of the written numbers, he then wrote all 6 products on the blackboard. Prove that (of the ten numbers written on the blackboard) there are surely two numbers which end with the same digit.
Just to be ornery and different: Each original number "influences" four of the ten results; once when multiplied by 3, and once when multiplied by each of the three other original numbers. Given two original numbers, each will have four results, but one of the results will be the common result of multiplying those two numbers together. So a pair of original numbers "influences" seven of the ten results; the two where each is multiplied by 3, the four where each is multiplied by each of the other two original numbers, and the one where they are multiplied together. Remember: Even times anything is even. Odd times odd is odd. To have an even result, there must be an even original. If there is an even original it will only have even results. So.... If there are two or more even original numbers they will influence at least 7 even results. These can not be distinct as there are only 5 even digits. If there is exactly one even original number, it will influence 4 even results. All the remaining 6 results will be odd. As there are only 5 odd digits these will not be distinct. Finally if there are no even original numbers, there will be no even results and the ten results can not be distinct. So the results can not be distinct. (Indeed, they will have 4, 7, 9, or 10 even digits and 6, 3, 1, or 0 odd digits.) .... But for simplicity, I prefer Mike Pierce's answer of noting that $0$ as an original number results in repeated results. (In my terminology it would only have $1$ distinct result: $0$). $5$ does as well (as it would only have two distinct results: $5$ and $0$). Without $0$ or $5$ as last digits of the original numbers, results ending in $0$ or $5$ are impossible (a product ends in $0$ only if a factor ends in $0$, or one factor ends in $5$ and the other is even; a product ends in $5$ only if one factor ends in $5$ and the other is odd) and without $5$ or $0$ as possible last digits there are only $8$, not $10$, possible distinct results.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Showing $\mathrm{Var}(\min(X,y))$ is increasing in $y$ where $X$ is a random variable For nonnegative random variables, expectation is defined to be the supremum of all expectations of simple random variables $A$ that satisfy $A\leq X$. (For simple random variables, $E(A)=\sum_jc_jP(C_j)$ where $c_j$'s are nonnegative constants and $C_j$'s are disjoint events in $\Omega$.) Let $X$ be a nonnegative real-valued random variable, how can we show that $\mathrm{Var}(\min(X,y))$ is increasing in $y$? ($y$ is a constant.) I thought that the proof is probably based on definition of expectation and I would need compare $\mathrm{Var}(\min(X,y_1))$ and $\mathrm{Var}(\min(X,y_2))$ for $y_1\leq y_2$ and probably start from some simple RVs. Thank you for any help!
Following the hint from @Michael: $Var(min(X,y))$ = $Var(Z)$ = $\int_0^{\infty} P[Z^2 > t] dt - (\int_0^{\infty} P[Z > t] dt)^2$ = $\int_0^{\infty} P[min(X,y)^2 > t] dt - (\int_0^{\infty} P[min(X,y) > t] dt)^2$ = $\int_0^{y^2} P[X^2 > t] dt - (\int_0^{y} P[X > t] dt)^2$ Then, $\frac{d}{dy}[Var(min(X,y))]$ = $ 2y P[X^2 > y^2] - 2 (\int_0^{y} P[X > t] dt) P[X > y]$ = $ 2y P[X > y] - 2 (\int_0^{y} P[X > t] dt) P[X > y]$ = $ 2 P[X > y] (y - \int_0^{y} P[X > t] dt)$ $\geq 0$ with the last line following from $\int_0^{y} P[X > t] dt \leq \int_0^{y} dt = y$ The inequality must be weak because, if $X$ were a constant then $\forall y: Var(min(X,y)) = 0$ so $Var(min(X,y))$ does not increase in $y$.
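A numerical sanity check of the monotonicity, using an arbitrarily chosen discrete distribution for $X$ and exact rational arithmetic:

```python
from fractions import Fraction

# X uniform on {0, 1, ..., 9}; Z = min(X, y) for a grid of thresholds y.
support = list(range(10))
p = Fraction(1, 10)

def var_min(y):
    z = [min(Fraction(x), y) for x in support]
    mean = sum(p * zi for zi in z)
    return sum(p * (zi - mean) ** 2 for zi in z)

ys = [Fraction(k, 4) for k in range(45)]            # y = 0, 1/4, ..., 11
values = [var_min(y) for y in ys]

assert values[0] == 0                               # min(X, 0) is constant 0
assert all(a <= b for a, b in zip(values, values[1:]))   # nondecreasing in y
assert var_min(Fraction(9)) == Fraction(33, 4)      # for y >= 9, Z = X
```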
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit points and the trivial topology: A textbook error? I'm reading George L. Cain's Introduction to General Topology, and am confused by the following example. We have: Pg 32: Definition: Let $(X,T)$ be a topological space. If $S$ is a subset of $X$, a point $p$ in $X$ is a T-limit point of $S$ if every element of $T$ containing $p$ meets $S$ in a point other than $p$. Pg 32: Example 2.4a: Let $X$ be the set of real numbers and let $T$ be the trivial topology ($ \emptyset, X$). Suppose $S$ is any nonempty subset of $X$. Then every $x \in X - S $ is a $T$-limit point of $S$. It seems like every $x \in X$ should be a $T$-limit point of $S$, unless $S$ is a singleton, in which case the book would be correct. In the case where $S$ is not a singleton, I think I've even found a counter-example to the book. Consider $S=[0,2]$. Note that $p=1$ is a limit point of $S$ since the only element of the topology containing $p=1$ (namely $X$ itself) meets $S=[0,2]$ at a point other than $p=1$. Who is correct about the T-limit points of S in Example 2.4a? If the book is correct, what is wrong with my counterexample?
You're both right: in your example, $1$ is indeed a limit point of $S$. But the text doesn't claim that those are the only $T$ limit points of $S$. It just doesn't want to go too deep into the details of this example just to say that if $S$ is a singleton, then all points of $X-S$ are $T$ limit points of $S$, and otherwise, all points in $S$ are $T$ limit points of $S$. It just gives you the general case which is true of any $S$, though you're free to investigate the particulars by yourself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$ x\in \left[0,{1\over n-1} \right] \to 1+nx \le (1+x)^n \le {1+x\over 1-(n-1)x}$ (Homework assignment) This is about a homework I have to do. I don't want the straight answer, just a hint that may help me start on this. To give you context, we're now studying integrals. Now here is the question : Prove : $ x\in \left[0,{1\over n-1} \right] \to 1+nx \le (1+x)^n \le {1+x\over 1-(n-1)x}$ The exercise suggests using what I can only poorly translate as the "inequality of finite increments" (the mean value inequality), which states : Let $f$ be a function continuous on $[a,b], a<b$ and differentiable on $(a,b)$. $\exists M \in \mathbb{R}, \forall x \in (a,b), f'(x)\le M \to f(b)-f(a) \le M(b-a)$ I tried to apply this to $f(x)=(1+x)^n$ but to no avail. Any input will be greatly appreciated, Thanks !
For the inequality $1+nx\leq (1+x)^n $, just expand using the binomial theorem and notice that all terms are positive. The other inequality, after some manipulations (note that all terms are positive) looks like $$1-(n-1)x\leq (1+x)^{-(n-1)}. $$ Consider the function $$f (x)=(1+x)^{-(n-1)}+(n-1)x-1.$$ We have $f(0)=0$, and $$f'(x)=-(n-1)(1+x)^{-n}+n-1=(n-1)(1-(1+x)^{-n})>0,$$since $(1+x)^{-n}<1$. We have, then, that $f (0)=0$ and $f $ is increasing.
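A quick numerical check of both inequalities on a grid of the interval $[0, 1/(n-1)]$ (grid size and tolerances are arbitrary choices):

```python
for n in range(2, 8):
    steps = 200
    for k in range(steps + 1):
        x = k / (steps * (n - 1))          # x runs over [0, 1/(n-1)]
        lower = 1 + n * x
        middle = (1 + x) ** n
        assert lower <= middle + 1e-9
        if 1 - (n - 1) * x > 1e-12:        # upper bound blows up at x = 1/(n-1)
            upper = (1 + x) / (1 - (n - 1) * x)
            assert middle <= upper + 1e-9
```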
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
To find $p$ such that max/min of $(\sin p+\cos p)^{10}$ occurs To find the max/min of $(\sin p+\cos p)^{10}$, I have to find the value of $p$ at which the expression is maximal/minimal. I tried to manipulate the expression so as to get rid of at least one of $\sin$ or $\cos$. Then I could set whatever is left equal to $1$ to get the maximum. But I'm unable to do that.
$(\sin p + \cos p)^{10} = (\sin^2 p + 2\sin p\cos p + \cos^2 p)^5 = (1+\sin 2p)^5$ Function $x\mapsto (1+x)^5$ is monotone increasing, thus, extremes of $(1+\sin 2p)^5$ are the same as extremes of $\sin 2p$.
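Numerically, the maximum $(1+1)^5 = 32$ (where $\sin 2p = 1$, e.g. $p=\pi/4$) and the minimum $(1-1)^5=0$ (where $\sin 2p=-1$) can be confirmed on a fine grid:

```python
import math

grid = [k * 2 * math.pi / 100000 for k in range(100000)]
vals = [(math.sin(t) + math.cos(t)) ** 10 for t in grid]

assert abs(max(vals) - 32.0) < 1e-3    # (1 + sin 2p)^5 with sin 2p = 1
assert min(vals) < 1e-3                # (1 + sin 2p)^5 with sin 2p = -1
assert abs((math.sin(math.pi/4) + math.cos(math.pi/4)) ** 10 - 32.0) < 1e-9
```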
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Tiling a cylindrical piece of paper Imagine a piece of paper. It has a square $1\times1$ grid on it, so that every square has an area of $1$ cm$^2$. That piece of paper was folded into the shape of an (empty, hollow) cylinder whose length is $50$ cm and whose base circumference is also $50$ cm (look at the picture below). Can you cover the area of that cylinder with the shape in picture b, which is made up of $4$ squares, each of dimensions $1\times1$?
It is impossible: The number of unit squares on the cylinder is $50^2$, and we color them black or white like a chessboard. Each tile of shape b) then covers squares in one of two ways: $3$ black and $1$ white, or $1$ black and $3$ white. If the number of tiles of the first type is $x$ and the number of the second type is $y$, then $$ 3x+y=50^2/2 $$ $$ x+3y=50^2/2 $$ so that $ x+y=625$. Following JeanMarie's suggestion, I will use TonyK's argument: $x+y=625$, that is, the number of 4-square tiles is odd. Hence WLOG we can assume that $x$ is odd and $y$ is even. Hence the number of black squares on the cylinder satisfies $$50^2/2=3x+y$$ The left-hand side is even and the right-hand side is odd. Hence it is a contradiction.
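The counting step can also be checked exhaustively: the system $3x+y=1250$, $x+3y=1250$ has no solution in nonnegative integers, which is a tiny brute-force confirmation of the impossibility:

```python
# x tiles cover 3 black + 1 white squares, y tiles cover 1 black + 3 white;
# a full tiling would need 3x + y = 1250 (black) and x + 3y = 1250 (white).
solutions = [(x, 1250 - 3 * x)
             for x in range(417)                      # 3x <= 1250
             if 1250 - 3 * x >= 0 and x + 3 * (1250 - 3 * x) == 1250]
assert solutions == []    # no nonnegative integer solution, so no tiling
```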
{ "language": "en", "url": "https://math.stackexchange.com/questions/1970912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
The level curves and the Jacobian How do I have to approach this problem? Intuitionally I can imagine the situation, but I have no idea how to prove this. Problem : Let $f=(f_1, f_2)$ be a continuously differentiable function defined on an open set $U$ in $R^2$ such that $\nabla f_1$ and $\nabla f_2$ do not vanish at any point of $U$. a) Suppose that $J_f($x$)=0$ for all x in $U$. Prove that a curve $C$ in $U$ is a level curve of $f_1$ if and only if it is also a level curve of $f_2$. b) Suppose that $f_1$ and $f_2$ have the same level curves on $U$. Prove that $J_f($x$)=0$ for all x in $U$.
We are given two $C^1$-functions $f_1$, $f_2$ with a common domain $U\subset{\mathbb R}^2$, whereby both $\nabla f_1$ and $\nabla f_2$ do not vanish in $U$. (a) The condition $$J_f({\bf x})=\nabla f_1({\bf x})\wedge\nabla f_2({\bf x})=0\qquad({\bf x}\in U)$$ means that $\nabla f_1({\bf x})$ and $\nabla f_2({\bf x})$ are parallel at all points ${\bf x}\in U$. It follows that the orthogonal trajectories of the two gradient fields are the same. But these orthogonal trajectories are just the level lines of $f_1$, resp., $f_2$. (b) Same thing: Since $\nabla f_i\ne{\bf 0}$ in $U$ the level lines of both $f_1$ and $f_2$ are smooth curves, whereby through each point ${\bf x}\in U$ passes exactly one curve for each of $f_1$ and $f_2$. If these curves do coincide then their tangents, and hence their normals $\nabla f_i({\bf x})$, have to be parallel at each point ${\bf x}\in U$. This implies $J_f({\bf x})=0$ for all ${\bf x}\in U$.
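A small numerical illustration of part (b), with an example pair chosen for this sketch (not taken from the problem): $f_1 = x^2+y^2$ and $f_2=(x^2+y^2)^2$ have the same level curves away from the origin, and their Jacobian determinant vanishes identically:

```python
# If f1 and f2 share level curves, the Jacobian determinant should vanish.
def grad_f1(x, y):
    return (2 * x, 2 * y)

def grad_f2(x, y):
    r2 = x * x + y * y
    return (4 * r2 * x, 4 * r2 * y)   # gradient of (x^2+y^2)^2 by the chain rule

# Sample points away from the origin (where both gradients are nonzero):
for x in [0.5, 1.0, -2.0]:
    for y in [0.3, -1.5, 2.0]:
        a1, a2 = grad_f1(x, y)
        b1, b2 = grad_f2(x, y)
        jac = a1 * b2 - a2 * b1
        assert abs(jac) < 1e-12
```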
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to evaluate $\lim\limits_{x\to 0} \frac{\arctan x - \arcsin x}{\tan x - \sin x}$ I am stuck on a problem involving L'Hospital's Rule: $\lim\limits_{x\to 0} \frac{\arctan x - \arcsin x}{\tan x - \sin x}$, which is the indeterminate form $\frac{0}{0}$. If we use the rule, we will have $\lim\limits_{x\to 0} \frac{\frac{1}{1+x^2}-\frac{1}{\sqrt{1-x^2}}}{\sec^2x-\cos x}$. So, I think that I am approaching this problem in the wrong way. Do you have any ideas?
Alternatively, one may use standard Taylor expansions, as $x \to 0$, $$ \begin{align} \sin x&=x-\frac{x^3}{6}+o(x^4) \\\tan x&=x+\frac{x^3}{3}+o(x^4) \\\arctan x&=x-\frac{x^3}{3}+o(x^4) \\\arcsin x&=x+\frac{x^3}{6}+o(x^4) \end{align} $$ giving, as $x \to 0$, $$ \frac{\arctan x - \arcsin x}{\tan x - \sin x}= \frac{-\frac{x^3}{2}+o(x^4)}{\frac{x^3}{2}+o(x^4)}=-1+o(x) \to -1. $$
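A numerical check consistent with the expansion (the observed error shrinks roughly like $x^2$, matching the next terms of the series):

```python
import math

def ratio(x):
    return (math.atan(x) - math.asin(x)) / (math.tan(x) - math.sin(x))

for x in [0.1, 0.01, 0.001]:
    assert abs(ratio(x) + 1) < x   # ratio -> -1 as x -> 0
```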
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Index of p-subgroup divisible by p implies normalizer index divisible by p Show that if $H$ is a $p$-subgroup of $G$ with $p$ dividing $[G:H]$, then $[N_G(H):H]$ is divisible by $p$. By considering the action of $H$ on $G/H$. I've considered the $\varphi:H\to S_{G/H}$ and tried a lot of things, I just fail to see how the relevance of the normalizer will solve this problem.
$\newcommand{\Size}[1]{\left\lvert #1 \right\rvert}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Consider the action of $H$ by multiplication on the right on the set of cosets $$ G/H = \Set{ H x : x \in G}. $$ So $h \in H$ sends the coset $H x$ to the coset $H x h$. Since $H$ is a (finite) $p$-group, of order $p^{n}$, say, the orbits will be of size a divisor of $p^{n}$. Let $s_{i}$ be the number of orbits of size $p^{i}$, then $$ \Size{G : H} = \Size{G/H} = \sum_{i=0}^{n} s_{i} p^{i}. $$ Since $\Size{G : H}$ is divisible by $p$, we obtain that $s_{0}$ is divisible by $p$. Now you know that there is at least one orbit of size $1$, namely the orbit of the coset $H = H 1$. So $s_{0} > 0$, and since it is divisible by $p$, $s_{0} \ge p$. Now for all $H x$ such that $H x$ has length $1$ we have $H x H = H x$. Now show that $x \in N_{G}(H)$. Conversely, show that if $x \in N_{G}(H)$ we have that $H x$ has an orbit of length $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
System of quadratic equations with parameter I'd appreciate your help with this problem: $p - a^2 = b$ $p - b^2 = c$ $p - c^2 = d$ $p - d^2 = a$ where $a$, $b$, $c$, $d$ are real numbers and $p$ is a real parameter with $0 \le p \le 1$. Thank you a lot. I am capable of solving the problem for positive numbers, but that's it. :(
* *Subtracting two adjacent equations, $$ \left \{ \begin{array}{ccc} (a-b)(a+b) &=& c-b \\ (b-c)(b+c) &=& d-c \\ (c-d)(c+d) &=& a-d \\ (d-a)(d+a) &=& b-a \end{array} \right.$$ Note that $$a=b \iff b=c \iff c=d \iff a=d$$ Now $$a^2+a-p=0$$ $$a=b=c=d=\frac{-1\pm \sqrt{1+4p}}{2}$$ * *Subtracting alternate equations, $$ \left \{ \begin{array}{ccc} (a-c)(a+c) &=& d-b \\ (b-d)(b+d) &=& a-c \end{array} \right.$$ Note that $$a=c \iff b=d$$ $$ \left \{ \begin{array}{ccc} p-a^2 &=& b \\ p-b^2 &=& a \end{array} \right.$$ Now $$p-(p-a^2)^2=a$$ $$p^2-(2a^2+1)p+(a^4+a)=0$$ $$(a^2+a-p)(a^2-a+1-p)=0$$ Since $a^2+a-p=0$ reproduces the previous case, we only need to solve $$a^2-a+1-p=0$$ $$a=c=\frac{1\pm \sqrt{4p-3}}{2}$$ $$b=d=\frac{1\mp \sqrt{4p-3}}{2}$$ (real only when $p \ge \frac{3}{4}$). * *For pairwise distinct $a$, $b$, $c$, $d$, the values are roots of a $12$th-degree polynomial equation whose coefficients depend on $p$. P.S. $(a,b,c,d)$ is a periodic orbit, of period dividing $4$, of the iteration $u_{n+1}=p-u_{n}^2$.
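A quick numerical check of the two families above (the sample value $p = 0.9$ is an arbitrary choice that keeps both square roots real):

```python
from math import sqrt

p = 0.9
step = lambda u: p - u*u          # one step of the cycle p - u^2

# period-1 family: a = b = c = d = (-1 ± sqrt(1+4p))/2 is a fixed point
for a in ((-1 + sqrt(1 + 4*p)) / 2, (-1 - sqrt(1 + 4*p)) / 2):
    assert abs(step(a) - a) < 1e-12

# period-2 family (needs p >= 3/4): a = c and b = d swap under the map
a = (1 + sqrt(4*p - 3)) / 2
b = (1 - sqrt(4*p - 3)) / 2
assert abs(step(a) - b) < 1e-12 and abs(step(b) - a) < 1e-12
print("both solution families check out at p =", p)
```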
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
associated homogeneous linear differential equations Can someone please explain how associated homogeneous linear differential equations work with an example?
Let's say you have the linear differential equation $y'' + y = 3x$. The associated homogeneous equation is $y'' + y = 0$. The set of solutions to the homogeneous equation is $\{\alpha \cos x +\beta \sin x : \alpha, \beta \in \Bbb R\}$. One particular solution to the initial equation is $3x$ (indeed, $(3x)'' + 3x = 3x$). Thus the set of solutions to the initial equation is $\{\alpha \cos x +\beta \sin x + 3x : \alpha, \beta \in \Bbb R\}$.
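A quick numerical check that every function of this form solves the equation (the constants $1.7$ and $-0.4$ are arbitrary choices):

```python
import math

alpha, beta = 1.7, -0.4                                  # arbitrary constants
y = lambda x: alpha*math.cos(x) + beta*math.sin(x) + 3*x

h = 1e-4
ypp = lambda x: (y(x + h) - 2*y(x) + y(x - h)) / h**2    # central-difference y''

for x in (-2.0, 0.5, 3.0):
    assert abs(ypp(x) + y(x) - 3*x) < 1e-5
print("y'' + y = 3x holds for these alpha, beta")
```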
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to use the binomial theorem to calculate binomials with a negative exponent I'm having some trouble with expanding a binomial when it is a fraction. eg $(a+b)^{-n}$ where $n$ is a positive integer and $a$ and $b$ are real numbers. I've looked at several other answers on this site and around the rest of the web, but I can't get it to make sense. From what I could figure out from the Wikipedia page on the subject, $(a+b)^n$ where $n>0$ Any help will be much appreciated.
Note that $(a+b)^{-n} = \frac{1}{(a+b)^n}$. Now, apply $(a+b)^n = \sum_{i=0}^n \binom{n}{i} a^i b^{n-i}$ to calculate the denominator.
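A small numeric check of this reciprocal trick (the function name and sample values are mine):

```python
from math import comb

def neg_power(a, b, n):
    """(a+b)^(-n) via the reciprocal of the ordinary binomial expansion."""
    return 1.0 / sum(comb(n, i) * a**i * b**(n - i) for i in range(n + 1))

a, b, n = 1.5, 2.0, 3
print(neg_power(a, b, n), (a + b) ** -n)   # both ≈ 0.023323...
```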
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Prove function is < 0 For all $x$ if $x^6 + 3x^4 - 3x < 0$ then $0 < x < 1$. Prove this. (1) Find the negation (2) Prove (1) The negation is simply, $\exists x$, $x^6 + 3x^4 - 3x < 0 \wedge (x \le 0 \vee x \ge 1)$ (2) The proof is the difficult part here. We prove the contrapositive. It is easy to prove it for the condition that $x \le 0$ but it is harder for $x \ge 1$.
If $x <-1$, then $f(x)>f(-x)$ (since $f(x)-f(-x)=-6x>0$), so it suffices to check the case $x>1$. Notice that for $x>1$ we have $x^6>x$, since $x\cdot x\cdots x>x\cdot 1\cdots 1=x$. Can you finish with the same argument for $3x^4$ versus $3x$? What can you conclude about the values of $f(x)$ outside of the unit interval?
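A brute-force scan agrees with the claim (an illustration, not a proof): every sampled point where $f$ is negative lies strictly between $0$ and $1$.

```python
f = lambda x: x**6 + 3*x**4 - 3*x

# sample a grid on [-5, 5]; collect the points where f is negative
xs = [i / 1000 - 5 for i in range(10001)]
neg = [x for x in xs if f(x) < 0]
print(min(neg), max(neg))   # both inside (0, 1)
```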
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Rational function field of product affine varieties Let $X, Y$ be affine varieties, we know that the coordinate ring of the product variety $X\times Y$ satisfies $k[X\times Y]\cong k[X]\otimes_k k[Y]$. My question is is it true that for rational function field, we also have $k(X\times Y)\cong k(X)\otimes_k k(Y)$? If not, is there a way to relate $k(X\times Y)$ with $k(X)$, and $k(Y)$ ?
It is not true in general that $k(X\times Y)\cong k(X)\otimes_k k(Y)$. For example, let $k=\mathbb{Q}$ and $k[X]=k[Y]=\mathbb{Q}(i)$; then $k(X)=k(Y) = \mathbb{Q}(i)$, but $\mathbb{Q}(i) \otimes_{\mathbb{Q}} \mathbb{Q}(i)$ is not even a field. It also fails for the product of two affine lines: $k(x) \otimes_k k(y) \subsetneq k(x,y)$, since e.g. $\frac{1}{x+y}$ does not lie in the image of the tensor product. There is still a relation, though: for irreducible varieties over an algebraically closed field, $k(X)\otimes_k k(Y)$ is an integral domain and $k(X\times Y)$ is its field of fractions.
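The zero divisor in $\mathbb{Q}(i)\otimes_{\mathbb{Q}}\mathbb{Q}(i)$ can be exhibited explicitly: $(i\otimes 1-1\otimes i)(i\otimes 1+1\otimes i)=i^2\otimes 1-1\otimes i^2=0$. A small check of this multiplication over the basis $1\otimes 1,\ i\otimes 1,\ 1\otimes i,\ i\otimes i$ (the encoding is mine):

```python
# basis element (a, b) stands for i^a ⊗ i^b with a, b in {0, 1}
def mult(u, v):
    out = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for (a1, b1), c1 in u.items():
        for (a2, b2), c2 in v.items():
            # i^2 = -1 in each tensor factor
            sign = (-1) ** ((a1 + a2) // 2) * (-1) ** ((b1 + b2) // 2)
            out[(a1 + a2) % 2, (b1 + b2) % 2] += sign * c1 * c2
    return out

z = {(0, 0): 0, (1, 0): 1, (0, 1): -1, (1, 1): 0}   # i⊗1 - 1⊗i, nonzero
w = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 0}    # i⊗1 + 1⊗i, nonzero
print(mult(z, w))   # the zero element — so the tensor product has zero divisors
```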
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find the equations of the common tangents to the parabola $y^2=15x$ and the circle $x^2+y^2=16$. The text says: Find the equations of the common tangents to the parabola $y^2=15x$ and the circle $x^2+y^2=16$. I tried the approach of the discriminant and also one using the distance from a line but both didn't work for me. A previous exercise asked me to demonstrate that the line $y=mx+\frac{15}{4m}$ is a tangent to the parabola for every value of $m$. A suggestion in the text says I can use this result also to find the common tangent.
Let a common tangent touch the circle at $\displaystyle (a,b)$ and the parabola at $\displaystyle (\alpha, \beta)$, and let it have the equation $\displaystyle y = mx + c$. Now proceed systematically and list what you know, writing equations along the way. 1) $\displaystyle (a,b)$ satisfies the equation of the tangent. Hence $\displaystyle b = ma + c$ 2) $\displaystyle (a,b)$ satisfies the equation of the circle. Hence $\displaystyle a^2 + b^2 = 16$ 3) $\displaystyle (\alpha,\beta)$ satisfies the equation of the tangent. Hence $\displaystyle \beta = m\alpha + c$ 4) $\displaystyle (\alpha,\beta)$ satisfies the equation of the parabola. Hence $\displaystyle \beta^2 = 15\alpha$ 5) The slope of the tangent to the circle at $\displaystyle (a,b)$ is equal to $\displaystyle m$. By implicit differentiation, you know that $\displaystyle 2x + 2yy' = 0$ gives the slope of the tangent to the circle at an arbitrary point, so you get $\displaystyle m = -\frac ab$ 6) The slope of the tangent to the parabola at $\displaystyle (\alpha, \beta)$ is equal to $\displaystyle m$. By implicit differentiation, you know that $\displaystyle 2yy' = 15$ gives the slope of the tangent to the parabola at an arbitrary point, so you get $\displaystyle m = \frac {15}{2\beta}$ You now have a system of $\displaystyle 6$ equations in $\displaystyle 6$ unknowns. Proceed to solve them, remembering that you're trying to get values for $m$ and $c$ and eliminating the other variables systematically. You should finally get $\displaystyle m = \pm \frac 34$ and $\displaystyle c = \pm 5$, giving you the equations of the common tangents as $\displaystyle y = \pm (\frac 34x + 5)$ Here is a graphical representation.
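A quick check of the final answer: each line should sit at distance $4$ (the radius) from the circle's centre, and substituting $x=\frac{y^2}{15}$ into $y=mx+c$ gives $\frac{m}{15}y^2-y+c=0$, whose discriminant $1-\frac{4mc}{15}$ must vanish for tangency to the parabola.

```python
from math import sqrt

results = []
for m, c in [(0.75, 5.0), (-0.75, -5.0)]:
    dist = abs(c) / sqrt(m*m + 1)   # distance from the centre (0,0) to y = mx + c
    disc = 1 - 4*m*c/15             # discriminant of (m/15) y^2 - y + c = 0
    results.append((dist, disc))
print(results)                      # distance 4.0 and discriminant 0.0 for both lines
```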
{ "language": "en", "url": "https://math.stackexchange.com/questions/1971986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Equation in Complex plane I know that $\cos(\theta)=\frac{\exp(i\theta)+\exp(-i\theta)}{2}$ in which $\theta=arg(z)$ for some complex number $z$. Can I assume $\theta$ in above folmula as a comlex number? I mean, $$\cos(z)=\frac{\exp(iz)+\exp(-iz)}{2}$$ You know, I want to do $\cos(z)=2i$. Someone told me that I can, but $\theta$ is an angle and $z$ itself is a number and I can not understand this point! Thanks
Yes, the formula $$ \cos(z)=\frac{\exp(iz)+\exp(-iz)}{2} $$ holds for all complex numbers $z$, including the real values $z = \theta$ that you first saw. This follows from a uniqueness theorem for analytic continuations. The so-called identity theorem states that if two holomorphic functions $f$ and $g$ agree on a set containing an accumulation point, then they must be identically equal. In your case, we take $f$ to be the analytic continuation of (the real-valued function) $\cos(\theta)$, and $g(z) = \frac{\exp(iz)+\exp(-iz)}{2}$. Your statement that $$ \cos(\theta)=\frac{\exp(i\theta)+\exp(-i\theta)}{2} $$ for $\theta \in \mathbb{R}$ means the functions $f$ and $g$ agree on $\mathbb{R}$. Since $\mathbb{R}$ contains an accumulation point (indeed, every point of $\mathbb{R}$ is an accumulation point), then the theorem implies that $f$ and $g$ must be identically equal. Since the functions are defined on all of $\mathbb{C}$, this means that $$ \cos(z)=\frac{\exp(iz)+\exp(-iz)}{2} $$ for all $z \in \mathbb{C}$.
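This is easy to check numerically with Python's `cmath` (the sample points are arbitrary), and it shows equations like $\cos(z)=2i$ really do become solvable:

```python
import cmath

# the identity holds off the real axis too
for z in (0.3 + 1.2j, -2 + 0.5j, 3.1j):
    assert abs(cmath.cos(z) - (cmath.exp(1j*z) + cmath.exp(-1j*z)) / 2) < 1e-12

z0 = cmath.acos(2j)          # one solution of cos(z) = 2i
print(z0, cmath.cos(z0))     # cos(z0) ≈ 2i
```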
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is $(\mathbb{R},*)$, where $a*b=ab+a+b$, a group? I have a problem with this question is $(\mathbb{R},*)$, when $a*b=ab+a+b$ a group? If not, can you skip any element $a \in\mathbb{R} $ in that way, that $(\mathbb{R}$\{a}$,*)$ is a group? I can prove, that $(\mathbb{R},*)$ is a binary operation, associative and it's neutral element is $0$. But I couldn't find an inverse element. And I have no idea which element I can skip to get the other group. I tried to skip $a=0$ but it wasn't a good tip. Thank you for your time.
You can start by looking at which elements lack an inverse. Solving $a*a^{-1}=0$, i.e. $aa^{-1}+a+a^{-1}=0$, gives $a^{-1}=\frac{-a}{a+1}$, which is undefined exactly when $a=-1$. So you should remove $-1$ from $\mathbb{R}$. You then have to prove that $(\mathbb{R}\setminus\{-1\},*)$ is a group: it is closed under $*$ (verify that if $a,b\neq-1$ then $a*b\neq-1$; this is easiest from the identity $a*b+1=(a+1)(b+1)$), every element has an inverse (we got rid of $-1$), the identity element is $0$, and $*$ is associative ($(a*b)*c=a*(b*c)=abc+ab+bc+ac+a+b+c$). In fact $x\mapsto x+1$ is an isomorphism from $(\mathbb{R}\setminus\{-1\},*)$ onto $(\mathbb{R}\setminus\{0\},\cdot)$. Hope I helped.
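A numerical spot check of the group axioms (random samples; a small neighbourhood of $-1$ is excluded only to keep the floating-point inverses tame):

```python
import random

op = lambda a, b: a*b + a + b
inv = lambda a: -a / (a + 1)        # the inverse found above; undefined at a = -1

random.seed(1)
samples = [x for x in (random.uniform(-5, 5) for _ in range(300)) if abs(x + 1) > 1e-3]

for a, b, c in zip(samples[0::3], samples[1::3], samples[2::3]):
    assert abs(op(op(a, b), c) - op(a, op(b, c))) < 1e-9   # associativity
    assert op(a, 0) == a                                   # 0 is the identity
    assert abs(op(a, inv(a))) < 1e-9                       # inverses work
    assert op(a, b) != -1                                  # closed in R \ {-1}
print("group axioms hold on the samples")
```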
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can I evaluate this integral: $\int_{1}^{\infty}\frac{y\cosh(yx)}{\sinh(y\pi)}dy$? I'm interested to know how to evaluate this integral: $$\int_{1}^{\infty}\frac{y\cosh(yx)}{\sinh(y\pi)}dy.$$ Wolfram Alpha gives the output $$\frac1{(\pi-x)^2}+\text{Li}(-2e^{-2\pi})-\frac{\log(1-e^{-2\pi})}{\pi}-\frac12+O(\pi-x).$$ Note: I have tried putting $x=y$ to see whether the resulting integral is easy to evaluate, using the substitution $t=\tan\frac{x}{2}$, but I got a complicated form to which I can't apply standard trigonometric transformations. Thank you for any help.
I presume you are actually interested in this integral for some application, and then you might want a convenient expression --- Mathematica does evaluate it in closed form [*], as indicated in the comments, but this involves special functions and might not be of much practical use. Here is what I propose to do. Consider the integral with a variable lower bound, $$I_u(x)=\int_{u}^{\infty}\frac{y\cosh(yx)}{\sinh(y\pi)}dy,\;\;u\geq 0,\;\;|x|<\pi.$$ The function for $u=0$ has a simple form, $$I_0(x)=\frac{1}{2+2\cos x},$$ which is already quite close to the desired $I_1(x)$. We can improve by adding an offset of $-1/4$, as is evident from the plot: blue = $I_1$, orange = $I_0$, green = $I_0-\frac{1}{4}=\frac{1}{4}\tan^2(x/2)$ [*] For the record, after some massaging this is the Mathematica output for $I_1$, with $\Phi$ the Lerch transcendent: $$I_1(x)= \frac{1}{2\cos^2(x/2)}\\ \quad+\sum_{+x,-x}\left[\frac{e^{\pi+x}}{\pi+x} \, _2F_1\left(1,\tfrac{x }{2 \pi }+\tfrac{1}{2};\tfrac{x}{2\pi} +\tfrac{3}{2};e^{2 \pi }\right)-\frac{e^{\pi+x}}{4 \pi ^2 } \Phi \left(e^{2 \pi },2,\tfrac{x }{2 \pi }+\tfrac{1}{2}\right)\right]$$ (the sum over $+x$ and $-x$ ensures that the result is an even function of $x$, as it should be)
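To corroborate this numerically without special functions, a plain Simpson rule suffices (the cutoff $40$ and grid size are arbitrary choices; the integrand decays like $ye^{-(\pi-|x|)y}$, and its value at $y=0$ is the limit $1/\pi$):

```python
import math

def simpson(f, a, b, n=4000):                    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, n))
    return s * h / 3

x = 1.0
f = lambda y: y * math.cosh(x*y) / math.sinh(math.pi*y) if y > 0 else 1/math.pi

I0 = simpson(f, 0.0, 40.0)                       # integral from 0 (value 1/pi at y = 0)
I1 = simpson(f, 1.0, 40.0)                       # the integral in the question

print(I0, 1 / (2 + 2*math.cos(x)))               # closed form for I_0
print(I1, I0 - 0.25)                             # the -1/4 offset approximation
```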
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Calculate limit involving exponents Calculate: $$\lim_{x \rightarrow 0} \frac{(1+2x)^{\frac{1}{x}} - (1+x)^{\frac{2}{x}}}{x}$$ I've tried to calculate the limit of each term of the subtraction: $$\lim_{x \rightarrow 0} \frac{(1+2x)^{\frac{1}{x}}}{x}$$ $$\lim_{x \rightarrow 0} \frac{(1+x)^{\frac{2}{x}}}{x} $$ Each of these two limits gave me $\lim_{x \rightarrow 0} e^{2 - \ln x}$, so the initial limit must be $0$. However, the correct result is $-e^2$ and I can't get it. Please explain me what I did wrong and how to get the correct result. Thank you!
Using Taylor expansion: $$(1+2x)^{1/x}= e^2-2e^2x+O(x^2)$$ $$(1+x)^{2/x}= e^2-e^2x+O(x^2)$$ so $$\lim_{x \rightarrow 0} \frac{(1+2x)^{\frac{1}{x}} - (1+x)^{\frac{2}{x}}}{x} = \lim _{x\to 0}\frac{\left(e^2-2e^2x+O\left(x^2\right)\right)-\left(e^2-e^2x+O\left(x^2\right)\right)}{x}$$ $$= \lim _{x\to 0}\frac{-e^2x+O\left(x^2\right)}{x} = \color{red}{-e^2}$$
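The Taylor computation can be corroborated numerically (a rough check, not a proof):

```python
import math

f = lambda x: ((1 + 2*x)**(1/x) - (1 + x)**(2/x)) / x

for x in (1e-2, 1e-3, 1e-4):
    print(x, f(x))          # approaches -e^2 ≈ -7.389056...
print(-math.e**2)
```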
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Definition of a shadow in space, and how to derive a shadow for a given shape $\newcommand{\Reals}{\mathbf{R}}$I am struggling with the concept of a shadow in $\Reals^3$. My professor provided the class with the following definition: Given $S \subset \Reals^3$, the Shadow of $S$ in the $XY$ plane is equal to $$\{(x,y,0) | \text{$Z$ ray determined by $(x,y,0)$ hits the solid.}\}$$ This was what was written on the blackboard in my Calculus class. What exactly does this mean, and is there a better way to define a shadow? How does this translate to deriving the shadow for any shape in $\Reals^3$? Does deriving a shadow work differently when considering a cylindrical or spherical coordinate system, and if so, how?
$\newcommand{\Reals}{\mathbf{R}}$Imagine a light "at infinity" on the $z$-axis: Its rays travel along lines parallel to the $z$-axis. If $S \subset \Reals^{3}$, and if $(x_0, y_0)$ is a point of $\Reals^{2}$, then the ray of light $\{(x_0, y_0, t): t > 0\}$ touches $S$ if and only if there exists a $z > 0$ such that $(x_0, y_0, z) \in S$, if and only if $(x_0, y_0, 0)$ lies in the shadow of $S$.
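As a concrete illustration of this definition (the solid here — a unit ball centred at $(0,0,2)$ — is an assumed example, and the ray is sampled on a finite grid):

```python
def in_solid(x, y, z):
    # assumed example solid: the unit ball centred at (0, 0, 2)
    return x*x + y*y + (z - 2)**2 <= 1

def in_shadow(x, y):
    # sample the Z ray {(x, y, t) : t > 0} determined by (x, y, 0)
    return any(in_solid(x, y, t / 200) for t in range(1, 2001))

print(in_shadow(0.5, 0.0), in_shadow(1.5, 0.0))   # True False — the shadow is the unit disk
```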
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does $6^{\frac{5}{3}}$ simplify to $6\sqrt[3]{36}$? I was recently given a problem along the lines of the below: Simplify $6^{\frac{5}{3}}$ to an expression in the format $a\sqrt[b]{c}$. The answer, $6\sqrt[3]{36}$, was then given to me before I could figure out the problem myself. I'm wondering what the steps to perform this simplification are, and how they work.
$$6^{5/3} = 6^{1 + 2/3} = 6 \cdot 6^{2/3} = 6 \sqrt[3]{6^2}.$$
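A one-line numeric confirmation of the simplification:

```python
lhs = 6 ** (5/3)
rhs = 6 * 36 ** (1/3)
print(lhs, rhs)        # both ≈ 19.81
```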
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to solve this tetration equation $\;^n 2 = \;^2 n $? How would one find all real solutions to the following equation: $\qquad$ $n^n = 2^{2^{2^{2^{\dots^2}}}} $(where the number of $2$s is equal to $n$) generalizing to $n$ being a real value. In tetration-notation this is $\qquad $ find a solution to $\displaystyle \;^2 n = \; ^n 2$ for real $n \ne 2$. I know one solution is $n = 2$, but I wonder if any other solutions exist. Edit: Could there be any negative number solutions to this equation?
I wanted to add some graphs to Gottfried's solution. First, definitions: $\text{sexp}_2(z)= \;^z 2$, which is extended to the complex plane by Kneser's solution. I wrote a program to calculate the slog, which is the inverse of sexp and has some nice uniqueness properties. The fatou.gp program works for a wide range of real and complex sexp bases, is written in pari-gp, and is available at http://math.eretrandre.org/. Instead of graphing $\text{sexp}_2(x)\;$and $\;x^x$, I will take the $\log_2$ of both equations, which works when both are positive. So I am graphing $$\text{sexp}_2(x-1)\;\;\text{vs}\;\;\log_2(x^x)=\frac{x\ln(x)}{\ln(2)}$$ Here is a graph for sexp base 2 which shows a solution at x=0, x=2, and at x=3.4141760984020147407016. What about other bases? Here is a graph for sexp base=2.1150455841, which has a parabolic crossing near 2.5360 so there are only two solutions. For larger bases, there is only one solution. Here is sexp base e which only has a solution at x=0.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1972958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Tangents to a parabola that go through the same point The question is: The two lines tangent to f (x) = $x^2$ + 4x + 2 through the point (2, -12)have equations y = ax + b and y = cx + d, respectively. What is the value of a + b + c + d? What I did to solve it: f '(x) = 2x + 4. The point is (2 -12) so I plugged in two to get f '(2) = 8. I used this in point-slope form to get y + 12 = 8(x-2) => y = 8x -28. Since it is two equations, I added them up and multiplied by 2 and got -40, which is the answer. I feel like this is a fluke and it doesn't make sense to me why this worked. Can someone explain either why it works or a way that will always work? Thanks in advance!
Look at the figure: The point $A=(2,-12)$ is not a point of the parabola, so the slopes of the tangents to the parabola from this point cannot simply be derived starting from the derivative of the parabola at $x=2$. If $P=(X,X^2+4X+2)$ is a point of tangency (the points $C$ and $D$ in the figure), then the slope $m=(y_P-y_A)/(x_P-x_A)$ of the line $PA$ must be the same as the slope of the tangent to the parabola at $P$, i.e.: $m=f'(x_P)$ . So, for our $A$ and $P$ we have: $$ \frac{X^2+4X+2+12}{X-2}=2X+4 $$ Solving this equation we find the coordinates $x_C$, $x_D$ of the two points of tangency, from which we can find the equations of the two tangent lines. Another simple solution, without using derivatives, is to note that the system: $$ \begin {cases} y=x^2+4x+2\\ y+12=m(x-2) \end{cases} $$ represents the intersection between the parabola and the lines that pass through $A$, and a line is tangent to the parabola if the system has only one (double) solution. This means that the discriminant $\Delta(m)$ of the second degree equation that solves the system is such that: $\Delta(m)=0$ . Since $\Delta(m)$ is a second degree polynomial in the parameter $m$, this is a second degree equation in $m$ that gives the two values of $m$, the slopes of the two tangent lines.
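Carrying the first method to the end (a numeric sketch; the simplification $\frac{X^2+4X+14}{X-2}=2X+4 \iff X^2-4X-22=0$ is elementary):

```python
from math import sqrt

# tangency points: X^2 - 4X - 22 = 0, roots 2 ± sqrt(26)
total = 0.0
for X in (2 + sqrt(26), 2 - sqrt(26)):
    m = 2*X + 4                  # slope of the tangent at the tangency point
    b = -12 - 2*m                # intercept, from y + 12 = m(x - 2)
    # the line really passes through the tangency point (X, X^2 + 4X + 2)
    assert abs((X**2 + 4*X + 2) - (m*X + b)) < 1e-9
    total += m + b

print(total)                     # a + b + c + d = -40
```

So the asker's final number, $-40$, is confirmed, even though the shortcut of plugging $x=2$ into $f'$ does not produce the individual tangent lines.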
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the floor of a number preserve order? For example, say you got x < y for some x and y. Then $\lfloor x \rfloor \geq \lfloor y \rfloor $ ? Is it always the case? The reason why I am confused on this point is I was reading a solution posted on chegg which doesn't seem convincing. The question states to prove $\lfloor x + y \rfloor \geq \lfloor x \rfloor + \lfloor y \rfloor$ for some real $x,y$. And their proof states that $x \geq \lfloor x \rfloor$ and $y \geq \lfloor y \rfloor$ then $x + y \geq \lfloor x \rfloor + \lfloor y \rfloor$. I understand this part, but then they said since the greatest integer function (floor) preserve order, then $\lfloor x + y \rfloor \leq \lfloor \lfloor x \rfloor \rfloor + \lfloor \lfloor y \rfloor \rfloor = \lfloor x \rfloor + \lfloor y \rfloor$ which is not true. Assume it is true then since floor preserve order then the negation of $\geq$ is $<$ not $\leq$ if you guys can explain this it would be great! thank you
If $x<y$, then $\lfloor x\rfloor\leq\lfloor y\rfloor$. (Your inequality is flipped.)
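A quick empirical check of both facts — the weak order preservation and the inequality from the question (random sampling, not a proof):

```python
import math
import random

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    if x < y:
        assert math.floor(x) <= math.floor(y)                  # order is (weakly) preserved
    assert math.floor(x + y) >= math.floor(x) + math.floor(y)  # the inequality in question
print("no counterexample in 10000 trials")
```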
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve this linear hyperbolic PDE analytically? Is it possible to solve this equation analytically? $$ u_t = k u_{xx} + \frac{k}{c} u_{xt} $$ I attempted to solve it for a finite domain and homogeneous B.C with separation of variables but it got very ugly, with complex eigenvalues. I'm wondering if the equation could be solved with the method of characteristics? Or if there is a coordinate transformation which converts this to an easier PDE? I am interested in solving the equation for a sine function initial condition. Either infinite or finite domain would be okay, whichever is easier.
I realized that the change of variables $x_* = x$, $t_* = t-\frac{1}{2c}x$ transforms this equation into the damped wave (telegraph) equation $u_{t_*}+\frac{k}{4c^2}u_{t_*t_*}=k\,u_{x_*x_*}$, which has well-known solutions that can be obtained through separation of variables.
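A finite-difference sanity check of the chain rule behind this (the constants $k$, $c$ and the smooth test function $v$ are arbitrary choices): with $t_*=t-\frac{x}{2c}$ the mixed term cancels, leaving the telegraph form $u_{t_*}+\frac{k}{4c^2}u_{t_*t_*}=k\,u_{x_*x_*}$.

```python
import math

k, c = 1.3, 0.7          # arbitrary positive constants
h = 1e-3                 # finite-difference step

d_t  = lambda f, x, t: (f(x, t + h) - f(x, t - h)) / (2*h)
d_tt = lambda f, x, t: (f(x, t + h) - 2*f(x, t) + f(x, t - h)) / h**2
d_xx = lambda f, x, t: (f(x + h, t) - 2*f(x, t) + f(x - h, t)) / h**2
d_xt = lambda f, x, t: (f(x + h, t + h) - f(x + h, t - h)
                        - f(x - h, t + h) + f(x - h, t - h)) / (4*h**2)

def v(xs, ts):           # arbitrary smooth function of the starred variables
    return math.sin(1.1*xs) * math.exp(-0.3*ts) + 0.05 * xs*xs * ts

def u(x, t):             # the same function pulled back via t_* = t - x/(2c)
    return v(x, t - x/(2*c))

x, t = 0.4, 0.9
tstar = t - x/(2*c)

# original operator applied to u ...
lhs = d_t(u, x, t) - k*d_xx(u, x, t) - (k/c)*d_xt(u, x, t)
# ... equals the telegraph operator applied to v at the mapped point
rhs = d_t(v, x, tstar) + (k/(4*c*c))*d_tt(v, x, tstar) - k*d_xx(v, x, tstar)
print(lhs, rhs)          # agree up to O(h^2) finite-difference error
```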
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $a \ge c$ for all $c < b$, then $a \geq b$ Let $a$ and $b$ be elements in an ordered field, prove that if $a \ge c$ for every $c$ such that $c \lt b$, then $a\ge b$. My proof idea below: Let $S = \{x | x<b\}$. Then $a$ is an upper bound for $S$. If I can show that $b$ is the least upper bound for $S$, then it follows from the definition of least upper bound that $a\ge b$. However, I have a hard time proving the claim that $b$ is the least upper bound for $S$. Am I on the right direction? Can anyone help? Thank you.
Let $S=\{c:c<b\}.$ By hypothesis, $\forall c\in S\;(c\le a).$ Suppose $a\in S,$ i.e. $a<b.$ In an ordered field the midpoint $m=\frac{a+b}{2}$ satisfies $a<m<b$ (because $m-a=b-m=\frac{b-a}{2}>0$), so $m\in S$ while $m>a,$ contradicting the hypothesis. Hence $a\notin S.$ The whole field is equal to $S\cup \{b\} \cup \{d:d>b\}$ because "$<$" satisfies trichotomy, so $a\in \{b\}\cup \{d:d>b\},$ i.e. $a\ge b.$ QED. Note that density of the order is essential here: in the linearly ordered set $\mathbb{Z}$ with $a=0$ and $b=1,$ every $c<b$ satisfies $c\le a,$ yet $a<b.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }