Number of possible combinations of the Enigma machine plugboard This is a question about basic combinatorics. I recently watched again a YouTube video about the Enigma cipher machine (on the Numberphile channel, https://www.youtube.com/watch?v=G2_Q9FoD-oQ), where the Enigma machine is briefly analyzed. In particular, they describe the number of different machine configurations, where the rotors and plugboard are set. The plugboard is basically an electric board where two letters are interchanged (e.g. letter 'q' is substituted by letter 'a' and 'a' by 'q'), having ten connections. That makes 10 pairs to choose among the 26 possible letters. Here I have a problem understanding how the number of possibilities is obtained. In the video (around 9:00), the number of configurations is obtained as $\dfrac{26!}{6! 10! 2^{10}}$, and they explain how to get there, but I find it difficult to understand the $2^{10}$ factor in the denominator. To me, the number of configurations looks like a selection of 10 pairs among all the possible pairs, i.e., $\text{possible pairs} = \binom{26}{2} = \dfrac{26!}{2! (26-2)!} = 325$, and then choose 10 of them, $\text{possible configurations} = \binom{325}{10} = \dfrac{325!}{10! (325-10)!}$, which is a completely wrong answer, but I don't understand where I got lost. TL;DR: how many sets of 10 disjoint pairs are possible in a set of 26 letters? Please help me understand this, and if you can point me to what to read to get me started in this basic combinatorial analysis, I'll be forever grateful. Many thanks.
Here is one way to do it: arrange the alphabet in such a way that the pairs come first and the unpaired letters last. For example $$\hbox{BGSI . . . UAWEXPTL}$$ means that B and G are paired, S and I are paired, . . . U and A are paired, while W, E, X, P, T and L are unpaired. There are $26!$ ways to do this. However . . . * It does not matter in which order the letters in a pair are listed; for example, $$\hbox{GBSI . . . AUWEXPTL}$$ would be the same plugboard setting as above. So we must divide by $2^{10}$ to compensate for overcounting. * It does not matter in which order the pairs themselves are listed; for example, $$\hbox{SIAU . . . BGWEXPTL}$$ would be the same as above. So divide by $10!\,$. * It does not matter in which order the last six letters are listed; for example, $$\hbox{BGSI . . . UAXEPWLT}$$ would be the same as above. So divide by $6!\,$. This gives the answer you have quoted.
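To see that the division-by-symmetry count above is right, here is a quick numerical cross-check (a plain-Python sketch; the second formula picks the disjoint pairs one at a time and then forgets their order):

```python
from math import comb, factorial

# 26!/(6! 10! 2^10): all arrangements, divided by the three overcounts
settings = factorial(26) // (factorial(6) * factorial(10) * 2**10)

# cross-check: choose 10 disjoint pairs one after another, then forget their order
check = 1
for k in range(10):
    check *= comb(26 - 2 * k, 2)   # choose the (k+1)-th pair from the remaining letters
check //= factorial(10)            # the 10 pairs are unordered

assert settings == check == 150738274937250
```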
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Text for study of subgroup lattices of finite abelian groups. I want to study the subgroup lattice of a finite abelian group. I have found a text on the subject: Subgroup Lattices of Groups by Roland Schmidt, de Gruyter 1994. This book is about subgroups of any group, not just finite abelian groups. Is this text a good introduction to the subject? Are there other more accessible texts or lecture notes on the subject? Is it advisable to study a text on lattice theory first, or should I pick up the poset and lattice basics from the proposed text? Any info regarding introductory texts / notes on the subgroup lattice of a finite abelian group might help.
Subgroup lattices of finite abelian groups have been studied from various points of view, see the following reference: Vogt, Frank : Subgroup lattices of finite Abelian groups: Structure and cardinality. In: Lattice theory and its applications. Hrsg.: K.A. Baker, R. Wille. S. 241-259. Heldermann , Berlin . [Buchkapitel], (1995)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simple solution to system of three equations I've been given the question; $$xy = \frac19$$ $$x(y+1) = \frac79$$ $$y(x+1) = \frac5{18}$$ What is the value of $(x+1)(y+1)$? Of course, you could solve for $x$ and $y$, then substitute in the values. However, my teacher says there is a quick solution that only requires $2$ lines to solve. How can I solve $(x+1)(y+1)$ without finding $x$ and $y$, given the values above?
Multiply the second by third then substitute the first.
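Spelling the hint out: multiplying the second and third equations gives $xy\,(x+1)(y+1)=\frac79\cdot\frac5{18}$, and dividing by $xy=\frac19$ yields $(x+1)(y+1)=\frac{35}{18}$. A sketch in exact arithmetic (recovering $x$ and $y$ only as a sanity check):

```python
from fractions import Fraction

xy = Fraction(1, 9)    # x y
a  = Fraction(7, 9)    # x (y + 1)
b  = Fraction(5, 18)   # y (x + 1)

# a * b = xy * (x+1)(y+1), so:
ans = a * b / xy
assert ans == Fraction(35, 18)

# sanity check by actually solving: x(y+1) = xy + x, so x = a - xy; likewise y = b - xy
x, y = a - xy, b - xy
assert (x * y, x * (y + 1), y * (x + 1)) == (xy, a, b)
assert (x + 1) * (y + 1) == ans
```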
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Showing that a function is in $L^1$ I need to prove the following statement or find a counter-example: Let $u\in L^1\cap C^2$ with $u''\in L^1$. Then $u'\in L^1$. Unfortunately, I have no idea how to prove or disprove it, since the $|\bullet|$ in the definition of $L^1$ is giving me huge problems. I found counter-examples if either $u\notin L^1$ or $u''\notin L^1$, but none of them could be generalized to an example that satisfies all the conditions.
It is useful to recall the following lemma: If $f$ is a twice differentiable function over $I=[a,b]$ and $$M_0=\sup_{x\in I}|f(x)|,\quad M_1=\sup_{x\in I}|f'(x)|,\quad M_2=\sup_{x\in I} |f''(x)|,$$ then $M_1^2\leq 4M_0 M_2$. It is a well-known exercise from baby Rudin (Principles of Mathematical Analysis, Chapter 5 exercises). By the Cauchy-Schwarz inequality, it gives: $$\begin{eqnarray*} \int_{-M}^{M}|u'(t)|\,dt &\leq& \sum_{k=0}^{N-1}\sup_{t\in I_k=\left[-M+k\frac{2M}{N},-M+(k+1)\frac{2M}{N}\right]}|u'(t)|\cdot\mu(I_k)\\&\leq& 2\sum_{k=0}^{N-1}\sqrt{\mu(I_k)\sup_{t\in I_k}|u(t)|\cdot \sup_{t\in I_k}|u''(t)|\mu(I_k)}\\&\leq&2\sqrt{\sum_{k=0}^{N-1}\sup_{t\in I_k}|u(t)|\mu(I_k)\cdot \sum_{k=0}^{N-1}\sup_{t\in I_k}|u''(t)|\mu(I_k)},\end{eqnarray*} $$ so, by letting $N\to +\infty$, we have that the $L^1$-norm of $u'(t)$ on $[-M,M]$ is bounded by twice the geometric mean of the $L^1$-norms of $u(t)$ and $u''(t)$ on the same interval. This proves the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
If $AB = 0$, prove that the columns of matrix $B$ are vectors in the kernel of $A$ Let $A,B$ be $n\times n$ matrices. If $AB=0$, prove that the columns of matrix $B$ are vectors in the kernel of $Ax=0$. I'm not sure how to approach this. I know that if $B = 0$ and $A$ isn't, then $Ax=0$ is when $x=0=B$. But what if $A=0$? Seems like in this case B doesn't have to be a part of the kernel. Or perhaps I'm just missing something?
It follows from the very definition of matrix multiplication: to obtain column $j$ of the product $AB$, you multiply each row of $A$ with column $j$ of $B$; in other words, column $j$ of $AB$ is $A$ times column $j$ of $B$. Since $AB=0$, each of these products is the zero vector, which says exactly that every column of $B$ lies in the kernel of $A$, i.e. solves $Ax=0$. (Note that this holds even when $A=0$: then the kernel is all of $\mathbb{R}^n$, so the columns of $B$ trivially belong to it.)
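A small numeric illustration of this (the matrices here are made up for the example): column $j$ of $AB$ equals $A$ times column $j$ of $B$, so $AB=0$ forces every column of $B$ into the kernel of $A$.

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, -1], [2, -2]]
B = [[1, 3], [1, 3]]          # both columns of B are multiples of (1, 1)

assert matmul(A, B) == [[0, 0], [0, 0]]   # AB = 0

# column j of B, pushed through A, is column j of AB -- the zero vector
for j in range(2):
    col = [B[i][j] for i in range(2)]
    assert [sum(A[i][k] * col[k] for k in range(2)) for i in range(2)] == [0, 0]
```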
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding a differentiable function with a particular property If $f:\mathbb{R}^2 \to \mathbb{R}$ is differentiable and satisfies $$2 \frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}=0,$$ show that there exists a differentiable function $\widetilde{f}:\mathbb{R} \to \mathbb{R}$ such that $f(x,y)=\widetilde{f}(x-2y)$ for every $(x,y) \in \mathbb{R}^2$. A hint to the problem says to show that for suitable $a,b,c,d \in \mathbb{R}$, the function $F(u,v)=f(au+bv,cu+dv)$ is independent of $u$. By the chain rule, we have that $\frac{\partial F}{\partial u}=a\frac{\partial f}{\partial x}+c \frac{\partial f}{\partial y}$, so by the hypotheses, $\frac{\partial F}{\partial u}=(a-2c)\frac{\partial f}{\partial x}$. Hence, $F$ as defined is independent of $u$ if $a=2c$. My question is, how can I use this information to obtain the function $\widetilde{f}$ with the required property?
Hint: As you observed, we may write $f(2u + v, u + v) = g(v)$ for some differentiable function $g$ (because it is independent of $u$). Now if you find $u,v$ such that $2u + v = x$ and $u + v = y$, then you can write $f(x,y) = g(v)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find a transcendental number where no two adjacent decimal digits are equal? By using WolframAlpha, I couldn't find any transcendental number without equal adjacent digits among the numbers $\tan(n)$, $\sin(n)$, $\cos(n)$, $\sec(n)$, $\cot(n)$, $\csc(n)$, $e^n$, and $ \log(n)$, where $n$ is an integer number. How to find a transcendental number where no two adjacent decimal digits are equal?
Start with Liouville's constant $0.11000100000000000000000100 \dots$ which is known to be transcendental and add $\frac{2}{99} = 0.02020202 \dots$. The resulting number is transcendental (because it differs from Liouville's constant by a rational) and has no identical adjacent digits.
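This is easy to verify digit by digit: Liouville's constant contributes a $1$ exactly at the positions $k!$, the constant $2/99$ contributes a $2$ at every even position, and no carries occur since every digit sum is at most $3$. A sketch checking the first $200$ decimal digits:

```python
from math import factorial

N = 200
liouville_ones = set()
k = 1
while factorial(k) <= N:
    liouville_ones.add(factorial(k))
    k += 1

# digit n of the sum: (1 if n is a factorial) + (2 if n is even); sums <= 3, no carries
digits = [(1 if n in liouville_ones else 0) + (2 if n % 2 == 0 else 0)
          for n in range(1, N + 1)]

assert all(a != b for a, b in zip(digits, digits[1:]))  # no equal adjacent digits
```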
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Innocent looking open problems in real analysis Are there any apparently easy problems or conjectures in basic real analysis (that is, calculus) that are still open? By apparently easy, I mean: so much so, that, if it was for the statement alone, they could be part of a calculus book for undergraduates?
Look for questions marked as open-problem or open-problem-list on MathOverflow; I guess you will find some open problems there. Here is a list of some open questions I have found there: * Convergence of a series (and this related question) * Gourevitch's conjecture * Cover of the unit square by rectangles * ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Confusion with Algebra (Substitution) I apologize for the title, I'm not sure what category this would fall under. The advertisement read Buy $3$ tires at the regular price and get a fourth tire for only $3$ dollars. Carol paid $\$240$ for a set of $4$ tires. What is the regular price of a tire? So I came up with $3x + 3 = 240$. $\frac{240}{3} = 80$ $3 = 80$ does not make sense. Why do I have to subtract the $3$ first? I was told that when using systems of equations I needed to use PEMDAS.
Your mistake is that dividing by three means you divide the entire left side, so you'd end up with: $$3x+3=240,$$$$\frac{3x+3}{3}=\frac{240}{3},$$$$x+1=80,$$$$x=79.$$ So the regular price of a tire is $\$79$. You divided only the $240$ by $3$; the whole $3x+3$ on the left-hand side has to be divided as well (equivalently, subtract the $3$ first and then divide).
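The corrected computation, done in exact arithmetic as a sketch:

```python
from fractions import Fraction

total, fourth_tire = 240, 3
# 3x + 3 = 240  =>  x = (240 - 3) / 3
x = Fraction(total - fourth_tire, 3)
assert x == 79
assert 3 * x + fourth_tire == total   # three regular tires plus the $3 tire
```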
{ "language": "en", "url": "https://math.stackexchange.com/questions/1095984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the coefficient of $x^{18}$ in the expansion of $(x+x^{2}+x^{3}+x^{4}+x^{5}+x^{6})^{4}$? How to approach this type of question in general? * How to use the binomial theorem? * How to use the multinomial theorem? * Are there any other combinatorial arguments available to solve this type of question?
We really seek the coefficient of $x^{14}$, factoring out an $x$ from each term in the generating function. Then observe that: $(1 + x + x^{2} + x^{3} + x^{4} + x^{5}) = \frac{1-x^{6}}{1-x}$ Now raise this to the fourth power to get: $f(x) = \left(\frac{1-x^{6}}{1-x}\right)^{4}$. We have the identities: $$(1-x^{m})^{n} = \sum_{i=0}^{n} \binom{n}{i} (-1)^{i} x^{mi}$$ And: $$\frac{1}{(1-x)^{n}} = \sum_{i=0}^{\infty} \binom{i + n - 1}{i} x^{i}$$ So we expand out the numerator and denominator, picking terms of $x^{14}$. Note that we are multiplying the numerator expansion by the denominator expansion. $$\binom{14 + 4 - 1}{14}x^{14} - \binom{4}{1} \binom{8 + 4 - 1}{8} x^{14} + \binom{4}{2} \binom{2 + 4 - 1}{2} x^{14}$$ Adding the coefficients gives $\binom{17}{14} - 4\binom{11}{8} + 6\binom{5}{2} = 680 - 660 + 60 = 80$.
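The result can be sanity-checked by brute force, since the coefficient of $x^{18}$ in $(x+\cdots+x^6)^4$ counts the ways four dice can sum to $18$. A sketch:

```python
from math import comb
from itertools import product

# brute force: number of 4-tuples from {1,...,6} summing to 18
brute = sum(1 for t in product(range(1, 7), repeat=4) if sum(t) == 18)

# the inclusion-exclusion terms from the generating-function expansion
formula = comb(17, 14) - comb(4, 1) * comb(11, 8) + comb(4, 2) * comb(5, 2)

assert brute == formula == 80
```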
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Sign of a Permutation using Polynomials One way to define sign of permutation is to consider the polynomial $P(x_1,x_2,\cdots, x_n)=\prod_{1\leq i<j\leq n} (x_i-x_j)$ and define $$Sign(\sigma)= \frac{P(x_{\sigma(1)},x_{\sigma(2)},\cdots, x_{\sigma(n)})}{P(x_1,x_2,\cdots, x_n)}.$$ I want to see the proof of $Sign(\sigma\tau)=Sign(\sigma)Sign(\tau)$. For this, $$ Sign(\sigma\tau)=\frac{P(x_{\sigma\tau(1)},x_{\sigma\tau(2)},\cdots, x_{\sigma\tau(n)})}{P(x_1,x_2,\cdots, x_n)}$$ $$=\frac{P(x_{\sigma\tau(1)},x_{\sigma\tau(2)},\cdots, x_{\sigma\tau(n)})}{P(x_{\tau(1)},x_{\tau(2)},\cdots, x_{\tau(n)})}\times\frac{P(x_{\tau(1)},x_{\tau(2)},\cdots, x_{\tau(n)})}{P(x_1,x_2,\cdots, x_n)}.$$ I didn't get why the first term on RHS in last step is $Sign(\sigma)$. Can anybody help me? (This could be elementary, but I am not justified.)
Hint: The definition $$ Sign(\sigma){}={}\dfrac{P(x_{\sigma(1)},x_{\sigma(2)},\cdots, x_{\sigma(n)})}{P(x_1,x_2,\cdots, x_n)} $$ is valid for any initial order of $x_1,\ldots,x_n\,$. For example, $$ \dfrac{(x_1-x_2)}{(x_2-x_1)}{}={}\dfrac{(x_2-x_1)}{(x_1-x_2)}{}={}-1\,. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Squeeze Theorem Question My question is from this video. In the last example he says that $$\lim_{x \to 0} x^2 \cos(\frac{1}{x^2}) = 0$$ Squeeze Theorem: $$g(x) \leq f(x) \leq h(x)$$ Given: $$-1 \leq \cos(x) \leq 1$$ he confusingly gets $$-x^2 \leq x^2\cos(\frac{1}{x^2})\leq x^2$$ and finds the limits with that. How does he find $g(x) \text{ and } h(x)$ for the squeeze theorem? Is there a special way to find them?
I don't know whether there is any special way to find $g(x)$ and $h(x).$ It's mostly intuition and using some known inequality. Here are some examples: (1). $\lim_{x \to 0} \sin x = 0.$ In this case we use the fact that $|\sin x | \leq |x|.$ So we get $-|x| \leq \sin x \leq |x|.$ (2). $\lim_{x \to 0} \cos x = 1.$ In this case we use the following inequality $1 - \frac{1}{2}x^2 \leq \cos x \leq 1, \forall x \in \mathbb R.$ (3). $\lim_{x \to 0}\dfrac{\cos x -1}{x} = 0.$ For this we use two inequalities: $$ -\frac{1}{2}x \leq \dfrac{\cos x -1}{x} \leq x, \text{for} \space x > 0 $$ and $$ 0 \leq \dfrac{\cos x -1}{x} \leq -\frac{1}{2}x, \text{for} \space x < 0. $$ Now define $g(x) := -\frac{x}{2}$ for $x \geq 0$ and $g(x) := 0$ for $x < 0.$ Also define $h(x) := 0$ for $x \geq 0$ and $h(x) := -\frac{x}{2}$ for $x < 0.$ If you look at the above examples, then you can see that, to use the squeeze theorem, we are using some known inequality so that both $g(x)$ and $h(x)$ have the same limit. The more complicated the given function is, the harder it is to find $g(x), h(x)$ to use the squeeze theorem.
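The inequalities used in these examples can be spot-checked numerically (a rough sketch; the bounds $|\sin x|\leq|x|$ and $1-\frac12x^2\leq\cos x\leq1$ hold for all real $x$):

```python
import math

for i in range(-100, 101):
    x = i / 10.0
    assert abs(math.sin(x)) <= abs(x) + 1e-15     # |sin x| <= |x|
    assert 1 - x * x / 2 <= math.cos(x) <= 1      # squeeze for cos near 0
```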
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Power set of $\{\{\varnothing\}\}$ $$\mathcal{P}(x)=\{y\mid y\subseteq x\}$$ $$\mathcal{P}(\varnothing)=\{\varnothing\}$$ $$\mathcal{P}(\{\varnothing\})=\{\varnothing,\{\varnothing\}\}$$ $$\mathcal{P}(\{a,b\})=\{\varnothing,\{a\},\{b\},\{a,b\}\}$$ For $\mathcal{P}(\{\{\varnothing\}\})$ we have: $$\varnothing\subseteq\{\{\varnothing\}\}$$ $$\{\varnothing\}\subseteq\{\{\varnothing\}\}\text{ and}$$ $$\{\{\varnothing\}\}\subseteq\{\{\varnothing\}\}$$ Therefore $\mathcal{P}(\{\{\varnothing\}\})=\{\varnothing,\{\varnothing\},\{\{\varnothing\}\}\}$ Is this answer correct? Unsure with this topic, just need some verification, thanks.
This one $$\{\varnothing\}\subseteq\{\{\varnothing\}\}$$ is incorrect. $A\subseteq B$ means that for every $x\in A$ it is true that $x\in B$. $\{\varnothing\}$ contains one element: $\varnothing$, but $\{\{\varnothing\}\}$ does not contain $\varnothing$, so $$\{\varnothing\}\not\subseteq\{\{\varnothing\}\}$$ The other two are correct. $$\mathcal{P}(\{\{\varnothing\}\})=\{\varnothing,\{\{\varnothing\}\}\}$$ And as always: the power set of a set which contains one element contains two elements.
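Modelling sets as `frozenset`s makes the verification concrete; a sketch enumerating the power set of $\{\{\varnothing\}\}$:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    subs = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    return {frozenset(c) for c in subs}

empty = frozenset()              # the empty set
s1 = frozenset([empty])          # { empty set }
s2 = frozenset([s1])             # { { empty set } }

assert powerset(s2) == {empty, s2}   # exactly two elements
assert s1 not in powerset(s2)        # {empty set} is not a subset of {{empty set}}
```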
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do you reject the negative base solution for logs? $\log_x 64=2$ translates to $x^2=64$. This solves to $x=\pm8$. Why do you reject the solution $x=-8$? Doesn't it successfully check? $\log_{-8}64=2$ means "The exponent for $-8$ to get $64$ is $2$", which is a true statement, no?
$\log_{-8}x$ would be an inverse function of $(-8)^x$, but this function does not behave well at all. What would $(-8)^{\pi}$ be, for example?
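Standard libraries make the same refusal; for instance Python's `math.log` rejects a negative base outright (a quick illustration):

```python
import math

assert abs(math.log(64, 8) - 2) < 1e-12   # base 8: fine, gives 2

try:
    math.log(64, -8)                      # base -8: rejected
    raised = False
except ValueError:
    raised = True
assert raised
```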
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
If $f \in L^1 \cap L^2$ is $L^2$-differentiable, then $Df \in L^1 \cap L^2$ Working with the definition that $f \in L^2(\mathbb{R})$ is $L^2$-differentiable with $L^2$-derivative $Df$ if $$ \frac{\|\tau_hf-f-hDf\|_2}{h} \to 0 \text{ as } h \to 0 $$ (where $\tau_h(x) = f(x+h)$), I want to try and show that If $f \in L^1 \cap L^2$ is $L^2$-differentiable, then $Df \in L^1 \cap L^2$. Showing that $Df \in L^2$ is immediate from the definition; however, I'm unsure on how to get that $Df \in L_1$. I've tried arguing from density of continuous compactly supported functions (for which $Df \in L^2 \implies Df \in L^1$ by Jensen's inequality), but to no avail.
I don't think that what you are trying to prove is true. Consider e.g. $$ f(x) = \frac{1}{x^2} \cdot \sin(x^2) \text{ for large } x, $$ i.e. truncate $f$ somehow near the origin. We then have $f \in L^1 \cap L^2$ with $$ f'(x) = -\frac{1}{x^3} \cdot \sin(x^2) + \frac{2}{x} \cdot \cos(x^2). $$ We have $f' \in L^2$, but (as shown below) $f' \notin L^1$. Even with your definition of differentiability, I am convinced that $f$ is $L^2$-differentiable with $L^2$-derivative $f' \in L^2 \setminus L^1$. EDIT: OK, here is a proof that $f' \notin L^1$. First of all, we have $\frac{1}{x^3} \cdot \sin(x^2) \in L^1 ((1,\infty))$ (and we are considering everything away from the origin), so that $f' \notin L^1$ is equivalent to $\frac{2}{x} \cdot \cos(x^2) \notin L^1$. But the substitution $\omega =x^2$ yields $$ \int_1^\infty \bigg| \frac{2}{x} \cdot \cos(x^2) \bigg| \, dx = \int_1^\infty \bigg| \frac{\cos(\omega)}{\omega} \bigg| \, d\omega = \infty, $$ because of $|\cos(\omega)| \geq 1/2$ on $(-\epsilon, \epsilon) + 2\pi \Bbb{Z}$, which easily implies that the integral diverges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Relation between Frobenius norm and eigenvalues I'm considering a stochastic multivariate process, the stability of which implies that all eigenvalues $\lambda_i$, $i = \overline{1,n}$, of a certain square real-valued matrix $A$ lie within the unit circle. Besides that we know nothing about $A$. But I also need the condition $\| A \|_F < 1$ to hold true, and I wonder if there's a connection between these two conditions. How much more strict would the latter one be?
Let $\lambda$ be an eigenvalue of $A$ and $v$ an associated eigenvector. We may suppose without loss of generality that $\|v\|_2 =1$. Applying the Cauchy-Schwarz inequality to each row of $A$, we get $$|\lambda|^2=\|\lambda v\|_2^2=\|Av\|_2^2=\sum_{j=1}^n\left|\sum_{k=1}^n A_{j,k}v_k\right|^2\leq \sum_{j=1}^n\left(\sum_{k=1}^n|A_{j,k}|^2\right)\left(\sum_{k=1}^n|v_k|^2\right)=\|A\|_F^2.$$ (This is the standard chain $\rho(A)\leq\|A\|_2\leq\|A\|_F$, where $\rho(A)$ denotes the spectral radius.) It follows that your condition $\|A\|_F < 1$ implies that the eigenvalues of $A$ lie in the unit circle. Now, note that the matrix $\frac{2}{3}\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$ has both eigenvalues equal to $2/3$. However, its Frobenius norm is $\sqrt{3\left(\frac{2}{3}\right)^2}=\sqrt{\frac{4}{3}}>1$, so the first condition does not imply the second: the Frobenius condition is strictly stronger. EDIT: In the comments, OP asked about a relation of the type $\|A\|_F \leq C\rho(A)$ where $\rho(A)$ is the spectral radius of $A$ and $C >0$ may depend on the dimension $n$, i.e. $A \in \Bbb R^{n\times n}$. Here's a proof that such a bound doesn't exist. Let $n =2 $ and consider the matrix $$ A = \begin{pmatrix} 1 & b \\ 1 & 1 \end{pmatrix}.$$ Then for $b >1$ we have $\rho(A) = b^{1/2}+1$ and $\|A\|_F = \sqrt{3+b^2}$. Suppose such a $C$ exists; then we must have $$ C \geq\frac{\sqrt{3+b^2}}{\sqrt{b}+1}\geq \frac{\sqrt{3+b^2}}{2\sqrt{b}}\geq \frac{\sqrt{b^2}}{2\sqrt{b}}=\frac{\sqrt{b}}{2} \qquad \forall b \geq 1.$$ But note that $\lim_{b\to \infty}\frac{\sqrt{b}}{2}=\infty,$ a contradiction.
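A quick numeric illustration of both directions, using the $2\times2$ counterexample matrix from the answer (for a triangular matrix the eigenvalues are simply the diagonal entries):

```python
import math

A = [[2/3, 0.0], [2/3, 2/3]]                 # the counterexample matrix
fro = math.sqrt(sum(a * a for row in A for a in row))
rho = max(abs(A[0][0]), abs(A[1][1]))        # spectral radius of a triangular matrix

assert rho <= fro       # eigenvalues are always bounded by the Frobenius norm
assert rho < 1 < fro    # ...but rho < 1 does not force ||A||_F < 1
```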
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability of Sets I need some help on this one: We have sets $X$ and $Y$ chosen independently and uniformly at random from among all subsets of $\{1,2,\ldots,100\}$. Determine the probability that $X$ is a subset of $Y$.
How many pairs $(X,Y)$ satisfy $X\subseteq Y$? Classify according to the size of $Y$: if $|Y|=k$, there are $\binom{n}{k}$ choices for $Y$ and then $2^k$ choices for a subset $X\subseteq Y$. So we get $\sum\limits_{k=0}^n\binom{n}{k}2^k$ such pairs. Since there are $2^n\times 2^n$ possible pairs $(X,Y)$, you want $$\frac{\sum\limits_{k=0}^n\binom{n}{k}2^k}{2^{2n}}$$ Now notice that $\sum\limits_{k=0}^n\binom{n}{k}2^k$ is equal to $3^n$, because it is the number of ways to color the integers from $1$ to $n$ with the colors white, grey and black. To see this, index on the set of elements that are grey or black (so first pick the subset of numbers that are grey or black), and once that subset has been selected, choose the subset of it that is going to be black; this uniquely determines the coloring. Hence the probability is $\dfrac{3^n}{4^n}=\left(\dfrac{3}{4}\right)^n$, which for $n=100$ is $\left(\dfrac{3}{4}\right)^{100}$.
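The identity $\sum_{k=0}^n\binom nk2^k=3^n$ and the resulting probability $(3/4)^n$ can be checked by exhaustive enumeration for small $n$ (a sketch; $n=100$ is of course far beyond brute force):

```python
from itertools import combinations

def subset_pairs(n):
    """Count pairs (X, Y) of subsets of {0,...,n-1} with X a subset of Y."""
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    return sum(1 for X in subsets for Y in subsets if X <= Y)

for n in range(1, 8):
    assert subset_pairs(n) == 3**n           # each element: in neither, only Y, or both
assert subset_pairs(4) / 4**4 == (3 / 4)**4  # the probability, here for n = 4
```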
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Example of a unbounded projection Let $H$ be a Hilbert space over $\mathbb{K}$. Let $T:H\rightarrow H$ be a linear transformation such that $T^2=T$. What is an example of $T$ such that $T$ is unbounded?
For example: let $H = \ell^2$. Define the transformation $$ (x_1,x_2,\dots) \mapsto \left(\sum_{k=1}^\infty kx_k, 0,0,\dots \right) $$ It is linear, satisfies $T^2=T$ (applying it a second time just reproduces the first coordinate), and is unbounded, since the unit vector $e_n$ is mapped to a vector of norm $n$. Note, however, that this operator is not defined over all of $\ell^2$, only on the (dense) subspace of sequences for which $\sum_{k} kx_k$ converges.
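A sketch of this operator acting on finite truncations (which correspond to finitely supported sequences, all inside its domain):

```python
def T(x):
    """The map (x_1, x_2, ...) -> (sum_k k*x_k, 0, 0, ...) on a finite truncation."""
    head = sum((k + 1) * xk for k, xk in enumerate(x))
    return [head] + [0] * (len(x) - 1)

x = [1, 2, 3, 4]
assert T(T(x)) == T(x)          # idempotent: T^2 = T

# unbounded: the unit vector e_n is sent to a vector of norm n
for n in range(1, 6):
    e = [0] * n
    e[-1] = 1
    assert T(e)[0] == n
```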
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Brownian Motion and Continuity Consider a Brownian Motion $(B_{t})_{t\geq0}$. In my lecure notes it says, without proof, that $\mathbb{P}\left(\sup_{t,s\leq N}\left\{ \left|B_{t}-B_{s}\right|:\left|t-s\right|<\delta\right\} <\varepsilon\right)$ converges to $0$ for $\delta$ tending to $0$. I think it is a consequence of the continuity of the sample paths, but can anyone help with a proof?
First there's a typo: you should have $>\varepsilon$ or should say that the probability converges to $1$, not $0$. Anyway, here's my proof. It's informal and skips over details; you should fill these in. Fix $t$ and $s$ for a moment. Then your probability is the probability that a normal variable centered at zero whose variance is at most $\delta$ is outside the interval $(-\varepsilon,\varepsilon)$. This goes to zero by Markov's inequality (or Chebyshev's inequality, different authors seem to use different names for this). Now fix $s$ and let $t \geq s$ be arbitrary. Then the Markov property tells us that $B_t - B_s$ is another Brownian motion, call it $B'_{t-s}$. Doob's maximal inequality estimates your probability by a constant times the corresponding probability for $B'_{N-s}$, which goes to zero by the previous argument. Now allow $s$ to vary as well. Then the probability still goes to zero, because the suprema over $t$ from the previous paragraph are bounded by our estimate for the supremum over $t$ with $s=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1096969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Describe explicitly all inner products on $\mathbb{R}$ and $\mathbb{C}$ I know this is an elementary question, however I am really lost as to where to start. Since both $\mathbb{R}$ and $\mathbb{C}$ are finite-dimensional I think the inner product will be completely determined by the basis $\{1\}$. I am not sure where to go from here.
Consider these fields as vector spaces over themselves. Let $x,y\in\Bbb R$. Then $(x,y)=xy(1,1)$ by bilinearity, so an inner product on $\Bbb R$ is completely determined by the value $(1,1)$. Conversely, it is easy to check that for each $c>0$ the function $f_c:\Bbb R\times \Bbb R\to\Bbb R$, $f_c(x,y)=xyc$, is an inner product on the space $\Bbb R$. Let $x,y\in\Bbb C$. Then $(x,y)=x\bar y(1,1)$ by sesquilinearity, so an inner product on $\Bbb C$ is completely determined by the value $(1,1)$. Conversely, it is easy to check that for each $c>0$ the function $f_c:\Bbb C\times \Bbb C\to\Bbb C$, $f_c(x,y)=x\bar yc$, is an inner product on the space $\Bbb C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
why is $\sqrt{-1} = i$ and not $\pm i$? this is something that came up when working with one of my students today and it has been bothering me since. It is more of a maths question than a pedagogical question so i figured i would ask here instead of MESE. Why is $\sqrt{-1} = i$ and not $\sqrt{-1}=\pm i$? With positive numbers the square root function always returns both a positive and negative number, is it different for negative numbers?
Well, if you are considering that $y=\sqrt{x}$ is the relation $y^2=x$, then, yes, $\pm i$ are both solutions to $\sqrt{-1}$. However, this is not usually how square roots are defined. Typically we say: $$\sqrt{1}=1$$ Not plus or minus $1$ - just $1$. This means that $\sqrt{x}$ is a "right inverse" of $x^2$ - that is, we have that $$\left(\sqrt{x}\right)^2=x$$ but not necessarily that $$\sqrt{(x^2)}=x.$$ The difference here is that, to make $\sqrt{x}$ a function, we need it to yield unique output values - so we choose a principal root. We generally define that whenever we take the square root of a positive number, we get a positive number. And the definition $\sqrt{-1}=i$ is equally innocuous, since if we defined it as $\sqrt{-1}=-i$, we could just relabel every number by its complex conjugate, and get back to our typical definition (that is to say, $\sqrt{-1}=i$ more defines $i$ than it defines $\sqrt{-1}$). So, though the equation $x^2+1=0$ has two solutions (as does any equation $x^2-c=0$ for complex $c\neq 0$), for the sake of making square roots act like a function, we must choose one of them to be the square root of $-1$, and we call this number $i$.
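Programming languages have to make the same choice of principal root; Python's `cmath` module, for instance:

```python
import cmath

assert cmath.sqrt(-1) == 1j        # the principal square root is i, not -i
assert 1j**2 == (-1j)**2 == -1     # even though both i and -i square to -1
```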
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 5 }
Math, how do we know if a substitution is true? For instance, in calculus we often do u-substitutions. Quite often, we do trignometric substitutions to solve integrals. For instance, if we have the following relation $y=\sqrt{1-x^2}$ And we substitute $x = \sin u$, for $x \in [-1, 1]$; how do we know that our substitution is correct? If I graph $\sin u$ and $x$, it is clear these functions are very different. How is there any relation at all? Why are we allowed to do substitutions like this?
In general, you're allowed to do substitutions whenever you're able to undo them. Precisely speaking, the substitution $$y = g(x)$$ has to be such that $g$ is a bijective function. What this means is that: * Two values $x_1\neq x_2$ cannot be mapped to the same value, i.e. $g(x_1) \neq g(x_2)$. * Every $y$ has to be the image under $g$ of a certain $x$. Now for (definite) integrals, the same has to hold, but you need to have the additional hypothesis that $g$ is differentiable. I'll note that this is more of a sketch than anything, because we'd need to discuss the relative domains and codomains where they apply. I've skipped it to give a more concise idea. Let's look at why your substitution is correct then (and in this case we will be obligated to consider the domain). Suppose your integral is $$\int_{-1}^1\sqrt{1-y^2} \,dy$$ Now, you say that $$y = \sin x\,, x\in\left[\frac{-\pi}{2},\frac{\pi}{2}\right)$$ I've chosen the domain where $x$ lies so that we do in fact get every $y$ in $[-1,1)$ (integration tip: it doesn't matter if we integrate in $[a,b]$ or $(a,b)$), and $\sin x$ does not repeat values here (you can verify these affirmations). Additionally, $\sin $ is differentiable, so here, the substitution is valid. Actually, proving that we can do this does take some effort, so it's not as if we're dealing with some very evident fact. There are theorems to prove in the process.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
If $x,y,z \geq 1/2, xyz=1$, showing that $2(1/x+1/y+1/z) \geq 3+x+y+z$ If $x,y,z \geq 1/2, xyz=1$, showing that $2(1/x+1/y+1/z) \geq 3+x+y+z$ I tried Schturm's method for quite some time, and Cauchy Schwarz for numerators because of the given product condition.
Let $x+y+z=3u$, $xy+xz+yz=3v^2$ and $xyz=w^3$. Hence, our inequality is equivalent to $f(v^2)\geq0$, where $f$ is a linear increasing function. Hence, $f$ attains its minimal value when $v^2$ is minimal, which happens when two of the variables are equal, or when one of them equals $\frac{1}{2}$. * $y=x$, $z=\frac{1}{x^2}$, which gives $(x-1)^2(2x^2+2x-1)\geq0$; * $z=\frac{1}{2}$, $y=\frac{2}{x}$, which gives $\frac{1}{2}\geq0$. Done!
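The algebra behind the first case can be checked mechanically: multiplying $2\left(\frac2x+x^2\right)\geq3+2x+\frac1{x^2}$ through by $x^2>0$ gives $2x^4-2x^3-3x^2+4x-1\geq0$, and the claimed factorisation can be expanded with a small helper (coefficient lists, lowest degree first):

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

square = polymul([-1, 1], [-1, 1])        # (x - 1)^2
product = polymul(square, [-1, 2, 2])     # (x - 1)^2 (2x^2 + 2x - 1)

assert product == [-1, 4, -3, -2, 2]      # i.e. 2x^4 - 2x^3 - 3x^2 + 4x - 1
# and 2x^2 + 2x - 1 >= 0 for x >= 1/2; at the endpoint:
assert 2 * 0.5**2 + 2 * 0.5 - 1 == 0.5
```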
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
A conical tent is $8$ $m$ high and the radius of its base is $6$ $m$. A conical tent is $8$ $m$ high and the radius of its base is $6$ m. Find (i) Slant height of the tent (ii) Cost of the canvas required to make the tent, if the cost of $1$ $m^2$ canvas is $\$70$. What I've tried so far, Height=$8$ $m$ Radius=$6$ $m$ Slant height=$\sqrt{r^2 + h^2} =10$ $m$
Let's start by deriving the surface area of a cone, ignoring the base. Let the height be $h$. We can see by Pythagoras that the slant height is $s = \sqrt{h^2+r^2}$. The shape of the material, when flattened out, is a circular sector, which could be cut from a circle of radius $s$. We know the formula for the area of a circle, and we know what proportion of that circle we need: Surface Area $A = \pi \cdot s^2 \cdot \dfrac{2 \cdot \pi \cdot r}{2 \cdot \pi \cdot s} = \pi \cdot s \cdot r$. I'll leave the rest as an exercise: from here, all you need to do is multiply the area by the cost per square metre of the canvas to finish.
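Putting the numbers in, with the slant height from part (i); a short sketch:

```python
import math

h, r = 8.0, 6.0
s = math.hypot(r, h)            # slant height: sqrt(6^2 + 8^2)
assert s == 10.0

area = math.pi * r * s          # curved surface area, pi * r * s = 60 pi
cost = 70 * area                # at $70 per square metre

assert abs(area - 60 * math.pi) < 1e-12
assert round(cost) == 13195     # about $13,194.69
```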
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to say if eigenvectors of $A$ are orthogonal or not, without computing eigenvectors. I am given the matrix: $$A=\begin{bmatrix} 0&-1 & 2 \\ -1 & -1 & 1 \\ 2 & 1 &0 \end{bmatrix} $$ 1. Without finding the eigenvalues and eigenvectors, determine whether the eigenvectors are orthogonal or not. Justify your answer. 2. Express matrix $A$ in the form $A=UDU^T$ where $D$ is a diagonal matrix and $U$ is an orthogonal matrix. What are $U$ and $D$? I can check whether vectors are orthogonal by their dot product being $0$. I know that if $B^T=B^{-1}$, then $B$ is orthogonal and $B^TB=I$. I can also find the eigenvalues and eigenvectors, but the question asks to do it without finding them. How do I check whether the eigenvectors are orthogonal without finding them, and how do I express $A=UDU^T$?
Since $A$ is symmetric ($A^t=A$), let $v$ be an eigenvector corresponding to $\lambda$ and let $w$ be an eigenvector corresponding to $\delta$, with $\lambda\neq\delta$. Then $$\lambda \langle v,w \rangle= \langle \lambda v,w \rangle=\langle Av,w \rangle=\langle v,A^tw \rangle=\langle v,Aw \rangle=\langle v,\delta w \rangle= \delta \langle v,w \rangle\Rightarrow (\lambda-\delta)\langle v,w \rangle=0 \Rightarrow \langle v,w \rangle=0.$$
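A quick NumPy illustration of both parts (my own sketch, not required by the question): for a symmetric matrix, `np.linalg.eigh` returns an orthonormal eigenbasis, which exhibits the $A=UDU^T$ factorization directly:

```python
import numpy as np

A = np.array([[ 0., -1.,  2.],
              [-1., -1.,  1.],
              [ 2.,  1.,  0.]])

assert np.allclose(A, A.T)                  # A is symmetric, so the argument applies
eigvals, U = np.linalg.eigh(A)              # eigh: orthonormal eigenvectors for symmetric A
D = np.diag(eigvals)
assert np.allclose(U @ U.T, np.eye(3))      # U is orthogonal
assert np.allclose(U @ D @ U.T, A)          # A = U D U^T
```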
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do you find this limit $\lim_{n\to +\infty} \int _{\frac{1}{n}}^{n} \frac{|\sin x|^n}{x^{\alpha}}\,dx $ I don't know how to solve the limit $$\lim_{n\to +\infty} \int _{\frac{1}{n}}^{n} \frac{|\sin x|^n}{x^{\alpha}}\,dx $$ for each $\alpha>1$. My attempt: $\displaystyle f_n(x)=\frac{\chi_{[\frac{1}{n},n]}(x) |\sin x|^n}{x^{\alpha}}$ If $x>0$ : $0<|f_n(x)|<\frac{1}{x^\alpha}$ and $f_n \to 0 $ pointwise $\lim_{n\to +\infty} \int _{\frac{1}{n}}^{n} \frac{|\sin x|^n}{x^{\alpha}}\,dx =\lim_{n\to +\infty} \int _{0}^{+\infty} \frac{|\sin x|^n}{x^{\alpha}}\,dx < \lim_{n\to +\infty} \int _{0}^{+\infty} \frac{1}{x^{\alpha}}\,dx $ but $\alpha>1$ ! Any help is appreciated :)
Let $$f_n(x)=\chi_{[\frac{1}{n},1]}(x)\frac{|\sin x|^n}{x^\alpha}+\chi_{[1,n]}(x)\frac{|\sin x|^n}{x^\alpha}=:g_n(x)+h_n(x).$$ On $[\frac{1}{n},1]$ we have $|\sin x|^n\leq \sin(1)^n$ and $x^{-\alpha}\leq n^\alpha$, so $$g_n(x)\leq\chi_{[0,1]}(x)\,\sin(1)^n n^\alpha\to0,\qquad \int \chi_{[0,1]}\sin(1)^n n^\alpha\,dx=\sin(1)^n n^\alpha\to0,$$ since $\sin(1)<1$ and the exponential decay beats the polynomial factor $n^\alpha$. On $[1,n]$, $$h_n(x)\leq\frac{1}{x^\alpha}\chi_{[1,\infty)}(x),$$ which is integrable because $\alpha>1$. So by the dominated convergence theorem, $$\lim\int f_n=\lim\left(\int g_n+\int h_n\right)=\int\lim g_n+\int\lim h_n=0.$$
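As a numerical sanity check (a Python sketch, not part of the proof), one can approximate the integral for increasing $n$ with, say, $\alpha=2$ and watch it decrease toward $0$:

```python
import math

def I(n, alpha=2.0, steps=100000):
    """Midpoint-rule approximation of the integral over [1/n, n]."""
    a, b = 1.0 / n, float(n)
    h = (b - a) / steps
    return h * sum(abs(math.sin(a + (i + 0.5) * h))**n / (a + (i + 0.5) * h)**alpha
                   for i in range(steps))

vals = {n: I(n) for n in (5, 20, 80)}
assert vals[80] < vals[20] < vals[5]        # shrinking toward 0
```

The decay is slow (roughly like $1/\sqrt{n}$, coming from the Laplace-type peaks of $|\sin x|^n$ at $x=\pi/2+k\pi$), but it is clearly monotone toward $0$.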
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Calculate Angle between Two Intersecting Line Segments Need some help/direction, haven't had trig in several decades. On a 2 dimensional grid, I have two line segments. The first line Segment always starts at the origin $(0,0)$, and extends to $(1,0)$ along the $X$-axis. The second line Segment intersects the first at the origin, and can extend to potentially anywhere within $(-1,-1)$ to $(1,1)$. I need to always calculate the angle to the right of the first segment... If this is already addressed in another post, please comment me the link. UPDATE I will have a single input of $(x,y)$ for the end of the 2nd segment... so segment $A$ would be $(0,0)$ →$ (1,0)$ and segment $B$ would be $(0,0)$ → $(x,y)$ where $(x,y)$ can be anywhere inside $(-1,-1)$ and $(1,1)$ assuming that the scale is $0.1$. Let me know If I can provide any additional information that will help. UPDATE OK... assuming that the first segment is running along the $Y$-axis... $A(0,0)$ and $B(0,1)$ And the second segment is running from $A(0,0)$ to $C(.4,.4)$ with a scale of .2.... $$\theta= \tan^{-1}{\dfrac{.4}{.4}}= 45$$ If I change C to $C(.4,-.4)$ I get. $$\theta= \tan^{-1}{\dfrac{.4}{-.4}}= -45$$ Do I have to manually compensate for the quadrant because this seems to calculate based specifically on the axis... I would expect the 2nd one to come up as 135 degrees from the positive Y Axis... What am I missing? Just for posterity... If I had $C(-0.4,-0.1)$ I would expect the result for the angle from the positive Y axis to this line segment to be roughly 255 degrees... $$\theta= \tan^{-1}{\dfrac{.4}{-.1}}= 75.9637$$ Plus 180 from starting at the positive Y axis....
Not sure EXACTLY what you are asking, but I will answer to the best of my ability; a visual would greatly help me. When we have two intersecting line segments like this, finding any single value (a, c, b, d) reveals all the other values. For example, if we have the value of a, then c = a, and we have (in degrees) b = d = 180 - a. So let me equip you with the tools to find the angle between two vectors, as you have given. For example, we can treat the first vector as you have said as $[1, 0]$ and the second vector as $[1, 1]$. We take the dot product between them, which just means that we multiply the corresponding entries and sum them up: $\langle a, b\rangle = \sum\limits_{i = 1}^{n}{a_ib_i}$, where $n$ is the number of elements in each vector. We use the geometric fact that $\langle a, b\rangle = |a||b|\cos(\theta_{a, b})$, where $|a|$ means its norm or magnitude (I only deal with the standard inner product and norm here). This gives us that the dot product, which is $1$, is equal to $\sqrt{2}\cos(\theta)$, which means $\cos(\theta) = \frac{1}{\sqrt{2}}$. This in turn gives us that the angle between these two is 45 degrees, and we can figure out the adjacent angle as 135; the vertical angles all share the same degrees.
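Following up on the question's updates: the quadrant compensation being done by hand is exactly what the two-argument arctangent does automatically. A short Python sketch (the function name and the clockwise-from-positive-$y$ convention are my assumptions, read off from the updates):

```python
import math

def angle_cw_from_pos_y(x, y):
    """Angle of the segment (0,0)->(x,y), measured clockwise from the
    positive y-axis, normalized to [0, 360) degrees.  math.atan2 does the
    quadrant bookkeeping that a bare arctan(y/x) loses."""
    return (90.0 - math.degrees(math.atan2(y, x))) % 360.0

# The three cases from the question's updates:
assert abs(angle_cw_from_pos_y( 0.4,  0.4) -  45.0) < 1e-9
assert abs(angle_cw_from_pos_y( 0.4, -0.4) - 135.0) < 1e-9
assert abs(angle_cw_from_pos_y(-0.4, -0.1) - 255.9637565) < 1e-4
```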
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How does one expression factor into the other? How does $$(k+1)(k^2+2k)(3k+5)$$ factor into $$(k)(k^2-1)(3k+2) + 12k(k+1)^2$$
Well, $$\text{RHS}=k(k^2-1)(3k+2) + 12k(k+1)^2 = k(k+1)\bigl((k-1)(3k+2)+12(k+1)\bigr)=k(k+1)(3k^2+2k-3k-2+12k+12)=k(k+1)(3k^2+11k+10)=k(k+1)(k+2)(3k+5)=\text{LHS}.$$
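Since both sides are quartic polynomials in $k$, exact agreement at five or more points already proves the identity; a one-line Python check:

```python
# Both sides are quartic polynomials in k, so exact agreement on five
# (here: twenty-one) integer points proves they are the same polynomial.
lhs = lambda k: (k + 1) * (k**2 + 2*k) * (3*k + 5)
rhs = lambda k: k * (k**2 - 1) * (3*k + 2) + 12 * k * (k + 1)**2
assert all(lhs(k) == rhs(k) for k in range(-10, 11))
```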
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Arrange soccer fixtures with correct home - away alternation for each team I am trying to do as the title says. I have 10 teams in the same group. Every team must play the rest once each but each of them will always alternate home and away. This means that if they play at home their first game, the second MUST be away, the third home, away, home, and so on. I tried like this: 1st round 0 vs. 9 1 vs. 8 2 vs. 7 3 vs. 6 4 vs. 5 2nd round 8 vs. 0 7 vs. 1 6 vs. 2 5 vs. 3 9 vs. 4 3rd round 0 vs. 7 1 vs. 6 2 vs. 5 3 vs. 9 4 vs. 8 4th round 6 vs. 0 5 vs. 1 9 vs. 2 8 vs. 3 7 vs. 4 5th round 0 vs. 5 1 vs. 9 2 vs. 8 3 vs. 7 4 vs. 6 The method works fine until the $5^{th}$ round but not for any further rounds since it would imply that $0$ must play at home against a team from $1$ to $4$, which are supposed to play at home that round as well. Is this mathematically possible? IF so, please show me how. Thank you.
Yes, it is possible. By way of illustration... here's how my solution starts $$\begin{array} \\ 1 & BvA & DvC & FvE & HvG & JvI \\ 2 & AvD & CvF & EvH & GvJ & \\ 3 & DvB & FvA & HvC & JvE & IvG \\ 4 & BvF & AvH & CvJ & EvI & \\ 5 & FvD & HvB & JvA & IvC & GvE \\ \end{array}$$ (it might be that my interest in bell-ringing is an advantage here :-) )
{ "language": "en", "url": "https://math.stackexchange.com/questions/1097923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I show the two limits $$ \displaylines{ \mathop {\lim }\limits_{x \to + \infty } \frac{{x^2 e^{x + \frac{1}{x}} }}{{e^{ - x} \left( {\ln x} \right)^2 \sqrt x }} = \mathop {\lim }\limits_{x \to + \infty } \left( {\frac{{x^{\frac{3}{4}} e^{\left( {x + \frac{1}{{2x}}} \right)} }}{{\ln x}}} \right)^2 = + \infty \cr \mathop {\lim }\limits_{x \to 0^ + } \left( {\frac{{x^{\frac{3}{4}} e^{\left( {x + \frac{1}{{2x}}} \right)} }}{{\ln x}}} \right)^2 = + \infty \cr} $$ Thank you.
$$\frac{x^2e^x}{e^{-x} (\ln x)^2\sqrt x} < \frac{x^2e^{x + \frac1x}}{e^{-x} (\ln x)^2\sqrt x} \text{ for large } x, \text{ since } e^{\frac1x} > 1.$$ Moreover, $$\frac{x^2e^x}{e^{-x} (\ln x)^2\sqrt x} = e^{2x}\,\frac{x^{1.5}}{(\ln x)^2} \longrightarrow +\infty \quad (x\to+\infty),$$ since the exponential factor dominates the logarithm. Thus the left-hand expression in the top line diverges to $+\infty$ too.
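A numerical illustration (Python sketch, not a proof): the expression overflows ordinary floating point very quickly, so evaluate its logarithm instead and watch it grow without bound:

```python
import math

def log_expr(x):
    # log of x^2 e^{x+1/x} / (e^{-x} (ln x)^2 sqrt(x))
    #   = 1.5 ln x + 2x + 1/x - 2 ln ln x
    return 1.5 * math.log(x) + 2 * x + 1.0 / x - 2 * math.log(math.log(x))

vals = [log_expr(x) for x in (10.0, 100.0, 1000.0)]
assert vals[0] < vals[1] < vals[2]   # grows without bound
```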
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing $\left\{ \frac{1}{n^n}\sum_{k=1}^{n}{k^k} \right\}_{n \in \mathbb{N}}\rightarrow 1$ I would like to prove: $$\left\{ \frac{1}{n^n}\sum_{k=1}^{n}{k^k} \right\}_{n \in \mathbb{N}}\rightarrow 1$$ I found a proof applying Stolz criterion but I need to use the fact that: $$\left\{\left(\frac{n}{n+1}\right)^n\right\}\text{ is bounded}\tag{$\ast$}$$ I would like to calculate this limit without using $(\ast)$ and preferably using basic limit criterion and properties. (Sorry about mispellings or mistakes, English is not my native language.)
Obviously $$\frac{1}{n^n}\sum_{k=1}^{n}k^k \geq 1,$$ while: $$\frac{1}{n^n}\sum_{k=1}^{n}k^k\leq \frac{1}{n^n}\sum_{k=1}^{n}n^k\leq\frac{1}{n^n}\cdot\frac{n^n}{1-\frac{1}{n}}=\frac{n}{n-1}.$$ Since $\frac{n}{n-1}\to1$, the squeeze theorem gives the limit $1$.
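The squeeze is easy to watch numerically (a Python sketch; exact integer sums, then one float division):

```python
def a(n):
    return sum(k**k for k in range(1, n + 1)) / n**n

# the squeeze 1 <= a(n) <= n/(n-1) from the answer, checked numerically
for n in range(2, 60):
    assert 1.0 <= a(n) <= n / (n - 1) + 1e-12
```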
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
how can I prove that $\frac{\arctan x}{x }< 1$? I have got some trouble with proving that for $x\neq 0$: $$ \frac{\arctan x}{x }< 1 $$ I tried doing something like $x = \tan t$ and playing with this with no success.
We want to show that $\arctan(x) \leq x$ for all positive $x$ (and $\arctan(x)\geq x$ for negative $x$, by oddness, so the quotient is less than $1$ in both cases). Notice that at $x=0$ the two sides agree: $\arctan(0) = 0$. Now, the derivative of $\arctan$ is $\frac{1}{1+x^2} < 1 = \frac{d}{dx}\,x$ for $x \neq 0$, and paired with our former observation, by a well-known theorem from calculus this means that $\arctan(x) \leq x$ for positive $x$. (By the same theorem, this is in fact a strict inequality for $x>0$.)
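A small numerical check (Python):

```python
import math

# arctan(x)/x is an even function, so checking x > 0 suffices
for x in (1e-6, 0.1, 1.0, 10.0, 1e6):
    assert math.atan(x) / x < 1
```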
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 9, "answer_id": 0 }
How to find the solution for $n=2$? Let $f: \mathbb R^n \setminus \{0\} \to \mathbb R^n \setminus \{0\}$ be a function only depending on the distance from the origin, $f = f(r)$, where $r = \sqrt{\sum_{i=1}^n x_i^2}$. I calculated $$ \Delta f = {n-1\over r}f_r + f_{rr}$$ and I am trying to determine which $f$ satisfy $${n-1\over r}f_r + f_{rr}=0$$ I integrated and found that $$ f(r) = K' r^{2-n} + K''$$ satisfies this equation. My problem is: I know that for $n=2$ the logarithm satisfies this equation too. But I don't know how to deduce it from what I've done so far.
You're not showing your detailed reasoning, but I imagine the penultimate step must have been $$ f'(r) = K_0 r^{1-n} $$ from which you get by indefinite integration $$ f(r) = \frac{K_0}{2-n} r^{2-n} + K_1 $$ When $n\ne 2$, the division by $2-n$ can be absorbed into the arbitrary constant, but for $n=2$ you end up dividing by zero and everything blows up. Therefore you need a special case for integrating $r^{-1}$ and we get $$ f(r) = K_0 \log r + K_1 $$ The rule $\int x^k\,dx = \frac{1}{k+1}x^{k+1} + K$ works only under the condition that $k+1\ne 0$. The logarithm that arises in the $k=-1$ case may seem to be a strange discontinuity, but actually it's a nice pointwise limit of the other indefinite integrals, if only we select appropriate constants of integration: $$ \log x = \lim_{k\to -1} \left[\frac{x^{k+1}}{k+1}-\frac{1}{k+1}\right] $$ It only looks like an isolated case because for all other exponents we can choose the antiderivative such that its value at either $0$ or $+\infty$ is $0$ -- but that option isn't available for the logarithm.
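A numerical cross-check (Python sketch using central differences; the radius $r=2$ and step $h$ are arbitrary choices): both $r^{2-n}$ for $n\neq2$ and $\log r$ for $n=2$ annihilate the radial operator $f_{rr}+\frac{n-1}{r}f_r$:

```python
import math

def radial_laplacian(f, r, n, h=1e-5):
    """f_rr + (n-1)/r * f_r via central differences."""
    fr  = (f(r + h) - f(r - h)) / (2 * h)
    frr = (f(r + h) - 2 * f(r) + f(r - h)) / h**2
    return frr + (n - 1) / r * fr

# r^{2-n} is radially harmonic for n != 2, and log r for n = 2
for n in (3, 4, 5):
    assert abs(radial_laplacian(lambda r: r**(2 - n), 2.0, n)) < 1e-4
assert abs(radial_laplacian(math.log, 2.0, 2)) < 1e-4
```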
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to explain this formally? This is an exercise from my homework: Let $K\subset A\subset \mathbb{R}^N$, where $N\in\mathbb{N}^*$, $K$ is compact and $A$ is open. Show that there is a compact $K_1$ such that $K\subset {K_1}^o \subset K_1 \subset A$ (where ${K_1}^o$ is the set of interior points of $K_1$). My strategy is to show that there is a closed set $F$ contained in $A$ and containing $K$, but how to do this? In $\mathbb{R}$ this is simple, but how to do it in $\mathbb{R}^N$? The idea is that because $A$ is open and $K$ is compact we have (interior) points of $A$ outside of $K$, but how to use this to find $F$?
HINT: For each $x\in K$ there is an $\epsilon_x>0$ such that $B(x,\epsilon_x)\subseteq\operatorname{cl}B(x,\epsilon_x)\subseteq A$. Each of the sets $\operatorname{cl}B(x,\epsilon_x)$ is closed and bounded in $\Bbb R^N$, so it’s compact. Let $\mathscr{U}=\{B(x,\epsilon_x):x\in K\}$; $\mathscr{U}$ is an open cover of $K$. Now use the fact that $K$ is compact. Here $B(x,\epsilon)$ is the open ball of radius $\epsilon$ centred at $x$ in the usual Euclidean metric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find all optimal solutions by Simplex Let "stable operation" be an operation on a simplex tableau such that the entering variable has a reduced cost of 0. Recall that a pivoting operation will not change the objective value if either the reduced cost (i.e. in the $\bar c$ row shown below) is 0, or if the leaving variable has value 0 already. Example of stable operation (which preserves objective value): $$ \begin{array}{c|cccccc|c} \text{Basis} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{Sol.} \\ \hline \bar c & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\ x_1 & 1 & 0 & 0 & 1 & 1 & 1 & 4 \\ x_2 & 0 & 1 & 0 & -1 & 1 & 1 & 2\\ x_3 & 0 & 0 & 1 & -1 & 1 & 1 & 3\\ \end{array} $$ $$\LARGE\pmb\downarrow$$ $$ \begin{array}{c|cccccc|c} \text{Basis} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{Sol.} \\ \hline \bar c & 0 & 0 & 0 & 0 & -1 & -1 & 1 \\ x_4 & 1 & 0 & 0 & 1 & 1 & 1 & 4 \\ x_2 & 1 & 1 & 0 & 0 & 2 & 2 & 6 \\ x_3 & 1 & 0 & 1 & 0 & 2 & 2 & 7 \\ \end{array} $$ Example of an operation which is not stable but also preserves objective value: $$ \begin{array}{c|cccccc|c} \text{Basis} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{Sol.} \\ \hline \bar c & 0 & 0 & 0 & -1 & -1 & -1 & 1 \\ x_1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ x_2 & 0 & 1 & 0 & -1 & 1 & 1 & 2\\ x_3 & 0 & 0 & 1 & -1 & 1 & 1 & 3\\ \end{array} $$ $$\LARGE\pmb\downarrow$$ $$ \begin{array}{c|cccccc|c} \text{Basis} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{Sol.} \\ \hline \bar c & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ x_4 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ x_2 & 1 & 1 & 0 & 0 & 2 & 2 & 2 \\ x_3 & 1 & 0 & 1 & 0 & 2 & 2 & 3 \\ \end{array} $$ However, in the second case, the solution does not change. Is there always a path of stable operations from a tableau for an optimal basic feasible solution (BFS) to a tableau for all other optimal BFS? 
Which of the following are true: * *For all optimal BFS $X_1$, for all tableaus $T_1$ for $X_1$, for all optimal BFS $X_2$, for all tableaus $T_2$ for $X_2$, there is a path of stable operations from $T_1$ to $T_2$. (I think the answer for this is false.) *For all optimal BFS $X_1$, for all tableaus $T_1$ for $X_1$, for all optimal BFS $X_2$, there is a tableau $T_2$ for $X_2$ such that there is a path of stable operations from $T_1$ to $T_2$. *For all optimal BFS $X_1$, there is a tableau $T_1$ for $X_1$ such that for all optimal BFS $X_2$, there is a tableau $T_2$ for $X_2$ such that there is a path of stable operations from $T_1$ to $T_2$. The motivation for this problem is so that we decide when we have exhausted all optimal solutions when searching for them; can we limit our procedure to search only with stable operations on the tableau?
Note that the "unstable operation" above won't give you a different solution. You've just replaced the basic variable $x_1 = 0$ by a nonbasic variable $x_4 = 0$, but the solution shown in the above optimal simplex tableau is still $(x_1,x_2,x_3,x_4,x_5,x_6)^T = (0,2,3,0,0,0)^T$. Therefore, only "stable operations" with a non-zero leaving variable $x_r \ne 0$ replaced by $x_j$ will give you a different basic optimal feasible solution. It's easy to verify that any convex combination of basic optimal feasible solutions is still an optimal feasible solution (since the feasible region of a linear program is convex), so the set of optimal feasible solutions is convex (i.e. path connected). Hence, the answer to your question is yes. (Added in response to the edit of the question) You may just think of the graph so as to visualize your problem. A basic solution corresponds to an extreme point of the feasible region. (If $\mathbf x$ is an extreme point, then there do not exist two different points $\mathbf u$ and $\mathbf v$ such that $\mathbf x$ is a convex combination of $\mathbf u$ and $\mathbf v$.) Form the graph of the set of all optimal feasible solutions of the linear program, and note its convexity (so it's a convex polygon), then eliminate its relative interior points. Take any two vertices of the remaining boundary points to be $X_1$ and $X_2$ in (1). Clearly, we can see a path running from $X_1$ to $X_2$ through adjacent vertices. (Assume that you have $m$ constraints and $n$ decision variables, and $m < n$. Choose $m$ columns of the $m$-by-$(m+n)$ matrix $A$ to form a basis matrix $B$. This is similar to choosing hyperplanes (representing the constraints) in $\Bbb R^{m+n}$ and finding their intersection.) This corresponds to a (finite) series of steps of replacing $x_r$ by $x_j$, which is feasible. Since that's feasible, there exists a pivot operation in one of the simplex tableaux $T_1$ for $X_1$.
Let $y_{00} = \bar c$ be the objective function value, $y_{0j}$ be the $j$-th column of the objective function row (i.e. the row of $\bar c$), $y_{r0} = x_r$ be the $r$-th component of the current solution, and $y_{rj}$ be the $r,j$-th entry in the coefficient matrix. \begin{matrix} y_{0j}& y_{00}\\ y_{rj}& y_{r0} \end{matrix} In order not to change the value of $y_{00}$ after a pivot operation $y_{00}-\dfrac{y_{r0} y_{0j}}{y_{rj}}$, i.e. $$y_{00}-\dfrac{y_{r0} y_{0j}}{y_{rj}} = y_{00},$$ we have $y_{r0} y_{0j} = 0$. Since we want to move to another point, we must have $y_{r0} \ne 0$, which implies $y_{0j} = 0$, so the operation must be stable. Hence (1) is proved.
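To make the pivot arithmetic concrete, here is a small NumPy sketch of my own, using the first tableau from the question: pivoting in a column with zero reduced cost leaves the whole reduced-cost row, and hence the objective value, unchanged:

```python
import numpy as np

def pivot(T, r, j):
    """Standard simplex pivot on row r, column j (row 0 = reduced costs)."""
    T = T.astype(float).copy()
    T[r] = T[r] / T[r, j]
    for i in range(T.shape[0]):
        if i != r:
            T[i] = T[i] - T[i, j] * T[r]
    return T

# First tableau from the question: columns x1..x6 | solution.
T = np.array([
    [0, 0, 0,  0, -1, -1, 1],    # reduced-cost row, objective value 1
    [1, 0, 0,  1,  1,  1, 4],    # basic x1
    [0, 1, 0, -1,  1,  1, 2],    # basic x2
    [0, 0, 1, -1,  1,  1, 3],    # basic x3
], dtype=float)

# "Stable" pivot: x4 enters (its reduced cost is 0), x1's row leaves.
T2 = pivot(T, 1, 3)
assert np.allclose(T2[0], T[0])   # reduced-cost row, and so the objective, unchanged
assert T2[0, 6] == 1.0
```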
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Examples of bounded continuous functions which are not differentiable. Most often, the examples given of bounded continuous functions which are not differentiable anywhere are fractals. If we include probabilistic fractals, exact self-similarity is not required. Are there examples of functions which are bounded, continuous, not differentiable anywhere, and cannot be modeled as fractals?
It might be worth pointing out that the typical function, in the sense of Baire category, has this property. More specifically, let $X=C([0,1])$ denote the set of all continuous, real-valued functions defined on the unit interval, endowed with the sup norm. Let $S\subset X$ denote the set of all functions that are nowhere differentiable and let $T\subset X$ denote the set of all functions whose graph has Hausdorff dimension $1$. Then $S$ and $T$ are both residual sets - i.e., each is the complement of a countable collection of nowhere dense sets. As a result, their intersection is second category as well and, in particular, non-empty. The fact that $S$ is second category is a classic theorem of Stefan Banach - in fact, it's the seminal result of this type. A proof may be found in chapter 11 of the important book Measure and Category by John Oxtoby. The fact that $T$ is second category is proven by Humke and Petruska in Volume 14 of The Real Analysis Exchange, though the title is "The packing dimension of a typical continuous function is 2" and it's proven in some other references as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Mistake in my proof: what is the normalisation factor of the surface integral of a sphere? I was trying to prove $$ {1\over \varepsilon} \int_{\partial B(a,\varepsilon)} f dS = {1\over r} \int_{\partial B(a,r)} f dS$$ where $0<\varepsilon < r$, $f$ is harmonic on $\mathbb R^2$, and $n$ is the normal vector to the sphere. Here is what I did: (0) I use the equation $$ \int_{D}(f\Delta g - g \Delta f)dV = \int_{\partial D}\left( f {\partial g \over \partial n} - g {\partial f \over \partial n}\right) dS$$ and let $g(x) = \log \|x-a\|$ (this is a radial harmonic function). (1) I calculated that ${\partial g \over \partial n} = 1$ (this is probably correct, I suspect my mistake to be later in the calculation). (2) I note that $\int_{B(a,r) \setminus B(a,\varepsilon)} (f \Delta g - g \Delta f)dV = 0$ because both $f$ and $g$ are harmonic. (3) I showed that $$ \int_{\partial B(a,r)} g {\partial f \over \partial n} dS = 0$$ (4) I put (1)-(3) together so that $$ 0 = \int_{B(a,r) \setminus B(a,\varepsilon)} (f \Delta g - g \Delta f)dV = \int_{\partial B(a,r) \sqcup \partial B(a,\varepsilon)} f dS$$ But this results in $$ \int_{\partial B(a,\varepsilon)} f dS = \int_{\partial B(a,r)} f dS$$ and there is one problem with that: I am missing the factors of ${1\over \varepsilon}$ and ${1\over r}$. Hence my question is: How can I calculate the normalisation of the surface integral of the sphere?
(4) I put (1)-(3) together so that $$ 0 = \int_{B(a,r) \setminus B(a,\varepsilon)} (f \Delta g - g \Delta f)\, dV = \int_{\partial B(a,r) \sqcup \partial B(a,\varepsilon)} f\, dS $$ It appears you've lost the normal derivative of $g$. That is, the right-hand integral should be $$ 0 = \int_{B(a,r) \setminus B(a,\varepsilon)} (f \Delta g - g \Delta f)\, dV = \int_{\partial B(a,r) \sqcup \partial B(a,\varepsilon)} f \frac{\partial g}{\partial n}\, dS, $$ and of course $\dfrac{\partial g}{\partial n}(x) = \dfrac{1}{\|x - a\|}$.
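A numerical check of the identity being proved (a Python sketch; $f(x,y)=x^2-y^2$ is my own choice of harmonic function): since $\frac{1}{r}\int_{\partial B(a,r)} f\,dS = 2\pi\cdot(\text{circle average})$, radius-independence of the average over circles about $a$ is exactly the identity in question:

```python
import math

def circle_avg(f, a, r, m=20000):
    """Average of f over the circle of radius r centred at a (midpoint rule)."""
    s = 0.0
    for i in range(m):
        t = 2 * math.pi * (i + 0.5) / m
        s += f(a[0] + r * math.cos(t), a[1] + r * math.sin(t))
    return s / m

f = lambda x, y: x * x - y * y      # harmonic on R^2
a = (1.0, 2.0)
avgs = [circle_avg(f, a, r) for r in (0.5, 1.0, 2.0)]
assert all(abs(v - f(*a)) < 1e-6 for v in avgs)   # same value for every radius
```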
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of handshakers in a conference I was given the following problem. In a conference where n representatives attend, if 1 of any 4 of the attendants shake hands with the other 3, prove that 1 of any 4 of the attendants shake hand with the rest of the n − 1 attendants. I'm familiar with the handshaking theorem and most of its applications but I'm not sure how to show in every random group of four, one must shake hands with everybody else. Thanks.
Suppose this is not true. Pick a person $v$. He doesn't know someone (call him $w$). If $w$ does not know anyone we are done, since if we pick $w$ and any other three persons, no one will know $w$. Now pick a friend $x$ of $w$ such that $x$ knows $v$. (If no person exists who knows both $w$ and $v$, we are also done, since if we pick $v$, $w$ and any other two people, none of the four will know the other three.) So let $x$ know $v$ and $w$. Then there is a $y$ such that $x$ does not know $y$; note that $y$ is distinct from $v$ and $w$, since $x$ knows both. The set of vertices $v,w,x,y$ contradicts the hypothesis that given any four vertices one of them knows the other three, since $v$ and $w$ don't know each other and $x$ and $y$ don't know each other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The roots of a certain recursively-defined family of polynomials are all real. Let $P_0=1$ and $P_1=x+1$, and define $$P_{n+2}=P_{n+1}+xP_n,\qquad n=0,1,2,\dots$$ Show that for all $n\in \mathbb{N}$, $P_n(x)$ has no non-real root (i.e. all of its roots are real).
Interlacing is a good hint, but let us show a brute-force solution. By setting: $$ f(t) = \sum_{n\geq 0}P_n(x)\frac{t^n}{n!} \tag{1}$$ we have that the recursion translates into the ODE: $$ f''(t) = f'(t) + x\, f(t) \tag{2}$$ whose solutions are given by: $$ f(t) = A \exp\left(t\frac{1+\sqrt{1+4x}}{2}\right) + B\exp\left(t\frac{1-\sqrt{1+4x}}{2}\right)\tag{3}.$$ This gives: $$ P_n(x) = A\left(\frac{1+\sqrt{1+4x}}{2}\right)^n + B\left(\frac{1-\sqrt{1+4x}}{2}\right)^n\tag{4}$$ and by plugging in the initial conditions we have: $$ A=\frac{1+2x+\sqrt{1+4x}}{2\sqrt{1+4x}},\quad B=\frac{-1-2x+\sqrt{1+4x}}{2\sqrt{1+4x}}\tag{5}$$ so $P_n(x)=0$ is equivalent to: $$\left(\frac{1+\sqrt{1+4x}}{1-\sqrt{1+4x}}\right)^n = \frac{1+2x-\sqrt{1+4x}}{1+2x+\sqrt{1+4x}},\tag{6}$$ or, by setting $x=\frac{y^2-1}{4}$, to: $$\left(\frac{y+1}{1-y}\right)^n = \frac{y^2-y+1}{y^2+y+1} \tag{7} $$ or, by setting $y=\frac{z-1}{z+1}$, to: $$ z^n = \frac{3+z^2}{1+3z^2}\tag{8} $$ or to: $$ 3z^{n+2}+z^n-z^2-3 = 0.\tag{9} $$
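One can also check the claim numerically for small $n$ (a NumPy sketch, not a proof): build $P_n$ from the recursion and verify that all roots are real up to floating-point tolerance:

```python
import numpy as np

# Coefficient lists in numpy's convention (highest degree first);
# the recursion is P_{n+2} = P_{n+1} + x * P_n.
P = [np.array([1.0]), np.array([1.0, 1.0])]     # P_0 = 1, P_1 = x + 1
for _ in range(10):
    P.append(np.polyadd(P[-1], np.polymul([1.0, 0.0], P[-2])))

for p in P[2:]:
    roots = np.roots(p)
    assert np.all(np.abs(roots.imag) < 1e-6), p   # all roots (numerically) real
```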
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Give a direct proof of the fact that $a^2-5a+6$ is even for any $a \in \mathbb Z.$ I know this is true because any even number that is squared will be even; is it also true that any even number multiplied by $5$ will be even? Is this direct proof enough?
Here is a 'logical' way to prove this, without using modular arithmetic, and without case distinctions. $ \newcommand{\calc}{\begin{align} \quad &} \newcommand{\calcop}[2]{\\ #1 \quad & \quad \unicode{x201c}\text{#2}\unicode{x201d} \\ \quad & } \newcommand{\endcalc}{\end{align}} \newcommand{\ref}[1]{\text{(#1)}} \newcommand{\true}{\text{true}} \newcommand{\false}{\text{false}} \newcommand{\even}[1]{#1\text{ is even}} $I'm assuming that we can use the following two rules: \begin{align} \tag 0 \even{n + m} \;\equiv\; \even n \equiv \even m \\ \tag 1 \even{n \times m} \;\equiv\; \even n \lor \even m \\ \end{align} Now we simply calculate: $$\calc \even{a^2 - 5 \times a + 6} \calcop={$\ref 0$, two times} \even{a^2} \;\equiv\; \even{5 \times a} \;\equiv\; \even 6 \calcop={$\ref 1$, two times} \even a \lor \even a \;\equiv\; \even 5 \lor \even a \;\equiv\; \even 6 \calcop={5 is odd, 6 is even; logic: simplify} \even a \;\equiv\; \even a \calcop={logic: simplify} \true \endcalc$$
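A quick exhaustive check (Python); note also that $a^2-5a+6=(a-2)(a-3)$ is a product of two consecutive integers, which gives another direct proof, since one of the two factors must be even:

```python
# (a - 2)(a - 3) is a product of consecutive integers, so one factor is even
assert all((a * a - 5 * a + 6) % 2 == 0 for a in range(-1000, 1001))
assert all((a * a - 5 * a + 6) == (a - 2) * (a - 3) for a in range(-1000, 1001))
```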
{ "language": "en", "url": "https://math.stackexchange.com/questions/1098937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 6 }
How to find a closed form for the numbers which are relatively prime to $10$ — interesting question. Let $a_n$ be the positive integers (in order) which are relatively prime to $10$. Find a closed form for $a_n$. I know $$a_{1}=1,a_{2}=3,a_{3}=7,a_{4}=9,a_{5}=11,a_{6}=13,a_{7}=17,a_{8}=19,a_{9}=21,\cdots$$ It is said that $$a_{n}=2\left\lfloor \dfrac{5}{3}\left(n-1-\left\lfloor\dfrac{n-1}{4}\right\rfloor\right)\right\rfloor-2\left\lfloor\dfrac{1}{2}\left(n-1-4\left\lfloor\dfrac{n-1}{4}\right\rfloor\right)\right\rfloor+1$$ But I can't find a proof of this. Question 2: Let $a_n$ be the positive integers relatively prime to $m$. Find a closed form for $a_{n}$. Is this problem a research problem? I think it could be, since this is in the OEIS, and although Euler's totient function is similar, these sequences are different.
That solution to question 1 looks quite complex. One could also just say $$ a_n = 2n + 2\left\lfloor\frac{n+1}4\right\rfloor - 1 $$ which uses that the first differences 2,4,2,2,2,4,2,2,2,4,2,2,2,4,... have a particularly simple structure in this case. As a more immediately generalizable solution one could say $$ a_n = 10\left\lfloor \frac {n-1}4 \right\rfloor + \left\lfloor 10^n \frac{1379}{9999} \right\rfloor \bmod 10 $$ since $\frac{1379}{9999}=0.137913791379...$. This same principle can be used to construct closed formulas for any other integer sequence whose first differences repeat.
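Both closed forms are easy to verify mechanically (a Python sketch; pure integer arithmetic keeps the $1379/9999$ formula exact):

```python
from math import gcd

coprime = [m for m in range(1, 400) if gcd(m, 10) == 1]   # 1, 3, 7, 9, 11, ...

def a_simple(n):
    return 2 * n + 2 * ((n + 1) // 4) - 1

def a_decimal(n):
    # 1379/9999 = 0.137913791379...; integer arithmetic avoids float error
    return 10 * ((n - 1) // 4) + (10**n * 1379 // 9999) % 10

for n in range(1, 100):
    assert a_simple(n) == coprime[n - 1] == a_decimal(n)
```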
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the formula for the sum $(k+1)^2+(k+2)^2+\cdots+(k+(n-1))^2$. I know the formula for the sum of squares of numbers starting from $1$, but I don't know what the formula becomes when the first term is not $1$, as in $$(k+1)^2+(k+2)^2+\cdots+(k+(n-1))^2$$
Hint $$\sum_{i = k}^n i^2 = \sum_{i=1}^n i^2 - \sum_{i=1}^{k-1}i^2$$ $\text{Now substitute using the fact that }$ $$\sum_{i=1}^n i^2 = \frac{n(n + 1)(2n + 1)}{6}$$
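The hint translates directly into code (a small Python sketch of my own):

```python
def sq_sum(lo, hi):
    """Sum of i^2 for i = lo..hi inclusive, via n(n+1)(2n+1)/6."""
    s = lambda n: n * (n + 1) * (2 * n + 1) // 6
    return s(hi) - s(lo - 1)

# matches the direct sum (k+1)^2 + ... + (k+n-1)^2 from the question
for k in range(0, 10):
    for n in range(2, 10):
        assert sq_sum(k + 1, k + n - 1) == sum(i * i for i in range(k + 1, k + n))
```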
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Application of the mean value theorem to find $\lim_{n\to\infty} n(1 - \cos(1/n))$ While reading Heuser (2009) "Lehrbuch der Analysis Teil I" on page 286, I got this question: Find $$\lim\limits_{n \rightarrow \infty} n\Big(1 - \cos\Big(\frac{1}{n}\Big)\Big)$$ with the help of the Mean Value Theorem. How do you apply the Mean Value Theorem to this problem?
Obviously, $$\left|1-\cos \left(\frac{1}{n} \right) \right| = \left|\cos(0)-\cos \left(\frac{1}{n} \right) \right|.$$ By the mean value theorem there exist $a_n \in (0,1/n)$ such that $$\left|1-\cos \left(\frac{1}{n} \right) \right| = |\cos'(a_n)| \left( \frac{1}{n} - 0 \right) = \sin(a_n) \frac{1}{n}.$$ Since $\sin(0)=0$ and $x \mapsto \sin(x)$ is continuous, we have $$a_n \to 0 \quad \text{as} \, n \to \infty \implies \sin(a_n) \to 0 \quad \text{as} \, n \to \infty.$$ Combining both facts, we get $$\begin{align*} 0 &\leq \liminf_{n \to \infty} \left| n (1-\cos(1/n)) \right| \\ &\leq \limsup_{n \to \infty} \left| n (1-\cos(1/n)) \right| \\ &\leq \limsup_{n \to \infty} \sin(a_n) = 0. \end{align*}$$ (In the last step, we have used that $\limsup = \lim$ whenever the limit exists.) Consequently, $$\lim_{n \to \infty} n (1-\cos(1/n))=0.$$
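A numerical footnote (Python sketch): for huge $n$ the direct $1-\cos(1/n)$ loses digits to cancellation, so the check below uses the identity $1-\cos\theta = 2\sin^2(\theta/2)$; either way, the sequence behaves like $\frac{1}{2n}\to 0$:

```python
import math

# n(1 - cos(1/n)) ~ 1/(2n) -> 0; 2 sin^2(1/(2n)) is the cancellation-free form
for n in (10, 1000, 10**6):
    stable = n * 2 * math.sin(1 / (2 * n))**2
    assert abs(stable - 1 / (2 * n)) < 1 / n**2
```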
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 1 }
Mean value theorem with trigonometric functions Let $f(x) = 2\arctan(x) + \arcsin\left(\frac{2x}{1+x^2}\right)$ * *Show that $f(x)$ is defined for every $ x\ge 1$ *Calculate $f'(x)$ within this range *Conclude that $f(x) = \pi$ for every $ x\ge 1$ Can I get some hints how to start? I don't know how to start proving that $f(x)$ is defined for every $ x\ge 1$ and I even had problems calculating $f'(x)$ Thanks everyone!
Hint: Recall that the $\arctan$ function is defined on $\Bbb R$ while the $\arcsin$ function is defined on $[-1,1]$. Compute the derivative $f'(x)$ and prove that it is equal to $0$ for $x > 1$. Conclude that $f$ is a constant there, which we can determine by taking the limit of $f$ at $+\infty$.
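A quick numerical confirmation of the conclusion (a Python sketch, not a substitute for the proof):

```python
import math

f = lambda x: 2 * math.atan(x) + math.asin(2 * x / (1 + x * x))

for x in (1.0, 2.0, 10.0, 1e6):
    assert abs(f(x) - math.pi) < 1e-9   # f is identically pi for x >= 1
```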
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Mathematica: correctly plotting a rational function. What is the range of $$-\frac{1-4\epsilon+3\epsilon^2}{1-\epsilon^2}$$ assuming $\epsilon\in(0,1)$? It looks like $$\lim_{\epsilon\rightarrow1}-\frac{1-4\epsilon+3\epsilon^2}{1-\epsilon^2}=\lim_{\epsilon\rightarrow1}-\frac{-4+6\epsilon}{-2\epsilon}=1$$ However, plotting $-\frac{1-4\epsilon+3\epsilon^2}{1-\epsilon^2}$ in Mathematica, it explodes to infinity as $\epsilon\rightarrow1$. Is $2$ an upper bound for $$1-\frac{1-4\epsilon+3\epsilon^2}{1-\epsilon^2}$$ with $\epsilon\in(0,1)$?
There is no problem in Mathematica! If you define $$T=1-\frac{1-4\epsilon+3\epsilon^2}{1-\epsilon^2},$$ And use the following command: T//Simplify, the output generated is $$\frac{4\epsilon}{1+\epsilon}$$ Mathematica will, in general, not automatically simplify your expression.
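If you prefer a plain numerical spot-check of that simplification (a sketch; any language works, Python shown here):

```python
# check 1 - (1 - 4e + 3e^2)/(1 - e^2) == 4e/(1 + e) at a few points of (0, 1)
for e in (0.1, 0.5, 0.9, 0.999):
    lhs = 1 - (1 - 4*e + 3*e**2) / (1 - e**2)
    rhs = 4*e / (1 + e)
    assert abs(lhs - rhs) < 1e-9, (e, lhs, rhs)
print("identity holds at the sampled points")
```

In particular $\frac{4\epsilon}{1+\epsilon}$ is increasing on $(0,1)$ with supremum $2$ at $\epsilon\to 1$, so $2$ is indeed an upper bound (not attained).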
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simplifying $\int_0^t\frac{1}{1+Au^b}du$ I'm trying to simplify $$ \int_0^t\frac{1}{1+Au^b}du,\quad A>0,b>0,t\in[0,1]. $$ It looked simple at first but after trying a bit, I actually don't know how to tackle this. I entered the integral into Mathematica but it gave me back the same expression without any further simplification. Can someone help please?
A simple closed form in terms of elementary functions probably does not exist, but the integral can be expressed in terms of the Incomplete Beta Function, $\mathbf{B}\left(x;\alpha,\beta\right)$. $$\int_{0}^{t}\frac{1}{1+Au^b}\operatorname{d}u = \frac{1}{bA^{\frac{1}{b}}}\mathbf{B}\left(\frac{At^b}{1+At^b};\frac{1}{b},1-\frac{1}{b}\right)$$ First make the substitution, $x = Au^b$. This brings the integral to the form $$\frac{1}{bA^{\frac{1}{b}}}\int_{0}^{At^b}\frac{x^{\frac{1}{b}-1}}{1+x}\operatorname{d}x$$ which you can verify. Now make the substitution $x = \frac{s}{1-s}$, this brings the integral to the form $$\frac{1}{bA^{\frac{1}{b}}}\int_{0}^{\frac{At^b}{1+At^b}}s^{\frac{1}{b}-1}(1-s)^{\left(1-\frac{1}{b}\right)-1}\operatorname{d}s = \frac{1}{bA^{\frac{1}{b}}}\mathbf{B}\left(\frac{At^b}{1+At^b};\frac{1}{b},1-\frac{1}{b}\right)$$ I'm not sure if this answer is useful to you, as this doesn't really simplify the integral. Perhaps it will serve as a good jumping off point.
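The first substitution can be sanity-checked numerically. The sketch below uses hypothetical sample parameters ($A=2$, $b=\tfrac12$, $t=0.8$; $b<1$ is chosen so the exponent $\frac1b-1$ is positive and the transformed integrand is smooth) and a simple midpoint rule:

```python
def midpoint(f, lo, hi, n=20000):
    # crude midpoint-rule quadrature, good enough for a spot check
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

A, b, t = 2.0, 0.5, 0.8
lhs = midpoint(lambda u: 1.0 / (1.0 + A * u**b), 0.0, t)
# after x = A u^b:  (1/(b A^(1/b))) * integral_0^{A t^b} of x^(1/b - 1)/(1 + x) dx
rhs = (1.0 / (b * A ** (1.0 / b))) * midpoint(
    lambda x: x ** (1.0 / b - 1.0) / (1.0 + x), 0.0, A * t**b)
print(lhs, rhs)
```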
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $1! + 2! + 3! + \ldots + n! =y^3$ has only one solution in the set of natural numbers? I actually know that the above equation is true for $n=1$ and $y=1$ but am unable to prove it for the entire set of natural numbers. Can anyone please help me solve this in a simple way?
Hint. Let $$a_n = \sum_{k=1}^{n} k!.$$ Then $a_n$ is divisible by 3 for all $n \geq 2$, and we have $a_n \equiv 0 \pmod{27}$ only when $n=7$.
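A quick computational check of both parts of the hint (and of why $27$ matters: for $n\ge 2$ we have $3\mid a_n$, so $a_n=y^3$ would force $3\mid y$ and hence $27\mid a_n$):

```python
def a(n):
    # a_n = 1! + 2! + ... + n!
    s, f = 0, 1
    for k in range(1, n + 1):
        f *= k
        s += f
    return s

assert all(a(n) % 3 == 0 for n in range(2, 60))
assert [n for n in range(1, 60) if a(n) % 27 == 0] == [7]
# the one candidate a_7 = 5913 is not a cube, so n = 1, y = 1 remains the only solution
assert a(7) == 5913 and round(a(7) ** (1 / 3)) ** 3 != a(7)
print("hint verified for n < 60")
```

(For $n\ge 9$ every new factorial is divisible by $27$, so the residue mod $27$ stabilizes; the loop above just makes that visible on a range.)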
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Need help in metric spaces proving this statement! Please if someone could help me prove this rather annoying statement. Let $C(0,1)$ be the set of continuous functions on the open interval $(0,1) \subset \mathbb R$. For any two functions $x(t), y(t) \in C(0,1)$ define the set $E(x,y)=\{t \in (0,1) | x(t) \neq y(t)\}$. Show that $E(x,y)$ is a union of disjoint open intervals. I hope I've been clear enough. Thanks.
Addition to the answer by Henno Brandsma: Take the function $h:(0,1)\rightarrow \mathbb R$ with $h(t) = x(t)-y(t)$, which is continuous (because $x$ and $y$ are continuous). You have $$E(x,y) = h^{-1}(\mathbb R\setminus\{0\})$$ and thus $E(x,y)$ is open ($E(x,y)$ is the inverse image of the open set $\mathbb R \setminus \{0\}$ under the continuous function $h$). Since every open subset of $\mathbb R$ is a countable union of disjoint open intervals, the claim follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1099959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are birational morphisms stable under base change via a dominant morphism Let $f: X \to Y$ be a birational morphism of integral schemes and $g: Z \to Y$ a morphism of integral schemes which maps the generic point of $Z$ to the generic point of $Y$, i.e., the morphism $g$ is dominant. Is then $X \times_Y Z \to Z$ birational? Edit: My ideas: Denote the generic points of $X,Y,Z$ by $\eta_X, \eta_Y, \eta_Z$. Then $f$ induces an isomorphism $\eta_X = \eta_Y$. Denote the base change $X \times_Y Z \to Z$ by $f'$. Then $f'$ induces an isomorphism $g'^*(\eta_X) = \eta_Z$?
Since $f: X \to Y$ is birational, we can find some open subsets $U \subset Y$ and $V \subset X$ so that $f$ restricts to an isomorphism $f: V \to U$. Then $g: W = g^{-1}(U) \to U$ is still dominant (really dominance of $g$ here just guarantees that $W$ is nonempty for any open $U$ we may need to restrict to). Then the pullback $V \times_U W \to W$ is an isomorphism. $V \times_U W$ is an open subset of $X \times_Y Z$ and the morphism is just the restriction of the pullback $X \times_Y Z \to Z$. Thus the pullback is birational since it induces an isomorphism on open subsets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Why does unitary diagonalization work? I've been told that if you take a Hermitian matrix, find its eigenvectors, normalize them and write them as columns of a matrix $P$, then: $$P^{-1}AP = D$$ Where (magically) $D = \text{Diag}(\lambda_1,\ldots,\lambda_n)$ ($\lambda_i$ is an eigenvalue of $A$). So I really want to use this algorithm, but first I wish to understand why this magic works.
By Schur decomposition, every square matrix $A$ is unitarily similar to an upper triangular matrix $U$. In particular if $A$ is Hermitian, then $$U^* = (Q^* A Q)^* = Q^* A^* Q = Q^* A Q = U$$ which shows that $U$ is a diagonal matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
mean value theorem question I was trying to solve the following: Given: $0 < a < b$ and $n>1$, prove: $$na^{n-1}(b-a) < b^n-a^n < nb^{n-1}(b-a)$$ I managed to get this far using the mean value theorem: $$a^n(b-a)<b^n -a^n<b^n (b-a)$$ Any idea how to continue?
Let $f(x)= x^n-a^n$ using the Mean Value Theorem $$f'(c) = \frac{f(b)-f(a)}{b-a} = \frac{b^n - a^n}{b-a}$$ for some $c \in (a,b)$. But $f'(c)= n \cdot c^{n-1}$ so $$\frac{b^n - a^n}{b-a} = n \cdot c^{n-1}$$ Now note that $n \cdot a^{n-1} < n \cdot c^{n-1} < n \cdot b^{n-1}$ so $$ n \cdot a^{n-1} < \frac{b^n - a^n}{b-a} < n \cdot b^{n-1}$$
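A numerical spot-check of the resulting inequality (a sketch; the parameter triples below are arbitrary choices with $0<a<b$ and $n>1$):

```python
# check n a^(n-1) (b - a) < b^n - a^n < n b^(n-1) (b - a)
for a, b, n in [(0.5, 2.0, 3), (1.0, 1.5, 2), (2.0, 5.0, 4)]:
    middle = b**n - a**n
    lower = n * a ** (n - 1) * (b - a)
    upper = n * b ** (n - 1) * (b - a)
    assert lower < middle < upper, (a, b, n)
print("inequality holds at the sampled triples")
```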
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that there is a symmetric matrix B, such that BX=Y Let $X,Y$ be two vectors in ${\mathbb C}^n$ and assume that $X≠0$. Prove that there is a symmetric matrix $B$ such that $BX=Y$.
Ok, the last answer was not detailed enough, so here is another approach, which is constructive. Pick some orthogonal matrix $Q$, hence $QQ^T = I$, and build $B$ as $B = Q\Lambda Q^T$, where $\Lambda$ is diagonal with its entries $\Lambda_{jj}$ still left to determine. Then, $BX =Y \Leftrightarrow Q\Lambda Q^TX =Y \Leftrightarrow \Lambda Q^TX = Q^TY $, which component-wise gives you the solution for $\Lambda_{jj}$ as $\Lambda_{jj} = \frac{q^T_jY}{q^T_jX}$. ($q_j$ is the $j$th column of $Q$; since $X \neq 0$, we can choose $Q$ so that $q^T_jX \neq 0$ for every $j$.) This is a solution in case you do not mind that $B\in \mathbb{C}^{n \times n}$. If you want $B\in \mathbb{R}^{n \times n}$, then even if $B$ is not symmetric, it might be impossible. Notice that the case where $X = \bar{Y}$ provides a counter-example for that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Sum: $\sum_{n=1}^\infty\prod_{k=1}^n\frac{k}{k+a}=\frac{1}{a-1}$ For the past week, I've been mulling over this Math.SE question. The question was just to prove convergence of $$\sum\limits_{n=1}^\infty\frac{n!}{\left(1+\sqrt{2}\right)\left(2+\sqrt{2}\right)\cdots\left(n+\sqrt{2}\right)}$$ but amazingly Mathematica told me it had a remarkably simple closed form: just $1+\sqrt{2}$. After some fiddling, I conjectured for $a>1$: $$\sum\limits_{n=1}^\infty\prod\limits_{k=1}^n\frac{k}{k+a}=\sum\limits_{n=1}^\infty\frac{n!}{(1+a)(2+a)\cdots(n+a)}=\frac{1}{a-1}$$ I had been quite stuck until today when I saw David H's helpful answer to a similar problem. I have included a solution using the same idea, but I would be interested to know if anyone has another method.
The idea of this solution is to appeal to the Beta function and then to exchange the order of integration and summation (made possible by Fubini's theorem). $$\begin{align} \sum\limits_{n=1}^\infty\prod\limits_{k=1}^n\frac{k}{k+a}&=\sum\limits_{n=1}^\infty\frac{n!}{(1+a)(2+a)\cdots(n+a)} \\&=\sum\limits_{n=1}^\infty\frac{\Gamma(n+1)\Gamma(1+a)}{\Gamma(n+a+1)} \\&=\Gamma(1+a)\sum\limits_{n=1}^\infty\frac{\Gamma(n+1)}{\Gamma(n+a+1)} \\&=\frac{\Gamma(1+a)}{\Gamma(a)}\sum\limits_{n=1}^\infty\frac{\Gamma(n+1)\Gamma(a)}{\Gamma(n+a+1)} \\&=a\sum\limits_{n=1}^\infty \operatorname{B}(n+1,a) \\&=a\sum\limits_{n=1}^\infty\int\limits_0^1t^n(1-t)^{a-1}\,dt \\&=a\int\limits_0^1\sum\limits_{n=1}^\infty t^n(1-t)^{a-1}\,dt \\&=a\int\limits_0^1\frac{t(1-t)^{a-1}}{1-t}\,dt \\&=a\int\limits_0^1t(1-t)^{a-2}\,dt \\&=a\operatorname{B}(2,a-1) \\&=\frac{a}{a(a-1)} \\&=\frac{1}{a-1} \end{align}$$ Note that we used $a>1$ when evaluating $\operatorname{B}(2,a-1)$, since the beta function is only defined when both arguments are greater than $0$. A final note: the restriction $a>1$ is sharp in the sense that when $a=1$ the inside product simplifies to $$\prod\limits_{k=1}^n\frac{k}{k+1}=\frac{1}{n+1}$$ so the sum becomes $$\sum\limits_{n=1}^\infty\prod\limits_{k=1}^n\frac{k}{k+1}=\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\dots=\infty$$ and for $a<1$ in essence we can use the comparison test with the $a=1$ case to show that the sum diverges.
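Numerically, the partial sums do approach $\frac{1}{a-1}$; since the terms decay like $n^{-a}$, convergence is slow when $a$ is close to $1$, so the second check below uses a loose tolerance (a sketch):

```python
import math

def partial_sum(a, N):
    total, prod = 0.0, 1.0
    for k in range(1, N + 1):
        prod *= k / (k + a)
        total += prod
    return total

# a = 3: limit should be 1/(3 - 1) = 0.5 (terms decay like n^-3, fast)
assert abs(partial_sum(3, 5000) - 0.5) < 1e-5
# a = sqrt(2): limit should be 1 + sqrt(2) (terms decay like n^-1.41, slow)
assert abs(partial_sum(math.sqrt(2), 200000) - (1 + math.sqrt(2))) < 0.05
print("both checks pass")
```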
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Is there a law that you can add or multiply to both sides of an equation? It seems that given a statement $a = b$, that $a + c = b + c$ is assumed also to be true. Why isn't this an axiom of arithmetic, like the commutative law or associative law? Or is it a consequence of some other axiom of arithmetic? Thanks! Edit: I understand the intuitive meaning of equality. Answers that stated that $a = b$ means they are the same number or object make sense but what I'm asking is if there is an explicit law of replacement that allows us to make this intuitive truth a valid mathematical deduction. For example is there an axiom of Peano's Axioms or some other axiomatic system that allows for adding or multiplying both sides of an equation by the same number? In all the texts I've come across I've never seen an axiom that states if $a = b$ then $a + c = b + c$. I have however seen if $a < b$ then $a + c < b + c$. In my view $<$ and $=$ are similar so the absence of a definition for equality is strange.
This is an axiom of predicate logic. For example, check out this list of the axioms in predicate calculus, intended to be an ambient logic for ZFC set theory. Note axioms 13 and 14: $$\vdash x=y\to (x\in z\to y\in z)$$ $$\vdash x=y\to (z\in x\to z\in y)$$ In set theory, the only basic atomic formulas are of the form $x=y$ or $x\in y$, so this, together with transitivity of equality (axiom 8), which will allow you to prove $$\vdash x=y\to (x=z\to y=z)$$ $$\vdash x=y\to (z=x\to z=y),$$ is sufficient to prove by induction that for any predicate of the language $\varphi(x)$, it is a theorem that $$\vdash x=y\to(\varphi(x)\leftrightarrow \varphi(y)).$$ And once you define class terms via $x\in\{x\mid\varphi\}\leftrightarrow\varphi$, you can prove $$\vdash x=y\to A(x)=A(y)$$ for any class term $A(x)$ by converting it to the statement $$\vdash x=y\to (z\in A(x)\leftrightarrow z\in A(y))$$ using extensionality and applying the theorem above for predicates. This demonstrates how the rule $x=y\to f(x)=f(y)$ gets translated into a rigorous proof in a formal system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 9, "answer_id": 6 }
Unitary matrix problem I need to prove the following. Recall that the unitary group $U(n)$ consists of all $A \in M_n(C)$ with $A^*A = I$. Show that a matrix $A \in M_n(C)$ is in $U(n)$ if and only if $\langle Ax,Ay\rangle = \langle x,y\rangle$ for all $x, y \in C^n$. So I just need the $\Leftarrow$ implication; I made it but I am not sure if my justification is good enough. $\langle Ax,Ay\rangle = \langle x,A^*Ay\rangle = \langle x,y\rangle = x^*A^*Ay = x^*y$ so {problem here} $A^*A = I$, therefore $A \in M_n(C)$ is in $U(n)$. So do you think that this is good enough, or are there some problems in this argument?
Hints: 1.) For $T,S\in M_n(\Bbb C)$ we have $\ T=S\ \iff\ \forall x,y:\langle Tx,y\rangle=\langle Sx,y\rangle$. 2.) Use the adjointness property: $\langle z,Ay\rangle=\langle A^*z,y\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
'Distributive' property for a function mod m What properties must some function $f(n)$ have for it to be the case that: $f(n) = (n + 3) \mod m = (n \mod m) + (3 \mod m)$? Similarly, what if $f(n) = (n + 3) \mod m = (n \mod m + 3)?$ Is this something that is well studied? Where might I go to find more information? Suppose here that $n,m \in \mathbb{Z}^+\setminus\{0\}$, that the equation holds for all or some subset of $m,n$ and that 'mod' stands for the standard modular arithmetic operator.
Let's assume that $m$ is a positive integer and look at your first equation. Your equation $(n+3)\mod m=(n\mod m)+(3\mod m)$ is true for all $n$ if $m$ divides $3$: i.e. $m$ is $1$ or $3$. In those cases, both sides of the equation are the same as $n\mod m$: adding the $3$ does nothing. In all other cases, your equation is not true for all $n$. For $m=2$, use $n=1$. For $m>3$, use $m-3$. In both cases, the left hand side will be zero while the right hand side will be the sum of two non-zero positive integers. If you want me to answer the second question you will need to clarify your question, as I explain in my comments. To get more information, look at just about any book on number theory.
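A brute-force confirmation of the claim (the first equation holds for every $n$ exactly when $m$ divides $3$, i.e. $m\in\{1,3\}$):

```python
def holds_for_all_n(m, upto=500):
    # does (n + 3) % m == n % m + 3 % m hold for every tested n?
    return all((n + 3) % m == n % m + 3 % m for n in range(1, upto))

assert [m for m in range(1, 30) if holds_for_all_n(m)] == [1, 3]
print("identity holds for all n exactly when m is 1 or 3 (m < 30 checked)")
```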
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all invariant subspaces of $A$ Consider a matrix mapping $A: V \to V$ for a vector space $V$. Matrix $A$ has 3 distinct eigenvalues $\lambda_1,\lambda_2,\lambda_3$, and $v_1,v_2,v_3$ are the corresponding eigenvectors. Find all invariant subspaces of $A$.
Hint: note that each $\text{span}(v_i)$ is an invariant subspace. Also note that $\{0\}$ and $V$ are invariant subspaces. Furthermore note that if $A$ is an invariant subspace and $B$ is an invariant subspace then so is $A + B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
(Dis)prove that $\sup(A \cap B) = \min\{\sup A, \sup B\}$ Just beginning real analysis so I'm having some trouble with disproving this statement: $$\sup(A \cap B) = \min\{\sup A, \sup B\}$$ Initially it asks whether it's true or false and to provide a counterexample if false, which by basic intuition to me it is. However I'm not sure where to start, as my book is rather poor in its explanations of concepts and I'm just starting out. Thanks for any help
Let $A = \{1\}$, $B = \{2\}$. Note that $\sup (A \cap B) = \sup\emptyset = -\infty$. If you want to avoid infinities, try $A = \{1, 2\}$, $B = \{1, 3\}$: then $\sup(A\cap B)=1$ while $\min\{\sup A,\sup B\}=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When are graphs deceiving? What are some examples of functions or quantities relating to functions (e.g., limits) $f:A \to B$ where $A$, $B \subseteq \mathbb{R}$ that require by-hand, "analytical" methods for analysis which are seemingly contradicted by a graph generated by software? For instance, I recall that a Pre-calculus text stated that $\lim\limits_{x \to 0}\dfrac{1-\cos x^{6}}{x^{12}} = \dfrac{1}{2}$ (which, if I recall correctly, is proven using Taylor series) but the graph itself seems to suggest that it perhaps doesn't exist, due to the oscillations occurring around 0. [Graphs were generated via WolframAlpha.]
Almost any example of catastrophic cancellation plus enough zoom will do the trick. Two cases: A simplification of your example (simpler function, bigger zoom): $$\frac{(1-\cos(x^2))}{x^4},\qquad x\in[-0.001,0.001]$$ A rational function with a removable discontinuity: $$\frac{x^{50}-1}{x-1},\qquad x\in[0.999999999,1.000000001]$$
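To see the first effect concretely in double precision (a Python sketch): near $x=10^{-4}$ the computed $\cos(x^2)$ rounds to exactly $1$, so the naive quotient collapses, while the true value is close to $\tfrac12$.

```python
import math

x = 1e-4
naive = (1 - math.cos(x * x)) / x**4   # cos(1e-8) rounds to 1.0 on typical libms
print(naive)                           # 0.0, far from the true value

# rewriting via 1 - cos(t) = 2 sin(t/2)^2 avoids the cancellation
stable = 2 * math.sin(x * x / 2) ** 2 / x**4
print(stable)                          # ~0.5
```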
{ "language": "en", "url": "https://math.stackexchange.com/questions/1100960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 1 }
Guidelines for choosing integrand to get a sum. The idea was to find: $$\sum_{n=1}^{\infty} \frac{\coth(n\pi)}{n^3}$$ As you see in the solution, they conveniently chose an $f(z)$: $$f(z) = \frac{\pi \cot(\pi z)\coth(\pi z)}{z^3}$$ That eventually led to their goal. What are the guidelines for choosing such an $f(z)$?
First of all, I'm not an expert in this kind of problem, but I think you should always have three things in mind: 1.) Some of the poles of your generating function should render the sum that you are looking for. 2.) The residues of all the other poles should: 2a) be as easily obtainable as possible, and 2b) be finite in number, or generate a sum which is easier to calculate than the original one. 3.) You have to make sure that you can close your contour of integration in an appropriate manner. I hope this helps...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to derive the equation of a parabola from the directrix and focus Could someone please offer me proof and explanation of the following? - I am just having trouble with finding the '$a$' part of the equation. "The leading coefficient '$a$' in the equation $$y−y_1 =​​ a(x−x_1)^{2}$$​ indicates how "wide" and in what direction the parabola opens. It's always the reciprocal of $2$ times the distance from the directrix to the focus." Thanks.
The parabola is defined as the locus of the points that are at equal distance from the focus and a line, called the directrix. Let's call $d$ the distance between the focus $F$ and the directrix $D$. Then $a=\dfrac{1}{2d}$.
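In case it helps to see where $a=\frac{1}{2d}$ comes from, here is a short sketch (placing the vertex at the origin, so the focus is $F=(0,\frac d2)$ and the directrix is $y=-\frac d2$, with $d$ the focus-to-directrix distance). A point $(x,y)$ lies on the parabola exactly when its distances to $F$ and to the directrix agree:

$$\sqrt{x^2+\left(y-\tfrac d2\right)^2}=y+\tfrac d2\;\Longrightarrow\; x^2=2dy\;\Longrightarrow\; y=\frac{1}{2d}\,x^2,$$

so the leading coefficient is the reciprocal of $2$ times the focus-to-directrix distance, as stated.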
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $[ab]=[[a][b]]$ Let $a,b,N\in\mathbb{Z}$, where $N>0$. Prove that $[ab]=[[a][b]]$, where $[x]$ denotes the remainder of $x$ after division by $N$. Here's my attempt: Proof. Since $x\equiv{[x]}_c\pmod c$, we know $a\equiv{[a]} \pmod{N}$ and $b\equiv{[b]} \pmod{N}$. By Proposition 1.3.4 in the book, $$ab\equiv{[a]}{[b]} \pmod N$$ Applying the first theorem we used to each side of the equation, we have $${[ab]}={[{[a]}{[b]}]}$$ QED. I think the last line in my proof is wrong. I tried combining these two statements: * *$ab\equiv[ab]\pmod{N}$ *${[a]}{[b]}\equiv[{[a]}{[b]}]\pmod{N}$ But what makes me hesitate is that the statements have equivalence, not equality. I'm new to number theory and do not know how to handle equivalence. Can you offer any help? I am looking for hints, not complete answers. Thank you.
Hint: Since the remainder of something upon division by $N$ is an element of $\{0,1,2,\ldots,N-1\}$, it is sufficient to show that $[ab] \equiv [[a][b]] \mod N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Finding $ \int \sin^2(2x)/[1+\cos(2x)]dx$. I am surprisingly having a bit of difficulty with an indefinite integral which is interesting since the integral I solved before is $$ \int \frac{1+\cos2x}{\sin^2(2x)} dx$$ The integral I am currently working on is $$ \int \frac{\sin^2(2x)}{1+\cos2x} dx$$ I first divided out giving: (this is the mistake I made early on, in this case you use the pythagorean identity for sin, and then cancel out $1+\cos2x$) $$ \int \sin^2(2x)+ \sin^2(2x)\cdot \sec(2x) \,dx$$ Then I factored out the $\sin^2(2x)$ resulting in: $$ \int \sin^2(2x)(1+ \sec(2x)) \,dx$$ Substituting for $u=2x$ and splitting the integral into two parts: $$\frac{1}{2} \int \sin^2(u) du + \frac{1}{2} \int \sin^2(u)\sec(u) \,du$$ lets call this eq. 1. Now, this is where I am having difficulty as 1.) dealing with even powers of sin and 2.) the $\sec(u)$ term is proving to be troublesome. Another form of the above equation is: $$ \frac{1}{2} \int \sin^2(u) \,du + \frac{1}{2} \int \sin(u)\tan(u) \,du$$ Some approaches I have tried are using different trigonometric identities e.g. $$\sin^2(u) = \frac{1}{2} (1-\cos(2u))$$ however, this results in for eq. 1 $$ \frac{1}{4} \int 1-\cos(2u) \,du + \frac{1}{4} \int \frac{(1-\cos(2u)}{\cos(u)} du $$ Then I would have to use the cosine angle addition formula which quickly gets out of hand. I understand there are different approaches to solving different indefinite integrals. The purpose of this problem is to only use substitutions. Questions that I have are as follows, 1.) is it possible to continue along with the steps I have taken?, 2.) or must I do an entirely different substitution at the beginning. Sorry for the long post and thank you for your time.
The function $\sin^2(2x)/(1+\cos(2x))$ can be simplified to $1-\cos 2x$: $$ \frac{\sin^2(2x)}{1+\cos(2x)}=\frac{1-\cos^2(2x)}{1+\cos(2x)}=\frac{(1-\cos(2x))(1+\cos(2x))}{1+\cos(2x)}=1-\cos(2x). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove that for an arbitrary (possibly infinite) language, that for a finite L-structure $M$, if $M \equiv N$ then $ M \cong N$ Prove that for an arbitrary (possibly infinite) language, that for a finite L-structure $M$, if $M \equiv N$ then $ M \cong N$ I'm struggling to think of what to do, I presume the best thing is to keep it simple and assume the Language to only be relational. I'd start by assuming we have a finite Language and taking $|M|$ = k = $|N|$ and as I've assumed the language to only be relational and finite take {$R_1,...,R_p$} to be the relation symbols. I'm guessing I need to show that there is some sentence $\sigma$ that is true in both $M$ and $N$? I have no idea what that sentence would be could someone help me out? Thanks. Also how would I even begin to answer it in the infinite case?
Let $a=a_1,\dots,a_k$ be an enumeration of $M$ and let $p(x)={\rm tp}_M(a)$. By construction $p(x)$ is realized in $M$, hence $p(x)$ is finitely satisfiable in any $N\equiv M$. As $N$ is also finite (having exactly $k$ elements is a first-order sentence, so it transfers), $p(x)$ is realized in $N$, say by the tuple $b$: otherwise each of the finitely many tuples of $N$ would fail some formula of $p(x)$, and the conjunction of those finitely many formulas would be a finite subset of $p(x)$ not satisfiable in $N$. Since $p(x)$ contains the formula $\forall y\,\bigvee_{i\le k} y=x_i$, the tuple $b$ enumerates $N$, and $a\mapsto b$ is the required isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate $\iint\limits_D {\sqrt {{x^2} - {y^2}} }\,dA$ ... Calculate $$\iint\limits_D {\sqrt {{x^2} - {y^2}} }\,dA$$ where $D$ is the triangle with vertices $(0,0), (1,1)$ and $(1,-1)$. I get the following integral $$I = 2\int\limits_0^1 {\int\limits_0^x {\sqrt {{x^2} - {y^2}} } \,dydx} $$ I would appreciate some hints or help in solving this integral. The answer $\frac{\pi }{6}$ somehow implies that the solution may be obtained by introducing polar coordinates. I tried solving it with polar coordinates setting the integration limits $0 \ldots \frac{\pi }{4}$ for $\theta $ and $0 \ldots \tan \theta $ for $r$ and multiplying the result by 2. However, i could not get the right answer...
Write $D=\{(x,y)\in \mathbb R^2\colon 0\leq x\leq 1\land -x\leq y\leq x\}$. The integral then equals $\displaystyle \int \limits_0^1\int \limits _{-x}^x\sqrt{x^2-y^2}\,\mathrm dy\,\mathrm dx$. You can get away with the one-dimensional substitution $y=x\sin(\theta)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inequality $\arctan x ≥ x-x^3/3$ Can you help me prove $\arctan x ≥ x-x^3/3$? I have thought of Taylor series but I have not come up with a solution.
Have you tried derivatives? $$(\arctan x-x+x^3/3)'=\frac 1{1+x^2}-1+x^2=\frac{x^4}{1+x^2}\ge0$$ So the difference is an increasing function. This fact, together the equality when $x=0$ means that $$\arctan x\ge x-x^3/3\text{ when }x\ge 0\\\arctan x\le x-x^3/3\text{ when }x\le 0$$
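A numerical spot-check of both signs of the conclusion (a sketch):

```python
import math

# arctan(x) >= x - x^3/3 for x >= 0, and <= for x <= 0
for x in (0.0, 0.1, 1.0, 5.0):
    assert math.atan(x) >= x - x**3 / 3
for x in (-0.1, -1.0, -5.0):
    assert math.atan(x) <= x - x**3 / 3
print("inequalities hold at the sampled points")
```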
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove $\mathbb{P}( k < l/2 ) \geq \frac{l}{2} \times \mathbb{P}( k = l/4 ) $ for binomial variable $k$ Suppose we have a Binomial variable: $$ k \sim Bin(l,\alpha) $$ Is it possible to prove/disprove that: $$ \mathbb{P}( k < l/2 ) \geq \frac{l}{2} \times \mathbb{P}( k = l/4 ) $$ EDIT: it's been used in 2nd line of 3rd paragraph in http://arxiv.org/pdf/1110.3564v4.pdf (page 33)
As stated in the comments, the inequality holds vacuously if $l$ is not a multiple of four. Assuming $l=4n$, if we prove that the pdf of $\operatorname{Bin}(4n,\alpha)$ is convex on the interval $[0,2n]$, then the inequality follows from Jensen's inequality. Notice that the inflection points of the pdf of the standard normal distribution occur at $x=\pm 1$, so it is reasonable to expect that the pdf of $\operatorname{Bin}(4n,\alpha)$ is convex on the set $|x-\mu|\geq\sigma$, or $$ |x-4n\alpha|\geq\sqrt{4n\alpha(1-\alpha)},$$ so if $\alpha\geq\frac{1}{2}$ we're OK. This is exactly the assumption in the linked paper.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1101748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that two different representations to the base $k$ represent two different integers I would like to show: Given two distinct, positive, integer representations in base $k$, say $\sum_{i=0}^na_ik^i$ and $\sum_{i=0}^mb_ik^i$ where $a_n \neq 0 \neq b_m$ and $a_i,b_i \in \{0,1,\ldots , k-1 \}$, prove that $$\sum_{i=0}^na_ik^i \neq \sum_{i=0}^mb_ik^i$$ I would also like to show this using the result that $$\sum_{i=0}^pc_ik^i \leq k^{p+1}-1$$ for every integer representation in base $k$. Additionally, I don't want to use the Basis Representation Theorem (that every basis representation is unique). What I have so far: I figured there are two cases to make $\sum_{i=0}^na_ik^i$ and $\sum_{i=0}^mb_ik^i$ be distinct. First, if WLOG $m>n$. Then we know $$\sum_{i=0}^na_ik^i\leq k^{n+1}-1 \leq k^m-1 < k^m \leq \sum_{i=0}^mb_ik^i$$ Then I moved on to the second case of $m=n$. For the two integer representations to be distinct then there must be some $i \in \{1,2,\ldots , n \}$ such that $a_i \neq b_i$. At this point I am stuck on how to show the two integer representations must be different numbers, using the result of $\sum_{i=0}^pc_ik^i \leq k^{p+1}-1$. Does anyone have an idea how to do this? Or to do away with cases?
This is a comment, not an answer, but the comment space is too small to hold this. I proved this result on representation in general bases over 40 years ago: Let $\mathbb{B} =(B_j)_{j=0}^{\infty}$ be an increasing sequence of positive integers with $B_0 = 1$. A positive integer $n$ is represented in $\mathbb{B}$ if $n$ can be written in the form $n = \sum_{j=0}^{\infty} d_j B_j $ where $0 \le d_j < B_{j+1}/B_j $. (For the usual representation, let $B_j = b^j$ for some integer $b \ge 2$.) Then (1) Every positive integer can be represented in a form with all but a finite number of the $d_j$ being zero. (2) The representation is unique for all positive integers if and only if $B_j$ divides $B_{j+1}$ for all $j$. Uniqueness then holds in the case mentioned above ($B_j = b^j$), but also holds in the factorial numbering system, where $B_j = (j+1)!$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Find the point on the curve farthest from the line $x-y=0$. The curve $x^3-y^3=1$ is asymptotic to the line $x-y=0$. Find the point on the curve farthest from the line $x-y=0$. Can someone please explain what the question is asking? I can't picture it geometrically as I am not able to plot it. Also, is there any software to plot such graphs?
Rotate by $-\pi/4$. This corresponds to $x\mapsto x-y$ and $y\mapsto x+y$ (up to a scalar factor). Then your line becomes $x-y-(x+y)=0\iff y=0$ and the equation of the curve becomes $$(x-y)^3-(x+y)^3=1\iff -2y^3-6yx^2=1\iff x^2=-\frac{1+2y^3}{6y}$$ as $y$ can't be $0$. You want to find the farthest point from the line $y=0$, so you need to minimize $y$ so that the fraction on the right is positive. So the solution is $(0,-1/\sqrt[3]2)$, which is $$(1/\sqrt[3]2,-1/\sqrt[3]2)$$ in the original coordinates.
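As a quick sanity check (a sketch) that the recovered point lies on the original curve, and to compute its distance to the line $x-y=0$, which comes out to $2^{1/6}$:

```python
import math

c = 2 ** (-1 / 3)
x, y = c, -c                       # the claimed farthest point, in original coordinates
assert abs(x**3 - y**3 - 1) < 1e-12

dist = abs(x - y) / math.sqrt(2)   # distance from (x, y) to the line x - y = 0
print(dist, 2 ** (1 / 6))          # both ~1.1225
```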
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
$\lim_{n \rightarrow \infty} \frac{1-(1-1/n)^4}{1-(1-1/n)^3}$ Find $$\lim_{n \rightarrow \infty} \dfrac{1-\left(1-\dfrac{1}{n}\right)^4}{1-\left(1-\dfrac{1}{n}\right)^3}$$ I can't figure out why the limit is equal to $\dfrac{4}{3}$ because I take the limit of a quotient to be the quotient of their limits. I'm taking that $\lim_{n \rightarrow \infty}1-\left(1-\frac{1}{n}\right)^4 = 0$ and likewise that $\lim_{n \rightarrow \infty}1-\left(1-\frac{1}{n}\right)^3 = 0$, which still gives me that the limit should be 0.
$$\lim_{n \rightarrow \infty} \dfrac{1-\left(1-\dfrac{1}{n}\right)^4}{1-\left(1-\dfrac{1}{n}\right)^3} \stackrel{\mathscr{L}}{=}\lim_{n \rightarrow \infty} \dfrac{-4\left(1-\dfrac{1}{n}\right)^3 \dfrac{1}{n^2}}{-3\left(1-\dfrac{1}{n}\right)^2\dfrac{1}{n^2}} =\lim_{n \rightarrow \infty} \dfrac{4\left(1-\dfrac{1}{n}\right)^3}{3\left(1-\dfrac{1}{n}\right)^2} = \color{blue}{\dfrac{4}{3}}$$
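Equivalently, writing $t=1-\frac1n\to 1$, the quotient is $\frac{1-t^4}{1-t^3}=\frac{(1-t)(1+t+t^2+t^3)}{(1-t)(1+t+t^2)}\to\frac43$ without L'Hôpital. A quick numerical check (sketch):

```python
for n in (10, 1000, 100000):
    t = 1 - 1 / n
    print(n, (1 - t**4) / (1 - t**3))   # approaches 4/3
```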
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
About the rank of a small square matrix An interesting question which hit me just now: Suppose we have a square matrix, for instance, a 3 by 3 one. Each of its entries is a randomly assigned integer from 0 to 9; then what's the probability that it becomes a singular matrix? Or, in the general case, what if the matrix is n by n? Does this probability increase or decrease as n goes up?
Are you familiar with the theorem that states: In the set of all nxn (in this case n=3) matrices, the set of all singular matrices has Lebesgue measure zero? If you restrict the space to only entries from 0 to 9, the space itself has Lebesgue measure zero, and thus, if you were to equip this space with a probability measure so that you induce a finite distribution over the set of all matrices (since we're randomizing, let's say it's uniform), then there is a non-zero probability you will generate a singular matrix. Now, while the 0 to 9 case for a 3x3 is actually quite cumbersome (to do it you'd have to find out combinatorially how many possible singular matrices there are and divide that number by the total number of matrices you could generate), it's still easy to observe for a simpler case that as you increase the dimensions of the matrix, but keep the number of entries the same, the probability of generating a singular matrix actually increases. Consider the following construction: You are generating a 2 x 2 matrix with only entries 0 and 1. This is equivalent to randomly drawing 2 vectors from the set \begin{align} \{(0,0), (0,1), (1,0), (1,1)\} \end{align} The probability of randomly generating a singular matrix is at least the probability of randomly drawing the same vector twice (repeated rows are not the only way to be singular, since drawing the zero vector also forces singularity, but this lower bound is enough to see the trend). You should verify that this probability is $\tfrac{1}{4}$. Now, we'll extend the matrix to size 3 x 3, but still only draw from vectors with entries 0 and 1. The set of vectors we can draw from is now \begin{align} \{(0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)\} \end{align} The probability of drawing some combination of vectors that results in a singular matrix is likewise at least the complement of drawing three different vectors (three distinct vectors can still be linearly dependent). You should verify that this probability is in fact $\tfrac{22}{64} > \tfrac{1}{4}$. The question is: Does this hold for the example you mentioned?
Count how many vectors of length 3 you can generate with entries from the set $\{0,...,9\}$ and calculate the probability that the three rows you draw are linearly independent (not merely pairwise distinct). The complement of this is precisely the probability of generating a singular matrix in this space.
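As a quick check of this suggestion, here is a small brute-force count (the entry set $\{0,1\}$ and sizes are just the simpler case from the answer; the full 0-9, 3x3 case has $10^9$ matrices and is too big to enumerate naively). Note that it counts every singular matrix, not only draws with a repeated row, so cases like a zero row are included too:

```python
from itertools import product

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def count_singular(entries, n):
    """Return (# singular, # total) over all n x n matrices with the given entries."""
    det = det2 if n == 2 else det3
    rows = list(product(entries, repeat=n))   # all possible rows
    total = singular = 0
    for m in product(rows, repeat=n):         # all possible matrices
        total += 1
        if det(m) == 0:
            singular += 1
    return singular, total

print(count_singular([0, 1], 2))   # (10, 16): more than just the 4 repeated-row draws
print(count_singular([0, 1], 3))   # 3 x 3 with 0/1 entries, 512 matrices in all
```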
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does a prime (apostrophe) mean before a predicate? I found this statement in a paper by John McCarthy: $$ \forall x.ostrich\ x \supset species\ x ={}^\prime{}ostrich $$ I can't figure out what the prime indicates.
It seems to me that it is used as an "operator" for nominalization, in place of the standard $\lambda$ operator. See : * *Nino Cocchiarella, Conceptual Realism and the Nexus of Predication : Lecture Five (Rome, 2004), page 14 : Consider, for example, the predicate phrase "is famous", which can be symbolized as a $\lambda$-abstract $[\lambda xFamous(x)]$ as well as simply by $Famous( )$. The $\lambda$-abstract is preferable as a way of representing the infinitive "to be famous", which is one form of nominalization: to be famous $\to$ to be an $x$ such that $x$ is famous $\to$ $[\lambda xFamous(x)]$. The $\lambda$-abstract $[\lambda xFamous(x)]$ acts as a term, and thus can fill the argument-place of a predicate (like : $x = t$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Quick way to express percent from negative interval I have some data which looks like this: I get a number in the -6/+4 range and I need to express it as a percentage. If I get 4, the result is of course 100%; if I get -6, it is 0%, etc. Is there a quick formula I can use to obtain the percent value of, say, -3.25?
For $x \in [-6, 4]$ the function $$p(x) = \frac{x+6}{10}$$ is what you are looking for. You should note however that this gives numbers between $0$ and $1$. If you want the actual percentages, just use the function $$q(x) = p(x) * 100 = 10 (x + 6)$$ instead. In general for an interval $[a, b]$ you can use the formulas $$p(x) = \frac {x-a} {b-a}$$ and $$q(x) = 100 * p (x) = 100 * \frac {x-a} {b-a}$$
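A quick sketch of the formula in code (the default endpoints match the question's interval; any $a < b$ works):

```python
def to_percent(x, a=-6.0, b=4.0):
    """Map x in [a, b] linearly onto [0, 100]."""
    return 100 * (x - a) / (b - a)

print(to_percent(-3.25))  # 27.5
print(to_percent(4))      # 100.0
print(to_percent(-6))     # 0.0
```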
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How likely is it to guess three numbers? In the Irish lottery if you guess three numbers correctly you win 576x your original stake and there are 12 draws a week. My questions is: How likely is it, over the course of two years (104 weeks or 1248 draws) that I will guess the numbers correctly? The rules are as follows: * *you can only choose numbers between 1-49 *you can only choose three of these numbers *the total draw will produce 6 numbers, yet only the three you have chosen need to match So over two years, how likely will I be to win? Thanks :)
The chance of getting the first number right is $\frac6{49}$, the second one is $\frac5{49}$ and the third one is $\frac4{49}$. The product of these 3 fractions gives you the probability $p$ of winning a single draw. If you have $1248$ draws, multiplying $p$ by $1248$ gives the expected number of wins over the $2$ years, which comes out to about $1.27$; note that this is not the probability of winning. The probability of winning at least one draw is $1-(1-p)^{1248} \approx 0.72$. Note: if numbers cannot be repeated, use $49$, $48$ and $47$ as denominators instead of $49$ all $3$ times, which raises the expected number of wins to about $1.35$. Either way, $576$ times profit is still an inadequate amount for $1248$ tries, but it's worth trying your luck.
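A small sanity check of these numbers (computing both the with-repeats probability from the answer and the without-replacement one, and noting that an expected count above 1 is not the same as a guaranteed win):

```python
p_with_repeats = (6 / 49) * (5 / 49) * (4 / 49)   # the product of the three fractions above
p_no_repeats = (6 * 5 * 4) / (49 * 48 * 47)       # numbers drawn without replacement
draws = 12 * 104                                   # 1248 draws in two years

expected_wins = draws * p_no_repeats
prob_at_least_one_win = 1 - (1 - p_no_repeats) ** draws
print(expected_wins, prob_at_least_one_win)
```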
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find an explicit atlas for this submanifold of $\mathbb{R}^4$ I'm having a hard time coming up with atlases for manifolds. I can prove using the implicit function theorem that $M = \{(x_1,x_2,x_3,x_4)\in\mathbb{R}^4:x_1^2+x_2^2=x_3^2+x_4^2=1\}$ is a $2$-dimensional manifold. I would like to find an explicit atlas for this manifold now. Any help would be greatly appreciated.
OK, so based on the comments, I think this should be the answer: Let $U_1=\{(x_1,x_2,x_3,x_4)\in M: x_1>0\}, \phi_1:U_1\to\mathbb{R}$ defined as $\phi(x_1,x_2,x_3,x_4)=x_2$, $U_2=\{(x_1,x_2,x_3,x_4)\in M: x_2>0\}, \phi_2:U_2\to\mathbb{R}$ defined as $\phi(x_1,x_2,x_3,x_4)=x_1$, $U_3=\{(x_1,x_2,x_3,x_4)\in M: x_1<0\}, \phi_3:U_3\to\mathbb{R}$ defined as $\phi(x_1,x_2,x_3,x_4)=x_2$, $U_4=\{(x_1,x_2,x_3,x_4)\in M: x_2<0\}, \phi_4:U_4\to\mathbb{R}$ defined as $\phi(x_1,x_2,x_3,x_4)=x_1$. Then, we'll do the same thing with the third and the fourth components (calling them $V_j$ and $\psi_j$) and an atlas for $M$ should consist of $\{(U_i\times V_j,\phi_i\times \psi_j)\}_{i,j=1}^4$. Is this the correct answer?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Suppose $\int_{[a,b]}f=0,\text{ then }f(x)=0 \forall x\in[a,b]$ Let $a<b$ be real numbers. Let $f:[a,b]\to\mathbb{R}$ be a continuous non-negative function. Suppose $\int_{[a,b]}f=0,\text{ then }f(x)=0 \forall x\in[a,b]$ Proof: Suppose for the sake of contradiction $\exists x_0\in[a,b] \text{ such that } f(x_0)= D> 0$ Since the function is continuous at $x_0$, we have $$\forall \epsilon>0, \exists \delta>0 \text{ such that } |f(x)-f(x_0)|\leq\epsilon\text{ whenever } x\in[a,b]\cap(x_0-\delta,x_0+\delta)$$ choose $\delta=\delta_1$ such that $\epsilon=D/2$, then we can say: $$-D/2 \leq f(x)-f(x_0)\leq D/2$$ $$-D/2 \leq f(x) - D$$ $$f(x)\geq D/2>0$$ This implies that $\int_{[a,b]\cap(x_0-\delta_1,x_0+\delta_1)}f >0$ Now since $\int_{[a,b]} f = 0$. we have $$\int_{[a,b]}f=\int_{[a,x_0-\delta_1]}f+ \int_{(x_0-\delta_1, x_0+\delta_1)} f + \int_{[x_0+\delta_1,b]}f=0$$ $$\int_{[a,x_0-\delta_1]}f+\int_{[x_0+\delta_1,b]}f<-D/2,$$ but this is a contradiction since $f(x)\geq 0$ and hence $\int_{[a,x_0-\delta_1]}f+\int_{[x_0+\delta_1,b]}f\geq 0$ Is my proof correct?
This isn't exactly what you're asking for, but this is how I would approach this problem: If $f$ is Riemann integrable, then it is Lebesgue integrable and the two integrals are equal. So if $\int_{[a,b]}f\mathrm dm=0$ where $m$ is Lebesgue measure, then $f=0$ almost everywhere (since $f$ is nonnegative). Since $f$ is continuous, it follows that $f$ is identically zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Unusual mathematical terms From time to time, I come across some unusual mathematical terms. I know something about strange attractors. I also know what the Witch of Agnesi is. However, what prompted me to write this question is that I was really perplexed when I read the other day about monstrous moonshine, and this is so far my favorite, out of similar terms. Some others: * *Cantor dust *Gabriel's Horn (also known as Torricelli's trumpet) *Koch snowflake *Knaster–Kuratowski fan (also known as Cantor's leaky tent or Cantor's teepee depending on the presence or absence of the apex; there is also Cantor's leakier tent) Are there more such unusual terms in mathematics? Jan 17 update: for fun, here is a word cloud of all terms mentioned so far, and another, more readable one (word-cloud images not reproduced here).
I always wanted to get a room at the Hilbert Hotel. I also love working with annihilators....
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97", "answer_count": 39, "answer_id": 22 }
Is it an axiom that the inequalities are translation-invariant or can we prove it? I was thinking about the inequalities on the set of real numbers. To me and everyone else, it's been taught that an inequality is translation-invariant, i.e.: $x < y \implies x + c < y + c \quad \forall c \in \mathbb{R}$ But I've been trying to think why. Is it simply a property we assign, or is there a reason that draws from the inequality's existing properties?
The reals can either be defined axiomatically or constructed from (usually) a model of the rationals, in various ways. Axiomatically, the reals are an ordered field, which means that translation invariance property you refer to is an axiom. So, if your approach to the reals is that they are simply a model (one of many, but all isomorphic) of the axioms of the reals, then this property is an axiom. If instead you prefer to construct the reals from the rationals, then you must first accept the same translation invariance property for the rationals (and again, this is either taken axiomatically, or proven for a construction of the rationals, say from the integers) and then prove it for the particular construction giving the reals. For most constructions of the reals this part is actually quite easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1102902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I solve this? First I have to factor $\frac{2x}{x-1} + \frac{3x+1}{x-1} - \frac{1+9x+2x^2}{x^2-1}$ I am doing calculus exercises but I'm in trouble with this $$\frac{ 2x}{x-1} + \frac{3x +1}{ x-1} - \frac{1 + 9x + 2x^2}{x^2-1}$$ the solution is $$\frac{3x}{x+1}$$ The only progress I have made is factoring $x^2-1 = (x-1)(x+1)$. I do not know how to factor $1 + 9x + 2x^2$; can someone please guide me through solving this exercise.
Multiply the first two terms by $x+1$ in numerator and denominator. Then add all three terms and you obtain an expression $$\frac{f(x)}{(x-1)(x+1)}. $$ Now see how to factor $f(x)=3x^2-3x$.
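If you want to check the resulting identity $\frac{3x(x-1)}{(x-1)(x+1)} = \frac{3x}{x+1}$ numerically, exact rational arithmetic avoids any floating-point doubt (the sample points are arbitrary, avoiding $x = \pm 1$):

```python
from fractions import Fraction

def lhs(x):
    x = Fraction(x)
    return 2*x/(x - 1) + (3*x + 1)/(x - 1) - (1 + 9*x + 2*x**2)/(x**2 - 1)

def rhs(x):
    x = Fraction(x)
    return 3*x/(x + 1)

for x in (2, 3, 5, -4, Fraction(7, 2)):
    assert lhs(x) == rhs(x)
print(lhs(2))  # 2
```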
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find the value of $\lim_{x \to +\infty} x \lfloor \frac{1}{x} \rfloor$ Determine if the following limit exists $$\lim_{x \to +\infty} x \lfloor \frac{1}{x} \rfloor$$ note that $$\frac{1}{x}-1 <\lfloor \frac{1}{x}\rfloor \leq \frac{1}{x}$$ $$1-x <x\lfloor \frac{1}{x}\rfloor \leq 1$$ I'm stuck here.
Observe that $\lfloor\frac1x\rfloor=0$ for $x>1$, hence $x\lfloor\frac1x\rfloor$ is identically zero on $]1,+\infty[$. Hence the limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove that $e^x = 8x^3$ has only one solution in $[0,1]$? Prove that $e^x = 8x^3$ has only one solution in $[0,1]$. If we define $f(x) = e^x - 8x^3$ then, since $f(0)=1>0$ and $f(1)=e-8<0$, the intermediate value theorem gives at least one solution. But $f$ is not strictly decreasing/increasing. How do I continue?
$f$ is concave on $[0.1,1]$, so $f(0.1)>0$ and $f(1)<0$ imply that the root from $(0.1,1)$ is unique. To check that there is no root on $(0,0.1)$, note that $f'(x)$ is positive here and $f(0)>0$, $f(0.1)>0$.
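The sign checks and the location of the single root can be confirmed numerically, e.g. by bisection on $[0.1, 1]$ (a sketch for illustration, not part of the proof):

```python
import math

def f(x):
    return math.exp(x) - 8 * x**3

assert f(0) > 0 and f(0.1) > 0 and f(1) < 0   # the sign pattern used above

lo, hi = 0.1, 1.0                              # f(lo) > 0 > f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)   # about 0.613
```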
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding parameters for a quotient of a polynomial ring Let $a,b \in \mathbb{R}, T_{a,b} := \mathbb{R}[x] \ /\langle x^2+ax+b\rangle$, where $\langle x^2+ax+b\rangle$ is the ideal generated by $x^2+ax+b$. 1) for which $a,b \in \mathbb{R}$ is $T_{a,b}$ a field? 2) for which $a,b \in \mathbb{R}$ is $x \in T_{a,b}$ invertible ($x$ refers to the $x$ in $\mathbb{R}[x]$), and what does the inverse look like? 1) Let $b = 0 \Rightarrow T_{a,0} = \mathbb{R}[x] \ /\langle x^2+ax\rangle$. Let $u = x, v = a x \Rightarrow u,v \in T_{a,0}$ $uv = x^2+ax \not\in T_{a,0} \Rightarrow T_{a,0}$ not field $\Rightarrow T_{a,b}$ field $\Rightarrow b \neq 0$. This is all that I saw. Can you please tell me how the other conditions on $a,b$ can be found, so that $T_{a,b}$ is a field? I tried to identify all roots of $x^2 + ax + b$, but didn't find an approach that helped me go on. 2) I tried to somehow utilize that $x^2 + ax + b = 0$ in $T_{a,b}$, but got nowhere. Can you please help me find a solution?
2) We have $$ x(x+a)=x^2+ax=-(ax+b)+ax=-b. $$ Thus, provided $b \neq 0$, $$ \frac{1}{x}=\frac{x+a}{x(x+a)}=\frac{x+a}{-b} $$ If $b = 0$ then $x(x+a)=0$ with $x \neq 0$, so $x$ is a zero divisor (or nilpotent when $a=0$ as well) and has no inverse; hence $x$ is invertible exactly when $b \neq 0$.
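One can verify this inverse by explicit multiplication in the quotient ring. Here is a small sketch with elements represented as pairs $(c_0, c_1)$ for $c_0 + c_1 x$, reducing with $x^2 = -ax - b$ (the values of $a$ and $b$ are arbitrary, with $b \neq 0$):

```python
from fractions import Fraction

def mul_mod(p, q, a, b):
    """Multiply p = p0 + p1*x and q = q0 + q1*x in R[x]/(x^2 + a*x + b),
    reducing with x^2 = -a*x - b."""
    p0, p1 = p
    q0, q1 = q
    c0, c1, c2 = p0 * q0, p0 * q1 + p1 * q0, p1 * q1
    return (c0 - b * c2, c1 - a * c2)

# Example values; any a works, but b must be nonzero for x to be invertible.
a, b = Fraction(3), Fraction(2)
x = (Fraction(0), Fraction(1))
x_inv = (-a / b, Fraction(-1) / b)   # -(x + a)/b

assert mul_mod(x, x_inv, a, b) == (Fraction(1), Fraction(0))
print("x * (-(x + a)/b) = 1 in R[x]/(x^2 + a x + b)")
```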
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute the derivative $ \frac{d}{dR}\iiint_{\{(x,y,z)\in\textbf{R}^3: \sqrt{x^2+y^2+z^2} \leq R\}}f(x,y,z)\,dx\,dy\,dz. $ Let the function f and its first-order partial derivatives be continuous in $\textbf{R}^3$. Suppose that $$ \iiint_{\textbf{R}^3}|f(x,y,z)|\,dx\,dy\,dz < \infty. $$ Compute the derivative $$ \frac{d}{dR}\iiint_{\{(x,y,z)\in\textbf{R}^3: \sqrt{x^2+y^2+z^2} \leq R\}}f(x,y,z)\,dx\,dy\,dz. $$ with the derivative given in terms of a surface integral. Converting to spherical coordinates I found that $$ \frac{d}{dR}\iiint_D f(x,y,z)\,dx\,dy\,dz = \frac{d}{dR} \int_0^{2\pi} \int_0^\pi \int_0^R f(r,\theta,\phi)\;r^2 \sin(\phi)\,dr\,d\phi\,d\theta. $$ My next step was $$ \frac{d}{dR} \int_0^{2\pi} \int_0^\pi \int_0^R f(r,\theta,\phi)\;r^2 \sin(\phi)\,dr\,d\phi\,d\theta= \int_0^{2\pi} \int_0^\pi \frac{d}{dR}\int_0^R f(r,\theta,\phi)\;r^2 \sin(\phi)\,dr\,d\phi\,d\theta. $$ In general, are you allowed to move the derivative through the outside integrals? Why? My final step was $$\int_0^{2\pi} \int_0^\pi \frac{d}{dR}\int_0^R f(r,\theta,\phi)\;r^2 \sin(\phi)\,dr\,d\phi\,d\theta= \int_0^{2\pi} \int_0^\pi f(R,\theta,\phi)R^2 \sin(\phi)\,d\phi\,d\theta= \iint_{S(R)}f(R,\theta,\phi)R^2\;dS \;\;\;\text{ where }\;\;\; dS=\sin(\phi)\,d\phi\,d\theta,\;\; S(R) \text{ is the sphere of radius R.} $$ Is this correct? Is there a better way of doing this problem?
In general, moving a derivative inside or outside an integral is not automatically valid, but in your case you can: the limits of the integrals with respect to $\phi$ and $\theta$ do not depend on $R$, so the only $R$-dependence sits in the upper limit of the inner integral, and the fundamental theorem of calculus (together with the continuity of $f$) justifies the last step.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Preimages of intersections/unions Let $f(x) = x^2$ and suppose that $A$ is the closed interval $[0, 4]$ and $B$ is the closed interval $[−1, 1]$. In this case find $f^{−1}(A)$ and $f^{−1}(B)$. Does $f^{−1}(A\cap B) = f^{−1}(A) \cap f^{−1}(B)$ in this case? Does $f^{−1}(A \cup B) = f^{−1}(A) \cup f^{−1}(B)$?
I will assume that $f: \mathbb{R} \rightarrow \mathbb{R}$. If $f: U \rightarrow V$ and $Y \subset V$ then $f^{-1}(Y)= \{x \in U : f(x) \in Y \}$. So $f^{-1}(A)=f^{-1}([0,4])=\{x \in \mathbb{R} : x^2 \in [0,4] \}= [-2,2]$. Since $x^2 \geq 0$ for every real $x$, the condition $x^2 \in [-1,1]$ just says $x^2 \leq 1$, so $f^{-1}(B)=f^{-1}([-1,1])=\{x \in \mathbb{R} : x^2 \leq 1 \}= [-1,1]$. Also $A \cap B = [0,1]$, and $f^{-1}(A \cap B)=f^{-1}([0,1])=\{x \in \mathbb{R} : x^2 \in [0,1] \}= [-1,1]$, while $A \cup B = [-1,4]$, so $f^{-1}(A \cup B)=f^{-1}([-1,4])=\{x \in \mathbb{R} : x^2 \leq 4 \}= [-2,2]$. Therefore $f^{-1}(A) \cap f^{-1}(B)= [-2,2] \cap [-1,1]=[-1,1] = f^{-1}(A \cap B)$ and $f^{-1}(A) \cup f^{-1}(B)= [-2,2] \cup [-1,1]= [-2,2] = f^{-1}(A \cup B)$, so both equalities hold in this case.
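A quick numerical check of these preimages (note that $x^2 \in [-1,1]$ only requires $x^2 \le 1$, so negative $x$ qualify as well); the grid is just a sample of $[-3,3]$:

```python
def preimage(f, interval, xs):
    lo, hi = interval
    return [x for x in xs if lo <= f(x) <= hi]

f = lambda x: x * x
xs = [i / 100 for i in range(-300, 301)]   # sample grid on [-3, 3]

pre_A = preimage(f, (0, 4), xs)
pre_B = preimage(f, (-1, 1), xs)
print(min(pre_A), max(pre_A))  # -2.0 2.0
print(min(pre_B), max(pre_B))  # -1.0 1.0

# Preimages commute with intersection and union.
assert set(preimage(f, (0, 1), xs)) == set(pre_A) & set(pre_B)    # A intersect B = [0, 1]
assert set(preimage(f, (-1, 4), xs)) == set(pre_A) | set(pre_B)   # A union B = [-1, 4]
```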
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What significant differences are there between a Riemannian manifold and a pseudo-Riemannian manifold? I am reading John Lee's book Riemannian Manifolds. On page 91, he begins a chapter called "Geodesics and Distance," which is I think the first chapter that seriously addresses geodesics. I was very surprised when I came across the following sentence: Most of the results of this chapter do not apply to pseudo-Riemannian metrics, at least not without substantial modification. I thought the only real difference between the two was about the positive-definite constraint. But this makes it sound like there's a whole host of properties that don't apply to pseudo-Riemannian metrics but that do apply to Riemannian metrics. Can someone clarify this for me? Are there things we can do on Riemannian manifolds that can't be done on pseudo-Riemannian ones?
For one thing, a Lorentz-signature metric on a compact manifold can fail to be geodesically complete. If memory serves, Chapter 3 of Einstein Manifolds by Besse contains an example of a metric on a torus where a finite-length geodesic "winds" infinitely many times. Generally, the "unit sphere" in a tangent space is non-compact for a metric of indefinite signature (e.g., it's a hyperbola on a Lorentz-signature surface), which can cause all manner of fun.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
How to iterate through all the possibilities with this quantifier? This is a problem from Discrete Mathematics and its Applications My question is on 9g. Here is my work so far I am struggling with the exactly one person part. The one person whom everybody loves is pretty straightforward ($\exists x\forall y\,L(y,x)$). I am trying to apply the method that Alan gave in How to express exact quantifier in this situation? from my other question. From what I have, if I know that x is a possibility (one exists), I have to iterate over all the rest of the domain to ensure that there are no other possibilities (check against x). That's what I tried doing with the conjunction. However, this doesn't work because in my diagram, A is the exact one person whom everybody loves. I also showed that C loves C. Once q and w take up C and C (go through all the values in the domain) w, which is C, is not A, which means the whole expression is false because the implication is false, but the expression shouldn't be false (A is the only one in the diagram whom everybody loves; C loving C should not have an effect). Is there any other way I can restructure the nested quantifier so I can still iterate through and see if there are any others that everybody loves?
Following the approach in the link, you would write $$\exists x\,\big(\forall y\, L(y,x) \wedge \forall w\,(\forall z\, L(z,w) \implies w=x)\big)$$ where the second conjunct says that any $w$ whom everybody loves must be $x$ itself.
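With the outer quantifiers made explicit as $\exists x(\forall y\,L(y,x) \wedge \forall w(\forall z\,L(z,w) \to w=x))$, one can brute-force-check over a small domain that the formula agrees with the plain reading "exactly one person is loved by everybody" (the domain and relations are illustrative):

```python
from itertools import product

people = ['A', 'B', 'C']

def loved_by_all(x, loves):
    return all((y, x) in loves for y in people)

def exactly_one(loves):
    """Direct reading: exactly one person is loved by everybody."""
    return sum(loved_by_all(x, loves) for x in people) == 1

def formula(loves):
    """Ex x ( Ay L(y,x)  and  Aw ( Az L(z,w) -> w = x ) )."""
    return any(
        loved_by_all(x, loves)
        and all((not loved_by_all(w, loves)) or w == x for w in people)
        for x in people
    )

# The two agree on every possible 'loves' relation over three people.
pairs = list(product(people, repeat=2))
for bits in product([False, True], repeat=len(pairs)):
    loves = {p for p, keep in zip(pairs, bits) if keep}
    assert formula(loves) == exactly_one(loves)
print("checked all", 2 ** len(pairs), "relations")
```

In particular, a relation where everyone loves A and C also loves C (the scenario described in the question) still satisfies the formula, because the inner implication only constrains people who are loved by *everybody*.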
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to visualize the gradient as a one-form? I just finished reading the proof that the gradient is a covariant vector or a one-form, but I am having a difficult time visualizing this. I still visualize gradients as vector fields instead of the level sets associated with dual vectors. How to visualize the gradient as a one-form?
To clear up some confusion in the comments: when Carroll refers to the gradient of a function $f$ as a $1$-form he probably intends to refer to the exterior derivative $df$ of $f$. This is a $1$-form containing all of the information which is contained in the gradient, but which can be defined in the absence of a (pseudo-)Riemannian metric. In particular it contains precisely the data needed to differentiate $f$ with respect to any tangent vector; the result tells you how fast $f$ is changing in a particular direction at a particular point, which is what the gradient is supposed to do. In the presence of a (pseudo-)Riemannian metric, one can identify $1$-forms and vector fields, and the vector field corresponding to $df$ along this identification is what people usually call the gradient or gradient vector field of $f$. Unlike $df$, it depends on a choice of (pseudo-)Riemannian metric. It's unusual and imprecise to refer to $df$ itself as the gradient.
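A toy computation may help make this concrete: the $1$-form $df$ eats tangent vectors directly, while the gradient vector only appears once a metric is chosen (the function, point, and metric below are all made up for illustration):

```python
# Toy example: f(x, y) = x**2 + y at the point p = (1, 2).
df = (2 * 1, 1)                 # (df/dx, df/dy) at p

def df_on(v):                   # the 1-form acting on a tangent vector: no metric needed
    return df[0] * v[0] + df[1] * v[1]

g = [[1, 0], [0, 4]]            # a diagonal Riemannian metric at p (arbitrary choice)

# Raise the index: grad f = g^{-1} df, which depends on g.
grad = (df[0] / g[0][0], df[1] / g[1][1])

def g_inner(u, v):
    return g[0][0] * u[0] * v[0] + g[1][1] * u[1] * v[1]

v = (3, 5)
assert df_on(v) == g_inner(grad, v)   # <grad f, v>_g reproduces df(v)
print(df_on(v))  # 11
```

Changing the metric changes `grad` but never changes `df_on(v)`, which is the point of the distinction.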
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Finding a particular solution to a differential equation what is the particular solution for the following differential equation? $$D^3 (D^2+D+1)(D^2+1)(D^2-3D+2)y=x^3+\cos\left(\frac{\sqrt{3}}2x \right)+xe^{2x}+\cos(x)$$ I tried Undetermined Coefficients and it took so long to solve it,not to mention it was on an exam. I was wondering if there could be any faster and simpler solution.
For the term $x^3$, use the undetermined coefficients method ($6^{th}$ degree polynomial). For the terms $\cos\left(\frac{\sqrt{3}}2x \right)$ and $\cos(x)$ use the complex exponential form and keep the real part of $e^{i\lambda x}/Y(i\lambda)$. For the term $xe^{2x}$, also use undetermined coefficients. Rewriting the characteristic polynomial as a function of $D-2$ could help, as $$(D-2)x^ke^{2x}=kx^{k-1}e^{2x}+2x^ke^{2x}-2x^ke^{2x}=kx^{k-1}e^{2x}.$$
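The operator identity $(D-2)\,x^k e^{2x} = k x^{k-1} e^{2x}$ can be spot-checked with finite differences (purely a numerical sanity check; the tolerance is loose):

```python
import math

def rel_err_of_identity(k, x, h=1e-6):
    f = lambda t: t**k * math.exp(2 * t)
    Df = (f(x + h) - f(x - h)) / (2 * h)          # numerical derivative D f
    lhs = Df - 2 * f(x)                            # (D - 2) f
    rhs = k * x**(k - 1) * math.exp(2 * x)
    return abs(lhs - rhs) / max(1.0, abs(rhs))

for k in (1, 2, 3):
    for x in (0.5, 1.0, 1.5):
        assert rel_err_of_identity(k, x) < 1e-5
print("(D - 2) x^k e^{2x} = k x^{k-1} e^{2x} holds at the sampled points")
```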
{ "language": "en", "url": "https://math.stackexchange.com/questions/1103754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Can I use the index of a series for help with divergence? I was studying this series: $$\sum_{n=2}^{\infty}\dfrac{5}{7n+28}$$ I know that it's an increasing, monotone sequence. Also, I know I can rewrite as: $$\sum_{n=2}^{\infty}\dfrac{5}{7(n+4)} = \dfrac{5}{7} \cdot \sum_{n=2}^{\infty}\dfrac{1}{n+4}$$ Also, I know that the following series is called the Harmonic Series and can be shown to diverge by over-estimating grouped terms (essentially at a point it is worse than adding $1/2$ over and over). (Also, by Cauchy Condensation): $$\sum_{n=1}^{\infty}\dfrac{1}{n}$$ I tried to setup a comparison test, but I don't think it works since the Harmonic is always larger: $$0 \lt \dfrac{1}{n+4} \lt \dfrac{1}{n}$$ Is it legitimate to massage the index without affecting divergence/convergence? For example, The first few terms in the sequence of the series: $$\sum_{n=2}^{\infty}\dfrac{1}{n+4}$$ Are $$\dfrac{1}{6},\dfrac{1}{7},\dfrac{1}{8},\dfrac{1}{9},...$$ Which looks just like the Harmonic just starting off at a different place. So can I somehow change $$\sum_{n=2}^{\infty}\dfrac{1}{n+4}$$ to $$\sum_{n=-3}^{\infty}\dfrac{1}{n+4}$$ to show it diverges? Or am I just way off track here?
Notice that by changing the index we have $$\sum_{n=2}^\infty\frac1{n+4}=\sum_{n=6}^\infty\frac1n$$ so the series is divergent. Notice also that the nature of a series doesn't depend on the first few terms which means that the two series $\sum\limits_{n\ge1}u_n$ and $\sum\limits_{n\ge n_0}u_n$ (for any $n_0$) have the same nature (both convergent or both divergent).
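Numerically, the partial sums display the expected logarithmic growth (this illustrates divergence, it does not prove it):

```python
import math

def partial_sum(N):
    """Sum of 1/(n+4) for n = 2 .. N, i.e. sum of 1/k for k = 6 .. N+4."""
    return sum(1.0 / (n + 4) for n in range(2, N + 1))

s_thousand = partial_sum(10**3)
s_million = partial_sum(10**6)
# The gap between the two partial sums is close to ln(10**6 / 10**3):
print(s_million - s_thousand, math.log(1000))
```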
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is there a solution for $y$ for $\frac{dy}{dx} = axe^{by}$ I have come up with the equation $$\frac{dy}{dx} = axe^{by},$$ where $a$ and $b$ are arbitrary real numbers, for a project I am working on. I want to be able to integrate and differentiate it if possible. Does anyone know of a solution for $y$ and/or $\frac{d^2 y}{dx^2}$?
I think there are some scenarios to consider if $a$ and/or $b$ is equal to zero. Case 1: $a=0$ Then $$\frac{dy}{dx} = 0 \implies y = C$$ Case 2: $a \neq 0, b=0$. $$\frac{dy}{dx} = ax \implies y = \frac{ax^2}{2}+C$$ Case 3: $a \neq 0 \neq b$. Then $$\frac{dy}{dx} = axe^{by} \implies e^{-by}dy = axdx \\ \implies \int e^{-by}dy = \int axdx \\ \implies \frac{-1}{b}e^{-by} = \frac{ax^2}{2}+C \\ \implies e^{-by} = -\frac{abx^2}{2}+\tilde{C} \\ \implies y = \frac{-1}{b}\ln\left(-\frac{abx^2}{2}+\tilde{C} \right)$$ (multiplying through by $-b$ absorbs the constant into $\tilde{C} = -bC$). It would probably be wise to look at the solution of $y$ in case $3$ to figure out if there are other constraints on the constants $a,b,\tilde{C}$. For example, if $a>0, b<0, \tilde{C}<0$ then for $x$ near $0$ the argument $-\frac{abx^2}{2}+\tilde{C}$ is negative, so you would be taking the natural log of a negative number.
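One can sanity-check the Case 3 formula by plugging it back into the ODE numerically (the constants below are illustrative, chosen so the argument of the logarithm stays positive):

```python
import math

# Case 3 solution y(x) = -(1/b) * ln(-a*b*x^2/2 + C) with sample constants.
a, b, C = 1.0, -1.0, 1.0

def y(x):
    return -math.log(-a * b * x**2 / 2 + C) / b

def residual(x, h=1e-6):
    """y'(x) - a*x*exp(b*y(x)), estimated with a central difference; should be ~0."""
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy - a * x * math.exp(b * y(x))

for x in (0.3, 1.0, 2.5):
    assert abs(residual(x)) < 1e-6
print("y(x) satisfies y' = a*x*exp(b*y) at the sampled points")
```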
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Why is Hawaiian earring not semilocally simply connected? Let $H$ denote the Hawaiian earring: We defined a space $X$ to be semilocally simply connected if every point in $X$ has a nbhd. $U$ for which the homomorphism from the fundamental group of $U$ to the fundamental group of $X$, induced by the inclusion map, is trivial. I'm looking for intuition on why $H$ is not semilocally simply connected.
Consider any neighborhood of the point where the circles accumulate. At least one of the circles is completely contained in that neighborhood. A loop going around that circle is a non-trivial element of the fundamental group of $U$ and it gets mapped by the inclusion to a non-trivial loop in $X$. To show that these loops are non-trivial, consider the image of such a loop under the map $X\to S^1$ that collapses all but the circle in question to a single point. This image is the loop that goes around $S^1$ once, which is non-trivial in $\pi_1(S^1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Integer solutions of the following equation How many integer solutions are there to this equation: $$x_1 + x_2 + x_3 + x_4 + x_5 = 63, \quad x_i \ge 0, \quad x_2 \ge 10$$ I got $C(56,3)$. Is that correct?
Substitute $y_1 = x_1 + 1, y_2 = x_2 - 9, y_3 = x_3 + 1, y_4 = x_4 + 1, y_5 = x_5 + 1$. This yields the equivalent problem $$ (y_1 - 1) + (y_2 + 9) + (y_3 - 1) + (y_4 - 1) + (y_5 - 1) = 63; y_i \ge 1 $$ i.e. $$ y_1 + y_2 + y_3 + \cdots + y_5 = 58 $$ By a classic stars-and-bars argument (writing $58$ in unary) the answer is then $$ {58 - 1 \choose 5 - 1} = {57 \choose 4}. $$
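The shifting trick is easy to validate by brute force on a smaller instance (the small instance is made up for checking purposes; the last line applies the same count to the original problem):

```python
from itertools import product
from math import comb

# Small analogue: x1 + x2 + x3 = 10, xi >= 0, x2 >= 3.
brute = sum(
    1
    for x1, x2, x3 in product(range(11), repeat=3)
    if x1 + x2 + x3 == 10 and x2 >= 3
)
# Shift x2 by 3: nonnegative solutions of y1 + y2 + y3 = 7, i.e. C(9, 2).
assert brute == comb(9, 2) == 36

# The original problem by the same shift (x2 >= 10, total 63, 5 variables):
print(comb(57, 4))  # 395010
```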
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Separable diff. eqn: $(1+x^2)y' = x^2y^2, x > 0$ I have been given a step-by-step answer which I just cannot understand or follow. $\begin{eqnarray} &(1+x^2)y' &= x^2y^2 + y\cdot1 \\ \iff& \frac{1}{y^2} &= \frac{x^2}{1+x^2} \end{eqnarray}$ From there on it's a matter of integrating and using an initial value I was given, steps I understand. * *What is going on in the first step when a term $y\cdot1$ was added from nowhere? *I have no clue how he got from step 1 to step 2. What I do know: Separable functions of the form $G(y(x))y' = h(x)$ can be rewritten $D(G(y(x))) = h(x)$ and I suspect something like that happens from step 1 to step 2; I just do not see it.
First, if you differentiate $G(y(x))$ with respect to $x$ you get $\cfrac {dy}{dx} \cdot \cfrac {dG}{dy}$ So if you have an equation $G(y)=G(y(x))=F(x)$ and differentiate it, you get$$y'\frac {dG}{dy}=\frac {dF}{dx}$$ Going into reverse, if you have $y'g(y)=f(x)$ then you can put $G(y)=F(x)+c$ where $G(y)=\int g(y)dy$ and $F(x)=\int f(x) dx$ I'm not sure that this is quite what you have stated as what you know, because you don't seem to have differentiated your function $G$ with respect to $y$ - just multiplied by $y'$ Now your first target is therefore to put the expression into the form $y'g(y)=f(x)$. The equation is separable if you have that factor $y'$. The first problem is that the $y\cdot 1$ term simply looks, as others have said, like a typo. So let's ignore that. It is a simple manipulation to separate the terms - divide through by $y^2$ and $1+x^2$ to obtain $$y'\cdot \frac 1{y^2} = \frac {x^2}{1+x^2}$$Note that in the question the $y'$ which needs to be there has been dropped. Once this is in place each side can be separately integrated, as you have noted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Do 'symmetric integers' have some other name? $-1 \cdot -1 = +1$, but there seems to me to be no reason we couldn't define a number system where negative numbers and positive numbers were completely symmetric. Where: $$1 \cdot 1 = 1$$ $$-1 \cdot -1 = -1$$ I understand that in order to do this, multiplication could no longer be commutative and we'd have to decide what the result of $1 \cdot -1$ should be. I think we could choose that the resulting sign of a multiplication is the sign of the second term, so: $$1 \cdot -1 = -1$$ $$-1 \cdot 1 = 1$$ or more generally, the sign of any multiplication is determined by the sign of the second term. But where they otherwise behave roughly as expected, i.e. $1 - 2 = -1$. Some other consequences I'm aware of: $$\sqrt{-1} = -1$$ $$\sqrt{1} = 1$$ $f(x) = x^2$ would behave in a way that can only be described piecewise in the normal reals, as $x^2$ when $x \geq 0$ and $-(x^2)$ when $x < 0$. Is there already research on, or another name for, such a number system? Or perhaps is there a ring that matches this? After looking at the properties of a ring on http://en.wikipedia.org/wiki/Ring_%28mathematics%29, what I've described cannot be a ring since it does not have a multiplicative identity. There is no element $i_m$ such that $a \cdot i_m = a$ and $i_m \cdot a = a$, since multiplying by $1$ in the system I've described may change the sign of $a$ to be positive.
Note that elements of a ring which satisfy $x^2=x$ are called idempotents, and these become important in some contexts - for example a square matrix with $1$ as the top left entry and zero everywhere else is a non-trivial idempotent. Such things become significant, for example, in representation theory. [note $0,1$ are trivial idempotents and in characteristic $2$ you have $-1=1$ being a trivial idempotent - I'm addressing other situations] One issue with idempotents is that if they are invertible, then you trivially have $x=1$ simply by multiplying through by the inverse. Your $-1$ is ambiguous, because it is unclear whether you imagine it to be invertible. Algebraic structures where $ij=-ji$ for significant elements are also important. The Quaternions are a significant example. Note that (provided your system is associative) you have $i^2j=ji^2$ so some products don't have the minus sign. You can have $i,j$ invertible in this kind of system, but then they can't be idempotent. So there are mathematical structures of significance which capture the ideas in your post, but there are some challenges to having everything you want at once, and also a big question of what use it might be if you did.
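For a concrete feel of the $ij=-ji$ behaviour mentioned above, here is a minimal Hamilton-product sketch for quaternions (tuples are $(w,x,y,z)$ for $w + xi + yj + zk$):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)   # ij = -ji: multiplication is not commutative
assert qmul(i, i) == (-1, 0, 0, 0)   # i is invertible (i^(-1) = -i) but not idempotent
print("ij = -ji verified")
```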
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
Finding determinant of $n \times n$ matrix I need to find a determinant of the matrix: $$ A = \begin{pmatrix} 1 & 2 & 3 & \cdot & \cdot & \cdot & n \\ x & 1 & 2 & 3 & \cdot & \cdot & n-1 \\ x & x & 1 & 2 & 3 & \cdot & n-2 \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ x & x & \cdot & \cdot & x & 1 & 2 \\ x & x & \cdot & \cdot & \cdot & x & 1 \\ \end{pmatrix} $$ We know that $x \in R$ So far I managed to transform it to the form: $$ \begin{pmatrix} 1-x & 1 & 1 & \cdot & \cdot & \cdot & 1 \\ 0 & 1-x & 1 & 1 & \cdot & \cdot & 1 \\ 0 & 0 & 1-x & 1 & 1 & \cdot & 1 \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & \cdot & \cdot & 0 & 1-x & 1 \\ x & x & \cdot & \cdot & \cdot & x & 1 \\ \end{pmatrix} $$ by the operations: (Let's say $r_i$ is the ith row) $$r_1 = r_1 - r_n,r_2 = r_2-r_n, r_3 = r_3 - r_n, ..., r_{n-1} = r_{n-1} - r_n$$ and then $$r_1 = r_1 - r_2, r_2 = r_2 - r_3, r_3 = r_3 - r_4,...,r_{n-2} = r_{n-2} - r_{n-1}$$ Unfortunately, I have no idea how to eliminate the last row. Any hints?
Multiply the last row by $\frac{1-x}{x}$; this means that the determinant you want will be the determinant of the changed matrix times $-\frac{x}{x-1}$. Now subtract $r_1$ from $r_n$, leaving $$r_n = (0, -x, -x, -x, \cdots, -x, \frac{(x-1)^2 - x^2}{x}) $$ where I have intentionally written $$ \frac{1-x}{x} -1 = \frac{-2x+1}{x} = \frac{(x-1)^2 - x^2}{x} $$ Now we have $0$ in column $1$ of the last row. For each remaining column $j$ up to column $n-1$, multiply the last row by $\frac{x-1}{x}$ (giving another power of $\frac{x}{x-1}$ in the factor before the changed matrix), at which point you can eliminate the $(1-x)$ in column $j$ of the last row by subtracting the $j$-th row. When you do this, the last element of the $n$-th row changes to $$ \frac{(x-1)^{j}- x^{j}}{x^{j-1} } \frac{x-1}{x} - 1 = \frac{(x-1)^{j+1}-x^{j}(x-1) -x^{j}}{x^{j}} = \frac{(x-1)^{j+1}-x^{j+1}}{x^{j}} $$ and the process repeats for the next $j$. In the end, the final term in $A_{nn}$ involves $$ \frac{(x-1)^n -x^n}{x^{n-1}}$$ and a lot of cancellation with the accumulated factors happens, leaving the answer $$ (-1)^n \left( (x-1)^n - x^n \right)$$
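A quick numerical sanity check of the closed form (my addition, not part of the original answer): build $A$ for a few values of $n$ and $x$ and compare its determinant with $(-1)^n\left((x-1)^n - x^n\right)$.

```python
import numpy as np

def build_matrix(n, x):
    """A[i][j] = j - i + 1 on/above the diagonal, x below it (0-indexed)."""
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = (j - i + 1) if j >= i else x
    return A

def closed_form(n, x):
    return (-1) ** n * ((x - 1) ** n - x ** n)

for n in (2, 3, 4, 5, 6):
    for x in (-1.5, 0.5, 2.0, 3.0):
        det = np.linalg.det(build_matrix(n, x))
        assert abs(det - closed_form(n, x)) < 1e-8 * max(1.0, abs(det))
print("formula verified")
```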
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
If M is a manifold of dimension $ n \neq 0$ then M has no isolated points. I am in doubt whether the following statement is true or false: "If M is a manifold of dimension $ n \neq 0$ then M has no isolated points." The idea that led me to believe the statement is true was as follows: If $ p \in M $ is an isolated point, consider a chart $ x: U \rightarrow \mathbb{R}^n $ where $ U $ is open in M and $p \in U $. Since $ x $ is a homeomorphism and $ \{p \} $ is open, we have that $ \{x(p)\} $ is open in $ \mathbb{R}^n $, but this is only possible if $ n = 0$, a contradiction. Is that correct?
This was given as a comment, but the question needs closure, so I will answer it. Yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Finding total after percentage has been used? Tried my best with the title. Ok, earlier while I was on break from work (I have low level math, and want to be more fond of mathematics) I work retail, and 20% of taxes are taken out, and I am wanting to find out how much I made before 20% is taken out. So I did some scribbling, and this is without googling so please be gentle. But let's say I make 2000 dollars per paycheck, so to see how much money I get after taxes I do 2000 * .8 which gives me 1600. I can also do 2000 * .2 which gives me 400 and then do 2000 - 400 (imo I don't like this way since I have an extra unnecessary step) Anyways, I was pondering how to reverse that to have the answer be 2000, and what I did was 1600 * 1.20 (120%) which gave me 1920, and I thought that is odd so I did 1600 * 1.25 and the answer was 2000 exact. My question is why did I have to add an extra 5% (25%) to get my answer? I am sure I did something wrong, and I am fairly confident the formula I am using is a big no no. edit; wow thank you all for your detailed answers. I am starting to like mathematics more and more.
$80\% = 80/100$. Since we've multiplied this into the gross pay to get the net pay, we have to do the opposite, which is to say divide by this, to go from net pay to gross pay. Dividing by $a/b$ is the same as multiplying by $b/a$, so you have to multiply your net pay by $100/80=5/4=125/100$ to get your gross pay back.
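A tiny sketch of both directions (my addition, not part of the original answer), using the $100/80 = 1.25$ relationship described above:

```python
TAX_RATE = 0.20  # 20% withheld

def net_from_gross(gross):
    return gross * (1 - TAX_RATE)   # multiply by 0.8

def gross_from_net(net):
    return net / (1 - TAX_RATE)     # divide by 0.8, i.e. multiply by 1.25

print(net_from_gross(2000))   # 1600.0
print(gross_from_net(1600))   # 2000.0
```

Multiplying by 1.20 instead gives 1920 because 20% of the *net* pay (320) is smaller than 20% of the *gross* pay (400).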
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 3 }
2nd order differential equation with missing y' I have the following 2nd order differential equation: $$y'' + p(x) y =0, \tag{1}$$ where $p(x)$ is of first order in $x$, for example $p(x)=ax+b$. Any suggestion on how to obtain or guess a solution for (1)? Thanks.
This does not seem (for me) to be so easy to solve in a nice way. I suppose there might be a substitution, but I can't think of any. This, however, can most certainly be solved by assuming that $y(x)$ is analytic on a reasonable domain, and so with that assumption we can write out $y(x)$ as a power series and $y''(x)$ as its second derivative. $$y(x)=a_0 + a_1x + a_2x^2 + ...$$ $$y''(x)=2a_2+6a_3x+12a_4x^2+...$$ $$p(x)=ax+b$$ And now we solve: $$\left(2a_2+6a_3x+12a_4x^2+...\right)+\left(ax+b\right)\left(a_0 + a_1x + a_2x^2 + ...\right)=0$$ $$\left(2a_2+ba_0\right)+\left(6a_3+aa_0+ba_1\right)x+\left(12a_4+aa_1+ba_2\right)x^2+\left(20a_5+aa_2+ba_3\right)x^3+...=0$$ Now we have the following equations to solve: $$\begin{align} 2a_2+ba_0&=0\\ 6a_3+aa_0+ba_1&=0\\ 12a_4+aa_1+ba_2&=0\\ 20a_5+aa_2+ba_3&=0\\ \vdots\\ (n)(n-1)a_n+aa_{n-3}+ba_{n-2}&=0\\ \vdots \end{align} $$ And if you confidently plow through those equations, you will arrive at your general analytic solution for $y(x)$ in terms of $a$, $b$, and two particular $a_j$'s of your choice.
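A sketch of the recurrence in code (my addition; the values of $a$, $b$ and the two free coefficients $a_0$, $a_1$ are chosen arbitrarily for illustration). It builds the series coefficients from $a_n = -\frac{a\,a_{n-3} + b\,a_{n-2}}{n(n-1)}$ and checks that the truncated series nearly satisfies $y'' + (ax+b)y = 0$ at a sample point:

```python
a, b = 1.0, 2.0    # p(x) = a*x + b (arbitrary illustrative values)
a0, a1 = 1.0, 0.0  # the two free coefficients
N = 30             # truncation order

# coef[n] = a_n, with the convention a_{-1} = 0
coef = [a0, a1]
for n in range(2, N + 1):
    am3 = coef[n - 3] if n >= 3 else 0.0
    coef.append(-(a * am3 + b * coef[n - 2]) / (n * (n - 1)))

def y(x):
    return sum(c * x**k for k, c in enumerate(coef))

def ypp(x):
    return sum(k * (k - 1) * c * x**(k - 2) for k, c in enumerate(coef) if k >= 2)

# The residual should vanish up to truncation error.
x = 0.3
residual = ypp(x) + (a * x + b) * y(x)
print(abs(residual) < 1e-10)  # True
```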
{ "language": "en", "url": "https://math.stackexchange.com/questions/1104944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to transform $ \int_a^b f(x^2+y(x)^2)\sqrt{1+y'(x)^2}\;dx$ into polar coordinates I have the following homework problem (from a Calculus of Variations course): Show that if in $$ \min \int_a^b f(x^2+y(x)^2)\sqrt{1+y'(x)^2}\;dx$$ polar coordinates are used, then the problem will be converted into one that contains no independent variable. Solve it to optimality. I'm having a few problems converting the integral into one with polar coordinates in it. For example, I'm not quite sure how to write $y'(x)$ in polar coordinates. The thing that confuses me is that if $y(x)=y(r\cos \theta)$, then should $y'$ be calculated with respect to $r$ or $\theta$? Could someone show how the transformation into polar coordinates would be done in this example? Hope my question is clear =) thank you for the help! Please let me know if you need more information.
Suppose the graph of the given curve $x \mapsto y(x)$, $a \leq x \leq b$, can be written as a polar curve, say, as $\theta \mapsto r(\theta)$, $\alpha \leq \theta \leq \beta$. Now, note that $$\sqrt{1 + y'(x)^2} dx$$ is just the arc length element (mnemonically, this is $\sqrt{\left[1 + \left(\frac{dy}{dx}\right)^2\right] \cdot dx^2} = \sqrt{dx^2 + dy^2}$), which, as some elementary geometry shows, is $$\sqrt{r^2 + r'(\theta)^2} d\theta$$ in polar coordinates. Finally, $x^2 + y^2 = r^2$, so the given integral can be written as $$\int_{\alpha}^{\beta} f(r(\theta)^2) \sqrt{r(\theta)^2 + r'(\theta)^2} d\theta,$$ which indeed doesn't explicitly involve the independent variable. Remark: Note that if $f$ is a suitable power function and we regard $r(\theta)$ (or $y(x)$) as tracing out the shape of a wire with unit mass per length, then the integral is the corresponding moment of the wire about the origin.
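A numerical sanity check of the two arc length elements (my addition, not part of the original answer), on a curve that has both descriptions: the horizontal segment $y(x) = 1$, $0 \le x \le 1$, which in polar coordinates is $r(\theta) = 1/\sin\theta$, $\pi/4 \le \theta \le \pi/2$. Both integrals should give the segment's length, $1$.

```python
import math

def integrate(f, lo, hi, n=200_000):
    """Plain midpoint rule, good enough for a sanity check."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

# Cartesian element: sqrt(1 + y'(x)^2) with y'(x) = 0 for y(x) = 1.
len_cartesian = integrate(lambda x: math.sqrt(1 + 0.0**2), 0.0, 1.0)

# Polar element: sqrt(r^2 + r'(theta)^2) with r = 1/sin(theta).
def polar_element(theta):
    r = 1 / math.sin(theta)
    dr = -math.cos(theta) / math.sin(theta) ** 2
    return math.sqrt(r * r + dr * dr)

len_polar = integrate(polar_element, math.pi / 4, math.pi / 2)

print(len_cartesian, len_polar)  # both close to 1.0
assert abs(len_cartesian - len_polar) < 1e-6
```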
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Specific treatment for the first and last element of a sequence in a function? Let $A = \langle a_1,\dots,a_n \rangle$ be a sequence. I have a function that, given any index $k$, returns $a_{k-1}+a_k+a_{k+1}$, with the exception of the first and last elements: it returns $a_k + a_{k+1}$ for the first element and $a_k + a_{k-1}$ for the last one: $ f(k) = \left\{ \begin{array}{l l} a_k + a_{k+1} & \quad \text{if $k=1$}\\ a_k + a_{k-1} & \quad \text{if $k=n$}\\ a_{k-1}+a_k+a_{k+1} & \quad \text{otherwise} \end{array} \right.$ Is there any better (more compact) way of writing this function?
I'd write this as For $1\le k\le n$ let $f(k)=a_{k-1}+a_k+a_{k+1}$, with the convention that $a_0=a_{n+1}=0$. Note that this formulation even covers the case $n=1$ correctly (which the cases-statement does not).
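A sketch of this convention in code (my addition, not part of the original answer): pad the sequence with zeros so that one formula covers every index, including the edge cases.

```python
def f(A, k):
    """a_{k-1} + a_k + a_{k+1}, 1-indexed, with the convention a_0 = a_{n+1} = 0."""
    padded = [0] + list(A) + [0]  # padded[k] == a_k for 1 <= k <= n
    return padded[k - 1] + padded[k] + padded[k + 1]

A = [5, 1, 4, 2]
print([f(A, k) for k in range(1, len(A) + 1)])  # [6, 10, 7, 6]
print(f([7], 1))  # 7 -- the n = 1 case is handled correctly too
```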
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }