Probability of extracting a ball after two balls were swapped We have $2$ boxes: the first one contains $10$ white balls and $11$ black balls; the second box contains $12$ white balls and $13$ black balls. We swap two balls between the boxes, then we extract a ball from the first box. What is the probability that the ball is white? A swap consists of taking a ball from the first box and putting it in the second one, then taking a ball from the second box and putting it in the first one. $$P(\text{white ball from box1})=\frac{10}{21}$$ $$P(\text{black ball from box1})=\frac{11}{21}$$ $$P(\text{white ball from box2})= \frac{12+1}{26}\frac{10}{21}+\frac{12}{26}\frac{11}{21}=\frac{13\cdot10+12\cdot11}{26\cdot 21}$$ $$P(\text{black ball from box2})= \frac{13}{26}\frac{10}{21}+\frac{13+1}{26}\frac{11}{21}=\frac{13\cdot 10+14\cdot 11}{26\cdot 21}$$ $$P(\text{white ball from box1 after swap})=\frac{10+1}{22}\cdot\frac{13\cdot10+12\cdot11}{26\cdot 21}+\frac{10}{22}\cdot\frac{13\cdot 10+14\cdot 11}{26\cdot 21}$$ $$=\frac{2882}{12012}+\frac{2840}{12012}=\frac{5722}{12012}=0.4763$$ Have I done it correctly? I don't know how to verify my work on this kind of problem. Also, is there a trick to find the probability if there were two swaps?
Here’s another approach. [Added: Also, see more on this approach here] Equivalently, we can do the following. Reach into the first box and write “S” on one ball (the one to swap). Now choose a ball from the first box. If it is not the “S” ball, that’s your ball. The probability of this happening is $20\over21$, and the ball you choose will be white with probability $10\over21$. If you do choose the “S” ball (you do this with probability $1\over21$), discard it by throwing it into the second box and then choose a ball at random from the second box (now containing an extra ball), and that’s your ball. If you had to do this, the probability of a white result is ${12+{10\over21}\over26}$, because if we add a randomly-chosen ball from box 1 to box 2, the number of white balls in box 2 effectively increases from $12$ to $12+{10\over21}$ and the number of balls in box 2 increases to $26$. Therefore the total probability you want is $$p = {20\over21}\cdot{10\over21}+{1\over21}\cdot{12+{10\over21}\over26}={2731\over5733}\approx 0.4763649.$$ P.S. I don’t see an easy way to adapt this approach for two swaps.
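Since the asker wonders how to verify this kind of problem, a quick Monte Carlo simulation (my addition, not part of the original answer) can check the closed form $\frac{2731}{5733}\approx 0.47636$ directly:

```python
import random
from fractions import Fraction

def simulate(trials, seed=0):
    # simulate one swap and the final draw, exactly as described in the question
    rng = random.Random(seed)
    white = 0
    for _ in range(trials):
        box1 = ["W"] * 10 + ["B"] * 11                    # 21 balls
        box2 = ["W"] * 12 + ["B"] * 13                    # 25 balls
        box2.append(box1.pop(rng.randrange(len(box1))))   # box1 -> box2
        box1.append(box2.pop(rng.randrange(len(box2))))   # box2 -> box1
        if box1[rng.randrange(len(box1))] == "W":
            white += 1
    return white / trials

exact = Fraction(2731, 5733)   # the value derived in the answer above
estimate = simulate(100_000)
```

The same loop, with the swap block repeated twice, also answers the two-swap variant numerically.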
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Prove that $6^n\equiv 6^{n+5} \pmod{100}$ How can we prove that $6^n\equiv 6^{n+5} \pmod{100}$? I tried writing $6^{n+5}=7776 \cdot 6^n \equiv 76 \cdot 6^n \pmod{100}$, but this approach does not lead to the above result.
$\color{#c00}{a\mid 1\!+\!b,\ n\geq 2}\,\Rightarrow\, (\color{#c00}a\,\color{#0a0}{b})^{\large 2}\!\mid (1\!+\!b)^{\large n+b}\!-{(1\!+\!b)^{\large n}} = \overbrace{(\color{#c00}{1\!+\!b})^{\large\color{#c00} n}}^{\Large \color{#c00}{a^{\Large 2}}\,(\cdots)}\,\underbrace{\overbrace{((1\!+\!b)^{\large b} - 1)}^{\Large \color{#0a0}{b^{\Large 2}}(\cdots)}}_{\rm{ Binomial\ Theorem}}.\ $ Put $\,a=2,\,b=5$.
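A brute-force check (my addition) confirms the congruence for $n\ge 2$ — note the $n\ge 2$ hypothesis in the answer above is needed, since $n=1$ is a genuine exception:

```python
# 6^n mod 100 cycles with period 5 from n = 2 onward
ok = all(pow(6, n, 100) == pow(6, n + 5, 100) for n in range(2, 500))
exception = (pow(6, 1, 100), pow(6, 6, 100))   # n = 1 fails: 6 vs 56
```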
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Evaluate $\int x^x \ln x\, dx$ The integral $$\int x^x \ln x\, dx= ?$$ I know that the integral $\int x^x dx$ can be rewritten as $\int e^{x\ln x} dx$, and simplifying it requires some identity. What about the product in the integral $\int x^x\ln x\,dx=\int e^{x\ln x}\ln x\, dx$? Is there an identity that can be used for this one?
Define $g(x) = x^x$. Then $\ln g(x) = x\ln x$ and differentiating both sides $$\frac{g'(x)}{g(x)}=\ln x+1,$$ which means $g'(x) = x^x(\ln x + 1)$. Now, up to a constant $$x^x = \int g'(x)\,dx = \int x^x \ln x \,dx + \int x^x\,dx$$ thus $$\int x^x \ln x\,dx=x^x-\int x^x\,dx. $$
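The identity can be sanity-checked numerically on a definite interval, say $[1,2]$: the left side is $\int_1^2 x^x\ln x\,dx$ and the right side is $[x^x]_1^2-\int_1^2 x^x\,dx$. A sketch (my addition) using a hand-rolled Simpson rule:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b = 1.0, 2.0
lhs = simpson(lambda x: x ** x * math.log(x), a, b)
rhs = (b ** b - a ** a) - simpson(lambda x: x ** x, a, b)
```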
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Constraining the sum of Gaussian random variables? Suppose I have $n$ Gaussian random variables $X_i$ with $i=1,2,...,n$, each with zero mean $\mu=0$ and the same constant standard deviation $\sigma\neq 0$. I would like to constrain the elements collectively drawn from these distributions to satisfy $$\sum_{i=1}^nX_i=0$$ How should I modify the distributions to achieve this, while maintaining zero means for each distribution separately? In the case of $n=2$ the solution is obviously to restrict $X_2$ elements to $X_2=-X_1$ and let only $X_1$ be drawn independently. But what happens in the case $n>2$? EDIT: Considering the symmetry in the definition of all $X_i$ above, let us seek a solution which preserves this symmetry and keeps all $X_i$ identically distributed.
As pointed out by d.k.o. in a comment, $X_n=-\sum_{i=1}^{n-1}X_i$ represents one such solution. However, it is not homogeneous in the treatment of each $X_i$. It turns out this can easily be fixed by considering all possible cases $$X_j=-\sum_{{{i=1},{i\neq j}}}^nX_i$$ instead, and defining the new random variables $X'_j$ as superpositions of all the above combinations of the old ones: $$X'_j\equiv (n-1)X_j-\sum_{{{i=1},{i\neq j}}}^nX_i$$ This is now completely symmetric in all $X'_j$. The explicit construction is as follows. To obtain a collective element $(x'_1,x'_2,...,x'_n)$ we first draw a particular collective element $(x_1,x_2,...,x_n)$ from the unrestricted distributions and then literally calculate $$x'_j\equiv (n-1)x_j-\sum_{{{i=1},{i\neq j}}}^nx_i$$ for all $j$. If needed, one could figure out the new standard deviations from this answer to a different question.
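A small simulation (my addition; note that $(n-1)X_j-\sum_{i\ne j}X_i = nX_j-\sum_i X_i$, i.e. $n$ times the centered variable) confirms that the constructed samples sum to zero while each coordinate keeps mean zero:

```python
import random

def constrained_sample(n, sigma, rng):
    x = [rng.gauss(0.0, sigma) for _ in range(n)]
    s = sum(x)
    # x'_j = (n-1) x_j - sum_{i != j} x_i = n x_j - s
    return [n * xi - s for xi in x]

rng = random.Random(1)
samples = [constrained_sample(5, 1.0, rng) for _ in range(10_000)]
max_abs_sum = max(abs(sum(s)) for s in samples)              # should be ~0
mean_first = sum(s[0] for s in samples) / len(samples)       # should be ~0
```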
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
General solution to linear system with general form I am trying to find the general solution of the system $$X' = \begin{bmatrix} a & b \\ c & d \end{bmatrix}X$$ where $a+d \not= 0 $ and $ad-bc=0$ I find that the eigenvalues are 0 and $a+d$ with corresponding eigenvectors both $[0,0]$ which implies that the general solution is simply $X(t)=0$ but this solution does not seem right to me. Is there something I have done wrong?
We are given $$X' = \begin{bmatrix} a & b \\ c & d \end{bmatrix}X\\\text{where}~ ~~a+d \ne 0 , ~~ad-bc=0$$ We can find the eigenvalues using the characteristic polynomial by solving $|A - \lambda I| = 0$, yielding $$\lambda_{1,2} = \frac{1}{2} \left(-\sqrt{a^2-2 a d+4 b c+d^2}+a+d\right),\frac{1}{2} \left(\sqrt{a^2-2 a d+4 b c+d^2}+a+d\right)$$ We can then find the associated eigenvectors by solving $[A-\lambda_i]v_i = 0$, yielding $$v_1 = \begin{pmatrix} -\dfrac{\sqrt{a^2-2 a d+4 b c+d^2}-a+d}{2 c}\\1 \end{pmatrix}, v_2 = \begin{pmatrix} -\dfrac{-\sqrt{a^2-2 a d+4 b c+d^2}-a+d}{2 c}\\1\end{pmatrix}$$ Now, we can use the conditions we are given, namely $a+d \not= 0 ,ad-bc=0 $, by substituting $bc = ad$ in each eigenvalue/eigenvector pair yielding $$\lambda_1 = 0, v_1 = \begin{pmatrix} -\dfrac{d}{c} \\ 1 \end{pmatrix} \\ \lambda_2 = a + d , v_2 = \begin{pmatrix} \dfrac{a}{c} \\ 1 \end{pmatrix}$$ We can now write $$X(t) = c_1~ e^{\lambda_1 t} ~v_1 + c_2 ~e^{\lambda_2 t}~ v_2 = c_1 \begin{pmatrix} -\dfrac{d}{c} \\ 1 \end{pmatrix} + c_2~ e^{(a+d)t} \begin{pmatrix} \dfrac{a}{c} \\ 1 \end{pmatrix}$$
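A concrete check with exact rational arithmetic (my own example, taking $a=2$, $b=1$, $c=6$, $d=3$, so that $ad=bc$ and $a+d=5$) confirms the two eigenpairs:

```python
from fractions import Fraction as F

a, b, c, d = F(2), F(1), F(6), F(3)   # ad = bc = 6, a + d = 5

def matvec(v):
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

v1 = (-d / c, F(1))   # claimed eigenvector for eigenvalue 0
v2 = (a / c, F(1))    # claimed eigenvector for eigenvalue a + d

Av1 = matvec(v1)
Av2 = matvec(v2)
lam2 = a + d
```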
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Invertible Matrices within a Matrix Suppose A, B are invertible matrices of the same size. Show that $$M = \begin{bmatrix} 0& A\\ B& 0\end{bmatrix}$$ is invertible. I don't understand how I could show this. I have learned about linear combinations and spanning in my college class, but I don't know how that would help in this case.
Since you mentioned linear combinations, you can also follow that path and prove that all the matrix rows are linearly independent, which is equivalent to invertibility. For that, let $a_i$ and $b_i$ be the $i$-th rows of $A$ and $B$, respectively. Now assume that we have a linear combination of the rows of $M$. These rows have the form $(0|a_i)$ (some zeros followed by a row of $A$) or $(b_i|0)$ (a row of $B$ followed by some zeros). A linear combination of the rows of $M$ is therefore written as $$ \sum_i \alpha_i(0|a_i) + \sum_i \beta_i(b_i|0) $$ where $\alpha_i$ and $\beta_i$ are some scalar coefficients. Assume the sum above is the null vector, and let's prove that the coefficients must be zero. Now since the number of added zeros in the vectors $(0|a_i)$ and $(b_i|0)$ is exactly the size of $A$ and $B$, the above sum can be written as $$ (\sum_i \alpha_i a_i | \sum_i \beta_i b_i) $$ This is the null vector only when we have both $$ \sum_i \alpha_i a_i = 0 \qquad \sum_i \beta_i b_i = 0 $$ Since $A$ and $B$ are invertible, their rows are linearly independent, so in the sums above we must have $\alpha_i=0$ and $\beta_i=0$, for all $i$. QED.
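A complementary observation (my addition, not part of this answer): the inverse can also be written down explicitly as $M^{-1}=\begin{bmatrix}0&B^{-1}\\A^{-1}&0\end{bmatrix}$, which a small exact-arithmetic example verifies:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def blocks(P, Q, R, S):
    # assemble [[P, Q], [R, S]] into a 4x4 matrix
    return [P[i] + Q[i] for i in range(2)] + [R[i] + S[i] for i in range(2)]

A = [[F(1), F(2)], [F(3), F(5)]]   # det = -1, invertible
B = [[F(2), F(1)], [F(1), F(1)]]   # det = 1, invertible
Z = [[F(0), F(0)], [F(0), F(0)]]

M = blocks(Z, A, B, Z)
Minv = blocks(Z, inv2(B), inv2(A), Z)
product = matmul(M, Minv)          # should be the 4x4 identity
```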
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Show that $f$ is non-linear and $g$ is linear I have been given a basis $B$ of $V$ and a basis $C$ of $W$ with $$B = \{v_1,v_2,v_3\}$$ $$C = \{w_1,w_2\}$$ and the maps $f$ and $g$ with: $$f:\mathbb{V} \to \mathbb{W}, f(k_1v_1, k_2v_2, k_3v_3)=(3k_1+k_2)w_1+k_3^7w_2$$ $$g:\mathbb{V} \to \mathbb{W}, g(k_1v_1, k_2v_2, k_3v_3)=5k_1w_1+(2k_2+7k_3)w_2$$ I have to show that $f$ is non-linear and $g$ is linear, so I need to check homogeneity and additivity: $$f(kv) = kf(v)$$ and $$f(v+w) = f(v) + f(w)$$ must hold for linearity. I see that $f$ can't be linear because of the $k_3^7$, but I have no clue how to plug this basis into the definition of linearity for this example. I hope someone can show me. Greetings
This just means that you have (for $B$) three linearly independent vectors that span your vector space $V$. It is a more general way of writing vector spaces; you don't always have a standard basis (e.g. $\mathbb{R}^{2}=\operatorname{span}\{(1,0), (0,1)\}$). For your example you can handle this exactly the way you would if you had been given a standard basis, i.e.: $f$ is not linear: take vectors with coefficients $k_i$ and $l_i$, then $$f(k_{1}v_{1}+l_{1}v_1,k_{2}v_2+l_2v_2, k_{3}v_3+l_3v_3) = \big(3(k_1+l_1)+(k_2+l_2)\big) w_1 + (k_3+l_3)^7w_2, $$ and continue on from there.
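A coordinate-level check (my own illustration: represent a vector by its coefficient tuple in the basis $B$, and an image by its coefficients in $C$) makes the failure of additivity for $f$, and its success for $g$, concrete:

```python
def f(k):
    # coefficients of f(k1 v1 + k2 v2 + k3 v3) in the basis {w1, w2}
    return (3 * k[0] + k[1], k[2] ** 7)

def g(k):
    return (5 * k[0], 2 * k[1] + 7 * k[2])

def add(u, w):
    return tuple(a + b for a, b in zip(u, w))

v = (1.0, 2.0, 3.0)
w = (0.5, -1.0, 2.0)

f_additive = f(add(v, w)) == add(f(v), f(w))   # fails: (k3+l3)^7 != k3^7 + l3^7
g_additive = g(add(v, w)) == add(g(v), g(w))   # holds: g is linear in each k_i
```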
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
About Definition 4.5, Theorem 4.6, Theorem 4.8 in "Principles of Mathematical Analysis" by Walter Rudin. Definition 4.5 Suppose $X$ and $Y$ are metric spaces, $E \subset X, p \in E$ and $f$ maps $E$ into $Y$. Then $f$ is said to be continuous at $p$ if for every $\varepsilon > 0$ there exists a $\delta >0$ such that $$d_Y(f(x),f(p)) < \varepsilon$$ for all points $x \in E$ for which $d_X(x,p) < \delta$. Theorem 4.6 In the situation given in Definition 4.5, assume also that $p$ is a limit point of $E$. Then $f$ is continuous at $p$ if and only if $\lim_{x \to p} f(x) = f(p)$. Rudin didn't write Definition 4.5 as follows: Definition 4.5' Suppose $X$ and $Y$ are metric spaces, $p \in X$ and $f$ maps $X$ into $Y$. Then $f$ is said to be continuous at $p$ if for every $\varepsilon > 0$ there exists a $\delta >0$ such that $$d_Y(f(x),f(p)) < \varepsilon$$ for all points $x$ for which $d_X(x,p) < \delta$. And Rudin didn't write Theorem 4.6 as follows: Theorem 4.6' In the situation given in Definition 4.5', assume also that $p$ is a limit point of $X$. Then $f$ is continuous at $p$ if and only if $\lim_{x \to p} f(x) = f(p)$. And Rudin wrote Theorem 4.8 as follows: Theorem 4.8 A mapping $f$ of a metric space $X$ into a metric space $Y$ is continuous on $X$ if and only if $f^{-1}(V)$ is open in $X$ for every open set $V$ in $Y$. Why? Why is $E$ necessary in Definition 4.5, Theorem 4.6, Theorem 4.7? I think $E$ is redundant.
You are right here. He could simply discard the complement of $E$ in $X$ and view $f$ as a function from $E$ to $Y$; as IEm points out, $E$ is a metric space in its own right, using the metric inherited from $X$. (Rudin gives a related explanation in a later chapter.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum involving $\ln{(2)}$ I came across this sum. How can it be equal to $8\ln{(2)}$? $$\sum_{n=2}^{\infty}\frac{(-1)^n}{n}\left[\frac{35n-37}{(2n-1)(n-1)^2}+\frac{35n+37}{(2n+1)(n+1)^2}\right]=8\ln{(2)}$$ I have tried to expand out the sum, but it is too messy, and dealing with the sum in this form I haven't got any idea. Any help is appreciated.
First note that \begin{eqnarray*} &&\frac{1}{n}\left[\frac{35n-37}{(2n-1)(n-1)^2}+\frac{35n+37}{(2n+1)(n+1)^2}\right]\\ &=&\frac{74}{n}+\frac2{(n+1)^2}-\frac2{(n-1)^2}+\frac{41}{n+1}+\frac{41}{n-1}-\frac{156}{2n+1}-\frac{156}{2n-1} \end{eqnarray*} and \begin{eqnarray*} &&\sum_{n=2}^{\infty}\frac{(-1)^n}{n-1}=\ln2,\sum_{n=2}^{\infty}\frac{(-1)^n}{n}=1-\ln2,\sum_{n=2}^{\infty}\frac{(-1)^n}{n+1}=-\frac12+\ln2,\\ &&\sum_{n=2}^{\infty}\frac{(-1)^n}{2n-1}=\frac{4-\pi}{4},\sum_{n=2}^{\infty}\frac{(-1)^n}{2n+1}=\frac{-8+3\pi}{12},\\ &&\sum_{n=2}^{\infty}(-1)^n\bigg[\frac2{(n+1)^2}-\frac2{(n-1)^2}\bigg]\\ &=&2\sum_{n=1}^{\infty}\bigg[\frac1{(2n+1)^2}-\frac1{(2n-1)^2}\bigg]-2\sum_{n=2}^{\infty}\bigg[\frac1{(2n)^2}-\frac1{(2n-2)^2}\bigg]\\ &=&-2+\frac12=-\frac32. \end{eqnarray*} Then you can put them to give the answer.
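Both the partial-fraction decomposition and the final value can be checked numerically (a sanity check I have added, not part of the answer):

```python
import math

def term(n):
    return ((-1) ** n / n) * ((35 * n - 37) / ((2 * n - 1) * (n - 1) ** 2)
                              + (35 * n + 37) / ((2 * n + 1) * (n + 1) ** 2))

def decomposed(n):
    # the claimed partial-fraction form, including the (-1)^n factor
    return ((-1) ** n) * (74 / n + 2 / (n + 1) ** 2 - 2 / (n - 1) ** 2
                          + 41 / (n + 1) + 41 / (n - 1)
                          - 156 / (2 * n + 1) - 156 / (2 * n - 1))

max_gap = max(abs(term(n) - decomposed(n)) for n in range(2, 51))
partial = sum(term(n) for n in range(2, 20_001))   # terms decay like 35/n^3
target = 8 * math.log(2)
```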
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
For which integer $n$ is $28 + 101 + 2^n$ a perfect square? For which integer $n$ is $$28 + 101 + 2^n$$ a perfect square? Please also suggest an algorithm for solving similar problems. Thanks. By the way, this question has been taken from an Aryabhatta exam for 8th graders in India.
One way is to notice that $28+101+2^n=128+1+2^n=2^7+1+2^n=2^n+2\cdot2^6+1$. Therefore, if we let $n=12$ we have $2^{12}+2\cdot2^6+1=(2^6)^2+2\cdot2^6+1=(2^6+1)^2,$ which is a perfect square, as desired. This relies on noticing this specific pattern, so it doesn't lend itself to a general method. One could always guess and check.
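A brute-force search (my addition) confirms that $n=12$ is the only exponent in a sizable range; in fact odd $n$ is impossible, since then $129+2^n\equiv 2 \pmod 3$ and $2$ is not a square mod $3$:

```python
import math

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

solutions = [n for n in range(1, 61) if is_square(129 + 2 ** n)]
```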
{ "language": "en", "url": "https://math.stackexchange.com/questions/3094979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Can we find two non-congruent right triangles with whole-number lengths and congruent hypotenuses? I know some ways to find some Pythagorean triples. And I understand that if $a^2 + b^2 = c^2$ then $(a-b)^2 + (a+b)^2 = 2c^2$. I feel like that suggests a way forward, but I cannot find that way. Is there an algorithm to pick four whole numbers to serve as the legs of two right triangles such that the triangles are not congruent but the hypotenuses are the same whole number?
Let $(a_1,b_1,c_1)$ and $(a_2,b_2,c_2)$ be arbitrary non-proportional Pythagorean triples, and let $d:={\rm gcd}(c_1,c_2)$. Then $${c_2\over d}\left(a_1,b_1,c_1\right),\qquad{c_1\over d}\left(a_2,b_2,c_2\right)$$ are Pythagorean triples of noncongruent right triangles with the same hypotenuse $c={c_1c_2\over d}$.
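A minimal sketch of this construction (function name is mine), applied to $(3,4,5)$ and $(5,12,13)$:

```python
from math import gcd

def common_hypotenuse(t1, t2):
    # scale two non-proportional Pythagorean triples to a shared hypotenuse
    c1, c2 = t1[2], t2[2]
    d = gcd(c1, c2)
    return (tuple(v * (c2 // d) for v in t1),
            tuple(v * (c1 // d) for v in t2))

s1, s2 = common_hypotenuse((3, 4, 5), (5, 12, 13))
# gives (39, 52, 65) and (25, 60, 65): two non-congruent right triangles
# with integer legs and the same hypotenuse 65
```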
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
$\int_{0}^{1} t^2 \sqrt{(1+4t^2)}dt$ solve $$\int_{0}^{1} t^2 \sqrt{(1+4t^2)}dt$$ my attempt $$t = \frac{1}{2}\tan(u)$$ $$dt = \frac{1}{2}\sec^2(u)du\\$$ $$\begin{align} \int_{0}^{1} t^2 \sqrt{(1+4t^2)}dt&=\int_{0}^{1} \frac{\tan^2(u)}{4} \sqrt{1+\tan^2(u)}\frac{1}{2}\sec^2(u)du\\ &=\frac{1}{8}\int_{0}^{1} \tan^2(u)\sec^{3}(u)du\\ &= \frac{1}{8} \int_{0}^{1} (\sec^2(u) - 1)(\sec^{3}(u))du\\ &=\frac{1}{8}\int_{0}^{1} \sec^5(u)du - \frac{1}{8} \int_{0}^{1}\sec^3(u)du \end{align}$$ what now?
Hint. I would say $\int \sec^5u\,du = \int\frac{\cos u\,du}{\cos^6u}=\int\frac{\cos u\,du}{(1-\sin^2u)^3}$, and after the substitution $\sin u = v,\quad \cos u \,du = dv$ it becomes the integral of a rational function. Edit: I will add that the given integral is a so-called binomial integral, which has a specific solution method. See http://www.nabla.hr/CL-IndefIntegralB5.htm
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Simplifying $\frac{ (2x+1) (x-3) }{ 2x^3 (3-x) }$ In simplifying $$\frac{2x^2-5x-3}{6x^3-2x^4}$$ I got this far $$\frac{ (2x+1) (x-3) }{ 2x^3 (3-x) }$$ but there aren't same brackets to cancel out.
$$\dfrac{2x^2-5x-3}{6x^3-2x^4}=\dfrac{(2x+1)(x-3)}{2x^3(3-x)}=\dfrac{(2x+1)(x-3)}{2x^3[-(x-3)]}=\dfrac{(2x+1)(x-3)}{-2x^3(x-3)}=\dfrac{2x+1}{-2x^3}$$
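A quick exact-arithmetic spot check (my addition) that the simplified form agrees with the original wherever both are defined ($x \ne 0, 3$):

```python
from fractions import Fraction as F

def original(x):
    return (2 * x ** 2 - 5 * x - 3) / (6 * x ** 3 - 2 * x ** 4)

def simplified(x):
    return (2 * x + 1) / (-2 * x ** 3)

points = [F(1, 2), F(5), F(-3), F(7, 3)]   # avoid x = 0 and x = 3
agree = all(original(x) == simplified(x) for x in points)
```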
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A homeomorphism $T$ from the extended complex plane to itself preserving cross ratio is a Möbius map. Cross-ratio preserving means $(Ta,Tb,Tc,Td)=(a,b,c,d)$ where $(a,b,c,d)=\dfrac{(a-b)(c-d)}{(a-d)(c-b)}$. If we assume $T$ fixes infinity, can we prove $T$ is affine?
My answer needs two facts about Möbius transforms: * *They preserve cross ratios (this can be checked by a direct calculation). *A Möbius transform can map three arbitrary points to any other three arbitrary points (this is not completely obvious, but at least very intuitive since Möbius transforms have 3 independent complex degrees of freedom). You also need to know that for any fixed pairwise distinct $u, v, w \in \mathbb{C} \cup \{\infty\}$, the function $f : \mathbb{C} \cup \{\infty\} \to \mathbb{C} \cup \{\infty\}, f(x) = (u, v; w, x)$ is bijective (again an easy calculation). This means that the images of three distinct points $a, b, c$ under your map $T$ already completely determine it. However, there is also a Möbius transform $M$ that maps $a, b, c$ to $T(a), T(b), T(c)$ and, by virtue of being a Möbius transform, preserves cross-ratios. Therefore $M = T$. We can express $$M(z) = \frac{\alpha z + \beta}{\gamma z + \delta}$$ Suppose $M$ fixes $\infty$, i.e. $M(\infty) = \frac{\alpha \cdot \infty + \beta}{\gamma \cdot \infty + \delta} = \infty$. This is the case iff $\gamma = 0$. In this case $M(z) = \frac\alpha\delta z + \frac\beta\delta$ is affine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate $\lim_{h\to0}\frac{f(a+h^2)-f(a+h)}{h}$? All I know about this problem is that $f$ is differentiable at $a$. What troubles me is the $h$ squared: I just can't get rid of it or make it useful. No matter what I do, I always end up with it giving me an undefined limit. Any idea on how to get rid of it, or any rule I can use to make this easy?
Hint:$$\lim_{h\to0}\frac{f(a+h^2)-f(a)}h=\lim_{h\to0}h\cdot\frac{f(a+h^2)-f(a)}{h^2}=0\times f'(a)=0.$$ Combining this with $\lim_{h\to0}\frac{f(a+h)-f(a)}{h}=f'(a)$, the original limit equals $0-f'(a)=-f'(a)$.
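A numerical check (my own, taking $f=\sin$ and $a=0.3$) is consistent with the value $\lim_{h\to0}\frac{f(a+h^2)-f(a+h)}{h}=-f'(a)$:

```python
import math

def quotient(f, a, h):
    return (f(a + h * h) - f(a + h)) / h

a = 0.3
vals = [quotient(math.sin, a, 10.0 ** -k) for k in range(3, 7)]
expected = -math.cos(a)   # -f'(a) for f = sin
```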
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Logical equivalences for $P \implies Q$ and $\neg Q \implies \neg P$ This is probably quite a basic question, but it's something I'm having trouble wrapping my head around. I came across a proof which employed the following strategy The goal was to prove $$P \implies Q$$ And the strategy used was to prove the following instead: $$\neg Q \implies \neg P$$ If possible, could someone provide some intuition as to why these two statements would be equivalent?
In classical logic this boils down to the definition of $\implies$. You can then check this equivalence by testing all possible combinations of truth values for $P$ and $Q$. This principle is called contraposition. In intuitionistic logic, you only get that $P\implies Q$ implies $\lnot Q\implies \lnot P$, but not the converse. Contraposition is often a convenient way of starting a proof, in particular if you don't know where you want to go with your proof. But many such proofs can be in fact reformulated to be direct proofs.
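In classical logic the check over all truth-value combinations is a four-row truth table; sketched in Python (my addition):

```python
from itertools import product

def implies(p, q):
    # classical material implication: p -> q is (not p) or q
    return (not p) or q

contrapositive_equiv = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([False, True], repeat=2)
)
```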
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Linear transformations with the same kernel Suppose two linear transformations between finite-dimensional vector spaces, $$L,T : V \longrightarrow W,$$ have the same kernel, and suppose that this kernel is not the zero space. Is $T$ necessarily a scalar multiple of $L$?
Hint: Take square matrices of order $2$ like $A=[a_{ij}]$ where $a_{11}=1, a_{12}=a_{21}=a_{22}=0$ and $B=[b_{ij}]$ where $b_{21}=1, b_{11}=b_{12}=b_{22}=0$
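Spelling out the hint (my addition): both matrices annihilate exactly the line $\{(0,t)\}$, yet $B$ is not a scalar multiple of $A$, so the answer to the question is no:

```python
A = [[1, 0], [0, 0]]
B = [[0, 0], [1, 0]]

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

# both maps kill (0, t), and both are nonzero on (1, 0) with different images,
# so B = cA is impossible (it would force c*1 = 0 and c*0 = 1)
same_kernel = apply(A, (0, 7)) == (0, 0) == apply(B, (0, 7))
not_multiple = apply(A, (1, 0)) == (1, 0) and apply(B, (1, 0)) == (0, 1)
```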
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Let $a,b,n$ be positive integers, if $n|a^n-b^n$ then $n|\frac{a^n-b^n}{a-b}$ Let $a,b,n$ be positive integers, if $n|a^n-b^n$ then prove that $n|\frac{a^n-b^n}{a-b}$. My approach: If $n$ is a prime, write $n=p$, then since $$a^p\equiv b^p\pmod p$$ and by Fermat's little theorem, we have $$a^p\equiv a, b^p\equiv b\pmod p.$$ Therefore $$a\equiv b\pmod p.$$ We have $$\dfrac{a^p-b^p}{a-b}\equiv a^{p-1}+a^{p-2}b+\cdots+b^{p-1}\equiv pa^{p-1}\equiv 0\pmod p,$$ so this statement is true for prime $n$. But when $n$ is not a prime, I cannot construct a similar proof. Any suggestion?
Suppose $(n, a-b)=d$ is the highest common factor. Write $a=b+kd$ and use the binomial expansion to show that $d^2\mid a^n-b^n$.
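The statement itself is easy to stress-test by brute force (my addition) before attempting the proof:

```python
checked = 0
for n in range(2, 13):
    for a in range(2, 31):
        for b in range(1, a):
            diff = a ** n - b ** n
            if diff % n == 0:                  # hypothesis: n | a^n - b^n
                quotient = diff // (a - b)     # always an integer
                assert quotient % n == 0       # claim: n | (a^n - b^n)/(a - b)
                checked += 1
```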
{ "language": "en", "url": "https://math.stackexchange.com/questions/3095947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Function such that $f^{(n)}(x) = \frac{x}{f(x)^n}$ Let $n$ be a fixed positive integer. Find all functions $f:(0, \infty) \to \mathbb{R}$ that can be differentiated $n$ times and satisfy $f^{(n)}(x) = \frac{x}{f(x)^n}$, where $f^{(n)}$ denotes the $n$-th derivative of $f$. I tried to differentiate the given identity and obtained $$f^{(n+1)}(x)=\frac{f(x)^n-nxf(x)^{n-1}f'(x)}{f(x)^{2n}}=\frac{f(x)-nxf'(x)}{f(x)^{n+1}}=\frac{1}{f(x)^n}-\frac{nxf'(x)}{f(x)^{n+1}}$$ I tried to connect this with $f^{(n)}(x)$, but the relations didn't lead to anything. Also, for $n=2$, I am not able to find any example of a function $f$ that satisfies the equation. It appears from the comments that the solutions are very complicated. Does the problem become easier if we replace $f(x)^n$ with $f^n(x)=(f \circ f \circ...\circ f)(x)$?
Not an answer (I just didn't have enough room to put all this in a comment.) If you take $f^n(x)=(f\circ f \cdots \circ f) (x)$ as definition, may something nicer happen? For example, consider $n=2$ and restrict yourself to the case $$f(x) = Kx^\alpha,$$ $\alpha,K \in \Bbb R$ and $x \in \Bbb R^+$, so that $$f''(x)=K\alpha(\alpha-1) x^{\alpha-2}$$ and $$(f\circ f)(x) = K(Kx^\alpha)^\alpha,$$ then your condition becomes $$K\alpha(\alpha-1)x^{\alpha-2} = \frac{x}{K^{1+\alpha}x^{\alpha^2}}.$$ This forces $$\alpha -2 = 1-\alpha^2 $$ that is $$\alpha= -\frac{1}{2}\pm \frac{\sqrt{13}}{2}.$$ Call these two values $\alpha_i$, $i=1,2$. Then for the constant factor we must have $$K^{2+\alpha_i} = \frac{1}{\alpha_i(\alpha_i-1)}.$$ Since $\alpha_i(\alpha_i-1) >0$ we can choose $K$ to be $$K_i = \left[\alpha_i(\alpha_i-1)\right]^{-\frac{1}{2+\alpha_i}},$$ with $i=1,2$. Is that any useful? Is this solution extendable to an entire family of functions? Is this process generalizable for $n>2$?
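The $n=2$ power-law solution sketched above can be verified numerically (my check, using the positive root $\alpha$; note the matching of constants requires $K^{2+\alpha}=1/(\alpha(\alpha-1))$, where $f^2$ means the composition $f\circ f$):

```python
import math

alpha = (-1 + math.sqrt(13)) / 2                 # the positive root of a^2 + a - 3 = 0
K = (alpha * (alpha - 1)) ** (-1 / (2 + alpha))

def f(x):
    return K * x ** alpha

def f_second(x):
    # exact second derivative of K x^alpha
    return K * alpha * (alpha - 1) * x ** (alpha - 2)

xs = [0.5, 1.0, 2.0, 3.7]
residuals = [abs(f_second(x) - x / f(f(x))) for x in xs]   # f'' = x / (f o f)
```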
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Complex operator $i$ and Exponents I am trying to understand complex numbers and exponents. I came across this question, and I wonder how to explain the difference between $${2\cdot i} \text{ and } 2^i$$ where $i=\sqrt{-1}$. Edit: Rather than explaining the meaning of the above two numbers in yet another equally difficult mathematical form, I am more interested in knowing in which situation we need to use one of the above numbers and in which other situation we would use the other. From many of the answers I now understand that both of the above numbers are complex numbers, but they are of different types, in that one has a single value while the other has more than one value. So in which practical situation do we prefer to use one form, and in which the other?
$$2^i=\mathrm e^{i\ln 2}=\cos(\ln 2)+i\sin(\ln 2)\ne 2i. $$
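The distinction is easy to see computationally (my addition): $2i$ is a fixed purely imaginary number, while `2 ** 1j` in Python returns the principal value $e^{i\ln 2}$ of the multivalued power $2^i=e^{i(\ln 2+2k\pi)}$:

```python
import cmath
import math

two_i = 2 * 1j                             # the product: 0 + 2i
principal = cmath.exp(1j * math.log(2))    # e^{i ln 2}, the principal value of 2^i
direct = 2 ** 1j                           # Python uses the principal branch
```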
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A variant of submodularity A function $f: \mathbb{R}^2 \to \mathbb{R}$ is said to be submodular if for all $x,y \in \mathbb{R}^2$ it holds that $$ f(x \vee y)+f(x \wedge y)\le f(x)+f(y). $$ In particular, if $x_1 \ge y_1$ and $x_2 \le y_2$, this means that $$ f(x_1,y_2)+f(y_1,x_2) \le f(x_1,x_2)+f(y_1,y_2), $$ or, equivalently, $$ \sum_{I\subseteq \{1,2\}}(-1)^{|I|}f(xIy)\ge 0 $$ where $xIy$ is the vector obtained from $x$ by replacing the components of $x$ with the components of $y$ in the positions in $I$. Question. Let us take a function $f:\mathbb{R}^3 \to \mathbb{R}$ with the property that $$ \sum_{I\subseteq \{1,2,3\}}(-1)^{|I|}f(xIy)\ge 0 $$ for all vectors $x,y \in \mathbb{R}^3$. Do such functions have a name in the literature?
I don't know about the functions that you've described here, but a related concept is something termed "continuous submodularity", which is a way to extend the notions of submodularity to $\mathbb{R}^n$. The idea is to use the lattice on $\mathbb{R}^n$ obtained by the component-wise partial ordering. For $i=1, \dots, n$, let $\mathcal{X}_i$ be a compact subset of $\mathbb{R}$ (i.e. an interval $[a,b]$ or a finite set $\{ 0, 1 \}$) and let $\mathcal{X} = \prod_{i=1}^n \mathcal{X_i} \subset \mathbb{R}^n$ be the product. Given two vectors $x,y \in \mathcal{X}$, define $x \vee y$ to be the component-wise maximum and $x \wedge y$ to be the component-wise minimum. A function $f: \mathcal{X} \rightarrow \mathbb{R}$ is continuous submodular if for all $x,y \in \mathcal{X}$, we have $$ f(x \vee y) + f(x \wedge y) \leq f(x) + f(y)$$ Here's a 2015 paper by Bach which describes these functions in the context of minimization. Here's a 2019 paper by Rad Niazadeh, Tim Roughgarden, and Joshua R. Wang which gives (what I believe is) the first non-trivial maximization algorithm for this function class.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the sum $u_0 u_1 + u_1u_2+...+u_{n-2}u_{n-1} $ I have $$ u_k = \cos\frac{2k\pi}{n} + i \sin\frac{2k\pi}{n}$$ and I should calculate: $$ u_0 u_1 + u_1u_2+...+u_{n-2}u_{n-1}+u_{n-1}u_0 $$ But I am stuck. First I calculate $$u_0 u_1 + u_1u_2+...+u_{n-2}u_{n-1} $$ by putting $$ \alpha_k = u_k \cdot u_{k+1} = ... = e^{\frac{i\pi(2k+1)}{n}} $$ and summing $$ \alpha_0 + ... + \alpha_{n-2} = ... = e^{\frac{i\pi}{n}} \cdot \frac{1-e^{\frac{2(n-1)i\pi}{n}}}{1-e^{\frac{2i\pi}{n}}},$$ and I don't know how to finish that. As for the last term, I get $$u_{n-1}u_0 = e^{i\pi} = -1 $$
Note that $$\alpha_k = u_{k}\cdot u_{k+1} = e^{i\frac{2\pi}{n}(2k+1)} = e^{i\frac{2\pi}{n}}\cdot e^{i\frac{4\pi}{n}k}.$$ Therefore, $$\sum_{k=0}^{n-2}{\alpha_k} = e^{i\frac{2\pi}{n}}\sum_{k=0}^{n-2}{\left(e^{i\frac{4\pi}{n}}\right)^{k}} = e^{i\frac{2\pi}{n}} \frac{1-e^{i\frac{4\pi(n-1)}{n}}}{1-e^{i\frac{4\pi}{n}}} = e^{i\frac{2\pi}{n}}\frac{1-e^{-i\frac{4\pi}{n}}}{1-e^{i\frac{4\pi}{n}}} = \frac{e^{i2\pi/n}-e^{-i2\pi/n}}{e^{i2\pi/n}(e^{-i2\pi/n}-e^{i2\pi/n})}=-e^{-i\frac{2\pi}{n}}.$$
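A numerical check (my addition, with $n=7$) of the closed form; note also that $u_{n-1}u_0=e^{2\pi i(n-1)/n}=e^{-2\pi i/n}$ (not $-1$ in general), so the full cyclic sum in the question vanishes:

```python
import cmath

n = 7

def u(k):
    return cmath.exp(2j * cmath.pi * k / n)

partial = sum(u(k) * u(k + 1) for k in range(n - 1))   # u_0 u_1 + ... + u_{n-2} u_{n-1}
closed_form = -cmath.exp(-2j * cmath.pi / n)
full = partial + u(n - 1) * u(0)                       # full cyclic sum, should vanish
```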
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
If $G$ is a group, $a,b\in G$, and $e$ is the neutral element, why is $ab = aeb$? The problem I have: if this is true, then one could also say $aaa^{-1}b=ab$. Why do I know that there don't exist counterexamples for which the equality fails?
$aaa^{-1}b = a(aa^{-1})b = aeb = (ae)b = ab$ So yes, you CAN say that, and you'd be one hundred percent correct when you do say that. Why do I know that there don't exist counterexamples for which the inequality holds? The same way you know that there aren't any counterexamples to $k\times \frac 1k =1$ when $k \ne 0$. BY DEFINITION $aa^{-1} = e$ and BY DEFINITION $aeb = (ae)b = ab$. So that is a PROOF.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that for every $z \in \mathbb C$: $z=\frac{2}{n}\sum_{k=0}^{n-1}(Re(u_{k}\overline z)u_{k})$ Let $n \in \mathbb N$, $n>2$, and let $u_{0},u_{1},...,u_{n-1}$ be the $n$-th roots of unity. Prove that for every $z \in \mathbb C$: $$z=\frac{2}{n}\sum_{k=0}^{n-1}(Re(u_{k}\overline z)u_{k})$$ I know that $Re(u_{0})=Re(u_{1})$, $Re(u_{2})=Re(u_{3})$, etc. However, I then have a problem with $n$ because I don't know anything about the parity of $n$, and I don't know how to use this idea even if I consider the two cases $n=2k$ and $n=2k+1$. I thought also about $\sum _{{k=0}}^{{n-1}}e^{{\frac {2\pi ik}{n}}}=0$, but I still do not know how to proceed. Do you have any idea?
Note that $$\frac{2}{n}\sum_{k=0}^{n-1}(Re(u_{k}\overline z)u_{k}) = \frac{2}{n}\sum_{k=0}^{n-1}\frac{u_{k}\overline z+\overline{u_{k}}z}{2}u_{k} =\frac 1n (\overline z\sum_{k=0}^{n-1}u_k^2+z\sum_{k=0}^{n-1}|u_k|^2)$$ Using a geometric argument, a geometric sum, or Newton's identities it's easy to show $\sum_{k=0}^{n-1}u_k^2=0$. Besides, $|u_k|=1$, so the result follows.
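The identity, and the intermediate fact $\sum_k u_k^2=0$ for $n>2$, can be spot-checked numerically (my addition):

```python
import cmath

def reconstruct(z, n):
    # compute (2/n) * sum_k Re(u_k * conj(z)) * u_k
    total = 0j
    for k in range(n):
        u = cmath.exp(2j * cmath.pi * k / n)
        total += (u * z.conjugate()).real * u
    return 2 * total / n

n = 5
z = 3.2 - 1.7j
approx = reconstruct(z, n)
squares = sum(cmath.exp(2j * cmath.pi * k / n) ** 2 for k in range(n))
```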
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Killing cohomology of surfaces in finite covers Let $S$ be a closed orientable surface and $R$ a commutative ring. Given a nonzero element $\alpha \in H^1(S;R)$ is there a finite cover $p : \tilde{S} \to S$ such that $p^*(\alpha) = 0 \in H^1(\tilde{S}; R)$?
No. This has nothing to do with $S$ being a surface. In what follows, all we assume is that $S$ can run the classification theory of covers (path-connected, locally path-connected, semilocally simply connected); this holds for any CW complex. There is a natural isomorphism $H^1(S;G) \cong \text{Hom}(\pi_1 S, G)$ for any $S, G$. Given any element $\phi \in \text{Hom}(\pi_1 S, G)$, if $f: S' \to S$ is a covering space, the corresponding map $\phi f_* = 0$ if and only if $f$ factors through the cover corresponding to $\text{ker}(\phi) \subset \pi_1 S$. (We use here the classification of covering spaces over reasonable spaces.) If $\text{ker}(\phi)$ has infinite index, then the cover corresponding to $\text{ker}(\phi)$ is an infinite cover. For instance, if $G$ is infinite and $\phi$ is surjective (or at least its image has finite index), then you cannot kill $\phi$ in a finite cover. Because every subgroup of $\Bbb Z$ is zero or infinite, in particular, you cannot kill any nontrivial class of $H^1(S;\Bbb Z)$ in a finite cover.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3096848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Integral $\int\frac{2x^2}{2x\cos(2x)+(x^2-1)\sin(2x)} \mathrm d x$ Integrate $\displaystyle\int\dfrac{2x^2}{2x\cos(2x)+(x^2-1)\sin(2x)} \mathrm d x$ I tried dividing by $\cos^2(x)$ and then substituting $\tan(x)=t$.
$$2x\cos (2x)+(x^2-1)\sin 2x=(x^2+1)\bigg[\frac{2x}{x^2+1}\cos 2x+\frac{x^2-1}{x^2+1}\sin 2x\bigg]$$ $$=(x^2+1)\cos\big(2x-2\alpha\big),$$ where $\displaystyle \sin(2\alpha)=\frac{x^2-1}{x^2+1}$ and $\displaystyle \cos(2\alpha)=\frac{2x}{x^2+1}$. The integral becomes $$\int\sec\bigg(2x-\tan^{-1}\frac{x^2-1}{2x}\bigg)\frac{2x^2}{x^2+1}dx.$$ Put $\displaystyle 2x-\tan^{-1}\bigg(\frac{x^2-1}{2x}\bigg)=t$, so that $\displaystyle \frac{2x^2}{x^2+1}dx=dt.$ The integral is then $$\int \sec(t)\,dt=\ln\bigg|\sec (t)+\tan (t)\bigg|+C$$
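As a sketch of a numerical check (function names are mine): differentiating $\ln|\sec t+\tan t|$ with $t=2x-\tan^{-1}\frac{x^2-1}{2x}$ should reproduce the integrand, at least for $x>0$ where the $\arctan$ branch matches the choice of $\alpha$.

```python
import math

def integrand(x):
    return 2*x**2 / (2*x*math.cos(2*x) + (x**2 - 1)*math.sin(2*x))

def antiderivative(x):
    # t = 2x - arctan((x^2-1)/(2x)); antiderivative is ln|sec t + tan t|
    t = 2*x - math.atan((x**2 - 1) / (2*x))
    return math.log(abs(1/math.cos(t) + math.tan(t)))

h = 1e-6
for x in (0.5, 0.9, 1.3):
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2*h)
    assert abs(numeric - integrand(x)) < 1e-4
```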
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Finding $\int^{\infty}_{0}\frac{\ln^2(x)}{(1-x^2)^2} dx$ Calculate $$\int^{\infty}_{0}\frac{\ln^2(x)}{(1-x^2)^2}dx$$ I have tried to put $\displaystyle x=\frac{1}{t}$ and $\displaystyle dx=-\frac{1}{t^2}dt$ $$ \int^{\infty}_{0}\frac{t^2\ln^2(t)}{(t^2-1)^2}dt$$ $$\frac{1}{2}\int^{\infty}_{0}t\ln^2(t)\frac{2t}{(t^2-1)^2}dt$$ $$ \frac{1}{2}\bigg[-t\ln^2(t)\frac{1}{t^2-1}+\int^{\infty}_{0}\frac{\ln^2(t)}{t^2-1}+2\int^{\infty}_{0}\frac{\ln(t)}{t^2-1}dt\bigg]$$ How can I solve it?
You're definitely on the right track with that substitution of $x=\frac1t$ Basically we have: $$I=\int^{\infty}_{0}\frac{\ln^2(x)}{(1-x^2)^2}dx=\int_0^\infty \frac{x^2\ln^2 x}{(1-x^2)^2}dx$$ Now what if we add them up? $$2I=\int_0^\infty \ln^2 x \frac{1+x^2}{(1-x^2)^2}dx$$ If you don't know how to deal easily with the integral $$\int \frac{1+x^2}{(1-x^2)^2}dx=\frac{x}{1-x^2}+C$$ I recommend you to take a look here. Anyway we have, integrating by parts: $$2I= \underbrace{\frac{x}{1-x^2}\ln^2x \bigg|_0^\infty}_{=0} +2\underbrace{\int_0^\infty \frac{\ln x}{x^2-1}dx}_{\large =\frac{\pi^2}{4}}$$ $$\Rightarrow 2I= 2\cdot \frac{\pi^2}{4} \Rightarrow I=\frac{\pi^2}{4}$$ For the last integral see here for example.
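A brute-force check of $I=\pi^2/4\approx2.4674$: combining the two halves of $(0,\infty)$ via $x\mapsto1/x$ gives $I=\int_0^1\ln^2x\,\frac{1+x^2}{(1-x^2)^2}dx$, and the substitution $x=e^{-t}$ tames both the removable singularity at $x=1$ and the logarithm at $0$ (a sketch; the midpoint rule and the truncation point are arbitrary choices):

```python
import math

# after x = e^{-t}:  I = ∫₀^∞ t² e^{-t} (1 + e^{-2t}) / (1 - e^{-2t})² dt
def f(t):
    e = math.exp(-t)
    return t*t * e * (1 + e*e) / (1 - e*e)**2

N, T = 200_000, 40.0
h = T / N
approx = h * sum(f((i + 0.5) * h) for i in range(N))
assert abs(approx - math.pi**2 / 4) < 1e-5
```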
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 0 }
Show that the projection moves you closer to the set, i.e. $|P_C(x) - y| \leq |x-y|$ for all $y\in C$ Let $H$ be a Hilbert space with induced norm $|\cdot|$, with $C\subset H$ a closed, convex subset of $H$ and $P_C(x)$ the projection of $x$ onto $C$, i.e., $$P_C(x) = \arg\min\limits_{y\in C}|x-y|^2$$ I want to show the following trivial result, $$\forall y\in C, \quad |P_C(x) - y|^2\leq |x-y|^2$$ I have tried a couple things but none have worked. The first was the following (let $y\in C, x\not\in C$): $$|P_C(x) -y|^2 = |P_C(x)|^2 - 2\langle P_C(x), y\rangle +|y|^2 = |P_C(x)|^2 - 2\langle x, y\rangle +|y|^2 \\\leq |P_C(x)|^2 + |x-y|^2$$ I also tried this: $$|P_C(x) - y|\leq |P_C(x) - x| + |x-y|\leq 2|y-x|$$ but the $2$ is ruining things for me. Can someone give me a hint? Edit: here is what I came up with ultimately, $$|x-y|^2 = |x-P_C(x) +P_C(x) - y|^2\\ = |x-P_C(x)|^2 -2\langle x-P_C(x), y-P_C(x)\rangle + |y-P_C(x)|^2\\ \geq -2\langle x-P_C(x), y-P_C(x)\rangle + |y-P_C(x)|^2\\ \geq |y-P_C(x)|^2$$ since the inner product is negative. Of course then we should actually show that the inner product is negative in general and this is basically the argument that bananach was referring to, I think.
First $z = P_C(x)$ if and only if $$ \langle z - x, y-z\rangle \ge0 \quad \forall y\in C. $$ This follows from $$ \frac12|y-x|^2 - \frac12|z-x|^2 = \langle z-x, y-z\rangle + \frac12 |y-z|^2. $$ Now take $x_1,x_2$, then $$ \langle P_C(x_1) - x_1, P_C(x_2)-P_C(x_1)\rangle + \langle P_C(x_2) - x_2, P_C(x_1)-P_C(x_2)\rangle\ge0, $$ which is equivalent to $$ |P_C(x_1)-P_C(x_2)|^2 \le \langle x_1 - x_2, P_C(x_1)-P_C(x_2)\rangle. $$ The desired inequality follows from $y=P_C(y)$ and Cauchy-Schwarz.
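For a concrete sanity check one can take $C$ to be the closed unit ball in $\mathbb R^2$, where the projection has the closed form $P_C(x)=x/\max(1,|x|)$ (a sketch with hypothetical helper names):

```python
import math, random

def proj(x):
    # projection onto the closed unit ball C = {y : |y| <= 1} in R^2
    n = math.hypot(*x)
    return x if n <= 1 else (x[0] / n, x[1] / n)

dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    # a uniformly random point y of C
    r, phi = math.sqrt(random.random()), random.uniform(0, 2*math.pi)
    y = (r*math.cos(phi), r*math.sin(phi))
    # the projection moves x closer to every point of C
    assert dist(proj(x), y) <= dist(x, y) + 1e-12
```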
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Complex Infinite Series Having trouble with this infinite series and deciding whether it converges or diverges. The series: $$\sum_{n=1}^\infty n(\frac{1}{2i})^n$$ My thoughts are that you take the modulus of the fraction and get $\frac{1}{2}$ to the exponent $n$ makes it go to $0$ and then multiplied by $n$ make it $$\infty*0$$ which is always divergent right, making the series diverge? Can someone also clarify that this is the case?
First let’s check whether the series converges absolutely. For this, we need to see if $\sum b_n = \sum \frac{n}{2^n}$ converges, and this is immediate using the ratio test, as $\lim\limits_{n\to \infty}\frac{b_{n+1}}{b_n} =1/2<1$. Conclusion: the given series converges absolutely, hence converges. (Note also that "$\infty\cdot 0$" is an indeterminate form, not automatic divergence: here the terms $n\cdot(1/2)^n$ in fact tend to $0$.)
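Beyond convergence, this particular series has a closed form, since $\sum_{n\ge1}nz^n=z/(1-z)^2$ for $|z|<1$; a quick check with $z=\frac{1}{2i}$:

```python
z = 1 / 2j                         # = -0.5j, so |z| = 1/2 < 1
closed = z / (1 - z)**2            # sum_{n>=1} n z^n for |z| < 1
partial = sum(n * z**n for n in range(1, 200))
assert abs(partial - closed) < 1e-12
```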
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Angle bisector in triangle, quick question: $|AE| = \frac{bc}{a+c}$ Triangle $ABC$; $AB=c, BC=a, AC=b$; angle bisector of angle $(c, a)$ cuts $AC$ in point $E$. Why is the following true? $$|AE| = \frac{bc}{a+c}$$ Where does that come from?
The angle bisector theorem: $$\frac{CE}{AE}=\frac ac$$ Then: $$\frac{bc}{a+c}=\frac{b}{\frac ac+1}=\frac{b}{\frac{CE}{AE}+1}=\frac{AE\cdot b}{CE+AE}=AE$$
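A coordinate check on a concrete (assumed) triangle: place $E$ on $AC$ with $AE:EC=c:a$ and verify both that $BE$ really bisects the angle at $B$ and that $|AE|=\frac{bc}{a+c}$.

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
dist = lambda P, Q: math.hypot(P[0]-Q[0], P[1]-Q[1])
a, b, c = dist(B, C), dist(A, C), dist(A, B)

# E divides AC with AE:EC = c:a (angle bisector theorem)
t = c / (a + c)
E = (A[0] + t*(C[0]-A[0]), A[1] + t*(C[1]-A[1]))

# check BE makes equal angles with BA and BC
u = (A[0]-B[0], A[1]-B[1])
v = (C[0]-B[0], C[1]-B[1])
w = (E[0]-B[0], E[1]-B[1])
ang = lambda p, q: math.acos((p[0]*q[0] + p[1]*q[1]) / (math.hypot(*p)*math.hypot(*q)))
assert abs(ang(u, w) - ang(v, w)) < 1e-9
assert abs(dist(A, E) - b*c/(a+c)) < 1e-9
```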
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
$\displaystyle 1+ \frac{\log_2(2/3(n+1))}{\log_2(3/2)} = \frac{\log_2(n+1)}{\log_2(3/2)}$ Show $\displaystyle 1+ \frac{\log_2(2/3(n+1))}{\log_2(3/2)} = \frac{\log_2(n+1)}{\log_2(3/2)}$ from LS $\displaystyle 1+ \frac{\log_2(2/3(n+1))}{\log_2(3/2)} = \frac{\log_2(3/2) + \log_2(2/3(n+1))}{\log_2(3/2)} = \frac{\log_2\big(\frac{6(n+1)}{6}\big)}{\log_2(3/2)} = \frac{\log_2(n+1)}{\log_2(3/2)}$ is this right?
I would write $$\log_{2}\frac{3}{2}+\log_{2}\frac{2}{3}+\log_{2}(n+1)=\log_{2}(n+1)$$ since $$\log_{2}{\frac{2}{3}}=-\log_{2}{\frac{3}{2}}$$
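A quick numerical confirmation (reading $\log_2(2/3(n+1))$ as $\log_2\!\big(\tfrac23(n+1)\big)$, as intended):

```python
import math

log2 = lambda x: math.log(x, 2)
for n in (1, 5, 100):
    lhs = 1 + log2((2/3) * (n + 1)) / log2(3/2)
    rhs = log2(n + 1) / log2(3/2)
    assert abs(lhs - rhs) < 1e-9
```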
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Behaviour of positive function on compact sets Let $v$ be any strictly positive bounded function on $\Omega \subset \mathbb{R}^n$ and zero on $\partial \Omega.$ Can we say that for each compact subsets (w.r.t usual topology defined on $\mathbb{R}^n $) of $\Omega$ there exists a constant $c$ such that $v \geq c >0$? If not, what can be the example?
Let $n=1$, $\Omega =(-1,1)$, $v(\frac 1 n)=\frac 1 n$ for $n=1,2,...$ and $v(x)=1$ for all other $x \in \Omega$, $v(x)=0$ for $x =\pm 1$. Consider the compact set $[0,\frac 1 2]$: it contains the points $\frac 1 n$ for $n\ge2$, on which $v$ takes the values $\frac 1 n \to 0$, so no constant $c>0$ works. (Such behaviour requires $v$ to be discontinuous; a continuous strictly positive function attains a positive minimum on each compact subset.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Formal Deduction (logic) Question: $\lnot C, (B \to \lnot C) \to A \vdash (A \to C) \to F$ I've been stuck on this question for around two hours now. I'm trying to prove that: $\lnot C, \ (B \to \lnot C)\to A \vdash (A \to C)\to F $ I'm trying to get my second last step to be: $\lnot C, \ (B \to \lnot C)\to A, \ A \to C \vdash F $, but I have no idea how to get started. I'm not sure what I can start off with, especially due to the $(B \to \lnot C)\to A)$ part. I'm allowed to use 11 rules of formal deducibility/deduction. If anyone can help me get started, or give me some hints, I would be extremely thankful!
You can show that $((B\to \neg C)\to A) \iff ((B\land C)\lor A)$. Now add $A \to C$ to the hypotheses and argue by cases on this disjunction. First case: \begin{array}{l} A \\ A \to C \\ \hline C \end{array} which, given $\lnot C$, is absurd. Second case: \begin{array}{l} B\land C \\ \hline C \end{array} which is again absurd. So we've shown that $A\to C$ is false, and therefore $(A\to C)\to F$ is true regardless of $F$.
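Since the system is sound and complete, the claimed deducibility can be double-checked semantically by brute force over all valuations of the four atoms (the helper `entails` is my own):

```python
from itertools import product

impl = lambda p, q: (not p) or q

def entails(premises, conclusion, names):
    # every valuation satisfying all premises must satisfy the conclusion
    for vals in product([False, True], repeat=len(names)):
        env = dict(zip(names, vals))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

premises = [
    lambda e: not e['C'],
    lambda e: impl(impl(e['B'], not e['C']), e['A']),
]
conclusion = lambda e: impl(impl(e['A'], e['C']), e['F'])
assert entails(premises, conclusion, ['A', 'B', 'C', 'F'])
```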
{ "language": "en", "url": "https://math.stackexchange.com/questions/3097958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Is there 'Algebraic number' which cannot display with Arithmetic operation and root Let $a + bi$ be an algebraic number. Then there is polynomial which coefficients are rational number and one of root is $a+bi$. I think.. $$x = a + bi$$ we can subtract $c_1$ (which is rational number) from both sides. $$x-c_1=a-c_1+bi$$ and we can power both side. $$(x-c_1)^n=(a-c_1+bi)^n$$ and repeat we can get polynomial which coefficients are rational number and one of root is $a+bi$. Therefore, I think there is no 'Algebraic number' which cannot be displayed with arithmetic operations and roots.
The algebraic numbers are divided into the explicit algebraic numbers and the implicit algebraic numbers. The explicit algebraic numbers can be built from the rational numbers by the arithmetic operations (addition, subtraction, multiplication, division, raising to integer powers, and taking $n$-th roots for integer $n$). They are called solutions in radicals. All solutions of algebraic equations of degree $\leq 4$ are solutions in radicals, but there are equations of degree $\geq 5$ that have solutions that are not solutions in radicals. This is the content of the Abel–Ruffini theorem. The equation $x^5-x+1=0$, for example, doesn't have solutions in radicals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Behavior of $f(x)=(-1)^x$ For what $x$ is $f(x)=(-1)^x$ a real number, and when is it a complex number? When I graph it online, the graph glitches out and has points all over the place.
When $-1$ is raised to the power $x$ (taking the principal value), Euler's formula $e^{i\pi}=-1$ gives $$(-1)^x = e^{i\pi x} = \cos \pi x + i \sin \pi x,$$ so the real part is $\cos \pi x$ and the imaginary part is $\sin \pi x$, each oscillating with period (wavelength) $\lambda=2$. The imaginary part vanishes exactly at the integers $x=\ldots,-2,-1,0,1,2,\ldots$, where $f(x)=(-1)^x=\pm1$ is real; for every non-integer real $x$ the imaginary part is nonzero, so $f(x)$ is genuinely complex. This is why graphing software glitches: it is trying to plot a complex-valued function whose real values occur only at isolated points. For complex $x=a+ib$, $f(x)$ is likewise complex in general.
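A quick way to see this concretely is to evaluate the principal value with `cmath` (a sketch):

```python
import cmath, math

def f(x):
    # principal value of (-1)^x = e^{i*pi*x}
    return cmath.exp(1j * cmath.pi * x)

assert abs(f(2) - 1) < 1e-12           # integers: real, equal to +1 or -1
assert abs(f(3) + 1) < 1e-12
assert abs(f(0.5) - 1j) < 1e-12        # half-integers: purely imaginary
assert abs(f(1/3).imag - math.sin(math.pi/3)) < 1e-12
```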
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Geometrical proof for length of chord passing through vertex of parabola In a parabola $y^2=4ax$, the length of a focal chord making an angle $\theta$ with the x-axis is $4a\csc^2\theta$. If a chord is drawn parallel to that focal chord through the vertex of the parabola at $(0,0)$, its length comes out to be $4a\csc^2\theta\cos\theta$. It's quite easy to prove this using parametric coordinates for the parabola; I'm looking for an intuitive geometric demonstration that $AB=A'B'$. The equality certainly holds, but I feel there must be a visual way to show it.
Changing the notation a bit, let $\overline{AB}$ be a chord through the parabola's focus, $F$, and let $\overline{UV}$ be a parallel chord through the vertex, $V$. Let $\overleftrightarrow{A^\prime B^\prime}$ be the parabola's directrix, and let $C$ be the fourth vertex of rectangle $\square AA^\prime B^\prime C$. Writing $a := |AA^\prime| = |AF|$ and $b := |BB^\prime| = |BF|$ (and, without loss of generality, assuming $a\geq b$), we see that $|BC| = a-b$. Now, recall that the midpoints of parallel chords of a parabola lie on a line parallel to the axis of that parabola. Consequently, if $M$ and $N$ are the midpoints of $\overline{AB}$ and $\overline{UV}$, respectively, then $VFMN$ is a parallelogram ($\overline{VF}$ and $\overline{NM}$ lie along the axis direction, while $\overline{VN}$ and $\overline{FM}$ lie along the parallel chords). So, defining $d := |UN|=|VN|$, we have $|FM|=|VN|=d$ and $$b + d = |BM| = \frac12|AB| = \frac12(a+b) \quad\to\quad d = \frac12(a-b) \quad\to\quad |UV| = |BC|$$ Finally, $\overline{BC}$ is parallel to the axis and $\overline{AC}$ to the directrix, so the right triangle $\triangle ABC$ gives $|BC| = |AB|\cos\theta$; hence $|UV| = |AB|\cos\theta$, which is exactly the claimed relation between the two chord lengths.
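A coordinate verification of both claimed lengths, for $y^2=4ax$ with arbitrarily chosen $a$ and $\theta$ (a sketch):

```python
import math

a = 1.0
theta = 0.7
m = math.tan(theta)

# focal chord: y = m(x - a) meets y^2 = 4ax,
# i.e. m^2 x^2 - (2am^2 + 4a) x + a^2 m^2 = 0
A, B, C = m*m, -(2*a*m*m + 4*a), a*a*m*m
disc = math.sqrt(B*B - 4*A*C)
x1, x2 = (-B + disc) / (2*A), (-B - disc) / (2*A)
p1 = (x1, m*(x1 - a))
p2 = (x2, m*(x2 - a))
focal = math.dist(p1, p2)
assert abs(focal - 4*a/math.sin(theta)**2) < 1e-9

# parallel chord through the vertex: y = m x meets the parabola at x = 4a/m^2
vx = 4*a/(m*m)
vertex_chord = math.hypot(vx, m*vx)
assert abs(vertex_chord - focal*math.cos(theta)) < 1e-9
```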
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Relation of complete homogeneous symmetric polynomials and the elementary symmetric polynomials I was reading about the symmetric polynomials and saw the following relation: $$\sum _{{i=0}}^{m}(-1)^{i}e_{i}(X_{1},\ldots ,X_{n})h_{{m-i}}(X_{1},\ldots ,X_{n})=0\text{ for } m>0$$ The proof is constructed by using a generating function with respect to the variable $t$, in Symmetric Functions and Hal Polynomials SECOND EDITION I. G. MACDONALD, page: 21. (Question) I was wondering if this relation still holds for any length $k$, i.e. $$\sum _{{i=0}}^{m}(-1)^{i}e_{i}(X_{1},\ldots ,X_{n})h_{{m-i}}(X_{1},\ldots ,X_{k})=0\text{ for }k \in \{1,...,n\}$$ P.S. I tried cases and it holds for any case.
We have $$h_p(X_1,\ldots,X_n)=\sum_r h_r(X_1,\ldots,X_k)h_{p-r}(X_{k+1},\ldots,X_n)$$ $$e_p(X_1,\ldots,X_n)=\sum_r e_r(X_1,\ldots,X_k)e_{p-r}(X_{k+1},\ldots,X_n)$$ Splitting $e_i(X_1,\ldots,X_n)$ with the second formula and reindexing $j=i-p$, your sum becomes $$\sum_{i=0}^m(-1)^ie_i(X_1,\ldots,X_n)h_{m-i}(X_1,\ldots,X_k)=\sum_{p\ge0}(-1)^pe_p(X_{k+1},\ldots,X_n)\sum_{j\ge0}(-1)^je_j(X_1,\ldots,X_k)h_{(m-p)-j}(X_1,\ldots,X_k).$$ For each fixed $p$, the inner sum is an instance of the same formula in the $k$ variables $X_1,\ldots,X_k$ with $m$ replaced by $m-p$, so it vanishes whenever $m-p>0$. Since $e_p(X_{k+1},\ldots,X_n)=0$ for $p>n-k$, every surviving term has $p\le n-k$; hence the whole sum vanishes provided $m>n-k$. That restriction is genuine, and sharp: multiplying the generating functions $\sum_i(-1)^ie_i(X_1,\ldots,X_n)t^i=\prod_{j=1}^n(1-X_jt)$ and $\sum_jh_j(X_1,\ldots,X_k)t^j=\prod_{j=1}^k(1-X_jt)^{-1}$ gives $$\sum_{m\ge0}\left(\sum_{i=0}^m(-1)^ie_i(X_1,\ldots,X_n)h_{m-i}(X_1,\ldots,X_k)\right)t^m=\prod_{j=k+1}^n(1-X_jt),$$ a polynomial of degree $n-k$: the coefficient of $t^m$ is $(-1)^me_m(X_{k+1},\ldots,X_n)$ for $m\le n-k$ (generically nonzero) and $0$ for $m>n-k$. For example, with $n=2$, $k=1$, $m=1$ the sum is $h_1(X_1)-e_1(X_1,X_2)=-X_2\neq0$, while for $k=n$ we recover the original identity for all $m>0$.
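A numerical probe with random real inputs illustrates the sharp range: the alternating sum vanishes exactly when $m>n-k$ (helper names `e`, `h` are mine; `h` enumerates monomials as multisets):

```python
from itertools import combinations, combinations_with_replacement
from math import prod
import random

def e(vals, i):
    # elementary symmetric polynomial e_i
    return sum(prod(c) for c in combinations(vals, i))

def h(vals, j):
    # complete homogeneous symmetric polynomial h_j
    return sum(prod(c) for c in combinations_with_replacement(vals, j))

random.seed(1)
n = 4
xs = [random.uniform(1, 2) for _ in range(n)]
for k in range(1, n + 1):
    for m in range(1, 7):
        s = sum((-1)**i * e(xs, i) * h(xs[:k], m - i) for i in range(m + 1))
        if m > n - k:
            assert abs(s) < 1e-6   # the identity holds in this range

# sharpness: n = 4, k = 1, m = 1 gives -(x2 + x3 + x4) != 0
s = sum((-1)**i * e(xs, i) * h(xs[:1], 1 - i) for i in range(2))
assert abs(s + sum(xs[1:])) < 1e-9
```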
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Need help to decide if $\{x \in \mathbb R^n : 1 \leq x_1^2 + x_2^2 + \cdots + x_n^2 \leq 2 \}$ is convex Is the following set convex? $$\{x \in \mathbb R^n : 1 \leq x_1^2 + x_2^2 + \cdots + x_n^2 \leq 2 \}$$ I did the following. Assume $1≤x_1^2+x_2^2+...+x_n^2≤2$ and $1≤y_1^2+y_2^2+...+y_n^2≤2$ Assume $z=αx+(1-α)y, α∈[0,1]$ Now I have to prove that $1≤z_1^2+z_2^2+...+z_n^2≤2$ Solving the right hand part is relatively easy as in the end I get $2α^2 + 2(1-α)^2 + 2α(1-α)(x_1y_1+...+x_ny_n)$ which is greater than $2α^2 + 2(1-α)^2 + 2α(1-α)(\sqrt{x_1^2+x_2^2+...+x_n^2} *\sqrt{y_1^2+y_2^2+...+y_n^2})$ according to Cauchy-Schwartz. In the end it comes down to $2(α+(1-α))^2$ which is 2. On the left side, however I cannot use Cauchy-Schwartz because it decreases the value and I need to show that my equation of $z_1^2+z_2^2+...+z_n^2≥1$ Does it mean that the set is not convex?
The set is not convex because $(\pm1,0,0,\ldots,0)$ belong to it, but $(0,0,0,\ldots,0)$ doesn't. Note that$$(0,0,0,\ldots,0)=\frac12(1,0,0,\ldots,0)+\frac12(-1,0,0,\ldots,0).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\lim_{(x,y)\to(0,0)}(xy+y^{3})=0$. Prove that $\lim_{(x,y)\to(0,0)}(xy+y^{3})=0$. I am trying to determine how to set the $\delta$. Here is my rough work, which isn't much: $|f(x,y)-0|=|xy+y^{3}|\leq |y||x+y^{2}|$ I am not sure whether I should separate $y$ or separate $xy$ and $y^{3}$ to make it $|xy| + |y^{3}|$. Any ideas how to finish?
Hint For $\vert x \vert, \vert y \vert \le 1$, you have $$\vert xy+y^3 \vert \le \vert x \vert \vert y \vert + \vert y \vert^3 \le \vert x \vert \vert y \vert + \vert y \vert\le 2 \vert y \vert$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving Symmetric Difference of A and B Let A and B be sets. Define the symmetric difference of A and B as A∆B= (A ∪ B) − (A ∩ B). (a) Prove that A∆B = (A − B) ∪ (B − A) I tried to start this but am getting really lost. if someone could try to help that would be great
Using that $A - B = A \cap B^C$: $A \Delta B = $ $(A \cup B) - (A \cap B) = $ $(A \cup B) \cap (A \cap B)^C =$ $ (A \cup B) \cap (A^C \cup B^C) =$ $ (A \cap (A^C \cup B^C)) \cup (B \cap (A^C \cup B^C)) = $ $(A \cap B^C) \cup (B \cap A^C) = $ $(A - B) \cup (B - A)$
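The identity can also be spot-checked exhaustively on random subsets with Python's built-in set operations (just a confidence check, not a proof):

```python
import random

random.seed(0)
U = range(20)
for _ in range(200):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    # A Δ B = (A ∪ B) − (A ∩ B) agrees with (A − B) ∪ (B − A)
    assert (A | B) - (A & B) == (A - B) | (B - A)
```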
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How to calculate $\int_{0}^{\pi/3} \sqrt{\sec ^2 (x)} dx$? I'm trying to calculate the following integral: $\int_{0}^{\pi/3} \sqrt{\sec^2 (x)} dx$ But I have no idea where to start. Can you give me some advice?
As on the integration interval, $\sec x>0$, we may simplify to $$\int_0^\tfrac\pi3\sec x\,\mathrm dx=\int_0^\tfrac\pi3\frac{\mathrm dx}{\cos x}=\int_0^\tfrac\pi3\frac{\cos x\,\mathrm dx}{\cos^2 x}.$$ Can you continue?
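Continuing as suggested leads to the standard value $\int_0^{\pi/3}\sec x\,dx=\ln(2+\sqrt3)\approx1.317$, which a crude midpoint rule confirms (a sketch):

```python
import math

N = 100_000
a, b = 0.0, math.pi / 3
h = (b - a) / N
approx = h * sum(1 / math.cos(a + (i + 0.5) * h) for i in range(N))
assert abs(approx - math.log(2 + math.sqrt(3))) < 1e-6
```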
{ "language": "en", "url": "https://math.stackexchange.com/questions/3098931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solving $f'(x)=f(x+1)$ I was wondering if it was possible to find functions $f$ such that $$ f'(x)=f(x+1) $$ for all $x \in \mathbb{R}$. The only thing i've found so far is that it implies $$ f^{\left(n\right)}\left(x\right)=f\left(x+n\right) $$ is there any way to solve this ?
By the Fourier transform, $$2i\pi\xi F(\xi)=e^{2i\pi\xi}F(\xi).$$ Then $F(\xi)$ is only nonzero for the roots of $2i\pi\xi=e^{2i\pi\xi}$ and the spectrum is discrete. But the equation has no real roots in $\xi$ !
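The Fourier argument rules out integrable (transformable) solutions; non-integrable solutions do exist, coming from the complex roots of $\lambda=e^{\lambda}$: if $\lambda$ is such a root, then $f(x)=\operatorname{Re}e^{\lambda x}$ satisfies $f'(x)=f(x+1)$. A numerical sketch with Newton's method (the starting point is chosen near the root $\approx0.318+1.337i$):

```python
import cmath

# solve e^lam = lam by Newton's method on g(t) = e^t - t
lam = 0.3 + 1.3j
for _ in range(50):
    lam -= (cmath.exp(lam) - lam) / (cmath.exp(lam) - 1)
assert abs(cmath.exp(lam) - lam) < 1e-12

f = lambda x: cmath.exp(lam * x).real   # a real solution of f'(x) = f(x+1)
h = 1e-6
for x in (0.0, 0.7, 2.3):
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(deriv - f(x + 1)) < 1e-6
```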
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
showing a relation on $\mathbb Z$ \ $0$ is an equivalence relation We define a relation on $\mathbb Z \setminus {0}$ where a ~ b iff $0< ab$. How would you show this is an equivalence relation and describe the equivalence classes?
I'm using the notation $a \simeq b$ for the relation since I can't get the $\LaTeX$ for the "tilde" sign to work right at the moment. $ab > 0 \tag 0$ holds if and only if the signs of $a$ and $b$ are the same; thus $a \simeq a, \tag 1$ since $a$ has the same sign as itself; $a \simeq b \Longrightarrow b \simeq a, \tag 2$ since if $a$ has the same sign as $b$, then $b$ has the same sign as $a$; $[a \simeq b] \wedge [b \simeq c] \Longrightarrow [a \simeq c], \tag 3$ since if $a$ has the same sign as $b$ and $b$ has the same sign as $c$ . . . well, you get the idea . . . As for the equivalence classes: there are exactly two, the positive integers and the negative integers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the density of $Y = a/(1 + X^{2})$, where $X$ has the Cauchy distribution. Find the density of $Y = a/(1 + X^{2})$, where $X$ has the Cauchy distribution. MY SOLUTION To begin with, let us remember that the Cauchy probability density function is given by \begin{align*} f_{X}(x) = \frac{1}{\pi(1+x^{2})}\quad\text{for}\quad x\in\textbf{R} \end{align*} Consequently, the sought distribution is described as \begin{align*} F_{Y}(Y\leq y) &= \textbf{P}(Y\leq y) = \textbf{P}\left(\frac{a}{1+X^{2}} \leq y\right) = \textbf{P}\left(1+X^{2}\geq \frac{a}{y}\right)\\\\ & = \textbf{P}\left(|X|\geq \sqrt{\frac{a-y}{y}}\right) = \textbf{P}\left(X\geq \sqrt{\frac{a-y}{y}}\right) + \textbf{P}\left(X \leq -\sqrt{\frac{a-y}{y}}\right)\\\\ & = 1 - F_{X}\left(\sqrt{\frac{a-y}{y}}\right) + F_{X}\left(-\sqrt{\frac{a-y}{y}}\right) \end{align*} where $y\in(0,a]$. Finally, we have \begin{align*} F_{X}(x) = \int_{-\infty}^{x}f_{X}(u)\mathrm{d}u = \int_{-\infty}^{x}\frac{\mathrm{d}x}{\pi(1+x^{2})} = \frac{\arctan(x)}{\pi} + \frac{1}{2}\end{align*} I have two questions. Firstly, I would like to know if this approach is correct. Secondly, I would like to know if there is another way to solve this problem. Thanks in advance!
A different way to approach this is to represent a Cauchy r.v. $X$ as the ratio of two independent $N(0,1)$ r.v.s. Once you write $X=S/T$, with $S,T\sim N(0,1)$, the representation $Y=aT^2/(S^2+T^2)$ drops out. And so on... This method relies on "pattern matching": you have to know the ratio-of-Gaussians fact, and you have to know what the distribution of $T^2/(S^2+T^2)$ is (it is $\mathrm{Beta}(1/2,1/2)$, the arcsine distribution). But if you have these facts in your working tool kit, the calculus details are less intricate.
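Either route can be sanity-checked by simulation against the CDF the question derives, which simplifies to $F_Y(y)=1-\frac{2}{\pi}\arctan\sqrt{\frac{a-y}{y}}$ for $0<y\le a$ (a sketch, assuming $a>0$; Cauchy variates are drawn by inverse transform):

```python
import math, random

random.seed(0)
a = 2.0
N = 200_000
# standard Cauchy via inverse CDF: X = tan(pi * (U - 1/2))
ys = [a / (1 + math.tan(math.pi * (random.random() - 0.5))**2) for _ in range(N)]

def F(y):
    # CDF derived in the question, simplified
    s = math.sqrt((a - y) / y)
    return 1 - 2 * math.atan(s) / math.pi

for y in (0.3*a, 0.5*a, 0.8*a):
    emp = sum(v <= y for v in ys) / N
    assert abs(emp - F(y)) < 0.01
```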
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Prove that every operator G such that GT = TG is a polynomial in T Let $T:V \rightarrow V$ ($\dim V \lt \infty$) be a diagonalizable operator whose eigenvalues all have algebraic multiplicity $1$. Then every operator $G$ such that $GT = TG$ is a polynomial in $T.$ My attempt: I know that if $GT = TG$ then each eigenspace of $T$ is $G$-invariant. In fact, let $v$ be an eigenvector of $T$ with eigenvalue $\lambda$; by assumption $T(G(v)) = G(T(v)) = \lambda G(v)$ $\Rightarrow$ $G(v)$ lies in the $\lambda$-eigenspace of $T$. What do I do with this?
$T$ is diagonalizable, so it is easiest to just change basis to one where $T$ is diagonal. All eigenvalues have multiplicity $1$, so all diagonal entries are distinct. This means that any matrix which commutes with $T$ is also diagonal in this basis (so the subspace of endomorphisms $V\to V$ which commute with $T$ has the same dimension as $V$). Also, since all diagonal entries of $T$ are distinct, $I, T, \ldots, T^{n-1}$ are linearly independent (a Vandermonde argument), so the subspace of endomorphisms consisting of polynomials in $T$ also has dimension $n=\dim V$. Since the polynomials in $T$ form a subspace of the commutant and the two have equal finite dimension, they coincide: every $G$ commuting with $T$ is a polynomial in $T$.
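In the diagonal basis the last step is just interpolation: with distinct eigenvalues $t_i$, a Lagrange polynomial $p$ with $p(t_i)=g_i$ gives $p(T)=G$ for any commuting (hence diagonal) $G=\mathrm{diag}(g_i)$. A sketch with made-up numbers:

```python
# Lagrange interpolation: find p with p(t_i) = g_i, so that
# p(diag(t)) = diag(g) when the t_i are distinct
t = [1.0, 2.0, 4.0]
g = [5.0, -1.0, 3.0]

def p(x):
    total = 0.0
    for i, (ti, gi) in enumerate(zip(t, g)):
        term = gi
        for j, tj in enumerate(t):
            if j != i:
                term *= (x - tj) / (ti - tj)
        total += term
    return total

for ti, gi in zip(t, g):
    assert abs(p(ti) - gi) < 1e-12
```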
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Analytically determine if $f(x) = f'(x)$ is possible? I was taking a test and two true/false type questions were asked. In one of them, I had to say if there is a function $f(x)$ such that $f(x) = f'(x)$. Of course, $e^x$ is such a function and almost everyone who has taken a calculus course knows this fact well. In the other question, I had to determine if $f(x) = -f'(x)$ was possible. I was completely stumped at this one. I had never before encountered a function with such property nor did I know how to approach this problem analytically as I am just a high school student. My question is: is there an analytical way to determine if such a function exists? By analytical, I mean no guessing allowed and just giving an example won't be enough. Is this possible? If not, can you give an example of a function with the above property?
My question is: is there an analytical way to determine if such a function exists? There's a theorem for that: the existence and uniqueness theorem for ordinary differential equations (Picard–Lindelöf); see the Wikipedia article of that name.
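For the second question there is also an explicit witness: $f(x)=Ce^{-x}$ satisfies $f(x)=-f'(x)$, and by the uniqueness half of the theorem these are the only solutions. A quick numerical check:

```python
import math

f = lambda x: math.exp(-x)   # candidate with f(x) = -f'(x)
h = 1e-6
for x in (-1.0, 0.0, 2.5):
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(f(x) + deriv) < 1e-6
```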
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Equation in the complex field $(z+2)^6=z^6$ I'm wondering why if $z$ is a solution of the equation $$ (z+2)^6=z^6 $$ then we must have $\Re(z)=-1$. I've tried to take the real part of both sides, noticing that $$ \Re((z+2)^6)=\Re([(z+2)^3]^2)=|(z+2)^3|^2 $$ but it doesn't seem to work. Thank you in advance.
Doing $z = w - 1$: $$0 = (z + 2)^6 - z^6 = 4w(3w^4 + 10w^2 + 3),$$ $w = 0$ ($z = -1$) is obviously solution and the biquadratic factor $3w^4 + 10w^2 + 3$ has only purely imaginary solutions because $w^2 = -3,-1/3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Find rational numbers $\alpha $ and $\beta$ in $\sqrt[3]{7+5\sqrt{2}}=\alpha+\beta\sqrt{2}$ How should we find two rational numbers $\alpha$ and $\beta$ such that $$\sqrt[3]{7+5\sqrt{2}}=\alpha+\beta\sqrt{2}$$ The answer I got $\alpha = 1 $ and $\beta = 1$. If I'm wrong, please correct me. Thank you
By Gauss' lemma, if $\alpha$ and $\beta$ are rational, then they are integers. Simply cube both sides to get $$7+5\sqrt{2}=(\alpha+\beta\sqrt{2})^3.$$ Expanding the right hand side and comparing coefficients shows that \begin{eqnarray*} 7&=&\alpha^3+6\alpha\beta^2&=&\alpha(\alpha^2+6\beta^2),\\ 5&=&3\alpha^2\beta+2\beta^3&=&\beta(3\alpha^2+2\beta^2), \end{eqnarray*} so $\alpha$ divides $7$ and $\beta$ divides $5$. This leaves very few options to check.
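Checking the surviving candidate $\alpha=\beta=1$ (the value proposed in the question) numerically:

```python
import math

s = math.sqrt(2)
# (1 + sqrt(2))^3 = 1 + 3*sqrt(2) + 3*2 + 2*sqrt(2) = 7 + 5*sqrt(2)
assert abs((1 + s)**3 - (7 + 5*s)) < 1e-12
```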
{ "language": "en", "url": "https://math.stackexchange.com/questions/3099920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Homeomorphism between sets Let $$ Y = \left\{ \frac{1}{2} + \frac{1}{n} \mid n \in \mathbb{N}\right\}\cup\left\{ \frac{1}{2} - \frac{1}{n} \mid n \in \mathbb{N}\right\}$$ and $$ X = \left\{ \frac{1}{n} \mid n \in \mathbb{N}\right\} $$ be subspaces of the Euclidean space $ \mathbb{R} $. Are $X$ and $Y$ homeomorphic (i.e., does there exist a continuous bijection between them whose inverse function is continuous)? I tried to construct a bijection, but, every time I try, I can't get it to be continuous. So I would assume that they are not homeomorphic.
Both subspace topologies are discrete: note $\tfrac12\notin Y$, so every point of $Y$ is isolated, and likewise every point of $X$ is isolated. On a discrete space every map is continuous, so any bijection between $X$ and $Y$ is a homeomorphism — and bijections exist because both sets are countably infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find Basis and Dimension of specific subspace of $M_{2 \times 2}$. Let $V=M_{2 \times 2}(\mathbb{R})$ be the set of all $2 \times 2$ real-valued matrices, and let the field $K = \mathbb{R}$. Then $V$ is a vector space under matrix addition and Scaler multiplication. Let $$A = \begin{bmatrix}1&0\\1&2\end{bmatrix}$$ If $C(A) = \{B \mid B \in V$, such that $AB=BA$} find the dimension of the subspace $C(A)$ and determine a basis. First of all I believe I would have to prove this is a subspace. So I would have to show that the subspace is closed under vector addition and scaler multiplication. Then I would have to construct a basis. The number of vectors in the basis is the dimension of the subspace. It is the condition that is tripping me up. How do show all this with the condition that these $2 \times 2$ matrices are commutative?
To show that $C(A)$ is a subspace, we need to show it closed under both addition and scalar multiplication; so let $B_1, B_2 \in C(A); \tag 1$ then $B_1A = AB_1, \; B_2A = AB_2; \tag 2$ thus, $(B_1 + B_2)A = B_1A + B_2A = AB_1 + AB_2 = A(B_1 + B_2), \tag 3$ which of course implies $B_1 + B_2 \in C(A); \tag 4$ likewise if $\alpha$ is any scalar, we have $(\alpha B)A = \alpha (BA) = \alpha (AB) = A(\alpha B); \tag 5$ thus $\alpha B \in C(A) \tag 6$ as well. Also, it is pretty easy to see that (6) implies $0 \in C(A), \tag 7$ and $B \in C(A) \Longleftrightarrow -B \in C(A); \tag 8$ since the rest of the vector space axioms are inherited by $C(A)$ from $M_{2 \times 2}(\Bbb R)$, it follows that $C(A)$ is indeed a subspace. So, what do the elements of $C(A)$ look like? If $B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}, \tag 9$ then the condition $AB = BA \tag{10}$ reads, with $A = \begin{bmatrix}1&0\\1&2\end{bmatrix}, \tag{10}$ $\begin{bmatrix}1&0\\1&2\end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}\begin{bmatrix}1&0\\1&2\end{bmatrix}; \tag{11}$ at this point, before proceeding further, we observe that the computations specified in (11) may be considerably simplified if we a priori write $A$ in the form $A = \begin{bmatrix}1&0\\1&2\end{bmatrix} = I + \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}, \tag{12}$ since $IB = BI; \tag{13}$ we are left with finding those $B$ such that $ \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}; \tag{14}$ that is, $\begin{bmatrix} 0 & 0 \\ b_{11} + b_{21} & b_{12} + b_{22} \end{bmatrix} = \begin{bmatrix} b_{12} & b_{12} \\ b_{22} & b_{22} \end{bmatrix}, \tag{15}$ we thus find that $b_{12} = 0, \; b_{11} + b_{21} = b_{22} = b_{12} + b_{22}; \tag{16}$ by 
virtue of these equations, we see we may write $B$ in the form $B = \begin{bmatrix} b_{11} & 0 \\ b_{22} - b_{11} & b_{22} \end{bmatrix} = b_{11} \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix} + b_{22} \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}; \tag{17}$ it is now clear that we may take $b_{11}$ and $b_{22}$ as free parameters, and that $C(A)$ is two dimensional, being spanned by matrices $\begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}, \; \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}, \tag{18}$ which form a basis for $C(A)$.
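The two basis matrices in (18) are easy to double-check by direct multiplication (a sketch with a hand-rolled $2\times2$ product):

```python
A = [[1, 0], [1, 2]]

def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B1 = [[1, 0], [-1, 0]]
B2 = [[0, 0], [1, 1]]
for B in (B1, B2):
    assert mul(A, B) == mul(B, A)
# B1, B2 are clearly linearly independent, so dim C(A) >= 2;
# the computation in the answer showed dim C(A) <= 2
```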
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
notation for this summation I have a summation of products of two variables, $v_{j}f_i$, where $j=1,2,\dots,d,d+1,\dots, 2d,\dots, nd$, and after every $d$ steps of $j$ the index $i$ advances from $1$ to $n$. Is the following notation correct in this situation? $$\sum_{j=1}^{nd}\sum_{i=1}^{n} v_j f^{(i)}?$$ What I mean is $(v_1+\dots+v_d)f^{(1)}+(v_{d+1}+\dots+v_{2d})f^{(2)}+\dots$
So you have $$ \begin{split} S &= (v_1+\dots+v_d)f^{(1)}+(v_{d+1}+\dots+v_{2d})f^{(2)}+\dots+(v_{(n-1)d+1}+\dots+v_{nd})f^{(n)} \\ &= f^{(1)}\sum_{k=1}^d v_k + f^{(2)}\sum_{k=d+1}^{2d} v_k + \ldots \\ &= \sum_{i=0}^{n-1} f^{(i+1)}\sum_{k=di+1}^{d(i+1)} v_k \end{split} $$ (Note the upper limit $n-1$: there are $n$ groups. Your proposed double sum $\sum_{j=1}^{nd}\sum_{i=1}^{n} v_j f^{(i)}$ is not the same thing — it pairs every $v_j$ with every $f^{(i)}$.)
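The index bookkeeping is easy to verify on a small example (the values below are hypothetical):

```python
n, d = 3, 4
v = [float(j) for j in range(1, n*d + 1)]   # v_1 .. v_{nd}
f = [10.0, 20.0, 30.0]                      # f^(1) .. f^(n)

# direct grouping: (v_1+...+v_d) f^(1) + (v_{d+1}+...+v_{2d}) f^(2) + ...
direct = sum(f[i] * sum(v[i*d:(i+1)*d]) for i in range(n))

# the double-sum form S = sum_{i=0}^{n-1} f^{(i+1)} sum_{k=di+1}^{d(i+1)} v_k
S = sum(f[i] * sum(v[k-1] for k in range(d*i + 1, d*(i+1) + 1)) for i in range(n))
assert direct == S
```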
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenvalues of a matrix whose square is zero Let $A$ be a nonzero $3 \times 3$ matrix such that $A^2=0$. Then what is the number of non-zero eigenvalues of the matrix? I am unable to figure out the eigenvalues of the above matrix. P.S.: how would the answer change if it were given that $A^3=0$?
Another approach is this one: Since $A^2 = 0$, the polynomial $g(x) = x^2$ annihilates $A$ (meaning that the linear operator $g(A)$ is the null operator). However, the minimal polynomial of $A$ must divide every polynomial that annihilates $A$, so if $m(x)$ is this polynomial, $m$ must divide $g$. Hence, $m(x) = x^2$, because $A \neq 0$. Thus, the characteristic polynomial of $A$ is $p(x) = x^3$, because $p(x)$ has the same roots as $m(x)$ (why?), and $p(x)$ annihilates $A$ (by the Cayley–Hamilton theorem). In conclusion, the characteristic polynomial of $A$ has only a single root, $0$, and since every eigenvalue of $A$ is a root of its characteristic polynomial, $0$ is the only eigenvalue of $A$; the number of nonzero eigenvalues is therefore $0$. The same argument with $g(x)=x^3$ shows the answer is unchanged if instead $A^3=0$.
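A concrete instance: the matrix below is nonzero with $A^2=0$, and the annihilation argument forces every eigenvalue to vanish (a sketch; `mul` is a hand-rolled matrix product):

```python
# a nonzero 3x3 matrix with A^2 = 0
A = [[0, 0, 0],
     [1, 0, 0],
     [0, 0, 0]]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Z = [[0] * 3 for _ in range(3)]
assert mul(A, A) == Z
# if A v = lambda v with v != 0, then A^2 v = lambda^2 v = 0, so lambda = 0
```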
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Ideas on calculating the volume of $K:=\{(x,y,z)\in \mathbb R^{3}: x^2+y^2+z^2\leq 1, x^2+y^2\leq a\}$ Calculate the volume of $K:=\{(x,y,z)\in \mathbb R^{3}: x^2+y^2+z^2\leq 1, x^2+y^2\leq a\}$ and note that $0 < a < 1$ My ideas: depending on the size $a$, I have a cylinder whose height is restricted by the radius of the unit sphere. $\lambda^{3}(K)=\int_{K}d\lambda^{3}=\int_{[0,2\pi]}\int_{[0,a]}\int_{[-\sqrt{1-a},\sqrt{1-a}]}rdzdrd\phi=4\pi \sqrt{1-a}\int_{[0,a]}rdr=4\pi\sqrt{1-a}\frac{1}{2}a^2=2\pi a^2 \sqrt{1-a}$ Is this correct? Note: if $a = 1$, I will use $\lambda^{2}$ and in this case $\lambda^{2}(K)=\frac{4}{3}\sqrt{\pi}$
This is a solid of revolution so can be found using the cylindrical shell method $$2\int_0^{\sqrt{a}}2\pi rh\,dx$$ where $r=x$ and $h=\sqrt{1-x^2}$ Note that this is an elementary integral and you will get the result $$ V=\frac{4\pi}{3}\left[1-(1-a)^{3/2}\right] $$
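A numeric check of the closed form (my addition, with an arbitrary choice of $a$): compare a midpoint Riemann sum of the shell integral $\int_0^{\sqrt a} 4\pi x\sqrt{1-x^2}\,dx$ against $\frac{4\pi}{3}\left[1-(1-a)^{3/2}\right]$.

```python
import math

a = 0.5                          # any 0 < a < 1
N = 100_000
upper = math.sqrt(a)
w = upper / N
riemann = 0.0
for i in range(N):               # midpoint rule for the shell integral
    x = (i + 0.5) * w
    riemann += 4 * math.pi * x * math.sqrt(1 - x * x) * w
closed = (4 * math.pi / 3) * (1 - (1 - a) ** 1.5)
print(riemann, closed)           # the two values agree closely
```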
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
S is a dense subset of $L^{p'}$, $\int_{E}fg = 0$ for all $f \in S$, then $g= 0$ Problem: $E$ is a measurable set and $1 \leq p < \infty$. Let $p′$ be the conjugate of $p$, and $S$ is a dense subset of $L^{p′}(E)$. Show that if $g \in L^p(E)$ and $\int_{E}fg = 0$ for all $f \in S$, then $g= 0$. Definition of Density: $S$ is dense in $L^{p'}(E)$ if $\forall h \in L^{p'}(E), \forall \epsilon > 0, \exists f \in S$ s.t. $\left| \left| f-h \right| \right|_{p'} < \epsilon$ or equivalently $\exists (f_n)$ in $S$ s.t. $\lim_{{n}\to{\infty}}f_n=h$ a.e. on $E$. Idea?: As p and p' are conjugates, I was thinking to use Holder's Inequality. $\int_{E}\left| fg \right| \leq \left| \left| f \right| \right|_{p'} \left| \left| g \right| \right|_p$
This problem was probably designed for you to use duality, namely the fact that if $1 \leq p < \infty$ and $g \in L^{p}(E)$, then $$ \| g \|_{L^p(E)} = \sup_{\| h\|_{L^{p’}(E)}=1} \left| \int_E gh \right|. \tag{1} $$ (See, e.g. Grafakos Volume I, Chapter 1.) We will henceforth use $\| \cdot \|_p$ and $\| \cdot \|_{p’}$ to denote the norms $\| \cdot \|_{L^p(E)}$ and $\| \cdot \|_{L^{p’}(E)}$, respectively. To prove that $\|g \|_{p} = 0$, let $\varepsilon > 0$ be fixed. For any $h \in L^{p’}(E)$, we can find $f \in S$ with $$ \| h - f \|_{p'} \leq \frac{\varepsilon}{\| g\|_{p} + 1}. $$ (We have put $\| g\|_p + 1$ in the denominator to cover the case that $\| g \|_p = 0$. We could have assumed that $\| g \|_p \neq 0$, but that would have turned the proof into a proof by contradiction, which I find significantly less appealing.) For this $h$ and $f$, it follows from Hölder that $$ \begin{split} \left| \int_E gh \right| &\leq \left| \int_E gf \right| + \left| \int_E g(h-f) \right| = \left| \int_E g(h-f) \right| \\ &\leq \| g \|_{p} \, \| h-f \|_{p'} \leq \varepsilon. \end{split} $$ Since $\varepsilon$ was arbitrary, we conclude that $\|g \|_{p} = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving $\frac{n}{2n+1} < \sqrt{n^2+n} -n < \frac{1}{2}$ I would like to prove using Mean Value theorem for $n \ge 1$ $$\frac{n}{2n+1} < \sqrt{n^2+n} -n < \frac{1}{2}$$ RHS can be proved by rationalizing the square root term, not sure about the LHS.
Why MVT? For the LHS you are asking about you have \begin{eqnarray*} \sqrt{n^2+n} -n & = & \frac{n^2 + n - n^2 }{\sqrt{n^2+n} + n} \\ & \color{blue}{>} & \frac{n}{\sqrt{n^2+2n+1}+n} \\ & = & \frac{n}{\sqrt{(n+1)^2}+n} \\ & = & \frac{n}{2n+1} \\ \end{eqnarray*}
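A spot-check of both inequalities over a range of $n$ (a sketch; floating point is accurate enough here since the gaps are of order $1/(8n)$):

```python
import math

# check n/(2n+1) < sqrt(n^2 + n) - n < 1/2 for n = 1 .. 100000
ok = all(n / (2 * n + 1) < math.sqrt(n * n + n) - n < 0.5
         for n in range(1, 100_001))
print(ok)  # True
```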
{ "language": "en", "url": "https://math.stackexchange.com/questions/3100809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How can we find this limit $\lim_{n\to\infty \\x\to\infty}f^n(x)$? Let $f:\mathbb{R}\rightarrow \mathbb{R}$ , $f(x)=\frac{ax+b}{cx+d}$ and $a,b,c,d>0$ then $f^1(x)=f(x), f^2(x)=f(f(x)), f^3(x)=f(f(f(x)))$ and $f^n(x)={f(f(f\cdots f(x)\cdots )))}$, where $ f^n(x)$ is the $ n $ composition of the function $f(x).$ It's required finding this limit: $$\lim_{n\to\infty \\x\to\infty}f^n(x)$$ It is possible to find $f^n(x)$ for finite number $n$. For example, I found $f^2(x) = \frac{a^2 x + a b + b c x + b d}{a c x + b c + c d x + d^2}$. I can't do more than that. What is the math level of this problem? In fact, I wanted to solve. But I could not. Thank you very much!
One approach uses linear algebra: For a matrix $ M = \begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}$, put $f_M(x)=\frac{\alpha x+\beta}{\gamma x+\delta}$. Then a direct computation shows that $f_M \circ f_N = f_{MN}$. In your case, $f = f_A$ and $f^n=f_{A^n}$, with $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$. From this the answer to the question in the text follows: 1. The level of math is linear algebra (where you learn how to compute powers of matrices, e.g., by eigenvalues or the Jordan form) and calculus 1 (where you learn to compute the resulting limit). And the answer to the question in the title is: yes, with some work.
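A small illustration of $f^n = f_{A^n}$ with exact rational arithmetic (the coefficients $a,b,c,d$ and the starting point are arbitrary choices; the long iteration at the end shows convergence to the attracting fixed point, which for this $A$ is $1$, coming from the dominant eigenvector):

```python
from fractions import Fraction

def mobius(M, x):
    (a, b), (c, d) = M
    return (a * x + b) / (c * x + d)

def matmul(M, N):
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

A = [[2, 3], [1, 4]]        # f(x) = (2x+3)/(x+4); a, b, c, d > 0
x = Fraction(7)

y = x                        # five-fold composition, one step at a time
for _ in range(5):
    y = mobius(A, y)

P = A                        # versus a single evaluation of f_{A^5}
for _ in range(4):
    P = matmul(P, A)
assert y == mobius(P, x)

z = Fraction(7)              # iterating further approaches the attracting
for _ in range(40):          # fixed point (eigenvalues of A are 5 and 1;
    z = mobius(A, z)         # the eigenvector for 5 is (1,1), giving 1)
print(float(z))              # very close to 1.0
```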
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Do the eigenvalues of a matrix need to be in the same field as the elements of the matrix? Do the eigenvalues of a matrix need to be in the same field as the elements of the matrix? I believe they do not since, for instance, I was taught that the trace is the sum of the eigenvalues and this would mean that this is not always the case. What puzzles me is that I have seen people claiming that some real matrix has no eigenvalues simply because the eigenvalues were all complex numbers. Were they right?
That depends upon the context. If, say, you have an $n\times n$ real matrix and someone asks you what its eigenvalues are, then it is implicit that you should provide the real eigenvalues. And perhaps there are none. On the other hand, it may be implicit that, in fact, you are after the complex eigenvalues. Either way, if the matrix is real, then the trace is real, but the matrix may well have non-real eigenvalues. For instance, $\left[\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right]$ has null trace, but its eigenvalues are $\pm i$.
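The example matrix can be checked directly from its characteristic polynomial $\lambda^2 - \operatorname{tr}\lambda + \det = \lambda^2+1$ (a sketch using the quadratic formula over $\mathbb{C}$):

```python
import cmath

a, b, c, d = 0, -1, 1, 0               # the matrix [[0, -1], [1, 0]]
tr, det = a + d, a * d - b * c          # trace 0 (real), determinant 1
disc = cmath.sqrt(tr * tr - 4 * det)    # roots of x^2 - tr*x + det
roots = ((tr + disc) / 2, (tr - disc) / 2)
print(roots)                            # the two eigenvalues are +i and -i
```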
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Exchanging limits when functions may not converge uniformly Let $X$ be a compact metric space and for each $i \geq 1$, let $f_i \colon X \to \mathbb{N}$ be continuous functions satisfying: * *$f_{i+1}(x) \geq f_i(x)$ and; *for each $n \in \mathbb{N}$ there exists an $x \in X$ and $j\geq 1$ such that $f_j(x) \geq n$. Does there then exist $x \in X$ such that ${\displaystyle \lim_{i \to \infty}(f_i(x)) = \infty}$? The above situation appeared in my research on metric dynamical systems (in a more specific setting, but I think the above criteria is the relevant information). My attempt was to take the sequence of elements $x_n$ and $j_n$ such that $f_{j_n}(x_n) \geq n$ and then form a convergent subsequence $x_{n_k}$. The limit ${\displaystyle x := \lim_{k \to \infty}x_{j_k}}$ is then a candidate for the point we need. However, we end up getting that $$\begin{array}{rcl} \displaystyle \lim_{i \to \infty}(f_i(x)) & = & \displaystyle \lim_{i \to \infty}(f_i(\lim_{k \to \infty}x_{j_k})) \\ & = & \displaystyle \lim_{i \to \infty}\lim_{k \to \infty}f_i(x_{j_k}) \end{array} $$ and it is not clear at all to me that we are able to exchange the two limits. If we can, then we end up with $\displaystyle \lim_{i \to \infty}\lim_{k \to \infty}f_i(x_{j_k}) = \lim_{k \to \infty}\lim_{i \to \infty}f_i(x_{j_k})$ and as the sequence $(f_i(x_{j_k}))_{i \in \mathbb{N}}$ is eventually bounded below by $k$, it follows that $\displaystyle \lim_{i \to \infty}(f_i(x)) \geq \lim_{k \to \infty} k = \infty$. The Moore-Osgood Theorem would allow this exchange of limits if the $f_i$ converged uniformly, but I don't know if we can assume uniform convergence here. So my problem is in being able to justify the exchange of these limits. Is this always possible in the above situation? Is the statement above true in general or do I require stronger conditions on the space/functions (for instance, it's automatic if $X$ is connected)?
I don't think it is true in this generality: Consider $X=\{0\}\cup\{\frac {1}{n}:n\in\mathbb{N}\}$ and define $f_n(x)=\frac {1}{x}$ for $x\geq \frac {1}{n}$ and zero otherwise. Each $f_n$ is continuous and the sequence is nondecreasing and unbounded, but for every fixed $x$ the values $f_n(x)$ are eventually constant ($\frac1x$ for $x>0$, and $0$ at $x=0$), so there is no point at which $f_n(x)\to\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If the ground field has characteristic $p$ then every line through the origin is a tangent line to the curve $y = x^{p+1}$ I tried simple example $F_{2}$ and $y=x^3$. The there are only 2 points (0,0) and (1,1). Then how to prove that every line through the origin is a tangent line?
Suppose $L$ is a line through the origin, so $L$ is given by the equation $y=mx$ for some $m$. The intersection of $L$ with the curve $y=x^{p+1}$ is $mx=x^{p+1}$ which is equivalent to $x(x^p - m)=0$, or $x(x-\sqrt[p]{m})^p=0$ because we are in characteristic p. By looking at the zeros of this equation, the intersection points correspond to $x=0$ (the origin) or $x=\sqrt[p]{m}$ with multiplicity $p>1$, so that is why the line $L$ is tangent to the curve. Ref:Tangent lines to a curve passing through a given point
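Since $m^p \equiv m \pmod p$ by Fermat, the $p$-th root of $m$ in $\mathbb{F}_p$ is $m$ itself, so the factorization reads $x^{p+1}-mx = x(x-m)^p$ over $\mathbb{F}_p$. A coefficient-level check of this identity (a sketch; the freshman's dream kills all middle binomial coefficients mod $p$):

```python
from math import comb

def x_times_pow(m, p):
    """Coefficients (descending degree) of x*(x - m)^p reduced mod p."""
    coeffs = [(comb(p, k) * (-m) ** k) % p for k in range(p + 1)]  # (x - m)^p
    return coeffs + [0]                     # multiplying by x appends a constant 0

for p in (2, 3, 5, 7):
    for m in range(1, p):
        want = [1] + [0] * (p - 1) + [(-m) % p, 0]   # x^(p+1) - m*x mod p
        assert x_times_pow(m, p) == want
print("x^(p+1) - m*x = x*(x - m)^p mod p, checked for p = 2, 3, 5, 7")
```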
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that spaces $C^k(\mathbb{R}^n)$ and $C^\infty(\mathbb{R}^n)$ are infinite dimensional. Prove that spaces $C^k(\mathbb{R}^n)$ and $C^\infty(\mathbb{R}^n)$ are infinite dimensional. So in order to prove that they're infinite dimensional spaces, I need to form a linearly independent sequence which is infinite. Would I simply pick polynomials? (i.e. $1,x,x^2,\dots$ as my basis?)
Yes and no. Yes, because they form an infinite linearly independent family, and that is enough to prove what you want to prove. And no because they do not form a basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there a "greatest function" that converges? We just hit convergence tests in calculus, and learned that $\sum_{n=1}^{\infty} \frac{1}{n^p}$ converges for all $p \gt 1$. I thought that this was sort of a "barrier" between what converges and what diverges. Specifically, that setting $a_n=\frac{1}{n^{1+\epsilon}}$ is sort of the "greatest function" (I'll make this precise later) for which $\sum a_n$ converges. But, I did realize that there are functions that dominate $\frac{1}{n^{1+\epsilon}}$ but not $\frac1n$, such as $\frac{1}{n\log(n)}$. Now, the sum of that specific example diverges, but it got me wondering about whether $\frac{1}{n}$ is truly the "boundary". So, this leads me to two questions. 1) Is there a function $f$ that dominates $\frac{1}{n^p}$ for all $p>1$, meaning: $$\lim_{x\to\infty} \frac{f(x)}{\frac{1}{x^p}}=\infty$$ Such that: $$\sum_{n=1}^\infty f(n)$$ converges? 2) If so, up to a constant is there a function $g$ such that $\sum_{n=1}^\infty g(n)$ converges, such that $g$ dominates $f$ for all other functions $f$ such that $\sum_{n=1}^\infty f(n)$ converges? I'm just a freshman in high school so I apologize if this is a stupid question.
1) $$f(n) = \frac{1}{n \log(n)^2}$$ 2) No. Given any $g > 0$ such that $\sum_n g(n)$ converges, there is an increasing sequence $M_k$ such that $$\sum_{n \ge M_k} g(n) < 2^{-k}$$ Then $ \sum_n g(n) h(n)$ converges, where $h(n) = k$ for $M_k \le n < M_{k+1}$.
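Part 1 can be checked numerically (my addition): the tail of $\sum 1/(n\log^2 n)$ beyond $N$ is bounded by $\int_{N-1}^\infty \frac{dx}{x\log^2 x} = \frac{1}{\log(N-1)}$, so the partial sums settle down even though the terms eventually dominate every $1/n^p$ with $p>1$.

```python
import math

def partial(N):
    return sum(1 / (n * math.log(n) ** 2) for n in range(2, N))

s4, s6 = partial(10 ** 4), partial(10 ** 6)
tail_bound = 1 / math.log(10 ** 4 - 1)    # integral bound on the tail past 10^4
print(s4, s6, tail_bound)                 # s6 - s4 stays below the tail bound
```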
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 2, "answer_id": 0 }
Weak solution for Burgers' equation I have the following IVP: $$ u_t + u u_x = 0,\qquad u(x,0) = \left\lbrace \begin{aligned} &0 && \text{if } x<-3 \\ &0.5 && \text{if } {-3}<x<-2 \\ &1 && \text{if } {-2}<x<0 \\ &1-x && \text{if } 0\leq x<1 \\ &0 && \text{if } x\geq 1 \end{aligned}\right. $$ Question is to find weak solution say at $t=1,2$ try Well, the characteristics are given by $$ \frac{dt}{ds} = 1, \frac{d x }{ds} = u, \frac{u}{ds} = 0 $$ and we choose the initial curve to be $(r,0,f(r))$ where $f(r) = u(r,0)$ We see that the characteristics are of the form $x= f(r) s +r $ and $t = s$ when trying to graph the characteristics, there at the endpoints of the intial data we have lines that cross but I dont understand how to find weak solution. what would be the strategy?
Since your initial data are discontinuous and also decreasing on part of the domain, your solution contains rarefaction waves and shock waves. The rarefaction waves appear at $x=-3$ and $x=-2$ at time $t=0$ and the shock appears at some positive time when the characteristics collide. To extend the solution beyond the collision you need the Rankine-Hugoniot conditions on the shock, giving you the right speed of the shock line. You can read about the topic here https://web.stanford.edu/class/math220a/handouts/conservation.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/3101906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding limit when values of derivatives at a point are given Let $f:\Bbb R\to\Bbb R$ be such that $f''$ is continuous on $\Bbb R$ and $f(0)=1, f'(0)=0, f''(0)=-1$. The $\displaystyle{\lim_{x\to\infty}\left(f\left(\frac{\sqrt2}{x}\right)\right)^x}$ is ..... I did this using particular function $f(x)=1-\frac{x^2}2$ and got answer $\frac1e$, but how to do generally?
Hint: Exponentiate the expression and express it in a form wherein you can use L'Hopital's rule. $$ \exp \ln\Biggl(f\biggl( \dfrac{\sqrt{2}}{x} \biggr)\Biggr)^x=\exp x\ln \Biggl(f\biggl( \dfrac{\sqrt{2}}{x} \biggr)\Biggr)=\exp \dfrac{\ln\Biggl(f\biggl( \dfrac{\sqrt{2}}{x} \biggr) \Biggr)}{1/x}$$Note that now you can simply make use of L'Hopital's rule and rules of differentiation to evaluate this limit. You shouldn't be getting $1/e$, the answer is $1$.
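A numeric check that the limit really is $1$ (my addition): both $f(x)=1-x^2/2$, the OP's test function, and $f=\cos$ satisfy $f(0)=1$, $f'(0)=0$, $f''(0)=-1$, and $x\ln f(\sqrt2/x)\approx -1/x\to 0$. My guess that the $1/e$ came from exponentiating by $x^2$ instead of $x$ is speculation, but the last line shows that variant does give $e^{-1}$.

```python
import math

vals = []
for f in (lambda u: 1 - u * u / 2,   # the OP's test function
          math.cos):                  # also has f(0)=1, f'(0)=0, f''(0)=-1
    for x in (1e3, 1e5):
        vals.append(f(math.sqrt(2) / x) ** x)
print(vals)                           # all four values are close to 1

x = 1e3                               # exponent x^2 instead of x gives 1/e
wrong = (1 - (math.sqrt(2) / x) ** 2 / 2) ** (x * x)
print(wrong)                          # close to 1/e = 0.3678...
```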
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that the logarithm family of sums diverges? Define $$a_k(n) =\frac{1}{n\log(n)\log(\log(n)) \cdots \log^{k}(n)}$$ Do all of the sums (for a fixed $k \in \mathbb{N}$): $$A_k = \sum_{n} a_k(n)$$ Diverge?
We can use the integral test. Since $a_k(n)$ is decreasing in $n$, we have $$ \sum_n a_k(n) =\infty \ \ \ \Longleftrightarrow \ \ \ \int_{A}^\infty \frac{\mathrm dx}{x\cdot \log (x) \cdot\log(\log x)\cdots \log^k(x)}=\infty. $$ Let $t_k = \log^k (x)$, $t_0 = x$. We find that $t_k = \log(t_{k-1})$ and $$ \mathrm d t_k = \frac{\mathrm dt_{k-1}}{t_{k-1}}=\cdots=\frac{\mathrm dx}{t_{k-1}\cdots t_1 t_0}=\frac{\mathrm dx}{\log^{k-1}(x)\cdots \log(x) \cdot x}. $$ This gives $$ \int_{A}^\infty \frac{\mathrm dx}{x\cdot \log (x) \cdot\log(\log x)\cdots \log^k(x)}=\int_{A'}^\infty \frac{\mathrm dt_k}{t_k}=\log(t_k)\Big|^\infty_{A'}=\infty. $$
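A numeric illustration for $k=1$ (my addition): the partial sums of $1/(n\ln n\ln\ln n)$ do keep growing, in line with the $\log^3 N$ growth the substitution predicts, just extraordinarily slowly.

```python
import math

def partial(N):
    return sum(1 / (n * math.log(n) * math.log(math.log(n)))
               for n in range(3, N))

s1, s2 = partial(10 ** 4), partial(10 ** 6)
print(s1, s2)    # still climbing between 10^4 and 10^6, consistent with divergence
```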
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $G: Z[t] → \mathbb{C}$ defined by $G(f)=f(\sqrt{-1})$ is a ring homomorphism with kernel $(t^2 + 1)$ and image the Gaussian integers. I'm clear about the ring homomorphism and image part. But I'm not sure how to formulate my language precisely about the statement of the kernel.
First show that $\langle t^2+1 \rangle\subseteq \ker(G)$. This is easy, as $G(t^2+1)=i^2+1=0$. So $t^2+1 \in \ker(G)$, and hence the ideal generated by it is a subset of $\ker(G)$. Now you need to show that $\ker(G) \subseteq \langle t^2+1 \rangle $. Let $f \in \ker(G)$; then $f(i)=0$. Since $f(t) \in \mathbb{Z}[t]$ has real coefficients, $f(-i)=\overline{f(i)}=0$. This means $(t-i)(t+i)=t^2+1$ is a factor of $f$. Thus $f \in \langle t^2+1 \rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$c_{00}$ is not complete I try to show that the space $c_{00}=\{(x_n):x_n=0 \text{ all but finitely many }n\}$ is not complete with respect to the norm $\|x\|_\infty=\max |x_n|$. My attempt: Let $(z_n)=\left(1,\frac{1}{2},\dots,\frac{1}{n},0,0,\dots\right)$ be a sequence. Clearly $(z_n)\in c_{00}$. We have convergence $(z_n)\to (z_\infty)$ with $(z_\infty)\in c_{0}$ but $(z_\infty) \not\in c_{00}$. So we have only to show that $(z_n)$ is a Cauchy sequence with respect to $\|\|_\infty$. Now I am confused because of the $\|\cdot\|_\infty$ norm. Do I just subtract the entries of the sequences and then take the maximum entry? Meaning for $m>n$ I have $$z_n-z_m=\left(0, \dots,0,\frac{1}{n+1},\dots,\frac{1}{m},0,0,\dots\right)$$ $$\|z_n-z_m\|_\infty=\frac{1}{n+1}\to 0 \text{ as } n\to \infty $$ Thank you for help
What you did is fine. Now, given $\varepsilon>0$, taket $N\in\mathbb N$ such that $\frac1N\leqslant\varepsilon$. Then$$m>n\geqslant N\implies\lVert z_n-z_m\rVert=\frac1{n+1}\leqslant\frac1{N+1}<\frac1N\leqslant\varepsilon.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $\sum_{i=1}^{n} i \times i! = (n+1)! - 1$ by induction \begin{align*} \sum_{i = 1}^{k + 1} i(i!) & = \sum_{i = 1}^{k} i(i!) + (k + 1)(k + 1)!\\ & = (k + 1)! - 1 + (k + 1)(k + 1)! & \text{by the induction hypothesis}\\ & = (1 + k + 1)(k + 1)! - 1\\ & = (k + 2)(k + 1)! - 1\\ & = (k + 2)! - 1 \end{align*} I have a question from this post solving the problem Prove by induction that $\sum_{i=1}^n i!\times i=(n+1)!-1$ for all $n\in \mathbb{N}$ How does the person go from $ = (k + 2)(k + 1)! - 1\\ = (k + 2)! - 1$ at the very end? I don't understand how the permutation of $(k+1)!$ and (k+2) are able to combine into $(k+2)!$
It is because, by definition, $n! = n(n-1)!$ and $0! = 1$. Just take $n=k+2$.
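The whole identity can be verified directly for small $n$; each term also telescopes as $i\cdot i! = (i+1)! - i!$, which is where the induction step comes from.

```python
from math import factorial

for n in range(1, 12):
    lhs = sum(i * factorial(i) for i in range(1, n + 1))
    assert lhs == factorial(n + 1) - 1   # telescoping: i*i! = (i+1)! - i!
print("identity verified for n = 1..11")
```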
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Building a function out of a given domain and range How does one build a function while given domain and range ? For example, the domain $(0,5]$ and range $[0, \infty)$
something like $f(x)= \frac 1x - \frac 15$ seems like it would fit the bill. Or $g(x) = -\ln x + \ln 5$. I am looking to map the open end of the domain to the open end of the range, and the closed end of the domain to the closed end of the range. It is not entirely necessary, but if we don't, it seems to me that we have to start looking for discontinuous functions. $f(x)$ approaches infinity as $x$ approaches $0$, and $f(5) = 0$. Any function with asymptotic behavior around $0$ should be a good place to start. Then we do whatever we need to fit one endpoint to the other. Functions that are strictly decreasing are nice, since that removes the possibility that they might dip below $0$. And continuous functions are nice, as they will pass through every value between the extreme values of the range.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Ways to arrange $n\geq2$ people around a circular table, given two permanent seats. How many ways to arrange $n\geq2$ people around a circular table, given two specific people who cannot stand next to each other? I've observed that when $n=2$ and $n=3$ there exists no way to arrange them so that the two specific people aren't standing next to each other. I think for when $n=4$ there are $3!$ ways to arrange them, but I've had trouble coming up with a formula that can describe all values of $n$. Perhaps I can attack the problem by first arranging the other people in the circle, and then placing either one of the two specific people in between the gaps?
Well, there are $n$ seats. Assume the seats are not numbered, so the first mandatory person can be placed anywhere. Then place the second mandatory person: there are $n-3$ choices (why?). Place the rest of the $n-2$ people into the $n-2$ remaining seats, in $(n-2)!$ ways. Total: $(n-3)(n-2)!$ ways.
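A brute-force confirmation of $(n-3)(n-2)!$ (my addition; this counts arrangements up to rotation, with reflections counted as distinct, matching the answer's convention): fix person $0$ in seat $0$ to kill the rotational symmetry, then count seatings where person $1$ avoids the two adjacent seats.

```python
from itertools import permutations
from math import factorial

def count(n):
    """Person 0 fixed at seat 0; count seatings with persons 0, 1 not adjacent."""
    total = 0
    for perm in permutations(range(1, n)):   # people 1..n-1 into seats 1..n-1
        seat_of_1 = perm.index(1) + 1
        if seat_of_1 not in (1, n - 1):      # the seats adjacent to seat 0
            total += 1
    return total

for n in range(4, 8):
    assert count(n) == (n - 3) * factorial(n - 2)
print("(n-3)(n-2)! confirmed for n = 4..7")
```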
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
find the sum of$\sum_{n=1}^{\infty } \frac{4^{n}}{n!\left ( n+2 \right )!}$ $\sum_{n=1}^{\infty } \frac{4^{n}}{n!\left ( n+2 \right )!} = \sum_{n=1}^{\infty } \frac{4^{n}}{ \left ( 2n+1 \right )!B\left ( n+1,n+1 \right ) \left ( n+1 \right ) \left ( n+2 \right )}$ Given, $F(n) = \frac{4^{n}}{ \left ( 2n+1 \right )!B\left ( n+1,n+1 \right ) }$ then, $\sum_{n=1}^{\infty } \frac{4^{n}}{n!\left ( n+2 \right )!} = \sum_{n=1}^{\infty } \frac{F\left ( n \right )}{ \left ( n+1 \right )\left ( n+2 \right ) } = \frac{F(1)}{2} + \frac{F(2) - F(1)}{3} + \frac{F(3) - F(2)}{4} + \frac{F(4) - F(3)}{5} + \cdots \cdots $ finally, I couldn't solve it. Please guide me how to deal with this problem. Thank you in advance.
The sum is $$-\frac{1}{2}+\frac{{I}_{2}(4)}{4}$$ where $I_2$ is a modified Bessel function of order $2$. More generally, $$ \sum_{n=0}^\infty \frac{x^n}{n!(n+k)!} = \frac{I_k(2 \sqrt{x})}{x^{k/2}}$$ for nonnegative integers $k$.
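A numeric check of the closed form (my addition): evaluating $I_2(4)$ from the defining series $I_k(z)=\sum_m (z/2)^{k+2m}/(m!(m+k)!)$, so this verifies the index bookkeeping rather than an independent computation of the Bessel function.

```python
from math import factorial

s = sum(4 ** n / (factorial(n) * factorial(n + 2)) for n in range(1, 30))

def bessel_i(k, z, terms=40):     # defining series of the modified Bessel I_k
    return sum((z / 2) ** (k + 2 * m) / (factorial(m) * factorial(m + k))
               for m in range(terms))

closed = -0.5 + bessel_i(2, 4.0) / 4
print(s, closed)                   # both approximately 1.10555
```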
{ "language": "en", "url": "https://math.stackexchange.com/questions/3102947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is the sum and difference of two projection matrices $P_1$ and $P_2$ a projection matrix? Let $P_1$ and $P_2$ be two projection matrices for orthogonal projections onto $S_1 \in \mathcal{R}^m$ and $S_2 \in \mathcal{R}^m$, respectively. When does $P_1+P_2$ and $P_1-P_2$ result in a projection matrix? Prove it. I am confident that $P_1+P_2$ is a projection matrix iff $P_1P_2=P_2P_1=(0).$ Similarly, I feel like $P_1-P_2$ is a projection matrix iff $P_1P_2=P_2P_1=P_2.$ However, I do not know how to formally prove either of the above statements. How should I formalize the proof?
If $P_1+P_2$ is a projection, then $$ P_1+P_2=(P_1+P_2)^2=P_1+P_2+P_1P_2+P_2P_1. $$ So $P_1P_2+P_2P_1=0$. Multiply on the left by $I-P_1$ to get $(I-P_1)P_2P_1=0$. So $P_2P_1=P_1P_2P_1$, selfadjoint, which then gives $P_1P_2=P_2P_1$. So $2P_1P_2=0$, and $P_1P_2=0$. If $P_1-P_2$ is a projection, then $$ P_1-P_2=(P_1-P_2)^2=P_1+P_2-P_1P_2-P_2P_1. $$ So $P_1P_2+P_2P_1=2P_2$. Multiply by $I-P_2$ on the right, to get $P_2P_1(I-P_2)=0$. As above, we conclude that $P_2P_1=P_1P_2$. Then $$ 2P_2=P_1P_2+P_2P_1=2P_1P_2, $$ and $P_1P_2=P_2$.
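Concrete $2\times2$ illustrations of both conditions (my addition, using the coordinate projections as an arbitrary example): with $P_1=\operatorname{diag}(1,0)$ and $P_2=\operatorname{diag}(0,1)$ we have $P_1P_2=0$ and $P_1+P_2=I$ is a projection; with $I$ and $P_1$ we have $IP_1=P_1$ and $I-P_1$ is a projection.

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_projection(P):              # symmetric and idempotent
    n = len(P)
    sym = all(P[i][j] == P[j][i] for i in range(n) for j in range(n))
    return sym and matmul(P, P) == P

P1 = [[1, 0], [0, 0]]              # orthogonal projection onto the x-axis
P2 = [[0, 0], [0, 1]]              # orthogonal projection onto the y-axis
I  = [[1, 0], [0, 1]]              # projection onto all of R^2

add = [[P1[i][j] + P2[i][j] for j in range(2)] for i in range(2)]
sub = [[I[i][j] - P1[i][j] for j in range(2)] for i in range(2)]

assert matmul(P1, P2) == [[0, 0], [0, 0]] and is_projection(add)  # sum case
assert matmul(I, P1) == P1 and is_projection(sub)                 # difference case
print("both conditions illustrated")
```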
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How can I simplify $\sqrt{\frac{5+\sqrt{5}}{2}}$? I've tried to see the root as $\sqrt{\frac{5+\sqrt{5}}{2}} = \sqrt{a}+\sqrt{b},$ but this method doesn't give me something good.
We can write it so: $$\sqrt{\frac{5+\sqrt{5}}{2}}=\sqrt{\frac{5+\sqrt{5}}{2}}+\sqrt0.$$ Let there be rationals $a$ and $b$ for which $\sqrt{\frac{5+\sqrt{5}}{2}}=\sqrt{a}+\sqrt{b}.$ Thus, $$\frac{5+\sqrt5}{2}=a+b+2\sqrt{ab},$$ which gives $$a+b=\frac{5}{2}$$ and $$ab=\frac{5}{16},$$ which says that $a$ and $b$ are roots of the equation $$x^2-\frac{5}{2}x+\frac{5}{16}=0,$$ which is a contradiction because it is easy to see that this equation has no rational roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the Change of Basis map unique? I've been looking all over, but I haven't found anything satisfactory. We've been shown in class by a commutative diagram that, given an $n$-dimensional vector space $V$ over a field, $\mathbb{F}$, and bases, $\mathcal{B}=\{v_1,...,v_n\}$ and $\mathcal{C}=\{u_1,...,u_n\}$, the coordinate maps $[]_{\mathcal{B}}:V\rightarrow \mathbb{F}^n$ and $[]_{\mathcal{C}}:V\rightarrow \mathbb{F}^n$ give rise to a unique map $P=[]_{\mathcal{B}}\circ []^{-1}_{\mathcal{C}}:\mathbb{F}^n\rightarrow \mathbb{F}^n$, which is our change of basis matrix. But I am having a lot of trouble proving that $P$ is unique. Can anyone enlighten me as to why this is necessarily true?
Remember that any linear map on any linear space $\;V\;$ is uniquely and completely determined once we know its action on any basis of $\;V\;$ ...and that's all. If you want to do this proof, suppose there's another map $\;Q:V\to V\;$ s.t. it coincides on "the old basis" $\;\mathcal B\;$ with $\;P:\;\; Qv_i=Pv_i\;\;\forall\,i=1,2,...,n\;$ , then (using linearity of the maps), for any $$v=\sum_{k=1}^n a_iv_i\in V\;,\;\;Qv=\sum_{k=1}^na_iQv_i=\sum_{k=1}^n a_iPv_i=Pv$$ so $\;Q\equiv P\;$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Problem solving a 4 by 4 linear system with infinite solutions I have a question here. The following is a 4 by 4 system of linear equations which has infinitely many solutions. $$ \left\{ \begin{array}{c} x+y+3z+t=0 \\ x-y-z-t=0 \\ 3x+y+5z+3t=0 \\ x+5y+11z+8t=0 \end{array} \right. $$ An online calculator gave me these solutions: $t = 0, x = - r_1, y = -2 r_1, z = r_1$, where $r_1$ is a parameter. However, how did they solve this? It's kinda complicated for me, because what I'd do is I'd isolate "$x$" from the second equation $(x=y+z+t)$, then I would plug that into the first equation and so on... Can someone help? Thank you and sorry for taking your time !
Substitute $x=y+z+t$ in the first equation and you have $2y+4z+2t=0 \\ \Rightarrow y+2z=-t$ Substitute $x=y+z+t$ in the third equation and you have $4y+8z+6t=0$ Then substitute $y+2z=-t$ and you get $t=0$. So $y=-2z$ and $x=y+z$. Substitute $x=y+z$, $y=-2z$ and $t=0$ in the fourth equation and you have $0=0$ So the fourth equation tells us nothing new (this is because the fourth equation is a linear combination of the other three equations). So we have $y=-2z$ and $x=y+z=-z$.
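The resulting one-parameter family $(x,y,z,t) = (-r,\,-2r,\,r,\,0)$ can be plugged back into all four equations:

```python
eqs = [
    lambda x, y, z, t: x + y + 3*z + t,
    lambda x, y, z, t: x - y - z - t,
    lambda x, y, z, t: 3*x + y + 5*z + 3*t,
    lambda x, y, z, t: x + 5*y + 11*z + 8*t,
]
for r in (-3, 0, 1, 7):
    x, y, z, t = -r, -2*r, r, 0
    assert all(eq(x, y, z, t) == 0 for eq in eqs)
print("(-r, -2r, r, 0) solves all four equations")
```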
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I solve this equation with 3 constants? (Setting one constant to an arbitrary case value?) $$m+re^{rx}=3x+2(mx+b+e^{rx}) +1 $$ Solution says: if (r != 0) then m=2b+3; r=0; 0=3+2m ...and then solves for m, r, and b. if (r == 0) then m=2b+1; r=0; 0=3+2m ...and then solves for m, r, and b. I'm not sure how to follow this. If r can be anything, how did they force others to become a fixed expression? Are they matching terms and positions ? Also, what is this method called ? How am I supposed to know to force r to be different cases?
You have two questions here:

* What's it called when you split into multiple cases like this?
* How am I supposed to know to do this?

For the first, I've heard it called "case analysis". For the second: Suppose I tell you that the difference of my son's and my daughter's ages is twice my son's age, and my daughter is $12$ (and they're not twins) and ask how old my son is. You might say to yourself "the difference in ages is $s-d$ (where $s$ is the son's age, $d$ the daughter's age). Or maybe it's $d - s$...it depends on which kid is older. Hmmm." There are two possibilities: my son is younger than my daughter, or older (they're not twins). So you split your work into two cases. If my son is older, the age difference is $s-d$; otherwise it's $d - s$. And then you go on and do some algebra. In the same way, you can look at $$m+re^{rx}=3x+2(mx+b+e^{rx}) +1 $$ and perhaps do a little algebra to turn it into $$ re^{rx}- 2e^{rx}=3x+2mx+2b+ 1 -m $$ and say to yourself "Those $e^{rx}$ terms on the left are a pain. If $r$ was zero, they'd be nice and simple. Hmmm. Let's split into cases. Case 1: $r$ is zero. That'll be easy. Case 2: $r$ is not zero. That'll be harder, but at least it'll delay things a bit." So for case 1, you say: "Suppose $r = 0$. Then the equation becomes \begin{align} -2 &= 3x+2mx+2b+ 1 -m \\ 0 &= 3x+2mx+2b+ 1 -m + 2 \\ 0 &= (3+2m)x+(2b -m + 3) \end{align} This is an equality of polynomials: $0x + 0$ on the left, and $(3+2m)x + (2b-m+3)$ on the right. The coefficients must match up, so I have $3 + 2m = 0$ and $(2b-m + 3) = 0$," and that gets you part 1 of the solution. Then for case 2 you say, "OK. Now let's suppose that $r$ is not zero. Then we have $$ (r-2) e^{rx} = 3x+2mx+2b+ 1 -m = (3 + 2m) x + (2b + 1 - m) $$ where on the left side we have a multiple of an exponential function, and on the right side we have a polynomial. The only way these can be equal is if the coefficient of the exponential is zero, i.e., if $r = 2$." So now we know that $$r = 2$$ and then, using the same matching-of-coefficients trick, we see that $3 + 2m = 0$ and $(2b + 1 -m) = 0$. And we've matched the second part of the solution.
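Solving the two coefficient systems gives $m=-\frac32,\ b=-\frac94$ in the $r=0$ case and $m=-\frac32,\ b=-\frac54,\ r=2$ in the other (values derived from the matching equations, not stated in the original); a quick numeric sketch plugs both parameter sets back into the original identity.

```python
import math

def gap(m, b, r, x):
    """LHS minus RHS of m + r*e^{rx} = 3x + 2(mx + b + e^{rx}) + 1."""
    lhs = m + r * math.exp(r * x)
    rhs = 3 * x + 2 * (m * x + b + math.exp(r * x)) + 1
    return lhs - rhs

for (m, b, r) in [(-1.5, -2.25, 0.0),    # case r = 0:  3+2m = 0, 2b-m+3 = 0
                  (-1.5, -1.25, 2.0)]:   # case r != 0: r = 2, 3+2m = 0, 2b+1-m = 0
    for x in (-1.0, 0.0, 0.5, 2.0):
        assert abs(gap(m, b, r, x)) < 1e-9
print("both parameter sets satisfy the identity at all sample points")
```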
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Chinese remainder theorem, can't figure it out! x mod 5 = 3, x mod 7 = 5, x mod 11 = 7. How to determine x? I've been searching on YouTube, but they're giving examples in different ways, for example x ≡ 1 (mod 3). I don't understand it: is it the same as x mod 3 = 1? Thanks, I'm still new and quite slow at math, I'm trying to understand it still
The Chinese Remainder Theorem says that, when the moduli $m_1,\dots,m_n$ are pairwise coprime, there exists exactly one solution $\bmod\ M= m_1 \cdots m_n$ of the system $$\cases{x \equiv b_1 \mod m_1 \\ x \equiv b_2 \mod m_2 \\ \dots \\ x \equiv b_n \mod m_n}$$ and the solution is of the form $$x=b_1M_1M_1'+ \dots + b_nM_nM_n'$$ where $$ M_i= \frac{m_1 \cdots m_n}{m_i}$$ and $M_i'$ is the inverse of $M_i$ modulo $m_i$, so $$M_i'M_i \equiv 1 \mod m_i.$$ (And yes, $x \equiv 1 \pmod 3$ means exactly the same thing as $x \bmod 3 = 1$.)
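The construction above, applied to the asked system (a sketch; `pow(a, -1, m)` for the modular inverse requires Python 3.8+):

```python
def crt(residues, moduli):
    """Solve x = b_i (mod m_i) for pairwise coprime moduli via b_i * M_i * M_i'."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for b, m in zip(residues, moduli):
        Mi = M // m
        x += b * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is M_i', Python 3.8+
    return x % M, M

x, M = crt([3, 5, 7], [5, 7, 11])
print(x, M)          # 348 385: the unique solution is x = 348 (mod 385)
```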
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Use the Fundamental Theorem of Calculus to find the derivative of $h(x) = \int_{1}^{e^x} \ln (t) \,dt$ The fundamental theorem of calculus states: If $f$ is continuous on $[a,b]$, then if $g(x) = \int_{a}^{x}f(t)\,dt,~ \textrm{then}~g'(x) = f(x)$. In Example 4, the chain rule is used because the upper bound, $x^4$ needed to be differentiated. My question is, why is that and where is that implied given the theorem stated above? Similarly, find $h'(x)$ of $h(x) = \int_{1}^{e^x} \ln(t)\,dt$ The answer is $h'(x) = xe^x$, which seems to be in line with Example 4. Per previous examples in the book, one might expect the answer to be $h'(x) = \ln(e^x)$ though.
See the proofs of the Leibniz integral rule. In short: write $g(u)=\int_1^u \ln t\,dt$, so that $h(x)=g(e^x)$. The Fundamental Theorem gives $g'(u)=\ln u$, and the chain rule then gives $$h'(x)=g'(e^x)\cdot (e^x)' = \ln(e^x)\cdot e^x = x e^x.$$ The factor $\ln(e^x)$ you expected is indeed there; it just gets multiplied by the derivative of the upper limit and simplifies to $x$.
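A numeric sanity check of $h'(x)=xe^x$ (my addition): approximate $h$ by a midpoint-rule integral and differentiate it with a central difference.

```python
import math

def h(x, N=200_000):
    """Midpoint-rule approximation of the integral of ln(t) from 1 to e^x."""
    lo, hi = 1.0, math.exp(x)
    w = (hi - lo) / N
    return sum(math.log(lo + (i + 0.5) * w) for i in range(N)) * w

x, eps = 1.3, 1e-4
numeric = (h(x + eps) - h(x - eps)) / (2 * eps)   # central difference
print(numeric, x * math.exp(x))                   # the two agree closely
```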
{ "language": "en", "url": "https://math.stackexchange.com/questions/3103944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Find all polynomials $f (x)$ such that $f (x^2+x+1)$ divides $f (x^3-1)$ I had come across a question in which involved finding polynomials (with real coefficients) satisfying the division criteria stated above. By inspection, it was easy to see that polynomials like $x$, $x^2$, $x^3$, etc. satisfied. So I went on to try a more general polynomial $f (x)=ax^n $ and it worked. I thought that if I'm able to prove that $f (x)$ can't have a non-zero root then it would suffice. Though I have found a solution that uses the 'assumption-contradiction' method (assuming that $f (x)$ has a non-zero root and showing a contradiction), but I was wondering that is there a technique that would actually allow us to solve the question (or questions of the same type) without guessing the answer first?
We will prove that all solutions are of the form $ax^n$, where $n\in \mathbb{N}_0$ and $a\in \mathbb{R}$. Suppose there exists $a_1\ne 0$ such that $f(a_1)=0$. Then there exists $x_1\in \mathbb{C}$ such that $x_1^2+x_1+1=a_1$ and $|x_1-1|>1$. Such an $x_1$ exists since the equation $x^2+x+1-a_1=0$ has two solutions $x_1,x_2$ for which $x_1+x_2=-1$. If $|x_i-1|\leq 1$ for each $i$, then the triangle inequality gives $$2\geq |x_1-1|+|x_2-1|\geq |x_1+x_2-2| = 3,$$ a contradiction. But then we have $$f(x_1^3-1) = k(x_1)f(a_1)=0,$$ so $$a_2= x_1^3-1$$ is another root of $f$, and we can proceed like this to get $a_3, a_4,\dots$. Since $$|a_2| = |x_1-1||a_1|>|a_1|,$$ no two members of this sequence are equal. So $f$ has an infinite number of roots, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Find the limit of $\frac {x^2+x} {x^2-x-2}$ where $x \to -1$? I need to find the limit of $\frac {x^2+x} {x^2-x-2}$ where $x \to -1$. Right now I am getting $\frac{0}{0}$ if I don't factor first, or $\frac{2}{0}$ if I do. Here are my factoring steps: $\frac {x^2+x} {x^2-x-2}$ $=\frac{x(x+1)}{(x-2)(x+1)}$ replace $x$ with $-1$ $=\frac{-1(-1+1)}{(-1-2)(-1+1)}$ $=\frac{2}{-3 (0)}$ $=\frac{2}{0}$ How can I solve this problem?
$$\frac{x^2+x}{x^2-x-2}=\frac{x(x+1)}{(x-2)(x+1)}=\frac x{x-2}\xrightarrow[x\to-1]{}\frac{-1}{-3}=\frac13$$ The above is justified by the fact that taking the limit when $\;x\to-1\;$ means $\;x\;$ gets closer and closer to $\;-1\;$ but never equals it in this limit process. To calculate a limit as $\;x\to a\;$ is the same as substituting $\;x=a\;$ in the function iff the function is continuous at $\;a\;$ , otherwise it may fail...as in this case, where the function's not even defined at $\;x=-1\;$
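A quick symbolic check of both the limit and the cancellation, assuming `sympy` is available:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 + x) / (x**2 - x - 2)

print(sp.limit(f, x, -1))  # -> 1/3, even though f is undefined at x = -1
print(sp.cancel(f))        # -> x/(x - 2), the form after cancelling (x + 1)
```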
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Probability of getting at least one job offer based on interview odds I have recently interviewed for a number of jobs, and am wondering what the odds are of getting accepted for one based on the odds of each interview and the total number of interviews. * *I had $7$ interviews which had a $1$ in $10$ chance of securing a job ($10$ interviewees for every position). *I had another interview with a $6\%$ chance, another with a $5\%$ chance, and another with a $14\%$ chance. So in total, $10$ places with odds of $10\%$, $10\%$, $10\%$, $10\%$, $10\%$, $10\%$, $10\%$, $14\%$, $6\%$, $5\%$. I know that I cannot simply add the numbers, but I am also stumped in that I am trying to find the odds of not one job offer, but a minimum of one offer....so the odds of either one positive return, two positive returns, etc. vs the odds of all $10$ interviews coming up negative. Anyone know how to calculate this problem?
This is a problem in which the complementary approach will be the most fruitful - let's instead consider how likely you are to not get a job. We know that, for an event $A$, then $$P(A) = 1 - P(\text{not} \; A)$$ That is to say, more relevant to your case, $$P(\text{getting at least one job offer}) = 1 - P(\text{getting no job offers})$$ Since the odds of getting a job doesn't affect that for any other job, we know $$\begin{align} P(\text{getting no offers}) &= (1 - P(\text{getting job #1})) \\ &\times (1 - P(\text{getting job #2})) \\ &\times (1 - P(\text{getting job #3})) \\ &... \\ &\times (1 - P(\text{getting job #10})) \end{align}$$ With these two facts in mind you should find it easy to complete.
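Carrying that computation out for the probabilities given in the question (a small Python sketch):

```python
# One entry per interview: seven at 10%, then 14%, 6%, 5%
probs = [0.10] * 7 + [0.14, 0.06, 0.05]

p_no_offer = 1.0
for p in probs:
    p_no_offer *= 1 - p          # independent rejections multiply

p_at_least_one = 1 - p_no_offer
print(round(p_at_least_one, 4))  # -> 0.6327
```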
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
What's wrong with the following proof of the theorem? I am reading How to prove it. In Section 3.2, it asked us what's wrong with the following proof of the theorem? Suppose $x$ and $y$ are real numbers and $x+y = 10$. Then $x \neq 3$ and $y \neq 8$. Proof: Suppose the conclusion of the theorem is false. Then $x=3$ and $y = 8$. But then $x+y = 11$, which contradicts the given information that $x+y = 10$. Therefore the conclusion must be true. I can feel that the mistake is in $x=3$ and $y=8$. Should it be $x=3$ or $y=8$ since it negates $x=3$ and $y=8$? But I intuitively think the theorem is correct because $x$ cannot be 3 and $y$ cannot be 8 at the same time if $x + y = 10$ is true. Can anyone explain?
The theorem is false. 3 + 7 = 10, so it can in fact be the case that "x + y = 10" is true while "x is not 3 AND y is not 8" is false. So "x + y = 10" does not imply "x is not 3 AND y is not 8". "x + y = 10" does imply "x is not 3 OR y is not 8". And this is what the proof in the text is a proof for: the author assumes that "x + y = 10", then assumes that "x is 3 AND y is 8" (which is the negation of "x is not 3 OR y is not 8") and (correctly!) derives a contradiction. So what's wrong with the proof isn't really wrong with the proof as such; it's just not a proof for that theorem. (Nothing is a proof for that theorem, of course, since it is, as noted, false.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to compute $I(n)=\int_{0}^1x_1^a\int_{x_1}^1x_2^a\cdots\int_{x_{n-1}}^{1}x_n^adx_n\cdots dx_2dx_1$? Let $a$ be a positive integer. For $n\ge 1$, how to compute $$I(n):=\int_{0}^1x_1^a\int_{x_1}^1x_2^a\cdots\int_{x_{n-1}}^{1}x_n^adx_n\cdots dx_2dx_1?$$ I tried first two terms, and found $I(1)=\frac{1}{a+1}$, $I(2)=\frac{1}{2(a+1)^2}$. But I don't see how to compute $I(n)$ by $I(n-1)$ inductively.
Method 1. We have $$ I(n) = \int_{0 \leq x_1 \leq \cdots \leq x_n \leq 1} x_1^a \cdots x_n^a \, \mathrm{d}x_1\cdots\mathrm{d}x_n. $$ Now by symmetry, this is simply $$ I(n) = \frac{1}{n!} \left( \int_{0}^{1} x^a \, \mathrm{d}x \right)^n = \frac{1}{n!(a+1)^n}. $$ * *Addendum. More specifically, for each permutation $\sigma$ on $[n]=\{1,\cdots,n\}$, define $$\Delta(\sigma) = \{ (x_1, \cdots, x_n) \in \mathbb{R}^n : 0 \leq x_{\sigma(1)} \leq \cdots \leq x_{\sigma(n)} \leq 1\}$$ If $S_n$ denotes the set of all permutations on $[n]$, the sets $\Delta(\sigma)$ are non-overlapping for different $\sigma$'s in $S_n$ and $\bigcup_{\sigma \in S_n} \Delta(\sigma) = [0, 1]^n$. Moreover, by symmetry, we may as well write $$ \forall \sigma \in S_n, \quad I(n) = \int_{\Delta(\sigma)} x_1^a \cdots x_n^a \, \mathrm{d}x_1\cdots\mathrm{d}x_n. $$ (This follows from the substitution $(x_1, \cdots, x_n) \mapsto (x_{\sigma(1)}, \cdots, x_{\sigma(n)})$.) Therefore $$ n!I(n) = \sum_{\sigma \in S_n} I(n) = \int_{\bigcup \Delta(\sigma)} x_1^a \cdots x_n^a \, \mathrm{d}x_1\cdots\mathrm{d}x_n = \int_{[0,1]^n} x_1^a \cdots x_n^a \, \mathrm{d}x_1\cdots\mathrm{d}x_n. $$ Now the latter integral factors out into product of $\int_{0}^{1} x_i^a \, \mathrm{d}x_i$'s, proving the desired claim. Method 2. Alternatively, define $$ y(t) = 1 + \sum_{n=1}^{\infty} I(n) t^{n(a+1)} = 1 + \sum_{n=1}^{\infty} \int_{0<x_1<\cdots<x_n<t} x_1^a \cdots x_n^a \, \mathrm{d}x_1 \cdots \mathrm{d}x_n. $$ This solves the following integral equation $$ y(t) = 1 + \int_{0}^{t} s^a y(s) \, \mathrm{d}s. $$ This is equivalent to $y(0) = 0$ and $y'(t) = t^a y(t)$, and so, $y(t) = e^{t^{a+1}/(a+1)}$. From this, we easily read out the value of $I(n)$.
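The closed form is easy to sanity-check with `sympy` by evaluating the nested integral directly for a small case (here $n=3$ and $a=2$, chosen arbitrarily):

```python
import sympy as sp
from math import factorial

x1, x2, x3 = sp.symbols('x1 x2 x3')
a, n = 2, 3

# The nested integral I(3) computed exactly, innermost first
inner = sp.integrate(x3**a, (x3, x2, 1))
mid = sp.integrate(x2**a * inner, (x2, x1, 1))
I3 = sp.integrate(x1**a * mid, (x1, 0, 1))

print(I3)                                         # -> 1/162
print(sp.Rational(1, factorial(n) * (a + 1)**n))  # -> 1/162, the closed form
```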
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the particular solution for Augmented Thin Plate Splines in the context of the Dual Reciprocity Boundary Element Method In the dual reciprocity boundary element method (DRBEM) the non-homogeneous terms are expanded in terms of radial basis functions. This expansion involves approximating the solution to the linear operator in terms of the radial basis function being used. For example with a Laplacian/Poisson operator and augmented thin plate splines (ATPS): The radial basis function approximation of a function $g(\mathbf{x})$ looks like $$ g(\mathbf{x_i})=\sum_{j=1}^{N}\alpha_j\phi(\|\mathbf{x_i}-\mathbf{x_j}\|_2)+\sum_{j=1}^{M}\beta_jp_j(\mathbf{x_j})$$ $$ \mathbf{F}=\left[ \begin{matrix} \phi(\|\mathbf{x_1}-\mathbf{x_1}\|_2) & \cdots & \phi(\|\mathbf{x_1}-\mathbf{x_N}\|_2) \\ \vdots & \ddots & \vdots \\ \phi(\|\mathbf{x_N}-\mathbf{x_1}\|_2) & \cdots & \phi(\|\mathbf{x_N}-\mathbf{x_N}\|_2) \\ \end{matrix}\right] $$ $$ \mathbf{\alpha}=\left[\begin{matrix} \mathbf{\alpha_1} \\ \vdots \\ \mathbf{\alpha_N} \\ \end{matrix}\right]$$ For ATPS (Augmented Thin Plate Splines) the matrix of radial basis functions becomes: $$ \mathbf{x_i}=(x_i,y_i) $$ $$ r=\sqrt{ (x_i-x_j)^2 + (y_i-y_j)^2 }=\|\mathbf{x_i}-\mathbf{x_j}\|_2 $$ $$f(\mathbf{x_i})=\sum_{j=1}^{N}\alpha_j r^2 \log(r) + \beta_1+\beta_2x_j+\beta_3y_j $$ $$\sum_{j=1}^N \alpha_j=\sum_{j=1}^N \alpha_j x_j=\sum_{j=1}^N \alpha_j y_j=0 $$ $$ \mathbf{P}= \left[ \begin{matrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \\ y_1 & y_2 & \cdots & y_N \\ \end{matrix} \right] $$ $$ \mathbf{F^*}= \left[ \begin{matrix} \mathbf{F} && \mathbf{P^T} \\ \mathbf{P} && \mathbf{0} \\ \end{matrix} \right] $$ $$ \left[ \begin{matrix} \mathbf{u} \\ \mathbf{0} \\ \end{matrix} \right]= \left[ \begin{matrix} \mathbf{F} && \mathbf{P^T} \\ \mathbf{P} && \mathbf{0} \\ \end{matrix} \right]\left[ \begin{matrix} \mathbf{\alpha} \\ \mathbf{\beta} \\ \end{matrix} \right]=\mathbf{F^*\alpha^*} $$ In the following reference, Golberg
suggests that the particular solution to this radial basis function approximation is as below: Golberg, Michael A. "The method of fundamental solutions for Poisson's equation." Engineering Analysis with Boundary Elements 16, no. 3 (1995): 205-213. $$ \hat{u}= \Sigma_{j=1}^{N}\alpha_j\Phi_j+\Sigma_{j=1}^{N}\beta_j\Psi_j$$ where $ \nabla^2\Phi_j=\Sigma_{j=1}^{N}\alpha_j r^2 log(r)$ and $\nabla^2\Psi_j= \beta_1+\beta_2x_j+\beta_3y_j $ Golberg finds (incorrectly) that $$\Phi=\frac{r^4 log(r)}{16} - \frac{r^3}{32}$$ When the actual result is $$\Phi=\frac{r^4 log(r)}{16} - \frac{r^4}{32}$$ The error is probably a simple typo. Golberg goes on to say that $$\Psi= \frac{x^2+y^2}{4} + \frac{x^3}{6} + \frac{y^3}{6}$$ How does he obtain this result? Why would this be the desired particular solution instead of something that is symmetric like with the boundary element solution kernel and the thin plate spline particular solution? The other references I can find for ATPS in the literature related to DRBEM either suggest that obtaining the particular solutions is trivial or that they can be found in the literature. Some suggest that the polynomial particular solutions are obtained by the method of unknown coefficients.
Your particular solution isn't correct. Once you add in angular dependency, the Laplacian becomes $$ \nabla^2 \Psi = \frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\frac{\partial \Psi}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2 \Psi}{\partial \theta^2} $$ So $\theta$ can't be treated as a constant Instead, you look for a solution of the form $$ \Psi = \beta_1 f(\rho) + \beta_2 g(x) + \beta_3 h(y) $$ where each term satisfies \begin{align} \nabla^2 f &= \frac{1}{\rho}\frac{d}{d \rho}\left(\rho\frac{df}{d\rho}\right) = 1 \\ \nabla^2 g &= \frac{d^2 g}{dx^2} = x \\ \nabla^2 h &= \frac{d^2 h}{dy^2} = y \end{align} Solving each equation individually gives $f = \dfrac{\rho^2}{4} = \dfrac{x^2+y^2}{4}$, $g = \dfrac{x^3}{6}$ and $h = \dfrac{y^3}{6}$ Note that $\Psi$ can't be radially symmetric, because $\nabla^2\Psi$ itself isn't radially symmetric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
integral of $x - x_0$ How to evaluate: $$\int (x-x_0)dx$$ One way is direct: $$\int x \,dx - \int x_0 \,dx = \frac{x^2}{2} - x x_0 +C$$ The other way is substituting $x - x_0 = \tau$, $dx=d\tau$, which gives $$\int \tau \,d\tau = \frac{\tau^2}{2} + C = \frac{(x - x_0)^2}{2} + C$$ Which one is the right one? If both are correct, when should one be chosen over the other? Thanks, Bhupala
Both are correct and equivalent. $$\int x dx - \int x_0 dx = \frac{x^2}{2} - x x_0 +C_1$$ $$\int \tau d\tau = \frac{\tau^2}{2} = \frac{(x - x_0)^2}{2} + C_2=\frac{x^2}{2} - x x_0+\frac{x_0^2}{2}+C_2$$ Now you can see that $\frac{x_0^2}{2} + C_2$ is just another constant and can be thought of the same as $C_1$, i.e. $C_1=\frac{x_0^2}{2} + C_2$
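A quick `sympy` check that the two answers differ only by the constant $x_0^2/2$:

```python
import sympy as sp

x, x0 = sp.symbols('x x0')
F1 = x**2 / 2 - x * x0        # direct method, with C = 0
F2 = (x - x0)**2 / 2          # substitution method, with C = 0

# Both differentiate back to the integrand x - x0:
assert sp.simplify(sp.diff(F1, x) - (x - x0)) == 0
assert sp.simplify(sp.diff(F2, x) - (x - x0)) == 0

print(sp.expand(F2 - F1))     # -> x0**2/2, a constant (no x in it)
```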
{ "language": "en", "url": "https://math.stackexchange.com/questions/3104940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the solution of $x^a+(1+x)^b=0 $? I am not a mathematician, but a theoretical physicist. I am faced with this equation coming from some plasma phenomenon and I am unable to 'recognise' it. Mathematica software cannot solve it. I have searched the web and it might be related to the Lambert function, but I still can't clearly see how. I apologise beforehand if this is too trivial a question or perhaps too difficult, but I would enormously appreciate any hint. Many thanks. Cheers. $($I am not even sure under what TAG to classify this question... :-( $)$ To make it clearer $($sorry for not having mentioned it$)$: $a$ and $b$ are real positive numbers and $x$ $($which is actually a function$)$ has real values.
Assuming (as per comments) that $a$ and $b$ are actually positive integers then we have four cases: 1) $a$ even, $b$ even - no real solution. 2) $a$ even, $b$ odd - one real solution in $(-\infty, -2)$ if $a<b$, otherwise no real solutions. 3) $a$ odd, $b$ even - one real solution in $(-1,0)$. 4) $a$ odd, $b$ odd - one real solution in $(-1,0)$ - special case if $a=b$ then solution is $x=-\frac12$. Where a real solution exists it shouldn't be difficult to find a good numerical approximation using Newton's method.
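For instance, case 3 can be handled numerically with a simple bracketing method. A rough sketch (using bisection rather than Newton's method, with $a=3$, $b=2$ as an arbitrary example):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Bisection on [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Case 3: a odd, b even -> exactly one real root, lying in (-1, 0)
a, b = 3, 2
f = lambda x: x**a + (1 + x)**b
root = bisect_root(f, -1.0, 0.0)
print(root)  # -> approximately -0.5698
```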
{ "language": "en", "url": "https://math.stackexchange.com/questions/3105037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Series power function over exponential function A typical exercise from calculus is to show that any exponential function eventually grows faster than any power function, i.e. $$ \lim_{k \to \infty} \frac{k^a}{b^k} = 0 \qquad \text{ for } a,b>1.$$ In fact, by the ratio test, we can show for $x=a=b$ the even stronger result that the series $$ \sum_{k=1}^\infty \frac{k^x}{x^k} $$ converges for any $x \in (1,\infty)$. This gave me the idea to consider the function $F\colon (1,\infty) \to \mathbb{R}$ defined by $$ F(x) = \sum_{k=1}^\infty \frac{k^x}{x^k} \qquad \text{for } x \in (1,\infty).$$ Now I am curious what properties I can find for this function, but my literature search so far didn't give really fitting results. Is there a name for this function? For integer arguments I could already use the relation $$ F(n) = \text{Li}_{-n}\left(\frac{1}{n}\right), \qquad n \in \mathbb{N}$$ with $\text{Li}$ the polylogarithm to find the representation $$ F(n) = \frac{n}{(n-1)^{n+1}}A_n(n), $$ where $A_n$ is the $n$-th Eulerian polynomial. Furthermore, $F$ seems to have a global minimum at around $$ x = 3.1200906359597\ldots \quad \text{with} \quad F(x)=4.1125402415512\ldots$$ that I found by bisection. The above results gave me hope that there is a closed formula for this minimum as well, e.g. something in terms of elementary functions, but I can't really figure it out. Any ideas?
I think this is a special case of the Lerch transcendent, defined as $$ \Phi(z,s,\alpha)=\sum_{n=0}^\infty\frac{z^n}{(n+\alpha)^s}. $$ Specifically, your proposed function $F$ is given by $$ F(x)=\Phi\big(\tfrac{1}{x},-x,0\big). $$ Plotting this function on WolframAlpha confirms that the global minimum you computed is (approximately) correct (unless you want to extend the domain and muck around with complex numbers). These slides also discuss the so-called fractional polylogarithm $$ \zeta(s,x)=\Phi(x,s,0)=\sum_{n=0}^\infty\frac{x^n}{n^s}. $$ (I think the slides have a typo—sums should start at $n=1$.) The slides indicate that this function has been studied since at least the late nineteenth century. In terms of the fractional polylogarithm, your function is given by $$ F(x)=\zeta\big(-x,\tfrac{1}{x}\big). $$ I don’t know much about these kinds of functions, but it seems like there’s plenty of literature that may answer some of your questions.
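Since the terms decay very fast for $x>1$, the function and the location of its minimum are easy to examine numerically with a plain partial sum (a sketch; 400 terms is far more than enough near $x\approx 3$):

```python
def F(x, terms=400):
    """Partial sum of F(x) = sum_{k>=1} k**x / x**k for x > 1;
    the tail is negligible once x**k dominates k**x."""
    return sum(k**x / x**k for k in range(1, terms + 1))

x_min = 3.1200906359597  # minimizer reported in the question
print(F(x_min))          # -> approximately 4.11254
print(F(2.9) > F(x_min), F(3.4) > F(x_min))  # -> True True
```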
{ "language": "en", "url": "https://math.stackexchange.com/questions/3105125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Prove that a primitive $q$-th root of unity is in the algebraic closure of $\Bbb F_p$ Let $p$ and $q$ be odd primes. Let $\Omega$ be the algebraic closure of $\Bbb F_p$. Let $\omega$ be a primitive $q$-th root of unity. Show that $\omega \in \Omega$. How do I show that? Please help me in this regard. Thank you very much.
By definition $\omega$ is a root of $X^q-1\in\Bbb{F}_p[X]$, and by definition every polynomial in $\Bbb{F}_p[X]$ splits into linear factors in $\Omega[X]$. Hence $X-\omega\in\Omega[X]$ and so $\omega\in\Omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3105275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A question about integer BMO 1984 Question 4 My question is derived from BMO 1984 question 4. Given an integer $n$, how many $r$ (with $0 < r < 1$) make $2nr$ an integer? I tried some values of $n$ and from 1 to 9 there are 1,2,3,3,3,5,3,4,5 values of $r$ that meet the requirement. I find it hard to generalise this to $n$. (Maybe I have done something wrong?) Could anyone help me please? Thanks.
First observe that the condition $0<r<1$ implies $0<2nr<2n$. Next observe that for any integer $m, 0 < m< 2n$, there is precisely one $r\in(0,1)$ with $m=2nr$ (namely, $r=m/2n$). Therefore the question is equivalent to the following question: for an integer $n$, how many integers $m$ satisfy $0 < m < 2n$. Clearly this is $2n-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3105378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Series of complex numbers over an uncountable set Question: Let I be an arbitrary index set and $(a_i)_{i\in I} \subset \Bbb C$ such that $\sum_{i \in I}a_i$ converges. Show that there exists a countable set $I_0 \subset I$ such that $a_i=0$ for all $i \in I\setminus I_0$. I think I am able to prove this if all the real and imaginary parts of $a_i$ are positive but for any sequence $a_i$ I'm struggling. I feel like I want to use the Cauchy criterion, namely that since we have convergence, we also have that for all $\epsilon>0$ there exists a set $J_{\epsilon}$ such that $$\bigg|\sum_{i\in J\setminus J_{\epsilon}}a_i\bigg|<\epsilon$$ for all $J\supset J_{\epsilon}$ where $J\subset I$ is finite. We could then choose $\epsilon=\frac{1}{n}$ to generate a family of $J_n$ and possibly take the union over $n$ to create the countable set I'm after. But beyond that I'm stuck. Any help is much appreciated.
Note that positivity isn't really the assumption you need for the easier argument you mention - rather, you just want all the real parts to have the same sign and all the imaginary parts to have the same sign. As long as this happens, you're still happy since all the nonzero terms still "point away from zero in the same way." Now think about how the four quadrants partition the complex plane. If I have an uncountable set of nonzero points $\{a_i: i\in I\}$, then there must be an uncountable $J\subseteq I$ such that one of the following happens: * *If $a+bi\in J$ then $a\ge 0$ and $b\ge 0$. *If $a+bi\in J$ then $a\ge 0$ and $b\le 0$. *If $a+bi\in J$ then $a\le 0$ and $b\ge 0$. *If $a+bi\in J$ then $a\le 0$ and $b\le 0$. (There could be more than one such $J$, and there could be $J$s exhibiting different behaviors; that's fine.) You should be able to show that $\sum_{i\in J}a_i$ diverges. Now, can you show that $\sum_{i\in I}a_i$ diverges since $J\subseteq I$ (and get from there to the theorem you want)?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3105535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to tell if a matrix of a certain rank contains a certain kernel and image? I'm struggling with the following question True or False: There exists a 3x4 matrix A of rank 2 such that ker(A) contains the vector v = $$ \begin{bmatrix} 1\\ 1\\ 1\\ 1\\ \end{bmatrix} $$ and image(A) contains the vector w = $$ \begin{bmatrix} 1\\ 1\\ 1\\ \end{bmatrix} $$ I'm not sure how to go about this without coming up with matrices that would possibly fit these conditions. How else can you go about this question?
How about $\begin{pmatrix}1&-1&0&0\\1&-1&0&0\\1&0&-1&0\end{pmatrix}$? Not sure how else to go about it other than by providing an example.
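The proposed matrix can be checked mechanically, e.g. with `numpy`:

```python
import numpy as np

A = np.array([[1, -1,  0, 0],
              [1, -1,  0, 0],
              [1,  0, -1, 0]])

v = np.array([1, 1, 1, 1])
w = np.array([1, 1, 1])

print(np.linalg.matrix_rank(A))    # -> 2
print(A @ v)                       # -> [0 0 0], so v is in ker(A)
print(np.array_equal(A[:, 0], w))  # -> True: w is literally the first column
```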
{ "language": "en", "url": "https://math.stackexchange.com/questions/3105669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that the following is an inner product Currently, I've started to take Analysis 2 in school and we are doing Euclidean Spaces. There is an example that I'm trying to prove but I can't wrap my mind around. $x, y \in \mathbb R^d$ $\langle x,y \rangle:= \sum^d_{j=1}(x_j-y_j)$ So, there are 4 rules for this to be an inner product: non-negativity, definiteness, symmetry, linearity. But I think this isn't non-negative; am I wrong? The $y$ vector can always be bigger than $x$ and that can make the sum negative? Or am I missing something?
There is a very easy way to see that this is not an inner product. Note that it often happens that inner products are negative for given choices of $x$ and $y$. However, what happens when you compute $\langle x,x\rangle$ for any $x$? Is this a positive number for all $x\neq 0$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Will a path between $(x, y)$ and $(-x, -y)$ always intersect a 90 degree rotated copy? Suppose we have a path between two points $(x, y)$ and $(-x, -y)$. If we rotate it by 90 degrees around the origin, will the copy intersect the original? (You can add any number of assumptions to avoid pathological cases.) It seems obvious that it does (if you play with a construction you can see how changing the curve in one place to avoid an intersection with the copy makes it intersect in another place) but I cannot come up with a way to show it. My guess is there is a simple observation I am missing. Background: This is a technical step in a demonstration that polyominoes that cover two opposite corners of their hull can only tile a rectangle in a certain way. This step comes in to show that if you rotate a polyomino 90 degrees and it does not overlap the original, it also does not overlap the original when you rotate it 180 degrees.) Edit: After getting a feel for it from the posted answers, I realize it's much more technical than I thought and probably not appropriate for what I wanted to use it for. Edit: Actually, this answer given to a version of this question works well for my purposes. Also, see this more general question.
This proof assumes there is no "backtracking" or radial movement (i.e. every line through the origin intersects the curve in exactly one point, with the obvious exception of the line through $(x, y)$ and $(-x, -y)$, which intersects the curve twice). Also, the curve is continuous and goes counterclockwise around the origin from $(x, y)$ to $(-x, -y)$. Take the line through the origin and $(-y, x)$ ($90^\circ$ counterclockwise rotated copy of $(x, y)$). If the curve intersects the line at $(-y, x)$ we are done. If not, it intersects the line either outside or inside $(-y, x)$. Let's say inside (like your yellow, green and orange examples). Then the $90^\circ$ rotated curve goes on the inside of $(-x, -y)$. This means that if we take a continuous "sweep" of lines through the origin, starting with the one through $(-y, x)$ and ending with the one through $(-x, -y)$, then the original curve goes from being on the inside of the rotated curve to being on the outside. By the intermediate value theorem they must intersect.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Relation between surfaces of an infinitesimal tetrahedron Let $d\sigma_1,d\sigma_2, d\sigma_3$ denote the areas of the faces perpendicular to the axes $x_1,x_2,x_3$ and let $d\sigma_n$ denote the area of the inclined face with unit exterior normal n. My book says that this relation holds: $d\sigma_i = d\sigma_n \cos(\mathbf{n},x_i)= n_id\sigma_n \quad for (i=1,2,3)$ In the limit as the tetrahedron shrinks to the point M. I don't understand how it is derived.
You rightly tagged your question "geometry". And no limit or infinitesimals are needed, as long as the surface is plane. Consider $\sigma$ (finite, not infinitesimal) and $\sigma_1$. They are triangles sharing a base. Can you see what the ratio of their heights is?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Linear Dependence lemma - an unclear moment from the proof I am reading the linear dependence lemma, namely: If $(v_1,v_2,\dots,v_m)$ is linearly dependent and $v_1\neq 0$, there exists an index $j\in \{2,\dots,m\}$ such that: $v_j\in \text{span} (v_1,\dots,v_{j-1}).$ Proof: Since $(v_1,\dots,v_m)$ is linearly dependent there exist $a_1,\dots,a_m\in \mathbb{F}$ not all zero such that $a_1v_1+\dots+a_mv_m=0$. Since by assumption $v_1\neq 0$, not all of $a_2,\dots,a_m$ can be zero (why?). Let $j\in \{2,\dots,m\}$ be largest such that $a_j\neq0$. Then we have $$v_j=-\dfrac{a_1}{a_j}v_1-\dots-\dfrac{a_{j-1}}{a_j}v_{j-1}.$$ From here we get our desired result. Let me ask a question: if $a_2=\dots=a_m=0$ then we have $a_1v_1=0$ where $a_1\neq 0$ and $v_1\neq 0$. And where is the contradiction?
The contradiction is that for $a_2=\dots=a_m=0$ and $a_1\ne0,\ v_1\ne 0$, we get $$0=a_1v_1+\dots +a_mv_m=a_1v_1\ne 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find basis of fundamental subspaces with given eigenvalues and eigenvectors Let $\Lambda_1=0,\Lambda_2=1,\Lambda_3=2$ and $x_1=\begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix},x_2=\begin{bmatrix} 0 \\ 1 \\ 2 \\ \end{bmatrix},x_3=\begin{bmatrix} 0 \\ 1 \\ 1 \\ \end{bmatrix}$ be the eigenvalues and eigenvectors of a matrix $A \in \Bbb{M_{3x3}}$. Find bases of the fundamental subspaces, without calculating (finding) the matrix $A$. $$-$$ We know that eigenvectors corresponding to distinct eigenvalues are linearly independent, so the eigenvectors $x_1,x_2,x_3$ are linearly independent. Since $\Lambda_1$ is zero, the eigenvector $x_1$ is in the null space of the matrix $A \ (Ax_1=\Lambda_1 x_1)$, so we have $$\operatorname{Ker}(A)=\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix}\right\}$$ Because the column space of $A^{T}$ is orthogonal to the null space of the matrix $A$, we have $$\operatorname{Im}(A^{T})=\left\{\begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix},\begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}\right\}$$ Using the rank-nullity theorem, we know that $\dim [\operatorname{Im}(A)]=2$, so the other two eigenvectors form a basis of the column space of the matrix $A$: $$\operatorname{Im}(A)=\left\{\begin{bmatrix} 0 \\ 1 \\ 2 \\ \end{bmatrix},\begin{bmatrix} 0 \\ 1 \\ 1 \\ \end{bmatrix}\right\}$$ At the end, we know that the column space of $A$ is orthogonal to the null space of the matrix $A^{T}$, so I think that the only vector I can use in $\operatorname{Ker}(A^{T})$ is the vector $\begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix}.$ I'm not sure whether I can use the same vector for $\operatorname{Ker}(A^{T})$ and $\operatorname{Ker}(A)$. Is that actually possible?
Your reasoning is correct; by what we are given the null space of the matrix must be generated by the vector $x_1$, and it follows that the column space must be generated by the vectors $x_2$ and $x_3$. The conclusion that $\operatorname{Ker}(A) = \operatorname{Ker}(A^T)$ is no contradiction; for example, this also happens with the diagonal matrix with $(0, 1, 1)$ on the diagonal.
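These conclusions are easy to double-check numerically by rebuilding a matrix from the given eigendata (purely as a verification; the argument itself never needs $A$):

```python
import numpy as np

X = np.array([[1., 0., 0.],    # columns are x1, x2, x3
              [0., 1., 1.],
              [0., 2., 1.]])
A = X @ np.diag([0., 1., 2.]) @ np.linalg.inv(X)

x1 = np.array([1., 0., 0.])
print(np.allclose(A @ x1, 0))    # -> True: x1 spans Ker(A)
print(np.allclose(A.T @ x1, 0))  # -> True: x1 also spans Ker(A^T)
print(np.linalg.matrix_rank(A))  # -> 2, i.e. dim Im(A) = 2
```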
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Particle on a line question with integrals calc2 I have a question for calculus two regarding a particle on a line. Question: A particle moves along a line with acceleration $a(t)=\frac{-1}{(t+3)^2}~\text{ft}/\text{sec}^2$. Find the distance traveled by the particle during the time interval $[0,1]$, given that the initial velocity $v(0)$ is $13~\text{ft/sec}$. I think I'm supposed to use integrals, but I'm not quite sure how to go about it. If anyone could help me out I'd be grateful, thank you <3
The velocity over $[0,1]$ is $$ v(t) = v(0) + \int_{0}^{t} a(x) \ dx $$ The distance (not displacement) over this interval is $$ \int_{0}^{1} |v(t)| \ dt $$ So you can find the distance with two integrals. It looks like your $a(t)$ is negative over $[0,1]$, but the initial velocity is much higher than $\int a(t) \ dt$ on that interval.
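Carrying the two integrals out with `sympy`, reading the acceleration in the question as $a(t)=-1/(t+3)^2$ (an assumption on my part, since the exponent formatting in the question is ambiguous):

```python
import sympy as sp

t, s = sp.symbols('t s', nonnegative=True)
a = -1 / (s + 3)**2                  # assumed form of the acceleration

v = 13 + sp.integrate(a, (s, 0, t))  # v(t) = 38/3 + 1/(t+3)
print(sp.simplify(v))

# v(t) > 0 on [0, 1], so distance equals displacement here:
distance = sp.integrate(v, (t, 0, 1))
print(distance, float(distance))     # exact value 38/3 + log(4/3), about 12.954
```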
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Study the continuity of a function I have the function $f:\mathbb{R}^2\rightarrow\mathbb{R}\ f(x,y) = \left\{\begin{matrix} \sin\frac{x^3y}{x^4+y^4}, & (x,y) \in \mathbb{R}^2 \setminus\{(0,0)\}\\ 0, & (x,y) = (0,0). \end{matrix}\right.$ I need to study the continuity of the function $f$. I tried to calculate the limits as $(x,y) \rightarrow (0,0)$ of $f$. So $\lim_{(x,y)\rightarrow(0,0)}\sin\frac{x^3y}{x^4+y^4} = \sin(0) = 0$ So this would imply that $f$ is continuous at $(0,0)$, right?
HINT: Note that along $x=0$, $\sin\left(\frac{x^3y}{x^4+y^4}\right)=0$. Along $x=y$, $\sin\left(\frac{x^3y}{x^4+y^4}\right)=\sin(1/2)$. What can you can conclude?
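A numerical look at the two paths in the hint (a quick check, not a proof):

```python
import math

def f(x, y):
    return math.sin(x**3 * y / (x**4 + y**4))

for t in (0.1, 0.01, 0.001):
    # Along x = 0 the value is identically 0; along x = y it is sin(1/2)
    print(f(0.0, t), f(t, t))  # -> 0.0 and approximately 0.4794 every time
```

Two paths into the origin with different limiting values mean the limit at $(0,0)$ does not exist, so $f$ is not continuous there.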
{ "language": "en", "url": "https://math.stackexchange.com/questions/3106753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the area of $\triangle ABC$ where $\triangle ADC$ is cyclic, point $P$ is on the circumference and $AD = AP$? $\triangle ABC$ is a right angled triangle. The perpendicular drawn from $A$ on $BC$ intersects $BC$ at point $D$. A point $P$ is chosen on the circle drawn through the vertices of $\triangle ADC$ such that $CP \perp BC$ and $AP = AD$. A square is drawn on the side $BP$ and its area is $350$ unit$^2$. What is the area of $\triangle ABC$? My Attempt: Here, $ADCP$ is a square because $AD = AP$ and $\angle ADC = \angle DCP = 90^\circ$. So, writing $AD = x$ and $BD = y$: from the right angled $\triangle ADB$, $AB^2 = x^2 + y^2$. And now, from $\triangle ABC$, $x^2 + y^2 + (\sqrt{2}\,x)^2 = (x + y)^2$ ($AC$ is the diagonal of the square $ADCP$), so $x^2 + y^2 + 2x^2 = x^2 + y^2 + 2xy \implies x = y$. So, $BD = x$ and $AB = \sqrt{2x^2} \implies AB = \sqrt{2}\,x$. And now from $\triangle BCP$, $(2x)^2 + x^2 = (\sqrt{350})^2$, i.e. $5x^2 = 350 \implies x^2 = 70 \implies x = \sqrt{70}$. After that, the area of $\triangle ABC = \frac{1}{2}\times\sqrt{2}x\times\sqrt{2}x = \frac{1}{2}\times 2x^2 = x^2 = (\sqrt{70})^2 = 70$. Is my answer correct? If not, can anyone please provide me with another solution or method for a better learning process? Any kind of clue or hint will be much appreciated. Thanks in advance.
Since $AC$ is the diagonal of square $APCD$, $\angle{C}=45^{\circ}$. Then, we know $\triangle{ABC}$ is a 45-45-90 right triangle. Notice that $\triangle ABC$ is isosceles, so the foot of the altitude from $A$ satisfies $BD=DC=AD=x$. Then, you can use the Pythagorean Theorem in $\triangle BCP$ to find $x$. Then, it is easily seen that $[ABC]=\frac{(\sqrt{2\cdot 70})^2}{2}=70$.
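As a sanity check (my own coordinate sketch, not part of the original answer), one can place the configuration on axes: $D$ at the origin, $C=(x,0)$, $B=(-x,0)$, $A=(0,x)$, $P=(x,x)$ with $x=\sqrt{70}$, and verify both $BP^2 = 350$ and $[ABC] = 70$.

```python
import math

x = math.sqrt(70)                      # AD = BD = DC, as derived above
B, C = (-x, 0.0), (x, 0.0)             # D sits at the origin between B and C
A, P = (0.0, x), (x, x)                # ADCP is a square of side x

BP_sq = (P[0] - B[0]) ** 2 + (P[1] - B[1]) ** 2   # (2x)^2 + x^2 = 5x^2
area = 0.5 * (C[0] - B[0]) * x                    # (1/2) * BC * altitude AD
print(BP_sq, area)  # 350 and 70, up to floating-point rounding
```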
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sum of Sine and Cosine to Higher Powers Is Constant (Recursion, Proofs) I have observed this empirically, but I have no idea how to prove it or if it has been proven before. If this has been proven before, in any form, either more or less generic, please point me to such a proof. If not, can you prove it? Given: $$ \begin{align} & n\in \{1, 2, 3, \cdots\} \\ & c_n=\left( \frac{1}{2}\right)^{2^{n-1}-1}\\ & f_1(x)=\sin(x)\\ & f_n(x)=f_{n-1}^2(x)-c_n\\ & g_1(x)=\cos(x)\\ & g_n(x)=g_{n-1}^2\left(x-\frac{\pi}{2^n}\right)-c_n \end{align}$$ Prove that: $$f_n^2(x)+g_n^2(x)=\left( \frac{1}{2} \right)^{2^n-2}$$ Here are some graphs for reference: https://www.desmos.com/calculator/01pshoohrl
The trick here is that they're all versions of the Pythagorean identity. Your iteration $f_n(x)=f_{n-1}^2(x)-c_n$ and $g_n(x)=g_{n-1}^2(x-\frac{\pi}{2^n})-c_n$ is a double-angle formula. We have $f_2(x)=\sin^2 x-\frac12 = -\frac12\cos(2x)$, $g_2(x)=\cos^2(x-\frac{\pi}{4})-\frac12=\frac12\cos(2(x-\frac{\pi}{4}))=\frac12\cos(2x-\frac{\pi}{2})$ $=\frac12\sin(2x)$. By a similar process, $f_3(x)=\frac14\cos^2(2x)-\frac18=\frac18\cos(4x)$ and $g_3(x)=\frac14\sin^2(2x-2\frac{\pi}{8})-\frac18=-\frac18\cos(4x-\frac{\pi}{2})=-\frac18\sin(4x)$. It stabilizes from there, and $f_n(x)=2^{1-2^{n-1}}\cos\left(2^{n-1}x\right)$, $g_n(x)=-2^{1-2^{n-1}}\sin\left(2^{n-1}x\right)$ for all $n\ge 3$. To prove the result, then, we prove these formulas inductively. We do the special case calculations for $n=1$, $n=2$, and $n=3$, then do the inductive step to go from $n-1$ to $n$ for $n\ge 4$. In all of these cases, we're using the double-angle formulas $\sin^2 t-\frac12=-\frac12\cos(2t)$ and $\cos^2 t-\frac12=\frac12\cos(2t)$ and the translation formula $\cos(t-\frac{\pi}{2})=\sin t$. Once we have the formulas, the result about the sum of squares is immediate: $$f_n^2(x)+g_n^2(x)=\left(2^{1-2^{n-1}}\right)^2\left(\cos^2(2^{n-1}x)+\sin^2(2^{n-1}x)\right)=2^{2-2^n}$$ (and the special cases for small $n$)
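The closed forms and the constancy of $f_n^2+g_n^2$ can be confirmed numerically with a short script (my own sketch, not part of the original posts; it uses $c_n = (1/2)^{2^{n-1}-1}$, which gives the values $c_2 = 1/2$ and $c_3 = 1/8$ used in the computations above):

```python
import math

def c(n):
    # c_n = (1/2)^(2^(n-1) - 1): c_2 = 1/2, c_3 = 1/8, c_4 = 1/128, ...
    return 0.5 ** (2 ** (n - 1) - 1)

def f(n, x):
    if n == 1:
        return math.sin(x)
    return f(n - 1, x) ** 2 - c(n)

def g(n, x):
    if n == 1:
        return math.cos(x)
    return g(n - 1, x - math.pi / 2 ** n) ** 2 - c(n)

# f_n^2 + g_n^2 should equal (1/2)^(2^n - 2) for every x
for n in range(1, 6):
    target = 0.5 ** (2 ** n - 2)
    for x in [0.0, 0.3, 1.0, 2.7]:
        assert abs(f(n, x) ** 2 + g(n, x) ** 2 - target) < 1e-12
print("identity holds for n = 1..5")
```

The recursion contracts (each $f_{n-1}$ is squared, and the amplitudes shrink doubly exponentially), so floating-point error stays far below the tolerance used here.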
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find quantity of elements in group with given order Let $G = ( \mathbb { Z } / 133 \mathbb { Z } ) ^ { \times }$ be the group of units of the ring $\mathbb { Z } / 133 \mathbb { Z }$. Find the number of elements of $G$ of order $9$. $133$ is not divisible by $9$, so I do not see how such elements can exist. What is the solution to the problem? Or is my reasoning too simple to grasp the problem precisely?
The fact that $9\nmid133$ is irrelevant here, since the group $\mathbb{Z}_{133}^\times$ has $108$ elements. Since $9\mid108$, Lagrange's theorem is not an obstacle to the existence of elements of order $9$.
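For a concrete count (my own brute-force sketch, not part of the original answer): by CRT, $G \cong \mathbb{Z}_7^\times \times \mathbb{Z}_{19}^\times \cong \mathbb{Z}_6 \times \mathbb{Z}_{18}$, and an element has order $9$ exactly when its $\mathbb{Z}_{18}$-component has order $9$ ($\varphi(9)=6$ choices) and its $\mathbb{Z}_6$-component has order $1$ or $3$ ($3$ choices), giving $6\cdot 3 = 18$ elements. A direct enumeration agrees:

```python
from math import gcd

n = 133
units = [a for a in range(1, n) if gcd(a, n) == 1]   # |G| = phi(133) = 108

def order(a):
    # multiplicative order of a modulo n
    k, p = 1, a % n
    while p != 1:
        p = p * a % n
        k += 1
    return k

count = sum(1 for a in units if order(a) == 9)
print(count)  # 18
```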
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }