Can we show that for any $s > t ≥ 0$, we have $\mathbb{E}[\omega_s|\omega_t] =e^{\kappa(s−t)}\omega_t$ and why? Suppose that the initial state $\omega_0$ is normally distributed with mean $\mu_0$ and variance $\sigma_0^2$. The state then evolves according to the stochastic differential equation $$\tag{*}d\omega_t=\kappa\omega_tdt+\sigma dZ_t$$ where the driving process $\{Z_t\}_{t≥0}$ is a standard Brownian motion, independent of the initial state $\omega_0$. The variance rate $\sigma^2$ is strictly positive. The percentage drift $\kappa$ has unrestricted sign. If $\kappa < 0$, then the state follows a mean-reverting Ornstein–Uhlenbeck process. If $\kappa = 0$, then $\omega_t − \omega_0 = \sigma Z_t$, so the state follows a Brownian motion with zero drift. If $\kappa > 0$, then the state process is explosive. The solution of $(*)$ is given by $$\omega_t=\omega_0 e^{\kappa t} +\sigma\int_{0}^te^{\kappa(t-s)}dZ_s$$ The effect of $\kappa$ can be seen in the formula for the conditional expectation $\mathbb{E}[\omega_s|\omega_t]$. Can we show that for any $s > t ≥ 0$, we have $\mathbb{E}[\omega_s|\omega_t] =e^{\kappa(s−t)}\omega_t$ and why?
Take the expected value of both sides of your equation. Since the Itô integral against Brownian motion has zero expectation, you have $$ \frac{d\langle \omega_t\rangle}{dt} = \kappa \langle\omega_t\rangle, $$ which you can integrate from $t$ to $s$ with the initial condition $\omega_t$ to obtain your result.
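This is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch (not from the original answer; the parameter values kappa = -0.5, sigma = 0.3, omega_t = 1.2 and the step counts are arbitrary choices) that simulates the SDE by Euler-Maruyama and compares the sample mean of $\omega_s$ with $e^{\kappa(s-t)}\omega_t$:

```python
import math
import random

# Arbitrary illustrative parameters (not from the post): mean-reverting case.
kappa, sigma = -0.5, 0.3
omega_t = 1.2          # value we condition on at time t
delta = 1.0            # s - t
n_steps, n_paths = 200, 10000
dt = delta / n_steps

random.seed(0)
total = 0.0
for _ in range(n_paths):
    w = omega_t
    for _ in range(n_steps):
        # Euler-Maruyama step for d(omega) = kappa * omega * dt + sigma * dZ
        w += kappa * w * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    total += w

mc_mean = total / n_paths
predicted = math.exp(kappa * delta) * omega_t   # e^{kappa (s - t)} * omega_t
print(mc_mean, predicted)
```

The two printed values agree up to Monte Carlo and discretization error; this illustrates the identity, it does not prove it.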
{ "language": "en", "url": "https://math.stackexchange.com/questions/4535907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve the equation $\frac{\sqrt[7]{x-\sqrt2}}{2}-\frac{\sqrt[7]{x-\sqrt2}}{x^2}=\frac{x}{2}\sqrt[7]{\frac{x^3}{x+\sqrt2}}$ Solve the equation $$\dfrac{\sqrt[7]{x-\sqrt2}}{2}-\dfrac{\sqrt[7]{x-\sqrt2}}{x^2}=\dfrac{x}{2}\sqrt[7]{\dfrac{x^3}{x+\sqrt2}}$$ We have $x\ne0;-\sqrt2$. Let's multiply both sides of the equation by $2x^2\ne0$ to get $$x^2\sqrt[7]{x-\sqrt2}-2\sqrt[7]{x-\sqrt2}=x^3\sqrt[7]{\dfrac{x^3}{x+\sqrt2}}$$$$(x^2-2)\sqrt[7]{x-\sqrt2}=x^3\sqrt[7]{\dfrac{x^3}{x+\sqrt2}}$$ Let's multiply both sides of the equation by $\sqrt[7]{x+\sqrt2}\ne0$ to get $$(x^2-2)\sqrt[7]{x^2-2}=x^3\sqrt[7]{x^3}$$ Both sides have the form $u\sqrt[7]{u}=u^{8/7}$, so raising both sides to the $7$th power gives $$(x^2-2)^8=x^{24}$$
Hint Write the equation as $$\frac{\sqrt[7]{x-\sqrt2}}{2}-\frac{\sqrt[7]{x-\sqrt2}}{x^2}=\frac{x}{2}\sqrt[7]{\frac{x^3(x-\sqrt2)}{x^2-2}}$$ In the transformation above, take care of the situation $x=\sqrt 2$. $$\frac{\sqrt[7]{x-\sqrt2}}{2}-\frac{\sqrt[7]{x-\sqrt2}}{x^2}=\frac{x}{2}\sqrt[7]{\frac{x^3(x-\sqrt2)}{x^2-2}}$$ $$\left(\sqrt[7]{x-\sqrt2}\right)\left(\frac{1}{2}-\frac{1}{x^2}-\frac{x}{2}\sqrt[7]{\frac{x^3}{x^2-2}}\right)=0$$ $$\frac{1}{2x^2}\left(\sqrt[7]{x-\sqrt2}\right)\left(x^2-2-x^3\sqrt[7]{\frac{x^3}{x^2-2}}\right)=0$$ $$\frac{x^2-2}{2x^2}\left(\sqrt[7]{x-\sqrt2}\right)\left(1-\frac{x^3}{x^2-2}\sqrt[7]{\frac{x^3}{x^2-2}}\right)=0$$ $$\left(\frac{x^2-2}{2x^2}\right)\left(\sqrt[7]{x-\sqrt2}\right)\left(1-\left(\frac{x^3}{x^2-2}\right)^{8/7}\right)=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4536302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A scalar field satisfying the Laplace's Equation is zero everywhere Q: A scalar field $A(y_1,y_2,y_3)$ is equal to zero on a closed surface $S$ in $\mathcal{R}$. If $A$ satisfies the Laplace equation $A_{,ii}=0$ in $\mathcal{R}$, show that $A=0$ everywhere in $\mathcal{R}$. I have to show the above statement is true using tensor notations. To me it looks like solving a differential equation that is \begin{align*} A&=0\qquad\text{on } S\subset\mathcal{R}=\mathbb{R}^3\\ \nabla^2A&=0\qquad\text{in }\mathcal{R}=\mathbb{R}^3\\ \end{align*} Is my idea correct? If so, how do I go about solving this? Don't you need at least two conditions to solve a second order PDE?
I'm going to assume that $S$ is a smooth closed surface that is embedded into $\mathbb R^3$. The smoothness assumption can be relaxed to $C^1$. Also, I think the assumption that $S$ is embedded is natural in this context. If you do actually want to consider the case where $S$ is not embedded, then let me know. Proof: Since non-orientable surfaces cannot be embedded into $\mathbb R^3$, $S$ must be orientable. Then, by the Jordan-Brouwer separation theorem, $\mathbb R^3 \setminus S$ has two connected components: one bounded, which we will call $\Omega$, and one unbounded. For a proof of the Jordan-Brouwer separation theorem in the case $S$ is smooth see The Jordan-Brouwer Separation Theorem for Smooth Hypersurfaces by Lima and references therein for the $C^1$ case. Since $A$ is harmonic in $\Omega$ and $A=0$ on $\partial \Omega=S$, by the maximum principle $A \equiv 0$ in $\Omega$. But then, since $A$ is harmonic in all of $\mathbb R^3$, it is analytic in $\mathbb R^3$. Since $A$ vanishes in $\Omega$ and $A$ is analytic in all of $\mathbb R^3$, it follows from unique continuation that $A \equiv 0$ in $\mathbb R^3$. $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4536449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Meaning of 'argument of a function' What is a function's argument? I often see statements like "$f(x,y)$ has two arguments, $x$ and $y$". Does having two arguments mean that the function can take two numbers as inputs, or that it has two specific variables tied to it? The second reading seems wrong, as I can easily write $f(x)$ or $f(z)$, and that doesn't imply $x=z$. Is it rather that, in the expression $f(x,y)$, $x$ and $y$ are the arguments (the number $x$ and the number $y$ are acting as the inputs)?
We can start from a natural language example, the expression "The capital of France is Paris". In it we have two names: "Paris" and "France". If we replace into this expression the name "France" with a different name, e.g. "Italy" what we get is a different expression with a different meaning. And the same if we replace the name "Paris" with "Rome". Having performed this "operation", we may imagine that the expression is composed of a stable part: "The capital of... is___", expressing the relation between two objects (denoted by the names) standing in this relation. We can describe this fact in an abstract way saying that the first component is a function and the latter its arguments. Freely derived from G.Frege's Begriffsschrift (1879). This is the ubiquitous use of functions in mathematics: the sine and cosine functions are trigonometric functions that, for a specified angle, express the ratio of the length of the side that is opposite that angle to the length of the longest side of the triangle (the hypotenuse) and the ratio of the length of the adjacent leg to that of the hypotenuse, respectively. For an angle $\theta$, the sine and cosine functions are denoted simply as $\sin \theta$ and $\cos \theta$, where to be precise, $\sin$ and $\cos$ are the names of the two functions and $\theta$ is the name of the argument. See also the post Concept of a function and idea of a formula as a function for the modern origin of the mathematical concept of function as a sort of "rule" [expressed symbolically] that, having received a value as “input” allows us to calculate a corresponding “output” value. Regarding your misunderstanding: "I can easily have $f(x)$ or $f(z)$ and that doesn't imply $x=z$", you have to try to understand the abstract concept of function, that we can express in many ways. A function, see the natural language above, can be described with the expression "Blah blah ..." where the dots denotes the argument place, i.e. 
the empty place to be filled with the name of the input value. The customary usage is $f(x)$ but we can as well use $f( \ )$ or $f(\_)$. But we may have problems with functions of more than one argument, where we must write something like $f(\circ, \square)$ or $f(\__{1},\__{2})$. Common practice is $f(x,y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4536615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Without any software and approximations prove that $\sec(52^{\circ})-\cos(52^{\circ})>1$ Without any software and approximations prove that $$\sec(52^{\circ})-\cos(52^{\circ})>1$$ We can use some known trig values like $18^{\circ}$,$54^{\circ}$,etc My try: I considered the function: $$f(x)=\sec(x)-\cos(x)-1,\: x\in \left (0, \frac{\pi}{3}\right)$$ We have the derivative as: $$f'(x)=\sec x\tan x+\sin x >0$$ so $f$ is Monotone increasing. So we have: $$f(52^{\circ})>f(45^{\circ})=\frac{1}{\sqrt{2}}-1$$ but not able to proceed
Hint. Can you show the inequality for $10\leq x\leq 65$: $$f\left(x\right)=x\left(\sec\left(x\cdot\frac{\pi}{180}\right)-\cos\left(\frac{\pi}{180}\cdot x\right)\right)> \frac{2(x-9)^{3}-2(x-9)^{2}+15(x-9)+75}{3000}$$ ? Another hint: Define $$h(x)=\left(\sec\left(x\cdot\frac{\pi}{180}\right)-\cos\left(\frac{\pi}{180}\cdot x\right)\right),\quad p(x)=\frac{2(x-9)^{3}-2(x-9)^{2}+15(x-9)+75}{3000}$$ Then we have for $10<x<65$: $$h''(x)>0,\quad p''(x)>0$$ So, using strong convexity, we have for $x\in[45,52]$: $$xh(x)\geq x\left(h'\left(45\right)\left(x-45\right)+h\left(45\right)+\frac{h''\left(45\right)}{2}\left(x-45\right)^{2}\right)>p(x)$$ If $0<a<1$ and $0\leq x\leq 2a\pi$ then: $$1-\cos\left(x\right)-\left(\frac{\sin\left(a\pi\right)}{a\pi}\right)^{2}\cdot\frac{x^{2}}{2}\geq 0$$ See [1] for a reference. Now a lemma using the concavity of $\sin(x)$ on $(0,\pi/2)$: We have $$0<\frac{\sin\left(\frac{\pi}{6.25}\right)-\sin\left(\frac{\pi}{6}\right)}{\frac{\pi}{6.25}-\frac{\pi}{6}}-\frac{\sqrt{3}}{2}$$ A second lemma: $$\pi<\frac{185}{100}\sqrt{3}$$ Using Lemmas 1 and 2 with $a=1/6.25$ and $x=52\cdot\frac{\pi}{180}$ in the first inequality, we get: $$\left(\frac{6.25}{\pi}\left(\frac{\sqrt{3}}{2}\left(\frac{1.85\sqrt{3}}{6.25}-\frac{1.85\sqrt{3}}{6}\right)+\frac{1}{2}\right)\right)^{2}\cdot\frac{\left(52\cdot\frac{\pi}{180}\right)^{2}}{2}=1934881/512000<1-\cos\left(52\cdot\frac{\pi}{180}\right)$$ which is sufficient to show the claim proposed in the comment by @MartinR. Reference: [1] M. Becker and L. E. Stark, An extremal inequality for the Fourier coefficients of positive cosine polynomials, Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat., No. 577–No. 588 (1977), 57–58.
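As a pure sanity check (the question explicitly asks for a software-free proof, so this is no substitute for one), the target inequality and the convexity claim for $h$ can be confirmed numerically; the integer grid and the step size in the second difference are arbitrary choices:

```python
import math

def h(x_deg):
    # h(x) = sec(x deg) - cos(x deg)
    x = math.radians(x_deg)
    return 1.0 / math.cos(x) - math.cos(x)

def second_diff(f, x, eps=1e-4):
    # Central second difference, a numerical proxy for f''(x)
    return (f(x + eps) - 2.0 * f(x) + f(x - eps)) / eps ** 2

value = h(52)                                   # sec(52 deg) - cos(52 deg)
convex_on_grid = all(second_diff(h, x) > 0 for x in range(11, 65))
print(value, convex_on_grid)
```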
{ "language": "en", "url": "https://math.stackexchange.com/questions/4536778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 3 }
Proving a claim that $Z_n = \max\{(X_n^{(1)})^2, X_n^{(2)}\}$ is a submartingale Let $X_n^{(1)}, X_n^{(2)}, n \in \mathbb{N}$ be martingales. Let $\mathcal{F}_n$ be a filtration. I believe that $Z_n := \max\{(X_n^{(1)})^2, X_n^{(2)}\}$ is a submartingale. I tried to show it by using Jensen's inequality since the mapping $(x,y) \mapsto \max\{x^2, y\}$ is convex. So, \begin{align*} \mathbb{E}[Z_n|\mathcal{F}_{n-1}] \geq \max\{\mathbb{E}[(X_n^{(1)})^2|\mathcal{F}_{n-1}], \mathbb{E}[X_n^{(2)}|\mathcal{F}_{n-1}]\} \end{align*} Well, $\mathbb{E}[X_n^{(2)}|\mathcal{F}_{n-1}] = X_{n-1}^{(2)}$ by assumption. But I am not sure how to justify $\mathbb{E}[(X_n^{(1)})^2|\mathcal{F}_{n-1}] = (X_{n-1}^{(1)})^2$.
I think that just another application of Jensen's inequality does the trick: \begin{align*} \mathbb{E}[(X_n^{(1)})^2|\mathcal F_{n-1}] &\ge (\mathbb{E}[X_n^{(1)}|\mathcal F_{n-1}])^2 \\ &=(X_{n-1}^{(1)})^2, \end{align*} so \begin{align*} \mathbb{E}[Z_n|\mathcal F_{n-1}] &\ge \max\{\mathbb{E}[(X_n^{(1)})^2|\mathcal F_{n-1}],\mathbb{E}[X_n^{(2)}|\mathcal F_{n-1}]\} \\ &\ge \max\{(X_{n-1}^{(1)})^2, X_{n-1}^{(2)}\} \\ &= Z_{n-1}. \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/4536987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using logical equivalences to show $(A \Delta B) \cap C = (A \cap C) \Delta (B \cap C)$ I'm trying to show $(A \Delta B) \cap C = (A \cap C) \Delta (B \cap C)$ using logical equivalences. (NOTE: "$\Delta$" denotes exclusive or, i.e. $A \Delta B = (A \cup B) \setminus (A \cap B) = (A \setminus B) \cup (B \setminus A)$.) My workings so far: \begin{align*} x ∈ (A △ B) ∩ C & \iff x ∈ (A △ B) ∧ x ∈ C &&\text{(definition of ∩)} \\& \iff [(x ∈ A \lor x ∈ B) ∧ ¬(x ∈ A ∧ x ∈ B)] ∧ x ∈ C &&\text{(definition of △)} \\ \text{Let $P = x ∈ A$, $Q = x ∈ B$, $R = x ∈ C$} \\& \iff [(P \lor Q) ∧ ¬(P ∧ Q)] ∧ R \\& \iff [(P \lor Q) ∧ R] ∧ ¬(P ∧ Q) &&\text{(associativity + commutativity)} \end{align*} This is close to the result, I just need to turn $P \land Q$ into $P \land R \land Q \land R$, but I'm not sure how to do this. I tried using the tautology law $P ∧ ⊤ ⟺ P$ to "create" an $R$ term. However, this also introduces an $¬R$ term, which I cannot get rid of: \begin{align*} [(P \lor Q) ∧ R] ∧ ¬(P ∧ Q) & \iff [(P \lor Q) ∧ R] ∧ ¬[(P ∧ Q) ∧ (R \lor ¬R)] &&\text{(P ∧ ⊤ ⟺ P)} \\& \iff [(P \lor Q) ∧ R] ∧ ¬[(P ∧ (R \lor ¬R)) ∧ (Q ∧ (R \lor ¬R))] &&\text{(distributivity)} \end{align*} Could someone please give me a hint as to what I could have used in place of $P ∧ ⊤ ⟺ P$? I also tried $(P ∧ Q) ∧ (R \lor T)$, but this collapses back into $(P ∧ Q)$ via the absorption law.
Suppose that x is in (A XOR B) $\cap$ C. Two cases, WLOG assume that x is in A, not in B and in C. But then x is certainly in A $\cap$ C, and since it is not in B, it cannot be in B $\cap$ C. But this suffices. For the other direction, again assume WLOG that x is in $A \cap C$ and not in $B \cap C$. It follows that x is in A, x is in C. Suppose that x were in B. Then it would be in B $\cap C$ since it is also in C. Hence, x is not in B. But this suffices.
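Since the question asks specifically about logical equivalences, the propositional identity underlying the set identity, $(P \oplus Q) \wedge R \iff (P \wedge R) \oplus (Q \wedge R)$, can be verified exhaustively over all eight truth assignments; a small sketch:

```python
from itertools import product

ok = True
for P, Q, R in product([False, True], repeat=3):
    lhs = (P ^ Q) and R                  # x in (A XOR B) intersect C
    rhs = (P and R) ^ (Q and R)          # x in (A int C) XOR (B int C)
    ok = ok and (lhs == rhs)
print(ok)   # True
```

The truth table also suggests the missing algebraic step: when $R$ is true both sides reduce to $P \oplus Q$, and when $R$ is false both sides are false.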
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
an example of tangent equation calculation, from a book, that I do not understand I am having trouble with an example of the equation of a tangent from a book. Here's what my book writes (in French); I translate it (summarizing a bit): take a point T(X,Y) on the tangent; the slope between M and T is $\frac{Y - f(x_{0})}{X - x_{0}}$. This slope is also the derivative at M, $f'(x_{0})$: $\frac{Y - f(x_{0})}{X - x_{0}} = f'(x_{0})$. The relationship between the coordinates (X,Y) of T is thus $Y - f(x_{0}) = f'(x_{0})\cdot(X - x_{0})$. But here is the worked example, which troubles me: What is the equation of the tangent at $\mathbf{x_{0} = 2}$ to the parabola of equation $\mathbf{y = x^2}$? (this drawing isn't from the author, it's mine, to picture $y = x^2$ and what $x_{0} = 2$ or $x_{0} = 3$ would then mean) At $x_{0} = 3$, $y_{0} = 9$; the derivative being $y' = 2x$, we have $y_{0}' = 6$. First question: why does the author set $x_{0} = 3$ if he said he was looking for $x_{0} = 2$ the line just before? Reassigning $x_{0}$ looks strange to me. "According to 8.7, the equation of the tangent at $x_{0} = 2$" (here $x_{0}$ returns to its previous assignment, $x_{0} = 2$...) "to the parabola is: $Y - 9 = 6(X - 3)$, or $Y = 6X - 9$." It's very troublesome, especially because when I check with M(2,4): $y_{m} = 6x_{m} - 9$ with $x_{m}=2$ gives $y_{m} = 6 \times 2 - 9 = 12 - 9 = 3$, which is not on the curve, and I am supposed to be on the tangent. If, with $y = x^2$, M has coordinates M(2,4), why does this tangent equation return the point (2, 3)?
L'auteur a fait 2 fois la même coquille. Remplace simplement ses deux "$x_{0} = 2 $" par "$x_{0} =3 $". The author did twice the same typo. Only replace the two "$x_{0} = 2$"'s by "$x_{0} = 3$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Mandelbrot set, Fourier analysis, zero line in the Riemann zeta function and a cup of tea. There is a funny geometric pattern I have seen in several different places, and I wonder if someone knows whether they are somehow related: (1) the Mandelbrot set; (2) in Fourier analysis, when "the winding frequency matches up with the frequency of the signal"; (3) the image of the zero line of the Riemann zeta function; (4) in my tea cup when the light hits it slightly sideways.
The main island in the Mandelbrot set For a complex number $c$, consider the map $f_c(z) := z^2+c$. We want to find the set of $c$ so that $f_c$ has an attracting fixed point. That is, we want a point $z_0$ so that $f_c(z_0) = z_0$ and $|f_c'(z_0)|<1$. Solve $f_c(z) = z$ to get $$ z_0 = \frac{1+\sqrt{1-4c}}{2} $$ (choosing one root of the quadratic; the other gives the same boundary curve). Now $f_c'(z)=2z$, so we want $|1+\sqrt{1-4c}| < 1$. The boundary of this region is parameterized as $$ 1+\sqrt{1-4c} = e^{it},\qquad 0 \le t < 2\pi , $$ or $$ c = \frac{2e^{it}-e^{i2t}}{4},\qquad 0 \le t < 2\pi . $$ This is a cardioid.
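A quick numerical check of this parameterization (not part of the original answer): for points $c(t)$ on the curve, one of the two fixed points of $z \mapsto z^2 + c$ should have multiplier $2z_0$ of modulus exactly $1$. The choice of 360 sample points is arbitrary:

```python
import cmath
import math

max_err = 0.0
for k in range(360):
    t = 2.0 * math.pi * k / 360.0
    c = (2.0 * cmath.exp(1j * t) - cmath.exp(2j * t)) / 4.0
    s = cmath.sqrt(1.0 - 4.0 * c)
    # The two fixed points of z -> z^2 + c have multipliers 1 + s and 1 - s;
    # on the boundary curve, one of them should lie on the unit circle.
    err = min(abs(abs(1.0 + s) - 1.0), abs(abs(1.0 - s) - 1.0))
    max_err = max(max_err, err)
print(max_err)
```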
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of choosing two random cards of the same value I know this question was asked here: What is the probability of choosing two cards of the same value? But I'm trying to understand it in terms of combinations. Two cards are randomly chosen from a deck of 52. What is the probability that they have the same value? I'm thinking: $\frac{\binom{13}{1} \binom{3}{1}}{\binom{52}{2}}$ Out of 13 possibilities you choose the first card (say it's a 7). Then for the second card you need to pick another 7 out of the remaining three 7's. Can anyone explain to me where I'm going wrong in my thinking? Thanks
When calculating probabilities for drawing without replacement it is often easier to treat the draws in sequence, so instead of $\binom{52}2$ in the denominator you have $52×51$. The first factor in the numerator is $52$, since one card doesn't constrain the possibility of both cards having the same value. The second factor, for the second draw, is $3$, since that is the number of remaining cards sharing a value with the first draw. Thus we have $\frac{52×3}{52×51}=\frac1{17}$. In terms of combinations, your numerator undercounts: after choosing the shared value ($13$ ways) you must choose which $2$ of its $4$ suits appear, giving $\binom{13}{1}\binom{4}{2}=78$ favorable hands rather than $13\cdot3=39$; then $\frac{78}{\binom{52}{2}}=\frac{78}{1326}=\frac1{17}$ as well.
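The count can also be confirmed by brute force over all $\binom{52}{2}$ unordered pairs; the rank/suit encoding below is an arbitrary choice:

```python
from fractions import Fraction
from itertools import combinations

# Encode each card as (rank, suit) with 13 ranks and 4 suits.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
pairs = list(combinations(deck, 2))
same = sum(1 for a, b in pairs if a[0] == b[0])   # pairs with equal rank
prob = Fraction(same, len(pairs))
print(same, len(pairs), prob)   # 78 1326 1/17
```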
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Matrices: Equivalent Definitions of Singularity I am looking to know as many definitions as possible that are equivalent to the definition of a singular matrix, that is, conditions that imply singularity. Trivially we have the widely known definition: Let $A$ be an $n \times n$ matrix. $A$ is non-singular if $A^{-1}$ exists. We also have: Let $A$ be an $m \times n$ matrix. $A$ is singular if $m \ne n$. One last example: Let $A$ be an $n \times n$ skew-symmetric matrix; $A$ is singular if $n$ is odd. With that being said, what other conditions on a matrix $A$ imply singularity that may not be so obvious? As always, I appreciate any and all contributions made.
Something that may not be as obvious: an $n\times n$ matrix $A$ is singular precisely when its rows (equivalently, its columns) are linearly dependent. This is equivalent to saying $A$ has $0$ as an eigenvalue, and also to saying that applying Gaussian elimination to $A$ shows it is row (column) equivalent to a matrix with a row (column) of all $0$s.
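A small sketch tying these criteria together (not from the original answer): exact Gaussian elimination over the rationals detects the dependence and yields determinant $0$. The test matrix, whose third row is the sum of the first two, is an arbitrary choice:

```python
from fractions import Fraction

def det(m):
    """Determinant by fraction-exact Gaussian elimination (returns a Fraction)."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)            # a zero column appeared: singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        d *= m[col][col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return sign * d

# Third row = first row + second row, so the rows are linearly dependent.
A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
print(det(A))   # 0
```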
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_0^\infty \frac{t^2 - \sin^2 t}{t^4} dt$ Evaluate $\int_0^\infty \frac{t^2 - \sin^2 t}{t^4} dt$. The integrand is clearly positive as for all $t > 0, t > |\sin t|.$ I'm not sure if finding the Fourier series of some functions might be useful. Or maybe a Taylor expansion or telescoping sum might be useful? $\sin^2 t$ has period $\pi$, so we can split the integral into the sum $\sum_{k=0}^{\infty} \int_{k\pi}^{(k+1)\pi} \frac{t^2 - \sin^2 t}{t^4} dt.$ This sum, even if valid, doesn't seem very useful. In hindsight, it seems the question was fairly straightforward, provided one can accept the fact that $\int_0^\infty (\sin x)/xdx = \pi/2.$ Many proofs are provided here for instance.
Integrate by parts repeatedly to reduce the integral \begin{align} \int_0^\infty \frac{t^2 - \sin^2 t}{t^4} dt =& \ -\frac13\int_0^\infty ({t^2 - \sin^2 t})\ d(\frac1{t^3})\\ \overset{ibp}=& \ -\frac16\int_0^\infty ({2t- \sin 2t})\ d(\frac1{t^2})\\ \overset{ibp} =& \ -\frac13 \int_0^\infty ({1- \cos 2t})\ d(\frac1{t})\\ \overset{ibp} =& \ \frac23\int_0^\infty \frac{ \sin 2t}t\ dt =\frac23 \cdot \frac \pi2=\frac\pi3 \end{align} where $\int_0^\infty \frac{ \sin 2t}tdt\overset{2t\to t}= \int_0^\infty \frac{ \sin t}tdt=\frac\pi2 $.
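The value $\pi/3$ is easy to confirm numerically (not part of the original answer). A composite Simpson sketch, where the cutoff $T = 1000$ and the $1/T$ tail estimate are choices made here, not taken from the answer:

```python
import math

def integrand(t):
    if t == 0.0:
        return 1.0 / 3.0               # limit of (t^2 - sin^2 t) / t^4 at 0
    return (t * t - math.sin(t) ** 2) / t ** 4

T, n = 1000.0, 200000                  # n even; beyond T the integrand is ~ 1/t^2
h = T / n
acc = integrand(0.0) + integrand(T)
for k in range(1, n):
    acc += (4 if k % 2 else 2) * integrand(k * h)
approx = acc * h / 3.0 + 1.0 / T       # composite Simpson plus the ~1/T tail
print(approx, math.pi / 3.0)
```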
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Solution verification: Let $A$ be a matrix; $A^TA=AA^T$. If $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $A^Tx=\lambda x$ Here is the proposition I wish to show holds true: Let $A$ be an $n \times n$ matrix such that $A^TA=AA^T$. Show that if $x$ is an eigenvector of matrix $A$ with eigenvalue $\lambda$, then $x$ is an eigenvector of $A^T$ with eigenvalue $\lambda$. $\it {\text{Hint: How are the expressions}}$ $||Ax-\lambda x||$ $\it{\text{and}}$ $||A^Tx-\lambda x||$ $\it{\text{related?}}$ $\textbf{Solution}$: First, we assume that $x$ is an eigenvector of $A$ with eigenvalue $\lambda$ such that $Ax= \lambda x$. Using the hint, if $x$ is an eigenvector of $A^T$ with the same eigenvalue, we require $A^Tx=\lambda x$. Thereby, if this is the case, then $||Ax-A^Tx||=0$. We try to show this by the following: $$ \begin{align} ||Ax-A^Tx||^2 & = (Ax-A^Tx)^T(Ax-A^Tx) \\ &=(x^TA^T-x^TA)(Ax-A^Tx) \\ &=x^TA^TAx-x^TAAx-x^TA^TA^Tx+x^TAA^Tx \end{align} $$ As $A^TA=AA^T$, we have that $x^TA^TAx=x^TAA^Tx=||Ax||^2$. $$ \begin{align} ||Ax-A^Tx||^2 & = 2||Ax||^2-x^TAAx-x^TA^TA^Tx \\ &= 2||Ax||^2 -x^TA \lambda x -(Ax)^TA^Tx \\ &= 2||Ax||^2 -x^TA \lambda x -(\lambda x)^TA^Tx \\ &=2||Ax||^2 -x^TA \lambda x -\lambda x^TA^Tx \hspace{3mm}(\text{as}\hspace{1mm} \lambda^T=\lambda) \\ &=2||Ax||^2 -\lambda x^TA x - x^TA^T \lambda x \\ &=2||Ax||^2 -(\lambda x)^TA x - x^TA^T (\lambda x) \\ &=2||Ax||^2 -(Ax)^TA x - x^TA^T (Ax) \\ &=2||Ax||^2 -||Ax||^2 - ||Ax||^2 \\ &=0 \end{align} $$ This shows that vector $(Ax-A^Tx)$ is orthogonal to itself, i.e it must be the zero vector. Hence $Ax=A^Tx$ and since $Ax=\lambda x$, we have that $\lambda x = A^T x$ and we are done. Is this correct/ any other thoughts on my proof?
Shorter argument from the hint: \begin{aligned} ||A'x-\lambda x||^2&=(A'x-\lambda x)'(A'x-\lambda x)\\ &=x'AA'x-\lambda x'Ax-\lambda x'A'x+\lambda^2x'x\\ &=x'A'Ax-\lambda x'(Ax)-\lambda(Ax)'x+\lambda^2x'x\\ &=(\lambda x)'(\lambda x)-\lambda x'(\lambda x)-\lambda^2x'x+\lambda^2x'x\\ &=(\lambda^2-\lambda^2-\lambda^2+\lambda^2)x'x=0 \end{aligned} The third line uses $AA'=A'A$ and the fourth line uses $Ax=\lambda x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4537968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
prove that $\dfrac{\alpha}{\beta}$ can be written in the form $a + b\sqrt[3]{2} +c\sqrt[3]{4}$ for rationals a,b,c. Let $\alpha = \sum_{i=0}^2 a_i(\sqrt[3]{2})^i$ where each $a_i$ is rational and let $\beta = \sum_{i=0}^2 b_i (\sqrt[3]{2})^i$ where each $b_i\in \mathbb{Q}$ and not all the $b_i$'s are zero. Prove that $\dfrac{\alpha}{\beta}$ can be written in the form $a + b\sqrt[3]{2} +c\sqrt[3]{4}$ for rationals a,b,c. I'm not sure how to "rationalize" the denominator of $\dfrac{\alpha}{\beta}.$ The standard trick for square roots obviously doesn't work here, but I think there's a variant that might work. I know $1+\sqrt[3]{2}+\sqrt[3]{4} = \dfrac{1}{\sqrt[3]{2}-1}.$ One can of course assume all the $a_i$'s and $b_i$'s are integers by multiplying the numerator and denominator of $\alpha/\beta$ by the least common multiple of the $a_i$'s and $b_i$'s.
Really you are saying: if $\alpha,\beta\in\Bbb Q(\sqrt[3]{2})$ and $\beta\neq0$, then $\alpha/\beta$ can be expressed in terms of the basis $\{1,\sqrt[3]{2},\sqrt[3]{4}\}$ i.e. $\alpha/\beta\in\Bbb Q(\sqrt[3]{2})$. Written like so, the answer is obvious since $\Bbb Q(\sqrt[3]{2})$ is a field... the key point being: $\{1,\sqrt[3]{2},\sqrt[3]{4}\}$ is actually a basis. So why is that true? I'll make an abstract argument here that you can massively generalise to any algebraic field extension. $\Bbb Q(\sqrt[3]{2})\cong\Bbb Q[x]/(x^3-2)$ where the isomorphism symbolically evaluates each polynomial class on the RHS at $x=\sqrt[3]{2}$ (this is creating the linear span of $\{1,\sqrt[3]{2},\sqrt[3]{4}\}$). Since $\Bbb Q$ is a field, $\Bbb Q[x]$ is a principal ideal domain; since $x^3-2$ is an irreducible polynomial, the ideal $(x^3-2)$ is maximal - if another ideal $J$ contains it, then $J=(p)$ for some polynomial $p$ and $(p)\supset(x^3-2)$ means $p$ divides $(x^3-2)$, and the only possible divisors $p$ are $(\pm)(x^3-2)$ or $(\pm)1$ so $J$ is always either $(x^3-2)$ itself or the entire ring $\Bbb Q[x]$. Maximality means $\Bbb Q[x]/(x^3-2)$ is a field, because: if a nonzero element $p$ is not invertible, then $(p)$ does not contain $1$, so $(p)\neq\Bbb Q[x]/(x^3-2)$ and it is also a nonzero ideal, which means $(p)$ is a proper container of $(x^3-2)$ when lifted back into $\Bbb Q[x]$ - this is impossible. Therefore all nonzero elements are invertible! What on Earth does all that mean? Suppose an element $p=a+b\sqrt[3]{2}+c\sqrt[3]{4}$ is not invertible (there do not exist $\alpha,\beta,\gamma\in\Bbb Q$, $(\alpha+\beta\sqrt[3]{2}+\gamma\sqrt[3]{4})p=1$). Then the set of all multiplications $p\cdot(\alpha+\beta\sqrt[3]{2}+\gamma\sqrt[3]{4})$ does not contain $1$ so is not the entirety of $\Bbb Q(\sqrt[3]{2})$, but if $p\neq0$ then this set contains nonzero elements (namely, $p$...) - let's call this set (principal ideal) $J_1$.
Consider all polynomials $a_0+a_1x+a_2x^2+\cdots+a_nx^n$ with rational coefficients. For any $q\in J_1$ and some polynomial $f_0$, we have that $q=f_0(\sqrt[3]{2})$. In fact, this is true for very many $f_0$ - any $f(x)=f_0(x)+g(x)(x^3-2)$ will have $f(\sqrt[3]{2})=q$. Let's call the set of all such $f(x)$ (ranging over $q\in J_1$) and all multiplications $f(x)h(x)$ the set (principal ideal) $J_2$, where $h(x)$ is any rational polynomial. Our assumptions are that $J_2\neq\{0\}$ and $J_2\neq\Bbb Q[x]$ - $1\notin J_2$. Moreover, since $q=0$ is an option, so $f_0(x)=0$ is an option and thus $f(x)=x^3-2$ is an option, we know that $x^3-2\in J_2$ as well as all multiples of $x^3-2$. But it follows from polynomial division that some polynomial $T(x)$ must divide every element of $J_2$. That means $T(x)$ divides $x^3-2$, so either $T=\pm1$ or $T=\pm(x^3-2)$. In the first case, we get $J_2=\Bbb Q[x]$ which is false. In the second case, we get $J_2$ to be all multiples of $x^3-2$. That means, when we evaluate any element of $J_2$ at $\sqrt[3]{2}$, we get zero. In particular it means $p=0$. But this is a contradiction! (the same as the one above, but hopefully more concrete). It follows that there must be some $\alpha+\beta\sqrt[3]{2}+\gamma\sqrt[3]{4}$ that serves as an inverse to $p$. Even more concretely: (following Will Jagy's hint): $(a+b\sqrt[3]{2}+c\sqrt[3]{4})(a^2+b^2\sqrt[3]{4}+c^2\sqrt[3]{16}-bc\sqrt[3]{8}-ac\sqrt[3]{4}-ab\sqrt[3]{2})=a^3+b^3(2)+c^3(4)-3abc\sqrt[3]{8}$. In other words: $$(a+b\sqrt[3]{2}+c\sqrt[3]{4})(a^2-2bc+[2c^2-ab]\sqrt[3]{2}+[b^2-ac]\sqrt[3]{4})=a^3+2b^3+4c^3-6abc$$So: $$\frac{1}{a+b\sqrt[3]{2}+c\sqrt[3]{4}}=\frac{(a^2-2bc)+(2c^2-ab)\sqrt[3]{2}+(b^2-ac)\sqrt[3]{4}}{a^3+2b^3+4c^3-6abc}$$
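The closing formula can be checked mechanically (not part of the original answer): represent $a+b\sqrt[3]{2}+c\sqrt[3]{4}$ as a triple of rationals and multiply with the reduction $(\sqrt[3]{2})^3 = 2$. The sample element $p$ below is an arbitrary choice:

```python
from fractions import Fraction as F

def mul(u, v):
    """Multiply u = (a, b, c) meaning a + b*2^(1/3) + c*2^(2/3),
    reducing 2^(3/3) -> 2 and 2^(4/3) -> 2 * 2^(1/3)."""
    a, b, c = u
    d, e, f = v
    return (a*d + 2*(b*f + c*e),
            a*e + b*d + 2*c*f,
            a*f + b*e + c*d)

def inv(u):
    # Inverse via the formula derived in the answer; n is the rational norm.
    a, b, c = u
    n = a**3 + 2*b**3 + 4*c**3 - 6*a*b*c
    return (F(a*a - 2*b*c, n), F(2*c*c - a*b, n), F(b*b - a*c, n))

p = (F(3), F(-1), F(2))      # 3 - 2^(1/3) + 2 * 2^(2/3), arbitrary choice
print(mul(p, inv(p)))        # (Fraction(1, 1), Fraction(0, 1), Fraction(0, 1))
```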
{ "language": "en", "url": "https://math.stackexchange.com/questions/4538143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $G$ be a $p$-group and let $H$ be a proper subgroup of $G$. Show that $H$ is a proper subgroup of $N_G(H)$ Let $G$ be a $p$-group and let $H < G$. Show that $H < N_G(H)$. If $H \trianglelefteq G$, then it should be clear that $H<N_G(H)$. So we suppose, $H$ isn't normal. So we let $H$ act on the set $$S = \{gHg^{-1}: g \in G, gHg^{-1} \neq H\}$$ via conjugation. If I can show that $p$ doesn't divide $|S|$, then I can use the fixed point theorem to complete the problem. Any advice on how to show that $p$ doesn't divide $|S|$?
The result is only true if $G$ is finite, which you are assuming but never state explicitly. As Derek Holt points out in comment, that is bad practice: if you are considering only finite groups, you should say so (or, in this site, at least tag it as [finite-groups]). You state that you only need to show that the cardinality of $S$ is not a multiple of $p$ when $H$ is not a normal subgroup of $G$, so that is what the argument below will show. Since you talk about the fixed point theorem, presumably you know about group actions. Let $G$ act on its subgroups by conjugation. Let $H$ be a proper subgroup of $G$. Let $T=\{gHg^{-1}\mid g\in G\}$ be the set of all conjugates of $H$. Your set $S$ is just $T\setminus\{H\}$. The set $T$ is the orbit of $H$ under the action. By the Orbit-Stabilizer Theorem, the cardinality of $T$ is equal to the index of the stabilizer of $H$ under the action. The stabilizer is $$N_G(H) = \{g\in G\mid gHg^{-1}=H\}.$$ Thus, the cardinality of $T$ is $[G:N_G(H)]$, and the cardinality of $S$ is $|T|-1 = [G:N_G(H)] - 1$. Because $G$ is a $p$-group, every subgroup has order a power of $p$ and index a power of $p$. So the cardinality of $T$ is a power of $p$. Say $p^i$. That means that the cardinality of $S$ is one less than a power of $p$. If $i\gt 0$, then $|T|\equiv 0\pmod{p}$, so $|S|\equiv -1\pmod{p}$, Since $-1$ is never a multiple of a prime, then it follows that $|S|$ is not a multiple of $p$ and we are done. If $i=0$, so $|T|=1$, then that means that $gHg^{-1}=H$ for all $g\in G$, so $H\triangleleft G$. Since we are assuming that $H$ is a proper subgroup of $G$, then this gives $H\subsetneq N_G(H)=G$, and we are done without having to worry about $S$ at all.
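The statement can be watched in action in a small example (not part of the original answer): the dihedral group of order $8$ is a $2$-group in which a non-normal subgroup $H$ of order $2$ has normalizer of order $4$, strictly between $H$ and $G$. A sketch using permutations:

```python
from itertools import product

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)          # rotation of the square 0-1-2-3
s = (0, 3, 2, 1)          # a reflection

# Generate the dihedral group of order 8 (a 2-group) by closure.
G = {e}
frontier = {r, s}
while frontier:
    G |= frontier
    frontier = {compose(a, b) for a, b in product(G, G)} - G

H = {e, s}                # a non-normal subgroup of order 2
N = {g for g in G
     if {compose(compose(g, h), inverse(g)) for h in H} == H}
print(len(H), len(N), len(G))   # 2 4 8
```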
{ "language": "en", "url": "https://math.stackexchange.com/questions/4538218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Dimension of the intersection of $2$ subspaces: $\dim(W \cap X) \leq \min \{\dim(W), \dim(X)\}$ Let $V$ be an $n$-dimensional vector space, $W$ a subspace of $V$, and $X$ another subspace of $V$. Prove that $$ \dim(W \cap X) \leq \min \{\dim(W), \dim(X)\}. $$ I'm not sure the best way to prove it. Any idea? Thanks!
Since $W \cap X \subseteq W$, any basis of $W \cap X$ is a linearly independent subset of $W$, and a linearly independent subset of $W$ has at most $\dim(W)$ elements. Hence $\dim(W \cap X) \leq \dim(W)$. The same argument with $X$ in place of $W$ gives $\dim(W \cap X) \leq \dim(X)$, and the result follows immediately.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4538343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
boundary of a simply connected, compact manifold with boundary For a simply connected, compact n-manifold with boundary (n > 1), is its boundary connected? It’s obviously false when n = 1, but how to prove or disprove the statement when n > 1? I’m especially interested in n = 2. Thanks.
Let $X$ be a simply connected and compact $2$-manifold with boundary, hence a surface. According to "Riemann Surfaces" by Simon Donaldson (See here, Chapter 7, page 92), we have the following inequality for surfaces: $$b_0(\partial X)\leq 2-\chi(X),$$ where $b_0(\partial X)$ (zeroth Betti number) denotes the number of path-connected components of $\partial X$. We need the Betti numbers of $X$ to compute its Euler characteristic: (1) Since $X$ is simply connected and therefore in particular path-connected, we have $b_0(X)=1$. (2) Since $X$ is simply connected, we have $H_1^\text{sing}(X,\mathbb{Z})\cong\pi_1(X)^\mathrm{ab}\cong 1$ and therefore $b_1(X)=0$. (3) Since the Betti numbers are the ranks of their respective homology groups, we have $b_2(X)\geq 0$. (4) Since $X$ is a $2$-manifold, we have $b_n(X)=0$ for $n>2$. We therefore have: $$\chi(X)=b_0(X)-b_1(X)+b_2(X)\geq 1$$ resulting in: $$b_0(\partial X)\leq 1,$$ which proves that $\partial X$ is path-connected and therefore connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4538571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Probability of choose point in interval (0,1) Choose a point random uniformly in $(0,1)$. Then, this point divides the interval (0,1) into two sub-intervals. * *Compute the expected length of the interval containing a fixed point $s \in [0,1]$. *Compute the expected distance of the randomly chosen point from $s$. My approach: My intuition is that the expected length of the interval containing $s$ is 1/4, since on average the sub-intervals should be about length 1/2 each, and $s$ is in exactly one of them. I don't know how to mathematically prove this intuition if it's correct. I have no idea how to approach 2.
Hint. By symmetry, you can assume $s \le 1/2$. Then for random $x$ the interval containing $s$ has length $x$ if $x > s$ and length $1-x$ if $x<s$.
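Carrying the hint through the integrals gives $\mathbb{E}[\text{length}]=\tfrac12+s(1-s)$ and $\mathbb{E}|x-s|=\tfrac{s^2+(1-s)^2}{2}$ (these closed forms are my own completion of the hint, worth double-checking); a Monte Carlo sketch:

```python
import random

random.seed(42)

def estimate(s, trials=200_000):
    length_sum = dist_sum = 0.0
    for _ in range(trials):
        x = random.random()
        length_sum += x if x > s else 1 - x     # length of the sub-interval containing s
        dist_sum += abs(x - s)
    return length_sum / trials, dist_sum / trials

s = 0.3
exact_length = 0.5 + s * (1 - s)                # int_0^s (1-x) dx + int_s^1 x dx
exact_dist = (s**2 + (1 - s)**2) / 2            # int_0^1 |x - s| dx
est_length, est_dist = estimate(s)
print(est_length, est_dist)
```

Note that at $s=\tfrac12$ the expected length is $\tfrac34$, not $\tfrac14$: the sub-interval containing a fixed point is size-biased.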
{ "language": "en", "url": "https://math.stackexchange.com/questions/4538754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to show $ \int_{-\infty}^{\infty} \frac{e^{-(x+1)^2}}{1+e^{-x}}\mathrm{d}x = \frac{\left(2\sqrt[4]{e} -1 \right)\sqrt{\pi}}{2e}$? I was recently looking at this post where the following formula is shown: $$ \int_{-\infty}^{\infty} \frac{E(x)}{1+\mathcal{E}(x)^{O(x)}}\mathrm{d}x= \int_0^{\infty} E(x) \mathrm{d}x $$ where $E(x), \mathcal{E}(x)$ are even functions and $O(x)$ is an odd function. One nice application of this formula would be the integral $$ \int_{-\infty}^{\infty} \frac{e^{-x^2}}{1+e^{-x}}\mathrm{d}x = \frac{\sqrt{\pi}}{2} $$ where the problem reduces to the evaluation of the Gaussian integral. I then wondered what would happen if I made slight alterations to the above integral, like changing $x^2\to (x+1)^2$. WA evaluates said integral as: $$ \int_{-\infty}^{\infty} \frac{e^{-(x+1)^2}}{1+e^{-x}}\mathrm{d}x = \frac{\left(2\sqrt[4]{e} -1 \right)\sqrt{\pi}}{2e} $$ The even/odd formula can't be applied since the $+1$ makes the function not even anymore. Recalling that $\int^\infty_{-\infty} e^{-(ax^2 + bx+c)}\mathrm{d}x=\sqrt{\frac{\pi}{a}}e^{\frac{b^2}{4a}-c} $ I attempted to evaluate the integral using geometric series $$ \int_{-\infty}^{\infty} \frac{e^{-(x+1)^2}}{1+e^{-x}}\mathrm{d}x = \sum_{n\ge 0}(-1)^n \int_{-\infty}^{\infty}e^{-(x^2+(n+2)x+1)}\, \mathrm{d}x = \frac{\sqrt{\pi}}{e} \sum_{n\ge 0}(-1)^n e^{\frac{(n+2)^2}{4}} $$ but the resulting series is divergent, so this method won't work. Does anyone have any ideas on how to evaluate this integral? Thank you!
$\begin{align} \int_{-\infty}^{\infty}\frac{e^{-(x+1)^2}}{1+e^{-x}}dx &=\int_{0}^{\infty}\frac{e^{-(x+1)^2}+e^{-x}e^{-(-x+1)^2}}{1+e^{-x}}dx\\ &=\int_{0}^{\infty}\frac{e^{-x^2-1}(e^{-2x}+e^{x})}{1+e^{-x}}dx\\ &=\int_{0}^{\infty}\frac{e^{-x^2-1}(e^{\frac{3}{2}x}+e^{-\frac{3}{2}x})}{e^{\frac{1}{2}x}+e^{-\frac{1}{2}x}}dx\\ &=\int_{0}^{\infty}e^{-x^2-1}(e^x-1+e^{-x})dx\\ &=\int_{0}^{\infty}e^{-x^2-x-1}dx+\int_{0}^{\infty}e^{-x^2+x-1}dx-\int_{0}^{\infty}e^{-x^2-1}dx\\ &=\int_{0}^{\infty}e^{-x^2-x-1}dx+\int_{-\infty}^{0}e^{-x^2-x-1}dx-\frac{\sqrt{\pi}}{2e}\\ &=\int_{-\infty}^{\infty}e^{-x^2-x-1}dx-\frac{\sqrt{\pi}}{2e}\\ &=\sqrt{\pi}e^{\frac{1}{4}-1}-\frac{\sqrt{\pi}}{2e}\\ &=\frac{\sqrt{\pi}}{2e}(2\sqrt[4]{e}-1)\\ \end{align}$
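A numerical cross-check of the closed form (my own addition; the truncation at $|x|=12$ and the step count are ad-hoc choices — the neglected tails are far below the tolerance):

```python
import math

def integrand(x):
    return math.exp(-(x + 1)**2) / (1 + math.exp(-x))

a, b, n = -12.0, 12.0, 200_000            # midpoint rule; tails beyond |x| = 12 are negligible
h = (b - a) / n
numeric = h * sum(integrand(a + (k + 0.5) * h) for k in range(n))

closed = (2 * math.exp(0.25) - 1) * math.sqrt(math.pi) / (2 * math.e)
print(numeric, closed)                    # both ≈ 0.5112
```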
{ "language": "en", "url": "https://math.stackexchange.com/questions/4538963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
A bound for a vaguely convergent sequence of measures Let $X$ be an open bounded set in Euclidean space and $M(X) = C_c(X)^*$ be the space of signed Radon measures. For $\mu_n$, $\mu \in M(X)$, by definition of vague convergence, $\mu_n \to \mu$ if and only if $\int_X f d\mu_n \to \int_X f d\mu$ for all compactly supported $f \in C_c(X)$. Now, suppose that $\mu_n$ and $\mu$ are both positive and that $\mu_n \to \mu$ vaguely. I know that $\mu(X) < \infty$, but can I conclude that $\sup_n \mu_n(X) < \infty$? The naive idea is to take $f(x) = 1$ everywhere but of course, $f$ does not have compact support. Then I was thinking taking a sequence of positive functions $f_m$ that approximates $f$ from below, but I couldn't justify how to exchange the limits $\lim_{n\to \infty}$ and $\lim_{m \to \infty}$.
$X=(0,1)$, $\mu_n=n\delta_{1/n}$, $\mu=0$ is a counter-example: any $f\in C_c((0,1))$ vanishes near $0$, so $\int f\,d\mu_n = n f(1/n) = 0$ for large $n$ and $\mu_n \to 0$ vaguely, yet $\mu_n(X)=n\to\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4539276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In multivariable calculus, why do we normalize $\frac{\partial}{\partial \theta}$ in polar coordinates? I'm TA'ing multivariable this semester, and I just noticed that we always tend to normalize all our basis vectors when using polar coordinates. This is in stark contrast to what I'm used to in differential geometry, as we'd prefer our coordinate basis to transform by the law \begin{align*} \frac{\partial}{\partial x}&=\frac{\partial r}{\partial x}\frac{\partial}{\partial r}+\frac{\partial \theta}{\partial x}\frac{\partial}{\partial \theta}\\ &=\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial \theta}, \end{align*} and similarly with $\frac{\partial}{\partial y}$. This $\frac{1}{r}$ factor makes up for the fact that if we travel in the angular direction, we cover more ground the further we are from the origin. So for example, the gradient in these "geometric" polar coordinates would take on the form $$\nabla f |_{(r,\theta)}=\frac{\partial f}{\partial r}\frac{\partial}{\partial r}+\frac{1}{r^2}\frac{\partial f}{\partial \theta}\frac{\partial}{\partial \theta}$$ which agrees with the usual way of defining gradients locally by $\nabla f=g^{ij}(\partial_if)\partial_j$. This is in opposition to the more common $\frac{1}{r}$ factor which comes from using the normalized polar coordinate system. So why are we normalizing these coordinates? If you insist on working in an orthonormal frame, why not call it a polar frame instead of polar coordinates to avoid bad practices in the future? Edit: Let me put in an explicit computation with the "geometric" (which I learned is called holonomic) basis. 
Consider $$f(x,y)=\frac{x}{x^2+y^2},$$ so that in polar coordinates, $$f(r,\theta)=\frac{\cos\theta}{r}.$$ One sees: \begin{align*} \nabla f&=\frac{\partial f}{\partial x}\bigg\vert_{(r,\theta)}\frac{\partial}{\partial x}+\frac{\partial f}{\partial y}\bigg\vert_{(r,\theta)}\frac{\partial}{\partial y}\\ &=\frac{\sin^2\theta-\cos^2\theta}{r^2}\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial \theta}\right)-\frac{2\cos\theta\sin\theta}{r^2}\left(\sin\theta\frac{\partial}{\partial r}+\frac{\cos\theta}{r}\frac{\partial}{\partial \theta}\right)\\ &=\frac{\sin^2\theta\cos\theta-\cos^3\theta-2\cos\theta\sin^2\theta}{r^2}\frac{\partial}{\partial r}+\frac{-\sin^3\theta+\cos^2\theta\sin\theta-2\cos^2\theta\sin\theta}{r^3}\frac{\partial}{\partial \theta}\\ &=-\frac{\cos\theta}{r^2}\frac{\partial}{\partial r}-\frac{\sin\theta}{r^3}\frac{\partial}{\partial \theta}\\ &=\frac{\partial f}{\partial r}\frac{\partial }{\partial r}+\frac{1}{r^2}\frac{\partial f}{\partial \theta}\frac{\partial}{\partial \theta}. \end{align*}
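A numeric spot-check of the computation above (my own addition, not part of the original post): push the holonomic components through the pushforwards $\frac{\partial}{\partial r}\mapsto(\cos\theta,\sin\theta)$, $\frac{\partial}{\partial\theta}\mapsto(-r\sin\theta,\,r\cos\theta)$ and compare with a central-difference Cartesian gradient.

```python
import math

def f(x, y):
    return x / (x**2 + y**2)

x0, y0 = 0.8, 0.5                       # an arbitrary test point
r, t = math.hypot(x0, y0), math.atan2(y0, x0)

# holonomic components from the computation: grad f = -cos(t)/r^2 d/dr - sin(t)/r^3 d/dtheta
a_r, a_t = -math.cos(t) / r**2, -math.sin(t) / r**3

# push forward: d/dr -> (cos t, sin t), d/dtheta -> (-r sin t, r cos t)
gx = a_r * math.cos(t) - a_t * r * math.sin(t)
gy = a_r * math.sin(t) + a_t * r * math.cos(t)

h = 1e-6                                # central differences in Cartesian coordinates
gx_ref = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
gy_ref = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
print(gx - gx_ref, gy - gy_ref)         # both differences are tiny
```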
My naïve guess is simply that people like orthonormal bases so that they can apply a Pythagoras-like formula to get lengths $$\|a e_r + b e_\theta \| =\sqrt{a^2+b^2}$$ and so they avoid introducing the components of the metric tensor. Projection formulas are also easier, like the component of $v$ in the angular direction is $(v\cdot e_\theta)e_\theta$. This last point can be remedied by using the dual basis of the dual space, but again most people avoid talking about linear functionals by using their Riesz representatives. Finally, the vector calculus formalism is not designed to work nicely in arbitrary coordinates (an example of this is how the vector Laplacian has to be defined using the curl of the curl). People using vector calculus tend to not care about coordinate invariance of their expressions, so they are happy (or at least won't complain as much as a differential geometer would) having different expressions in different coordinate systems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4539422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find $\cos\frac{\pi}{12}$ given $\sin(\frac{\pi}{12}) = \frac{\sqrt{3} -1}{2 \sqrt{2}}$ Find $\cos\frac{\pi}{12}$ given $\sin(\frac{\pi}{12}) = \frac{\sqrt{3} -1}{2 \sqrt{2}}$ From a question I asked before this, I have trouble actually with the numbers manipulating part. Using trigo identity, $\sin^2 \frac{\pi}{12} + \cos^2 \frac{\pi}{12} = 1$ so , $\cos^2 \frac{\pi}{12} = 1- \sin^2 \frac{\pi}{12}$ To find $\cos \frac{\pi}{12} = \sqrt{1- \sin^2 \frac{\pi}{12}}$ $\sin^2 \frac{\pi}{12} = (\frac{\sqrt{3} -1}{2 \sqrt{2}})^2 = \frac{(\sqrt{3}-1)^2}{(2\sqrt{2})^2} = \frac{2- \sqrt{3}}{4}$ $\cos \frac{\pi}{12} = \sqrt{1-(\frac{\sqrt{3} -1}{2 \sqrt{2}})^2} $ $\cos \frac{\pi}{12} = \sqrt{1- \frac{2-\sqrt{3}}{4}}$ $\cos \frac{\pi}{12} = \frac{\sqrt{2+\sqrt{3}}}{2}$ What is wrong with my steps?
Squaring $\sqrt{2+\sqrt{3}}=\sqrt{x}+\sqrt{y}$ gives $x+y=2$ and $2\sqrt{xy}=\sqrt{3}$, i.e. $xy=3/4$, so $x=3/2,\ y=1/2$. Hence $$\sqrt{2+\sqrt{3}}=\sqrt{\tfrac32}+\sqrt{\tfrac12}=\frac{\sqrt{3}+1}{\sqrt{2}}\implies\cos(\pi/12)=\frac{\sqrt{3}+1}{2\sqrt{2}}.$$ OP is right.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4539557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Bound of the surface area of subset of the unit sphere in $\mathbb{R}^3$ with no pair of orthogonal points I am trying to bound the measure of a measurable subset of the unit sphere in $\mathbb{R}^3$ that contains no pair of orthogonal points by $\frac{4\pi}{3}$. Any help to handle this problem from a probabilistic viewpoint would be helpful.
Let $S$ be your set. Let $x,y,z$ be a uniform random orthonormal basis, such that $x$, $y$ and $z$ are all uniform on the sphere (one needs to rigorously show such a process exists, it amounts to taking a uniform element of $SO(3)$). Only one of the three can be in $S$ at once, so that $\{x\in S\}, \{y\in S\}$ and $\{z\in S\}$ are disjoint events. So $3\mu(S)/\mu(\mathbb S^2) = P(x\in S)+ P(y\in S)+P(z\in S)\leq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4539757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Elementary Number Theory : Prime Problem : Determine all twin primes $p$ and $p+2$ for which $p(p+2)-2$ is also prime. First, $3$ and $5$ are primes and $3\cdot5-2=13$ is also prime. Assume that $p=3m+n$ $(n=0, 1, 2)$ with $m\ge1$, i.e. $p>3$. Then $n$ cannot be $0$ since $p$ is prime. And $p$ cannot be $3m+1$ since $(3m+1)+2=3(m+1)$ is not prime. So $p=3m+2$. Then $p(p+2)-2=(3m+2)(3m+4)-2=3(3m^2+6m+2)$ is not prime. Hence $3, 5$ is the only pair. Is this proof right?
Apart from a mistyping (you said $3×5=15$ where you meant $3×5-2=13$) the proof looks correct. An alternate version: If $p\equiv2\bmod3$ then we can render $p(p+2)-2\equiv2(2+2)-2\equiv6\equiv0\bmod3$ without computing the full polynomial.
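A quick exhaustive check of the conclusion (my own addition; the bound $10^4$ and trial division are arbitrary choices):

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

hits = [p for p in range(2, 10_000)
        if is_prime(p) and is_prime(p + 2) and is_prime(p * (p + 2) - 2)]
print(hits)  # [3]
```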
{ "language": "en", "url": "https://math.stackexchange.com/questions/4540123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of contradiction when showing the supremum of a function Say you wanted to show $\sup{A}=4$ for $A=\{a_n : n\in \mathbb{N}\}$, where $a_{n}=\frac{4n}{n+1}$, by proof of contradiction. First, note that $a_{n}=\frac{4n}{n+1} < \frac{4n}{n} = 4$, so $4$ is an upper bound, and we claim it is $\sup{A}$. Then, you may simplify the expression of $a_n$: $a_{n}=\frac{4n}{n+1}=4 - \frac{4}{n+1}$. Now we're looking to show that 4 is in fact the supremum of A. Doing so with proof of contradiction, we assume there is a supremum $4 - \varepsilon$ with $\varepsilon > 0$, and look for an $n$ such that $a_{n}=4-\frac{4}{n+1} > 4 - \varepsilon$; then you solve for $n$. Apparently this means that for all $n$ larger than whatever is on the other side, $a_n$ is larger than the supposed supremum $4-\varepsilon$, a contradiction. Now, what if you first assume the supremum of A is 5, and you do the same process? There is nothing apparent in the solution that makes the answer seem wrong! So the question is: how does the proof work, if numerically it makes sense for other values of the supremum? Thanks
You can immediately rule out any real number larger than $4$ as the supremum of your sequence once you've rewritten the terms of your sequence in the form $a_n = 4 - \frac{4}{n + 1}$. It is now clear that all the terms of your sequence are strictly less than $4$, so $4$ is an upper bound for the sequence. By definition of the supremum, it follows that no real number larger than 4 could be the supremum of this sequence. So, the proof by contradiction you propose really just needs to show that assuming the supremum of the sequence is less than 4 yields a contradiction.
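To make the $\varepsilon$-argument concrete (my own illustration): solving $4-\frac{4}{n+1}>4-\varepsilon$ gives $n>\frac{4}{\varepsilon}-1$, so any larger $n$ produces a term beating the supposed supremum $4-\varepsilon$, while no term ever reaches $4$:

```python
eps = 1e-3
n_star = int(4 / eps)                     # any integer n > 4/eps - 1 works
a = 4 * n_star / (n_star + 1)
assert a > 4 - eps                        # a term beats the supposed bound 4 - eps
assert all(4 * n / (n + 1) < 4 for n in range(1, 10_001))   # 4 is an upper bound
print(n_star, a)
```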
{ "language": "en", "url": "https://math.stackexchange.com/questions/4540262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Wrong sign in integration by substitution Consider the following integral: $$I:=\int\limits_0^1 \int\limits_0^1 (x-y)^2 \,\mathrm dx\, \mathrm dy = \frac{1}{6}\tag{1} \quad .$$ Suppose that we want to solve it via integration by substitution and thus define $w(x,y):=x$ and $z(x,y):=x-y$. The absolute value of the Jacobi-determinant equals one and so I think that $I$ can be rewritten as $$I=\int\limits_0^1 \int\limits_w^{w-1} z^2 \,\mathrm dz\,\mathrm dw \tag{2} \quad .$$ However, we then obtain $I=-\frac{1}{6}$. Obviously the negative sign comes from the integration limits of the $z$-integration, i.e. the sign error can be fixed by swapping the limits. But is there a reason why we should swap the limits (because I think that from a simple calculations this order is correct)? Where is the mistake?
You have to decide if you are doing signed integration or not. If you are doing signed integration, you don't take the absolute value of the Jacobian, just like you wouldn't in one dimension where signed integration is customary. If you decide to use unsigned integration like in the link you provided, your substitution changes the domain from $[0,1] \times [0,1]$ to $$\{(w,z) \in \mathbb{R}^2 : w \in [0,1], z \in [w-1,w]\}.$$ Thus, by Fubini, we get $$I = \int\limits_0^1 \int\limits_{w-1}^w z^2 \,\mathrm dz\,\mathrm dw$$ and all is fine again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4540492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A formal power series such that $f(f(x))=x$ Let $$f(x)=\sum_{n=1}^\infty a_n x^n$$ be a formal power series with no constant term such that $f(f(x))=x$. We find that $$f(f(x))=a_1^2x+(a_1a_2+a_1^2a_2)x^2+(a_1a_3+2a_1a_2^2+a_1^3a_3)x^3+\dots$$ so $a_1^2=1$. If $a_1=1$, you need all the other terms to be zero, however if $a_1=-1$ we get a family of nontrivial solutions. Let $a_2=a$, and requiring the higher coefficients of $f(f(x))$ to be zero we can find $a_3=2a^2$, $a_4=-\frac{11}2a^3$, $a_5=\frac{11}2a^4$, $a_6=\frac{105}4a^5$, $a_7=-\frac{279}2 a^6$... Is there a closed form for these numbers?
The claim that the coefficients are unique given the quadratic term is incorrect. Before going into that, here's some background on Anne's answer. We consider the simpler question of how to determine solutions to $f(f(x)) = x$ where $f$ is a Mobius transformation $f(x) = \frac{ax + b}{cx + d}$. Over a field $K$ the group of Mobius transformations is isomorphic to the projective general linear group $PGL_2(K)$, with the isomorphism given by sending a $2 \times 2$ matrix $\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]$ to $\frac{ax + b}{cx + d}$. So the problem reduces to finding matrices $M$ squaring to a scalar multiple of the identity. So write $M^2 = c$ for some scalar $c$, so that the eigenvalues of $M$ are $\pm \sqrt{c}$ (over an algebraically closed field, say $\mathbb{C}$). Since we only need to work up to scale we might as well divide $M$ by $\sqrt{c}$, or equivalently assume WLOG that $c = 1$, so $M^2 = 1$ and $M$ has eigenvalues $\pm 1$. If $M$ is conjugate to a nontrivial Jordan block then it squares to another nontrivial Jordan block so can't square to $1$; hence $M$ is diagonalizable. If $M$ has eigenvalues $\{ 1, 1 \}$ or $\{ -1, -1 \}$ then it's a scalar multiple of the identity; otherwise its eigenvalues are $\{ 1, -1 \}$. This implies that $\text{tr}(M) = 0$ and $\det(M) = -1$, so $M$ has the form $\left[ \begin{array}{cc} a & b \\ c & -a \end{array} \right]$ where $\det(M) = -a^2 - bc = -1$. This gives $bc = 1 - a^2$. Now we add the constraint that as a formal power series $f(x)$ should have no constant term. This means $f(0) = 0$ which gives $b = 0$. Then $a = \pm 1$ and we can take $a = 1$ WLOG, which gives $M = \left[ \begin{array}{cc} 1 & 0 \\ c & -1 \end{array} \right]$, hence $$f(x) = \frac{x}{cx - 1} = -x - cx^2 - c^2 x^3 - \dots $$ as in Anne's answer (with $a = -c$). 
On the other hand Klaus's answer implies that the resulting power series cannot be unique, since we can conjugate by an invertible (with respect to composition) formal power series. With a little more effort (showing that we can even arrange for the first $n$ coefficients of this series past $x$ to vanish) we can show that the first $n$ coefficients never uniquely determine the others. So you did something funny with your calculations but I'm not sure what. Generally, it's known that every formal power series of finite order (with respect to composition) is conjugate to $f(x) = \zeta x$ for $\zeta$ some root of unity (above we have $\zeta = -1$, and in general we probably need to work over an algebraically closed field). This implies: Classification: If $f(x)$ is a formal power series satisfying $f(0) = 0$ and $f(f(x)) = x$, then either $f(x) = x$ or $\boxed{ f(x) = g(-g^{-1}(x)) }$ where $g(x) = x + \dots$. Given $g(x)$, the coefficients of $g^{-1}(x)$ can be computed using Lagrange inversion. This gives solutions depending on an infinite number of parameters, namely the higher coefficients $g_i$ of $g(x)$, and the first $n$ coefficients of $f$ depend only on the first $n$ coefficients of $g$. For example, if $g(x) = x + g_2 x^2 + \dots$ then $g^{-1}(x) = x - g_2 x^2 + \dots$ which gives $f(x) = -x + 2 g_2 x^2 + \dots$. To give a relatively explicit example, take $g(x) = x - ax^2$. Then $g^{-1}(x) = \frac{1 - \sqrt{1 - 4ax}}{2a}$ by the quadratic formula (we have to take the minus sign so that $g^{-1}(0) = 0$); this is a version of the generating function of the Catalan numbers. If $y - ay^2 = x$ then $g(-y) = -y - ay^2 = x - 2y$, which gives $$\boxed{ \begin{align*} f(x) &= x - \frac{1 - \sqrt{1 - 4ax}}{a} \\ &= -x - 2ax^2 - 4a^2 x^3 - 10a^3 x^4 - \dots \\ &= -x - \sum_{n=2}^{\infty} \frac{2}{n} {2n-2 \choose n-1} a^{n-1} x^n. \end{align*} }$$
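As a sanity check on the boxed closed form (my own addition): where the square root is defined, $f$ is an involution as an actual function, and its Taylor coefficients match the displayed series.

```python
import math

a = 0.1   # arbitrary small parameter

def f(x):
    # the boxed series summed in closed form: f(x) = x - (1 - sqrt(1 - 4ax)) / a
    return x - (1 - math.sqrt(1 - 4 * a * x)) / a

for x in [-0.3, -0.1, 0.05, 0.2]:
    assert abs(f(f(x)) - x) < 1e-9        # involution on a neighbourhood of 0

x = 1e-3                                   # leading terms match -x - 2a x^2 - 4a^2 x^3 - ...
assert abs(f(x) - (-x - 2 * a * x**2 - 4 * a**2 * x**3)) < 1e-8
```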
{ "language": "en", "url": "https://math.stackexchange.com/questions/4541104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 3 }
Find the locus of points |z-1|= -Im(z). If I wish to find the locus of complex points satisfying $ |z-1|= -\text{Im}(z)$, then would I be right in supposing it represents the half-circle $(x-1)^2 + (y-1/2)^2 = 1/4, y \leq 0$? My work follows: * *First, notice $|z-1| \geq 0 \Rightarrow \text{Im}(z) \leq 0.$ *Second, $$|z-1|^2 = (-\text{Im}(z))^2 \Rightarrow |z|^2 - 2\text{Re}(z)+ 1 = \text{Im}(z) \\ \Rightarrow x^2 +y^2 - 2x - y + 1 =0 \\ \Rightarrow (x-1)^2 + (y-1/2)^2 = 1/4,$$ in completing the square. As $y = \text{Im}(z)$, we further restrict $y \leq 0.$
From the given definition, since $|z-1|\ge 0$, the imaginary part of $z$ must be nonpositive, so $z$ lies in the closed lower half plane. A geometric approach: the distance between $z$ and $(1,0)$ equals its distance from the real axis. Since $|z-1|$ is the hypotenuse of a right triangle with legs $|\operatorname{Re}(z)-1|$ and $|\operatorname{Im}(z)|$, and a hypotenuse can only equal a leg when the other leg vanishes, we must have $\operatorname{Re}(z)=1$. The locus is therefore the ray $\operatorname{Re}(z)=1$, $\operatorname{Im}(z)\le 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4541274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is it true that if $\sum_{1}^{\infty} |x_n|^3$ converges then $\sum_{1}^{\infty} \frac{|x_n|}{n}$ converges? Is it true that if $\sum_{1}^{\infty} |x_n|^3$ converges then $\sum_{1}^{\infty} \frac{|x_n|}{n}$ converges? I am trying to show whether $d(x_n,y_n) = \sum_{1}^{\infty} \frac{|x_n - y_n|}{n}$ is a metric over $l_3 = \{(x_n)_n: \sum_{1}^{\infty} |x_n|^3\ < \infty\}$. My attempt to show d is not a metric was to find a sequence such that $d(x,0) = \infty$ but I couldn't.
By Hölder's inequality we have $\sum\frac {|x_n|} n \leq (\sum |x_n|^{3})^{1/3} (\sum \frac 1 {n^{3/2}})^{2/3} <\infty$
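A finite-sum illustration of the Hölder step, with exponents $p=3$, $q=3/2$ (the test sequence $x_n=n^{-0.4}\in\ell_3$ is my own choice):

```python
N = 10_000
x = [n ** -0.4 for n in range(1, N + 1)]        # x_n = n^(-0.4) lies in l_3 since 3 * 0.4 > 1

lhs = sum(xn / n for n, xn in enumerate(x, start=1))
rhs = (sum(xn ** 3 for xn in x)) ** (1 / 3) * \
      (sum(n ** -1.5 for n in range(1, N + 1))) ** (2 / 3)
print(lhs, rhs)
assert lhs <= rhs                               # Hölder with p = 3, q = 3/2 on the finite sum
```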
{ "language": "en", "url": "https://math.stackexchange.com/questions/4541409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Asymptotic expansion of $\exp(-\frac{1}{\epsilon^2+\epsilon^3})$ I am trying to find the asymptotic approximation of $\exp(-\frac{1}{\epsilon^2+\epsilon^3})$. I reformed it into $\frac{\exp(\frac{1}{\epsilon})}{\exp(\frac{1}{\epsilon^2})\exp(\frac{1}{1+\epsilon})}$ so I can now Taylor expand $\exp(\frac{1}{1+\epsilon})$ but haven't got any further with the other two. I also wasn't able to derive the expansion using integration by parts because I can't evaluate the expression when $\epsilon$ is 0. Any hints would be appreciated!
We have \begin{align*} \exp \left( { - \frac{1}{{\varepsilon ^2 + \varepsilon ^3 }}} \right) &= \exp \left( { - \frac{1}{{\varepsilon ^2 }}\frac{1}{{1 + \varepsilon }}} \right) = \exp \left( { - \frac{1}{{\varepsilon ^2 }}\left( {1 - \varepsilon + \varepsilon ^2 + \mathcal{O}(\varepsilon ^3 )} \right)} \right) \\ & = \exp \left( { - \frac{1}{{\varepsilon ^2 }} + \frac{1}{\varepsilon } - 1 + \mathcal{O}(\varepsilon )} \right) = \exp \left( { - \frac{1}{{\varepsilon ^2 }} + \frac{1}{\varepsilon } - 1} \right)(1 + \mathcal{O}(\varepsilon )), \end{align*} as $\varepsilon \to 0$.
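A quick numerical check of this expansion (my own addition), comparing exponents directly in log space — $e^{-1/\varepsilon^2}$ itself underflows for small $\varepsilon$; the neglected terms are $\varepsilon-\varepsilon^2+\dots=\mathcal{O}(\varepsilon)$:

```python
for eps in [0.1, 0.01, 0.001]:
    exact = -1 / (eps**2 + eps**3)
    approx = -1 / eps**2 + 1 / eps - 1
    diff = exact - approx            # should be eps - eps^2 + O(eps^3)
    assert abs(diff) < 2 * eps
    print(eps, diff)
```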
{ "language": "en", "url": "https://math.stackexchange.com/questions/4541610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Maximum possible number of colored cells in a 5x5 grid where no 3x3 subgrid can include more than 5 colored cells. In a 5x5 grid, each cell can be either colored or uncolored. What is the maximum number of cells you are able to color where no 3x3 subgrid can contain more than 5 colored cells? Follow up: Could you explain your thought process in answering this question? I can get $18$ colored squares: color the 2x2s in the corners and either the cell in the first row, third column and last row, third column, or third row, first column and third row, last column. I was asking for your thought process because I simply used trial and error, recognizing that the optimal cells to color tend to be those shared by the fewest 3x3 subgrids. I was wondering if there was a more methodical approach in addition to an answer for the problem. I saw this problem posted on a display advertising a club at school. When I contacted the club leaders they did not know the solution, and so I thought I'd ask this community. This is in no way schoolwork.
aabbb
aabbb
ccddd
ccddd
ccddd

Assume there exists a solution with 19. There are at most 5 in the d region (it is itself a $3\times3$ subgrid), and at most 5 in each of the b and c regions (each is contained in a $3\times3$ subgrid), hence a is completely filled with 4. So all is sharp, i.e., there are exactly 5 in each of b, c, d. Then at least one of the left two b fields and at least one of the top two c fields is coloured, giving us already 6 coloured fields in the top left $3\times3$, contradiction!
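The bound can also be confirmed by exhaustive search — a sketch (my own verification, not part of the argument above) using a row-by-row dynamic program over 5-bit row patterns, checking each $3\times3$ window as its third row is placed:

```python
from itertools import product

def windows_ok(r0, r1, r2):
    # check all three 3x3 windows whose rows are (r0, r1, r2)
    for c in range(3):
        s = sum(((r >> c) & 1) + ((r >> (c + 1)) & 1) + ((r >> (c + 2)) & 1)
                for r in (r0, r1, r2))
        if s > 5:
            return False
    return True

# state: patterns of the last two rows -> max colored cells so far
dp = {(r0, r1): bin(r0).count("1") + bin(r1).count("1")
      for r0, r1 in product(range(32), repeat=2)}
for _ in range(3):                      # place rows 3, 4, 5
    ndp = {}
    for (p2, p1), val in dp.items():
        for r in range(32):
            if windows_ok(p2, p1, r):
                v = val + bin(r).count("1")
                if ndp.get((p1, r), -1) < v:
                    ndp[(p1, r)] = v
    dp = ndp
best = max(dp.values())
print(best)  # 18
```

The program reports 18, matching the construction in the question and the impossibility argument for 19.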
{ "language": "en", "url": "https://math.stackexchange.com/questions/4541784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is $e^{\gamma t}$ Hölder continuous, with $\gamma<0$? This question appears in something I am working on, and I am not sure whether the answer is yes or no. My only attempt is, by definition: $|e^{\gamma t}-e^{\gamma s}|=\left|\int_{s}^{t}\gamma e^{\gamma x}\,dx \right|\leq |\gamma|\,|t-s|$ for $t,s\ge 0$, since $|\gamma e^{\gamma x}|\le|\gamma|$ there. If somebody knows, can you help me please? Thank you so much.
Yes. Notice that since $\gamma < 0$ and $t\ge 0$, $0< e^{\gamma t} \le 1$, hence $|e^{\gamma s}-e^{\gamma t}| \leq 1$. Now, using your computation, for any $\alpha\in[0,1]$, $$ |e^{\gamma s}-e^{\gamma t}| = |e^{\gamma s}-e^{\gamma t}|^{\alpha}\, |e^{\gamma s}-e^{\gamma t}|^{1-\alpha} \leq \left(|\gamma|\,|t-s|\right)^\alpha\, 1^{1-\alpha}. $$ Therefore $$ \frac{|e^{\gamma s}-e^{\gamma t}|}{|t-s|^\alpha} \leq |\gamma|^\alpha. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4541919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the natural density of a counting number being divisible by 3 or 5 approximately 0.47? I ran the following calculation to estimate the natural density of a counting number being divisible by 3 or 5:

import numpy as np
import matplotlib.pyplot as plt

matches = []
N = 10**4
for n in range(N):
    if n % 3 == 0 or n % 5 == 0:
        matches.append(1)
    else:
        matches.append(0)

means = np.cumsum(matches) / np.arange(1, N+1)

plt.plot(means)
plt.xlabel('n')
plt.ylabel('$f_n$')
plt.show()

The calculated result for $f_{10^4} \approx 0.4667$. Only as I was typing up this question did the site recommend I read this post which led me to this answer which suggests I simply calculate $$1 - \left(1 - \frac{1}{3} \right)\left(1 - \frac{1}{5} \right)$$ which gets 0.46666666666666656. But I will confess I don't know where this formula comes from or why it works.
This is essentially a very simple form of the Inclusion-Exclusion formula. Basically, it is easier to ask when your divisibility criterion is NOT satisfied (and then take the complement). In our case, roughly $\frac{2}{3}=1 -\frac{1}{3} $ of the numbers are not divisible by $3$ while roughly $\frac{4}{5}=1 -\frac{1}{5} $ of the numbers are not divisible by $5$. By the product rule, to "miss" both is the same as multiplying those odds, namely: $\frac{2}{3}\cdot\frac{4}{5} = \frac{8}{15} = 0.5\overline{3}$. The complement of that is $\frac{7}{15} = 0.4\overline{6}$. Note: One has to be careful when using a distribution on infinite sets like $\mathbb{N}$, but this logic works well here. If you work in $\mathbb{Z}_{15}$ you can make this precise.
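In fact, over any whole number of blocks of $15$ consecutive integers the density is exact: each block contains $5+3-1=7$ qualifying numbers, by the same inclusion–exclusion count:

```python
N = 15 * 1000
count = sum(1 for n in range(1, N + 1) if n % 3 == 0 or n % 5 == 0)
assert 15 * count == 7 * N        # exactly 7/15 of each block of 15
print(count / N)                  # 0.4666...
```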
{ "language": "en", "url": "https://math.stackexchange.com/questions/4542051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find all functions $f$ such that $f(x)\sin(1/x)$ is continuous at $x_0 = 0$ A high-school problem. There is a mistake in the problem setting, since $f(x)\sin(1/x)$ is undefined at $x=0$. You may alternatively use the limit here. The problem thus becomes: find all functions $f$ such that $\lim_{x\to 0} f(x)\sin(1/x)$ exists.
Let $g(x)=f(x)\sin\frac{1}{x}$ and $g(0)=0.$ We show that $$g(x)=x^2\sin\frac{1}{x}$$ is continuous at $x=0.$ We find $\lim_{x\to 0} g(x)=0$ by the sandwich theorem, noting that $$-x^2\le x^2\sin \frac{1}{x}\le x^2\implies \lim_{x \to 0} -x^2= \lim_{x \to 0} x^2 \sin \frac{1}{x}=\lim_{x\to 0} x^2=0.$$ Next, $$\frac{dg}{dx}(0)=\lim_{h \to 0} \frac{h^2\sin \frac{1}{h}-0}{h}=\lim_{h\to 0} h \sin \frac{1}{h}=0, $$ again by the sandwich theorem, since $$-|x|\le x\sin \frac{1}{x}\le |x|\implies \lim_{x \to 0} -|x|= \lim_{x \to 0} x \sin \frac{1}{x}=\lim_{x\to 0} |x|=0.$$ Hence $g(x)$ with $f(x)=x^2$ is both continuous and differentiable at $x=0.$
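A numeric illustration of both sandwich bounds (my own addition):

```python
import math

def g(x):
    return 0.0 if x == 0 else x**2 * math.sin(1 / x)

for x in [10.0**-k for k in range(1, 8)] + [-(10.0**-k) for k in range(1, 8)]:
    assert abs(g(x)) <= x**2                   # continuity sandwich: |g(x)| <= x^2
    assert abs((g(x) - g(0)) / x) <= abs(x)    # difference-quotient sandwich
print("sandwiches hold; g(x) -> 0 and g'(0) = 0")
```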
{ "language": "en", "url": "https://math.stackexchange.com/questions/4542214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Polya's urn martingale proof At time $0$, an urn contains $1$ black ball and $1$ white ball. At each time $1,2,3,...,$ a ball is chosen at random from the urn and is replaced together with a new ball of the same colour. Just after time $n$, there are therefore $n+2$ balls in the urn, of which $B_n+1$ are black, where $B_n$ is the number of black balls chosen by time $n$. Let $M_n = \frac{B_n + 1}{(n + 2)}$, the proportion of black balls in the urn just after time $n$. Prove that (relative to a natural filtration which you should specify) $M$ is a martingale. There are several ways of solving the problem. One proof starts off by letting $\mathcal{F}_n=\sigma(B_1,...,B_n)$ and then stating $$E[M_n|\mathcal{F}_{n-1}]=P(B_n=B_{n-1})\frac{B_{n-1}+1}{n+2}+P(B_n=B_{n-1}+1)\frac{B_{n-1}+2}{n+2}$$ Intuitively speaking, I agree completely with the equation. It reminds of the basic definition of conditional expectation for discrete rv $X$ given $Y=y$: $$E(X|Y=y)=\sum_xx\cdot P(x|y)$$ How can one prove the equation rigorously? There are similar problems like De Moivre's martingale (https://en.wikipedia.org/wiki/Martingale_(probability_theory)) which make use of the same property of conditional expectation to state: $$E(Y_{n+1}|X_1,...,X_n)=p\left(\frac{q}{p}\right)^{X_n+1}q\left(\frac{q}{p}\right)^{X_n-1}$$ from which it is easy to then show that $(Y_n)_{n\in\mathbb{N}}$ is a martingale. What is the general formula that I am missing which applies to both of these problems?
We set $\mathscr{F}_k:=\sigma(B_u,u\leq k)$. The differences $(B_n-B_{n-1})_{n \in \mathbb{N}}$ can only take the values $\{0,1\}$. We have, for $f$ measurable and under the integrability condition $E[|f(B_n-B_{n-1})|]<\infty$ $$\begin{aligned}E[f(B_n-B_{n-1})|\mathscr{F}_{n-1}]&=\sum_{k \in \{0,1\}}f(k)P(B_n-B_{n-1}=k|\mathscr{F}_{n-1}) \end{aligned}$$ If we define $f(x)=\frac{x+1}{n+2}$ we obtain $$\begin{aligned}E[f(B_n-B_{n-1})|\mathscr{F}_{n-1}]&=\underbrace{P(B_n-B_{n-1}=0|\mathscr{F}_{n-1})}_{:=p_{0,n-1}}\frac{1}{n+2}+\underbrace{P(B_n-B_{n-1}=1|\mathscr{F}_{n-1})}_{:=p_{1,n-1}}\frac{2}{n+2}\end{aligned}$$ By rearranging the terms $$\begin{aligned}E\bigg[\frac{B_n+1}{n+2}\bigg|\mathscr{F}_{n-1}\bigg]&=\frac{B_{n-1}}{n+2}+p_{0,n-1}\frac{1}{n+2}+p_{1,n-1}\frac{2}{n+2}=\\ &=p_{0,n-1}\frac{B_{n-1}+1}{n+2}+p_{1,n-1}\frac{B_{n-1}+2}{n+2}\end{aligned}$$ Similarly, in the other problem we obtain for $\mathscr{G}_k:=\sigma(X_u,u\leq k)$ $$\begin{aligned}E[Y_nY_{n-1}^{-1}|\mathscr{G}_{n-1}]&=E\bigg[\bigg(\frac{q}{p}\bigg)^{X_n-X_{n-1}}\bigg|\mathscr{G}_{n-1}\bigg]=\\ &=\sum_{k \in \{-1,1\}}\bigg(\frac{q}{p}\bigg)^kP(X_n-X_{n-1}=k|\mathscr{G}_{n-1})=\\ &=\bigg(\frac{q}{p}\bigg)^{-1}q+\bigg(\frac{q}{p}\bigg)p\end{aligned}$$ and so $$E[Y_n|\mathscr{G}_{n-1}]=Y_{n-1}\bigg(\frac{q}{p}\bigg)^{-1}q+Y_{n-1}\bigg(\frac{q}{p}\bigg)p$$
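Since $(M_n)$ is a martingale, $\mathbb{E}[M_n]=\mathbb{E}[M_0]=\tfrac12$ for every $n$; a quick simulation sketch (the step and sample counts are arbitrary choices of mine):

```python
import random

random.seed(0)

def proportion_after(n_steps):
    black, total = 1, 2                    # urn starts with 1 black, 1 white
    for _ in range(n_steps):
        if random.random() < black / total:
            black += 1
        total += 1
    return black / total                   # M_n

runs = [proportion_after(50) for _ in range(20_000)]
mean = sum(runs) / len(runs)
print(mean)  # close to 0.5
```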
{ "language": "en", "url": "https://math.stackexchange.com/questions/4542715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does a bar of two variables mean in linear regression? I am learning simple linear regression and I was given the following equations to estimate $\beta_0$ and $\beta_1$: \begin{align*} \hat{\beta_0}=\bar{y}-\hat{\beta_1}\bar{x}\\ \hat{\beta_1}=\frac{\sum x_iy_i-n\overline{xy}}{\sum x^2_i-n\bar{x}^2} \end{align*} I was trying to calculate the values but I do not really know how to treat $\overline{xy}$. Things I have considered: calculating $\frac{\sum x_i y_i}{n}$, calculating $\bar{x} \times \bar{y}$, and treating them as two dependent random variables; all of these seem plausible to me, so I am confused. If anyone could help it would be appreciated
The ones who typeset this equation were a little lazy. The overline is properly separate for each variable: $$\hat{\beta_1}=\frac{\sum x_iy_i-n\bar x\bar y}{\sum x^2_i-n\bar x^2}$$ So the means of $x$ and $y$ are multiplied.
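A quick numeric check of the $\bar x\,\bar y$ reading (my own sketch; the synthetic data and the `numpy` cross-check are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, 50)   # noisy line, slope 3, intercept 2

n, xbar, ybar = len(x), x.mean(), y.mean()
b1 = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x**2) - n * xbar**2)
b0 = ybar - b1 * xbar

b1_ref, b0_ref = np.polyfit(x, y, 1)            # numpy's least-squares line, for comparison
print(b0, b1)
```

The two slope/intercept pairs agree to machine precision, confirming that $\overline{xy}$ here means $\bar x\,\bar y$ (with the other reading the numerator would vanish identically).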
{ "language": "en", "url": "https://math.stackexchange.com/questions/4542822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Arithmetic-Geometric limit $\lim\limits_{n\to\infty} x_n = \lim\limits_{n\to\infty} y_n$ For $x,y>0$, define two sequences $(x_n)$ and $(y_n)$ by $x_1=x,y_1=y$ and $x_{n+1}=(x_n+y_n)/2$ and $y_{n+1}=\sqrt{x_ny_n}$. Prove that $\lim\limits_{n\to\infty} x_n = \lim\limits_{n\to\infty} y_n= \dfrac{\pi}{\int_0^\pi \dfrac{d\theta}{\sqrt{x^2 \cos^2\theta + y^2\sin^2\theta}}}.$ I think it might be easier to prove $\lim\limits_{n\to\infty} x_n = \lim\limits_{n\to\infty} y_n.$ Let the LHS of this equation be denoted $L$ and let the RHS be denoted $M$. By the AM-GM inequality and induction, $x_{n}\leq y_n$ for all $n\ge 2$. It could be useful to define a new sequence with a limit that's easier to evaluate. We have $L\leq M$ by limit properties, so we just need to show $L\ge M$ to get $L=M$. Suppose for a contradiction that $L < M.$ Then by definition, there exists $N$ so that for all $n\ge N, x_n < \frac{L+M}2$ and $y_n > \dfrac{L+M}2.$ How can I proceed from here? As for showing it equals an expression involving a given integral, I think the integral is actually fairly hard to compute explicitly, so one should use some properties of the sequences to show the desired equality.
Putting together (essentially) the proof from comments that the limits exist and are equal, by @AbhijeetVats and @DanielWainfleet: If $x=y$, then $x_n=y_n=x=y$ for all $n$, so obviously the limits exist and are equal. If $x \neq y$, then $y_2 < x_2$ by the AM-GM inequality. Also note $y_2 > 0$. Now prove by induction that for every $n \geq 2$, $$ y_2 \leq y_n < y_{n+1} < x_{n+1} < x_n \leq x_2 $$ Since $x_n > y_n > 0$, $y_{n+1} = \sqrt{x_n y_n} > \sqrt{y_n y_n} = y_n$. Since $y_n \geq y_2$, this also shows $y_{n+1} > y_2$. Since $y_n < x_n$, $x_{n+1} = \frac{x_n+y_n}{2} < \frac{x_n+x_n}{2} = x_n$. And by the AM-GM inequality again, $y_{n+1} < x_{n+1}$. The induction proof is complete. So $(x_n)$ is a strictly decreasing sequence bounded below by $y_2$, and therefore converges to a real value $L = \lim_{n \to \infty} x_n$. And $(y_n)$ is a strictly increasing sequence bounded above by $x_2$, and therefore converges to a real value $M = \lim_{n \to \infty} y_n$. But then $$ L = \lim_{n \to \infty} x_{n+1} = \lim_{n \to \infty} \frac{x_n+y_n}{2} = \frac{M+L}{2} $$ which implies that $L=M$: $$ \lim_{n \to \infty} x_n = \lim_{n \to \infty} y_n$$
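This common limit is Gauss's arithmetic–geometric mean, and the integral formula from the original question can be corroborated numerically (the starting values $x=1$, $y=3$ and the quadrature resolution are arbitrary choices):

```python
import math

def agm(x, y, iters=50):
    """Common limit of x_{n+1} = (x_n + y_n)/2, y_{n+1} = sqrt(x_n y_n)."""
    for _ in range(iters):
        x, y = (x + y) / 2.0, math.sqrt(x * y)
    return x

def integral_formula(x, y, steps=20000):
    # midpoint rule for pi / (integral_0^pi dtheta / sqrt(x^2 cos^2 + y^2 sin^2))
    h = math.pi / steps
    s = sum(1.0 / math.sqrt((x * math.cos((k + 0.5) * h)) ** 2
                            + (y * math.sin((k + 0.5) * h)) ** 2)
            for k in range(steps)) * h
    return math.pi / s

x, y = 1.0, 3.0
limit_seq = agm(x, y)
limit_int = integral_formula(x, y)
```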
{ "language": "en", "url": "https://math.stackexchange.com/questions/4543087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
There exists an edge coloring of $K_{10,10}$ with two colors such that there are at most $56$ monochromatic copies of $K_{3,3}$. Show that there exists an edge coloring of $K_{10,10}$ with two colors such that there are at most $56$ monochromatic copies of $K_{3,3}$. There are $100$ edges in $K_{10,10}$ and $9$ edges in $K_{3,3}$. If we consider $56$ copies of $K_{3,3}$, then there are a total of $56*9 = 504$ edges, which is much more than $100$. So there are certainly intersections between the monochromatic copies of $K_{3,3}$. I am not getting any idea how to approach the problem. Any hints will be appreciated. Thanks.
In fact, it is possible to have $0$ monochromatic copies of $K_{3,3}$. For example, use colors $1$ and $2$ according to this $10 \times 10$ matrix, where each row corresponds to a left node and each column corresponds to a right node:

    2 1 1 1 1 2 2 2 1 2
    1 1 1 2 2 2 2 2 1 1
    1 1 1 2 1 2 2 1 2 2
    1 2 1 1 2 2 1 2 2 1
    1 1 2 1 2 1 2 1 2 2
    1 2 2 2 1 2 1 2 1 2
    2 2 2 1 2 1 1 2 1 2
    2 2 1 2 1 1 2 1 1 2
    2 1 2 2 1 1 1 1 2 1
    2 2 1 1 2 1 1 1 1 1
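For the bound of $56$ asked for in the original problem, the standard route is the probabilistic method: a uniformly random $2$-coloring makes each of the $\binom{10}{3}^2=14400$ copies of $K_{3,3}$ monochromatic with probability $2\cdot 2^{-9}=1/256$, so the expected number of monochromatic copies is $14400/256=56.25$, and some coloring achieves at most $\lfloor 56.25\rfloor=56$. A sketch that computes this expectation and searches random colorings (the seed and search loop are my own illustration; the explicit matrix in this answer, claimed to give $0$ copies, can also be fed to `mono_count` to check that stronger claim):

```python
import random
from itertools import combinations
from math import comb

def mono_count(M):
    """Number of monochromatic K_{3,3} copies in a 10x10 two-color matrix."""
    count = 0
    for rows in combinations(range(10), 3):
        for cols in combinations(range(10), 3):
            if len({M[r][c] for r in rows for c in cols}) == 1:
                count += 1
    return count

# expected number of monochromatic copies under a uniform random coloring
expected = comb(10, 3) ** 2 * 2 / 2 ** 9   # = 56.25

# sanity check of the counter: a one-color matrix makes every copy monochromatic
all_ones = [[1] * 10 for _ in range(10)]
total_copies = mono_count(all_ones)

# random search quickly finds a coloring at or below the mean
random.seed(0)
best = None
for _ in range(50):
    M = [[random.randint(1, 2) for _ in range(10)] for _ in range(10)]
    c = mono_count(M)
    best = c if best is None else min(best, c)
    if best <= 56:
        break
```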
{ "language": "en", "url": "https://math.stackexchange.com/questions/4543189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integrating the sign function Let $sgn(t)=1$ when $t>0$, $sgn(t)=0 $ when $t=0$, and $sgn(t)=-1$ when $t<0$. Show that $$S(x):=\int_{-1}^{x}sgn(t)dt$$ is equal to $|x|-1$ for all $x\in[-1,1]$ My attempt: If $x=-1$, then $S(-1)=\int_{-1}^{-1}sgn(t)dt=0=|-1|-1=|x|-1.$ If $x=0$, then $S(0)=\int_{-1}^{0}sgn(t)dt=\int_{-1}^{0}-1\cdot dt=-\int_{-1}^{0}1\cdot dt=-1(0--1)=-1=|x|-1$ If $x=1$ then $S(1)=\int_{-1}^{1}sgn(t)dt=\int_{-1}^{0}sgn(t)dt+\int_{0}^{1}sgn(t)dt=-1+\int_{0}^{1}1\cdot dt=-1+(1-0)=0=|x|-1$. If $x\in(-1,0)$ $S(x)=\int_{-1}^{x}sgn(t)dt=\int_{-1}^{x}-1\cdot dt=-\int_{-1}^{x}1\cdot dt=-1(x--1)=-x-1$ If $x\in(0,1)$ then $S(x)=\int_{-1}^{x}sgn(t)dt=\int_{-1}^{0}sgn(t)dt+\int_{0}^{x}sgn(t)dt=-1+\int_{0}^{x}1\cdot dt=-1+(x-0)=-1+x$. Is my solution correct? Does it show what I was asked to prove? Thanks!
It is correct, but you could have saved a lot of work by showing that $|x|'=\operatorname{sgn}(x)$ (with an arbitrary choice for $x=0$, which doesn't matter since isolated discontinuities can be ignored in integration) and then applying the fundamental theorem of calculus.
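A numeric spot-check of the result $S(x)=|x|-1$ at a few points (midpoint Riemann sums; the step count is arbitrary):

```python
def sgn(t):
    return (t > 0) - (t < 0)

def S(x, steps=40000):
    """Midpoint Riemann sum of sgn over [-1, x]."""
    h = (x + 1.0) / steps
    return sum(sgn(-1.0 + (k + 0.5) * h) for k in range(steps)) * h

# (x, expected value |x| - 1)
checks = [(-1.0, 0.0), (-0.5, -0.5), (0.0, -1.0), (0.5, -0.5), (1.0, 0.0)]
```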
{ "language": "en", "url": "https://math.stackexchange.com/questions/4543376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why must the area under a curve require a non-negative function? For context, this is the first math class in Bachelor's electrical engineering. Our teacher has just given us a definition for the area under a curve as the following: If $f(x)$ is an integrable and non-negative function on a closed interval $[a, b]$, then the area under the curve $y = f(x)$ in that range is the integral of $f(x)$ evaluated from $a$ to $b$. I asked him why the non-negative condition is given, and we're a little stuck trying to find an answer. In a real-life context, I understand that areas aren't negative. But when applied to stuff like physics, we'll have to account for the sign as it often denotes ideas like direction. Moreover, imo pure mathematics should allow for negative areas. Does anyone have any answers?
A geometrical area is always taken between two or more curves. Let us consider the example of two curves $y=f(x)$ and $y=g(x)$. Further, suppose $f(x)> g(x)$ or $f(x) < g(x)$ in $(a,c)$; at $x=a,c$ the two may be equal. Then the geometrical area is $$A=\int_{a}^{c} |f(x)-g(x)|~ dx.$$ But if these two curves cross each other at a single point $x=b$, then the geometrical area is $$A=\int_{a}^{b} |f(x)-g(x)|~dx+\int_{b}^{c} |f(x)-g(x)|~dx,$$ without even knowing which function is larger on $(a,b)$ or $(b,c)$. In case there are more crossing points $b_1,b_2,b_3,\ldots,b_n$, the integral will be broken into $n+1$ integrals. Most often one is asked to find the area projected on the $x$-axis by a curve $y=f(x)$ from $x=a$ to $x=c$, where the curve does not cross the $x$-axis in $(a,c).$ Then the geometrical area is $$A=\int_{a}^{c} |f(x)-0|~dx.$$ If the curve crosses the $x$-axis at $x=b$, then the geometrical area will be $$A=\int_{a}^{b} |f(x)| dx+\int_{b}^{c} |f(x)| dx$$ There is also the concept of a signed (vector) area, which can be positive or negative. So the integrals $\int_{a}^{c}f(x) dx$ and $\int_{a}^{c} (f(x)-g(x))~dx$ represent the signed area, which may not be the same as the geometrical area. For instance $$\int_{0}^{2\pi} \sin x dx=0$$ is the signed area, but the geometrical area made by $\sin x$ with the $x$-axis from $x=0$ to $x=2\pi$ is $$\int_{0}^{2\pi} |\sin x|~dx=4 ~\text{sq. units}$$ I hope that this discussion may be helpful.
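The sine example can be verified directly: over a full period the signed area vanishes while the geometric area comes out to $4$ square units (simple midpoint sums; resolution is arbitrary):

```python
import math

def midpoint_integral(f, a, b, steps=20000):
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

signed_area = midpoint_integral(math.sin, 0.0, 2.0 * math.pi)
geometric_area = midpoint_integral(lambda t: abs(math.sin(t)), 0.0, 2.0 * math.pi)
```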
{ "language": "en", "url": "https://math.stackexchange.com/questions/4544074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Find the number of positive integers n such that $\sqrt{n+\sqrt{n+\sqrt{n}}}<10$ for any finite number of square root signs. Find the number of positive integers $n$ such that $\sqrt{n+\sqrt{n+\sqrt{n}}}<10$ for any finite number of square root signs. I know something with squaring and repeating, but what does "finite number of square root signs" mean?
It seems that the question means to ask you, as mentioned by the user lulu, that: Find all natural numbers $n$ such that each of $\sqrt n,\sqrt{n+\sqrt n}, \sqrt{n+\sqrt{n+\sqrt n}},\dots$ is less than 10. Which is a round-about way of asking: Find all natural numbers $n$ such that $\sqrt{n+\sqrt{n+\sqrt {n+\dots}}}<10$. To solve it, we may use the standard operating procedure: $$\text{Let }\sqrt{n+\sqrt{n+\sqrt {n+\dots}}}=x$$ $$\Rightarrow\sqrt{n+x}=x$$ $$\Rightarrow x^2-x-n=0$$ $$\Rightarrow x=\frac{1\pm\sqrt{1+4n}}{2}$$ And we are given that $x<10$. So: $$\frac{1\pm\sqrt{1+4n}}{2}<10$$ $$\pm\sqrt{1+4n}<19$$ $$1+4n<361$$ $$n<90$$ For the infinite sequence, $n=90$ would give exactly $10$, which is not less than $10$. But with only a finite number of square root signs the value always stays strictly below the infinite limit, so $n=90$ isn't a problem. Had we got something like $n<89.92$, we would have concluded that the maximum value $n$ can assume is $89$. Thus, for your case, $n$ can be any value from $1$ to $90$.
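The boundary case can be checked directly: with $n=90$ the finitely nested values increase toward the limit $10$ from below, while already two radicals push $n=91$ past $10$ (iteration depths are kept small to stay clear of floating-point round-off as the $n=90$ values approach $10$):

```python
import math

def nested(n, depth):
    """sqrt(n + sqrt(n + ... + sqrt(n))) with `depth` square-root signs."""
    v = math.sqrt(n)
    for _ in range(depth - 1):
        v = math.sqrt(n + v)
    return v

vals_90 = [nested(90, d) for d in range(1, 9)]
vals_91 = [nested(91, d) for d in range(1, 9)]
limit_90 = (1 + math.sqrt(1 + 4 * 90)) / 2   # fixed point: exactly 10
```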
{ "language": "en", "url": "https://math.stackexchange.com/questions/4544284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Notation of 2 variables function I often see 2 versions of notation and it really confuses me. You can write $f(x,y)=x^2+y^2$ and this is of course a function of 2 variables But you can also write $z=x^2+y^2$,and this is also a function of 2 variables because z is the corresponding constant value So is it correct to simply say that $z=f(x,y)$?
The notation $f(x,y)=x^2+y^2$ (assuming that the domain is $\Bbb R^2$) describes a function from $\Bbb R^2$ to $\Bbb R$, which, formally, is a subset of $\Bbb R^3$. The notation $z=x^2+y^2$ describes a subset of $\Bbb R^3$, namely the points that satisfy the equation, so they are really the same object. The most precise ways to write them are:

* As a function: $$\begin{array}{c}f:\Bbb R^2\to \Bbb R\\(x,y)\mapsto x^2+y^2\end{array}$$
* As a set: $$\{(x,y,z)\in \Bbb R^3:z=x^2+y^2\}$$

(I have assumed that $\Bbb R^2\times \Bbb R=\Bbb R^3$, which, strictly speaking, is not true, but I don't know any situation where the difference is relevant).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4544440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f(z) = \sum_{n=0}^{\infty} a_n(z-z_0)^n$ has radius of convergence $ R > 0$ and if $f(z) = 0$ for all $z$ with $|z-z_0| < R$, show that $a_0 = a_1 = ... =0$. If $f(z) = \sum_{n=0}^{\infty} a_n(z-z_0)^n$ has a radius of convergence $R > 0$ and if $f(z) = 0$ for all $z$ with $|z-z_0| < R$, show that $a_0 = a_1 = ... =0$. Proof Attempt: If it has a radius of convergence $R > 0$ then I know $\frac{1}{R} = \lim_{n \rightarrow \infty} |\frac{a_{n+1}}{a_n}| > 0$. Pick $z^*$ to be in the disk of convergence; then we know that: $$ 0 = a_0 + a_1(z^*-z_0) + ...+a_n(z^*-z_0)^n + ...$$ Which means that for each $a_i$, $0 \leq i \leq n$, we can write: $$a_i = (z^* - z_0)^{-i} (-a_0 - a_1(z^* - z_0) - ...-a_{i+1}(z^* - z_0)^{i+1} -....-a_n(z^*-z_0)^n) + ....$$ $$a_{i+1} = (z^* - z_0)^{-(i+1)} (-a_0 - a_1(z^* - z_0) - ...-a_{i+2}(z^* - z_0)^{i+2} -....-a_n(z^*-z_0)^n) + ...$$ I now want to set up a contradiction, by using the fact that as $i \rightarrow \infty$ we have $ | a_{i+1}/a_i | > 0 $: $$ \lim_{i \rightarrow \infty}\left| \frac{a_{i+1}}{a_i} \right| = (z^* - z_0)^{-1}\frac{-a_0 - a_1(z^* - z_0) - ...-a_{i+2}(z^* - z_0)^{i+2} -....-a_n(z^*-z_0)^n + ...}{-a_0 - a_1(z^* - z_0) - ...-a_{i+1}(z^* - z_0)^{i+1} -....-a_n(z^*-z_0)^n + ....} $$ and I am having trouble concluding anything from this ... can someone suggest maybe another way? Attempt II I am attempting a solution based on $f(z) = f'(z) = ... = f^k(z) = 0$ $$(1) \ a_0 + a_1(z-z_0) + a_2(z-z_0)^2 + a_3(z-z_0)^3 + ... = 0 $$ $$(2) \ a_1 + 2a_2(z-z_0) + 3a_3(z-z_0)^2 + ... = 0 $$ $$ (3) \ 2a_2 + 3\times 2a_3(z-z_0) + ... = 0 $$ Take $(2) \times (z-z_0)$: $$(2) \ a_1(z-z_0) + 2a_2(z-z_0)^2 + 3a_3(z-z_0)^3 + ... = 0 $$ Subtract it from (1)? $$a_0 - [a_2(z-z_0)^2 + 2a_3(z-z_0)^3 + ... ] = 0 $$ Not sure where to go with this...
Hint $f^{(n)}(z_0)=n!\cdot a_n$. Or, since $\lvert z-z_0\rvert \lt R$ has an accumulation point, you can use the identity theorem. The power series is unique.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4544637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Draw two numbers, A and B, from a set {1, 2, 3, 4, 5, 6}. A and B are drawn sequentially without replacement. Find the variance Var(3A+B). You draw two numbers, $A$ and $B$ from a set of integers $\{1,2,3,4,5,6\}$. The numbers are drawn sequentially from the set, without replacement. Find the variance $Var(3A+B)$. I have tried working on this question for a bit but cannot seem to find a quick way to compute it - perhaps there is some insight that would allow us to skip a lot of the computation? I do not have a lot of experience with similar questions and so I have just been trying a brute force approach so far. This quickly gets quite convoluted for me, as expanding to $Var(3A+B) = Var(3A) + Var(B) + Cov(3A,B)$ requires further sub-steps. Any help greatly appreciated! My approach so far
Variance and covariance can have scaling factors taken out: $$\newcommand{Var}{\operatorname{Var}}\newcommand{Cov}{\operatorname{Cov}}\Var(3A)=9\Var(A)$$ $$\Cov(3A,B)=3\Cov(A,B)$$ Thus $$\Var(3A+B)=9\Var A+\Var B+6\Cov(A,B)$$ where $6$ and not $3$ is the correct multiplier for $\Cov$ because the variance of a sum involves an $(a+b)^2$-type expansion. $A$ and $B$ are identically distributed because $(A,B)$ is a uniform random $2$-sample from the set, so the expression becomes $$\Var(3A+B)=10\Var(A)+6\Cov(A,B)$$ We now derive the explicit numbers: $$E(A)=E(B)=\frac{1+\cdots+6}6=\frac72$$ $$E(A^2)=\frac{1^2+\cdots+6^2}6=\frac{91}6$$ $$\Var(A)=\frac{91}6-\frac{49}4=\frac{35}{12}$$ $$E(AB)=\frac{(1+\cdots+6)^2-1^2-\cdots-6^2}{30}=\frac{35}3$$ $$\Cov(A,B)=\frac{35}3-\frac{49}4=-\frac7{12}$$ Finally $$\Var(3A+B)=10\cdot\frac{35}{12}-6\cdot\frac7{12}=\frac{308}{12}=\frac{77}3$$
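Since $(A,B)$ takes only $30$ equally likely ordered values, every step of the computation can be verified by exact enumeration with rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations

pairs = list(permutations(range(1, 7), 2))   # 30 equally likely ordered pairs
p = Fraction(1, len(pairs))

def E(f):
    return sum(p * f(a, b) for a, b in pairs)

EA    = E(lambda a, b: a)                                  # expect 7/2
VarA  = E(lambda a, b: a * a) - EA ** 2                    # expect 35/12
CovAB = E(lambda a, b: a * b) - EA * E(lambda a, b: b)     # expect -7/12
VarT  = E(lambda a, b: (3 * a + b) ** 2) - E(lambda a, b: 3 * a + b) ** 2
```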
{ "language": "en", "url": "https://math.stackexchange.com/questions/4544778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\log(x)$ uniformly continuous on $(\frac{1}{2},\infty)$? I think the answer will be true. Here's an attempt: Let us divide the interval into $(\frac{1}{2},1)$ and $[1,\infty)$. On the domain $[1,\infty)$ the derivative of $\log(x)$ is bounded, so the function is uniformly continuous there. On the domain $(\frac{1}{2},1)$ the function is continuous, and also $\lim_{x \to 1^{-}} \log(x)$ exists finitely and $\lim_{x \to \frac{1}{2}^{+}} \log(x)$ also exists finitely. Hence the function is uniformly continuous on the entire domain.
Yes, and you don't even need to split. On your interval you have $|\log'x|≤2$. Hence, by the Mean Value Theorem, $$ |\log x-\log y|=|\log'\xi|\,|x-y|≤2|x-y|,\qquad\qquad x, y≥\frac12. $$
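The Lipschitz bound extracted here, $|\log x-\log y|\le 2|x-y|$ for $x,y\ge\tfrac12$, is easy to probe on random samples (the sampling scheme is arbitrary):

```python
import math
import random

random.seed(1)
L = 2.0  # sup of |1/x| on [1/2, infinity)

violations = 0
for _ in range(5000):
    x = 0.5 + random.expovariate(0.2)   # random points in [1/2, infinity)
    y = 0.5 + random.expovariate(0.2)
    if abs(math.log(x) - math.log(y)) > L * abs(x - y) + 1e-12:
        violations += 1
```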
{ "language": "en", "url": "https://math.stackexchange.com/questions/4544939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Computing the infinite product $\prod_{n=1}^{\infty}(1-\alpha^n) = \dfrac{1-3\alpha}{1-2\alpha}$ I want to prove that if $\alpha < \dfrac{1}{3}$, then $$\prod_{n=1}^{\infty}(1-\alpha^n) = \dfrac{1-3\alpha}{1-2\alpha}.$$ I've proved (after some calculations) that if $$P = (1-\alpha)(1-\alpha^2)(1-\alpha^3)(1-\alpha^4)\ldots,$$ then $$P = 1-\alpha-\alpha^2+\alpha^5+\alpha^7-\alpha^{12}-\alpha^{15}+\alpha^{22}+\alpha^{26}-\alpha^{35}-\alpha^{40}+\alpha^{51}+... $$ where the behaviour of the exponents is given by:

* in the odd positions starting from $\alpha^2$, the exponents are given by the sequence with general term $a_n = \dfrac{3n^2+n}{2}$,
* the exponent in an even position differs from the exponent immediately before it (described in the previous item) by the successive odd numbers starting from $3$,
* the terms before $\alpha^7$ are computed by expanding the product,
* the sign of the terms of the sequence changes every two positions.

For example, if we want to compute the exponent in the 6th position (which is $12$): since it is an even position, it has to differ from the previous exponent, which is $7$, by $5$. Then, the exponent has to be $12$. To compute the exponent in the 7th position, which is odd, we use the formula $a_n=\dfrac{3n^2+n}{2}$; this exponent is the third one in the odd positions, therefore it corresponds to $n=3$, i.e., $a_3 = \dfrac{3\cdot 3^2+3}{2}=15$, and so on. I have my mind open to other procedures or explanations, and I really appreciate any proof.
In fact \begin{align} \prod^n_{k=1}(1-\alpha^k)>\frac{1-3\alpha}{1-2\alpha},\qquad 0<\alpha<1/3\tag{0}\label{zero} \end{align} Both quantities in the inequality above correspond to the measure of some well known fat Cantor sets $F$ and $K$ that I am constructing below. Starting with the interval $F_0=K_0=[0,1]$, remove the middle open subinterval of length $\alpha$. This yields a set $F_1=K_1$ consisting of two subintervals of length $\frac{1-\alpha}{2}$ each. This is the first step of the construction.

Construction of $F$: The $n$-th step of the construction yields a set $F_n$ which is the union of $2^n$ disjoint closed subintervals each of length $$\frac{1}{2^n}(1-\alpha)\cdot\ldots\cdot(1-\alpha^n)$$ From each such subinterval subtract the middle subinterval whose length is the fraction $\alpha^{n+1}$ of its length, that is, a middle subinterval of length $$\frac{1}{2^n}(1-\alpha)\cdot\ldots\cdot(1-\alpha^n)\alpha^{n+1}$$ This yields a set $F_{n+1}\subset F_n$ which is the union of $2^{n+1}$ closed subintervals. The length of $F_{n+1}$ (or rather its Lebesgue measure) is $$\ell(F_{n+1})=(1-\alpha)\cdot\ldots\cdot(1-\alpha^n)(1-\alpha^{n+1})$$ The set $F$ is defined as $F=\bigcap_nF_n$. Its length measure is $$\ell(F)=\prod^\infty_{n=1}(1-\alpha^n)$$

Construction of $K$: The $n$-th step of the construction yields a set $K_n$ consisting of the union of $2^n$ disjoint closed subintervals each of length $$\frac{1-(\alpha+\ldots +2^{n-1}\alpha^n)}{2^n}$$ From each subinterval, we subtract a middle open subinterval of length $\alpha^{n+1}$. This yields a set $K_{n+1}\subset K_n$ with length measure $$\ell(K_{n+1})=1-(\alpha+2\alpha^2+\ldots + 2^n\alpha^{n+1})$$ $K$ is defined as $K=\bigcap_nK_n$.
Its length measure is $$\ell(K)=1-\sum^\infty_{n=1}2^{n-1}\alpha^n=\frac{1-3\alpha}{1-2\alpha}$$ Notice that the length of each of the subintervals removed in the $n+1$-th step of the construction of $F$ is $$\frac{(1-\alpha)\cdot\ldots\cdot(1-\alpha^n)\alpha^{n+1}}{2^n}<\alpha^{n+1}$$ The total measures of the subintervals removed from $[0,1]$ in the construction of $F$ and $K$ respectively satisfy $$1-\ell(F)=\sum^\infty_{n=0}(1-\alpha)\cdot\ldots\cdot(1-\alpha^n)\alpha^{n+1}<\sum^\infty_{n=0}2^n\alpha^{n+1}=1-\ell(K)$$ Inequality \eqref{zero} follows.
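Inequality $(0)$ — and hence the failure of the exact equality proposed in the question — can be illustrated numerically by truncating the product at a large $N$ (the truncation level and sample points are arbitrary; the product converges fast for $\alpha<1/3$):

```python
def truncated_product(alpha, N=200):
    P = 1.0
    for n in range(1, N + 1):
        P *= 1.0 - alpha ** n
    return P

def rhs(alpha):
    return (1.0 - 3.0 * alpha) / (1.0 - 2.0 * alpha)

samples = [0.05, 0.1, 0.2, 0.3]
# each finite product already exceeds (1-3a)/(1-2a), consistent with (0)
gaps = [truncated_product(a) - rhs(a) for a in samples]
```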
{ "language": "en", "url": "https://math.stackexchange.com/questions/4545108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
complex analysis/ integral: $\int_0^\infty \frac{1-\cos x }{x^2}dx$ I have a question about an example in Stein/Shakarchi. Actually there is another thread here on SE Integrating $\int_0^\infty \frac{1-\cos x }{x^2}dx$ via contour integral. Regarding the integral $\int_0^\infty \frac{1-\cos x }{x^2}dx$, I understand the indented semicircle contour, the division into 4 integrals, I also understand it until letting $R \rightarrow \infty$ and then applying ML estimation. However afterwards, I can not follow it anymore, where did the integrals of the two horizontal lines go? Do they cancel each other out (if yes, how can I see it?) and why do they write $f(z)$ as $f(z)=\frac{-i z}{z^2} + E(z)$ ? It would be really kind if someone could explain these last steps to me
Remember the equation $$\int_{-R}^{-\epsilon}+\int_{\gamma_\epsilon^+}+\int_{\epsilon}^R+\int_{\gamma_R^+}=0.$$ The argument shows that the limit of the second term is $-\pi$ and the limit of the last term is $0$. So, by this equation, the limit of the first and third terms must be $\pi$. But the limit of the first and third terms as $R\to\infty$ and $\epsilon\to 0$ is just an integral over the entire real line. So this shows exactly that the integral $\int_{-\infty}^\infty$ is equal to $\pi$, as claimed. The point of writing $f(z)=\frac{-iz}{z^2}+E(z)$ is that the length of the contour $\gamma_\epsilon^+$ goes to $0$ as $\epsilon\to 0$. So, as long as $E(z)$ is a bounded function near $0$, its integral over $\gamma_\epsilon^+$ will go to $0$ as $\epsilon\to 0$. This means that to compute the limit you can replace the complicated function $f(z)$ by the much simpler function $\frac{-iz}{z^2}$ which you can then just explicitly integrate over $\gamma_\epsilon^+$.
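The final value can be corroborated by direct numerical integration of $\int_{-\infty}^{\infty}\frac{1-\cos x}{x^2}\,dx=\pi$: the integrand extends continuously by $\tfrac12$ at $0$, and the tail beyond a cutoff $R$ contributes approximately $1/R$ (cutoff and step size below are arbitrary choices):

```python
import math

def f(x):
    # (1 - cos x)/x^2 extends continuously with value 1/2 at x = 0
    return 0.5 if x == 0.0 else (1.0 - math.cos(x)) / (x * x)

R, steps = 2000.0, 200000
h = R / steps
# midpoint rule on [0, R]; the tail satisfies integral_R^inf ~ 1/R
half_line = sum(f((k + 0.5) * h) for k in range(steps)) * h
full_line = 2.0 * (half_line + 1.0 / R)   # the integrand is even
```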
{ "language": "en", "url": "https://math.stackexchange.com/questions/4545269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Possible "clever" ways to solve $x^4+x^3-2x+1=0$, with methodological justification Solve the quartic polynomial : $$x^4+x^3-2x+1=0$$ where $x\in\Bbb C$. Algebraic, trigonometric and all possible methods are allowed. I am aware that, there exist a general quartic formula. (Ferrari's formula). But, the author says, this equation doesn't require general formula. We need some substitutions here. I realized there is no any rational root, by the rational root theorem. The harder part is, WolframAlpha says the factorisation over $\Bbb Q$ is impossible. Another solution method can be considered as the quasi-symmetric equations approach. (divide by $x^2$). $$x^2+\frac 1{x^2}+x-\frac 2x=0$$ But the substitution $z=x+\frac 1x$ doesn't make any sense. I want to ask the question here to find possible smarter ways to solve the quartic.
We can look for a difference of squares factorization. Completing the square gives $$\left( x^2 + \frac{1}{2} x + c \right)^2 - \left( 2c + \frac{1}{4} \right) x^2 - (c + 2) x - (c^2 - 1)$$ and we want to find a value of $c$ such that the discriminant of the quadratic on the right is equal to zero. This gives $$\Delta = (c + 2)^2 - 4 \left( 2c + \frac{1}{4} \right) \left( c^2 - 1) \right) = - 8c^3 + 12c + 5$$ which happily has a rational root $c = - \frac{1}{2}$ (I guess we must be essentially using the resolvent cubic here). This gives us a factorization $$\left( x^2 + \frac{1}{2} x - \frac{1}{2} \right)^2 + \frac{3}{4} (x - 1)^2$$ which gives a difference of squares factorization $$\left( x^2 + \frac{1}{2} x - \frac{1}{2} + \frac{i \sqrt{3}}{2} (x - 1) \right) \left( x^2 + \frac{1}{2} x - \frac{1}{2} - \frac{i \sqrt{3}}{2} (x - 1) \right)$$ and we can use the quadratic formula from here; if you want to know what the roots end up looking like you can ask WolframAlpha.
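The difference-of-squares factorization can be confirmed by solving the two conjugate quadratic factors and substituting the roots back into the quartic:

```python
import cmath

def quartic(x):
    return x**4 + x**3 - 2*x + 1

def quad_roots(b, c):
    """Roots of x^2 + b x + c = 0 over the complex numbers."""
    d = cmath.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

w = 1j * cmath.sqrt(3) / 2
# the two factors: x^2 + (1/2 +- i sqrt(3)/2) x + (-1/2 -+ i sqrt(3)/2)
roots = [*quad_roots(0.5 + w, -0.5 - w), *quad_roots(0.5 - w, -0.5 + w)]
residuals = [abs(quartic(r)) for r in roots]
root_sum = sum(roots)   # should equal -(coefficient of x^3) = -1
```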
{ "language": "en", "url": "https://math.stackexchange.com/questions/4545364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 3 }
A skew symmetric and orthogonal matrix has eigen values (3/5) + (4i/5). How can this be possible? It must have 0 or purely imaginary values. Problem 1: the matrix below is supposedly orthogonal and skew-symmetric, but its eigenvalues aren't purely imaginary or zero. Are the following matrices symmetric, skew-symmetric and/or orthogonal? $$\frac15\begin{bmatrix}3&-4\\4&3\end{bmatrix}$$
This matrix is not skew-symmetric: the diagonal entries are non-zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4545508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
can a bounded ratio of two polynomials have unbounded derivative? Let $P(x_1,\cdots,x_m)$ and $Q(x_1,\cdots,x_m)$ be two polynomials and assume that $$f(x_1,\cdots,x_m)=\frac{Q(x_1,\cdots,x_m)}{P(x_1,\cdots,x_m)}$$ is bounded over some region $D\subseteq \mathbb{R}^m$. Can $f$ have an unbounded derivative over $D$? By unbounded I mean that $|\nabla f|\rightarrow \infty$ somewhere in $D$.
tl; dr: Yes. (!!) In Cartesian plane coordinates $(x, y)$, let $$ P(x, y) = x,\qquad Q(x, y) = y. $$ Formally, $$ f(x, y) = \frac{y}{x},\qquad \nabla f(x, y) = \biggl(-\frac{y}{x^{2}}, \frac{1}{x}\biggr). $$ Particularly, $|y|/x^{2} < |\nabla f(x, y)|$. It remains only to find a region where $y/x$ is bounded and $|y|/x^{2}$ is not, such as $$ D = \{(x, y) : 0 < |x| < 1,\ 0 < y < |x|^{3/2}\}. $$ On $D$ we have $|y/x| < |x|^{1/2} < 1$, while $|y|/x^{2}$ can be as large as $|x|^{-1/2}$, which is unbounded as $x \to 0$.
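The phenomenon is easy to see numerically along a path approaching the origin, say $y=\tfrac12 x^{3/2}$ with $x\to 0^+$ (the specific path is my own choice for illustration): $f=y/x=\tfrac12\sqrt x$ stays small while $\|\nabla f\|\ge 1/x$ diverges.

```python
import math

def f(x, y):
    return y / x

def grad_norm(x, y):
    # |grad f| for f = y/x:  grad f = (-y/x^2, 1/x)
    return math.hypot(-y / (x * x), 1.0 / x)

xs = [10.0 ** (-k) for k in range(1, 8)]
path = [(x, 0.5 * x ** 1.5) for x in xs]     # points with y = x^{3/2}/2
f_vals = [abs(f(x, y)) for x, y in path]     # = sqrt(x)/2, stays below 1
g_vals = [grad_norm(x, y) for x, y in path]  # >= 1/x, blows up
```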
{ "language": "en", "url": "https://math.stackexchange.com/questions/4545687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Projective lines over $\mathbb{Q}$ and $\mathbb{Z}$ I read on this Wikipedia page (about projective lines over rings) that "Similarly, a homography of $P(\mathbb{Q})$ corresponds to an element of the modular group, the automorphisms of $P(\mathbb{Z})$." Now, a homography of the projective line $P(\mathbb{Q})$ is an element of $\mathbf{PGL}_2(\mathbb{Q})$ (invertible $(2 \times 2)$-matrices over $\mathbb{Q}$, modulo scalars), while elements of the modular group are elements of $\mathbf{SL}_2(\mathbb{Z})$ ($(2 \times 2)$-matrices over the integers with determinant $1$). Is this correct ? As homographies are determined up to a scalar, their determinants are determined up to a square of an integer (well, actually of a rational number), so how can this statement work ?
This line in the Wikipedia article is incorrect and I have deleted it. You are correct that $PGL_2(\mathbb{Q})$ is the correct group of homographies and is strictly larger than the modular group $PSL_2(\mathbb{Z})$, e.g. the homography $x \mapsto 2x$ corresponding to the matrix $\left[ \begin{array}{cc} 2 & 0 \\ 0 & 1 \end{array} \right]$ is not in $PSL_2(\mathbb{Z})$ because its determinant is not a square. This whole Wikipedia article is strange. It feels like it was written by one person with idiosyncratic tastes and indeed if you scroll down to the external links you can find out who that person likely was...
{ "language": "en", "url": "https://math.stackexchange.com/questions/4545843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\binom{n}{0}$ + $\binom{n}{2}$ + $\binom{n}{4} + ...+\binom{n}{n-1}=\binom{n}{1}+\binom{n}{3}+\binom{n}{5}+...+\binom{n}{n}$ I want to prove that $\binom{n}{0}$ + $\binom{n}{2}$ + $\binom{n}{4} + ...+\binom{n}{n-1}=\binom{n}{1}+\binom{n}{3}+\binom{n}{5}+...+\binom{n}{n}$ for all odd numbers $n \ge 1$ I see that $\binom{n}{0}$ and $\binom{n}{n}$ are both equal to $1$ and therefore cancel each other. But what can I do after that? I'm very lost, please help me move in the right direction at least.
Here's an intuitive combinatorial proof. The left-hand side is the number of even-sized subsets of an $n$-set, which we'll call $X$. The right-hand side is the number of odd-sized subsets of $X$. But $Y \mapsto X \setminus Y$ is a $1$-$1$ mapping of even-sized subsets onto odd-sized subsets, so the two numbers must be equal.
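Both the identity and the complement bijection behind it can be checked mechanically for small odd $n$ (brute force with `math.comb` and subset enumeration):

```python
from math import comb
from itertools import combinations

def parity_sums(n):
    even = sum(comb(n, k) for k in range(0, n + 1, 2))
    odd = sum(comb(n, k) for k in range(1, n + 1, 2))
    return even, odd

identity_ok = all(parity_sums(n)[0] == parity_sums(n)[1]
                  for n in range(1, 16, 2))

# the complement map Y -> X \ Y flips the parity of |Y| when |X| is odd
X = set(range(5))
bijection_ok = all(
    (len(X - set(Y)) % 2) != (len(Y) % 2)
    for r in range(6) for Y in combinations(X, r)
)
```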
{ "language": "en", "url": "https://math.stackexchange.com/questions/4546056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Area under circle of center $(0, 1)$ and radius $1$ Compute $$\int\int_A(3x^2y-y^3)\, dx\, dy$$ where $$A=\{(x, y)\ | \ x^2+(y-1)^2\leq 1\}.$$ I put $x=r\cos \theta$ and $y=1+r\sin \theta$ to get $$\int\int_A(3x^2y-y^3)\, dx\, dy=\int_{\theta=0}^{2\pi}\int_{r=0}^1(3r^2\cos^2\theta(1+r\sin\theta)-(1+r\sin\theta)^2) rdrd\theta.$$ But I am not getting it equal to $-\pi$. Is my approach correct or is there any easier way to solve this problem?
There is a wrong exponent in your transformed integral; it should be $$\int_0^{2\pi}\int_0^1(3r^2\cos^2\theta(1+r\sin\theta)-(1+r\sin\theta)^{\mathbf3})r\,dr\,d\theta=-\pi$$ Alternatively, use the normal polar coordinate transformation $x=r\cos\theta,y=r\sin\theta$, under which $A$ is parametrised by $0\le\theta\le\pi,0\le r\le2\sin\theta$: $$\int_0^\pi\int_0^{2\sin\theta}r^4(3\cos^2\theta\sin\theta-\sin^3\theta)\,dr\,d\theta=-\pi$$
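A quick two-dimensional midpoint-rule check of the corrected polar integral, which should come out to $-\pi$ (grid resolution is arbitrary):

```python
import math

def polar_integral(n_r=400, n_t=400):
    """Midpoint-rule evaluation of the transformed double integral."""
    total = 0.0
    dr = 1.0 / n_r
    dt = 2.0 * math.pi / n_t
    for i in range(n_r):
        r = (i + 0.5) * dr
        for j in range(n_t):
            t = (j + 0.5) * dt
            x = r * math.cos(t)
            y = 1.0 + r * math.sin(t)
            total += (3.0 * x * x * y - y ** 3) * r
    return total * dr * dt

approx = polar_integral()
```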
{ "language": "en", "url": "https://math.stackexchange.com/questions/4546449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about Bessel sequence in a Hilbert space $\mathcal{H}$ I have a sequence in a Hilbert space $\mathcal{H}$, denoted by $\left\{f_k\right\}_{k=1}^{\infty}$. I would like to prove that, if there exists $B>0$ such that $$ \left\|\sum_{k=1}^{\infty} c_k f_k\right\|^2 \leq B \sum_{k=1}^{\infty}\left|c_k\right|^2 $$ then $\left\{f_k\right\}_{k=1}^{\infty}$ is a Bessel sequence with bound $B$. I tried to obtain a proof but I'm afraid I was wrong. Anyway it's this: what do you think? Proof. $$\left\|\sum_{k=1}^{\infty} c_k f_k\right\|=\sup_{\|g\|=1}\left|\langle \sum_{k=1}^{\infty} c_k f_k,g \rangle\right|\leq\sup_{\|g\|=1}\sum_{k=1}^{\infty} |c_k|\, \left|\langle f_k,g \rangle\right|\leq \sqrt{\sum_{k=1}^{\infty} |c_k|^2}\, \sup_{\|g\|=1}\sqrt{\sum_{k=1}^{\infty} \left|\langle f_k,g \rangle\right|^2}$$ Thus if in the latter I require that $ \left\|\sum_{k=1}^{\infty} c_k f_k\right\|^2 \leq B \sum_{k=1}^{\infty}\left|c_k\right|^2 $ then $$\left\|\sum_{k=1}^{\infty} c_k f_k\right\|^2\leq \sum_{k=1}^{\infty} |c_k|^2\, \sup_{\|g\|=1} \sum_{k=1}^{\infty} \left|\langle f_k,g \rangle\right|^2\leq B \sum_{k=1}^{\infty}\left|c_k\right|^2$$
This follows by duality. Write $h_n:=f_n$ for the given sequence and consider the operator $R:\ell_2(\mathbb{N})\rightarrow H$ given by $Rg=\sum_ng(n)h_n$. This operator is bounded by assumption: $\|Rg\|_H\leq B^{1/2}\|g\|_{\ell_2}$. The adjoint operator $R^*:H\rightarrow\ell_2$ is a bounded linear operator with $\|R^*\|_{L(H,\ell_2)}=\|R\|_{L(\ell_2,H)}\leq B^{1/2}$ such that \begin{align} \langle Rg,h\rangle_H&=\langle \sum_ng(n)h_n, h\rangle_H=\sum_n\langle g(n)h_n,h\rangle_H\\ &=\sum_ng(n)\overline{\langle h,h_n\rangle_H} =\langle g, R^*h\rangle_{\ell_2} \end{align} It follows (by an application of the uniform boundedness theorem, for example) that the sequence $n\mapsto\langle h,h_n\rangle$ is in $\ell_2$ and that $(R^*h)(n)=\langle h,h_n\rangle$ for each $n\in\mathbb{N}$. Hence $$\sum_n|\langle h,h_n\rangle|^2=\|R^*h\|^2_{\ell_2}\leq B\|h\|^2_H,$$ which is exactly the Bessel bound with constant $B$.
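A finite-dimensional illustration of the same duality (my own example, not part of the proof): for the "Mercedes-Benz" tight frame in $\mathbb R^2$ the optimal constant is $B=3/2$, and the synthesis bound $\|\sum_k c_kf_k\|^2\le B\sum|c_k|^2$ and the analysis (Bessel) bound $\sum_k|\langle f_k,g\rangle|^2\le B\|g\|^2$ hold with that same $B$ — for a tight frame the latter is even an equality.

```python
import math
import random

# Mercedes-Benz frame in R^2: a tight frame with frame bound B = 3/2
frame = [(1.0, 0.0),
         (-0.5, math.sqrt(3) / 2),
         (-0.5, -math.sqrt(3) / 2)]
B = 1.5

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(0)
synth_ok = anal_ok = True
for _ in range(1000):
    c = [random.gauss(0, 1) for _ in frame]
    s = (sum(ck * fk[0] for ck, fk in zip(c, frame)),
         sum(ck * fk[1] for ck, fk in zip(c, frame)))
    if dot(s, s) > B * sum(ck * ck for ck in c) + 1e-9:
        synth_ok = False
    g = (random.gauss(0, 1), random.gauss(0, 1))
    # for a tight frame the Bessel inequality holds with equality
    if abs(sum(dot(g, fk) ** 2 for fk in frame) - B * dot(g, g)) > 1e-9:
        anal_ok = False
```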
{ "language": "en", "url": "https://math.stackexchange.com/questions/4546581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to solve $\frac{1}{\sigma\sqrt{2\pi}}\int_r^\infty xe^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}dx$ I'm looking to simplify/solve $$\frac{1}{\sigma\sqrt{2\pi}}\int_r^\infty xe^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}dx$$ , where $\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}$ is the pdf of the normal distribution. I tried to look at solving it with substitution, but I struggle at a certain point. So if I define $u=-\frac{1}{2}(\frac{x-\mu}{\sigma})^2$, then $du = -\frac{x-\mu}{\sigma^2}dx$ Now, I want to come to: $$\frac{1}{\sigma\sqrt{2\pi}}\int_r^\infty e^udu$$ but is the following correct? $$\mu(1-F(r))- \frac{1}{\sigma^3 \sqrt{2\pi}}\int_r^\infty e^udu$$, where $F(.)$ is the CDF of the normal distribution. I believe I'm making a mistake here, but just don't see what. Any help or hints are appreciated. EDIT I’m not looking for a closed form solution but a solution in terms of the pdf and cdf of the normal distribution.
$$ \sqrt{2 \pi} \sigma I = \int_r^\infty x e^{-\frac{1}{2\sigma^2} (x - \mu)^2}\,dx = [u = x-\mu, dx = du] = \int_{r-\mu}^\infty (u +\mu) e^{-\frac{1}{2\sigma^2} u^2}\,du = \sqrt{2 \pi} \sigma (I_1 + I_2) $$ $$ \sigma\sqrt{2 \pi} I_1 = \int_{r-\mu}^\infty u e^{-\frac{1}{2\sigma^2} u^2}\,du = \sigma^2 e^{-\frac{1}{2\sigma^2} (r-\mu)^2} $$ $$ I_2 = \frac{1}{\sqrt{2 \pi} \sigma}\mu \int_{r-\mu}^\infty e^{-\frac{1}{2\sigma^2} u^2}\,du = \mu\bigg(1-\Phi\Big(\frac{r-\mu}{\sigma}\Big)\bigg) $$ Putting $I_1$ and $I_2$ together, $$ I = \sigma\phi\Big(\frac{r-\mu}{\sigma}\Big) + \mu\bigg(1-\Phi\Big(\frac{r-\mu}{\sigma}\Big)\bigg). $$
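The closed form (a standard partial-expectation identity for the normal distribution) can be cross-checked against direct numerical integration, with $\Phi$ expressed through `math.erf` (the parameter triples and the truncation at $\mu+12\sigma$ are arbitrary choices):

```python
import math

def phi(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def closed_form(r, mu, sigma):
    a = (r - mu) / sigma
    return sigma * phi(a) + mu * (1.0 - Phi(a))

def numeric(r, mu, sigma, steps=100000):
    # midpoint rule for integral_r^b of x * N(mu, sigma^2) pdf, b = mu + 12 sigma
    b = mu + 12.0 * sigma
    h = (b - r) / steps
    total = 0.0
    for k in range(steps):
        x = r + (k + 0.5) * h
        total += x * phi((x - mu) / sigma) / sigma
    return total * h

cases = [(0.3, 0.0, 1.0), (1.5, 2.0, 0.5), (-1.0, 0.5, 2.0)]
errs = [abs(closed_form(*c) - numeric(*c)) for c in cases]
```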
{ "language": "en", "url": "https://math.stackexchange.com/questions/4546730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Partial Fraction of $\frac{1-x^{11}}{(1-x)^4} $ for Generating Function The original question involves using generating functions to solve for the number of integer solutions to the equation $c_1+c_2+c_3+c_4 = 20$ when $-3 \leq c_1, -3 \leq c_2, -5 \leq c_3 \leq 5, 0 \leq c_4$. Using generating functions I was able to get it into the rational polynomial form: $$f(x) = {\left(\frac{1}{1-x}\right)}^3\left(\frac{1-x^{11}}{1-x}\right) = \frac{1-x^{11}}{{(1-x)}^4}$$ I was also able to determine that the sequence could be represented in two factors: $${\left(1+x^1+x^2+x^3+\cdots\right)}^3\left(1+x^1+x^2+\cdots+x^{10}\right)$$ However, to find the coefficient on $x^{31}$ to solve the problem, I figured I would have to get the term $\frac{1-x^{11}}{{(1-x)}^4}$ into a more typical generating function summation form. Thus, I endeavored to find the partial fraction decomposition of the term, however, I can't seem to do it at all. How would I find the partial fraction decomposition of $\frac{1-x^{11}}{{(1-x)}^4}$? Or is there a better method in using ${(1+x^1+x^2+x^3+\ldots)}^3(1+x^1+x^2+\ldots+x^{10})$ in order to find the coefficient on $x^{31}$? Thank you very much for your help, I've been trying this partial fraction for a while now and Wolfram alpha doesn't seem to be giving me an answer that is of much value.
I would use the generalized binomial theorem in the form $ \dfrac1{(1-x)^s} =\sum_{k=0}^{\infty} \binom{s+k-1}{s-1}x^k $. Then $\begin{array}\\ \dfrac{1-x^a}{(1-x)^s} &=\sum_{k=0}^{\infty} \binom{s+k-1}{s-1}x^k -x^a\sum_{k=0}^{\infty} \binom{s+k-1}{s-1}x^k\\ &=\sum_{k=0}^{\infty} \binom{s+k-1}{s-1}x^k -\sum_{k=0}^{\infty} \binom{s+k-1}{s-1}x^{k+a}\\ &=\sum_{k=0}^{\infty} \binom{s+k-1}{s-1}x^k -\sum_{k=a}^{\infty} \binom{s+k-a-1}{s-1}x^{k}\\ &=\sum_{k=0}^{a-1} \binom{s+k-1}{s-1}x^k +\sum_{k=a}^{\infty} \binom{s+k-1}{s-1}x^k -\sum_{k=a}^{\infty} \binom{s+k-a-1}{s-1}x^{k}\\ &=\sum_{k=0}^{a-1} \binom{s+k-1}{s-1}x^k +\sum_{k=a}^{\infty} \left(\binom{s+k-1}{s-1}-\binom{s+k-a-1}{s-1}\right)x^k \\ &=\sum_{k=0}^{a-1} \binom{s+k-1}{s-1}x^k +\sum_{k=a}^{\infty} \left(\dfrac{(s+k-1)!}{(s-1)!k!}-\dfrac{(s+k-a-1)!}{(s-1)!(k-a)!}\right)x^k \\ \end{array} $ You can do more manipulation if you want.
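For the original question ($a = 11$, $s = 4$), this gives $[x^{31}] = \binom{34}{3}-\binom{23}{3} = 4213$. The derived coefficients are easy to spot-check against the brute-force convolution of $1+x+\cdots+x^{10}$ with $1/(1-x)^3$ (the helper name `coeff` is mine, not from the thread):

```python
from math import comb

def coeff(k, a, s):
    # [x^k] (1 - x^a) / (1 - x)^s, read off from the two binomial series above
    c = comb(s + k - 1, s - 1)
    if k >= a:
        c -= comb(s + k - a - 1, s - 1)
    return c
```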
{ "language": "en", "url": "https://math.stackexchange.com/questions/4546911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How Many Colours Will You Need If Colouring Polygons Enclosed By Straight Lines? There is an infinite plane. I can draw straight, infinitely long lines in the plane (mind you, only finitely many). Now, I want to colour the polygons. The colours of adjacent polygons must not be the same (if the polygons only share a single point, then they aren't adjacent). How many colours do I need to colour the polygons enclosed by the straight lines (including the infinite ones)? Example I think the answer is two, but I couldn't prove it. Can anyone prove it for me (or prove that it needs 3 or more colours)?
Two colours suffice. Think of adding the lines one at a time, starting with the plane coloured, say, red. Each time you add a line, leave one of the half-planes into which it divides the plane alone, and flip all the colours (red $\to$ blue, blue $\to$ red) in the other half-plane. Thus any region cut by the new line becomes two regions, one red and one blue. Any two adjacent regions will still be different colours.
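The flipping construction condenses into a parity rule: a point's colour is the parity of the number of lines whose chosen "positive" half-plane contains it — adding a line and flipping one side is exactly incrementing this parity on that side. Two points separated by an odd number of lines (in particular, two regions adjacent across exactly one line) then always get different colours. A small sketch (coefficients and names are illustrative):

```python
def color(point, lines):
    # colour = parity of how many lines' "positive" half-planes contain the point;
    # each line is stored as coefficients (a, b, c) of a*x + b*y + c = 0
    x, y = point
    return sum(1 for a, b, c in lines if a * x + b * y + c > 0) % 2
```

By construction, `color(p) != color(q)` exactly when an odd number of lines separates `p` from `q`.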
{ "language": "en", "url": "https://math.stackexchange.com/questions/4547079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving $\sum \limits_{j=0}^{n+1- (a+b)} \binom{a+j-1}{a -1} \binom{n- a-j}{b -1} =\binom{n}{a+b-1} $ I have run into quite a tricky sum. I am reasonably sure I know what it sums to, but I have been unable to prove it: $$ \forall a,b,n \in \mathbb{N}, \ n\geq a+b \geq 2:\quad \sum \limits_{j=0}^{n+1- (a+b)} \binom{a+j-1}{a -1} \binom{n- a-j}{b -1} =\binom{n}{a+b-1} $$ I have tried to evaluate it for quite a number of particular values and it seems to hold. I've tried to prove it by induction, however that didn't seem to lead anywhere, but I could be wrong.
We seek to verify that with $n\ge a+b\ge 2$ $$\sum_{j=0}^{n+1-a-b} {a+j-1\choose a-1} {n-a-j\choose b-1} = {n\choose a+b-1}.$$ The LHS is $$\sum_{j=0}^{n+1-a-b} {a+j-1\choose a-1} {n-a-j\choose n+1-a-b-j} \\ = [z^{n+1-a-b}] (1+z)^{n-a} \sum_{j\ge 0} {a+j-1\choose a-1} \frac{z^j}{(1+z)^j} $$ Here we have extended to infinity because the coefficient extractor enforces the upper limit. Continuing, $$[z^{n+1-a-b}] (1+z)^{n-a} \frac{1}{(1-z/(1+z))^a} \\ = [z^{n+1-a-b}] (1+z)^{n} = {n\choose n+1-a-b} = {n\choose a+b-1}.$$ This is the claim.
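The identity is also easy to spot-check by brute force before (or after) working through the coefficient extraction (function names are mine):

```python
from math import comb

def lhs(n, a, b):
    # the sum from the question
    return sum(comb(a + j - 1, a - 1) * comb(n - a - j, b - 1)
               for j in range(n + 2 - a - b))

def rhs(n, a, b):
    return comb(n, a + b - 1)
```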
{ "language": "en", "url": "https://math.stackexchange.com/questions/4547254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to reasonably (numerically) estimate $n\int_0^\infty\left(1-\left(1-e^{-t}\left(1+t+\ldots+{{t^{m-1}}\over{(m-1)!}}\right)\right)^n\right)dt$? Recently in doing some expected value calculations, I've derived the following two integrals: $$6\int_0^\infty\left(1-\left(1-e^{-t}\left(1+t\right)\right)^6\right)dt$$ $$6\int_0^\infty\left(1-\left(1-e^{-t}\left(1+t + {{t^2}\over2}\right)\right)^6\right)dt$$ Plugging these into Wolfram Alpha can get us numerical answers. For instance, the first integral comes out to about $24.1$, and the second integral comes out to about $32.7$. However, I'm wondering if anyone can give a reasonable numerical estimate for these $2$ integrals from first principles (pencil and paper) without using a calculator, Wolfram Alpha, or a computer. I've tried but made little to no progress, and I consulted some nearby PhD students and they didn't know either, so asking here.
I think the sigmoid approximation is a little more workable by hand than leonbloy suggests. Following his answer, we will approximate the integral as $6t_0$ where $g(t_0) = \frac{1}{2}$. For the first integral this means estimating the solution to $$\frac{1 + t}{e^t} = 1 - 2^{-\frac{1}{6}} \approx \frac{\log 2}{6}$$ or $$\frac{e^t}{1 + t} \approx \frac{6}{\log 2} \approx 9$$ (using $\log 2 \approx 0.69 \dots $). The LHS can be shown to be an increasing function of $t$ so we can try to estimate the root using the intermediate value theorem. Substituting $t = 3$ gives $\frac{e^3}{4} < \frac{3^3}{4} < 7$ while substituting $t = 4$ gives $\frac{e^4}{5}$. We can estimate this by just computing $e^4$ to two significant digits, which gives $$e^2 \approx 2.7^2 \approx 7.3, e^4 \approx 7.3^2 \approx 53$$ so $\frac{e^4}{5} \approx 10.6$. So $t_0$ in this case is between $3$ and $4$ and likely closer to $4$. If we just estimate $t_0 \approx 3.5$ as the midpoint we get an estimate of $6 \times 3.5 = \boxed{ 21 }$ for the first integral, sadly with no error bound. For the second integral similarly we want to estimate the solution to $$\frac{e^t}{1 + t + \frac{t^2}{2}} \approx 9.$$ Substituting $t = 4$ gives $\frac{e^4}{13} < \frac{81}{13} \approx 6$ while substituting $t = 5$ gives $\frac{e^5}{18.5}$. We again estimate $e^5 \approx 53 \cdot 2.7 \approx 140$, so $\frac{e^5}{18.5} \approx \frac{140}{18.5} = 10 - \frac{45}{18.5} \approx 7.5$. So $t_0$ in this case is a bit larger than $5$ (we could substitute $t = 6$ to confirm this but I won't). If we just estimate $t_0 \approx 5$ we get an estimate of $6 \times 5 = \boxed{ 30 }$ for the second integral, again sadly with no error bound. For the general integral we now want to estimate the solution to $$e^{-t} \left( \sum_{i=0}^{m-1} \frac{t^i}{i!} \right) = 1 - 2^{-\frac{1}{n}} \approx \frac{\log 2}{n}.$$ We can think of the LHS as $\mathbb{P}(X \le m-1)$ where $X \sim \text{Pois}(t)$.
Since we want this probability to be small we want $t \ge m-1$, so using the Chernoff bound adapted to this case we get $$\mathbb{P}(\text{Pois}(t) \le a) \le \exp \left( a - t + a \log \frac{t}{a} \right)$$ where $a = m-1$. Setting $t = a + x$, which simplifies things a bit, we get $$\mathbb{P}(\text{Pois}(a + x) \le a) \le \exp \left( - x + a \log \left( 1 + \frac{x}{a} \right) \right).$$ Taking the Chernoff bound as our approximation and setting it equal to $\frac{\log 2}{n}$ gives $$x \approx \log n - \log \log 2 + (m-1) \log \left( 1 + \frac{x}{m-1} \right)$$ which has dominant growth $x \approx \log n$; substituting this into the logarithm term gives $$x_0 \approx \log n - \log \log 2 + (m-1) \log \left( 1 + \frac{\log n}{m-1} \right)$$ which gives $t_0 \approx m-1 + x_0$, so we get a final approximation for the quantity in the title of your question: $$\boxed{ n \left( \log n + (m-1) \left( 1 + \log \left( 1 + \frac{\log n}{m-1} \right) \right) - \log \log 2 \right) }.$$ The dominant growth here looks like $n \log n + (m-1)n \log \log n$ which seems roughly reasonable as an answer to the coupon collector problem you describe in the comment. For $m = 2, n = 6$ we get an approximation of $$6 \left( \log 6 + \left( 1 + \log \left( 1 + \log 6 \right) \right) - \log \log 2\right) \approx \boxed{ 25.1 }$$ whereas for $m = 3, n = 6$ we get an approximation of $$6 \left( \log 6 + 2 \left( 1 + \log \left( 1 + \frac{\log 6}{2} \right) \right) - \log \log 2\right) \approx \boxed{ 32.7 }.$$ The Chernoff bound itself can be used to estimate the integrand (we should split into two integrals, one from $0$ to something like $m$ and another from something like $m$ to $\infty$) which can maybe do better; the integrand itself can be written $1 - \mathbb{P}(\text{Pois}(t) \ge m)^n$, which means for fixed $t$ it gives the probability that the minimum of $n$ iid Poisson random variables with intensity $t$ is $\le m-1$. 
This seems closely related to the coupon collector problem you describe but I have to admit I don't actually see the relationship at the moment. Edit: The asymptotics for the generalized coupon collector problem are actually described on Wikipedia and given by $n \log n + (m-1) n \log \log n + O(n)$, due to Newman and Shepp. So the above estimate is not too bad!
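For completeness, the two integrals from the question can be computed to high accuracy with a plain trapezoid rule — the tail beyond $T$ is negligible because the integrand decays like $n e^{-t}t^{m-1}/(m-1)!$ (helper names are mine):

```python
import math

def integrand(t, m, n):
    # 1 - (1 - P(Pois(t) <= m-1))^n, the integrand from the question's title
    p = math.exp(-t) * sum(t ** i / math.factorial(i) for i in range(m))
    return 1 - (1 - p) ** n

def expected(m, n, T=60.0, steps=60000):
    # trapezoid rule for n * integral_0^infinity of the integrand
    h = T / steps
    s = 0.5 * (integrand(0.0, m, n) + integrand(T, m, n))
    s += sum(integrand(k * h, m, n) for k in range(1, steps))
    return n * s * h
```

`expected(2, 6)` and `expected(3, 6)` reproduce the values 24.1 and 32.7 quoted in the question, against which the pencil-and-paper estimates above can be compared.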
{ "language": "en", "url": "https://math.stackexchange.com/questions/4547432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Invariance of a symmetric sum Consider two matrices $M$ and $N$ of same dimension, such that $L=MN+NM$ is the symmetric sum in the sense that interchanging $M$ and $N$ does not change $L$. Is there any transformation $T[M]=M^\prime$ and $T[N]=N^\prime$ such that $M^\prime N^\prime+N^\prime M^\prime=MN+NM$? In other words, the symmetric sum is invariant under the transformation $T$.
Suppose the elements of the matrices are taken from a field whose characteristic is not $2$. Then the only possible choices of $T$ are $\pm\operatorname{Id}$. Let $A=T(I)$. The given condition implies that $$ T(X)A+A\,T(X)=2X\tag{1} $$ for every matrix $X$. In particular, if we put $X=I$, we obtain $A^2=I$. Hence $A$ is diagonalisable and its only possible eigenvalues are $1$ and $-1$. Suppose both $1$ and $-1$ are eigenvalues of $A$. Then $v^TA=-v^T$ and $Au=u$ for some nonzero vectors $u$ and $v$. But then we may pick two vectors $x$ and $y$ such that $v^Tx=y^Tu=1$ and with $X=xy^T$, $(1)$ implies that $$ 2=2v^T(xy^T)u=v^T\left(T(X)A+A\,T(X)\right)u =v^TT(X)(Au)+(v^TA)T(X)u=0, $$ which is a contradiction. Therefore $A=\pm I$ and $(1)$ implies that either $T(X)=X$ for all $X$ or $T(X)=-X$ for all $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4547586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
how many white squares does a $2022 \times 2021$ block have? An infinite checkerboard is coloured black and white so that every $2\times 3$ block has exactly two white squares. Prove (or disprove) that every $2022\times 2021$ block has the same number of white squares and if so, find this number. Does the result hold if we additionally assume every $3\times 2$ block has exactly 2 white squares and if so, what is the number of white squares in a $2022\times 2021$ block? My initial thought was to consider various patterns. Let W be a white square and B a black square. Then we have the following possible patterns for a 2 by 3 block: $1)\begin{pmatrix}WWB \\ BBB\end{pmatrix},2)\begin{pmatrix}WBB \\ BBW\end{pmatrix},3)\begin{pmatrix}WBB \\ BWB\end{pmatrix},4)\begin{pmatrix}WBB \\ WBB\end{pmatrix},5)\begin{pmatrix}BWW \\ BBB\end{pmatrix},6)\begin{pmatrix}WBW \\ BBB\end{pmatrix},7)\begin{pmatrix}BWB \\ WBB\end{pmatrix},8)\begin{pmatrix}BWB \\ BWB\end{pmatrix},9)\begin{pmatrix}BWB \\ BBW\end{pmatrix},10)\begin{pmatrix}BBW \\ WBB\end{pmatrix},11)\begin{pmatrix}BBW \\ BWB\end{pmatrix},12)\begin{pmatrix}BBW \\ BBW\end{pmatrix}.$ Let the $2022\times 2021$ block have coordinates where the top left entry has coordinates $(1,1)$ and coordinates increase from left to right and top to bottom (coordinates are defined for all squares on the infinite checkerboard). Specify a rectangle with top left corner $(a,b)$ and bottom right corner $(c,d)$ as $[(a,b)-(c,d)].$ Suppose that each case represents the squares with coordinates $(1,1),(1,2),(1,3),(2,1),(2,2),(2,3).$ I'm not sure how to analyze the cases and it seems like it should be possible to take advantage of symmetry. The answer below assumes that an $x\times y$ block has height y and width x. For the other interpretation, $1\leq x \leq 2021, 1\leq y\leq 2022$ (x increases right, y increases down) $x\equiv 1\mod 3$ (giving $2022\times 674$ white squares) or $x\equiv 0\mod 3$ (giving $2022\times 673$ white squares). 
Both work because if $[(a,b)-(a+2,b+1)]$ is a $2\times 3$ block, then exactly one of $a,a+1, a+2$ is congruent to $0$ or $1$ modulo 3, say c, and in each case both squares in the 2 by 3 block with x-coordinate c are white.
The answer to the first part is no: Associate the board's squares with coordinates $(x,y)$, where $1 \leq x \leq 2022$ and $1 \leq y \leq 2021$. Color the board by solid columns. Board A has $(x,y)$ colored white if and only if $y \equiv 1 \pmod 3$. That is, $y = 3k+1$ where $0 \leq k \leq 673$, so Board A has $2022 \times 674$ white squares. Board B has $(x,y)$ colored white if and only if $y \equiv 0 \pmod 3$. Then $1 \leq \frac{y}{3} \leq 673$, and Board B has $2022 \times 673$ white squares. For the second part, every $2022 \times 2021$ checkerboard has exactly $\frac{2022 \times 2021}{3} = 674 \times 2021$ white squares. There can't be two adjacent whites: WWB BBB BB If there were, then at least one orientation relative to the adjacent whites will have spaces within the board in the $3 \times 3$ pattern shown, and all $6$ of the shown squares must be black. But then the bottom $6$ squares already have $5$ of them colored black, and can't contain two white squares. There can't be two whites in a row or column separated by exactly one black: WBW BBB WBW BBB If there were, then at least one orientation relative to the topmost WBW row will have spaces within the board in the $4 \times 3$ pattern shown. Since that row contains two white squares, the adjacent row must have three black squares. The third row must then have two white squares again; since they can't be adjacent, the row must be WBW. Since the third row contains two whites, the fourth row must be BBB. But then this would create two $3 \times 2$ blocks with only one white square each. There can't be three consecutive black squares in any row or column. If there were, then the three adjacent squares in an adjacent row/column would need to contain two white squares, violating one of the two conclusions above. Therefore every row and every column contains some rotation of the periodic pattern WBBWBBWBB.... 
The $2022 \times 2021$ checkerboard can be divided into $674 \times 2021$ disjoint rectangles of size $3 \times 1$. Each rectangle contains exactly one white square, so the total number of white squares is also $674 \times 2021$. (The valid boards for the second part are repeating diagonal stripes, which can be described as either $(x,y)$ is white if $x+y \equiv c \pmod{3}$, or $(x,y)$ is white if $x-y \equiv c \pmod{3}$.)
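The diagonal-stripe colouring and the final count are easy to verify mechanically (taking $c=0$; the other residues behave the same way):

```python
def white(x, y):
    # diagonal-stripe colouring: (x, y) is white iff x + y == 0 (mod 3)
    return (x + y) % 3 == 0
```

Every $2\times 3$ and $3\times 2$ block of this colouring contains exactly two white squares, and the whole $2022\times 2021$ board contains $674\times 2021$ of them.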
{ "language": "en", "url": "https://math.stackexchange.com/questions/4547772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
easy way to calculate the limit $\lim_{x \to 0 } \frac{\cos{x}- (\cos{x})^{\cos{x}}}{1-\cos{x}+\log{\cos{x}}}$ I have been trying to use L'Hôpital on this, but it is getting too long; is there a short and elegant solution? The limit approaches 2 according to Wolfram Alpha.
$$\lim_{x \to 0 } \frac{\cos(x)- (\cos(x))^{\cos(x)}}{1-\cos(x)+\log{\cos( x)}}=\lim_{t \to 1 }\frac{t-t^t}{1-t+\log (t)}$$ $$t^t=1+(t-1)+(t-1)^2+\frac{1}{2} (t-1)^3+O\left((t-1)^4\right)$$ $$\log(t)=(t-1)-\frac{1}{2} (t-1)^2+\frac{1}{3} (t-1)^3+O\left((t-1)^4\right)$$ $$\frac{t-t^t}{1-t+\log (t)}=\frac{-(t-1)^2-\frac{1}{2} (t-1)^3+O\left((t-1)^4\right)} {-\frac{1}{2} (t-1)^2+\frac{1}{3} (t-1)^3+O\left((t-1)^4\right) }$$ $$\frac{t-t^t}{1-t+\log (t)}=2+\frac{7 }{3}(t-1)+O\left((t-1)^2\right)$$
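A quick numerical check that the ratio indeed approaches $2$, consistent with the expansion $2+\frac{7}{3}(t-1)$ above:

```python
import math

def f(x):
    # the expression inside the limit, written in terms of t = cos(x)
    t = math.cos(x)
    return (t - t ** t) / (1 - t + math.log(t))
```

For small $x$ the deviation from $2$ is roughly $\frac{7}{3}(\cos x - 1) \approx -\frac{7}{6}x^2$.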
{ "language": "en", "url": "https://math.stackexchange.com/questions/4547927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
The convergence speed of $ \int_0^{\frac{\pi}{2}} \sin ^n(x) \operatorname{d}x $? I already know how to prove the leading-order asymptotics \begin{equation*} \int_0^{\frac{\pi}{2}} \sin ^n(x) \operatorname{d}x \sim \sqrt{\frac{\pi}{2n}} \qquad (n \rightarrow \infty) \end{equation*} with Wallis's formula \begin{equation*} \quad \frac{\pi}{2}=\frac{2 \cdot 2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdot 8 \cdot 8 \cdots}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdot 7 \cdot 9 \cdots} \end{equation*} But the method I used was considered not to be universal. How to prove the finer expansion \begin{equation*} \int_0^{\frac{\pi}{2}} \sin ^n(x) \operatorname{d}x = \frac{\sqrt{2 \pi}}{2} \cdot \frac{1}{n^{\frac{1}{2}}}-\frac{\sqrt{2 \pi}}{8} \cdot \frac{1}{n^{\frac{3}{2}}}+\frac{\sqrt{2 \pi}}{64} \cdot \frac{1}{n^{\frac{5}{2}}}+O\!\left(\frac{1}{n^{\frac{7}{2}}}\right) \end{equation*} And is \begin{equation*} \int_0^{\frac{\pi}{2}} \sin ^n(x) \operatorname{d}x = \frac{\sqrt{2 \pi}}{2} \cdot \frac{1}{n^{\frac{1}{2}}}-\frac{\sqrt{2 \pi}}{8} \cdot \frac{1}{n^{\frac{3}{2}}}+ \dots + (-1)^{k}\cdot\frac{\sqrt{2 \pi}}{2^{\frac {k(k+1)}{2}}} \cdot \frac{1}{n^{\frac{2k+1}{2}}}+\cdots \end{equation*} true? Are there any more powerful tools, like numerical methods, to calculate the integral?
You need an asymptotic equivalent of the factorial, or equivalently, of the gamma function, since the integral is a ratio of gamma values. A well-known one is Stirling's approximation, but you can go further: $$ {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).}$$ where the coefficients correspond to the "Stirling series".
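Concretely, $\int_0^{\pi/2}\sin^n(x)\,dx=\frac{\sqrt{\pi}}{2}\,\Gamma\!\left(\frac{n+1}{2}\right)/\Gamma\!\left(\frac{n}{2}+1\right)$ — a standard identity, not derived in this thread — and the three-term expansion from the question can be checked against it; its relative error should shrink like $n^{-3}$:

```python
import math

def wallis(n):
    # exact value via the Gamma-function formula
    return 0.5 * math.sqrt(math.pi) * math.gamma((n + 1) / 2) / math.gamma(n / 2 + 1)

def three_term(n):
    # the question's expansion, rewritten as sqrt(pi/(2n)) * (1 - 1/(4n) + 1/(32 n^2))
    return math.sqrt(math.pi / (2 * n)) * (1 - 1 / (4 * n) + 1 / (32 * n * n))
```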
{ "language": "en", "url": "https://math.stackexchange.com/questions/4548070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Why do the composition of relations and the matrix product look so alike? For reference: Matrix product: $A : X\times Y \rightarrow R$ $B : Y\times Z \rightarrow R$ $AB := (x,z) \mapsto \int_{y\in Y} A(x,y)B(y,z) d\mu : X\times Z \rightarrow R$ To make the pattern clearer I generalized matrices $A$ and $B$ over any (semi-)ring $R$ to have indices ranging over arbitrary sets instead of just finite ones. In this case we need to give $Y$ a measure. What you get is the $L^2$ inner product. Relation composition: $S \subseteq X\times Y$ $T \subseteq Y\times Z$ $S \circ T := \{(x,z)\ |\ \exists y \in Y.\ (x,y) \in S\ \land\ (y,z) \in T\} \subseteq X\times Z$ Thinking in terms of monoids might be an easy way out, but we are losing a lot of structure doing that and I believe that if there's anything deep going on here, it lies in the $\ \exists \leftrightarrow \int\ $, $\ \land \leftrightarrow \cdot\ $ connection. What is the thing that generalizes both of these? What is going on here? Thanks in advance!
After being given a hint, I'm gonna give myself a different and very simple answer here. The idea is to think of a relation $S \subseteq X\times Y$ as a function $S : X\times Y\rightarrow 2$, and pick our semi-ring $R$ to be $(2,\vee,\wedge) = (2,\max,\min)$. We can take $\mu : Y \rightarrow 2$ to be just constant $1$ for nonempty measurable sets. Then $$\int_{y\in Y} S(x,y)\wedge T(y,z) d\mu = \int_{y\in Y'}S(x,y)\wedge T(y,z) d\mu $$ where $Y'$ is the subset of $Y$ where $S(x,y)\wedge T(y,z) = 1$. I hope it's clear with that, but just in case I'm going to give a silly example with finite sets to make it visual. Say we have the relations (as matrices): $S:=\left(\begin{matrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix}\right)$ and $T:=\left(\begin{matrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{matrix}\right)$. Then doing the matrix product substituting $+$ with $\max$ and $\cdot$ with $\min$ we have $$ST = \left(\begin{matrix} 0 & 1\wedge1\ \vee\ 0\wedge1\ \vee\ 0\wedge1 & 0 \\ 0 & 1\wedge1\ \vee\ 0\wedge1\ \vee\ 0\wedge1 & 0 \\ 0 & 0\wedge1\ \vee\ 1\wedge1\ \vee\ 0\wedge1 & 0 \end{matrix}\right) = \left(\begin{matrix} 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{matrix}\right)$$ Cheers!
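The same trick in code: relation composition is literally matrix multiplication with $(\vee,\wedge)$ in place of $(+,\cdot)$ (the function name is mine):

```python
def compose(S, T):
    # boolean "matrix product" over the semiring (max, min):
    # (S o T)[x][z] = 1 iff there is a y with S[x][y] = 1 and T[y][z] = 1
    n = len(T)
    return [[int(any(S[x][y] and T[y][z] for y in range(n)))
             for z in range(len(T[0]))] for x in range(len(S))]
```

Applied to the matrices from the example above, it reproduces the stated product.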
{ "language": "en", "url": "https://math.stackexchange.com/questions/4548276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why factoring with the quadratic equation gives two different answers? I was working on a problem about induction, and I got stuck with a factorization: $$(2n^2+7n+6)$$ If I find its roots, I get $x=-2$ and $x=-3/2$. Then I can write it as: $$a(x+2)(x+\frac{3}{2})$$ But how could I find $a$? I don't have a solid background in this topic as I mostly ignored middle school mathematics and now I'm interested in it. I've been trying to search "how to factor using the quadratic formula"; the only thing I got right is that if I decide to factor in the $2$, I get $(x+2)(2x+3)$, which now is the same parabola. What core topic am I missing?
The coefficient $a$ is the coefficient of the degree-2 term, which tells you that $a=2$. Another way to see is evaluating at some point other than zero. For instance if you evaluate both expressions at 1, you get $$ 2+7+6=a\times 3\times \frac52=\frac{15a}2. $$ This shows that $15=15a/2$ and so $a=2$.
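Both checks can be automated with a tiny coefficient-list multiplier (helper name mine): with $a=2$, expanding $a(x+2)(x+\tfrac32)$ recovers the original quadratic.

```python
def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists, lowest degree first
    r = [0.0] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci * cj
    return r
```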
{ "language": "en", "url": "https://math.stackexchange.com/questions/4548415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can every non-hermitian matrix be written as a sum of hermitian matrices? If $X$ is non-hermitian, i.e., $X^\dagger = (X^*)^T \ne X$, can one express it as $X = \sum_i \alpha_i Y_i$ where $\alpha_i$ is a complex scalar and $Y_i$ is hermitian?
If you really do intend to take complex-linear combinations, then, yes, this is possible: $$ X \;=\; {1\over 2}\cdot (X+X^*) + {i\over 2}\cdot \Big({X-X^*\over i}\Big) $$ and both expressions in parentheses are hermitian.
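A concrete check of this decomposition on a small complex matrix (pure standard-library Python; all helper names are mine):

```python
def dagger(M):
    # conjugate transpose
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def combine(a, A, b, B):
    # a*A + b*B, entrywise
    return [[a * A[i][j] + b * B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def split_hermitian(X):
    # X = Y1 + i*Y2 with Y1 = (X + X^dagger)/2 and Y2 = (X - X^dagger)/(2i),
    # both hermitian, exactly as in the answer
    Y1 = combine(0.5, X, 0.5, dagger(X))
    Y2 = combine(-0.5j, X, 0.5j, dagger(X))
    return Y1, Y2
```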
{ "language": "en", "url": "https://math.stackexchange.com/questions/4548536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Which field of mathematics does Brouwer's Fixed Point Theorem belong to? Forgive me for asking a seemingly stupid question: Is Brouwer's Fixed Point Theorem an Analysis theorem, or a Topology theorem? It's talking about functions so I assume it's part of Analysis. But Wikipedia says it's a theorem in Topology. A follow-up question is, if I want to study something like Brouwer's Theorem in future, could you suggest me a specific branch of mathematics? eg. a specific branch in Algebraic Topology or Functional Analysis, something like that. (I'm not sure if these fields are related to this theorem, I'm just trying to give examples) p.s. I've just started my undergraduate year 2 and not even Point-Set Topology has been taught. So forgive me if it seems that I have a lack of mathematical knowledge.
Branches of mathematics don't exist, there's no such thing, they're made up categories. Analysis and topology are closely related and Brouwer's fixed point theorem is a place where they meet and this is not a problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4548736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a function from only knowing its derivative at a point. This question on my last calc exam, I am told is very easy, but I do not understand how it is possible. $ \lim_{h \to 0} \frac{\sqrt[3]{27+h}-3}{h}$ is the derivative of $f(x)$ at point $a$. Find $f(x)$ and find $a$. I recognize the definition of the derivative. Therefore I know that $f(a + h) = \sqrt[3]{27+h}$ and $f(a) = 3$ Then I am stuck. What is this question asking? How can we know the integral (the entire function $f(x)$) from the derivative at a single point? In lieu of an explanation I will accept an algorithm which I can use to solve it.
You don't know the function $f$ from the single derivative $f'(a)$. You know it from the expression $f(a+h)=\sqrt[3]{27+h}$, which holds for all $h$, so that you can simply substitute $h=x-a$ on both sides. The answer that you get will be valid for any number $a$ that you choose, so the question is somewhat strangely formulated ($a$ can be anything, i.e., it isn't at all determined by the given data).
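Numerically, with the natural concrete choice $f(x)=\sqrt[3]{x}$ and $a=27$ (one valid choice among many, per the remark above), the difference quotient approaches $f'(27)=\frac{1}{3}\cdot 27^{-2/3}=\frac{1}{27}$:

```python
def g(h):
    # the difference quotient from the exam problem
    return ((27 + h) ** (1 / 3) - 3) / h
```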
{ "language": "en", "url": "https://math.stackexchange.com/questions/4549287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $A$ be ring, then $A[x]$ is formally smooth over $A$. I'm having trouble with the following example of the polynomial ring $A[x]$ for some ring $A$ being formally smooth: Let $h \colon C \to C /J$ with $J \subset C$ an ideal such that $J^2 = 0$. Given the commutative diagram of rings, \begin{array}{ccc} A & \xrightarrow f & C \\ \downarrow g & & \downarrow h \\ A[x] & \xrightarrow {\overline{f}} & C/J \end{array} choose $c_i \in C$ such that $h(c_i) = \overline{f}(x_i)$. Define $F \colon A [x] \to C$ as the $A$-algebra map $x_i \mapsto c_i$. Then $F \circ g = f$ and $h \circ F = \overline{f}$. My question/trouble is that I do not see how we're using $J$.
$A[x]$ satisfies a much stronger universal property, such that $J^2=0$ is not needed. In fact, all we need is $C\rightarrow C/J$ is surjective so that the image of $x$ in $C/J$ has a preimage in $C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4549432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourier transform as change of basis The Fourier transform $\hat{f}$ of some function $f$ is often presented as a change of basis to a basis of complex exponentials. This however begs the question: since $\hat{f}$ is expressed with respect to a basis, is $f$ too expressed with respect to some basis? In a linear algebra setting, the initial and final bases are clearly defined, for example from the standard basis to some linear combination of it. Here, however, it seems that the initial basis is just left out of the discussion.
Building on what mcd has written, the usual way to expand in the orthonormal basis $e_n$ is: $$ f = \sum_{n=-\infty}^{\infty}\langle f,e_n\rangle e_n. $$ That's essentially what you're doing with the Fourier transform with regard to a basis with a continuous index 's' that ranges over the real numbers: $$ f = \int_{\mathbb{R}}\left\langle f,\frac{e^{isx'}}{\sqrt{2\pi}}\right\rangle \frac{e^{isx}}{\sqrt{2\pi}}ds $$
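The discrete analogue makes the "continuous index" story finite and checkable: the vectors $e_k[n]=e^{2\pi i kn/N}/\sqrt{N}$ form an orthonormal basis of $\mathbb{C}^N$, and expansion/reconstruction works exactly as in the displayed formulas (function names are mine):

```python
import cmath, math

def dft_basis(N):
    # orthonormal discrete analogue of e^{isx}/sqrt(2*pi)
    return [[cmath.exp(2j * math.pi * k * n / N) / math.sqrt(N) for n in range(N)]
            for k in range(N)]

def expand(f, basis):
    # coefficients <f, e_k>
    return [sum(fv * ev.conjugate() for fv, ev in zip(f, e)) for e in basis]

def reconstruct(coeffs, basis):
    # f = sum_k <f, e_k> e_k
    N = len(basis)
    return [sum(c * basis[k][n] for k, c in enumerate(coeffs)) for n in range(N)]
```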
{ "language": "en", "url": "https://math.stackexchange.com/questions/4549628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $X^TX$ being the identity matrix equivalent to saying there is no correlation between features? This was mentioned in the lecture as one of the assumptions for the problem of finding a hard/soft thresholding estimator. There was another time that my professor mentioned this assumption, so I think it is important. But I don't know why: why is the design matrix $X$ being orthonormal equivalent to saying there is no correlation between features? Name $x_1,...,x_p$ the columns of my design matrix $X$; then orthonormality means that for any $x_i, x_j$ with $i\neq j$, their inner product is 0 (and each column has unit norm). On the other hand, no correlation means that cov($x_i,x_j$)$=0$. But the first one is a purely linear-algebra statement, whereas the second one involves the probability distributions of $x_i,x_j$.
The statement is kind of true, with two caveats: 1. You need to center the columns of $X$, and the magnitude of the diagonal elements doesn't matter. So, the statement should be: for a design matrix $X$ with centered columns, the features are uncorrelated if and only if $X^TX$ is a diagonal matrix. By the design matrix $X$, usually one means an $n\times p$ matrix, where the columns are the features and the rows are the different observations. Consider any two features $x_j,x_k$ with $j\neq k$. The empirical covariance is defined as: $$ Cov(x_j,x_k):=\frac{1}{n}\sum_{i=1}^n(x_{ij}-\bar{x}_j)(x_{ik}-\bar{x}_k) $$ Here, $\bar{x}_j:=n^{-1}\sum_{i=1}^nx_{ij}$. It is easy to show that: $$ =\frac{1}{n}\sum_{i=1}^nx_{ij}x_{ik}-\bar{x}_j\bar{x}_k=\frac{1}{n}(X^TX)_{jk}-\bar{x}_j\bar{x}_k $$ Now, if you knew that $X^TX$ is a diagonal matrix, it would follow that $(X^TX)_{jk}=0$. In particular $$ =-\bar{x}_j\bar{x}_k $$ So, if $\bar{x}_j=0$ or $\bar{x}_k=0$, in particular after centering the columns of $X$, $X^TX$ being a diagonal matrix implies that all $x_j$ and $x_k$ are uncorrelated. But $X^TX=I$ itself is neither sufficient nor necessary for uncorrelated features. It is not sufficient: Let $X=I$, say with $p=n=2$. Then $X^TX=I$, but you can check that the empirical covariance is $Cov(x_1,x_2)=-1/4\neq 0$. This exemplifies that we need mean zero for the columns for the argument to work. It is not necessary: Again let $p=n=2$, and this time define $X$ with columns $x_1:=(1,-1)$ and $x_2:=(1,1)$. A simple calculation shows that $X^TX=2I$, and $Cov(x_1,x_2)=0$. This exemplifies that we only need $X^TX$ to be diagonal in the argument above; the magnitude of the diagonal elements themselves is irrelevant.
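A tiny empirical check of both counterexamples (the covariance of the identity's columns works out to $-\tfrac14$; the sign aside, the point is that it is nonzero):

```python
def covariance(u, v):
    # empirical covariance (1/n) * sum (u_i - mean(u)) * (v_i - mean(v))
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
```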
{ "language": "en", "url": "https://math.stackexchange.com/questions/4549790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Reference (book or article) for an explicit formula of Legendre polynomials The following explicit formula is stated for Legendre polynomials on Wikipedia. \begin{equation} P_n(x)=\sum_{k=0}^n {n\choose k}{n+k \choose k} \left(\dfrac{x-1}{2}\right)^k \end{equation} Do you know any proof or reference for this formula?
In Wikipedia's page there is the following Bonnet's recursion formula: $$(n+1)P_{n+1}(x)=(2n+1)xP_n(x)-nP_{n-1}(x)$$ Now, let $a_{n,k}=\binom{n}{k}\binom{n+k}{k}$. Then, essentially, we must show that $P_n(x)=\sum_{k=0}^na_{n,k}\left(\frac{x-1}{2}\right)^k$ satisfies this recursive equation. Then, we have $$(n+1)\sum_{k=0}^{n+1}a_{n+1,k}\left(\frac{x-1}{2}\right)^k=(2n+1)x\sum_{k=0}^na_{n,k}\left(\frac{x-1}{2}\right)^k-n\sum_{k=0}^{n-1}a_{n-1,k}\left(\frac{x-1}{2}\right)^k.$$ Let's use the trick $x=2\left(\frac{x-1}{2}\right)+1$. Comparing coefficients, we must then show that $$(n+1)a_{n+1,k}=2(2n+1)a_{n,k-1}+(2n+1)a_{n,k}-na_{n-1,k}$$ And after some simplifications, multiplying out the numerators and denominators, I got $$(n+1)(n+k+1)(n+k)=2(2n+1)k^2+(2n+1)(n-k+1)(n+k)-n(n-k+1)(n-k)$$ and these are equal.
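The explicit formula can be checked against Bonnet's recursion numerically before wading through the algebra (the helper name `P` is mine):

```python
from math import comb

def P(n, x):
    # the explicit formula: sum_k C(n,k) C(n+k,k) ((x-1)/2)^k
    return sum(comb(n, k) * comb(n + k, k) * ((x - 1) / 2) ** k for k in range(n + 1))
```

For every tested $x$ and $n$, the residual of $(n+1)P_{n+1}-(2n+1)xP_n+nP_{n-1}$ is zero up to floating-point noise.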
{ "language": "en", "url": "https://math.stackexchange.com/questions/4549925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Order of $\langle f(g)\rangle$ If $f$ is a homomorphism from $G$ to $H,$ if $g\in G$, in general do $\langle g\rangle$ and $\langle f(g)\rangle$ have the same order? I feel like they can have different orders, but I can't think of a counterexample...
It's a good question. Actually, all that is needed is a few very basic ingredients (more basic than Lagrange and the first isomorphism theorem): * *$x^n=e\implies \lvert x\rvert \mid n$ (this may be the trickiest; you need the division algorithm) *$h$ a homomorphism $\implies h(x^n)=h(x)^n$ *$x^{\lvert x\rvert}=e$ *$h$ a homomorphism $\implies h(e)=e$ (follows from the second bullet, btw). Now try putting these together to prove $$\lvert h(x)\rvert \mid \lvert x\rvert.$$ In general, we don't get equality (whenever the kernel of $h$ is non-trivial).
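A tiny computational illustration (my own example, not from the answer): for the reduction homomorphism $h:\mathbb{Z}_{12}\to\mathbb{Z}_4$, $h(x)=x \bmod 4$, the order of $h(x)$ always divides the order of $x$, and the division can be strict:

```python
def order(x, n):
    # order of x in the additive group Z_n (order(0) = 1)
    k, y = 1, x % n
    while y != 0:
        y = (y + x) % n
        k += 1
    return k

# h: Z_12 -> Z_4, h(x) = x mod 4 is a group homomorphism
for x in range(12):
    assert order(x, 12) % order(x % 4, 4) == 0   # |h(x)| divides |x|

# strict divisibility can occur: x = 4 has order 3, but h(4) = 0 has order 1
assert order(4, 12) == 3 and order(4 % 4, 4) == 1
print("|h(x)| divides |x| for all x in Z_12")
```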
{ "language": "en", "url": "https://math.stackexchange.com/questions/4550077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Making an ODE exact, when formula's of exactness do not provide a solution I have $$\left(2x+ 1-\frac{y^2}{x^2}\right)dx+ \frac{2y}{x}dy= 0$$ which is not exact. However, it could be made exact by using the formula for integrating factors. I have two ways, With $$M=\left(2x+ 1-\frac{y^2}{x^2}\right)$$ and $$N=\frac{2y}{x}$$ we can form the following integrating factors: $$\phi(x)=\frac{N_x-M_y}{M}=-\frac{\frac{4y}{x^2}}{2x+1-\frac{y^2}{x^2}}$$ or $$\psi(x)=\frac{M_y-N_x}{N}=\frac{ (x^3 - x^2 + y^2)}{yx^2}$$ Then multiplying these in, should give an exact form of the ODE. But neither of the two make the ODE exact. Are there other formulas one can use? Thanks
$$\left(2x+ 1-\frac{y^2}{x^2}\right)dx+ \frac{2y}{x}dy= 0$$ $$\left(2x+1\right)dx+\dfrac {(-y^2dx +{2xy}dy)}{x^2}= 0$$ $$\left(2x+1\right)dx+\dfrac {(-y^2dx +{x}dy^2)}{x^2}= 0$$ $$\left(2x+1\right)dx+d \left (\dfrac {y^2}{x}\right)= 0$$ Integrate.
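Integrating gives the implicit solution $x^2+x+\frac{y^2}{x}=C$. A quick numeric spot-check I added (choosing $C=10$ and the positive root branch, and comparing a finite-difference derivative against the ODE):

```python
import math

C = 10.0
def y(x):
    # positive branch of the implicit solution x^2 + x + y^2/x = C
    return math.sqrt(x * (C - x * x - x))

x, h = 1.5, 1e-6
dydx = (y(x + h) - y(x - h)) / (2 * h)                  # numeric derivative
rhs = -(2 * x + 1 - y(x) ** 2 / x ** 2) * x / (2 * y(x))  # slope implied by the ODE
assert abs(dydx - rhs) < 1e-5
print("implicit solution satisfies the ODE at x =", x)
```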
{ "language": "en", "url": "https://math.stackexchange.com/questions/4550228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve a non-exact ODE by a different method The following equation $$(x^3+2xy)dx-x^2dy=0$$ is not exact, since $$\frac{\partial M}{\partial y}=2x\ne\frac{\partial N}{\partial x}=-2x$$ I wanted to try the following then, $$-x^2dy=-(x^3+2xy)dx$$ $$\frac{dy}{dx}=\frac{x^3+2xy}{x^2}$$ $$dy=\frac{x^3+2xy}{x^2}dx$$ $$\int dy=\int xdx+ \int \frac{2y}{x}dx$$ But the last integral, according to what I suspect does not make any sense for finding a solution to the original problem. That would mean that the answer by this approach is: $$y=\frac{x^2}{2}+2y\ln x$$ However, this is not correct. Any ideas how to solve this? Thanks
$$(x^3+2xy)dx-x^2dy=0$$ $$x^3dx+ydx^2-x^2dy=0$$ $$\dfrac {dx}{x}+\dfrac {ydx^2-x^2dy}{x^4}=0$$ $$\dfrac {dx}{x}-d \left (\dfrac {y}{x^2}\right)=0$$ Integrate.
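Integrating gives $\ln|x| - y/x^2 = C$, i.e. $y = x^2(\ln x - C)$ for $x>0$. A quick numeric check I added:

```python
import math

C = 0.3
def y(x):
    # one-parameter family from ln x - y/x^2 = C  =>  y = x^2 (ln x - C), x > 0
    return x * x * (math.log(x) - C)

x, h = 2.0, 1e-6
dydx = (y(x + h) - y(x - h)) / (2 * h)
rhs = (x ** 3 + 2 * x * y(x)) / x ** 2   # original ODE: dy/dx = (x^3 + 2xy)/x^2
assert abs(dydx - rhs) < 1e-6
print("y = x^2 (ln x - C) solves the ODE")
```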
{ "language": "en", "url": "https://math.stackexchange.com/questions/4550426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the probability of "$\le$" the same as "$\lt$"? The current in a certain circuit as measured by an ammeter is a continuous random variable X with the following density function: $f(x)= 0.075x+0.2$ for $3\le{x}\le5$ $f(x) = 0$ otherwise. Calculate $P(X\le4)$ and compare to $P(X\lt4)$. In my solution, I calculate $P(X\le4)$ and $P(X\lt4)$ by integral and I see that $P(X\le4)=P(X\lt4)$. My questions are: * *Are they always equal to each other in every other case? *Why can they equal to each other while $P(X=4)\neq0$? Thanks a lot for your help!
To complement the other answer I want to say that probability is, in some sense, a branch of measure theory. That is, probability is a way to assign to some subsets "a kind of area", or "length", loosely speaking. And in measure theory you realize you can have sets which, although non-empty, have "length" $0$ (for example $A=\{4\}$ in your case). And this is the reason why you can find non-impossible events whose probability is $0$.
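For the specific density in the question this is easy to see concretely (a check I added): $P(3\le X\le 4)=0.4625$, while $P(X=4)$ is the limit of $P(4-h\le X\le 4)$, which shrinks to $0$ with $h$ even though $f(4)=0.5\neq 0$:

```python
cdf_piece = lambda x: 0.075 * x * x / 2 + 0.2 * x   # antiderivative of f(x) = 0.075x + 0.2 on [3, 5]

p_le = cdf_piece(4.0) - cdf_piece(3.0)   # P(X <= 4) = P(3 <= X <= 4)
assert abs(p_le - 0.4625) < 1e-12

# P(X = 4) is the limit of P(4-h <= X <= 4), which shrinks like f(4) * h:
for h in (1e-2, 1e-4, 1e-6):
    print(h, cdf_piece(4.0) - cdf_piece(4.0 - h))
```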
{ "language": "en", "url": "https://math.stackexchange.com/questions/4550572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Probability that $\exists k\in\mathbb{N}: \sum_{j=1}^{3k}X_j = 2k$ is less than $1$ Let $X_j$, $j\in\mathbb{N}$, be IID random variables taking value in $\{0,1\}$ with each with an equal probability. Then I want to prove that the probability that there exists $k\in\mathbb{N}$ such that $\sum_{j=1}^{3k}X_j = 2k$ is strictly in between $0$ and $1$. I was able to show that the probability is larger than $0$ by computing the probability for any fixed $k$ which yields a lower bound. However, I couldn't figure out how to show the latter. I'd appreciate any help.
The main idea is that, since $\sum_{j=1}^{3k}X_j$ should be concentrated near its mean $3k/2$, if this sum isn't equal to $2k$ for any small $k$ it should be unlikely to equal $2k$ for any large $k$. We can certainly ensure with positive probability that the sum can differ for small $k$ (e.g. by setting $X_j=0$ for small $j$); it only remains to make explicit the "large $k$ unlikely" step. Let $N$ be a positive integer. Suppose $X_j=0$ for all $1\leq j\leq 3N$ (this happens with positive probability $1/2^{3N}$), and let $Y_k$ be the event that $\sum_{j=1}^{3(k+N)}X_j=2(k+N)$. Note that $$\Pr[Y_k]=\frac{\binom{3k}{2k+2N}}{2^{3k}}=\frac{\binom{3k}{k-2N}}{2^{3k}}\leq \frac{\binom{3k}k}{2^{3k}}.$$ This probability is pretty small; we have $$1=\left(\frac 13+\frac 23\right)^{3k}=\sum_{i=0}^{3k}\binom{3k}{i}\left(\frac 13\right)^i\left(\frac 23\right)^{3k-i}\geq \binom{3k}{k}\frac{2^{2k}}{3^{3k}},$$ so $$\Pr[Y_k]\leq \frac{\left(\frac{3^3}{2^2}\right)^k}{2^{3k}}=\left(\frac{27}{32}\right)^k.$$ In particular, by the union bound, the probability that $Y_k$ occurs for some $k\geq 2N$ is at most $$\sum_{k=2N}^\infty\left(\frac{27}{32}\right)^k=\frac{32}5\cdot\left(\frac{27}{32}\right)^{2N}.$$ Note that $Y_k$ cannot occur for any $k<2N$ since $$\sum_{j=1}^{3(k+N)}X_j\leq 3k<2(k+N)$$ for such $k$. So, the probability that $\sum_{j=1}^{3\ell}X_j\neq 2\ell$ for all $\ell>0$ is at least the probability that $X_j=0$ for all $1\leq j\leq 3N$ times $1-\frac{32}5\cdot\left(\frac{27}{32}\right)^{2N}$. For large $N$, this is positive, as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4550864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Positive invertible elements and pure states in unital C*-algebra In a unital C*-algebra $\mathcal{A}$, if $a\in\mathcal{A}$ is a positive element such that $\|a\|=1$, and, for every pure state $f$, $f(a)>0$. Show $a$ is invertible. Can I conclude, for every state $g\in S(\mathcal{A})$, we always have $g(a)>0$? Since that's true for every pure state? I'm trying to use that potentially problematic claim to prove invertibility of $a$.
Let $\mathcal{B}$ be the $C^*$-algebra generated by $a,1$. Then I claim that if $g$ is a pure state on $\mathcal{B}$, then also $g(a)> 0$. To see this, choose a pure state $\overline{g}$ on $\mathcal{A}$ that restricts to $g$ on $\mathcal{B}$. Then $g(a) = \overline{g}(a)> 0$ by your assumption. Thus, your condition is also satisfied for the commutative $C^*$-algebra $\mathcal{B}$. Thus, we may assume that $\mathcal{A}$ is commutative (by replacing it by $\mathcal{B}$). But from there, the proof is easy: By the commutative Gelfand-Naimark theorem, we have $\mathcal{A}\cong C(X)$ where $X$ is some compact Hausdorff space. Then recall that the pure states on $C(X)$ are precisely the characters $\{\operatorname{ev}_x: x \in X\}$ so your assumption means that $$a(x) = \operatorname{ev}_x(a)>0$$ for any $x\in X$. Hence, $a$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4551023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenvalues of $A^\dagger A$ With a $2\times 2$ matrix $A$, let $u$ be an eigenvector of $B=A^\dagger A$. My question is: when can the eigenvalue $\lambda = u^\dagger B u$ lie between $0$ and $1$, i.e., what are the conditions under which $\lambda \in [0,1]$? Here $\dagger$ denotes the Conjugate-Transpose. Also, $u$ is normalized i.e., $u^\dagger u = I$.
Since we're working with 2x2 matrices, we can get the eigenvalues from the trace and determinant. Let $A$ be given by $$ A=\begin{bmatrix} a&b\\c&d\end{bmatrix}. $$ $\mathrm{tr}(A^\dagger A) = |a|^2+|b|^2+|c|^2+|d|^2$ is the squared Frobenius norm of $A$. Meanwhile, $\det(A^\dagger A) = |\det(A)|^2 = |ad - bc|^2$. For brevity, let $N$ be the squared norm of $A$ and $D$ be its determinant (the usual shorthands would result in $\left\|A\right\|^2$ and $\left|\left|A\right|\right|^2$, respectively, which would just lead to problems). Then the eigenvalues are $$ \lambda_\pm = \frac{N\pm\sqrt{N^2-4|D|^2}}{2}. $$ These are always nonnegative (and both are strictly positive exactly when $D\neq 0$), so the question is when $\lambda_+ < 1$. Since $\lambda_+\geq N/2$, this forces $N<2$; given $N<2$, a bit of algebra gives $$ \frac{N+\sqrt{N^2-4|D|^2}}{2}<1\Longleftrightarrow\sqrt{N^2-4|D|^2}<2-N\Longleftrightarrow N^2-4|D|^2<(2-N)^2\Longleftrightarrow N< 1 + |D|^2. $$ Conversely, if $N<2$ and $N<1+|D|^2$, then the characteristic polynomial $t^2-Nt+|D|^2$ is positive at $t=1$ and its vertex lies at $N/2<1$, so both roots are below $1$. So the matrix will have both eigenvalues in $[0,1)$ if and only if $$ |a|^2+|b|^2+|c|^2+|d|^2 < 2 \quad\text{and}\quad |a|^2+|b|^2+|c|^2+|d|^2 < 1 + |ad - bc|^2. $$ (The second condition alone is not enough: $A=\sqrt{3/2}\,I$ has $N=3<1+|D|^2=13/4$, but both eigenvalues of $A^\dagger A$ equal $3/2$.)
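A randomized check of the criterion (my own test script), comparing it against the largest eigenvalue computed from the trace/determinant formula:

```python
import random

random.seed(1)
rand_c = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))

for _ in range(2000):
    a, b, c, d = rand_c(), rand_c(), rand_c(), rand_c()
    N = abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2 + abs(d) ** 2   # tr(A^† A)
    D = a * d - b * c                                           # det(A)
    disc = max(N * N - 4 * abs(D) ** 2, 0.0)                    # clamp rounding noise
    lam_max = (N + disc ** 0.5) / 2                             # largest eigenvalue of A^† A
    # squaring sqrt(N^2 - 4|D|^2) < 2 - N is only legitimate when N < 2,
    # so the full criterion carries that guard as well:
    assert (lam_max < 1) == (N < 2 and N < 1 + abs(D) ** 2)
print("criterion verified on 2000 random complex 2x2 matrices")
```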
{ "language": "en", "url": "https://math.stackexchange.com/questions/4551368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Faithful representations and small tensor powers Let $ G $ be a finite group. Let $ (\pi,V) $ be a faithful representation of $ G $. Then every irrep of $ G $ is contained in some tensor power of $ V $. See https://mathoverflow.net/questions/18194/faithful-representations-and-tensor-powers In particular this answer https://mathoverflow.net/a/192103/387190 shows that every irrep shows up in some tensor power less than or equal to the size of $ G $. I am interested in a tighter bound. In particular, replacing the size of $ G $ by just the number of conjugacy classes of $ G $. That is: Let $ G $ be a finite group with $ k $ conjugacy classes. Let $ (\pi,V) $ be a faithful representation of $ G $. Does every irrep of $ G $ appear as a subrepresentation of $$ \bigoplus_{j=1}^k V^{\otimes j} $$ In other words, does every irrep of $ G $ appear in some small tensor power $ V^{\otimes j} $ for $ j \leq k $.
Yes, in fact we can write down an even better bound than this. If $\chi$ is a faithful character and $\psi$ is an irreducible character, then for $n \ge 1$ the sequence $\langle \chi^n, \psi \rangle$ satisfies a linear recurrence relation of order the number $d$ of distinct nonzero values that the character $\chi$ takes (which satisfies $d \le c(G)$, the number of conjugacy classes, but can be strictly less than it). (We need to take $n \ge 1$ so we can ignore the zero values.) Hence if it is zero for $n = 1, \dots d$ then it vanishes identically (since we can compute the rest of the terms using the linear recurrence). So any of the other arguments which show that this sequence is not identically zero in fact show that $\langle \chi^n, \psi \rangle \neq 0$ for some $\boxed{ 1 \le n \le d }$. This argument can be extended slightly to give a proof, which is also given in the linked MO thread. We just consider the generating function $$\sum_{n \ge 0} \langle \chi^n, \psi \rangle t^n = \sum_{n \ge 0} \frac{1}{|G|} \sum_{g \in G} \chi^n(g^{-1}) \psi(g) t^n = \frac{1}{|G|} \sum_{g \in G} \frac{\psi(g)}{1 - \chi(g^{-1}) t}.$$ Because $\chi$ is faithful, $g = e$ is the only element satisfying $\chi(g) = \chi(e)$, so the corresponding term $\frac{\psi(e)}{1 - \chi(e) t}$ is not canceled out by any other term in the above sum (it is the dominant singularity of the generating function), so the above generating function is nonzero and hence has some nonzero coefficient. Alternatively we can use a Vandermonde matrix, which is also alluded to in the linked MO thread. As a simple application of this stronger bound, if we take $\chi$ to be the character of the regular representation then $d = 1$ which gives that every irreducible appears in the regular representation. Of course we knew this already but it's nice to see that this result is strong enough to give it. As a more interesting application, let $V$ be the $n$-dimensional permutation representation of $S_n$. 
The character of this representation is the number of fixed points of a permutation, which takes exactly $d = n-1$ distinct nonzero values, namely $1, 2, \dots n-2, n$ (so excluding $n-1$). We conclude that every irreducible of $S_n$ appears in $V^{\otimes k}$ for some $1 \le k \le n - 1$, which is much better than either the bound $|S_n| = n!$ or the bound $|c(S_n)| = p(n)$. We ought to have a similar bound for induced representations but I don't know what that bound should be exactly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4551620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Computing the eigenvalues and eigenvectors of a $ 3 \times 3$ with a trick The matrix is: $ \begin{pmatrix} 1 & 2 & 3\\ 1 & 2 & 3\\ 1 & 2 & 3 \end{pmatrix} $ The solution says that $ B\cdot \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 6 \\ 6 \\ 6\end{pmatrix}$ $ B\cdot \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0\end{pmatrix}$ $ B\cdot \begin{pmatrix} 3 \\ 0 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0\end{pmatrix}$ Thus the eigenvalues are $ \lambda_{1}=6,\lambda_{2}=0 $ My question is, how can I easily find $\begin{pmatrix} 0 \\ 0 \\ 0\end{pmatrix}$ and $\begin{pmatrix} 6 \\ 6 \\ 6\end{pmatrix}$? Is there any way to see it "quickly"?
The range of the given matrix is spanned by one vector: $$ \left[\begin{array}{c}1 \\ 1 \\ 1\end{array}\right] $$ Therefore, any eigenvector (which must be non-zero by definition) must be a scalar multiple of this vector, or it must be in the null space of the given matrix. The above is an eigenvector of the given matrix. The null space is spanned by $$ \left[\begin{array}{r}2 \\ -1 \\ 0\end{array}\right],\left[\begin{array}{r}3 \\ 0 \\ -1 \end{array}\right] $$ You can easily rewrite this null space in terms of the vectors given in your statement of the problem.
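A small check of these eigenpairs (added by me, not part of the original answer):

```python
B = [[1, 2, 3],
     [1, 2, 3],
     [1, 2, 3]]

matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

assert matvec(B, [1, 1, 1]) == [6, 6, 6]    # eigenvalue 6, from the range
assert matvec(B, [2, -1, 0]) == [0, 0, 0]   # null-space vector, eigenvalue 0
assert matvec(B, [3, 0, -1]) == [0, 0, 0]   # null-space vector, eigenvalue 0
print("eigenpairs confirmed")
```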
{ "language": "en", "url": "https://math.stackexchange.com/questions/4551846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Probability of triangle inequality given probability density functions For three random variables $A$,$B$,$C$, I have their probability density functions $f_A$, $f_B$, and $f_C$. The PDFs are polynomial. I am trying to calculate the probability that $a+b>c$, or $$Pr[a+b>c]$$ using the PDFs. Is there a method for this? I know that I can calculate things like $Pr[x_0 \leq a \leq x_1 ] = \int_{x_0}^{x_1}f_A(x) \ dx$. but I'm not sure how to determine something like $Pr[a+b>c]$.
I know that I can calculate things like $\displaystyle\Pr[x_0≤A≤x_1]=\int_{x_0}^{x_1}f_A(x)\,\mathrm d x$. but I'm not sure how to determine something like $\Pr[A+B>C]$. Basically, it is the same principle. You integrate over the supported domain where the condition is also met; you just do so for all three variables. $\qquad\begin{align}\Pr(A+B>C) &= \iiint_{x+y>z} f_{A,B,C}(x,y,z)\,\mathrm d z\,\mathrm d y\,\mathrm d x\\[1ex]&=\int_{-\infty}^\infty\int_{-\infty}^{\infty}\int_{-\infty}^{x+y} f_A(x)\,f_B(y)\,f_C(z)\,\mathrm d z\,\mathrm d y\,\mathrm d x\end{align}$ If the variables are independent, then $f_{A,B,C}(x,y,z)=f_A(x)\,f_B(y)\,f_C(z)$, but if they are not, then you will need to know what is that joint probability density function.
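A concrete illustration (my own example, assuming independence and Uniform(0,1) marginals — constant, hence polynomial, densities): here $P(A+B>C)=\iint \min(a+b,1)\,da\,db = 5/6$, which a midpoint-rule evaluation of the nested integral reproduces:

```python
def prob_sum_exceeds(n=200):
    # P(A+B > C) for independent Uniform(0,1) variables:
    # integrate P(C < a+b) = min(a+b, 1) over the unit square (midpoint rule)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * h
        for j in range(n):
            b = (j + 0.5) * h
            total += min(a + b, 1.0)
    return total * h * h

est = prob_sum_exceeds()
assert abs(est - 5 / 6) < 1e-3   # exact value for this example is 5/6
print(f"P(A+B>C) = {est:.5f}")
```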
{ "language": "en", "url": "https://math.stackexchange.com/questions/4552007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Difference in interval notation What is the difference between these two exercise questions (regarding the intervals)? * *Show that for each $\epsilon > 0$, $f_n$ is uniformly convergent on $[\epsilon, \infty)$ *Show that $f_n$ is not uniformly convergent on $(0, \infty)$ I mean in the first interval we don't have the 0 because $\epsilon > 0$ and in the second we don't have the 0 because it is an open interval. So what is the difference?
[I gave a glib answer above as a comment, but hopefully this answer will give you some actual insight. :-} ] For a cartoon version that might make the difference easier to see, replace "$f_n$ is uniformly convergent on" with "we can find a positive number that is strictly less than all numbers in". What happens here, and probably what happens in your exercise, is that the truth of the statement requires you to find some number such that .... something is true about it. In a question like yours, about a sequence converging, I'll bet that you have to find some $N$ such that for all $i \gt N$ .... something holds. So for each $\epsilon \gt 0$ you can find such an $N$ - and from now on I'm going to write it as $N_{\epsilon}$ to emphasize that it depends on $\epsilon$. But as your $\epsilon$ gets smaller, $N_{\epsilon}$ gets bigger and grows beyond any upper bound. So you can't find a single $N$ that will work for all $\epsilon$ values $\gt 0$. Which is what happens in your original question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4552143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Given the subspaces $S=\{(1,2,1),(0,2,0)\}$ and $P=\{(x,y,z): x+y=y-kz=0\}$ Given the subspaces $S=\{(1,2,1),(0,2,0)\}$ and $P=\{(x,y,z): x+y=y-kz=0\}$, try to find $k$ such that: $S+P=\mathbb R^3$ and $S∩P=\{(0,0,0)\}$. Sorry, I forgot to put what I did try first. Here I go: First, I became aware that if $S+P$ is equal to $\mathbb R^3$, then the elements of a basis of $S$ together with the elements of a basis of $P$ form a basis of $\mathbb R^3$. But we know that $\mathbb R^3$ has a canonical basis, that is: $\{(1,0,0), (0,1,0), (0,0,1)\}$, and so now we can arrange $P$ so it can create this set. This is my main idea, but I don't know how to continue.
You have a basis for $S$ (presuming $S$ is the span of the two vector set, and thus a subspace). Note that the vectors are linearly independent, as they are not scalar multiples of each other. Now you need a basis for $P$. The equations $x + y = 0$ and $y - kz = 0$ simplify to $y = kz$ and $x = -y = -kz$. So, for $(x, y, z) \in P$, $$(x, y, z) = (-kz, kz, z) = z(-k, k, 1).$$ Thus, every element of $P$ is a scalar multiple of $(-k, k, 1)$. You can also check that $(-k, k, 1)$ (and hence all of its multiples) belongs to $P$, so $\{(-k, k, 1)\}$ is a basis. Now you can apply the result, and I'll leave the rest to you: for which values of $k$ is $\{(1, 2, 1), (0, 2, 0), (-k, k, 1)\}$ a basis for $\Bbb{R}^3$?
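To finish the computation (my own check, not part of the answer): the set is a basis exactly when the determinant of the matrix with these rows is nonzero, and that determinant works out to $2+2k$, so every $k\neq -1$ works:

```python
def det3(r1, r2, r3):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = r1, r2, r3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

basis_det = lambda k: det3((1, 2, 1), (0, 2, 0), (-k, k, 1))

assert basis_det(-1) == 0          # k = -1: the three vectors are dependent
for k in (-3, 0, 2, 5):
    assert basis_det(k) == 2 + 2 * k != 0
print("basis for every k != -1")
```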
{ "language": "en", "url": "https://math.stackexchange.com/questions/4552350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Spivak, Ch. 22, "Infinite Sequences", Problem 1(iii): How do we show $\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right ]=0$? The following is a problem from Chapter 22 "Infinite Sequences" from Spivak's Calculus * *Verify the following limits (iii) $\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right ]=0$ The solution manual says $$\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right ]$$ $$=\lim\limits_{n\to \infty} \left [\left (\sqrt[8]{n^2+1}-\sqrt[8]{n^2}\right )+\left (\sqrt[4]{n}-\sqrt[4]{n+1}\right )\right ]$$ $$=0+0=0$$ (Each of these two limits can be proved in the same way that $\lim\limits_{n\to \infty} (\sqrt{n+1}-\sqrt{n})=0$ was proved in the text) How do we show $\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[8]{n^2}\right ]=0$? Note that in the main text, $\lim\limits_{n\to \infty} (\sqrt{n+1}-\sqrt{n})=0$ was solved by multiplying and dividing by $(\sqrt{n+1}+\sqrt{n})$ to reach $$0<\frac{1}{\sqrt{n+1}+\sqrt{n}}<\frac{1}{2\sqrt{n}}<\epsilon$$ $$\implies n>\frac{1}{4\epsilon^2}$$
\begin{align} \sqrt[8]{n^2+1}-\sqrt[4]{n+1}&=\big(n^2+1\big)^{1/8}-(n+1)^{2/8} \\ &=\big(n^2+1\big)^{1/8}-(n^2+2n+1)^{1/8}\\ &=\frac{\big(1+\tfrac{1}{n^2}\big)^{1/8}-1-\Big(\big(1+\tfrac{2}{n}+\tfrac{1}{n^2}\big)^{1/8}-1\Big)}{\tfrac{1}{n^{1/4}}} \end{align} The function $f(x)=x^{1/8}$, $x>0$ has a finite derivative at $x=1$. Letting $h=\frac{1}{n^2}$, we obtain that $$\frac{f(1+h)-f(1)}{h^{1/8}}=h^{7/8}\frac{f(1+h)-f(1)}{h}\xrightarrow{h\rightarrow0}0\cdot f'(1)=0.$$ Similarly, letting $h=\frac{1}{n}$ we obtain $$\frac{f(1+2h+h^2)-f(1)}{h^{1/4}}=(2h^{3/4}+h^{7/4})\frac{f(1+2h+h^2)-f(1)}{2h+h^2}\xrightarrow{h\rightarrow0}0\cdot f'(1)=0.$$ Hence $$\lim_{n\rightarrow\infty}\left[\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right]=0.$$
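A quick numerical check of the limit (added by me); the difference decays roughly like $-1/(4n^{3/4})$:

```python
seq = lambda n: (n * n + 1) ** 0.125 - (n + 1) ** 0.25

for n in (10, 10 ** 3, 10 ** 6):
    print(n, seq(n))           # shrinks toward 0, roughly like -1/(4 n^{3/4})
assert abs(seq(10 ** 6)) < 1e-3
```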
{ "language": "en", "url": "https://math.stackexchange.com/questions/4552533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Solving ODE with derivative boundary condition with finite difference method by central approximation I am trying to solve the following ODE: $$ \frac{d^2y}{dx^2}=y(x) $$ Where I have two boundary conditions: $ y(0)=10 $; and $ \frac{dy(x\rightarrow\infty)}{dx}=0 $ I am trying to solve the problem through finite difference by central approximation, so: $$ y''(x_i)=\frac{y(x_{i+1})-2\cdot y(x_i)+y(x_{i-1})}{h^2} $$ Which if plugged into my initial ODE: $$ y(x_{i+1})+(-2-h^2)\cdot y(x_i)+y(x_{i-1})=0 $$ In this case I am discretizing over a range of: $ 0\leq x \leq 5$, with 500 nodes (so $h=0.01$), and $i$ goes from 1 to $N+1$, where the approximated ODE would be valid for $2\leq i \leq N$ (all discretized points except for the boundaries). Hope I made sense so far and you are still with me here, my question is, how do I set up the right hand side boundary condition $ (\frac{dy(x\rightarrow\infty)}{dx}=0) $, since I do not have a specific value to set it to? Thanks for your time!
At the risk of coming across as overbearing, I will answer my own question (XD)... I came across this MIT post that solves exactly this problem. The condition can be evaluated again by using finite difference, where by central approximation: $$ \frac{dy(x_i)}{dx}=\frac{y(x_{i+1})-y(x_{i-1})}{2\cdot h} $$ Applied to the BC: $$ \frac{dy(x_N)}{dx}=\frac{y(x_{N+1})-y(x_{N-1})}{2\cdot h}=0\rightarrow y(x_{N+1})-y(x_{N-1}) \approx y(x_{N+2})-y(x_{N})=0$$ Where the approximation works under the assumption of $ h\rightarrow 0$. Here we are adding an additional term $(y_{N+2})$, so the system of equations used to solve the approximated ODEs at each of the discretized points will also have to grow from N+1, to N+2, where the new value added to the end will be "fictitious", and should later be removed from the final solution. I'm unsure as to how well I explained this, but in case of any doubt, follow the link if it's still up, they do a far better job at explaining it than me :').
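The ghost-node scheme above can be sketched as follows (my own implementation, not from the MIT notes; the truncation at $L=5$ and roughly 500 nodes follow the post, and I compare against the exact solution of the truncated problem, $y(x)=10\cosh(L-x)/\cosh L$, rather than $10e^{-x}$):

```python
import math

L, N = 5.0, 500
h = L / N
diag = -(2.0 + h * h)

# Tridiagonal system for y_1..y_N (y_0 = 10 is known).
# Interior rows:  y_{i-1} + diag*y_i + y_{i+1} = 0
# Last row uses the ghost node y_{N+1} = y_{N-1}:  2*y_{N-1} + diag*y_N = 0
a = [1.0] * N          # sub-diagonal   (a[0] unused)
b = [diag] * N         # main diagonal
c = [1.0] * N          # super-diagonal (c[-1] unused)
a[N - 1] = 2.0
d = [0.0] * N
d[0] = -10.0           # known y_0 = 10 moved to the right-hand side

# Thomas algorithm (forward elimination + back substitution)
for i in range(1, N):
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    d[i] -= m * d[i - 1]
y = [0.0] * N
y[N - 1] = d[N - 1] / b[N - 1]
for i in range(N - 2, -1, -1):
    y[i] = (d[i] - c[i] * y[i + 1]) / b[i]

# exact solution of y'' = y, y(0) = 10, y'(L) = 0 on the truncated domain
exact = lambda x: 10.0 * math.cosh(L - x) / math.cosh(L)
err = max(abs(y[i] - exact((i + 1) * h)) for i in range(N))
print(f"max error vs exact solution: {err:.2e}")
assert err < 1e-2
```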
{ "language": "en", "url": "https://math.stackexchange.com/questions/4552679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the number of vertices of degree 3 and of degree 4 in a planar graph The question is: A planar, loop-free connected graph has 8 vertices, each of either degree 3 or 4, and 7 faces. How many of the vertices are of degree 3, and how many are of degree 4? Since it's a planar graph I assume we use $|V| - |E| + \#(\text{faces}) = 2$ and we change the formula to $E = V + F - 2$ and we get 13 edges. I'm not sure how to go from there.
We know Euler's formula for a connected planar simple graph $G=(V,E)$ with $e=|E|$ and $v=|V|$: $$ v+f=e+2.\tag{1} $$ From $(1)$ we get that your graph has $13$ edges. Also we know the following formula (the "First Theorem of Graph Theory" or the "Handshaking Lemma"): $$ \sum_{{v}\in{V}}\deg(v)=2e.\tag{2} $$ From $(2)$ we obtain $$ \begin{cases} v_1+v_2=8,\\ 3v_1+4v_2=26.\tag{3} \end{cases} $$ Here $v_1$ is the number of vertices of degree $3$ and $v_2$ is the number of vertices of degree $4$. From $(3)$ we obtain that your graph has $2$ vertices of degree $4$ and $6$ vertices of degree $3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4552851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
if $f:\mathbb R\rightarrow \mathbb R$ is continuous, and $f(x)=e^{\frac{x^2}{2}}+\int_0^x tf(t)dt$, determine which of the following is right $f(x)=e^{\frac{x^2}{2}}+\int_0^x tf(t)dt$ and the given options are A) $5<f(\sqrt2)<6$ b) $2<f(\sqrt2)<3$ c) $3<f(\sqrt2)<4$ d) $4<f(\sqrt2)<5$ now, as it's only continuous it's surely integrable and not necessarily differentiable so, we have. $f(x)=e^{\frac{x^2}{2}}+\int_0^x tf(t)dt----1$ $e^{\frac{x^2}{2}}+t \int_0^xf(t)dt-\int_0^x\int_0^xf(t)dt$ but this doesn't help at all I'm tempted to differentiate to maybe obtain a useful expression, so $$f'(x)=2xe^{\frac{x^2}{2}}+xf(x)$$ which allows me to use integration by parts in $1$ so $f(x)=e^{\frac{x^2}{2}}+\frac{x^2f(x)}{2}-\int_0^x \frac{x^2}{2}f'(x)dx$ which can further be simplified using $f'(x)$ which gives us $f(x)=e^{\frac{x^2}{2}}+\frac{x^2f(x)}{2}-\int_0^x \frac{x^2}{2}2xe^{\frac{x^2}{2}}+xf(x)dx$ beyond which I'm lost, as the integral seems to become zero I'd really appreciate a HINT NOT AN EXPLICIT SOLUTION FOR NOW
Hint: Differentiating the given equation (and noting that $\frac{d}{dx}e^{\frac{x^2}{2}}=xe^{\frac{x^2}{2}}$, so the derivative in your attempt should have $x$ rather than $2x$) gives $$f'(x)=xe^{\frac{x^2}{2}}+xf(x)$$ and hence $$ f'(x)-xf(x)=xe^{\frac{x^2}{2}}. \tag1$$ The integral factor of (1) is $$ \mu(x)=\exp\bigg(\int(-x)dx\bigg)=e^{-\frac{x^2}{2}} $$ and multiplying (1) by $\mu(x)$ gives $$ \bigg(e^{-\frac{x^2}{2}}f(x)\bigg)'=x. $$ Together with $f(0)=1$, you can do the rest.
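Carrying this through (my own check): note that $\frac{d}{dx}e^{x^2/2}=xe^{x^2/2}$, so the ODE is $f'(x)-xf(x)=xe^{x^2/2}$, and with $f(0)=1$ one gets $f(x)=\big(1+\tfrac{x^2}{2}\big)e^{x^2/2}$, hence $f(\sqrt2)=2e\approx 5.44$, i.e. option (a). A numeric verification against the original integral equation:

```python
import math

f = lambda x: (1 + x * x / 2) * math.exp(x * x / 2)   # candidate solution, f(0) = 1

def rhs(x, n=20000):
    # e^{x^2/2} + integral_0^x t f(t) dt, by midpoint quadrature
    h = x / n
    integral = sum(((i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h
    return math.exp(x * x / 2) + integral

for x in (0.5, 1.0, math.sqrt(2)):
    assert abs(f(x) - rhs(x)) < 1e-6
print(f"f(sqrt 2) = {f(math.sqrt(2)):.4f}")   # 2e, about 5.4366, so option (a)
```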
{ "language": "en", "url": "https://math.stackexchange.com/questions/4553026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Product of Inverse Hankel Matrix Consider $H_n$, the $n\times n$ Hankel matrix of the Catalan numbers starting from $2$: $$H_n = \begin{bmatrix} 2 & 5 & 14 & 42 & 132\\ 5 & 14 & 42 & 132 & 429\\ 14 & 42 & 132 & 429 & 1430 & \cdots\\ 42 & 132 & 429 & 1430 & 4862\\ 132 & 429 & 1430 & 4862 & 16796\\ &&\vdots\end{bmatrix}$$ It is known that $\text{det}(H_n) = n + 1$. (see Hankel Matrix) Consider the column vector, $$c_n = \begin{bmatrix}1 \\ 2 \\ 5 \\ 14 \\ \vdots \end{bmatrix}$$ that contains the first $n$ Catalan numbers. I have found a pattern that I have checked up to $n=240$, that $$(c_n)^T(H_n)^{-1}(c_n) = \frac{n}{n+1}$$ Is there any method I can take to prove this, or is there a counterexample? Note also that this product is the only non-zero eigenvalue of $(c_n)(c_n)^T(H_n)^{-1}$.
In the provided link, it says that if $S_{n+1}=\begin{pmatrix} 1 & v^t\\ v & H_n \\ \end{pmatrix}$, where $v^t=(1,2,5,\ldots)$, then $\det(S_{n+1})=1$. Notice that $$\begin{pmatrix} 1 & -v^tH_n^{-1}\\ 0 & Id \\ \end{pmatrix}\begin{pmatrix} 1 & v^t\\ v & H_n \\ \end{pmatrix}\begin{pmatrix} 1 & 0^t\\ -H_n^{-1}v & Id \\ \end{pmatrix}=\begin{pmatrix} 1-v^tH_n^{-1}v & 0^t\\ 0 & H_n \\ \end{pmatrix}.$$ Now, $\det\begin{pmatrix} 1 & -v^tH_n^{-1}\\ 0 & Id \\ \end{pmatrix}=\det\begin{pmatrix} 1 & 0^t\\ -H_n^{-1}v & Id \\ \end{pmatrix}=1$, which implies $$1=\det\begin{pmatrix} 1 & v^t\\ v & H_n \\ \end{pmatrix}=(1-v^tH_n^{-1}v).\det(H_n)=(1-v^tH_n^{-1}v).(n+1)$$ Therefore, $v^tH_n^{-1}v=1-\frac{1}{n+1}=\frac{n}{n+1}.$
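An exact-arithmetic check of the identity for small $n$ (my own script; it solves $H_n x = c_n$ with Fraction-based Gaussian elimination, which needs no pivoting since the leading minors $\det H_k = k+1$ are nonzero):

```python
from fractions import Fraction
from math import comb

catalan = lambda m: comb(2 * m, m) // (m + 1)

def quad_form(n):
    # returns c^T H^{-1} c by solving H x = c exactly
    H = [[Fraction(catalan(i + j)) for j in range(1, n + 1)] for i in range(1, n + 1)]
    c = [Fraction(catalan(i)) for i in range(1, n + 1)]
    A = [row[:] + [ci] for row, ci in zip(H, c)]   # augmented matrix
    for k in range(n):                             # forward elimination
        piv = A[k][k]
        for i in range(k + 1, n):
            m = A[i][k] / piv
            A[i] = [aij - m * akj for aij, akj in zip(A[i], A[k])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return sum(ci * xi for ci, xi in zip(c, x))

for n in range(1, 8):
    assert quad_form(n) == Fraction(n, n + 1)
print("c^T H^{-1} c = n/(n+1) verified for n = 1..7")
```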
{ "language": "en", "url": "https://math.stackexchange.com/questions/4553175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Closed Form Formula for Nonlinear Recurrence $a_{n+1}=\frac{a_{n}}{2} + \frac{5}{a_{n}}$ I'm trying to find a closed form solution to the sequence $a_{n+1}=\frac{a_{n}}{2} + \frac{5}{a_{n}}$ I tried using a generating function approach in the following way: Let $$f(x) = \sum_{n=1}^\infty a_n x^n$$ Then multiplying the original equation with $x^n$ and summing over all $n$, we get: $$\sum_{n=1}^\infty a_{n+1}x^n= \sum_{n=1}^\infty \frac{a_{n}}{2} x^n + \sum_{n=1}^\infty\frac{5}{a_{n}} x^n$$ $$= \frac{f(x) - a_1}{x} = \frac{f(x)}{2} + 5 \sum_{n=1}^\infty \frac{x^n}{a_n}$$ But now I don't know how to simplify the last term and I get stuck. I put it on Wolframalpha and found the general solution to the recurrence relation as: $$a_n = -i \sqrt{10} \cot\left(c_1 2^n\right)$$
Based on the Wolfram Alpha answer, this is how one would get a closed form for $a_n$: First, the substitution $a_n = b_n\sqrt{10}$ yields the equation $$b_{n+1}=\frac{b_n+b_n^{-1}}{2}$$ Now take $b_n = \coth c_n$ to get $$\coth c_{n+1} = \frac{\coth c_n + \tanh c_n}{2} = \coth (2c_n)$$ From this, $c_{n+1} = 2 c_n$. This means that $c_n = k\, 2^n$ for some $k$. So $a_n = \sqrt{10} \coth(k\, 2^n)$. It's not hard to check that this satisfies the recurrence relation.
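A numerical confirmation (my own script; I parametrize as $c_n = c_1 2^{n-1}$, where $c_1=\operatorname{arccoth}(a_1/\sqrt{10})=\operatorname{artanh}(\sqrt{10}/a_1)$ is real whenever $|a_1|>\sqrt{10}$):

```python
import math

coth = lambda z: math.cosh(z) / math.sinh(z)

a1 = 7.0                                # any start with |a1| > sqrt(10) keeps c_1 real
c1 = math.atanh(math.sqrt(10) / a1)     # c_1 = arccoth(a1 / sqrt(10))
a = a1
for n in range(1, 12):
    closed = math.sqrt(10) * coth(c1 * 2 ** (n - 1))
    assert abs(a - closed) < 1e-9 * abs(a)
    a = a / 2 + 5 / a                   # one step of the recurrence
print("closed form matches the recurrence for n = 1..11")
```

Note how the iterates converge to the fixed point $\sqrt{10}$, matching $\coth(c_1 2^{n-1})\to 1$.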
{ "language": "en", "url": "https://math.stackexchange.com/questions/4553375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
I want to prove $f$ in continuous on $[a,b]$ implies $F(x)=\sup f([a,x])$ is continuous on $[a,b]$? Goal: I want to prove $f$ in continuous on $[a,b]$ implies $F(x)=\sup f([a,x])$ is continuous on $[a,b]$. Proof: Suppose $f$ in continuous on $[a,b]$. Let $c \in (a,b)$. Let $\epsilon>0$. By the continuity of $f$ at $c$, there exists $\delta>0$, such that $-\epsilon/2 +f(c)<f(x)<\epsilon/2 +f(c)$ for all $x \in (c-\delta, c+\delta)$ so $f(x)<\epsilon/2 +f(c)$ for all $x \in [c , c+ \frac{\delta}{2}]$ hence, $\sup f([c , c+ \frac{\delta}{2}]) \le\epsilon/2+f(c) \le \epsilon/2 +\sup f([a,c])$. so if $y \in f([c , c+ \frac{\delta}{2}]) \cup f([a,c])$, then $y \le \epsilon/2 +\sup f([a,c])$ hence, $\sup f([a ,c+ \frac{\delta}{2}]) \le \epsilon/2 +\sup f([a,c])< \epsilon +\sup f([a,c])$. by the monotone property of suprema we have $\sup f([a ,x])< \epsilon +\sup f([a,c])$ for all $x \in (c- \frac{\delta}{2},c+ \frac{\delta}{2})$ moreover, suppose $\sup f([a,c])-\sup f([a,c-\frac{\delta}{2}])\ge \epsilon$ in this case we must have: $\sup f([c-\frac{\delta}{2},c])-\sup f([a,c-\frac{\delta}{2}])\ge \epsilon$ because otherwise $0 \ge \epsilon >0$ then, $\sup f([c-\frac{\delta}{2},c])\ge \epsilon +\sup f([a,c-\frac{\delta}{2}]) \ge \epsilon + f(c-\frac{\delta}{2})$ by the extreme value theorem, there exists $t \in [c-\frac{\delta}{2},c]$ s.t. $|f(t)-f(c-\frac{\delta}{2})| \ge \epsilon$ but $|f(t)-f(c)|<\epsilon/2$ and $|f(c-\frac{\delta}{2})-f(c)|<\epsilon/2$ so, $|f(t)-f(c-\frac{\delta}{2})|\le|f(t)-f(c)|+|f(c-\frac{\delta}{2})-f(c)|<\epsilon/2+\epsilon/2=\epsilon$, which is a contradiction. so we must have $\sup f([a,c])-\sup f([a,c-\frac{\delta}{2}])< \epsilon$. 
by the monotone property of suprema $\sup f([a,c])-\epsilon< \sup f([a,c-\frac{\delta}{2}])\le \sup f([a,x])$ for all $x \in (c- \frac{\delta}{2},c+ \frac{\delta}{2})$ in conclusion, $|x-c|<\delta/2 \implies \sup f([a,c])-\epsilon< \sup f([a,x])< +\sup f([a,c])+\epsilon \implies |\sup f([a,x])-\sup f([a,c])|<\epsilon$ $$$$ I would really appreciate it if community members could check the details of my proof. Are there any mistakes? Is there anything weird about it? What are your general thoughts? I have tried to include as much justification as I thought reasonable, but is there anything you find lacking? Note that I have only attempted to prove continuity on $(a,b)$ and will update to deal with the endpoints later.
Your proof is correct, but somewhat laborious. Here is my suggestion. We have to show that for each $c \in [a,b]$ and each $\epsilon > 0$ there exists $\delta > 0$ such that $\lvert x - c \rvert < \delta$ implies $\lvert F(x) - F(c) \rvert < \epsilon$, i.e. $F(c) - \epsilon < F(x) < F(c) + \epsilon$. Since $F$ is monotonically increasing, this is equivalent to the existence of $\delta > 0$ such that * *For $x$ such that $c \le x < c + \delta$ we have $F(x) < F(c) + \epsilon$. *For $x$ such that $c - \delta < x \le c$ we have $F(c) < F(x) + \epsilon$. Since $f$ is continuous, there exists $\delta > 0$ such that $\lvert \xi - c \rvert < \delta$ implies $\lvert f(\xi) - f(c) \rvert < \epsilon/3$. This $\delta$ will do for 1. and 2. 1.: For $c \le x < c + \delta$ we have $$F(x) = \sup f([a,x]) = \max(\sup f([a,c]), \sup f([c,x])) = \max(F(c),\sup f([c,x])).$$ If $\xi \in [c,x]$ we have $\lvert \xi - c \rvert < \delta$, thus $f(\xi) < f(c) + \epsilon/3$ which implies $\sup f([c,x]) \le f(c) + \epsilon/3 < f(c) + \epsilon \le F(c) + \epsilon$. Therefore $\max(F(c),\sup f([c,x])) < F(c) + \epsilon$. 2.: For $c - \delta < x \le c$ we have $$F(c) = \sup f([a,c]) = \max(\sup f([a,x]), \sup f([x,c])) = \max(F(x),\sup f([x,c])).$$ If $\xi \in [x,c]$ we have $\lvert \xi - c \rvert \le \lvert x- c \rvert < \delta$, thus $$f(\xi) = f(x) + f(\xi) - f(x) \le f(x) + \lvert f(\xi) - f(x) \rvert \le f(x) + \lvert f(\xi) - f(c) \rvert + \lvert f(c) - f(x) \rvert \\ < f(x) + \epsilon/3 + \epsilon/3$$ which implies $\sup f([x,c]) \le f(x) + 2\epsilon/3 < f(x) + \epsilon \le F(x) + \epsilon$, hence $\max(F(x),\sup f([x,c])) < F(x) + \epsilon$. Note that these arguments also cover the cases $c = a$ and $c = b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4553496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that there exists a numeric sequence $c_1, c_2,...$ such that $\xi_n / c_n \to 0$ almost surely as $n \to \infty$. Let $\xi_1, \xi_2,...$ be a sequence of random variables defined on the same probability space. My attempt: I guess the proof needs the first Borel–Cantelli lemma, but I don't know how to proceed.
Every random variable $X$ is tight, so for every $\epsilon >0$ there exists an $m \ge 1$ such that: $$P(|X|\geq m) \leq \epsilon.$$ So for the given sequence $\xi_i$ we can choose numbers $m_i \ge 1$ such that $$\sum_{i=1}^\infty P(|\xi_i|\geq m_i)<\infty,$$ for instance $P(|\xi_i|\geq m_i) \le 2^{-i}$. By the first Borel–Cantelli lemma, $$P(|\xi_i|\geq m_i \text{ infinitely often}) =0,$$ i.e. almost surely $|\xi_i| < m_i$ for all but finitely many $i$. Now choose $c_i = i\, m_i$: almost surely $|\xi_i|/c_i \le 1/i$ for all but finitely many $i$, so $\xi_i/c_i \to 0$ almost surely.
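A simulation sketch of this construction for one concrete case (i.i.d. standard normals; the bound $m_i = i$, the sample size, and the seed are all arbitrary choices, not part of the argument): the tail sum is finite, so Borel–Cantelli applies, and $c_i = i \cdot m_i = i^2$ visibly crushes the sequence.

```python
import numpy as np
from statistics import NormalDist

# for xi_i i.i.d. N(0,1), the choice m_i = i already gives a summable tail:
tail_sum = sum(2 * (1 - NormalDist().cdf(i)) for i in range(1, 200))
assert tail_sum < 0.4      # sum_i P(|xi_i| >= i) converges (terms die like e^{-i^2/2})

# Borel-Cantelli: a.s. |xi_i| < m_i = i for all large i, so with c_i = i * m_i = i^2
# the ratios |xi_i| / c_i <= 1/i eventually; inspect one simulated path:
rng = np.random.default_rng(0)
xi = rng.standard_normal(10_000)
i = np.arange(1, xi.size + 1)
ratio = np.abs(xi) / i**2
```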
{ "language": "en", "url": "https://math.stackexchange.com/questions/4553877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Dividing 100 members into two groups. The question is: An organisation has 100 members where each member is friends with exactly 56 other members (friendship is mutual). It is also known that there are 50 members who are friends with one another. Show that the 100 members can be divided into two groups such that for each group, each two members in it are friends. I'll admit I haven't the slightest idea how to solve this question. My first thought was to use graph theory, but I don't think it would help to solve this question. Can anyone give me a clue?
Since there is a group of $50$ members who are all friends of each other, let's start with this group. Call it group "A" with members $A_1, A_2, \cdots, A_{50}$, and let the remaining $50$ members form a group $B$ with members $B_1, B_2, \cdots, B_{50}$. In group $A$, each member is friends with $49$ other $A$ group members. Since each $A_k$ is friends with exactly $56$ other members, each $A_k$ must be friends with exactly $7$ members of group $B$. For example, $A_1$ may know $B_1, B_2, \cdots, B_7$ and no more. Let's use the ordered pair $\left(A_i, B_k \right)$ to denote that $A_i$ and $B_k$ are friends; then there are $50 \times 7 = 350$ such ordered pairs. Now consider group $B$. Each $B$ group member can know at most $49$ other $B$ group members, so each $B$ group member must know at least $7$ $A$ group members. Thus, from the perspective of group $B$, the number of ordered pairs $\left(A_i, B_k \right)$ is at least $50 \times 7 = 350$; moreover, if any $B$ knows more than $7$ $A$'s, the number of ordered pairs is greater than $350$. Since the number of ordered pairs is exactly $350$, each $B$ must know exactly $7$ $A$'s and hence exactly $49$ other $B$'s. In other words, any two $B$ group members are friends of each other.
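The count can be sanity-checked by exhibiting one concrete organisation with these properties: take both halves as 50-cliques and wire each $B_j$ to seven $A$'s in a circulant pattern (this particular wiring is just one convenient choice). Every member then has exactly 56 friends:

```python
import itertools
import numpy as np

n = 100
adj = np.zeros((n, n), dtype=bool)
A, B = range(0, 50), range(50, 100)

for u, v in itertools.combinations(A, 2):   # the given 50-clique
    adj[u, v] = adj[v, u] = True
for u, v in itertools.combinations(B, 2):   # the clique the argument predicts
    adj[u, v] = adj[v, u] = True
for j in range(50):                          # one 7-regular bipartite wiring
    for k in range(7):
        adj[j, 50 + (j + k) % 50] = adj[50 + (j + k) % 50, j] = True

degrees = adj.sum(axis=1)
assert np.all(degrees == 56)                 # everyone has exactly 56 friends
```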
{ "language": "en", "url": "https://math.stackexchange.com/questions/4554041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving two chords are perpendicular I am stuck on the following problem: Let A, B, C, and D be placed consecutively on a circle. Let W, X, Y, and Z be the midpoints of AB, BC, CD, and DA, respectively. Show that chords WY and XZ are perpendicular. I have been trying to relate things to 180 degrees, but I am getting stuck. Attached to this is my work so far. Anyone have any tips or tricks?
EDIT 1: We use a vector approach and begin with the origin at $A$. A circle of unit diameter tangent to the $x$-axis at $A$ has the polar equation $ r= \sin \theta$; in cartesian coordinates, the point $B$ at polar angle $\alpha$ is $$ B: (x,y)= (\sin \alpha \cos \alpha, \sin^2 \alpha) $$ Let $\angle{BAC}= \beta$ and $\angle {CAD}=\gamma$, and mark out the arc midpoints $(W,X,Y,Z)$ by chasing angles around the circle; the inscribed angle subtended by half an arc is half of that for the full arc. $$ A:(0,0);~B: (x,y)=( \sin \alpha \cos \alpha, \sin^2 \alpha)$$ $$ W: ( \sin \alpha/2 \cos \alpha/2, \sin^2 \alpha/2)$$ $$ C:( \sin (\alpha+\beta) \cos(\alpha+\beta),\sin^2(\alpha+\beta) ) $$ $$ X:( \sin (\alpha+\beta/2) \cos(\alpha+\beta/2),\sin^2(\alpha+\beta/2) ) $$ $$ D:( \sin (\alpha+\beta+\gamma) \cos(\alpha+\beta+\gamma),\sin^2(\alpha+\beta+\gamma) ) $$ $$ Y:( \sin (\alpha+\beta+\gamma/2) \cos(\alpha+\beta+\gamma/2),\sin^2(\alpha+\beta+\gamma/2) ) $$ $$ Z:( \sin (\alpha/2+\beta/2+\gamma/2+\pi/2) \cos(\alpha/2+\beta/2+\gamma/2+\pi/2),\sin ^2(\alpha/2+\beta/2+\gamma/2+\pi/2)) $$ We need to show that the vector $\vec {ZX}$ is perpendicular to $\vec{YW}$. This has been checked analytically: the dot product of the vectors connecting alternate arc midpoints vanishes identically for arbitrary $ (\alpha,\beta,\gamma)$. To reduce the tedium of the trig simplification, the above was coded in Mathematica. $$ (\vec Z-\vec X)\cdot(\vec Y-\vec W)=0~; $$ Aliter: in hindsight, taking the center of the circle as the origin is simpler. For arbitrary $(al,bt,gm)$: > Clear["`*"]; A={0,0}; B={Cos[al],Sin[al]};W={Cos[al/2],Sin[al/2]}; X={ > Cos[al+bt/2],Sin[al+bt/2]};CC={ Cos[al+bt],Sin[al+bt]}; > Y={Cos[al+bt+gm/2],Sin[al+bt+gm/2]}; > Z={Cos[al/2+bt/2+gm/2+Pi],Sin[al/2+bt/2+gm/2+Pi]}; X-Z ;Y-W; > Simplify[(X-Z).(Y-W)]
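Following this answer's reading of $W, X, Y, Z$ as arc midpoints (points on the circle), the zero dot product can also be checked numerically. A sketch on a unit circle centred at the origin, with random arc sizes (the ranges and seed are arbitrary):

```python
import numpy as np

def pt(theta):
    # point on a unit circle centred at the origin
    return np.array([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(1)
dots = []
for _ in range(200):
    thA = rng.uniform(0.0, 2 * np.pi)
    al, bt, gm = rng.uniform(0.1, 1.5, size=3)   # arcs AB, BC, CD
    thB, thC, thD = thA + al, thA + al + bt, thA + al + bt + gm
    W = pt((thA + thB) / 2)             # midpoint of arc AB
    X = pt((thB + thC) / 2)             # midpoint of arc BC
    Y = pt((thC + thD) / 2)             # midpoint of arc CD
    Z = pt((thD + thA) / 2 + np.pi)     # midpoint of the remaining arc DA
    dots.append((Y - W) @ (X - Z))      # should vanish: WY is perp. to XZ

max_dot = max(abs(d) for d in dots)
```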
{ "language": "en", "url": "https://math.stackexchange.com/questions/4554340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Proof that $\frac{xy^2}{x^2+y^6}$ is unbounded on $\mathbb{R}^2$ I am trying to prove that $\frac{xy^2}{x^2+y^6}$ is unbounded. Is the following correct? Consider arbitrary $M \in \mathbb{R} \setminus\{0\}$ (the case in which $M=0$ is trivial). Set $y:=\sqrt{\frac{|x|}{2}}$. Thus, for any $x$ $\in \mathbb{R} \setminus \{0\}$, we have that $|x|>y^2$, and so that $\frac{xy^2}{x^2+y^6}>\frac{y^4}{x^2+y^6}$. We now argue that there exists an $x \in \mathbb{R}$ such that $\frac{xy^2}{x^2+y^6}>\frac{y^4}{x^2+y^6}>M$. Note that $\frac{y^4}{x^2+y^6}$ is monotonically increasing as $x \rightarrow 0$. Also, note that $\frac{y^4}{x^2+y^6}$ is bounded above by $\frac{1}{y^2}=\frac{2}{x}$. Thus, as $x \rightarrow 0$, $\frac{y^4}{x^2+y^6}$ becomes arbitrarily large (as the upper bound towards which it is monotonically increasing becomes arbitrarily large). Thus, there must exist an $x \in \mathbb{R}$ such that $\frac{xy^2}{x^2+y^6}>M$, for $(x,y):=(x, \sqrt{\frac{x}{2}})$.
Let $x=\frac{1}{n^3}$, $y=\frac{1}{n}$, where $n$ is a positive integer. Then $$ \frac{xy^2}{x^2+y^6}=\frac{n}{2}, $$ which $\to \infty$ when $n \to \infty$.
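The algebra along this path can be confirmed with exact rational arithmetic (the sample values of $n$ are arbitrary):

```python
from fractions import Fraction

# along x = 1/n^3, y = 1/n the expression equals n/2 exactly
for n in (1, 2, 10, 1000):
    x, y = Fraction(1, n**3), Fraction(1, n)
    value = (x * y**2) / (x**2 + y**6)
    assert value == Fraction(n, 2)
```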
{ "language": "en", "url": "https://math.stackexchange.com/questions/4554629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Study irreducibility of $f(x) = x^{20} + 5x^{15}+25x^{10}+125x^5+625$ in $\mathbb Q[x]$ I started with the idea of looking for a $b \in \mathbb Z$ such that $f(x+b)$ is a polynomial to which I can apply Eisenstein's criterion with $p = 5$. Since the binomial coefficients $\dbinom{5}k$ are multiples of $5$, I just need the constant term to be divisible by $5$ and not by $5^2$. Let's write this term as $i(b) = b^{20} +5b^{15}+25b^{10}+125b^5+625$. By Fermat's little theorem $b^5 \equiv b \pmod 5$, so $i(b) \equiv b^4 \pmod 5$. But I can't find such a $b$ this way and I do not know how to continue. Any advice? Thanks
Because $2$ is a primitive root modulo $25$, the cyclotomic polynomial $\Phi_{25}(x)$ is irreducible modulo two (or in the ring $\Bbb{Z}_2[x]$). Your polynomial reduces to $\Phi_{25}(x)$ modulo two, so it is irreducible in $\Bbb{Z}[x]$. Gauss's Lemma and friends then imply that it is irreducible in $\Bbb{Q}[x]$ as well.
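Both claims here are small finite computations, so they can be verified by brute force. A sketch encoding $\mathrm{GF}(2)[x]$ polynomials as Python integer bitmasks (the encoding is just a convenience, not part of the argument):

```python
def gf2_mod(a, b):
    """Remainder of a modulo b, both GF(2)[x] polynomials stored as bitmasks."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)   # cancel the leading term
    return a

# 2 is a primitive root mod 25: its multiplicative order is phi(25) = 20
order = next(k for k in range(1, 21) if pow(2, k, 25) == 1)
assert order == 20

# f reduces mod 2 to x^20 + x^15 + x^10 + x^5 + 1 = Phi_25(x)
f = sum(1 << k for k in (0, 5, 10, 15, 20))
# a degree-20 polynomial is irreducible over GF(2) iff it has no divisor
# of degree 1..10; try every candidate of each such degree
irreducible = all(gf2_mod(f, g) != 0
                  for d in range(1, 11)
                  for g in range(1 << d, 1 << (d + 1)))
assert irreducible
```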
{ "language": "en", "url": "https://math.stackexchange.com/questions/4554783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
orientation difference between two triangles in 3D space Let there be two sets of three points in 3D space, representing two congruent isosceles triangles. The apexes of both triangles are located at the point (0,0,0). How do I calculate the difference between the orientations of the two triangles in terms of roll, yaw and pitch? That is, by how much would I have to rotate one triangle in order for it to be in the exact same location as the other one?
Let the first triangle be $\triangle ABC$ and the second triangle be $\triangle A B' C' $ where $A = (0,0,0)$; we want to relate the vertices through a rotation. So we have $B' = R B $ $C' = R C $ but we still need a third independent vector to specify the $3 \times 3$ matrix $R$. That third vector can conveniently be chosen as the cross product between $B'$ and $C'$ and between $B$ and $C$, and we have from the properties of a rotation matrix that $ B' \times C' = R B \times R C = R (B \times C) $ Putting all of the above in matrix format, $\begin{bmatrix} B' && C' && B' \times C' \end{bmatrix} = R \begin{bmatrix} B && C && B \times C \end{bmatrix} $ from which it follows that $ R = \begin{bmatrix} B' && C' && B' \times C' \end{bmatrix} \begin{bmatrix} B && C && B \times C \end{bmatrix}^{-1} $ This gives the complete rotation $R$ from $\triangle ABC $ to $\triangle A B'C'$. Now suppose the triangle being rotated is initially (at $t = 0$) at $\triangle A B C $ and finally (at $t = T$) at $\triangle A B' C' $, and you want to find the coordinates of the triangle vertices at an instant $t$, $0 \le t \le T$; then you need to decompose the overall rotation matrix $R$ into the axis-angle format (which you can also call the Rodrigues rotation matrix format). Any rotation matrix can be put in the form $ R = \mathbf{aa}^T + (I - \mathbf{aa}^T ) \cos \theta + S_a \sin \theta $ where $ S_a = \begin{bmatrix} 0 && - a_z && a_y \\ a_z && 0 && - a_x \\ -a_y && a_x && 0 \end{bmatrix} $ One can show, based on this expression (which is the Rodrigues rotation matrix formula), that the angle of rotation satisfies $ \text{trace}(R) = R_{11} + R_{22} + R_{33} = 1 + 2 \cos \theta $ from which you can solve for $\theta$ (the overall rotation angle). Now we need to find the axis of rotation $\mathbf{a}$.
Careful consideration of the off-diagonal entries of the above matrix $R$ leads to the following formulas: $ a_x = \dfrac{ R_{32} - R_{23} }{ 2 \sin \theta }$ $ a_y = \dfrac{ R_{13} - R_{31} }{ 2 \sin \theta }$ $ a_z = \dfrac{ R_{21} - R_{12} } {2 \sin \theta } $ Having found the axis of rotation $\mathbf{a}$ and the total angle of rotation $\theta$ (which corresponds to $ t = T$ ), and assuming the triangle rotates at a constant rate, the angle of rotation at $ t $, where $0 \le t \le T$, is given by $ \theta_1 = \bigg( \dfrac{t}{T} \bigg) \theta $ We can now compute the rotation matrix at $ t $; it is the same as before, but with the angle $\theta$ replaced by $\theta_1$: $ R(t) = \mathbf{aa}^T + (I - \mathbf{aa}^T ) \cos \theta_1 + S_a \sin \theta_1 $ Having found the rotation matrix $R(t)$, we can now compute the rotated triangle vertices at the instant $t$: $ B'(t) = R(t) B $ $ C'(t) = R(t) C $ Of course, vertex $A$ is unaffected by the rotation and stays at $(0,0,0)$.
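The whole recipe above can be sketched numerically (the example rotation and triangle below are arbitrary choices): recover $R$ from the two triangles, then extract the axis and angle from it.

```python
import numpy as np

def rotation_between(B, C, Bp, Cp):
    """R with Bp = R @ B and Cp = R @ C; cross products give the third column."""
    M = np.column_stack([B, C, np.cross(B, C)])
    Mp = np.column_stack([Bp, Cp, np.cross(Bp, Cp)])
    return Mp @ np.linalg.inv(M)

def axis_angle(R):
    """Axis and angle of R via the trace and off-diagonal formulas (0 < theta < pi)."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return axis, theta

# example: a rotation of 0.7 rad about the z-axis
t0 = 0.7
c, s = np.cos(t0), np.sin(t0)
R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.5, 1.0, 0.0])       # both triangles share the apex at the origin

R = rotation_between(B, C, R0 @ B, R0 @ C)
axis, theta = axis_angle(R)
```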
{ "language": "en", "url": "https://math.stackexchange.com/questions/4555092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to derive $\lim\limits_{x,n→∞}\prod\limits_{k=0}^{n-1}\left(1-\frac kx\right)=1-\frac eα\frac1x$ where $α=2e-4$? Background $\def\d{\mathrm{d}}\def\e{\mathrm{e}}\def\i{\mathrm{i}}$ The following two equations are well known, $$\begin{align*} \left(1+\frac1x\right)^x &= \sum_{n=0}^\infty {x \choose n}\frac1{x^n}, \\[1ex] \e &= \sum_{n=0}^\infty\dfrac{1}{n!}. \end{align*}$$ Subtracting them produces the product in the question, and then performing a backward derivation (see end for details) from $$\lim_{x\to\infty}x\left[\left( 1+\dfrac{1}{x} \right)^{x}-\e \right] = -\dfrac{\e}{2}$$ yields the result in the question. P.S. The limit above comes from a small exercise; its solution steps are:    ① Reciprocal substitution, i.e. $x=\dfrac1t$;    ② Use L'Hôpital's rule once;    ③ Use the differentiation formula for functions of the form $u^v$, i.e. $(u^v)' = u^v \left( v'\ln u + \dfrac{u'}{u}v \right)$;    ④ Use L'Hôpital's rule twice. Q How to solve $$\lim_{\substack{n\to\infty\\x\to\infty}}\prod_{k=0}^{n-1}\left(1-\frac kx\right)$$ in a forward approach? (while requiring the retention of the "exact form" in the title) Actually, what I'd like to ask more is: Is there a general solution for products of this form? In other words, is it possible to assume that the result is in the form containing {$\e, \alpha$} and find it?
Side note: the specific process of the backward derivation. $$\begin{align*} \Sigma_1-\Sigma_2&=\sum_{n=0}^1\,(1-1) + \sum_{n=2}^{\infty}\left( \frac{x(x-1)\cdots(x-(n-1))}{n!}\frac{1}{x^n}-\frac{1}{n!} \right) \\ &=\sum_{n=2}^{\infty}\dfrac{1}{n!}\left[ 1\left( 1-\dfrac{1}{x} \right)\cdots\left( 1-\dfrac{n-1}{x} \right) - 1 \right] \\ &=\sum_{n=2}^{\infty}\dfrac{1}{n!}\left[ {\color{teal}{\prod_{k=0}^{n-1}\left( 1-\dfrac{k}{x} \right) - 1}} \right], \end{align*}$$ Substituting into the equation in the question, we immediately get $$\lim_{x\to\infty} x\,\left(\Sigma_1-\Sigma_2\right) = {\color{teal}{-\dfrac{\e}{2}\dfrac{1}{\e-2}}}\sum_{n=2}^{\infty}\dfrac{1}{n!}=-\dfrac{\e}{2}.$$ The result is consistent with the known conclusion from the direct calculation, so the equation in the question is verified.
The product $$ \prod_{k=0}^{n-1} \left( 1 - \frac{k}{x} \right) $$ can be seen as $$ \prod_{k=0}^{n-1} \left( 1 - \frac{k}{x} \right) = \frac{x (x-1) (x-2)\cdots (x-n+1)}{x^n} = \frac{x!}{x^n \, (x-n)!}. $$ Using Stirling's approximation $n! \approx \sqrt{2 \pi n} \, n^n \, e^{-n}$ we get $$ \prod_{k=0}^{n-1} \left( 1 - \frac{k}{x} \right) = \frac{x!}{x^n \, (x-n)!} \approx \left( 1- \frac{n}{x}\right)^{n-x-1/2} \, e^{-n}. $$ If $ n \to \infty$ (with $x$ a fixed positive integer, so that the factor with $k = x$ eventually appears and vanishes) then $$ \lim_{n \to \infty} \, \prod_{k=0}^{n-1} \left( 1 - \frac{k}{x} \right) = 0. $$ If $x \to \infty$ then, by use of $\lim_{x \to \infty} \left(1 - \frac{n}{x}\right)^{-x} = e^{n}$, $$ \lim_{x \to \infty} \, \prod_{k=0}^{n-1} \left( 1 - \frac{k}{x} \right) = e^{-n} \, \lim_{x \to \infty} \left( 1 - \frac{n}{x}\right)^{n-1/2} \, \left(1 - \frac{n}{x}\right)^{-x} = 1. $$ Additional note: Consider the limit $$ \lim_{x \to \infty} \, x \, \left[ \left(1 - \frac{a}{x}\right)^x - e^{-a} \right]$$ as follows. \begin{align} \left(1 - \frac{a}{x}\right)^x - e^{-a} &= e^{x \, \ln\left(1 - \frac{a}{x}\right)} - e^{-a} \\ &= e^{x \, \left( - \frac{a}{x} - \frac{a^2}{2 \, x^2} - \cdots \right)} - e^{-a} \\ &= e^{-a - \frac{a^2}{2 \, x} - \cdots} - e^{-a} = e^{-a} \, \left( -1 + e^{-\frac{a^2}{2 \, x} - \cdots} \right) \\ &= e^{-a} \, \left( -1 + 1 + \left( -\frac{a^2}{2 \, x} - \frac{a^3}{3 \, x^2} + \cdots \right) + \frac{1}{2!} \, \left( -\frac{a^2}{2 \, x} - \frac{a^3}{3 \, x^2} + \cdots \right)^2 + \cdots \right) \\ &= e^{-a} \, \left( -\frac{a^2}{2 \, x} + \frac{a^3 \, (3 a - 8)}{4! \, x^2} + \mathcal{O}\left(\frac{1}{x^3}\right) \right) \\ &= -\frac{a^2 \, e^{-a}}{2 \, x} \, \left( 1 + \frac{2 a (8 - 3 a)}{4! \, x} + \mathcal{O}\left(\frac{1}{x^2}\right) \right). \end{align} This leads to \begin{align} \lim_{x \to \infty} \, x \, \left[\left(1 - \frac{a}{x}\right)^x - e^{-a} \right] &= \lim_{x \to \infty} \, -\frac{a^2 \, e^{-a}}{2} \, \left( 1 + \frac{2 a (8 - 3 a)}{4!
\, x} + \mathcal{O}\left(\frac{1}{x^2}\right) \right) \\ &= -\frac{a^2 \, e^{-a}}{2}. \end{align}
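The limit in the "Additional note" is easy to probe numerically; here is a sketch (the particular values of $a$ and $x$ are arbitrary; the computation uses log1p/expm1 to dodge catastrophic cancellation):

```python
import math

def scaled_gap(a, x):
    """x * [ (1 - a/x)^x - e^(-a) ], rearranged to avoid cancellation:
    (1 - a/x)^x - e^(-a) = e^(-a) * expm1(x*log1p(-a/x) + a)."""
    return x * math.exp(-a) * math.expm1(x * math.log1p(-a / x) + a)

for a in (0.5, 1.0, 2.0):
    predicted = -a * a * math.exp(-a) / 2.0
    assert abs(scaled_gap(a, 1e7) - predicted) < 1e-5

# a = -1 recovers the motivating limit: x * [ (1 + 1/x)^x - e ] -> -e/2
assert abs(scaled_gap(-1.0, 1e7) - (-math.e / 2.0)) < 1e-4
```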
{ "language": "en", "url": "https://math.stackexchange.com/questions/4555513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linear maps that have the same matrix regardless of the bases chosen for domain and codomain Question: Other than the zero map, what linear map has the same matrix $A_{E,F}$ with respect to all $E$ and $F$? For linear map $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$, given a basis $E$ for domain and basis $F$ for codomain, I can find a unique corresponding matrix $A_{E,F}$, where $A_{E,F}$ generally depends on $E$ and $F$. Note that for any $E$ and $F$, the matrix corresponding to the zero map is always the zero matrix, since the map sends all vectors in $E$ to $\mathbf{0}$, and $\mathbf{0}$ can only be represented by $(0,...,0)$ with respect to any basis $F$.
Let $A$ be the matrix of $T$ with the columns of the identity matrix $I$ as the basis on both the domain and the range. Then $A/2$ is the matrix of the same map with the same basis on the domain and the columns of $2I$ as the basis on the range. So $A/2 = A$, which forces $A=0$.
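The change-of-basis bookkeeping behind this can be checked directly: with basis matrices $P$ (domain) and $Q$ (range), the matrix of $T$ becomes $Q^{-1} A P$, so scaling the range basis by $2$ halves the matrix. A sketch (the random $A$ and the size $3$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))    # matrix of some T w.r.t. the standard bases

P = np.eye(3)                       # basis E on the domain (unchanged)
Q = 2 * np.eye(3)                   # basis F on the range: columns of 2I
A_new = np.linalg.inv(Q) @ A @ P    # matrix of the same T w.r.t. E and F

assert np.allclose(A_new, A / 2)    # "same matrix in all bases" forces A = 0
```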
{ "language": "en", "url": "https://math.stackexchange.com/questions/4555670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Orthogonal projection of a line onto a plane Let $\pi:x+y+z=1$ describe a plane and let $L$ be a line s.t. $L:(x,y,z)= (1,1,-1)+t(1,0,1)$, $t\in\mathbb{R}$. When $L$ is orthogonally projected onto $\pi$, a new line is made; write this new line in scalar form. I know how to write the line along which a point is orthogonally projected onto a plane, but here I'm not sure what to do and I don't think I understand the question. I understand "$L$ is orthogonally projected onto $\pi$" to mean that every point on $L$ is orthogonally projected onto $\pi$, so there will be infinitely many such lines, not just one.
"I understand as every point on $L$ is orthogonally projected onto $\pi$ so there will be infinitely many new lines, not just one" We claim that the orthogonally projected points all lie along the same line. The projection of $P_t=(1,1,−1)+t(1,0,1)$, which belongs to the line $L$, onto the plane $x+y+z=1$ is given by the intersection of the plane with the line $s\to P_t+(1,1,1)s$, which is orthogonal to $\pi$: $$(1+t+s)+(1+s)+(-1+t+s)=1\implies s=-\frac{2t}{3}.$$ Hence the orthogonal projection of $P_t$ onto $\pi$ is $$Q_t=P_t-\frac{2t}{3}(1,1,1)=(1,1,−1)+t(1,0,1)-\frac{2t}{3}(1,1,1)= (1,1,−1)+\frac{t}{3}(1,-2,1). $$ Notice that every projected point $Q_t$ lies along the same line: $t\to Q_t=(1,1,−1)+\frac{t}{3}(1,-2,1).$
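A quick numerical check of the claim (the sampled $t$ values are arbitrary): project several points of $L$ onto $\pi$ and compare with the formula for $Q_t$.

```python
import numpy as np

n = np.array([1.0, 1.0, 1.0])                 # normal of pi: x + y + z = 1

def project(p):
    """Orthogonal projection of p onto the plane n . q = 1."""
    return p - ((n @ p - 1.0) / (n @ n)) * n

ok = True
for t in (-2.0, 0.0, 1.0, 3.5):
    P = np.array([1.0, 1.0, -1.0]) + t * np.array([1.0, 0.0, 1.0])     # on L
    Q = np.array([1.0, 1.0, -1.0]) + (t / 3.0) * np.array([1.0, -2.0, 1.0])
    ok = ok and np.allclose(project(P), Q) and np.isclose(n @ project(P), 1.0)

assert ok
```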
{ "language": "en", "url": "https://math.stackexchange.com/questions/4555774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }