Constant of integration Consider the integral $$\int x \arctan (x) \;dx$$ Evaluate this integral using integration by parts. Then find a constant of integration that makes the last integration trivial. Compare the answers and explain any differences. I know how to integrate by parts; I got $$\frac{x^2 \arctan (x) + \arctan (x) - x}{2} + C$$ but I don’t get the part where it asks to find a constant of integration that makes the last integration trivial. If somebody could please explain?
$$ \begin{aligned} \int x \arctan (x) \;dx &= \int \frac 12(x^2+1)' \arctan (x) \;dx \\ &= \frac 12(x^2+1)\arctan (x) - \int \frac 12(x^2+1) \arctan' (x) \;dx \\ &= \frac 12(x^2+1)\arctan (x) - \int \frac 12(x^2+1) \cdot\frac 1{x^2+1} \;dx \\ &= \frac 12(x^2+1)\arctan (x) - \int \frac 12\;dx \\ &= \frac 12(x^2+1)\arctan (x) - \frac x2 +\text{(locally) constant .} \end{aligned} $$
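For the comparison the exercise asks for: the two antiderivatives agree exactly, since $$\frac{x^2\arctan(x)+\arctan(x)-x}{2}=\frac 12(x^2+1)\arctan(x)-\frac x2\,.$$ The point is the choice of antiderivative for $dv = x\,dx$: taking $v=\frac{x^2}{2}+\frac 12$ (i.e. picking the constant of integration $\frac 12$ for $v$) makes $$v\,\arctan'(x)=\frac{x^2+1}{2}\cdot\frac{1}{x^2+1}=\frac 12\,,$$ so the remaining integral is trivial, which is exactly the computation displayed above.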
{ "language": "en", "url": "https://math.stackexchange.com/questions/2967916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proof verification of $x_n = \sqrt[3]{n^3 + 1} - \sqrt{n^2 - 1}$ is bounded Let $n \in \mathbb N$ and: $$ x_n = \sqrt[3]{n^3 + 1} - \sqrt{n^2 - 1} $$ Prove that $x_n$ is a bounded sequence. Start with $x_n$: $$ \begin{align} x_n &= \sqrt[3]{n^3 + 1} - \sqrt{n^2 - 1} = \\ &= n \left(\sqrt[3]{1 + {1\over n^3}} - \sqrt{1 - {1\over n^2}}\right) \end{align} $$ From here: $$ \sqrt[3]{1 + {1\over n^3}} \gt 1 \\ \sqrt{1 - {1\over n^2}} \lt 1 $$ Therefore: $$ \sqrt[3]{1 + {1\over n^3}} - \sqrt{1 - {1\over n^2}} \gt 0 $$ Which means $x_n \gt 0$. Consider the following inequality: $$ \sqrt[3]{n^3 + 1} \le \sqrt{n^2 + 1} \implies \\ \implies x_n < \sqrt{n^2 + 1} - \sqrt{n^2 - 1} $$ Or: $$ x_n < \frac{(\sqrt{n^2 + 1} - \sqrt{n^2 - 1})(\sqrt{n^2 + 1} + \sqrt{n^2 - 1})}{\sqrt{n^2 + 1} + \sqrt{n^2 - 1}} = \\ = \frac{2}{\sqrt{n^2 + 1} + \sqrt{n^2 - 1}} <2 $$ Also $x_n \gt0$ so finally: $$ 0 < x_n <2 $$ Have I done it the right way?
Your proof is fine but a lot more work than necessary. As $n \ge 1$ we have $n = \sqrt[3]{n^3} < \sqrt[3]{n^3 + 1} < \sqrt[3]{n^3 + 3n^2 + 3n + 1} = \sqrt[3]{(n+1)^3} = n+1$ and $n = \sqrt{n^2} > \sqrt{n^2 -1 } = \sqrt{n^2 - 2 + 1} \ge \sqrt{n^2 - 2n + 1} = \sqrt{(n-1)^2} = n-1$. So $0 = n - n < \sqrt[3]{n^3 + 1} - \sqrt{n^2 -1} < (n+1) - (n-1) = 2$.
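A quick numeric sanity check of the bound (illustrative only, of course not a substitute for the proof):

```python
# Verify 0 < x_n < 2 for a range of n, using
# x_n = (n^3 + 1)^(1/3) - (n^2 - 1)^(1/2).
def x(n):
    return (n**3 + 1) ** (1.0 / 3.0) - (n**2 - 1) ** 0.5

for n in range(1, 10001):
    assert 0 < x(n) < 2

print(x(1), x(10), x(10000))  # the sequence decays roughly like 1/(2n)
```

In fact the printout suggests the stronger statement $x_n \to 0$, consistent with the expansions $\sqrt[3]{n^3+1}\approx n$ and $\sqrt{n^2-1}\approx n - \frac{1}{2n}$.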
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Combinatorial proof that $\sum_{i=0}^n 2^i\binom{n}{i}i!(2n-i)! = 4^n(n!)^2$ I'm looking for a combinatorial proof that $$\sum_{i=0}^n 2^i\binom{n}{i}i!(2n-i)! = 4^n(n!)^2.$$ My thoughts so far: the RHS counts the number of pairs of permutations on $n$ elements along with an $n$-tuple whose entries come from 4 choices. The LHS might count the same thing but partitioned into cases somehow.
Alternatively, an approach with a more combinatorial flavor: Multiply both sides by $\binom{2n}{n}$ so you get $$\sum_{i=0}^n 2^i\binom{2n}{\color{red}{n}}\binom{\color{red}{n}}{i}i!(2n-i)! = 4^n(n!)^2\frac{(2n)!}{n!^2},$$ then, using that $\binom{a}{b}\binom{b}{c}=\binom{a}{c}\binom{a-c}{b-c},$ we get $$\sum_{i=0}^n 2^i\binom{2n}{i}\binom{2n-i}{n-i}i!(2n-i)! = 2^{2n}(2n)!,$$ and cancelling the factorials, we get $$\sum_{i=0}^n 2^i\binom{2n-i}{n} = \underbrace{2^0\binom{2n-0}{n}}_{*_1}+2\sum_{i=1}^n 2^{i-1}\binom{2n-i}{n}=2^{2n}.$$ This last identity can be proved as follows: Consider a binary string $x\in \{0,1\}^{2n}.$ Clearly either $\underbrace{|x|_0= |x|_1}_{*_1}$ or $|x|_0\neq |x|_1,$ where $|x|_a$ is the number of symbols equal to $a$ in the string. In the first case (i.e., $*_1$), there are $\binom{2n}{n}=2^0\binom{2n-0}{n}$ such strings. If not, then at some point (going from left to right) in the string, one of the two symbols (we have $2$ ways to choose which symbol) occurs for the $(n+1)$-st time; call this point $2n-i+1$ with $1\leq i\leq n.$ You need to pick where the $n$ earlier occurrences of that symbol sit among the $2n-i$ positions to the left, in $\binom{2n-i}{n}$ ways, and it does not matter what happens to the right, so there are $2^{2n-(2n-i+1)}=2^{i-1}$ ways for that part. By double counting, the LHS is equal to the RHS. Going backwards, and considering the combinatorial interpretation of each step, one can construct a story proof around this with the subject being signed permutations of $[2n].$ For example, when you divide by the $\binom{2n}{n}$ you are saying that instead of considering any permutation, you consider just the permutations in which the first $n$ elements land among the first $n$ positions (and likewise the last $n$). So, essentially, you have on the RHS colored pairs of permutations of $[n]$, and on the LHS the same, but taking into account where you see the $(n+1)$-st occurrence of the most frequent symbol in the coloring.
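Both the original identity and its reduced form can be spot-checked numerically for small $n$ (a sanity check, not a proof):

```python
# Spot-check: sum_i 2^i C(n,i) i! (2n-i)! = 4^n (n!)^2 and the reduced
# identity sum_i 2^i C(2n-i, n) = 4^n, for small n.
from math import comb, factorial

for n in range(0, 10):
    lhs = sum(2**i * comb(n, i) * factorial(i) * factorial(2 * n - i)
              for i in range(n + 1))
    assert lhs == 4**n * factorial(n) ** 2
    assert sum(2**i * comb(2 * n - i, n) for i in range(n + 1)) == 4**n
```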
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding real part of complex number in exponential form in fraction Given this complex number, $$e^{9ix/2}\, \frac{\sin 4x }{ \sin (x/2) }$$ the real part can be worked out easily by replacing the $e^{9ix/2}$ with $\cos(9x/2)$. However, if I'm given the complex number $$\frac{3} {3 - e^{ix} }$$ I cannot work out the real part by replacing the $e^{ix}$ with $\cos x$. I want to understand why I can do this replacement in the first example and why I can't do it in the second example, and what I should look for when doing practice questions myself. And how I would actually go about working out the real part of the second example. Thanks, any help would be appreciated.
HINT Use that $$\frac{3} {3 - e^{ix} }=\frac{3} {3 - e^{ix} }\frac{3 - e^{-ix} } {3 - e^{-ix} }=\frac{9 - 3e^{-ix} } {10 -3 (e^{ix}+e^{-ix}) }$$ then recall that $z+\bar z= 2\Re(z)$.
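Carrying the hint through, $\Re(9-3e^{-ix})=9-3\cos x$, so the real part comes out to $\dfrac{9-3\cos x}{10-6\cos x}$. A quick numeric check (illustrative):

```python
# Check: Re( 3/(3 - e^{ix}) ) equals (9 - 3 cos x)/(10 - 6 cos x),
# as obtained from the conjugate trick in the hint.
import cmath
import math

for x in [0.3, 1.0, 2.5, -1.7]:
    z = 3 / (3 - cmath.exp(1j * x))
    expected = (9 - 3 * math.cos(x)) / (10 - 6 * math.cos(x))
    assert abs(z.real - expected) < 1e-12
```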
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Using $ \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!}=\frac1e$ evaluate first $3$ decimal digits of $1/e$. Using the series $\displaystyle \sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!}=\frac{1}{e}$, evaluate the first $3$ decimal digits of $1/e$. Attempt. In an alternating series $\displaystyle \sum_{k=0}^{\infty}(-1)^{k+1}\alpha_k$, where $\alpha_k \searrow 0$, if $\alpha$ is the sum of the series then $$|s_n-\alpha|\leq \alpha_{n+1}.$$ So, in our case we need to find $n$ such that $|s_n-1/e|<0.001$, where $\displaystyle s_n=\sum_{k=0}^{n-1}\frac{(-1)^{k}}{k!}$ and it is enough to find $n$ such that $\dfrac{1}{n!}<0.001$, so $n\geq 7$. Therefore: $$s_7=\sum_{k=0}^{6}\frac{(-1)^{k}}{k!}=0.36805\ldots$$ so I would expect $\dfrac{1}{e}=0.368\ldots$. But: $\dfrac{1}{e}=0.36787944\ldots$. Where am I missing something? Thanks in advance.
Note $$\bigg|\sum_{k=0}^{n}\frac{(-1)^{k}}{k!}-\frac{1}{e}\bigg|=\bigg|\sum_{k=n+1}^{\infty}\frac{(-1)^{k}}{k!}\bigg|\le\sum_{k=n+1}^\infty\frac{1}{3^{k}}\le\frac{1}{2\cdot3^{n}},$$ where the middle bound uses $k!\ge 3^{k}$, valid for $k\ge 7$ (so for $n\ge 6$). Requiring $\frac{1}{2\cdot3^{n}}<0.001$ gives $n>\frac{\ln500}{\ln 3}\approx5.65678$. Now set $n=6$ and then $$ \bigg|\sum_{k=0}^{6}\frac{(-1)^{k}}{k!}-\frac{1}{e}\bigg|<0.001. $$ In fact, it is easy to see $$ \bigg|\sum_{k=0}^{5}\frac{(-1)^{k}}{k!}-\frac{1}{e}\bigg|=0.001212774505,\qquad\bigg|\sum_{k=0}^{6}\frac{(-1)^{k}}{k!}-\frac{1}{e}\bigg|=0.0001761143841<0.001. $$ So $n=6$ is the smallest $n$ such that $|s_n-\frac1e|<0.001$.
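A quick numerical confirmation of these partial-sum errors (illustrative):

```python
# Partial sums s_n = sum_{k=0}^{n} (-1)^k / k! compared against 1/e:
# n = 5 is not accurate to 0.001 yet, but n = 6 already is.
import math

def s(n):
    return sum((-1) ** k / math.factorial(k) for k in range(n + 1))

e_inv = 1 / math.e
assert abs(s(5) - e_inv) > 0.001
assert abs(s(6) - e_inv) < 0.001
print(s(6), e_inv)  # 0.36805..., 0.36787...
```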
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Global optimum in a convex space/set Recall that a set $S⊂\mathbb R^n$ is said to be convex if for any $x,y∈S$, and any $λ∈[0, 1]$, we have $λx+(1−λ)y∈S$. Let $f:\mathbb R^n→\mathbb R$ be a convex function and let $S⊂\mathbb R^n$ be a convex set. Let $x^∗$ be an element of $S$. Suppose that $x^∗$ is a local optimum for the problem of minimizing $f(x)$ over $S$, that is, there exists some $ε > 0$ such that $f(x^∗) \leqslant f(x)$ for all $x ∈ S$ for which $\|x−x^∗\| \leqslant ε$. Prove that $x^∗$ is globally optimal, that is, $f(x^∗) \leqslant f(x)$ for all $x ∈ S$. I have been looking at this problem for a while now and I feel like I can kind of picture it in my head, but am having a hard time putting it into words. Can anyone help me out?
Suppose $x^*$ is not a global minimizer; there is some $x$ with $f(x) < f(x^*)$. Then every point $y$ on the line segment from $x^*$ to $x$ has $f(y) < f(x^*)$, too. Points on the line segment get arbitrarily close to $x^*$, so $x^*$ is not a local minimizer, either. To prove the inequality $f(y) < f(x^*)$, apply convexity.
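Spelled out: for $\lambda\in(0,1]$ let $y=\lambda x+(1-\lambda)x^*$, which lies in $S$ by convexity of $S$. Then convexity of $f$ gives $$f(y)\le \lambda f(x)+(1-\lambda)f(x^*)<\lambda f(x^*)+(1-\lambda)f(x^*)=f(x^*)\,.$$ Since $\|y-x^*\|=\lambda\|x-x^*\|$, taking $\lambda$ small enough makes $\|y-x^*\|\le\varepsilon$, contradicting the local optimality of $x^*$.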
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding the Probability from the sum of 3 random variables Let $X_1, X_2$ and $X_3$ be three independent normal random variables having mean $\mu= 0$ and variance $\sigma^2=16.$ Compute $P(X_1^2+X_2^2+X_3^2>8).$ Hint: First transform the random variables to standard normal. I transformed the random variables to $Z$ standard normal and got $Z_1=X_1/4,\, Z_2=X_2/4$ and $Z_3=X_3/4.$ I am unsure about where to go from here. I know that the sum of random variables is the same as the product of their moment generating functions but how do I apply that here?
Suggested outline. (1) Use MGFs (or a transformation method) to show that each of the three $Z_i^2,$ for $i=1,2,3,$ has a chi-squared distribution with $1$ degree of freedom. (2) Use MGFs to show that $Q = Z_1^2 + Z_2^2 + Z_3^2$ has a chi-squared distribution with $3$ degrees of freedom. (3) Use software or printed tables of the distribution $\mathsf{Chisq}(df=3)$ to evaluate (or approximate) $P(Q > 0.5),$ which equals the desired $P(X_1^2+X_2^2+X_3^2>8)$ because $X_i = 4Z_i$ gives $X_1^2+X_2^2+X_3^2 = 16Q.$ Using R statistical software, I get about 0.9189. (Depending on the printed tables available, you may be able to say only that the answer is between .90 and .95.) 1-pchisq(.5, 3) [1] 0.9188914 In the figure below, the desired probability is represented by the area under the density curve to the right of the vertical red line at 0.5. Note: A simulation in R, accurate to two or three places. set.seed(1024); m = 10^6 x1 = rnorm(m, 0, 4) x2 = rnorm(m, 0, 4) x3 = rnorm(m, 0, 4) s = x1^2 + x2^2 + x3^2 mean(s > 8) ## 0.918736
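A cross-check of the R value in plain Python (a sketch; the closed-form survival function below is the standard $3$-degree-of-freedom formula $P(Q>q)=1-\operatorname{erf}\sqrt{q/2}+\sqrt{2q/\pi}\,e^{-q/2}$, not something used in the outline above):

```python
# Closed-form chi-squared(3) tail probability, plus a Monte Carlo
# simulation on the original scale X_i ~ N(0, 16).
import math
import random

def chi2_sf_3df(q):
    return (1 - math.erf(math.sqrt(q / 2))
            + math.sqrt(2 * q / math.pi) * math.exp(-q / 2))

p = chi2_sf_3df(0.5)
assert abs(p - 0.9189) < 1e-4  # matches 1 - pchisq(.5, 3) in R

random.seed(1)
m = 200000
hits = sum(1 for _ in range(m)
           if sum(random.gauss(0, 4) ** 2 for _ in range(3)) > 8)
assert abs(hits / m - p) < 0.005
print(p)  # about 0.9189
```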
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Name of the following structure? I need the name of the algebraic structure that is like a vector space, but the vectors form a monoid, not a group; the field and the scalar multiplication stays the same.
I think the closest thing to what you are talking about is a semimodule over a semifield, which is a commutative monoid acted upon by a semifield. For both the semimodule and the semifield, we've dropped the requirement of additive inverses: what used to be an abelian group is now just a commutative monoid. The situation is like Joppy mentioned in the comments: if you have a $V$ which is a vector space except that it does not have additive inverses, and you have a field acting on it in the normal way, then in fact $-1\cdot v$ is defined for every $v\in V$, and that is the additive inverse of $v$. So to have something truly different you need to drop the requirement of additive inverses from "the field" as well. Actually everything above can still be said if we're talking about just a semimodule over a semiring (with identity.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2968902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can I determine if a function will increase in the future? Is it possible to determine whether a function will have increased in the future relative to starting points, given a sample of the first $m$ points? For example, given the first 4 values of a function $(f(1), f(2), f(3), f(4)) = (2, 1, 4, 5)$, can I with some probability determine whether $f(n)$ will be larger than all of $f(1)$ to $f(4)$? I apologize if this is too vague. Any answers appreciated. EDIT: Thanks for commenting. If I let my function be something like the ratio of how many white and black marbles I have in my collection, and I keep adding marbles (non-fair with no known probability), can I then pursue something?
If there is a causal relation between the successive function values, you can hope for reliable extrapolation. For instance by linear prediction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
$G$ a group of order 36, $P$ is the only Sylow 3-subgroup, which is normal, prove the existence of a homomorphism $\phi:G\rightarrow S_4$ Specifically the question I was asked was to prove that there was either a normal subgroup of $G$ of index 4 or that there was a non-trivial homomorphism $\phi:G\rightarrow S_4$, i.e., such that $\phi(g)\neq 1$ for some $g$. As $n_3=|G:N_G(P)|$ for any Sylow p-subgroup and $n_3=1,4$, if $n_3=4$ I'm done so I set $n_3=1$ and proved the information I have in the title but I can't get any further.
If $P$ is the unique Sylow subgroup it must be normal (a conjugate of $P$ would be a different Sylow subgroup).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Annihilators and semisimple rings Let $R$ be a semisimple ring and let $I$ be a left ideal of $R$. Denote $\text{ann}_l(S)$, resp. $\text{ann}_r(S)$, for the left (resp. right) annihilator of a left ideal $S$ of $R$. Any tips on how to show that $\text{ann}_l(\text{ann}_r(I)) \subseteq I$? (the reverse inclusion is straightforward)
Since $R$ is semisimple, $I=Re$ for some idempotent $e$. It's not hard to show that $ann_r(Re)=(1-e)R$ and $ann_l(eR)=R(1-e)$. Then $ann_l(ann_r(Re))=ann_l((1-e)R)=R(1-(1-e))=Re=I$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometric sequence problem with the last term (2018) undetermined. $x_0 = 1$ $x_{n+1}=2x_{n}+1$ $S_n=x_0+x_1+\ldots+x_n$ $\text {Find } S_{2018}$ How can I solve it? I tried to understand the sum of sequence but I couldn't and this is what I got: $$1+3+7+15+31+\ldots$$ I really don't know how to calculate the $2018^{\text {th}}$ term. Can anyone help me?
Hint: Write the sequence $x_0+1$, $x_1+1$, $x_2+1$, $x_3+1$, ...
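Following the hint to its conclusion (spoiler): since $x_{n+1}+1=2(x_n+1)$ and $x_0+1=2$, the shifted sequence $x_n+1$ is geometric with ratio $2$, so $x_n=2^{n+1}-1$ and $S_n=2^{n+2}-n-3$. A quick numeric check of these closed forms:

```python
# Verify x_n = 2**(n+1) - 1 and S_n = 2**(n+2) - n - 3 against the
# recurrence x_{n+1} = 2 x_n + 1 with x_0 = 1.
x, S = 1, 0
for n in range(0, 50):
    S += x
    assert x == 2 ** (n + 1) - 1
    assert S == 2 ** (n + 2) - n - 3
    x = 2 * x + 1  # the recurrence

# In particular S_2018 = 2**2020 - 2021.
```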
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question regarding the proof of the Hartogs cardinal theorem. I am reading through the text The Foundations of Mathematics by Kenneth Kunen. On pages 54 and 55 he gives the following proof of the Hartogs cardinal theorem. Theorem For every set $A$, there is a cardinal $\kappa$ such that $\kappa \npreceq A$. Proof: Let $W$ be the set of pairs $(X,R) \in \mathcal{P}(A) \times \mathcal{P}(A \times A)$ such that $R \subseteq X \times X$ and $R$ well-orders $X$. So, $W$ is the set of all well-orderings of all subsets of $A$. Observe that $\alpha \preceq X$ iff $\alpha = \text{type}(X,R)$ for some $(X,R) \in W$ (See Exercise I.$11.19$). Applying the Replacement Axiom, we can set $\beta = \text{sup}\{\text{type}(X,R) + 1: (X,R) \in W\}$. Then $\beta > \alpha$ whenever $\alpha \preceq A$, so $\beta \npreceq A$. Let $\kappa = |\beta|$. Then $\kappa \approx \beta$, so $\kappa \npreceq A$. $\Box$ The exercise referred to in the proof is that for any set $A$, it can be well-ordered in type $\alpha \in \text{On}$ iff there is a bijection between them, so $\alpha \approx A$. I completely understand every other part of the proof except the following. So the third line in the proof seems to be using a modified version of this statement. Clearly if $\alpha = \text{type}(X,R)$ for some $(X,R) \in W$, then by the exercise there is a bijection $f: \alpha \rightarrow X$, which will also be an injection so that $\alpha \preceq X$. My issue is with going the other way. Suppose now that $\alpha \preceq X$. Then for $\alpha$ to be $\text{type}(X,R)$ for some well ordering $R$ of $X$, we would need first of all an order isomorphism of $(\alpha, \in)$ with $(X,R)$ which would need to be a bijective function, but how can I be guaranteed the existence of a bijective function with just an injection of $\alpha$ into $X$ given?
This is just a typo. In the sentence Observe that $\alpha \preceq X$ iff $\alpha = \text{type}(X,R)$ for some $(X,R) \in W$ (See Exercise I.$11.19$). it should say $\alpha\preceq A$ instead of $\alpha\preceq X$. (It does not even make sense to say $\alpha\preceq X$, since no specific $X$ has been defined and the $X$ appearing later in the sentence is a bound variable confined to the right side of the "iff".) Specifically, if $\alpha\preceq A$ then there is an injection $f:\alpha\to A$, and then we can take $X$ to be the image of $f$ so $f$ gives a bijection between $\alpha$ and $X$. So, by the exercise, there exists a well-ordering $R$ of $X$ with order-type $\alpha$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is every measure $0$ set a set of discontinuities of a Riemann integrable function? Let $f:[a,b]\rightarrow\mathbb{R}$ be bounded, and let $D$ be its set of discontinuities. Then Lebesgue's criterion states that $f$ is Riemann-integrable if and only if $D$ has Lebesgue measure $0$. My question is, for any subset $D$ of $[a,b]$ with Lebesgue measure $0$, does there exist a Riemann integrable function $f:[a,b]\rightarrow\mathbb{R}$ whose set of discontinuities is $D$? Would the characteristic function of $D$ suffice, or is more complicated than that?
It is well known that any $F_{\sigma}$ set is the set of discontinuities of some function. We can also make this function bounded. So any $F_{\sigma}$ set of measure $0$ is the set of discontinuities of a Riemann integrable function. As pointed out in the comments, not every set of measure $0$ is the set of discontinuities of a Riemann integrable function. Proof of the fact that any $F_{\sigma}$ set is the set of discontinuities of a bounded function: Let $A=\cap_{n=1}^{\infty }G_{n}$ with $G_{n}$ open and $G_{n+1}\subset G_{n}$ for all $n$ (so that $A^{c}$ is the given $F_\sigma$ set). Let $f_{n}=I_{C_{n}\setminus E_{n}}$ where $C_{n}=G_{n}^{c}$ and $E_{n}=\mathbb{Q}\cap C_{n}^{0}$. Let $f=\sum_{n=1}^{\infty }\frac{1}{n!}f_{n}$. We claim that $f$ has the desired properties. First let $x\in A$. Then $f_{n}(x)=0$ for all $n$. In fact, for each $n$, $f_{n}$ vanishes in a neighbourhood of $x$. Hence each $f_{n}$ is continuous at $x$. By uniform convergence of the series defining $f$ we see that $f$ is also continuous at $x$. Now let $x\in A^{c}$. Let $k$ be the least positive integer such that $x\in C_{k}$. If $x\in C_{k}^{0}$ then, in sufficiently small neighbourhoods of $x$, $f_{k}$ takes both the values $0$ and $1$ and so its oscillation at $x$ is $1$. We claim that the oscillation of $f_{j}$ at $x$ is $0$ for each $j<k$: since $x\notin C_{j}$ it follows that points close to $x$ are all in $C_{j}^{c}$ and hence $f_{j}$ vanishes at those points. Now $\omega (f,x)\geq \frac{1}{k!}\omega (f_{k},x)-\sum_{j=k+1}^{\infty }\frac{1}{j!}$ since $\omega (f_{j},\cdot)\leq 1$ everywhere. Thus $\omega (f,x)\geq \frac{1}{k!}-\sum_{j=k+1}^{\infty }\frac{1}{j!}\geq \frac{1}{k!}\Big[1-\sum_{j=k+1}^{\infty }\frac{1}{(k+1)(k+2)\cdots j}\Big]>\frac{1}{k!}\Big[1-\sum_{j=k+1}^{\infty }\frac{1}{2^{j-k}}\Big]=0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given initial positions and velocities of two boats, do they collide? This is a homework question from a precalculus class that I'm a TA for. Boat $A$ is initially at position $(1,4)$ and moves at a constant velocity $\langle 3,5 \rangle$. Boat $B$ is at position $(7,2)$ and moves at a constant velocity of $\langle 1,10 \rangle$. Do the paths of the boats ever cross? If so where? Will the boats collide? If they don't collide, what's the closest the boats get to each other? I wanted to write up a thorough solution to this exercise for my class, and figured I'd post it online to help anyone else who may wander across it.
Let $P_A(a)$ denote the position of boat $A$ at time $a$, and let $P_B(b)$ denote the position of boat $B$ at time $b$. From the initial positions and velocities given, we have: $$ \begin{align} P_A(a) &= (1,4) + a\langle 3,5 \rangle &\qquad P_B(b) &= (7,2) + b\langle 1,10 \rangle \\ &= (3a+1, 5a+4) &\qquad &= (b+7,10b+2) \end{align} $$ Now these equations give the paths of the boats starting at time $a=b=0$. The paths of the boats cross only if at some time $a$ and some time $b$ after each starts moving they have the same position. In terms of those equations, the paths of the boats will cross if there are positive times $a$ and $b$ such that $P_A(a) = P_B(b)$. Now the boats collide if not only is there a location where their paths cross, but if they are at that location at the same time. So the boats collide if $P_A(a) = P_B(b)$ for some positive $a$ equal to $b$. So we can proceed by setting $P_A = P_B$: $$ (3a+1, 5a+4) = (b+7,10b+2) \implies \begin{cases} 3a+1=b+7 \\ 5a+4=10b+2 \end{cases}\ \implies \begin{cases} 3a-b=6 \\ 5a-10b=-2 \end{cases}\,, $$ This system of linear equations has a unique solution, which we can calculate to be $a = \frac{62}{25}$ and $b = \frac{36}{25}$. These are both positive times, so the paths of the boats do cross, but since this is the only solution and $a \neq b$, the boats do not collide. To find the actual coordinates where they do cross will be the location of boat $A$ at time $a=\frac{62}{25}$ (which should equal the location of $B$ at time $b=\frac{36}{25}$ if we've done our calculations correctly), which we can calculate: $$ P_A\left(\frac{62}{25}\right) = \left(3\cdot\frac{62}{25}+1, 5\cdot\frac{62}{25}+4\right) = \left(\frac{211}{25} , \frac{410}{25} \right)\,. $$ To figure out how close the boats get, we can write a function to represent the distance between the boats at a time $t$ and minimize that function. We now need to consider the boats in the same time-frame and let $a=b=t$.
The distance between boat $A$ and boat $B$ at time $t$ is given by $$\begin{align} d(t) &= \sqrt{(3t+1-t-7)^2+(5t+4-10t-2)^2} \\ &= \sqrt{(2t-6)^2+(2-5t)^2} \\ &= \sqrt{29t^2-44t+40} \end{align}$$ The minimum of the function $d$ will occur at the minimum of the quadratic $29t^2-44t+40$, since the square root is a strictly increasing function. And the minimum of that quadratic occurs at $t = \frac{44}{2\cdot 29} = \frac{22}{29}$. So the actual minimum distance the boats achieve is $d\left( \frac{22}{29}\right) = \sqrt{\frac{676}{29}} = \frac{26}{\sqrt{29}}$.
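As a numerical sanity check of the algebra (expanding the squares gives cross terms $-24t-20t=-44t$), one can minimize the squared distance directly:

```python
# d(t)^2 = (2t - 6)^2 + (2 - 5t)^2 = 29 t^2 - 44 t + 40, minimized at
# the vertex t = 22/29, with minimum distance 26/sqrt(29).
import math

def d2(t):
    return (2 * t - 6) ** 2 + (2 - 5 * t) ** 2

t_star = 44 / (2 * 29)
assert abs(t_star - 22 / 29) < 1e-12

d_min = math.sqrt(d2(t_star))
assert abs(d_min - 26 / math.sqrt(29)) < 1e-12

# brute-force confirmation on a fine grid over [0, 3]
grid_min = min(math.sqrt(d2(k / 10000)) for k in range(0, 30001))
assert abs(grid_min - d_min) < 1e-6
```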
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Continuity of $\begin{cases}(xy+y^2)/(x^4+y^2)&\text{if }(x,y)\neq(0,0),\\0&\text{if }(x,y)=(0,0)\end{cases}$ at origin using polar coordinates Study the continuity of $$f(x,y)=\begin{cases}\dfrac{xy+y^2}{x^4+y^2}&\text{if }(x,y)\neq(0,0),\\0&\text{if }(x,y)=(0,0),\end{cases}$$ at $(x,y)=(0,0)$ using polar coordinates. I know that $f(0,0)=0$ so, if $\lim_{(x,y)\to(0,0)}{f(x,y)}$ exists then it must be equal to $0$. I want to prove that is not continuous at origin using polar coordinates. Let $(x,y)=(r\cos\theta,r\sin\theta)$. Then $$ \lim_{(x,y)\to(0,0)}{f(x,y)}=\lim_{r\to0}{\frac{r^2\cos\theta\sin\theta+r^2\sin^2\theta}{r^4\cos^4\theta+r^2\sin^2\theta}}=\lim_{r\to0}{\frac{\cos\theta\sin\theta+\sin^2\theta}{r^2\cos^4\theta+\sin^2\theta}}=\frac{\cos\theta\sin\theta+\sin^2\theta}{\sin^2\theta}=1+\cot\theta, $$ so, because the limit depends on the value of $\theta$, then the limit does not exist, hence $f(x,y)$ is not continuous at $(0,0)$. Is that correct? Can we use polar coordinates here? Thanks!
Although your argument contains a grain of truth, it is not quite correct as it is written, since you wrote that the limit $\lim_{(x,y)\to (0,0)} f(x,y)$, which doesn't exist, is equal to the limit $\lim_{r \to 0} (\cdots)$, which does exist (for $\sin \theta \neq 0$) if you just view it as an ordinary single-variable limit which depends parametrically on a constant $\theta$; in fact, you just computed it yourself like that and wrote that it's equal to $1 + \cot \theta$. The problem is that you want $\theta$ to be able to vary independently of $r$ as $r \to 0$, so you can't treat $\theta$ as a constant here. It should be viewed as an arbitrary function $\theta(r)$. In fact, polar coordinates are mainly useful for proving that a limit exist, namely if you can write $f(x,y)$ as a bounded factor times another factor which depends only on $r$ (no $\theta$!) and which tends to zero as $r \to 0$ (really just a single-variable limit here!), then $f(x,y)\to 0$ as $(x,y)\to(0,0)$. To show that a limit does not exist, you instead find two ways of approaching the point such that you get two different values. In your case, consider $f(t,0)$ and $f(0,t)$ as $t\to 0$, for example. So actually I don't quite know how I would like to write the argument in a nice way if someone forced me to use polar coordinates in order to show that a limit does not exist! I would probably just write $f(x,y)$ in polar coordinates first, $$ f(r \cos\theta(r), r \sin\theta(r)) = \cdots, $$ (no “$\lim$” here) and then say that by making different choices of the function $\theta(r)$ (for example different constant functions!), you can make $f$ approach different values. And I would give examples of two such function $\theta(r)$ which give different limits for $f(r \cos\theta(r), r \sin\theta(r))$ as $r \to 0$.
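For instance, in this problem the two constant choices $\theta(r)\equiv 0$ and $\theta(r)\equiv \pi/2$ already give different limits: $$f(r\cos 0,\,r\sin 0)=f(r,0)=\frac{0}{r^4}=0\,,\qquad f\big(r\cos\tfrac\pi2,\,r\sin\tfrac\pi2\big)=f(0,r)=\frac{r^2}{r^2}=1\,,$$ so $f(r\cos\theta(r),r\sin\theta(r))$ has no $\theta$-independent limit as $r\to0$, and the two-variable limit does not exist.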
{ "language": "en", "url": "https://math.stackexchange.com/questions/2969960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which grows at a faster rate $\sqrt {n!}$ vs $(\sqrt {n})!$ when $n \rightarrow \infty$? Which grows at a faster rate $\sqrt {n!}$ vs $(\sqrt {n})!$ ? How to solve such type of questions considering $n \rightarrow \infty$?
As alluded to in the comments, $(\sqrt{n})!$ doesn't make sense, so I'm going to compare the growth of $n!$ to the growth of $\sqrt{(n^2)!}$. Or, equivalently, compare $(n!)^2$ to $(n^2)!$. Let $a_n = \frac{(n!)^2}{(n^2)!}$. Then \begin{align*} \frac{a_{n+1}}{a_n} &= \frac{((n+1)!)^2 \div (n!)^2}{((n+1)^2)! \div(n^2)!} \\ &= \frac{(n+1)^2}{(n + 1)^2 ((n + 1)^2 - 1)((n + 1)^2 - 2) \ldots(n+1)} \\ &= \frac{1}{((n + 1)^2 - 1)((n + 1)^2 - 2) \ldots(n+1)}. \end{align*} So, the ratio between $(n!)^2$ and $(n^2)!$ very quickly approaches $0$, telling you that $(n^2)!$ grows much faster than $(n!)^2$.
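A numeric illustration of how fast the ratio collapses (illustrative only):

```python
# a_n = (n!)**2 / (n**2)! shrinks extremely fast, so (n**2)! dominates.
from math import factorial

def a(n):
    return factorial(n) ** 2 / factorial(n ** 2)

print(a(2))  # 4/24 = 1/6
print(a(3), a(4))  # the ratio drops below 1e-10 already at n = 4
for n in range(2, 8):
    assert a(n + 1) < a(n)
```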
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
A matrix calculus problem in backpropagation encountered when studying Deep Learning I am studying the Algorithm 6.4 in the textbook Deep Learning, which is about backpropagation. I am confused by this line: $$\nabla_{W^{(k)}}J = gh^{(k-1)T}+\lambda\nabla_{W^{(k)}}{\Omega(\theta)}$$ This equation is derived by calculating the gradient of the equation(from Algorithm 6.3) below: $$a^{(k)}= b^{(k)}+W^{(k)}h^{(k-1)}$$ But shouldn't the gradient of $W^{(k)}h^{(k-1)}$ with respect to $W^{(k)}$ be $h^{(k-1)}$ ? Why is there a transpose $^T$ here?
I'm going to use subscripts because they're easier to type, and this reserves superscripts for things like transposes and conjugates. Algorithm 6.4 tells you how to calculate the vector $g$. It's a chain of derivatives extending from the output layer back to the $k^{th}$ layer: $$\eqalign{ g &= \frac{\partial L}{\partial {\hat y}} \frac{\partial {\hat y}}{\partial h_l} \frac{\partial h_l}{\partial a_l} \frac{\partial a_l}{\partial h_{l-1}} \frac{\partial h_{l-1}}{\partial a_{l-1}} \frac{\partial a_{l-1}}{\partial h_{l-2}} \ldots \frac{\partial h_{k+1}}{\partial a_{k+1}} \frac{\partial a_{k+1}}{\partial h_{k}} \frac{\partial h_{k}}{\partial a_{k}} \cr &= \frac{\partial L}{\partial a_{k}} }$$ Even though $\frac{\partial {\hat y}}{\partial h_l}=1,$ I added it to the chain for clarity. Use $g$ to write the differential of $L$, then change variables to $W_k$: $$\eqalign{ dL&= g:da_k\cr &= g:dW_k\,h_{k-1}\cr &= gh_{k-1}^T:dW_k\cr \frac{\partial L}{\partial W_k} &= gh_{k-1}^T \cr }$$ where the colon denotes the trace/Frobenius product, i.e. $$A:B = {\rm tr}(A^TB)$$ The properties of the trace allow one to write things like $$\eqalign{ &{\rm tr}(ABC) = {\rm tr}(CAB) = {\rm tr}(BCA) \cr &{\rm tr}(AB) = {\rm tr}(BA) = {\rm tr}(B^TA^T) \cr }$$ which correspond to rules for rearranging the terms in a Frobenius product $$\eqalign{ &A:BC = B^TA:C = AC^T:B \cr &A:B = B:A = B^T:A^T \cr }$$ Note that the object on each side of the colon must have the same shape, i.e. equal numbers of rows and columns. In that sense, it's similar to a Hadamard product.
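A tiny finite-difference check of the conclusion $\frac{\partial L}{\partial W_k}=g h_{k-1}^T$, with made-up sizes ($g\in\mathbb R^3$, $h\in\mathbb R^4$) and $L$ taken to be the linearization $L(W)=g\cdot(Wh+b)$ (my own toy setup, not the book's):

```python
# If L(W) = g . (W h + b), i.e. dL = g : da with a = W h + b, then the
# gradient of L with respect to W is the outer product g h^T.
import copy

g = [0.5, -1.2, 2.0]
h = [1.0, 3.0, -2.0, 0.25]
b = [0.1, 0.2, 0.3]

def L(W):
    return sum(gi * (sum(W[i][j] * h[j] for j in range(4)) + b[i])
               for i, gi in enumerate(g))

W = [[0.0] * 4 for _ in range(3)]
base = L(W)
eps = 1e-6
for i in range(3):
    for j in range(4):
        Wp = copy.deepcopy(W)
        Wp[i][j] += eps
        # finite-difference dL/dW[i][j] should match (g h^T)[i][j] = g[i]*h[j]
        assert abs((L(Wp) - base) / eps - g[i] * h[j]) < 1e-6
```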
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it true that if the set $A=\{x:f(x) =c\}$ is measurable for every $c$ in $\Bbb R$, then $f$ is measurable? Let $f: [0,1] \to \Bbb R$. Is it true that if the set $A=\{x:f(x) =c\}$ is measurable for every $c$ in $\Bbb R$, then $f$ is measurable? I have this counterexample: $f(x) = x$ if $x$ belongs to $P$, $f(x) = -x$ otherwise, where $P$ is a non-measurable subset of $[0,1]$. Each set $A$ is measurable but $f$ is not a measurable function. I have proved that $f$ is not measurable, but how can I prove that $A$ is measurable, and what is $A$ in this example?
As you think, the answer to the question is no. However, I am not sure how your example will work. Maybe it does, but I am unable to show that there exists a measurable set $U$ such that $f^{-1}(U)$ is not measurable in your example. I had a lapse of intelligence a little bit. The comments both under the question and under my answer have explained how the OP's function works. Nonetheless, I have a different counterexample. Note that there exists (due to the Axiom of Choice) a non-measurable subset $X$ of $\left[0,1\right]$ of the cardinality of the continuum (e.g., a Vitali set, which contains one point from each of the continuum many cosets of $\mathbb{Q}$). This set has the same cardinality as $\left[\dfrac12,1\right]$, so there exists a bijection $g:X\to \left[\dfrac12,1\right]$. Note also that $[0,1]\setminus X$ and $\left[0,\dfrac12\right)$ are also equinumerous, so there exists a bijective function $h:\big([0,1]\setminus X\big) \to \left[0,\dfrac12\right)$. Define $f:[0,1]\to\mathbb{R}$ to be $$f(x):=\begin{cases}g(x)&\text{if }x\in X\,,\\h(x)&\text{if }x\in[0,1]\setminus X\,.\end{cases}$$ Note that $f^{-1}\big(\{c\}\big)$ is either empty or a singleton for each $c\in\mathbb{R}$ (it is empty if $c\notin[0,1]$, and is a singleton if $c\in[0,1]$). However, for the measurable set $M:=\left[\dfrac12,1\right]$, $f^{-1}(M)=X$ is not measurable. That is, $f$ is not a measurable function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
We have $\triangle ABC$ and we must find $PM$ Here we have $\triangle ABC$ and some related information: $$\angle A=60^\circ$$ $$\angle B=45^\circ$$ $$AC=8$$ $$CM=MB$$ The vertical line $PM$ is perpendicular to $BC$. So now we want to calculate $PM$: $$PM=?$$ I have tried several different ways to calculate it, but without success. What I tried was to calculate $\angle C=75^\circ$, divide $\triangle ABC$ into 4 pieces, and try to find the side $PM$, like this: Do you have any ideas? Please explain your answer briefly.
Guide: I would construct line $PC$ and show that $\angle APC$ is $90^\circ$. After that I can use trigonometry to obtain $PC$ and I can use trigonometry one more time to obtain the length of $PM$. Edit: $PC$ is the line connecting $P$ to $C$. Notice that we have $PM = CM$. Then, we can compute $\angle ACP = 75^\circ - 45^\circ=30^\circ$. Hence $\angle APC = 90^\circ$. Hence $PC= AC \cos 30^\circ$.
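A coordinate-geometry cross-check of this route (the placement of $B$ at the origin and the final step $PM = PC\sin 45^\circ$ are my own choices for the sketch, not from the guide):

```python
import math

# Triangle data: angle A = 60 deg, angle B = 45 deg, AC = 8.
A_deg, B_deg = 60.0, 45.0
AC = 8.0

# Law of sines: BC / sin A = AC / sin B.
BC = AC * math.sin(math.radians(A_deg)) / math.sin(math.radians(B_deg))

# Put B at the origin and C on the positive x-axis; BA leaves B at 45 deg,
# so line AB is y = x.  M is the midpoint of BC, and P sits on AB
# directly above M (PM is perpendicular to BC), hence P = (BC/2, BC/2).
PM = BC / 2.0

# The guide's route: PC = AC*cos(30 deg), then PM = PC*sin(45 deg)
# (the last step uses the right isosceles triangle PMC).
PC = AC * math.cos(math.radians(30.0))
PM_guide = PC * math.sin(math.radians(45.0))

print(PM, PM_guide)  # both equal 2*sqrt(6) ≈ 4.898979
```

Both routes give $PM = 2\sqrt{6}$.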
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to calculate the envelope of the trajectory of a double pendulum? Consider a double pendulum: Background For the angles $\varphi_i$ and the momenta $p_i$ we have (with equal lengths $l=1$, masses $m=1$ and gravitational constant $g=1$): $\dot{\varphi_1} = 6\frac{2p_1 - 3p_2\cos(\varphi_1 - \varphi_2)}{16 - 9\cos^2(\varphi_1 - \varphi_2)}$ $\dot{\varphi_2} = 6\frac{8p_2 - 3p_1\cos(\varphi_1 - \varphi_2)}{16 - 9\cos^2(\varphi_1 - \varphi_2)}$ $\dot{p_1} = -\frac{1}{2}\big( \dot{\varphi_1}\dot{\varphi_2} \sin(\varphi_1 - \varphi_2) +3\sin(\varphi_1) \big)$ $\dot{p_2} = -\frac{1}{2}\big( -\dot{\varphi_1}\dot{\varphi_2} \sin(\varphi_1 - \varphi_2) +\sin(\varphi_1) \big)$ To see the relations more clearly: $\dot{\varphi_1} = B(2p_1 + Ap_2)$ $\dot{\varphi_2} = B(8p_2 + Ap_1)$ $\dot{p_1} = -C + 3D$ $\dot{p_2} = +C + D $ with $A = -3\cos(\varphi_1 - \varphi_2)$ $B = 6/(16 - A^2)$ $C = \dot{\varphi_1}\dot{\varphi_2}\sin(\varphi_1 - \varphi_2)/2$ $D = -\sin(\varphi_1)/2$ Observations With initial angles $\varphi_1^0 = \varphi_2^0 = 0$ and different combinations of small values for $p_1^0$, $p_2^0$ a number of intriguing patterns can be observed when plotting the trajectory of the tip of the pendulum: * *$p_1^0 = 1$, $p_2^0 = 1$ * *$p_1^0 = 1, p_2^0 = -1$ * *$p_1^0 = 0$, $p_2^0 = 1$ * *$p_1^0 = 0$, $p_2^0 = 2$ * *$p_1^0 = 0$, $p_2^0 = 3$ * *$p_1^0 = 0$, $p_2^0 = 3.7$ * *$p_1^0 = 0$, $p_2^0 = 4$ What these patterns have in common: The tip of the pendulum draws a curve which more or less slowly "fills" an area enclosed by a specific envelope, intermediately exhibiting seemingly regular patterns which inevitably eventually vanish. Questions * *Can the envelope be given in closed form, depending only on the two parameters $p_1^0, p_2^0$? *Can the positions of the two inner cusps which can be seen clearly for $p_1=0$, $p_2=2,3$ be given in closed form, depending only on the two parameters $p_1^0, p_2^0$? 
[The envelope for $p_1^0=0, p_2^0 = 1,2,\dots$ looks like a canoe whose bow and stern bend to each other, eventually amalgamating. Can anyone guess what's the explicit formula for this shape?]
This is not a complete or rigorous answer, but it should point you in the right direction. The double pendulum is a Hamiltonian system; energy is conserved. The most extreme points of the trajectory occur when both legs are at standstill ($p_1=p_2=0$) as all the energy is positional. The points where this is the case can be computed as parametrised curves as follows. * *The parameter of your curve is the angle of excitation of the first pendulum ($φ_1$). *For each $φ_1$, you can calculate the two angles of the inner pendulum ($φ_2$) such that the potential energy of the system corresponds to your total energy. *From both angles ($φ_1$ and $φ_2$) obtain the position of the inner pendulum's end as the point of your curve. I would conjecture that the outermost of the two curves is the envelope in the case of chaotic dynamics such as your last example. If the double pendulum is ergodic, the above follows, as all points on the proposed envelope have the same energy and thus have to be visited.
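A sketch of that recipe in code. Caveat: it assumes the potential $V(\varphi_1,\varphi_2)=-\tfrac12\big(3\cos\varphi_1+\cos\varphi_2\big)$ and the kinetic energy implied by the question's Hamiltonian at $\varphi_1=\varphi_2=0$; the $\cos\varphi_2$ coefficient in particular is an assumption, not taken from the question:

```python
import math

def envelope_points(p1, p2, n=2000):
    """Standstill curve of the double-pendulum tip (a sketch only).

    Assumes unit masses/lengths/gravity, the question's kinetic term at
    phi1 = phi2 = 0, and V = -(3*cos(phi1) + cos(phi2))/2; the phi2
    coefficient is an assumption (see the lead-in).
    """
    # Kinetic energy at phi1 = phi2 = 0 for the question's Hamiltonian:
    # T0 = (3/7) * (2 p1^2 - 6 p1 p2 + 8 p2^2).
    T0 = (3.0 / 7.0) * (2 * p1 * p1 - 6 * p1 * p2 + 8 * p2 * p2)
    E = T0 - 2.0                          # total energy, since V(0, 0) = -2
    pts = []
    for i in range(n + 1):
        phi1 = -math.pi + 2 * math.pi * i / n
        c2 = -2.0 * E - 3.0 * math.cos(phi1)   # cos(phi2) at standstill
        if abs(c2) <= 1.0:
            phi2 = math.acos(c2)
            for s in (phi2, -phi2):            # the two branches
                x = math.sin(phi1) + math.sin(s)
                y = -(math.cos(phi1) + math.cos(s))
                pts.append((x, y))
    return pts

pts = envelope_points(0.0, 1.0)
```

Plotting `pts` should reproduce (one candidate for) the "canoe" outline; the tip always stays inside the circle of radius 2.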
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why is my translation of $\exists{x}\,(C(x) \rightarrow F(x))$ into an English sentence wrong? Let $\text{C(x): x is a comedian}$ and $\text{F(x): x is funny}$ Let $$\alpha:\quad\exists{x}\,(C(x) \rightarrow F(x))$$ and the domain consists of all people. I needed to translate $\alpha$ into English so what I did was I looked at when $\beta:\; C(x) \rightarrow F(x)$ is true. There are 2 cases for that: * *$C(x)$ is false and $F(x)$ is either true or false *$C(x)$ is true and $F(x)$ is true. Using this, I translated it as follows: There is a comedian that is funny $\textit{(referring to case 2)}$ or there is no comedian. $\textit{(case 1)}$ The solution in the book is: There exists a person such that if they are a comedian then they are funny. My professor told me that the way I translated it was wrong. Why is my translation wrong?
The domain is of people, not comedians. $\exists x~(\lnot C(x)\lor F(x))$ would read as "There is some person who is not a comedian or is funny." Although equivalent, $\exists x~(C(x)\to F(x))$ more directly translates as "There is some person who would be a comedian only if they were funny."
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
When is the exponential law in topology discontinuous? By $Y^X$ I mean the space of continuous functions $X \to Y$ with the compact-open topology (by compact I don't require Hausdorff). Consider the exponential law $Z^{X \times Y} \to (Z^Y)^X$, where the map is well-defined as a set map (this is not hard to see). However, I don't think it's always continuous, but I don't have any examples of it being discontinuous. I know that $X$ Hausdorff implies it is continuous. So any counterexample must have $X$ non-Hausdorff.
There is a paper by P. Booth and J. Tillotson, Pacific J. Math. Vol. 88. No. 1, 198 (downloadable) which discusses exponential laws $X ^{Z \times Y} \cong (X^Y)^Z$ for various function space topologies, and topologies on $X \times Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integrate $\int x\sec^2(x)\tan(x)\,dx$ $$\int x\sec^2(x)\tan(x)\,dx$$ I just want to know what trigonometric function I need to use. I'm trying to integrate by parts. My book says that the integral equals $${x\over2\cos^2(x)}-{\sin(x)\over2\cos(x)}+C$$
Let $u = x, dv = \sec^2 x\tan x\,dx\implies v = \displaystyle \int\sec^2 x\tan x\,dx= \displaystyle \int \tan x\,d(\tan x)= \displaystyle \int w\,dw, w = \tan x$. Can you put it together?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2970958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Derivation of the bivariate normal distribution I am reading through a derivation of the bivariate normal distribution (from the US Defence Department!) and came across an expression that I can't understand. The derivation starts off with the observation that the total area, $A$, under the curve of the distribution is $1$, since it is a probability distribution. Also, the distribution can be expressed as a differential equation: $$dy=-k \ xy \ dx \tag{1}$$ $$y= Ce^{-k{x^2\over2}} \tag{2}$$ $$A= C\int_{-\infty}^{\infty}e^{-k{x^2\over2}}\,dx \tag{3}$$ Using integration by parts, the text proceeds to evaluate $$A= C\left|x\,e^{-k{x^2\over2}}\right|_{-\infty}^{\infty}+Ck\int_{-\infty}^{\infty}x^2e^{-k{x^2\over2}}\,dx \tag{4}$$ (I skipped some steps for brevity. Please tell me if that creates confusion.) Then the text says the following line: Since the first expression takes an indeterminate form, new numerators and denominators are obtained by independent derivatives, and the limiting value of the expression then becomes zero. Since, by definition, the n-th moment of a frequency distribution is defined as $$\text {n-th moment} = \int_{b}^{a}x^n f(x)dx$$ and since from fundamental principles the standard deviation is the square root of the second moment about the mean, then it follows, considering $(3)$, that the second expression in $(4)$ becomes $$A=Ak\sigma_X^2 \tag{5}$$ I think I understand how we got rid of the first expression $C\left|x\,e^{-k{x^2\over2}}\right|_{-\infty}^{\infty}$ in $(4)$, but what I don't understand is, how does this last step, $(5)$, follow from the previous steps? Also, where did the $C$ go?
$C$ is part of $f(x)$ as can be seen in step $(3)$ if one would just put it in front of the integral sign, and is therefore part of the variance, $σ_X^2$. $A$ is just $1$. The point of this step is to prove that $k={1\over σ_X^2}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit with factorial (Stolz & Stirling) We have to show that $\lim\limits_{n \to \infty} \frac{\ln(n!)}{n} = \infty$. I know that it should be answered with Stolz or Stirling, but I don't know how.
Stirling's approximation tells us that the following is true: $$\ln(n!) = n \ln(n) - n + O\left(\ln(n)\right)$$ If we divide this by $n$, we have: $$\frac{\ln(n!)}{n} = \ln(n) - 1 + O\left(\frac{\ln(n)}{n}\right)$$ As $n \rightarrow \infty$, $\frac{\ln(n!)}{n} \rightarrow \infty$.
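A quick numerical illustration using `math.lgamma`, where `lgamma(n + 1)` equals $\ln(n!)$ (this check is not part of the original argument):

```python
import math

def ratio(n):
    # ln(n!) / n, computed via lgamma(n + 1) = ln(n!)
    return math.lgamma(n + 1) / n

for n in (10, 100, 1000, 10**6):
    # ratio(n) tracks ln(n) - 1 and grows without bound
    print(n, ratio(n), math.log(n) - 1)
```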
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Prove that every even degree polynomial function has a maximum or minimum in $\mathbb{R}$ Prove that every even-degree polynomial function $f$ has a maximum or minimum in $\mathbb{R}$ (without directly using the derivative $f'$). The problem seems very easy and obvious, but I don't know how to write it in a mathematical way. For example, if the leading coefficient is positive, it seems obvious to me that from some point $x=a$ to $+\infty$ the function must be increasing, and from $-\infty$ to some point $x=b$ it must be decreasing; if that were not the case, its limit would not be $+\infty$ at $\pm\infty$. Now, because it is continuous, it has a maximum and a minimum on $[b,a]$, so it has a global minimum (because every $f(x)$ with $x$ outside $[b,a]$ is larger than $f(a)$ or $f(b)$, and the minimum on $[b,a]$ is less than or equal to both of them). We can do the same for a negative leading coefficient. But I can't write this in a formal mathematical way.
Let $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$. Let us assume that $a_n>0$ (the case in which $a_n<0$ is similar). Then\begin{align}\lim_{x\to\pm\infty}f(x)&=\lim_{x\to\pm\infty}a_nx^n\left(1+\frac{a_{n-1}}{a_nx}+\frac{a_{n-2}}{a_nx^2}+\cdots+\frac{a_0}{a_nx^n}\right)\\&=+\infty\times1\\&=+\infty.\end{align}Therefore, there is a $R>0$ such that $\lvert x\rvert>R\implies f(x)\geqslant f(0)$. So, consider the restriction of $f$ to $[-R,R]$. Since $f$ is continuous and since $[-R,R]$ is closed and bounded, $f|_{[-R,R]}$ attains a minimum at some point $x_0\in[-R,R]$ and, of course, $f(x_0)\leqslant f(0)$. Since outside $[-R,R]$ you always have $f(x)\geqslant f(0)$, $f$ attains its absolute minimum at $x_0$.
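The structure of the proof (growth outside a compact interval forces the minimum inside it) can be illustrated numerically; the sample polynomial, the bound $R=10$, and the grid search below are illustrative choices only:

```python
def f(x):
    return x**4 - 3*x**3 + 2   # even degree, positive leading coefficient

# Outside a large enough R, f stays above f(0) = 2, so the global
# minimum must live inside [-R, R]; scan a grid for it.
R = 10.0
xs = [-R + 2*R*k/200000 for k in range(200001)]
x0 = min(xs, key=f)
print(x0, f(x0))  # 2.25 -6.54296875 (the critical point of this sample f)
```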
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is there any shortcut method to find the determinant of A? Find the determinant of $A$: $$A=\left(\begin{matrix} x^1 & x^2 & x^3 \\ x^8 & x^9 & x^4 \\ x^7 & x^6 & x^5 \\ \end{matrix}\right)$$ My attempt: by doing a Laplace expansion along the first column I can calculate it, but it is a long process. My question is: is there any shortcut method to find the determinant of $A$? Thank you.
\begin{align*} \det(A) =\begin{vmatrix} x^1 & x^2 & x^3 \\ x^8 & x^9 & x^4 \\ x^7 & x^6 & x^5 \\ \end{vmatrix} &=\color{red}{x^5}\begin{vmatrix} x^1 & x^2 & x^3 \\ x^8 & x^9 & x^4 \\ \color{red}{x^2} & \color{red}{x^1} & \color{red}{1} \end{vmatrix}\\ &=x^5\cdot x^4 \cdot x^1\begin{vmatrix} 1 & x & x^2 \\ x^4 & x^5 & 1 \\ x^2 & x & 1 \\ \end{vmatrix}\\ &=x^{10}\cdot \color{red}{x^1}\begin{vmatrix} \color{blue}{1} & \color{red}{1} & x^2 \\ \color{blue}{x^4} & \color{red}{x^4} & 1 \\ \color{blue}{x^2} & \color{red}{1} & 1 \end{vmatrix}\\ &=x^{11}\begin{vmatrix} \color{blue}{0} & 1 & x^2 \\ \color{blue}{0} & x^4 & 1 \\ \color{blue}{x^2-1} & 1 & 1 \end{vmatrix}\\ &=x^{11}(x^2-1)(1-x^6). \end{align*}
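The final factorisation can be sanity-checked with exact integer arithmetic at sample values of $x$ (this check is independent of the derivation above):

```python
def det3(m):
    # cofactor expansion of a 3x3 matrix along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

for x in range(2, 7):
    A = [[x,    x**2, x**3],
         [x**8, x**9, x**4],
         [x**7, x**6, x**5]]
    assert det3(A) == x**11 * (x**2 - 1) * (1 - x**6)

print(det3([[2, 4, 8], [256, 512, 16], [128, 64, 32]]))  # -387072 = 2^11 * 3 * (-63)
```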
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Elementary contour integral I have an integral $$ \int_{-\infty}^{\infty}\frac{1}{(\omega^{2}-4)(\omega-2-i)(\omega+2-i)}d\omega $$ and I wish to evaluate it using Cauchy's Integral Theorem. I understand that with a simple pole on the real axis, as for $$ \frac{\sin(x)}{x}, $$ we can break the contour around $x=0$ and use Jordan's Lemma as the real axis goes to infinity. However, I'm still not confident in dealing with two poles on the real axis ($\omega=\pm2$). How should I go about this?
Hint: All the poles are simple, so you could break the integrand into a sum of four simple fractions of the form $\frac{c_k}{w-p_k}$, right? Then just deal with each integral separately.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Orthogonality of a matrix where inner product is not the dot product Here are the definitions I am using: * *Orthogonality: Two vectors $x$ and $y$ are orthogonal iff $\langle x,y \rangle=0$. *Orthonormal: If two vectors $x$ and $y$ are orthogonal and $||x|| = 1 = ||y||$ then $x$ and $y$ are orthonormal. *Orthogonal Matrix: A square matrix $\mathbf A \in \Bbb {R}^{n \times n}$ is an orthogonal matrix iff its columns are orthonormal so that $\mathbf {AA^T = I = A^TA}$. Suppose we use an inner product defined by: $$\langle x, y \rangle = x^T \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & -1 \\ 0 & -1 & 2 \\ \end{bmatrix}y$$ We define $e_1, e_2, e_3$ as the standard basis vectors in $\Bbb {R}^{3}$. When we evaluate the inner products we find: $\langle e_1, e_3 \rangle = 0$ $\langle e_1, e_2 \rangle = 1$ $\langle e_3, e_2 \rangle = -1$ So clearly using this inner product, only the basis vectors $e_1$ and $e_3$ are orthogonal. Separately, if I construct a matrix made by the basis vectors: $\mathbf A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ It holds true that $\mathbf {AA^T = I = A^TA}$. Is the matrix $\mathbf A$ orthogonal? Because the columns (using our definition of the inner product) are not actually orthogonal and therefore cannot be orthonormal, but the multiplication to identity holds.
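The three inner products computed above are quick to verify mechanically (pure Python, no libraries):

```python
M = [[2, 1, 0],
     [1, 2, -1],
     [0, -1, 2]]

def inner(x, y):
    # <x, y> = x^T M y
    return sum(x[i] * M[i][j] * y[j] for i in range(3) for j in range(3))

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(inner(e1, e3), inner(e1, e2), inner(e3, e2))  # 0 1 -1
```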
Yes, the matrix is orthogonal. All orthogonal matrices have columns with orthonormal vectors with respect to the dot product, regardless of your choice of inner product. There is something called an orthogonal linear transformation, that is, $TT^* = T^*T = I$. The matrix representation of these linear transformations with respect to an orthonormal basis (any inner product) is always an orthogonal matrix. In your example, you just showed that the identity transformation is orthogonal in all choices of inner product.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
General solution of $y''-2xy'+2y=0$ I have to find the general solution of the differential equation $$y''-2xy'+2y=0.$$ An obvious solution is $y(x)=ax$ but I am unable to find another solution. Any hint as to how to proceed or which kind of method to apply is greatly appreciated.
Use the known solution $y=x$ for reduction of order: making $y = xz$ (i.e. $z = y/x$) we obtain $$ z''-\frac{2(x^2-1)}{x}z' = 0 $$ now making $v = z'$ $$ v'-\frac{2(x^2-1)}{x}v = 0 $$ which is separable.
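Whichever way the substitution is written, the identity behind it is that $y=xz$ turns $y''-2xy'+2y$ into $x\big(z''-\frac{2(x^2-1)}{x}z'\big)$; this can be checked with finite differences (the test function $z=\sin$ and the point $x=1.3$ are arbitrary choices):

```python
import math

def d1(f, x, h=1e-5):
    # central first difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # central second difference
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

z = math.sin          # arbitrary smooth test function
x = 1.3               # arbitrary point (away from x = 0)

def y(t):             # y = x * z
    return t * z(t)

lhs = d2(y, x) - 2 * x * d1(y, x) + 2 * y(x)
rhs = x * (d2(z, x) - 2 * (x * x - 1) / x * d1(z, x))
print(lhs, rhs)  # the two sides agree to finite-difference accuracy
```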
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Poisson probability with Bayes? The number of goals scored every month is a Poisson with lambda 5: $$ P(X=x) = \frac{e^{-5}5^x}{x!} \quad (x=0,1,2,3,4,\dots) $$ What is the probability of at least 4 goals scored next month, given two goals scored next month? I need to compute $P(X \ge 4 \mid X = 2)$. $$ P(X \ge 4)=1 - \sum_{i=0}^3 \frac{e^{-5}5^i}{i!} $$ $$ P(X = 2)=\frac{e^{-5}5^2}{2!} $$ How can these probabilities be combined to produce the probability of at least 4 goals scored next month given two goals scored next month? This seems like a Bayes' theorem problem, as it's a conditional probability, but I'm unsure how to map the provided information into the formula.
The problem statement isn't quite clear but it looks like it wants you to find $\mathsf P(X\geq4\mid X\geq 2)$ It's clear then from Bayes' Theorem that $$\mathsf P(X\geq4\mid X\geq 2)=\frac{\mathsf P(X\geq4,X\geq 2)}{\mathsf P(X\geq2)}=\frac{\mathsf P(X\geq4)}{\mathsf P(X\geq2)}=\frac{1-\mathsf P(X\leq3)}{1-\mathsf P(X\leq1)}$$ In R we get $$\mathsf P(X\geq4\mid X\geq 2)\approx 0.766$$ > (1-ppois(3,5))/((1-ppois(1,5))) [1] 0.7659392 A simulation in R agrees with this result x<-rpois(10^6,5) x<-subset(x,x>=2) mean(x>=4) 0.7659072
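The same computation in pure Python, mirroring the R snippet above (only the standard library is used; the Poisson CDF is summed directly):

```python
import math

def pois_cdf(k, lam=5.0):
    # P(X <= k) for a Poisson(lam) variable
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

p = (1 - pois_cdf(3)) / (1 - pois_cdf(1))
print(round(p, 4))  # 0.7659
```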
{ "language": "en", "url": "https://math.stackexchange.com/questions/2971939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of Heine-Borel Theorem; Bartle I'm reading through the proof of the Heine-Borel Theorem in Bartle's Elements of Real Analysis and getting caught on one point: We assume that $K$ is a compact subset of $\mathbb{R}^p$ and let $x$ be an element in the complement of $K$. Then we let $G_m=\{y\in\mathbb{R}^p:|x-y|>\frac{1}{m}\}$, $m\in \mathbb{N}$; the union of all the $G_m$'s contains all of $\mathbb{R}^p$ except $x$, and since $K$ is compact, $K$ is contained in $G_M$ for some $M\in \mathbb{N}$. I'm good up to here. Then he says the neighborhood $\{z\in\mathbb{R}^p:|z-x|<\frac{1}{M}\}$ doesn't intersect $K$, therefore the complement of $K$ is open. Why does this follow? Where are the elements exactly $1/M$ distance from $x$?
Note that $\{G_m : m \in \Bbb{N}\}$ is an open cover of $K$. By compactness of $K$, finitely many of the $G_m$ cover $K$; let $M$ be the largest index among them. But since the $G_m$ are nested ($G_m \subseteq G_{m+1}$), $G_M$ alone must cover $K$, and it does not contain $x$. Because $G_M$ and $ \{z : |z-x| \leq 1/M\}$ are complements of each other, $\{z : |z-x| < 1/M\}$ is an open neighbourhood of $x$ that does not intersect $K$. Since this is true for every $x \in K^c$, $K^c$ is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do we know if a third vector is on the plane of the first and second vector? Do I have to add its components, and if that gives me zero, then is it on the plane? Suppose that the vectors have three components. Thank you.
Use the fact that a plane is a two-dimensional object. That means that the size of the basis is two (you can describe any vector in that plane as a linear combination of the vectors in the basis). Now create a matrix with the components of those three vectors as rows (or columns). If the determinant is not zero, then the vectors are independent, so they span a three-dimensional space, not two. You need the determinant to be equal to $0$.
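A minimal sketch of the determinant test (the sample vectors are made up for illustration):

```python
def det3(m):
    # cofactor expansion of a 3x3 matrix along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

u = [1, 0, 2]
v = [0, 1, -1]
w = [2, 3, 1]           # w = 2u + 3v, so it lies in span(u, v)
print(det3([u, v, w]))   # 0  -> coplanar

w2 = [2, 3, 2]           # not a combination of u and v
print(det3([u, v, w2]))  # 1 (nonzero) -> not in the plane
```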
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Study the convergence of $\sum_{n=1}^{\infty}(\sqrt[n]{n}-1)$ I need to study the convergence of the series $\sum_{n=1}^{\infty}(\sqrt[n]{n}-1)$. Now, I know that if we have a series $\sum_{n=1}^{\infty}a_n$ with positive terms and we can find a series $\sum_{n=1}^{\infty}b_n$ such that $0<a_n<b_n$, then if $\sum_{n=1}^{\infty}b_n$ is convergent, $\sum_{n=1}^{\infty}a_n$ is convergent; and if $\sum_{n=1}^{\infty}a_n$ is divergent, then $\sum_{n=1}^{\infty}b_n$ is divergent. The problem is I do not really know how to choose the comparison series. Can you help me out?
The series diverges by comparison with $\sum \frac{1}{n}\log n$. Let $n \ge 2$. Since $\sqrt[n]{n} = e^{\frac{1}{n}\log n}$, the mean value theorem gives $\sqrt[n]{n} - 1 = e^{c_n}\cdot \frac{1}{n}\log n$ for some $c_n\in \left(0, \frac{1}{n}\log n\right)$. Now $e^{c_n} > 1$, so that $$\sqrt[n]{n} - 1 > \frac{1}{n}\log n$$ Now you can compare your series with the divergent series $\sum \frac{1}{n}\log n$.
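The inequality $\sqrt[n]{n} - 1 > \frac{1}{n}\log n$ used in the comparison can be spot-checked numerically (the range of $n$ is arbitrary; this is an illustration, not a proof):

```python
import math

for n in range(2, 10000):
    # n^(1/n) - 1 = exp(log(n)/n) - 1 > log(n)/n, since e^t > 1 + t for t > 0
    assert n ** (1.0 / n) - 1.0 > math.log(n) / n, n
print("n^(1/n) - 1 > log(n)/n holds for all 2 <= n < 10000")
```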
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Numerical differentiation with Binomial Theorem In George Shilov's Elementary Real and Complex Analysis, there is a problem which asks us prove If $f$ is twice differentiable on some open interval and the second derivative is continuous at $x$, then prove that $$f''(x)=\lim_{h\rightarrow 0}\frac{f(x)-2f(x+h)+f(x+2h)}{h^2}\,.$$ This is a common fact in numerical differentiation to approximate derivatives at the left-hand point and is fairly immediate from two applications of Taylor's Theorem with Lagrange Remainder. However, this was not the end of Shilov's problem. He also states Find a similar expression for $f^{(n)}(x)$ (with appropriate hypotheses). In the back of his book, he asserts that $$f^{(n)}(x)=\lim_{h\rightarrow 0}\frac{1}{h^n}\sum_{k=0}^n (-1)^k\binom{n}{k}f(x+kh)$$ which I found interesting enough to at least remember, if not attempt. However, I recently came upon an application where this formula would be useful and attempted to prove it. However, it seems there was an error in Shilov's claim. He must have meant $$(-1)^nf^{(n)}(x)=\lim_{h\rightarrow 0}\frac{1}{h^n}\sum_{k=0}^n (-1)^k\binom{n}{k}f(x+kh)$$ because working out $n=3$ and applying Lagrange's Remainder three times results in $$\frac{f(x)-3f(x+h)+3f(x+2h)-f(x+3h)}{h^3}=\frac{1}{3!}\left(-3f'''(\xi_1)+24f'''(\xi_2)-27f'''(x_3)\right)$$ which gives the corrected limit (with continuity of $f^{(3)}$ at $x$ assumed). Is there an easy way to go about proving this result in general? We can attack this fairly directly, without induction. But this becomes equivalent to proving several interesting binomial identities: $$\sum_{k=0}^n(-1)^k\binom{n}{k} k^m=\begin{cases} (-1)^n n!&\text{ if }m=n\\0&\text{ if }0\leq m<n\end{cases}$$ The first of which was tackled here while the others seem to have gone largely unasked. The case $m=1$ is tackled here and here, and I can see that I could continue the approaches taken in these answers by differentiating several times. 
The book-keeping isn't too awful because all these identities are just sums of $0$s. Thus $0\leq m<n$ isn't too bad, if we can do $m=1$. However, proving the cases $m=1$ and $m=n$ isn't entirely trivial. Shilov seems to have hidden an interesting exercise in a terse sentence without any hint that it would be interesting. This makes me wonder if there's an easier way to go about proving this result.
I'm not sure I'm contributing anything, maybe I misunderstood since this feels like essentially a duplicate of this link. $\newcommand{\fd}{\Delta}$ It's not so hard (via e.g. repeated applications of l'Hopital, as Paramanand shows in that link) to show that you are actually interested in iterated forward finite differences $$ \fd_h^1 f(x) := f(x+h)-f(x), \quad \fd_h^{n+1} f(x) := \fd_h^1[\fd_h ^nf] (x)=\fd_h ^nf(x+h) - \fd_h ^nf(x)$$ And your hope is that $$ \fd_h ^nf (x) = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}k f(x+kh)$$ By rescaling $f$, and choosing a different $x$, it would suffice to do this for $h=1$; let $\fd:=\fd^1_1$ for easy typing. Now the inductive proof for this is straightforward using the Pascal triangle, \begin{align} \fd^{n+1}f(x)&=\Delta\left[\sum_{k=0}^{n} (-1)^{n-k} \binom{n}k f(\bullet +k)\right](x) \\ &= \sum_{k=0}^{n} (-1)^{n-k} \binom{n}k f(x +k+1) - \sum_{k=0}^{n} (-1)^{n-k} \binom{n}k f(x+k)\\ &= \sum_{k=1}^{n+1} (-1)^{n+1-k} \binom{n}{k-1} f(x +k) + \sum_{k=0}^{n} (-1)^{n+1-k} \binom{n}k f(x+k)\\ &=(-1)^{n+1} f(x) + f(x+n+1) + \sum_{k=1}^n (-1)^{n+1-k} \underbrace{\left(\binom{n}{k-1} +\binom{n}{k}\right)}_{=\binom{n+1}{k}}f(x +k)\\ &=\sum_{k=0}^{n+1} (-1)^{n+1-k} \binom{n+1}{k}f(x +k) \end{align}
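The identity proved above is easy to confirm in code for small $n$ (the degree-5 test polynomial is an arbitrary choice):

```python
from math import comb

def fd_iter(f, x, n):
    # n-fold iterated forward difference (h = 1), computed recursively
    if n == 0:
        return f(x)
    return fd_iter(f, x + 1, n - 1) - fd_iter(f, x, n - 1)

def fd_binom(f, x, n):
    # the closed form: sum over k of (-1)^(n-k) * C(n, k) * f(x + k)
    return sum((-1) ** (n - k) * comb(n, k) * f(x + k) for k in range(n + 1))

f = lambda t: t ** 5 - 3 * t ** 2 + 7
for n in range(6):
    assert fd_iter(f, 2, n) == fd_binom(f, 2, n)

# The 5th difference of a degree-5 polynomial is constant: 5! times the
# leading coefficient.
print(fd_iter(f, 2, 5))  # 120
```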
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Find the value of an integer $a$ such that $ a^2 +6a +1 $ is a perfect square. I was able to solve this, but it required trial and error at one step. I was wondering if I could find a more solid method to solve it. P.S. This is the first time I'm asking a question here, so sorry if I couldn't construct the question properly. This is my solution: $a^2 + 6a + 1 = k^2$ $(a+3)^2 = k^2 + 8$ $k^2 + 8 = m^2 $ where $m=a+3$. $(m-k)(m+k) = 8$ ... here by inspection $m= 3$ and $k =1$; hence, $a = 0, -6$.
hint $$a^2+6a+1=b^2$$ $$\iff a^2+6a+1-b^2=0$$ the reduced discriminant is $$\Delta'=9-1+b^2=b^2+8$$ $$a=-3\pm \sqrt{b^2+8}$$ thus $$b^2+8=c^2$$ and $$(c+b)(c-b)=8$$ $$=4×2=-4×(-2)$$ $$c+b=\pm 4,\;\; c-b=\pm 2$$ gives $$\;\; b=\pm 1$$ and in all cases, $$a=0 \text{ or } a=-6$$
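A brute-force search (the bound $\pm1000$ is arbitrary) confirms that $a=0$ and $a=-6$ are the only integer solutions:

```python
import math

def is_square(n):
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

solutions = [a for a in range(-1000, 1001) if is_square(a * a + 6 * a + 1)]
print(solutions)  # [-6, 0]
```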
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Inequality of real numbers with exponent For $a,b>0$ are two real numbers and $p\geq 1$. Is the following inequality true $$|a^p-b^p|\leq|a-b|^p\;\;?$$
Let $b\geq a$, and write $b=a+\delta$ with $\delta\geq0$. The claimed inequality $$|a^p-b^p|\leq|a-b|^p$$ then reads $$(a+\delta)^p-a^p\leq\delta^p,$$ i.e. $$(a+\delta)^p\leq a^p+\delta^p.$$ But for $a>0$, $\delta\geq0$ and $p\geq1$ we have $(a+\delta)^p=a^p+\delta^p+\delta x$ for some $x\geq0$ (by the binomial theorem when $p$ is an integer; for general real $p\geq1$, note that $t\mapsto(a+t)^p-a^p-t^p$ is nondecreasing and vanishes at $t=0$). Hence $$a^p+\delta^p\geq(a+\delta)^p=a^p+\delta^p+\delta x\geq a^p+\delta^p,$$ which forces $\delta x=0$, i.e. $\delta=0$ or $x=0$. Therefore $$|a^p-b^p|\leq|a-b|^p$$ holds if and only if $$a=b$$ or $$p=1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Curve of possible locations of a post office between two buildings One of my students had to answer this question, and it is actually stumping me. Let $A,B$ be two buildings 10 miles apart. Suppose $P$ is a post office such that the distance between $A$ and $P$ is 2 miles more than the distance between $P$ and $B$. Then what shape is the curve of possible locations of $P$? My instinct is that it should be an ellipse or (perhaps) a hyperbola, but I cannot see this formally.
Your desired locus is one branch of a hyperbola. You can see this by placing A and B at convenient points on the Cartesian plane--say A at (-5, 0) and B at (5, 0). Place P at (x, y). Use the distance formula to get an equation for your locus: $$\sqrt{(x-(-5))^2+(y-0)^2} = 2 + \sqrt{(x-5)^2+(y-0)^2}$$ Simplify that equation by the usual method of squaring both sides of the equation, simplifying, isolating the square root, squaring both sides, and simplifying. You will get an equation of a hyperbola: $$\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$$ You then note that, in the original equation (or using geometric considerations), $x$ must be positive. This limits the locus to the right branch of the hyperbola. With some more work you can show that each point on the right branch satisfies the original equation, so the locus is the entirety of the right branch. I'll leave the details to you. Ask if you need more help.
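Working out the details gives $a = 1$ (half the constant difference 2) and $b^2 = c^2 - a^2 = 25 - 1 = 24$; these values are easy to sanity-check numerically (the sampled $y$ values are arbitrary):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A, B = (-5.0, 0.0), (5.0, 0.0)

# Right branch of x^2/1 - y^2/24 = 1, parametrised by y.
for y in (-10.0, -1.0, 0.0, 2.5, 40.0):
    x = math.sqrt(1.0 + y * y / 24.0)
    P = (x, y)
    assert abs(dist(P, A) - dist(P, B) - 2.0) < 1e-9
print("every sampled point of the right branch satisfies PA - PB = 2")
```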
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $L = F(β)$, where $β^p \in F$, via Hilbert's Theorem 90 Let $F$ be a field that contains a primitive $p$-th root of unity, where $p$ is a prime. I wish to prove that if $L$ is Galois over $F$ and $[L : F] = p$, then $L = F(β)$, where $β^p \in F$. Does anyone know of a proof via Hilbert's Theorem 90? That is, making use of the fact that: Given that $Gal(L|F)$ is cyclic of order $n$ and $σ$ is a generator of $G$, suppose $δ_0 = α, δ_1 = ασ(α), δ_2 = ασ(α)σ^2(α), · · · ,δ_{n−1} = ασ(α)· · · σ^{n−1}(α) =$ 'norm' $N(α) = 1$ and $γ ∈ L$ is such that $β = δ_0γ + δ_1σ(γ) + · · · + δ_{n−2}σ ^{n−2}(γ) + σ^{n−1}(γ) ≠ 0$. Then we have $α = βσ(β)^{−1}$
Let $\zeta$ be a primitive $p$-th root of unity in $F$. Then $N_{L/F}(\zeta)=\zeta^p=1$. By Hilbert 90, there is $\alpha\in L^*$ with $\sigma(\alpha)/\alpha=\zeta$, that is $$\sigma(\alpha)=\zeta\alpha.$$ But then $$\sigma(\alpha^p)=\sigma(\alpha)^p=\zeta^p\alpha^p$$ so that $\alpha^p=a\in F$. But $\sigma(\alpha)\ne\alpha$ so $\alpha\notin F$. Thus $L=F(\alpha)$ with $\alpha^p\in F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2972880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Bezout's Identity: Finding A Pair of Integers There is only one integer $x$, between 100 and 200 such that the integer pair $(x, y)$ satisfies the equation $42x + 55y = 1$. What's the value of $x$ in this integer pair? We know that $$\begin{align} x &= x_0 + 5t \\ y &= y_0 - 4t \end{align}$$ But we need to know what $x_0, y_0$ are. By applying the GCD algorithm we can get the answer to be $x_0 = 17$ and $y_0 = 13$. So we need to find $100 \leq 17 + 5t \leq 200$. But treating this parametrically yields too many solutions. How do I discover the one solution?
First you need to find all integer solutions of $42x+55y=1$.[See this post] Here $\gcd(42,55)=1$, so by Euclidean algorithm, $$1=42(-17)+55(13)\;[\text{check!}]$$ So integer solutions are $$x=-17+55r$$ $$y=13-42r$$ where $r \in \Bbb{Z}$ For your task, you need to find one $r$ so that $100 \leq x \leq 200$ and $r$ satisfies this. Clearly for $r=1,2$, $x <100$. For $r=3$, we get $x=3(55)-17=\color{red}{148}$, which is the required number. For $r>3$ we get $x >200$ and if $r <0$ then $x<0$. Hence the required pair is $$(\color{red}{148},-113)$$
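The Euclidean-algorithm step ("check!") can be automated with a textbook extended-GCD routine (a standard implementation, not part of the answer):

```python
def ext_gcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(42, 55)
print(g, s, t)  # 1 -17 13, i.e. 42*(-17) + 55*13 = 1

# General solution: x = -17 + 55r.  Shift it into [100, 200].
candidates = [s + 55 * r for r in range(10) if 100 <= s + 55 * r <= 200]
print(candidates)  # [148]
```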
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Independence of random products mod $p$ Let $p$ be a large prime and $1 \leq q < p$ be chosen uniformly at random. Let $0 \leq r_1 \neq r_2 < p$ be arbitrary but fixed. Question: for sufficiently large $p$, are $qr_1 \bmod p$ and $qr_2 \bmod p$ statistically independent? (i.e. if I take $p$ sufficiently large, can I make the covariance arbitrarily small?)
I have two questions for you: * *Why do you allow $r_1=0$ or $r_2=0$? I think you should not. *Do you mean that $r_1$ and $r_2$ remain fixed when you change (say, increase) $p$? Anyway... For $0 < r_1, r_2 < p$, the expected value of each of these two random variables is $\dfrac{p}{2}$, the variance is $\dfrac{p(p-2)}{12}$, and their covariance is $\dfrac{p^2}{p-1}D(r_1,r_2;p)$, where $D$ is the Dedekind sum; the correlation coefficient is then $\dfrac{12pD(r_1,r_2;p)}{(p-1)(p-2)}$. In particular, if $r_1$ and $r_2$ remain fixed while $p$ grows, the Rademacher identity implies that the correlation coefficient approaches the nonzero limit $\gcd^2(r_1,r_2)/(r_1 r_2)$ (funny - this is GCD/LCM).
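The claimed mean and variance are easy to confirm exactly, since for $\gcd(r,p)=1$ the map $q \mapsto qr \bmod p$ just permutes $\{1,\dots,p-1\}$. A quick check with exact rational arithmetic (the prime and the values of $r$ are arbitrary choices of mine):

```python
from fractions import Fraction

def moments(p, r):
    vals = [(q * r) % p for q in range(1, p)]
    n = len(vals)
    mean = Fraction(sum(vals), n)
    var = Fraction(sum(v * v for v in vals), n) - mean ** 2
    return mean, var

p = 101  # a prime
for r in (1, 3, 7):
    mean, var = moments(p, r)
    assert mean == Fraction(p, 2)             # E[qr mod p] = p/2
    assert var == Fraction(p * (p - 2), 12)   # Var = p(p-2)/12
```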
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Empty Forest in Graph Theory I was reading about an application of Kruskal's algorithm and there was a statement about starting with an empty forest. What should I understand from the term 'empty forest'? Does it mean an edge that is not connected, or should I think of it as a forest with one component?
Usually an empty object simply doesn't contain any of whatever its highest-level member happens to be. You can talk about forests in terms of nodes and edges, but the conceptual vantage point of a forest is that it contains trees. If it's empty, it just doesn't contain any trees. By extension then, it wouldn't contain any nodes or edges since a forest only contains nodes or edges if its corresponding trees contain those same nodes or edges. Extras: I'm not sure of the author's intention here (whether a forest is defined to be an object containing trees or whether a forest is a graph whose connected components are trees), but in the first interpretation you can run into the interesting state of affairs where a forest has no nodes or edges but is still not empty. How you ask? Simple -- the forest contains any positive number of empty trees.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does "singular system" mean? Currently I'm working on a problem as an apprentice, and I'm reading the book "An Introduction to the Mathematical Theory of Inverse Problems" by Andreas Kirsch. In that book, on page 32, in Theorem 2.6, there is a sentence like this: Let $K: X \to Y$ be compact with singular system $(\mu_j, x_j, y_j )$ ... I am not sure what the author meant when he mentioned "singular system" here. I have only the most basic knowledge of functional analysis, and I also searched it up but didn't come up with a proper answer. Please help me with this. Sorry if I missed something crucial. Thank you.
In the Appendix, theorem $A.53$ clarifies your doubt! Theorem $A.53$ (Singular Value Decomposition). Let $K : X \rightarrow Y$ be a linear compact operator, $K^∗ : Y \rightarrow X$ its adjoint operator, and $μ_1 ≥ μ_2 ≥ μ_3 . . . > 0$ the ordered sequence of the positive singular values of $K$, counted relative to its multiplicity. Then there exist orthonormal systems $(x_j) ⊂ X$ and $(y_j) ⊂ Y$ with the following properties: $$Kx_j = μ_jy_j$$ and $$K^∗y_j =μ_jx_j$$ for all $j ∈ J$. The system $(μ_j ,x_j ,y_j)$ is called a singular system for $K$.
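In finite dimensions a singular system is exactly what the singular value decomposition produces. A small numerical illustration (NumPy, with an arbitrary random matrix standing in for $K$):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((4, 3))          # a stand-in for the compact operator
U, s, Vt = np.linalg.svd(K, full_matrices=False)

# singular system (mu_j, x_j, y_j): K x_j = mu_j y_j and K* y_j = mu_j x_j
for j, mu in enumerate(s):
    x_j, y_j = Vt[j], U[:, j]
    assert np.allclose(K @ x_j, mu * y_j)
    assert np.allclose(K.T @ y_j, mu * x_j)
```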
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that $GL_n(\mathbb{R}) := \{A \in \mathbb{R}^{n \times n}: \det A \neq 0\}$ is open in $\mathbb{R}^{n \times n}$ How can one prove that the set of invertible matrices $GL_n(\mathbb{R}) := \{A \in \mathbb{R}^{n \times n}: \det A \neq 0\}$ is open in $\mathbb{R}^{n \times n}$? Here $\mathbb{R}^{n \times n}$ is equipped with a norm. The domain of $A \mapsto \det A$ is $\mathbb{R}^{n \times n}$ and $A \mapsto \det A \in \mathbb{R}$ is the transformation rule, so $A$ can be any matrix in $\mathbb{R}^{n \times n}$. $GL_n(\mathbb R)$ is the inverse image $\det^{-1}(\mathbb{R} \setminus \{0\})$. Since $\det$ is continuous, this inverse image of an open set is also open in $\mathbb{R}^{n \times n}$. Is that correct? And how can I show that $\mathbb{R}^{n \times n} \ni A \mapsto \det A \in \mathbb{R}$ is continuous? I don't know how I should use the Laplace expansion here...
The determinant function $$\det: \mathbb{R}^{n^2} \to \mathbb{R}\\A \mapsto \det A$$ is continuous because it is a polynomial function of the entries. The set $GL_n(\mathbb{R})=\{A \in M_n(\mathbb{R}) \mid \det A \ne 0\}$ is the preimage of the open set $\mathbb{R} \setminus \{0\}$, so, by the continuity of the determinant function, it is open in $\mathbb{R}^{n^2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Partial derivative of cross-entropy I am trying to make sense of this question. $$E(t,o)=-\sum_j t_j \log o_j$$ How did he derive the following? $$\frac{\partial E} {\partial o_j} = \frac{-t_j}{o_j}$$
You are missing a reformulation Christopher Bishop (1995) took for the cross-entropy, because the formulation \begin{equation} E=-\sum_j t_j \log (y_j) \end{equation} does not have a minimum value of zero. However, his reformulation below for cross-entropy error has a minimum value of zero: \begin{equation} E=-\sum_j t_j \log \left( \frac{y_j}{t_j} \right). \end{equation} For the above equation, the partial derivative of the error $E$ w.r.t $y_j$ is simply \begin{equation} \frac{\partial E}{\partial y_j}=-\frac{t_j}{y_j} . \end{equation}
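Either formulation gives the same partial derivative, since the extra $\log t_j$ term does not depend on $y_j$. A quick finite-difference check of $\partial E/\partial o_j=-t_j/o_j$, with made-up values:

```python
import math

def E(o, t):
    return -sum(tj * math.log(oj) for tj, oj in zip(t, o))

t = [0.0, 1.0, 0.0]
o = [0.2, 0.5, 0.3]

analytic = [-tj / oj for tj, oj in zip(t, o)]   # dE/do_j = -t_j / o_j

eps = 1e-6
numeric = []
for j in range(len(o)):
    op = o.copy(); op[j] += eps
    om = o.copy(); om[j] -= eps
    numeric.append((E(op, t) - E(om, t)) / (2 * eps))

assert all(abs(a - n) < 1e-5 for a, n in zip(analytic, numeric))
```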
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that a sequence $a_n: n\in\mathbb{N}$ is (not) monotonic, bounded and converging $$a_n = \left(\dfrac{n^2+3}{(n+1)^2}\right)\text{ with } \forall n\in \mathbb{N}$$ $(0\in\mathbb{N})$ Monotonicity: To prove, that a sequence is monotonic, I can use the following inequalities: \begin{align} a_n \leq a_{n+1}; a_n < a_{n+1}\\ a_n \geq a_{n+1}; a_n > a_{n+1} \end{align} I inserted some $n$'s to get an idea on how the sequence is going to look like. I got: \begin{align} a_0&=3\\ a_1&=1\\ a_2&=\frac{7}{9}\approx 0.\overline{7}\\ a_3&=\frac{3}{4}=0.75 \end{align} Assumption: The sequence is monotonic for $\forall n\in \mathbb{N}$ Therefore, I show that \begin{align} a_n \leq a_{n+1}; a_n < a_{n+1}\\ a_n \geq a_{n+1}; a_n > a_{n+1} \end{align} I am having problems when trying to prove the inequalities above: \begin{align} & a_n \geq a_{n+1}\Longleftrightarrow \left|\frac{a_{n+1}}{a_n}\right |\leq 1\\ & = \left|\dfrac{\dfrac{(n+1)^2+3}{(n+2)^2}}{\dfrac{n^2+3}{(n+1)^2}}\right|\\ & = \frac{4 + 10 n + 9 n^2 + 4 n^3 + n^4}{12 + 12 n + 7 n^2 + 4 n^3 + n^4}\\ & = \cdots \text{ not sure what steps I could do now} \end{align} Boundedness: The upper bound with $a_n<s_o;\; s_o \in \mathbb{N}$ is obviously the first number of $\mathbb{N}$: \begin{align} a_0=s_o&=\frac{0^2+3}{(0+1)^2}\\ &=3 \end{align} The lower bound $a_n>s_u;\; s_u \in \mathbb{N}$ $s_u$ should be $1$, because ${n^2+3}$ will expand similar to ${n^2+2n+1}$ when approaching infinity. I don't know how to prove that formally. 
Convergence Assumption (s.a) $\lim_{ n \to \infty} a_n =1$ Let $\varepsilon$ contain some value, so that $\forall \varepsilon > 0\, \exists N\in\mathbb{N}\, \forall n\ge N: |a_n-a| < \varepsilon$: \begin{align} \mid a_n -a\mid&=\left|\frac{n^2+3}{(n+1)^2}-1\right|\\ &= \left|\frac{n^2+3}{(n+1)^2}-\left(\frac{n+1}{n+1}\right)^2\right|\\ &= \left|\frac{n^2+3-(n+1)^2}{(n+1)^2}\right|\\ &= \left|\frac{n^2+3-(n^2+2n+1)}{(n+1)^2}\right|\\ &= \left|\frac{2-2n}{(n+1)^2}\right|\\ &= \cdots \text{(how to go on?)} \end{align}
Hint: $$a_{n+1}-a_n=2\,{\frac {{n}^{2}-n-4}{ \left( n+2 \right) ^{2} \left( n+1 \right) ^{ 2}}} $$ Second hint: $$a_n=\frac{n^2(1+\frac{3}{n^2})}{n^2(1+\frac{2}{n}+\frac{1}{n^2})}$$ this tends to $1$ for $n$ tends to infinity
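The first hint's closed form for the difference (and the sign change it implies at small $n$) can be verified with exact rational arithmetic; a sketch:

```python
from fractions import Fraction

def a(n):
    return Fraction(n * n + 3, (n + 1) ** 2)

# the hint's closed form for the difference a_{n+1} - a_n
for n in range(0, 50):
    diff = Fraction(2 * (n * n - n - 4), (n + 2) ** 2 * (n + 1) ** 2)
    assert a(n + 1) - a(n) == diff

# n^2 - n - 4 changes sign: the sequence decreases up to n = 3, then increases
assert a(3) < a(2) and a(4) > a(3)
```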
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Expanding the supremum metric on $C[0,1]$ to $C[a,b]$ Let $X=C[0,1]$ be the set of all continuous functions on the interval $[0,1]$. Define: $$d_1(f,g)= \sup {\{ \left \lvert {f(t)-g(t)} \right \rvert : \ t \in [0,1]} \}$$ I want to expand this supremum metric of continuous functions defined on $[0,1]$ to the supremum metric of continuous functions on any interval $[a,b]$, that is: $$d_2(f,g)= \sup {\{ \left \lvert {f(t)-g(t)} \right \rvert : \ t \in [a,b]} \}$$ The following hint was proposed to me: There exists a bijection from $C[a,b]$ to $C[0,1]$. For every function $f(x)$ on $C[a,b]$ we can define a function $g(x)$ on $C[0,1]$ with: $$g(x)=f \left(\frac{x-a}{b-a} \right)$$ I don't quite understand how this proves that if $d_1(f,g)$ is metric on $C[0,1]$, then $d_2$ is also a metric on $C[a,b]$. I imagine there is a certain theorem that connects these two.
The map you have given is in fact not a bijection; remark that if $0\leq b \leq 1$, $g$ is not even defined at $x=b$. Instead, for a function $f \in C[a,b]$, we define $g \in C[0,1]$ to be: \begin{align*} g(x) = f \left( (b-a)x + a \right) \end{align*} You should check for yourself that this is indeed a bijection. However, in your case, what would be more helpful is the inverse of this bijection, which would in fact be that $g \in C[0,1]$ maps to $f \in C[a,b]$, where: \begin{align*} f(x) = g \left( \frac{x-a}{b-a} \right) \end{align*} Now, we can show that your metric $d$ carries over to $C[a,b]$ (see remark at the bottom). Let $f_1, f_2 \in C[a,b]$, and let them be the images of $g_1, g_2 \in C[0,1]$ under the above bijection. Then we have: \begin{align*} \sup \limits_{a \leq x \leq b} |f_1(x) - f_2(x)| = \sup \limits_{a \leq x \leq b} \left| g_1 \left( \frac{x-a}{b-a} \right) - g_2 \left( \frac{x-a}{b-a} \right) \right| \end{align*} But in fact, since $x \mapsto \frac{x-a}{b-a}$ is a bijection from $[a,b]$ to $[0,1]$, this is in fact the same as: \begin{align*} \sup \limits_{0 \leq y \leq 1} |g_1(y) - g_2(y)| \end{align*} And so we have that: \begin{align*} d(f_1,f_2) = d(g_1,g_2) \end{align*} Therefore the metric is preserved when it is applied to the images of elements under this bijection, and so it is a metric on the resulting space. REMARK: $d$ as you defined it is not actually a metric on $C[a,b]$; indeed with $d(g_1,g_2)$ we take the supremum of $|g_1(t) - g_2(t)|$ over $[0,1]$. This may not be defined for functions in $C[a,b]$. What you really need is a slightly modified metric $\tilde{d}$ which takes this supremum over $[a,b]$.
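The key step — that the change of variables preserves the sup distance — can be illustrated numerically on matching grids (the example functions are arbitrary choices of mine):

```python
import math

a, b = 2.0, 5.0
N = 1000
ys = [i / N for i in range(N + 1)]            # grid on [0, 1]
xs = [a + (b - a) * yi for yi in ys]          # image grid on [a, b]

g1, g2 = math.cos, math.sin                   # g_i in C[0, 1]
f1 = lambda t: g1((t - a) / (b - a))          # corresponding f_i in C[a, b]
f2 = lambda t: g2((t - a) / (b - a))

d01 = max(abs(g1(yi) - g2(yi)) for yi in ys)
dab = max(abs(f1(xi) - f2(xi)) for xi in xs)
assert abs(d01 - dab) < 1e-9                  # the two sup distances agree
```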
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sum the first $n$ terms of the series $1 \cdot 3 \cdot 2^2 + 2 \cdot 4 \cdot 3^2 + 3 \cdot 5 \cdot 4^2 + \cdots$ The question Sum the first $n$ terms of the series: $$ 1 \cdot 3 \cdot 2^2 + 2 \cdot 4 \cdot 3^2 + 3 \cdot 5 \cdot 4^2 + \cdots. $$ This was asked under the heading using method of difference and the answer given was $$ S_n = \frac{1}{10}n(n+1)(n+2)(n+3)(2n+3). $$ My approach First, I get $$ U_n=n(n+2)(n+1)^2. $$ Then I tried to make $U_n = V_n - V_{n-1}$ in order to get $S_n = V_n - V_0$. But I really don't know how can I figure this out.
\begin{align*} \sum_{k=1}^n k(k+2)(k+1)^2&=\sum_{k=1}^n (k+3)(k+2)(k+1)k-2\sum_{k=1}^n (k+2)(k+1)k\\ &=24 \sum_{k=1}^n \binom{k+3}{4}-12 \sum_{k=1}^n \binom{k+2}{3}\\ &=24\binom{n+4}{5}-12\binom{n+3}{4} \end{align*}
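Both the binomial form and the stated closed form can be checked against a direct summation; a quick sketch:

```python
from math import comb

def S(n):
    return sum(k * (k + 2) * (k + 1) ** 2 for k in range(1, n + 1))

for n in range(1, 50):
    closed = n * (n + 1) * (n + 2) * (n + 3) * (2 * n + 3) // 10
    binom_form = 24 * comb(n + 4, 5) - 12 * comb(n + 3, 4)
    assert S(n) == closed == binom_form
```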
{ "language": "en", "url": "https://math.stackexchange.com/questions/2973979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to prove that if the integral of a nonnegative function is zero, then the function is zero This is required to prove that the expression $d(f,g)=\int|f-g|dt$ is a metric. We need to show that if $\int |f-g|dt=0$ then $f=g$. It seems obvious, but how does one prove it? This relation is not given in Paul's Online Math Notes, for example. (Here $f,g$ are continuous on $[a,b]$.)
Let's forget continuity for a minute. Let $u$ be a measurable function that's also non-negative almost everywhere on $I=[a, b]$, such that $$\int_a^bu=0$$ Then for $\epsilon>0$, let $$S_\epsilon=\{x\in[a,b] \textrm{ such that } u(x)\geq\epsilon\}$$ Then by definition of $S_\epsilon$, $$\int_a^bu\geq\int_{S_\epsilon}u\geq\epsilon\mu(S_\epsilon)$$ where $\mu(S_\epsilon)$ is the Lebesgue measure of $S_\epsilon$. That inequality $$\int_a^bu\geq\epsilon\mu(S_\epsilon)$$ is known as the Chebyshev inequality. Because $u$ has a $0$ integral on $[a,b]$, it follows that $S_\epsilon$ is of measure $0$. So $u<\epsilon$ almost everywhere on $[a, b]$, and that, for all $\epsilon >0$. It's now easy to conclude that $u=0$ almost everywhere (for instance take the sequence $\epsilon_n=\frac 1 n$). Now if you assume that $u$ is also continuous, that means that $u=0$ everywhere on $[a, b]$. Finally, in the world of probabilities, this is how you prove that if a random variable has zero variance, then that variable is not that random: it is constant, almost surely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2974157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Covariance intuition to formula How do we prove the transition below mathematically? * *Double Summation from $$ \dfrac{1}{N^2}\sum_{i=1}^{N}\sum_{j=i+1}^{N}(x_i - x_j)(y_i - y_j) \tag{1} $$ to $$ \dfrac{1}{2N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(x_i - x_j)(y_i - y_j) \tag{2} $$ I understand this visually though (imagining a table, where {1} represents half of it diagonally and {2} the whole, so {2} is double {1}, thus we halve it to equate to {1}). *Double Summation from $$ \dfrac{1}{2N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(x_i - x_j)(y_i - y_j) \tag{2} $$ to single $$ \dfrac{1}{N}\sum_{i=1}^{N}(x_i - \overline{x})(y_i - \overline{y}) \tag{3} $$ Context: I am trying to intuitively understand the evolution of the covariance formula. Naturally I stumbled upon this link where it is explained with rectangles (but rectangles between two points, not one point and the mean). Apparently the rectangle visualization equates to equation {2} if we avoid duplicate rectangles, but then I want to transfer this notion to the regular covariance formula with the mean, as in equation {3}. Mathematically proving that could be a bridge to show how they (equation {2} to {3}) are one and the same. This paper does that in reverse, where I also did not understand the notion $\overline{x}\cdotp\overline{y}$.
The paper you have linked indeed answers your question. The fact that it is done "in reverse" is not an issue, since an equality is proven; the equality holds both ways. Doing it in reverse would only be an issue when proving a theorem which gives a necessary, but not sufficient, condition. $\overline{x} \cdot \overline{y}$ is critical in understanding the proof. Write out both of these expressions, then you will see why the cross-terms in formula (2) vanish. I hope this helps. I admit that it is probably still not completely intuitive, but sometimes an algebraic answer is also an answer.
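The equality of formulas (2) and (3) is also easy to confirm numerically on random data (a sketch; the sample size is an arbitrary choice):

```python
import random

random.seed(1)
N = 200
x = [random.random() for _ in range(N)]
y = [random.random() for _ in range(N)]

# formula (2): average over all ordered pairs, halved
pair_form = sum((x[i] - x[j]) * (y[i] - y[j])
                for i in range(N) for j in range(N)) / (2 * N * N)

# formula (3): the usual mean-centered covariance
xb, yb = sum(x) / N, sum(y) / N
mean_form = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / N

assert abs(pair_form - mean_form) < 1e-9
```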
{ "language": "en", "url": "https://math.stackexchange.com/questions/2974227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Covariant derivative on the dual bundle The covariant derivative on the dual bundle is defined as follows: $\nabla^{*}: \Gamma(TM) \times \Gamma(E^*) \ni (X, t) \mapsto \nabla_X^{*} t \in \Gamma(E^*)$ , where for any section $s \in \Gamma(E)$, $(\nabla_X^{*} t)(s) = L_X(t(s)) - t(\nabla_X s)$. Remark: $\Gamma(E^*)$ is the set of sections of the dual bundle and $\Gamma(TM)$ is the set of vector fields. I would like to check whether this is indeed a covariant derivative. I have already proved that it satisfies function-linearity, but now I have difficulties to show that it satisfies Leibniz rule, i.e. $(\nabla_X^* ft) = (L_X f)t + f\nabla^*_X t$. I need to verify that $(\nabla_X^* ft)(s) = L_X(ft(s)) - ft(\nabla_X s) = \ldots = (L_X f)t(s) + f(\nabla^*_X t)(s). $ Can someone help me to find the intermediate steps ? Thanks.
As written in the post, $$\nabla^*_Xft(s)=X(ft(s))-ft\nabla_Xs.$$ Now, using the Leibniz rule for the first summand on the right, $$=X(f)t(s)+fX(t(s))-ft\nabla_Xs=X(f)t(s)+f\nabla^*_Xt(s).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2974614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can any integer, not a multiple of three, be represented as $n = \sum_{i=0}^{a-1} 3^i \times 2^{b_i}$? This question has some relevance to the Collatz conjecture. It was originally based on trying to represent a number like this: Finding whether $\dfrac{2^k - (2\cdot3^{n-1} + 2^{t_0}3^{n-2} + 2^{t_0+t_1}3^{n-3} .... + 2^{t_0+t_1+...+t_{n-1}})}{3^n}$ can describe all integers However, I generalised that and tried to ask a simpler question instead; now this is simply out of curiosity rather than anything useful, as is often the case with the Collatz conjecture. Is it possible to represent a number that is not divisible by three as: $$n = \sum_{i=0}^{a-1} 3^i \times 2^{b_i}$$ $$n, a, b_i \in \Bbb{Z^+_0}$$ where $b_i$ is some arbitrary integer such that $b_{i+1} \leq b_i$ For example, $13=1+3+9$. EDIT: My original question asked what happened with these conditions: $$n, a, b_i \in \Bbb{Z^+}$$ $$b_{i+1} < b_i$$ (Notice the subtle differences in specifications.) Under these above conditions, some numbers cannot be represented, like $13$. What are some characteristics of numbers that cannot be represented like this? Any help will be much appreciated. Thanks.
I deduced the same thing when studying the Collatz conjecture; here is a proof without the restrictions, and some related facts. Let $G_n = \{m \in \mathbb{N} \,|\,\gcd(m,n) = 1\}$ We can do this for all $G_n$, for example $n = 2$, but we are only interested in $n = 3$. Lemma: For all $n \in G_3$, there exists $a \in \mathbb{Z} : 0 \le a$ such that: * *$n = 2^a \hspace{5pt} (\text{mod } 3)$ *$n \neq 2^a \hspace{5pt} (\text{mod } 9)$ Proof: Without loss of generality, we use this table: +-----+---+---+---+---+----+----+ | | 1 | 2 | 4 | 8 | 16 | 32 | +-----+---+---+---+---+----+----+ |Mod 3| 1 | 2 | 1 | 2 | 1 | 2 | +-----+---+---+---+---+----+----+ |Mod 9| 1 | 2 | 4 | 8 | 7 | 5 | +-----+---+---+---+---+----+----+ From the table: * *For all $n \in G_3$, there exists $a$ such that $n = 2^a\ (\text{mod }3)$. *For all $n \in G_3$, there exists $a$ such that $n = 2^a\ (\text{mod }9)$. *By contradiction, suppose that for all $a$ with $n = 2^a\ (\text{mod }3)$ we also have $n = 2^a\ (\text{mod }9)$. By the table, there exists $a'$ such that $2^a = 2^{a'}\ (\text{mod }3)$ and $2^{a} \neq 2^{a'}\ (\text{mod }9)$. Then, $n = 2^{a'}\ (\text{mod }3)$ and $n \neq 2^{a'}\ (\text{mod }9)$. This is a contradiction, so the required $a$ exists. Corollary: For all $n \in G_3$, there exist $a \in \mathbb{Z} : 0 \le a$ and $k \in G_3$ such that $n = 2^a + 3k$. Notation: If a number $n$ can be represented in this form, for some $l \in \mathbb{Z} : 0 \le l$: $$ n = 2^{\alpha_0} + 2^{\alpha_1}\,3^{1} + 2^{\alpha_2}\,3^{2} + ... + 2^{\alpha_{l}}\,3^{l} $$ such that $\alpha_i \in \mathbb{Z} : \alpha_i \ge 0$ for $i = 0, ..., l$, then we say that $n$ is in $l$-canonical form, and if there exists $l \in \mathbb{Z} : 0 \le l$ such that $n$ is in $l$-canonical form, then we say $n$ can be represented in the canonical form. Theorem: Every number in $G_3$ can be represented in the canonical form. Proof: We prove by induction that $n$ can be represented in that form, for all $n \in G_3$. Base case: You can find an expression for all $n \in G_3 : n \le 9$.
Inductive case: Suppose this holds for all $k \in G_3 : k < n$: By the corollary, there exist $a$ and $k \in G_3$ such that $n = 2^a + 3 k$. Because $k < n$, $k = 2^{\alpha_0} + 2^{\alpha_1}\,3 + ... + 2^{\alpha_i}\,3^i$. Finally, because $n = 2^{a} + 3\,(2^{\alpha_0} + 2^{\alpha_1}\,3 + ... + 2^{\alpha_i}\,3^i)$, every number in $G_3$ can be represented in the canonical form. Notation: If $n \in G_3$ can be represented in $0$-canonical form, we say $n$ can be represented in the canonical principal form. Notation: If $m$ can be represented in $l$-canonical form and $n = m + 3^{l + 1}\,t$, then we say $t$ is in the tail of $n$, with $n,m,t \in G_3$. The Collatz conjecture implies that every number in $G_3$ is in the tail of some canonical principal form. Conjecture: If $n \in G_3$ can be represented in an $a$-canonical form and a $b$-canonical form, for $a \le b$, then $n$ can be represented in an $l$-canonical form, for all $l \in \mathbb{Z} : a \le l \le b$.
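The theorem's induction translates directly into a search procedure: try each $2^a \le n$ with $n - 2^a$ divisible by $3$ and recurse on $(n-2^a)/3$. A sketch (the function name is my own):

```python
def canonical(n):
    """Return exponents [a0, ..., al] with n = sum 2^{a_i} 3^i, or None."""
    if n % 3 == 0:
        return None
    a = 0
    while (1 << a) <= n:
        if (1 << a) == n:            # n = 2^a exactly: 0-canonical tail
            return [a]
        rest = n - (1 << a)
        if rest % 3 == 0:            # i.e. n = 2^a (mod 3)
            sub = canonical(rest // 3)
            if sub is not None:
                return [a] + sub
        a += 1
    return None

# every n in G_3 below 200 admits a canonical form
for n in range(1, 200):
    if n % 3 != 0:
        exps = canonical(n)
        assert exps is not None
        assert n == sum((1 << a) * 3 ** i for i, a in enumerate(exps))
```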
{ "language": "en", "url": "https://math.stackexchange.com/questions/2974802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Stirling numbers of the second kind - proof For a fixed integer k, how would I prove that $$\sum_{n\ge k} \left\{n \atop k\right\}x^n= \frac{x^k}{(1-x)(1-2x)...(1-kx)}.$$ where $\left\{n \atop k\right\}=k\left\{n-1 \atop k\right\}+\left\{n-1 \atop k-1 \right\}$
The definition of ${n\brace k}$ through the recursion is easily seen to be equivalent to the combinatorial definition "the number of ways for partitioning a set with $n$ elements into $k$ non-empty subsets". $m^n$ can be interpreted as the number of functions from $[1,n]$ to $[1,m]$: if we classify them according to the cardinality of their range, we may easily check that $$ m^n =\sum_{k=1}^{n}{n\brace k}k! \binom{m}{k} \tag{1}$$ i.e. Stirling numbers of the second kind allow to decompose monomials into linear combination of binomial coefficients. In equivalent terms ${n\brace k}k!$ is the number of surjective functions from $[1,n]$ to $[1,k]$, and $${n \brace k}k!=\sum_{j=0}^{k}\binom{k}{j}j^n (-1)^{k-j} \tag{2}$$ follows by inclusion-exclusion. $(2)$ is a natural counter-part of $(1)$, and it leads to $$ {n\brace k} x^n = \sum_{j=0}^{k}\frac{x^n j^n (-1)^{k-j}}{j!(k-j)!} \tag{3}$$ then to: $$ \sum_{n\geq k}{n\brace k}x^n = x^k\sum_{j=0}^{k}\frac{j^k (-1)^{k-j}}{j!(k-j)!(1-jx)}=x^k\sum_{j=1}^{k}\frac{j^k}{k!}\binom{k}{j}\frac{(-1)^{k-j}}{1-jx}.\tag{4}$$ By residues or equivalent techniques, the last sum can be easily checked to be the partial fraction decomposition of $\frac{1}{(1-x)(1-2x)\cdot\ldots\cdot(1-kx)}$, proving the claim.
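Identity $(2)$ can be checked against the recursive definition for small parameters; a sketch:

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(None)
def S(n, k):
    """Stirling number of the second kind via the recursion in the question."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

# identity (2): S(n,k) * k! = sum_j C(k,j) j^n (-1)^{k-j}
for n in range(0, 12):
    for k in range(0, n + 1):
        rhs = sum(comb(k, j) * j ** n * (-1) ** (k - j) for j in range(k + 1))
        assert S(n, k) * factorial(k) == rhs
```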
{ "language": "en", "url": "https://math.stackexchange.com/questions/2974892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Partial derivative of an integral from zero to infinity How would one go about taking the derivative of this integral? $$\frac{\partial}{\partial C_T} \int_{0}^{\infty} U(C_T)e^{-\delta t}dt$$
$$ \int_{0}^{\infty} U(C_T)e^{-\delta t}dt=U(C_T)\int_{0}^{\infty} e^{-\delta t}dt=\frac{U(C_T)}{\delta} $$ (assuming $\delta>0$ and that $C_T$ does not depend on $t$), and so $$\frac{\partial}{\partial C_T} \int_{0}^{\infty} U(C_T)e^{-\delta t}dt=\frac{U'(C_T)}{\delta}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inverse of a primitive recursive bijection Is it true or false that the inverse of a primitive recursive bijection $f: \mathbb{N} \to \mathbb{N}$ is also primitive recursive (pr)?
No, this is not true! Let $A$ be the Ackermann–Péter function (defined by $A(x)=A(x,x)$ with this definition). Note that $A$ is fast growing, recursive but not primitive recursive. But $A^{-1}$ defined by $A^{-1}(x)=1+\max\{y\;|\;A(y)\le x\}$ ($0$ if the set is empty) is primitive recursive (and very slow growing). Now consider the sets $B=\{A(x) \;|\; x \in \mathbb N\}$ and $\overline{B}=\mathbb N\setminus B $ (both infinite). And the sets $Odd=\{2x+1 \;|\; x \in \mathbb N\}$ and $\overline{Odd}=\mathbb N\setminus Odd $ (both infinite). Then the bijection $\mathcal B$ that matches the $k^{th}$ element of $B$ to the $k^{th}$ element of $Odd$ and the $k^{th}$ element of $\overline{B}$ to the $k^{th}$ element of $\overline{Odd}$ is primitive recursive (because given $x$, you need to compute $y=A^{-1}(x)$ and at the same time verify whether $A(y-1)=x$; both can be done with primitive recursive functions). If $A(y-1)=x$ then $\mathcal B(x)=2y-1$, else $\mathcal B(x)=2(x-y)$. But $\mathcal B^{-1}$ is not primitive recursive, as $\mathcal B^{-1}(2x+1)=A(x)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove $(\neg(\forall x(\neg P(x)))) \vdash (\exists x\, P(x))$ by Natural Deduction I want to prove $(\neg(\forall x(\neg P(x)))) \vdash (\exists x\, P(x))$ using only the basic rules of the Natural Deduction system for propositional logic and predicate logic. I am not sure how to get rid of the negation before the universal quantifier. How should I prove this?
If in doubt, try reductio .... So after the initial premiss, assume $\neg\exists xPx$ Now what? You'll have to make another assumption to get anywhere .... So suppose $Pa$ (the obvious thing ...why??) Then you can infer $\exists xPx$, contradiction! So you can infer $\neg Pa$ And that gives you $\forall x\neg Px$ And now the end is in sight, because this contradicts the initial premiss ... Join up the dots, and finish the proof.
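Joining up the dots, the same reductio can be written as a formal term; here is a sketch in Lean 4 (assuming classical logic, which the reductio step requires):

```lean
-- ¬∀x ¬P(x) ⊢ ∃x P(x): assume ¬∃x P(x), derive ∀x ¬P(x),
-- and contradict the premiss — exactly the proof outlined above.
theorem exists_of_not_forall_not {α : Type} (P : α → Prop)
    (h : ¬ ∀ x, ¬ P x) : ∃ x, P x :=
  Classical.byContradiction fun hne =>
    h fun x hp => hne ⟨x, hp⟩
```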
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the Euler characteristic of the hyperboloid of one sheet I would like to know what's the Euler characteristic of the hyperboloid of one sheet. I know that $2-2g$ is the Euler characteristic where g is the number of "holes". Using this fact, Euler characteristic of the hyperboloid is -2. Am I right?
No. The "fact" you mention is not stated in a rigorous way; there is no definition of 'hole'. (I really dislike this phrasing because of confusions like this.) The precise statement is that if $\Sigma_g$ is the compact surface without boundary of genus $g$, then $\chi(\Sigma_g) = 2-2g$. The hyperboloid of one sheet is not compact, so it does not fit into this statement. It deformation retracts onto a circle, and the Euler characteristic is a homotopy invariant, so $\chi(H) = \chi(S^1) = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit Evaluation - $\lim_{x\to \infty} \frac{1-e^x}{e^{2x}}$ $\lim_{x\to \infty} \frac{1-e^x}{e^{2x}}$ My guess is to evaluate by dividing all terms by $e^x$, which works and gives me Eulers identity. But why should that be right? I thought we are only supposed to divide by the highest exponent term in the denominator? But when I do that, I cannot get a solution? How and when is it ok to divide by an exponent in the numerator?
Your thought is correct, but there is no need for Euler's identity; indeed, dividing both numerator and denominator by $e^x$, we obtain $$\dfrac{1-e^x}{e^{2x}}=\dfrac{\frac1{e^x}-\frac{e^x}{e^x}}{\frac{e^{2x}}{e^x}}=\dfrac{\frac1{e^x}-1}{e^x}$$ and then it suffices to observe that the numerator tends to $-1$ (which is bounded) and the denominator diverges to $\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Prove that two sequences of integers that have the same sum and product must be the same. Given two sequences of nondecreasing distinct positive integers such that $$x_1 + x_2 + ... + x_i = y_1 + y_2 + ... + y_i , i>0$$ and that $$x_1x_2 ... x_i = y_1y_2 ... y_i$$ Prove/disprove that the sequences are equal i.e. $$x_1 = y_1, x_2 = y_2, ... , x_i = y_i$$ I started with Let $x_1x_2 ... x_i$ be $A$. If $A$ is prime, $x_1 = A = y_1$ (since $A$ cannot be factored any more) and we are done. What I don't know is what happens when $A$ is not prime. Intuitively, it sounds true, and I cannot find any counter examples.
Counterexample: $12+4+3 \ =\ 9+8+2$ $12\cdot4\cdot3 \ = \ 9\cdot8\cdot2$ Moreover, for $\ i>2\ ,\ $you can always find infinitely many counterexamples.
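Counterexamples like this can be found by a small exhaustive search; a sketch:

```python
from itertools import combinations

# search small increasing triples for two with equal sum and product
seen = {}
hits = []
for triple in combinations(range(1, 20), 3):
    a, b, c = triple
    key = (a + b + c, a * b * c)
    if key in seen:
        hits.append((seen[key], triple))
    else:
        seen[key] = triple

# the counterexample above: 2+8+9 = 3+4+12 and 2*8*9 = 3*4*12
assert ((2, 8, 9), (3, 4, 12)) in hits
```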
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What does '$N × Q$' represent in this relation? In relation $\{(x, y) ∈ N × Q | y = \sqrt x\}$ what does '$N × Q$' represent?
$\mathbb N$ is the set of natural numbers and $\mathbb Q$ is the set of rationals. $\mathbb N \times \mathbb Q$ is the Cartesian product of the two sets : $\mathbb N$ and $\mathbb Q$. Thus, $(x, y) \in \mathbb N \times \mathbb Q$ means that $(x, y)$ is an ordered pair where $x$ is a natural and $y$ is a rational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2975989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If a semigroup's left identity is unique, can it be a two-sided identity? If a semigroup's left identity is unique, can it be a two-sided identity? The answer is true if we consider it in a ring, where we can construct $(be-b+e)a=a$. But in a semigroup I can't imagine how to construct an equation that lets us use the uniqueness condition. So I am wondering if the statement is false. But I also couldn't find a counterexample.
That is an interesting question, and the answer is no: a unique left identity need not be two-sided. (One can check that no two-element semigroup works, so a counterexample needs at least three elements.) Let $S=\{e,a,b\}$ with the multiplication $e\cdot s=s$ for every $s$, and $a\cdot e=b,\; a\cdot a=a,\; a\cdot b=b,\; b\cdot e=b,\; b\cdot a=a,\; b\cdot b=b$ (so $a$ and $b$ act the same way on the left). One can verify associativity directly: the left-translation maps satisfy $L_{x\cdot y}=L_x\circ L_y$. Here $e$ is a left identity, while $a$ and $b$ are not (since $a\cdot e=b\neq e$ and $b\cdot e=b\neq e$), so $e$ is the unique left identity. But $e$ is not a right identity, otherwise we would have $a\cdot e=a$, whereas $a\cdot e=b$.
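Claims like this are easy to sanity-check by brute force. The sketch below checks an explicit three-element table (a construction of mine) for associativity, uniqueness of the left identity, and absence of a right identity:

```python
S = ['e', 'a', 'b']
mul = {
    ('e', 'e'): 'e', ('e', 'a'): 'a', ('e', 'b'): 'b',
    ('a', 'e'): 'b', ('a', 'a'): 'a', ('a', 'b'): 'b',
    ('b', 'e'): 'b', ('b', 'a'): 'a', ('b', 'b'): 'b',
}

assoc = all(mul[(mul[(x, y)], z)] == mul[(x, mul[(y, z)])]
            for x in S for y in S for z in S)
left_ids = [e for e in S if all(mul[(e, s)] == s for s in S)]
right_ids = [e for e in S if all(mul[(s, e)] == s for s in S)]

assert assoc
assert left_ids == ['e']      # unique left identity...
assert right_ids == []        # ...but no right identity at all
```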
{ "language": "en", "url": "https://math.stackexchange.com/questions/2976048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the smallest integer greater than 1 such that $\frac12$ of it is a perfect square and $\frac15$ of it is a perfect fifth power? What is the smallest integer greater than 1 such that $\frac12$ of it is a perfect square and $\frac15$ of it is a perfect fifth power? I have tried multiplying every perfect square (up to 400) by two and checking if it is a perfect 5th power, but still nothing. I don't know what to do at this point.
Hint: Let the required number be $x$, so that $\frac{1}{2}x= A^2$ and $\frac{1}{5}x= B^5$. Then $x$ is divisible by both $2$ and $5$, so write $x=2^a 5^b m$ with $\gcd(m,10)=1$. Since $\frac x2 = 2^{a-1}5^b m$ is a perfect square, $a-1$ and $b$ must be even; since $\frac x5 = 2^a 5^{b-1} m$ is a perfect fifth power, $a$ and $b-1$ must be multiples of $5$ (and $m$ must be both a square and a fifth power, so the smallest choice is $m=1$). Thus $a$ is odd and divisible by $5$: the smallest is $a=5$. And $b$ is even with $b \equiv 1 \pmod 5$: the smallest is $b=6$. Hence the answer is $$x = 2^5 \cdot 5^6 = 500000.$$
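The result can be double-checked by brute force: every candidate has the form $5B^5$, so it suffices to walk through those and test whether half of each is a perfect square. A sketch:

```python
from math import isqrt

def smallest():
    b = 1
    while True:
        n = 5 * b ** 5                     # guarantees n/5 = b^5
        if n % 2 == 0:
            half = n // 2
            if isqrt(half) ** 2 == half:   # n/2 is a perfect square
                return n
        b += 1

assert smallest() == 500000   # hit at b = 10: 500000/2 = 500^2, 500000/5 = 10^5
```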
{ "language": "en", "url": "https://math.stackexchange.com/questions/2976181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 6, "answer_id": 4 }
Group of matrices form a manifold or euclidean space There is a very interesting question How can a group of matrices form a manifold. From the answers it looks more like group of matrices form euclidean space than a general manifold. I understand that euclidean space is a manifold, but manifold is very general and has curvature. My question is what exactly makes a group of matrices a manifold but not simply a euclidean space. * *I am not a mathematician so please correct me if there is anything wrong with the question or the way I posed it.
Take $SO(2,\mathbb{R})$, for instance. This is the group of $2\times2$ orthogonal matrices whose determinant is $1$. But then$$SO(2,\mathbb{R})=\left\{\begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}\,\middle|\,\theta\in\mathbb R\right\}.$$This can be seen as a circle in $\mathbb{R}^2$ (via the first column $(\cos\theta,\sin\theta)$). Therefore, it is naturally a manifold, but in no way a Euclidean space.
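A small numerical illustration (a sketch): sampled elements of $SO(2,\mathbb R)$ are orthogonal with determinant $1$, and their first columns trace out the unit circle — a curved one-dimensional set, not a linear subspace.

```python
import numpy as np

thetas = np.linspace(0, 2 * np.pi, 100)
Rs = [np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]]) for t in thetas]

orthogonal = all(np.allclose(R.T @ R, np.eye(2)) for R in Rs)   # R^T R = I
det_one    = all(np.isclose(np.linalg.det(R), 1.0) for R in Rs)
# first column (cos t, sin t) lies on the unit circle
on_circle  = all(np.isclose(R[0, 0]**2 + R[1, 0]**2, 1.0) for R in Rs)
```

All three checks hold for every sample, matching the description of $SO(2,\mathbb R)$ as a circle.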
{ "language": "en", "url": "https://math.stackexchange.com/questions/2976447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is $\pi/2$ omitted from the solution of $\cot x = 3 \sin 2x$? Why is it that the solution of $$\cot x = 3 \sin 2x \quad(\text{for the interval}\; -\pi < x < \pi)$$ does not include $\pi/2$, even though if this is graphed, it shows intersections at $x = \pm\pi/2$? Please see graph below. (The solutions mentioned are only four, to the exclusion of positive and negative $\pi/2$.) Algebraically as well, one of the factors comes out to be $\cos x = 0$ (which should give $x = \pi/2$). (Hence the graph.)
You are right: the values $x=\pm \frac{\pi}2$ are solutions of the equation $$\cot x=3 \sin 2x \implies \cot \left(\pm\frac{\pi}2\right)=0=3 \sin (\pm\pi),$$ so maybe they were not included because they are considered trivial solutions. Added after editing: the values $x=\pm \frac{\pi}2$ do seem to be included among the solutions after all.
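Indeed, where $\sin x\neq 0$ the equation $\cot x=3\sin 2x$ is equivalent to $\cos x\,(1-6\sin^2x)=0$, which yields six roots in $(-\pi,\pi)$, including $\pm\pi/2$. A quick numerical check (a sketch):

```python
import math

c = math.asin(1 / math.sqrt(6))                # from sin^2 x = 1/6
roots = [-math.pi / 2, math.pi / 2,            # from cos x = 0
         c, -c, math.pi - c, -(math.pi - c)]
residuals = [abs(math.cos(x) / math.sin(x) - 3 * math.sin(2 * x))
             for x in roots]
```

Every candidate satisfies the original equation to machine precision.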
{ "language": "en", "url": "https://math.stackexchange.com/questions/2976738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Inner Space Projection Using Matrices In my math class today, we proved that the ratio of the area of an inner space to that of the inner space projected by some matrix $A$ is equal to $|det(A)|$. In other words, if the area of an inner space is $a$, the area of that inner space projected by a matrix $M$ $= |det(M)|*a$ So, if I am given the equation of a circle $x^2+y^2=1$, and the inner space of that circle is projected by a matrix $M = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, then the area of the ellipse would be $\pi *|det(M)|$, which equals $2\pi$ My question is, is there any way to find said matrix given the two geometric shapes such that the projection of one shape by a matrix results in the other. For example, if I am given a circle and an ellipse, and I know that the ratio of the areas is $R$, I understand that the determinant of the matrix would be $\pm R$, but is there a formula or method to compute the exact matrix? Thanks in Advance! P.S. If any of the terms I have used are incorrect, please let me know. I am new to MSE, as well as linear algebra, and any help is greatly appreciated!
If the transformation is linear then the matrix $M$ is determined if you know the images of two general points (i.e. two points that do not lie on the same line through the origin). Essentially, the entries in each row of $M$ are the solution to a pair of simultaneous linear equations. Of course, in many cases the transformation of one shape into another will not be linear e.g. there is no linear transformation that maps a circle into a square.
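Concretely, with NumPy (a sketch; the points and matrix are made up): if the columns of $P$ are two independent points and the columns of $Q$ are their images, then $M=QP^{-1}$.

```python
import numpy as np

M_true = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
P = np.array([[1.0, 1.0],          # columns: two independent points (1,0), (1,1)
              [0.0, 1.0]])
Q = M_true @ P                     # columns: their images under the map

M_recovered = Q @ np.linalg.inv(P)             # solve M P = Q for M
area_ratio = abs(np.linalg.det(M_recovered))   # = 2, so the ellipse has area 2*pi
```

Recovering $M$ this way reproduces the matrix from the question, and its determinant gives back the area ratio.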
{ "language": "en", "url": "https://math.stackexchange.com/questions/2976910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the value of k if equation $x^3-3x^2+2=k$ has three real roots and if one real root? What is the value of k if equation $x^3-3x^2+2=k$ has three real roots and if one real root? I'd like help understanding this question graphically, actually. My book finds the derivative of f(x)=$x^3-3x^2+2$ and gets 3x(x-2), thus finding the local maximum and minimum values of f(x) as 2 and -2. Then it just says that there are three real roots if k belongs to [-2,2] and one real root if k belongs to $(-\infty,-2)$U$(2,\infty)$. Would someone please help me understand this in a graphical, intuitive manner without using the discriminant?
Consider $g(x)=x^{3}-3x^{2}=x^{2}(x-3)$, so that $f(x)=g(x)+(2-k)$. Sketch the graph of $g(x)$: it has $2$ roots, a double root at $x=0$ and another at $x=3$. It also has a local minimum at $x=2$, with value $g(2)=-4$, and a local maximum of $0$ at $x=0$. Now, $f(x)$ is just a vertical translation of $g(x)$ by $(2-k)$. So for $(2-k)<0$ we have one root, as $g(x)$ is being translated down so its local maximum is below the $x$-axis. For $(2-k)=0$ we have two roots, as $f(x)=g(x)$. For $0<(2-k)<4$ we have three roots, as $g(x)$ has been translated up so that the $x$-axis lies between its local minimum and local maximum. For $(2-k)=4$ we have two roots, as $g(x)$ has been translated up until its local minimum is sitting right on the $x$-axis, and finally for $(2-k)>4$ we have one root, as $g(x)$ has been translated until its local minimum is above the $x$-axis. As we have $3$ roots for $0<(2-k)<4$, we have by solving the inequality that there are three real roots for $-2<k<2$, i.e. for $k\in(-2,2)$. As we have a single real root for $(2-k)<0$ and for $(2-k)>4$, we have upon solving the inequalities that there is one real root for $k>2$ or $k<-2$, i.e. for $k\in (-\infty,-2)\cup(2,\infty)$
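The root counts can be confirmed numerically (a sketch): count the real roots of $x^3-3x^2+(2-k)$ for a few sample values of $k$.

```python
import numpy as np

def num_real_roots(k, tol=1e-9):
    """Count real roots of x^3 - 3x^2 + (2 - k) = 0."""
    roots = np.roots([1.0, -3.0, 0.0, 2.0 - k])
    return int(np.sum(np.abs(roots.imag) < tol))

# k inside (-2, 2) should give 3 real roots; outside, 1 real root
counts = {k: num_real_roots(k) for k in (-3.0, -1.0, 0.0, 1.0, 2.5, 4.0)}
```

The counts agree with the translation argument above.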
{ "language": "en", "url": "https://math.stackexchange.com/questions/2977037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The prime index of subgroup and some investigations I have a task: "Let $p$ be a prime number and let $G$ be a group with a subgroup $H$ of index $p$ in $G$. Let $S$ be a subgroup of $G$ such that $H\subset S\subset G$. Prove that $S = H$ or $S = G$." So $[G : H] = p$ $\Rightarrow$ $|G| = |H|p$. Suppose that $|H| = m$ and $|S| = g$. $\frac{|S|}{|H|} = \frac{g}{m}$. Hence $\frac{mp}{g} = \frac{mp}{\frac{g}{m}}$ $\Rightarrow$ $m = 1$ $\Rightarrow$ $|G| = p$. I think that my mistake is in the step $\frac{mp}{g} = \frac{mp}{\frac{g}{m}}$, but I cannot understand why it must be false.
You don't need all those variables. Moreover, the groups in question need not be finite. Suppose $H$ is a subgroup of $G$ with $[G:H]=p$, for some prime $p$. If $S$ is a subgroup of $G$ such that $H\subseteq S\subseteq G$, then $$p=[G:H]=[G:S]\cdot [S:H]$$ (see https://en.wikipedia.org/wiki/Index_of_a_subgroup#Properties); hence, either $[S:H]=1$, in which case $S=H$, or $[G:S]=1$, in which case $S=G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2977330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to maximize returns in this scenario You have a machine. You can put money into it. You have $s$ initial budget. $p$ percent of the time the machine will double your investment. $(100-p)$ percent of the time it will just swallow your money and not return anything. You can choose a ratio $a$ of your money to reinvest at every turn into the machine. E.g. if $a=1$, then you are reinvesting all your money at every turn (and very quickly will be left with nothing). What is the optimal ratio $a$ with which your money grows the fastest with respect to the number of times you put money in the machine?
Let $p' = p/100$ denote the probability of winning at a turn with the machine. Your initial capital is $W_0 = s$. After the first turn where you bet $aW_0$, your wealth is $$W_1 = W_0 + aW_0X_1 = W_0(1 +aX_1),$$ where $X_1$ is a binary random variable such that $P(X_1 = 1) = p'$ and $P(X_1 = -1) = 1-p'$. Assume that the outcome of a turn depends in no way on the outcomes of previous turns. After $n$ turns, your wealth is $$W_n = W_0(1 +aX_1)(1+aX_2) \cdots(1+aX_n)$$ where $X_1, X_2, \ldots, X_n$ are independent and identically distributed binary random variables. The compounded rate-of-growth is $$R_n(a) = \log \left[\left(\frac{W_n}{W_0} \right)^{1/n}\right] = \frac{1}{n} \log \left(\frac{W_n}{W_0} \right)= \frac{1}{n}\sum_{k=1}^n\log(1+aX_k),$$ with the expected value $$G_n(a) = E[R_n(a)] = \frac{1}{n}\sum_{k=1}^n E[ \log(1+aX_k)].$$ Since the random variables are identically distributed, we have for all $k$, $$E[ \log(1+aX_k)] = E[ \log(1+aX_1)] = p'\log(1+a) + (1-p')\log(1-a), $$ and, hence, the expected rate-of-growth is independent of $n$: $$G_n(a) = \frac{1}{n} \sum_{k = 1}^n [p'\log(1+a) + (1-p')\log(1-a)] = p'\log(1+a) + (1-p')\log(1-a) $$ If the odds are in your favor we have $p' > 1/2$ and setting the derivative of $G_n$ equal to $0$ determines the optimal proportion $a^*$ that maximizes the rate-of-growth, that is $$G_n'(a^*) = \frac{p'}{1+a^*} - \frac{1-p'}{1 - a^*} = 0 \\ \implies a^* = 2p'-1$$ This is commonly known as the Kelly criterion. If the odds are not in your favor we have $p' \leqslant 1/2$ and there is no strategy that produce a positive expected rate-of-growth -- see Gambler's ruin. We can only establish another objective like maximizing the probability of doubling initial capital and quitting. At even odds, $p'= 1/2$, it is best to bet everything on one turn.
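The closed form can be checked numerically (a sketch): evaluate the expected rate-of-growth $G(a)$ on a grid for a sample $p'$ and compare the maximizer with the Kelly fraction $2p'-1$.

```python
import numpy as np

p = 0.6                                  # sample win probability p' > 1/2
a = np.linspace(0.0, 0.99, 10_000)       # candidate reinvestment fractions
G = p * np.log(1 + a) + (1 - p) * np.log(1 - a)

a_star = a[np.argmax(G)]                 # numerical maximizer of the growth rate
kelly = 2 * p - 1                        # closed-form optimum, = 0.2 here
```

The grid maximizer agrees with $a^*=2p'-1$ to within the grid spacing.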
{ "language": "en", "url": "https://math.stackexchange.com/questions/2977474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving Regularly Closed If $U$ is open and $A=\overline{U}$, then $A$ is regularly closed. Note that: $A=\overline{U}=U \cup U' \Rightarrow U \subset A \Rightarrow U \subset Int(A)$, since $U$ is open. A set $A$ is regularly closed iff $A=\overline{Int(A)}.$ $\Rightarrow$ Let $x\in A=\overline{U}$. Let $V$ be an open set containing $x$. Then $V \cap U \neq \emptyset$ and since $U \subset Int(A)$, we have $V \cap Int(A) \neq \emptyset$. Then since $Int(A) \subset \overline{Int(A)}$, $x \in \overline{Int(A)}$. Thus, $\overline{U}=A \subset \overline{Int(A)}$. $\Leftarrow$ Let $x \in \overline{Int(A)}$. Let $W$ be an open set containing $x$. Then $W \cap Int(A) \neq \emptyset$... Am I incorrectly assuming in the first part of the proof that if $x \in V$, then $x \in Int(A)$? Could I have some help on how to correct and finish this proof?
Since $U$ is open, $U = \operatorname{int} U$. So $A = \operatorname{cl} U = \operatorname{cl} \operatorname{int} U$. Thus $\operatorname{cl} \operatorname{int} A = \operatorname{cl} \operatorname{int} \operatorname{cl} \operatorname{int} U = \operatorname{cl} \operatorname{int} U = A$. Exercise: show by set inclusions that $\operatorname{cl} \operatorname{int} \operatorname{cl} \operatorname{int} U = \operatorname{cl} \operatorname{int} U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2977585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\alpha \nmid n \Rightarrow \alpha$ is a generator of group $G$ Let $G=\left \langle x \right \rangle$ be a cyclic group s.t. $ord(x)=n$. Let $a=x^\alpha$ s.t. $\alpha \nmid n$ and $0<\alpha<n$. Prove that $x^\alpha$ is a generator of $G$. My partial answer: Let $A:=\left \langle x^\alpha \right \rangle$. Assume, for contradiction, the opposite. Then $|G|=n$, thus $|A|\leq n$. $|A|\ne n$, because otherwise we would get $\left \langle x^\alpha \right \rangle = G$. Thus $|A|<n$. I would like some help from here, or another direction.
This is not true. For example, suppose $\text{ord}(x)=10$, and take $\alpha=6$. Then $$\langle x^\alpha\rangle=\{e, x^6, x^{12}=x^2, x^8, x^{14}=x^4\}$$ which clearly is not the whole group. However, if $\gcd(\alpha,n)=1$ this is true, since then by Bézout's identity (via the Euclidean algorithm) there exist $k,l \in \mathbb{Z}$ such that $k\alpha+ln=1$, so $$x=x^{k\alpha+ln}=x^{k\alpha}x^{ln}=(x^{\alpha})^k (x^n)^l=(x^\alpha)^k$$ where the last equality follows from Lagrange's theorem (the order of $x$ divides the order of the group). The above equality now shows $x$ is in the subgroup generated by $x^{\alpha}$, so that subgroup contains all of $G$.
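A concrete check in $\mathbb Z/10\mathbb Z$ (written additively, so $x^\alpha$ corresponds to the residue $\alpha$; a sketch): $\alpha=6$ generates only five elements, and the generators are exactly the $\alpha$ coprime to $10$.

```python
from math import gcd

n = 10

def generated(alpha):
    """Subgroup of Z/nZ generated by alpha (additive notation)."""
    return {k * alpha % n for k in range(n)}

size_of_6 = len(generated(6))                      # subgroup {0, 2, 4, 6, 8}
generators = [a for a in range(1, n) if len(generated(a)) == n]
```

This matches the counterexample: $6\nmid 10$ yet $6$ is not a generator, while $1,3,7,9$ (coprime to $10$) are.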
{ "language": "en", "url": "https://math.stackexchange.com/questions/2977729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find $\cos(\alpha+\beta)$ if $\alpha$, $\beta$ are the roots of the equation $a\cos x+b\sin x=c$ If $\alpha$, $\beta$ are the roots of the equation $a\cos x+b\sin x=c$, then prove that $\cos(\alpha+\beta)=\dfrac{a^2-b^2}{a^2+b^2}$ My Attempt $$ b\sin x=c-a\cos x\implies b^2(1-\cos^2x)=c^2+a^2\cos^2x-2ac\cos x\\ (a^2+b^2)\cos^2x-2ac\cos x+(c^2-b^2)=0\\ \implies\cos^2x-\frac{2ac}{a^2+b^2}\cos x+\frac{c^2-b^2}{a^2+b^2}=0 $$ $$ a\cos\alpha+b\sin\alpha=c\implies a\cos^2\alpha\cos\beta+b\sin\alpha\cos\alpha\cos\beta=c\cos\alpha\cos\beta\\ a\cos\beta+b\sin\beta=c\implies a\sin\alpha\sin\beta\cos\beta+b\sin\alpha\sin^2\beta=c\sin\alpha\sin\beta\\ c\cos(\alpha+\beta)=a\cos\beta+a\sin\alpha\cos\beta.(\sin\beta-\sin\alpha)+b\sin\alpha+b\sin\alpha\cos\beta(\cos\alpha-\cos\beta)\\ $$ I think its getting complicated to solve now. What is the simplest way to solve this kind of problems?
Guide: $c= a\cos \alpha +b\sin \alpha = a\cos \beta + b\sin \beta \implies a(\cos \alpha -\cos \beta) = b(\sin \beta-\sin \alpha)\implies \dfrac{a^2}{b^2}=\dfrac{(\sin \alpha - \sin \beta)^2}{(\cos \alpha - \cos \beta)^2}=m$. Now the sum-to-product formulas $\sin\alpha-\sin\beta=2\cos\frac{\alpha+\beta}{2}\sin\frac{\alpha-\beta}{2}$ and $\cos\alpha-\cos\beta=-2\sin\frac{\alpha+\beta}{2}\sin\frac{\alpha-\beta}{2}$ give $m=\cot^2\frac{\alpha+\beta}{2}$, and therefore $$\dfrac{a^2-b^2}{a^2+b^2}=\dfrac{m-1}{m+1}=\dfrac{\cos^2\frac{\alpha+\beta}{2}-\sin^2\frac{\alpha+\beta}{2}}{\cos^2\frac{\alpha+\beta}{2}+\sin^2\frac{\alpha+\beta}{2}}=\cos(\alpha+\beta).$$
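A numerical sanity check (a sketch, with made-up values $a,b,c$): writing $a\cos x+b\sin x=R\cos(x-\varphi)$ with $R=\sqrt{a^2+b^2}$ and $\varphi=\operatorname{atan2}(b,a)$ gives the two roots explicitly.

```python
import math

a, b, c = 3.0, 4.0, 2.0
R, phi = math.hypot(a, b), math.atan2(b, a)
t = math.acos(c / R)
alpha, beta = phi + t, phi - t                   # the two roots of a cos x + b sin x = c

res = [abs(a * math.cos(x) + b * math.sin(x) - c) for x in (alpha, beta)]
lhs = math.cos(alpha + beta)
rhs = (a * a - b * b) / (a * a + b * b)          # = -0.28 here
```

Both candidates satisfy the equation and $\cos(\alpha+\beta)$ matches $\frac{a^2-b^2}{a^2+b^2}$.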
{ "language": "en", "url": "https://math.stackexchange.com/questions/2977871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
An exercise about bifunctors from Riehl's "Category Theory in Context" This is an exercise from E.Riehl's book "Category Theory in Context" (p.48, ex.1.7.vii) Prove that a bifunctor $F\colon\mathsf{C}\times\mathsf{D}\to\mathsf{E}$ determines and is uniquely determined by: * *A functor $F(c,-)\colon\mathsf{D}\to\mathsf{E}$ for each $c \in \mathsf{C}$, *A natural transformation $F(f,-)\colon F(c,-) \Rightarrow F(c',-)$ for each $f\colon c\to c'$, defined functorially in $\mathsf{C}$. I'm having trouble figuring out two things regarding this exercise: * *What is the functor $F(c,-)$ for a fixed $c \in \mathsf{C}$? Clearly, it maps each $d \in \mathsf{D}$ to $F(c,d)$, but to what does it map a morphism $g\colon d\to d'$ in $\mathsf{D}$? To $F(1_{X},g)?$ *Given an aforementioned natural transformation $F(f,-)\colon F(c,-)\Rightarrow F(c',-)$, what are its components? That is, given $d \in \mathsf{D}$, what is a morphism $F(f,-)_d\colon F(c,d)\to F(c',d)$?
Yes, to your first question, or more precisely to $F(1_c,g)$ where $1_c$ is the identity morphism on $c$. The morphisms you seek in your second are the $F(f,1_d)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Properties of conditional variance Full statement of problem: Let $(\Omega,\mathcal{F},P)$ be a probability space and $\mathcal{G}\subset \mathcal{F}$ a $\sigma$-algebra. Let $X\in L^2$. Define $$\text{var}(X \mid \mathcal{G})=E[|X-E[X \mid \mathcal{G}]|^2\mid \mathcal{G}].$$ Prove that: (a) $\text{var}(X \mid \mathcal{G})=E[X^2 \mid \mathcal{G}]-(E[X \mid \mathcal{G}])^2$ a.s. (b)$\text{var}(X)=E[\text{var}(X \mid \mathcal{G})]+\text{var}(E[X \mid \mathcal{G}])$ Part (a) just seems like a simple computation using definitions of conditional expectation, but how do I prove (b)? Any help would be much appreciated, thanks in advance!
Hints for part (b): * *Show that $$\mathbb{E} \big[ (X-\mathbb{E}(X \mid \mathcal{G})) \mathbb{E}(X \mid \mathcal{G}) \big]=0 \tag{1}$$ and $$\mathbb{E}(X- \mathbb{E}(X \mid \mathcal{G}))=0. \tag{2}$$ *Conclude from $$X-\mathbb{E}(X) = (X-\mathbb{E}(X \mid \mathcal{G})) + (\mathbb{E}(X \mid \mathcal{G})-\mathbb{E}(X))$$ that $$\begin{align*} \text{var}(X) =& \mathbb{E} \big[ (X-\mathbb{E}(X \mid \mathcal{G}))^2 \big] + \mathbb{E} \big[ (\mathbb{E}(X \mid \mathcal{G})-\mathbb{E}(X))^2 \big] \\ &\quad +2 \mathbb{E} \big[ (X-\mathbb{E}(X \mid \mathcal{G}))(\mathbb{E}(X \mid \mathcal{G})-\mathbb{E}(X)) \big] \tag{3} \end{align*}$$ *Use Step 1 to show that the third term on the right-hand side of $(3)$ equals zero. Identify the other two terms.
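The identity in part (b) — the law of total variance — can be verified exactly on a small discrete probability space (a sketch; the distribution below is made up for illustration, with $\mathcal G$ generated by a two-block partition):

```python
import numpy as np

probs  = np.array([0.1, 0.2, 0.3, 0.15, 0.15, 0.1])
x_vals = np.array([1.0, 4.0, 2.0, 8.0, 5.0, 7.0])
groups = np.array([0, 0, 0, 1, 1, 1])        # the partition generating G

mean = np.sum(probs * x_vals)
var_x = np.sum(probs * (x_vals - mean) ** 2)

total = 0.0                                   # will be E[var(X|G)] + var(E[X|G])
cond_means, group_probs = [], []
for g in (0, 1):
    sel = groups == g
    pg = probs[sel].sum()
    mg = np.sum(probs[sel] * x_vals[sel]) / pg            # E[X | block g]
    vg = np.sum(probs[sel] * (x_vals[sel] - mg) ** 2) / pg  # var(X | block g)
    total += pg * vg
    cond_means.append(mg); group_probs.append(pg)

cond_means, group_probs = np.array(cond_means), np.array(group_probs)
ev_mean = np.sum(group_probs * cond_means)    # equals E[X] by the tower rule
total += np.sum(group_probs * (cond_means - ev_mean) ** 2)
```

The two sides agree to machine precision, as the identity predicts.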
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
solution of a system of equations in algebraic closure of GF2 I have a set of equations and I want to know whether there exist solutions of these equations in an extension of Galois field $\mathbb{GF}_2$ and what are they? Is there a procedure to check this? For example, the following set of equations in variables ${a,b,c,d,e,f}$ $1 + a + c + e = 0, b + d + f = 0, 1 + a e + c e + a c = 0, b e + a f + c f + e d + b c + a d = 0, b f + f d + b d = 0$ have the solution $a=1,b=1,c=1,d=\omega,e=1,f=\omega^2$ which lie in an extension of $\mathbb{GF}_2$, $\mathbb{GF}_4= \frac{\mathbb{GF}_2[u]}{u^2+u+1}$ where $\mathbb{GF}_2[u]$ is a polynomial ring with variable $u$ and one representation of $\mathbb{GF}_4$ is ${0,1,\omega,\omega^2}$.
By Hilbert's Nullstellensatz the set of common zeros of those polynomials in the algebraic closure is empty if and only if the constant polynomial $1$ is in the ideal $I$ they generate in the ring of polynomials $R$. Actually this is exactly what the weak Nullstellensatz states, and that suffices here. In your example case $R=GF(2)[a,b,c,d,e,f]$ is the ring of polynomials in the listed six variables, and $$I=\langle 1 + a + c + e , b + d + f, 1 + a e + c e + a c, b e + a f + c f + e d + b c + a d, b f + f d + b d\rangle.$$ The question whether $1\in I$ or not can be decided algorithmically by first generating a Gröbner basis of the ideal, and then checking whether there are non-zero constants in the Gröbner basis. Several computer algebra packages include an implementation of Buchberger's algorithm (or something more efficient) that produces a Gröbner basis of $I$ given any set of generators.
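This does not replace the Gröbner-basis test, but for this system one can simply brute-force all $4^6=4096$ assignments over $\mathbb{GF}_4$ (a sketch; the field is encoded as $\{0,1,u,u+1\}$ with $u^2=u+1$): finding a solution there already shows the zero set in the algebraic closure is nonempty.

```python
from itertools import product

# GF(4) = GF(2)[u]/(u^2+u+1), elements encoded as 0..3
# (bit 0 = constant term, bit 1 = coefficient of u); addition is XOR
def mul(x, y):
    a1, a0 = x >> 1, x & 1
    b1, b0 = y >> 1, y & 1
    hi = (a1 & b1) ^ (a1 & b0) ^ (a0 & b1)   # coefficient of u (using u^2 = u + 1)
    lo = (a1 & b1) ^ (a0 & b0)               # constant term
    return (hi << 1) | lo

solutions = []
for a, b, c, d, e, f in product(range(4), repeat=6):
    eqs = (
        1 ^ a ^ c ^ e,
        b ^ d ^ f,
        1 ^ mul(a, e) ^ mul(c, e) ^ mul(a, c),
        mul(b, e) ^ mul(a, f) ^ mul(c, f) ^ mul(e, d) ^ mul(b, c) ^ mul(a, d),
        mul(b, f) ^ mul(f, d) ^ mul(b, d),
    )
    if not any(eqs):
        solutions.append((a, b, c, d, e, f))
```

With $\omega$ encoded as $2$ and $\omega^2$ as $3$, the solution $a=1,b=1,c=1,d=\omega,e=1,f=\omega^2$ from the question shows up among the hits.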
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conditional Expectation on the first outcome Assume Independent trials, resulting in one of the outcomes 1, 2, 3, 4, 5 with respective probabilities $p_i$ for $i=1,2,3,4,5$ and $\sum_i p_i = 1$ Let $Z$ be the number of trials needed until the initial outcome has occurred exactly $5$ times. example: if we get $1,3,3,4,1,1,1,2,1$ then $Z=9$ 1. We want $E[Z]$ 2. Find the expected number of trials needed until both outcome $1$ and outcome $2$ have occurred? For question 1, I condition on the first outcome $O_i = 1,...,5$: $$E[Z] = E\bigg[E[Z|O_i] \bigg] = \sum_i p_i E[Z|O_i]$$ I am thinking $E[Z|O_i]=1+ $ something. I get stuck here. Any insight?
Indeed, it is $1+{}$something. You are using the linearity of expectation. $\mathsf E[Z\mid O_i]$ is $1$ plus the expected number of further trials until that first outcome ($i$) has its fourth subsequent occurrence. What type of distribution is the count of Bernoulli trials until the next success? What type of distribution is the count of Bernoulli trials until the fourth success? What is the expectation of this something?
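Following the hints: given $O_i$, the count of further trials to the fourth additional success of outcome $i$ is negative binomial, so $E[Z\mid O_i]=1+4/p_i$ and $E[Z]=\sum_i p_i(1+4/p_i)=1+20=21$, whatever the $p_i$ are. A quick simulation (a sketch, with made-up probabilities):

```python
import random

random.seed(0)
p = [0.1, 0.2, 0.3, 0.25, 0.15]
cw = [0.1, 0.3, 0.6, 0.85, 1.0]        # cumulative weights for faster sampling
outcomes = list(range(5))

def one_run():
    first = random.choices(outcomes, cum_weights=cw)[0]
    count, trials = 1, 1
    while count < 5:                   # stop when the initial outcome hits 5 times
        trials += 1
        if random.choices(outcomes, cum_weights=cw)[0] == first:
            count += 1
    return trials

runs = 50_000
mean_Z = sum(one_run() for _ in range(runs)) / runs
```

The simulated mean sits close to the distribution-free value $21$.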
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix with no negative elements = Positive Semi Definite? A matrix $A$ is positive semi-definite IFF $x^TAx\geq 0$ for all non-zero $x\in\mathbb{R}^d$. If all elements of $A$ are non-negative, does this guarantee that $A$ is positive semi-definite?
In general, no. For symmetric matrices, positive definiteness can be characterized through the leading principal minors of the matrix (Sylvester's criterion). The $k^{th}$ leading principal minor is found by computing the determinant of the matrix after deleting the last $n-k$ columns and rows of an $n \times n$ matrix. It is quite common to see a matrix with all positive entries that has a negative determinant, and a symmetric matrix with a negative determinant cannot be positive semi-definite. For example, look at the symmetric matrix $$ A = \left( \begin{array}{cc} 1 & 2 \\ 2 & 1 \end{array}\right),$$ whose entries are all positive. For $A$ to be positive semi-definite every principal minor must be nonnegative, but $\det(A)=-3$. In fact this particular matrix is indefinite: $x^TAx=-2<0$ for $x=(1,-1)^T$.
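A direct numerical counterexample (a sketch): a symmetric matrix with all positive entries whose quadratic form takes a negative value.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])           # every entry is positive
x = np.array([1.0, -1.0])

q = x @ A @ x                        # x^T A x = 1 - 2 - 2 + 1 = -2
eigs = np.linalg.eigvalsh(A)         # eigenvalues 3 and -1: indefinite
```

One negative eigenvalue is enough to violate $x^TAx\geq 0$.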
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Let $I$ be an ideal in a Noetherian ring. Show that either $I$ contains an $R$-regular element or else $aI=0$ for some $0\neq a\in R$. Let $I$ be an ideal in a Noetherian ring. Show that either $I$ contains an $R$-regular element or else $aI=0$ for some $0\neq a\in R$. How would I prove this? Also what does $aI=0$ mean?
($aI=0$ means that $ax=0$ for every $x\in I$, i.e. the nonzero element $a$ annihilates the ideal $I$.) Yes, this is an important property of Noetherian rings. It is Theorem 82 in Kaplansky's Commutative Rings, which he prefaces as "a result that is among the most useful in the theory of commutative rings." In the literature you often encounter this property as Property (A) A ring is said to have Property (A) if every finitely generated dense ideal contains a regular element. (Or equivalently, if every f.g. ideal consisting entirely of zero divisors has a nonzero annihilator.) Kaplansky shows that Noetherian rings have Property (A). It also comes up in the study of the integral closure of $R[x]$ and the total ring of fractions of a reduced ring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Combinatorics: How many persons like apples and pears and strawberries? Out of $32$ persons, every person likes to eat at least one of the following types of fruit: Strawberries, Apples and Pears. (Which means that there is no person who does not like any type of fruit.) Furthermore, we know that $20$ persons like to eat apples, $18$ persons like to eat pears and $28$ persons like to eat strawberries. (a) There are $10$ persons who like apples and pears, $16$ persons who like apples and strawberries, and $12$ persons who like pears and strawberries. How can I find out how many people like apples as well as pears as well as strawberries? To structure this a bit: * *$32$ persons *$20 \rightarrow$ apples ($\rightarrow$ meaning "like") *$18 \rightarrow$ pears *$28 \rightarrow $ strawberries And for (a) * *$10$ persons $\rightarrow$ (apples & pears) *$16$ persons $\rightarrow$ (apples & strawberries) *$12$ persons $\rightarrow$ (pears & strawberries) Since we know that the total number of persons is $32$, can I just do the following? Because $20$ persons like apples I can just add the following numbers together: $10 \rightarrow$ ($10$ apples & $0$ pears) + $16 \rightarrow$ ($6$ apples & $10$ strawberries) $+ 12 \rightarrow$ ($12$ pears & $0$ strawberries). So in total I'd get $10 + 16 + 12 = 28$ people who like apples, pears and strawberries? Is that correct? (b) Assume that you don't have the information in (a). Give the best possible bounds for the number of persons who like all three kinds of fruit. Since $18$ persons like pears, can I just say that $18$ persons like to eat pears, apples and strawberries? (As $18$ is the smallest of the three totals.)
Part (a) is an ill-posed question (almost in the spirit of this other question). By inclusion/exclusion, using the given values the number of people liking all three fruits is $$32-(20+18+28-10-16-12)=32-28=4$$ Then the number of people liking apples and some other fruit(s) is $16+10-4=22$, which is greater than the 20 people who like apples. But never mind. For (b), since 18 people like pears, the number of people liking all three fruits is at most 18, but we can't have exactly 18: the remaining $20-18=2$ apple-likers and $28-18=10$ strawberry-likers account for at most $12$ further people, so at most $30$ of the $32$ people would like any fruit at all, contradicting the assumption that everyone likes at least one. We can have 17 people liking all three fruits, though, in which case * *3, 1, 11 like only apples, pears, strawberries respectively *nobody likes exactly two fruits For the other extreme, consider the least amount of people who must like at least two fixed fruits: * *20 like apples and 18 like pears, so since there are 32 people there must be at least $20+18-32=6$ people liking both apples and pears *Similarly, at least $20+28-32=16$ like apples and strawberries; at least 14, pears and strawberries Now place these "forced" people in such a way that nobody likes all three fruits. We find that there are two more people than stipulated who like apples, so at least two people like all three fruits, and we get the same result for the other fruits. Thus we must merge six people into two in the centre where people like all three fruits; fortuitously the total number of people becomes exactly 32. Thus the minimum number of people who like all three fruits is 2, with * *14 liking only apples/strawberries *12 liking only pears/strawberries *4 liking only apples/pears *nobody likes exactly one fruit. Both the configurations for 17 and 2 people liking all three are physically valid.
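The bounds in part (b) can be confirmed by brute force (a sketch): enumerate the seven Venn-diagram region sizes consistent with the totals $32$, $20$, $18$, $28$ and record which triple-overlap values are feasible.

```python
feasible = set()
for t in range(19):                      # people liking all three fruits (t <= 18)
    for ab in range(19):                 # apples & pears only
        for ac in range(21):             # apples & strawberries only
            for bc in range(19):         # pears & strawberries only
                sa = 20 - ab - ac - t    # apples only
                sb = 18 - ab - bc - t    # pears only
                sc = 28 - ac - bc - t    # strawberries only
                if (min(sa, sb, sc) >= 0
                        and sa + sb + sc + ab + ac + bc + t == 32):
                    feasible.add(t)
```

The enumeration reproduces the extremes argued above: the triple overlap ranges from $2$ to $17$.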
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How to determine the gcd of a set I'm stuck at a question. The question states that $K$ is a field like $\mathbb Q, \mathbb R, \mathbb C$ or $\mathbb Z/p\mathbb Z$ with $p$ a prime. $R$ is used to denote the ring $K[X]$. A subset $I$ of $R$ is called an ideal if: • $0 \in I$; • $a,b \in I \Rightarrow a-b \in I$; • $a \in I$, $r \in R \Rightarrow ra \in I$. Suppose $a_1,\ldots,a_n \in R$. The ideal $\langle a_1,\ldots,a_n\rangle$ generated by $a_1,\ldots,a_n$ is defined as the intersection of all ideals which contain $a_1,\ldots,a_n$. Prove that $\langle a_1,\ldots,a_n\rangle = \{r_1a_1 +\cdots+ r_na_n \mid r_1,\ldots,r_n \in R\}$. I proved this, but I got stuck on the one below: Prove that $\langle a_1,\ldots,a_n\rangle = \langle\gcd(a_1,\ldots,a_n)\rangle$. Because I know how to calculate the gcd, but how do I use it in this context? Because it now has more than two elements, so I don't know how to work with this
The proof is essentially the same as in $\Bbb Z$ via Euclidean descent, i.e. by division with remainder. It suffices to show $I = (a_1,\ldots,a_n) = (d)$ is principal, since then $\,a_i \in (d)$ implies $\,d\,$ is a common divisor of the $a_i,\,$ necessarily greatest since $\,d\in I\,\Rightarrow\, d = r_1 a_1 +\cdots + r_n a_n,\,$ hence $c\mid a_i\,\Rightarrow\,c\mid d\,\Rightarrow\, \deg c\le \deg d$. Principality follows by Euclidean descent: $ $ if $d$ is a least degree element of $I$ then $d$ divides every $\,a\in I,\,$ else $\, 0\neq r = a\bmod d = a-qd\in I$ has $\,\deg r < \deg d,\,$ contra minimality of $d$. Therefore $\,(d)\supseteq (a_1,\ldots,a_n)\supseteq (d)$. Remark $ $ The same proof works for any domain enjoying division with ("smaller") remainder. Such domains are called Euclidean, and the above proof shows they are PIDs - principal ideal domains, with ideals generated by any element of least Euclidean value, and this generator is a gcd of all elements in the ideal.
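The Euclidean descent is constructive. Here is a sketch over $K=\mathbb Q$, with polynomials as coefficient lists (lowest degree first) and exact arithmetic via `Fraction`:

```python
from fractions import Fraction

def degree(p):
    return len(p) - 1

def poly_mod(a, b):
    """Remainder of a divided by b over a field (b nonzero)."""
    a = a[:]
    while degree(a) >= degree(b) and any(a):
        shift = degree(a) - degree(b)
        factor = a[-1] / b[-1]
        for i, coef in enumerate(b):
            a[i + shift] -= factor * coef      # cancel the leading term
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def poly_gcd(a, b):
    while any(b):
        a, b = b, poly_mod(a, b)
    lead = a[-1]
    return [c / lead for c in a]               # normalize to the monic generator

# gcd(x^2 - 1, x^3 - 1) = x - 1
f = [Fraction(-1), Fraction(0), Fraction(1)]
g = [Fraction(-1), Fraction(0), Fraction(0), Fraction(1)]
d = poly_gcd(f, g)
```

The returned monic polynomial generates the same ideal as the two inputs.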
{ "language": "en", "url": "https://math.stackexchange.com/questions/2978988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Let $R$ be a Noetherian ring and $M$ a finite $R$-module. Show that $\ell(M)<\infty$ if and only if $\text{Supp}(M)\subset m\text{-Spec}(R)$ Let $R$ be a Noetherian ring and $M$ a finite $R$--module. Show that $\ell(M)<\infty$ if and only if $\operatorname{Supp}(M)\subset m\operatorname{-Spec}(R)$. What does $\ell(M)<\infty$ mean? Is it that $M$ is finitely generated? If so, how is it that I am supposed to use that to show $\operatorname{Supp}(M)\subset m\operatorname{-Spec}(R)$?
$\ell(M)<\infty$ means that $M$ has a composition series, i.e. that $M$ has finite length. We always assume $R$ is Noetherian in the following. Definition 1: Let $N$ be an arbitrary module. A prime ideal $P$ is called an associated prime ideal of $N$ if $P=\operatorname{ann}(x)$ for some $x\in N$, and we denote by $\operatorname{Ass}(N)$ the set of all associated prime ideals. It is clear that $P\in \operatorname{Ass}(N)$ iff $R/P$ is a submodule of $N$. Lemma 1: For any module $N$, if $N$ is not the zero module, then $\operatorname{Ass}(N)$ is not empty. Proof: Suppose $\operatorname{Ass}(N)$ is empty. Then for any $x\neq 0\in N$, $\operatorname{ann}(x)$ is not prime, hence you can find $ab\in \operatorname{ann}(x)$ with $a,b\notin \operatorname{ann}(x)$, so $\operatorname{ann}(x)\varsubsetneqq \operatorname{ann}(ax)$. Iterating this, you get a strictly ascending chain of annihilator ideals, contradicting the ascending chain condition in the Noetherian ring $R$. Lemma 2: If $N$ is a finitely generated module, then there is a filtration $0=N_0\varsubsetneqq N_1 \varsubsetneqq N_2 \varsubsetneqq \cdots \varsubsetneqq N_k=N$ such that $N_i/N_{i-1}\cong R/p_i$, where the $p_i$ are prime ideals. The proof relies on Lemma 1. Lemma 3: If $N$ is a finitely generated module, then $N_p\neq 0$ iff $\operatorname{ann}(N)\subseteq p$. Proof: It is trivial that if $N_p\neq 0$, then $\operatorname{ann}(N)\subseteq p$. Now suppose $\operatorname{ann}(N)\subseteq p$ and write $N=\langle n_1,\ldots,n_k\rangle$. If $N_p=0$, you can find $r_i\notin p$ such that $r_in_i=0$. Let $r=r_1r_2\cdots r_k$; then $r\notin p$ and $rN=0$, a contradiction. By Lemma 3 we know $\operatorname{Supp}(R/p)=V(p)$. Lemma 4: If $0\rightarrow N_1\rightarrow N_2\rightarrow N_3\rightarrow 0$ is a short exact sequence, then $\operatorname{Supp}(N_2)= \operatorname{Supp}(N_1)\cup \operatorname{Supp}(N_3)$. By Lemmas 2, 3 and 4, you can obtain the proof of your question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2979102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of : $\cap_{f\in E'}\operatorname{Ker}(f)= \left\{0 \right\}$ Let $(E,N)$ be a normed vector space. The dimension of $E$ could be infinite. Let $E'= \left\{f\colon E \rightarrow \mathbb{K} \mid f \text{ linear and continuous} \right\}$. Do we have $\bigcap_{f\in E'}\operatorname{Ker}(f)= \left\{0 \right\}$?
If $\mathbb K = \mathbb R$ or $\mathbb C$, then this is true by the Hahn–Banach theorem. I am not certain, but there are more general versions of Hahn–Banach for other fields, such as $p$-adic fields.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2979257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Group structure of rotating-square puzzle Suppose we arrange the numbers $1$ through $6$ at the "vertices" of the shape formed by aligning the sides of two squares, as shown below: In this "puzzle," the only moves allowed are rotating the vertices of either square counterclockwise. I would like to find the group $G$ that represents this puzzle, but I can't figure out how to account for the interaction between the two squares. All I know right now is that $G\subset S_6$, and that $G$ is generated by the permutations $(1254)$ and $(2365)$. However, I can't figure out how to express $G$ using well-known groups like $S_n$, $A_n$, $D_n$, and $\mathbb Z_n$, the direct product $\times$, and the semidirect product $\rtimes$ (with no corresponding homomorphism specified). Can someone please show me how to find the group corresponding to this game? NOTE: To someone who is experienced with group theory, this is probably an easy exercise; however, to a novice like myself, this is quite confusing
GAP shows that the group is in fact isomorphic to $S_5$. A geometric interpretation was requested for what $5$ things are being permuted. Consider the following $5$ sets of edges between the vertices. $(1,2,5,4)\leftrightarrow(orange,blue,purple,green)$ $(2,3,6,5)\leftrightarrow(red,blue,purple,green)$ To see that this is all of $S_5$ and not some subgroup, compute some products of elements. $(orange,blue,purple,green)*(red,blue,purple,green)=(red,purple,orange,blue,green)$ which has order $5$, so the order of the group is a multiple of $5$. $(orange,blue,purple,green)*(red,blue,purple,green)^{-1}=(red,orange,blue)$ which has order $3$, so the order of the group is a multiple of $3$. And $(orange,blue,purple,green)$ has order $4$ so the order of the group is a multiple of $4$. Now the order of the group must be a multiple of $3*4*5=60$, so either $S_5$ or $A_5$. But we have elements of order $4$ in our group, which leaves only $S_5$.
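The GAP computation can be reproduced in plain Python (a sketch): starting from the identity, close the set under right multiplication by the two generators and count the elements.

```python
identity = (1, 2, 3, 4, 5, 6)        # t[i] is the image of i+1
g1 = (2, 5, 3, 1, 4, 6)              # the 4-cycle (1 2 5 4)
g2 = (1, 3, 6, 4, 2, 5)              # the 4-cycle (2 3 6 5)

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i] - 1] for i in range(6))

group = {identity}
changed = True
while changed:                        # close under right multiplication by generators
    changed = False
    for p in list(group):
        for g in (g1, g2):
            r = compose(p, g)
            if r not in group:
                group.add(r)
                changed = True

order = len(group)                    # 120 = |S_5|, not 720 = |S_6|
```

Since the generators have finite order, closing under multiplication by them yields the whole generated subgroup; its size $120$ matches $S_5$.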
{ "language": "en", "url": "https://math.stackexchange.com/questions/2979561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Ring of order $p^2$ must be commutative. See this relevant post Ring of order $p^2$ is commutative. Either I don't understand the above or it isn't quite complete (presumably the former). Let $R$ be a ring of order $p^2$. Then $char(R)=p$ and $Z(x)=p,p^2$ for any $x \in R$. In the latter case there is nothing to show. Hence, let $Z(x)=p$. Then $Z(x) \cong \mathbb{Z}/p\mathbb{Z}$ (as a group not necessarily a ring) and supposedly this is enough to say that $R$ is commutative but I do not readily see this. Especially since ring homomorphisms must send $1 \to 1$. A detailed explanation would be appreciated.
We recall that $R$ has an additive subgroup of order $\text{char}(R)$; thus, since $\vert R \vert = p^2$, either $\text{char}(R) = p$ or $\text{char}(R) = p^2$. If $\text{char}(R) = \vert R \vert = p^2, \tag 1$ we are done, since every element of $R$ is a sum of $1$s; so suppose $\text{char}(R) = p; \tag 2$ then $\Bbb Z_p \simeq \Bbb F_p \subsetneq R, \tag 3$ that is, $\Bbb F_p$ is a sub-field of $R$, isomorphic to $\Bbb Z_p \simeq \Bbb Z/p\Bbb Z$; thus, $R$ is an $\Bbb F_p$-vector space; now let $R \ni x \notin \Bbb F_p; \tag 4$ such an $x$ exists since $\vert R \vert = p^2 > p = \vert \Bbb F_p \vert$, so $R \setminus \Bbb F_p \ne \emptyset$. I claim $1_R$ and $x$ are linearly independent over $\Bbb F_p$; if not, then there are $a, b \in \Bbb F_p$, not both $0$, with $a + bx = a1_R + bx = 0, \tag 5$ or $x = -b^{-1}a \in \Bbb F_p, \tag 6$ contrary to hypothesis; we conclude that $1_R$ and $x$ are in fact linearly independent over $\Bbb F_p$; therefore $1_R$ and $x$ span a $2$-dimensional subspace of $R$; but a $2$-dimensional vector space over $\Bbb F_p$ has precisely $p^2$ elements. Therefore $1_R$ and $x$ in fact span $R$; thus given $y, z \in R$ we may write $y = a + bx, \; z = c + dx, \; a, b, c, d \in \Bbb F_p; \tag 7$ it then follows that $yz = (a + bx)(c + dx) = ac + (ad + bc)x + bdx^2$ $= ca + (da + cb)x + dbx^2 = (c + dx)(a + bx) = zy, \tag 8$ and we see that $R$ is commutative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2979716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Combinatoric meaning of $a_n=5a_{n-1} - 6a_{n-2}$ I've solved the following recurrence relation: $a_n=5a_{n-1} - 6a_{n-2}$ using generating functions, to be: $a_n=3^n-2^n$. It is possible to give a meaning to $3^n-2^n$, and that is: Consider the following set: $S=\{a,b,c\}$. $3^n$ is the number of sequences of length $n$ with repetition using all letters from the set $S$. $2^n$ is the number of sequences of length $n$ with repetition using two of the three letters of the set $S$. So we can say that $3^n - 2^n$ counts all sequences of length $n$ from the set $S$ with at least one $a$. My question now is, how to give a combinatorial interpretation, that agrees with the one I gave, to the recurrence relation: $$ a_n=5a_{n-1} - 6a_{n-2} $$ ?
Let $a_n$ be the number of sequences of length $n$ from the set $S$ with at least one $a$.

Let $b_n$ be the number of sequences of length $n$ from the set $S$ with no $a$.

Then using the combinatorial analogy we can easily say

1. $a_n=3 a_{n-1} + b_{n-1}$ since we can append any of the three to a satisfactory sequence but only $a$ to an unsatisfactory one
2. $a_{n-1}=3 a_{n-2} + b_{n-2}$ in exactly the same way
3. $b_{n-1}=2 b_{n-2}$ since we only have two choices for extending an unsatisfactory sequence

and we can combine these to eliminate $b_{n-1}$ and $b_{n-2}$ with

4. $b_{n-2}=a_{n-1}-3 a_{n-2}$ by reordering (2)
5. $b_{n-1}=2a_{n-1}-6 a_{n-2}$ by substituting into (3)
6. $a_n=5 a_{n-1} -6 a_{n-2}$ by substituting into (1)
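As a sanity check (a brute-force sketch, not part of the original argument), the direct count, the closed form, and the recurrence can all be compared for small $n$:

```python
from itertools import product

def count_with_a(n):
    # sequences of length n over {a, b, c} containing at least one 'a'
    return sum(1 for s in product("abc", repeat=n) if "a" in s)

for n in range(2, 10):
    assert count_with_a(n) == 3**n - 2**n                               # closed form
    assert count_with_a(n) == 5 * count_with_a(n - 1) - 6 * count_with_a(n - 2)
```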
{ "language": "en", "url": "https://math.stackexchange.com/questions/2979816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A continuous Function from $R$ to a Banach Space is Borel-Measurable I think the notation of my book is a bit odd, so I'm having trouble finding any other sources to help me with this proof. The thing I'm trying to prove is that a continuous function, $f$, from $R$ to a Banach Space, $B$, is Borel-Measurable. The definition that my book uses is that a function $f$ is Borel-Measurable if there exists a sequence of functions that converges pointwise to $f$, such that every entry of the sequence is a finite sum of scaled indicator functions of sets in the $\sigma$-ring formed by the Borel subsets of $R$. My thoughts so far are that any point $b \in B$ is a closed set, and so the pre-image of $b$ under $f$ should also be a closed set, and thus $f^{-1}[b]$ is in my $\sigma$-field. As such, I could use the indicator function for this set and express $f$ as $f = \sum_{b \in B}b\cdot E_{f^{-1}[b]}$ where $E_{f^{-1}[b]}$ is the indicator function for the set $f^{-1}[b]$. However, this sum may not be finite. My next thought was to instead of summing over $b \in B$, sum over balls of radius $\frac{1}{n}$ and then this would converge to my function in the pointwise limit. However, this would assume that my Banach Space is totally bounded. It seems no matter what I think of I'm missing something. Can anyone offer any advice or suggestions? Thank you!
Fix $n$. Then $f([-n,n])$ is a compact set, hence it can be covered by a finite number of open balls of radius $\frac 1 n$, say $A_{n,1},A_{n,2},\cdots, A_{n,k_n}$. Let $B_{n,1}=A_{n,1},B_{n,2}=A_{n,2}\setminus A_{n,1}$, $\cdots$, $B_{n,k_n}=A_{n,k_n}\setminus \cup_{j=1}^{k_n-1} A_{n,j}$. Then the sets $f^{-1}(B_{n,j})$ are Borel sets in $\mathbb R$ (because each $B_{n,j}$ is a difference of open sets). For $|t| \leq n$ define $f_n(t)=y_{n,j}$ if $f(t) \in B_{n,j}$, where $y_{n,j}$ is the center of the ball $A_{n,j}$; this gives $\|f_n(t)-f(t)\| < \frac 1 n$ on $[-n,n]$. Take $f_n(t)$ to be $0$ if $|t| >n$. I leave it to you to verify that this sequence has the required properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2979945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to isolate y from $a=\sqrt{(y^2+(a+x)^2)^3}$ Just that! I'm asking for a method to isolate $y$ from such expressions: $$a=\sqrt{(y^2+(a+x)^2)^3}$$ or even easier: $$a=\sqrt{(y^2+x^2)^3}.$$ EDIT: I'm just trying to revisit some of the basics in algebraic equation manipulation. Even though it's sooo easy to find the school examples with squares or cubes, dealing with radicals can twist into strange paths. The above expression ($a=\sqrt{(y^2+x^2)^3}$) can turn into a monster just as you add another simple term: $$a=\sqrt{(y^2+x^2)^3} + \sqrt{(y^2-x^2)^3}$$ I can not see now how squaring the two sides could help in isolating $y$...
It might help to see how $\sqrt{(y^2+(a+x)^2)^3}$ is built starting from y $\begin{align} & y \\ & {{()}^{2}}\to {{y}^{2}} \\ & +{{(a+x)}^{2}}\to {{y}^{2}}+{{(a+x)}^{2}} \\ & {{()}^{3}}\to {{({{y}^{2}}+{{(a+x)}^{2}})}^{3}} \\ & \sqrt{{}}\to \sqrt{{{({{y}^{2}}+{{(a+x)}^{2}})}^{3}}} \\ & =a \\ \end{align}$ So starting from y you do the sequence of operations $$()^2 \to +{{(a+x)}^{2}}\to ()^3 \to \sqrt{{}}$$ to produce a. You can recover y by starting from a and undoing those operations in reverse order $$()^2 \to ()^{\frac{1}{3}} \to -{{(a+x)}^{2}} \to \sqrt{{}}$$ to give $\begin{align} & a \\ & ()^2 \to a^{2} \\ & ()^{\frac{1}{3}}\to a^{\frac{2}{3}} \\ & -(a+x)^2\to a^{\frac{2}{3}}-{(a+x)}^{2} \\ & \sqrt{}\to \pm \sqrt{a^{\frac{2}{3}}-(a+x)^{2}} \\ & = y \\ \end{align}$ Parsing the original expression this way can help you to see how to unravel it.
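A quick numerical sanity check of the unwinding (a sketch; the particular values of $a$ and $x$ are arbitrary choices for which the radicand stays positive):

```python
import math

def forward(y, x, a):
    # y -> y^2 -> + (a+x)^2 -> ()^3 -> sqrt
    return math.sqrt((y * y + (a + x) ** 2) ** 3)

def backward(a, x):
    # a -> ()^2 -> ()^(1/3) -> - (a+x)^2 -> sqrt  (positive branch; -y also works)
    radicand = a ** (2.0 / 3.0) - (a + x) ** 2
    return math.sqrt(radicand)

a, x = 2.0, -1.5          # chosen so that a^(2/3) - (a+x)^2 > 0
y = backward(a, x)
assert abs(forward(y, x, a) - a) < 1e-9
```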
{ "language": "en", "url": "https://math.stackexchange.com/questions/2980246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Geometric intuition of the dimension of Grassmannians and flag manifolds I wish to understand geometrically (not just algebraically) why the dimension of the Grassmannian $G(k,n)$ is $k(n-k)$ and the dimension of a flag manifold $F(k_{1},k_{2},...,k_{n},N)$ is $\sum_{i=1}^{n}k_{i}(k_{i-1}-k_{i})+Nk_{n}$ (in fact understanding the Grassmannian case would be enough because flags are just "nested" Grassmannians). I am thinking in a spatial way about the well known $G(2,5)$ but I am unable to see geometrically how the space of all $2$-dimensional subspaces of a $5$-dimensional space can be $6$-dimensional.
The idea is to use the standard affine charts of $G(k,n)$. Start with the $k$-plane $P \subset \mathbb{R}^n$ (say, the one spanned by $e_1, \dots, e_k$) and a $(n-k)$-plane $P^\perp$ transverse to $P$ (say, the one spanned by $e_{k+1}, \dots, e_n$). The set of all $k$-planes transverse to $P^\perp$ is an open neighborhood of $P$. Each such $k$-plane $Q$ is the graph of a linear map $A: P \rightarrow P^\perp$ and vice versa. Therefore, the dimension of $G(k,n)$ is the dimension of the space of all linear maps from a fixed $k$-dimensional subspace to a fixed $(n-k)$-dimensional subspace, i.e., the space of all $(n-k)$-by-$k$ matrices, which has dimension $k(n-k)$. When $k = 1, 2$ and $n = 2, 3$, this can be seen visually and therefore viewed as geometric intuition. Similar coordinates can be defined for a flag manifold, where the matrices are block triangular.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2980359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving the division theorem with strong induction The exercise goes like this: Prove the division theorem using strong induction. That is, prove that for $a \in \mathbb{N}$, $b \in \mathbb{Z}^+$ there always exists $q, r \in \mathbb{N}$ such that $a = qb + r$ and $r < b$. In particular, give a proof that does not use $P(n−1)$ to prove $P(n)$ when $b > 1$. I have done a few proofs with strong induction before, but never with a predicate with multiple variables, so I'm unsure how to approach this. One idea I had, was to use the following as my predicate: $$P(a,b):= \exists r,q\in\mathbb{N}(a=b\cdot q+r)$$ and then use $\forall b \in \mathbb{Z}^+ .\forall i < a(P(i, b))$ as my first inductive hypothesis, and $\forall a \in \mathbb{N} .\forall i < b(P(a, i))$ as my second, proving them separately. But I'm not sure this is right, as I can't seem to prove it this way. Am I even on the right track here? Any help would be much appreciated!
You can prove it by strong induction on $a$. For $a=0$, it is trivial. Now, consider an arbitrary $a\in\mathbb N$ and assume that each $a'<a$ can be written as $qb+r$, with $r<b$. Now, if $a<b$, you can write $a$ as $0\times b+a$. Otherwise, consider $a-b$. By the induction hypothesis, it can be written as $bq+r$, with $r<b$. But then $a=(q+1)b+r$.
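The inductive proof translates directly into a recursive procedure; here is a minimal sketch (the names are my own):

```python
def div_rec(a, b):
    """Return (q, r) with a == q*b + r and 0 <= r < b, mirroring the induction."""
    assert a >= 0 and b > 0
    if a < b:                    # base case: a = 0*b + a
        return 0, a
    q, r = div_rec(a - b, b)     # induction hypothesis applied to a - b < a
    return q + 1, r

for a in range(0, 60):
    for b in range(1, 8):
        q, r = div_rec(a, b)
        assert a == q * b + r and 0 <= r < b
```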
{ "language": "en", "url": "https://math.stackexchange.com/questions/2980504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
If $z = f(x,y)$ is continuous on $\mathbb{R}^2$, then is $f(g_1(x), g_2(x))$ measurable, where $g_1$ and $g_2$ are measurable? Let $z = f(x,y)$ be a continuous function on $\mathbb{R}^2$, and let $g_1(x), g_2(x)$ be real valued measurable functions on $\mathbb{R}^1$. Prove $F(x) = f(g_1(x), g_2(x))$ is a measurable function on $[a,b]$. Proof Attempt. We need to show that the set $$F^{-1}\left([-\infty, c]\right) = \{ x \in [a,b]: F(x) \leq c\} = \{ F \leq c\}$$ is measurable. It is equivalent to show that $\{f\left(g_1(x), g_2(x)\right) \leq c\}$ is measurable. At this point I think I want to say something like: since $f$ is continuous and its argument set $[g_1(x), g_2(x)] = [x,y]$ is measurable, then $f$ is measurable. But how do I show that? Or am I heading in the wrong direction? The definition of function measurability is confusing for me.
Recall the composition of two measurable functions is measurable. Since $f$ is continuous, it is measurable. Let $g:\mathbb{R} \rightarrow \mathbb{R}^2$ map $x$ to $g(x) = (g_1(x),g_2(x))$. If $A = I_1 \times I_2 \subset \mathbb{R}^2$ is a rectangle (Cartesian product of intervals $I_1 \subset \mathbb{R}$ and $I_2 \subset \mathbb{R}$), then $$ \lbrace g(x) \in A \rbrace = \lbrace g_1(x) \in I_1 \rbrace\, \cap\, \lbrace g_2(x) \in I_2 \rbrace $$ Since $g_1$ and $g_2$ are measurable, each of the two sets on the right-hand side is measurable. This proves $g$ is measurable. Thus, $f\circ g$ is measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2980653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$t \in \mathbb{R}$ so that $f(t)=\int_{0}^{+\infty}e^{-tx}\frac{\sin x}{x}dx, t\in\mathbb{R}$ exists $$f(t)=\int_{0}^{+\infty}e^{-tx}\frac{\sin x}{x}dx, t\in\mathbb{R}$$ I need to find out for which $t \in \mathbb{R}$ this integral exists (meaning it doesn't diverge), first as a Riemann integral and then as a Lebesgue integral. As a Riemann integral it exists for $t \ge0$. But how can I show this, or how can I show that it doesn't exist for $t <0$? And what about the Lebesgue integral?
The Cauchy criterion can be helpful. The improper integral $\displaystyle \int_0^\infty f(x) \, dx$ exists and is finite if and only if $$\displaystyle \lim_{n,m \to \infty} \int_n^m f(x) \, dx = 0.$$ Let $t < 0$. Integrate over a half-period of the $\sin$ function to estimate $$\int_{2k\pi}^{2k\pi + \pi} e^{-tx} \frac{\sin x}{x} \, dx = \int_{2k\pi}^{2k\pi + \pi} e^{|t|x} \frac{\sin x}{x} \, dx \ge \frac{e^{2k\pi|t|}}{2k\pi+\pi}\int_{2k\pi}^{2k\pi + \pi} \sin x\, dx= \frac{2e^{2k\pi|t|}}{2k\pi+\pi} \to \infty$$ as $k \to \infty$. The Cauchy property fails, so the improper integral diverges. Just about the same argument shows you that if $t \le 0$ then $$\int_{0}^{\infty} \left| e^{-tx} \frac{\sin x}{x} \right| \, dx = \infty$$ so the Lebesgue integral does not exist for any $t \le 0$. If $t > 0$ use the fact that $\displaystyle \left| e^{-xt} \frac{\sin x}{x} \right| \le e^{-xt}$ and $\displaystyle \int_0^\infty e^{-xt} \, dx < \infty$ to conclude the Lebesgue integral exists for $t > 0$.
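The failure of the Cauchy criterion for negative $t$ is easy to see numerically; here is a rough sketch (trapezoid rule, with $t=-0.3$ as an arbitrary example) showing the half-period contributions blowing up:

```python
import math

def half_period(k, t, steps=2000):
    # trapezoid rule for the integral of e^{-t x} sin(x)/x over [2k*pi, 2k*pi + pi]
    lo = 2 * k * math.pi
    h = math.pi / steps
    f = lambda x: math.exp(-t * x) * math.sin(x) / x
    return h * (sum(f(lo + i * h) for i in range(1, steps))
                + (f(lo) + f(lo + math.pi)) / 2)

t = -0.3
vals = [half_period(k, t) for k in range(1, 8)]
# each half-period contributes more than the previous one: no Cauchy property
assert all(0 < v1 < v2 for v1, v2 in zip(vals, vals[1:]))
```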
{ "language": "en", "url": "https://math.stackexchange.com/questions/2980797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing an orthogonalisation process Can anyone show that: $\mathbf{a_{\perp}}=\mathbf{a}-\frac{\mathbf{x}\mathbf{x}^T}{\mathbf{x}^T\mathbf{x}}\mathbf{a}$, $\mathbf{a}\in\mathbb{R}^N$, $\mathbf{x}=(1,1,\dots,1)^T\in\mathbb{R}^N$ results in $\mathbf{a}$ becoming orthogonal to $\mathbf{x}$? Thank you for your help.
We have that $$\left(\mathbf{a}-\frac{\mathbf{x}\mathbf{x}^T}{\mathbf{x}^T\mathbf{x}}\mathbf{a}\right)\cdot \mathbf{x}=\mathbf{a}^T\mathbf{x}-\mathbf{a}^T\frac{\mathbf{x}\mathbf{x}^T}{\mathbf{x}^T\mathbf{x}}\mathbf{x}=\mathbf{a}^T\mathbf{x}-\mathbf{a}^T\mathbf{x}=0$$ Refer also to the related posts:

* Writing projection in terms of projection matrix
* orthogonal projection from one vector onto another
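Numerically, with $\mathbf{x}=(1,\dots,1)^T$ the projection $\frac{\mathbf{x}\mathbf{x}^T}{\mathbf{x}^T\mathbf{x}}\mathbf{a}$ is the constant vector whose entries equal the mean of $\mathbf{a}$, so $\mathbf{a}_\perp$ is just $\mathbf{a}$ with its mean subtracted. A dependency-free sketch:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def perp_to_ones(a):
    # a_perp = a - (x x^T / x^T x) a  with x = (1, ..., 1)
    x = [1.0] * len(a)
    c = dot(x, a) / dot(x, x)       # scalar (x^T a)/(x^T x), i.e. the mean of a
    return [ai - c for ai in a]

a = [3.0, -1.0, 4.0, 1.0, 5.0]
a_perp = perp_to_ones(a)
assert abs(dot(a_perp, [1.0] * len(a))) < 1e-12   # orthogonal to (1,...,1)
```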
{ "language": "en", "url": "https://math.stackexchange.com/questions/2980906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relationship between $\langle \nabla f(\overline{x}), x - \overline{x}\rangle > 0$ and the minimality of $\overline{x}$. Say $f$ is a differentiable function over a convex set $X$ with $\langle \nabla f(\overline{x}), x - \overline{x}\rangle > 0$ for all $x, \overline{x} \in X$ such that $x \neq \overline{x}$. Can we conclude from this alone that $\overline{x}$ is a local minimum? Certainly, if $f$ were pseudoconvex, this would hold trivially. However, I suspect that the strict inequality here may imply that $\overline{x}$ is indeed a local minimum.
Assuming you meant that $\langle \nabla f(\bar{x}), x - \bar{x}\rangle > 0, \forall x \neq \bar{x}$, this should imply a local minimum. From the differentiability of $f$, you know that near $\bar{x}$: $$ f(x) = f(\bar{x}) + \langle \nabla f(\bar{x}), x - \bar{x} \rangle + o(\| x - \bar{x}\|), $$ where $o(\| x - \bar{x} \|)$ is a term that satisfies $ \lim_{x \to \bar{x}} \frac{o(\| x-\bar{x}\|)}{\| x - \bar{x}\|} = 0. $ Then, assuming that $\forall \epsilon > 0$, there is a $x$ with $f(x) < f(\bar{x})$ and $\| x - \bar{x} \| < \epsilon$ (i.e. $\bar{x}$ is not a local minimum), you should be able to derive a contradiction for a sufficiently small $\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the optimal way to take a $16$-question true/false test with four attempts? Question: Suppose a student is taking a $16$-question true/false test with four attempts. They must keep the score that they obtain after the fourth trial. They do not know the answer to any question. After each attempt, the grader tells them how many they got correct but not which ones. What's the best way to maximize the expected value of the fourth attempt? My thoughts: So, to clarify the question, one way to make the expected value equal to $9.5$ is as follows: Attempt 1: Answer question $1$ and leave the rest blank. If the grader tells you that you got $1$ right, you know the answer to question $1$ with certainty. If the grader tells you that you got $0$ correct, then you still know the answer to question $1$ with certainty (if you put true, then it will be false). Attempt 2: Repeat with question 2. Attempt 3: Repeat with question 3. So, we're guaranteed three questions right, and the expected value for attempt four is $3 + \sum_{i=1}^{13}1 \cdot 1/2 = 3 + 6.5 = 9.5$. So we're expected to get $9.5$ right. Another thing that someone could do is answer "TRUE" for every question on attempt $1$. That'll tell you how many true/false questions there are in total. Then, on the second attempt, leave the second half of the test blank, and answer TRUE for everything in the first half. This is sort of like a binary search. It tells you how many are true in the first half, and in the first quarter, etc. I didn't get anywhere with this. Is my solution of an expected value of $9.5$ the most optimal solution? What if there were $3$ attempts instead of $4$?
Here is one strategy that beats yours: Attempt 1: Give random answers to questions 1-5 and leave the rest blank. Attempt 2: Give random answers to questions 6-10 and leave the rest blank. Attempt 3: Give random answers to questions 11-15 and leave the rest blank. Final test: For questions 1-5 give the same answers as in attempt 1, except invert all of them if you had less than 3 correct. Similarly for 6-10 and 11-15. Answer question 16 randomly. This guarantees you at least 3 correct answers in each group of 5, and some probability of even more. If I'm calculating correctly the expected number of rights in each group is $3\frac{7}{16}$, for a total expectation of $3\cdot 3\frac{7}{16} + \frac12 = 10\frac{13}{16}$.
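The expectation of this strategy can be computed exactly with a small enumeration (a sketch assuming the block-of-five scheme above: in each block you keep or flip all five probe answers, so you end up with $\max(c, 5-c)$ correct when the probe got $c$ right):

```python
from fractions import Fraction
from math import comb

# In each block of 5, the probe attempt gets c correct with probability C(5,c)/32.
block = sum(Fraction(comb(5, c), 32) * max(c, 5 - c) for c in range(6))
assert block == Fraction(55, 16)              # 3 7/16 per block

total = 3 * block + Fraction(1, 2)            # three blocks plus the guessed question 16
assert total == Fraction(173, 16)             # 10 13/16
print(float(total))                           # 10.8125
```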
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Seeking methods to solve $ I = \int_{0}^{\infty} \frac{\sin(kx)}{x\left(x^2 + 1\right)} \:dx$ I am currently working on an definite integral that requires the following definite integral to be evaluated. $$ I = \int_{0}^{\infty} \frac{\sin(kx)}{x\left(x^2 + 1\right)} \:dx$$ Where $k \in \mathbb{R}^{+}$ I was wondering what methods can be employed to solve this integral?
The method I took was: Let $$ I(t) = \int_{0}^{\infty} \frac{\sin(kxt)}{x\left(x^2 + 1\right)} \:dx$$ Take the Laplace Transform \begin{align} \mathscr{L} \left[I(t) \right]&= \int_{0}^{\infty} \frac{\mathscr{L} \left[\sin(kxt)\right]}{x\left(x^2 + 1\right)} \:dx \\ &= \int_{0}^{\infty} \frac{kx}{\left(k^2x^2 + s^2\right)x\left(x^2 + 1\right)} \:dx \\ &= \int_{0}^{\infty} \frac{k}{\left(k^2x^2 + s^2\right)\left(x^2 + 1\right)} \:dx \\ &= \int_{0}^{\infty} \left[\frac{k^3}{\left(k^2 - s^2\right)\left(k^2x^2 + s^2 \right)} - \frac{k}{\left(k^2 - s^2\right)\left(x^2 + 1\right)}\right] \:dx \\ &= \left[\frac{k^3}{\left(k^2 - s^2\right)}\frac{\arctan\left(kx/s\right)}{ks} - \frac{k}{\left(k^2 - s^2\right)}\arctan(x)\right]_{0}^{\infty} \\ &= \frac{k^2}{s\left(k^2 - s^2\right)}\frac{\pi}{2} - \frac{k}{\left(k^2 - s^2\right)}\frac{\pi}{2} \\ &= \frac{k}{k^2 - s^2}\left[ \frac{k}{s} - 1 \right]\frac{\pi}{2} \\ &= \frac{k}{s\left(k + s\right)}\frac{\pi}{2} \end{align} And thus, $$ I(t) = \mathscr{L}^{-1}\left[\frac{k}{s\left(k + s\right)}\frac{\pi}{2} \right] = \left[1 - e^{-kt} \right]\frac{\pi}{2}$$ Lastly, $$ I = I(1) = \left[1 - e^{-k} \right]\frac{\pi}{2}$$
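A numerical cross-check of the final result (a sketch using composite Simpson's rule with a truncated upper limit; since the integrand decays like $1/x^3$, the tail beyond $x=60$ contributes less than $10^{-3}$):

```python
import math

def integrand(x, k):
    # sin(kx)/(x(x^2+1)), with the limiting value k at x = 0
    return k if x == 0.0 else math.sin(k * x) / (x * (x * x + 1))

def I_numeric(k, upper=60.0, steps=6000):
    # composite Simpson's rule on [0, upper]; steps must be even
    h = upper / steps
    s = integrand(0.0, k) + integrand(upper, k)
    s += 4 * sum(integrand((2 * i - 1) * h, k) for i in range(1, steps // 2 + 1))
    s += 2 * sum(integrand(2 * i * h, k) for i in range(1, steps // 2))
    return s * h / 3

k = 2.0
assert abs(I_numeric(k) - (1 - math.exp(-k)) * math.pi / 2) < 1e-3
```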
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to rectify "time lost " by pendulum clock by alteration in its length? question: If a clock loses $5$ seconds per day ,what is the alteration required in the length of pendulum in order that the clock keeps correct time $(a)\dfrac{4}{86400} $times its original length be shortened $(b)\dfrac{1}{86400}$ times its original length be shortened $(c)\dfrac{1}{8640}$ times its original length be shortened $(d)\dfrac{4}{8640} $times its original length be shortened my attempt: there are $86400$ seconds in a day but clock is slow so it only counts $86395$ seconds so, the factor by which clock is slow is $\dfrac{86395}{86400}=0.99994212$ so, new pendulum's length should be $\left(\dfrac{86395}{86400}\right)^2\times$(original length) but none of the options are matching my answer please tell me right approach to solve this problem thank you!
$$1-\left(\dfrac{86395}{86400}\right)^2 = 1-\left(1-\dfrac{5}{86400}\right)^2 = 1-\left(1-\dfrac{10}{86400}+\dfrac{5^2}{86400^2}\right) = \dfrac{1}{8640}-\dfrac{1}{298598400}$$ which suggests to me that you might be expected to give answer $(b)$
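Exact arithmetic (a quick sketch with `fractions`) confirms both the closeness to option (b) and the exact correction term:

```python
from fractions import Fraction

counted = Fraction(86395, 86400)        # fraction of true time the slow clock records
shorten = 1 - counted**2                # required fractional shortening, since T ~ sqrt(L)
assert shorten == Fraction(1, 8640) - Fraction(1, 298598400)
assert abs(float(shorten) - 1 / 8640) < 1e-8   # effectively 1/8640
```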
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
about the p-adic logarithm It is well known that we can define the logarithm in the p-adic setup by the usual power series that converges in $B(1,1^-)$. In Schikhof's "Ultrametric Calculus" there is an extension of the $\log$ from the unit ball to $C_p^{\times}$ (called $LOG$) and it is proved that it is locally analytic; then he defines the Iwasawa logarithm as the function $LOG(x) -ord_p(x)$ and this is the unique multiplicative function extending the logarithm of the unit ball such that it vanishes on $p$. My question is: is this function still locally analytic on $C_p^{\times}$? Thanks for the answers.
Sure. Local analyticity can be judged solely from the behavior at the identity, $1$. And there, the function is given by the series that you know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a $4$-component link such that upon removing any one of them you get the Borromean link? Is there a $4$-component link such that upon removing any one of them you get the Borromean link? I've managed to get close but not quite. What I have gets me something similar to the Borromean link but two of the components actually form what I think is the Whitehead link.
You can sort of think of the Borromean rings as lying on three faces of a tetrahedron, with the center triangle of the usual presentation at a vertex. By adding a component corresponding to the fourth face in a way so that the link has tetrahedral symmetry, you get the following: While the outer component looks funny, it is the same as any of the other three, in the sense that there is an isotopy of the link sending this diagram to itself, but moving an inner component to the outer component.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
$\infty\cdot 0$ indeterminate form without L'Hôpital? Evaluate $\lim_{n\to\infty}n\cdot r^n$, where $0<r<1$. I don't know if I took the proper steps, but I got to this point: $$\lim_{n\to\infty}n\cdot r^n=\lim_{n\to\infty}n \lim_{n\to\infty}r^n = \infty \cdot 0 $$ I don't know how to resolve this indeterminate form without L'Hôpital's rule.
You want $\lim_{n\to\infty}n e^{-cn}$ with $c:=-\ln r>0$. Since $x\mapsto x e^{-cx}$ is positive and decreasing for $x>1/c$, and $\int_0^\infty x e^{-cx}\,\mathrm{d}x=c^{-2}$ is finite, the limit is $0$: a positive, eventually decreasing function with a finite integral over $[0,\infty)$ must tend to $0$.
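An elementary alternative that avoids both L'Hôpital and integrals: the ratio of consecutive terms is $\frac{(n+1)r^{n+1}}{nr^n}=r\left(1+\frac1n\right)\to r<1$, so the sequence is eventually dominated by a geometric one. A quick numerical illustration (with $r=0.9$ as an arbitrary example):

```python
r = 0.9
# ratio of consecutive terms tends to r < 1, so terms eventually shrink geometrically
ratios = [((n + 1) * r ** (n + 1)) / (n * r ** n) for n in (10, 100, 1000)]
assert all(q < 1 for q in ratios)

vals = [n * r ** n for n in (1, 10, 100, 1000)]
assert vals[-1] < 1e-40       # 1000 * 0.9**1000 is astronomically small
```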
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 8, "answer_id": 3 }
About the C*-algebra of the Schrodinger representation of the Weyl C*-algebra We start with the Weyl C*-algebra $\mathcal{W}$ for a finite dimensional symplectic space and we consider the irreducible Schrodinger representation $\pi:\mathcal{W}\rightarrow \mathcal{B}(\mathcal{H})$ where $\mathcal{H}=L^2(\mathbb{R}^n)$. Since this representation is irreducible, the von Neumann algebra it generates is $\pi(\mathcal{W})''=\mathcal{B}(\mathcal{H})$. The question now is: Is the C*-algebra generated by the representation $\pi(\mathcal{W})=\mathcal{B}(\mathcal{H})$, or is it strictly smaller, i.e. $\pi(\mathcal{W}) \subsetneq \mathcal{B}(\mathcal{H})$? In that case, is there any useful characterization for such C*-algebra $\pi(\mathcal{W}) \subsetneq \mathcal{B}(\mathcal{H})$?
$\pi(\mathcal{W})$ is strictly smaller than $B(H)$. Since the Weyl algebra $\mathcal{W}$ is simple, every nontrivial representation is faithful. Therefore $\pi:\mathcal{W}\rightarrow\pi(\mathcal{W})\subseteq B(H)$ is a bijective *-homomorphism (surjective onto its image), hence $\pi(\mathcal{W})\cong\mathcal{W}$ as C${}^\ast$-algebras. I hope this answers your second question. As for your first: $\pi(\mathcal{W})$ does not contain the compact operators (it is unital and simple, so if it contained a nonzero compact operator, the compacts in it would form a nonzero closed ideal, forcing the identity to be compact, which is impossible since $H$ is infinite dimensional), so $\pi(\mathcal{W})\subsetneq B(H)$. For more info, see Photons In Fock Space And Beyond by Honegger and Rieckers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2981977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }