Integrate $\frac{\sin^3 \frac{x}{2}}{\cos \frac{x}{2} \sqrt{\cos x+\cos^2 x+\cos^3 x}}$ Evaluate $$\int \frac{\sin^3 \frac{x}{2}}{\cos \frac{x}{2} \sqrt{\cos x+\cos^2 x+\cos^3 x}}dx$$ I saw terms like $1+t+t^2$ in the denominator , so I thought of $t^3-1$ and then converting back into half angle but it doesn't help me. I am unable to simplify it further. Please tell me a better way to start. Thanks.
|
Let $$\displaystyle I = \int \frac{\sin ^3(x/2)}{\cos(x/2)\sqrt{\cos^3 x+\cos^2x+\cos x}}dx = \frac{1}{2}\int\frac{2\sin^2 \frac{x}{2}\cdot 2\sin \frac{x}{2}\cdot \cos \frac{x}{2}}{2\cos^2 \frac{x}{2}\sqrt{\cos^3 x+\cos^2 x+\cos x}}dx$$
So we get $$\displaystyle I = \frac{1}{2}\int\frac{(1-\cos x)\cdot \sin x}{(1+\cos x)\sqrt{\cos^3 x+\cos^2 x+\cos x}}dx$$
Now put $\cos x = t$, so that $\sin x\,dx = -dt$. The integral becomes $$\displaystyle I = -\frac{1}{2}\int\frac{(1-t)}{(1+t)\sqrt{t^3+t^2+t}}dt = -\frac{1}{2}\int\frac{(1-t^2)}{(1+t)^2\sqrt{t^3+t^2+t}}dt$$
So we get $$\displaystyle I = \frac{1}{2}\int\frac{\left(1-\frac{1}{t^2}\right)}{\left(t+\frac{1}{t}+2\right)\sqrt{t+\frac{1}{t}+1}}dt$$
Now let $\displaystyle \left(t+\frac{1}{t}+1\right) = u^2\;;$ then $\left(1-\frac{1}{t^2}\right)dt = 2udu$, and since $t+\frac{1}{t}+2 = u^2+1$, the integral becomes $$\displaystyle I = \frac{1}{2}\int\frac{2u}{u^2+1}\cdot \frac{1}{u}du = \int\frac{du}{u^2+1} = \tan^{-1}(u)+\mathcal{C}$$
Undoing the substitutions, $$\displaystyle I = \tan^{-1}\sqrt{t+\frac{1}{t}+1}+\mathcal{C} = \tan^{-1}\sqrt{\cos x+\sec x+1}+\mathcal{C}$$
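A quick numerical sanity check (my addition, not part of the derivation): the antiderivative $\tan^{-1}\sqrt{\cos x+\sec x+1}$ should differentiate back to the integrand, and a central finite difference confirms this at a sample point where $\cos x+\cos^2 x+\cos^3 x>0$.

```python
import math

def integrand(x):
    # sin^3(x/2) / (cos(x/2) * sqrt(cos x + cos^2 x + cos^3 x))
    c = math.cos(x)
    return math.sin(x / 2) ** 3 / (math.cos(x / 2) * math.sqrt(c + c**2 + c**3))

def antiderivative(x):
    # I(x) = arctan( sqrt(cos x + sec x + 1) )
    c = math.cos(x)
    return math.atan(math.sqrt(c + 1 / c + 1))

x, h = 0.5, 1e-5
derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
print(abs(derivative - integrand(x)))  # negligible, on the order of 1e-10
```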
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1781313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is a pointwise limit of $L^p$ functions in $L^p$ under certain conditions? Let $I=[0,1]$ with Lebesgue measure and $1\leq p<\infty$. If $f_k$ is a sequence in $L^p(I)$ with $\|f_k\|_p\leq 1$ for all $k$, and $f(x)=\lim_{k\rightarrow\infty} f_k(x)$ exists almost everywhere, must $f$ belong to $L^p(I)$?
|
By Fatou's lemma,
$$\int_0^1|f(x)|^p\;dx\leq \liminf_{n\to\infty}\int_0^1|f_n(x)|^p\;dx\leq 1$$
so $f$ is in $L^p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1781453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Geometric Probability Problem, Random Numbers $0$-$1+$ Triangles. Randy presses RANDOM on his calculator twice to obtain two random numbers between $0$ and $1$. Let $p$ be the probability that these two numbers and $1$ form the sides of an obtuse triangle. Find $p$.
At first, I thought that the answer would be $.25$, because the probability of both of the numbers being above $.5$ is $.5$. But I realized that this is not true, for cases such as $.99, .99, 1,$ etc. This problem has me a bit stumped. I'm fairly certain I need to use a graph, but I don't know how.
|
I bet they are looking for you to use the Pythagorean Theorem. Since each random number is less than $1$, you can take $1$ to be the longest side. Two conditions must hold. First, the three lengths must actually form a triangle, which requires $a+b>1$. Second, the triangle is obtuse (at the angle opposite the side of length $1$) when $a^2+b^2<1^2$, i.e. $a^2+b^2<1$. This is a two-variable probability problem, so set $a=x$ and $b=y$: the sample space is the unit square, and the favourable region is the part of the quarter disc $x^2+y^2<1$ lying above the line $x+y=1$. Its area is $\frac{\pi}{4}-\frac{1}{2}$, and since the unit square has area $1$, $$p=\frac{\pi}{4}-\frac{1}{2}\approx 0.2854.$$
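As a numerical cross-check (my addition), a seeded Monte Carlo sketch that requires both a valid triangle ($a+b>1$, since $1$ is the longest side) and obtuseness ($a^2+b^2<1$) gives $p=\pi/4-1/2\approx 0.2854$:

```python
import random
import math

random.seed(0)
trials = 1_000_000
hits = 0
for _ in range(trials):
    a, b = random.random(), random.random()
    # valid triangle with longest side 1, and obtuse at the angle opposite 1
    if a + b > 1 and a * a + b * b < 1:
        hits += 1

estimate = hits / trials
print(estimate, math.pi / 4 - 0.5)  # both ≈ 0.2854
```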
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1781582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can any presentation of a finitely presented group be reduced to a finite one? Suppose $G = \langle x_1, \ldots, x_n \mid p_1, \ldots, p_m \rangle$ is a finitely presented group, and let $\langle A \mid R \rangle$ be another presentation of $G$, with $A$ and $R$ possibly infinite. Do there always exist finite subsets $A' \subset A$, $R' \subset R$ such that $G = \langle A' \mid R' \rangle$?
I feel like the answer should be "yes." Here's my idea: we can write each $x_i$ as a product of finitely many $a \in A$ (and their inverses); denote by $A_i$ this finite set of $a$'s. Then the finite set $A_1 \cup \cdots \cup A_n$ generates $G$.
Similarly, each relator $p_i$ can be derived from a finite set of relators $R_i \subset R$. Here's my problem, though: how do I know that any relator $w$ in the letters $A_1 \cup \cdots \cup A_n$ can be reduced using these $R_i$? Not every such $w$ blocks off into $x_i$-chunks.
|
Not every presentation of a finitely presented group can be reduced to a finite one.
In his highly recommended notes on geometric group theory, Charles F. Miller III provides the presentation:
\begin{equation}
\langle a,b,c_0,c_1,c_2,\ldots \mid a^4=1, b^3 =1, c_0^{-1} b c_0 = a^2, c_1^{-1} c_0 c_1 =b, c_2^{-1} c_1 c_2 = c_0 , c_3^{-1}c_2 c_3 = c_1,\ldots\rangle
\end{equation}
which defines a cyclic group of order $2$, while every one of its finite subpresentations defines a group of order $3$, a group of order $4$, or an infinite group.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1781695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
}
|
Give a graph model for a permutation problem
Describe a graph model for solving the following problem: Can the permutations of $\{1,2,\ldots,n\}$ be arranged in a sequence so that the adjacent permutations $$p:p_1,\ldots,p_n \text{ and } q:q_1,\ldots,q_n$$ satisfy $p_i\neq q_i$ for all $i$?
I have trouble understanding what the exercise asks. What does "adjacent permutation" mean? Also, a follow-up question says that the statement is true for $n\geq 5$.
|
I think they mean permutations that are adjacent in the sequence. If the sequence is $\pi_1,\pi_2,\ldots,\pi_{n!}$, then e.g. $\pi_5$ and $\pi_6$ would be adjacent. And where it says "the adjacent permutations" I think they mean "any two adjacent permutations".
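To make the graph model concrete, here is a small sketch of my own (not from the original exchange): take the permutations of $\{1,\dots,n\}$ as vertices and join two permutations by an edge when they disagree in every position; the exercise then asks whether this graph has a Hamiltonian path. For $n=3$ the graph is two disjoint triangles, so no such path exists there.

```python
from itertools import permutations

def discordance_graph(n):
    # vertices: all permutations of {1, ..., n}
    # edge p—q iff p[i] != q[i] for every position i
    verts = list(permutations(range(1, n + 1)))
    edges = [(p, q) for i, p in enumerate(verts) for q in verts[i + 1:]
             if all(pi != qi for pi, qi in zip(p, q))]
    return verts, edges

verts, edges = discordance_graph(3)
print(len(verts), len(edges))  # 6 vertices, 6 edges (two disjoint triangles)
```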
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1781911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Covariance of stochastic integral I have a big problem with such a task:
Calculate $\text{Cov} \, (X_t,X_r)$ where $X_t=\int_0^ts^3W_s \, dW_s$, $t \ge 0$.
I've tried to do it this way, assuming $t \le r$:
$$\text{Cov} \, (X_t,X_r)=\text{Cov} \, \left(\int_0^ts^3W_s \, dW_s,\int_0^rs^3W_s \, dW_s \right)=\int_0^t s^6 W^2_s \, ds$$
And there I got stuck.
|
$\text{Cov} \, (X_t,X_r)=\text{Cov} \, (\int_0^ts^3W_sdW_s,\int_0^rs^3W_sdW_s)=\int_0^t s^6 W^2_s ds$
This identity does not hold true. Note that the left-hand side is a fixed real number whereas the right-hand side is a random variable.
If you apply Itô's isometry correctly, you find
$$\text{cov} \, (X_t,X_r) = \color{red}{\mathbb{E} \bigg( }\int_0^t s^6 W_s^2 \, ds \color{red}{\bigg)}.$$
Applying Tonelli's theorem yields
$$\text{cov} \, (X_t,X_r) = \int_0^t s^6 \mathbb{E}(W_s^2) \, ds.$$
Now, since $W_s \sim N(0,s)$, we have $\mathbb{E}(W_s^2) = s$, and therefore $$\text{cov} \, (X_t,X_r) = \int_0^t s^7 \, ds = \frac{t^8}{8} \qquad (t \le r).$$
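Carrying out the last integral gives $\operatorname{cov}(X_t,X_r)=\int_0^t s^7\,ds=t^8/8$ for $t\le r$; in particular $\operatorname{Var}(X_1)=1/8$. A seeded Monte Carlo sketch (my addition) of the Itô sum checks this at $t=r=1$:

```python
import random
import math

random.seed(1)
t, steps, paths = 1.0, 200, 10000
dt = t / steps
acc = 0.0
for _ in range(paths):
    W, X = 0.0, 0.0
    for i in range(steps):
        s = i * dt
        dW = random.gauss(0.0, math.sqrt(dt))
        X += s**3 * W * dW      # Itô sum: integrand evaluated at the left endpoint
        W += dW
    acc += X * X

var_est = acc / paths           # since E(X_t) = 0, this estimates Var(X_t)
print(var_est, t**8 / 8)        # ≈ 0.125, up to discretization and MC error
```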
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Unramified primes of splitting field I would like to show the following:
Theorem: Let $K$ be a number field and and $L$ be the splitting field of a polynomial $f$ over $K$. If $f$ is separable modulo a prime $\lambda$ of $K$, then $L$ is unramified above $\lambda$.
This should follow from the following theorem:
Theorem: Let $L / K$ be a finite extension of number fields, and $B$ resp. $A$ the ring of integers of $L$ resp. $K$. Let $\mathfrak{p}$ be a prime of $K$ and $p$ the prime number lying under $\mathfrak{p}$. Let $\alpha \in B$. Let $f$ be the minimal polynomial of $\alpha$ over $K$, and let $\overline{f} = \overline{g_1}^{e_1} \cdots \overline{g_r}^{e_r}$ be the distinct irreducible factors of $f$ modulo $\mathfrak{p}$. If $p$ does not divide the order of $B / A[\alpha]$, then $\mathfrak{p}B = \mathfrak{P}_1^{e_1} \cdots \mathfrak{P_r}^{e_r}$.
How can I do this?
Thanks a lot!
|
I'll start by reformulating the first theorem:
Theorem: Let $F = K(\alpha)$, where $f$ is the minimal polynomial of $\alpha$ over $K$. Suppose that $\mathfrak p$ is a prime of $K$, and that $f(X)$ splits as a product of distinct irreducibles modulo $\mathfrak p$ (i.e. $f$ is separable mod $\mathfrak p$). Then $\mathfrak p$ is unramified in $F$.
Why is this the same thing? As the splitting field of $f$, $L$ is the compositum of the fields $K(\alpha_i)$, where the $\alpha_i$ are the roots of $f$. These fields are pairwise isomorphic, so if $\mathfrak p$ is unramified in one of them, it is unramified in all of them, and hence it is unramified in $L$.
Let $p$ be the rational prime lying under $\mathfrak p$.
Case 1: $p\nmid [\mathcal O_F:\mathcal O_K[\alpha]]$.
Here, we are in the case of the second theorem. By assumption, $$\overline f=\overline g_1\cdots\overline g_n\pmod {\mathfrak p}$$
where the $g_i$ are distinct, so $\mathfrak p\mathcal O_F$ splits as a product of distinct primes. Hence it is unramified.
Case 2: $p\mid [\mathcal O_F:\mathcal O_K[\alpha]]$.
In this case, the theorem does not apply directly as stated. However, the proof of the theorem shows that $\mathfrak p$ splits as a product of distinct primes in $\mathcal O_K[\alpha]$. If $\mathfrak P^2\mid \mathfrak p\mathcal O_F$ for some prime $\mathfrak P$ of $\mathcal O_L$, then, taking $\mathfrak q = \mathfrak P\cap\mathcal O_K[\alpha]$, we see that $\mathfrak q^2\mid \mathfrak p\mathcal O_K[\alpha]$, which does not happen.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Is it possible that Gödel's completeness theorem could fail constructively? Gödel's completeness theorem says that for any first order theory $F$, the statements derivable from $F$ are precisely those that hold in all models of $F$. Thus, it is not possible to have a theorem that is "true" (in the sense that it holds in the intersection of all models of $F$) but unprovable in $F$.
However, Gödel's completeness theorem is not constructive. Wikipedia claims that (at least in the context of reverse mathematics) it is equivalent to the weak König's lemma, which in a constructive context is not valid, as it can be interpreted to give an effective procedure for the halting problem.
My question is, is it still possible for there to be "unprovable truths" in the sense that I describe above in a first order axiomatic system, given that Gödel's completeness theorem is non-constructive, and hence, given a property that holds in the intersection of all models of $F$, we may not actually be able to effectively prove that proposition in $F$?
|
Given a sentence $\phi$ that holds in all models of a first-order theory $F$, you can effectively find a proof of $\phi$: just enumerate all proofs in $F$ until you find one that proves $\phi$ (such a proof exists, since you are given that $\phi$ holds in all models of $F$ and hence, by the completeness theorem, that $\phi$ is provable). So there are no unprovable truths in first-order logic.
The non-constructive nature of the proof of the completeness theorem is that it uses non-constructive methods to prove the existence of a model of a sentence that cannot be disproved. When you apply the theorem to some sentence $\phi$ that you can prove (maybe non-constructively) to be true in all models, finding the proof of $\phi$ is effective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Largest root as exponent goes to $+\infty$ Let $a\geq 1$ and consider
$$
x^{a+2}-x^{a+1}-1.
$$
I am interested to see what is the largest root of this polynomial as $a\to +\infty$.
In order to find a root, we surely have to have
$$
x^{a+2}-x^{a+1}=x^{a+1}(x-1)=1.
$$
Hence, I guess we have to look for which $x$ we have that
$$
x^{a+1}(x-1)\to 1\text{ as }a\to+\infty.
$$
Intuitively, if $x$ tends to some value larger than $1$ for $a\to\infty$, the whole thing should diverge. On the other side, if $x$ tends to some value smaller than $1$, then the whole expression should converge to $0$. Hence I guess that $x\to 1$ as $a\to\infty$ in order to get a root.
|
If we set
$$ p_a(x) = x^{a+2}-x^{a+1}-1 $$
we may easily see that $p_a(x)$ is negative on $[0,1]$, increasing and convex on $[1,+\infty)$, so the largest real root is in a right neighbourhood of $x=1$. We may also notice that:
$$ p_a\left(1+\frac{\log(a+1)}{a+1}\right) = \frac{\log(a+1)}{a+1}\left(1+\frac{\log(a+1)}{a+1}\right)^{a+1}-1>0 $$
by Bernoulli's inequality, hence the largest root of $p_a$ is between $1$ and $1+\frac{\log(a+1)}{a+1}$.
A more effective localization can be achieved by performing one step of Newton's method with starting point $x=1+\frac{\log(a+1)}{a+1}$.
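A bisection sketch (my addition) checks the localization numerically for a few sample values of $a$: in each case the largest root lies strictly between $1$ and $1+\frac{\log(a+1)}{a+1}$.

```python
import math

def p(a, x):
    return x**(a + 2) - x**(a + 1) - 1

def largest_root(a, tol=1e-12):
    # p_a(1) = -1 < 0 and p_a is increasing on [1, ∞): bisect upward.
    # For the sample values of a below, p_a is already positive at 1 + 2L.
    L = math.log(a + 1) / (a + 1)
    lo, hi = 1.0, 1.0 + 2 * L
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p(a, mid) < 0 else (lo, mid)
    return (lo + hi) / 2

for a in (10, 100, 1000):
    L = math.log(a + 1) / (a + 1)
    r = largest_root(a)
    print(a, r, 1 + L, 1 < r < 1 + L)  # last entry is True for each a shown
```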
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
}
|
Extremum of $f (x,y) := x^2+y^2$ subject to the equality constraint $x+y=3$ I had to find the extremum of $z=x^2+y^2$ subject to the constraint $x+y=3$. I used Lagrange multipliers to reach the conclusion that $(1.5,1.5)$ is an extremum point, but had no way of determining whether it's a maximum or a minimum (we did not study the Sylvester criteria). Regardless, intuitively, the most symmetric sum usually gives the largest result, and this is what I used as a justification for the point being a maximum. This is, of course, hardly a mathematical way of showing the correctness of a statement, which is why I ask here what way there is to show it's a maximum in a correct well defined orderly fashion?
|
By the Cauchy–Schwarz (CS) inequality:
$$
x+y=(x,y)\cdot (1,1)\le \sqrt{x^2+y^2}\sqrt{2}
$$
Since $x+y=3$:
$$
3\le \sqrt{2}\sqrt{x^2+y^2}
$$
Now, squaring both sides yields
$$
9\le 2(x^2+y^2)
$$
In other words
$$
x^2+y^2\ge \frac{9}{2}
$$
This lower bound is attained when $x=y=3/2$, so the critical point found by Lagrange multipliers is a minimum, not a maximum. (There is no constrained maximum at all, since $x^2+y^2\to\infty$ along the line $x+y=3$.)
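A one-line numerical check (my addition): substituting the constraint $y=3-x$ reduces the problem to one variable, and sampling shows the minimum value $9/2$ at $x=3/2$.

```python
# minimize f(x, y) = x^2 + y^2 on the line x + y = 3 by substituting y = 3 - x
xs = [i / 1000 for i in range(-5000, 5001)]          # sample x in [-5, 5]
best_val, best_x = min((x**2 + (3 - x) ** 2, x) for x in xs)
print(best_val, best_x)  # 4.5 at x = 1.5
```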
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluation of $\sum_{n=1}^\infty \frac{(-1)^{n-1}\eta(n)}{n} $ without using the Wallis Product In THIS ANSWER, I showed that
$$2\sum_{s=1}^{\infty}\frac{1-\beta(2s+1)}{2s+1}=\ln\left(\frac{\pi}{2}\right)-2+\frac{\pi}{2}$$
where $\beta(s)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^s}$ is the Dirichlet Beta Function.
In the development, it was noted that
$$\begin{align}
\sum_{n=1}^\infty(-1)^{n-1}\log\left(\frac{n+1}{n}\right)&=\log\left(\frac21\cdot \frac23\cdot \frac43\cdot \frac45\cdots\right)\\\\
&=\log\left(\prod_{n=1}^\infty \frac{2n}{2n-1}\frac{2n}{2n+1}\right)\\\\
&=\log\left(\frac{\pi}{2}\right) \tag 1
\end{align}$$
where I used Wallis's Product for $\pi/2$.
If instead of that approach, I had used the Taylor series for the logarithm function, then the analysis would have led to
$$\sum_{n=1}^\infty(-1)^{n-1}\log\left(\frac{n+1}{n}\right)=\sum_{n=1}^\infty \frac{(-1)^{n-1}\eta(n)}{n} \tag 2$$
where $\eta(s)=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}$ is the Dirichlet eta function.
Given the series on the right-hand side of $(2)$ as a starting point, it is evident that we could simply reverse steps and arrive at $(1)$.
But, what are some other distinct ways that one can take to evaluate the right-hand side of $(2)$?
For example, one might try to use the integral representation
$$\eta(s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}}{1+e^x}\,dx$$
and arrive at
$$\sum_{n=1}^\infty \frac{(-1)^{n-1}\eta(n)}{n} =\int_0^\infty \frac{1-e^{-x}}{x(1+e^x)}\,dx =\int_1^\infty \frac{x-1}{x^2(x+1)\log(x)}\,dx \tag 3$$
Yet, neither of these integrals is trivial to evaluate (without reversing the preceding steps).
And what are some other ways to handle the integrals in $(3)$?
|
Observation $1$:
A suggestion made in a comment from @nospoon was to expand one of the integrals in a series and exploit Frullani's Integral. Proceeding accordingly, we find that
$$\begin{align}
\int_0^\infty \frac{1-e^{-x}}{x(1+e^x)}\,dx&=\int_0^\infty \left(\frac{(e^{-x}-e^{-2x})}{x}\right)\left(\sum_{n=0}^\infty (-1)^{n}e^{-nx}\right)\,dx\\\\
&=\sum_{n=0}^\infty (-1)^{n} \int_0^\infty \frac{e^{-(n+1)x}-e^{-(n+2)x}}{x}\,dx\\\\
&=\sum_{n=0}^\infty (-1)^{n} \log\left(1+\frac{1}{n+1}\right)\\\\
&=\sum_{n=1}^\infty (-1)^{n-1} \log\left(1+\frac{1}{n}\right)
\end{align}$$
thereby recovering the left-hand side of Equation $(1)$ in the OP.
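The recovered series can be confirmed numerically (a stdlib sketch of my own): since its terms alternate in sign and decrease, the partial sums bracket the limit $\ln(\pi/2)$, with error bounded by the first omitted term.

```python
import math

N = 1_000_000  # even, so the partial sum undershoots the limit by < ln(1 + 1/(N+1))
partial = sum((-1) ** (n - 1) * math.log(1 + 1 / n) for n in range(1, N + 1))
target = math.log(math.pi / 2)
print(partial, target)  # agree to roughly 6 decimal places
```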
Observation $2$:
In the answer posted by Olivier Oloa, note the intermediate relationship
$$I'(s)=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n+s}$$
Upon integrating $I'(s)$, as $\int_0^1 I'(s)\,ds$, we find that
$$I(1)=\sum_{n=1}^\infty (-1)^{n-1}\log\left(1+\frac1n\right)$$
thereby recovering again the left-hand side of Equation $(1)$ in the OP.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
}
|
Can fractions be written as over 1? I know that all whole numbers can be written as the whole number divided by one. I was wondering if fractions could be written the same way, for example..
Can $1\over2$ be written as
$1/2\over1$
Or $2\over3$ as
$2/3\over1$
If we wanted to solve this problem,
$3 \times 1/2$
Could it be done this way?
${3\over1} \times {1/2\over1} = {(3/2)\over1}$
|
Of course! Every real number, when divided by one, equals itself. This is true regardless of how the real number is expressed, whether as a fraction, a mixed number, or any other representation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
$A\times B$ connected component implies $A,B$ connect components $X,Y$ topological spaces. I want to show that if $C\subseteq X\times Y$ is a connected component, then $C=A\times B$ where $A,B$ are connected components of $X,Y$.
What I have so far, is that any continuous $f:\;C\to\{0,1\}$ must be constant - so pick $f_A:\;A\to\{0,1\}$ and $f_B:\;B\to\{0,1\}$ both continuous. What I'd like to do is somehow compose them to get a continuous map $F:A\times B\to\{0,1\}$ which will be constant, and then argue that $f_A,f_B$ must have been constant, so $A,B$ connected (components?).
First, would this idea work? If so, any ideas on how I can mix up $f_A,\;f_B?$
|
One can indeed show that $A, B$ connected implies $A \times B$ connected. There are several answers on this site that address this.
So if $C$ is a connected component of $X \times Y$, consider the continuous projections $p_X,p_Y$ onto the two factors. Continuous images of connected sets are connected, so $A = p_X[C]$ and $B = p_Y[C]$ are connected, and clearly $C \subseteq A \times B$. The latter set is connected by the first lemma, and $C$ is a maximal connected set (the definition of a connected component), hence equality $C = A \times B$ has been shown.
$A$ must be a component of $X$, or else there would be strictly larger connected $A \subset A' \subseteq X$, but then $A' \times B$ would also be connected and bigger than $C$, which cannot be by maximality of $C = A \times B$. Similar reasoning applies to $B$. So both $A$ and $B$ are components of $X$ resp. $Y$.
The proof of the first mentioned lemma could proceed by starting with a continuous function $f: A \times B \rightarrow \{0,1\}$ and showing it is constant, using that $f$ restricted to all sets of the form $\{x\} \times B$ and $A \times \{y\}$ are constant (these sets are connected, being homeomorphic to $B$ resp. $A$, which are connected).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1782999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Inverse of $\tan^{2}\theta$? I re-arranged: $$3\tan^{2} x -1=0$$ to get $\tan^{2}\theta = \frac{1}3$. I noticed the inverse of the $cos, sin$ and $tan$ functions are written as $\cos^{-1}\theta, \sin^{-1}\theta$ and $\tan^{-1}\theta$ respectively, does this mean the inverse of $\tan^{2}\theta$ would equal $\tan^{(2-1=1)}\theta = \tan\theta$ ? Also is it referred to Arc-$function$ or the inverse of the function, I've heard they're two different things but the distinction is ambiguous to me.
|
REMEMBER: $\tan^2 x$ is shorthand for $(\tan(x))^2$, whereas $\tan^{-1}x$ (also written $\arctan x$) denotes the inverse function. The $-1$ there is not an exponent, so there is no rule like $\tan^{2-1}\theta = \tan\theta$; "arctan" and "the inverse of $\tan$" are the same thing.
It's easier than it seems: take square roots of both sides, so $\tan(x) = \pm\frac{1}{\sqrt3}$.
Now apply the inverse tangent: $\tan^{-1}\left(\frac{1}{\sqrt{3}}\right)$
gives $\theta = 30^\circ$, the principal value (closest to the origin); there are infinitely many other solutions, found by adding or subtracting multiples of $180^\circ$, and likewise for $\tan(x) = -\frac{1}{\sqrt3}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Use direct proof to prove: If $A \cap B = A \cap C$ and $A \cup B = A \cup C$, then $B = C$ I'm interested in knowing if the method I used is correct. I've been teaching myself proofs lately and I am having difficulties with how to approach a problem so any general tips would be awesome as well! Here is what I have:
Assume $A \cap B = A \cap C$ and $A \cup B = A \cup C$.
Let $x \in B$.
Let $x \in A$.
Then, $x \in A \cap B$.
Since $A \cap B = A \cap C$, $x$ must also be an element of $C$.
That is, $x \in C$.
Using the same thinking we can prove the $A \cup B = A \cup C$ case. Ie.
Assume $A \cap B = A \cap C$ and $A \cup B = A \cup C$.
Let $x \in B$.
Let $x \in A$.
Then, $x \in A \cap B$.
Since $A \cup B = A \cup C$, $x$ must also be an element of $C$.
That is, $x \in C$.
Finally, we see that for any $x \in B$, it follows that $x \in C$ as well. We can show that the converse is true by letting $x \in C$. Thus, $B = C$.
|
I believe the proof should go like this:
Assume $x\in B$. Then $x\in A \cup B=A\cup C$. If $x\in C$ then good. If $x\in A$ then it is in $A\cap B=A\cap C$ so it is in $C$.
Then assume $x\in C$. By a symmetric argument to the one just given, it is in $B$.
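The implication can also be checked exhaustively on a small universe (a brute-force sketch I'm adding, not part of the proof): over all $8^3$ triples of subsets of $\{0,1,2\}$, the two hypotheses always force $B = C$.

```python
from itertools import combinations

def subsets(s):
    # all subsets of s, as sets
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U = subsets({0, 1, 2})  # the 8 subsets of {0, 1, 2}
ok = all(B == C
         for A in U for B in U for C in U
         if A & B == A & C and A | B == A | C)
print(ok)  # True
```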
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How do I get the goal $(A \land B) \lor (A \land C)$ from the premises $A \land (B \lor C)$? (Using Fitch) I have
$$\begin{array} {r|c:l}
1. & A \land (B \lor C)
\\ 2. & A \land B
\\ 3. & A & \land \textsf{ Elim } 2
\\ 4. & B & \land \textsf{ Elim } 2
\\ 5. & A \land B & \land \textsf{ Intro } 4, 3
\\[2ex]\quad 6. & A \land C
\\ 7. & A & \lor \textsf{ Elim } 6
\\ 8. & C & \land \textsf{ Elim } 6
\\ 9. & A \land C & \land \textsf{ Intro } 8, 7
\end{array}$$
where do I go from here?
How do I get $(A \land B) \lor (A \land C)$?
thanks
|
Hint 1: If the only assumption you are given is $A \land B$, can you prove the conclusion? What about only being given $A \land C$?
Hint 2: Combine the two proofs above with one more step from the Fitch rules.
Long version: Looking at the last step,
$$(A \land B) \lor (A \land C)$$
it seems like the final step might be something like $\lor-\textsf{Intro}$. But using $\lor-\textsf{Intro}$ would require first establishing either $(A \land B)$ or $(A \land C)$, and neither of those follows from the assumption $A \land (B \lor C)$ on its own.
So looking through the Fitch rules, most of them either produce a theorem of the wrong shape or would require proving something unprovable. One candidate stands out as the possible last step of the proof: $\lor-\textsf{Elim}$:
$$\begin{array} {c}
\begin{array} {c|c|c}
& x & y \\
x \lor y & \vdots & \vdots \\
& z & z
\end{array}
\\ \hline
z
\end{array}$$
because one of the assumptions is a $\lor$ type theorem, and the conclusion does follow from either $B$ or from $C$ (in other words, if we had been asked to prove the conclusion from $A \land X$, it would be true whether $X$ was $B$ or $C$). So build the proof using $x = B$, $y = C$, $z = (A \land B) \lor (A \land C)$ :
$$\begin{array} {r|l|l}
%
(1) & A \land (B \lor C) & \textsf{Assumption} \\
%
\\
%
(2) & A & \land-\textsf{Elim of } 1 \\
%
(3) & B \lor C & \land-\textsf{Elim of } 1 \\
%
\\
%
(4) & B & \textsf{Assumption} \\
%
(5) & \quad A \land B & \land-\textsf{Intro of } 2 ,~ 4 \\
%
(6) & \quad (A \land B) \lor (A \land C) & \lor-\textsf{Intro of } 5 \\
%
\\
%
(7) & C & \textsf{Assumption} \\
%
(8) & \quad A \land C & \land-\textsf{Intro of } 2 ,~ 7 \\
%
(9) & \quad (A \land B) \lor (A \land C) & \lor-\textsf{Intro of } 8 \\
%
\\
%
(10) & (A \land B) \lor (A \land C) & \lor-\textsf{Elim of } 3 ,~ 4 \to 6 ,~ 7 \to 9
%
\end{array}$$
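Since the derivation is purely propositional, it can be double-checked by brute force over all truth assignments (a sketch I'm adding, separate from the Fitch proof itself):

```python
from itertools import product

# check A ∧ (B ∨ C)  ⟹  (A ∧ B) ∨ (A ∧ C) for every truth assignment
valid = all((a and b) or (a and c)
            for a, b, c in product([False, True], repeat=3)
            if a and (b or c))
print(valid)  # True
```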
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
All subgroups normal $\implies$ abelian group This is, I think, an easy problem; I'm just not seeing the trick. How can I show whether or not the statement is true?
All subgroups of a group are normal$\implies$ the group is an abelian group?
I have been able to show the other way round.
|
This is actually not true. A group for which all subgroups are normal is called a Dedekind group, and non-abelian ones are called "Hamiltonian". The smallest example is the quaternion group $Q_8$. See this MO discussion for more info.
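As a concrete check (my own sketch, not part of the original answer), one can verify by brute force that $Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$ is non-abelian yet has only normal subgroups:

```python
from itertools import product

# quaternion units: each element is (sign, unit) with unit in {'1','i','j','k'}
TABLE = {('i', 'j'): (1, 'k'), ('j', 'i'): (-1, 'k'),
         ('j', 'k'): (1, 'i'), ('k', 'j'): (-1, 'i'),
         ('k', 'i'): (1, 'j'), ('i', 'k'): (-1, 'j')}

def mul(x, y):
    (s1, a), (s2, b) = x, y
    if a == '1':
        return (s1 * s2, b)
    if b == '1':
        return (s1 * s2, a)
    if a == b:                       # i^2 = j^2 = k^2 = -1
        return (-s1 * s2, '1')
    s, c = TABLE[(a, b)]
    return (s1 * s2 * s, c)

def inv(x):
    s, a = x
    return (s, a) if a == '1' else (-s, a)   # q^(-1) = -q for q = ±i, ±j, ±k

Q8 = [(s, a) for s in (1, -1) for a in '1ijk']

# in a finite group, a subset is a subgroup iff it contains 1 and is closed
subgroups = [H for bits in product([0, 1], repeat=8)
             for H in [[g for g, b in zip(Q8, bits) if b]]
             if (1, '1') in H and all(mul(x, y) in H for x in H for y in H)]

nonabelian = any(mul(x, y) != mul(y, x) for x in Q8 for y in Q8)
all_normal = all(mul(mul(g, h), inv(g)) in H
                 for H in subgroups for g in Q8 for h in H)
print(len(subgroups), nonabelian, all_normal)  # 6 True True
```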
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
Resolving $\sec{x}\left(\sin^3x + \sin x \cos^2x\right)=\tan{x}$ Going steadily through my book, I found this exercise to resolve
$$ \sec{x}\left(\sin^3x + \sin x \cos^2x\right)=\tan{x}$$
Here's how I resolve it ($LHS$), and again bear with me, as I am truly reverting to a feeling of vulnerability, like a child actually
As $\sec x$ is equal to $\frac{1}{\cos x}$
That leads us to this
$$\frac{(\sin^3x+\sin x\cos^2x)}{\cos x}$$
I'm factorizing one $\sin x$
$$\frac{\sin x(\sin^2x+\cos^2x)}{\cos x} = \frac{\sin x(1)}{\cos x} = \tan x$$
That seems to work, otherwise I completely messed this up
Reading the book's solution, I have something different...
$$\begin{align*}
LHS&=\frac{\sin^3x}{\cos x}+ \sin x \cos x \\[4pt]
&=\frac{\sin x}{\cos x}-\frac{\sin x\cos^2x}{\cos x}+\sin x\cos x\\[4pt]
&= \tan x\end{align*}$$
What did I miss?
|
You missed nothing. The book is doing the same thing you did, only in a more cumbersome way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate $\int\frac{\sqrt{x^2+2x-3}}{x+1}d\,x$ by trig substitution I am preparing for an exam and found this integral in a previous test. Did I do it correctly?
My attempt.
$$
\int\frac{\sqrt{x^2+2x-3}}{x+1}\,dx
$$
Complete the square of $x^2+2x-3$; I changed the integral to
$$
\int\frac{\sqrt{(x-1)^2-4}}{x+1}\,dx
$$
then set $u=x+1$ to get
$$
\int\frac{\sqrt{u^2-4}}{u}\,dx
$$
Using the triangle, $2\sec\theta=u$ and $du=2\sec\theta\tan\theta d\theta$
$$
\int\frac{\sqrt{(2\sec\theta)^2-2^2}}{2\sec\theta}2\sec\theta \tan\theta\, d\theta
$$
This I simplified to
$$
2\int\tan^2\theta\, d\theta = 2\int\sec^2\theta-1\,d\theta
=2[\tan\theta-\theta]+C
$$
Back substitute
$$
\theta=\tan^{-1}\frac{\sqrt{u^2-4}}{2}
$$
and
$$
\tan\theta=\frac{\sqrt{u^2-4}}{2}
$$
Back substitute $u=x+1$
$$
\int\frac{\sqrt{x^2+2x-3}}{x+1}\,dx=
\sqrt{(x-1)^2-4}-\tan^{-1}{\sqrt{(x-1)^2-4}}+C
$$
|
You've made a mistake. I hope you can find it using my answer:
$$\int\frac{\sqrt{x^2+2x-3}}{x+1}\space\text{d}x=\int\frac{\sqrt{(x+1)^2-4}}{x+1}\space\text{d}x=$$
Substitute $u=x+1$ and $\text{d}u=\text{d}x$:
$$\int\frac{\sqrt{u^2-4}}{u}\space\text{d}u=$$
Substitute $u=2\sec(s)$ and $\text{d}u=2\tan(s)\sec(s)\space\text{d}s$.
We get that $\sqrt{u^2-4}=\sqrt{4\sec^2(s)-4}=2\tan(s)$ and $s=\text{arcsec}\left(\frac{u}{2}\right)$:
$$2\int\tan^2(s)\space\text{d}s=2\int\left[\sec^2(s)-1\right]\space\text{d}s=2\left[\int\sec^2(s)\space\text{d}s-\int1\space\text{d}s\right]$$
Notice now, the integral of $\sec^2(s)$ is equal to $\tan(s)$ and the integral of $1$ is just $s$.
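Completing these steps gives $2[\tan(s)-s]=\sqrt{(x+1)^2-4}-2\,\operatorname{arcsec}\left(\frac{x+1}{2}\right)+C$; a finite-difference sketch (my addition) confirms that this differentiates back to the integrand for $x>1$:

```python
import math

def integrand(x):
    return math.sqrt(x * x + 2 * x - 3) / (x + 1)

def antiderivative(x):
    u = x + 1
    # 2[tan(s) - s] with tan(s) = sqrt(u^2 - 4)/2 and s = arcsec(u/2) = arccos(2/u)
    return math.sqrt(u * u - 4) - 2 * math.acos(2 / u)

x, h = 2.0, 1e-5
derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
print(derivative, integrand(x))  # both ≈ 0.745 (= sqrt(5)/3)
```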
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Alternative Derivation of Recurrence Relation for Bessel Functions of the First Kind How can the recurrence relation
$J^{'}_n(x) = \frac{1}{2} [J_{n-1}(x) - J_{n+1}(x)]$
be derived directly from the following?
$J_n(x) = \frac{1}{\pi} \int_0^\pi \cos(n\theta - x \sin\theta) \text{d} \theta$
|
Hint: $J'_n(x)={1\over\pi}\int_0^{\pi}\sin(\theta)\sin(n\theta-x\sin\theta)d\theta$
Hint: use $\sin x\sin y ={1\over 2}(\cos(x-y)-\cos(x+y))$
$={1\over \pi}\int_0^{\pi}{1\over 2}(\cos(-\theta+n\theta-x\sin\theta)-\cos(\theta+n\theta-x\sin\theta))d\theta$
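The resulting recurrence can be checked numerically straight from the integral representation (a stdlib sketch of my own, using composite Simpson quadrature and a central difference):

```python
import math

def J(n, x, m=2000):
    # J_n(x) = (1/pi) ∫_0^pi cos(n*t - x*sin(t)) dt, composite Simpson's rule
    h = math.pi / m
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = f(0) + f(math.pi)
    s += 4 * sum(f(i * h) for i in range(1, m, 2))
    s += 2 * sum(f(i * h) for i in range(2, m, 2))
    return s * h / (3 * math.pi)

x, h = 1.0, 1e-5
lhs = (J(1, x + h) - J(1, x - h)) / (2 * h)      # J_1'(x) by central difference
rhs = 0.5 * (J(0, x) - J(2, x))
print(lhs, rhs)  # both ≈ 0.3251
```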
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding the fixed subfield (Galois theory) Let's say we are working with the field extension $\mathbb{Q}(\gamma)$, where $\gamma$ is the seventh root of unity. I know my basis for this extension will thus be:
$\{1, \gamma, \gamma^2, \gamma^3, \gamma^4, \gamma^5, \gamma^6 \}$
And the Galois group will consist of automorphisms that take the generator $\gamma$ and map it to its different powers. Since this group has order $6$, I know I will have a subgroup of order $2$ and a subgroup of order $3$. The subgroup of order $2$ will be:
$<\sigma_2>$ where $\sigma_2$ sends $\gamma$ to $\gamma^2$
And the subgroup of order 3 will be:
$<\sigma_6>$ where $\sigma_6$ sends $\gamma$ to $\gamma^6$
My goal is to find the subfield that is fixed by, say, the subgroup $<\sigma_2>$
What I did was take an arbitrary element of $\mathbb{Q}(\gamma)$ and acted on it with $\sigma_2$. The arbitrary element $x$ will just be a linear combination of the basis elements of $\mathbb{Q}(\gamma)$, thus:
$x = a + b\gamma + c\gamma^2 + d\gamma^3 + e\gamma^4 + d\gamma^5 + e\gamma^6$
Then:
$\sigma_2(x)$ = $\sigma_2(a) + \sigma_2(b\gamma) + \sigma_2(c\gamma^2) + \sigma_2(d\gamma^3) + \sigma_2(e\gamma^4) + \sigma_2(d\gamma^5) + \sigma_2(e\gamma^6) = a + b\gamma^2 + c\gamma^4 + d\gamma^6 + e\gamma + f\gamma^3 + g\gamma^5$
Comparing coefficients, we see that $a = a$ and $b = e, c = b, d = f, e = c, d = g, e = d$
When we did a similar exercise in class, I remember things simplified much more nicely. From the looks of it, my calculation shows that things get shuffled around, so the only things fixed are the coefficients from $\mathbb{Q}$, but I know that's not right. What am I doing wrong?
|
What you have written implies that $\mathbb{Q}(\gamma)$ has degree $7$ over $\mathbb{Q}$, yet you also say the Galois group has order $6$. What is wrong?
Try writing $\gamma^{6}$ in terms of $1,\gamma,\dots,\gamma^{5}$. Remember the minimal polynomial of $\gamma$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1783926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Given $x$ and $y$, let $m = ax + by$, $n = cx + dy$, where $ad-bc = \pm 1$. Prove that $(m, n) = (x, y)$. This is exercise 10 Chapter 1 the book Introduction
to Analytic
Number Theory by Tom M. Apostol. All letters represent integers, and by $(w,z)=g$ we mean that the greatest common divisor of $w$ and $z$ is $g$. What I tried:
$m = ax + by$ and $n = cx + dy$. By solving for $x$ and $y$, and considering $ad-bc = \pm 1$ I just got that $(x,y) = (dm-bn, an-cm)$. I can't go further, please help!
|
Following my suggestion in the comments, suppose without loss that $(x,y)=1$ (divide both equations by this if necessary). We have $$
\begin{bmatrix} m \\ n \end{bmatrix} = \begin{bmatrix}a & b \\c & d \end{bmatrix}
\begin{bmatrix}x \\ y\end{bmatrix}
$$
By hypothesis the matrix $\begin{bmatrix}a & b \\c & d \end{bmatrix}$ has inverse $\begin{bmatrix}e & f \\g & h \end{bmatrix}$. Hence
$$\begin{bmatrix}e & f \\g & h \end{bmatrix}
\begin{bmatrix} m \\ n \end{bmatrix} =
\begin{bmatrix}x \\ y\end{bmatrix}.
$$
In other words, there are integers $e,f,g,h$ such that $x=em+fn$ and $y=gm+hn$. Now, $(m,n)$ divides both $x,y$ and so $(m,n)|(x,y)$. But $(x,y)=1$ so $(m,n)=1$.
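A small numerical sanity check of this (a Python sketch; the family of unimodular matrices below is an arbitrary illustrative choice, with $a=1$, $d=1+bc$ so that $ad-bc=1$):

```python
import random
from math import gcd

random.seed(0)
for _ in range(100):
    x, y = random.randint(-50, 50), random.randint(-50, 50)
    if x == 0 and y == 0:
        continue
    # build a, b, c, d with ad - bc = 1
    b, c = random.randint(-5, 5), random.randint(-5, 5)
    a, d = 1, 1 + b * c
    m, n = a * x + b * y, c * x + d * y
    assert gcd(m, n) == gcd(x, y)
print("gcd preserved in all trials")
```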
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
In mathematics, what is an $N \times N \times N$ matrix? In mathematics, what is an $N \times N \times N$ matrix? I think this is a tensor but definitions of tensors that I have read are so overly complicated and verbose that I have trouble understanding them.
|
It's a tensor. But so are ordinary numbers, vectors, and matrices, which can be considered $1\times 1$, $1\times n$, and $m\times n$ tensors respectively. All data structures of this form ($m\times n\times \cdots$) are what are called tensors.
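In code such an object is simply a three-dimensional array; a short NumPy sketch (the array contents are arbitrary):

```python
import numpy as np

N = 3
T = np.arange(N**3).reshape(N, N, N)  # an N x N x N array ("order-3 tensor")

print(T.ndim)      # 3 -- three axes, versus 2 for a matrix
print(T.shape)     # (3, 3, 3)
print(T[1, 2, 0])  # three indices are needed to pick out a single entry
```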
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
}
|
Find the rank and nullity of the following matrix Find the rank and the nullity of the following linear map from $\mathbb{R}^{4}$ to $\mathbb{R}^{3}$
$\left(x,y,z,t\right)\rightarrow(x-t,z-y,x-2y+2z-t)$
I understand how to find the rank and nullity, since the nullity is the dimension of the null space and the rank is the dimension of the column space. But I am having difficulty putting it into a matrix to solve.
And can someone also explain why we can take a linear map and essentially say that it is the "same" structurally as the corresponding matrix. How do isomorphisms link into this idea?
From this, how would you know if it is onto?
|
You can write the transformation as
$$T(\mathbf{x})=\mathbf{A}\mathbf{x}=\begin{bmatrix}1&0&0&-1\\0&-1&1&0\\1&-2&2&-1\end{bmatrix}\begin{bmatrix}x\\y\\z\\t\end{bmatrix}$$
where $\mathbf{A}$ is the transformation matrix. This matrix acts on a given vector $\mathbf{x}$ to give a transformed vector. If you carry out the matrix multiplication, you end up with
$$\begin{bmatrix}1&0&0&-1\\0&-1&1&0\\1&-2&2&-1\end{bmatrix}\begin{bmatrix}x\\y\\z\\t\end{bmatrix}=\begin{bmatrix}x-t\\-y+z\\x-2y+2z-t\end{bmatrix}$$
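From this matrix the rank can be read off (row 3 is row 1 plus twice row 2); a NumPy check (sketch):

```python
import numpy as np

A = np.array([[1, 0, 0, -1],
              [0, -1, 1, 0],
              [1, -2, 2, -1]])

rank = np.linalg.matrix_rank(A)  # row 3 = row 1 + 2 * row 2, so rank 2
nullity = A.shape[1] - rank      # rank-nullity: 4 = rank + nullity

print(rank, nullity)  # 2 2
# rank < 3 = dim(R^3), so the map is not onto
```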
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to find the probability of elementary events of a random experiment when they are not equally likely? Consider this question
Consider the experiment of tossing a coin. If the coin shows head, toss it
again but if it shows tail, then throw a die. Find the conditional probability of the event that ‘the die shows a number greater than 4’ given that ‘there is at least one tail’.
Now we know the sample space S of this experiment is S = {(H,H), (H,T), (T,1), (T,2), (T,3), (T,4), (T,5), (T,6)}
where (H, H) denotes that both the tosses result into
head and (T, i) denote the first toss result into a tail and
the number i appeared on the die for i = 1,2,3,4,5,6.
My book tells that all outcomes of this experiment are not equally likely and it assigned the probabilities to the 8 elementary
events
(H, H), (H, T), (T, 1), (T, 2), (T, 3) (T, 4), (T, 5), (T, 6)
as $\frac14 $, $\frac14 $, ${1 \over 12}$, ${1 \over 12}$,${1 \over 12}$, ${1 \over 12}$, ${1 \over 12}$, ${1 \over 12}$ respectively.
My question is why does it consider outcomes of this experiment to not be equally likely and if say they are not equally likely then why does it assign specifically these probabilities to the elementary events?
And why don't we solve the following, similar question in terms of elementary events:
Consider the experiment of throwing a die; if a multiple of 3 comes up, throw the die again, and if any other number comes up, toss a coin. Find $P(E\mid F)$ where $E$: "the coin shows a tail" and $F$: "at least one die shows a 3".
|
Let's look at just one elementary event to get the idea: what is the probability of $(T,1)$?
Well, first you have to flip tails -- that happens with probability $\frac12$. Then, given that you did, you have to roll a 1; that has probability $\frac16$. So the probability of the outcome $(T,1)$ is
$$ \frac12 \cdot \frac16 = \frac1{12}$$
One trap to avoid in any probability class is stubbornly assigning equal probabilities to all outcomes of an experiment without considering how those outcomes occur. As a more familiar example: flip ten honest coins and count the number of heads. The probability of six heads and four tails is much greater than the probability of nine heads and one tail.
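These assigned probabilities can also be checked by simulation; a Python sketch of the experiment:

```python
import random

random.seed(1)
trials = 200_000
counts = {}
for _ in range(trials):
    if random.random() < 0.5:        # first toss: head -> toss again
        outcome = ("H", "H") if random.random() < 0.5 else ("H", "T")
    else:                            # first toss: tail -> throw a die
        outcome = ("T", random.randint(1, 6))
    counts[outcome] = counts.get(outcome, 0) + 1

print(counts[("H", "H")] / trials)  # ~ 1/4
print(counts[("T", 1)] / trials)    # ~ 1/12
```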
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Integral of Wiener Squared process I don't have a background of stochastic calculus.
It is known fact that definite integral of standard Wiener process from $0$ to $t$ results in another Gaussian process with slice distribution that is normal distributed with mean equal to $0$ and variance $\frac{T^3}{3}$ i-e
$$ \int_0^{t} W_s ds \sim \mathcal{N}(0,\frac{t^3}{3}) $$
Question:
What if we square the standard Wiener process and then integrate, i.e. $$ \int_0^{t} W_s^2 ds \sim ? $$
Would that be scaled chi-square distributed?
|
That should not be true.
The problem is that you're not considering the square of the stochastic integral,
i.e. the random variable $J= \left(\int_0^t {W_s}\, ds\right)^2$, which would indeed be the square of a normally distributed random variable, but rather the r.v. $S= \int_0^t {W_s}^2\, ds$.
A note in the direction of finding a solution would be a discretization attempt, that is, defining an equally spaced partition $\Pi_{[0,t]}=\{0,t_1,\dots,t_i,\dots,t_{n^2}=t\}$ of $[0,t]$ and considering the limit as the mesh of the partition goes to $0$ (here $\Delta t=t/n^2$):
$$
S=\lim_{|\Pi| \to 0} \Delta t\sum_{i \le n^2} W_{t_i}^2
$$
Here the increments $W_{t_{i+1}}-W_{t_i}$ are independent, and the sum looks similar to result III. in this article by P. Erdös and M. Kac. Also check here.
Using Ito's formula on $f(x)=x^4$, we get for $S$ the identity
$$
S=\int_0^t W^2_s\, ds =\frac{1}{6}W_t^4-\frac{2}{3}\int_0^t W_s^3\,dW_s
$$
but I'm not sure this tells much about the r.v.
It would also be of interest to compute the moments of this r.v.
Using the Ito Isometry:
$$\mathbb{E}[S]=\mathbb{E}\left[ \int_0^t {W_s}^2 ds\right] =
\mathbb{E}\left[\left( \int_0^t W_s dW_s\right)^2\right]=\\
\mathbb{E}\left[\left( \frac{1}{2} W_t^2-\frac{1}{2}t\right)^2\right]=
\mathbb{E}\left[ \frac{1}{4} W_t^4+\frac{1}{4}t^2-\frac{1}{2}W_t^2t\right]=
\frac{3}{4}t^2+\frac{1}{4}t^2-\frac{1}{2}t^2=\frac{t^2}{2}$$
where in the last step I used $\mathbb{E}\left[W_t^4\right]=3t^2$, which follows from $\mathbb{E}[Z^4]=3\sigma^4$ for a normal r.v. $Z$ with variance $\sigma^2$ (here $W_t \sim \mathcal{N}(0,t)$).
$$
\mathrm{Var}[S]=\mathrm{Var}\left[ \int_0^t {W_s}^2 ds\right] = \mathbb{E}\left[\left( \int_0^t {W_s}^2 ds\right)^2\right] - \left(\mathbb{E}\left[ \int_0^t {W_s}^2 ds\right]\right)^2=\\
\mathbb{E}\left[\left( \int_0^t {W_s}^2 ds\right)^2\right] - \frac{t^4}{4}
$$
To evaluate this last integral we can note that through Ito's formula
$$
6\int_0^t W_s^2\,ds=W_t^4-4\int_0^t W_s^3\,dW_s
$$
and $$\mathrm{Var}\left[ \int_0^t {W_s}^2 ds\right] =\frac{1}{36}\mathbb{E}\left[\left(W_t^4-4\int_0^t W_s^3\,dW_s\right)^2\right]-\frac{t^4}{4}$$
but here the computation gets more complicated.
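At least the mean $\mathbb{E}[S]=t^2/2$ is easy to check by Monte Carlo on discretized Brownian paths (a NumPy sketch; the step and path counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 300, 10_000
dt = t / n_steps

# simulate Brownian paths and approximate S = int_0^t W_s^2 ds by a Riemann sum
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
S = (W**2).sum(axis=1) * dt

print(S.mean())  # ~ t^2 / 2 = 0.5
```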
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Show that $f(x)=\frac{x}{1+|x|}$ is uniformly continuous. I tried to use the definition and arrived this far:
$|f(x)-f(y)|=\left|\frac{x}{1+|x|}-\frac{y}{1+|y|}\right|=\frac{|x-y+x|y|-y|x||}{(1+|x|)(1+|y|)}\leq|x-y+x|y|-y|x||$.
Any suggestion for ending the proof?
I also tried to prove that $\frac{x}{1+x}$ is uniformly continuous on $[0,\infty[$ and that $\frac{x}{1-x}$ is uniformly continuous on $]-\infty,0]$, but I wonder if we can use the definition with the function $f(x)=\frac{x}{1+|x|}$ itself.
|
Here is another approach: Your function $f$ is differentiable on all of ${\mathbb R}$, with $$f'(x)={1\over\bigl(1+|x|\bigr)^2}\qquad(-\infty<x<\infty)\ .$$
As $|f'(x)|\leq1$ for all $x$, the MVT implies that $|f(x)-f(y)|\leq|x-y|$; hence $f$ is even Lipschitz continuous on ${\mathbb R}$.
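A quick numerical check of the Lipschitz bound $|f(x)-f(y)|\le |x-y|$ over a grid of sample points (Python sketch):

```python
f = lambda x: x / (1 + abs(x))

pts = [i / 10 for i in range(-100, 101)]  # grid on [-10, 10]
worst = max(abs(f(x) - f(y)) / abs(x - y)
            for x in pts for y in pts if x != y)
print(worst)  # stays <= 1, consistent with |f'| <= 1 everywhere
```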
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What's the difference between a series and a sequence? I was looking at a question earlier that involved sequences and found out that the sequence converged to $0$ but the series diverged to infinity. How is that possible? For example, the sequence was $a_n = \frac{1}{x_n}$
The question was here: Convergence of $x_n=f\left(\frac{1}{n}\right), n\geq 1$
|
A series is in some sense a type of sequence. However, it is a sequence of partial sums.
For example, if we take some sequence $\{a_n\}_{n\geq 1}$, then we can in turn retrieve a series from this sequence by considering the following partial sums:
$S_N=a_1+...+a_N=\sum_{n=1}^{N}a_n$
Then if we consider the sequence $\{S_N\}_{N \geq 1}$, we have actually defined a series!
$S_1=a_1$
$S_2=a_1+a_2$
and so on.
Thus, the difference is the following:
Consider the sequence $\{a_n\}_{n \geq 1}$ defined by $a_n=\frac{1}{n}$.
It is intuitively clear (and if not, use the Archimedean property) that $\lim_{n \to \infty} \frac{1}{n}=0$.
Now, we can consider the sequence of partial sums:
Let $\{S_n\}_{n \geq 1}$ be defined by $S_n=1+\frac{1}{2}+...+\frac{1}{n}$
Then $$\lim_{n \to \infty} S_n=\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{k}=\sum_{k=1}^{\infty}\frac{1}{k}$$
Which is called the harmonic series, and it diverges.
We can show this by either the integral test, or just note:
$\begin{align} &1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+...\\
>&1+\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+...\\
=&1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+...
\end{align}$
Which very clearly diverges.
As an aside, it should be very clear that $\{S_n\}$ cannot converge if $\{a_n\}$ does not converge to $0$, but the converse is not true.
If you want an example where they both converge, just take $\{a_n\}$ to be defined by $a_n=\frac{1}{2^n}$. Then you can consider the sequence of partial sums for $a_n$ (and hence, define a series.)
Then $\{a_n\} \to 0$ as $n \to \infty$. Yet we also have that
$$\lim_{n \to \infty} S_n=\lim_{n \to \infty} \sum_{k=1}^{n}\frac{1}{2^k}=\sum_{k=1}^{\infty}\frac{1}{2^k}=\frac{1/2}{1-1/2}=1$$
To see a derivation of the penultimate equality, you can look further into what are called geometric series
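The contrast is easy to see numerically; a Python sketch of the two sequences of partial sums:

```python
harmonic = geometric = 0.0
for n in range(1, 100_001):
    harmonic += 1 / n        # terms -> 0, yet the partial sums keep growing
    geometric += 0.5**n      # terms -> 0, and the partial sums converge to 1

print(harmonic)   # ~ 12.09 after 10^5 terms, and still growing (like ln n)
print(geometric)  # ~ 1.0
```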
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Evaluation of $\sin \frac{\pi}{7}\cdot \sin \frac{2\pi}{7}\cdot \sin \frac{3\pi}{7}$
Evaluation of $$\sin \frac{\pi}{7}\cdot \sin \frac{2\pi}{7}\cdot \sin \frac{3\pi}{7} = $$
$\bf{My\; Try::}$ I have solved it using the direct formula:
$$\sin \frac{\pi}{n}\cdot \sin \frac{2\pi}{n}\cdot......\sin \frac{(n-1)\pi}{n} = \frac{n}{2^{n-1}}$$
Now Put $n=7\;,$ We get
$$\sin \frac{\pi}{7}\cdot \sin \frac{2\pi}{7}\cdot \sin \frac{3\pi}{7}\cdot \sin \frac{4\pi}{7}\cdot \sin \frac{5\pi}{7}\cdot \sin \frac{6\pi}{7}=\frac{7}{2^{7-1}}$$
So $$\sin \frac{\pi}{7}\cdot \sin \frac{2\pi}{7}\cdot \sin \frac{3\pi}{7} =\frac{\sqrt{7}}{8}$$
Now my question is: how can we solve it without using the direct formula? Help me.
Thanks
|
Using $2\sin a\sin b=\cos(a-b)-\cos(a+b)$ and $2\sin a\cos b=\sin(a+b)+\sin(a-b)$, write
$$\sin \frac{\pi}7\cdot\sin \frac{2\pi}7\cdot \sin \frac{3\pi}7 = \frac12\left(\cos\frac{\pi}7-\cos\frac{3\pi}7\right)\sin\frac{3\pi}7=\frac14\left(\sin\frac{4\pi}7+\sin\frac{2\pi}7-\sin\frac{\pi}7\right)\\=\frac14\left(\sin\frac{2\pi}7+\sin\frac{4\pi}7+\sin\frac{8\pi}7\right)$$
Then have a look at this question: Trigo Problem : Find the value of $\sin\frac{2\pi}{7}+\sin\frac{4\pi}{7}+\sin\frac{8\pi}{7}$
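A quick numeric confirmation of both the closed form and the intermediate identity (Python sketch):

```python
from math import sin, pi, sqrt

product = sin(pi / 7) * sin(2 * pi / 7) * sin(3 * pi / 7)
print(product - sqrt(7) / 8)  # ~ 0

# the identity derived above: product = (1/4)(sin 2pi/7 + sin 4pi/7 + sin 8pi/7)
s = sin(2 * pi / 7) + sin(4 * pi / 7) + sin(8 * pi / 7)
print(4 * product - s)  # ~ 0
```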
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
What is a particular use of Gram-Schmidt orthogonalization? We have a linear space V of $m \times n$ matrices. I know that we can use Gram-Schmidt to construct an orthonormal basis, but the natural basis for this space (where every ij-th element is $1$ and the rest $0$) is just that - every matrix there is orthogonal to the rest, and each norm equals $1$.
Where does the algorithm come into use? Why would somebody go through the trouble of constructing a new basis when the natural one fits the bill?
|
You can apply Gram–Schmidt in order to obtain a decomposition of a matrix $A \in \Re^{n\times m}$, $n>m$, as:
\begin{align}
QR = A \quad Q \in \Re^{n \times n},\; R \in \Re^{n \times m}
\end{align}
where $Q$ is an orthogonal matrix obtained by Gram–Schmidt orthogonalisation and $R$ is an upper triangular matrix with zero rows $r_i$ for $i > m$. In particular, you can use this decomposition in order to solve a minimisation problem of the form:
\begin{align}
\min \limits_{x \in \Re^m} \|Ax-b \|_2 ,\quad b \in \Re^n
\end{align}
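A NumPy sketch of this use (here `numpy.linalg.qr` computes the factorization; classical Gram–Schmidt is one way to obtain it by hand):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))      # n > m: overdetermined system
b = rng.normal(size=6)

Q, R = np.linalg.qr(A)           # A = QR with orthonormal columns in Q
x = np.linalg.solve(R, Q.T @ b)  # minimizes ||Ax - b||_2

# agrees with NumPy's dedicated least-squares solver
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True
```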
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why the quotient space $X/\ker T$ is finite dimensional Let $T:X\rightarrow Y$ be a linear operator between Banach spaces. If $Y$ is finite dimensional, show that $X/\ker T$ is finite dimensional and, moreover, has the same dimension as $Y$.
Any help is appreciated.
|
What you seem to be looking for is a proof of the first isomorphism theorem.
Let $\{k_i\}_{i \in I}$ be a basis of $\ker T$. This extends to a basis $\{k_i\}_{i \in I} \cup \{x_j\}_{j \in J}$ of $X$ (where $\operatorname{span} \{k_i\}_{i \in I} \cap \operatorname{span} \{x_j\}_{j \in J} = \{0\}$). Hence $\{ x_j + \ker T\}_{j \in J}$ is a basis of $X / (\ker T)$. We write $[x_j]$ for the element $x_j + \ker T$ of the quotient space, and we get an induced operator $T' \colon X / (\ker T) \to Y$ via $T'[x_j] = T x_j$. The heart of the proof is the following statement:
Since $\{[x_j]\}_{j \in J}$ is linearly independent in $X / (\ker T)$, so is $\{T'[x_j]\}_{j \in J}$ in $Y$.
Proof: By definition of linear independence, we must show that if any finite linear combination of vectors in $\{T'[x_j]\}_{j \in J}$ is the $0$ vector, then the coefficients are all $0$. We have $a_1 T'[x_{j_1}] + \cdots + a_n T'[x_{j_n}] = 0 \implies T\left( a_1 x_{j_1} + \cdots + a_n x_{j_n} \right) = 0$, which implies that $a_1 x_{j_1} + \cdots + a_n x_{j_n} \in \ker T = \operatorname{span} \{k_i\}_{i \in I}$. Since $\operatorname{span} \{k_i\}_{i \in I} \cap \operatorname{span} \{x_j\}_{j \in J} = \{0\}$, we must have $a_1 = \cdots = a_n = 0$, completing the proof.
Since $Y$ is finite dimensional, the above statement implies that $J$ is finite, and hence $X / (\ker T)$ is finite dimensional.
As @Mathematician42 says, it's not necessarily the case that $\dim\left( X / (\ker T) \right) = \dim Y$; we could have some basis vector of $Y$ which is never hit by $T$ (or $T'$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1784940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Let $(X, \mathcal{T}_{\text{cocountable}})$ be an infinite set with the cocountable topology; show that the topology is closed under countable intersections. Also give an example to show that $\mathcal{T}_{\text{cocountable}}$ need not be closed under arbitrary intersections.
I was looking for some feedback on my proof:
$X\setminus\bigcap_{\alpha \in I} A_{\alpha}$, where $I$ is a countable set, equals $\bigcup_{\alpha\in I}(X\setminus A_{\alpha})$. Each $X\setminus A_{\alpha}$ is countable (taking each $A_{\alpha}$ to be a nonempty open set), the union of countably many countable sets is also countable, and hence $\mathcal{T}_{\text{cocountable}}$ is closed under countable intersections.
An example of an arbitrary intersection could be $X \cap \emptyset = \emptyset$ and $X\setminus \emptyset = X$, which is infinite.
|
I concur with the countable intersections; the equivalent formulation, that the closed sets are closed under countable unions, is even easier.
As to arbitrary intersections, your examples make no sense (the results are both in the topology?).
Instead consider $X_x = X\setminus \{x\}$ for all $x \in X$, all of which are open. Suppose $X$ is uncountable (or else the topology is discrete and closed under all intersections) and write $X$ as a disjoint union $A \cup B$, which are both uncountable. Then $B$ is not open (why?) and $B = \cap_{x \in A} X_x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Basis maps to Linearly Independent Set How do I prove that a linear map f will map a basis to a linearly independent set?
Supposedly this is examinable but damned if it's not on either set of notes.
Thanks!
|
Suppose that $T:V\rightarrow W$ is a injective linear map. Let $\left\{v_1,\dots ,v_n\right\}$ be a basis of $V$ (You can do a similar argument if $V$ is infinite-dimensional). Suppose that $\sum_{i=1}^n\lambda_iT(v_i)=0$, then $T(\sum_{i=1}^n\lambda_iv_i)=0$. Hence $\sum_{i=1}^n\lambda_iv_i\in \text{Ker}(T)$, but since $T$ is injective, $\text{Ker}(T)=\left\{0\right\}$. Hence $\sum_{i=1}^n\lambda_iv_i=0$. Since $\left\{v_1,\dots , v_n\right\}$ is a basis, we need that $\lambda_i=0$ for all $i$. Hence the $T(v_i)$'s are linearly independent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why do equations of two variables specify curves in $\mathbb{R}^2$? To characterize the question more formally: why are all points of the set $\{ (x,y) \mid F(x,y) = 0 \}$ always boundary points (and I believe also never isolated points) in the standard topology of $\mathbb{R}^2$ -- for simplicity's sake, say in the case where $F(x,y) \in \mathbb{R}[x,y]$ (i.e. polynomials in $x,y$ with coefficients in $\mathbb{R}$)?
I imagine there are standard proofs of such a proposition (and certainly more general than this) in algebraic geometry, but I have not studied algebraic geometry, and was wondering if there were any elementary proofs of this, only relying on basic point-set topology, algebra, real analysis, etc... I'm also looking for a relatively intuitive explanation/proof, if possible. Maybe the most intuitive reason why this is true is because of the implicit function theorem, but I wonder if there are other, still intuitive, elementary proofs of this.
|
Suppose $p(x,y)$ is a polynomial that is not identically $0.$ Let $Z$ be the zero set of $p.$ Then $Z$ is closed, hence $Z = \text { int } Z \cup \partial Z.$ Suppose $\text { int } Z $ is nonempty. Then there is an open disc $D(a,r) \subset Z.$ Then for any nonzero $v\in \mathbb R^2,$ the function $p_v(t) = p(a+tv)$ is a one variable polynomial in $t$ that vanishes in a neighborhood of $0.$ Thus $p_v$ vanishes identically. Since this is true for all $v,$ $p \equiv 0,$ contradiction.
This result also holds for any real analytic function $f(x,y)$ on $\mathbb R^2.$ The proof is basically the same. Beyond this, there are plenty of functions in $C^\infty(\mathbb R^2)$ for which the zero set has non-empty interior.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Express last equation of system as sum of multiples of first two equations The question says to 'Express the last equation of each system as a sum of multiples of the first two equations."
System in question being:
$ x_1+x_2+x_3=1 $
$ 2x_1-x_2+3x_3=3 $
$ x_1-2x_2+2x_3=2 $
The question gives a hint saying "Label the equations, use the gaussian algorithm" and the answer is 'Eqn 3 = Eqn 2 - Eqn 1' but short of eye-balling it, I'm not sure how they deduce that after row-reducing to REF.
|
NOTE: $r_i$ is the original $i^{th}$ equation as stated in your question above.
Well, let's go through the process of finding the extended echelon form using Gauss-Jordan elimination. Here's the matrix:
$$\left[\begin{matrix}1 & 1 & 1 & 1 \\ 2 & -1 & 3 & 3\\ 1 & -2 & 2 & 2\end{matrix}\right]\left[\begin{matrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{matrix}\right]$$
First, we subtract the second row by twice the first row and the third row by the first row:
$$\left[\begin{matrix}1 & 1 & 1 & 1 \\ 0 & -3 & 1 & 1 \\ 0 & -3 & 1 & 1\end{matrix}\right]\left[\begin{matrix}1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0 & 1\end{matrix}\right]$$
Now, we subtract the third row by the second row (we don't really care about the first row at this point since we just want to know the numbers in the third row):
$$\left[\begin{matrix}1 & 1 & 1 & 1 \\ 0 & -3 & 1 & 1 \\ 0 & 0 & 0 & 0\end{matrix}\right]\left[\begin{matrix}1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & -1 & 1\end{matrix}\right]$$
Thus, since the third row in the matrix to the left is $\mathbf 0$ and the third row in the matrix to the right is $(1,\,-1,\,1)$, we have that $r_1-r_2+r_3=\mathbf 0$, or that $r_3=r_2-r_1$. Therefore, the third equation is the second equation minus the first.
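The conclusion is easy to verify directly (a NumPy sketch on the augmented rows):

```python
import numpy as np

r1 = np.array([1, 1, 1, 1])    # coefficients and right-hand side of Eqn 1
r2 = np.array([2, -1, 3, 3])
r3 = np.array([1, -2, 2, 2])

print(np.array_equal(r3, r2 - r1))  # True: Eqn 3 = Eqn 2 - Eqn 1
```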
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Bott & Tu Definition: "Types of Forms". In Bott & Tu's well-known book "Differential Forms in Algebraic Topology", they note
-(p34): every form on $\mathbb{R}^n \times \mathbb{R}$ can be decomposed uniquely as a linear combination of two types of forms:
Type 1: $\pi^*(\phi) f(x,t)$
Type 2: $\pi^*(\phi) f(x,t) dt.$
Here $\phi$ is a form on $\mathbb{R}^n.$
Then (on p35) they add :
-If $\{U_{\alpha}\}$ is an atlas for $M$ then $\{ U_{\alpha} \times \mathbb{R} \}$ is an atlas for $M \times \mathbb{R}.$ Again every form on $M \times \mathbb{R}$ is a linear combination of forms of type (1) and type (2).
My question: I agree with the claim on p34. I also agree that, given a form on $M \times \mathbb{R},$ I can split it canonically as $\text{ker}(i^*) \oplus (1- \text{ker}(i^*))$ (where $i: M\to M \times \mathbb{R}$ is the zero section). The two summands will then be $\textbf{locally}$ of the form claimed by Bott and Tu.
However, I don't understand why this decomposition should hold globally, as Bott&Tu seem to be arguing (they repeat the claim on p61).
|
Hint: Take a partition of unity $(f_{\alpha})$ subordinate to $(U_{\alpha})$, and for each form $v$, let $v_{\alpha}$ be the restriction of $v$ to $U_{\alpha}\times\mathbb{R}$; write $v_{\alpha}=v^1_{\alpha}+v^2_{\alpha}$ where $v^1_{\alpha}$ is of type 1 and $v^2_{\alpha}$ is of type 2. Then write $v^1=\sum_{\alpha}f_{\alpha}v^1_{\alpha}$ and $v^2=\sum_{\alpha}f_{\alpha}v^2_{\alpha}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What is the smallest $d$ such that $\int\cdots\int \frac{1}{(1+x_1^2 + \cdots + x_n^2)^d} \, dx_1\cdots dx_n$ converges?
What power $d>0$ is the smallest integer such that in $\mathbb{R}^n$,
$$I(n) =\int_{-\infty}^\infty\int_{-\infty}^\infty \cdots\int_{-\infty}^\infty \frac{1}{(1+x_1^2 + \cdots + x_n^2)^d} \,dx_1\, dx_2 \cdots dx_n < +\infty$$
Hint: Think of the integral in "polar coordinates" where $r$ goes from $0$ to $\infty$, integrate over the sphere of radius $r$.
I have no idea how to start. More specifically, how do you do polar coordinates in $\mathbb{R}^n$? Some calculations show that if $n=1$ then $d=1$ would be good enough. But for $n=2$ and $n=3$, $d=2$ and the integrals would equal $\pi^2$.
|
Think of $dx_1 \cdots dx_n$ as a volume. Given that the integrand has rotational symmetry, partition the space into spherical shells. The integrand is constant on the surface of each shell.
$$
I(n) = \int_0^\infty \frac{1}{(1+ r^2)^d} dV_r
$$
where $dV_r$ denotes the volume of the thin shell spanning the radius range $(r, r+dr)$. The volume of this thin shell scales as $r^{n-1}\, dr$ in the radius.
You should be able to take it from here.
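For instance with $n=2$, $d=2$ (so $2d>n$) the shell picture reduces everything to a one-dimensional integral, which a crude Riemann sum can check (Python sketch; the radial integrand is $2\pi r(1+r^2)^{-2}$ and the exact value is $\pi$):

```python
from math import pi

# n = 2, d = 2: radial form of the integral is int_0^inf 2*pi*r / (1+r^2)^2 dr
dr, R = 1e-3, 100.0
total = sum(2 * pi * (k * dr) / (1 + (k * dr) ** 2) ** 2 * dr
            for k in range(1, int(R / dr)))
print(total)  # ~ pi, confirming convergence when 2d > n
```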
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
If $I$ is finitely generated nilpotent and $R/I^{n-1}$ is noetherian then $R$ is noetherian
If $I$ is a finitely generated ideal of a commutative ring $R$ with $1$ such that $I^n = \{0\}$ and $R/I^{n-1}$ is noetherian, then $R$ is also noetherian.
I don't know what I should do. If I can prove for example that $I^{n-1}$ is finitely generated as $R/I^{n-1}$ module then I can deduce that it is noetherian as $R$-module and then I am done. But I don't know if that is true and how to prove it.
Thanks.
Added: also isn't it possible to prove that $I^{n-1}$ is finitely generated without requiring $I^n = \{0\}$? (just realized that)
I mean I can prove that if $I$ and $J$ are finitely generated ideals then $IJ$ is also finitely generated so by induction if $I_1,...,I_k$ are finitely generated then $I_1...I_k$ is so. So $I^{n-1}$ is finitely generated as $R/I^{n-1}$ module so it is noetherian as $R$-module so $R$ is then noetherian. Is there something wrong with this?
|
Since $I^n=0$ the ideal $I^{n-1}$ is a finitely generated $R/I^{n-1}$-module: it is finitely generated (any power of a finitely generated ideal is finitely generated) and $I^{n-1}\cdot I^{n-1}=0$; see also here. Now use the exact sequence of $R$-modules $$0\to I^{n-1}\to R\to R/I^{n-1}\to 0.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1785889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Interpretation of definitions and logical implication in Calculus - e.g. monotonic strictly increasing function I read definitions in Calculus books that often confuse me from a logical perspective. For example, the definition of a monotonic function, e.g. a strictly increasing function, is defined as follows.
$$
\forall x_1, x_2 \in A ~~~ x_1 < x_2 \Rightarrow f(x_1) < f(x_2) \tag{1}\label{1}
$$
Now let's try to forget the intuitive knowledge of a strictly increasing function. This definition should pinpoint a class of functions that are OK and a class of functions that are KO. The functions that are OK are the ones for which the property \eqref{1} is TRUE for every $(x_1, x_2)$ couple in $A$. I've circled the TRUE cases in the attached truth table (only the $\Rightarrow$ part for now; 1) 2) 3) 4) and a) b) c) d) are labels).
I've been schematic and tried to represent each possible case in the following graphs (the KO cases are marked with an X).
The cases that puzzle me are the ones marked with question marks. For example, the $\delta)$ case is an allowed case since the property $\eqref{1}$ is TRUE. In fact, $x_1 < x_2$ is false and $f(x_1) < f(x_2)$ is true, giving us TRUE for $\Rightarrow$. This is counterintuitive, so either $\eqref{1}$ is wrong or my logic is. Another allowed case is $\theta)$, for which $x_1 < x_2$ is false and $f(x_1) < f(x_2)$ is false (in fact $f(x_1)=f(x_2)$), so $\Rightarrow$ is TRUE.
Admitting for a minute that $\eqref{1}$ may be wrong, I adjusted it as in $\eqref{2}$.
$$
\forall x_1, x_2 \in A ~~~ x_1 < x_2 \Leftrightarrow f(x_1) < f(x_2) \tag{2}\label{2}
$$
This adjustment gets rid of the $\delta$ case but not the $\theta$ one, so I'm logically confused. What is the correct definition of a strictly increasing function if not $\eqref{1}$? How should I interpret definitions like $\eqref{1}$ from a logical point of view?
I've never studied logic before (but sooner or later I will)... for now please help me sleep peacefully tonight :D.
Thanks,
Luca
|
To be a strictly increasing function, every pair $(a,~b)$ must pass the test. The test is that if $a < b$ then $f(a) < f(b)$.
In your plot (d), you've shown a pair that passes the test: the pair $(x_1,~x_2)$. But the plot is clearly not strictly increasing. This suggests that there must be a pair that doesn't pass the test. What about $(x_2,~x_1)$?
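The "every pair must pass the test" reading can be made concrete in a few lines of Python (a sketch over finitely many sample points; names are illustrative):

```python
def strictly_increasing_on(f, pts):
    # definition (1): for every ordered pair, x1 < x2 must imply f(x1) < f(x2);
    # pairs with x1 >= x2 satisfy the implication vacuously
    return all(f(x1) < f(x2) for x1 in pts for x2 in pts if x1 < x2)

pts = [i / 10 for i in range(-20, 21)]
print(strictly_increasing_on(lambda x: x**3, pts))  # True
print(strictly_increasing_on(lambda x: x**2, pts))  # False: -1 < 0 but f(-1) > f(0)
```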
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Alternative formula for sample covariance Is this an equivalent formula for the sample covariance?
$$\frac{1}{n-1}\left(\sum_{i=1}^nx_iy_i -n\overline{x}\overline{y}\right)$$
Thanks
|
It is an equivalent formula to $\frac{1}{n-1}\sum_{i=1}^{n} (x_i-\overline x)(y_i-\overline y)$
Firstly you can multiply out the brackets.
$\sum_{i=1}^{n} (x_i-\overline x)(y_i-\overline y)=\sum_{i=1}^{n}x_iy_i-\sum_{i=1}^{n}x_i\overline y-\sum_{i=1}^{n}\overline x y_i+ \overline x \ \overline y\sum_{i=1}^{n} 1$
$\overline x$ and $\overline y$ are both constants. Thus they can be put in front of the sigma signs.
$=\sum_{i=1}^{n}x_iy_i-\overline y\sum_{i=1}^{n}x_i-\overline x\sum_{i=1}^{n} y_i+\overline x \ \overline y\sum_{i=1}^{n} 1$
*$\overline x=\frac{1}{n}\sum_{i=1}^n x_i\Rightarrow n\cdot \overline x=\sum_{i=1}^n x_i$.
*Similar for $y_i$
$=\sum_{i=1}^{n}x_iy_i-n\cdot \overline y \ \overline x-n \cdot \overline x \ \overline y+\overline x \ \overline y\sum_{i=1}^{n} 1$
*$\sum_{i=1}^n 1=n$
$=\sum_{i=1}^{n}x_iy_i-n\cdot \overline y \ \overline x\underbrace{-n \cdot \overline x \ \overline y+n\cdot \overline x \ \overline y}_{=0}$
Finally we get
$\sum_{i=1}^{n} (x_i-\overline x)(y_i-\overline y)=\sum_{i=1}^{n}x_iy_i-n\cdot \overline y \ \overline x$
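A quick NumPy check (sketch) that the two formulas agree, and match NumPy's own sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2 * x + rng.normal(size=50)
n = len(x)

direct = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)
shortcut = ((x * y).sum() - n * x.mean() * y.mean()) / (n - 1)

print(np.isclose(direct, shortcut))            # True
print(np.isclose(direct, np.cov(x, y)[0, 1]))  # True (np.cov also divides by n-1)
```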
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to expand $x^n$ as $n \to 0$? I am trying to expand $x^n$ in small $n$ using Taylor series.
Using wolfram alpha, I found that it is $1+ n\log(x) + \cdots$
I tried to Taylor expand $x^n$ around $n=0$ but I cannot get this result.
|
Hint. One may recall that
$$
e^z=1+z+\cdots+\frac{z^k}{k!}+\cdots,\quad z \in \mathbb{C},
$$ then, for $x>0$ and for $0<n<1$,
$$
\begin{align}
x^n&=e^{n\ln x}=1+n\ln x+\frac{n^2(\ln x)^2}{2!}+\cdots+\frac{n^k(\ln x)^k}{k!}+\cdots,
\\\\x^n&=1+n\ln x+\cdots+\mathcal{O}\left(n^k\right),
\end{align}
$$ as announced.
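A quick numerical check of the expansion (added here for illustration; the values of $x$ and $n$ are arbitrary):

```python
import math

# Compare x^n with the truncated series 1 + n*ln(x) + (n*ln(x))^2/2 for small n.
def series_approx(x, n, terms=3):
    z = n * math.log(x)
    return sum(z ** k / math.factorial(k) for k in range(terms))

x, n = 7.3, 0.01
exact = x ** n
approx = series_approx(x, n)
print(exact, approx)
```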
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Intuition why Eigenvector of Covariance matrix points into direction of maximum variance In context of principal component analysis, I am using the Eigenvectors of the Covariance matrix to project the data.
I am able to prove that the Eigenvector of the Covariance Matrix is the direction of the greatest variance in the original data. However I am wondering, is there is an intuitive explanation for this fact?
|
Yes, the way I captured this is:
Nature has no metric system in itself, so when you measure something, you're doing it through a super-imposed metric that does not, in principle, have any meaning
However, one could measure things in a "more natural way" taking the distance from the mean divided by the standard deviation, let me explain this to you with an example
Suppose you see a man which is 2.10 meters tall, we all would say that he is a very tall man, not because of the digits "2.10" but because (unconsciously) we know that the average height of a human being is (I'm making this up) 1.80m and the standard deviation is 8cm, so that this individual is "3.75 standard deviations far from the mean"
Now suppose you go to Mars and see an individual which is 6 meters tall, and a scientist tells you that the average height of martians is 5.30 meters, would you conclude that this indidual is "exceptionally tall"? The answer is: it depends on the variability! (i.e. the standard deviation)
So that, one natural way measure things is the so called Mahalanobis distance
Let $\Sigma$ be a positive definite matrix (in our case it will be the covariance matrix), and define $$d(x,\mu)=(x-\mu)^T\Sigma^{-1}(x-\mu)$$
This means that the contour levels (in the Euclidean representation) of the distance of points $X_i$ from their mean $\mu$ are ellipsoids whose axes are the eigenvectors of the matrix $\Sigma$, and the length of each axis is proportional to the eigenvalue associated with that eigenvector
So a larger eigenvalue is associated with a longer axis (in the Euclidean distance!), which means more variability in that direction
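To make this concrete, here is a small pure-Python illustration (added here, not part of the original answer; the data is synthetic): for 2-D data, the variance of projections onto a unit direction is maximised exactly at the leading eigenvector of the sample covariance matrix.

```python
import math, random

random.seed(0)
# Correlated 2-D data: y is roughly 0.8*x plus noise.
xs = [random.gauss(0, 1) for _ in range(2000)]
ys = [0.8 * x + random.gauss(0, 0.3) for x in xs]

def var_along(theta):
    # Sample variance of the data projected onto the unit vector (cos t, sin t).
    c, s = math.cos(theta), math.sin(theta)
    p = [c * x + s * y for x, y in zip(xs, ys)]
    m = sum(p) / len(p)
    return sum((v - m) ** 2 for v in p) / (len(p) - 1)

# Sample covariance matrix entries [[a, c], [c, b]].
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
b = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
c = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

# Angle of the leading eigenvector (closed form for a symmetric 2x2 matrix).
theta_eig = 0.5 * math.atan2(2 * c, a - b)

# Sweep directions in 1-degree steps: none beats the eigenvector direction.
best = max(var_along(k * math.pi / 180) for k in range(180))
print(var_along(theta_eig), best)
```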
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Probability that $A \cup B = S$ and $A \cap B = \varnothing$ Let $S$ be a set containing $n$ elements and suppose we select two subsets $A$ and $B$ at random; then what is the probability that $A \cup B = S$ and $A \cap B = \varnothing$?
My attempt
Total number of cases= $3^n$ as each element in set $S$ has three option: Go to $A$ or $B$ or to neither of $A$ or $B$
For favourable cases: Each element has two options: Either go to $A$ or to $B$ which gives $2^n$ favourable cases.
Is my approach correct?
|
Pick any subset $A$, and there is only one subset $B$, namely $S \setminus A$ which satisfies $A \cup B = S$ and $A \cap B = \emptyset $.
There are $2^n$ subsets to choose from so the probability of selecting such a pair is $1/2^n$.
(Or, $1/(2^n - 1)$ if one constrains that $A \ne B$).
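The count can be verified by brute force for small $n$ (added here as a sanity check):

```python
from itertools import combinations

def prob_partition(n):
    """Return (favourable ordered pairs, total ordered pairs) of subsets of {0,..,n-1}."""
    S = frozenset(range(n))
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    good = sum(1 for A in subsets for B in subsets
               if A | B == S and not (A & B))
    return good, len(subsets) ** 2

for n in range(1, 6):
    g, t = prob_partition(n)
    assert g == 2 ** n and t == 4 ** n   # probability 2^n / 4^n = 1 / 2^n
    print(n, g, t)
```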
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 0
}
|
Estimating the integral of $\frac{x}{\sin (x)}$ How to prove that the integral of $\frac{x}{\sin (x)}$ from $0$ to $\frac{\pi}{2}$ lies between $\frac{\pi}{2}$ and $\frac{\pi^2}{4}$?
|
The function is:
$$f(x)=x\csc x$$
Using the Laurent series of $\csc x$ about $0$, which converges for $x \in (-\pi,0) \cup (0,\pi) \supseteq (0,\frac{\pi}{2}]$:
$$f(x)=x\left(\frac{1}{x}+\frac{1}{6}x+\cdots\right)$$
$$=1+\frac{1}{6}x^2+\cdots$$
In the specified interval,
$$1 < \frac{x}{\sin x}$$
Also the Taylor series expansion shows us that our function will be at maximum for the highest value of $x$ in our interval, that is $x=\frac{\pi}{2} $. So,
$$\frac{\pi}{2} \geq \frac{x}{\sin x}$$
Thus,
$$\int_{0}^{ \frac{\pi}{2}} 1\,dx < \int_{0}^{ \frac{\pi}{2}} \frac{x}{\sin x}\,dx \leq \int_{0}^{ \frac{\pi}{2}} \frac{\pi}{2}\,dx$$
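The two bounds can be confirmed numerically (added here as a sanity check, not part of the original answer) with a simple midpoint Riemann sum:

```python
import math

def midpoint_integral(f, a, b, n=100000):
    # Midpoint rule; avoids evaluating f at the endpoint x = 0.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

I = midpoint_integral(lambda x: x / math.sin(x), 0.0, math.pi / 2)
print(I, math.pi / 2, math.pi ** 2 / 4)
```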
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Solution to a simple system of quadratic equations I am hoping to find a closed-form solution to the following system of $n$ quadratic equations:
$$ x_j^2 = \sum_{i=1}^n B_{ij}x_i $$
for $j\in\{1,\dots,n\}$, where $B_{ij}\geq 0$. There is a trivial solution at $x=0$ but I am looking for others. Any help would be much appreciated.
|
Let $B\in\mathbb{R}^{n\times n}$ be the matrix with entries $B_{ij}$. Let $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$, define $y=(x_1^2,\ldots,x_n^2)$, and note that your system is equivalent to the system $B^Tx = y$. With $y$ held fixed this is a linear system, and there is a lot about this in the literature.
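One simple numerical approach (added here as an illustration, not taken from the answer; convergence is not guaranteed in general) is the fixed-point iteration $x \leftarrow \sqrt{B^T x}$ elementwise, shown for a small made-up $B$:

```python
import math

# Example matrix; entries are nonnegative so B^T x stays nonnegative for x > 0.
B = [[1.0, 0.5],
     [0.5, 1.0]]

def step(x):
    # One fixed-point step: x_j <- sqrt(sum_i B_ij * x_i).
    n = len(x)
    return [math.sqrt(sum(B[i][j] * x[i] for i in range(n))) for j in range(n)]

x = [1.0, 1.0]
for _ in range(100):
    x = step(x)
print(x)

# Residuals of the original equations x_j^2 = sum_i B_ij x_i:
res = [x[j] ** 2 - sum(B[i][j] * x[i] for i in range(2)) for j in range(2)]
print(res)
```

For this symmetric example the nonzero solution is $x_1=x_2=1.5$, since $s^2 = 1.5s$.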
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Negative sign that appears in integration I have to solve the following definite integral $$\int_{0}^{4}r^3 \sqrt{25-r^2}dr=3604/15$$
I have tried a change of variables given by $u=\sqrt{25-r^2}$ where then I find that $dr=-u du/r$ and $r^2=25-u^2$. That change of coordinates then give the following integral $$-\int_{3}^{5}(25-u^2)u du$$
However, it now evaluates to $-3604/15$ and I wonder why I have a negative popping up if I have done what seems like the correct thing to do.
|
You've already been answered about the confusion with the limits. Now you can try the following and not make a substitution and thus not change the limits. Integrate by parts:
$$\begin{cases}u=r^2&u'=2r\\{}\\v'=r\sqrt{25-r^2}&v=-\frac13(25-r^2)^{3/2}\end{cases}\;\;\implies$$$${}$$
$$\int_0^4r^3\sqrt{25-r^2}\,dr=\left.-\frac{r^2}3(25-r^2)^{3/2}\right|_0^4+\frac13\int_0^4(2r\,dr)(25-r^2)^{3/2}=$$
$$=-\frac{16}3\cdot27-\left.\frac13\frac25(25-r^2)^{5/2}\right|_0^4=-\frac{27\cdot16}3-\frac2{15}\left(243-3125\right)=$$
$$=-\frac{432}3+\frac{5764}{15}=\frac{3604}{15}$$
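The value $\frac{3604}{15}$ can be confirmed numerically (added here as a cross-check) with Simpson's rule:

```python
import math

def simpson(f, a, b, n=10000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

val = simpson(lambda r: r ** 3 * math.sqrt(25 - r * r), 0.0, 4.0)
print(val, 3604 / 15)
```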
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
On the Y-Combinator in Lambda Calculus I am trying to follow this explanation on the Y-combinator
Fairly at the beginning the author shows this function definition and claims that stepper(stepper(stepper(stepper(stepper()))) (5) were equal to factorial(5).
stepper = function(next_step)
return function(n)
if n == 0 then
return 1
else
return n * next_step(n-1)
end
end
end
I have translated this into (enriched) lambda syntax to the best of my understanding as follows:
$\renewcommand{\l}{\lambda} \renewcommand{\t}{\text}$
for $2!$
$((\l f. \l n.\ (\t{zero})\ n\ \overline{1}\ (n \times f\ (n - \overline{1})))\ (\l f. \l n.\ (\t{zero})\ n\ \overline{1}\ (n \times f\ (n - \overline{1}))))\ \overline{2}$
$\longrightarrow (\l n.\ (\t{zero})\ n\ \overline{1}\ (n \times (\l f. \l n.\ (\t{zero})\ n\ \overline{1}\ (n \times f\ (n - \overline{1})))\ (n - \overline{1})))\ \overline{2}\ $
$\longrightarrow (\t{zero})\ \overline{2}\ \overline{1}\ (\overline{2} \times (\l f. \l n.\ (\t{zero})\ n\ \overline{1}\ (n \times f\ (n - \overline{1})))\ (\overline{2} - \overline{1})) $
$\longrightarrow \overline{2} \times (\l f. \l n.\ (\t{zero})\ n\ \overline{1}\ (n \times f\ (n - \overline{1})))\ (\overline{2} - \overline{1}) $
Here is the problem now. If I apply $(\overline{2} - \overline{1})$ now it gets inserted into $f$, which makes no sense. So, should the call
stepper(stepper(stepper(stepper(stepper()))) (5)
not also have an initial argument for the innermost stepper()-call? The author claims the code to work correctly, so what am I missing here, please?
|
Well, assuming that $n$ will reach $0$ while we get to the innermost stepper function, that would allow us to input anything as next_step is not used then. (Probably nil is passed this way to the function.)
Now stepper() is a function that basically expects the input 0 and returns 1.
The next step is stepper(stepper()) which inputs n and returns n * stepper()(n-1), i.e. if n happens to be 1 now, it returns 1.
Following this logic, the given 5-fold stepper function seems to expect the input 4, so there is a mistake in any case. Maybe if n==1 was meant instead, or one more stepper call was simply missing.
Indeed, it would be clearer if e.g. this innermost input would be something like $\lambda x.1$.
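This can be checked directly by translating the pseudocode into Python (added here for illustration): with a concrete stand-in for the innermost stepper() argument, the 5-fold composition indeed computes factorials.

```python
def stepper(next_step):
    def f(n):
        if n == 0:
            return 1
        return n * next_step(n - 1)
    return f

base = lambda x: 1          # concrete stand-in for the missing innermost argument
five_fold = stepper(stepper(stepper(stepper(stepper(base)))))
print(five_fold(4))
```

With this base function (the suggested $\lambda x.1$), five_fold(4) = 24 without ever calling base, and even five_fold(5) = 120 happens to work, because the final call is absorbed by base returning 1.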
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1786910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove the sum $\small\sqrt{x^2-2x+16}+\sqrt{y^2-14y+64}+\sqrt{x^2-16x+y^2-14y+\frac{7}{4}xy+64}\ge 11$ Let $x,y\in \mathbb{R}$. Show that
$$\color{crimson}{f(x,y)=\sqrt{x^2-2x+16}+\sqrt{y^2-14y+64} + \sqrt{x^2-16x+y^2-14y+\frac{7}{4}xy+64} \ge 11}$$
Everything I tried has failed so far. Using a computer, I found that equality $\color{blue}=$ holds if and only if $\color{blue}{x=2,y=6}$.
Here is one thing I tried, but obviously didn't work.
$$f(x,y)=\sqrt{(x-1)^2+15}+\sqrt{(y-7)^2+15}+\sqrt{(x-8)^2+(y-7)^2+\dfrac{7}{4}xy-49}$$
Thanks in advance
|
For convenience, we make the translation $x=2+a$ and $y=6+b$, so that the equality case is $a=b=0$. Then the expression to bound is:
$$\sqrt{(a+1)^2+15}+\sqrt{(b-1)^2+15}+\sqrt{\frac{7}{8}(a+b)^2+\frac{1}{8}(a-6)^2+\frac{1}{8}(b+6)^2} $$
Now recall the following form of Cauchy-Schwarz for $n$ nonnegative variables $x_1, \cdots, x_n$:
$$\sqrt{n(x_1+\cdots+x_n)}=\sqrt{(1+\cdots+1)(x_1+\cdots+x_n)}\ge \sqrt{x_1}+\cdots+\sqrt{x_n}$$
with equality iff $x_1=\cdots=x_n$. We use this three times, keeping in mind the equality case $a=b=0$:
$$\sqrt{(a+1)^2+15}=\frac{1}{4}\sqrt{16((a+1)^2+15)}\ge \frac{1}{4}(|a+1|+15)$$
$$\sqrt{(b-1)^2+15}=\frac{1}{4}\sqrt{16((b-1)^2+15)}\ge \frac{1}{4}(|b-1|+15)$$
$$\sqrt{\frac{7}{8}(a+b)^2+\frac{1}{8}(a-6)^2+\frac{1}{8}(b+6)^2}\ge \frac{1}{4}\sqrt{2((a-6)^2+(b+6)^2)}\ge \frac{1}{4}(|a-6|+|b+6|)$$
Now since $|a-6|+|a+1|\ge 7$ and $|b+6|+|b-1|\ge 7$ by the triangle inequality, the expression must be at least $\frac{15}{2}+\frac{7}{2}=11$, as required.
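As a numerical cross-check of the claimed minimum (added here; not part of the proof), a grid search around the equality point $(2,6)$:

```python
import math

def f(x, y):
    return (math.sqrt(x * x - 2 * x + 16)
            + math.sqrt(y * y - 14 * y + 64)
            + math.sqrt(x * x - 16 * x + y * y - 14 * y + 1.75 * x * y + 64))

# Grid search over x in [-2, 6], y in [2, 10] in steps of 0.02;
# the third radicand stays positive on this rectangle.
best = min((f(2 + i / 50, 6 + j / 50), 2 + i / 50, 6 + j / 50)
           for i in range(-200, 201) for j in range(-200, 201))
print(best)   # smallest value found and where it occurs
```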
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Set Interview Question, Any Creative Way to solve? I ran into a simple question, but I need an expert help me more to understand more:
The following is True:
$ A - (C \cup B)= (A-B)-C$
$ C - (B \cup A)= (C-B)-A$
$ B - (A \cup C)= (B-C)-A$
and the following is False:
$ A - (B \cup C)= (B-C)-A$
This is an interview question, but how can we quickly check whether these statements are true or false?
|
To check quickly, you can use boolean logic, i.e. replace the set $A$ with the statement $a = (x \in A)$. Thus you get (using $A - B = A \cap B^C \Leftrightarrow a*\neg b$):
$$a * \neg (b+c) = (b*\neg c)*\neg a$$
$$a * \neg b * \neg c = (b*\neg c)*\neg a$$
this should be equal for all $a,b,c$, but for $(1,0,0)$ we have
$$1 = 1*\neg 0*\neg 0 = (0*\neg 0) *\neg 1 = 0$$
which is a contradiction.
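The same boolean translation can be brute-forced over all eight assignments (added here for illustration):

```python
from itertools import product

def diff(a, b):        # x in A - B  <=>  a and not b
    return a and not b

checks = {
    "A-(C u B) = (A-B)-C": lambda a, b, c: diff(a, c or b) == diff(diff(a, b), c),
    "C-(B u A) = (C-B)-A": lambda a, b, c: diff(c, b or a) == diff(diff(c, b), a),
    "B-(A u C) = (B-C)-A": lambda a, b, c: diff(b, a or c) == diff(diff(b, c), a),
    "A-(B u C) = (B-C)-A": lambda a, b, c: diff(a, b or c) == diff(diff(b, c), a),
}

results = {name: all(f(*v) for v in product([False, True], repeat=3))
           for name, f in checks.items()}
print(results)
```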
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 9,
"answer_id": 3
}
|
Diagonalisation proof
Suppose the nth pass through a manufacturing process is modelled by the linear equations $x_n=A^nx_0$, where $x_0$ is the initial state of the system and
$$A=\frac{1}{5} \begin{bmatrix} 3 & 2 \\ 2 & 3 \end{bmatrix}$$
Show that
$$A^n= \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix}+\left( \frac{1}{5} \right)^n \begin{bmatrix} \frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}$$
Then, with the initial state $x_0=\begin{bmatrix} p \\ 1-p \end{bmatrix}$
, calculate $\lim_{n \to \infty} x_n$.
(the original is here)
I am not sure how to do the proof part
The hint is:
First diagonalize the matrix; eigenvalues are $1, \frac{1}{5}$.
I understand the hint and have diagonalised it, but I don't know how to change it into the given form. After diagonalisation, I just get 3 matrices multiplied together.
|
Hint :
$A^{n}=(P^{-1} D P)^{n}=P^{-1} D^{n} P$
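The claimed closed form can be sanity-checked numerically (added here, not part of the hint): repeated multiplication of $A$ matches the stated formula, whose entries are $\frac12 \pm \frac12(\frac15)^n$.

```python
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3 / 5, 2 / 5], [2 / 5, 3 / 5]]

def A_pow(n):
    # A^n by repeated multiplication.
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = matmul(P, A)
    return P

def closed_form(n):
    # The formula to be proved, entrywise.
    lam = (1 / 5) ** n
    return [[0.5 + 0.5 * lam, 0.5 - 0.5 * lam],
            [0.5 - 0.5 * lam, 0.5 + 0.5 * lam]]

for n in range(1, 8):
    P, Q = A_pow(n), closed_form(n)
    assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(A_pow(3))
```

As $n\to\infty$ the second matrix vanishes, so $x_n \to \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}$ for any $x_0$ with entries summing to $1$.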
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Choice of a horizontal tangent space of a principal bundle Let $\pi:P\to M$ be a principal bundle with group $G=\pi^{-1}(p)$, and let $u\in P$ and $p=\pi(u)$.
As I understand it, the choice of the vertical tangent space $V_uP=\mathrm{ker}(\pi_*)$ is natural, while there's no natural choice of a horizontal subspace $H_uP$ such that $T_uP=H_uP\oplus V_uP$. A choice of such a $H_uP$ amounts to choosing an Ehresmann connection $\omega\in\mathfrak{g}\otimes T^*P$.
Let's pick a local trivialization $\phi_i:M\times G\to P$ and then define a map $\psi_i:M\to P$ with $\psi_i(x):=\phi_i(x, e)$, where $e\in G$ is the identity element. In other words, we're choosing the local section of points that correspond to the identity element in the local trivialization. The pushforward ${\psi_i}_*$ maps $T_pM$ to a subspace of $T_uP$, and we can define $H_uP$ as the image of ${\psi_i}_*$.
Is that possible? Do we get a (local) Ehresmann connection this way?
Why isn't it a natural choice?
|
The point is that a local trivialization is defined on an open subset of $M$, not on all of $M$, unless the principal bundle is trivial. Yes, you can use a trivializing cover $(U_i)_{i\in I}$, where $\pi^{-1}(U_i)\simeq U_i\times G$, define the connection like that on each $U_i\times G$, and use a partition of unity to glue and obtain a connection on $P$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why is this Inequality True for all Positive Real Numbers? I was reading Folland's Real Analysis and came across the following inequality in chapter 6 (it is from lemma 6.1)
Let $0<\lambda<1$. Then $t^\lambda\leq \lambda t +(1-\lambda)$, $t$ a
real positive number.
Is this inequality always true? I'm not sure how to prove it and doesn't seem evident to me.
|
If you want to avoid concavity arguments just note that the function $$\phi(t) = t^\lambda - \lambda t - (1-\lambda)$$
has derivative $$\phi'(t) = \lambda t ^{\lambda - 1} - \lambda$$ which is positive if $0 < t < 1$ and negative if $t > 1$. Thus $\phi$ increases to its maximum at $t = 1$ and then decreases so that $$t^\lambda - \lambda t - (1-\lambda) \le \phi(1) = 0$$ for all $t > 0$.
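A quick numerical confirmation of the inequality over a grid (added here for illustration):

```python
# Confirm t^lam <= lam*t + (1 - lam) for t > 0, 0 < lam < 1.
def gap(t, lam):
    # Should be >= 0 everywhere, with equality at t = 1.
    return lam * t + (1 - lam) - t ** lam

worst = min(gap(t / 100, lam / 10)
            for t in range(1, 1001) for lam in range(1, 10))
print(worst)
```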
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
If $a$ and $b$ are roots of $x^4+x^3-1=0$, $ab$ is a root of $x^6+x^4+x^3-x^2-1=0$. I have to prove that:
If $a$ and $b$ are two roots of $x^4+x^3-1=0$, then $ab$ is a root of $x^6+x^4+x^3-x^2-1=0$.
I tried this :
$a$ and $b$ are root of $x^4+x^3-1=0$ means :
$\begin{cases}
a^4+a^3-1=0\\
b^4+b^3-1=0
\end{cases}$
which gives us :
$(ab)^4+(ab)^3=a^3+b^3+a^4+b^4+a^4b^3-a^3b^4-1$
can you help me carry on ? or propose another solution ? thanks in advance
|
Let $a,b,c,d$ be the roots of $x^4+x^3-1=0$.
By Vieta's formula,
$$a+b+c+d=-1\quad\Rightarrow\quad c+d=-1-(a+b)\tag1$$
$$abcd=-1\quad\Rightarrow \quad cd=-\frac{1}{ab}\tag2$$
Since we have
$$a^4+a^3=1\quad\text{and}\quad b^4+b^3=1$$
we can have
$$1=(a^4+a^3)(b^4+b^3)$$
$$(ab)^4+(ab)^3(a+b+1)=1,$$
i.e.
$$a+b=\frac{1-(ab)^4}{(ab)^3}-1\tag3$$
Similarly,
$$(cd)^4+(cd)^3(c+d+1)=1$$
Using $(1)(2)$, this can be written as
$$\left(-\frac{1}{ab}\right)^4+\left(-\frac{1}{ab}\right)^3(-1-(a+b)+1)=1,$$
i.e.
$$a+b=\left(1-\frac{1}{(ab)^4}\right)(ab)^3\tag4$$
From $(3)(4)$, letting $ab=x$, we have
$$\frac{1-x^4}{x^3}-1=\left(1-\frac{1}{x^4}\right)x^3$$
to get
$$x^6+x^4+x^3-x^2-1=0.$$
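A numerical verification (added here, not part of the proof): find the four roots of $x^4+x^3-1$ with a simple Durand–Kerner iteration, then check that every product of two distinct roots satisfies the sextic.

```python
def horner(coeffs, x):
    # Evaluate a polynomial given as [a_n, ..., a_0].
    v = 0
    for c in coeffs:
        v = v * x + c
    return v

def durand_kerner(coeffs, iters=500):
    # Simultaneous root-finding for a monic polynomial with simple roots.
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]   # standard start
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new.append(r - horner(coeffs, r) / denom)
        roots = new
    return roots

quartic = [1, 1, 0, 0, -1]           # x^4 + x^3 - 1
sextic = [1, 0, 1, 1, -1, 0, -1]     # x^6 + x^4 + x^3 - x^2 - 1

roots = durand_kerner(quartic)
products = [a * b for i, a in enumerate(roots) for b in roots[i + 1:]]
print(max(abs(horner(sextic, p)) for p in products))
```

The six pairwise products are exactly the six roots of the sextic, matching its degree.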
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Using Logic Laws to prove $p \leftrightarrow q \equiv (p\lor q)\to(p \land q)$ I am trying to prove that
$p \leftrightarrow q \equiv (p\lor q)\to(p \land q)$
and am really lost in the steps to solve this.
So far I have:
$p \leftrightarrow q \equiv (p\to q)\land(q\to p) \qquad$|equivalence
$p \leftrightarrow q\equiv (\neg p\lor q)\land(\neg q\lor p) \qquad$|implication
and I am not sure how to proceed from here. Any advice would be greatly appreciated!
edit:
Thanks to lord farin, I now have
≡ (~p ∨ q) ∧ (q → p) Implication Law
≡ (q → p) ∧ (~p ∨ q) Commutative Law
≡ (q → p ∧ ~p ) ∨ (q → p ∧ q) Distributive Law
≡ (~p ∧ q → p) ∨ (q ∧ q → p) Commutative Law
but I am still unsure of how to get there.
|
\begin{align}
(p\lor q)\to(p \land q) & \equiv (p\lor q)'\lor(p \land q)\\
& \equiv (p'\land q')\lor(p \land q)\\
& \equiv (p'\lor(p \land q))\land (q'\lor(p \land q))\\
& \equiv ((p'\lor p) \land (p'\lor q))\land ((q'\lor p) \land (q'\lor q))\\
& \equiv (T \land (p'\lor q))\land ((q'\lor p) \land T)\\
& \equiv (p'\lor q)\land (q'\lor p)\\
& \equiv (p\rightarrow q)\land (q\rightarrow p)\\
& \equiv p \leftrightarrow q
\end{align}
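The equivalence can also be checked mechanically over all four truth assignments (added here as a sanity check):

```python
from itertools import product

def iff(p, q):
    return p == q

def implies(p, q):
    return (not p) or q

ok = all(iff(p, q) == implies(p or q, p and q)
         for p, q in product([False, True], repeat=2))
print(ok)
```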
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1787962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Prove that $f(x)=\frac{1}{x}$ is not uniformly continuous on (0,1) So I'm having difficulties understand and utilizing the definition of uniform continuity:
$\forall \epsilon \gt 0,$ $\exists \delta>0 $ such that
$$ |x_1-x_2|\lt \delta \Rightarrow |f(x_1)-f(x_2)|\lt \epsilon $$
Asides from plugging in the function I'm lost as to what to do. How do I decide on which $\delta$ or $\epsilon$ to pick? (not just for this question, but for similar questions as well)
I've looked at other posts but they seem to be using techniques way too advanced for my understanding.
|
Consider the elements $\{1/n^2 : n \geq 1\}$ of $(0,1)$. As $n$ goes to $\infty$ these elements converge to $0$ and consecutive ones get as close to each other as you want, so the condition $|1/n^2-1/(n+1)^2|\leq \delta$ is eventually satisfied for any $\delta>0$. On the other hand, $f(1/(n+1)^2)-f(1/n^2) = 2n+1 \to \infty$. Thus, for any $\delta$ you find a counterexample.
Another proof can be obtained using the answer to the following question: Prove if a function is uniformly continuous on open interval, it is continuous on closed.
Suppose that the function $f(x) = 1/x$ is uniformly continuous on $(0,1)$. Then by the answer in the linked question, it should be extendable by continuity to $[0,1]$. This is obviously false, since the limit at $0$ is $\infty$.
Another argument: Note that uniformly continuous function maps Cauchy sequences to Cauchy sequences (this fact was used to prove the claim in the question linked above). Thus the sequence $(1/n)$ needs to be mapped to a Cauchy sequence. Since $f(1/n) = n$ is not a Cauchy sequence we can see that $f$ is not uniformly continuous.
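The first argument can be illustrated numerically (added here): the points $1/n^2$ and $1/(n+1)^2$ get arbitrarily close while their images stay $2n+1$ apart.

```python
def f(x):
    return 1.0 / x

for n in [10, 100, 1000]:
    x1, x2 = 1 / n ** 2, 1 / (n + 1) ** 2
    gap_x = abs(x1 - x2)          # shrinks to 0
    gap_f = abs(f(x1) - f(x2))    # grows like 2n + 1
    print(n, gap_x, gap_f)
```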
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Counting Turns in a Rectilinear Spiral Graph So consider a rectangular spiral graph which starts at the origin, goes right 1, up 1, left 2, down 2, right 3, ... (in units). How can we tell how many turns there have been given a point? For example, if we give the point $(1,1)$, then there have been a total of $1$ turn to get to that point. If we give $(1,-1)$, there have been $4$ turns to get there. And if it is $(10,10)$ there have been $37$ turns. Additionally, what if we were given points in the middle. For example for the point $(0,1)$ the number of turns would be $ 2$. I calculated these visually, but I am looking for a mathematical solution, possibly a closed form.
I think I have figured out a sort of algorithm that would give me specific points, but this doesn't account for the points in the middle. If $x=0$ and $y=0$ then $0$. If $x>0$ and $y>0$ then $\operatorname{abs}(y)\cdot 4-3$. If $x>0$ and $y<0$ then $\operatorname{abs}(y)\cdot 4$. If $x<0$ and $y>0$ then $\operatorname{abs}(y)\cdot 4-2$. If $x<0$ and $y<0$ then $\operatorname{abs}(y)\cdot 4-1$.
I'm not sure if this works in all cases, but this is my start. However, I am actually looking for a solution that doesn't have these multiple cases, though if there is one with fewer cases, I am still interested. Also, my algorithm doesn't take into account if the point lies in the middle. For example, if the point is $(3,5)$ my solution will give $17$, while the correct result is $18$.
Something I found was https://oeis.org/A242601 which can be used to find the turning points. Not sure how this helps though.
|
Most of the corners lie on circles of radius $k\sqrt 2$. [Diagram omitted.]
There is a set of "rogue corners" (coloured red in the original diagram) that do not lie on these circles.
The value of $(k-1)$ gives you the number of complete turns made around the origin, but this is not the same as the number of "turns" as you describe them. Your "turns" are in fact "right-angle anti-clockwise turns," but we can deal with that.
The well-behaved corners lie on the line $y=x$ or the line $y=-x$. For a well-behaved corner with coordinates $(x,y)$ you can therefore identify the value of $k$ by calculating:
$$k=|x|$$
A more general point $(x,y)$ will lie between two corners, and you can identify the value of $k$ by this simple way:
$$k=\max(|x|,|y|)$$
This procedure, however, gives an incorrect $k$ value for the rogue corners.
Having identified the $k$ value, we then need to find the angle $\alpha$ turned anticlockwise from the positive $x$-axis. Setting $\alpha = \arctan(\frac yx)$ seems obvious, but we need to take more care to identify the quadrant and to avoid problems with division by zero. I therefore suggest the following recipe:
$$\alpha=\begin{cases} 0& x>0, y=0\\ \arctan(\frac yx)& x>0, y>0\\ \frac {\pi}2 & x=0, y>0 \\ \pi+\arctan(\frac yx)& x<0 \\ \frac {3\pi}2& x=0, y<0 \\ 2\pi+\arctan(\frac yx)& x>0, y<0 \end{cases}$$
The total angle turned is given by:
$$\theta = 2\pi(k-1)+\alpha$$
You turn a corner at $\theta=\frac {\pi}4,\frac {3\pi}4,\frac {5\pi}4,...$ etc.
If $n$ is the number of turns we have $\theta=\frac {(2n-1)\pi}4$
Rearrange to get $n = \frac {(\frac {4\theta}{\pi}+1)}2 = \frac {2\theta}{\pi} + \frac 12$
$n = \lfloor {\frac {2\theta}{\pi} + \frac 12 }\rfloor$
I tested this using a spreadsheet and found that this worked for all corners except for the rogue corners. Their $n$ value was exactly 4 more than it should have been (that is 4 right-angle turns or one complete turn).
Now the rogue corners lie between the line $y=-x$ and the $x$-axis. To deal with them I adapted the calculation for $\alpha$ so that points in that region would have their $\alpha$ value reduced by $2 \pi$:
$$\alpha=\begin{cases} 0& x>0, y=0\\ \arctan(\frac yx)& x>0, y>0\\ \frac {\pi}2 & x=0, y>0 \\ \pi+\arctan(\frac yx)& x<0 \\ \frac {3\pi}2& x=0, y<0 \\ 2\pi+\arctan(\frac yx)& x>0, y<0, y \le -x \\ \arctan(\frac yx)& x>0, y<0, y>-x \end{cases}$$
I tested this using a spreadsheet for all points. Although it worked for all the corners (including rogue corners), it did not for the other points in the region between the line $y=-x$ and the $x$-axis.
I adapted the calculation for $\alpha$ further, setting the $\alpha$ of the rogue corners to $-\frac{\pi}{4}$:
$$\alpha=\begin{cases} 0& x>0, y=0\\ \arctan(\frac yx)& x>0, y>0\\ \frac {\pi}2 & x=0, y>0 \\ \pi+\arctan(\frac yx)& x<0 \\ \frac {3\pi}2& x=0, y<0 \\ 2\pi+\arctan(\frac yx)& x>0, y<0, y \le -x \\ -\frac{\pi}4& x>0, y<0, y=-x+1 \\ \arctan(\frac yx)& x>0, y<0, y>-x+1 \end{cases}$$
Finally I had to adapt the formula for $n$ to a ceiling function rather than a floor:
$n = \lceil {\frac {2\theta}{\pi} + \frac 12 }\rceil$
This now works for all points.
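The final recipe can be collected into a short routine (added here; a small tolerance guards the corner points, where $\frac{2\theta}{\pi}+\frac12$ is an exact integer and could otherwise be thrown off by floating-point rounding):

```python
import math

def turns(x, y):
    """Right-angle turns taken before the spiral reaches lattice point (x, y)."""
    if x == 0 and y == 0:
        return 0
    k = max(abs(x), abs(y))
    if x > 0 and y == 0:
        alpha = 0.0
    elif x > 0 and y > 0:
        alpha = math.atan(y / x)
    elif x == 0 and y > 0:
        alpha = math.pi / 2
    elif x < 0:
        alpha = math.pi + math.atan(y / x)
    elif x == 0 and y < 0:
        alpha = 3 * math.pi / 2
    elif y == -x + 1:                      # rogue corners
        alpha = -math.pi / 4
    elif y <= -x:
        alpha = 2 * math.pi + math.atan(y / x)
    else:                                  # x > 0, y < 0, y > -x + 1
        alpha = math.atan(y / x)
    theta = 2 * math.pi * (k - 1) + alpha
    return math.ceil(2 * theta / math.pi + 0.5 - 1e-9)

print([turns(x, y) for x, y in [(1, 1), (1, -1), (10, 10), (0, 1), (3, 5)]])
```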
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proportion in sets We have $3$ sets of positive integers.
$$A = \{x_1,y_1,z_1\},\quad{} B = \{x_2,y_2,z_2\}, \quad{} C= \{x,y,z\}$$
Which proportion do we use for adding $A$ and $B$ ($x_1+x_2$ and so on), so the proportion in their numbers gets as close as possible to the proportion of the numbers in $C$ ($x,y,z$ make a proportion) ?
|
Let's make some assumptions about things to approach this problem. We need a notion of comparing proportions, so let's say, given two triples $(a,b,c)$ and $(d,e,f)$, they are in-proportion equivalent to $(1, b/a, c/a)$ and $(1, e/d, f/d)$, so I will use the 2-norm to measure the difference and say that the distance $D$ between $(a,b,c)$ and $(d,e,f)$ is
$$
D = \sqrt{\left(\frac{b}{a} - \frac{e}{d}\right)^2
+\left(\frac{c}{a} - \frac{f}{d}\right)^2}.
$$
We could have used a different metric (e.g. rescaling by 2nd or 3rd coordinate), but let's settle on this definition for now.
Then, given
$$
A = \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix},
B = \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix},
C = \begin{pmatrix} x \\ y \\ z \end{pmatrix},
$$
we let $a, b \in \mathbb{R}^+$ and we can see that using a linear combination $aA + bB$ to approach $C$, we get to compare
$$
aA + bB
= \begin{pmatrix} ax_1 + bx_2 \\ ay_1 + by_2 \\ az_1 + bz_2 \end{pmatrix}
$$
to $C$, which using our metric yields the distance of
$$
D^2 = \left(\frac{ay_1 + by_2}{ax_1 + bx_2} - \frac{y}{x}\right)^2
+\left(\frac{az_1 + bz_2}{ax_1 + bx_2} - \frac{z}{x}\right)^2.
$$
Dividing numerator and denominator of both fractions by $a$ and letting $d = b/a$, we get
$$
D^2(d) = \left(\frac{y_1 + dy_2}{x_1 + dx_2} - \frac{y}{x}\right)^2
+\left(\frac{z_1 + dz_2}{x_1 + dx_2} - \frac{z}{x}\right)^2,
$$
where all elements except $d$ are fixed and the problem reduces to minimizing $D^2(d)$ (which is equivalent to minimizing $D(d)$ but easier) over all $d \in \mathbb{R}^+$.
Hope you can finish this from here.
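With everything else fixed, minimising $D^2(d)$ is a one-dimensional problem; a rough sketch of a grid search (added here; the triples $A$, $B$, $C$ below are made up):

```python
def D2(d, A, B, C):
    # The one-dimensional objective derived above, with d = b/a.
    x1, y1, z1 = A
    x2, y2, z2 = B
    x, y, z = C
    return (((y1 + d * y2) / (x1 + d * x2) - y / x) ** 2
            + ((z1 + d * z2) / (x1 + d * x2) - z / x) ** 2)

A, B, C = (1, 2, 3), (3, 1, 1), (2, 2, 3)

# Simple grid search over the ratio d = b/a > 0.
best_d = min((d / 1000 for d in range(1, 20001)), key=lambda d: D2(d, A, B, C))
print(best_d, D2(best_d, A, B, C))
```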
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is there any Algorithm to Write a Number $N$ as a Sum of $M$ Natural Numbers? I have a number $N$ (for example $N=64$).
Is there any algorithm to find all the feasible ways for writing the number $N$ as a sum of $M$ positive numbers?
(For example $M=16$) --- Repetition is allowed.
$n_1 + n_2 + n_3 + ... + n_{16} = 64$
$n_i \ge 1$
|
This is the stars and bars problem. Imagine a line of $64$ stars and the $63$ candidate bar positions between them. Pick $15$ of those positions to hold real bars and read off the groupings. There are ${63 \choose 15}=122131734269895$ ways. To make the list, there are many algorithms on the web to generate the combinations of $15$ choices out of $63$. For example, you can use the fact that ${63 \choose 15}={62 \choose 14}+{62 \choose 15}$, where the first counts combinations that include the first bar (and hence have $1$ as the first summand) and the second counts those that do not include the first bar (so the first summand is greater than $1$). This suggests a recursive algorithm.
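One such recursive enumeration, sketched here for a small instance (added for illustration): recurse on the first summand.

```python
from math import comb

def compositions(n, m):
    """Yield all ways to write n as an ordered sum of m positive integers."""
    if m == 1:
        yield (n,)
        return
    for first in range(1, n - m + 2):   # leave at least 1 for each later part
        for rest in compositions(n - first, m - 1):
            yield (first,) + rest

parts = list(compositions(8, 3))
print(len(parts), comb(7, 2))   # stars and bars: C(8-1, 3-1)
```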
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
How to find normal subgroups from a character table? I know that normal subgroups are the union of some conjugacy classes
Conjugacy classes are represented by the columns in the table.
How could we use character values in the table to determine normal subgroups?
|
This is quite well-known and can be found in books on representation theory. Here is an explanation, which is far from being original.
First fact : $N$ is a normal subgroup of a finite group $G$ if and only if there exists a character $\chi$ of $G$ such that $N = \ker \chi := \{g \in G | \chi(g)=\chi(1)\}$. Indeed, if $N$ is normal then $G$ acts on the complex algebra $\mathbf{C}[G/N] = \displaystyle \bigoplus_{gN \in G/N} \mathbf{C} e_{gN}$ by $h \cdot e_{gN}=e_{hgN}$. This is a linear representation of $G$ (coming from the regular representation of $G/N$). Let $\chi$ be its character. It is easy to check that $\chi(h) = 0$ if $h \notin N$ and $\chi(h) = \mathrm{Card}(G/N) = \chi(1)$ if $h \in N$. So $N = \ker \chi$. Conversely, using the fact that a character is constant on every conjugacy class, any subgroup of the form $\ker \chi$ is normal.
Second fact : if $\rho : G \to \mathrm{GL}(V)$ is the representation associated to the character $\chi$ then $\ker \rho = \ker \chi$. The inclusion $\subseteq$ is trivial. Conversely, assume $\chi(g) = \chi(1) = \dim V$. Since the eigenvalues of $\rho(g)$ are roots of $1$ and $\chi(g)$ is the sum of the eigenvalues (with multiplicities), these eigenvalues are forced to be all equal to $1$. So $\rho(g) = \mathrm{id}_V$, that is to say $g \in \ker \rho$.
Third fact : if $\chi = \displaystyle \sum_{i=1}^r n_i \chi_i$ (where the $\chi_i$ are pairwise distinct irreducible characters and $n_i \geq 1$) then $\ker \chi = \displaystyle \bigcap_{i=1}^r \ker \chi_i$. Writing $\rho, \rho_1,\ldots,\rho_r$ for the corresponding representations, $\rho$ is the direct sum of copies of $\rho_1,\ldots,\rho_r$ so $\ker \rho = \displaystyle \bigcap_{i=1}^r \ker \rho_i$. Then apply the second fact.
Conclusion : with your character table, you can read the subgroups $N_i:=\ker \chi_i$. Then the normal subgroups of $G$ are exactly the intersections of some $N_i$.
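As a small worked example (added here; the character values below are the standard table of $S_3$), the kernels $\ker\chi_i$ read off classwise, whose intersections give the normal subgroups $\{e\}$, $A_3$, and $S_3$:

```python
# Conjugacy classes of S3: identity, transpositions, 3-cycles.
classes = ["e", "(ab)", "(abc)"]

# Irreducible characters of S3 as value lists over the classes above.
chars = {
    "trivial":  [1, 1, 1],
    "sign":     [1, -1, 1],
    "standard": [2, 0, -1],
}

def kernel(chi):
    # Classes on which chi(g) = chi(1); their union is ker(chi).
    return frozenset(c for c, v in zip(classes, chi) if v == chi[0])

kernels = {name: kernel(chi) for name, chi in chars.items()}
print(kernels)
```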
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Check equivalence of quadratic forms over finite fields How to check whether the two quadratic forms \begin{equation} x_1^2 + x_2^2 \quad \text{(I)}\end{equation} and
\begin{equation} 2x_1x_2 \quad \text{(II)} \end{equation}
are equivalent on each of the spaces $\mathbb{F}_3^2\, \text{and}\,\mathbb{F}_5^2$?
I know that these forms correspond to the two matrices \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} in case (I) and
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} in case (II). But what do I do with those and what difference do the different fields make?
|
You've written them the other way round. For the quadratic form $ x_1^2 + x_2^2 $, the corresponding matrix is the identity while that for $ 2x_1x_2 $ is $ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} $. The forms are equivalent iff the matrices are congruent. (over $ \mathbb{F}_3 $ and $ \mathbb{F}_5 $)
So, suppose $ A = P^TIP = P^TP $ over $ \mathbb{F}_3 $ for some invertible $ P $. Then, taking determinants gives $ -1 = ( \det P)^2 $, which is a contradiction as $ -1 $ is not a quadratic residue modulo $ 3 $.
Over $ \mathbb{F}_5 $, you can check that they indeed are equivalent by the matrix $ P = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix} $ as we have, $ (3x_1+x_2)^2 + (x_1+3x_2)^2 = 2x_1x_2 $ in $ \mathbb{F}_5 $.
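Both claims can be brute-forced (added here as a check): over $\mathbb{F}_3$ no invertible $P$ with $P^TP \equiv A$ exists, while over $\mathbb{F}_5$ the stated $P$ appears among the solutions.

```python
from itertools import product

def congruent_to_A(p):
    # Find invertible 2x2 matrices P over F_p with P^T P = [[0,1],[1,0]] mod p.
    sols = []
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 0:
            continue                      # P not invertible
        # Entries of P^T P for P = [[a, b], [c, d]].
        if ((a * a + c * c) % p, (a * b + c * d) % p, (b * b + d * d) % p) == (0, 1, 0):
            sols.append(((a, b), (c, d)))
    return sols

print(len(congruent_to_A(3)), len(congruent_to_A(5)))
```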
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Limit of $\sqrt{\frac{\pi}{1-x}}-\sum\limits_{k=1}^\infty\frac{x^k}{\sqrt{k}}$ when $x\to 1^-$? I am trying to understand if
$$\sqrt{\frac{2\pi}{1-x}}-\sum\limits_{k=1}^\infty\frac{x^k}{\sqrt{k}}$$ is convergent for $x\to 1^-$. Any help?
Update: Given the insightful comments below, it is clear it is not converging, hence the actual question now is to find
$$ \lim_{x\to 1^-}\left(\sqrt{\frac{\pi}{1-x}}-\sum_{n\geq 1}\frac{x^n}{\sqrt{n}}\right)$$
|
Using the binomial series you know that $\sqrt{\frac{2\pi}{1-x}}=\sqrt{2\pi}+\sum_{k=1}^\infty\begin{pmatrix}-\frac{1}{2}\\k\end{pmatrix}(-x)^k=\sqrt{2\pi}+\sqrt{2\pi}\sum_{k=1}^\infty \frac{(2k)!}{4^k(k!)^2}x^k$. Using Stirling's formula you know that $\sqrt{2\pi}\frac{(2k)!}{4^k(k!)^2}=\frac{\sqrt{2\pi}}{4^k}(1+\epsilon_k)\frac{\sqrt{2\pi(2k)}(\frac{2k}{e})^{2k}}{(\sqrt{2\pi k}(\frac{k}{e})^{k})^2}=(1+\epsilon_k)\frac{\sqrt{2}}{\sqrt{k}}$, where $\epsilon_k\rightarrow 0$ as $k\rightarrow\infty$. Now compare this to $\frac{1}{\sqrt{k}}$: since $\sqrt{2}-1>\frac13$, for $k$ large enough the coefficients of the difference are larger than $\frac{1}{3\sqrt{k}}$. Then the difference is $\geq C+\sum_{k>N}\frac{1}{3\sqrt{k}}x^k$, and the right-hand side goes to infinity as $x\rightarrow 1$ from the left.
I think the problem would be more interesting if we had $\sqrt{\pi}$ in the first term; then it seems that one should track the $\epsilon_k$ more carefully to obtain the rate, but this is not clear to me, at least it is not readable from the given form of Stirling's formula.
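Numerically, with the single $\sqrt{\pi}$ of the updated question, the difference does seem to settle near $1.4603\approx-\zeta(1/2)$, consistent with the known polylogarithm expansion $\sum_{n\ge 1}x^n n^{-1/2}=\sqrt{\pi/(-\ln x)}+\zeta(1/2)+o(1)$ as $x\to 1^-$. A rough check (my own sketch, not a proof):

```python
# Evaluate sqrt(pi/(1-x)) - sum_{n>=1} x^n / sqrt(n) for x near 1.
# The values drift toward ~1.4603, matching -zeta(1/2) ~= 1.4603545.
from math import sqrt, pi

def difference(x, terms=2_000_000):
    s, p = 0.0, 1.0
    for n in range(1, terms + 1):
        p *= x                 # p = x**n, updated incrementally
        s += p / sqrt(n)
        if p < 1e-17:          # remaining tail is negligible
            break
    return sqrt(pi / (1 - x)) - s

for x in (0.99, 0.999, 0.9999):
    print(x, difference(x))
```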
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1788909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
}
|
Dimension of irreducible affine variety is same as any open subset Let $X$ be an irreducible affine variety. Let $U \subset X$ be a nonempty open subset. Show that dim $U=$ dim $X$.
Since $U \subset X$, dim $U \leq$ dim $X$ is immediate. I also know that the result is not true if $X$ is any irreducible topological space, so somehow the properties of an affine variety have to come in. I have tried assuming $U=X$ \ $V(f_1,...,f_k)$ but I don't know how to continue on.
Any help is appreciated!
|
You should note that any nonempty open subset of an irreducible variety (or topological space) is dense : https://en.wikipedia.org/wiki/Hyperconnected_space.
Then you can use the definition of the dimension to conclude.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
The fact that surface of spherical segment only depends on its height follows from symplectic geometry It is quite quite well known that the surface of the piece of a sphere with $z_0<z<z_1$ for some values of $z_0,z_1$ is given by
$ S = 2\pi R (z_1-z_0) $. So this surface area only depends on the height of the spherical segment.
My teacher mentioned that this fact follows from some general theorem in symplectic geometry. He mentioned this right after giving the action of $U(1)$ on $S^2$ by rotation as an example of a hamiltonian Lie group action. So I guess that it should follow somehow from this action. (Whose moment map is essentially the z-coordinate of the sphere.)
What could be the general theorem my teacher was talking about?
|
Consider a hamiltonian $S^1$-action on a symplectic manifold $(M, \omega)$. Denote by $\mu : M \to \mathbb{R}$ the associated moment map. Since $Lie(S^1) \cong T_0S^1 \cong \mathbb{R}$ is generated by $\frac{\partial}{\partial \theta}$, the moment map is determined by the hamiltonian function $H : M \to \mathbb{R} : m \mapsto \mu(m)(\frac{\partial}{\partial \theta})$.
Given any 'invariant cylinder' $C' \subset M$, we show that its $\omega$-area is completely determined by the values of $\mu$ on $\partial C'$.
Indeed, consider the cylinder $C = S^1 \times [0,1]$. It has a distinguished area-form, that is $d\theta \wedge dt$. Consider a map $\phi : C \to M$ such that for each $t \in [0,1]$, the map $\phi_t = \phi(-,t) : S^1 \to M$ is an orbit of the $S^1$-action on $M$. In particular, the 'velocity vector' $X = (\phi_t)_{\ast} \frac{\partial}{\partial \theta}$ coincides with the hamiltonian vector field $X_H$ implicitly given by $dH = X_H \lrcorner \, \omega$. Notice that $H$ is constant on any circle $\phi_t(S^1)$ and that, moreover, the quantity $dH \left( \phi_{\ast}\frac{\partial}{\partial t} \right)$ is independent of $\theta$.
We compute
$$ \begin{align}
\int_C \phi^{\ast}\omega &= \int_C \omega \left( \phi_{\ast}\frac{\partial}{\partial \theta}, \phi_{\ast}\frac{\partial}{\partial t} \right) \, d\theta \wedge dt = \int_C dH \left( \phi_{\ast}\frac{\partial}{\partial t} \right) \, d\theta \wedge dt \\
&= 2\pi \int_{[0,1]} dH \left( \phi_{\ast}\frac{\partial}{\partial t} \right) \, dt = 2\pi \int_{[0,1]} \phi^{\ast}dH \\
&= 2\pi \int_{[0,1]} d(\phi^{\ast}H) = 2\pi \int_{\{0,1\}} \phi^{\ast}H \\
&= 2\pi (\left. H \right|_{\phi_1(S^1)} - \left. H \right|_{\phi_0(S^1)}) \, .
\end{align} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Quantum group notation I was jumping into the deep end and reading a few papers and lectures on quantum groups. My knowledge on Lie algebras is a bit thin but I was just wondering the notation used in the starting of this document:
One should have $U_\hbar(g) = U(g)[[\hbar]]$ as a vector space [...]
My question is, is the $\hbar$ the Cartan subalgebra $\mathfrak{h}$ I have been seeing in other papers, and what does the $U(g)[[\hbar]]$ mean precisely? Usually I see the quantum group notated as $U_q(g)$, so the sudden change in notation concerned me. Thanks in advance!
|
In this case $\hbar$ is just a parameter. Frequently, one also denotes $U_{\hbar}(g)$ as $U_q(g)$.
$U(g)[[\hbar]]$ denotes the space of formal power series over $U(g)$, i.e. if $f\in U(g)[[\hbar]]$, then $$f=\sum_{n\geq 0}a_n \hbar^n$$ with $a_n\in U(g)$ for every $n$. (The sum runs over the nonnegative integers: only nonnegative powers of $\hbar$ appear in a formal power series.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Rational Distance Problem triple -- irrational point Many points with rational coordinates are known with rational distances to three vertices of a unit square. For example, the following points are rational distances from $a=(0,0)$, $b=(1,0)$, and $c=(0,1)$.
$(945/3364, 225/841)$, $(99/175, 297/700)$, $(8288/12675, 1628/4225)$, $(1155/10952, 99/2738)$
Is there a point with irrational coordinates that has rational distances to points $a, b, c$?
|
I would say no. Take the first two points as the foci of an ellipse, with the radius chosen so that the ellipse passes through the third point. By the Erdős–Anning theorem, the points of mutual rational distance on this ellipse, which include the other point, then all have rational coordinates.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find the sum of the infinite series $\sum n(n+1)/n!$ How do find the sum of the series till infinity?
$$ \frac{2}{1!}+\frac{2+4}{2!}+\frac{2+4+6}{3!}+\frac{2+4+6+8}{4!}+\cdots$$
I know that it gets reduced to $$\sum\limits_{n=1}^∞ \frac{n(n+1)}{n!}$$
But I don't know how to proceed further.
|
You could also consider that $$A(x)=\sum\limits_{n=1}^∞ \frac{n(n+1)}{n!}x^n=\sum\limits_{n=0}^∞ \frac{n(n+1)}{n!}x^n=\sum\limits_{n=0}^∞ \frac{n(n-1)+2n}{n!}x^n$$ $$A(x)=\sum\limits_{n=0}^∞ \frac{n(n-1)}{n!}x^n+2\sum\limits_{n=0}^∞ \frac{n}{n!}x^n=x^2\sum\limits_{n=0}^∞ \frac{n(n-1)}{n!}x^{n-2}+2x\sum\limits_{n=0}^∞ \frac{n}{n!}x^{n-1}$$ $$A(x)=x^2 \left(\sum\limits_{n=0}^∞ \frac{x^n}{n!} \right)''+2x\left(\sum\limits_{n=0}^∞ \frac{x^n}{n!} \right)'=x^2e^x+2x e^x=x(x+2)e^x$$ Now, compute $A(1)$.
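A quick numerical check of this value (my addition):

```python
# The partial sums of n(n+1)/n! converge very fast to A(1) = 1*(1+2)*e = 3e.
from math import e, factorial

total = sum(n * (n + 1) / factorial(n) for n in range(1, 40))
print(total, 3 * e)  # both ~8.1548454854
```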
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 2
}
|
Lim Sup and Measurability of one Random Variable with respect to Another Here, there is a common proposition in probability theory :
Let $X,Y: (\Omega, \mathcal{S}) \rightarrow (\mathbb{R}, \mathcal{R})$ where $\mathcal{R}$ are the Borel Sets for the Reals. Show that $Y$ is measurable with respect to $\sigma(X) = \{ X^{-1}(B) : B \in \mathcal{R} \}$ if and only if there exists a function $f: \mathbb{R} \rightarrow \mathbb{R}$ such that $Y(\omega) =
f(X(\omega))$ for all $\omega \in \Omega$.
A proof is given here (written by Nate Eldredge).
I have worked on it, but a point need to be clarified :
*
*Why we use the lim sup ? Because we do not know that the lim exists whereas the lim sup always exists ?
*(Mainly) Is there a good example that explain the use of lim sup ? - Because I simply fail to find a good and simple example that could justify the lim sup. I have tried many constructions (with a Logarithm for example) but does not work ... maybe build one with $cos$ or $-1^n$, no ?
|
We have, by construction, $f_n(X)=Y_n$ and therefore $\lim f_n(X)=Y$, but $\lim f_n(x)$ may not exist for all $x\in\mathbb R$ if the range of $X$ doesn't cover the entire real line. (This is related to the fact that the choice of $B_{i,n}$ may not be unique.) So you take limsup outside the range of $X$ (on the range of $X$ it doesn't matter whether you take limsup or lim because limits exist).
I've got one thing to add, though. The limsup may yield infinity while $f$ is expected to be finite. So we may note the following: $\limsup f_n=:f$ is measurable as a function from $\mathbb R$ to the extended reals. So the set of $x$'s for which $f$ takes the value of $\infty$ is measurable. Change $\infty$ to any finite number on this set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How do you pack circles into a circle? I want to know how many small circles can be packed into a large circle.
Looking at Erich's Packing Center it seems that packing is a non-trivial problem to solve.
My interest is practical rather than theoretical, so I don't really care about the absolute upper bound. I just want a good enough solution that doesn't waste too much empty space.
Also, I have a range of acceptable diameters for the small circles. So it's not necessarily a question of packing $x$ circles into $1$ circle, but packing $x, x+1, ... x+n$ circles into a circle, and seeing which one looks nicer. For instance, looking at EPC, $16$ and $18$ circles in one circle seems to be much prettier than $17$ - so even if I was looking for $17$ ideally, I would compromise by either using smaller circles or fewer circles instead.
So to solve my problem, I need a general, easily computable algorithm for packing $n$ circles into $1$ circle. I suspect this does not exist (otherwise EPC wouldn't). What about a general, easily computable algorithm that packs $n$ circles into a circle "well enough"?
|
Have you tried just packing them on the boundary? (I'll explain what I mean below).
Let $C$ be the circle of radius $R$ and order your $N$ circles as $C_1,\cdots,C_N$ such that $r_1 \ge r_2 \ge \cdots \ge r_N$. Place $C_1$ inside $C$ tangent to $C$. Then place $C_2$ tangent to $C_1$ and $C$, $C_3$ tangent to $C_2$ and $C$, $C_4$ tangent to $C_3$ and $C$, and keep going until the circle tangent to $C_n$ and $C$ has to intersect $C_1$. Then just "keep going around" (letting $C_{n+1}$ be the unique circle of radius $r_{n+1}$ tangent to $C_1$ and $C_n$).
Of course, this algorithm is not optimal and doesn't always work, but "it won't waste too much space" if it works. If you had values of $r_i$ and $R$ and the remaining space was too big, you can always retry with a smaller value of $R$.
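Here is a rough sketch of the boundary-placement step in code (the stopping rule and helper names are my own choices, and it omits the final "keep going around" refinement). The angular gap between consecutive boundary circles follows from the law of cosines on the triangle formed by the three centers:

```python
# Greedy placement of circles tangent to the boundary of a circle of radius R.
# A circle of radius r tangent to C from inside has its center at distance R - r.
from math import acos, cos, sin, pi

def gap(R, r1, r2):
    a, b = R - r1, R - r2       # distances of the small centers from the origin
    c = r1 + r2                 # tangency: distance between the small centers
    return acos((a * a + b * b - c * c) / (2 * a * b))

def place_on_boundary(R, radii):
    """Greedily place circles (largest first) around the boundary of C."""
    radii = sorted(radii, reverse=True)
    placed, theta = [], 0.0
    for i, r in enumerate(radii):
        if i > 0:
            theta += gap(R, radii[i - 1], r)
            # the closing gap back to the first circle must still fit
            if theta + gap(R, r, radii[0]) > 2 * pi + 1e-9:
                break
        placed.append(((R - r) * cos(theta), (R - r) * sin(theta), r))
    return placed

# Sanity check: six unit circles fit exactly around the boundary of a
# radius-3 circle (gap = pi/3 each).
print(len(place_on_boundary(3, [1] * 10)))  # 6
```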
Hope that helps,
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Differentiation under the integral sign for $\int_{0}^{1}\frac{\arctan x}{x\sqrt{1-x^2}}\,dx$ Hello I have a problem that is:
$$\int_0^1\frac{\arctan(x)}{x\sqrt{1-x^2}}dx$$
I try use the following integral
$$ \int_0^1\frac{dy}{1+x^2y^2}= \frac{\arctan(x)}{x}$$
My question: if I can do $$\frac{\arctan(x)}{x\sqrt{1-x^2}}= \int_0^1\frac{dy}{(1+x^2y^2)(\sqrt{1-x^2})}$$ and solve but I note that the integral is more difficult.
Any comment or any help will be well received.
Thanks.
|
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Leftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
Following the hint in the question, we have:
\begin{align}
\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} & =
\int_{0}^{1}{1 \over \root{1 - x^{2}}}\
\overbrace{\int_{0}^{1}{\dd y \over 1 + x^{2}y^{2}}}^{\ds{\arctan\pars{x}/x}}\
\,\dd x =
\int_{0}^{1}\int_{0}^{1}{\dd x \over \root{1 - x^{2}}\pars{y^{2}x^{2} + 1}}
\,\dd y
\end{align}
With $x \to 1/x$, the above expression becomes
\begin{align}
\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} & =
\int_{0}^{1}
\int_{1}^{\infty}{x\,\dd x \over \root{x^{2} - 1}\pars{x^{2} + y^{2}}}\,\dd y\
\\[3mm] & \stackrel{x^{2}\ \to\ x}{=}\
\half\int_{0}^{1}
\int_{1}^{\infty}{\dd x \over \root{x - 1}\pars{x + y^{2}}}\,\dd y\
\\[3mm] & \stackrel{x - 1\ \to\ x}{=}\
\half\int_{0}^{1}
\int_{0}^{\infty}{\dd x \over \root{x}\pars{x + 1 + y^{2}}}\,\dd y
\\[3mm] & \stackrel{x\ \to\ x^{2}}{=}\
\int_{0}^{1}\int_{0}^{\infty}{\dd x \over x^{2} + y^{2} + 1}\,\dd y =
\int_{0}^{1}{1 \over \root{y^{2} + 1}}\ \overbrace{%
\int_{0}^{\infty}{\dd x \over x^{2} + 1}}^{\ds{\pi/2}}\ \,\dd y
\end{align}
With $y \equiv \sinh\pars{\theta}$, the last expression is reduced to
\begin{align}
\color{#f00}{\int_{0}^{1}{\arctan\pars{x} \over x\root{1 - x^{2}}}\,\dd x} & =
{\pi \over 2}\int_{0}^{\textrm{arcsinh}\pars{1}}\,\dd\theta =
\color{#f00}{{\pi \over 2}\,\textrm{arcsinh}\pars{1} =
{\pi \over 2}\,\ln\pars{\!\root{2} + 1}} \approx 1.3845
\end{align}
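As a numerical cross-check (not part of the derivation above), substituting $x=\sin t$ removes the endpoint singularity, after which a plain Simpson rule reproduces the value $\frac{\pi}{2}\ln(\sqrt 2+1)$:

```python
# With x = sin(t) the integral becomes ∫_0^{π/2} arctan(sin t)/sin t dt,
# whose integrand is smooth (it tends to 1 as t -> 0), so composite
# Simpson's rule applies directly.
from math import atan, sin, sqrt, log, pi

def f(t):
    return atan(sin(t)) / sin(t) if t > 1e-12 else 1.0

n, a, b = 2000, 0.0, pi / 2          # n must be even for Simpson's rule
h = (b - a) / n
simpson = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
simpson *= h / 3

print(simpson)                        # ~1.384458
print(pi / 2 * log(1 + sqrt(2)))      # ~1.384458
```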
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Another of $\frac{1^2}{1^2}+\frac{1-2^2+3^2-4^2}{1+2^2+3^2+4^2}+\cdots=\frac{\pi}{4}$ type expressible in cubes? Gregory and Leibniz formula
(1)
$$-\sum_{m=1}^{\infty}\frac{(-1)^m}{2m-1}=\frac{\pi}{4}$$
We found another series equivalent to (1)
This is expressed in terms of square numbers
$$-\sum_{m=1}^{\infty}\frac{\sum_{n=1}^{3m-2}(-1)^nn^2}{\sum_{n=1}^{3m-2}n^2}=\frac{\pi}{4}$$
$$\frac{1^2}{1^2}+\frac{1-2^2+3^2-4^2}{1^2+2^2+3^2+4^2}+\frac{1-2^2+3^2-4^2+5^2-6^2+7^2}{1^2+2^2+3^2+4^2+5^2+6^2+7^2}+\cdots=\frac{\pi}{4}$$
Is there another series equivalent to (1) but expressible in terms of cube numbers?
|
A proof of mahdi's result:
$$ \sum_{k=1}^{n}k^3 = \frac{n^2(n+1)^2}{4}\tag{1}$$
$$ \sum_{k=1}^{2n}(-1)^{k+1} k^3 = -n^2(4n+3),\qquad \sum_{k=1}^{2n-1}(-1)^{k+1} k^3 = n^2(4n-3)\tag{2}$$
lead to:
$$\begin{eqnarray*}\sum_{n\geq 1}\frac{1^3-2^3+\cdots+(-1)^{n+1}n^3}{1^3+2^3+\cdots+n^3} &=& \sum_{m\geq 1}\frac{-(4m+3)}{(2m+1)^2}+\sum_{m\geq 1}\frac{(4m-3)}{(2m-1)^2}\\&=&3+\sum_{m\geq 1}\frac{(4m-3)-(4m-1)}{(2m-1)^2}\\&=&3-2\sum_{m\geq 1}\frac{1}{(2m-1)^2}=\color{red}{3-\frac{\pi^2}{4}}.\tag{3}\end{eqnarray*}$$
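A direct numerical check of the partial sums (my addition; an even number of terms is used so that the large alternating terms pair up):

```python
# Partial sums of the cube series approach 3 - pi^2/4 ~= 0.5325989.
from math import pi

alt, cubes, series = 0, 0, 0.0
for n in range(1, 200_001):
    alt += (-1) ** (n + 1) * n ** 3   # 1^3 - 2^3 + ... +- n^3
    cubes += n ** 3                   # 1^3 + 2^3 + ... + n^3
    series += alt / cubes

print(series, 3 - pi ** 2 / 4)
```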
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1789914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Non-trivial explicit example of a partition of unity Does exist a non-discrete paracompact example where is possible to give a partition of unity with the functions defined explicitly for a specific non trivial cover of the space?
|
Here is another simpler example that may help you visualize what partitions of unity are. Consider the open cover of $\mathbb{R}$ given by $U = (-\infty,2)$ and $V = (-2,\infty)$. A partition of unity associated to $\{U,V\}$ could be given by $\{ f_U, f_V \}$, where
$$ f_U, f_V \colon \mathbb{R} \to [0,1]$$
$$ f_U (x) = \left\{ \begin{array}{ll}
1 & \text{ if } x \leq -1 \\
\frac{1-x}{2} & \text{ if } -1 \leq x \leq 1 \\
0 & \text{ if } x \geq 1 \end{array} \right. $$
$$ f_V (x) = \left\{ \begin{array}{ll}
0 & \text{ if } x \leq -1 \\
\frac{x+1}{2} & \text{ if } -1 \leq x \leq 1 \\
1 & \text{ if } x \geq 1 \end{array} \right. $$
Then it is clear that $f_U(x)+f_V(x) = 1$ for all $x$. The support of $f_U$ is contained in $U$ and the support of $f_V$ in $V$. Of course the condition that any $x$ has an open neighbourhood where only a finite number of the functions are different from zero is automatic here because there are only two functions.
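A short numerical check, taking $f_U(x)=(1-x)/2$ and $f_V(x)=(x+1)/2$ on $[-1,1]$ (extended constantly outside), chosen so that the two functions sum to $1$:

```python
# Verify the partition-of-unity properties for U = (-inf, 2), V = (-2, inf):
# values lie in [0,1], f_U + f_V = 1, and the supports sit inside U and V.
def f_U(x):
    return 1.0 if x <= -1 else (0.0 if x >= 1 else (1 - x) / 2)

def f_V(x):
    return 0.0 if x <= -1 else (1.0 if x >= 1 else (x + 1) / 2)

xs = [k / 10 for k in range(-50, 51)]
assert all(abs(f_U(x) + f_V(x) - 1) < 1e-12 for x in xs)
assert all(0 <= f_U(x) <= 1 and 0 <= f_V(x) <= 1 for x in xs)
assert f_U(1.5) == 0.0   # supp f_U ⊆ (-inf, 1] ⊆ U
assert f_V(-1.5) == 0.0  # supp f_V ⊆ [-1, inf) ⊆ V
print("partition of unity OK")
```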
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
s-arc transitive graph is also (s-1)-arc transitive I read in a paper that an s-arc transitive graph is also (s-1)-arc transitive and thus (s-2)-transitive, which was stated as obvious. However, I was thinking that a path of 2 edges, $P_2$, is 2-arc transitive but not vertex-transitive, right?
Could anyone advise me where I got it wrong? Thanks!
|
The right answer is that an $s$-arc transitive graph must also be regular (all vertices have the same degree). With this extra condition it is easy to prove the claim. Note that in some trivial cases (for instance when the graph has no $s$-arcs at all) the result holds vacuously.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Parametrizing the surface $z=\log(x^2+y^2)$
Let $S$ be the surface given by $$z = \log(x^2+y^2),$$ with $1\leq x^2+y^2\leq5$. Find the surface area of $S$.
I'm thinking the approach should be $$A(s) = \iint_D \ |\textbf{T}_u\times \textbf{T}_v| \,du\,dv ,$$ where $T_u, T_v$ are tangent vectors to the surface (and of course their cross product is the normal).
Now my issue is the parametrization bit. I'm having a very hard time grasping the concept of parametrization of surfaces. I'd be really grateful if someone can help with parametrizing the surface, i.e. $x(u,v)$, $y(u,v)$ and $z(u,v)$ and the domains of $u$ and $v$.
|
If a geometric object has a particular symmetry, generally parameterizations that reflect that symmetry result in easier computations. In our case, $S$ is the surface of a graph whose domain $A := \{1 \leq x^2 + y^2 \leq 5\}$ is an annulus centered at the origin. This suggests letting one of our parameterization variables be the distance $r$ of a point $p \in A$ to the origin, that is, setting $$r := \sqrt{x^2 + y^2}.$$ Then, we can see that our domain $A$ takes on an especially simple form: $$A := \{1 \leq r \leq \sqrt{5}\} .$$ When we use this $r$ for one of our parameters, usually we take an angular parameter $\theta$ for the other (the coordinate system $(r, \theta)$ is called polar coordinates). Since our region sweeps all the way around our origin, we should let $\theta$ vary over some (half-open) interval of length $2 \pi$, e.g., $[0, 2\pi)$.
Drawing a picture, we see using elementary trigonometry that the transformation that expresses $x, y$ in terms of our polar coordinates $r, \theta$ is
$$\left\{\begin{array}{rcl}x(r, \theta) & = & r \cos \theta \\ y(r, \theta) & = & r \sin \theta\end{array}\right.$$
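Carrying the parametrization through (the cross-product computation below is my own continuation, not part of the answer): with $\Phi(r,\theta)=(r\cos\theta,\, r\sin\theta,\, \ln(r^2))$ one gets $T_r\times T_\theta=(-2\cos\theta,\,-2\sin\theta,\,r)$, so $|T_r\times T_\theta|=\sqrt{4+r^2}$ and the area is $A=2\pi\int_1^{\sqrt5}\sqrt{4+r^2}\,dr\approx 20.1$:

```python
# Numerically finish the surface-area computation with Simpson's rule.
from math import sqrt, pi

n = 2000                       # even, for Simpson's rule
a, b = 1.0, sqrt(5.0)
h = (b - a) / n
g = lambda r: sqrt(4 + r * r)  # |T_r x T_theta| for this parametrization
area = 2 * pi * (g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h)
                                   for k in range(1, n))) * h / 3

print(area)  # ~20.1
```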
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Unit quaternion multiplied by -1 If all components of a unit quaternion (also known as versor) are multiplied by -1, so it still remains a versor, does the resulting versor is considered equivalent to the original versor?
|
If we think of rotation quaternions, which are unit quaternions, then multiplying one by $-1$ amounts to an additional $2\pi$ rotation; consequently this does not affect the original rotation, hence $-q$ is equivalent to the original unit quaternion.
$$q = \cos{\frac{\theta}{2}}+\sin{\frac{\theta}{2}}\frac{\vec{u}}{\|\vec{u}\|}$$$$-q= \cos{\left(\frac{\theta}{2}+\pi\right)}+\sin{\left(\frac{\theta}{2}+\pi\right)}\frac{\vec{u}}{\|\vec{u}\|}$$
Hence if $q$ rotates through $\theta$ around $\vec{u}$, then $-q$ rotates through $\theta+2\pi$.
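A quick check in code that $q$ and $-q$ act identically as rotations $v\mapsto qvq^{*}$ (plain-tuple quaternion arithmetic written for this answer; $w$ is the scalar part):

```python
# q and -q implement the same rotation of 3-vectors.
from math import cos, sin, pi

def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(q, v):
    # conjugation v -> q v q* (q* is the inverse of a unit quaternion)
    w, x, y, z = qmul(qmul(q, (0.0, *v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

theta = pi / 2
q = (cos(theta / 2), 0.0, 0.0, sin(theta / 2))   # rotation by 90° about z
neg_q = tuple(-c for c in q)

print(rotate(q, (1.0, 0.0, 0.0)))      # ~(0, 1, 0)
print(rotate(neg_q, (1.0, 0.0, 0.0)))  # same vector
```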
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to show that a limit exisits, and evaluate the limit of trig functions as $x$ tends to point For each of the following functions $f$, determine whether $\lim_{x\to a}f(x)$
exists, and compute the limit if it exists. In each case, justify your answers.
$$f(x)= x^2\cos\frac{1}{x} (\sin x)^4, \text{ where } a=0$$
$$f(x)= \frac{3(\tan(2x))^2}{2x^2}, \text{ where } a=0$$
I'm awful at trig questions, (I don't think I'm allowed to use L'Hôpital's rule).
|
Part (a):
$$\lim_{x\to 0}x^2\cos(1/x)(\sin x)^4$$ Here $\cos(1/x)$ has no limit as $x\to 0$, but it oscillates between $-1$ and $1$, so it is bounded. Since $x^2\to 0$ and $(\sin x)^4\to 0$ while the middle factor stays bounded, the squeeze theorem gives
$$\lim_{x\to 0}x^2\cos(1/x)(\sin x)^4=0.$$
Part (b):
This one is quite straightforward. You might already know that $\lim_{x\to 0}\dfrac{\tan x}{x}=1$ (why? you can ask in the comments); then similarly $\lim_{x\to 0}\bigg(\dfrac{\tan 2x}{2x}\bigg)^2=1$.
So just multiply and divide to get $$\lim_{x\to 0}\frac{3(\tan 2x)^2}{2x^2}=\lim_{x\to 0}\,(3\cdot 2)\bigg(\dfrac{\tan 2x}{2x}\bigg)^2=6.$$
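A numerical sanity check of both limits:

```python
# f -> 0 and g -> 6 as x -> 0.
from math import cos, sin, tan

def f(x):  # part (a)
    return x**2 * cos(1 / x) * sin(x)**4

def g(x):  # part (b)
    return 3 * tan(2 * x)**2 / (2 * x**2)

for x in (0.1, 0.01, 0.001):
    print(x, f(x), g(x))
```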
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove that an $n \times n$ matrix $A$ over $\mathbb{Z}_2$ is diagonalisable and invertible if and only if $A=I_n$ Through some facts, when $A$ is invertible, I found out that the eigenvalue can't be $0$, since if the eigenvalue is $0$, then $\det(A)=0$, which means that it is not invertible. Since it is over $\mathbb{Z}_2$, the eigenvalue is $1$.
Since the eigenvalue $(\lambda)$ is $1$, and $Av = \lambda v$, then $Av = v$ when $A=I$.
However, I didn't quite get how it can be diagonalizable if and only if $A=I$, or did I get it wrong somewhere? please help :)
|
Since $0$ and $1$ are the only scalars available, and a diagonal entry$~0$ in a diagonal matrix makes it singular (for instance because there is then a zero column), the only invertible diagonal matrix is the identity matrix$~I_n$. A diagonalisable matrix is by definition similar to some diagonal matrix, and invertibility is unchanged under similarity. Hence an invertible matrix over $\Bbb Z/2\Bbb Z$ that is diagonalisable (over that field) is similar to$~I_n$, but only the identity matrix itself is similar to$~I_n$.
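For $n=2$ this can be verified by brute force (a sketch; the helper names are mine):

```python
# Over F_2, among the 6 invertible 2x2 matrices, only the identity is
# diagonalizable (i.e. similar to a diagonal matrix).
from itertools import product

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

mats = [(tuple(bits[:2]), tuple(bits[2:])) for bits in product((0, 1), repeat=4)]
I = ((1, 0), (0, 1))
invertible = [A for A in mats if (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % 2 == 1]
inverses = {A: B for A in invertible for B in invertible if mul(A, B) == I}

def diagonalizable(A):
    return any(mul(mul(inverses[P], A), P)[0][1] == 0 and
               mul(mul(inverses[P], A), P)[1][0] == 0 for P in invertible)

good = [A for A in invertible if diagonalizable(A)]
print(good)  # only the identity matrix
```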
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Number of rectangles to cover a circle After searching around I found this is similiar to the Gauss Circle but different enough (for me anyway) that it doesn't translate well.
I have a circle, radius of 9 that I need to completely cover with rectangles 4 x 8. What is the minimum number of whole rectangles required.
My own calculations concluded it's between 8 and 11. I found this by setting the maximum number of panels equal to (diameter of circle / panel height)x(diameter of circle/panel width) to obtain 10.125 rounded up to 11
I do think I could safely use 10 due to waste.
I then found the minimum by setting the number of panels equal to (area of circle/area of panel). This gave me an answer of 7.95, rounded up to 8.
Is there a better way to do this?
|
It is possible to cover the circle by $11$ rectangles.
We can construct the $11$ rectangles by the following procedure.
*
*Center the circle of radius $9$ at origin.
*Start covering the circle with rectangle $C_0 = [5,9] \times [-4,4]$ (the red one).
*Rotate $C_0$ about the origin through the angles $\frac{2k\pi}{7}$ for $k = 1,\ldots, 6$. This gives us $6$ new rectangles $C_1, C_2, \ldots, C_6$ (the gray ones).
*
Make copies of $C_0$, $C_2$ and $C_5$ and shift them inwards by a radial displacement of $4$. This gives us $3$ rectangles $C'_0$, $C'_2$ and $C'_5$ (the green ones). To make the covering work, one needs to shift $C'_2$ and $C'_5$ a little bit tangentially too.
*
*What remains of the circle can be covered by the rectangle $[-7,1] \times [-2,2]$ (the blue one).
According to Erich's packing center,
the current best known covering of circle by $18$ unit squares has radius $r \approx 2.116$. Since $2.116 \le 2.25 = \frac{9}{4}$, this means there is no known covering of our circle with $9$ rectangles. This leaves us with the question whether we can reduce the number of rectangles from $11$ to $10$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
}
|
Prove that $f$ is periodic if and only if $f(a) =f(0)$.
Suppose that $f'$ is periodic with period $a$. Prove that $f$ is periodic if and only if $f(a) =f(0)$.
I understand the direction assuming that $f(a) = f(0)$, but the other direction I don't get in the solution below. How do they get $f(na) = nf(a) - (n-1)f(0) = n[f(a)-f(0)]+f(0)$?
The following is the solution the book gives for the second part:
Conversely, suppose that $f$ is periodic (with some period not necessarily equal to $a$). Let $g(x)=f(x+a)-f(x)$. Then $g'(x)=f'(x+a)-f'(x)=0$, so $g$ has the constant value $g(0)=f(a)-f(0)$. I.e., $$f(x+a)=f(x)+f(a)-f(0)$$ It follows that $$f(na)=nf(a)-(n-1)f(0)=n[f(a)-f(0)]+f(0)$$ Now if $f(a)\neq f(0)$, then this would be unbounded. But $f$ is bounded since it is periodic.
|
Slightly differently worded:
As $g$ is constant, we have $f(x+a)-f(x)=g(x)=g(0)$ for all $x$, hence $f(na)=f(0)+ng(0)$ by induction on $n$. If $g(0)\ne 0$, this is unbounded, whereas a continuous periodic function must be bounded. Hence we conclude $g\equiv 0$, i.e., $f(x+a)=f(x)$ for all $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1790964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that $\int_0^{\infty}\cos(\alpha t)e^{-\lambda t}\,\mathrm{d}t=\frac{\lambda}{\alpha^2+\lambda^2}$ using real methods? Could you possibly help me prove $$\int_0^{\infty}\cos(\alpha t)e^{-\lambda t}\,\mathrm{d}t=\frac{\lambda}{\alpha^2+\lambda^2}$$
I'm an early calculus student, so I would appreciate a thorough answer using real and as basic as possible methods. Thank you in advance.
|
Explicitly, with the choice $$u = \cos \alpha t, \quad du = -\alpha \sin \alpha t \, dt, \\ dv = e^{-\lambda t} \, dt, \quad v = -\lambda^{-1} e^{-\lambda t},$$ the first integration by parts gives
$$I = \int e^{-\lambda t} \cos \alpha t \, dt = -\frac{1}{\lambda} e^{-\lambda t} \cos \alpha t - \frac{\alpha}{\lambda} \int e^{-\lambda t} \sin \alpha t \, dt.$$ Then, the second integration by parts with the choice $$u = \sin \alpha t, \quad du = \alpha \cos \alpha t \, dt, \\ dv = e^{-\lambda t} \, dt, \quad v = -\lambda^{-1} e^{-\lambda t},$$ gives $$I = -\frac{1}{\lambda}e^{-\lambda t} \cos \alpha t + \frac{\alpha}{\lambda^2} e^{-\lambda t} \sin \alpha t - \frac{\alpha^2}{\lambda^2} \int e^{-\lambda t} \cos \alpha t \, dt.$$ But this integral on the RHS is simply $I$, so we conclude $$\left(1 + \frac{\alpha^2}{\lambda^2} \right) I = \frac{e^{-\lambda t}}{\lambda^2} \left(\alpha \sin \alpha t - \lambda \cos \alpha t\right),$$ which gives the value of the indefinite integral as $$I = \frac{e^{-\lambda t}}{\lambda^2 + \alpha^2} (\alpha \sin \alpha t - \lambda \cos \alpha t) + C,$$ for some constant of integration $C$. Taking the limit as $t \to \infty$, noting that $\sin \alpha t$ and $\cos \alpha t$ are bounded, gives $$\int_{t = 0}^\infty e^{-\lambda t} \cos \alpha t \, dt = I(\infty) - I(0) = 0 - \left(- \frac{\lambda}{\lambda^2 + \alpha^2} \right) = \frac{\lambda}{\lambda^2 + \alpha^2},$$ as claimed.
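A numerical check of the result for sample values $\lambda=2$, $\alpha=3$ (my addition; the integral is truncated at $T=30$, where the tail is bounded by $e^{-\lambda T}$ and hence negligible):

```python
# Simpson's rule for ∫_0^T e^{-λt} cos(αt) dt vs. the closed form λ/(α²+λ²).
from math import exp, cos

lam, alpha = 2.0, 3.0
n, T = 30_000, 30.0            # even n for Simpson's rule
h = T / n
f = lambda t: exp(-lam * t) * cos(alpha * t)
integral = (f(0) + f(T) + sum((4 if k % 2 else 2) * f(k * h)
                              for k in range(1, n))) * h / 3

print(integral, lam / (alpha**2 + lam**2))  # both ~0.1538462
```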
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
Example of a projective variety that is not projectively normal but normal I want to prove the following statement:
Let $Y$ be the quartic curve in $\mathbb{P}^3$ given parametrically by $(x,y,z,w)=(t^4,t^3u,tu^3,u^4)$. Then $Y$ is normal but not projectively normal.
To show it is normal, we can consider the affine patches parametrized by $(x,y,z)=(t^4,t^3,t)$ and $(y,z,w)=(u,u^3,u^4)$, each of the patches has affine coordinate ring isomorphic to $k[t]$, which is a UFD and hence normal.
But I couldn't show that $Y$ is not projectively normal. The homogeneous coordinate ring $S(Y)$ of $Y$ is isomorphic to $k[t^4,t^3u,tu^3,u^4]$, and I want to show that $S(Y)$ is not integrally closed.
Suppose $f\in k(t^4,t^3u,tu^3,u^4)$ is integral over $k[t^4,t^3u,tu^3,u^4]$; then $f$ should be integral over $k[t,u]$ and hence $f\in k[t,u]$. So I want to prove that $k(t^4,t^3u,tu^3,u^4)$ contains something like $t,t^2,t^3,ut,ut^2$ or $u^2t^2$, but I can't find an example.
Any help is appreciated!
|
A good way to visualize this is to look at the set $Q$ of all possible exponents. In this case, it's the set of all $\Bbb N$-linear combinations of $(4,0), (3,1),(1,3),(0,4)$. For our ring to have even a chance to be normal, $Q$ certainly (prove it!) must satisfy the following criterion (in which case we say $Q$ is saturated):
If $\mathbf a \in \Bbb Z Q$ and there's a positive integer $k$ such that $k \mathbf a \in Q$, then $\mathbf a \in Q$, where $\Bbb Z Q$ is the additive group generated by $Q$. (In our case, $\Bbb Z Q= \Bbb Z^2$)
Taking a look at our $Q$, we run into a glaring obstruction: $(2,2)$ is missing! Specifically, $2(2,2)=(4,0)+(0,4)\in Q$, yet $(2,2)\notin Q$.
Returning to our ring, this tells us that $t^2u^2$ satisfies the monic polynomial $X^2-t^4u^4$ over $S(Y)$, yet $t^2u^2\notin S(Y)$. Thus, $S(Y)$ isn't normal.
The above analysis works because $S(Y)$ is an affine semigroup ring (with affine semigroup Q). It turns out that for an affine semigroup ring, being normal is equivalent to its affine semigroup being saturated.
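The missing-exponent argument is easy to confirm by brute force (a sketch; `in_Q` is an illustrative helper name):

```python
# Check that the exponent semigroup Q generated by (4,0), (3,1), (1,3), (0,4)
# is not saturated: (4,4) = 2*(2,2) lies in Q, but (2,2) does not.
from itertools import product

gens = [(4, 0), (3, 1), (1, 3), (0, 4)]

def in_Q(target):
    bound = max(target) + 1  # larger coefficients overshoot the target
    for coeffs in product(range(bound), repeat=len(gens)):
        v = (sum(c * g[0] for c, g in zip(coeffs, gens)),
             sum(c * g[1] for c, g in zip(coeffs, gens)))
        if v == target:
            return True
    return False

print(in_Q((4, 4)))  # True, e.g. (4,0) + (0,4)
print(in_Q((2, 2)))  # False, so t^2 u^2 is integral over S(Y) but not in it
```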
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Double Integral $\int\limits_0^\pi \int\limits_0^\pi|\cos(x+y)|\,dx\,dy$ First off, I apologize for any English mistakes. I've come across a double integral problem that I haven't been able to solve: Find
$$\int_0^\pi \int_0^\pi|\cos(x+y)|\,dx\,dy$$
Intuitively, I thought it could be solved in a similar manner to what you would do for a regular integral with an absolute value. For example, to solve $\int_0^\pi |\cos(x)| \,dx$, you just take the interval where $\cos(x)$ is positive and multiply it by two:
$$\int_0^\pi |\cos(x)|\,dx = 2\int_0^{\frac{\pi}{2}} \cos(x) \, dx$$
So I naively assumed my initial problem could be solved in a similar manner, that is:
$$\int_0^\pi \int_0^\pi |\cos(x+y)| \,dx\,dy = 4\int_0^{\frac{\pi}{4}}\int_0^{\frac{\pi}{4}}\cos(x+y) \,dx\,dy$$
However, this does not give me the right answer. I'm not really sure how this integral is actually solved.
Thanks a lot for your help!
|
A handful of hints to get you started, each for different directions.
Most require you to just use some alternative definition of $\cos(x)$, as mentioned in the comments...
*
*Use the piecewise definition of $\left|\cos(x+y)\right|$, and add together the separate integrands
*Note that if $x,y\in\Bbb R$, then $\left|\cos(x+y)\right|=\sqrt{\cos^2(x+y)}$
*$\cos(x+y)=\cos(x) \cos(y)-\sin(x) \sin(y)$
*You could always use the definition $\cos(x+y)=\frac 12 e^{-i x-i y}+\frac 12 e^{i x+i y} $, but that can end up being much more difficult.
Start with the first suggestion, though. Try to redefine the function as a piecewise one. Here's a basic example of how a simple integral of an absolute value works from Paul's Online Notes (example 5):
$$\int_0^3\left|3t-5\right|\mathrm{d}t=\int_0^{\frac 5 3} 5-3t\,\mathrm{d}t+\int_{\frac 5 3}^3 3t-5\,\mathrm{d}t$$
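For a numerical sanity check of whatever closed form you obtain (it should come out to $2\pi$: for each fixed $y$, the inner integral of $\left|\cos(x+y)\right|$ runs over a window of length $\pi$, and the integral of $|\cos|$ over any such window equals $2$), a midpoint Riemann sum in Python:

```python
import math

def midpoint_sum(n=400):
    """Midpoint Riemann sum of |cos(x+y)| over [0, pi] x [0, pi]."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += abs(math.cos(x + y))
    return total * h * h

approx = midpoint_sum()
print(approx, 2 * math.pi)  # both are about 6.28319
```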
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Recurrent sequence limit Let $a_n$ be a sequence defined: $a_1=3; a_{n+1}=a_n^2-2$
We must find the limit: $$\lim_{n\to\infty}\frac{a_n}{a_1a_2...a_{n-1}}$$
My attempt
The sequence is increasing and does not have an upper bound. Let $b_n=\frac{a_n}{a_1a_2...a_{n-1}},n\geq2$. This sequence is decreasing (we have $b_{n+1}-b_{n}<0$). How can I find this limit?
|
Take $a_{1}=x+\dfrac{1}{x}$ with $x>1$; then
$$a_{2}=x^2+\dfrac{1}{x^2},a_{3}=x^4+\dfrac{1}{x^4}\cdots,a_{n}=x^{2^{n-1}}+\dfrac{1}{x^{2^{n-1}}}$$
so we have
$$a_{1}a_{2}\cdots a_{n}=\left(x+\dfrac{1}{x}\right)\left(x^2+\dfrac{1}{x^2}\right)\cdots\left(x^{2^{n-1}}+\dfrac{1}{x^{2^{n-1}}}\right)=\dfrac{x^{2^n}-\dfrac{1}{x^{2^n}}}{x-\dfrac{1}{x}}$$
so
$$\dfrac{a_{n+1}}{a_{1}a_{2}\cdots a_{n}}=\left(x-\frac{1}{x}\right)\dfrac{x^{2^n}+\dfrac{1}{x^{2^n}}}{x^{2^n}-\dfrac{1}{x^{2^n}}}\to \dfrac{x^2-1}{x},n\to+\infty$$
and
$$\left(x-\dfrac{1}{x}\right)^2+4=\left(x+\dfrac{1}{x}\right)^2=9\Longrightarrow x-\dfrac{1}{x}=\sqrt{5}$$
another approach
$$a^2_{n+1}-4=a^4_{n}-4a^2_{n}=a^2_{n}(a^2_{n}-4)=a^2_{n}a^2_{n-1}(a^2_{n-1}-4)=\cdots=a^2_{n}a^2_{n-1}\cdots a^2_{1}(a^2_{1}-4)$$
so we have
$$\dfrac{a_{n+1}}{a_{1}a_{2}\cdots a_{n}}=\sqrt{a^2_{1}-4}\cdot \dfrac{a_{n+1}}{\sqrt{a^2_{n+1}-4}}\to\sqrt{a^2_{1}-4}=\sqrt{5},a_{n+1}\to+\infty$$
since $a_{n+1}\to +\infty,n\to +\infty$
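Both derivations are easy to corroborate numerically; Python's exact integer arithmetic handles the doubly exponential growth of $a_n$:

```python
import math

a, prod = 3, 1           # a = a_1, prod = empty product
for _ in range(8):       # after k steps: prod = a_1 ... a_k, a = a_{k+1}
    prod *= a
    a = a * a - 2        # a_{n+1} = a_n^2 - 2, exact integers

ratio = a / prod
print(ratio, math.sqrt(5))  # the ratio has already converged to sqrt(5)
```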
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
In how many ways can $2t+1$ identical balls be placed in $3$ boxes so that any two boxes together will contain more balls than the third?
In how many ways can $2t+1$ identical balls be placed in $3$ boxes so that any two boxes together will contain more balls than the third?
I think we have to use multinomial theorem, but I cannot frame the expression!
|
Notation: A box $X$ has $x$ balls inside.
First of all, note that no box can contain less than $1$ and more than $t$ balls. Also, the sum of two boxes mustn't be lower than $t + 1$.
Let's say box $A$ has $1 \leq a \leq t$ balls. Box $B$ must then have $t - a + 1 \leq b \leq t$ balls. This gives you $a$ ways of filling box $B$. The remaining balls go to box $C$.
Therefore, since $a$ ranges from $1$ to $t$, you have $\sum_{a=1}^{t} a = \dfrac{t\cdot\left(t+1\right)}{2}$ ways of filling these boxes that satisfy the condition.
Edit: I am assuming that boxes are distinguishable from each other, so that the distribution $\{t, t, 1\}$ differs from $\{t, 1, t\}$.
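A brute-force count in Python (with distinguishable boxes, as assumed above) agrees with $\frac{t(t+1)}{2}$ for small $t$:

```python
def count_valid(t):
    """Count ordered triples (a, b, c) summing to 2t+1 such that
    any two boxes together hold more balls than the third."""
    n = 2 * t + 1
    total = 0
    for a in range(n + 1):
        for b in range(n + 1 - a):
            c = n - a - b
            if a + b > c and a + c > b and b + c > a:
                total += 1
    return total

print([count_valid(t) for t in range(1, 6)])  # [1, 3, 6, 10, 15]
```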
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What's the chance that a number randomly drawn from a set of numbers is bigger than one drawn before it? It has been a long time since last I opened a book on probability/statistics, so this might actually be a very basic question.
Let's say I have a set $S = \{1, 2, \cdots, 5000\}$. Clearly, $S$ has 5000 elements.
Now, I select a number randomly from $S$ and remove it (I reassign $S = S\backslash\{a^0\}$), where I'm calling the number $a^0$. Now, $S$ has 4999 elements.
If I select and then remove a second number at random, called $a^*$, what is the probability that $a^* > a^0$?
So, after removing $a^0$, we have 4999 elements in the set. The chance that a number in the set is greater than $a^0$ will be
$$
P(a^*>a^0)= \dfrac{\mbox{number of elements greater than $a^0$}}{\mbox{number of elements remaining}} = \dfrac{\mbox{number of elements greater than $a^0$}}{4999}
$$
Let's try a simpler example, $T = \{1,2,3,4\}$.
I pick | P(a^*>a^0)
1 | 1
2 | 2/3
3 | 1/3
4 | 0
I thought about looking at the expected value as well.
$$
E(X) = \sum xP(x) = x_0 P(x_0) + x_1 P(x_1) + \cdots + x_n P(x_n)
$$
Applying this to the diminished set, we have
$$
E(S) = \sum aP(a) = a_0P(a_0) + \cdots + a_{5000}P(a_{5000})
$$
Now, $P(a_i) = 1/4999$ for all $i = 1,\cdots,5000$ except for when $i=i^0 \implies a_{i^{0}} = a^0 \implies P(a_{i^{0}}) = 0$.
So a rough estimate of the expectation is $E(S) = \frac{1}{4999}\sum_{k=1}^{5000} k = 5001 \times 2500 \div 4999 \approx 2501.00020004$. Obviously, this is quite far off. For example, $a^0 = 5000$ means that we will have an error in our expectation greater than $1$!
I'm editing because there were a lot of comments and I think I was unclear.
When selecting the first number $a^0$, each number has probability $1/5000$ of being drawn. Hence, the expected value drawn is $(1+2+\cdots+5000)/5000 = 2500\times5001\div5000 = 5001/2$ (using the definition of the expectation). This was mentioned by André Nicolas below.
But remember: I remove $a^0$ after drawing it, so the size of the set decreases by 1.
So now I'll draw for the second time, from the set with $a^0$ removed.
Each number still in the set has probability $1/4999$ of being drawn. But our sum in the numerator is missing $a^0$.
So the expected value is $(1+2+\cdots+5000 - a^0)/4999$. Is there a way of making this value more explicit?
Sorry if you think I'm being obtuse; I just really don't get it!
|
The probability is $\frac12$ by symmetry. Since all numbers are by assumption equiprobable, for each pair you have the same probability of drawing first the one number and then the other or vice versa.
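A quick Monte Carlo simulation in Python (drawing two distinct numbers without replacement, as in the question) illustrates the symmetry argument:

```python
import random

random.seed(0)
trials = 100_000
wins = 0
for _ in range(trials):
    a0, a_star = random.sample(range(1, 5001), 2)  # first and second draw
    if a_star > a0:
        wins += 1
print(wins / trials)  # close to 0.5
```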
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
One partial derivative is continuous (at a single point) implies differentiable? Let $f:\mathbb{R}^2\to\mathbb{R}$ and $(p,q)\in\mathbb{R}^2$ such that both $f_x$ and $f_y$ exists at $(p,q)$.
Assume that $f_x$ is continuous at $(p,q)$.
How do we prove/disprove that $f$ is differentiable at $(p,q)$?
I do note that it is similar to this question: Continuity of one partial derivative implies differentiability
However the critical difference is that for my case, I only have $f_x$ continuous at a single point $(p,q)$, not even in a neighborhood, hence I believe that the approach of Fundamental Theorem of Calculus used in the other question cannot work.
Thanks for any help!
|
A simpler example: $f(x,y)=1$ if $xy=0$; otherwise $f(x,y)=0$. Both partial derivatives at $(0,0)$ are $0$, but $f$ is not differentiable at $(0,0)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Norm equivalence on $l^1$. Suppose that $\|\cdot\|$ is a norm on $l^1$ such that:
a) $(l^1, \|\cdot\|)$ is a Banach space,
b) for all $x \in l^1$ $\|x\|_{\infty} \leq \|x\|$.
Prove that the norms $\|\cdot\|$ and $\|\cdot\|_1$ are equivalent.
($\|\cdot\|_{\infty}$ - supremum norm and $\|\cdot\|_1$ - standard $l^1$)
Because both $(l^1, \|\cdot\|)$ and $(l^1, \|\cdot\|_1)$ are Banach spaces, it's enough to prove the existence of some $M > 0$ such that $M\|x\|_1 \leq \|x\|$ or $\|x\|_1 \geq M\|x\|$, but I don't know which one may be easier to prove.
|
At first I thought one or the other inequality must be obvious, but I don't see it after a little thought. Big Hint: It's trivial from the Closed Graph Theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1791925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $x = a_1+\dfrac{a_2}{2!}+\dfrac{a_3}{3!}+\cdots$
If $x$ is a positive rational number, show that $x$ can be uniquely expressed in the form $$x = a_1+\dfrac{a_2}{2!}+\dfrac{a_3}{3!}+\cdots\text{,}$$ where $a_1,a_2,\ldots$ are integers, $0 \leq a_n \leq n-1$ for $n > 1$, and the series terminates.
I don't see how in the solution below we can take "$a_n \in \{0,\ldots,n-1\}$ such that $m - a_n = nm_1$ for some $m_1$".
Book's solution:
|
It's the Archimedean principle. For any two natural numbers $m$ and $n$ there exists a unique natural number (including $0$) $k$ such that $kn \le m < (k+1)n$.
Or in other words: for any two natural numbers $m$ and $n$ there are unique natural $k$ and $a$ such that $m = kn + a$ with $0\le a < n$.
Or in other words: for any two natural numbers $m$ and $n$ there is a unique $a \in \{0,\dots,n-1\}$ such that $m-a = kn$ for some natural $k$.
Or in other words: for any $m/n!$ we can choose $a_n$ to be the unique $a_n \in \{0,\dots,n-1\}$ so that $m - a_n = n\,m_1$ for some natural $m_1$ (i.e. $a_n = a$ above and $m_1 = k$ above).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Minimize $a^5+b^5+c^5+d^5+e^5 = p^4+q^4+r^4+s^4 = x^3+y^3+z^3 = m^2 + n^2$ with distinct positive integers
Find the minimum value of the following:
$$a^5+b^5+c^5+d^5+e^5 = p^4+q^4+r^4+s^4 = x^3+y^3+z^3 = m^2 + n^2$$
where all numbers are different/distinct positive integers.
I know the answer (see below), but want to confirm the same.
Is there any way to prove following conjecture?
Conjecture. There is always unique way to write down $\sum_{i=1}^{n} a_{i}^n$ for any arbitrary value of $n$ such that it gives same value for all values of $n$.
Answer is given below, spoiler alert:
$$1^5+2^5+4^5+7^5+9^5 = 3^4+6^4+10^4+16^4 = 17^3+20^3+40^3 = 88^2 + 263^2 = 76913$$
|
the smallest is
76913 squares of: 263 88
76913 cubes of: 40 17 20
76913 fourth powers of: 16 3 6 10
76913 fifth powers of: 9 1 2 4 7
76913
the second smallest is
1560402 squares of: 1239 159
1560402 cubes of: 101 45 76
1560402 fourth powers of: 35 5 12 14
1560402 fifth powers of: 17 1 6 8 10
1560402
Here is another one. It may or may not be the third smallest.
2091473 squares of: 1367 472
2091473 cubes of: 122 10 65
2091473 fourth powers of: 32 14 21 30
2091473 fifth powers of: 17 4 8 10 14
2091473
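These outputs are easy to re-verify; for instance, a short Python check of the smallest value:

```python
# Bases of the four representations of 76913, keyed by exponent.
decomps = {2: [263, 88], 3: [40, 17, 20], 4: [16, 3, 6, 10], 5: [9, 1, 2, 4, 7]}
values = {p: sum(b ** p for b in bases) for p, bases in decomps.items()}
print(values)  # all four power sums equal 76913
```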
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Cardano's method returns incorrect answer for $x = u + v$ I'm trying to use Cardano's method to solve this equation:
$$x^3+6x=20 \tag{1}$$
As described on Wikipedia, I let $x = u + v$ and expand in $(1)$:
$$(u+v)^3+6(u+v)=20$$
$$u^3 + v^3 + (3uv+6)(u+v)-20=0 \tag{2}$$
I then let $3uv + 6 = 0$ and substitute in $(2)$:
$$u^3 + v^3 - 20 = 0$$
$$u^3 + v^3 = 20 \tag{3}$$
I also express $uv$ as a product of cubics:
$$3uv + 6 = 0$$
$$uv = -2$$
$$u^3v^3 = -8$$
$$-4u^3v^3 = 32 \tag{4}$$
At this point, Wikipedia says "the combination of these two equations [$(3)$ and $(4)$] leads to a quadratic equation" which I think I can also be achieved by squaring $(3)$ and adding $(4)$ to both sides:
$$u^6 + 2u^3v^3 + v^6 = 400$$
$$u^6 - 2u^3v^3 + v^6 = 432$$
$$(u^3 - v^3)^2 = 432$$
$$u^3 - v^3 = \pm 12\sqrt{3} \tag{5}$$
I then get $u$ by adding $(3)$ and $(5)$:
$$2u^3 = 20 + 12\sqrt{3} \textrm{ or } 20 - 12\sqrt{3}$$
$$u = \sqrt[3]{10 + 6\sqrt{3}} \textrm{ or } \sqrt[3]{10 - 6\sqrt{3}}$$
and $v$ by subtracting $(3)$ and $(5)$:
$$2v^3 = 20 - 12\sqrt{3} \textrm{ or } 20 + 12\sqrt{3}$$
$$v = \sqrt[3]{10 - 6\sqrt{3}} \textrm{ or } \sqrt[3]{10 + 6\sqrt{3}}$$
I finally get $x$ by adding $u$ and $v$:
$$x = \sqrt[3]{10 + 6\sqrt{3}} + \sqrt[3]{10 - 6\sqrt{3}}$$
$$\textrm{ or } \sqrt[3]{10 - 6\sqrt{3}} + \sqrt[3]{10 + 6\sqrt{3}}$$
$$\textrm{ or } 2\sqrt[3]{10 + 6\sqrt{3}}$$
$$\textrm{ or } 2\sqrt[3]{10 - 6\sqrt{3}}$$
I know there's a real solution, so that only leaves $x = 2\sqrt[3]{10 + 6\sqrt{3}}$ which equals approximately $195$ instead of $20$ in the original equation. I can only find the correct real solution by using $x = u - v$ instead of $x = u + v$:
$$x = \sqrt[3]{10 + 6\sqrt{3}} - \sqrt[3]{6\sqrt{3} - 10}$$
So, am I misusing Cardano's method somehow?
|
You forgot the condition $u^3v^3=-8$. The correct solution is the first you enunciated.
That said, your way of solving the system of equations $\; \begin{cases}u^3+v^3=20\\u^3v^3=-8\end{cases}\;$ is overcomplicated.
Just use what any high-school student knows to solve the problem of finding two numbers the sum $s$ and product $p$ of which are given: they're roots of the quadratic equation (if any):
$$t^2-st+p=t^2-20t-8=0$$
The reduced discriminant is $\Delta'=100+8$, hence the roots
$$u^3,v^3=10\pm\sqrt{108}=10\pm6\sqrt 3.$$
You must take both roots, because the problem imposes the condition that the product is $-8$ – i.e. it implies the quadratic equation, but is not equivalent to it. Thus
$$x=u+v=\sqrt[3]{10-6\sqrt 3}+\sqrt[3]{10+6\sqrt 3}.$$
Edit:
As pointed out by @André Nicolas $10\pm6\sqrt 3=(1\pm\sqrt3)^3$, so that the root is equal to $2$.
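Numerically, the resolution of the apparent paradox is that $10-6\sqrt 3\approx-0.39$ is negative, so its real cube root is $-\sqrt[3]{6\sqrt 3-10}$ — which is exactly why the $x=u-v$ form also worked in the question. A quick Python check with a real-valued cube root helper:

```python
import math

def cbrt(v):
    """Real cube root, valid for negative arguments too."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

x = cbrt(10 + 6 * math.sqrt(3)) + cbrt(10 - 6 * math.sqrt(3))
print(x)              # 2.0 up to rounding
print(x ** 3 + 6 * x) # 20.0, as required
```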
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why am I getting two different answers (and the textbook a third) on this 3D trig problem?
Simone is facing north and facing the entrance to a tunnel through a mountain. She notices that a $1515$ m high mountain is at a bearing of $270^\circ$ from where she is standing and its peak has an angle of elevation of $35^\circ$. When she exits the tunnel, the same mountain has a bearing of $258^\circ$ and its peak has an angle of elevation of $31^\circ$.
Assuming the tunnel is perfectly level and straight, how long is it?
I had two problems with this question. I was getting two different answers using methods that should give the same answer and neither of those answers matched with the answer in the textbook.
Attempt 1:
What we want to figure out is the value for $d$. If we can figure out the values for $x$ and $y$, then we can use Pythagorean theorem to figure out the value for $d$.
In this case,
$$x = \frac{1515}{\tan 35^\circ} \qquad\text{and}\qquad y = \frac{1515}{\tan 31^\circ}$$
We also know that $d =\sqrt{y^{2}-x^{2}}$, so
$$d = \sqrt{\left(\frac{1515}{\tan 31^\circ}\right)^{2}- \left(\frac{1515}{\tan 35^\circ}\right)^{2}}$$
This makes the value of $d$ about 1294 meters.
Attempt 2:
We can figure out the value for $\theta$. In the ground level triangle $\theta = 258^\circ -180^\circ = 78^\circ$. This also means $\gamma = 90^\circ - 78^\circ = 12^\circ$. In the solution above, we figured out the value for $x$ and $y$. We can use trig ratios to figure out the value for $d$.
$$\tan\gamma = \frac{d}{x} \qquad\to\qquad d = x \tan 12^\circ = \frac{1515 \tan 12^\circ}{\tan 35^\circ}$$
This gives a value for $d$ equal to about $460$ meter.
In my textbook, the answer for the length of the tunnel is actually $650$ meters. I was wondering what am I doing wrong. Also: Why are my two answers not matching?
|
Connecting the two right triangles in the horizontal plane directly:
$$ 1515 \sqrt{ \cot^2 31^\circ - \cot^2 35^\circ} \approx 1294 $$
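In Python (standard library only), matching the first attempt in the question:

```python
import math

h = 1515.0
x = h / math.tan(math.radians(35))  # entrance: horizontal distance to peak
y = h / math.tan(math.radians(31))  # exit: horizontal distance to peak
d = math.sqrt(y ** 2 - x ** 2)
print(d)  # about 1294.6 m
```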
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Factor out (m+1) in the following so that the final answer is $\frac{(2m+3)(m+2)(m+1)}{6}$ Question: show that $\frac{m (m+1) (2m+1) + 6(m+1)^2}{6}=\frac{(2m+3)(m+2)(m+1)}{6}$
I must multiply by 6 on both sides, expand the brackets and collect like terms. Am I correct?
Edit notes: The original problem was posed as:
$$ \frac{(m+1) (2m+1) + 6(m+1)^2}{6}= \frac{(2m+3)+(m+2)(m+1)}{6} $$
which did not match the title. The edit provides the corrected question.
|
The description as given is correct. Multiplying both sides by $6$ yields
$$m(m+1)(2m+1) + 6(m+1)^2 = (2m+3)(m+2)(m+1)$$
Now what remains is to determine the value of both sides. Let $L_{m}$ be the left and $R_{m}$ be the right.
\begin{align}
L_{m} &= (2m+1)(m+1)m + 6(m+1)^2 \\
&= (2m^2 + 3m + 1)m + 6(m^2 + 2m +1) \\
&= 2 m^3 + 9 m^2 + 13 m + 6.
\end{align}
\begin{align}
R_{m} &= (2m+3)(m+2)(m+1) \\
&= 2 m^3 + 9m^2 + 13 m + 6.
\end{align}
This demonstrates that $L_{m} = R_{m}$, as required.
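Since both sides are cubic polynomials, agreement at more than three points already forces the identity; a quick Python loop confirms it:

```python
for m in range(-5, 6):
    lhs = m * (m + 1) * (2 * m + 1) + 6 * (m + 1) ** 2
    rhs = (2 * m + 3) * (m + 2) * (m + 1)
    assert lhs == rhs, m
print("identity holds for all tested m")
```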
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Module of finite length $\implies$ finite direct sum of indecomposable modules I'm trying to solve the following question.
Let $M$ be an $R$-module of finite length (i.e, both Artinian and Noetherian). Prove that it is isomorphic to a finite direct sum of indecomposable submodules.
I was thinking I could induct on the length, though I'm having a difficulty in the induction step.
Base case holds trivially. If for all modules of length $ <n $, the proposition holds, consider a module of length $n$, which has the following decomposition:$$ M \supset M_1 \supset \ldots \supset M_n \supset (0) $$
By the Induction hypotheses, $M_1$ is isomorphic to a finite direct sum of indecomposable modules.
Now, if I show that the following short exact sequence splits, then I'm done.
$$ 0 \rightarrow M_{1} \rightarrow M \rightarrow M/M_1 \rightarrow 0 $$
But I'm unable to do this. Any help will be appreciated.
|
If $M$ is indecomposable, you're done. Otherwise, $M=X\oplus Y$, with $X$ and $Y$ both nonzero.
Now prove that the lengths of $X$ and $Y$ are less than the length of $M$.
Hint: build composition series for $X$ and $Y$ and paste them up.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Tokens in boxes problem Tokens numbered $1,2,3...$ are placed in turn in a number of boxes. A token cannot be placed in a box if it is the sum of two other tokens already placed inside that box. How far can you reach for a given number of boxes?
For two boxes it's quite straightforward to see that it's impossible to place more than $8$ tokens (box $\#1$ contains $1,2,4,8$ and box $\#2$ contains $3,5,6,7$). Using a computer to try many solutions [1], I was able to show that $23$ is the maximum for $3$ boxes. After trying more than 10 billion solutions, I have found a solution going up to $58$ for four boxes, but I am unable to show that it's the optimal solution.
Is there a more clever way to find the answer?
[1] https://gist.github.com/joelthelion/89e1a98c73ea6784bcbd4d7450b0bd5e
|
These numbers are tabulated at https://oeis.org/A072842 – but not very far. All that's given there is 2, 8, 23, 66, 196, and that last number is a little shaky. Several references are given for further reading. Some bounds are given for large numbers of boxes, but the upper and lower bounds are very far apart.
So, if you find a good way to go beyond 5 boxes, you're onto a winner.
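For what it's worth, the two-box value $8$ is easy to reproduce with a tiny backtracking search in Python (assuming, as the question's example implies, that "two other tokens" means two distinct tokens):

```python
def max_tokens(boxes):
    """Largest n such that tokens 1..n can all be placed, by backtracking."""
    def can_place(n):
        contents = [set() for _ in range(boxes)]
        def dfs(t):
            if t > n:
                return True
            for box in contents:
                # t is blocked if it equals a + b for distinct a, b in the box
                if any(t - a in box and t - a != a for a in box):
                    continue
                box.add(t)
                if dfs(t + 1):
                    return True
                box.remove(t)
            return False
        return dfs(1)

    n = 1
    while can_place(n + 1):
        n += 1
    return n

print(max_tokens(2))  # 8
```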
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Student test statistic and self normalizing sum. I study the asymptotic distribution of self normalizing sums which are defined as
$S_n/V_n$ where
$S_n=\sum_{i=1}^n X_i$ and $V_n^2 = \sum_{i=1}^n X_i^2$
for some i.i.d RV's $X_i$.
Motivation to study such sums comes from the fact that the classical Student $T_n$ statistic could be expressed as:
$T_n(X)= \frac{\sum_{i=1}^n X_i}{\sqrt{\frac{n}{n-1}\sum_{i=1}^n (X_i-\overline X)^2}} = \frac{S_n/V_n}{\sqrt{\frac{1}{n-1}(n - (S_n/V_n)^2)}}$
From the paper I study (http://arxiv.org/pdf/1204.2074v2.pdf) I know that:
If $T_n$ or $S_n/V_n$ has an asymptotic distribution, then so does the other, and they coincide.
but it does not seem trivial to me. Can someone explain it?
I'm not sure whether it is just a matter of showing that the denominator equals $1$ in probability and using the Slutsky theorem?
https://de.wikipedia.org/wiki/Slutsky-Theorem
|
The authors reference a 1969 paper by Efron. The relevant section in Efron appears to be a reference to what at that time was an unpublished paper by Logan, Mallows, Rice and Shepp (see p.16, last paragraph of Efron).
However, the article you are reading gives the actual paper, published in 1973: Limit distributions of self-normalized sums. They reference back to Efron's 1969 paper, where he uses Fisher's finding that the t-statistic is asymptotically normal under the 0-mean hypothesis as long as the vector of random variates has rotational symmetry.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Definition of $\frac{dy}{dx}$ where we cannot write $y=f(x)$ Informally, I can say that "for a small unit change in $x$, $\frac{dy}{dx}$ is the corresponding small change in $y$". This is however a bit vague and imprecise, which is why we have a formal definition, such as:
Where $y=f(x)$, we define $$\frac{dy}{dx}=\lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{h}. $$
$$$$
But what about if we are unable to express $y$ as a function of $x$? (To pick a random example, say we have $\sin (x+y) + e^y = 0$.)
Then what would the general definition of $\frac{dy}{dx}$ be? (How would the above definition be modified?)
$$$$
To elaborate, applying the $\frac{d}{dx}$ operator to my random example, we get $$\left(1+\frac{dy}{dx}\right)\cos(x+y)+e^{y}\frac{dy}{dx}=0 \implies \frac{dy}{dx}=\frac{-\cos(x+y)}{\cos(x+y)+e^{y}}.$$
Again, I can say that "for a small unit change in $x$, $\frac{dy}{dx}$ is the corresponding small change in $y$". But what would the corresponding precise, formal definition be?
|
Apparently you are mixing two things: "definition of derivative" and "a practical method of calculate derivative in a certain context". The limit definition is not supposed to be used for calculating derivatives of complicated functions, rather it is supposed to be used to prove theorems and these theorems turn out to be the basis of techniques of differentiating complicated functions.
Thus when you are applying the operator $d/dx$ to $$\sin(x + y) + e^{y} = 0\tag{1}$$ you are in effect assuming that the equation $(1)$ defines a genuine function $y = f(x)$ of $x$ in a certain interval, and then you are using the basic rules of differentiation (mainly the sum rule and chain rule).
Note that in general not every equation of type $f(x, y) = 0$ leads to a function $y = g(x)$. In case of equation $(1)$ there is a genuine function $y = f(x)$ and hence on the basis of this assumption we can differentiate the equation $(1)$ and find $f'(x)$ with some algebraic manipulation.
In many cases it is not possible/desirable to use the limit definition of the derivative to calculate it. The example you gave is one such case, because here we don't have an explicit formula for $f(x)$ in terms of elementary functions. Note that the definition of $f'(x)$ does not assume that there should be an explicit formula to calculate $f(x)$ in terms of elementary functions, and hence we can very well talk about $f'(x)$ where $y = f(x)$ is given by equation $(1)$ in an implicit manner.
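One can also check the formula from the question numerically, treating $(1)$ as implicitly defining $y(x)$ near a solution point. The Python sketch below (the bracket values are mine, chosen so that $\sin(x+y)+e^y$ is strictly increasing in $y$ on the bracket) solves for $y$ by bisection and compares a central difference quotient against $-\cos(x+y)/(\cos(x+y)+e^y)$:

```python
import math

def solve_y(x, lo=-3.5, hi=-2.5):
    """Solve sin(x+y) + e^y = 0 for y by bisection; on this bracket the
    left-hand side is strictly increasing in y, so the root is unique."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if math.sin(x + mid) + math.exp(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x = 3.0
y = solve_y(x)
formula = -math.cos(x + y) / (math.cos(x + y) + math.exp(y))

h = 1e-6
numeric = (solve_y(x + h) - solve_y(x - h)) / (2 * h)
print(formula, numeric)  # agree to several decimal places
```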
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1792878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Power Diophantine equation: $(a+1)^n=a^{n+2}+(2a+1)^{n-1}$ How to solve following power Diophantine equation in positive integers with $n>1$:$$(a+1)^n=a^{n+2}+(2a+1)^{n-1}$$
|
If the equation holds, then $(a+1)^n>(2a+1)^{n-1}$ and $(a+1)^n>a^{n+2}$.
Multiplying gives $(a+1)^{2n} > (2a+1)^{n-1}a^{n+2}$. This can be rewritten as $(a^2+2a+1)^n > (2a^2+a)^{n-1} a^3$. However, if $a\geq 3$, then $a^2+2a+1 < 2a^2+a$ and $a^2+2a+1 <a^3$, which gives a contradiction.
Hence $a=1$ or $a=2$. The first gives $2^n=1+3^{n-1}$. With induction, one may prove that the right hand side is larger for $n>2$. This only gives the solution $(1,2)$.
The second gives $3^n=2^{n+2}+5^{n-1}$. With induction, one may prove that the right hand side is larger for $n>3$. One can also check that there is no equality for $n=1,2,3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1793142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
recurrence relation number of bacteria Assume that growth in a bacterial population has the following properties:
*
*At the beginning of every hour, two new bacteria are formed for each bacteria that lived in the previous hour.
*During the hour, all bacteria that have lived for two hours die.
*At the beginning of the first hour, the population consists of 100
bacteria.
Derive a recurrence relation for the number of bacteria.
I know that if the bacteria didn't die we would have $A_{n} = 2A_{n - 1}$, but now I have no idea what to do.
|
We know $A_1 = 100$, and in the second hour there will be $200$ newborn and $100$ that will die the third hour. So, $A_2 = 300$. Those $300$ alive in the second hour mean that in the third hour there will be $600$ newborn and $200$ still alive; $A_3 = 800$.
Following this logic, we can define:
$$A_n = 2 \, A_{n-1} + 2 \, A_{n-2}.$$
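An age-structured simulation in Python reproduces these numbers and matches the recurrence:

```python
def simulate(hours):
    """Track (newborns, one-hour-olds); two-hour-olds die off each hour."""
    newborn, old = 100, 0                  # hour 1: 100 fresh bacteria
    pops = [newborn + old]
    for _ in range(hours - 1):
        newborn, old = 2 * (newborn + old), newborn
        pops.append(newborn + old)
    return pops

pops = simulate(6)
print(pops)  # [100, 300, 800, 2200, 6000, 16400]
for n in range(2, len(pops)):
    assert pops[n] == 2 * pops[n - 1] + 2 * pops[n - 2]
```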
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1793349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Prove the Radical of an Ideal is an Ideal I am given that $R$ is a commutative ring, $A$ is an ideal of $R$, and $N(A)=\{x\in R\,|\,x^n\in A$ for some $n\}$.
I am studying with a group for our comprehensive exam and this problem has us stuck for two reasons.
FIRST - We decided to assume $n\in\mathbb{Z}^+$ even though this restriction was not given. We decided $n\ne 0$ because then $x^0=1$ and we are not guaranteed unity. We also decided $n\notin\mathbb{Z}^-$ because $x^{-1}$ has no meaning if there are no multiplicative inverses. Is this a valid argument?
SECOND - We want to assume $x,\,y\in N(A)$ which means $x^m,\,y^n\in A$ and use the binomial theorem to expand $(x-y)^n$ which we have already proved is valid in a commutative ring and show that each term is in A so $x-y$ is in $N(A)$. The biggest problem is how to approach the $-y$ if we are not guaranteed unity. Does anyone have any suggestions?
Thank you in advance for any insight.
|
You seem to have everything else, so here’s one approach to the question of why $x\in N(A)$ implies that $-x\in N(A)$.
Since $(-a)b +ab =0b=0$, we see that $(-a)b=-(ab)$, and so $(-x)^n=x^n$ if $n>0$ is even, and it’s $-x^n$ if $n$ is odd. In either case, if $x^n\in A$, then $(-x)^n\in A$, so that $x\in N(A)\Longrightarrow-x\in N(A)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1793633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability that all $n$ colors are selected in $m$ trials? I have a concrete problem, say, there are $n$ different balls ($n$ different colors to distinguish them), each ball will be selected uniformly at random. The way I choose a ball is that I randomly get a ball, and write down its color.Then I put this ball back to the collection and choose a ball again. It won't stop until I get $m$ recorded colors. Suppose $m \geq n$. How many ways are there such that all $n$ colors were selected? And what is the probability that all $n$ different colors are recorded in the consecutive $m$ selections? My guts tell me there are $n^m$ combinations in total, but my rusty math halts me there. Thank you in advance.
|
There are $n^m$ possible colour strings, all equally likely. Now we count the favourables. The favourables are the functions from a set of $m$ elements to a set of $n$ elements that are onto.
The number of such functions is given by $n!$ times the Stirling Number of the Second Kind often denoted by $S(m,n)$. There is no known closed form for $S(m,n)$, but there are useful recurrences.
So our probability is $\frac{n!S(m,n)}{n^m}$.
Remark: Another way to count the favourables is to use Inclusion/Exclusion. There are $n^m$ functions. There are $(n-1)^m$ functions that "miss" a particular number, so an estimate for the number of favourables is $n^m-\binom{n}{1}(n-1)^m$. But this double-subtracts the functions that miss two elements, so an improved estimate for the number of favourables is $n^m-\binom{n}{1}(n-1)^m+\binom{n}{2}(n-2)^m$. Continue.
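Both counts are easy to compare in Python — the Inclusion/Exclusion sum against a brute-force enumeration of all $n^m$ colour strings for small $m,n$:

```python
from itertools import product
from math import comb

def surjections(m, n):
    """Inclusion-exclusion count of onto maps from an m-set to an n-set."""
    return sum((-1) ** k * comb(n, k) * (n - k) ** m for k in range(n + 1))

def surjections_brute(m, n):
    return sum(1 for f in product(range(n), repeat=m) if len(set(f)) == n)

for m in range(1, 7):
    for n in range(1, m + 1):
        assert surjections(m, n) == surjections_brute(m, n)

m, n = 6, 3
print(surjections(m, n) / n ** m)  # probability all 3 colours appear in 6 draws
```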
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1793701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Trying to Understand Van Kampen Theorem
Theorem. Let $X$ be the union of two path-connected open sets $A$ and $B$ and assume that $A\cap B\neq \emptyset$ is simply-connected. Let $x_0$ be a point in $A\cap B$ and all fundamental groups will be written with respect to this base point. Let $\Phi:\pi_1(A)\sqcup \pi_1(B)\to \pi_1(X)$ be the natural homomorphism induced from the maps $\pi_1(A), \pi_1(B)\to \pi_1(X)$ (Here '$\sqcup$' denotes the free product). Then $\Phi$ is an isomorphism.
(The above theorem is given in more general form in Hatcher's Algebraic Topology, but for me the above special case suffices.)
I can see that $\Phi$ is surjective. So we need to address the injectivity of $\Phi$.
Write $G=\pi_1(A)\sqcup \pi_1(B)$.
Suppose $\gamma$ is a loop in $A$ based at $x_0$. Think of $[\gamma]$ as member of $G$. Assume that $\Phi([\gamma])$ has the trivial homotopy class in $X$. In order for $\Phi$ to be injective, it is necessary that $[\gamma]$ be the identity element of $G$.
My question is whether or not the following statement is correct:
Statement. $[\gamma]$ is the identity element of $G$ if and only if $\gamma$ has trivial homotopy class in $\pi_1(A)$.
Please check my last statement, for if the above is wrong then it would mean I have to go back to free products.
|
If I am understanding you correctly, you are asking whether the canonical inclusion map $\pi_1(A)\to \pi_1(A)\sqcup\pi_1(B)$ is injective. This is true, and follows from the concrete description of elements of the free product as reduced words. Explicitly, an element of $\pi_1(A)\sqcup\pi_1(B)$ is a finite sequence $(a_1,a_2,\dots, a_n)$ where the $a_i$ alternate between being non-identity elements of $\pi_1(A)$ and non-identity elements of $\pi_1(B)$. The inclusion map $\pi_1(A)\to\pi_1(A)\sqcup\pi_1(B)$ then sends $a\in\pi_1(A)$ to the sequence of length $1$ consisting only of $a$ if $a$ is not the identity, and sends the identity to the sequence of length $0$ (which is the identity of $\pi_1(A)\sqcup\pi_1(B)$). This map is obviously injective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1793853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|