Plotting a line without negative values Is it possible to have an equation of sine without negative values? It is not the same as sin(abs(x)), as negative values would be reflected in the x-axis. I'm looking for an equation where negative values of sin are converted to 0, looking not too dissimilar to a Toblerone.
|
What about $f(x)=\max(\sin(x),0)$?
Edit: The function given by @RideTheWavelet in his comment is precisely this one, stated in different form.
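If you want to see the shape, here is a minimal numpy/matplotlib sketch of $f(x)=\max(\sin(x),0)$ (the choice of plotting library is just an assumption for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4 * np.pi, 1000)
y = np.maximum(np.sin(x), 0)  # f(x) = max(sin(x), 0): negative lobes clipped to 0

plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("max(sin x, 0)")
plt.show()
```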
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2177525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding a threshold between two accumulations My question:
I have a list of numbers. These numbers come from two accumulations; for each accumulation there is some unknown number of values around a specific average I don't know.
How can I find a threshold between those two accumulations, so I can say for every number if it's in accumulation $1$ or $2$?
Calculating the average of the two values forming the biggest jump would not work; it would be too imprecise.
Almost no numbers are the same, so it's originally not a bimodal distribution.
A computer will ultimately calculate this, so the method of doing it can be long.
The data is made by a human pressing a button for a long or a short time. The computer should detect whether a long or a short press is meant, independently of the absolute length of the press.
Thanks for your advice.
|
I already have an idea:
Maybe I could "group" the numbers by reducing their "resolution" and then calculate the threshold of the now bimodal distribution. But this "resolution" has to be right: if it's too small, the result would be too imprecise; if it's too high, the result could be totally wrong.
I'm interested in your ideas :)
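One concrete way to implement this grouping idea without hand-picking a "resolution" is a two-cluster 1-D $k$-means. This is a sketch only; the press durations are made up for illustration, and it assumes both groups are nonempty:

```python
import numpy as np

def two_cluster_threshold(values, iters=100):
    """1-D k-means with k=2: returns the midpoint between the two cluster means."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()          # initialise the two centres at the extremes
    for _ in range(iters):
        split = (lo + hi) / 2.0
        lo_new = v[v <= split].mean()  # mean of the "short" group
        hi_new = v[v > split].mean()   # mean of the "long" group
        if lo_new == lo and hi_new == hi:
            break
        lo, hi = lo_new, hi_new
    return (lo + hi) / 2.0

# hypothetical press durations (ms): short presses near 150, long presses near 600
presses = [140, 160, 155, 620, 580, 148, 610, 590, 152]
print(two_cluster_threshold(presses))  # threshold lands between the two groups
```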
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2177600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Interpretation of Kolmogorov's Law of Large Numbers and Convergence My lecture slides begin,
Consider a sequence of random numbers $X_1,X_2,...,X_n$ (the lecturer here stated
that random numbers and random variables - which in my understanding
are maps from a sample space to the real line - can be used
interchangeably) all sampled from the same distribution which has mean (expectation) $\mu$.
Then the arithmetic average of the sequence
$S_n=\frac{1}{n}(X_1+X_2+...+X_n)\approx E[X]$
and
$S_n=\frac{1}{n}(X_1+X_2+...+X_n)\rightarrow E[X]$ as $n\rightarrow \infty$
It then goes on to state almost sure convergence
$X_n \rightarrow X \leftrightarrow P(\omega\in\Omega\mid X_n(\omega)\rightarrow X(\omega)$ as $n\rightarrow \infty)=1.$
My question is a conceptual one: what do we mean by $X_n\rightarrow X$ and $X_n(\omega)\rightarrow X(\omega)$?
For that matter what do we mean by $X_n$?
Are these approximations to the distribution $X$ after the $n$th trial? If so, when summing $S_n$ are we summing "maps from a sample space to the real line"? How would one add a map?
Am I correct in reading almost sure convergence as "$X_n$ becomes the distribution $X$ if and only if the probability of an outcome is 1 such that $X_n$ evaluated at that outcome becomes the value of $X$ evaluated at that outcome as we take successive approximations of $X$, $X_n$"
Please help!
|
I take it that the random variables $X_1,X_2,\ldots$ are assumed to be independent - this is an important assumption.
For a given element of the sample space, $\omega$, the sequence $X_1(\omega),X_2(\omega),\ldots$ is a sequence of real numbers. Saying that $X_n(\omega)\to X(\omega)$ is shorthand for saying that the limit
$$
\lim_{n\to\infty}X_n(\omega)
$$
exists and equals $X(\omega)$. That is, for a given $\omega$ and for every $\epsilon>0$, there exists a number $N(\omega,\epsilon)$ such that for all $n>N(\omega,\epsilon)$ we have $|X_n(\omega)-X(\omega)|<\epsilon$.
Saying $X_n\to X$ is shorthand for saying that $X_n(\omega)\to X(\omega)$ for all $\omega$ in the sample space.
Each $X_n$ is a random variable, that is, a map from the sample space to $\mathbb R$. However, we are also assuming that these random variables are independent. That means we are assuming that there is a common sample space $\Omega$ such that $X_1,X_2,\ldots$ are all functions from $\Omega$ to $\mathbb R$, meaning that for every $n$ and for every sequence of real numbers $a_1,\ldots,a_n$ the events $\{X_1<a_1\},\ldots,\{X_n<a_n\}$ are independent.
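A quick simulation may help make the pointwise statement concrete. This sketch (the distribution and sample size are arbitrary choices) shows one realisation of the sequence $S_n$ drifting towards the mean:

```python
import numpy as np

rng = np.random.default_rng(0)
draws = rng.exponential(scale=2.0, size=100_000)  # i.i.d. samples with mean mu = 2

# S_n = (X_1 + ... + X_n) / n for n = 1, ..., N: one realisation of the sequence
running_means = np.cumsum(draws) / np.arange(1, len(draws) + 1)
for n in (10, 1_000, 100_000):
    print(n, running_means[n - 1])  # drifts towards mu = 2 as n grows
```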
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2177739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can a topology $\tau$ on $\mathbb R$ be defined such that $(\mathbb R,\tau)$ is a compact Hausdorff space? Can a topology $\tau$ on $\mathbb R$ be defined such that $(\mathbb R,\tau)$ is compact Hausdorff? Obviously such a topology must be normal, but I am unable to make any further conclusions. Please help. Thanks in advance.
|
Max's answer is fine: If $f:X \rightarrow Y$ is a bijection and $Y$ has a compact Hausdorff topology $\mathcal{T}$ we can define $\mathcal{T}_X = \{ f^{-1}[O] : O \in \mathcal{T}\}$ as a topology on $X$. This makes $f$ a homeomorphism, and any properties that $Y$ has, $X$ has too in this topology.
But to give a concrete example as well:
The Fort topology is an easy example: $$\mathcal{T} = \{U \subset \mathbb{R}: 0 \notin U \text { or } \mathbb{R} \setminus U \text { is finite}\}$$
is a compact Hausdorff topology on the reals. (It is in fact homeomorphic to the one-point compactification of the discrete reals.) Compactness is easy to check: any open cover contains an open set that contains $0$, and its complement can only be finite, etc. Hausdorffness is seen by some case distinctions (is one of the points $0$ or not?).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2177818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find the volume of a revolved cycloid I have to find the volume of the area surrounded by the first arc of the cycloid given by $x(t) = a(t - \sin t), y(t) = a(1 - \cos t)$ when revolved around the $y$-axis. I know that:
$$ V = \pi \int\limits_{a}^{b}[f(x)]^2dx$$
The problem here is that I do not know which bounds I need to take to integrate.
I did the following:
$$ V = \pi \int\limits_{0}^{2\pi a}[x(t)]^2dy(t) = \pi \int\limits_{0}^{2\pi a}a^3(t - \sin t)^2 \cos t\, dt$$
but this yields the wrong result. I chose the bounds this way because when $t = 2\pi a$, then $y = 0$ again. Can someone provide any help?
EDIT: I would like to know a solution using the formula I listed above.
|
Your formula works only if a line parallel to the $x$-axis intersects the curve at a single point. But that is not the case here, because for every $y$ between $0$ and $2a$ there are two intersections.
The integral
$$
V_1 = \pi \int_{0}^{2a}x_{int}^2 dy
= \pi \int_{0}^{\pi}a^3(t - \sin t)^2 \sin t\, dt
$$
gives the volume of the solid between the $y$-axis and the rotated inner half of the cycloid (notice that $dy=a\sin t\,dt$).
The integral
$$
V_2 = \pi \int_{0}^{2a}x_{ext}^2 dy
= -\pi \int_{\pi}^{2\pi}a^3(t - \sin t)^2 \sin t\, dt
$$
gives the volume of the rotated outer half-cycloid. The volume of the rotated cycloid is then
$$
V=V_2-V_1=-\pi \int_{0}^{2\pi}a^3(t - \sin t)^2 \sin t\, dt.
$$
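As a numeric sanity check, one can evaluate these integrals with scipy; for $a=1$ the result agrees with the closed form $6\pi^3 a^3$ (the closed form is my own evaluation of the integral above, stated here as a cross-check):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0
integrand = lambda t: (t - np.sin(t))**2 * np.sin(t)

V1 = np.pi * a**3 * quad(integrand, 0, np.pi)[0]           # inner solid
V2 = -np.pi * a**3 * quad(integrand, np.pi, 2 * np.pi)[0]  # outer solid
V = V2 - V1
print(V, 6 * np.pi**3)  # both approximately 186.04 for a = 1
```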
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2177983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Definition of hyperplane in machine learning On this answer the hyperplane, presumably in a perceptron classifier, is described as the dot product $\langle \vec{w_{x}},\vec{x} \rangle$, where $\vec{w_x}$ is presumably the vector of weights, and $\vec x$ an example in the training set.
My very tentative understanding was that the hyperplane is the plane defined by the examples in the training set (or possibly by the vector of weights); in other words, a vectorial or geometric (hyperplane) object, rather than a scalar.
Can I get an explanation? Thank you in advance.
|
The hyperplane is defined by the equation $\langle \vec{w},\vec{x} \rangle = 0$. This hyperplane partitions the training set into two sets, $\{\vec x\mid \langle\vec{w},\vec{x}\rangle \ge 0\}$ and $\{\vec x\mid \langle\vec{w},\vec{x}\rangle <0\}$.
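A tiny numpy sketch of this decision rule (the weight vector and training examples are made up for illustration):

```python
import numpy as np

w = np.array([2.0, -1.0])          # hypothetical weight vector
X = np.array([[1.0, 1.0],          # a few hypothetical training examples
              [0.2, 3.0],
              [2.0, 0.5]])

scores = X @ w                         # <w, x> for each example: one scalar per example
labels = np.where(scores >= 0, 1, -1)  # which side of the hyperplane <w, x> = 0
print(scores, labels)
```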
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
eigenvalues of $Q+\mu cc^T$ as $\mu\rightarrow +\infty$ Assume $Q$ is an $n\times n$ SPD matrix and $c$ is an $n\times 1$ vector. Let $\mu>0$; then one of the eigenvalues of $Q+\mu cc^T$ goes to infinity as $\mu\rightarrow +\infty$ and the other eigenvalues remain bounded.
Any hint on how to prove the claim?
|
The characteristic equation of $Q+\mu cc^T$ is:
$$\det((Q + \mu cc^T)-\lambda I)=\det((Q-\lambda I) + \mu cc^T)=0.$$
It can be transformed, using matrix-determinant lemma (https://en.wikipedia.org/wiki/Matrix_determinant_lemma), into:
$$\left(1+\mu c^T(Q-\lambda I)^{-1}c\right)\det(Q-\lambda I)=0$$
For $\lambda$ not an eigenvalue of $Q$ we have $\det(Q-\lambda I)\neq0$, so the equation is equivalent to:
$$\tag{1}c^T(Q-\lambda I)^{-1}c=-\frac{1}{\mu}.$$
Let us now diagonalize $(Q-\lambda I)^{-1}$.
We know that $Q$ is diagonalizable under the form $Q=R^{-1} D R=R^T D R$ with
*
*$R$ invertible, because $Q$ is SPD, with the essential property that $R$ is orthogonal, i.e. $R^{-1}=R^T$, and
*$D:=diag(\lambda_1,...\lambda_n)$.
We then have $Q-\lambda I=R^{-1} D R-\lambda R^{-1}R$, i.e., $Q-\lambda I=R^{-1}(D-\lambda I)R.$
Thus,
$$\tag{2}(Q-\lambda I)^{-1}=R^{-1}(D-\lambda I)^{-1}R=R^{T}(D-\lambda I)^{-1}R.$$
Inserting (2) into (1), and setting $v:=Rc$, we get:
$$\tag{3}v^T(D-\lambda I)^{-1}v=-\frac{1}{\mu}$$
Let $v_k$ be the coefficients of $v$. (3) is equivalent to:
$$\tag{4}\underbrace{\sum_{k=1}^n \dfrac{v_k^2}{\lambda_k - \lambda}}_{f(\lambda)}=-\frac{1}{\mu}$$
Now, a graphical study of a particular case, explaining how the roots behave, for the sake of understanding:
We have taken a particular case with $n=3$,
$$f(\lambda):=\dfrac{4}{1-\lambda}+\dfrac{2}{3-\lambda}+\dfrac{1}{6-\lambda}$$
with $f'(\lambda)>0$ thus $f$ increasing.
This curve has the $\lambda$-axis for its horizontal asymptote and possesses $3$ vertical asymptotes (one for each pole of $f$). Of course, what we are going to say is immediately generalizable (For example, the fact that $f'(x)>0$). The roots of equation (4) are the abscissas of intersection points between the curve and the horizontal (red) line with equation $y=-\frac{1}{\mu}$. As $\mu \to +\infty$, $-\frac{1}{\mu} \to 0_{-}$: the red line intersects the curve
*
*at "ordinary" points (all but the last) that gently swift to the right, staying in their "confinement intervals", here $(1,3)$ or $(3,6)$,
*at an "extraordinary point", the rightmost one, whose abscissa tends to $+\infty$.
Remark: This technique of resolution is called the "secular equation method"; see for example pages 10 and 11 of (http://www.netlib.org/lapack/lawnspdf/lawn69.pdf) or (http://www2.cs.cas.cz/harrachov/slides/Golub.pdf).
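A short numeric illustration of the claim (the matrices below are randomly generated just for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)        # a generic SPD matrix
c = rng.standard_normal((n, 1))

for mu in (1.0, 1e2, 1e4, 1e6):
    eigs = np.linalg.eigvalsh(Q + mu * (c @ c.T))
    print(mu, eigs)  # the largest eigenvalue grows like mu * ||c||^2; the rest stabilise
```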
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Does domination of exponential-factorial by tetration generalize to higher-order hyperoperations? Let $\star$ be any operation in the sequence of hyperoperations $(\text{Succ},+,\times,\uparrow,\uparrow\uparrow,\ldots)$, and consider the $\star$-factorial function defined as follows on the positive integers:
$$\begin{align}f_\star(n) &:= n\star(n-1)\star(n-2)\ldots \star 1\quad (n\text{ operands})
\end{align}$$
where a right-to-left order of evaluation is assumed; e.g. $f_\uparrow(3) = 3\uparrow 2\uparrow 1 = 3\uparrow (2\uparrow 1)$.
(With appropriate treatment for an argument of $n=0$, the definition could be extended to the nonnegative integers, but it would not be useful to do so in the present context.)
For each $\star$ operation, we want to compare the growth-rate of $f_\star$ to the following function $g_\star$:
$$\begin{align}g_\star(n) &:= b\star' n\\
&= b\star b\star b\ldots \star b\quad (n+1\text{ operands})\quad\text{ if }\star =\text{Succ}\\
&= b\star b\star b\ldots \star b\quad (n\text{ operands})\quad\text{ if }\star \in(+,\times,\uparrow,\uparrow\uparrow,\ldots)
\end{align}$$
where $\star'$ is the hyperoperation just after $\star$, and $b\ge 2$ is some given fixed integer. We use the concept of eventual domination; i.e., for two functions $f,g$ on the positive integers, the notation $f<^*g$ (resp. $f>^*g$) means that $f(n)<g(n)$ (resp. $f(n)>g(n)$) for all sufficiently large $n$.
Examples
$$\begin{align}
f_\text{Succ}(n)&=n & \color{blue}{<^*} \ \ g_\text{Succ}(n)=b+n\quad\quad\quad\quad\text{ if }b\ge 2\\
f_+(n)&=\frac{n(n+1)}{2} & \color{red}{>^*} \ \ g_+(n)=b\times n\quad\quad\quad\quad\text{ if }b\ge 2\\
f_\times(n)&=n! & \color{red}{>^*} \ \ g_\times(n)=b\uparrow n\quad\quad\quad\quad\text{ if }b\ge 2\\
f_\uparrow(n)&=n^{(n - 1)^{(n - 2)^{\cdots^1}}} & \color{blue}{<^*} \ \ g_\uparrow(n)=b\uparrow\uparrow n\quad\quad\quad\quad\text{ if }b\ge 3
\end{align}$$
The first two examples above are easily worked out, while the third example is a well-known result for the standard factorial function. The fourth example follows from properties of the exponential factorial function proved by @Deedlit. (Note that in the fourth example, domination would occur in the reverse order if $b=2$; i.e., $f_\uparrow(n) \color{red}{>^*} 2\uparrow\uparrow n$.)
Question 1: Is it the case that if $b=3$ then $f_\star \color{blue}{<^*} g_\star$ for all $\star\in(\uparrow,\uparrow\uparrow,\ldots)$? I.e., in terms of Knuth's uparrows, do we have $f_{\uparrow^k}(.)\color{blue}{<^*}3\uparrow^{k+1}(.)$ for all $k\ge 1$?
Question 2: If Question 1 has a negative answer, then is it the case that for each $\star\in(\uparrow,\uparrow\uparrow,\ldots)$ there exists a $b$ (depending on $\star$) such that $f_\star \color{blue}{<^*} g_\star$? I.e., is there, for each $k\ge 1$, a $b_k$ such that $f_{\uparrow^k}(.)\color{blue}{<^*}b_k\uparrow^{k+1}(.)$?
|
For convenience, we will let $a \uparrow^b 0 = 1$ and $a\uparrow^0 b = ab$.
As in the proof for the exponential case, we can prove that $3 \uparrow^{k+1} (n-2) < f_{\uparrow^k}(n) < 3 \uparrow^{k+1} (n-1)$ for $n \ge 2$. The left inequality is trivial, so we will prove the right hand side.
First, we need a lemma:
Lemma. $(n+1) \uparrow^k m > n\uparrow^k m + n$ for $n, k \ge 1, m\ge 2$.
Case $k = 1$: $(n+1)^m > n^m + m\cdot n^{m-1} > n^m +n$.
Inductive step:
$$(n+1)\uparrow^k m = (n+1) \uparrow^{k-1} [(n+1)\uparrow^k (m-1)] > (n+1)\uparrow^{k-1}(n \uparrow^k (m-1)) $$ $$> n \uparrow^{k-1}(n \uparrow^k (m-1)) + n \text{ (by inductive hypothesis)} = n \uparrow^k m + n$$
Now for the main proof:
Statement. $(n+2) \uparrow^k f_{\uparrow^k}(n) \le 3 \uparrow^{k+1} n$ for $n \ge 1$.
Base case: $3 \uparrow^k f_{\uparrow^k}(1) = 3 \uparrow^k 1 = 3 = 3\uparrow^{k+1} 1$.
Inductive step:
$$ 3 \uparrow^{k+1} (n+1) = 3\uparrow^k (3\uparrow^{k+1} n)$$ $$ \ge 3\uparrow^k ((n+2) \uparrow^k f_{\uparrow^k}(n)) > 3 \uparrow^k ((n+1)\uparrow^k f_{\uparrow^k}(n) + n+1) \text{(by the lemma)} > (3\uparrow^k (n+1)) \uparrow^k [(n+1)\uparrow^k f_{\uparrow^k}(n)]$$ (by the Knuth Arrow theorem) $$> (n+3) \uparrow^k ((n+1) \uparrow^k f_{\uparrow^k}(n)) = (n+3) \uparrow^k f_{\uparrow^k}(n+1).$$
This proves the statement. But then,
$$3 \uparrow^{k+1}(n-1) \ge (n+1) \uparrow^k f_{\uparrow^k}(n-1) > n \uparrow^k f_{\uparrow^k}(n-1) = f_{\uparrow^k}(n).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Number of strings generatable I really do not know how to start with the conditions of this problem:
A H64 string is a string with 64 characters, each character must be
chosen from 16 hexadecimal characters (0 - 9, a - f). However, no
character can appear over 10 times. How many H64 strings can be
generated under the above condition?
The number of unrestricted strings with 64 characters is $16^{64}$; how can we eliminate the cases where a character appears more than 10 times?
|
This is not a full answer. But I think there are enough hints to work it out.
Let $A_k$ denote the set of such valid H64 strings that make use of $k$ distinct hexdigits. By the pigeonhole principle, the given condition leads to the constraint $k\ge7$ (six symbols can cover at most $60$ of the $64$ positions). For $k\neq l$ the two sets of strings $A_k$ and $A_l$ are clearly disjoint.
So the final answer is $\sum_{k=7}^{16} |A_k|$.
Let me indicate how to find the cardinality $|A_k|$. First you have to decide which $k$ symbols out of the $16$ are going to be used in a single H64 string. This shows that $|A_k|$ is $16\choose k$ times the number of H64 strings that can be formed using every one of the symbols $1,2,\ldots, k$.
You can relate this to the count of monomials in $k$ variables of total degree $64$, but the difference is that $x^2yz$ and $xyxz$ are to be counted as different monomials. At this stage I am not able to come up with a good approach to proceed further.
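One computational route (a different technique from the monomial view above) is the exponential generating function: the number of valid strings is $64!\,[x^{64}]\bigl(\sum_{j=0}^{10}x^j/j!\bigr)^{16}$, which a computer can evaluate exactly. A sketch in Python with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

MAX_REP, SYMBOLS, LENGTH = 10, 16, 64

# coefficients of sum_{j=0}^{10} x^j / j!  (per-symbol exponential generating function)
base = [Fraction(1, factorial(j)) for j in range(MAX_REP + 1)]

poly = [Fraction(1)]                      # running product, truncated at degree LENGTH
for _ in range(SYMBOLS):
    new = [Fraction(0)] * min(len(poly) + MAX_REP, LENGTH + 1)
    for i, p in enumerate(poly):
        for j, b in enumerate(base):
            if i + j <= LENGTH:
                new[i + j] += p * b
    poly = new

count = poly[LENGTH] * factorial(LENGTH)  # 64! * [x^64] of the product
print(int(count))                         # number of valid H64 strings
print(16**64 - int(count))                # strings excluded by the repetition cap
```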
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How can I find a closed form for this sum $\sum_{k=1}^n{n \choose k}p^{k+1}(1-p)^{n-1}$? Based on some guess and check I think that
$$\sum_{k=1}^n{n \choose k}p^{k+1}(1-p)^{n-1}=p(1-p)^{n-1}((p+1)^n-1)$$
Where $0\leq p \leq 1$
but I'm not sure how to get from one to the other, or if it is truly correct.
|
By the Binomial Theorem, we have that
$$\begin{align}
\sum_{k=1}^{n}\binom{n}{k}p^{k+1}(1-p)^{n-1}&=p(1-p)^{n-1}\sum_{k=1}^{n}\binom{n}{k}p^{k}\\
&=p(1-p)^{n-1}\sum_{k=1}^{n}\binom{n}{k}p^{k}(1)^{n-k}\\
&=p(1-p)^{n-1}\left[\sum_{k=0}^{n}\binom{n}{k}p^{k}(1)^{n-k}-1\right] \quad \text{ since }\binom{n}{0}p^{0}(1)^{n-0}=1 \\
&= p(1-p)^{n-1}[(1+p)^{n}-1]\text{.}
\end{align}$$
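A quick numeric spot-check of the identity (the values of $n$ and $p$ below are chosen arbitrarily):

```python
from math import comb

def lhs(n, p):
    return sum(comb(n, k) * p**(k + 1) * (1 - p)**(n - 1) for k in range(1, n + 1))

def rhs(n, p):
    return p * (1 - p)**(n - 1) * ((1 + p)**n - 1)

for n in (1, 5, 12):
    for p in (0.0, 0.3, 0.7, 1.0):
        assert abs(lhs(n, p) - rhs(n, p)) < 1e-12
print("identity holds on the tested values")
```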
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why does $f$ strictly increasing implies the triangular inequality for this metric? Assume $(X,d)$ is a metric space, and define a new metric $\tilde{d}$ on $X$.
Set $\tilde{d}(x,y) = \frac{d(x,y)}{1+d(x,y)}$. With some manipulation, and since $d$ is a metric, I managed to show that $\tilde{d}$ satisfies the triangle inequality. But I read that one can consider the function $f(t) = \frac{t}{1+t}$ and show that it is strictly increasing (no problem). But they argue that the triangle inequality follows from $f$ being strictly increasing; why?
I guess I could pick three values $f(t_1), f(t_2), f(t_3)$, which all lie in $\mathbb{R}$; then on $\mathbb{R}$ we have $p(x,y) = |x-y|$ and thus $p(f(t_1),f(t_2)) \leq p(f(t_1), f(t_3)) + p(f(t_3), f(t_2))$; is that it?
|
Let $u=d(x,z)$, $v=d(x,y)$, $w=d(y,z)$. It is enough to check the implication $$u\le v+w\implies f(u)\le f(v)+f(w).$$ Since $f$ is increasing we have $f(u)\le f(v+w)$. An elementary computation shows that $f(v+w)\le f(v)+f(w)$: indeed, $$f(v+w)=\frac{v}{1+v+w}+\frac{w}{1+v+w}\le\frac{v}{1+v}+\frac{w}{1+w}=f(v)+f(w),$$ which finishes this argument.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do you quickly know that this matrix is diagonalisable (characteristic polynomial given)? We have matrix $A=\begin{pmatrix}
3 & 4 & -3\\
2 & 7 & -4\\
3 & 9 & -5
\end{pmatrix}$
The characteristic polynomial is $-(\lambda-2)^{2} \cdot (\lambda-1)=0$
Now I'd like to know a quick way to determine whether this matrix is diagonalisable. To be more precise, I want to know whether this matrix has as many eigenvalues as its own size (it's a $3 \times 3$ matrix, so we need at least $3$ eigenvalues).
We see that $\lambda_{1}=2$ and $\lambda_{2}=1$ are eigenvalues. But how do I know without further long calculation that $\lambda_{1}=2$ is a double eigenvalue? I only know it because I did further calculations (polynomial long division) but how can I know it without wasting more time?
|
I think that there's a simple test that requires really little further computation. If you want to check whether the eigenvalue $\lambda$ has a two-dimensional eigenspace, it is sufficient to see whether $A-\lambda I$ has rank $n-2$.
In your case you need $A-2I$ to have rank 1, and rank-1 matrices are really simple, since all the columns are multiples of each other.
$$
A - 2I=\begin{pmatrix}
1 & 4 & -3\\
2 & 5 & -4\\
3 & 9 & -7
\end{pmatrix}
$$
In this case, the eigenspace of $2$ is not two-dimensional, since the first and second columns are not multiples of each other, so this matrix has rank 2 or more. Hence the matrix is not diagonalisable.
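A one-line check with numpy confirms the ranks (the matrix is the one from the question):

```python
import numpy as np

A = np.array([[3, 4, -3],
              [2, 7, -4],
              [3, 9, -5]], dtype=float)

print(np.linalg.matrix_rank(A - 2 * np.eye(3)))  # 2, so the eigenspace of 2 is 1-dimensional
print(np.linalg.matrix_rank(A - 1 * np.eye(3)))  # 2, as expected for the simple eigenvalue 1
```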
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2178951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
WolframAlpha says limit exists when it doesn't? I was trying to calculate the following limit:
$$
\lim_{(x,y)\to (0,0)} \frac{(x^2+y^2)^2}{x^2+y^4}
$$
and, feeding it into WolframAlpha, I obtain an answer stating the limit is $0$.
However, when I try to calculate the limit along $x = 0$ as $y$ approaches $0$, the limit is $1$...
Is the answer given by WolframAlpha wrong? or am I?
|
This is only to complement the excellent answer of StackTD, who correctly shows that you are right — the limit does not exist (as one can find two different paths to the origin along which the limits of the function differ). The key message is:
Do not try limits in more than one variable with Mathematica or WolframAlpha.
(actually, I'd go further and suggest: Do not try limits in more than one real variable with Mathematica or WolframAlpha.)
See e.g. this thread on Mathematica.SE, which goes to some length explaining how one can possibly try to do it -- spoiler, it's complicated. Quoting a comment from there, by Jens:
With Limit, you're always restricted to a line in the larger space, and you can't make statements about the existence of the limit in the sense of the higher-dimensional space. For that you have to show the independence of the result on the direction of the line. If you intentionally set up a function to have different limits along different lines, I dont (sic) see what else you can do with Mathematica.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2179077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 3,
"answer_id": 1
}
|
Reparametrization of a curve Given a parametrized curve, we know that its arc length parametrization is its unit speed reparametrization. However, I wanted to know if there was any generic procedure to find any other reparametrizations of the same curve, which are not unit speed, and are non trivial?
|
Assume you have a curve $\gamma : [a,b] \to \mathbb R^d$ and $\varphi : [a,b] \to [a,b]$ is a reparametrization, i.e., $\varphi'(t) > 0$. Then you can prescribe any speed function for your parametrization. Given a function $\sigma: [a,b] \to \mathbb R_{>0}$, define $\varphi$ via the ODE
$$ \varphi'(t) = \frac{\sigma(t)}{|\gamma'(\varphi(t))|} \,. $$
Then the reparametrized curve $\tilde \gamma (t) = \gamma \circ \varphi(t)$ has speed $\sigma$, i.e.,
$$ |\tilde \gamma'(t)| = \sigma(t)\,. $$
The only restriction on $\sigma$ is that $\varphi(b) = b$.
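As a sketch of how this works in practice, take the unit circle (so $|\gamma'|=1$) and prescribe the speed $\sigma(t)=1+\tfrac12\sin t$, chosen so that $\int_0^{2\pi}\sigma\,dt = 2\pi$ and hence $\varphi(2\pi)=2\pi$. Solving the ODE numerically and differencing confirms the reparametrized speed (all choices here are assumptions for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# gamma: the unit circle, so |gamma'| = 1.  Prescribed speed sigma(t) = 1 + 0.5*sin(t);
# its integral over [0, 2*pi] is 2*pi, so phi(2*pi) = 2*pi as required.
gamma = lambda s: np.array([np.cos(s), np.sin(s)])
sigma = lambda t: 1.0 + 0.5 * np.sin(t)

sol = solve_ivp(lambda t, phi: [sigma(t) / 1.0],   # phi' = sigma(t) / |gamma'(phi)|
                (0, 2 * np.pi), [0.0], dense_output=True, rtol=1e-10)

# check: |d/dt gamma(phi(t))| should equal sigma(t)
t, h = 3.0, 1e-6
p_minus, p_plus = sol.sol(t - h)[0], sol.sol(t + h)[0]
num_speed = np.linalg.norm(gamma(p_plus) - gamma(p_minus)) / (2 * h)
print(num_speed, sigma(t))   # the two values agree
```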
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2179522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Using Lagrange Multipliers to determine the point on a surface nearest to P I'm attempting to figure this problem out. I would appreciate some guidance on how to get the answer. Thanks.
Consider the surface defined as $S: x^2+y^2+z^2 = 8$. If we have a $P = (0,1,1)$, use Lagrange multipliers to determine the point on $S$ nearest to $P$.
|
It is obvious that the shortest distance between $P$ and $S$ is the absolute difference between the distance from $P$ to the center of the sphere and the radius, that is:
$$
|\sqrt{0^2+1^2+1^2}-\sqrt{8}| = \sqrt{2}
$$
Now, if you want the coordinates of the nearest point, you can use Lagrange multipliers indeed: you want to minimize the (squared) distance between $P$ and $S$, given by
$$
x^2+(y-1)^2+(z-1)^2
$$
subject to
$$
x^2+y^2+z^2=8
$$
The Lagrangian equals
$$
\mathcal{L}=x^2+(y-1)^2+(z-1)^2 + \lambda(8-x^2-y^2-z^2)
$$
So you need to solve the following system
\begin{cases}
2x-2\lambda x =0 \\
2y-2-2\lambda y = 0 \\
2z -2 -2\lambda z = 0 \\
x^2+y^2+z^2=8
\end{cases}
You should get (keeping the critical point nearest to $P$; the other solution of the system, $(0,-2,-2,\frac{3}{2})$, gives the farthest point)
$$
(x,y,z,\lambda)=(0,2,2,\frac{1}{2})
$$
So the nearest point has coordinates $(0,2,2)$ and the shortest distance equals
$$\sqrt{0^2+(2-1)^2+(2-1)^2}=\sqrt{2}$$
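For completeness, a sympy sketch that solves the same system symbolically (the variable names are arbitrary):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
eqs = [2*x - 2*lam*x,
       2*y - 2 - 2*lam*y,
       2*z - 2 - 2*lam*z,
       x**2 + y**2 + z**2 - 8]

for sol in sp.solve(eqs, [x, y, z, lam], dict=True):
    px, py, pz = sol[x], sol[y], sol[z]
    dist = sp.sqrt(px**2 + (py - 1)**2 + (pz - 1)**2)
    print((px, py, pz), sp.simplify(dist))
# prints (0, 2, 2) with distance sqrt(2) and (0, -2, -2) with distance 3*sqrt(2)
```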
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2179646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is my understanding of weights and weight spaces of subalgebras of $\mathfrak{gl}(V)$ correct? I am learning about weights of subalgebras of $\mathfrak{gl}(V)$, where $V$ is a vector space over some field $\mathbb{F}$.
The definition that I have read states
Let $M$ be a subalgebra of $\mathfrak{gl}(V)$. A weight of $M$ is a linear map $\lambda : M\rightarrow\mathbb{F}$ such that $V_\lambda := \left\{ v\in V\mid A(v)=\lambda(A)\cdot v \quad \forall A\in M \right\}$ is a non-zero subspace of $V$. Then $V_\lambda$ is called a weight subspace associated with the weight $\lambda$.
So I was thinking about this definition a bit, and it reminded me of eigenvectors and eigenvalues in linear algebra. Thinking a bit more, I came to see that, in a way, the weight subspace is a set of "global eigenvectors" for $M$; by which I mean all vectors in the weight subspace get mapped to scalar multiples of themselves by all elements of $M$. Then the weight $\lambda:M\rightarrow \mathbb{F}$ gives the "associated eigenvalue" corresponding to each $A\in M$.
I was wondering whether this view is correct (or indeed helpful) in understanding this particular notion, or have I got the wrong end of the stick?
Thanks,
Andy.
|
Yes, that is exactly correct. Each $V_\lambda$ is a simultaneous eigenspace for every linear map in $M$. $\lambda$ is the function that assigns to every linear map in $M$ the eigenvalue associated to its action on vectors in $V_\lambda$.
This example may be helpful: Take $V = \mathbb C^3$ and take
$$M = \left\{ \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right)\in \mathfrak{gl}(V) \mid a+ b+ c = 0 \right\} .$$
(This is actually a Cartan subalgebra of $\mathfrak{sl}_3 \mathbb C$, by the way.)
The three simultaneous eigenspaces are
$$ V_{\lambda_1} = \mathbb C \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) , \ \ \ V_{\lambda_2} = \mathbb C \left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right), V_{\lambda_3} = \mathbb C \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right), $$
and the weights, which are the functions assigning to each matrix in $M$ its eigenvalue when acting on the respective space, are
$$ \lambda_1 : \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right) \mapsto a,$$
$$ \lambda_2 : \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right) \mapsto b,$$
$$ \lambda_3 : \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array} \right) \mapsto c.$$
Note that in this example, $a + b+ c = 0$ for all matrices in $M$. Therefore,
$$ \lambda_1 + \lambda_2 + \lambda_3 = 0.$$
so only two of the three weights are linearly independent. This "dependency between weights" is quite a common feature of weights of Lie algebras.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2179740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Derivative of $x|x|$ I am trying to find the derivative of $f(x)=x|x|$ using the definition of the derivative. For $x > 0$ I found that $f'(x)=2x$ and for $x<0$ the derivative is $f'(x)=-2x$. Everything is fine up to here. Now I want to check what happens at $x=0$.
By the way, I know that $|x|$ is not differentiable at $x=0$.
So I am checking the left & right limits of $f$ when $x$ approaches $0$.
*
*$\lim_{x \to 0^-}\cfrac{x|x|}{x} = \lim_{x \to 0^-}\cfrac{x(-x)}{x}=\lim_{x \to 0^-}\cfrac{(-x)}{1} = -0? = 0. $
*$\lim_{x \to 0^+}\cfrac{x|x|}{x} = \lim_{x \to 0^+}\cfrac{x(x)}{x}=\lim_{x \to 0^+}\cfrac{(x)}{1} = 0. $
I think that $f$ is not differentiable at $x=0$ since $|x|$ is not differentiable at that point. So, what am I doing wrong?
Should I write something like $\lim_{x \to 0^-}\cfrac{x|x|}{x} = -0^{-}$ and $\lim_{x \to 0^+}\cfrac{x|x|}{x} =0^{+}$ so that $f'$ does not exist at $x=0$?
|
You didn't do anything wrong; you have in fact shown that $f$ is differentiable at $x = 0$ and $f'(0) = 0$. The fact that $x \mapsto |x|$ is not differentiable at $x = 0$ doesn't mean that the product of this function with another function won't be differentiable at $x = 0$, as you have just shown.
However, since you have shown that $f'(x) = 2|x|$ you can see that $f$ is not twice differentiable at $x = 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2179831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
How can I prove $\lim_{n \to \infty} \frac{n^2}{2^n}=0$ How can I prove $\lim_{n \to \infty} \frac{n^2}{2^n}=0$
I tried to use $\mid \frac{n^2}{2^n} - 0 \mid <\epsilon$. However, because of $n^2$ I cannot use it. Also, I tried the ratio test and I got $\lim_{n\to\infty}\frac{(n+1)^2}{2}\frac{1}{n^2}$, but after that I don't know how to show the limit is $< 1$.
I have no idea how to solve this problem.
I could also use the squeeze lemma, but I don't know how to apply it to this question.
|
Stolz-Ces$\mathrm{\grave{a}}$ro Theorem, applied twice:
$$
\lim_{n \to \infty}{n^2 \over 2^{n}} =
\lim_{n \to \infty}{\left(n + 1\right)^2 - n^2 \over 2^{n + 1} - 2^{n}} =
\lim_{n \to \infty}{2n + 1 \over 2^{n}}
$$
and, applying the theorem once more,
$$
\lim_{n \to \infty}{2n + 1 \over 2^{n}} =
\lim_{n \to \infty}{\left(2n + 3\right) - \left(2n + 1\right) \over 2^{n + 1} - 2^{n}} =
\lim_{n \to \infty}{2 \over 2^{n}} = \bbox[#ffe,10px,border:1px dotted navy]{\displaystyle{\large 0}}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2179947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Solve $\int_{0}^{\infty}\frac{\ln(2x)}{4+x^2}dx$ by contour integration I'm a little stuck with this one. I've found the singularities to be at $\pm 2i$ and $0$ (branch point). So far, using a branch cut at $2\pi$ I've found that
$$\int_{0}^{\infty}\frac{\ln(2r)}{4+r^2}dr+\int_{0}^{\infty}\frac{\ln(2r)+2\pi i}{4+r^2}dr = 2I+\int_{0}^{\infty}\frac{2\pi i}{4+r^2}dr$$
And
$$\int_{0}^{\infty}\frac{2\pi i}{4+r^2}dr = \text{Res}(r=2i) \\ = -2\pi^2\lim_{r\to2i}\frac{r-2i}{(r-2i)(r+2i)} = \frac{i}{2}\pi^2$$
My problem is that the answer I found is completely imaginary, and I'm not sure how that's possible given that the original function is real. Any help is appreciated.
|
Consider
$$f(z) = \frac{\log(2z)}{z^2+4}$$
By using a key-hole integral with branch-cut on positive axis we should get
$$\int^\infty_0 \frac{\log(|2x|)}{x^2+4}\,dx + \int_{\infty}^{0} \frac{\log(|2x|)+2\pi i}{x^2+4}\,dx =2\pi i \sum \mathrm{Res}(f,z_0)$$
We see that the first and second integrals will cancel. Now to avoid that consider
$$f(z) = \frac{\log(2z)^2}{z^2+4}$$
By using a key-hole integral with branch-cut on positive axis we should get
$$\int^\infty_0 \frac{\log^2(|2x|)}{x^2+4}\,dx + \int^0_{\infty} \frac{(\log(|2x|)+2\pi i)^2}{x^2+4}\,dx =2\pi i \sum \mathrm{Res}(f,z_0)$$
$$\int^\infty_0 \frac{\log^2(|2x|)}{x^2+4}\,dx - \int_0^{\infty} \frac{(\log(|2x|)+2\pi i)^2}{x^2+4}\,dx =2\pi i \sum \mathrm{Res}(f,z_0)$$
Now you can see that $\log^2(2x)$ will be cancelled and we are left with $\log(2x)$.
Another approach
Integrate around a big half-circle indented at 0 where the branch cut is chosen on the imaginary axis then for
$$f(z) = \frac{\log(2z)}{z^2+4}$$
we have only one pole at $z = 2i$
$$\int_{-\infty}^0 \frac{\log|2x|+\pi i}{x^2+4}\,dx +\int^{\infty}_0 \frac{\log|2x|}{x^2+4}\,dx = 2\pi i \,\mathrm{Res}(f,2i)$$
$$2\int^\infty_0 \frac{\log(2x)}{x^2+4}\,dx + \pi i\int^\infty_0\frac{1}{x^2+4}\,dx = 2\pi i \,\mathrm{Res}(f,2i) $$
Note that
$$\mathrm{Res}(f,2i) = \lim_{z \to 2i} (z-2i) \frac{\log(2z)}{(z-2i)(z+2i)} = \frac{\log(4i)}{4i} = \frac{\log(4)+(\pi i)/2}{4i}$$
$$2\int^\infty_0 \frac{\log(2x)}{x^2+4}\,dx + \pi i\int^\infty_0\frac{1}{x^2+4}\,dx = \pi \frac{\log(4)+(\pi i)/2}{2} $$
By comparison we have
$$\int^\infty_0 \frac{\log(2x)}{x^2+4}\,dx = \frac{\pi}{2} \log(2)$$
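A numeric check of the final value with scipy agrees with $\frac{\pi}{2}\log 2$:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(2 * x) / (x**2 + 4), 0, np.inf)
print(val, np.pi / 2 * np.log(2))  # both approximately 1.0888
```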
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Which of the following sets of functions from $\mathbb{R}$ to $\mathbb{R}$ is a vector subspace of the space of all functions from $\mathbb{R}$ to $\mathbb{R}$?
Which of the following sets of functions from $\mathbb{R}$ to $\mathbb{R}$ is a vector subspace of the space of all functions from $\mathbb{R}$ to $\mathbb{R}$?
$S_1=\{f|\displaystyle\lim_{x\to 3}f(x)=0\}$, $S_2=\{h|\displaystyle\lim_{x\to 3}h(x)=1\}$, $S_3=\{g|\displaystyle\lim_{x\to 3}g(x)~ \text{exists}\}$
*
*Only $S_1$.
*Only $S_2.$
*$S_1$ and $S_3$ but not $S_2$
*All of the three are vector spaces.
$S_1$ is definitely a vector subspace, but $S_2$ is not, as it doesn't contain $0$. But I am confused about $S_3$: as there is no specific limit mentioned, all sorts of limits are possible, including $0$. So I guess $S_3$ is also a vector subspace. That means option 3 is correct. Is my solution correct? Any help would be great. Thank you.
|
Your solution is correct.
For $S_3$ : Take $g,h \in S_3$.
Then $\lim_{x \to 3} g(x)$ and $\lim_{x \to 3} h(x)$ both exist.
Now check
*
*$\lim_{x \to 3} (g(x)+h(x))=\lim_{x \to 3} g(x)+\lim_{x \to 3} h(x)$, which exists. This implies that $g+h \in S_3$.
*Let $a \in \Bbb R$ be arbitrary. Then $\lim_{x \to 3} ag(x)=a\lim_{x \to 3} g(x)$, which exists. This implies $ag \in S_3$.
From 1 and 2, $S_3$ is also a vector subspace.
Note : "Limit exists" means that the limit is finite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove that $f_n(x):=x^n$ is not a Cauchy sequence in $C[0,1]$ under the norm $\|f\|= \sup|f(x)|$?
How to prove that $f_n(x)=x^n$ is not a Cauchy sequence in $C[0,1]$ under the norm $\|f\|= \sup_{x\in [0,1]}|f(x)|$, by showing that it does not satisfy the definition of a Cauchy sequence?
|
For a fixed $n \ge 1$ and for $m \ge n$,
$$
\|x^n-x^m\|=\max_{x\in[0,1]}(x^n-x^m) \ge (x^n-x^m)|_{x=1/2^{1/n}}
= \frac{1}{2}-\left(\frac{1}{2}\right)^{m/n},
$$
Hence,
$$
\|x^n-x^m\| \ge \frac{1}{4} \mbox{ whenever } \left(\frac{1}{2}\right)^{m/n} \le \frac{1}{4},
$$
which occurs whenever $m/n > 2$, or $m > 2n$. So $\{ x^n \}_{n=1}^{\infty}$ cannot be a Cauchy sequence in $C[0,1]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Solution of a trigonometric equation involving double sines, cosines What is the sum of all the solutions of $$\sin^2 (2\sin (x-\frac {\pi}{6}))+\sec^2 (x-\frac {\pi}{2}\tan^2 (x))=1$$ in $[0,4\pi]$? Writing $\sec (..)=\frac {1}{\cos (..)}$ and rearranging, we have $\cos (a)\cdot\cos (b)=1$. Now, as $-1\leq \cos (..)\leq 1$, both $a$ and $b$ have to be $n\pi$; thus for the bracket of $\sin (..)$ the solutions will be $\frac {\pi}6,\frac {7\pi}{6},2\pi+\frac {\pi}{6},2\pi+\frac {7\pi}{6}$, but I don't know what to do with the next bracket involving $x,\tan (x)$. Thanks
|
Using $\sec^2\theta=1+\tan^2\theta$, the equation becomes $$\sin^2\left(2\sin\left(x-\dfrac\pi6\right)\right)+\tan^2\left(x-\dfrac\pi2\tan^2x\right)=0$$
For real $x,$
$\sin\left(2\sin\left(x-\dfrac\pi6\right)\right)=\tan\left(x-\dfrac\pi2\tan^2x\right)=0$
$\implies 2\sin\left(x-\dfrac\pi6\right)=m\pi\ \ \ \ (1)$ where $m$ is any integer
Now as $-1\le\sin y\le1$
$-1\le\dfrac{m\pi}2\le1\implies m=0$
$\implies\sin\left(x-\dfrac\pi6\right)=0\implies x-\dfrac\pi6=r\pi$ where $r$ is any integer
and $x-\dfrac\pi2\tan^2x=n\pi\ \ \ \ (2)$ where $n$ is any integer
$\implies \dfrac\pi6+r\pi-n\pi=\dfrac\pi2\tan^2\left(\dfrac\pi6+r\pi\right)$
$\iff\dfrac16+r-n=\dfrac16\implies r=n$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Counting subsets of a given set Could anyone be so kind as to help me with this simple question?
What is the number of subsets of $$\{1,2,3,\cdots, n\}$$ containing k elements?
Thanks.
|
The number of subsets with exactly $k$ elements is $\binom{n}{k}$. If instead you want the total number of subsets of every size, it is the sum of the number of ways of choosing $r$ elements from $n$ elements, where $r$ ranges from $0$ to $n$. This is because $r = 0$ gives the null set, $r = 1$ gives all $\binom{n}{1}$ subsets of cardinality $1$, and so on...
Just imagine yourself picking all subsets of size 0, then size 1, ..., till size n.
This is also the sum of the $n$th row of Pascal's Triangle, which is $2^n$.
Permutations do not count, because the order of a set does not matter.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Help me understand linear functions. Today in my high school lesson I learned about linear functions. I know that each linear function has the form $f(x)=kx+n$ where $k,n ∈ \mathbb R$. Now let's say we have these two functions:
$f_1(x)=kx+n_1\\
f_2(x)=kx+ n_2$
Since $k$ is the same in both functions, we know that their graphs will be parallel, but why, and how do we prove that those graphs are parallel?
|
I always make a drawing if possible, and it is possible in this case, since you are working in the plane. Drawing two parallel lines, what you see is that the vertical distance between them is always constant.
The vertical lines correspond with points on the $x$-axis, so let us take such a point and call it $z$. We want to show that the vertical distance does not depend on this point. The vertical distance corresponding to the vertical line at the point $z$ is given by
$$|f_1(z) - f_2(z)| = |(kz + n_1) - (kz + n_2)| = |n_1 - n_2|$$
and this does not depend on the point we consider!
This corresponds to vertically translating the function $f_1$ over a distance $|n_1 - n_2|$, and we see that the shifted function coincides with $f_2$. Indeed, assuming that $n_1 \geq n_2$, we have that
$$f_1(x) - (n_1 - n_2) = (kx + n_1) - (n_1 - n_2) = kx + n_2 = f_2(x)$$
so the translated function coincides with the second function, hence they have to be parallel.
These are all things which you can quickly deduce by just drawing a picture.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
A hard integral Looking for a solution for an integral:
$$I(k)=\int_0^{\infty } \frac{e^{-\frac{(\log (u)-k)^2}{2 s^2}}}{\sqrt{2 \pi } s \left(1+u\right)} \, du .$$
So far I tried substitutions and by parts to no avail.
|
Here is a start: $I(0) = \frac{1}{2}$
Proof:
$$I(0) = \int\limits_0^\infty \frac{\exp\left[-\frac{(\log u)^2}{2s^2}\right]}{\sqrt{2\pi} s (1+u)} \rm{d}u$$
Put $\log u = x$
\begin{align}
I(0) &= \int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s} \frac{e^x}{1+e^x} \rm{d}x \\
&= \int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s}\rm{d}x - \int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s} \frac{1}{1+e^x} \rm{d}x
\end{align}
The first integral is $1$. Call the second integral $K$.
$$K=\int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s} \frac{1}{1+e^x} \rm{d}x$$
Flipping the range around $0$,
$$K=\int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s} \frac{1}{1+e^{-x}} \rm{d}x$$
Now take the average of the two expressions,
\begin{align}
K &=\frac{1}{2}\int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s} \left[\frac{1}{1+e^x}+\frac{1}{1+e^{-x}}\right] \rm{d}x\\
&=\frac{1}{2}\int\limits_{-\infty}^\infty \frac{\exp\left[-\frac{x^2}{2s^2}\right]}{\sqrt{2\pi} s}\rm{d}x\\
&=\frac{1}{2}\\
I(0) &= 1 - K = \frac{1}{2}
\end{align}
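A quick numeric check of $I(0)=\tfrac12$ for a few values of $s$ (the values of $s$ are chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad

def I0(s):
    f = lambda u: np.exp(-np.log(u)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s * (1 + u))
    return quad(f, 0, np.inf)[0]

for s in (0.5, 1.0, 2.0):
    print(s, I0(s))  # approximately 0.5 for every s
```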
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Combinatorics tennis match The prompt says, a tennis club has to select 2 mixed double pairs from a group of 5 men and 4 women. In how many ways can this be done?
There's a total of 9 people and we need to choose 8 people; that's what I think "2 mixed double pairs" means, since one pair is 2 people and 2 double pairs would mean 2 * 4 = 8 people, so I simply did 9 choose 8.
|
The number of ways to pick the first pair is simply 5*4=20.
The number for the next one is simply 4*3=12. (You have to choose one man and one woman for each pair.)
But we've overcounted by a factor of two (the two pairs can be chosen in either order), so it's $\frac{20\cdot 12}{2}=\boxed{120}.$
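A brute-force enumeration confirms the count (the men and women are just labelled $0,1,\ldots$ for the check):

```python
from itertools import combinations, product

men, women = range(5), range(4)

# a selection = an unordered set of two disjoint (man, woman) couples
pairings = set()
for (m1, w1), (m2, w2) in combinations(product(men, women), 2):
    if m1 != m2 and w1 != w2:
        pairings.add(frozenset([(m1, w1), (m2, w2)]))
print(len(pairings))  # 120
```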
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Direct limit isomorphism Suppose I have a directed system $(V_i, \phi_i: V_i \rightarrow V_{i+1})$, say of vector spaces $V_i$. Let $\psi_i: V_i \rightarrow V_i$ be isomorphisms. I can construct the related directed system
$(V_i, \phi_i\circ \psi_i: V_i \rightarrow V_{i+1})$. Is it true that the direct limits $\lim_{\phi_i} V_i$ and $\lim_{\phi_i\circ \psi_i} V_i$ are isomorphic? It seems to me that you cannot directly use the $\psi_i$ to get an isomorphism of the directed systems, but maybe the limits are still isomorphic.
Edit: As explained in Jeremy's answer below, this is false in general. What about if $\phi, \psi$ commute, i.e. $\phi_i \psi_i = \psi_{i+1}\phi_i$? In this case, $\psi$ induces an isomorphism of $\lim_{\phi_i} V_i$. Does this stronger assumption imply that $\lim_{\phi_i} V_i$ and $\lim_{\phi_i\circ \psi_i} V_i$ are isomorphic? In Jeremy's example, $\phi, \psi$ do not commute.
|
Take $V_i=k^2$ for all $i$, with $\phi_i$ given by the matrix $\begin{pmatrix}1&0\\0&0\end{pmatrix}$ for all $i$. Then $\varinjlim_{\phi_i}V_i$ is one dimensional.
But now take $\psi_i$ to be the isomorphism given by the matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ for all $i$. Then $\varinjlim_{\phi_i\circ\psi_i}V_i$ is zero.
The answer to the supplementary question about the case where $\phi_i\psi_i=\psi_{i+1}\phi_i$ is that in this case the directed systems are isomorphic (via $\psi_i^i:V_i\to V_i$), and therefore the direct limits are isomorphic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2180965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Fibonacci Numbers proof, circular reasoning? I got this from "Number Theory" by George E. Andrews (1971); at the end of the first chapter, he asks for proofs by mathematical induction about Fibonacci numbers as exercises. In one of them I am asked to show that
$$(\forall \, n \in \Bbb Z^+)((F_{n+1})^2-F_nF_{n+2}=(-1)^n)$$
which has already been asked on Math S.E.: "Fibonacci numbers and proof by induction", but I am not looking for a complete solution here; rather, since I am learning the subject on my own, I would appreciate it if someone could simply tell me whether my reasoning is correct; it does not "feel" concrete to me.
Going through the usual steps I first check that the base case $F_2^2-F_1F_3=1-2=(-1)^1$ is true such that the inductive step can be taken. Then, assuming $(F_{k+1})^2-F_kF_{k+2}=(-1)^k$ is true, I try to show that it implies that $(F_{k+2})^2-F_{k+1}F_{k+3}=(-1)^{k+1}$ is also true.
Then, doing a bit of algebra I get,
$$\begin{align}
(F_{k+2})^2-F_{k+1}F_{k+3} & = (-1)^{k+1}=(-1)^k(-1)\\
& = ((F_{k+1})^2-F_kF_{k+2})(-1)\\
& = F_kF_{k+2}-(F_{k+1})^2\\
(F_{k+2})^2-F_{k+1}F_{k+3} + (F_{k+1})^2-F_kF_{k+2} & = 0\\
(F_{k+2})^2-F_{k+1}F_{k+3} + (-1)^k & = 0\\
(F_{k+2})^2-F_{k+1}F_{k+3} & = -(-1)^k = (-1)(-1)^k\\
& = (-1)^{k+1}\\
\end{align}$$
$$\tag*{$\blacksquare$}$$
I am not sure about this, it feels like circular reasoning, is it?
Thanks a lot for the valuable input!
|
As @ElliotG pointed out, you cannot begin with what you want to prove. Instead, if you want to show an equality, begin with the left hand side of the equation, then manipulate it until you arrive at the right hand side.
In the induction step you know that $F_{k+2} = F_{k+1} + F_k$ and $F_{k+1}^2-F_k F_{k+2} = (-1)^k$. So start with
$$ F_{k+2}^2 - F_{k+1}F_{k+3} = \dots\,, $$
and work your way, using the known identities, until you arrive at
$$ \dots = (-1)^{k+1}\,.$$
Then you will have proven the induction step.
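Once the induction is written correctly, a quick loop confirms the identity for small $n$ (a sanity check, not a proof):

```python
def check(n_max=30):
    a, b = 1, 1            # F_1, F_2
    for n in range(1, n_max):
        c = a + b          # F_{n+2}
        assert b * b - a * c == (-1) ** n, n   # F_{n+1}^2 - F_n F_{n+2} = (-1)^n
        a, b = b, c
    print("identity verified up to n =", n_max - 1)

check()
```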
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Are all finite dimensional algebras over the real numbers `Banach algebra'-able Suppose that $A$ is a finite dimensional algebra over the real or complex numbers. Then $A$ has a natural topology induced from it being a finite dimensional vector space. Is it always true that there is a norm on $A$ satisfying $\| MN \| \leq \| M \| \| N \|$, or are there some finite dimensional algebras which aren't `Banachable'? If we weaken the norm to not being complete, do we obtain stronger results?
|
Since $A$ is a finite-dimensional vector space, we can define a norm (any $\ell^p$-norm will do). Since $A$ is finite-dimensional, any linear operator is continuous with respect to this norm. To each $a\in A$ we can associate a linear map $x\mapsto ax$. Thus considering $A$ as a subalgebra of $L(A)$ with the operator norm, we obtain a norm on $A$ which is submultiplicative.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Degree of $\mathbb{Q}(\xi_{p^{2}})$ over $\mathbb{Q}$. What is the degree of the extension $\mathbb{Q}(\xi_{p^{2}})$ over $\mathbb{Q}$, where $p$ is a prime and $\xi_{p^{2}}$ is a primitive $p^{2}$-th root of unity?
|
The degree of $\xi_n$ over $\mathbb{Q}$ is $\phi(n)$ (the Euler phi function); for $n=p^2$ that is $p(p-1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Polynomial is irreducible over $\mathbb{Q}$ I've been scratching my head lately over this problem in my textbook:
Prove that the polynomial $x^4 + x^3 +x^2 +x +1$ is irreducible over $\mathbb{Q}$.
I've done some research and found this link, but they talk about Eisenstein's criterion, which we haven't covered in our math class just yet. Is there a general strategy where we can show whether a polynomial is irreducible over a field?
My textbook really didn't go into depth on the topic of irreducibility of polynomials, but this Wiki link somewhat helps. Perhaps the rational root theorem may be helpful here, but how would I go about starting this proof?
EDIT
Please see the first comment for my initial strategy at showing the required result.
|
Note that in this case you can use Cohn's irreducibility criterion for base $b=2$, because $f(2)=31$ is a prime and the irreducibility follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 4
}
|
The set of Polynomials is not complete in sup-norm over [0,1] Let $P$ be the set of all polynomials and consider the sup-norm on $[0,1]$ given by $$||p||_\infty = \sup\{|p(x)| : x \in [0,1]\}.$$ I need to show $P$ is not complete in the sup-norm on the interval $[0,1]$. To do this, it suffices to show there exists a sequence of elements in $P$ which is Cauchy but has no limit in $P$.
To this end, consider the sequence of polynomials $(p_n)$ defined by $$p_n(x) = 1 + \frac{x}{2} + \frac{x^2}{4} + \cdots + \frac{x^n}{2^n} = \sum_{k=0}^{n} \frac{x^k}{2^k},$$ where $x \in [0,1]$. I have already proved that $(p_n)$ is in fact Cauchy with respect to the sup-norm, but I am confused about how to show that $$p(x) = \frac{1}{1 - \frac{x}{2}}$$ is the only possible limit for the sequence $(p_n)$.
We know that $$\lim_{n \to \infty} p_n(x) = \lim_{n \to \infty} \sum_{k=0}^{n} \frac{x^k}{2^k} = \sum_{k=0}^{\infty} \left(\frac{x}{2}\right)^k = \frac{1}{1 - \frac{x}{2}} = p(x)$$ for $x \in [0,1]$. Also, $\sum_{k=0}^\infty \left(\frac{x}{2}\right)^k = \sum_{k=0}^\infty \frac{1}{2^k} x^k$ is a power series with radius of convergence $2$, hence it converges on $(-2,2)$. Therefore, since $[0,1] \subseteq (-2,2)$, it follows that $\sum_{k=0}^\infty \left(\frac{x}{2}\right)^k = \sum_{k=0}^\infty \frac{1}{2^k} x^k$ converges uniformly on $[0,1]$ and is continuous on $[0,1]$.
I believe this would then imply that $p(x)$ is in the larger space $C([0,1])$ and hence $p(x)$ is the only possible limit for our Cauchy sequence $(p_n)$. Is this argument valid? Do I even need to state any of this? Thanks.
|
Indeed, you are trying to show that the set of all polynomials is not complete: you have to exhibit a uniform limit of polynomials that is not itself a polynomial. One of the ways is proving by counterexample. We have
$$\lim_{n\rightarrow \infty}\left(1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots\right)=\cos x$$ (uniformly on $[0,1]$), which is not a polynomial, so the space is not complete.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How many barcodes are there? A barcode is made of white and black lines. A barcode always begins and ends with a black line. Each line is of thickness 1 or 2, and the whole barcode is of thickness 12.
How many different barcodes are there (we read a barcode from left to right).
I know that I'm required to show some effort, yet I really don't have a clue about this problem. Can you give me any hints?
|
Denote by $b_i$ the number of bars (black or white) of width $i\in\{1,2\}$. Then $b_1+2b_2=12$, hence $b_1$ is even. Since the colours alternate and the barcode begins and ends with a black bar, the total number of bars $b_1+b_2$ is odd, so it follows that $b_2$ is odd. This leaves the cases
$$(b_1,b_2)\in\bigl\{(10,1),(6,3),(2,5)\bigr\}\ .$$
The total number of admissible arrangements then comes to
$${11\choose 1}+{9\choose 3}+{7\choose 2}=116\ .$$
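A brute-force enumeration over compositions of $12$ into parts $1$ and $2$ with an odd number of parts confirms the count:

```python
def barcodes(width):
    """Count sequences of bars of width 1 or 2 with an odd number of bars
    (the colours alternate, and the first and last bar are black)."""
    def count(remaining, bars):
        if remaining == 0:
            return bars % 2          # valid only if the number of bars is odd
        total = count(remaining - 1, bars + 1)
        if remaining >= 2:
            total += count(remaining - 2, bars + 1)
        return total
    return count(width, 0)

print(barcodes(12))  # 116
```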
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
}
|
Why is a number that is divisible by $2$ and $3$, also divisible by $6$? Why is it that a number such as $108$, that is divisible by $2$ and $3$, is also divisible by $6$?
Is it true that all numbers divisible by two integers are divisible by their product?
|
Let $a$ be divisible by $2$ and $3$. So $$a=2^{r_{0}}\cdot p^{r_{1}}_{1}\cdot p^{r_{2}}_{2}\cdot\ldots=3^{s_{0}}\cdot q^{s_{1}}_{1}\cdot q^{s_{2}}_{2}\cdot\ldots$$ where $p_{i}$ and $q_{j}$ are distinct prime numbers, $r_{i}$ and $s_{j}$ are possibly $0$ for $1\leqslant i,j<\infty$ and $r_{0},s_{0}\geqslant 1$. We know from the fundamental theorem that these two must be unique factorisations (up to re-ordering). Hence there must be some $p_{k}^{r_{k}}=3^{s_{0}}$ and some $q_{\ell}^{s_{\ell}}=2^{r_{0}}$. Therefore $a$ has a factor of $6$: \begin{align*}
a&=2^{r_{0}}\cdot p_{1}^{r_{1}}\ldots\cdot p_{k}^{r_{k}}\cdot\ldots\\
&=2^{r_{0}}\cdot p_{1}^{r_{1}}\ldots\cdot3^{s_{0}}\cdot\ldots\\
&=6\cdot(2^{r_{0}-1}\cdot3^{s_{0}-1}\cdot p_{1}^{r_{1}}\cdot\ldots).
\end{align*} (The argument works on the other side as well).
This might be a little explicit but I hope it helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 5
}
|
Substitution or comparison? For the question,
$X^4 + \frac{9}{X^4} = (X^2 - \frac{a}{X^2})^2 + b$, and I have to find the values of $a$ and $b$. I tried two solutions, both of which included the expansion of brackets.
By comparing terms I obtained $3$ for $a$ and $6$ for $b$.
However when I tried to substitute using $b$, I obtained a value of $0$ for $b$.
Why is it not possible to solve by substituting, and, on a more general note, when is it possible to solve by substituting?
|
$X^4 + \frac{9}{X^4} = (X^2 - \frac{a}{X^2})^2 + b$
$x^4 + \frac {9}{x^4} = x^4 + \frac {a^2}{x^4} + b - 2a$
So by comparing $a^2 = 9$ and $b-2a = 0$.
So $a = 3$ or $a = -3$ and $b = 6$ or $b = -6$.
I'm not sure what you mean by substituting. Do you mean picking an arbitrary value for $x$ and getting two equations for $a$ and $b$?
Let $x = 1$. then $10 = (1-a)^2 + b = 1 -2a + a^2 +b$ or $b= 9 + 2a -a^2$
if $x = \sqrt{2}$ then $4 +\frac 94 = (2- \frac a2)^2 +b = 4-2a + \frac{a^2}4 +b$ or $b=\frac 94 + 2a -\frac {a^2}4$
So $9-a^2 = \frac 94 - \frac {a^2}4 = \frac 14( 9 - a^2)$ so $9-a^2 = 0$ so $a = 3$ or $-3$.... and $b = 6$ or $-6$.
So how did you get $b= 0$? Perhaps if we knew how you got that we could figure out what went wrong. Did you divide by zero? Did you get a redundant equation $0 = 0$ and mistake that for $b = 0$?
If $x = 0$ you get $0 + \frac 90 = (0 + \frac a0)^2 +b$ but this doesn't help us as dividing by zero is an absolute forbidden no-no that makes no sense. If you make the error and assume $\frac 90 = 0$ (It does !!!!!!!!NOT!!!!!!!) then you get $0 = 0 + b$ (which is !!!!!!!!WRONG!!!!!!)
Or if you did $x = k$ and $x = -k$ you will end up with the redundant two equations: $k^4 + \frac 9{k^4} = (k^2 - \frac a{k^2})^2 + b$ repeated twice, which gets us nowhere.
=====
from the comments it sounds like what you did was
$X^4 + \frac{9}{X^4} = (X^2 - \frac{a}{X^2})^2 + b$
$b = X^4 + \frac{9}{X^4}-(X^2 - \frac{a}{X^2})^2$ and then plug that back into the original
$X^4 + \frac{9}{X^4} = (X^2 - \frac{a}{X^2})^2+ X^4 + \frac{9}{X^4}-(X^2 - \frac{a}{X^2})^2$
$X^4 + \frac{9}{X^4}=X^4 + \frac{9}{X^4}$
$0 = 0$
Which is true. Zero does equal 0. But that doesn't help us in the least as it says nothing about $a$ or $ b$ or $x$.
If this is what you mean by "substitution" it will NEVER work. You are simply defining something in terms of an expression and trying to solve the same expression with no new information and you simply eliminate everything in circular reasoning.
It's a bit like trying to do "Snoopy = Charlie Brown's dog. Solve for Charlie Brown"
Charlie Brown = Snoopy's owner
Snoopy = Snoopy's owner's dog
Snoopy = Snoopy
Snoopy - Snoopy = Snoopy - Snoopy
0 = 0
Therefore Charlie Brown is nobody!
Hopefully you can see why that is wrong and how it will never work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2181952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Summation related to arctan What is $$\sum _{r=1} ^{90} \arctan \left(\frac{2r}{2+r^2+r^4}\right)?$$ I tried to express each term as a difference $\arctan(a)-\arctan(b)$. My try was using $r+\frac{1}{r}=a,r-\frac{1}{r}=b$; everything worked well except the term independent of the variable. All the problem is around that $2$ in the denominator. How do I tackle it?
|
Hint:
$$\frac{2r}{2+r^2+r^4} = \frac{(r^2+r+1)-(r^2-r+1)}{1+(r^2+r+1)(r^2-r+1)}$$
Now you may proceed!
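With the hint, the sum telescopes to $\arctan(90^2+90+1)-\arctan 1$, since $(r+1)^2-(r+1)+1 = r^2+r+1$. A quick numeric check in Python:

```python
import math

s = sum(math.atan(2 * r / (2 + r**2 + r**4)) for r in range(1, 91))
telescoped = math.atan(90**2 + 90 + 1) - math.atan(1)   # arctan(8191) - pi/4
print(s, telescoped)                                    # both approximately 0.7853
```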
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving differential equation: $y''-\frac{\epsilon ^2 y}{x^2} + \frac{y'}{x}=0$ I've derived a differential equation, as below.
$$y''(x)-\frac{\epsilon ^2 y(x)}{x^2} + \frac{y'(x)}{x}=0$$
It is a second order linear differential equation; WolframAlpha seems to suggest that the solution is (for the equation with $\epsilon = 1$):
$$y(x) = \frac{c_1 (x^2 + 1)}{x} + \frac{i c_2 (x^2 - 1)}{2 x}$$
I have two questions:
*
*What is the step-by-step towards the solution?
*Why is there imaginary part in the solution?
Would be appreciated for any help!
|
This is a Cauchy-Euler equation.
Putting it in that form:
$$x^2\frac{d^2 y}{dx^2}+x\frac{dy}{dx}-\epsilon^2 y=0$$
You can use the ansatz $y=x^{\lambda}$:
$$\frac{dy}{dx}=\lambda x^{\lambda-1}$$
$$\frac{d^2y}{dx^2}=\lambda(\lambda-1)x^{\lambda-2}$$
To obtain the polynomial:
$$-\epsilon^2+\lambda^2=0$$
Solving for $\lambda$ and using the fact that $y(x)=c_1 y_1(x)+c_2 y_2(x)$, we obtain the general solution.
To answer your second question:
Do not be alarmed that the solution given by the above looks different when letting $\epsilon=1$. Wolfram|Alpha in this case did not give the simplest solution. Let's continue from the answer given:
$$y(x)=\frac{c_1(x^2+1)}{x}+\frac{ic_2(x^2-1)}{x}=c_1\left(x+\frac{1}{x}\right)+i\cdot c_2\left(x-\frac{1}{x}\right)$$ (the factor of $2$ in the denominator has been absorbed into the arbitrary constant $c_2$).
Grouping terms:
$$y(x)=x\cdot (c_1+i\cdot c_2)+\frac{1}{x}\cdot (c_1-i\cdot c_2)$$
Since $c_1$ and $c_2$ are arbitrary constants, we may let $c_1+ic_2=k_1$ and $c_1-ic_2=k_2$. This gives a simple solution, which is consistent with the solution we obtain for general $\epsilon$.
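As a sanity check (my addition, a Python/sympy sketch under the assumption that sympy is available), one can verify by direct substitution that $y=k_1x^{\epsilon}+k_2x^{-\epsilon}$ solves the original equation:

```python
import sympy as sp

x, eps, k1, k2 = sp.symbols('x epsilon k1 k2', positive=True)
y = k1*x**eps + k2*x**(-eps)
# residual of y'' - eps^2*y/x^2 + y'/x should vanish identically
residual = sp.simplify(y.diff(x, 2) - eps**2*y/x**2 + y.diff(x)/x)
print(residual)  # 0
```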
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Chinese Remainder Theorem Puzzle I'm working on another CRT problem and I'm having a bit of trouble understanding the question at hand.
A group of seven men has found a stash of silver coins and is trying to share the coins equally. Finally, there are six coins left over, and in an ensuing fight, one man is slain. The remaining six men, still unable to share equally because two silver coins are left over, fight again — and another man lies dead. The remaining men attempt to share, except that one coin is left over. One more fight, one more dead man, and now an equal sharing is possible.
So I'm assuming this means
$$x\equiv 2 \pmod {6}$$
$$x\equiv 1 \pmod {5}$$
$$x\equiv 0 \pmod {4}$$
right?
|
You have interpreted three of the conditions properly, but you missed the original fight when there were seven men and six coins left, so you should add in $x \equiv 6 \pmod 7$ to your system.
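Not asked for, but a quick brute-force check of the completed system (a Python sketch, my addition):

```python
# x = 6 (mod 7), 2 (mod 6), 1 (mod 5), 0 (mod 4)
sols = [x for x in range(1, 5000)
        if x % 7 == 6 and x % 6 == 2 and x % 5 == 1 and x % 4 == 0]
print(sols[:3])  # [356, 776, 1196]; solutions repeat every lcm(7,6,5,4) = 420
```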
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Proximal Operator / Mapping of $\frac{1}{2} {\|x\|}^2 + \delta_{\mathbb{R}_+^n}\left(x\right)$: Sum of $L_2$ Norm Squared and Indicator Function Let $$f(x) = \frac{1}{2}\|x\|^2 + \delta_{\mathbb{R}_+^n}(x)$$
(componentwise nonnegative). How to find $$\operatorname{prox}_{\alpha, f}(z)?$$
I know
*
*$\operatorname{prox}_{\alpha, \frac{1}{2}\|\cdot\|^2}(z) = \frac{1}{1+\alpha}z$
*The definition: $$\operatorname{prox}_{\alpha, f}(z) = \arg\min_x\bigg(\frac{1}{2\alpha}\|x-z\|^2 + f(x)\bigg)$$
So we have $$\operatorname{prox}_{\alpha, f}(z) = \arg\min_x\bigg(\frac{1}{2\alpha}\|x-z\|^2 + \frac{1}{2}\|x\|_2^2 + \delta_{\mathbb{R}_+^n}(x) \bigg)$$
and then
$$\operatorname{prox}_{\alpha, f}(z) = \arg\min_{x\in \mathbb{R}_+^n}\bigg(\frac{1}{2\alpha}\|x-z\|^2 + \frac{1}{2}\|x\|_2^2 \bigg)$$
|
For simplicity I'll assume that $\alpha = 1$. You want to evaluate
$$
x^\star = \arg \min_x \quad I(x) + \frac12 \|x\|^2 + \frac12 \|x - z \|^2
$$
where $I$ is the indicator function of the nonnegative orthant.
We'll combine the two quadratic terms into a single quadratic term by completing the square. Notice that
\begin{align}
&\frac12 \|x \|^2 +\frac{1}{2} \|x - z \|^2 \\
&= \frac12 \|x\|^2 + \frac12 \|x\|^2 - \langle x,z \rangle + \frac12\|z\|^2 \\
&= \|x\|^2 - \langle x,z \rangle + \frac12 \| z\|^2 \\
&= \underbrace{\|x\|^2 - 2 \langle x,z/2\rangle + \|z/2\|^2}_{\text{perfect square}} - \|z/2\|^2 + \frac12 \|z\|^2\\
&= \left\|x - \frac{z}{2} \right\|^2 + \text{terms that do not depend on $x$}.
\end{align}
Therefore,
$$
x^\star = \arg \min_x \quad I(x) + \left\|x - \frac{z}{2} \right\|^2.
$$
Computing $x^\star$ has now been reduced to evaluating the prox-operator of $I$.
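Since the prox-operator of $I$ is just the Euclidean projection onto the nonnegative orthant, everything can be put together in closed form: for general $\alpha$ the unconstrained minimizer is $z/(1+\alpha)$, and projecting gives $\max(z/(1+\alpha),0)$ componentwise. A minimal numpy sketch (my addition; the function name is mine):

```python
import numpy as np

def prox_f(z, alpha=1.0):
    # prox of (1/2)||x||^2 + indicator of the nonnegative orthant:
    # shrink by 1/(1 + alpha), then project onto x >= 0 componentwise
    return np.maximum(z / (1.0 + alpha), 0.0)

z = np.array([2.0, -1.0, 0.5])
print(prox_f(z))  # [1.   0.   0.25]
```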
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Stronger version Goursat Theorem Show the following:
Let $\Delta \subset \mathbb C$ be a triangle and let $f:\Delta \rightarrow \mathbb C$ be continuous. Furthermore, assume that $f$ is holomorphic in the interior of $\Delta$. Then
$$\int _{\partial \Delta}f = 0$$
My attempt: By Goursat Theorem the result is valid for any triangle in the interior of $\Delta$. But, I do not know how to extend this result in everywhere $\Delta$.
Any help would be awesome!
|
I would probably complete your argument like this:
Since $f$ is continuous on $\Delta$, and $\Delta$ is compact, we know that $f$ is uniformly continuous on $\Delta$. So for any given $\epsilon > 0$, there exists a $\delta > 0$ such that
$$ |x - x_0 | < \delta \implies |f(x) - f(x_0)| < \epsilon $$
for all $x, x_0 \in \Delta$.
If $\Delta_{\delta / 2}$ is a "slightly smaller version of $\Delta$" - more precisely, if $\Delta_{\delta/ 2}$ is a triangle in the interior of $\Delta$ whose centre coincides with the centre of $\Delta$ and whose corners are located at most a distance $\delta / 2$ from the corresponding corners of the original triangle $\Delta$ - then
$$ \left| \int_{\partial \Delta} f - \frac L {L_{\delta/2}} \int_{\partial \Delta_{\delta / 2}} f \right| < L \epsilon,$$
where $L$ is the total perimeter of $\Delta$ and $L_{\delta/2}$ is the total perimeter of $\Delta_{\delta/2}$.
[The funny looking factor of $L/L_{\delta/2}$ is there to compensate for the fact that the lengths of the two contours are slightly different.]
But as you said, $\int_{\partial \Delta_{\delta / 2}} f = 0$ by Goursat's lemma. Therefore,
$$ \left| \int_{\partial \Delta} f \right| < L \epsilon.$$
Since $\epsilon$ was chosen arbitrarily, we conclude that
$$ \int_{\partial \Delta} f = 0.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Proof of Kellogg's theorem Kellogg's theorem: Let $m\ge1$, $0<\alpha<1$. If $\Omega\subset \mathbb{R}^n$ is of class $C^{m,\alpha}$, then $\log k\in C^{m-1,\alpha}(\partial\Omega)$, where $k(y)=k_{x_0}(y)=k(x_0,y)$ is the Poisson kernel for the domain $\Omega$ and $x_0\in\Omega$ is fixed.
I am looking for the proof of this theorem, since I cannot find it in Kellogg's book Foundations of Potential Theory. I would be grateful if someone could tell me where I can get a complete proof of it.
|
Kellogg's original article on his theorem:
http://www.ams.org/journals/tran/1931-033-02/S0002-9947-1931-1501602-2/home.html
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Optimal strategy for meta paper-scissor-rock You'll be familiar with the childhood game of paper-scissor-rock (hereafter abbreviated to PSR).
Let an 'optimal strategy' for PSR be one which will not be expected to lose in the long term against any other strategy.
It seems intuitively obvious that the optimal strategy for PSR is to, on each turn, randomly go paper with 1/3 probability, scissors with 1/3 probability, or rock with 1/3 probability. The optimal strategy should be random (rather than, say, going paper then scissor then rock in turn), because otherwise an opponent can learn how to exploit it. It should have equal probabilities of paper, scissor and rock for the same reason - if it went paper with greater than 1/3 probability, the opponent would start going scissors more frequently.
Now consider another game, that of meta-PSR. Here, there are two new options - let's call them 'diamond' and 'charcoal'. Diamond beats any of paper, scissor & rock; but loses to charcoal. Charcoal loses to paper, scissor & rock, but beats diamond.
What is the optimal strategy for meta-PSR?
Again, it seems clear that a randomised strategy is necessary. What's not clear is with what probabilities they should be distributed. For instance, if they each had 1/5 probability, then an opposing strategy could beat it by going diamond with higher frequency and charcoal with lower. Any thoughts?
|
This is a zero-sum game with payoff matrix $$\pmatrix{0&1&1&1&-1\\
-1&0&-1&1&1\\
-1&1&0&-1&1\\
-1&-1&1&0&1\\
1&-1&-1&-1&0}$$
where the rows are diamond, rock, paper, scissors, charcoal.
There are standard techniques to solve such games (easily Googleable!); a Nash equilibrium here is to go diamond or charcoal with $p=\frac{1}{3}$ each, and rock/paper/scissors with $p=\frac{1}{9}$ each.
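A quick numerical check of this equilibrium (a Python/numpy sketch, my addition): the equilibrium mix of a symmetric zero-sum game should earn expected payoff $0$ against every pure reply, which the computation confirms.

```python
import numpy as np

# rows/columns: diamond, rock, paper, scissors, charcoal
A = np.array([[ 0,  1,  1,  1, -1],
              [-1,  0, -1,  1,  1],
              [-1,  1,  0, -1,  1],
              [-1, -1,  1,  0,  1],
              [ 1, -1, -1, -1,  0]])
p = np.array([1/3, 1/9, 1/9, 1/9, 1/3])
print(p @ A)  # [0. 0. 0. 0. 0.]
```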
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proving the inequality $\int_{1}^{b} a^{\log _bx}dx$ > ${\ln b}$ Proving the inequality
$$\int_{1}^{b}a^{\log _bx}dx > {\ln b} : a,b>0 , b\neq1$$
Solving the integral, I found the result below,
$$\frac{\ ab-1}{{\ln ab}}{\ln b}$$
I know that I need only prove that this part $$\frac{\ ab-1}{{\ln ab}}$$ is greater than $1$, but I am unable to proceed further.
I ran into difficulty when $a= \frac{1}{b}$ (equivalently $b= \frac{1}{a}$), since then $ab = 1$ and $\ln ab = 0$.
Please help.
Any alternative way to prove the inequality will be valued.
|
I leave this as an answer, since it is the only way for me to include an image. I think the problem in the question is invalid, and should be updated to be correct (or removed if the true problem in itself is as stated).
I let Mathematica plot the region where
$$
\ln b\frac{ab-1}{\ln(ab)}<\ln b.
$$
It is the blue domain below.
For example, for $a=1/2$ and $b=4/3$ one gets
$$
\ln b\frac{ab-1}{\ln(ab)}-\ln b\approx -0.05.
$$
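One can confirm this counterexample numerically (a Python sketch, my addition):

```python
from math import log

a, b = 0.5, 4/3
integral = log(b) * (a*b - 1) / log(a*b)   # closed form of the integral
print(integral - log(b))                    # ~ -0.0512 < 0: inequality fails
```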
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
How do we find the inverse Laplace transform of $\frac{1}{(s^2+a^2)^2}$? How do we find the inverse Laplace transform of $\frac{1}{(s^2+a^2)^2}$? Do I need to use the convolution theory? It doesn't match any of the known laplace inverse transforms. It matches with the Laplace transform of $\sin(at)$ but I don't know if that helps or not. Also there seems to be a formula with limits and imaginary numbers. How do I just know to apply that here?
|
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\int_{0^{+} - \infty\ic}^{0^{+} + \infty\ic}
{\expo{st} \over \pars{s^{2} + a^{2}}^{2}}\,{\dd s \over 2\pi\ic} =
\lim_{s \to -a\ic}\partiald{}{s}\bracks{\expo{st} \over \pars{s - a\ic}^{2}} +
\lim_{s \to a\ic}\partiald{}{s}\bracks{\expo{st} \over \pars{s + a\ic}^{2}}
\\[5mm] = &\
2\,\Re\bracks{-\,{-\ic\expo{-\ic at} + at\expo{-\ic at} \over 4a^{3}}} =
\bbx{\ds{{\sin\pars{at} - at\cos\pars{at} \over 2a^{3}}}}
\end{align}
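One can confirm the result by computing the forward transform of the right-hand side (a Python/sympy sketch, my addition, assuming sympy is available):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
f = (sp.sin(a*t) - a*t*sp.cos(a*t)) / (2*a**3)
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - 1/(s**2 + a**2)**2))  # 0
```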
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2182957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Is there a list of common identities for principal branches of complex logarithms and roots? Often I have a necessity to simplify some expressions involving roots or logarithms, where arguments appear to be imaginary or even more generally complex. Due to $n$th root and logarithm being generally multivalued, many simple identities, which hold for positive arguments, no longer hold for negative or complex ones.
Thus, for example, when I have an expression like
$$\frac{\sqrt{i\pi+x}}{\sqrt{-i\pi-x}},$$
I have to stop and analyze, whether I really can "extract $-1$" from a radical, and which sign the resulting $i$ will have — taking into account how the principal branch of the multivalued function (square root here) is defined.
So, is there any list of exactly working (i.e. not "up to branch") identities for principal branches of roots and logarithms, which would help simplification of such expressions?
|
There can't really be any such identities written down in a useful way.
The common ways of writing things down result in expressions that define analytic functions, so if we have an identity $f(z)=g(z)$, the expression $f(z)-g(z)$ will define an analytic function that is identically zero in the domain where the identity is valid. But by the identity theorem this means that our identity ought to keep holding as we move $z$ smoothly around the branch points, which means that the identity doesn't really take principal branches into account after all.
We can cheat this argument by picking uncommon ways to write down $f$ and $g$ (or by choosing a sufficiently uninteresting identity such as $\operatorname{Log}z = \operatorname{Log}z$) -- but generally that would mean that using the identity will itself require the same kind of painstaking analysis you're hoping to get out of.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How many six-digit numbers can I create using these digits {$1$, $4$, $4$, $5$, $5$, $5$, $7$, $9$} Step 1 - Determine how many six-digit numbers we can create if we had $8$ distinct digits
|{$1$, $2$, $3$, $4$, $5$, $6$, $7$, $8$}| = 8
$\frac{n!}{(n-p)!}$
$\frac{8!}{(8-6)!} = 20160$
BUT I have {$1$, $4$, $4$, $5$, $5$, $5$, $7$, $9$}
4's repeat 2 times
5's repeat 3 times
After this step, I have no idea what to do.
I need an explanation which involves:
*
*number $2$ (number of repeating 4's)
*number $3$ (number of repeating 5's)
Please try to explain this using the inclusion-exclusion method.
Additional text:
If we had 8 distinct digits we could calculate this without a problem.
But when we have repeating digits we have to subtract something (and I don't know how to find that 'something').
If we had {1,2,3,4,5,6,7,8} we can make 20160 numbers:
*
*123456
*123457
*123458
*123465
*123467
*123468
...
*
*876543
But let's say we had {1,2,3,4,5,6,8,8}
For easier understanding, I will visually distinguish these 8's with an 'a' and 'b':
{1,2,3,4,5,6,8a,8b}
Following numbers would be:
*
*123456
*123458a
*123458b
*123465
*123468a
*123468b
Once you remove 'a' and 'b' you will get duplicate numbers, which cause a problem
*
*123456
*123458
*123458 (duplicate)
*123465
*123468
*123468 (duplicate)
We need to find how many duplicates there are, and then subtract them from the original number (20160).
|
Case $1$: pick six digits from the $3$ distinct digits and $(5,5,5)$
$\dfrac{6!}{3!}$
Case $2$: pick six digits from the $3$ distinct digits and $(5,5,4)$
$\dfrac{6!}{2!}$
Case $3$: pick six digits from the $3$ distinct digits and $(5,4,4)$
$\dfrac{6!}{2!}$
Case $4$: pick six digits from $2$ distinct digits and $(5,5,5,4)$
$\dbinom{3}{2}\dfrac{6!}{3!}$
Case $5$: pick six digits from $2$ distinct digits and $(5,5,4,4)$
$\dbinom{3}{2}\dfrac{6!}{2!\,2!}$
Case $6$: pick six digits from $1$ distinct digit and $(5,5,5,4,4)$
$\dbinom{3}{1}\dfrac{6!}{3!\,2!}$
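Summing the six cases gives $120+360+360+360+540+180 = 1920$. A brute-force confirmation (a Python sketch, my addition):

```python
from itertools import permutations

digits = [1, 4, 4, 5, 5, 5, 7, 9]
# count the distinct 6-tuples that can be formed from the multiset
print(len(set(permutations(digits, 6))))  # 1920
```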
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
find the sum to $n$ terms of the series $1+4w+9w^2+...+n^2w^{n-1}$ where $w$ is $n$th root of unity I want to find the sum to $n$ terms of the series
$$1+4w+9w^2+...+n^2w^{n-1}$$
where $w$ is an $n$th root of unity with $w\neq 1$.
Let $$S_n = 1+4w+9w^2+...+n^2w^{n-1}$$
then $$ wS_n=w+4w^2+....+(n-1)^2w^{n-1}+n^2$$
therefore $$(1-w)S_n=1+3w+5w^2+...+(2n-1)w^{n-1}-n^2$$
Now when I find the sum of $$ 1+3w+5w^2+...+(2n-1)w^{n-1}$$ it comes out to be $$\frac{-2n}{1-w}$$ and after substituting we get $$ S_n= \frac{-2n}{(1-w)^2}-\frac{n^2}{1-w}$$
The problem is that my book doesn't agree with my answer; the book's answer is
$$\frac{n[(1-w)n+2]}{3w}.$$
|
I find $(1-w)S_n=-n^2+1+3w+5w^2+...+(2n-1)w^{n-1}$
Let $U_n=1+3w+5w^2+...+(2n-1)w^{n-1}$
$(1-w)U_n=1+2(w+w^2+\cdots+w^{n-1})-(2n-1)=-2n+2(1+w+w^2+\cdots+w^{n-1})$
Now $1+w+w^2+\cdots+w^{n-1}=\dfrac{1-w^n}{1-w}=0$, so $(1-w)U_n=-2n$ and $U_n=\dfrac{-2n}{1-w}$, confirming your value.
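A quick numeric check of the resulting closed form $S_n=\frac{-2n}{(1-w)^2}-\frac{n^2}{1-w}$ (a Python sketch, my addition):

```python
import cmath

n = 7
w = cmath.exp(2j * cmath.pi / n)           # a primitive n-th root of unity
direct = sum(k**2 * w**(k-1) for k in range(1, n+1))
closed = -2*n/(1-w)**2 - n**2/(1-w)        # the asker's formula
print(abs(direct - closed))                # ~ 1e-13
```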
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Localization of a Dedekind Domain I have a question about this article here:
In the proof of (ii) of corollary 5.3 it says that $R/P^r\cong R_P/R_P P^r$ where $R_P$ is the localization of a Dedekind Domain R at the prime ideal P. Can someone explain this to me?
Thanks.
Edit: And I also don't really understand why the quotient of a DVR is a PID. I know that DVRs are PIDs and quotients of PIDs by prime ideals are PIDs again, but $R_P P^r$ isn't a prime ideal, is it?
|
Let $\phi: R \rightarrow R_P /R_P P^r, \ x \mapsto x + R_P P^r$ and $\mu: R \rightarrow R_P, \ x \mapsto \frac{x}{1}$. Then we have
$$ ker(\phi) = \mu^{-1}(R_P P^r) = P^r. $$
By the first isomorphism theorem for rings you get $R/P^r \cong R_P /R_P P^r$ as rings.
Edit:
Let $I\subseteq R_P/R_P P^r$ be an ideal and $\psi: R_P \rightarrow R_P / R_P P^r, \ x \mapsto x + R_P P^r $. As $\psi$ is surjective we have
$$ \psi \circ \psi^{-1} (I) = I.$$
Furthermore, the ideal $\psi^{-1}(I)$ is principal (as $R_P$ is a PID) and thus $I= \psi \circ \psi^{-1}(I)$ is principal as well. What does not hold is that $R/P^r$ is a PID for $r>1$: it is not an integral domain, since for $p \in P \setminus P^2$ the nonzero classes satisfy $(p^{r-1}+ P^r)\cdot (p + P^r)= 0 + P^r$, and thus neither is $R_P/R_P P^r$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is there a particularly good function form for this curve? This curve shape seems to appear in various natural phenomena:
Do you recognize it? Do you know a specific function form that could match it or approximate it closely?
|
FIRST PART : Search for a model function.
In the empirical approach, the curve given by L_R_T is used (without the scattered points) : copy on Figure 1 below, curve drawn in red.
On Figure 2, instead of $x$ the abscissas are $\ln(x)$. The curve tends to become sinusoidal. But it is not symmetrical relative to the horizontal axis. This leads one to think that a damping factor should be taken into account.
On Figure 3, with a damping function very roughly adjusted by trial and error, the shape of the curve becomes closer to a sinusoid (dashed curve).
This leads one to think that a good candidate might be of the form
$$y(x)\simeq x^{\alpha}\left(b\:\sin\left(\omega \ln(x)\right)+c\:\cos\left(\omega \ln(x)\right)\right) $$
There are four adjustable parameters $\omega,\alpha,b,c$ in the proposed formula.
This is the same as
$$y(x)\simeq x^{\alpha}\rho\:\sin\left(\omega \ln(x)+\varphi\right) \quad
\begin{cases} \rho=\sqrt{b^2+c^2} \\ \tan(\varphi)=\frac{c}{b}\end{cases}$$
The dashed curves are drawn with $\omega=\frac{\pi}{2}$, $\alpha=-0.25$, $\rho=5.6$, $\varphi=-0.5$. Of course, this is only a rough preliminary result.
SECOND PART : Method to compute approximate values of the parameters.
Generally, this requires a non-linear regression method, for example of the kind of the Levenberg–Marquardt algorithm. These are iterative processes, starting from guessed values of the parameters.
A non-conventional approach (not iterative, no initial guess) is described in the paper : https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
In the case of a damped sinusoidal function, the method of calculation is given on page 66.
The so-called "short way" is sufficient for the next result.
The data used comes from the figure given by L_R_T. Only the scattered points are used, whose coordinates were picked by graphical scanning.
One cannot expect an accurate result because there are only a few points, not well distributed and with a big scatter.
The computed values are shown below (symbols defined on p. 66 of the paper referenced above; note that in the paper $x$ must be replaced by $\ln(x)$ to be consistent with the actual case).
The computed curve is drawn in blue on the next figure.
The full computation process (page 67 of the referenced paper) leads to the refined parameter values.
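For readers who prefer an off-the-shelf route, the same model can be fitted with a standard non-linear least-squares routine (a Python/scipy sketch, my addition; the data below are synthetic placeholders, since the scanned points are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

# model proposed above: y = x^alpha * (b sin(w ln x) + c cos(w ln x))
def model(x, alpha, omega, b, c):
    lx = np.log(x)
    return x**alpha * (b*np.sin(omega*lx) + c*np.cos(omega*lx))

# synthetic demo data standing in for the scanned points
rng = np.random.default_rng(0)
x = np.linspace(1.0, 40.0, 200)
y = model(x, -0.25, np.pi/2, 4.9, -2.7) + rng.normal(0, 0.05, x.size)

# the rough values found above serve as the initial guess
popt, _ = curve_fit(model, x, y, p0=[-0.3, 1.5, 4.0, -2.0])
print(popt)  # ~ [-0.25  1.5708  4.9  -2.7]
```

(Here $b=\rho\cos\varphi\approx 4.9$ and $c=\rho\sin\varphi\approx -2.7$ correspond to the rough values $\rho=5.6$, $\varphi=-0.5$ above.)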
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $f$ is smooth and bijective, is $f^{-1}$ smooth, too? Let $U \subseteq \mathbb{R}^m$ be open. Assume that the function $f \colon U \rightarrow f(U)$ is $\mathcal{C}^{\infty}$ and a bijection and its differential $\text{d}f(x)$ is injective for every $x \in U$. Furthermore, assume that $f$ and $f^{-1}$ are continuous.
Is this enough to conclude that also $f^{-1}$ is $\mathcal{C}^{\infty}$?
I tried to use the inverse function theorem, but this only gives me a local result and no inverse function $f^{-1}$ with maximal domain $f(U)$.
|
Write $f^{-1}=:g$. The inverse function theorem contains the formula
$$dg=\iota\circ df\circ g\ ,\tag{1}$$
where $\iota$ denotes taking the inverse of a regular linear map $L:\>{\mathbb R}^n\to{\mathbb R}^n$. As $f$ and $\iota$ are $C^\infty$, formula $(1)$ can be used to set up an induction proof that $g$ is $C^\infty$ as well, starting from $g\in C^0$ (which should be included in the proof of the inverse function theorem), and making heavy use of the chain rule.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Number of 5-card Charlie hands in blackjack A five-card Charlie in blackjack is when you have a total of 5 cards and you do not exceed a point total of 21. How many such hands are there? Of course, the natural next question concerns six-card Charlies, etc.
It seems like one way of determining the answer might be to determine the total number of 5-card hands and then subtract out the number of hands that exceed 21, but I am at a loss as to how to do this effectively. Is there some use of the inclusion-exclusion principle at work here? The condition that the cards do not exceed 21 is the difficulty I am having a hard time addressing. Any ideas?
|
Here is an answer by brute force enumeration of all 5-card hands, using an R program: the number of 5-card Charlie hands is 139,972.
# 52-card deck by blackjack value: aces count as 1, face cards as 10
deck <- c(rep(1:9, 4), rep(10, 16))
# a 5-card hand is a Charlie if its (minimum) total does not exceed 21
acceptable <- function(x) { sum(x) <= 21 }
# apply the test to every 5-card combination of the deck and count successes
sum(combn(deck, 5, acceptable))
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
k-connectedness of simplicial complexes Let $\mathcal{C}$ be a finite simplicial complex. Is there an algorithm that can tell whether $\mathcal{C}$ is simply connected? If not, are there restrictions on $\mathcal{C}$ under which such an algorithm exists?
|
No, there is no such algorithm. Here's why.
Given a finite simplicial complex $\mathcal{C}$ and a choice of vertex $v$, there is an algorithm to write down a finite presentation of $\pi_1(\mathcal{C},v)$. Conversely, given a finite group presentation $\langle g_i \,|\, r_j\rangle$ there is an algorithm to construct a finite simplicial complex $\mathcal{C}$ and vertex $v$ such that $\pi_1(\mathcal{C},v)$ has the given presentation.
Thus, an algorithm as you have asked for exists if and only if an algorithm exists to decide whether, given a group presentation, the group is trivial.
But no such algorithm exists: https://berstein2015.wordpress.com/2015/02/17/just-about-any-property-of-finitely-presented-groups-is-undecidable/
Roughly speaking, the reason no such algorithm exists is that you can algorithmically encode the halting problem for Turing machines into the problem for deciding whether a group presentation presents the trivial group. Hence, if you could decide the latter problem, then you could decide the former, which cannot be done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
evans book, laplace operator, estimate $\newcommand{\dd}{\text{d}}$
In the proof of symmetry of Green's function, Evans uses this estimate.
Green's function is defined as $G(x,y) := \Phi(y-x) - \phi^x(y)$, where $\Phi$ is the fundamental solution for Laplace's equation, and $\phi^x$ is the corrector function, depending on the geometry and satisfying $\Delta \phi^x = 0$ in $\Omega$ and $\phi^x(y) = \Phi(y-x)$ on $\partial \Omega$ (so that $G(x,\cdot)=0$ on $\partial\Omega$).
How does one obtain the factor $\varepsilon^{n-1}$ in the following estimate
$$\bigg|\int_{ \partial B_\varepsilon(x) } \frac{\partial G(x,z)}{\partial \nu} \, G(y,z) \, \dd \sigma(z) \bigg|
\leq C \, \varepsilon^{n-1} \, \lVert G(x,y) \rVert_{L^\infty(\partial B_\varepsilon(x))} \overset{ \varepsilon \to 0}{\to} 0$$
I remember from previous sections, that $|\nabla \Phi(x)| \leq C \, |x|^{1-n}$, but I am not sure if this helps.
Thanks for any thoughts on this!
|
I believe the source of your confusion is that you have mixed up the $x$ and $y$ in Evans' proof. The estimate he claims is
$$
\left\vert \int_{\partial B(x,\epsilon)} \frac{\partial w}{\partial \nu} v dS\right\vert \le C \epsilon^{n-1} \sup_{\partial B(x,\epsilon)} |v| = o(1)
$$
where (and here is the key part)
$$
w(z) = G(y,z) \text{ and } v(z) = G(x,z).
$$
Consequently, near $x$, the function $w(z) = G(y,z)$ is smooth, and so the first order derivatives are bounded. Thus we may assume that $\epsilon < \epsilon_0$ and bound
$$
\left\vert \int_{\partial B(x,\epsilon)} \frac{\partial w}{\partial \nu} v dS\right\vert \le n \alpha(n) \epsilon^{n-1} \sup_{\partial B(x,\epsilon_0)} |\nabla w| \sup_{\partial B(x,\epsilon)} |v| \le C \epsilon^{n-1} \sup_{\partial B(x,\epsilon)} |v| = o(1).
$$
Here the appearance of $\epsilon^{n-1}$ is simply due to the surface area of $\partial B(x,\epsilon)$. The fact that the resulting estimate is $o(1)$ relies on bounds for the fundamental solution to the Laplacian (as in the previous sections in Evans) and the fact that the corrector function is harmonic and hence smooth.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2183984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is there a differentiable function on $[0,\infty)$ that satisfies $y'=y^n$ and $y(0)>0$? Let $n$ be an integer greater than $1$. Is there a differentiable function on $[0,\infty)$ that satisfies $y'=y^n$ and $y(0)>0$?
My attempt: We solve the differential equation noting that
$\frac{dy}{dt}=y^n$
$\int\frac{dy}{y^n}=\int dt$
So we get that $y=((1-n)(t+c))^{\frac{1}{1-n}}$ for some constant $c$.
However, I notice that since $n\geq2$, we get that
$y(0)=((1-n)c)^{\frac{1}{1-n}}$, so $c$ must be negative in order to actually evaluate this term. In this case, my answer is yes, there exists a function, but I'm not sure this is correct.
Any help appreciated!
|
This is the main point: whatever positive number is picked for $y(0),$ the solution of the ODE $y' = y^n$ blows up in finite time.
If $n=2$ and $y(0) = \frac{1}{W} $ with constant $W > 0,$ then
$$ y(x) = \frac{1}{W - x} $$
for $x < W.$ The solution does not extend to $+\infty$ as requested.
If $n=3$ and $y(0) = \frac{1}{\sqrt {2W}} $ with constant $W > 0,$ then
$$ y(x) = \frac{1}{\sqrt{2W - 2x}} $$
for $x < W.$ The solution does not extend to $+\infty$ as requested.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to Re-write Function as Unit Step Function Write the function given by: $\cos(t)$, $t ∈ [0, 2π)$ and $0$ otherwise, in terms of unit step functions.
This step is at the beginning of solving a Laplace transform, which I can do; I just don't understand this initial step.
|
Note that
$$f(t)=\cos(t)(u(t)-u(t-2\pi))=\begin{cases}\cos(t)&,t\in [0,2\pi)\\\\0&,\text{elsewhere}\end{cases}$$
where $u$ is defined by
$$u(t)=\begin{cases}1&,t\ge 0\\\\0&,\text{elsewhere}\end{cases}$$
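From here the transform follows from the $t$-shift rule and the periodicity of cosine: $\mathcal{L}\{\cos(t)\,u(t-2\pi)\}=e^{-2\pi s}\,\mathcal{L}\{\cos(t+2\pi)\}=e^{-2\pi s}\frac{s}{s^2+1}$. A symbolic check (a Python/sympy sketch, my addition; this assumes a sympy version whose Laplace transform handles shifted Heaviside factors):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.cos(t) * (sp.Heaviside(t) - sp.Heaviside(t - 2*sp.pi))
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F))  # s/(s**2 + 1) - s*exp(-2*pi*s)/(s**2 + 1)
```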
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is Wikipedia page on Kalman Filter's wrong? I was reading the wikipedia page on Kalman filter
Snippet from wikipedia
There, when estimating the covariance matrix, the Q matrix is used. When calculating the Kalman gain, the R matrix is used.
In most literature I found on Kalman filters, it is the other way around.
Here is a snippet from the Probabilistic Robotics written by Dr. Sebastian Thrun from Stanford.
Kalman Filter snippet from Thrun's book
I guess it would be fair to assume that the wikipedia article is incorrect.
Just hoping someone could double check it to make sure I'm right.
|
Thrun is using $Q$ to denote the sensor noise covariance and $R$ to denote the process noise covariance. This is stated on page 35. Wikipedia uses the reverse of those definitions, defined in this section. Neither is wrong. I would argue that Wikipedia's notation is actually more common; I have read quite a bit of literature on this subject.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Percentage based on Profit and Loss
What was the Percentage of Discount given?
*
*23.5 % profit was earned by selling an almirah for rs 12,350.
*If there were no Discount,the earned profit would have been 30%
*The cost price of the almirah was rs 10,000
for these Question option are
*
*only I and II
*only II and III
*only I and III
*Any two of the three
*None of these
The options ask which of the given statements, alone or in combination, suffice to find the discount.
I have tried:
From first statement:
Profit will be 23.5 percent
Selling Price will be 12,350
Cost Price =?
12350*100/123.5 = C.P
10,000 = C.P
From second statement
when no discount offered
Profit = 30 %
therefore selling price =130/100 * 10,000
S.P = 13,000
Difference in selling Price = 13,000 - 12,350 = 650
Percentage discount = 650/12350 *100 = 5.26%
I got the answer as option (a), i.e. from statements I and II we can answer this question. But the given answer is the last option, "None of these". What mistake am I making? Please, can anyone guide me?
|
From statement I you found the cost price, but you are not able to find the marked price, and to compute the offered discount you need to know the marked price.
Now placing things in an order -
From statement I -
After the discount, SP = 12350.
We can also calculate CP from it. CP = 10000.
From statement II -
MP = 13000 (Also SP if no discount).
Thus, I and II give the answer.
II and III cannot give the answer, because we would also need the profit percentage with the discount (which comes from statement I). So II and III are not sufficient.
Since III gives C.P. = Rs. 10,000, I and III give the answer.
Therefore, I and II [or] I and III give the answer.
So option 5th is answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Can you calculate $\frac{dx}{dy}$ by finding $\frac{dy}{dx}$ and flipping it over? This may be a silly question, but, for example, if you had the gradient at $x=4$ of $y=x^2+1$, can you just calculate $\frac{dx}{dy}$ by finding $\frac{dy}{dx}$ and flipping it over? Or must you make $x$ the subject and differentiate?
|
Implicit differentiation with the operator $d/dy$:
$y = x^2 + 1$
$\frac{d}{dy}(y) = \frac{d}{dy}(x^2 + 1)$
$1 = 2x \frac{dx}{dy}$ (by the chain rule on the RHS)
so $\frac{dx}{dy} = \frac{1}{2x}$, which is indeed the reciprocal of $\frac{dy}{dx} = 2x$,
as expected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
$4n$ is a square modulo $d$ implies $n$ is a square modulo $d$ I was wondering if someone could help me on a small detail that I need to clarify. Let $d$ be a squarefree integer and $n$ be any integer. Then I want to show the following:
$4n$ is a square modulo $d$ $\Rightarrow$ $n$ is a square modulo $d$. Namely if $\exists x$ such that $x^2 \equiv 4n \mod d$, then $\exists x^{\prime}$ such that ${x^{\prime}}^2 \equiv n \mod d$.
This result would be straightforward if I could use Euler's Criterion, but my module doesn't cover it and $d$ is not necessarily a prime.
I'm guessing that this result generalizes with any square instead of $4$.
|
Proof for odd $d$ :
Suppose, $4n$ is a square modulo $d$, in other words $$x^2\equiv 4n\mod d$$ for some $x\in \mathbb Z_d$.
Since $d$ is odd, there exists a $y\in\mathbb Z_d$ with $2y\equiv 1\mod d$, and we have $$(xy)^2=x^2y^2\equiv 4y^2n\equiv n\mod d$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $a * b = a + b - ab$ defines a group operation on $\Bbb R \setminus \{1\}$ So, basically I'm taking an intro into proofs class, and we're given homework to prove something in abstract algebra. Being one that hasn't yet taken an abstract algebra course I really don't know if what I'm doing is correct here.
Prove: The set $\mathbb{R} \backslash \left\{ 1 \right\}$ is a group under the operation $*$, where:
$$a * b = a + b - ab, \quad \forall \,\, a,b \in \mathbb{R} \backslash \left\{ 1 \right\} .$$
My proof structure:
After reading about abstract algebra for a while, it looks like what I need to show is that if this set is a group, it has to satisfy associativity with the operation, and the existence of an identity and inverse element.
So what I did was that I assumed that there exists an element in the set $\mathbb{R} \backslash \left\{ 1 \right\}$ such that it satisfies the identity property for the set and another element that satisfies the inverse property for all the elements in the set. However I'm having trouble trying to show that the operation is indeed associative through algebra since
$$\begin{align}
a(b * c) & = a(b+c) - abc \\
& \ne (a+b)c - abc = (a*b)c \\
\end{align}$$
So in short, I want to ask if it's correct to assume that an element for the set exists that would satisfy the identity and inverse property for the group. Also, is this even a group at all since the operation doesn't seem to satisfy the associativity requirements.
|
The statement of the associativity condition is wrong; it should be
$$(a \ast b) \ast c = a \ast (b \ast c) .$$
Expanding the l.h.s. gives
\begin{align}(a \ast b) \ast c &= (a + b - ab) \ast c \\ &= (a + b - ab) + c - (a + b - ab) c \\ &= a + b + c - bc - ca - ab + abc .\end{align}
Now, do the same thing for the r.h.s. and verify that the expressions agree.
We cannot simply assume the existence of an identity and inverse. There are plenty of associative binary operations which lack one or both!
We can pick out the identity element using the definition: We must have, for example, that
$$a \ast e = a$$ for all $a$, and expanding gives
$$a + e - ae = a.$$
Can you find which $e$ makes this true for all $a$?
Likewise, to show that the operation admits inverses, it's enough to produce a formula for the inverse $a^{-1}$ of an arbitrary element $a$, that is, an element that satisfies $a \ast a^{-1} = e = a^{-1} \ast a$. As with the identity, use the definition of $\ast$ and solve for $a^{-1}$ in terms of $a$.
Remark If we glance at the triple product, which by dint of associativity we may as well write $a \ast b \ast c$, we can see the occurrence of the elementary symmetric polynomials in $a, b, c$, which motivates writing that product as $(a - 1) (b - 1) (c - 1) + 1$. Then, glancing back we can see that we can similarly write $a \ast b = (a - 1) (b - 1) + 1,$ which is just the conjugation of the usual multiplication (on $\Bbb R - \{0\}$) by the shift $s : x \mapsto x + 1$, that is, $a \ast b := s(s^{-1}(a) s^{-1}(b))$. Since multiplication defines a group structure on $\Bbb R - \{0\}$, that $\ast$ defines a group structure follows by unwinding definitions. For example, to show associativity, we have
$$(a \ast b) \ast c = s(s^{-1}(s(s^{-1}(a) s^{-1}(b))) s^{-1}(c)) = s(s^{-1}(a) s^{-1}(b) s^{-1}(c)) = s(s^{-1}(a)s^{-1}(s(s^{-1}(b) s^{-1}(c))) = a \ast (b \ast c) .$$
(Note that in writing the third expression without parentheses we have implicitly used the associativity of the usual multiplication.) You can just as well use this characterization to determine the group identity $e$ and the formula for the inverse of an element under $\ast$.
Indeed, this together with analogous arguments for the other group axioms shows that conjugating any group operation by a set bijection defines an operation on the other set.
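Not a substitute for the algebraic verification, but here is a quick numerical sanity check of the three axioms (a Python sketch, my addition):

```python
import random
from math import isclose

def star(a, b):
    return a + b - a*b

for _ in range(1000):
    a, b, c = (random.uniform(2, 5) for _ in range(3))  # stay away from 1
    assert isclose(star(star(a, b), c), star(a, star(b, c)))  # associativity
    assert isclose(star(a, 0), a)                             # identity e = 0
    assert isclose(star(a, a/(a - 1)), 0, abs_tol=1e-9)       # inverse of a
```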
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
Bounded functions and Montel's Theorem Let $\Omega\subset\mathbb{C}$ be a region and $(z_n)_n\subset\Omega$ such that $z_n\rightarrow z\in\Omega$. Also, let $f_n:\Omega\rightarrow\mathbb{C}$ be a sequence of $f_n\in\mathcal{H}(\Omega)$ such that there exists $M$ satisfying that for each $n\in\mathbb{N}$, $|f_n|<M$ and $\lim_{n\rightarrow\infty}|f_n(z_n)|=M$.
I have to show that $|f_n|\rightarrow M$ uniformly on the compact sets of $\Omega$.
By now I have noticed that $\mathcal{F}:=(f_n)_n$ is a normal family thanks to Montel's theorem, and also that
$$|f_n(z_n)|<M\qquad\forall n\in\mathbb{N}.$$
How can I proceed with the rest of the proof? I have tried to prove it via the definition of each property, but I don't get anywhere.
|
Let $K \subset \Omega$ be a compact set. Without loss of generality, we can suppose that $K$ contains an open disk centered at $x$.
Now let's proceed by contradiction, i.e. suppose that $\vert f_n\vert$ does not converge uniformly to $M$. This means that there exist a real $0 < M^\prime < M$, a strictly increasing sequence of integers $(n_j)_j$, and $z_j\in K$ with $\vert f_{n_j}(z_j)\vert \le M^\prime$. The sequence $(z_j)$ is included in the compact set $K$ and therefore has a subsequence converging to a complex number $z \in K$. Again without loss of generality, we can suppose that the sequence $(f_n)$ is such that there exists a sequence $(z_n) \to z$ with $\vert f_n(z_n) \vert \le M^\prime$.
Now, using Montel's theorem, there exists a subsequence $(f_{n_k})$ converging uniformly on $K$ to a holomorphic function $f$. This $f$ is not constant, as
$$\vert f(z) \vert \le M^\prime < M=\vert f(x) \vert.$$
But this is contradicting the maximum modulus principle as $x$ is an interior point of $K$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Containing an open set = being an open set? In stats courses - particularly the Casella and Berger text - reference is made to theorems being satisfied if and only if some $\Theta \subset \mathbb{R}^p$ "contains an open set," $p \geq 1$.
Isn't this just the same as $\Theta$ being an open set? Suppose there were some $\Theta^{\prime} \subset \Theta$ that were open. Then $\Theta^{\prime}$ would contain some open rectangles. But $\Theta$ would contain these open rectangles as well. Hence $\Theta$ is open.
I'm not at all familiar with topology, so I apologize for any informal terminology.
|
No, it does not mean the same thing: every subset contains an open subset, namely the empty set, but it need not itself be open; consider a closed ball. For the standard topology on $\mathbb{R}$, you can take the closed interval $[a,b]$, which contains the open interval $(a,b)$, but the closed interval is not open.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
find a conformal map which maps a strip with a slit to the same strip without slit
find a conformal map which maps $\left\{ z: 0<\operatorname{Im z}<1\right\}$ minus $[a,a+hi]$ into the same strip without slit, where $a\in \mathbb{R}$ and $0<h<1$.
Since the problem asks to eliminate the slit, I want to apply $z^2$ and $\sqrt{z}$ transformations to turn it into a region without a slit (say, first move the slit to the imaginary axis, then apply $\exp$, move it to the negative real axis, and take a square root to get a half plane). The problem is that I don't know how to map a plane/half plane/etc. back into a strip, and I found no discussion of this kind of map. Should I try to invert a map from a strip to something, or what? Thanks for any help.
|
If you could find a map which maps $\{z:0<\operatorname{Im} z<1\}\text{ minus }[a,a+hi]$ onto the upper half plane, then consider $$
z\mapsto \frac{1}{\pi}\log z,$$
which maps the upper half plane onto $\{z:0<\operatorname{Im} z<1\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2184950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $g(x)=\sqrt[3]{x}$ prove that g is continuous at c= 0 Please check my proof
Let $\epsilon >0$ and $\delta >0$
$$|x-0|<\delta \leftrightarrow |\sqrt[3]{x}-\sqrt[3]{0}|<\epsilon $$
$$ \leftrightarrow \sqrt[3]{x}<\epsilon $$
$$ \leftrightarrow x<\epsilon ^{3}$$
choose $\delta =\epsilon ^{3}$
then
$$|x-0|< \delta \leftrightarrow |\sqrt[3]{x}-\sqrt[3]{0}|<\sqrt[3]{\epsilon ^{3}}=\epsilon $$
therefore it is continouos at 0
|
You've done most of the work correctly. But some comments on your writeup:
*
*The first block of work between “Let $\epsilon > 0$” and “Choose $\delta = \epsilon^3$” is scratch work. It shouldn't be included in your final product.
*You write “Let $\epsilon > 0$ and $\delta > 0$” at the start; this is not idiomatic. You are fulfilling a definition that starts “For every $\epsilon > 0$, ...” In other words, $\epsilon$ is arbitrary. But $\delta$ is far from arbitrary; you have to specify it.
So here is I would write it:
Given $\epsilon > 0$, let $\delta = \epsilon^3$. Then for any real number $x$,
$$
|x - 0 | < \delta \implies -\delta < x < \delta \implies -\epsilon^3 < x < \epsilon^3
$$
Taking cube roots,
$$
-\epsilon < \sqrt[3]{x} < \epsilon
\implies |\sqrt[3]{x} - 0 | < \epsilon
$$
Since this is true for any $\epsilon$, we have $\lim_{x\to 0} \sqrt[3]{x} = 0$.
This is probably how you are expected to write up the problem. However, a question for your teacher: How do we know that
$$
-\epsilon^3 < x < \epsilon^3
\implies -\epsilon < \sqrt[3]{x} < \epsilon
$$
is true? How, in fact, do we know what $\sqrt[3]{x}$ is when $x$ is a real number? In fact, we don't know how to define $\sqrt[3]{x}$ until we know that the function $g(x) = x^3$ is continuous, and has a continuous inverse. So this whole exercise puts the cart before the horse in a way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\lim_{n\to \infty}\sin{n\pi x} =0$ if $x\in \mathbb{Z},$ but the limit fails to exist if $x\notin \mathbb{Z}.$ Show that $\lim_{n\to \infty}\sin{n\pi x} =0$ if $x\in \mathbb{Z},$ but the limit fails to exist if $x\notin \mathbb{Z}.$
1st part
If $x\in \mathbb{Z}$ then $\sin{n\pi x}=0$ for all $n,$ giving the first part.
Edit:
2nd part
If $x\notin \mathbb{Z},$
I want to show that the limit doesn't exist.
How to do that?
|
For $x \in \mathbb{R}$, for $n \in \mathbb{N}$, we have $$\sin ((n+1) \pi x) - \sin (n \pi x) = \sin (n \pi x) \big( \cos(\pi x) - 1) + \cos(n \pi x)\sin(\pi x).$$
Denote $A = \cos(\pi x)-1$ and $B = \sin(\pi x)$. We have $A \neq 0$ and $B \neq 0$, and $$\sin \big( (n+1) \pi x \big) - \sin (n \pi x) = A \sin(n \pi x) + B \cos(n \pi x).$$
Now denote $C = \sqrt{A^2+B^2}$ ; classically, there exists $\phi \in \mathbb{R}$ such that $$\forall n \in \mathbb{N},\ \sin \big( (n+1) \pi x \big) - \sin (n \pi x) = C \sin (n \pi x + \phi).$$
$ $
Now we assume that $\lim \limits_{n \to +\infty} \sin(n \pi x)$ exists. Thus $C \sin (n \pi x + \phi) \underset{n \to +\infty}{\longrightarrow} 0$. Then, you can find here a short proof that $$\forall y \in \mathbb{R},\ |\sin (y)| \ge \frac{2}{\pi}d(y,\pi \mathbb{Z})$$ where $d(t,A) = \inf \{ |t-a|,\ a\in A\}$ stands for the distance to the set $A$.
As $C > 0$, we have that $d(n\pi x + \phi, \pi \mathbb{Z}) \underset{n \to +\infty}{\longrightarrow} 0$. Using the continuity of the distance yields $d\big( ((n+1)\pi x + \phi)-(n\pi x + \phi), \pi \mathbb{Z} \big) \underset{n \to +\infty}{\longrightarrow} 0$, i.e. $d(\pi x, \pi \mathbb{Z}) \underset{n \to +\infty}{\longrightarrow} 0$, so $d(\pi x, \pi \mathbb{Z}) = 0$. As $\pi \mathbb{Z}$ is closed in $\mathbb{R}$, we conclude $\pi x \in \pi \mathbb{Z}$, and thus $x \in \mathbb{Z}$.
$ $
Hence, if $\lim \limits_{n \to +\infty} \sin(n \pi x)$ exists, then $x \in \mathbb{Z}$ (and thus, if $x$ is not an integer, then $\big( \sin (n \pi x) \big)_{n \ge 0}$ does not converge).
Note that if $x$ is irrational, you can even prove that $\big( \sin (n \pi x) \big)_{n \ge 0}$ is dense in $[-1,1]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 2
}
|
Understanding smooth manifolds I'm having a little trouble understanding smooth manifolds.
I have been told that the ellipsoid $E:\ x^2+2y^{2}+3z^{2}=6$, where $(x,y,z) \in\mathbb{R}^3$, is a 2-dimensional smooth manifold, but I didn't really understand why. Can somebody help explain why?
Thanks
|
You don't have to know what a manifold is in order to check that a subset of $\mathbb R^n$ is a submanifold!
To see that $E$ is a submanifold it suffices to check that the smooth function $f(P)=f(x,y,z)= x^2+2y^{2}+3z^{2}-6$ has a non-zero gradient at all $P\in E$, which is clear since $\operatorname {grad} f(x,y,z)=(2x,4y,6z)$ only vanishes at the origin.
An important but neglected distinction
a) Certain subsets $M\subset \mathbb R^n$ are called submanifolds of $\mathbb R^n$: there are several equivalent definitions for these sets, all deriving from the implicit function theorem.
The point I want to emphasize is that (in principle!) given $M$ one can answer the question "is $M$ a submanifold of $\mathbb R^n$ ?" unequivocally with "yes!" or "no!", without invoking any extraneous structure.
b) On the other hand, given an abstract set $S$ it does not make sense to ask whether $S$ is or is not a smooth manifold: a smooth manifold is a structure consisting of a topology on $S$ plus a complicated set $\mathcal A$, called an atlas, which has to satisfy a lot of weird requirements.
c) The link between the two concepts above is that a submanifold $M\subset \mathbb R^n$ can be endowed very canonically with such an atlas $\mathcal A$, so that $(M,\mathcal A)$ becomes a manifold in its own right.
The important practical point however is that a student asked to check whether some $M\subset R^n$ is a manifold should never worry about atlases or charts: submanifolds are automatically provided with a canonical atlas and the student should only concentrate on the tools for proving that $M$ is a submanifold, like implicit function theorem, submersions, gradients, differentials, immersions, diffeomorphisms,... (these depend of course on the presentation the teacher has chosen).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Solve $f^2(x)=\frac{k}{f''(x)}$ While studying physics I have many times stumbled upon forces that are directly affected by the position of a particle. Given such a force, how could we approach solving the equation:
$$f^2(x)=\frac{k}{f''(x)}$$
I am new to differential equations and I would really appreciate if someone could explain the thinking process and the steps needed to reach a solution. Thanks in advance.
|
Putting $y = f(x)$, we have
$$y^{\prime\prime} = \frac{k}{y^2}.$$
Now multiplying both sides by $2y^\prime$, we get
$$2y^\prime y^{\prime\prime} = \frac{2ky^\prime}{y^2},$$
which can be rewritten as
$$\left( \left( y^\prime \right)^2 \right)^\prime = \frac{2ky^\prime}{y^2} = \frac{\mathrm{d}}{\mathrm{d} x} \left( - \frac{2k}{y} \right),$$
which upon integration of both sides gives
$$ \left( y^\prime \right)^2 = c -\frac{2k}{y},$$
where $c$ is an arbitrary constant of integration. So
$$ y^\prime = \left( c -\frac{2k}{y} \right)^{\frac{1}{2}},$$ and therefore
$$ \frac{y^\prime }{ \left( c -\frac{2k}{y} \right)^{\frac{1}{2}} } = 1,$$
which we can rewrite as
$$ \frac{ y^\frac{1}{2} y^\prime }{\left( cy - 2k \right)^\frac{1}{2} } = 1, $$
or $$ \frac{1}{c^\frac{1}{2}} \left( \frac{y}{ y - \frac{2k}{c} } \right)^\frac{1}{2} y^\prime = 1.$$
Can you take it from here on?
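As a numerical cross-check of the first integral $(y')^2 + \frac{2k}{y} = c$ (a Python/scipy sketch, my addition; $k=2$ and the initial data are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0
def rhs(x, u):                 # u = (y, y')
    return [u[1], k / u[0]**2]

sol = solve_ivp(rhs, [0, 5], [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
y, yp = sol.sol(np.linspace(0, 5, 6))
# the first integral (y')^2 + 2k/y should stay constant (= c = 4 here)
print(yp**2 + 2*k/y)           # all ~ 4.0
```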
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Can someone help me solve $z^{3}=-i$? Here $z$ is a complex number. I tried to solve this question by setting $z=a+bi$, but when I calculated $(a+bi)^{3}$ I found that it's a little bit complicated to compute. Can someone teach me an easier way to solve the problem? Thanks a lot.
|
$z^3=-i \leftrightarrow z^3+i=0 \leftrightarrow (z)^3+(-i)^3=0$.
The LHS factors into $(z-i)(z^2+iz-1)=0$.
Now you have $(z-i)=0 \rightarrow z=i$ and $(z^2+iz-1)=0 \rightarrow z=\frac{-i \pm \sqrt{3}}{2}$.
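A numeric sanity check of the three roots (a Python sketch, my addition):

```python
roots = [1j, (3**0.5 - 1j)/2, (-3**0.5 - 1j)/2]
print([abs(z**3 + 1j) for z in roots])  # all ~ 1e-16, i.e. z^3 = -i
```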
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why isn't the unit interval $I=[0,1]$ the universal cover of $S^1$? The question is very simple, I think. But I cannot see the answer.
The fact is that a universal cover has a fiber which is isomorphic (under a choice of a point) to the fundamental group of the covered space. In this case $\mathbb{R}$ with the exponential map has this property, while $I$ with the exponential map does not. But it is a covering, it is simply connected, and so by definition it is also the universal cover.
Where is the flaw in this (paradoxical) argument?
|
Another way to tell that the map is not a covering map is recalling that if the base is connected, then the covering has fibers of the same cardinality, which is clearly not the case (take the fiber at $0$ and at $1/2$).
Yet another way is to use the fact that a compact base space with infinite fundamental group must have a noncompact universal cover (although rather indirect, this also shows that there cannot exist any cover whatsoever $p:I \to S^1$ and is a nice result in itself). One important note is that this does not use "uniqueness" of the universal cover, only that we have a covering map from a simply connected space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Who can prove that a triangular number cannot be a cube, fourth power or fifth power? Triangular numbers (See https://en.wikipedia.org/wiki/Triangular_number )
are numbers of the form $$\frac{n(n+1)}{2}$$
In ProofWiki I found three claims about triangular numbers: that a triangular number cannot be a cube, a fourth power, or a fifth power. Unfortunately, neither was a proof given nor did I manage to find one myself. Therefore my question:
Does someone know a proof that a triangular number cannot be a cube, a fourth power or a fifth power?
|
First, notice that $n$ and $n+1$ are coprime, and if a product of coprime numbers is a perfect $b$-th power then both factors are also $b$-th powers. Now divide the problem into the cases where $n$ is odd and even.
$$n=2t$$
$$t(2t+1)=a^b$$
Then $t$ and $2t+1$ are b-th powers. Let $t=y^b$, $2t+1=x^b$. Then
$$x^b-2y^b=1$$
Applying the same substitutions to the case where $n$ is odd you find $$x^b-2y^b=-1$$
In this answer Keith Conrad proves the only solution is $x=1$, $y=0$, which mean $n=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
If A and B are two mutually exclusive events with P(A) = 0.2 and P(B) = 0.3, then what is P (A and B(complement)) If A and B are two mutually exclusive events with P(A) = 0.2 and P(B) = 0.3, then what is P (A and B(complement))
I thought it would be $P(B^c) = 0.7$, because it is $1-0.3$, and then $P(A) = 0.2$, so $0.2 \times 0.7 = 0.14$, but unfortunately this was incorrect.
|
If $A$ and $B$ are mutually exclusive, then $A \subset B^c$.
Hence $P(A \cap B^c) = P(A) = 0.2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does $\lim\limits_{x \to +\infty } f'(x) = + \infty \Leftrightarrow \lim\limits_{x \to +\infty } \frac{{f(x)}}{x} = + \infty $? Let $f:\Bbb R \to \Bbb R$ be a differentiable function. If $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x} = + \infty $, is it always true that $\mathop {\lim }\limits_{x \to + \infty } f'(x) = + \infty $? How about the converse?
For example, $\mathop {\lim }\limits_{x \to + \infty } \frac{{\ln x}}{x} = 0$ is finite, and we can see $\mathop {\lim }\limits_{x \to + \infty } (\ln x)' = 0$ is finite. $\mathop {\lim }\limits_{x \to + \infty } \frac{{{x^2}}}{x} = + \infty $ so $\mathop {\lim }\limits_{x \to + \infty } ({x^2})' = \mathop {\lim }\limits_{x \to + \infty } x = + \infty $. So the claim seems good to me, but I don't know how to actually prove it. $\mathop {\lim }\limits_{x \to + \infty } f'(x) = \mathop {\lim }\limits_{x \to \infty } \mathop {\lim }\limits_{h \to 0} \frac{{f(x + h) - f(x)}}{h}$, and I don't know how to deal with this mixed limit. Also, since the limits in the proposition diverge, it looks like a mean value theorem sort of argument cannot apply here.
|
If $\lim_{x\to \infty}f'(x)$ exists, then from L'Hospital's Rule we have
$$\lim_{x\to \infty}\frac{f(x)}{x}=\lim_{x\to \infty}f'(x)$$
regardless of whether $\lim_{x\to \infty}f(x)$ exists or not (See the note that follows Case 2 of THIS ARTICLE).
Hence, if $\lim_{x\to \infty}f'(x)=\infty$, then $\lim_{x\to \infty}\frac{f(x)}{x}=\infty$ also.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
}
|
Is Digit-wise calculation possible? Suppose we have to do an intense calculation, like calculating $a^b$ for large $a$ and $b$. Then, instead of multiplying $a$ by itself $b$ times, could we just do some shortcut method with $a$ and $b$ which gives us the units digit of $a^{b}$, then another algorithm which gives the tens digit and hence, could we find the answer digit-by-digit?
I mean, could we have a function $f$ or any algorithm which takes three inputs: $a$, $b$, and $n$, such that $f(a,b,n)$ gives us the $n^{th}$ digit of $a^b$?
Similarly, could we have another algorithm, which takes inputs $a$ and $n$ and gives us the $n^{th}$ digit of $a!$, i.e. calculating factorials digit-wise?
Maybe it's like a divisibility test, where you have an algorithm to check whether $a$ is divisible by $b$ without actually dividing $a$ by $b$. Maybe binary could be of some help in digit-wise calculation, because in binary all the digits can have only two possible values.
|
If $a$ is coprime to $10$, then $a^n \mod 10^k$ is periodic in $n$ with period dividing $\varphi(10^k) = 4 \times 10^{k-1}$. Thus the lowest $k$ decimal digits of
$a^n$ are the same as those of $a^m$ where $n \equiv m \mod 4 \times 10^{k-1}$.
For example, since $21 \equiv 1 \mod 4$, the lowest digit of $7^{21}$ is the same as that of $7^1$, namely $7$.
In the case of $7$, it turns out that the order of $7 \mod 1000$ is actually $20$, so the lowest three digits of $7^{21}$ are the same as those of $7^1$, namely $007$.
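On the computational side, the last $k$ digits of $a^b$ can be obtained directly by modular exponentiation, without ever forming $a^b$ in full. A minimal Python sketch (the helper name last_digits is mine; the built-in three-argument pow performs square-and-multiply modulo $10^k$):

    def last_digits(a, b, k):
        return pow(a, b, 10**k)   # a**b mod 10**k, via square-and-multiply

    # Example from above: the lowest three digits of 7**21 are 007.
    print(str(last_digits(7, 21, 3)).zfill(3))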
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the sigma algebra of cylindrical sets? This is a basic question but still, let $C$ be the space of real-valued continuous functions $f$ on $[0,t]$. Then a cylindrical subset of $C$ is defined as a set of the form
$$
S = \{\, f\in C; \,(f(t_1),\dots\,f(t_n))\in B\}
$$
where $B\in \mathcal{B}^n$ and $0<t_1<\dots<t_n<t$. So, take
$$
S_1 = \{\, f\in C;\, f(t_1) \in B_1 \}
$$
$$
S_2 = \{\, g\in C;\, g(t_2) \in B_2 \}
$$
How is the union of these two sets a cylindrical subset of $C$ as defined above? The union of the sets of functions $f$ and $g$ such that $f(t_1)$ is in some interval and $g(t_2)$ is in some other interval: isn't it the set of functions $h$ such that either $h(t_1)$ or $h(t_2)$ belongs to the said intervals?
Or is it that the union of the sets $S_1$ and $S_2$ is defined as the cylindrical set that corresponds to the Borel set $(B_1\times\mathbb{R})\, \cup \,(\mathbb{R}\times B_2)$?
(Apologies for any lack of rigor)
|
Be careful: the (or maybe "a") family of cylinder sets need not be a priori closed under unions! You gave the right definition of cylinder set, and your second reading is the correct one: $S_1\cup S_2$ is the cylinder set over the pair $(t_1,t_2)$ corresponding to the Borel set $(B_1\times\mathbb{R})\cup(\mathbb{R}\times B_2)$. Note, though, that the family
$$ \{ C_{t_1 \dots t_n} (B) : B \in \mathcal{B}^n,\ n \in \mathbb{N},\ t_1, \dots, t_n \in [0,T] \} $$
is a priori just a $\pi$-system (i.e., it is closed under finite intersections); nothing in the definition hands you an algebra or a $\sigma$-algebra. If you want a "closed-under-union" property in general, you should consider the $\sigma$-algebra generated by all cylinder sets. Here's an idea: for fixed time points, let $\mathcal{F}_{t_1 \dots t_n}$ be the $\sigma$-algebra generated by the cylinder sets over $t_1,\dots,t_n$, and define:
$$ \overline{\mathcal{F}} = \bigcup_{n=1}^{+\infty} \bigcup_{t_i \in [0,T],\, i \le n} \mathcal{F}_{t_1 \dots t_n}. $$
Then $ \overline{\mathcal{F}}$ is actually an algebra of cylinder sets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2185980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Looping through zeroes in polynomials Question.
If $p$ is a polynomial of degree $n$ with $p(\alpha)=0$, what do we know of the polynomial $q$ (with degree $n-1$) such that the numbers $(q^k(\alpha))_{k=1}^n$ contain all of the zeroes of $p$?
Here I denote $q(q(\cdots q(\alpha)))=q^k(\alpha)$.
Notes.
We know for a fact such $q$ exists (at least when the zeroes are distinct), since there always exists a polynomial of degree at most $n-1$ through $n$ given points with distinct abscissas. $q$ is not unique, however, since there are multiple permutations we can put the zeroes in.
Examples.
For linear $p$ (write $p(x)=a_1x+a_0$), this is obvious; $q(x)=\tfrac{-a_0}{a_1}=\alpha$ suffices. If $p(\alpha)=0$, then $q(\alpha)=\alpha$ indeed are all the zeroes of $p$.
If $p$ is quadratic, write $p(x)=a_2x^2+a_1x+a_0$, and have $p(\alpha)=0$ again; now $q(x)=\frac{-a_1}{a_2}-x$.
If $p$ is cubic, write $p(x)=a_3x^3+a_2x^2+a_1x+a_0$. This is where I get stuck, since the roots of cubic equations aren't expressions that are easy to work with.
Attempts.
First I see (denote the (not necessarily real) zeroes by $z_1,z_2,\cdots,z_n$) that $z_1+\cdots+z_n=\frac{-a_{n-1}}{a_n}$ and $z_1z_2\cdots z_n=\frac{(-1)^na_0}{a_n}$. We can produce similar expressions for the other coefficients, but I doubt this is useful; they're not even solvable for $n>4$. We also have (given $z_1$)
$$-a_nz_1^n=a_{n-1}z_1^{n-1}+\cdots+a_1z_1+a_0$$
with which we can reduce every expression of degree $n$ or larger in $z_1$ to an expression of degree $n-1$ or smaller.
For $n=3$ (let's do some specific examples), we could write $q(x)=b_2x^2+b_1x+b_0$, and take for example $p(x)=x^3-x-1$. Then, if $\alpha$ is a zero of $p$, then $\alpha^3=\alpha+1$, and so $q(\alpha)^3=q(\alpha)+1$, which is
$$(b_2\alpha^2+b_1\alpha+b_0)^3=b_2\alpha^2+b_1\alpha+b_0+1$$
working out the constant terms gives $b_2^3+b_1^3+b_0^3+6b_0b_1b_2=b_0+1$ which isn't very useful either.
Please, enlighten me. Has any work been done on this subject, am I missing something obvious, or perhaps you see something that I missed?
|
Let $\alpha=\alpha_1, \alpha_2, \ldots, \alpha_m$ be the distinct roots of $p$.
Choose a permutation $\sigma$ of $1,2,\dots,m$ without fixed points. For instance, an $m$-cycle such as $(12\cdots m)$.
Let $q$ be the unique polynomial of degree at most $m-1$ such that $q(\alpha_i)=\alpha_{\sigma(i)}$ (Lagrange interpolation). That will work, but in general it won't have degree exactly $n-1$.
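To make the recipe concrete, here is a small sketch with sympy (the example polynomial $p(x)=x^3-7x+6=(x-1)(x-2)(x+3)$ and the cycle $1\mapsto 2\mapsto -3\mapsto 1$ are my own illustrative choices):

    from sympy import symbols, interpolate, expand

    x = symbols('x')
    # q interpolates the 3-cycle on the roots 1, 2, -3 of p(x) = x^3 - 7x + 6
    q = interpolate([(1, 2), (2, -3), (-3, 1)], x)
    print(expand(q))                             # a quadratic, as expected
    print([q.subs(x, r) for r in (1, 2, -3)])    # -> [2, -3, 1]: iterating q from 1 visits every root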
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Prove that if $r(t)\times \frac{dr(t)}{dt}=0$, then $r(t)$ has a fixed direction. Let $r(t)$, where $t$ is a parameter $(t \in \Bbb R)$, be a position vector such that $r(t)\times \frac{dr(t)}{dt}=0$
I am asked to show that r(t) has a fixed direction.
A hint says: Let $r(t) = f(t)\hat e(t)$ where $\hat e(t)$ is a unit vector).
Could someone tell me how to use the hint? May I ask for a proof?
EDIT: I did some searching and found this question Show fixed direction of a position vector
But I cannot understand the answer. Could someone please use the hint to explicitly show that? Thanks in advance!
|
If $r(t)=f(t)\hat e(t)$ then
$$ \frac{d \vec r}{dt} = f'(t)\hat e(t) + f(t)\frac{d\hat e(t)}{dt} $$
$$ \vec r \times \frac{d \vec r}{dt} = \vec r \times f'(t)\hat e(t) + \vec r \times f(t)\frac{d\hat e(t)}{dt} = \vec r \times f(t)\frac{d\hat e(t)}{dt} = 0,$$
where the first term vanishes because $\vec r = f(t)\hat e(t)$ is parallel to $\hat e(t)$.
Thus $\frac{d\hat e(t)}{dt} = 0$, or $\vec r$ is parallel to $\frac{d\hat e(t)}{dt}$.
But $\vec r$ is perpendicular to $\frac{d\hat e(t)}{dt}$, because of the property that a vector of constant magnitude is perpendicular to its derivative (this applies to the unit vector $\hat e(t)$, and $\vec r$ is parallel to $\hat e(t)$).
Therefore, $\frac{d\hat e(t)}{dt} = 0$, i.e. there is no change in the direction vector.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Injectivity of the factorial-related map $f:\mathscr P(\Bbb N) \to \Bbb R$, $f(I) = \sum_{n \in I} \frac 1 {n!}$ Note: For the purpose of this question, $\Bbb N$ does not include $0$.
I have a function $f:\mathscr P(\Bbb N) \to \Bbb R$ defined by:
$$f(I) = \sum_{n \in I} \frac 1 {n!}$$
This is essentially a transformation from binary sequences indexed by $\Bbb N$ to a number in $\Bbb R$.
I would like to prove that this function is injective.
|
Hint First prove that $f(\{k\}) > f(\{k + 1, k + 2, \ldots\})$ for all $k \in \Bbb N$.
Then, consider distinct elements $I, J \in \mathscr{P}(\Bbb N)$ and the smallest $k \in \Bbb N$ which is in one and not the other. (You'll probably also want to use the apparent fact that $f$ is monotonic under inclusion, that is, that if $I \subseteq J$ then $f(I) \leq f(J)$.)
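A numerical check of the hint's first inequality, in exact rational arithmetic (a sketch; the truncation point $N=40$ and the crude remainder bound $2/(N+1)!$, valid since the tail is dominated by a geometric series, are my own choices):

    from fractions import Fraction
    from math import factorial

    def tail_bound(k, N=40):
        # upper bound for f({k+1, k+2, ...}): a partial sum plus 2/(N+1)!
        s = sum(Fraction(1, factorial(n)) for n in range(k + 1, N + 1))
        return s + Fraction(2, factorial(N + 1))

    for k in range(1, 10):
        assert Fraction(1, factorial(k)) > tail_bound(k)
    print("1/k! dominates the whole tail for k = 1..9")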
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can we define a 'rectangular coordinate' on a curved surface? We use rectangular coordinates on a flat plane, so can we use them on a curved surface, with the axes somehow bent?
If yes, is there any application? Also, can we generalize this to higher dimensions?
|
Assume we have a parametrized surface (patch) given by $S:[0,1]^2\rightarrow \mathbb R^n$ and we are looking for a reparametrization function $\varphi:[0,1]^2\rightarrow [0,1]^2$ so that
$$\langle \partial_x (S\circ \varphi),\partial_y (S\circ \varphi)\rangle=0,$$
which expresses this perpendicularity claim you are looking for. At some point $p\in[0,1]^2$, this can be written as
$$0=\langle D_pS\cdot \partial_x\varphi, D_pS\cdot \partial_y\varphi\rangle
=\langle\partial_x\varphi,\underbrace{(D_pS)^\top D_pS}_{:=A_p}\cdot\partial_y\varphi\rangle
=\langle\partial_x\varphi,A_p\cdot\partial_y\varphi\rangle
=\langle\partial_x\varphi,\partial_y\varphi\rangle_{A_p}.
$$
So we are looking for a diffeomorphism $\varphi:[0,1]^2\rightarrow[0,1]^2$ whose partial derivatives are perpendicular with respect to the dot product defined above via $A_p:=(D_pS)^\top D_pS$.
I do not know enough about these differential equations to tell you anything about their solutions, or even whether any exist. But this would be my way of approaching it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
In mean value theorem over $[x_0,x]$, will $x^* \to+\infty$ always as $x \to +\infty$? The mean value theorem states that if $f:\Bbb R \to \Bbb R$ is continuous over $[x_0,x]$, and differentiable over $(x_0,x)$, then there exists $x^*\in(x_0,x)$ s.t. $f'({x^*}) = \frac{{f(x) - f({x_0})}}{{x - {x_0}}}$.
Now suppose $f$ is differentiable over $[x_0,+\infty)$. Do we always have $x^* \to+\infty$ as $x \to +\infty$ (or, in degenerate cases such as constant $f$, can we at least choose $x^*$ so that this holds)? If not, under what conditions can this happen?
|
This is false. Consider
$$f\left( x \right) = \begin{cases} 0 & x \geqslant 0 \\ e^{1/x} & x < 0 \end{cases}$$
It is easy to verify that $f(x)$ is differentiable over $\Bbb R$ with
$$f'\left( x \right) = \begin{cases} 0 & x \geqslant 0 \\ -x^{-2}e^{1/x} & x < 0 \end{cases}$$
We can choose any $x_0<0$ and notice that $\frac{f(x) - f(x_0)}{x - x_0} < 0$ for every $x>x_0$, which forces $x^*<0$ for all $x\in(x_0,+\infty)$; in fact we have $x^*\to 0$ as $x\to+\infty$.
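To see this concretely, here is a small numerical sketch (assuming scipy; the choice $x_0=-1$ and the bracketing endpoints are mine) that solves the mean value equation $f'(x^*)=\frac{f(x)-f(x_0)}{x-x_0}$ for growing $x$:

    from math import exp
    from scipy.optimize import brentq

    f  = lambda x: 0.0 if x >= 0 else exp(1.0 / x)
    fp = lambda t: 0.0 if t >= 0 else -exp(1.0 / t) / t**2

    x0 = -1.0
    for x in (1.0, 10.0, 100.0, 1000.0):
        q = (f(x) - f(x0)) / (x - x0)             # mean-value slope, negative
        xstar = brentq(lambda t: fp(t) - q, x0 + 1e-9, -1e-12)
        print(x, xstar)                           # x* creeps toward 0, never to +infinity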
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How does a function satisfy the Lipschitz condition? Currently taking a higher level differential equations class and we're studying existence and uniqueness of solutions. Multiple times during proofs, my professor uses Lipschitz to say $$|f(t,x)-f(t,y)|\le L|x-y|$$ This concept makes sense to me, as it only works if a function is Lipschitz. My question is, how can you tell if a function is Lipschitz, so that you can utilize this principle?
Different sites I have visited say that a function is Lipschitz if it satisfies the above inequality, which seems like circular logic to me.
One sufficient condition I know of is when the function (more precisely, its partial derivative in $x$) is continuous on a closed rectangle.
|
There is no general criterion besides the definition, but there are some known results. The easiest one is to check whether $f$ is differentiable with bounded derivative, in which case it is Lipschitz with constant $L=\sup\|df\|$; this is an immediate consequence of the mean value theorem (at least in convex regions). Often it is sufficient to know this locally.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does $\int _0^{\infty }\:\frac{1}{1+x^2\left(\sin x\right)^2}\, dx$ converge? I have been trying to prove the following integral:
$$\int _0^{\infty }\:\frac{1}{1+x^2\left(\sin x\right)^2}\ dx$$
diverges (please correct me if I am mistaken).
I have tried to use different comparison tests (as this is an integral of a positive function) with no success.
Any ideas?
|
The idea is to bound the integral below on intervals where $\displaystyle \frac{1}{1+x^2\left(\sin x\right)^2}$ has spikes, that is to say, it suffices to find some $\varepsilon_k$ such that $$\sum_{k\geq1}\int_{k\pi -\varepsilon_k}^{k\pi +\varepsilon_k}\frac{1}{1+x^2\left(\sin x\right)^2}dx$$ diverges.
On each of these intervals, since $\sin^2(x)$ is $\pi$-periodic, $1+x^2\sin^2(x)\leq 1+(k\pi + \varepsilon_k)^2\sin^2(\varepsilon_k)$, hence $$\sum_{k=1}^N\int_{k\pi -\varepsilon_k}^{k\pi +\varepsilon_k}\frac{1}{1+x^2\left(\sin x\right)^2}dx \geq \sum_{k=1}^N \frac{2 \varepsilon_k}{1+(k\pi + \varepsilon_k)^2\sin^2(\varepsilon_k)}$$
Some rough asymptotics suggest $$\frac{2 \varepsilon_k}{1+(k\pi + \varepsilon_k)^2\sin^2(\varepsilon_k)}\sim \frac{2 \varepsilon_k}{\pi^2k^2\varepsilon^2_k}=\frac{2}{\pi^2} \frac{1}{k^2\varepsilon_k} $$
Setting $\varepsilon_k=\frac 1k$ seems therefore like a sound idea, since we would get something like the harmonic series, which diverges.
Indeed, $$\begin{align}\frac{ \frac 2k}{1+(k\pi + \frac 1k)^2\sin^2(\frac 1k)}&=\frac 2k \frac{1}{1+(\pi^2k^2+2\pi +\frac{1}{k^2})(\frac{1}{k^2}+o(\frac{1}{k^2}))} \\
&=\frac 2k \frac{1}{1+\pi^2 + o(1)}\\
&\sim \frac{2}{1+\pi^2}\frac{1}k
\end{align}$$
With this choice of $\varepsilon_k$, $\displaystyle \sum_{k\geq 1} \frac{2 \varepsilon_k}{1+(k\pi + \varepsilon_k)^2\sin^2(\varepsilon_k)}$ diverges, which concludes the proof.
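Numerically the divergence is indeed very slow (logarithmic). A quick sketch (assuming scipy) that sums the integral over the spike windows $[k\pi-\frac1k,\,k\pi+\frac1k]$ used in the proof and compares with $\frac{2}{1+\pi^2}\log N$:

    from math import pi, sin, log
    from scipy.integrate import quad

    f = lambda x: 1.0 / (1.0 + x**2 * sin(x)**2)
    for N in (10, 100, 1000):
        spikes = sum(quad(f, k*pi - 1.0/k, k*pi + 1.0/k)[0] for k in range(1, N + 1))
        print(N, spikes, 2.0 / (1.0 + pi**2) * log(N))   # both grow like log N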
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
}
|
Curve sketching: Desmos shows an oblique, absolute value asymptote I sketched the function $f(x) = x^{6/7}-9x^{2/7}$ and got something like this.
Where POI means point of inflection.
However, when I graph it in Desmos, I get what looks like an oblique asymptote, that corresponds to an absolute value function.
The more I zoom out though, the more the slope seems to decrease.
This shows (or at least lends credence) to the fact that there are two POIs, right? In addition, is it true that there is no asymptote?
This is my solution.
|
In fact, there is no contradiction concerning your POIs, whose abscissas are $\pm 15^{7/4} \approx \pm 114.3$: it is impossible to spot them even on a large-scale plot, simply because the transition from positive to negative concavity is very faint.
See the graphics below, obtained with Geogebra: the first one shows the variations of the function $f$, the second one the function $f''$; the latter evidences an extremely small variation (of order $10^{-5}$) before and after the abscissa of the transition at the POI.
Remark: $f''(x)=-\dfrac{6}{49}\dfrac{x^{4/7}-15}{x^{12/7}}.$
As for the asymptote: there is none. Since $f(x)/x = x^{-1/7}-9x^{-5/7}\to 0$ as $x\to\infty$, the graph grows sublinearly, and the apparent oblique asymptote is just an artifact of the slow growth of $x^{6/7}$; that is why the slope seems to decrease as you zoom out.
$$\text{Above: Curve of f}.$$
$$\text{Above: Curve of} \ f'' \ \text{ in the vicinity of a POI}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2186902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many four-letter words w using an n-letter alphabet satisfy... This question has two parts. The first part was easy.
a) How many four-letter words w using an n-letter alphabet satisfy $w_i \neq w_{i+1}$ for $i=1,2,3$
Simple. Choose the first letter in $n$ ways. Because no letter can be the same as the one before it, we have $n-1$ choices for each of the last three letters, for a total of:
$$n \times (n-1) \times (n-1) \times (n-1) = n(n-1)^3$$
b) How many of the words in (a) also satisfy $w_4 \neq w_1$?
I'm struggling here. I thought of the possibilities, but I'm having trouble counting them.
Consider when $w_3 = w_1$ and when $w_3 \neq w_1$. Obviously, if $w_3 = w_1$, $w_4$ can be chosen in $n-1$ ways. Why? Take for example the standard alphabet. If we have the incomplete word:
A E A _
$w_4$ can be every letter except "A" to satisfy $w_4 \neq w_1$. Hence, $w_4$ has $n-1$ possibilities.
If $w_3 \neq w_1$ there are $n-2$ ways to choose $w_4$ by the same reasoning. Certainly, if we have the incomplete word:
A E C _
$w_4$ can be every letter except "A" and "C." $n-2$ ways.
But how can I count this? I thought it would be:
$$n \times (n-1)^2 \times (n-2)$$
But that doesn't take into account the special cases I considered above, does it?
|
Your answer to the first question is correct.
You have also correctly identified the cases in the second question.
Case 1: The third letter is the same as the first letter.
We have $n$ ways to select the first letter. Since the second letter must be different from the first, we can select it in $n - 1$ ways. We have only one choice for the third letter since it must be the same as the first letter. Since the last letter is different from the third letter, it must also be different from the first letter. Thus, there are $n - 1$ choices for the fourth letter. Hence, there are
$$n \cdot (n - 1) \cdot 1 \cdot (n - 1) = n(n - 1)^2$$
such words.
Case 2: The third letter is different from the first letter.
We have $n$ ways to select the first letter. Since the second letter must be different from the first, we can select it in $n - 1$ ways. Since the third letter must be different from both the first letter and the second letter, the third letter can be selected in $n - 2$ ways. Since the fourth letter must be different from both the third letter and the first letter, we can select the fourth letter in $n - 2$ ways. Hence, there are
$$n(n - 1)(n - 2)(n - 2) = n(n - 1)(n - 2)^2$$
such words.
Total: Since the cases are mutually exclusive, there are
$$n(n - 1)^2 + n(n - 1)(n - 2)^2$$
four-letter words in which each letter differs from the preceding letter and the last letter is different from the first letter.
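The case analysis is easy to confirm by brute force for small alphabets (a Python sketch; the range of $n$ tested is arbitrary):

    from itertools import product

    def brute(n):
        return sum(1 for w in product(range(n), repeat=4)
                   if w[0] != w[1] and w[1] != w[2] and w[2] != w[3] and w[3] != w[0])

    for n in range(2, 8):
        formula = n*(n - 1)**2 + n*(n - 1)*(n - 2)**2
        assert brute(n) == formula
        print(n, formula)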
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find the limit of a function as $ x$ approaches $0$ How can I find the limit of $\dfrac{\cos(3x) - 1 }{x^2}$ as $x\to 0$?
If someone could please break down the steps, for clear understanding. I'm studying for the GRE. Thanks in advance !!
|
Since you are studying for the GRE, which if I recall correctly is a multiple choice exam, knowing L'Hospital's rule is a good thing. Here's the simple version:
When you plug in your value that $x$ is approaching, if you end up with $0/0$ or $\infty/\infty$, then you can take the derivatives of the top and bottom, and then take the limit again. Using this rule (which we have to apply twice here),
$$\lim\limits_{x\to 0}\dfrac{\cos(3x)-1}{x^2}=\lim\limits_{x\to0}\dfrac{-3\sin(3x)}{2x}=\lim\limits_{x\to0}\dfrac{-9\cos(3x)}{2},$$
and then plugging in $0$ gives our answer, which is $-9/2$. Does this all make decent sense?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Let $f : \Bbb R \rightarrow \Bbb R$ be a continuous function such that, for some $p>0$, $f(x+p) = f(x)$ for all $x \in \Bbb R$. Show that $f$ has an absolute max and min Problem: Let $f : \Bbb R \rightarrow \Bbb R$ be a continuous function such that for some real number $p>0$, $f(x+p) = f(x)$ for all $x \in \Bbb R$. Show that $f$ has an absolute max and min.
Thoughts:
By Rolle's theorem, I know that between $f(x+p)$ and $f(x)$ there has to be a local extremum if $f$ is differentiable on this open interval; but I am outright confused by the precise statement of the question, and in fact I have included an image which may be a counter-example if the question has not been stated properly (sorry for the crudeness of the image), assuming the function continues in this manner infinitely.
Edit: As per a comment, since $f$ is not shown to be differentiable on this interval, then Rolle's theorem does not apply.
Also see my answer for a response to my initial confusion.
|
Theorem: Any continuous function on a closed bounded interval attains a maximum and a minimum (the extreme value theorem).
Apply this to the interval $[0,p]$. Since for any $x \in \Bbb R$ we have some $x_0 \in [0,p]$ such that $f(x)=f(x_0)$, the maximum on that sub-interval is in fact a global max (and likewise for the minimum).
Note: No differentiability assumption was made.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Decidability of regularity of context-free grammar I've searched for a long time, but cannot find this. Maybe it's an open problem, but it seems not that hard.
Let's say I have a context-free grammar, say in Chomsky normal form for definiteness. Is there an algorithm to check whether it generates a regular language?
Of course, some simple cases can be easily decided, but I want to know whether there is a general procedure. I also know that for general grammars it is undecidable. If it is an open problem, so be it, but I won't think it is just because I haven't been able to find the answer. :-)
|
The following theorems are proved in Jeffrey Shallit's A Second Course in Formal Languages and Automata Theory.
Theorem 6.6.6. It is undecidable whether, given a CFG $G$, $L(G)$ is regular.
Theorem 6.6.7. There exists no algorithm that, given a CFG $G$ such that $L(G)$ is regular, outputs a DFA that accepts $L(G)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
solve this diophantine equation: ${\rm lcm}[x,y]+{\rm lcm}[y,z]+{\rm lcm}[z,x]=3(x+y+z)$ I am almost certain it is a duplicate question, but I am looking for a reference regarding how to solve the Diophantine equation. Find the positive integers $x,y,z$ such that
$${\rm lcm}[x,y]+{\rm lcm}[y,z]+{\rm lcm}[z,x]=3(x+y+z)$$
|
Now that's a nice little problem. I didn't expect it to be solvable completely, but... wait a minute.
Obviously, a multiple of any solution is also a solution, so we may just as well divide it by $\gcd(x,y,z)$, if any, and look for the primitive triples. Now, being coprime as a triple does not mean being pairwise coprime, so we may assume $\gcd(x,y)=d_1, \gcd(x,z)=d_2,$ and $\gcd(y,z)=d_3$. Then $x=a\cdot d_1d_2, y=b\cdot d_1d_3$, and $z=c\cdot d_2d_3$, where $a,b,c,d_1,d_2,\text{ and }d_3$ are all pairwise coprime. (Upd. Not quite all of them, as explained in a comment by Litho, but $d_3$ is coprime to each of $d_1, d_2$, and $a$, which suffices for our purpose.) With that in mind, the equation becomes
$$(ab+ac+bc)\cdot d_1d_2d_3=3a\cdot d_1d_2+3b\cdot d_1d_3+3c\cdot d_2d_3$$
Now, the LHS is apparently divisible by $d_3$, and so are two of the three summands in the RHS. Then $3ad_1d_2$ must be also divisible by $d_3$, which is only possible if $d_3$ is either $1$ or $3$. The same reasoning applies to $d_1$ and $d_2$. Being pairwise coprime, they can't equal 3 all at once, or any two of them. So we have two possible cases:
*
*All gcd's equal 1;
*One gcd equals 3 and the rest are 1.
In the first case the equation reduces to
$$ab+ac+bc=3(a+b+c)$$
Considering the three sub-cases ($c=1,c=2,\text{ and }c\geqslant3$), we arrive at the solutions (1,3,9), (2,2,8), and (3,3,3), which are no good for us, since they violate the requirement of coprimality.
In the second case it is
$$3ab+3ac+3bc=3(3a+3b+c)$$
which can be simplified to
$$ab+ac+bc=3a+3b+c$$
Analyzing it in a similar manner, we discover some solutions which translate nicely to those of the original equation.
Finally, the primitive triples are: $$(x,y,z)=(3,3,5)\text{ or }(1,9,21)$$
The rest of the solutions are permutations of these two and multiples thereof.
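A brute-force search (a Python sketch; the bound $60$ is an arbitrary cutoff) confirms that, up to order, these are the only primitive triples in that range:

    from math import gcd

    def lcm(a, b):
        return a // gcd(a, b) * b

    sols = [(x, y, z)
            for x in range(1, 61) for y in range(x, 61) for z in range(y, 61)
            if lcm(x, y) + lcm(y, z) + lcm(z, x) == 3 * (x + y + z)
            and gcd(gcd(x, y), z) == 1]
    print(sols)   # -> [(1, 9, 21), (3, 3, 5)]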
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Discrete math (divisors and primes) Is the following statement true or false? Explain.
There are integers $x,y,$ and $z$ such that $14$ divides $2^x × 3^y × 5^z$.
My guess is false, but I don't know how to explain it. Does it have anything to do with the fundamental theorem of arithmetic?
|
Here's an intuitive explanation.
No: $14$ does not divide $2^x \times 3^y \times 5^z$ for any integers $x,y,z$, because
$14=2 \times 7$.
Since no factor of $7$ appears in the product (and, by the fundamental theorem of arithmetic, its factorization into the primes $2$, $3$, and $5$ is unique), $14$ can never divide it, so the statement is false.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Minimize $A=\frac{1+2^{x+y}}{1+4^x}+\frac{1+2^{x+y}}{1+4^y}$ For $a,b>0$. Minimize $$A=\frac{1+2^{x+y}}{1+4^x}+\frac{1+2^{x+y}}{1+4^y}$$
I think we can let $2^x=a$, $2^y=b$.
Hence $A=\frac{1+ab}{1+a^2}+\frac{1+ab}{1+b^2}$
We need to prove $A\geq 2$ (suggested by Wolfram Alpha), but $x,y$ are arbitrary real numbers and I can't find how to prove it is $\geq2$.
|
We have to prove that $$(1+ab)\left(\frac{1}{1+a^2}+\frac{1}{1+b^2}\right)\geq 2,$$ and this is equivalent to $${\frac { \left( ab-1 \right) \left( a-b \right) ^{2}}{ \left( {a}^{2}+1 \right) \left( {b}^{2}+1 \right) }}\geq 0,$$ which holds precisely when $ab\geq 1$ or $a=b$. Since $ab=2^{x+y}$, the condition $ab\ge 1$ is exactly $x+y\geq 0$; presumably that hypothesis is part of the original problem, since without it $A$ can dip below $2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Maximum number of roots of $p(x) = 0$
Let $p(x)=x^6+ax^5+bx^4+x^3+bx^2+ax+1.$ Given that $x=1$ is a root of $p$ but $x=-1$ is not, find the maximum number of roots of $p$.
My attempt:
$x=0$ is not a root of $p(x)=0$, so dividing by $x^3$ gives
$$\left(x^3+\frac{1}{x^3}\right)+a\left(x^2+\frac{1}{x^2}\right)+b\left(x+\frac{1}{x}\right)+1=0$$
So
$$\displaystyle \left(x+\frac{1}{x}\right)^3-3\left(x+\frac{1}{x}\right)+a\left(x+\frac{1}{x}\right)^2-2+b\left(x+\frac{1}{x}\right)+1=0$$
So
$$ t^3+at^2+(b-3)t+1=0,$$
where
$$\left(x+\frac{1}{x}\right) = t \quad\text{ and }\quad |t|\geq 2.$$
Given $x=1$ is a root of $p$, $1+a+b+1+a+b+1=0$, so
$$\displaystyle a+b = -\frac{3}{2}$$
so
$$t^3+at^2+\left(-\frac{9}{2}-a\right)t+(1-2a) = 0.$$
$t=2$ is a root, so
$$(t-2)\bigg[t^2+(a+2)t+\left(\frac{2a-1}{2}\right)\bigg]=0,$$
So discriminant of above quadratic equation is $\displaystyle D = a^2+6>0$. So above equation has $2$ distinct real roots.
But I did not know how to use the fact that $x=-1$ is not a root. Could someone help me to solve it? Thanks.
|
This answer assumes that you want to find the maximum number of real roots of $p(x)$.
You already have
$$(t-2)\left(t^2+(a+2)t+a-\frac 12\right)=0$$
where $t=x+\frac 1x$ (which is correct though you have a typo in the part $a(x^2+\frac{1}{x^2})=a(x+\frac 1x)^2-2\color{red}{a}$).
Let $t_{\pm}$ where $t_-\lt t_+$ be the roots of $t^2+(a+2)t+a-\frac 12$ whose discriminant is $a^2+6\gt 0$.
Then, we have that
$$\text{$p(x)$ has six real roots if and only if $|t_-|\ge 2$ and $|t_+|\ge 2$}$$
Here, we have
$$\begin{align}|t_-|\ge 2&\iff \left|\frac{-a-2-\sqrt{a^2+6}}{2}\right|\ge 2\\\\&\iff \left|-a-2-\sqrt{a^2+6}\right|^2\ge 4^2\\\\&\iff (-a-2)^2+2(a+2)\sqrt{a^2+6}+a^2+6\ge 16\\\\&\iff (a+2)\sqrt{a^2+6}\ge -(a+3)(a-1)\tag2\end{align}$$
*
*For $a\gt 1$, $(2)$ holds since the LHS is non-negative and the RHS is negative.
*For $-3\le a\lt -2$, $(2)$ does not hold since the LHS is negative and the RHS is non-negative.
*For $-2\le a\le 1$, since both sides are non-negative,$$(2)\iff (a+2)^2(a^2+6)\ge (-(a+3)(a-1))^2\iff \left(a+\frac 12\right)\left(a+\frac{5}{2}\right)\ge 0$$
*For $a\lt -3$, since both sides are negative,$$(2)\iff (-a-2)^2(a^2+6)\le (a+3)^2(a-1)^2\iff \left(a+\frac 12\right)\left(a+\frac{5}{2}\right)\le 0$$
So, we get
$$|t_-|\ge 2\iff (2)\iff a\ge -\frac 12\tag3$$
Similarly, we get
$$|t_+|\ge 2\iff a\le -\frac{5}{2}\tag4$$
Since there are no $a$ satisfying $(3)$ and $(4)$, we have that the number of the real roots of $p(x)$ is equal to or less than $4$. (Note that the number of the real roots of $p(x)$ is even since the number of the real solutions of $t=x+\frac 1x$ is even counted with multiplicity.)
By the way, for $a=-\frac 52$, we have
$$p(x)=(x-1)^4\left(\left(x+\frac 34\right)^2+\frac{7}{16}\right)$$
of which $x=-1$ is not a root.
The number of real roots of $p(x)$ for $a=-\frac 52$ is $4$.
Therefore, the maximum number of real roots of $p(x)$ is $\color{red}{4}$.
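A quick numerical confirmation of the extremal case $a=-\frac52$, $b=1$ (a sketch assuming numpy; the tolerance is deliberately loose because a quadruple root is numerically smeared):

    import numpy as np

    a, b = -2.5, 1.0
    roots = np.roots([1, a, b, 1, b, a, 1])   # coefficients of p(x)
    print(roots)
    # four (nearly) real roots clustered at x = 1, plus one complex-conjugate pair
    print(sum(abs(r.imag) < 1e-3 for r in roots))   # -> 4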
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Are these functions linearly independent? Let $a,b,c,\ldots$ be a finite set of distinct positive real numbers.
Are the functions
$(a+x)^{-r}$, where $r$ is a positive real number, linearly independent functions on $[0,\infty)$? Are there any references for this? Would the answer depend on $r$?
|
Let $0 < a_1 < \dots < a_n$ and assume that $\left( (a_i + x)^{-r} \right)_{i=1}^n$ are linearly dependent over $[0,\infty)$, so we can find $b_1,\dots,b_n \in \mathbb{R}$, not all zero, with
$$ \sum_{i=1}^n \frac{b_i}{(a_i + x)^r} = \sum_{i=1}^n \frac{b_i}{e^{r \ln(a_i + x)}}= 0 $$
for all $x \in [0,\infty)$. Note that the function $\sum_{i=1}^n \frac{b_i}{(a_i + x)^r}$ is in fact defined on the interval $(-a_1,\infty)$. In fact, it can be extended to a complex analytic function $z \mapsto \sum_{i=1}^n \frac{b_i}{e^{r \ln(a_i + z)}}$ on the set $\{ z \in \mathbb{C} \, | \, \Re(z) > -a_1 \}$ and by assumption, it vanishes on $[0,\infty)$ and hence it must actually vanish on $\{ z \in \mathbb{C} \, | \, \Re(z) > -a_1 \}$. In particular, we must have
$$ 0 = \lim_{x \to -a_1} \sum_{i=1}^n \frac{b_i}{(a_i + x)^r} = \sum_{i=2}^n \frac{b_i}{(a_i - a_1)^r} + \lim_{x \to -a_1} \frac{b_1}{(x + a_1)^r}$$
which shows that $b_1 = 0$. But then $\sum_{i=1}^n \frac{b_i}{(a_i + x)^r} = \sum_{i=2}^n \frac{b_i}{(a_i + x)^r}$ is actually well-defined on $(-a_2,\infty)$, and repeating the argument we see that $b_2 = 0$, etc. Hence all the $b_i$ vanish, contradicting the assumption that they are not all zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2187994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that a trigonometric expression doesn't depend on the value of $x$: $\sqrt{\sin^4x+\cos2x} + \sqrt{\cos^4x-\cos2x}$ I have to prove that the value of $x$ doesn't matter. However, I can't get things to simplify.
|
Hint. One has
$$
\sin^4x+\cos2x=\sin^4x+2 \cos^2x-1=\sin^4x-2 \sin^2x+1
$$ and
$$
\cos^4x-\cos2x=\cos^4x-2 \cos^2x+1
$$ Can you finish it?
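(For the record, here is the finishing step the hint points at: both radicands are perfect squares, so
$$\sqrt{\sin^4x+\cos 2x}+\sqrt{\cos^4x-\cos 2x}=\sqrt{(1-\sin^2x)^2}+\sqrt{(1-\cos^2x)^2}=\cos^2x+\sin^2x=1$$
for every $x$.)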
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Integration by Substitution I have the following problem: $\int (x+2)(2x-3)^6dx$
What I did was allow $u = 2x-3$
$dx= \frac{1}{2}du$
Since I am integrating w.r.t. $u$, I decided to let $x=\frac{u+3}{2}$
my new equation is $\int ( \frac{u+3}{2}+2)(u^6)\frac{1}{2}du$
which I then simplify to $\int ( \frac{u+7}{4})(u^6)du$
I think I did something wrong, because my following steps aren't giving the correct answer.
|
Simplifying your integrand (your setup so far is correct), we get $$\frac{1}{4}\int (u+7)u^6\,du;$$ expanding further, we get the integral $$\frac{1}{4}\int\left( u^7+7u^6\right)du$$
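Carrying the integration through (a routine completion, with $u=2x-3$ as above):
$$\frac{1}{4}\int\left(u^7+7u^6\right)du=\frac{u^8}{32}+\frac{u^7}{4}+C=\frac{u^7(u+8)}{32}+C=\frac{(2x-3)^7(2x+5)}{32}+C,$$
which you can confirm by differentiating.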
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proving $\int_0^\infty\frac{\sin(x)}x\ dx=\frac{\pi}2$. Why is this step correct? I came across a different approach on the proof: $$\int_0^\infty \frac{\sin(x)}x\ dx=\frac{\pi}2$$
First, recall the identity: $$\sin(A)-\sin(B)=2\sin\left(\frac{A}2-\frac{B}2\right)\cos\left(\frac{A}2+\frac{B}2\right)$$
Applying the identity with $$A=kx+\frac{x}2\quad\text{and}\quad B=kx-\frac{x}2$$
We obtain:$$\sin\left(kx+\frac{x}2\right)-\sin\left(kx-\frac{x}2\right)=2\sin\left(\frac{x}2\right)\cos\left(kx\right)\Rightarrow \\\cos\left(kx\right)=\frac{\sin\left(kx+\frac{x}2\right)-\sin\left(kx-\frac{x}2\right)}{2\sin\left(\frac{x}2\right)}$$
Using the previous result, we can easily show that:
$$\frac12+\cos(x)+\cos(2x)+\cdots+\cos(\lambda x)=\frac{\sin\left(\lambda x+\frac{x}2\right)}{2\sin\left(\frac{x}2\right)} \quad \text{where $\lambda \in \mathbb{N}$}$$
Integrating the last expression: $$\int_0^\pi\frac{\sin\left(\lambda x+\frac{x}2\right)}{\sin\left(\frac{x}2\right)}\ dx=\int_0^\pi\left(1+2\cos(x)+2\cos(2x)+\cdots+2\cos(\lambda x)\right)\ dx=\pi$$
We can also prove (since $f$ extends continuously to $[0,\pi]$), using the Riemann-Lebesgue Lemma, that: $$\lim_{\lambda\to\infty}\int_0^\pi\underbrace{\left(\frac2t-\frac1{\sin\left(\frac{t}2\right)}\right)}_{f(t)}\sin\left(\lambda t+\frac{t}2\right)dt=\lim_{\lambda\to\infty}\int_0^\pi\left(\frac{2\sin\left(\lambda t+\frac{t}2\right)}t-\frac{\sin\left(\lambda t+\frac{t}2\right)}{\sin\left(\frac{t}2\right)}\right)dt=0$$
Therefore: $$\left(1\right)\ \lim_{\lambda\to\infty}\int_0^\pi\frac{2\sin\left(\lambda t+\frac{t}2\right)}t\,dt=\lim_{\lambda\to\infty}\int_0^\pi\frac{\sin\left(\lambda t+\frac{t}2\right)}{\sin\left(\frac{t}2\right)}\,dt=\pi$$
Returning to the initial problem:
Let: $$x=\lambda t+\frac{t}2$$
Thus:
$$\int_0^\infty \frac{\sin(x)}x\ dx =\frac12\lim_{\lambda\to\infty}\int_0^{\color{teal}{\pi}}\frac{2\sin\left(\lambda t+\frac{t}2\right)}{t}\ dt$$
Using the result obtained from $(1)$:$$\int_0^\infty \frac{\sin(x)}x\ dx=\boxed{\frac{\pi}2}$$
My question concerns the $\color{teal}{\pi}$ highlighted above: why is it correct to have $\pi$ instead of $\infty$ when changing the limits of integration?
|
Let $l_n=n/\pi-1/2$ for $n/\pi>1/2$. Then $$\int_0^n t^{-1}\sin t\;dt =\int_0^{\pi} t^{-1}\sin (l_nt+t/2)\;dt.$$ Now let $n\to \infty.$
It would have been clearer if this had been said explicitly.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
isosceles triangle height and base given only angles and area If we know only that:
$A=22$
$v_1=30^\circ$ and $v_2=75^\circ$ and $v_3=75^\circ$
How do I get height and base without knowing at least one of the sides?
Edit: the $v$'s are the angles and $A$ is the area.
|
We have that $A = 22$.
Let $M$ be the midpoint of $AC$. Then $A = 2\cdot \frac{1}{2}\cdot AM \cdot MB = AM\cdot MB$
You also have the relationship $AM = AB\cos(75^{\circ})$ and $AC = 2AM$.
Let $AB = a \Rightarrow BC = a$
Let $AM = b$
Then we have the area of the triangle: $22 = \frac{1}{2}\times\text{base}\times\text{height} = b\sqrt{a^{2}-b^{2}}$, since the base is $AC=2b$ and the height is $MB=\sqrt{a^{2}-b^{2}}$ by Pythagoras.
We also have that $a\cos 75^{\circ} = b$
Subbing this in, we get $a^{2}\cos75^{\circ}\sin75^{\circ}=22$
You can use this to find $a$, then the rest all follows.
Note:
I get the base of the triangle = $2(\sqrt{33}-\sqrt{11})$
the height = $\sqrt{33} + \sqrt{11}$
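As a quick sanity check of these values:
$$\frac12\times\underbrace{2(\sqrt{33}-\sqrt{11})}_{\text{base}}\times\underbrace{(\sqrt{33}+\sqrt{11})}_{\text{height}}=33-11=22,$$
which is the required area.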
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is the set of diagonal matrices with positive entries open in the set of positive definite symmetric matrices? I suspect that it's not, but would like to know a proof for why the set of diagonal matrices with positive entries is or isn't open in the set of positive definite symmetric matrices.
I am familiar with what it means to be open, as in: for any point in the subset there exists a small ball around that point contained in the subset; but I don't know how this translates to sets of matrices.
|
Let $A$ be diagonal with positive diagonal entries, choose as $B$ any positive definite matrix that is not diagonal, and consider $A_n := A + \frac B n$. Then each of the $A_n$ is positive definite, none of them is diagonal, and $A_n \to A$; so every ball around $A$ contains points outside your set. Hence the set is definitely not open (except in the case where the space is one-dimensional).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Irreducibility of polynomials in ring In the textbook "A Course in Abstract Algebra" by Khanna & Bhambri, it is given that
$f(x) = 2x^2 + 2$ is an irreducible polynomial over $\mathbb{Z}$,
because they use the definition: "Let $R$ be an integral domain with unity. Then a polynomial $f(x)$ in $R[x]$ of positive degree (i.e. $\deg f \geq 1$) is said to be an irreducible polynomial over $R$ if it cannot be expressed as a product of two polynomials of positive degree. In other words, whenever $f(x) = g(x)\cdot h(x)$,
then $\deg g = 0$ or $\deg h = 0$."
Here $f(x) = 2x^2 + 2 = 2(x^2 + 1)$,
and in this factorization $\deg g = \deg(2) = 0$.
So $f(x)$ is an irreducible polynomial over $\mathbb{Z}$ by the above definition.
But the book "Contemporary Abstract Algebra" by Joseph A. Gallian uses the definition:
"Let $D$ be an integral domain. A polynomial $f(x)$ in $D[x]$ that is neither the zero polynomial nor a unit in $D[x]$ is said to be irreducible over $D$ if, whenever
$f(x) = g(x) \cdot h(x)$ with $g(x)$ and $h(x)$ from $D[x]$, then $g(x)$ or $h(x)$ is a unit in $D[x]$."
Now here $f(x) = 2x^2 + 2 = 2(x^2 + 1)$,
and clearly neither $2$ nor $x^2 + 1$ is a unit in $\mathbb{Z}[x]$, so $f(x)$ is reducible by the definition in Gallian's book.
So one book says it is irreducible over $\mathbb{Z}$, but the other says it is reducible over $\mathbb{Z}$.
Please suggest which one I should prefer.
|
To dramatize the flaw in the definition given in the text by "Khanna & Bhambri" (K &B), consider the polynomials
$$x,\;2x,\;3x,\;6x$$
By K&B's definition, the above polynomials are all irreducible in $\mathbb{Z}[x]$. Moreover, since none of them is a unit factor times one of the others, they would be regarded as distinct irreducibles (i.e., none is an "associate" of any of the others).
But then the polynomial $6x^2$ factors in $\mathbb{Z}[x]$ in two different ways
$$6x^2 = (2x)(3x)\qquad\text{and}\qquad 6x^2 = (x)(6x)$$
as a product of irreducible elements, thus breaking "unique factorization".
As I mentioned in my prior comment, it appears that the K&B text (perhaps accidentally) conflated irreducibility in $\mathbb{Z}[x]$ with irreducibility in $\mathbb{Q}[x]$.
I would regard it as an error, and use the standard definition instead (e.g., Gallian's definition), but check to make sure your teacher agrees.
In any case, good catch!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Does $M^{*}$ contain an injective function?
Let $R$ be a PID and $M$ a finitely generated $R$-module. Let $F$ be the field of fractions of $R$ and $P = F/R.$ Let $M^{*}$ = $\mathrm{Hom}_R(M, P).$ Suppose $M$ is torsion. Is there an injective function in $M^*?$
What I've done so far: let $M =\left <e_1, ..., e_n\right>$ and let $\phi(e_i) = 1/r_{e_i} + R$, where $r_{e_i}$ is a nonzero generator of the annihilator of $e_i$. I don't know if this works though... any hints?
|
The answer is:
$M^*$ contains an injective morphism if and only if $M$ is a cyclic
torsion $R$-module, i.e. $M \cong R/(a)$ for some $0 \neq a \in R$.
The backward direction is trivial, because $R/(a) \cong \langle \frac{1}{a} \rangle \subset F/R$.
The forward direction goes as follows: It is well known that any finitely generated submodule of $F$ is cyclic, hence the same is true for $P$. In particular an injection $M \to P$ with $M$ finitely generated implies that $M$ is cyclic.
The main ingredient, that
any finitely generated submodule of $F$ is cyclic
is proven here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2188927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Graph Theory proving planarity I have these graphs:
I used Euler's inequality and the four color theorem, which gave inconclusive results. Using Kuratowski's theorem, I was unable to find a subgraph homeomorphic to $K_{3,3}$ or $K_5$. So does this prove that the graphs are planar? How do we really know that there does not exist a $K_{3,3}$ or $K_5$? I did use planar embedding and was able to put it in a form where the edges do not overlap, but the method does not seem rigorous enough to prove planarity.
|
Graph $G$ is not planar. The subgraph you get by deleting the edges $ab,$ $ad,$ $bd,$ $cg,$ $eg$ is homeomorphic to $K_{3,3};$ the vertices $a,b,d$ are connected to the vertices $e,f,g$ by the internally disjoint paths $ace,$ $af,$ $ag,$ $be,$ $bf,$ $bg,$ $de,$ $df,$ $dg.$
Graph $H$ is planar. Plot $a$ at $(0,0),$ $b$ at $(2,0),$ $c$ at $(1,1),$ $d$ at $(1,3),$ $e$ at $(1,4),$ $f$ at $(2,3),$ $g$ at $(1,2),$ and draw all the edges as straight line segments.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
What is a Universal Construction in Category Theory? From pg. 59 of Categories for the Working Mathematician:
Show that the construction of the polynomial ring $K[x]$ in an indeterminate $x$ over a commutative ring $K$ is a universal construction.
Question: What does the author mean by this bolded term?
For context: up to this point in the book, the author has already defined the notions of universal arrow, universal element, and universal property.
Is the author therefore just using the term universal construction as a synonym for the term universal property (or universal X, where X could be either element or arrow)?
|
A universal construction is simply a definition of an object as "the unique-up-to-isomorphism object satisfying a certain universal property". The name "construction" is a little misleading, because it's a definition: one still needs to show that the object actually exists, and that usually has to be done non-category-theoretically.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|