Inverse rotation transformations I'm taking the 2-degree gimbal system and positioning its alignment point in an arbitrary position (denoted by the axis angles phi for the first degree and theta for the second). How can I reverse the transformations I applied (first a rotation of the first axis by phi, then a rotation of the second axis by theta) and recover phi and theta from the resulting alignment point's position on the unit sphere?
|
If your alignment point happens to lie on either of the axes, you cannot undo the operations, because rotation about that axis will leave the alignment point in the same place, so a single "final position" leads to multiple possible input-rotations.
Assuming that the axes start out perpendicular, with the first aligned with the $y$ axis and the second aligned with $x$, and the alignment point on the positive $z$ axis, it's not too difficult. The tricky part is deciding what "first" and "second" mean here.
I mean that if you rotate in $y$, then the $x$-rotation axis remains fixed, but if you rotate in $x$, then the $y$-rotation axis will be moved.
In that case, a rotation by $\phi$ about $y$ moves $(0,0, 1)$ to $(\sin \phi, 0, \cos \phi)$; a further rotation about $x$ by $\theta$ moves the resulting point to
$$
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos \theta & -\sin \theta \\
0 & \sin \theta & \cos \theta
\end{bmatrix}
\begin{bmatrix}
\sin \phi \\
0 \\
\cos \phi
\end{bmatrix} =
\begin{bmatrix}
\sin \phi \\
-\sin \theta \cos \phi \\
\cos \theta \cos \phi
\end{bmatrix}
$$
Given this location as an $xyz$-triple, how can you recover $\phi$ and $\theta$? Well,
let's assume that $\phi$ is restricted to $-90^\circ < \phi < 90^\circ$. In that case, $\cos \phi > 0$, and
$$
\theta = {\mathrm{atan2}}(-y, z)
$$
Now you can compute
$$
u = -y / \sin \theta
$$
or
$$
u = z/ \cos \theta
$$
using whichever formula has a nonzero denominator.
Finally,
$$
\phi = \mathrm{atan2}(x, u).
$$
If you try to actually implement this, there's a good chance that one or both of your angle-directions will be opposite to mine, or that in your assembly the $y$-rotation comes first, etc. So you'll have to work through the analog of this using your conditions. But the main idea is that "atan2" is the solution.
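For concreteness, here is a minimal Python sketch of this recovery under exactly the conventions above (the function name and the sample angles are my own):

```python
import math

def recover_angles(x, y, z):
    # assumes the axis conventions above and -90 deg < phi < 90 deg
    theta = math.atan2(-y, z)
    # u = cos(phi): use whichever formula has the larger (nonzero) denominator
    if abs(math.sin(theta)) > abs(math.cos(theta)):
        u = -y / math.sin(theta)
    else:
        u = z / math.cos(theta)
    phi = math.atan2(x, u)
    return phi, theta

# round-trip check: rotate (0, 0, 1) by phi about y, then by theta about x
phi, theta = 0.4, -1.1
point = (math.sin(phi),
         -math.sin(theta) * math.cos(phi),
         math.cos(theta) * math.cos(phi))
print(recover_angles(*point))  # ~(0.4, -1.1)
```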
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Select a subsequence to obtain a convergent series. Does there exist a strictly increasing sequence $\{a_k\}_{k\in\mathbb N}\subset\mathbb N$,
such that
$$
\sum_{k=1}^{\infty}\frac{1}{(\log a_k)^{1+\delta}}\lt \infty,
$$
where $\delta>0$ is given and $$\lim_{k\to \infty }\frac{a_{k+1}}{a_k}=1.$$
|
Answer. Try
$$
a_k=\left\lfloor 2^{k^{\color{red}{1/(1+\delta/2)}}}\right\rfloor.
$$
Then $\dfrac{a_{k+1}}{a_k}\to 1$, and
$$
\frac{1}{(\log a_k)^{1+\delta}}\approx\frac{c}{k^{(1+\delta)/(1+\delta/2)}},
$$
where the exponent $\frac{1+\delta}{1+\delta/2}$ is greater than $1$, so the series converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If there is a mapping of $B$ onto $A$, then $2^{|A|} \leq 2^{|B|}$ If there is a mapping of $B$ onto $A$, then $2^{|A|} \leq 2^{|B|}$. [Hint: Given $g$ mapping $B$ onto $A$, let $f(X)=g^{-1}(X)$ for all $X \subseteq A$]
I follow the hint and obtain the function $f$. If $f$ is injective, then the statement is proven.
Question: Why does $g^{-1}$ exist in the first place? How do we know $g$ is injective? The hint given seems a bit weird.
Can anyone explain this to me?
|
If there exists a function $f:B \longrightarrow A$ such that $f$ is onto, then $\lvert B \rvert \geq \lvert A \rvert$, and this means that there exists an injective function $g: A \longrightarrow B$. Now, as we want to see that $2^{\lvert A \rvert} \leq 2^{\lvert B \rvert}$, it's enough to define an injective function $h: \mathcal P \left( {A} \right) \longrightarrow \mathcal P \left( {B} \right)$: for $C \subseteq A$, set $h(C) := \{\, g(x) \mid x \in C \,\}$. Now it's not too hard to prove that this function is well defined and injective.
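Here is a small Python illustration of the construction on toy finite sets (the sets and the particular injective $g$ are my own choices):

```python
from itertools import combinations

A = {1, 2, 3}
B = {'a', 'b', 'c', 'd'}
g = {1: 'a', 2: 'b', 3: 'c'}  # an injective map A -> B

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# h(C) = { g(x) : x in C }, the induced map P(A) -> P(B)
h = {C: frozenset(g[x] for x in C) for C in powerset(A)}

# distinct subsets of A get distinct images, so h is injective
assert len(set(h.values())) == len(h)
print("h is injective on all", len(h), "subsets of A")
```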
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Diagonalizing $xyz$ The quadratic form $g(x,y) = xy$ can be diagonalized
by the change of variables $x = (u + v)$ and $y = (u - v)$ .
However, it seems unlikely that the cubic form $f(x,y,z) = xyz$,
can be diagonalized by a linear change of variables.
Is there a short computational or theoretical proof of this?
Thanks.
|
If I understand correctly your question, you are asking if it is possible to write $xyz = \ell_1^3 + \ell_2^3 + \ell_3^3$ for some linear forms $\ell_1=\ell_1(x,y,z)$, $\ell_2,\ell_3$.
We can prove that $xyz \neq \ell_1^3 + \ell_2^3 + \ell_3^3$ in a few different ways. Here is a short theoretical proof that uses a little bit of projective geometry.
Lemma: If $xyz = \ell_1^3 + \ell_2^3 + \ell_3^3$, then $\{\ell_1,\ell_2,\ell_3\}$ are linearly independent.
Proof: The second derivatives of $xyz$ include $x$, $y$, and $z$. If $\ell_1,\ell_2,\ell_3$ span a space of dimension less than $3$, then they depend on only $2$ (or $1$) variables, and the second derivatives of each $\ell_i^3$ still only depend on $2$ (or $1$) variables, so they span less than $3$ dimensions. $\square$
Claim: $xyz \neq \ell_1^3 + \ell_2^3 + \ell_3^3$.
Proof: For convenience say $u=\ell_1$, $v=\ell_2$, $w=\ell_3$. The projective curve defined by $xyz=0$ is singular, in fact reducible. However the curve defined by $u^3+v^3+w^3=0$ is nonsingular. There is no linear change of coordinates that can carry a singular curve to a nonsingular one. $\square$
However, if you are not interested in projective geometry, it is still possible to deal with this via other approaches.
There is some literature on this subject under the name "Waring rank", Waring decompositions, symmetric tensors, and symmetric tensor rank. Some general introductions include Landsberg, Tensors: Geometry and Applications or Carlini, Grieve, Oeding, Four Lectures on Secant Varieties. That book and paper have basic explanations and references to further reading. I hope it helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Applying the Law of Large Numbers? $X_k$, $k \geq 1$ are iid random variables such that
$$\limsup_{n\rightarrow\infty} \frac{X_n}{n} < \infty$$
with probability 1. We want to show that
$$\limsup_{n\rightarrow\infty} \frac{\sum_{i=1}^n X_i}{n} < \infty$$
with probability 1.
The hint says to apply the law of large numbers to the sequence $\max(X_k,0), k \geq 1$. SLLN gives that
$$\frac{\sum_{i=1}^n \max(X_i,0)}{n} \rightarrow \mathbb{E}\max(X,0) = \mathbb{E}(X; X>0)$$
almost surely. I feel that the idea here is that $\limsup X_n/n < \infty$ a.s. implies that $\mathbb{E}(X;X>0) < \infty$, but I am not really sure how to approach this...
|
Consider $X_k^+ := \max(X_k,0)$. Then,
\begin{align*}
P\left(\limsup \frac{X_n}{n} < \infty\right)=1 &\Rightarrow P\left(\limsup \frac{X_n^+}{n} < \infty\right)=1 \\
&\Rightarrow \exists A: P\left(\frac{X_n^+}{n} > A \text{ i.o.}\right)=0 \\
&\Rightarrow \sum_{i=1}^\infty P\left(\frac{X_i^+}{i} > A\right) < \infty \\
&\Rightarrow \sum_{i=1}^\infty P\left( X^+ > iA \right) < \infty \\
&\Rightarrow \mathbb{E}X^+ < \infty.
\end{align*}
By the Strong Law of Large Numbers,
$$\frac{\sum_{i=1}^n X_i^+}{n} \rightarrow \mathbb{E}X^+ < \infty \text{ a.s.},$$
and so,
$$\limsup \frac{\sum_{i=1}^n X_i}{n} \leq \limsup \frac{\sum_{i=1}^n X_i^+}{n} = \mathbb{E}X^+ < \infty \text{ a.s.}$$
Lemma. $\limsup_n X_n < \infty$ a.s. if and only if $\sum P(X_n > A) < \infty$ for some $A$.
Proof of Lemma. We write $Y = \limsup_n X_n$ for notational simplicity. Since $X_n$ are independent, Borel-Cantelli Lemmas show that
\begin{align*}
\sum_{n=1}^\infty P(X_n > A) < \infty &\iff P(X_n > A \text{ i.o.}) = 0 \\
\sum_{n=1}^\infty P(X_n > A) = \infty &\iff P(X_n > A \text{ i.o.}) = 1.
\end{align*}
To relate this with the finiteness of $Y$, note that
$\bullet$ If $a_n > A$ i.o., then $\limsup_n a_n \geq A$.
$\bullet$ If $\limsup_n a_n \geq A$, then for any $\epsilon > 0$, we have $a_n \geq A-\epsilon$ i.o.
This yields the inequality
$$P(Y \geq A+\epsilon) \leq P(X_n > A \text{ i.o.}) \leq P(Y \geq A)$$
This shows that
$\bullet$ If $P(Y = \infty) = 0$, then $P(Y \geq A) < 1$ for some constant $A$. Then, $P(X_n > A \text{ i.o.}) < 1$ and hence, $P(X_n > A \text{ i.o.}) = 0$.
$\bullet$ If $P(X_n > A \text{ i.o.}) = 0$, then $P(Y \geq A + \epsilon) = 0$, and thus, $P(Y= \infty) = 0$ as well.
These combine to show that $\limsup_n X_n < \infty $ a.s. $\iff \sum_n P(X_n > A) < \infty$ for some $A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why do we use dummy variables in integrals? I want to know why we use dummy variables in integrals?
|
I'll interpret the question as
Why are we not using the original variable instead of a dummy variable?
Why use $\int_0^xf(x')dx'$ and not $\int_0^xf(x)dx$?
(If this is the case, please clarify your question with a marked(!) edit; others interpreted it differently.)
Because it has a different meaning and there is a problem with how to define it.
To show the difference define
$$g(x):= \int_0^xf(x,x')dx'$$
for some $f:\mathbb{R}^2\to \mathbb{R},(x,y)\mapsto f(x,y)$ continuously differentiable with respect to both arguments (being only continuous instead of differentiable would be enough to define $g$).
Now compute
$$\frac{dg(x)}{dx}= \frac{d}{dx}\int_0^xf(x,x')dx' =f(x,x) +\int_0^x\frac{\partial f(x,x')}{\partial x}dx' =f(x,x) + \int_0^x f_1(x,x')dx',$$
where $f_1$ denotes the partial derivative of $f$ with respect to its first argument.
The second term appears because the integrand depends on the outer variable $x$.
So $\int_0^xf(x,x')dx'$ means something different from $\int_0^xf(x',x')dx'$.
This is the reason why
$$\int_0^xf(x,x)dx$$
is somewhat ill defined.
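If it helps to see the extra boundary term concretely, here is a quick SymPy check with an arbitrarily chosen integrand $f(x,t)=e^{xt}$ (my own example):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = sp.exp(x * t)  # plays the role of f(x, x') above, with x' named t

g = sp.integrate(f, (t, 0, x))                               # g(x) = int_0^x f(x,t) dt
lhs = sp.diff(g, x)
rhs = f.subs(t, x) + sp.integrate(sp.diff(f, x), (t, 0, x))  # f(x,x) + int_0^x f_1
print(sp.simplify(lhs - rhs))                                # 0
```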
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 0
}
|
How to simplify $(\sin\theta-\cos\theta)^2+(\sin\theta+\cos\theta)^2$? Simplify: $(\sin \theta − \cos \theta)^2 + (\sin \theta + \cos \theta)^2$
Answer choices:
* 1
* 2
* $\sin^2 \theta$
* $\cos^2 \theta$
I am lost on how to do this. Help would be much appreciated.
|
$$\begin{align}
&\phantom{=}\left(\sin x-\cos x\right)^2+\left(\sin x+\cos x\right)^2\\
&=\sin^2x-2\sin x\cos x+\cos^2x+\sin^2x+2\sin x\cos x+\cos^2x\\
&=2\sin^2x+2\cos^2x\\
&=2\left(\sin^2x+\cos^2x\right)\\
&=2
\end{align}$$
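A one-line SymPy confirmation, if you want to check the algebra mechanically:

```python
import sympy as sp

theta = sp.symbols('theta')
expr = (sp.sin(theta) - sp.cos(theta))**2 + (sp.sin(theta) + sp.cos(theta))**2
print(sp.simplify(expr))  # 2
```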
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Find the Basis and Dimension of a Solution Space for homogeneous systems I have the following system of equations:
$$\left\{\begin{array}{c}
x+2y-2z+2s-t=0\\
x+2y-z+3s-2t=0\\
2x+4y-7z+s+t=0
\end{array}\right.$$
Which forms the following matrix
$$\left[\begin{array}{ccccc|c}
1 & 2 & -2 & 2 & -1 & 0\\
1 & 2 & -1 & 3 & -2 & 0\\
2 & 4 & -7 & 1 & 1 & 0
\end{array}\right]$$
Which I then row reduced to the following form:
$$\left[\begin{array}{ccccc|c}
1 & 2 & 0 & 4 & -3 & 0\\
0 & 0 & 1 & 1 & -1 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]$$
I am unsure from this point how to find the basis for the solution set. Any help of direction would be appreciated. I know that the dimension would be $n-r$ where $n$ is the number of unknowns and $r$ is the rank of the matrix but I do not know how to find the basis.
|
First solve the system, assigning parameters to the variables which correspond to non-leading (non-pivot) columns:
$$\eqalign{
&t=\alpha\cr
&s=\beta\cr
z+s-t=0\quad\Rightarrow\quad &z=\alpha-\beta\cr
&y=\gamma\cr
x+2y+4s-3t=0\quad\Rightarrow\quad &x=3\alpha-4\beta-2\gamma\ .\cr}$$
So the solution set is
$$\left\{\pmatrix{3\alpha-4\beta-2\gamma\cr \gamma\cr \alpha-\beta\cr \beta\cr \alpha\cr}\ \Bigg|\ \alpha,\beta,\gamma\in{\Bbb R}\right\}$$
which can be written
$$\left\{\alpha\pmatrix{3\cr0\cr1\cr0\cr1\cr}+\beta\pmatrix{-4\cr0\cr-1\cr1\cr0\cr}+\gamma\pmatrix{-2\cr1\cr0\cr0\cr0\cr}\ \Bigg|\ \alpha,\beta,\gamma\in{\Bbb R}\right\}\ .$$
The three vectors shown span the solution set; it is also not too hard to prove that they are linearly independent; therefore they form a basis for the solution set.
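As a sanity check (not part of the proof), SymPy's `nullspace` returns a basis spanning the same solution set, possibly with the vectors listed in a different order:

```python
import sympy as sp

A = sp.Matrix([
    [1, 2, -2, 2, -1],
    [1, 2, -1, 3, -2],
    [2, 4, -7, 1, 1],
])
basis = A.nullspace()
for v in basis:
    print(v.T)
print("dimension:", len(basis))  # 3 = n - r = 5 - 2
```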
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Probability of at least 3 red balls given 4 choices in a bag of 4 red balls and 4 black balls? Let's say there are 8 balls in a bag, where 4 are red and 4 are black.
If I choose four balls from the bag without replacement, what is the probability that I will choose at least 3 red balls?
My thinking was to use the idea that $P(E) = \frac{|E|}{|S|}$. Therefore, am I correct in saying that $|E| = {4 \choose 3} \cdot 5$, since I am choosing 3 red balls from the 4 available, and the last ball can be of any colour?
However, I am not sure about $|S|$. How do I choose four balls from eight, keeping into account that there are only two colours? I assume that $8 \choose 4$ isn't correct.
|
Here we want to find the probability that exactly 3 Red balls OR 4 Red balls are drawn without replacement. This means that we need to add the probabilities of the two events.
Another way to think of it is finding the total number of ways to draw 3 Red balls/4 Red balls, and then dividing by the total number of possible outcomes.
the number of ways to choose 3 Red balls from 4 Red balls: $$ 4 \choose 3 $$
The probability of each such draw is the probability that a Red is drawn in 3 of the 4 slots AND the 4th slot is Black; by the multiplication rule, the order of the slots does not matter, so every such arrangement has probability
$$ \frac48 \cdot\frac37\cdot\frac26\cdot\frac45,$$
where the last factor $\frac45$ is the chance that the remaining slot is Black.
The number of ways to draw 4 Red balls from 4 Red balls is 1: $${4\choose 4} = \frac{4!}{4!} = 1,$$
multiplied by the probability of pulling 4 Red balls, which is $$ \frac 48
\cdot\frac 37 \cdot \frac 26 \cdot \frac 15.$$
Therefore we get: $$ {4 \choose 3} \cdot\frac48 \cdot\frac37\cdot\frac26\cdot\frac45 + {4 \choose 4} \cdot\frac48 \cdot\frac37\cdot\frac26\cdot\frac15 $$
$$= 17/70.$$
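The same count via binomial coefficients, which also confirms that $\binom{8}{4}$ is the right denominator $|S|$:

```python
from math import comb

total = comb(8, 4)                                # 70 ways to pick 4 of 8 balls
favorable = comb(4, 3) * comb(4, 1) + comb(4, 4)  # exactly 3 red, plus 4 red
print(favorable, "/", total)                      # 17 / 70
```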
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/940974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Relationship between $\int_a^b f(x) dx$ and $\sum_{i= \lceil a\rceil}^{\lfloor b\rfloor} f(i)$ Suppose we have a continuous function $f(x)$ on the interval $[a,b]$.
Does there exist any relationship between its integral and the sum of the function values at the integers between $a$ and $b$,
i.e., a relationship between $\int_a^b f(x) dx$ and $\sum_{i= \lceil a\rceil}^{\lfloor b\rfloor} f(i)$?
For instance, we have the integral test for infinite series: if the terms are positive and decreasing, the integral and the summation converge or diverge together. But what can be inferred about the partial sums of a series (not necessarily decreasing) if we know the integral between some finite limits?
|
Yes. The Euler–Maclaurin formula completely describes this kind of relation.
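For reference, one common form of the formula, for integers $a<b$ (see a standard reference for the precise remainder term), is
$$
\sum_{i=a}^{b} f(i) = \int_a^b f(x)\,dx + \frac{f(a)+f(b)}{2} + \sum_{k=1}^{K} \frac{B_{2k}}{(2k)!}\Big(f^{(2k-1)}(b) - f^{(2k-1)}(a)\Big) + R_K,
$$
where the $B_{2k}$ are Bernoulli numbers and the remainder $R_K$ is controlled by $f^{(2K)}$ on $[a,b]$.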
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding all solutions of an expression and expressing them in the form $a+bi$ $$6x^2+12x+7=0$$
Steps I took:
$$\frac { -12\pm \sqrt { 12^{ 2 }-4\cdot6\cdot7 } }{ 12 } $$
$$\frac { -12\pm \sqrt { -24 } }{ 12 } $$
$$\frac { -12\pm i\sqrt { 24 } }{ 12 } $$
$$\frac { -12\pm 2i\sqrt { 6 } }{ 12 } $$
I don't know where to go from here to arrive at the correct answer...
|
Hint:
$$\frac{a+b}{c} = \frac ac + \frac bc$$
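Carrying the hint through:
$$
x = \frac{-12}{12} \pm \frac{2\sqrt6}{12}\,i = -1 \pm \frac{\sqrt{6}}{6}\,i.
$$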
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
divisibility on prime and expression This site is amazing and I have gotten good answers here.
This is my last question.
If $4|(p-3)$ for some prime $p$, then $p|(x^2-2x+4)$.
Can you justify my statement?
High regards to one and all.
|
For any $x$, we have $x^2 - 2x + 4 = (x - 1)^2 + 3$.
So then you want to see if $-3$ is a quadratic residue modulo $p$, and that's what you use the Legendre symbol $(\frac{-3}{p})$ for, which gives you a yes ($1$) or no ($-1$) answer.
But $p \equiv 3 \mod 4$ does not guarantee $(\frac{-3}{p}) = 1$, as André's example of $p = 11$ shows. And even if $(\frac{-3}{p})$ does equal $1$, all that tells you is that there is at least one integer $x$ such that $p|(x^2 + 3)$; for example, with $p = 7$, we readily see that $p$ is a divisor of $2^2 + 3, 5^2 + 3, 9^2 + 3, \ldots$ but not $3^2 + 3, 6^2 + 3, 7^2 + 3, \ldots$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Solving Coin Toss Problem If a coin is tossed 3 times, there are 8 possible outcomes.
HHH HHT HTH HTT THH THT TTH TTT
In the above experiment we see that 1 sequence has 3 consecutive H's, 3 sequences have 2 consecutive H's, and 7 sequences have at least a single H.
Suppose a coin is tossed $n$ times. How many sequences will we get which contain a run of H's of length at least $k$? How can this problem be solved using a recurrence relation?
|
Let $x^n(i,j)$ be the number of sequences of length $n$ with exactly $i$ as the length of the longest sequence of H's and ending in exactly $j$ H's. Then $x^n(i,j)=0$ if $i<j$ or $i\gt n$ or $j\gt n.$ We can fill in the table for $x^2(i,j)$ row $i,$ column $j:$
$$
\begin{array}{c|ccc}
& 0 & 1 & 2 \\
\hline
0 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 \\
2 & 0 & 0 & 1 \\
\end{array}
$$
We can compute the array for $x^{n+1}$ from $x^n:$
We add an H or T to the (right) end of each sequence of length $n.$ Suppose we start with state $(i,j)$. If we add a T then the new value of $j$ will be $0$ no matter what the sequence was. The value of $i$ is unchanged. So the new state is $(i,0).$
If, instead, an H is added to the end, then $j$ increases by $1$ but $i$ will stay the same or increase by $1.$ We have $3$ cases:
If $i=j$ then state $(i,i)$ becomes $(i+1,i+1).$
If $i>j$ then $(i,j)$ becomes $(i,j+1).$
And $i<j$ is not possible.
Then the recursive equations are:
$x^{n+1}(i,0)=\sum_{j} x^n(i,j)$
$x^{n+1}(i,j)=x^n(i,j-1),$ for $i>j\ge 1.$
$x^{n+1}(j,j)=x^n(j-1,j-1)+x^n(j,j-1),\text { for }j\ge 1. $
and $0$ otherwise.
For example, computing the table for $n=3,4$ and then $n=5:$
$$
\begin{array}{c|cccccc}
& 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 7 & 5 & 0 & 0 & 0 & 0 \\
2 & 5 & 2 & 4 & 0 & 0 & 0 \\
3 & 2 & 1 & 0 & 2 & 0 & 0 \\
4 & 1 & 0 & 0 & 0 & 1 & 0 \\
5 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
$$
Then compute the sum in each row. That gives the number with exactly $i$ as the largest number of consecutive H: $1,12,11,5,2,1.$
If you want the number with $\ge i$ consecutive H's for $i=0,1,2,3,4,5$ :
$32,31,19,8,3,1.$
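Here is a short Python transcription of the recurrence (my own code, with the table stored in a dictionary), reproducing the $n=5$ tallies above:

```python
from collections import defaultdict

def run_counts(n):
    # x[(i, j)] = number of strings of the current length whose longest
    # H-run is exactly i and which currently end in exactly j H's
    x = defaultdict(int)
    x[(0, 0)] = 1  # the length-1 string "T"
    x[(1, 1)] = 1  # the length-1 string "H"
    for _ in range(n - 1):
        new = defaultdict(int)
        for (i, j), c in x.items():
            new[(i, 0)] += c              # append T: current run resets
            if i == j:
                new[(i + 1, i + 1)] += c  # append H: record run grows
            else:                         # i > j
                new[(i, j + 1)] += c      # append H: current run grows
        x = new
    exact = defaultdict(int)
    for (i, j), c in x.items():
        exact[i] += c                     # row sums of the table
    return exact

exact = run_counts(5)
print([exact[i] for i in range(6)])                            # [1, 12, 11, 5, 2, 1]
print([sum(exact[i] for i in range(k, 6)) for k in range(6)])  # [32, 31, 19, 8, 3, 1]
```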
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to integrate $\int_{-\infty}^\infty e^{- \frac{1}{2} ax^2 } x^{2n}dx$ How can I approach this integral? ($0<a \in \mathbb{R}$ and $n \in \mathbb{N}$)
$$\large\int_{-\infty}^\infty e^{- \frac{1}{2} ax^2 } x^{2n}\, dx$$
Integration by parts doesn't seem to make it any simpler.
Hints please? :)
|
Another non-IBP route: Consider the integral $\int_{-\infty}^\infty e^{-ax^2/2}e^{t x}\,dx$, which can be computed exactly by completing the square in the exponent. Expanding $e^{tx}$ in powers of $t$, we find that the coefficients are essentially just the desired integrals.
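For completeness, carrying that out: completing the square gives
$$
\int_{-\infty}^\infty e^{-\frac12 ax^2}e^{tx}\,dx=\sqrt{\frac{2\pi}{a}}\,e^{t^2/(2a)},
$$
and matching the coefficient of $t^{2n}$ on both sides of the expansion yields
$$
\int_{-\infty}^\infty e^{-\frac12 ax^2}\,x^{2n}\,dx = \sqrt{\frac{2\pi}{a}}\,\frac{(2n)!}{2^n\,n!\,a^n} = \sqrt{\frac{2\pi}{a}}\,\frac{(2n-1)!!}{a^n}.
$$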
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 6
}
|
Continuity of piecewise function of two variables The question looks like this.
Let $f(x, y)$ = 0 if $y\leq 0$ or $y\geq x^4$, and $f(x, y)$ = 1 if $0 < y < x^4 $.
(a) show that $f(x, y) \rightarrow 0$ as $(x, y) \rightarrow (0, 0)$ along any path through (0, 0) of the form $ y = mx^a $ with $a < 4$.
(b) Despite part (a), show that $f$ is discontinuous at (0, 0)
(c) Show that $f$ is discontinuous on two entire curves.
What I've concluded is that when $x<0$, $m>0$, and $a$ is an odd number, $y$ becomes smaller than zero, so $f(x, y)$ can't be any larger than zero. But I don't think that's enough. I think I need a way to show in general that $mx^a$ (with $a<4$) is larger than $x^4$ or smaller than $0$ when $x$ and $y$ are close enough to zero, which is where I get stuck.
Regarding (b), I know $f(x, y)$ is discontinuous along certain directions, but I can't elaborate on it in decent form.
Regarding (c), how can I show it?
|
Substitute $e^{-t}$ for $x$. If $m>0$, substitute $e^{-at+b}$ for $y$, where $b=\ln(|m|)$; if $m<0$, take $y=-e^{-at+b}$. In the first case we have $t>t_0=-b/(4-a) \Rightarrow y>x^4$, so $f(e^{-t},e^{-at+b})=0$ for all $t>t_0$. In the second case we have $y<0$ for all $t\in \Bbb{R}$, so $f(e^{-t},-e^{-at+b})=0$ for all $t\in \Bbb{R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many functions can be constructed? How many functions $f:\left\{1, 2, 3, 4,5 \right\} \rightarrow \left\{ 1, 2, 3, 4, 5 \right\}$ satisfy the relation $f\left( x \right) =f\left( f\left( x \right) \right)$ for every $x\in \left\{ 1, 2, 3, 4, 5 \right\}$?
My book says the answer is 196.
|
Hint: Let $R\subset\{1,2,3,4,5\}$ be the range of $f$.
Then $f(x)$ is completely determined for every $x\in R$, and the only choices you have about the behavior of $f$ are for $x\notin R$.
Hint 2: If $\{1,2,3,4,5\}$ is too complicated, try solving the problem for $\{1,2\}$ instead, and then see if you can apply the same method to the $\{1,2,3,4,5\}$ case.
Hint 3: Suppose that $R$ has exactly $n$ elements, and then try to count the number of functions $f$ satisfying $f(f(x))=f(x)$.
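If you want to confirm the book's $196$ before working out the formula, a brute-force check over all $5^5$ functions is cheap (my own throwaway script):

```python
from itertools import product

S = range(1, 6)
count = sum(
    all(f[f[x - 1] - 1] == f[x - 1] for x in S)  # f(f(x)) == f(x), 1-indexed
    for f in product(S, repeat=5)                # f as a tuple of values
)
print(count)  # 196
```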
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/941888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Spectral Measures: Concentration Given a Hilbert space $\mathcal{H}$.
Consider spectral measures:
$$E:\mathcal{B}(\mathbb{C})\to\mathcal{B}(\mathcal{H}):\quad E(\mathbb{C})=1$$
Define its support:
$$\operatorname{supp}(E):=\bigg(\bigcup_{U=\mathring{U}:E(U)=0}U\bigg)^\complement=\bigcap_{C=\overline{C}:E(C)=1}C$$
By second countability:
$$E\Big(\big(\operatorname{supp}E\big)^\complement\Big)\varphi=E\left(\bigcup_{k=1}^\infty B_k'\right)\varphi=\sum_{k=1}^\infty E(B_k')\varphi=0$$
But it may happen:
$$\Omega\subsetneq\operatorname{supp}E:\quad E(\Omega)=E(\operatorname{supp}E)=1$$
What is an example?
|
Consider the operator $(Af)(x)=xf(x)$ on the Hilbert space $L^2([0,1])$ with Lebesgue measure. Then $E_A(M)=1_M$ is the spectral measure associated with $A$. Note that $\operatorname{supp}E_A=\sigma(A)=[0,1]$, but $E_A(\{\lambda\})=0$ for any singleton $\{\lambda\}$, since $A$ does not have any eigenvalues. Now consider, say, the set $(0,1]$, for which $I=E([0,1])=E(\{0\})+E((0,1])=E((0,1])$, even though $(0,1]\subsetneq[0,1]=\operatorname{supp}E_A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find $f$, such that $\,f,f',\dots,f^{(n-1)}\,$ linearly independent and $\,f^{(n)}=f$ I am trying to find a function $f\in\mathcal{C}^\infty(\mathbb{R},\mathbb{C})$, satisfying the differential equation
$$
f^{(n)}=f,
$$
and with $\,f,f',\dots,f^{(n-1)}\,$ being linearly independent.
Could you give me some hints?
Thanks.
|
The characteristic polynomial of the equation $x^{(n)}-x=0$ is $p(\zeta)=\zeta^n-1$,
with roots $1,\lambda,\ldots,\lambda^{n-1}$, where $\lambda=\exp(i\omega)$ with
$\omega=\dfrac{2\pi}{n}$. Hence, the functions
$$
f_k(t)=\exp \big(\lambda^k t\big), \,\,\,\text{where $k\in\{0,1,2,\ldots,n-1\}$}
$$
form a basis of the solution space of the equation $x^{(n)}-x=0$, i.e., the $f_k$'s are linearly independent and every solution of $x^{(n)}-x=0$ is a linear combination of the $f_k$'s. Note also that $f_k^{(j)}(t)=\lambda^{kj}f_k(t)$.
Claim. If $a_0,\ldots,a_{n-1}$ are non-zero complex constants,
and $f=\sum_{k=0}^{n-1}a_kf_k$, then the functions
$f,f',\ldots,f^{(n-1)}$ are linearly independent over $\mathbb C$.
Proof. We have that
$$
f^{(j)}=\sum_{k=0}^{n-1}a_kf_k^{(j)}=\sum_{k=0}^{n-1}a_k\lambda^{kj}f_k.
$$
If $\sum_{j=0}^{n-1}c_jf^{(j)}=0$, for some $c_0,\ldots,c_{n-1}\in\mathbb C$, then
$$
0=\sum_{j=0}^{n-1}c_jf^{(j)}
=\sum_{j=0}^{n-1}c_j\left(\sum_{k=0}^{n-1}a_k\lambda^{kj}f_k\right)
=\sum_{k=0}^{n-1}a_k\left(\sum_{j=0}^{n-1}c_j\lambda^{kj}\right) f_k.
$$
Now as the $f_k$'s are linearly independent and the $a_k$'s non-zero, we obtain the system
$$
\sum_{j=0}^{n-1}\lambda^{kj}c_j=0, \quad k=0,\ldots,n-1,
$$
which is an $n\times n$ linear system whose system matrix is the Vandermonde matrix $$
A=\big(\lambda^{(j-1)(k-1)}\big)_{k,j=1}^n,
$$
which is invertible, since $\lambda^k\ne\lambda^j$ for $0\le k<j\le n-1$; note that
$\det A=\prod_{0\le k<j<n}(\lambda^{j}-\lambda^k)$.
Thus the above system has
a unique solution, the zero one, which means that the $f^{(j)}$'s are linearly independent.
Remark. The converse also holds: if $a_k=0$ for some $k\in\{0,1,\ldots,n-1\}$, then
the corresponding functions $f,f',\ldots,f^{(n-1)}$ would be linearly dependent.
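A quick numerical sanity check of the Vandermonde step, say for $n=3$ (my own toy verification):

```python
import numpy as np

n = 3
lam = np.exp(2j * np.pi / n)
# independence of f, f', ..., f^(n-1) reduces to invertibility of (lam^{kj})
V = np.array([[lam**(k * j) for j in range(n)] for k in range(n)])
print(abs(np.linalg.det(V)))  # about 5.196 = 3*sqrt(3), in particular nonzero
```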
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Calculus 2 Integral of$ \frac{1}{\sqrt{x+1} +\sqrt x}$ How would you find $$\int\frac{1}{\sqrt{x+1} + \sqrt x} dx$$
I used $u$-substitution and got this far:
$u = \sqrt{x+1}$, which means $u^2-1 = x$
$du = \frac{1}{2\sqrt{x+1}} dx = \frac{1}{2u} dx$, which means $dx = 2u\,du$
That means the new integral is $$\int \frac{2u}{u + \sqrt{u^2-1}}du$$
What technique do I use to solve that new integral?
Thanks
|
Hint: Use that ${1 \over \sqrt{x+1} + \sqrt{x}} = \sqrt{x+1} - \sqrt{x}$.
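Following the hint, the integral becomes immediate:
$$
\int\left(\sqrt{x+1}-\sqrt{x}\right)dx = \frac{2}{3}\left((x+1)^{3/2}-x^{3/2}\right)+C.
$$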
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Really advanced techniques of integration (definite or indefinite) Okay, so everyone knows the usual methods of solving integrals, namely u-substitution, integration by parts, partial fractions, trig substitutions, and reduction formulas. But what else is there? Every time I search for "Advanced Techniques of Symbolic Integration" or "Super Advanced Integration Techniques", I get the same results which end up only talking about the methods mentioned above. Are there any super obscure and interesting techniques for solving integrals?
As an example of something that might be obscure, the formula for "general integration by parts " for $n$ functions $f_j, \ j = 1,\cdots,n$ is given by
$$
\int{f_1'(x)\prod_{j=2}^n{f_j(x)}dx} = \prod_{i=1}^n{f_i(x)} - \sum_{i=2}^n{\int{f_i'(x)\prod_{\substack{j=1 \\ j \neq i}}^n{f_j(x)}dx}}
$$
which is not necessarily useful nor difficult to derive, but is interesting nonetheless.
So out of curiosity, are there any crazy unknown symbolic integration techniques?
|
You can do integration by inverting the matrix representation of the differentiation operator with respect to a clever choice of a basis and then apply the inverse of the operator to function you wish to integrate.
For example, consider the basis $\mathcal{B} = \{e^{ax}\cos bx, e^{ax}\sin bx \}$. Differentiating with respect to $x$ gives
\begin{align*}
\frac{d}{dx}e^{ax} \cos bx &= ae^{ax} \cos bx - be^{ax} \sin bx\\
\frac{d}{dx} e^{ax} \sin bx &= ae^{ax} \sin bx + be^{ax} \cos bx
\end{align*}
and the matrix representation of the linear operator is
$$T = \begin{bmatrix}
a & b\\
-b & a
\end{bmatrix}$$
To then solve something like $\int e^{ax}\cos bx\operatorname{d}\!x$, this is equivalent to calculating
$$T^{-1}\begin{bmatrix}
1\\
0
\end{bmatrix}_{\mathcal{B}} = \frac{1}{a^{2} + b^{2}}\begin{bmatrix}
a\\
b
\end{bmatrix}_{\mathcal{B}}.$$
That is,
$$\int e^{ax}\cos bx\operatorname{d}\!x = \frac{a}{a^{2}+b^{2}}e^{ax}\cos bx + \frac{b}{a^{2} + b^{2}}e^{ax}\sin bx$$
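A quick SymPy check of that antiderivative (my own verification, not part of the method):

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)
F = (a * sp.exp(a*x) * sp.cos(b*x) + b * sp.exp(a*x) * sp.sin(b*x)) / (a**2 + b**2)
print(sp.simplify(sp.diff(F, x) - sp.exp(a*x) * sp.cos(b*x)))  # 0
```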
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "299",
"answer_count": 21,
"answer_id": 15
}
|
Not understanding how to factor a polynomial completely $$P(x)=16x^4-81$$
I know that this factors out as:
$$P(x)=16(x-\frac { 3 }{ 2 } )^4$$
What I don't understand is the four different zeros of the polynomial...I see one zero which is $\frac { 3 }{ 2 }$ but not the three others.
|
You have to solve the equation $16x^4 - 81 = 0$; factoring gives
$$ 16x^4 - 81 = (4x^2 - 9)(4x^2 + 9) = 0 $$
Setting each factor to zero then gives the other three roots you were missing.
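Explicitly: $4x^2-9=(2x-3)(2x+3)$ gives the real zeros $x=\pm\frac32$, while $4x^2+9=0$ gives the complex zeros $x=\pm\frac32 i$, so the polynomial has four zeros in all.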
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
If $f$ is a quadratic and $f(x)>0\;\forall x$, and $g= f + f' + f''$, prove $g(x)>0\; \forall x$ If $f(x)$ is a quadratic expression such that $f(x)>0\;\forall x\in\mathbb{R},$ and if $g(x)=f(x)+f'(x)+f''(x),$
Then prove that $g(x)>0\; \forall \; x\in \mathbb{R}$.
$\bf{My\; Trial \; Solution::}$ If $f(x)>0\;\forall x\in \mathbb{R}$, then the function $f(x)$ attains a minimum value.
Let the minimum occur at $x=x_{0}$. Then $f(x)_{Min} = f(x_{0})>0$.
Now, given $g(x) = f(x)+f'(x)+f''(x)$, we have $g'(x) = f'(x)+f''(x)+f'''(x).$
Now at $x=x_{0}$, the value of $g'(x_{0}) = f'(x_{0})+f''(x_{0})+f'''(x_{0}) = 0+f''(x_{0})+f'''(x_{0})$.
Now I do not understand how to proceed from here.
Help me
Thanks
|
Suppose $f(x)=x^2+ax+b$ with $b=f(0)>0$ and $a^2<4b$. Then,
$$
g(x)=x^2+(a+2)x+(2+a+b).
$$
We note that $g(0)=2+a+b=1+f(1)>0$ and
$$
(a+2)^2-4(2+a+b)=a^2+4a+4-8-4a-4b=(a^2-4b)-4<0
$$
so $g$ is never $0$ for real $x$. You can now infer that $g$ is always positive.
The more general case $f(x)=C(x^2+ax+b)$ is the same. It just adds the constant $C>0$ to everything.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
General formula for $\sin\left(k\arcsin (x)\right)$ I'm wondering if there's a simple way to rewrite this in terms of $k$ and $x$, especially as a polynomial. It seems to me to crop up every so often, especially for $k=2$, when I integrate with trig substitution. But $k=2$ is not so bad, because I can use the double angle formula; it's the prospect of higher values of $k$ that motivates this question.
I think the law of sines may help? Or maybe even De Moivre's theorem, to find the length of the hypotenuse as the length of the angle changes, if we think of the right triangle drawn from $\arcsin(x)$ with side1 = $x$, hypotenuse = $1$, and side2 = $\sqrt{1 - x^2}$ as a complex number, though I'm not sure how that might work.
|
I would try to write $A[k](x)=\sin\big(k\arcsin(x)\big)$ and $B[k](x)=\cos\big(k\arcsin(x)\big)$,
and then $A[k+1](x) = \sin\big(\arcsin(x) + k\arcsin(x)\big) = x\, B[k](x) + \sqrt{1-x^2}\, A[k](x)$.
You could then either use the same recurrence form for $B[k+1](x)$ and have a double recurrence relation, or use $B[k](x)=\sqrt{1-A[k](x)^2}$ and have a simple one.
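A small SymPy sketch of the double recurrence (the variable names are mine), which reproduces the familiar $\sin(2\arcsin x)=2x\sqrt{1-x^2}$ and $\sin(3\arcsin x)=3x-4x^3$:

```python
import sympy as sp

x = sp.symbols('x')
c = sp.sqrt(1 - x**2)   # cos(arcsin x)
A, B = x, c             # k = 1: sin(arcsin x), cos(arcsin x)
for k in range(2, 5):
    # angle addition with u = arcsin x:
    # sin((k+1)u) = sin u cos(ku) + cos u sin(ku), and similarly for cos
    A, B = x*B + c*A, c*B - x*A
    print(k, sp.expand(sp.simplify(A)))
```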
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Terminology - Limit doesn't exist Take the following limit: $$ \lim_{x \to 2} \dfrac{x+2}{x-2} $$
This doesn't exist. My textbook says it doesn't because "The denominator approaches
0 (from both sides) while the numerator does not."
I don't understand what this means. I do understand that it doesn't exist. My thought process is that the left limit and the right limit aren't equal, so the limit doesn't exist.
But I want to know what is meant by the text in the textbook. Can anybody give me an example where the numerator also approaches 0 from both sides?
|
Consider the function $\frac{\sin x}{x}$ as $x$ approaches $0$ for a case where the numerator also approaches $0$ and the limit does exist, equal to $1$.
If you want another example where the limit doesn't exist, consider $\sin x$ as $x$ tends to infinity, or $(-1)^n$ as $n$ tends to infinity: compare $n$ running through the odd numbers with $n$ running through the even numbers. Because each function is periodic, assuming a limiting value exists leads to a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Show that every proper subgroup of this group is finite. Let $G$ be the group of rational numbers in $[0,1)$ whose
denominator is a power of $2$:
\begin{align*}
G &= \{r/2^k : \text{$r \in \mathbb Z$, $0 \le r < 2^k$, $k = 0, 1,
\ldots$} \} \\
&=\{0, \frac12, \frac14, \frac34, \frac18, \frac38,
\frac58,\frac78, \frac1{16}, \ldots \}
\end{align*}
Addition in $G$ is modulo $1$. So $3/4 + 5/8 = 3/8$.
Show that every proper subgroup of $G$ is finite.
I was planning to define $A_k = \{r/2^k : r = 0,1, \ldots, 2^k - 1\}$.
Then it is not hard to show that $A_k$ is a subgroup of $G$,
then, $A_k \subseteq A_{k+1}$, where $A_k$ is a cyclic group of order $2^k$,
and $G = \bigcup_{k=0}^\infty A_k$.
Since every $A_i$ is finite, I am done.
Not sure if I'm overthinking this; it feels like this question should take more than a few lines. Am I missing something here?
Thanks in advance.
|
Hint: Try to show that if a subgroup of $G$ is infinite, then for infinitely many $k$ it contains a generator of $A_k$. Then show that it is in fact true for all $k$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Proving a function has real roots I am not interested in finding roots but interested in proving that the function has real roots.
Suppose a function $f(x) = x^2 - 1$
This function obviously has real roots.
$x = {-1, 1}$
How could I prove this without actually finding the roots?
Trial and error could work, number theory even? (modulus etc?) Calculus, any methods?
Thanks!
|
One way is using the discriminant of the quadratic equation:
$$b^2-4ac$$
If the discriminant is greater than 0, then there are two real roots.
If it is equal to 0, there is one (repeated) real root.
If it is less than 0, the roots are imaginary (non-real).
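Applied to the example above: for $f(x)=x^2-1$ we have $a=1$, $b=0$, $c=-1$, so the discriminant is $0^2-4(1)(-1)=4>0$, which certifies two real roots without computing them.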
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why is the polynomial $f(x)=x^3+x^2+x+1$ monotonic? I have to argue why the polynomial $f(x)=x^3+x^2+x+1$ has an inverse function $f^{-1}$ which is defined on the whole of $\mathbb R$. I'm certain the argument would simply be that because $f(x)$ is monotonic on $\mathbb R$ it is also injective on $\mathbb R$. However, I cannot argue why $f(x)$ is monotonic. A nudge in the right direction would be appreciated.
English is not my first language; please do say if my terminology is off.
|
Without calculus, you could look at $f(y)-f(x)=(y-x)(x^2+xy+y^2+x+y+1)$, and show that the right-hand side is positive whenever $y>x$.
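To fill in that last step: doubling the second factor and completing squares gives
$$
2\left(x^2+xy+y^2+x+y+1\right)=(x+y+1)^2+x^2+y^2+1\ \ge\ 1,
$$
so the second factor is strictly positive, and $f(y)-f(x)$ has the same sign as $y-x$.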
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/942922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Sum of products of positive operators I'm trying to answer the following question: given two positive self-adjoint operators $\mathcal{A}$ and $\mathcal{P}$ on a Hilbert space, is the following sum of compositions
$\mathcal{AP}+\mathcal{PA}$ also positive?
One possible condition under which this is true is when $\mathcal{AP}=\mathcal{PA}$. Thus, for this case, we assume that the operators do not commute.
What might be the conditions, in addition to the one stated previously, under which the question has an affirmative answer?
Regards,
|
Observe that
\begin{align*}
\langle (\mathcal A \mathcal P+\mathcal P\mathcal A)x,x\rangle&=\langle \mathcal A \mathcal Px,x\rangle+\langle\mathcal P\mathcal Ax,x\rangle\\
&=\langle \mathcal Px,\mathcal Ax\rangle+\langle\mathcal Ax,\mathcal Px\rangle\\
&=2\mbox{Re }\langle\mathcal Px,\mathcal Ax\rangle=2\mbox{Re }\langle\mathcal A\mathcal Px,x\rangle.
\end{align*}
Thus, $\langle (\mathcal A \mathcal P+\mathcal P\mathcal A)x,x\rangle$ is nonnegative if and only if $\langle\mathcal A\mathcal Px,x\rangle$ lies in the closed right half-plane, i.e., has nonnegative real part. This is the territory of accretive operators: do a Google search or check out the book by Konrad Schmüdgen. They are related to generators of contraction semigroups.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
induction for idempotent matrix : $P^n = P$
Given that $P^2 = P$ how do i prove by induction that $P^n = P$?
I have tried the following: we know that $P^k = P$ holds for $k = \{1,2\}$. If we now take $k=3$:
$$
\begin{align}
P^3 &= P^2P
\\ &=PP \tag*{($P$ is idempotent)} \\
\\&= P^2
\\&=P
\end{align}
$$
therefore $P^k = P$ holds for all natural numbers.
however, this seems... incomplete to me... Am I missing something?
|
Suppose $P^{n-1}=P$.
Then
$$\begin{align*}P^n&=P(P^{n-1})\\
&=PP\\
&=P^2\\
&=P.
\end{align*}
$$
We're given $P^2=P$, so by induction on $n$, we're done.
Thinking of induction as reaching back to the previous cases instead of reaching forward to the next case can be insightful.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
finding the generating function $\phi(s) = \mathbb{E}(s^{H_0})$. I just started a course on Markov chains and I'm having a few problems with one of the exercises.
Let $Y_1,Y_2, \dots$ be i.i.d random variables with:
$\mathbb{P}(Y_1 = 1) = \mathbb{P}(Y_1 = -1) = \frac{1}{2}$ and set $X_0 = 1, X_n = X_0 + Y_1+ \cdots + Y_n$ for $n \geq0$. Further define:
$$H_0 = \inf\{n \geq0: X_n = 0\}$$
find $\phi(s) = \mathbb{E}(s^{H_0})$.
Now I know that for $0 \leq s < 1$ we have:
$$\phi(s) = \mathbb{E}(s^{H_0}) = \sum_{n<\infty} s^n \mathbb{P} (H_0 = n)$$
The most confusing part is: how do I know when $X_n = 0$? The most logical thing to do here, for me, is to take $n = 1$; then $X_1 = X_0 + Y_1$ and $\mathbb{P}(X_1 = 0) = \mathbb{P}(X_0 = 1,Y_1 = -1) = \mathbb{P}(X_0 = 1)\mathbb{P}(Y_1 = -1) = \frac{1}{2}$
So is $\phi(s) = \frac{1}{2}s$?
Help would be appreciated :)
Kees
|
If $Y_1=-1$, then $X_1=0$ hence $H_0=1$. If $Y_1=1$, then $X_1=2$ hence $H_0=1+H'_0+H''_0$, where, in the RHS, $1$ accounts for the first step, $H'_0$ for the time to hit $1$ starting from $2$ and $H''_0$ for the time to hit $0$ starting from $1$. Thus, $H'_0$ and $H''_0$ are independent and distributed like $H_0$.
Turning to generating functions, this reads $$E(s^{H_0})=\frac12s+\frac12sE(s^{H_0})^2.$$ Solving the quadratics yields $$E(s^{H_0})=\frac{1\pm\sqrt{1-s^2}}s.$$ Finally, the LHS is $0$ when $s=0$ hence, for every $|s|\leqslant1$, $$E(s^{H_0})=\frac{1-\sqrt{1-s^2}}s=\frac{s}{1+\sqrt{1-s^2}}.$$
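A quick SymPy expansion (my own check) confirms the first few probabilities $P(H_0=1)=\tfrac12$, $P(H_0=3)=\tfrac18$, $P(H_0=5)=\tfrac1{16}$:

```python
import sympy as sp

s = sp.symbols('s')
phi = (1 - sp.sqrt(1 - s**2)) / s
print(sp.series(phi, s, 0, 8))  # s/2 + s**3/8 + s**5/16 + 5*s**7/128 + O(s**8)
```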
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Square root for Galois fields $GF(2^m)$ Can we define a function similar to square root for $G = GF(2^m)$ (Galois field with $2^m$ elements) as $\sqrt{x} = y$ if $y^2 = y \cdot y = x$ ? For which elements $x \in G : \exists y \in G : y^2 = x$ this function would be defined?
Can I approach this question like this:
If we can generate all elements of $G$ except $0$ from another element $a$ as $a^k : k = 1 \ldots (2^m-1)$, then any $x \neq 0$ can be expressed as $x=a^r$ for some $r$ and $y$ would be also $y = a^s$ for some $s$. $y^2 = x$ means that $$r = 2s \mod 2^m-1$$ $$r,s \in 0 \ldots 2^m-2$$
It looks like I can find $s$ for any $r$ to satisfy equation. That would mean that there is a "square root" for any element in $G$, right?
PS: I'm looking into options to analyze streams of data (bytes, 16-bit or 32-bit integers) as part of another computational task, therefore only specific Galois fields are interesting to me: $GF(2^m)$. Be warned that I may be way off in the field theory, since it's been a very long time since I touched it, but any comments are welcome!
|
For the field $GF(p^m)$ the map
$$F: x\mapsto x^p$$ is an automorphism of order $m$, that is
$$F^m(x) =x^{p^m} = x$$
and so the inverse automorphism is
$$F^{-1} = F^{m-1}$$
or
$$\sqrt[p]{x}= x^{p^{m-1}}$$
The observation about normal bases by @Dilip Sarwate is excellent; also see http://en.wikipedia.org/wiki/Normal_basis
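A minimal Python sketch for $GF(2^3)$, assuming the irreducible reduction polynomial $x^3+x+1$ (encoded as 0b1011; the whole setup is my own illustration):

```python
M, POLY = 3, 0b1011  # GF(2^3), reduction polynomial x^3 + x + 1 (an assumption)

def gf_mul(a, b):
    # carry-less multiplication with polynomial reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# sqrt(x) = x^(2^(m-1)): squaring the result recovers x for every element
for x in range(1 << M):
    s = gf_pow(x, 1 << (M - 1))
    assert gf_mul(s, s) == x
print("every element of GF(8) has a square root")
```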
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Order of parameters in quantified predicates I'm studying up for my midterm in Discrete Math and I've been looking at sample questions and their solutions. There is one I don't really understand and I was hoping someone could help me out.
2. Let the domains of x and y be the set of all integers.
Compute the Boolean values of the following quantified predicates:
All x, Exist y, (x^2 < y)
Exist y, All x, (x^2 < y)
Exist x, All y, (x^2 >= y)
All y, Exist x, (x^2 >= y)
Solution:
All x, Exist y, (x^2 < y) = T
Exist y, All x, (x^2 < y) = F
Exist x, All y, (x^2 >= y) = F
All y, Exist x, (x^2 >= y) = T
I'm not really sure if I'm understanding this or not. The first solution appears to say there exists one integer that is greater than every integer squared? I guess that makes sense on a per-integer basis, but the second solution appears to say the same thing, in a different order, but is false.
I know there's more here that I'm just not seeing. Would somebody mind explaining the nature of the problem and solutions? It would mean a lot, thanks!
|
$\forall x.P$ means that every possible value of $x$ will make $P$ true.
$\exists x.P$ means that there is a value of $x$ that will make $P$ true.
One such value is enough, but there has to be at least one.
Usually each variable is restricted to some domain. For example, since this is
a discrete math course, can we stipulate that $x$ and $y$ represent integers?
Now consider, the sentence, "For $x = 4,$ there is a value of $y$ that satisfies $x^2 < y.$"
Is that true?
How about, "For $x = 17,$ there is a value of $y$ that satisfies $x^2 < y$"?
In fact, we could set $x$ equal to any integer, and there would still be a value
of $y$ that satisfies $x^2 < y.$ So the statement, "There is a value of $y$ that satisfies $x^2 < y,$" which we can write as $\exists y.(x^2<y),$ is true for all $x.$
And that is what $\forall x.\exists y.(x^2<y)$ says.
Now consider the sentence, "If $y = 4,$ every possible integer $x$ satisfies $x^2 < y.$"
Is it true?
How about, "If $y = 17,$ every possible integer $x$ satisfies $x^2 < y$" ?
In fact, is there any value to which we can set $y$ so that $\forall x.(x^2<y)$
(that is, so that every possible integer $x$ will satisfy $x^2 < y$)?
No, there is not.
The statement $\exists y.\forall x.(x^2<y)$ asserts that there is such
a value of $y,$ so that statement is false.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How many numbers can a typical computer represent? I couldn't find this elsewhere, so I thought I'd try to figure out exactly how many numbers a typical desktop computer can represent in memory. I'm thinking about this in the context of numerical algorithms for solving mathematical problems.
I'm pretty sure single precision (SP) numbers, along with the typical 4- and 8-byte signed and unsigned integers are all a subset of the representable numbers in double precision (DP), so I think computing the number of representable numbers in DP would answer the question. Let's assume IEEE double precision, a very typical architecture found on most machines : 1 sign bit, 11 exponent ($e$) bits and 52 mantissa bits.
First, the normalized numbers (assumed leading 1 in the mantissa). With 52 available bits, there are $2^{52}$ different mantissas possible. For each mantissa there is an exponent associated with it. Note $e \in [0, 2047]$ for IEEE DP, but $e=0$ and $e=2047$ have special meanings ($0$, $NaN$, subnormalized numbers and $\pm \infty$, depending on the mantissa). So we actually have $2046$ different exponents available for normalized numbers. Also for each mantissa there are 2 signs available, $+$ and $-$.
Next, the subnormalized numbers (no leading 1 assumed in mantissa). Each subnormalized number still has $2^{52}$ bits available, but are characterized by $e = 0$, so only one available value for $e$. Again for each mantissa there are 2 signs available, $+$ and $-$.
Finally, the four special values $0$, $NaN$, $+\infty$ and $-\infty$ can be represented.
Putting these together, there are total of
$$
2 \cdot 2046 \cdot 2^{52} + 2 \cdot 2^{52} + 4 = 1.8437736874454810628E19
$$
(18.4 quintillion!) numbers representable on a typical computer.
Does this seem correct? Does anyone know of a good resource to verify it? I'm afraid I double counted something, or left a significant set of numbers out.
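(For what it's worth, a few lines of Python reproduce the arithmetic above, under exactly my counting assumptions:)

```python
normal    = 2 * 2046 * 2**52  # sign x usable exponents x mantissas
subnormal = 2 * 2**52         # e = 0, either sign
special   = 4                 # 0, NaN, +inf, -inf
print(normal + subnormal + special)  # 18437736874454810628
```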
|
The eight byte signed integers are not a subset of the double precision numbers if by double precision you mean $64$ bits. The eight byte signed integers have $63$ bits of mantissa plus a sign bit, while the double precision floats only have $52$ bits of precision.
Computing the overlap is not so easy. Let us focus on positive values. Of the $63$ bits, we need at least $11$ zeros between the two ends to make a value representable in double precision. There are $11\cdot 2^{50}$ numbers with exactly $52$ bits of precision, i.e. that have a $1$ in both the leading and trailing places; the factor $11$ comes in because the highest order bit can be in any of $11$ places. Similarly there are $12 \cdot 2^{49}$ numbers with exactly $51$ bits of precision, $13 \cdot 2^{48}$ with exactly $50$ bits, ..., $61$ with $2$ bits of precision and $62$ with a single bit of precision. Alpha tells me this is $54043195528445950$ including the negatives.
Adding your count of floating point numbers to the $2^{64}$ signed integers and deducting the double count gives $36830437752635916294$ floats and signed integers. To this we have to add the positive unsigned integers that cannot be otherwise represented. They have to have a $1$ in the most significant bit; otherwise they fit in a signed integer. This would give $2^{63}$ of them, but we have to deduct the ones that have twelve trailing zeros because they can be represented by floats. That gives $2^{63}-2^{51}$ of them.
Adding these in to the previous count gives $46051557989677006854$. This compares with about $5.534\cdot 10^{19}$ if we could use all $3\cdot2^{64}$ combinations of bit patterns and formats, so we are only about $9\cdot 10^{18}$ short of that. Of course, some languages have other ways of representing numbers that allow many more possibilities.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Proving a set-theoretic identity Context: Measure theory.
Reason: Just curious.
Question: Given $\{A_k\}$ with the $A_k$ not necessarily disjoint, $B_1=A_1$ and $B_n = A_n - \bigcup\limits_{k=1}^{n-1} A_k$ for $n \in \mathbb{N}-\{1\}$ and $k \in \mathbb{N}$, how can I show that $$\bigcup\limits_{n=2}^{\infty}A_n = \bigcup\limits_{n=2}^{\infty}B_n?$$
Attempt: $$\bigcup\limits_{n=2}^{\infty}B_n=\bigcup\limits_{n=2}^{\infty}\left(A_n \cap (\bigcup\limits_{k=1}^{n-1} A_k)^c\right)=\bigcup\limits_{n=2}^{\infty}A_n \bigcap \bigcup\limits_{n=2}^{\infty}\left(\bigcup\limits_{k=1}^{n-1} A_k\right)^c = \cdots$$
Where do I go from here?
|
Try showing the double inclusion. One side is easy, as $B_i \subseteq A_i$ for all $i$. For the other side, take $x \in \bigcup A_i$ and let $k$ be the first index such that $x \in A_k$; what can you say about $x$ and $B_k$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Inhomogeneous modified Bessel differential equation I'm trying to solve the following inhomogeneous modified bessel equation.
$$y^{\prime\prime}+\frac{1}{x}y^{\prime}-\frac{x^2+4}{x^2}y=x^4$$
I know the homogeneous solution for this differential equation is $y_h=c_1I_2(x)+c_2K_2(x)$
Where $I_2$ and $K_2$ are the modified Bessel function of the first and second kind respectively both of order 2.
For a particular solution I'm trying to get an answer using variation of parameters, knowing that $W[K_\nu,I_\nu]=1/x$.
Next, I know the particular solution has the form:
$$y_p=v_1(x)y_1+v_2(x)y_2$$ where $y_1$ and $y_2$ are the solutions of the homogeneous differential equation respectively.
$v_1(x)=-\int\frac{fy_2}{W}$ and $v_2(x)=\int\frac{fy_1}{W}$ where $f=x^4$
The answer to the problem is given as $y_p=-x^2(x^2+12)$.
I don't know how the two integrals can be solved and give something so simple in the end; there's something I'm missing.
|
$$y^{\prime\prime}+\frac{1}{x}y^{\prime}-\frac{x^2+4}{x^2}y=x^4$$
The solution for the associated homogeneous ODE is $y_h=c_1I_2(x)+c_2K_2(x)$
The solution for the non-homogeneous ODE can be found on the form $y=y_h+p(x)$ where $p(x)$ is a particular solution of the ODE.
The search for a particular solution using the variation of parameters method is possible but arduous. Before setting out on that tedious route, it is useful to try some simple functions and see if, by luck, one of them is suitable.
The simplest idea is to try a polynomial. Since there is $-y$ on the left side of the ODE and $x^4$ on the right side, we will try a 4th degree polynomial. Since there is $\frac{-4}{x^2}$ on the left side, the polynomial must not include terms whose degree is lower than 2. So, let :
$$p(x)=ax^4+bx^3+cx^2$$
Bringing it back into the ODE leads to :
$$p^{\prime\prime}+\frac{1}{x}p^{\prime}-\frac{x^2+4}{x^2}p= -ax^4-bx^3+(12a-c)x^2+5bx=x^4$$
Hence : $a=-1\space;\space b=0\space;\space c=-12$
We see that, "by luck", the polynomial $p(x)=-x^4-12x^2$ is a convenient particular solution. So, the general solution is :
$$y=c_1I_2(x)+c_2K_2(x)-x^4-12x^2$$
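A quick SymPy check (my own verification) that this particular solution really satisfies the ODE:

```python
import sympy as sp

x = sp.symbols('x')
y = -x**4 - 12*x**2
# plug y into y'' + y'/x - (x^2 + 4)/x^2 * y - x^4 and confirm it vanishes
residual = sp.diff(y, x, 2) + sp.diff(y, x) / x - (x**2 + 4) / x**2 * y - x**4
print(sp.simplify(residual))  # 0
```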
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/943945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Is division axiomatizable? Consider a set $G$ with a group operation. We can define a division operation $a*(b^{-1})$ and call it $\operatorname{div}$. Is the class of division operations first order axiomatizable? And if so, is it finitely axiomatizable?
|
Let $\star$ be your operator. On a group, this can be axiomatized as:
$$\forall x(1\star(1\star x)=x)\text{ (A)}\\ \forall x(x\star x = 1)\text{ (B)}\\\forall x,y,z\left((x\star y)\star z = x\star(z\star(1\star y))\right)\text{ (C)}$$
We can quickly show:
$$\begin{align}
x\star 1 &= (1\star(1\star x))\star 1 \text{ (A)}\\
&=1\star(1\star(1\star(1\star x)))\text{ (C)}\\
&=1\star(1\star x)\text{ (A)}\\
&=x\text{ (A)}
\end{align}
$$
Then if you define $x^{-1}=1\star x$ and $x\cdot y = x\star(1\star y)$, we can show:
$$x\cdot x^{-1} = x\star(1\star(1\star x)) = x\star x = 1\\
x^{-1}\cdot x = (1\star x)\star (1\star x)=1\\
1\cdot x = 1\star(1\star x)=x\\
x\cdot 1 = x\star (1\star 1) = x\star 1 = x\\
\begin{align}(x\cdot y)\cdot z&=(x\star(1\star y))\star(1\star z)\\
&=x\star\left((1\star z)\star(1\star(1\star y))\right)\\
&=x\star\left((1\star z)\star y\right)\\
&=x\star(1\star (y\star(1\star z)))\\
&= x\cdot(y\cdot z)
\end{align}
$$
and finally:
$$a\cdot b^{-1} = a\star(1\star(1\star b)) = a\star b$$
So you've got all your group axioms.
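A brute-force check of (A), (B), (C) in a small concrete group, here $\mathbb{Z}/5$ written additively so that $a\star b = a-b$ and the identity $0$ plays the role of "1" in the axioms (my own sanity test):

```python
from itertools import product

n = 5
e = 0                            # the group identity ("1" in the axioms)
star = lambda a, b: (a - b) % n  # a * b^{-1} in additive notation

assert all(star(e, star(e, x)) == x for x in range(n))              # (A)
assert all(star(x, x) == e for x in range(n))                       # (B)
assert all(star(star(x, y), z) == star(x, star(z, star(e, y)))
           for x, y, z in product(range(n), repeat=3))              # (C)
print("axioms (A), (B), (C) hold in Z/5")
```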
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Game of dots: winning strategy? The game begins with a row of $n$ numbers, in increasing order from $1$ to $n$. For example, if $n=7$, we have a row of numbers $(1,2,3,4,5,6,7)$.
On each turn, a player must either remove 1 number, or remove 2 consecutive numbers. For example, the first player to move can remove $2$ or remove 5 and 6 together.
The player who removes the last number wins. Is there a winning strategy for the player who goes first?
p.s. Sorry for the initial confusion. Here are some clarifications. (1) There are two players. (2) Let's say 4 is removed on the first turn. This does NOT make 3, 5 consecutive. So a player can never remove 3 and 5 together.
|
Example strategy for $n=7$:
The first player takes $4$, and then until the last element:
* If the second player takes $x$, then the first player takes $8-x$.
* If the second player takes $(x,x+1)$, then the first player takes $(7-x,8-x)$.
General strategy for an odd $n$:
The first player takes $\dfrac{n+1}{2}$, and then until the last element:
* If the second player takes $x$, then the first player takes $n+1-x$.
* If the second player takes $(x,x+1)$, then the first player takes $(n-x,n+1-x)$.
General strategy for an even $n$:
The first player takes $(\dfrac{n}{2},\dfrac{n}{2}+1)$, and then until the last element:
* If the second player takes $x$, then the first player takes $n+1-x$.
* If the second player takes $(x,x+1)$, then the first player takes $(n-x,n+1-x)$.
Conceptual proof:
A pickable element is either a single number or a pair of consecutive numbers.
You can think of the element in the middle as a mirror.
It is the only pickable element that doesn't have a "reflecting counterpart".
So by picking this element first, you guarantee that for every element that your opponent picks, you can pick the corresponding element (located on "the other side of the mirror"), thus win the game...
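If you want to double-check the claim computationally, here is a small memoized solver (my own code) confirming that the first player wins for every $n$ up to 12:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_wins(state):
    if not state:
        return False  # no numbers left: the player to move has already lost
    present = set(state)
    for v in state:
        # remove one number
        if not first_wins(tuple(w for w in state if w != v)):
            return True
        # remove two originally-consecutive numbers, both still present
        if v + 1 in present and not first_wins(
                tuple(w for w in state if w not in (v, v + 1))):
            return True
    return False

for n in range(1, 13):
    print(n, first_wins(tuple(range(1, n + 1))))  # True for every n
```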
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Determining consistency of a general overdetermined linear system For $m > 2$, consider the $m \times 2$ (overdetermined) linear system
$$A \mathbf{x} = \mathbf{b}$$
with (general) coefficients in a field $\mathbb{F}$; in components we write the system as
$$\left(\begin{array}{cc}a_{11} & a_{12} \\ \vdots & \vdots \\ a_{m1} & a_{m2} \end{array}\right)
\left(\begin{array}{c}x_1 \\ x_2\end{array}\right)
=
\left(\begin{array}{c}b_1 \\ \vdots \\ b_m\end{array}\right),$$
where $m > 2$, so that the system is overdetermined.
If $m = 3$ and the system is consistent, (equivalently, $\mathbf{b}$ is in the column space of $A$), then the columns of the augmented matrix $\pmatrix{ A \mid {\bf b}}$ are linearly dependent, and so
$$\det \pmatrix{ A \mid {\bf b}} = 0.$$
In particular, we have produced a polynomial in the components $(a_{ij}, b_j)$ of the linear system for which vanishing is a necessary condition for system's consistency. I'll call such polynomials polynomial obstructions for the system.
If $m > 3$, then we can produce ${m}\choose{3}$ such polynomials by considering the determinants of the $3 \times 3$ minors of $\pmatrix{ A \mid {\bf b}}$.
Are all polynomial obstructions to the system essentially given by these, or are there others? Put more precisely: By definition the polynomial obstructions comprise an ideal in the polynomial ring $\mathbb{F}[a_{11}, \ldots, a_{m2}, b_1, \ldots b_m]$---do the determinants of the $3 \times 3$ minors generate this ideal? If not, how does one produce a complete set of generators?
More generally, for an $m \times n$ overdetermined linear system (so that $m > n$)
$$A \mathbf{x} = \mathbf{b},$$
we can produce polynomial obstructions by taking the determinants of the ${m}\choose{n+1}$ minors (of size $(n + 1) \times (n + 1)$). What are the answers to the obvious analogues to the above questions in the $n = 2$ case?
|
When $m=3$ and $\mathbb F$ is infinite, there are no other obstructions besides the determinant. When $\mathbb F$ is finite, there are many others : for example
if we put $\chi_{\mathbb F}(X)=\prod_{t\in {\mathbb F}^*} (X-t)$, then $\chi_{\mathbb F}(t)$ is zero iff
$t$ is nonzero, so that the following $m$ polynomials are all obstructions:
$$
w_i=b_i\prod_{j=1}^n\chi_{\mathbb F}(a_{ij})
$$
For the infinite case, one can use the following lemma :
Generalized Euclidean division. Let $A$ and $B$ be two polynomials
in ${\mathbb F}[X_1,X_2,\ldots,X_n,Y]$. Let $a={\sf deg}_Y(A)$,
$b={\sf deg}_Y(B)$, and let $L$ be the leading coefficient of $B$
with respect to $Y$ (so that $L\in{\mathbb F}[X_1,X_2,\ldots,X_n]$
and $B-LY^b$ has degree $<b$ in $Y$). Then if $a \geq b$,
there are two polynomials $Q,R\in {\mathbb F}[X_1,X_2,\ldots,X_n,Y]$ such
that $L^{a-b+1}A=QB+R$ and ${\sf deg}_Y(R)<b$.
Proof. Let ${\mathbb K}={\mathbb F}(X_1,X_2,\ldots,X_n)$. We can view
$A$ and $B$ as members of ${\mathbb K}[Y]$, and perform ordinary Euclidean
division; this yields $Q^{\sharp},R^{\sharp}\in {\mathbb K}[Y]$ such that
$A=Q^{\sharp}B+R^{\sharp}$. Since the division process involves
$a-b+1$ divisions by $L$, we see that $Q^{\sharp}$ and $R^{\sharp}$ are of
the form $\frac{Q}{L^{a-b+1}}$ and $\frac{R}{L^{a-b+1}}$ with
$Q,R\in {\mathbb F}[X_1,X_2,\ldots,X_n,Y]$. This concludes the proof of the lemma.
Let us now explain how this can be used when $m=3$. Let $I$ be the ideal
(in the ring ${\mathfrak R}={\mathbb F}[A_{11},A_{12},A_{13},A_{21},A_{22},A_{23},B_1,B_2,B_3]$) of all obstructions. In particular, the determinant
$$
\Delta=(A_{12}A_{23}-A_{13}A_{22})B_1+
(A_{13}A_{21}-A_{11}A_{23})B_2+
(A_{11}A_{22}-A_{12}A_{21})B_3 \tag{1}
$$
is a member of $I$. Let $P\in I$, and let $p={\sf deg}_{B_3}(P)$. By the
generalized Euclidean division property above, there are polynomials
$Q,R$ in $\mathfrak R$ such that $(A_{11}A_{22}-A_{12}A_{21})^p P=\Delta Q+R$,
such that $R$ does not contain the variable $B_3$ (note that we need $p\geq 1$ in order to apply the lemma; but if $p=0$, we can simply take $Q=0,R=P$). Then $R\in I$. Consider the set
$$
W=\bigg\lbrace (a_{11},a_{12},a_{13},a_{21},a_{22},a_{23},b_1,b_2) \in
{\mathbb F}^{8} \ \bigg| \ a_{11}a_{22}-a_{12}a_{21} \neq 0\bigg\rbrace \tag{2}
$$
Since $\mathbb F$ is infinite, $W$ is a Zariski-dense open subset of ${\mathbb F}^{8}$.
We have a natural map $\phi : W \to V(I)$, defined by
$$
\phi(a_{11},a_{12},a_{13},a_{21},a_{22},a_{23},b_1,b_2)=
\bigg(a_{11},a_{12},a_{13},a_{21},a_{22},a_{23},b_1,b_2,
-\frac{(a_{12}a_{23}-a_{13}a_{22})b_1+
(a_{13}a_{21}-a_{11}a_{23})b_2}{a_{11}a_{22}-a_{12}a_{21}}\bigg) \tag{3}
$$
For any $w\in W$, we have $R(\phi(w))=0$ since $R\in I$ and $\phi(w)\in V(I)$; as $R$ does not involve $B_3$, this means $R(w)=0$
for all $w\in W$. Since $W$ is Zariski-dense, $R$ is zero everywhere. So $R$ must be
the zero polynomial, and $(A_{11}A_{22}-A_{12}A_{21})^p P=\Delta Q$. Since $A_{11}A_{22}-A_{12}A_{21}$
and $\Delta$ have no common factors, we see that $\Delta$ divides $P$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
Can a number have infinitely many digits before the decimal point? I asked my teacher if a number can have infinitely many digits before the decimal point. He said that this isn't possible, even though there are numbers with infinitely many digits after the decimal point. I asked why and he said that if you keep adding digits to the left of a number it will eventually approach infinity which is not a number and you could no longer distinguish two numbers from one another.
Now this is the part of his reasoning I don't understand: why can we distinguish two numbers with infinitely many digits after the point but not before it? Or is there a simpler explanation for this?
|
What is the underlying reason for having infinitely many digits following the decimal point, but not infinitely many digits left of the decimal point?
The underlying reason is that real numbers can have infinite precision, but only finite size. You can find larger and larger real numbers, but each of them has a finite size. You can have a real number with a gazillion digits left of the decimal point, which is a very, very, very large number, but it's still a finite number.
To the right of the decimal point, the number of decimals is unlimited because real numbers have infinite precision. Even for rational numbers, you need an infinite number of decimals because a finite number of decimals can only represent a tiny fraction of the rational numbers. For example, 10/7 = 1.428,571,428,571,428,571... needs infinitely many decimals because if you cut off the decimals at any point the result is too small, and if you add 1 to the last digit the result is too large.
But real numbers really need an infinite number of decimals, because after every decimal you can add any other decimal you like and you get different real numbers, and go on forever doing so.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60",
"answer_count": 9,
"answer_id": 6
}
|
Show $\sum\limits_{k=1}^n{n-1\choose k-1} =2^{n-1}$
*
*Given $$\sum\limits_{k=1}^n k{n\choose k} = n\cdot 2^{n-1}$$
*
*I know that $$k\cdot{n\choose k}=n\cdot{n-1\choose k-1}=(n-k+1)\cdot{n\choose k-1}$$
Therefore $$\sum\limits_{k=1}^n k{n\choose k} = \sum\limits_{k=1}^n n{n-1\choose k-1} = n\cdot 2^{n-1}$$
So, $$n\cdot\sum\limits_{k=1}^n {n-1\choose k-1} = n\cdot 2^{n-1}$$
Therefore $$\sum\limits_{k=1}^n{n-1\choose k-1} =2^{n-1}$$
How is $\quad\sum\limits_{k=1}^n{n-1\choose k-1} =2^{n-1}\quad$?
|
With $j=k-1$
$$\sum_{k=1}^n {n-1\choose k-1}=\sum_{j=0}^{n-1} {n-1\choose j}=2^{n-1} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Is there a name for the function that gives me the sign of a number only? I know the function that gives the absolute value of a number is called either the absolute value function or the 'modulus' function, such as:
$$
modulus(-6) = modulus(6) = 6
$$
Now, I want to name a function that gives me a unit value with the same sign as the input number, like this:
$$
function(-6) = -1
$$
$$
function(6) = 1
$$
Then I can do this:
$$
modulus(-6) \times function(-6) = -6
$$
I could just call it signal(x) but I'd like to know if there is a name for this function.
Thank you all very much!
|
That's the signum function.
Actually:
$$\text{signum}(x)=\begin{cases}\begin{align}1,\quad x>0\\0,\quad x=0\\-1,\quad x<0\end{align}\end{cases}$$
See it at wikipedia or WolframMathworld.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
$\prod\left(1-p_n\right)>0$ I want to prove that if $0\le p_n<1$ and $\sum p_n<\infty$, then $\prod\left(1-p_n\right)>0$ .
There is a hint : first consider the case $\sum p_n<1$, and then show that $\prod\left(1-p_n\right)\ge1-\sum p_n$ .
How can I use this hint to show the statement above?
|
Why it is sufficient to prove the hint: Suppose $\sum p_n < \infty$. Then there is an integer $N$ such that $\sum_{n \geq N} p_n < 1$. Now observe that both $\prod_{n < N} (1-p_n)$ (a finite product) and $\prod_{n \geq N} (1-p_n)$ (using the hint) are positive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
The rank after adding a rank-one matrix to a full rank matrix Suppose $A$ and $B$ are both $N$ by $N$ matrices.
$$rank(A) = N, rank(B) = 1$$
What's the lower bound of: $$rank(A+B)?$$
My guess in this specific case is $$rank(A+B) \geq N-1,$$ but I don't know if it's true, how to prove it, and under what condition we have $rank(A+B) = N-1$.
Can anyone help me with this? I know there is a nice result of the upper bound, $rank(A+B) \leq rank(A) + rank(B)$, but I didn't find anything about the lower bound online.
|
Think of $A,B$ as linear transformations. $rank(A)=N$ implies $A$ is one to one, hence maps every subspace of dimension $k$ to another subspace of the same dimension. $rank(B)=1$ implies $\dim\ker(B)=N-1$. Now $$\dim\big((A+B)(\ker(B))\big)=\dim(A(\ker(B)))=\dim(\ker(B))=N-1$$implies $$\dim(Im(A+B))\geq N-1,$$and in other words $$rank(A+B)\geq N-1.$$ Assume now $rank(A+B)=N-1$, so there is $v\neq0$ such that $Av+Bv=0$. Obviously, $A(v)\in Im(B)$, but since $Im(B)$ is $1$-dimensional and $A$ is one to one, $A^{-1}(Im(B))$ is also $1$-dimensional, and actually $A^{-1}(Im(B))=span(v)$. It follows that for every $u\in A^{-1}(Im(B)),\;A(u)=-B(u)$.
In conclusion, it is easy to check: Just pick $u\in A^{-1}(Im(B))$ and see whether $A(u)=-B(u)$.
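A concrete instance of the equality case, as a small NumPy sketch (the matrices here are my own illustrative choice):

    import numpy as np

    N = 5
    A = np.eye(N)                 # full rank
    u = np.zeros(N); u[0] = 1.0
    B = -np.outer(u, u)           # rank one, chosen so that A(u) = u = -B(u)
    print(np.linalg.matrix_rank(A + B))   # prints 4, i.e. N - 1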
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why is this set a $\sigma$-algebra?? $X$ is an uncountable set.
Why is $\mathcal{A}=\{A \subset X: A \text{ or } X \setminus A \text{ is countable } \}$ a $\sigma$-algebra ??
$$$$
A $\sigma$-algebra $\mathcal{A}$ on a set $X$ is a collection of subsets of $X$ such that :
(1) $\varnothing \in \mathcal{A}$
(2) $A \in \mathcal{A} \Rightarrow X \setminus A \in \mathcal{A}$
(3) $A_n \in \mathcal{A} \Rightarrow \cup_{n=1}^{\infty} A_n \in \mathcal{A}$
$$$$
Could you give me a hint how to show that $\mathcal{A}=\{A \subset X: A \text{ or } X \setminus A \text{ is countable } \}$ is a $\sigma$-algebra ??
|
Well, (1) and (2) are obvious.
Hint for (3): let $A_1,A_2,\dots\in \mathcal A$. Consider two cases:
*
*Either all $A_n$'s contain only countably many points.
*Or at least one of them (w.l.o.g., say $A_1$) contains all but countably many points.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Can $\Theta(f_1) = \Theta(f_2)$? Does $\Theta(n^3+2n+1) = \Theta(n^3)$ hold? I'm so used to proving that a concrete function is Big-Whatever of another function, but never that Big-Whatever of a function is Big-Whatever of another function.
|
The problem is that $f=\Theta(g)$ is bad notation, because the two sides aren't "equal" in the obvious sense. One way to make the notation precise is to think of $\Theta(g)$ as the collection of all functions which "are big-theta of $g$." In other words, $\Theta(g)$ consists of all the functions $f$ so that $$c\cdot g(n)\le f(n)\le C\cdot g(n)$$ for some constants $c$ and $C$, so long as $n$ is large enough. Now the notation $$f\in\Theta(g)$$ makes perfect sense. It means that $f$ is a function belonging to the collection of functions which are big-theta of $g$.
I mention all of this, because with this interpretation, one can make sense of $\Theta(f)=\Theta(g)$. It is saying that two sets are equal. The way to prove that two sets are equal is to show that one contains the other, and vice-versa. So you would need to show that if a function is big-theta of $f$ then it is big-theta of $g$, and vice-versa.
In your example, a proof would look like this:
Suppose $f\in\Theta(n^3)$. Then there are constants $c$ and $C$ such that $cn^3\le f(n)\le Cn^3$ for large enough $n$. Note that $n^3\le n^3+2n+1$, and $n^3+2n+1\le 2n^3$ for $n$ large enough. Putting these inequalities together yields $$\frac{c}{2}(n^3+2n+1)\le f(n)\le C\cdot(n^3+2n+1),$$ which means $f\in\Theta(n^3+2n+1)$. Thus, any function in $\Theta(n^3)$ is also in $\Theta(n^3+2n+1)$, meaning that $\Theta(n^3)\subseteq\Theta(n^3+2n+1)$. A similar argument proves the reverse containment, from which we deduce that $\Theta(n^3)=\Theta(n^3+2n+1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/944942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Infinitely Many Circles in an Equilateral Triangle
In the figure there are infinitely many circles approaching the vertices of an equilateral triangle, each circle touching other circles and sides of the triangle. If the triangle has sides of length 1, find the total area occupied by the circles.
I need to find the total area of the circles.
I know this is going to have something to do with summation as a value approaches infinity, but I'm not entirely sure how to approach the problem. Here's what I have so far:
I know that the radius of the central inscribed circle is $ \frac{\sqrt{3}}{6} $. As such, the area of the first circle is $$ A = \pi\left(\frac{\sqrt{3}}{6}\right)^2. $$ Because there are three "branches" of infinite circles, I'm assuming that the answer will look something like: $$ A = \pi\left(\frac{\sqrt{3}}{6}\right)^2 + 3 \sum_{1}^{\infty}\text{something}.$$
|
Look at the following figure carefully,
As the triangle is equilateral, $AC$ is the angle bisector, so $\angle ACD = 30^{\circ}$.
$$\tan 30^{\circ} = \frac{AD}{DC} = 2AD\ (\because DC = 1/2) $$
$$\therefore AD = \frac{1}{2\sqrt{3}}$$
This is the radius of the bigger circle, let its area be $A_1$
$$\therefore A_1 = \frac{\pi}{12}$$
To calculate the radius of the next smaller circle (let it be $x$), please note that
$$AC = \frac{1}{\sqrt{3}}$$
$$AB =\frac{1}{2\sqrt{3}} +x$$
$$\therefore BC = AC - AB =\frac{1}{2\sqrt{3}} -x $$
Note that triangles $ADC$ and $BCE$ are similar.
$$\therefore \frac{AD}{AC} = \frac{BE}{BC}$$
$$\frac{1}{2\sqrt{3}} \times \sqrt{3} = x \times \left( \frac{2\sqrt{3}}{1-2\sqrt{3}x} \right)$$
$$\therefore x = \frac{1}{6\sqrt{3}}$$
Similarly we can find the radii of the next circles. They would be $\frac{1}{18\sqrt{3}}$,
$\frac{1}{54\sqrt{3}}, ...$
Now, the main answer,
The sequence $\frac{1}{6\sqrt{3}},\frac{1}{18\sqrt{3}}, \frac{1}{54\sqrt{3}}, ... $
can be generally written as $\frac{1}{6\sqrt{3}(3)^{n-1}}$
Total area of these circles,
$$T = \frac{\pi}{12} + 3\sum_{n=1}^{\infty} \pi {\left(\frac{1}{6\sqrt{3}(3)^{n-1}} \right)}^2 $$
Notice that,
$$\sum_{n=1}^{\infty} \pi {\left(\frac{1}{6\sqrt{3}(3)^{n-1}}\right)}^2 = \sum_{n=1}^{\infty} \pi {\left( \frac{1}{108}\right)}{\left(3^{-(n-1)}\right)}^2 = \sum_{n=1}^{\infty} \pi {\left( \frac{1}{108}\right)}{\left(3^{-2(n-1)}\right)} = \sum_{n=1}^{\infty} \pi {\left( \frac{1}{108}\right)}{\left(\frac{1}{9}\right)}^{n-1}$$
This is a GP with $a = \frac{\pi}{108}$ and $r = \frac{1}{9}$
For infinite terms, the sum of this GP = $\frac{a}{1-r} = \frac{\pi}{96}$
Now, finally,
$$T = \frac{\pi}{12} + 3 \times \frac{\pi}{96} = \frac{11\pi}{96}$$
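A quick numerical check of this closed form (a Python sketch summing the first few dozen circle areas):

    import math

    total = math.pi / 12 + 3 * sum(
        math.pi * (1 / (6 * math.sqrt(3) * 3 ** (n - 1))) ** 2
        for n in range(1, 60))
    print(total, 11 * math.pi / 96)   # both ~0.359974...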
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Example of a commutative square without a map between antidiagonal objects? In an abelian category, can there be a commutative diagram of (vertical/horizontal) exact sequences
$$
\require{AMScd}
\begin{CD}
0 @>>> N @>>> M\\
@. @VVV @VVV \\
0 @>>> X @>>> Y\\
@. @VVV \\
@. 0
\end{CD}
$$
such that the following conditions are true?
*
*All named objects are nonzero.
*No morphisms between named objects are zero morphisms, including $N \to Y$ (implied by the previous and the diagram).
*There are no morphisms $X \to M$ or $M \to X$ that commute with the other maps.
(I am trying to "replace" $N$ by $\ker(N \to Y)$ and $Y$ by $\text{coker}(N \to Y)$ in the diagram to get back a commutative diagram of exact sequences, and my attempts so far require the existence of a compatible map between $X$ and $M$ in at least one direction, so I would like to figure out what happens if I have no such maps.)
Original question
"Modules with morphisms 0 -> N -> M and N -> X -> 0 but no compatible maps between M and X?"
I wish for three modules N, M, and X with exact sequences $0 \to N \to M$ and $N \to X \to 0$, such that there are no morphisms $M \to X$ or $X \to M$ which commute with the maps above.
(Not homework. I'm trying to replace X, N, and M in a bigger diagram so that X becomes 0 and N and M become something maximal (universal). If the above kind of module triples exists, I might be in trouble.)
|
Consider $0 \to \mathbb{Z} \to \mathbb{Q}$, and $\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$. Then the only map in either direction between $\mathbb{Q}$ and $\mathbb{Z}/2\mathbb{Z}$ is the zero map, since $\mathbb{Q}$ is torsionfree, and every quotient of $\mathbb{Q}$ is divisible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving this summation: $\sum_{i=1}^k i\cdot2^i$ $$\sum_{i=1}^k i\cdot2^i$$
I'm working on a recurrence relation question and now I'm stuck at this point. I have no idea how to simplify this down to something I can work with. Can I separate the terms into
$$\sum_{i=1}^k i \cdot \sum_{i=1}^k 2^i$$ and then just use the geometric series?
|
Consider the series
\begin{align}
\sum_{k=0}^{n} t^{k} = \frac{1-t^{n+1}}{1-t}.
\end{align}
Differentiate both sides with respect to $t$ to obtain
\begin{align}
\sum_{k=0}^{n} k t^{k-1} &= \frac{1}{(1-t)^{2}} \left( -(n+1) (1-t) t^{n}+(1-t^{n+1}) \right) \\
&= \frac{1 -(n+1) t^{n} + n t^{n+1}}{(1-t)^{2}}.
\end{align}
Now multiply both sides by $t$, then set $t=2$ to obtain
\begin{align}
\sum_{k=0}^{n} k \cdot 2^{k} = 2 -(n+1) 2^{n+1} + 2^{n+2} n = 2 + 2^{n+1}(n-1).
\end{align}
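As a sanity check, a one-loop Python sketch confirming the closed form for small $n$:

    for n in range(12):
        assert sum(k * 2 ** k for k in range(n + 1)) == 2 + 2 ** (n + 1) * (n - 1)
    print("closed form verified for n = 0..11")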
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
For any integer $n\geq 1$, define $\sin_n=\sin\circ ... \circ \sin$ ($n$ times). Prove that $\lim_{x\to 0}\frac{\sin_nx}{x}=1$ for all $n\geq 1$ I got this problem:
For any integer $n\geq 1$, define $\sin_n=\sin\circ ... \circ \sin$ ($n$ times). Prove that $\lim_{x\to 0}\frac{\sin_nx}{x}=1$ for all $n\geq 1$.
Some hints will be appreciated.
|
Hint: $\sin(x)\sim x$; then $\sin(\sin(x))\sim\sin x\sim x$, and inductively $\sin_n(x)\sim\sin_{n-1}(x)\sim\dots\sim x$. I'll let you conclude.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
}
|
If the product $(x+2)(x+3)(x+4)\cdots(x+9)(x+10)$ expands to $a_9x^9+a_8x^8+\dots+a_1x+a_0$, then what is the value of $a_1+a_3+a_5+a_7+a_9$?
When expanded, the product $(x+2)(x+3)(x+4)\cdots(x+9)(x+10)$ can be written as $a_9x^9+a_8x^8+\dots+a_1x+a_0$. What is the value of $a_1+a_3+a_5+a_7+a_9$?
|
Let $P(x) = (x+2)(x+3)\cdots(x+10)$. Then what is $\dfrac{P(1)-P(-1)}{2}$?
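(And if you want to see the number the hint produces, a two-line Python sketch:)

    from math import prod

    P = lambda x: prod(x + k for k in range(2, 11))
    print((P(1) - P(-1)) // 2)   # 9797760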
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Upper and/or lower Bound for Numbers of different topologies on the set $\{1,...n \}$ As the title says, I am looking for upper and lower bounds for the number of different topologies on a set $\{1,\dots,n\}$ for natural $n$.
Are there some known bounds? My teacher says that there is no formula which gives the exact number.
Thanks in advance!
Feel free to add more tags; I don't know which tags I should add.
|
Let $X=\{1,\dots,n\}$ and let $m(n)$ be the number of topologies on $X$. We want to show that $m(n)\leq 2^{n(n-1)}$.
We denote by $\mathcal{M}$ the set of all topologies on $X$ and define for every $x \in X$ a function as follows:
$f_x:\mathcal{M}\rightarrow \mathcal{P}(X)$, $f_x(\tau )= \bigcap_{U\in\tau,x\in U}U$
So the map sends a topology to the "smallest" open neighborhood of the point $x$.
One can verify that the set $\{f_x(\tau):x\in X\}$ is a basis for $\tau$. If we pick two topologies $\tau_1,\tau_2\in \mathcal{M}$ on $X$, then they are equal if and only if the corresponding bases are equal. So it is sufficient to count the possible values of the maps $f_x$ to get an upper bound.
For an arbitrary $x\in X$ there are $2^{n-1}$ sets in $\mathcal{P}(X)$ containing $x$, so there are at most $2^{n-1}$ possible values for each $f_x(\tau)$. Since there are $n$ choices for $x$, we can now conclude:
$$m(n)\leq (2^{n-1})^n=2^{n(n-1)}$$
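For very small $n$ one can even enumerate all topologies by brute force and compare against the bound; here is a Python sketch doing so (my own addition; feasible only for tiny $n$):

    from itertools import combinations

    def count_topologies(n):
        X = frozenset(range(n))
        subsets = [frozenset(s) for r in range(n + 1)
                   for s in combinations(range(n), r)]
        count = 0
        for bits in range(2 ** len(subsets)):
            fam = {subsets[i] for i in range(len(subsets)) if (bits >> i) & 1}
            ok = (frozenset() in fam and X in fam and
                  all(a | b in fam and a & b in fam for a in fam for b in fam))
            if ok:
                count += 1
        return count

    for n in range(4):
        print(n, count_topologies(n), 2 ** (n * (n - 1)))  # counts 1, 1, 4, 29 vs bounds 1, 1, 4, 64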
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\text{If } |z_1| = |z_2|, \text{ show that } \frac{z_1 + z_2}{z_1-z_2} \text{is imaginary.} $ $\text{If } |z_1| = |z_2|, \text{ show that } \frac{z_1 + z_2}{z_1-z_2} \text{is imaginary.} $
The first thing I tried to do was to multiply both top and bottom by the conjugate of the denominator...
$$
\frac{z_1 + z_2}{z_1-z_2} \left( \frac{z_1 + z_2}{z_1+z_2} \right) \\
= \frac{z_1^2 + 2z_1z_2 + z_2^2}{z_1^2-z_2^2}
$$
Then I let $z_1,z_2 = x_1+iy_1,\,x_2+iy_2$ and expanded, but then the equation was too big to work with. What I wanted to do was to simplify as much as I can, such as I did with $\frac{1-z}{1+z}$ which just equaled $\frac{-i\sin\theta}{1+\cos\theta}$ (after being written in Mod-Arg form, of course). So, what should I do from here on? Thanks in advance.
|
Multiply instead by $\dfrac{\bar{z_1} + \bar{z_2}}{\bar{z_1} + \bar{z_2}}$ to get $\displaystyle \frac{|z_1|^2 + z_1\bar{z_2} + \bar{z_1}{z_2} + |z_2|^2}{|z_1|^2 + z_1\bar{z_2} - \bar{z_1}{z_2} - |z_2|^2}$. The denominator is imaginary and the numerator is real.
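A quick numerical sanity check (a small Python sketch with random points of equal modulus):

    import cmath, random

    for _ in range(5):
        r = random.uniform(1.0, 3.0)
        z1 = cmath.rect(r, random.uniform(0, 2 * cmath.pi))
        z2 = cmath.rect(r, random.uniform(0, 2 * cmath.pi))
        q = (z1 + z2) / (z1 - z2)             # assumes z1 != z2
        print(abs(q.real) <= 1e-9 * abs(q))   # True: purely imaginary up to rounding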
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Absolute continuity under the integral Let $f:[0,T]\times \Omega \to \mathbb{R}$ where $\Omega$ is some bounded compact space. Let $t \mapsto f(t,x)$ be absolutely continuous. Is then
$$t \mapsto \int_\Omega f(t,x)\;dx$$
also absolutely continuous provided the integral exists?
I think we need $|f(t,x)| \leq g(x)$ for all $t$ where $g$ is integrable. At least. But not sure what else we need!!
When I try to apply the definition of absolutely continuous to the integral, I tried to use the absolute continuity of $f(t)$. But the $\delta$ for $t \mapsto f(t,x)$ depends on $x$, so $\delta=\delta_x$ and that causes problems..
|
Let $\epsilon > 0$ and let $(t_{j-1}, t_{j})_{j=1}^N$ be any non-overlapping set of intervals such that $\sum_j|f(t_j,x) - f(t_{j-1},x)| \leq \epsilon$ whenever $\sum_j |t_{j} - t_{j-1}| < \delta_x$, where the $\delta_x$ comes from the absolute continuity of $f$.
$\sum_j |F(t_j) - F(t_{j-1})|\leq\sum_j \int_{\Omega} |f(t_j,x) - f(t_{j-1},x)| dx = \int_{\Omega} \sum_j |f(t_j,x) - f(t_{j-1},x)| dx $ Now, $\Omega = \cup_{j=1}^n B(x_j,\delta_{x_j} )$ by compactness. So for $\sum_j |t_{j+1} - t_j| < \min_j \delta_{x_j}$, we have that $\int_{\Omega} \sum_j |f(t_j,x) - f(t_{j-1},x)| dx \leq \frac{\epsilon}{m(\Omega)} m(\Omega) < \epsilon$ as needed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to solve DE $y'=1/(x^2y)(y^2-1)^{3/2}$ Man, I'm having troubles with this differential equation. I just can't do any math if I'm tired...
What I have done:
$$\frac{y'\cdot y}{(y^2-1)^{3/2}}=\frac{1}{x^2}$$
Now I integrated both sides from $a$ to $x$ and I've got
$$-\frac{1}{(y^2-1)^{1/2}}+ \frac{1}{(y(a)^2-1)^{1/2}} = -\frac{1}{x} + \frac{1}{a}$$
But this seems wrong after checking the result on WolframAlpha. Please help!
|
I don't understand the $_a^x$ part. Why would you do that? You can evaluate the antiderivative directly (we may have the same solution, but here it goes anyway):
$$\int \frac{y' y}{(y^2-1)^{3/2}} \mathrm{d}x =\frac12 \int \frac{1}{u^{3/2}} \mathrm{d}u=-\frac{1}{\sqrt{y^2-1}}+c$$
And $$\int \frac{1}{x^2} \mathrm{d}x = -\frac{1}{x}$$
Then the DE solution becomes:
$$y(x) = \frac{\sqrt{c_1^2 x^2-2 c_1 x+x^2+1}}{1-c_1 x}$$
or
$$y(x)=\frac{\sqrt{c_1^2 x^2-2 c_1 x+x^2+1}}{c_1 x-1}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/945964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Quotient Topology, why is this set "saturated"?
It says $[2,3]$ is saturated with respect to $q$, but not open in $Y$.
But it doesn't make sense to me because
$q(A) = q([0,1) \cup[2,3]) = [0,1) \cup [2-1,3-1] = [0,1) \cup [1,2] = [0,2] = Y$, so the image is open in $Y$ as it is $Y$.
|
Note that $p$ is almost 1-1, the only exception is $p(1) = 1 = p(2)$, this is where the two closed intervals of $X$ are "glued together", giving the result $[0,2]$, the image of $p$.
$p$ induces an equivalence relation on $X$ that only identifies $1$ and $2$ and no other points.
A set $B \subset X$ is saturated under $p$ iff $1 \in B$ implies $2 \in B$ and vice versa. Such a set satisfies the condition that if it contains a point from some class under that equivalence relation, it contains all members of that class. And the only non-trivial class is $\{1,2\}$. Alternatively, a saturated set is of the form $p^{-1}[C]$ for some $C \subset Y$. Simple set theory shows that this is the same notion. But the classes perspective shows why it's called "saturated" more clearly, I think.
Now, $q$ is made to be injective, as we remove $1$ from $X$. So the equivalence relation induced by $q$ is the trivial one (only identifying a point with itself), and so all subsets of $A$ are saturated. So the only way $q$ could be a quotient map is for it to be a homeomorphism, and this $q$ is not ($A$ is disconnected, $Y$ is connected, e.g.). Or as your text states it: $q^{-1}[[2,3]]$ is saturated (all sets are, under $q$) and open, but $[2,3]$ is not open.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find solution for $A * X = B$ I have a matrix:
$$A= \pmatrix{1 & -2 & 1 \\ -1 & 3 & 2 \\ 0 & 1 & 4}$$
My task is to find $X$ from:
$$A * X = \pmatrix{4 & 0 & -3 & 1 \\ 1 & 5 & 2 & -1 \\ 0 & 1 & -1 & 2}$$
My problem is that I don't know how to do this. I mean, I could build several equations like:
$$1 \cdot x_{1,1} - 2 \cdot x_{2,1} + 1 \cdot x_{3,1} = 4$$
But I think this would take too much time! So what could I do?
The solution for $X$ should be:
$$X = \pmatrix{49 & 38 & -5 & -13 \\ 20 & 17 & -1 & -6 \\ -5 & -4 & 0 & 2}$$
|
If you denote by $x_1$, $x_2$, $x_3$ and $x_4$ the columns of $X$ and by $b_1$, $b_2$, $b_3$ and $b_4$ the columns of the given $3\times 4$ matrix (call it $B$), you have essentially to solve the linear systems
$$
Ax_i=b_i\qquad(i=1,2,3,4)
$$
If you consider the “multiaugmented” matrix
$$
\left[\begin{array}{c|c|c|c|c}A&b_1&b_2&b_3&b_4\end{array}\right]
$$
you can solve them all at once, by reducing the matrix to echelon form. If you find a pivot in the last four columns, your problem has no solution. If the rank of $A$ is $3$ the problem has exactly one solution. If the rank of $A$ is $1$ or $2$, and no pivot is found in the last four columns, the problem has infinitely many solutions.
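For this particular $A$ the rank is $3$, and a quick NumPy sketch confirms the stated solution:

    import numpy as np

    A = np.array([[1, -2, 1], [-1, 3, 2], [0, 1, 4]])
    B = np.array([[4, 0, -3, 1], [1, 5, 2, -1], [0, 1, -1, 2]])
    X = np.linalg.solve(A, B)      # solves the four systems A x_i = b_i at once
    print(np.round(X).astype(int))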
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Multivariable optimization - how to parametrize a boundary?
A metal plate has the shape of the region $x^2 + y^2 \leq 1$. The plate is heated so that the temperature at any point $(x,y)$ on it is indicated by
$$T(x,y) = 2x^2 + y^2 - y + 3$$
Find the hottest and coldest points on the plate and the temperature at each of these points. (Hint: Parametrize the boundary of the plate in order to find any critical points there.)
I know how to do the actual optimization part of this problem. I already found a critical point at (0,0.5) by setting the first partial derivatives equal to 0. My problem is, how do I parametrize the boundary to find the other ones? I've seen solutions where they used cos(t) and sin(t) - where and how did they know how to do that?
|
We parametrize $x=\sin t, y=\cos t$ (although it works if we switch $x$ and $y$ as well). We obtain: $$2\sin^2 t+\cos^2 t-\cos t+3.$$Take the derivative and solve: $$\sin t(2\cos t+1)=0.$$
Edit
An important part I left out is that $0\leq t \leq 2\pi$. We choose this time interval because it traverses the circle exactly once.
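Putting the candidates together numerically (a small Python sketch; the interior critical point is the one you already found):

    import math

    T = lambda x, y: 2 * x ** 2 + y ** 2 - y + 3
    candidates = [(0, 0.5),                        # interior critical point
                  (0, 1), (0, -1),                 # boundary: sin t = 0
                  (math.sqrt(3) / 2, -0.5),        # boundary: cos t = -1/2
                  (-math.sqrt(3) / 2, -0.5)]
    for p in candidates:
        print(p, T(*p))   # coldest 2.75 at (0, 0.5); hottest 5.25 at (+-sqrt(3)/2, -1/2)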
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Concepts in mathematics which are referred to as 'generalizations' I am curious to know some theorems usually taught in advanced math courses which are considered 'generalizations' of theorems you learn in early university or late high school (or even late university).
For example, I know that Stokes's theorem is a generalization of the divergence theorem, the fundamental theorem of calculus and Green's theorem, among I'm sure many other notions.
I've read that pure mathematics is concerned mostly with the concept of 'generalization' and I am wondering which theorems/ideas/concepts, like Stokes's theorem, are currently celebrated 'generalizations' by mathematicians.
|
The Parallelogram law of inner product spaces is a generalization of a theorem of Euclidean geometry.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 21,
"answer_id": 12
}
|
Can the partial derivative of f(x,y) at (a,b) exist if f(x,y) is not continuous at (a,b)? Suppose f(x,y) is continuous for all $(x,y) \neq (a,b)$, (not continuous at (a,b)), can the partial derivative with respect to x (or y) at (a,b) still exist?
|
Another example is
$$f(x,y) = \begin{cases} \frac{xy}{x^2+y^2} & \text{ if}\ (x,y)\neq (0,0) \\ 0 & \text{ if}\ (x,y)=(0,0) \end{cases}$$
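A quick numerical illustration (Python sketch): both partial derivatives at the origin exist and equal $0$, while along the line $y=x$ the function stays at $1/2$, so it is not continuous there:

    def f(x, y):
        return x * y / (x ** 2 + y ** 2) if (x, y) != (0, 0) else 0.0

    h = 1e-8
    print((f(h, 0) - f(0, 0)) / h)   # 0.0 -> partial w.r.t. x exists at (0,0)
    print((f(0, h) - f(0, 0)) / h)   # 0.0 -> partial w.r.t. y exists at (0,0)
    print(f(1e-9, 1e-9))             # 0.5 -> f has no limit 0 at the origin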
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Product rule trig This was given as a solution to a question and I've tried working it out but can never get the same answer. Here $x=r\cos\phi$ and $y=r\sin\phi$.
It's mostly the first 2 lines I don't understand. Wouldn't $x^2 = r^2\cos^2\phi$ and $y^2 = r^2\sin^2\phi$? And how did the first 2 terms of the second line come along? I can understand the last 2 terms from the product rule but not the beginning...
|
$$\ r^2d\phi=r^2\cdot1\cdot d\phi=r^2\cdot(\cos^2(\phi)+\sin^2(\phi))\cdot d\phi=$$
$$\ =r^2\cos^2(\phi)d\phi+r^2\sin^2(\phi)d\phi=r\cos(\phi)\, r\cos(\phi)\,d\phi+r\sin(\phi)\, r\sin(\phi)\,d\phi$$
Now you have that:
$$\ r\cos(\phi)=x,\qquad r\sin(\phi)=y$$
and
$$d(r\sin(\phi))=\sin(\phi)dr+rd(\sin(\phi))=\sin(\phi)dr+r\cos(\phi)d\phi$$
$$d(r\cos(\phi))=\cos(\phi)dr-r\sin(\phi)d\phi$$
so you get
$$\ r\cos(\phi)d\phi=d(r\sin(\phi))-\sin(\phi)dr$$
$$\ r\sin(\phi)d\phi=\cos(\phi)dr-d(r\cos(\phi))$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A question on Lie derivative For a Lie derivative $\mathscr{L}_{X} Y$ of $Y$ with respect to $X$, we mean that for two smooth vector fields $X$ and $Y$ on a smooth manifold $M$ such that the following holds
$$
\mathscr{L}_{X} Y = \underset{t \rightarrow 0}{\lim} \frac{(\varphi_{-t})_{\star} Y - Y}{t}, \tag{1}
$$
where $\varphi_{t}$ is a local one-parameter transformation group generated by $X$.
Suppose $(-\varepsilon, \varepsilon) \times U$ is the domain of $\varphi_{t}(p)$, where $U$ is an open subset of $M$. Then we can rewrite $(1)$ as the form
$$
(\mathscr{L}_{X} Y)_{p} = \underset{t \rightarrow 0}{\lim} \frac{(\varphi_{-t})_{\star} Y_{\varphi_{t}(p)} - Y_p}{t}. \tag{2}
$$
We regard the tangent vector $(\varphi_{-t})_{\star} Y_{\varphi_{t}(p)}$ as a map from an open subset of $(-\varepsilon, \varepsilon)$ into the tangent space $T_p M$.
My question is:
$1$. For an appropriate domain of $F$, why is the map $F: t \mapsto (\varphi_{-t})_{\star} Y_{\varphi_{t}(p)}$ smooth? The smooth structure on $T_p M$ is still unclear to me. Thanks in advance.
|
Since $Y$ is smooth in the variable $p$, and $Y_{\varphi_t(p)}$ (or maybe this notation: $Y_{p(t)}$) is just the restriction of $Y$ to the integral curve, it is also smooth in $t$. Think of it as the composition of the smooth map $t\mapsto p(t)$ with $p\mapsto Y_p$.
On the other hand, $(\varphi_{-t})_\star$ is nothing but a jacobian (linear map) parameterized by $t$, which is smooth since $\varphi_{-t}$ is.
It suffices to consider the smoothness of $t\mapsto (\varphi_{-t})_\star Y_{\varphi_t(p)}$ in local coordinates, which is the multiplication of a matrix corresponding to $(\varphi_{-t})_\star$ and a column vector corresponding to $Y_{\varphi_t(p)}$, which are both smooth in $t$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show a sequence such that $\lim_{\ N \to \infty} \sum_{n=1}^{N} \lvert a_n-a_{n+1}\rvert< \infty$, is Cauchy
Attempt. Rewriting this we have, $$\sum_{n=1}^{\infty} \lvert a_n-a_{n+1}\rvert< \infty
\,\,\,\Longrightarrow\,\,\, \exists N \in \mathbb{N}\ \ s.t.\ \ \sum_{n \geq N}^{\infty} \lvert a_n-a_{n+1}\rvert < \infty$$
Taking $m>n+1$ we have, $$\sum_{m > n \geq N}^{\infty}\lvert a_n-a_m\rvert< \infty$$
From the above statement we have: $\,\lim_{\ n \to \infty} \lvert a_n-a_m\rvert=0$.
Now combining our statements we have, $\forall \varepsilon>0, \exists N \in \mathbb{N}\ s.t. \ \lvert a_n-a_m\rvert< \varepsilon,\ \forall\ m\geq n \geq N$.
|
I'd go like this:
Assuming $\;m>n\;$ , we have
$$|a_m-a_n|=|a_m-a_{m-1}+a_{m-1}-a_{m-2}+a_{m-2}-a_{m-3}+\ldots+a_{n+1}-a_n|\le$$
$$\le\sum_{k=0}^{m-n-1}|a_{m-k}-a_{m-k-1}|\xrightarrow[m,n\to\infty]{}0$$
The last limit is not actually a double one, but rather "make $\;n\to \infty\;$ and thus also $\;m\to\infty\;$"
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
What's the closure of $(a,b)$ in discrete topology on the real number $\mathbb{R}$ In my opinion, by definition, the closure of $(a,b)$ in discrete topology on the real number $\mathbb{R}$ is $(a,b)$. However, I just saw the answer for this question is $[a,b]$. Now I am not sure which one.
|
A different approach is the following. The discrete topology is induced by the discrete metric.
Now sequences $x_n$ converge to $x$ in the discrete metric if and only if there exists $n_0\in \mathbb{N}$ such that $x_n=x$ for all $n\geq n_0$. Consequently, $(a,b)$ is the closure of itself in this topology.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/946950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
finding a function from given function Here is a functional equation for $f$:
$f(x-\frac{\pi}{2})=\sin(x)-2f(\frac{\pi}{3})$
What is $f(x)$?
I calculate $f(x)$ as follows:
$$\begin{align}
x-\frac{\pi}{2} &= \frac{\pi}{3} \Rightarrow x= \frac{5\pi}{6} \\
f(\frac{\pi}{3}) &=\sin\frac{5\pi}{6}-2f(\frac{\pi}{3}) \\
3f(\frac{\pi}{3}) &=\sin\frac{5\pi}{6} \\
f(\frac{\pi}{3}) &=(1/3)\sin\frac{5\pi}{6}
\end{align}$$
$f(x)=(1/3)\sin\frac{5x}{2}$
|
Assuming $f$ is defined for all $x\in\mathbb{R}$. First, note that for any $x$,
$$
f(x) = \sin\!\left(x+\frac{\pi}{2}\right)-2f\!\left(\frac{\pi}{3}\right) = \cos x -2f\!\left(\frac{\pi}{3}\right)
$$
so it only remains to compute $f\!\left(\frac{\pi}{3}\right)$. From the expression above
$$
f\!\left(\frac{\pi}{3}\right) = \cos \frac{\pi}{3} -2f\!\left(\frac{\pi}{3}\right) = \frac{1}{2} -2f\!\left(\frac{\pi}{3}\right)
$$
and therefore rearranging the terms gives $f\!\left(\frac{\pi}{3}\right) = \frac{1}{6}$. Putting it all together,
$$
\forall x\in \mathbb{R}, \quad f(x)=\cos x - \frac{1}{3}\;.
$$
(It then only remains to check this expression satisfies the original functional equation, to be certain. It does; but even a quick sanity check for $x=0$, $x=\frac{\pi}{2}$ and $x=\pi$ will be enough to build confidence.)
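(If you'd like to automate that sanity check, a quick Python sketch:)

    import math

    f = lambda x: math.cos(x) - 1 / 3
    for x in [0.0, 0.7, math.pi / 2, math.pi, 2.3]:
        assert abs(f(x - math.pi / 2) - (math.sin(x) - 2 * f(math.pi / 3))) < 1e-12
    print("f(x) = cos x - 1/3 satisfies the functional equation")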
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Show that $x \sin (\frac {1 } {x } ) $ is uniformly continuous on $(0,1) $ I want to prove that $f(x)=x \sin (\frac {1 } {x } ) $ is uniformly continuous on $0<x<1$.
If we consider the same function with the extra condition that $f $ is defined to equal zero at $x=0 $. then this new function would be continuous on $[0,1 ] $ and thus uniformly continuous.
Now my function isn't defined outside $(0,1)$; is it possible to claim that $f$ is uniformly continuous on $(0,1)$ from this?
Thanks in advance!
|
In general, if a function $f(x)$ is continuous on $(a,b)$ and both $\displaystyle\lim_{x\to a+}f(x)$ and $\displaystyle\lim_{x\to b-}f(x)$ exist, then $f$ is uniformly continuous on $(a,b)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Closed form of $\sum_{n=1}^{\infty}(-1)^{n+1} n (\log(n^2+1)-\log(n^2))$ How would you start computing this series?
$$\sum_{n=1}^{\infty}(-1)^{n+1} n (\log(n^2+1)-\log(n^2))$$
One way to think of it would be the Frullani integral with the exponential function, but it's troublesome due to the power of $n$ under the logarithm. What else might I try?
|
I'm getting the same answer as you.
$$ \sum_{n=1}^{\infty} (-1)^{n+1} n \log \left( \frac{n^{2}+1}{n^{2}}\right) = \sum_{n=1}^{\infty} (-1)^{n+1} n \int_{0}^{1} \frac{1}{n^{2}+x} \ dx$$
Then since $\displaystyle \sum_{n=1}^{\infty} \frac{(-1)^{n+1}n}{n^{2}+x}$ converges uniformly on $[0,1]$,
$$ \begin{align} &\sum_{n=1}^{\infty} (-1)^{n+1} n \int_{0}^{1} \frac{1}{n^{2}+x} \ dx \\ &= \int_{0}^{1} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}n}{n^{2}+x} \ dx \\ &= \frac{1}{4} \int_{0}^{1} \left[ \psi \left(\frac{i \sqrt{x}}{2} \right)- \psi \left(\frac{1}{2} + \frac{i \sqrt{x}}{2} \right) + \psi \left(-\frac{ i \sqrt{x}}{2} \right)- \psi \left(\frac{1}{2} - \frac{i \sqrt{x}}{2} \right) \right] \ dx .\end{align}$$
The above series can be derived by working backwards and using the series representation of the digamma function (14) .
Now let $t = \sqrt{x}$.
Then
$$ \begin{align} &\sum_{n=1}^{\infty} (-1)^{n+1} n \log \left(\frac{n^{2}+1}{n^{2}} \right) \\&= \frac{1}{2} \int_{0}^{1} \left[t \psi \left(\frac{it}{2} \right) - t \psi \left(\frac{1}{2} + \frac{i t}{2} \right) + t \psi \left(-\frac{it}{2} \right) - t \psi \left(\frac{1}{2} - \frac{it}{2} \right) \right] \ dt . \end{align}$$
And integrating by parts,
$$ \begin{align} &\sum_{n=1}^{\infty} (-1)^{n+1} n \log \left(\frac{n^2+1}{n^{2}} \right) \\ &= \frac{1}{2} \Bigg[ 4 \psi^{(-2)} \left(\frac{it}{2} \right) - 2i t \log \Gamma \left( \frac{it}{2}\right) - 4 \psi^{(-2)} \left(\frac{1}{2}+ \frac{it}{2} \right) + 2i t \log \Gamma \left( \frac{1}{2} + \frac{it}{2}\right) \\ &+ 4 \psi^{(-2)} \left(-\frac{it}{2} \right) + 2i t \log \Gamma \left(- \frac{it}{2}\right) - 4 \psi^{(-2)} \left(\frac{1}{2} -\frac{it}{2} \right) - 2i t \log \Gamma \left(\frac{1}{2} - \frac{it}{2}\right)\Bigg] \Bigg|^{1}_{0} \\ &= 2 \psi^{(-2)} \left(\frac{i}{2} \right) - i \log \Gamma \left(\frac{i}{2} \right) - 2 \psi^{(-2)} \left(\frac{1}{2} + \frac{i}{2} \right) + i \log \Gamma \left(\frac{1}{2} + \frac{i}{2} \right) + 2 \psi^{(-2)} \left(-\frac{i}{2} \right) \\ &+ i \log \Gamma \left(-\frac{i}{2} \right) - 2 \psi^{(-2)} \left(\frac{1}{2} - \frac{i}{2} \right) - i \log \Gamma \left(\frac{1}{2} - \frac{i}{2} \right) + 4 \psi^{(-2)} \left(\frac{1}{2} \right). \end{align}$$
In terms of simplification, $\psi^{(-2)} \left( \frac{1}{2}\right)$ can be expressed in terms of the Glaisher-Kinkelin constant.
And further simplification is possible using the Schwarz reflection principle.
$$\begin{align} \sum_{n=1}^{\infty} (-1)^{n+1} n \log \left(\frac{n^{2}+1}{n^{2}} \right) &= 4 \ \text{Re} \ \psi^{(-2)} \left(\frac{i}{2} \right) -4 \ \text{Re} \ \psi^{(-2)} \left(\frac{1}{2} + \frac{i}{2} \right) + 2 \ \text{Im} \ \log \Gamma \left( \frac{i}{2}\right) \\ &-2 \ \text{Im} \ \log \Gamma \left(\frac{1}{2} + \frac{i}{2} \right) + 6\log A + \frac{5}{6} \log 2 + \log \pi \end{align}$$
which according to Wolfram Alpha is approximately $0.4277662568$
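One can reproduce that numerical value without any special functions; here is a Python sketch (my own addition) that accelerates the alternating series by repeatedly averaging consecutive partial sums, an Euler-transform-style trick valid for alternating series with smooth terms:

    import math

    S, s = [], 0.0
    for n in range(1, 3001):
        s += (-1) ** (n + 1) * n * math.log(1 + 1 / n ** 2)
        S.append(s)

    for _ in range(40):                       # repeated pairwise averaging
        S = [(a + b) / 2 for a, b in zip(S, S[1:])]
    print(S[-1])                              # ~0.4277662568...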
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Runge-Kutta methods and Butcher tableau What does the Butcher tableau of a Runge-Kutta method tell me about the method, besides the coefficients in its formulation? In particular, what requirements about it guarantee consistency and therefore convergence? I have been told something necessary is the row-sum condition, i.e.:
$$c_i=\sum\limits_{j=1}^na_{ij}.$$
What does this guarantee or what is this necessary for? And could you give me proofs of any results you mention in your answers? Or links to them anyway. Thanks.
|
The Butcher Tableau determines the stability function $R(z)$ of the corresponding method.
In particular, for the Linear Test equation due to Dahlquist
$$u'(t) = \lambda u(t) \Rightarrow u(t) = u_0 e^{\lambda (t - t_0)}$$
the stability function determines how the approximation $u_{n+1}$ follows from the previous iterate $u_n$:
$$ u_{n+1} = R(z) u_n, \quad z = \lambda \Delta t_{n+1}$$
This stability function can actually be computed as (see for instance [1] or [2])
$$ R(z) = \frac{\text{det}\left(I-zA + z \boldsymbol 1 \boldsymbol b^T \right) }{\text{det}\left(I-zA\right)}$$
which simplifies for explicit methods with strictly lower triangular matrix $A$ to
$$ R(z) =\text{det}\left(I-zA + z \boldsymbol 1 \boldsymbol b^T \right). $$
This stability function determines (as the name suggests) the region of absolute stability:
$$ z \in \mathbb C, \text{Re}(z): \vert R(z) \vert \leq 1. $$
The reason I mention this is that convergence is not guaranteed by consistency alone - the method also has to be stable for the employed finite timesteps $\Delta t$.
For explicit methods, $R(z)$ is actually a polynomial
$$ R(z) = \sum_{j=0}^S \alpha_j z^j$$
and one can directly check the order of consistency by checking to what power the coefficients $\alpha_j$ agree with the terms of the exponential, i.e.,
$$ \alpha_j = \frac{1}{j!}, j = 0, \dots , p.$$
For implicit methods, however, the order of consistency cannot be that easily read-off from $R(z)$.
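As an illustration, here is a SymPy sketch (using the classical explicit RK4 tableau, my own choice of example) that computes $R(z)$ straight from the determinant formula; the coefficients match $e^z$ through order $4$:

    import sympy as sp

    z = sp.symbols('z')
    A = sp.Matrix([[0, 0, 0, 0],
                   [sp.Rational(1, 2), 0, 0, 0],
                   [0, sp.Rational(1, 2), 0, 0],
                   [0, 0, 1, 0]])
    # b entered as a row vector, so one*b below is the outer product 1 b^T
    b = sp.Matrix([[sp.Rational(1, 6), sp.Rational(1, 3),
                    sp.Rational(1, 3), sp.Rational(1, 6)]])
    one = sp.ones(4, 1)

    R = sp.det(sp.eye(4) - z * A + z * one * b) / sp.det(sp.eye(4) - z * A)
    print(sp.expand(R))   # z**4/24 + z**3/6 + z**2/2 + z + 1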
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Number of pizza topping combinations It seems there are lots of pizza questions but I'm not sure how to apply the answers to my problem. Obviously I'm not a mathematician. Essentially I'm trying to determine how many different variations of pizzas there are given the following parameters. You can choose unlimited, unique toppings (no double toppings). Each pizza can have at most 1 sauce but the rest can be mixed and matched to your hearts content. How do I do this?
6 sauces
7 cheeses
15 vegetables
7 meats
2 seasonings
|
You can choose the sauce in $\binom{6}{1} + 1$ ways (the $1$ is for no sauce). Then consider filling up $7+15+7+2$ blanks with either $0$ or $1$, $1$ if you want that topping, $0$ if you don't want it. This can be done in $2^{31}$ ways. So the total number of ways is
$$
(\binom{6}{1} + 1) \cdot 2^{31} = 7 \cdot 2\;147\;483\;648 = 15\;032\;385\;536
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Prove $\frac{n^2+2}{(2 \cdot n^2)-1} \to \frac{1}{2}$
Prove $\frac{n^2+2}{(2 \cdot n^2)-1} \to \frac{1}{2}$ for $n \to \infty$.
I've been looking at this for hours! Also, sorry I don't have the proper notation.
This is where I'm at:
$$
\left| \frac{n^2 + 2}{2 \cdot n^2 - 1} - \frac{1}{2}\right| = \left| \frac{5}{4 \cdot n^2 - 2} \right|
$$
I thought I was supposed to get to the point where I can say $1/n <n$, but I can only get to $1/n-1$ so that can't be the right approach or I'm missing something.
A friend says to make $n > \frac{5}{\varepsilon^2}$ but I'm not sure what to do with that tip.
Any help would be greatly appreciated!
|
$\left|\frac{n^2+2}{2n^2-1} - \frac {1}{2}\right| = \frac{5}{4n^2-2}$
Let $\epsilon>0$ be given, and let $n_0$ be the smallest integer with $n_0> \sqrt{\frac{5}{4\epsilon} + \frac 12}$. Then for every $n\geq n_0$ we have, equivalently, $\epsilon>\frac{5}{4n^2-2}$.
Thus, $\left|\frac{n^2+2}{2n^2-1} - \frac {1}{2}\right|< \epsilon$, for all $n\geq n_0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Saving for retirement - how much? I'm working through a problem in the book "An Undergraduate Introduction to Financial Mathematics" and there is an example I can't follow.
The problem is: Suppose you want to save for retirement. The savings account is compounded monthly at a rate of 10%. You are 25 years old and you plan to retire in 40 years. You plan on being retired for 30 years and you plan on receiving a monthly payment of \$1500 while retired. How much (fixed amount) should you save monthly?
I can't follow the computation in the book so I tried to solve it my own way. What am I doing wrong?
Let's call the monthly savings amount $x$. After 480 months (40 years) the account statement will read: $x + x(1+0.1/12) + ... + x(1+0.1/12)^{480} = x(1 - r^{481})/(1-r) =: P$ where I write $r = (1+0.1/12)$ for short. Next we will start subtracting \$1500 every month, but we should not forget that we still have interest:
30 years = 360 months.
Let $P_n$ be the amount we have in our account after $n$ withdrawals.
$P_0 = P$
$P_1 = P_0\times r - 1500$
...
$P_n = P_{n-1}\times r - 1500$
Finally we wish that $P_{360} = 0$. Thus $P = r^{-360}\times 1500\left(\frac{1-r^{360}}{1-r}\right)$
Solving for $x$ gives me the answer $x \approx 26.8$. The correct answer should be $x \approx 27.03$. Is this just a rounding error?
|
Your payments-in are made at the end of the months. This information is not given explicitly in the text.
The equation is
$x\cdot \frac{1-(1+\frac{0.1}{12})^{480}}{-\frac{0.1}{12}}=1500\cdot\frac{1-(1+\frac{0.1}{12})^{360}}{-\frac{0.1}{12}}\cdot \frac{1}{(1+\frac{0.1}{12})^{360}}$
This gives $x\approx 27.03$
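A quick numerical check of this equation (a Python sketch, valuing both annuities at the retirement date):

    r = 1 + 0.10 / 12                         # monthly growth factor
    pay_in = (r ** 480 - 1) / (r - 1)         # value at retirement of 1/month paid in
    pay_out = (1 - r ** -360) / (r - 1)       # value at retirement of 1/month withdrawn
    print(round(1500 * pay_out / pay_in, 2))  # 27.03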
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
3 Variable Diophantine Equation Find all integer solutions to $$x^4 + y^4 + z^3 = 5$$ I don't know how to proceed, since it has a p-adic and real solution for all $p$.
I think that the only one is (2, 2, -3) and the trivial ones that come from this, but I can't confirm it.
|
After a careful investigation I present some results which might be helpful in the final resolution of this problem. Lets start with the original equation i.e.
$$x^4+y^4+z^3=5$$
It is easy to see that there is no solution of this equation where $x,y$ and $z$ are all positive. Second, if $(x,y,z)$ is a solution then so is $(-x,-y,z)$. From these observations we get that $z<0$. Let us rewrite the equation in terms of positive values, i.e.
$$x^4+y^4-z^3=5$$
where $x,y$ and $z$ are all positive. This is equivalent to
$$x^4+y^4=5+z^3$$
From this, one gets
$$x^4+y^4\equiv z^3\mod(5)$$
First we show that $x\equiv 0\mod(5)$ and $y\equiv 0\mod(5)$ iff $z\equiv 0\mod(5)$.
If $x\equiv 0\mod(5)$ and $y\equiv 0\mod(5)$ then
$$x^4+y^4\equiv0\mod(5)\Rightarrow 5+z^3\equiv z^3\equiv0\mod(5)\Rightarrow z\equiv0\mod(5)$$
Now let $z\equiv0\mod(5)$ then $$5+z^3\equiv0\mod(5)\Rightarrow x^4+y^4\equiv0\mod(5)$$
However for any $x\in\mathbb{Z}$ one has $x^4\equiv0\mod(5)$ or $x^4\equiv1\mod(5)$. Therefore
$$x^4+y^4\equiv0\mod(5)\Leftrightarrow x\equiv y\equiv0\mod(5)$$
However an inspection of our modified equation yields that we can not have simultaneously $x$ and $y$ divisible by $5$ for otherwise $z\equiv0\mod(5)$ and
$$x^4+y^4\equiv0\mod(5^3)\Rightarrow 5+z^3\equiv0\mod(5^3)$$
which is impossible as $5+z^3\equiv 5\mod(5^3)$.
Knowing that $x$ and $y$ can not be simultaneously divisible by $5$ then $z$ is not divisible by $5$ either. Applying Fermat's little theorem we can rewrite the modified equation as
$$zx^4+zy^4\equiv1\mod(5)$$
Let us say, without loss of generality, $y\equiv0\mod(5)$ and $x^4\equiv1\mod(5)$; then
$$z\cdot 1+z\cdot 0\equiv1\mod(5)\Rightarrow z\equiv1\mod(5)$$
The other exhaustive case would be $x^4\equiv1\mod(5)$ and $y^4\equiv1\mod(5)$ in which case
$$z\cdot 1+z\cdot 1\equiv1\mod(5)\Rightarrow 2z\equiv1\mod(5)\Rightarrow z\equiv3\mod(5)$$
In this case a direct inspection for $z=3$ would yield $x=\pm 2$ and $y=\pm 2$.
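As a complement, a brute-force Python sketch over a box shows that no other small solutions exist:

    sols = [(x, y, z)
            for x in range(-20, 21)
            for y in range(-20, 21)
            for z in range(-20, 21)
            if x ** 4 + y ** 4 + z ** 3 == 5]
    print(sols)   # [(-2, -2, -3), (-2, 2, -3), (2, -2, -3), (2, 2, -3)]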
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
What are Darboux coordinates? What are Darboux coordinates?
Are they different from coordinates on $\Bbb R^n$ or some smooth manifold?
I am familiar with Riemannian manifolds, but Darboux coordinates came up in some materials.
|
A smooth manifold $M$ equipped with a closed, non-degenerate two-form $\omega$ is called a symplectic manifold. It follows almost immediately that a symplectic manifold is even dimensional.
Darboux's Theorem: Let $(M, \omega)$ be a symplectic manifold. For any $p \in M$, there is a coordinate chart $(U, (x^1, \dots, x^n, y^1, \dots, y^n))$ with $p \in U$ such that $\omega$ takes the form $$dx^1\wedge dy^1 + \dots + dx^n\wedge dy^n.$$
The coordinates in the above theorem are often called Darboux coordinates.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
How to prove invariance of dot-product to rotation of coordinate system Using the definition of a dot-product as the sum of the products of the various components, how do you prove that the dot product will remain the same when the coordinate system rotates?
Preferably an intuitive proof please, explainable to a high-school student. Thanks in advance.
|
First you should show that for any two vectors $v$ and $w$ in $\mathbb{R}^n$ (taking $n=3$ if necessary) $v\cdot w = |v||w|\operatorname{cos}\theta $, where $\theta$ is the (smaller) angle between both vectors.
This is a very geometric fact and you can probably prove it to them if they know the cosines law. First observe that:
$$\|v-w\|^2 = \|v\|^2+\|w\|^2-2\|v\|\,\|w\|\cos\theta.$$
This comes from the fact that the vectors $v$, $w$ and $(v-w)$ form a triangle (draw it). On the other hand,
$$\|v-w\|^2 = (v-w)\cdot(v-w)=v\cdot v -2v\cdot w + w\cdot w .$$
The result follows immediately. After this you only need to observe that rotations don't affect lengths or angles; then by the formula above the dot product is invariant under rotations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
finding code of a function in GAP packages How can I find the code of a function in GAP? When I use "??" in front of the name of a function there is no help, so I want to find the code of the function among the packages.
|
First, GAP is an open-source project, and both the core system and GAP packages are supplied in the GAP distribution with their source code. You may use various tools available on your system to search for a string in the source code, for example
grep -R --include=*.g* BoundPositions pkg/*
Secondly, you may print a function, for example:
gap> Print(BoundPositions);
function ( l )
return Filtered( [ 1 .. Length( l ) ], function ( i )
return IsBound( l[i] );
end );
end
gap>
In this case, the code is formatted accordingly to GAP rules, and comments in the code are not displayed.
Thirdly, you may use PageSource to show the exact place in the file where the code is contained. In this case, you will see all comments and the author's formatting of the code. For example,
gap> PageSource(BoundPositions);
Showing source in /Users/alexk/gap4r7p5/pkg/fga/lib/util.gi (from line 6)
##
#Y 2003 - 2012
##
InstallGlobalFunction( BoundPositions,
l -> Filtered([1..Length(l)], i -> IsBound(l[i])) );
As one could see, this is an (undocumented) function from the FGA package. Clearly, the package should be loaded before using either Print(BoundPositions) or PageSource(BoundPositions).
Note that the argument must be a function, so for operations, attributes, properties, filters etc. one should use ApplicableMethod first, otherwise you may see something like this:
gap> PageSource(Size);
Cannot locate source of kernel function Size.
gap> PageSource(IsGroup);
Cannot locate source of kernel function <<and-filter>>.
Some more details are given in this GAP Forum thread and in this question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/947945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
2nd order homogeneous ODE I am trying to solve a system of differential equations (for the full system see below) and I am stuck on the following 2nd order ODE (with $a$ and $b$ being constants and $\dot x = \frac{dx}{dt}$):
$$\ddot x - \frac34 a\dot x^2 -b \dot x + 2 a b = 0$$
I tried to substitute $v := \dot x$, which leads me to a different ODE which I was not able to solve. And now I have no idea what else I could try here and would love to get some pointers.
And here the original system / IVP I want to solve:
$$ \dot x = 2 a + 2 e^{ax}y \\ \dot y = \frac32 e^{-ax}a^3+(a^2+b)y-\frac12 a e^{ax}y^2 \\ x(0)=0 \\ y(T)=0$$
In which I solved the first equation for $y$ and substituted $y$ and $\dot y$ in the 2nd which gave me the equation above.
|
As I said in a comment, start by defining $z=x'$; so the differential equation becomes $$\frac{dz}{dt} - \frac34 a z^2 -b z + 2 a b = 0$$ that is to say $$\frac{dz}{dt} = \frac34 a z^2 +b z - 2 a b $$ then, as Semsem suggested, it is separable; so $$\frac{dt}{dz} = \frac{1}{\frac34 a z^2 +b z - 2 a b}$$ so $$t+C=-\frac{2 \tanh ^{-1}\left(\frac{3 a z+2 b}{2 \sqrt{b} \sqrt{6
a^2+b}}\right)}{\sqrt{b} \sqrt{6 a^2+b}}$$ from which $$z=-\frac{2 \left(\sqrt{b} \sqrt{6 a^2+b} \tanh \left(\frac{1}{2} \sqrt{b} \sqrt{6
a^2+b} (C+t)\right)+b\right)}{3 a}$$ Now, one more unpleasant integration leads to $$x=-\frac{2 \left(2 \log \left(\cosh \left(\frac{1}{2} \sqrt{b} \sqrt{6 a^2+b}
(C+t)\right)\right)+b t\right)}{3 a}+D$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How do I solve this geometric series I have this geometric series $2+1+ \frac{1}{2}+ \frac{1}{4}+\cdots+ \frac{1}{128}$ to solve. So I extract the number two and get $2\left(\left(\tfrac{1}{2}\right)^0+ \left(\tfrac{1}{2}\right)^1+\cdots+\left(\tfrac{1}{2}\right)^7\right)$.
I use the formula $S_n= \frac{x^{n+1}-1}{x-1}$, so I plug the values into this formula and get $S_n= 2\cdot\frac{\left(\tfrac{1}{2}\right)^{7+1}-1}{\tfrac{1}{2}-1}$, but the result is not correct.
What did I do wrong?
Thanks!!
|
Hint: It's $$2 + \left(1+\frac12+ \frac14+ \cdots + \frac{1}{128}\right),$$ not everything multiplied by $2$.
You can also think of it as follows: The first term is $a_1=2$ and the common ratio is $r=1/2$, and then you sum it using the formula, where your last term is $a_9=1/128$.
Edit: If you do want to factor out a $2$, then you get $$2(1+\frac12 + \frac14 + \cdots + \frac{1}{256})= 2(\frac{1}{2^0} + \frac{1}{2^1} + \frac{1}{2^2} + \cdots + \frac{1}{2^8})$$
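As a quick sanity check (purely illustrative), exact arithmetic in Python confirms the sum:
from fractions import Fraction
direct = sum(Fraction(1, 2)**k for k in range(-1, 8))        # 2 + 1 + 1/2 + ... + 1/128
closed = 2 * (1 - Fraction(1, 2)**9) / (1 - Fraction(1, 2))  # a(1 - r^n)/(1 - r) with a = 2, r = 1/2, n = 9 terms
print(direct, closed, direct == closed)                      # 511/128 511/128 True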
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Show that $k[x]/(x^{2})$ is an indecomposable (1), but not irreducible (2) $k[x]$-module. Exercise: Show that $k[x]/(x^{2})$ is an indecomposable (1), but not irreducible (2) $k[x]$-module.
I'm not sure about all different kind of modules, but this is a question of a book about associative algebras. It is not really exercise, it is just stated in the text, so I guess it must be rather trivial, but I'm not getting it. I think it may be I forgot some old ring theory/linear algebra stuff.
If I understand correctly: $k[x]/(x^{2})$ can be seen as $\{a+bx: a,b \in k\}$.
So for (1), I must show that there don't exist two non-zero representations $V_{1},V_{2}$ of $k[x]$ such that $k[x]/(x^{2})$ is isomorphic to $V_{1}\oplus V_{2}$. Well, as $k[x]/(x^{2})$ is 2-dimensional, $V_{1},V_{2}$ would have to be $1$-dimensional. But I'm not sure what the 1-dimensional subrepresentations look like. Are they just isomorphic to $k$?
For (2), I need to show there exists a non-trivial subrepresentation. So I have to show there exists a subspace $W$ of $\{a+bx: a,b \in k\}$ such that $fW \subset W$ for any polynomial $f$. Well, that seems impossible to me. I can't think of any: if $W=k$, then $x\cdot \alpha$ is not in $W$; if $W=\{bx:b\in k\}$ then $bx\cdot bx$ is not in $W$.
I feel like I look at this the complete wrong way, feel free to ignore all my stuff above, and show me a way how to look at this. :)
|
For a field $k$, $k[x]$ is a principal ideal ring. By the correspondence theorem, the only ideals of $k[x]/(x^2)$ are those generated by divisors of $x^2$. Thus $k[x]/(x^2)$ has exactly three submodules: $(x^2)/(x^2),(x)/(x^2)$ and $k[x]/(x^2)$.
So the existence of $(x)/(x^2)$ immediately tells you why $k[x]/(x^2)$ isn't irreducible. And how could it be decomposable? There is only one nonzero proper ideal, so there isn't a second proper ideal to complement the first. So there isn't a nontrivial decomposition of $k[x]/(x^{2})$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Example of a non-trivial function such that $f(2x)=f(x)$ Could you give an example of a non-constant function $f$ such that
$$
f(x) = f(2x).
$$
The one that I can think of is the trivial one, namely $\chi_{\mathbb{Q}}$, the characteristic function on the rationals.
I am wondering if there is any other such function other than this one. TQVM!
|
One more example
$$f(x) = \sin(2\pi\log_2x)$$
$$f(2x) = \sin(2\pi\log_2(2x)) = \sin(2\pi(1 + \log_2x)) = \sin(2\pi\log_2x) = f(x)$$
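A quick numeric spot-check in Python (illustrative only; note that $f$ is only defined for $x>0$):
import math
def f(x):
    return math.sin(2 * math.pi * math.log2(x))
print(all(abs(f(2*x) - f(x)) < 1e-12 for x in (0.3, 1.0, 2.5, 7.0)))  # True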
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Proof of induction principle Theorem 1.1.3 (induction principle) of Dirk Van Dalen "Logic and Structure" states:
Let $A$ be a property, then $A(\phi)$ holds for all $\phi \in PROP$ if:
*
*$A(p_i)$, for all i;
*$A(\phi),A(\psi) \Rightarrow A(\phi \square \psi)$
*$A(\phi) \Rightarrow A(\neg \phi)$
I don't understand the little proof he gives. He writes: let $X=\{\phi \in PROP \mid A(\phi) \}$; then $X$ satisfies the conditions of the recursive definition of $PROP$. So $PROP \subseteq X$, i.e. for all $\phi \in PROP$, $A(\phi)$ holds.
|
For the sake of avoiding confusion, I feel it should be pointed out precisely in what sense PROP is the "smallest" set of well-formed formulae. For example, $\{10,11\}$ is certainly the smallest set of consecutive integers that add up to $21$, but that does not mean that it is a subset of $\{6,7,8\}$.
However, when it comes to being a set of all propositional formulae, all the conditions are of the form "you must contain these things" and "if you contain these things you must also contain these things". Because of this very particular form (there are mathematically more precise ways to formulate it), it is the case that there does exist a set, namely PROP, such that any set satisfying the conditions must contain PROP as a subset.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How many non-collinear points determine an $n$-ellipse? $1$-ellipse is the circle with $1$ focus point, $2$-ellipse is an ellipse with $2$ foci. $n$-ellipse is the locus of all points of the plane whose sum of distances to the $n$ foci is a constant.
I know that $3$ non-collinear points determine a circle. $5$ non-collinear points on a plane determine an ellipse.
After that my question is: how many non-collinear points determine an $n$-ellipse on a plane?
Furthermore: is there a unique shape which is a kind of generalization of the circle or ellipse and is determined by $4$ given non-collinear points on a plane? What can we say in this case? Is there a special fitted unique closed curve for any points?
|
The number of points needed to identify an $n$-ellipse is $2n+1$. This follows directly from the general equation of an $n$-ellipse
$$\sum_{i=1}^n \sqrt{(x-u_i)^2+(y-v_i)^2}=k$$
where the number of parameters is $2n+1$. So, for a $1$-ellipse (circle) we need $3$ noncollinear points to identify $3$ parameters ($u_1,v_1,k$), for a $2$-ellipse we need $5$ noncollinear points to identify $5$ parameters ($u_1,v_1,u_2,v_2,k$), and so on.
As regards the "shape" identified by $4$ points, since these points allow one to define a $2$-ellipse with the exception of a single parameter that remains unknown, the resulting figure is a family of $2$-ellipses. For example, we could use the $4$ points to calculate $u_1,v_1,u_2,v_2$, leaving $k$ unknown. This would create a family of $2$-ellipses where the only variable parameter is $k$, that is to say the sum of the distances to the two foci.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Books that use probabilistic/combinatorial/graph theoretical/physical/geometrical methods to solve problems from other branches of mathematics I am searching for some books that describe useful, interesting, not-so-common, (possibly) intuitive and non-standard methods (see note *) for approaching problems and interpreting theorems and results in number theory, analysis, algebra, linear algebra, and other branches of mathematics.
(*) Such methods can be (but not limited to) from the areas of
*
*probability;
*combinatorics;
*graph theory;
*physics;
*geometry.
Examples of such books can be Uspenskii's Some Applications of Mechanics to Mathematics or Apostol and Mnatsakanian's New Horizons in Geometry.
|
Mark Levi, The Mathematical Mechanic: Using Physical Reasoning to Solve Problems.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
If $f(x)\in\mathbb{Q}[x]$ has degree $p$ and $\operatorname{Gal}(K/\mathbb{Q})$ has an element of order $p$ then $f(x)$ is irreducible. Let $f(x)\in\mathbb{Q}[x]$, $p$ prime, $\deg f(x)=p$, and suppose $G = \operatorname{Gal}(K/\mathbb{Q})$ has an element of order $p$, where $K$ is the splitting field of $f(x)$ over $\mathbb{Q}$.
Show that $f(x)$ is irreducible over $\mathbb{Q}$.
|
Suppose $\sigma \in G$ has order $p$. Since $K$ is generated over $\mathbb{Q}$ by the roots $a_1, \ldots , a_p$ of $f(x)$, the permutation that $\sigma$ induces on these roots also has order $p$. A permutation of $p$ objects whose order is the prime $p$ must be a single $p$-cycle, because the order is the least common multiple of the cycle lengths, and those lengths sum to $p$. So all the roots lie in one orbit, i.e. they are conjugate, and therefore $f(x)$ is irreducible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Prove this equality by using Newton's Binomial Theorem Let $ n \ge 1 $ be an integer. Use newton's Binomial Theorem to argue that
$$36^n -26^n = \sum_{k=1}^{n}\binom{n}{k}10^k\cdot26^{n-k}$$
I do not know how to make the LHS = RHS. I have tried $(36^n-26^n) = 10^n$, which is $x$ in the RHS, but I don't know what to do with the $26^{n-k}$ after I have gotten rid of the $26^n$ on the right. I also know I might have to use Pascal's identity, $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$, in this question.
Maybe I am approaching it from a completely wrong point of view, If someone can help point me in the right direction. It would be much appreciated!!!
|
Bring the $26^n$ to the other side. You are then looking at the binomial expansion of $(26+10)^n$. The $26^n$ is the $k=0$ term that was missing in the given right-hand side.
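If it helps to see the identity in action, a throwaway Python check (not part of the proof) confirms it for small $n$:
from math import comb
for n in range(1, 8):
    rhs = sum(comb(n, k) * 10**k * 26**(n - k) for k in range(1, n + 1))
    print(n, 36**n - 26**n == rhs)  # True for every n tested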
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the precedence of the limit operator? I would like to know the precedence of the $\lim$ operator.
For instance, given the following expression:
$$f(x) = \lim_{x \to a} u(x) + v(x)$$
Does the limit apply only to the first term?
$$f(x)=\left(\lim_{x \to a} u(x)\right) + v(x)$$
Or does it apply to the entire expression?
$$f(x) = \lim_{x \to a} \left( u(x) + v(x)\right)$$
|
In most textbooks I've seen the limit operator has higher precedence than addition/subtraction:
$$\lim_{x \to a} u(x) + v(x) \equiv \left(\lim_{x \to a} u(x)\right) + v(x)$$
Where it gets hairy is whether the limit operator has higher precedence than multiplication/division:
$$\lim_{x \to a} u(x) v(x) \stackrel?= \left(\lim_{x \to a} u(x)\right) v(x)$$
I don't think there's an established convention so you would have to guess from context. However, it's usually a bad idea to shadow variables (to reuse the same variable symbol for both the variable in the limit and also another variable outside the limit). So if you wanted to be absolutely clear, it's a good idea to write your equation as:
$$f(x) = \lim_{y \to a} u(y) + v(x)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A question on endomorphisms of an injective module This is a homework question I am to solve from TY Lam's book Lectures on Modules and Rings, Section 3, exercise 23.
Let $I$ be an injective right $R$-module where $R$ is some ring.
Let $H= \operatorname{End}(I),$ the endomorphisms on $I$.
I need to show that given $f,h \in H$, if $\ker h \subseteq \ker f$, then $f \in H\cdot h$. That is, there is some endomorphism $g \in H$ such that $g\circ h = f$.
I can see that if we have that $f$ is one to one, then this will force $\ker h = \{0\}$, so we will have that $h$ must be an injection from $I$ to $I$, with filler $g:I \rightarrow I$ such that $g \circ h = f$.
How would I go about handling the case where I do not have $h$ guaranteed to be an injection?
|
Hint. $0\to\operatorname{Im}(h)\to I$ and define $\bar f:I/\ker h\to I$ by $\bar f(\bar x)=f(x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/948950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is there a proof for what I describe as the "recursive process of mathematical induction for testing divisibility"? I was working on my homework for Discrete Math, and we were asked to "Prove: $6 \mid n^{3}+5n$, where $n\in \mathbb{N}$". My solution varied significantly from how I have seen it done by others. I noticed a pattern and used it to say "$6\mid Q(n)$ iff $Q(k+1)-Q(k)$, where $k \in \mathbb{N}$, is also divisible by $6$, ∵ $Q(k+1)-Q(k)$ represents the recursive process of mathematical induction for testing divisibility." I was not satisfied with it, even though it appears true. But my overall question is: has anyone heard of a proof that verifies the pattern I was describing, or could they come up with one?
It appears that when one subtracts $P(k)$ via the induction hypothesis from $P(k+1)$, then one is left with a new function $Q(n)$, which we know logically needs to be divisible by the target number for our proof to work. Therefore, $Q(k+1)-Q(k)$ will either yield a new function that is divisible by the term one is looking for, or will lead to a new function that logically needs to be divisible by the target number for our proof to work, and so on and so forth.
|
Nice!
Yes that's correct, if you have a sequence $a_1,a_2,a_3,\ldots$ of terms which are all divisible by a fixed integer $m$, then their differences must be also. If you are trying to prove that each of the $a_i$ are divisible by $m$, then you could construct a new sequence:
\begin{align*}
b_1&=a_2-a_1 \\
b_2&=a_3-a_2 \\
b_3&=a_4-a_3 \\
\vdots
\end{align*}
and try to prove that the $b_i$ are divisible by $m$. Provided you prove that $m\mid a_1$ as well, this would mean that $a_n=a_1+b_1+b_2+\cdots+b_{n-1}$ is also divisible by $m$.
In your example, this works particularly nicely, because $a_n$ is a cubic polynomial in $n$. When you calculate the difference between $a_{n+1}$ and $a_n$, you get a quadratic polynomial in $n$, from which it is simpler to prove the divisibility.
In general in fact, if you have a $k$th degree polynomial $p(x)$, and you calculate the difference
$$p(x+1)-p(x)$$
the resulting polynomial will have degree $(k-1)$ (because the $x^k$ terms cancel). Using your idea you can repeatedly calculate differences, decreasing the degree of the polynomial, until you reach a constant polynomial, from which it will be trivial to calculate any divisibility.
This is essentially the method of finite differences; it is a relatively standard technique when trying to inductively prove divisibility of polynomials such as this one, along with factoring.
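Here is a minimal Python sketch of the repeated-differences idea applied to $a_n = n^3+5n$ (the helper name is mine):
def diffs(seq):
    # forward differences: b_k = a_{k+1} - a_k
    return [y - x for x, y in zip(seq, seq[1:])]
a = [n**3 + 5*n for n in range(12)]
d1 = diffs(a)    # quadratic in n: 3n^2 + 3n + 6
d2 = diffs(d1)   # linear in n:    6n + 6
d3 = diffs(d2)   # constant:       6
print(all(v % 6 == 0 for v in a + d1 + d2 + d3))  # True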
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Exercise books in analysis I'm studying Rudin's Principles of mathematical analysis and I was wondering if there are some exercise books (that is, books with solved problems and exercises) that I can use as a companion to Rudin.
The books I'm searching for should be:
*
*full of hard, non-obvious, non-common, and thought-provoking problems;
*rich of complete, step by step, rigorous, and enlightening solutions;
|
This is what I recommend to students learning analysis as a good companion:
http://minds.wisconsin.edu/handle/1793/67009
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
}
|
Finding cartesian equation for trigonometric parametric forms I'm trying to find the cartesian equation for these parameteric forms:
$$
x = \sin\theta + 2 \cos \theta \\
y = 2 \sin\theta + \cos\theta
$$
I tried:
$$\begin{align}
x^2 & = \sin^2\theta + 4\cos^2\theta \\
& = 1 - \cos^2\theta + 4\cos^2\theta \\
& = 1 + 3\cos^2\theta \\
\\
y^2 & = 4\sin^2\theta+\cos^2\theta\\
& = 4(1 - \cos^2\theta) + \cos^2\theta \\
& = 4 - 3\cos^2\theta \\
\\
\therefore & \space 4 - y^2 = x^2 - 1\\
\space & x^2 + y^2 = 5
\end{align}$$
Which differs from the given answer of $5x^2 + 5y^2 - 8xy = 9$. Where am I going wrong?
|
In general, $(a+b)^2\ne a^2+b^2$ unless $ab=0$
Solve for $\sin\theta,\cos\theta$ in terms of $x,y$
Then use $\sin^2\theta+\cos^2\theta=1$ to eliminate $\theta$
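Carrying this out with sympy (a sketch, with symbol names of my choosing) gives the book's answer:
import sympy as sp
x, y = sp.symbols('x y')
# from x = s + 2c and y = 2s + c: s = (2y - x)/3, c = (2x - y)/3
s = (2*y - x) / 3
c = (2*x - y) / 3
print(sp.expand(9 * (s**2 + c**2)))  # 5*x**2 - 8*x*y + 5*y**2, and s^2 + c^2 = 1 gives 5x^2 + 5y^2 - 8xy = 9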
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding Solutions to Trigonometric Equation Find all $x$ in the interval (0, $\frac{\pi}{2}$) such that $$\frac{\sqrt{3}-1}{\sin x} + \frac{\sqrt{3}+1}{\cos x} = 4\sqrt{2}.$$
|
Rewrite it in the form
$$2\sqrt2\left(\frac{\sqrt3-1}{2\sqrt2}\cos x+\frac{\sqrt3+1}{2\sqrt2}\sin x\right)=2\sqrt2\sin 2x.$$
For $\phi=\arcsin\frac{\sqrt3-1}{2\sqrt2}$ it implies
$$\sin(x+\phi)=\sin 2x,$$
i.e. $x+\phi=2x+2\pi n$ or $x+\phi=\pi-2x+2\pi n$, $n\in\Bbb Z$. Therefore, the only solutions in $(0,\pi/2)$ are $\phi$ and $\frac{\pi-\phi}{3}$.
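A quick numeric check of both solutions (illustrative only; numerically $\phi$ works out to $\pi/12$):
import math
phi = math.asin((math.sqrt(3) - 1) / (2 * math.sqrt(2)))  # = pi/12
def lhs(x):
    return (math.sqrt(3) - 1) / math.sin(x) + (math.sqrt(3) + 1) / math.cos(x)
for x in (phi, (math.pi - phi) / 3):
    print(abs(lhs(x) - 4 * math.sqrt(2)) < 1e-12)  # True, True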
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Let $V$ be finite dimensional v.s. and $0 \ne T\in \mathscr L(V)$ , then $\exists$ $S \in \mathscr L(V)$ such that $0 \ne T \circ S$ is idempotent If $V$ is a finite dimensional vector space and $T \ne0$ is a linear operator on $V$ , then how may we prove that there is a linear operator $S$ on $V$ such that $T\circ S$ is non-zero and idempotent?
|
Hint: Suppose $(c,v)$ is an eigenpair of $T$. Consider $T \circ P_v$, where $P_v$ is the orthogonal projector onto the span of $v$. When $c=1$ or $c=0$, I claim this is idempotent. How do you fix it otherwise?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that if $f:[a,b]\to\Bbb{R}$ is one to one and has the intermediate value property, then $f$ is strictly monotone I got this problem:
Prove that if $f:[a,b]\to\Bbb{R}$ is one to one and has the intermediate value property, then $f$ is strictly monotone.
That is, we must show that $\forall x,y\in[a,b], x<y \to f(x)<f(y)$ or $\forall x,y\in[a,b], x<y \to f(x)>f(y)$.
Start of my proof:
Suppose not.
That is, suppose that there exist $x_1,y_1\in[a,b]$ with $x_1<y_1$ and $f(x_1)\geq f(y_1)$, and suppose that there exist $x_2,y_2\in[a,b]$ with $x_2<y_2$ and $f(x_2)\leq f(y_2)$.
Now since $f$ is one to one we get that there exist $x_1,y_1\in[a,b]$ with $x_1<y_1$ and $f(x_1)> f(y_1)$, and that there exist $x_2,y_2\in[a,b]$ with $x_2<y_2$ and $f(x_2)< f(y_2)$.
Now I am not sure how to split the proof into cases?
Thanks on any hints.
|
I'll show that if $f$ is one to one and not monotone in $[a,b]$ then there exist $x_1,x_2,x_3\in[a,b]$ such that $x_1<x_2<x_3$ and $f(x_2)<f(x_1),f(x_3)$ or $f(x_1),f(x_3)<f(x_2)$:
(Note: I tried to write the proof similar to nested if-statements in computer programming for better flow)
Since $f$ is one to one we get that $f(a)\neq f(b)$ and hence there are two cases $f(a)<f(b)$ or $f(a)>f(b)$
if $f(a)<f(b)$ then:
Since $f$ is not monotone, in particular $f$ is not increasing, and so there exist $u,v\in[a,b]$ with $u<v$ such that $f(u)>f(v)$ (because $f$ is one to one, there cannot be equality).
Now there two cases $u=a$ or $u\neq a$:
if $u=a$ then:
Since $u<v$ we get that $a<v$ and that $f(a)>f(v)$
Now we'll prove that $v\neq b$:
If $v = b$, we get that $f(a)>f(b)$, which contradicts the fact that $f(a)<f(b)$. And so $v\neq b$, and hence we get that $v\in(a,b)$.
Now since $f(a)>f(v)$ and since $f(b)>f(a)$ we get that $f(v)<f(a),f(b)$.
Now set $x_1 = a, x_2 = v, x_3=b$ and we get that $x_1<x_2<x_3$ and that $f(x_2)<f(x_1),f(x_3)$.
if $u\neq a$ then:
Now we'll show that $u\neq b$ because if $u = b$ we get that $b=u<v$ and so $v\notin [a,b]$ which is a contradiction and so $u\in(a,b)$.
Now there are two cases: $v=b$ or $v\neq b$
if $v=b$ then:
We get that $f(u)>f(b)$ and since $f(b)>f(a)$ we get that $f(a),f(b)<f(u)$
and since $u\in(a,b)$ we get that $a<u<b$.
Now set $x_1 = a, x_2 = u, x_3=b$ and we get that $x_1<x_2<x_3$ and that $f(x_1),f(x_3)<f(x_2)$.
if $v\neq b$ then:
We'll show that $v\in(a,b)$:
Since $v\in[a,b]$ we get that $v\leq b$, but since $v\neq b$ we get that $v<b$. Also, since $u\in(a,b)$ and $u<v$, we get $a<v$, and so $v\in(a,b)$.
and we got $a<u<v<b$.
Now there are two cases: $f(a)<f(u)$ or $f(b)>f(v)$
Because if $f(a)>f(u)$ and $f(b)<f(v)$ and since $f(u)>f(v)$ we get that $f(a)>f(u)>f(v)>f(b)$ and so $f(a)>f(b)$ which is a contradiction.
if $f(a)<f(u)$ then:
Take $x_1=a,x_2=u,x_3=v$ and we get that $x_1<x_2<x_3$ and since $f(u)>f(a),f(v)$ we get $f(x_1),f(x_3)<f(x_2)$.
if $f(b)>f(v)$ then:
Take $x_1 = u, x_2=v,x_3=b$ and we get that $x_1<x_2<x_3$ and since $f(b),f(u)>f(v)$ we get $f(x_2)<f(x_1),f(x_3)$.
Similarly we prove for the case $f(a)>f(b)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Onion-peeling in O(n^2) time I am working on the onion-peeling problem: given a set of points, return the number of onion peels (nested convex-hull layers). For example, a point set may have 5 onion peels.
In high-level pseudo-code, the approach is obvious:
count = 0
while set has points:
points = find points on convex hull
set.remove(points)
count+=1
return count
But I'm asked to give this in O(n^2) time. Graham scan works in O(n*log(n)) time and gift-wrapping in O(n^2) time. Basically, I'm wondering which algorithm I should use internally to find the points on the convex hull efficiently.
If I use gift-wrapping: I'll get O(n^3) time, and with Graham, I'll get O(log(n)n^2) time.
What would be the best way to design an algorithm that solves the problem in O(n^2)?
Thanks in advance.
|
Jarvis' March (Gift Wrapping) Algorithm takes $O(nh)$ time, where $h$ is the number of points on the convex hull.
Hint:
Suppose the algorithm takes $i$ iterations to complete, and let $h_k$ be the number of points on the $k$th convex hull $(1 \le k \le i)$. Also, let $n_k$ be the number of points remaining after the first $k-1$ iterations. (Note that $n_1 = n$ and $n_{i+1} = 0$.)
What is $\sum\limits_{k = 1}^ih_k$?
Explanation:
Suppose the algorithm takes $cnh$ time for some constant $c > 0$.
At the $k$th iteration, the algorithm will take $cn_kh_k$ time.
The algorithm terminates after $i$ iterations. Therefore, the total time taken for the algorithm is:
$$\begin{align}\mathrm{Time} &= \sum\limits_{k = 1}^i c n_k h_k
\\&= c\sum\limits_{k = 1}^i n_k h_k
\\&\le c\sum\limits_{k = 1}^i n h_k\qquad \text{because }\quad n_k \le n
\\&= cn \sum\limits_{k = 1}^i h_k
\\&= cn \cdot n \qquad \text{because } \sum\limits_{k = 1}^i h_k \text{ is the total number of points removed over all iterations, which is } n
\\&= cn^2
\\&= O(n^2)
\end{align}$$
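Putting the pieces together, here is a minimal Python sketch of the full $O(n^2)$ onion-peeling algorithm with gift wrapping as the inner routine. It assumes the points are distinct and in general position (no three collinear), so treat it as a sketch rather than a robust implementation:
def cross(o, a, b):
    # z-component of (a - o) x (b - o); negative means b lies to the right of the ray o->a
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
def jarvis_hull(pts):
    start = min(pts)                    # lexicographic minimum is always on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = next(r for r in pts if r != p)
        for r in pts:
            if r != p and cross(p, q, r) < 0:
                q = r                   # r is to the right of p->q, so q was not extreme
        p = q
        if p == start:
            return hull
def onion_layers(points):
    pts, layers = list(points), 0
    while pts:
        if len(pts) <= 2:               # one or two leftover points: a degenerate last layer
            return layers + 1
        hull = set(jarvis_hull(pts))
        pts = [p for p in pts if p not in hull]
        layers += 1
    return layers
print(onion_layers([(0, 0), (4, 0), (0, 4), (1, 1), (2, 1), (1, 2)]))  # 2
Each call to jarvis_hull on $n_k$ points with $h_k$ hull points costs $O(n_k h_k)$, so the total is $O(n^2)$ by the summation above.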
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Differentiability question ends up in contradiction. Let $f(x)=x^3\cos\frac{1}{x}$ when $x\neq0$ and $f(0)=0$.
Is $f(x)$ differentiable at $x=0$?
My first attempt
Definition: A function is differentiable at $a$ if $f'(a)$ exists.
$$f'(x)=\lim_{h \to 0}\frac{f(x+h)-f(x)}{x+h-x}$$
$$f'(0)=\lim_{h \to 0}\frac{f(h)-f(0)}{h}$$
since $f(0)=0$
$$f'(0)=\lim_{h \to 0}\frac{f(h)-0}{h}$$
$$f'(0)=\lim_{h \to 0}\frac{h^3\cos\frac{1}{h}}{h}$$
$$f'(0)=\lim_{h \to 0}h^2\cos\frac{1}{h}$$
We know that $-1\leq \cos(a) \leq 1$ for any real number $a$, so this implies:
$$-h^2 \leq h^2\cos\frac{1}{h} \leq h^2$$
where both the lower and upper bounds approach $0$ when $h$ approaches $0$, therefore this seems to imply that a limit exists for $f'(0)$ and it is precisely $0$. So therefore $f(x)$ is differentiable at $0$.
But my second attempt results in a different answer.
$$f'(x)=3x^2\cos\frac{1}{x}+x\sin\frac{1}{x}$$
$\cos\frac{1}{0}$ is undefined, but since $f(0)=0$, if we could get the terms to be expressed in terms of $x^3\cos\frac{1}{x}$ then maybe that is defined:
$$f'(x)=3\frac{x^3\cos\frac{1}{x}}{x}+\frac{x\sqrt{x^6-x^6\cos^2\frac{1}{x}}}{x^3}$$
since $f(0)=0^3\cos\frac{1}{0}=0$
$$f'(0)=\frac{0}{0}+0$$
which is undefined.
So there lies my contradiction.
I think my questionable step is assuming that $f(0)=0 \implies 0^3\cos\frac{1}{0}=0$.
How do I resolve this contradiction and determine whether $f$ is differentiable at $0$?
|
$1$) You showed correctly, from the definition of the derivative, that $f'(0)=0$.
$2$) You used the ordinary differentiation formula to find the derivative of $f(x)$ when $x\ne 0$. That is perfectly fine.
Then you decided to use the limit as $x\to 0$ of the $f'(x)$ calculated in $2$) to calculate $f'(0)$. That is in principle not OK. This limit exists and is equal to $f'(0)$ precisely if $f'(x)$ is continuous at $x=0$. In our case, $f'(x)$ happens to be continuous at $x=0$. However, let $g(x)=x^2\sin(1/x)$ when $x\ne 0$, and let $g(0)=0$. Then it turns out that $g(x)$ is differentiable everywhere, but $g'(x)$ is not continuous at $x=0$.
Remark: It turns out that $\lim_{x\to 0}f'(x)=0$. However, your particular manipulation did not show this. We give a similar but more extreme example. Let $h(x)=e^x$. Of course $\lim_{x\to 0} h(x)=1$. But note that if $x\ne 0$, then $h(x)=\frac{x^3e^x}{x^3}$. If we set $x=0$, we get an undefined result. That's not a problem: setting $x=0$ is just not the right way to calculate the limit.
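A quick numeric look at the difference quotients (illustrative only) matches the computation in $1$):
import math
def f(x):
    return x**3 * math.cos(1/x) if x != 0 else 0.0
for h in (1e-2, 1e-4, 1e-6):
    print(h, (f(h) - f(0)) / h)  # the quotients shrink like h^2, so f'(0) = 0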
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Prove the difference of roots is less than or equal to the root of the difference I am doing a larger proof that requires this to be true:
$|\sqrt{a} - \sqrt{b}| \leq \sqrt{|a - b|}$
I can square both sides to get
$a - 2\sqrt{a}\sqrt{b} + b \leq |a - b|$
Note that a and b are > 0.
I also know that
$|c| - |d| \leq |c - d|$
It seems like a mistake that $+b$ is there on the left...
How can I prove this is true
|
$\sqrt{a}$ and $\sqrt{b}$ are nonnegative; therefore,
$$-\sqrt{b}\leq \sqrt{b},$$ and it follows that
$$\sqrt{a}-\sqrt{b}\leq \sqrt{a} +\sqrt{b} $$
by symmetry we have
$$|\sqrt{a}-\sqrt{b}|\leq |\sqrt{a} +\sqrt{b} |=\sqrt{a} +\sqrt{b}$$
Now multiply both sides by $|\sqrt{a}-\sqrt{b}|$ to get
$$|\sqrt{a}-\sqrt{b}|^2\leq \left(\sqrt{a}+\sqrt{b}\right)\left|\sqrt{a}-\sqrt{b}\right| = |a-b|$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Why is $\sum_{k=j}^{i+j}(j+i-k) = \sum_{k=1}^{i}(k)$ $\displaystyle\sum_{k=j}^{i+j}(j+i-k) = \displaystyle\sum_{k=1}^{i}(k)$
I know the above are equal through testing it out with arbitrary values, but I can't get an intuitive grasp as to why this is.
|
\begin{align}
S &= \sum_{k=j}^{i+j} (i+j-k) \\
&= (i) + (i-1) + \cdots + ((i+j)-(i+j-1)) + ((i+j)-(i+j)) \\
&= (i) + (i-1) + \cdots + 1 + 0 \\
&= 0 + 1 + \cdots + (i-1) + i \\
&= \sum_{k=0}^{i} k.
\end{align}
Also
\begin{align}
\sum_{n=1}^{m} n = \frac{m(m+1)}{2}
\end{align}
such that
\begin{align}
\sum_{k=j}^{i+j} (i+j-k) = \sum_{k=1}^{i} k = \binom{i+1}{2}.
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/949996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Bibinomial coefficient integer For integers $n \ge k \ge 0$ we define the bibinomial coefficient $\left( \binom{n}{k} \right)$ by
$$ \left( \binom{n}{k} \right) = \frac{n!!}{k!!(n-k)!!} .$$
What are all pairs $(n,k)$ of integers with $n \ge k \ge 0$ such that the corresponding bibinomial coefficient is an integer?
(Note: The double factorial $n!!$ is defined to be the product of all even positive integers up to $n$ if $n$ is even and the product of all odd positive integers up to $n$ if $n$ is odd. So e.g. $0!! = 1$, $4!! = 2 \cdot 4 = 8$, and $7!! = 1 \cdot 3 \cdot 5 \cdot 7 = 105$.)
The question is from a European math competition, which is already over.
|
Hint:
Use that
$$
n!! = 2^kk!
$$
when $n=2k$ and that
$$
n!! = \frac{n!}{2^k k!}
$$
when $n=2k+1$.
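Before applying the hint, a brute-force scan in Python (an experiment, not a proof) is a good way to guess the answer:
from math import prod
def dfact(n):
    # double factorial; note that the product over an empty range is 1, so dfact(0) == 1
    return prod(range(n, 0, -2))
pairs = [(n, k) for n in range(16) for k in range(n + 1)
         if dfact(n) % (dfact(k) * dfact(n - k)) == 0]
print(pairs)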
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/950119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Reroll 2 dice sum probability My statistics knowledge is buried deep in my memory and I am not a math guru, so I do not understand half of the fuzzy symbols used in most posts that could have the answer I am looking for. So I would ask for a very simple and easy-to-understand answer, pretty please :)
I have 2 dice, numbered {0,0,1,1,2,2}.
It gives me 36 possible results with the following distribution:
0 : 4
1 : 8
2 : 12
3 : 8
4 : 4
Now, if I want to reroll once when I do not get at least a sum of 3, what would be:
1) The amount of possible results?
2) The result distribution?
Thanks
|
Consider we roll the two dice and, conditional on the sum of the face values, roll again (if the sum of face values is 0, 1, or 2) or stop (if the sum of the face values is 3 or 4). The sum of the two dice on the first roll will be 0 (zero) with probability $\dfrac{4}{36}$ as indicated. In this case, roll again and the sum of the two dice on the second roll will be:
outcome - probability
0 - $\dfrac{4}{36}$
1 - $\dfrac{8}{36}$
2 - $\dfrac{12}{36}$
3 - $\dfrac{8}{36}$
4 - $\dfrac{4}{36}$
or 144 possible ways (0-0, 16 ways, with probability $\dfrac{4}{36}$.$\dfrac{4}{36}$ for this (0-0) outcome; 0-1, 32 ways, with probability $\dfrac{4}{36}$.$\dfrac{8}{36}$ for this outcome; 0-2, 48 ways, with probability $\dfrac{4}{36}$.$\dfrac{12}{36}$ for this outcome; 0-3, 32 ways, with probability $\dfrac{4}{36}$.$\dfrac{8}{36}$ for this outcome; and 0-4, 16 ways, with probability $\dfrac{4}{36}$.$\dfrac{4}{36}$ for this outcome). Next, the sum of the two dice on the first roll will be 1 (one) with probability $\dfrac{8}{36}$ as indicated. In this case, roll again and the sum of the two dice on the second roll will be:
outcome - probability
0 - $\dfrac{4}{36}$
1 - $\dfrac{8}{36}$
2 - $\dfrac{12}{36}$
3 - $\dfrac{8}{36}$
4 - $\dfrac{4}{36}$
or 288 possible ways (determined similar to the above). Next, the sum of the two dice on the first roll will be 2 (two) with probability $\dfrac{12}{36}$ as indicated. In this case, roll again and the sum of the two dice on the second roll will be:
outcome - probability
0 - $\dfrac{4}{36}$
1 - $\dfrac{8}{36}$
2 - $\dfrac{12}{36}$
3 - $\dfrac{8}{36}$
4 - $\dfrac{4}{36}$
or 432 possible ways. Next, the sum of the two dice on the first roll will be 3 (three) with probability $\dfrac{8}{36}$ as indicated. In this case, stop. Next, the sum of the two dice on the first roll will be 4 (four) with probability $\dfrac{4}{36}$ as indicated. In this case, also stop. Drawing a probability tree using the above information might prove beneficial. Hope this reply helps.
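The whole computation can also be done exactly by enumeration in Python (a sketch with exact fractions; every probability has denominator dividing $36^2 = 1296$):
from fractions import Fraction
from itertools import product
faces = [0, 0, 1, 1, 2, 2]
base = {}
for a, b in product(faces, repeat=2):        # all 36 ordered first rolls
    base[a + b] = base.get(a + b, 0) + 1     # counts: 0:4, 1:8, 2:12, 3:8, 4:4
final = {s: Fraction(0) for s in range(5)}
for s, c in base.items():
    p = Fraction(c, 36)
    if s >= 3:                               # sum is 3 or 4: keep the first roll
        final[s] += p
    else:                                    # sum is 0, 1 or 2: reroll once
        for s2, c2 in base.items():
            final[s2] += p * Fraction(c2, 36)
for s in range(5):
    print(s, final[s])                       # 2/27, 4/27, 2/9, 10/27, 5/27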
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/950228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$\int\frac{2x+1}{x^2+2x+5}dx$ by partial fractions $$\int\frac{2x+1}{x^2+2x+5}dx$$
I know I'm supposed to make the bottom a perfect square by making it $(x+1)^2 +4$, but I don't know what to do after that. I've tried to set $x+1= \tan \theta$ because that's what we did in a class example, but I keep getting stuck.
|
$$
\int\frac{2x+1}{x^2+2x+5}dx
$$
I would first write $w=x^2+2x+5$, $dw=(2x+2)\,dx$, and then break the integral into
$$
\int\frac{2x+2}{x^2+2x+5}dx + \int\frac{-1}{x^2+2x+5}dx.
$$
For the first integral I would use that substitution. Then
$$
\overbrace{\int\frac{-dx}{x^2+2x+5} = \int\frac{-dx}{(x+1)^2 + 2^2}}^{\text{completing the square}} = \frac{-1}2\int \frac{dx/2}{\left(\frac{x+1}{2}\right)^2+1} = \frac{-1}2 \int\frac{du}{u^2+1}.
$$
Then we get an arctangent.
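As a final sanity check, sympy finds an equivalent antiderivative (illustrative only):
import sympy as sp
x = sp.symbols('x')
F = sp.integrate((2*x + 1) / (x**2 + 2*x + 5), x)
print(F)  # a logarithm plus an arctangent term, up to a constant
print(sp.simplify(sp.diff(F, x) - (2*x + 1) / (x**2 + 2*x + 5)))  # 0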
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/950354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Russell's paradox and axiom of separation I don't quite understand how the axiom of separation resolves Russell's paradox in an entirely satisfactory way (without relying on other axioms).
I see that it removes the immediate contradiction that is produced by unrestricted comprehension, but it seems that we still need further axioms to guarantee that a well-formed set $S$ will never contain the set of all given elements (of $S$) which do not contain themselves.
Is that correct?
|
I don't think the axiom scheme of separation "resolves" Russell's paradox at all, but restricts the way of using predicates to determine sets.
The paradox is nothing but a proof that there is no one-to-one correspondence between predicates and classes: there are predicates that do not define a set. When writing sets as $\{x\mid p(x)\}$, that one-to-one correspondence is tacitly assumed. Axiomatic set theory therefore has to assert which sets exist and give rules for creating new sets. If $A$ is a set, then the set $\{x\in A\mid p(x)\}$ will never cause any trouble within the theory.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/950428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
$\operatorname{rad}(I)=\bigcap_{I\subset P,~P\text{ prime}}P$ $R$ commutative ring with unity. $I$ $R$-ideal. Then $\operatorname{rad}(I)=\bigcap_{I\subset P,~P\text{ prime}}P$. That is, the radical of $I$ is the intersection of all prime ideals containing $I$.
There is a proof of this in my textbook, but I do not understand a certain piece of it. Here is the entire proof:
Clearly, $\operatorname{rad}(I) \subset \bigcap P$. Conversely, if $f\notin \operatorname{rad}(I)$, then any ideal maximal among those containing $I$ and disjoint from $\{f^n\mid n\geq1\}$ is prime, so $f\notin \bigcap P$.
I understand the first containment, but I do not know how to show that any ideal maximal among those containing $I$ and disjoint from $\{f^n\mid n\geq1\}$ is prime.
Could someone please give me a push in the right direction?
|
The following theorem, or some equivalent, probably appears in your book.
In a ring $R$, given an ideal $I$ and a multiplicative subset $S$ such that $S \cap I = \emptyset$, there exists a maximal ideal among those containing $I$ and disjoint from $S$; any such ideal is prime.
To prove this, note that such ideals are in one-to-one order-preserving correspondence with proper ideals of $S^{-1}R$ containing $S^{-1}I$. Thus it is enough to take the inverse image of a maximal ideal containing $S^{-1}I$ under the canonical homomorphism $R \to S^{-1}R$. The inverse image of a maximal ideal is always prime.
In the present situation, apply the theorem to $S = \{1 \} \cup \{f^n|n \geq 1\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/950562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|