| Q | A | meta |
|---|---|---|
Is $1-\alpha x\le (1-x)^{\alpha}\le 1-\alpha x+\frac{\alpha(\alpha-1)}{2}x^2,$ an inequality about generalized binomial coefficients, true? Is the following inequality true?
$$1-\alpha x\le (1-x)^{\alpha}\le 1-\alpha x+\frac{\alpha(\alpha-1)}{2}x^2$$ for real numbers $x,\alpha.$ We may assume $0\le x\le 1$ and put some requirements about $\alpha.$ And $(1-x)^\alpha=\sum_{k=0}^\infty\binom{\alpha}{k}x^k,$ where $\binom{\alpha}{k}=\frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}.$
I think I might have met this inequality in Stanley's book Enumerative Combinatorics 1 http://www-math.mit.edu/~rstan/ec/ec1.pdf or somewhere else, saying that $(1-x)^\alpha$ is greater than the sum of its expansion terms cut off at a negative term, and less than the sum cut off at a positive term. But I cannot remember it now and I am not sure. It also seems natural if we view it as a Taylor expansion. Any proofs, comments or references are very welcome!
|
For the upper bound, use
\begin{align}
(1-x)^r\leq e^{-rx}= 1-rx+ \frac{r^2x^2}{2!}-\frac{r^3x^3}{3!}+\ldots
\end{align}
for all $x<1$ and $r>0$.
Next, consider the following fact: for $x<0$, we have
\begin{align}
e^x \leq T_{2n}(x)
\end{align}
for all $n$ where
\begin{align}
T_n(x) = 1+x+\frac{x^2}{2!}+\ldots +\frac{x^n}{n!}.
\end{align}
Combining both results yields the desired inequality for all $r>0$ and $0<x<1$.
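The two facts used here can be spot-checked numerically (a Python sketch, not a proof):

```python
import math

# Numerical spot-check of the two facts above, for r > 0 and 0 < x < 1.
def step1_holds(r, x):
    """(1-x)^r <= e^{-rx}: follows from log(1-x) <= -x."""
    return (1 - x) ** r <= math.exp(-r * x) + 1e-12

def step2_holds(y):
    """e^y <= T_2(y) = 1 + y + y^2/2 for y < 0 (even-degree Taylor polynomial)."""
    return math.exp(y) <= 1 + y + y * y / 2 + 1e-12
```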
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2252384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
continuity of $e^{-1/z}$ in $\mathbb C$ Let $f$ be the function defined by $f(z)=e^{-1/z}$ in $\mathbb C$; prove that $f$ is continuous on the set $0< \vert z \vert < 1$, $\vert \arg(z) \vert <\pi/2$, but not uniformly continuous on it.
I think an easy way to prove it is firstly to show that $f$ is analytic, therefore f is differentiable. Hence $f$ is continuous.
Is there an easier way to prove it?
|
For continuity, note that $-1/z$ maps $\{0<|z|<1\}$ into $\{1<|z|<\infty\}$ continuously, $e^z$ is continuous everywhere, and the composition of continuous maps is continuous. As for uniform continuity, consider the behavior of $f(x+i\sqrt x) - f(x)$ as $x\to 0^+.$
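The hint can be seen numerically (Python sketch): the points $x$ and $x+i\sqrt x$ get arbitrarily close as $x\to 0^+$, yet their images stay roughly $e^{-1}$ apart.

```python
import cmath, math

# f(z) = exp(-1/z); compare f at the nearby points x and x + i*sqrt(x).
def f(z):
    return cmath.exp(-1 / z)

def gap(x):
    # |f(x + i*sqrt(x)) - f(x)|; stays bounded away from 0 as x -> 0+
    return abs(f(x + 1j * math.sqrt(x)) - f(x))
```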
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2252482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Relationship between the number of edges in the dual of graph with the degree of the original graph? Is the number of edges in the dual of a graph (not necessarily a true dual) related to the degree of the nodes of the (original) graph? If so, is there a generalized formula for this relationship?
|
To avoid bad cases like stars, I'll consider only finite simple 3-connected planar graphs. (In particular, all triangulations are 3-connected.) By Steinitz's theorem such graphs are exactly the 1-skeletons of convex polyhedra. By Whitney's theorem, all plane embeddings of a polyhedral graph $G$ are equivalent, that is, obtainable from one another by a plane homeomorphism up to the choice of outer face. In particular, the set of facial cycles (i.e., boundaries of faces) of $G$ does not depend on a particular plane embedding. Recall that the dual of a polyhedral graph $G$ is a graph $G^*$ whose nodes are the faces of $G$ (represented by their facial cycles). So the number of edges of the graph $G^*$ equals the number of edges of the original graph $G$, which equals half of the sum of the degrees of its nodes.
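A toy check on the cube graph (a polyhedral graph whose dual is the octahedron), as a Python sketch:

```python
# The cube: 8 vertices, 12 edges; its dual (the octahedron) also has 12 edges,
# which equals half the sum of the cube's vertex degrees.
cube_edges = [(0, 1), (1, 2), (2, 3), (3, 0),      # bottom face
              (4, 5), (5, 6), (6, 7), (7, 4),      # top face
              (0, 4), (1, 5), (2, 6), (3, 7)]      # vertical edges

degree = {}
for u, v in cube_edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

edges_in_dual = len(cube_edges)            # |E(G*)| = |E(G)| for polyhedral graphs
half_degree_sum = sum(degree.values()) // 2
```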
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2252566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is uniform convergence for one function and how is that equivalent to continuity? My teacher wanted to prove that function
$$\frac{1}{x} - \sum_{n=1}^\infty \frac{2x}{n^2-x^2}$$
is continuous. He said that it is equivalent to proving that
$$\sum_{n=1}^\infty \frac{1}{n^2-x^2}$$
converges uniformly. However, uniform convergence is defined for a sequence of functions, as I understand it. Here I see only one function: the expression depends purely on $x$ and nothing other than $x$, so it is just one function of $x$.
But even if it made sense, how is uniform convergence equivalent to continuity?
As far as I can tell, he didn't even prove uniform convergence (despite saying that). He only proved that the series above:
$$\sum_{n=1}^\infty \frac{1}{n^2-x^2}$$
converges (to a number smaller than something). And convergence (as I understand it) is different from uniform convergence.
How is that correct?
|
Theorem: the uniform limit of a sequence of continuous functions on an interval is a continuous function on that interval.
If you examine the teacher's proof that $\sum_{n=1}^\infty \frac{1}{n^2 - x^2}$ converges (presumably on some interval that doesn't contain any positive integer $n$), it may have in it some estimate that is true for all
$x$ in the interval. That will allow you to conclude that the convergence is
uniform.
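Numerically, the key point is a bound independent of $x$ (Weierstrass M-test). For instance on $[0,1/2]$, which stays away from the integers, the terms are dominated by the summable bound $1/(n^2-\tfrac14)$, so the supremum of the tail over $x$ is small uniformly (Python sketch):

```python
# Numeric stand-in for sup_x of the tail sum_{n>N} 1/(n^2 - x^2) on [0, 1/2].
def term(n, x):
    return 1.0 / (n * n - x * x)

def sup_tail(N, xs, total_terms=5000):
    # approximate sup over a grid of x of the tail beyond N
    return max(sum(term(n, x) for n in range(N + 1, total_terms)) for x in xs)

xs = [i / 100 for i in range(0, 51)]   # grid on [0, 1/2]
```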
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2252672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to show that $\{(x_1,x_2) \in \mathbb{R}^2 | \exp(x_1) + \exp(x_2) \leq c\}$ is unbounded if $c > 0$ Define $f(x) := \exp(x_1) + \exp(x_2) $
Let the sublevel set be given by $\{x = (x_1,x_2) \in \mathbb{R}^2 | \exp(x_1) + \exp(x_2) \leq c\}$
A plot of this function along with its contour is given as:
Clearly, the sublevel sets are unbounded for every $c>0$. But how do you prove this?
I am thinking let $x_2 = x_1 + a, a \in \mathbb{R}$, then we have $ \exp(x_1) + \exp(x_2) \leq c \implies \exp(x_1)(1+\exp(a)) \leq c \implies \exp(x_1) \leq k = \frac{c}{1+\exp(a)}$ and the latter inequality is satisfied by an uncountably many $x_1$, this way we cannot place a ball large enough to contain this set. But then this is only on one line (<- never mind). Very rough arguments here.
|
Since $\lim_{x\to -\infty} e^x =0$, it follows that for any $\frac{c}{2} \in \mathbb {R}^+$ there is some $a$ such that $e^x\leq \frac{c}{2}$ for all $x<a$; hence $e^x + e^y \leq c$ whenever $x<a$ and $y<a$.
This implies that the sublevel set $f^{-1}((-\infty,c])$ contains the set $\{(x,y) \in \mathbb {R}^2|x<a , y<a\}$ which is unbounded so $f^{-1}((-\infty,c])$ is unbounded.
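A quick numerical illustration of this argument (Python sketch; the value $c=1$ is an arbitrary choice):

```python
import math

# With a = log(c/2), every point (x, y) with x < a and y < a lies in the
# sublevel set, and such points have arbitrarily large norm.
def f(x, y):
    return math.exp(x) + math.exp(y)

def in_sublevel(x, y, c):
    return f(x, y) <= c

c = 1.0
a = math.log(c / 2)
```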
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2252788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
In which cases do multiple hyperbola branches have two intersection points? I am researching on hyperbolic localization techniques. In these techniques there are usually three anchor nodes $a_1, a_2$ and $a_3$ trying to position a blind node $b$. To do this, hyperbola branches are estimated which pass through the blind node and have the anchor nodes as foci. The position is then estimated as the intersection point of these hyperbola branches.
Image: Two hyperbola branches. Both passing through the blind node, one using $a_1$ and $a_2$ as foci, the other using $a_1$ and $a_3$.
However, hyperbola branches can have two intersection points. I am trying to understand, for which positions of the blind node, relative to the anchor nodes, there are two intersection points.
Image: The same scenario as before, but a different position of the blind node. The hyperbolas intersect in two points.
I have seen this figure in an academic paper, where the areas leading to two intersection points were colored: there are two intersection points if the blind node falls in one of these areas.
My goal is to create such figures myself. Therefore I have to understand the mathematical relation.
In this figure, for each anchor node there is one such area, constrained by a hyperbola branch. Apparently this hyperbola branch has as foci the respective anchor node and that anchor node mirrored across the midpoint between the other two anchor nodes. But I do not know how to determine the semi-axis $a$.
I am happy for any suggestions.
|
Without loss of generality we can place two anchor nodes ($A_m$ and $A_p$) symmetrically on the $x$ axis, and
place the third ($C$) in the upper half-plane as shown in the scheme above.
Let's consider the localization effected by node $C$ together with nodes $A_m$ and $A_p$, by crossing
the red and blue hyperbolic branches at point $P$.
Denote as $2c_m$ and $2c_p$ the distances from node $C$ to the other nodes, and as $M_m$ and $M_p$
the middle points of the connecting segments.
Clearly the hyperbolas will be centered on these midpoints, and will have linear eccentricity
(distance center to focus) equal to $c_m$, $c_p$.
Let's call $a_m$ and $a_p$ the semiaxes from center to vertices (the measured differences in distance).
We can deduce, from the figure and the properties of the hyperbola, that the branches will intersect
"properly" iff the respective asymptotes are "interleaved", i.e. if their points at infinity alternate (one "red", one "blue", ...).
That can be better figured by noting the "improper" situations below.
Now it is known that the angle that each asymptote makes with the hyperbola axis, the angle $\beta$ in the pictures,
is given by
$$
\beta = \arctan \left( {\sqrt {\left( {\frac{c}{a}} \right)^{\,2} - 1} } \right)
$$
The angles $\alpha$ made by the axis with the horizontal line are determined
from the positioning of the nodes.
So, with the notations in the figure, the angles $\gamma$ between the asymptotes
and the $x$ axis will be:
$$
\begin{array}{l}
\gamma _{\,m} = \alpha _{\,m} \pm \beta _{\,m} = \alpha _{\,m} \pm \arctan \left( {\sqrt {\left( {\frac{{c_{\,m} }}{{a_{\,m} }}} \right)^{\,2} - 1} } \right) \\
\gamma _{\,p} = \pi - \alpha _{\,p} \pm \beta _{\,p} = \pi - \alpha _{\,p} \pm \arctan \left( {\sqrt {\left( {\frac{{c_{\,p} }}{{a_{\,p} }}} \right)^{\,2} - 1} } \right) \\
\end{array}
$$
Therefore we shall ensure that
$$
\gamma _{\,m\, - } < \gamma _{\,p\, - } < \gamma _{\,m\, + } < \gamma _{\,p\, + }
$$
i.e.
$$ \bbox[lightyellow] {
\left\{ \begin{array}{l}
0 < \beta _{\,p} ,\beta _{\,m} < \pi /2 \\
- \alpha _{\,c} < \beta _{\,p} - \beta _{\,m} < \alpha _{\,c} \\
\alpha _{\,c} < \beta _{\,m} + \beta _{\,p} \left( { < \pi } \right) \\
\end{array} \right.\quad \left| {\;\alpha _{\,c} = \pi - \alpha _{\,p} - \alpha _{\,m} } \right.
} $$
where $\alpha _{\,c} $ is thus the angle in $C$.
The set of inequalities is rendered graphically as below.
From here it is just a computational task to deduce the bounds for $a_m$ and $a_p$ and from these,
which are the differences between the distances from $C$ and from the other nodes,
the boundary positions of the detectable point $P$.
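A small numeric sketch of the criterion above (Python; all the sample values are hypothetical, chosen only to exercise the inequalities):

```python
import math

# beta = arctan(sqrt((c/a)^2 - 1)) is the asymptote half-angle; then test the
# three interleaving inequalities from the boxed system above.
def beta(c, a):
    return math.atan(math.sqrt((c / a) ** 2 - 1))

def branches_cross_twice(alpha_m, alpha_p, beta_m, beta_p):
    alpha_c = math.pi - alpha_p - alpha_m          # the angle at C
    return (0 < beta_m < math.pi / 2
            and 0 < beta_p < math.pi / 2
            and -alpha_c < beta_p - beta_m < alpha_c
            and alpha_c < beta_m + beta_p)
```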
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2252889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Show that there exists a step function Suppose $f:[a,b] \rightarrow X$ is a continuous map. By an argument based on uniform continuity, show that for any $\epsilon>0$ there exists a step-function $u: [a,b] \rightarrow X$ such that for all $x \in [a,b]$, $||f(x)-u(x)||<\epsilon$.
Okay, so I have looked back to the uniform continuity-definition, but I still can't see how I can connect that definition with the exercise...
Any help would be highly appreciated...
|
We know that $0\le x-\lfloor{x}\rfloor<1$. Using this fact (taking $X=\mathbb{R}$ for concreteness), define
$$u(x)=\epsilon\cdot\left\lfloor \frac{f(x)}{\epsilon}\right\rfloor,$$
so that $0\le f(x)-u(x)<\epsilon$ for every $x$. It is the uniform continuity of $f$ on $[a,b]$ that guarantees $u$ changes value only finitely many times, i.e. that $u$ really is a step function.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Substitution rule for dirac measure I am trying to apply a substitution rule to a Lebesgue integral and get very strange results.
Substitution rule (an instantiation of Fremlin, D.H. (2010), Measure Theory, Volume 2, Theorem 263D): Let $\phi: \mathbb{R} \to \mathbb{R}$ be an injective function with derivative $\phi': \mathbb{R} \to \mathbb{R}$. Let $f: \mathbb{R} \to \mathbb{R}$ be measurable and $\mu: \Sigma_{\mathbb{R}} \to \mathbb{R}$ a measure ($\Sigma_{\mathbb{R}}$ is a $\sigma$-algebra on $\mathbb{R}$). Then,
$$
\int_{x \in \phi(\mathbb{R})} f(x) d\mu = \int_{x \in \mathbb{R}} |\phi'(x)|*f(\phi(x)) d\mu
$$
I instantiate $\mu$ with the Dirac measure $\delta(S) := [0 \in S]$, $f$ with the measurable function $f(x) := [x = 0]$ and $\phi$ with the function $\phi(x) := x+1$. Of course, $\phi'(x) = 1$.
Using this instantiation, I get
\begin{align*}
1 &= f(0) \\
&= \int_{x \in \mathbb{R}} f(x) d\delta && \text{property of the Dirac measure} \\
&= \int_{x \in \phi(\mathbb{R})} f(x) d\delta && \mathbb{R} = \phi(\mathbb{R}) \\
&= \int_{x \in \mathbb{R}} |\phi'(x)|*f(\phi(x)) d\delta && \text{substitution} \\
&= \int_{x \in \mathbb{R}} 1*f(x+1) d\delta \\
&= f(0+1) && \text{property of the Dirac measure} \\
&= 0
\end{align*}
What did I do wrong?
|
What you wrote doesn't match Theorem 263D in my copy of Fremlin. I think you missed an important sentence at the beginning of Section 263.
Throughout this section, as in the rest of the chapter, $\mu$ will denote Lebesgue measure on $\mathbb R^r$.
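Indeed, evaluating both sides directly with the defining property $\int g\,d\delta = g(0)$ of the Dirac measure shows the mismatch (Python sketch):

```python
# Direct evaluation of both sides of the (mis)applied substitution rule.
def f(x):
    return 1 if x == 0 else 0        # f = indicator of {0}

phi = lambda x: x + 1                # phi' = 1 everywhere

lhs = f(0)                           # int_{phi(R)} f ddelta = int_R f ddelta = f(0)
rhs = abs(1) * f(phi(0))             # int_R |phi'| * (f o phi) ddelta = f(phi(0))
```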
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I simplify $\tan(\alpha-\beta)$ into $\frac{\tan\alpha-\tan\beta}{1+\tan\alpha\tan\beta}$? How do I simplify $\tan(\alpha-\beta)$ into $\frac{\tan\alpha-\tan\beta}{1+\tan\alpha\tan\beta}$?
I tried:
$$\tan(\alpha-\beta) = \\\frac{\sin(\alpha-\beta)}{\cos(\alpha-\beta)}=\\\frac{\sin(\alpha)\cos(\beta)-\cos(\alpha)\sin(\beta)}{\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)} = \\\frac{\sin\alpha\cos\beta}{\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)}-\frac{\cos\alpha\sin\beta}{\cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)} = ???$$
What do I do next?
|
From
$$ \frac{\sin \alpha \cos \beta - \cos \alpha \sin \beta}{\cos \alpha \cos \beta + \sin \alpha \sin \beta} $$
divide the numerator and denominator by $\cos \alpha \cos \beta$.
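A quick numeric check of the resulting identity (Python sketch):

```python
import math

# Right-hand side of the identity tan(a - b) = (tan a - tan b)/(1 + tan a * tan b).
def tan_diff(a, b):
    return (math.tan(a) - math.tan(b)) / (1 + math.tan(a) * math.tan(b))
```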
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does rotating a matrix change its determinant? For a $2 \times 2$, it is easy to see the determinant only changes sign.
\begin{align*}
\left(
\begin{array}{cc}
a & b \\ c & d
\end{array}
\right) \mapsto
\left(
\begin{array}{cc}
c & a \\ d & b
\end{array}
\right)
\end{align*}
We can see that $\det(A) = -\det(A')$, where $A$ is the original matrix and $A'$ is the rotated matrix. Is this always the case for any $n \times n$ matrix?
Also, this would imply that $\det(A) = \det(A'')$.
Thanks for any advice!
|
With a $4\times 4$ matrix, rotating preserves the determinant.
In general, rotating means transposing (which preserves the determinant) followed by turning the matrix upside down, i.e. reversing the order of the rows; the reversal amounts to $\lfloor n/2\rfloor$ row swaps, so it multiplies the determinant by $(-1)^{\lfloor n/2\rfloor}$.
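This can be spot-checked on small integer matrices (Python sketch, using a naive cofactor-expansion determinant):

```python
# Check det(rot90(A)) == (-1)^(n//2) * det(A) on small matrices.
def det(M):
    # cofactor expansion along the first row (fine for tiny n)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def rot90(M):
    # 90-degree clockwise rotation: transpose, then reverse each row
    return [list(r)[::-1] for r in zip(*M)]
```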
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How many ways are there to go from the point $(0,0)$ to $(m,n)$ using,right up and down moves that we don't pass a point more than once? How many ways are there to go from the point $(0,0)$ to $(m,n)$ using,right up and down moves that we don't pass a point more than once?
I tried to calculate every case (depending on the down moves), hoping that the identity $\sum\limits_{r=0}^m n^r \binom{m}{r}=(n+1)^m$ would then apply (because the answer in the book is $(n+1)^m$), but it didn't work. Any hints?
|
Under the assumption that movement is only allowed in the grid formed by corner points $(0,0), (m, 0), (0, n), (m,n)$:
For each column, there are $n+1$ ways to pick where the horizontal movement occurs. After placing the horizontal movements for each column, there's only one way to connect them all (and these connections are done entirely in terms of up/down movements that do not revisit any position more than once).
Over all $m$ columns, that's $(n+1)^m$ paths in total, which matches the answer in your book.
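The count can be brute-force checked for small grids (Python sketch):

```python
# Count self-avoiding paths from (0,0) to (m,n) using right/up/down moves,
# staying inside the grid [0,m] x [0,n]; the claim is (n+1)^m such paths.
def count_paths(m, n):
    target = (m, n)

    def dfs(pos, visited):
        if pos == target:
            return 1
        x, y = pos
        total = 0
        for nx, ny in ((x + 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx <= m and 0 <= ny <= n and (nx, ny) not in visited:
                total += dfs((nx, ny), visited | {(nx, ny)})
        return total

    return dfs((0, 0), {(0, 0)})
```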
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A more modern textbook on Axiomatic Set Theory, at the same level of rigor as Suppes? I'm currently using Suppes textbook to learn axiomatic set theory. Is there a more modern textbook that is just as well-written? I'm thinking of a textbook that still has a treatment of urelements (for example), but is modern enough that the empty set isn't denoted by 0. Thanks!
|
My personal preference is Set Theory: An Introduction To Independence Proofs by K. Kunen, although it does not cover ur-elements; it's almost entirely on ZF and ZFC. I get lost in the details of the definition of Gödel's constructible class $L$ there. For an easy and useful definition of $L$, I suggest the essay in the book Lectures In Set Theory (various authors; edited by Morley).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Equation with a permutation composition Is there any method to solving such an equation:
$$f_1\circ f = f_2$$
Where $f_1, f, f_2 \in S_7$ and:
$f_1 = (1234)(5)(6)(7)$
$f_2 = (172536)(4)$
|
So to my mind this would be the solution:
1) first we find $f_1^{-1}$: since
$$f_1^{-1}\circ f_1 = \mathrm{id},$$
inverting the $4$-cycle gives $f_1^{-1} = (1432)(5)(6)(7)$.
2) now we compute:
$$f = f_1^{-1} \circ f_2 = (17)(25)(364)$$
And that is the permutation $f$ we were looking for.
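The composition can be machine-checked under the convention $(f_1\circ f)(x)=f_1(f(x))$ (Python sketch):

```python
# Permutations of {1,...,7} as dicts; f should satisfy f1(f(x)) = f2(x).
f1 = {1: 2, 2: 3, 3: 4, 4: 1, 5: 5, 6: 6, 7: 7}   # (1234)
f2 = {1: 7, 7: 2, 2: 5, 5: 3, 3: 6, 6: 1, 4: 4}   # (172536)

f1_inv = {v: k for k, v in f1.items()}             # (1432)
f = {x: f1_inv[f2[x]] for x in range(1, 8)}        # f = f1^{-1} o f2
```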
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find parameter so that the equation has roots in arithmetic progression Find the parameter $m$ so that the equation
$$x^8 - mx^4 + m^4 = 0$$
has four distinct real roots in arithmetic progression.
I tried the substitution $x^4 = y$, so the equation becomes
$$y^2 -my + m^4 = 0$$
I don't know what condition should I put now or if this is a correct approach.
I have also tried to use Viete's by the notation $$2x_2=x_1+x_2, 2x_3=x_2 +x_4$$ but I didn't get much out of it.
|
Zero can't be a root, else $m=0$, in which case all the roots would be zero.
If $r$ is any root, so is $ri$, hence there must be exactly $4$ real roots, and $4$ pure imaginary roots.
Also, if $r$ is a root, so is $-r$, hence the real roots sum to zero.
Ordering the real roots in ascending order, let $d > 0$ be the common difference.
Then the four real roots are
$$-\frac{3}{2}d,\;-\frac{1}{2}d,\;\frac{1}{2}d,\;\frac{3}{2}d$$
and the other four roots are
$$-\frac{3}{2}di,\;-\frac{1}{2}di,\;\frac{1}{2}di,\;\frac{3}{2}di$$
Since the $4$-th powers of the roots satisfy the quadratic
$$y^2 - my + m^4 = 0$$
Vieta's formulas yields the equations
$$\left(\frac{1}{2}d\right)^4+\left(\frac{3}{2}d\right)^4 = m$$
$$\left(\frac{1}{2}d\right)^4\left(\frac{3}{2}d\right)^4 = m^4$$
$$\text{or, in simpler form}$$
$$\frac{41}{8}d^4 = m\tag{eq1}$$
$$\frac{81}{256}d^8 = m^4\tag{eq2}$$
Solving $(\text{eq}1)$ for $d^4$, substituting the result into $(\text{eq}2)$, and then solving for $m$ yields
$$m = \frac{9}{82}$$
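A numerical check of $m=\frac{9}{82}$ (Python sketch):

```python
import math

# Solve y^2 - m*y + m^4 = 0, take the real fourth roots, and confirm the four
# real roots of x^8 - m*x^4 + m^4 form an arithmetic progression.
m = 9 / 82
disc = m * m - 4 * m ** 4
y1 = (m - math.sqrt(disc)) / 2
y2 = (m + math.sqrt(disc)) / 2
roots = sorted([-y2 ** 0.25, -y1 ** 0.25, y1 ** 0.25, y2 ** 0.25])
diffs = [roots[i + 1] - roots[i] for i in range(3)]
```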
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $ab$ is a square number and $\gcd(a,b) = 1$, then $a$ and $b$ are square numbers.
Let $n, a$ and $b$ be positive integers such that $ab = n^2$. If $\gcd(a, b) = 1$, prove that there exist positive integers $c$ and $d$ such that $a = c^2$ and $b = d^2$
So far I have tried this:
Since $n^2 = ab$ we have that $n = \sqrt{ab}$.
Because $\gcd(a,b) = 1$, there exist integers $k$ and $l$ such that $ak + bl = 1$. This means that $\sqrt{a}(k\sqrt{a}) + \sqrt{b}(l\sqrt{b}) = 1$.
Hence $\sqrt{a}$ and $\sqrt{b}$ are both positive integers and we can set $\sqrt{a} = c$ for some arbitrary integer $c$ and $\sqrt{b} = d$ for some arbitrary integer $d$. Therefore, $a = c^2$ and $b = d^2$.
|
Let $ab=c^2$ for some $c\in \mathbb N$; the result holds trivially if either of the integers is $1$, since $1=1^2$.
So let us take $a>1$, $b>1$ and $c>1$. We can use prime factorization and represent the integers as follows:
$a=p_1^{d_1} \cdots p_r^{d_r}$,
$b=q_1^{e_1} \cdots q_s^{e_s}$,
and $c=k_1^{l_1} \cdots k_t^{l_t}$.
Thus $ab=c^2$ becomes
$$p_1^{d_1} \cdots p_r^{d_r} \, q_1^{e_1} \cdots q_s^{e_s} = k_1^{2l_1} \cdots k_t^{2l_t}.$$
Since $\gcd(a,b)=1$, all the primes $p_1,\dots,p_r$ are different from $q_1,\dots,q_s$. Hence, by uniqueness of factorization, the $p$'s and $q$'s together are just a rearrangement of the $k$'s, and the exponents $d_i$ and $e_j$ are a corresponding rearrangement of $2l_1,\dots,2l_t$. Hence each $d_i$ and $e_j$ is even, so $a$ and $b$ are perfect squares.
Using the same argument we can also prove that if $ab=c^n$ with $\gcd(a,b)=1$ then $a=x^n$ and $b=y^n$.
P.S. An extension of Alex's answer.
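A brute-force sanity check over small numbers (Python sketch):

```python
import math

# If gcd(a,b) = 1 and a*b is a perfect square, then a and b are perfect squares.
def is_square(k):
    r = math.isqrt(k)
    return r * r == k

def statement_holds(limit):
    for a in range(1, limit):
        for b in range(1, limit):
            if math.gcd(a, b) == 1 and is_square(a * b):
                if not (is_square(a) and is_square(b)):
                    return False
    return True
```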
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2253940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
Given the product is measurable, is each factor measurable? Given a random variable $M$ on $(\Omega,\mathscr F, \Bbb P)$ and $M=X\cdot Y$, can we proof that $X$ and $Y$ are also measurable?
To be more specific, I was thinking about if a process $M_t=X_t\cdot Y_t$ is adapted to its natural filtration $\mathscr F_t^M$. Is $X_t$ and $Y_t$ also adapted to the natural filtration?
Thanks for any thought.
|
No; not necessarily.
Witness a sample space, $(\Omega,\mathcal F) =\big( \{(1,1),(1,2),(2,1)\},\{\emptyset, \{(1,1)\}, \{(1,2),(2,1)\}, \Omega\}\big)$.
We define the random variables $X:(x,y)\mapsto x$, $Y:(x,y)\mapsto y, M:(x,y)\mapsto xy$.
Then $M(\omega)=[X\cdot Y] (\omega)$, and $M$ is $\mathcal F$-measurable, but neither $X$ nor $Y$ is.
* $M^{-1}\{1\}= \{(1,1)\}\in \mathcal F$ and $M^{-1}\{2\}=\{(1,2),(2,1)\}\in \mathcal F$
* $X^{-1}\{1\}= \{(1,1),(1,2)\}\notin \mathcal F$ and $X^{-1}\{2\}=\{(2,1)\}\notin \mathcal F$
* $Y^{-1}\{1\}= \{(1,1),(2,1)\}\notin \mathcal F$ and $Y^{-1}\{2\}=\{(1,2)\}\notin \mathcal F$
So $M$ being $\mathcal F$-measurable and $M=X\cdot Y$ is insufficient to prove that either $X$ or $Y$ are $\mathcal F$-measurable.
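The counterexample is small enough to machine-check (Python sketch; since $\mathcal F$ is closed under unions, checking preimages of single values suffices here):

```python
# Enumerate preimages of each value and test membership in the sigma-algebra F.
omega = [(1, 1), (1, 2), (2, 1)]
F = [set(), {(1, 1)}, {(1, 2), (2, 1)}, set(omega)]

def measurable(rv):
    values = {rv(w) for w in omega}
    return all({w for w in omega if rv(w) == v} in F for v in values)

X = lambda w: w[0]
Y = lambda w: w[1]
M = lambda w: w[0] * w[1]
```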
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Question on Limits - Asymptote As $x$ gets larger, $(x^3-8)/(x^2-4)$ approaches
a. 0
b. 1
c. 2
d. 3
e. infinity.
The answer is 3, but I do not think it is correct. Shouldn't it be infinity, as we will have a slant asymptote?
|
We have an $x^3$ in the numerator and an $x^2$ in the denominator. $x^3$ increases much faster than $x^2$, so as $x$ gets large, the fraction will approach infinity.
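Concretely, polynomial division gives $\frac{x^3-8}{x^2-4} = x + \frac{4}{x+2}$, so the ratio tracks the slant asymptote $y=x$ and grows without bound (Python sketch):

```python
# The ratio exceeds x itself for x > 2, since (x^3-8)/(x^2-4) = x + 4/(x+2).
def r(x):
    return (x ** 3 - 8) / (x ** 2 - 4)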
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove that $\int_{0}^{\infty}{\sin^4(x)\ln(x)}\cdot{\mathrm dx\over x^2}={\pi\over 4}\cdot(1-\gamma)?$
How to prove that
$$\int_{0}^{\infty}{\sin^4(x)\ln(x)}\cdot{\mathrm dx\over x^2}={\pi\over 4}\cdot(1-\gamma).\tag1$$
Here is my attempt:
$$I(a)=\int_{0}^{\infty}{\ln(x)\sin^4(x)\over x^a}\,\mathrm dx\tag2$$
$$I'(a)=\int_{0}^{\infty}{\sin^4(x)\over x^a}\,\mathrm dx\tag3$$
$$I'(2)=\int_{0}^{\infty}{\sin^4(x)\over x^2}\,\mathrm dx\tag4$$
$$I'(2)=\int_{0}^{\infty}{\sin^2(x)\over x^2}\,\mathrm dx-{1\over 4}\int_{0}^{\infty}{\sin^2(2x)\over x^2}\,\mathrm dx=-{\pi\over 2}\tag5$$
Why is this way wrong?
How to prove (1)?
|
I will outline a self-contained approach, too. By differentiating the integral definition of the $\Gamma$ function, we get the following Lemma:
$$ \mathcal{L}(\log x) = -\frac{\gamma+\log(s)}{s}\tag{1} $$
and it is not difficult to compute from $(1)$ the Laplace transform of $\sin^4(x)\log(x)$.
By Euler/De Moivre's formula we have
$$ \sin^4(x) = \frac{1}{16}\left( e^{4ix}+4 e^{2ix}+6+4 e^{-2ix}+e^{-4ix}\right)\tag{2}$$
and by the shift properties of the Laplace transform and $(1)$ we get
$$ \forall k\in\mathbb{Z},\qquad \mathcal{L}\left(e^{-kix}\log x\right) = -\frac{\gamma+\log(ki+s)}{ki+s}\tag{3} $$
so by $(1),(2),(3)$ and simple algebraic manipulations we get:
$$ \mathcal{L}\left(\sin^4(x)\log x\right) = -\frac{24\gamma}{64s+20s^3+s^5}+\text{LogTerm}$$
$$ {\scriptsize\text{LogTerm} = \frac{1}{16} \left(-\frac{6 \log(s)}{s}+\frac{4 \left(4 \arctan\left(\frac{2}{s}\right)+s \log\left(4+s^2\right)\right)}{4+s^2}-\frac{8\arctan\left(\frac{4}{s}\right)+s \log\left(16+s^2\right)}{16+s^2}\right)}\tag{4} $$
and since $\mathcal{L}^{-1}\left(\frac{1}{x^2}\right)=s$, the computation of the original integral boils down to the computation of elementary integrals by integration by parts. For instance, the term $-\frac{\pi\gamma}{4}$ comes from
$$ \int_0^{+\infty } \frac{24}{(4+s^2)(16+s^2)} \, ds=\frac{\pi}{4}.\tag{5}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
On generating set for abelian $p$-group Let $G$ be a finite abelian $p$-group. Let $\{x_1,\cdots,x_r\}$ be a subset of $G$ with following property:
(1) $\langle x_1, \cdots, x_r\rangle=\langle x_1\rangle \oplus \cdots \oplus \langle x_r\rangle$.
(2) No $x_i$ is a $p$-th power in $G$. [In other words, all $x_i$'s are outside Frattini subgroup $\Phi(G)$.]
Q. Can we always extend the above set to set to $\{x_1,\cdots, x_r, x_{r+1},\cdots, x_l\}$ such that
$$G=\oplus_{i=1}^l \langle x_i\rangle?$$
In the proof of fundamental theorem of finite abelian group, the set $\{x_1,\cdots, x_r\}$ has an additional property: $x_1$ is of maximum order in $G$; $x_2$ is of maximum order with property that $\langle x_2\rangle$ intersects trivially with $\langle x_1\rangle$, and so on. Then one proceeds inductively to extend above set to generating set which gives cyclic decomposition of $G$.
In above problem, I am not considering any such restriction related to orders of elements.
|
No. Let $G = \langle a \rangle \oplus \langle b \rangle$ where $a$ and $b$ have orders $p$ and $p^3$, respectively, and let $H = \langle x \rangle$ with $x = ab^p$. So $H \cong C_{p^2}$ and is not a direct summand of $G$.
(I once made a mistake myself in a proof and this was essentially the counterexample.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Square and multiply algorithm I'm trying to understand the square and multiply algorithm:
If I understand it correctly, whenever the exponent is even, we divide it by 2 but square the base, and whenever it is odd, we take an x out and subtract 1 off the exponent.
So, when running the algorithm on $2^{10}$, I was expecting the following to happen:
10 is even, so we square: $(2^2) ^5$
5 is odd, so we subtract 1 and take an x out: $2*(2^2) ^4$
But this is obviously 512 and not 1024 anymore.
|
Error: in second step, As @Kenny mentioned, $x$ should be $2^2$ instead of $2$. We are talking about $(x^2)^{y/2}$ and here, $x$ happens to be $2^2$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $f$ is integrable on $[a,b] $ Suppose that $f(x)=0$ for all $x$ in $[a,b]$ except for some $ z $ in $[a,b]$ Prove that $f$ is integrable on $[a,b] $.
My try:If we can show that $f$ is continuous in $[a,b]$,then the result will follow.Thank you.
|
Continuity would give integrability, but this function cannot be continuous since $\displaystyle \lim_{x \rightarrow z} f(x) = 0 \neq f(z)$, and this condition must be true for every $y \in [a,b]$ if $f$ is indeed continuous on this interval. However, it isn't too difficult to tackle the problem directly from the definition:
A function is Riemann integrable on $[a,b] \iff$ for any $\varepsilon > 0$, there exists a partition $ \mathcal{P} = [a\!=\!x_1, \ x_2, \ \cdots, \ x_n\!=\!b]$ of $[a,b]$ such that $U(f, \mathcal{P}) - L(f, \mathcal{P}) < \varepsilon$, where:
$$\displaystyle L(f, \mathcal{P}) = \sum_{i} (x_{i+1} - x_i)\inf \Big( \{f(x) \ | \ x \in [x_i, x_{i+1}] \} \Big)$$
$$\displaystyle U(f, \mathcal{P}) = \sum_i (x_{i+1} - x_i)\sup \Big( \{ f(x) \ | \ x \in [x_i, x_{i+1}] \} \Big)$$
and $x_i \in \mathcal{P}$. Now consider any partition $\mathcal{P}$ of $[a,b]$. The lower sum is always zero because the infimum of the function values along any interval, even the interval containing $z$, is zero. Further, the supremum of the function values along any interval is $0$ except for the interval containing $z$. Therefore:
$$U(f, \mathcal{P}) - L(f, \mathcal{P}) = U(f, \mathcal{P}) = \text{width of the interval containing z}$$
So, is $f$ integrable on $[a,b]$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
If $\lim_{n \rightarrow \infty} (a_{n+1}-\frac{a_n}{2})=0$ then show $a_n$ converges to $0$. I have been stuck on this question for a while now. I have tried many attempts. Here are two that I thought looked promising but lead to a dead end:
Attempt 1:
Write out the terms of $b_n$:
$$b_1=a_{2}-\frac{a_{1}}{2}$$
$$b_2=a_{3}-\frac{a_{2}}{2}$$
$$b_3=a_{4}-\frac{a_{3}}{2}$$
$$\cdots$$
$$b_n=a_{n+1}-\frac{a_{n}}{2}$$
Adding up the terms you get:
$$\sum_{i = 1}^n b_i=a_{n+1}+\frac{a_n}{2}+\frac{a_{n-1}}{2}+\cdots+\frac{a_2}{2}-\frac{a_1}{2}.$$
But a dead end here.
Attempt 2:
For $ε=\dfrac{1}{2}$, $\exists K$ such that $\forall n>K$, $$\left|a_{n+1}-\frac{a_n}{2}\right|<\frac{1}{2}.$$
Now I attempt to prove $\{a_n\}$ is Cauchy and hence converges.
For $m>n>K$,
\begin{align*}
|a_m-a_n|&=\left|a_m-\frac{a_{m-1}}{2}+\frac{a_{m-1}}{2}-\frac{a_{m-2}}{2^2}+\cdots -+\frac{a_{n+1}}{2^{m-n-1}}-a_n\right|\\
&\leq \left|a_m-\frac{a_{m-1}}{2}\right|+\frac{1}{2}\left|a_{m-1}-\frac{a_{m-2}}{2}\right|+\cdots+\left|\frac{a_n}{2^{m-n}}-a_n\right|\\
&\leq \frac{1}{2}+\frac{1}{2} × \frac{1}{2}+\cdots+\left|\frac{a_n}{2^{m-n}}-a_n\right|\\
&<1+\left|\frac{a_n}{2^{m-n}}-a_n\right|,
\end{align*}
and a dead end.
|
Let $\epsilon > 0$.
Since $a_{n+1}-a_n/2$ converges to $0$, there is an integer $m$ such that
for any $n \ge m$, $|a_{n+1}-a_n/2| \le \epsilon/4$.
For such an $n$, you have
$|a_{n+1}| - \epsilon/2 \\
\le |a_{n+1} - a_n/2| + |a_n/2| - \epsilon/2 \\
\le \epsilon/4 + |a_n|/2 - \epsilon/2 \\
= |a_n|/2 - \epsilon/4 \\
= (|a_n| - \epsilon/2)/2$
Intuitively you can interpret this as something saying that $|a_n|$ has to decrease somewhat exponentially at least until $|a_n|$ gets too close to $\epsilon/2$.
Then let us show there is an $m' \ge m$ such that $|a_{m'}| \le \epsilon$.
If $|a_m| \le \epsilon$ then we are already done by picking $m'=m$, so suppose $|a_m| > \epsilon$.
Now, $(|a_m|- \epsilon/2) / (\epsilon/2) > 1 > 0$ so there is an integer $k$ such that $2^k \ge (|a_m|- \epsilon/2) / (\epsilon/2)$.
Looking at $m' = m+k$ we get
$|a_{m+k}| - \epsilon/2 \le (|a_m| - \epsilon/2) 2^{-k} \le \epsilon/2$,
and so $|a_{m'}| \le \epsilon$.
Then we can prove by induction that for any $n \ge m'$, $|a_n| \le \epsilon$ :
This is true for $n=m'$.
Suppose $n \ge m'$ and $|a_n| \le \epsilon$.
Then $|a_n|-\epsilon/2 \le \epsilon/2$, and so because $n \ge m$,
$|a_{n+1}| - \epsilon/2 \le (|a_n| - \epsilon/2)/2 \le \epsilon/4 < \epsilon/2$, and finally $|a_{n+1}| \le \epsilon$.
Therefore, for all $n \ge m', |a_n| \le \epsilon$, and we have shown that the sequence $a_n$ converges to $0$.
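As a quick numerical sanity check of the statement (a hypothetical example, not part of the proof), one can build a concrete sequence satisfying $a_{n+1}-a_n/2\to 0$ and watch $|a_n|\to 0$:

```python
# Choose b_n -> 0 and define a_{n+1} = a_n / 2 + b_n, so that
# a_{n+1} - a_n / 2 = b_n -> 0. The argument above predicts a_n -> 0.
def sequence(n_terms, a0=10.0):
    a = a0
    terms = [a]
    for n in range(1, n_terms):
        b = 1.0 / n          # b_n -> 0
        a = a / 2 + b        # forces a_{n+1} - a_n/2 = b_n
        terms.append(a)
    return terms

terms = sequence(2000)
print(terms[-1])  # close to 0
```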
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
}
|
What's the magnitude of a real number? As a student of mathematics (first year master degree) I have to admit that I'm somewhat ashamed to ask this.
We know that if $z=x+iy$ is a complex number then we can identify it as $z=r\cdot\exp(i\theta)$. But what if $z$ is real - in other words its $y$ equals 0? Then $z=r\exp(i\cdot0)=r$ and this means that $z$ would be equal to its magnitude $r$ if $z$ is positive. But what if $z$ is negative? We know that the magnitude is always positive and so we'll get $z$(negative) = $r$(positive)?
I'm sure there's something I'm missing here.
|
Since it has not been pointed out so far, it is crucial to realize that while Cartesian coordinates are unique, polar coordinates are not unique, because the pair $(r\cos(t),r\sin(t))$ is exactly the same as $(r\cos(t+2\pi n),r\sin(t+2\pi n))$ for any integer $n$. Furthermore, $(0\cos(t),0\sin(t)) = (0,0)$ for any real $t$. So if we want to represent points by polar coordinates $(r,t)$, we usually stipulate a range for $t$. Commonly $0 \le t < 2\pi$ or alternatively $-\pi < t \le \pi$. In either case, if we require $r>0$ then $(r,t)$ is unique for any point except the origin, because the distance from the origin and the angle of the ray from the origin are uniquely determined. There are still no unique polar coordinates for the origin.
One could choose to allow $r$ to be negative and restrict $t$ further to $0 \le t < π$, so that the point $(-1,0)$ (cartesian coordinates) can have 'polar' coordinates $(-1,0)$ rather than $(1,π)$. However, this is not as elegant or convenient as the usual definition of polar coordinates.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 6
}
|
One-step transition probabilities for a markov chain? Imagine m balls being exchanged between two adjacent chambers (left and right) according to the following rules. At each time step, one of the m balls is randomly selected and moved to the opposite chamber, i.e., if the selected ball is currently in the right chamber, it will be moved to the left one, and vice versa. Let $X_n$ be the number of balls in the left chamber after the nth exchange. For m=3 I want to find all the one step transition probabilities. I know the state space will be {0,1,2,3} and that I am looking for Probabilities, when it goes from 0->1, 1->0, 2->1, 1->2, 3->2, 2->3. I am struggling with how to account for the fact that the balls can have different starting positions? For example going from 1->0 you can either pick the one ball in the left chamber and move it, or pick one of the two balls in the right chamber and move it to the left making it a 1->2 transition, so what would the probability for something like that look like?
|
$\{0,1,2,3\}$ is the set of states, that is, the set of the possible numbers of balls in the left chamber.
Assume that the system is in state $i$ ($i=0,1,2,3$). The probability that the system goes to state $i-1$ is $\frac i3$ because this is the probability that one selects a ball from the left box. The probability that the system goes to state $i+1$ is $\frac{3-i}3$ because this is the probability that one selects a ball from the right box.
For example, if the system is in state $1$ then there are only two possible transitions:
the system can go to state $2$ (with probability $\frac23$) or to state $0$ (with probability $\frac13$).
The state transition probability matrix is then
$$P=
\begin{bmatrix}
0&1&0&0\\
\frac13&0&\frac23&0\\
0&\frac23&0&\frac13\\
0&0&1&0
\end{bmatrix}.$$
(Here the rows are assigned to the present state and the columns to the next state.)
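To make the bookkeeping concrete, here is a short sketch that builds $P$ from the rule above for general $m$ (using exact fractions) and checks the $m=3$ case:

```python
from fractions import Fraction

def transition_matrix(m):
    # State i = number of balls in the left chamber.
    # From state i: move a left ball with probability i/m (go to i-1),
    # or a right ball with probability (m-i)/m (go to i+1).
    P = [[Fraction(0)] * (m + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        if i > 0:
            P[i][i - 1] = Fraction(i, m)
        if i < m:
            P[i][i + 1] = Fraction(m - i, m)
    return P

P = transition_matrix(3)
# rows: present state, columns: next state
for row in P:
    print([str(x) for x in row])
```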
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2254873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How can we prove that there are $2^{\mathfrak{c}}$ Hamel bases? I know that there are $2^{\mathfrak{c}}$ distinct Hamel bases for $\mathbb{R}$ over $\mathbb{Q}$ but what is the demonstration for that?
|
There are many ways to show this; here's one simple way. Let $B$ be any Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$, which must have cardinality $\mathfrak{c}$, and partition $B$ into two subsets $C$ and $D$ with a bijection $f:C\to D$ (so $|C|=|D|=\mathfrak{c}$). For any subset $S\subseteq C$, the following set is a Hamel basis:
$$(C\setminus S)\cup(D\setminus f(S))\cup\{c+f(c):c\in S\}\cup\{c-f(c):c\in S\}.$$ In words: split the basis $B$ into pairs $\{c,f(c)\}$, and then for each $c\in S$ replace $\{c,f(c)\}$ by $\{c+f(c),c-f(c)\}$, which has the same span. This gives a distinct basis for each $S\subseteq C$, and there are $2^\mathfrak{c}$ such subsets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does Yoneda embedding reflect equivalent categories? Let $\mathsf{Cat}$ denote the category of small categories. For categories $\mathcal A$ and $\mathcal B$ in $\mathsf{Cat}$, let $[\mathcal A,\mathcal B]$ denote the category whose objects are functors form $\mathcal A$ to $\mathcal B$ and morphisms are natural transformation between those functors. My question is
Given a functor $F:\mathcal A\to\mathcal B$. Suppose for any $\mathcal C\in\text{ob}\mathsf{Cat}$ we have $F^*:[\mathcal B,\mathcal C]\to[\mathcal A,\mathcal C]$ is an equivalence, or for any $\mathcal C\in\text{ob}\mathsf{Cat}$ we have $F_*:[\mathcal C,\mathcal A]\to[\mathcal C,\mathcal B]$ is an equivalence. Can we deduce that $F$ is an equivalence?
|
Yes. This is a special case of the $2$-categorical Yoneda Lemma. Here is a direct proof.
Assume that $F^*$ is an equivalence for all categories $C$. In particular, $F^* : [B,A] \to [A,A]$ is essentially surjective. Choose some $G : B \to A$ with $GF \cong \mathrm{id}_A$. We have $FG \cong \mathrm{id}_B$ since $FGF \cong \mathrm{id}_B F$ and $F^* : [B,B] \to [A,B]$ is fully faithful.
You can use the same proof for $F_*$. Or you can give a quick argument as shown by Clive Newstead in the comments.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Find a non-zero integer matrix $X$ such that $XA=0$ where $X,A,0$ are all $4 \times 4$ Let $A$ be the following $4 \times 4$ matrix.
\begin{bmatrix}1&2&1&3\\1&3&2&4\\2&5&3&7\\1&4&1&5\end{bmatrix}
How can we find a non-zero integer matrix $C$ such that $CA = 0_{4 \times 4}$
Note that $0$ is a $4 \times 4$ matrix.
|
Hint : The rank of $A$ is $3$ so : $$\exists P,Q \in GL_n(\mathbb R),A=P\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&0\end{bmatrix}Q$$
$$CA=0 \Rightarrow CP\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&0\end{bmatrix}Q=0\Rightarrow C\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&0\end{bmatrix}=Q^{-1}P^{-1}$$
Calculate $P$ and $Q$ to find a necessary condition on $C$ and take a matrix fulfilling this condition and verify it is indeed a solution to your problem.
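In this particular case there is a shortcut worth noting: the third row of $A$ is the sum of the first two, so $v=(1,1,-1,0)$ is a left null vector, and stacking it as every row gives a valid $C$. A plain-Python verification (the choice of $v$ and $C$ here is just one possibility):

```python
A = [[1, 2, 1, 3],
     [1, 3, 2, 4],
     [2, 5, 3, 7],
     [1, 4, 1, 5]]

# Row 3 of A equals row 1 + row 2, so v = (1, 1, -1, 0) satisfies vA = 0.
v = [1, 1, -1, 0]
C = [v[:] for _ in range(4)]          # repeat v as every row of C

product = [[sum(C[i][k] * A[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
print(product)  # the 4x4 zero matrix
```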
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Quotient group is complete, so is the group Let $G$ be a topological metrizable group and $K$ a normal subgroup of $G.$ Consider the homogeneous space $G/K$ and assume that both $K$ and $G/K$ are complete. I need to prove $G$ is complete.
More specifically, assume there is a right-invariant metric on $G$ such that the restriction of said metric to $K$ makes $K$ complete (hence closed in $G$) and the metric $\dot d(\dot x, \dot y) = d(xK, yK)$ on the homogeneous space $G/K$ makes it a complete topological space. How to prove $G$ is also complete?
I am not sure what I am not getting, I have seen some posts here and else where mentioning this result (but never proving it) and the few books I have read about topological groups always give this as an exercise. Now, my try so far goes as follows.
Consider a fundamental sequence $(x_n)$ in $G.$ Since $K$ contains the neutral element, $\dot d(\dot x, \dot y) \leq d(x, y)$, and so the projection sends $x_n$ to $\dot x_n$, which is fundamental in the homogeneous space, making it converge to some element $\dot x.$ Now, if $x$ belongs to $\dot x,$ one can show that $x_n x x_n^{-1}$ converges (possibly via a subsequence in $K$). This is where I am stuck: if $x_n x x_n^{-1} \to k,$ how to conclude $k = fxf^{-1}$ for a suitable $f$? Any help is appreciated.
|
I can finish your proof as follows. For each $n$ pick an element $x'_n\in \dot x$ such that $d(x_n, x'_n)< d(x_n, \dot x)+1/n$. Since the sequence $(\dot x_n)$ converges to $\dot x$ and is fundamental, the sequence $(x'_n)$ is fundamental too. Since the space $\dot x\supset (x'_n)$ is complete in the induced metric, the sequence $(x'_n)$ converges to some point $x'\in\dot x$. Then the sequence $(x_n)$ converges to $x'$ too.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Integration of f(x) where f(x) is x in binary, used as a decimal
Define $f(x)$ when $x$ $\in [0,1]$ as $x_2$ ($x$ base 2) considered as a decimal value.
Therefore, $f(0.75) = 0.11$, and $f(0.25) = 0.01$.
Compute $\int_0^1f(x)dx$.
To do this, I figured that the answer might be $0$ because integrals aren't defined by the value at a point, and the graph shows its discontinuity. I also tried using a program, and I got $0$. Is this the correct answer? Is there a better way of doing it?
|
Here's why zero can't be right. You can see $f(x) \geq 0.1$ on $[0.5,1.0)$.
So $\int_0^1 f(x)\,dx \geq (0.1)(0.5) = 0.05$.
Also, $f(x) \geq 0.01$ on $[0.25,0.5)$, $f(x) \geq 0.1$ on $[0.5,0.75)$, and $f(x) \geq 0.11$ on $[0.75,1)$. So
$$
\int_0^1 f(x)\,dx \geq (0.01)(0.25) + (0.1)(0.25) + (0.11)(0.25) = (0.22)(0.25) = 0.055
$$
Keep going. Then try to generalize.
Let $\chi_n(x)$ be the $n$th digit of $x$ in its binary expansion. Notice that
$$
\int_0^1 \chi_n(x)\,dx = \frac{1}{2}
$$
That is, half of the numbers in $[0,1]$ have this digit zero, and half have it one.
Also,
$$
f(x) = \sum_{n=1}^\infty \frac{\chi_n(x)}{10^n}
$$
Integrating term-by-term, we have
$$
\int_0^1 f(x)\,dx = \sum_{n=1}^\infty \frac{1/2}{10^n} = \frac{1/20}{1-1/10} = \frac{1}{18} = 0.05555\dotsc
$$
You will need to make sure that integrating the series term-by-term is justified. That should be OK because $0 \leq \chi_n(x) \leq 1$. By the Weierstrass $M$-test, the series converges uniformly.
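A direct Riemann-sum check over the dyadic rationals (a small numerical sketch, independent of the argument above) agrees with $1/18$:

```python
def f(k, n):
    # value of f at x = k / 2**n: take the n binary digits of x
    # and reinterpret them as decimal digits.
    return sum(((k >> (n - i)) & 1) / 10**i for i in range(1, n + 1))

n = 12
mean = sum(f(k, n) for k in range(2**n)) / 2**n  # Riemann sum for the integral
print(mean)      # ~ 0.0555555...
print(1 / 18)
```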
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Symmetries of Tetrahedral Dice I am trying to find the number of distinguishable tetrahedral dice, where the sides are numbered 1,2,3,4. I found this webpage (http://mathworld.wolfram.com/PolyhedronColoring.html) that claims that there are 2 distinct ways to do so, but I don't see how they came to that answer. I have been trying to use Burnside's Theorem, but I have been unable to figure out how to apply it correctly. I believe that the order of the group here is $24$, but I have been having trouble computing the number of dice fixed by each group action.
|
We can answer the questions about the number of colorings of the faces
of a tetrahedron using some maximum number of colors or some specific
number of colors using the cycle index of the symmetries acting on the
faces of the tetrahedron and applying Burnside. The cycle index is
quite simple here, we now show how to compute it by enumerating the
types of permutations. Start with the rotations. First there is the
identity which contributes
$$a_1^4.$$
Next we have rotations by $120$ degrees and $240$ degrees about an
axis passing through a vertex and the center of the opposite face
which fixes that face for a contribution of
$$4\times 2a_1 a_3.$$
Finally we have three $180$ degree rotations about an axis passing
through the midpoints of opposite edges, getting
$$3\times a_2^2.$$
Now for the reflections, there is the first type which exchanges the
vertices of an edge and fixes the faces incident on that edge for a
contribution of
$$6\times a_1^2 a_2.$$
Lastly there is a type of reflection exchanging the midpoints of
opposite edges followed by a $90$ or $270$ degree rotation about the
axis linking those two midpoints for a contribution of
$$3\times 2 a_4.$$
This last class is the most difficult and may require making a diagram
of the tetrahedron with the faces labeled before and after the
reflection and rotation is applied and factoring the resulting
permutation by converting the map from table form to a product of
disjoint cycles, just one cycle in this case.
We thus have the cycle index
$$Z(G) = \frac{1}{24}
(a_1^4 + 8 a_1 a_3 + 3 a_2^2 + 6 a_1^2 a_2 + 6 a_4).$$
We recognize the cycle index $Z(S_4)$ of the symmetric group $S_4$
acting on four elements. We could have noted that given a four-cycle
(second type of reflection) we may choose two elements adjacent on
that cycle to form a transposition (first reflection). Together these
two generate all of $S_4,$ the cycle index of which can be computed
recursively (Lovasz) or by enumeration of the conjugacy classes
(partitions of $n=4.$)
Applying Burnside we get for the number of colorings with at most $N$
colors
$$\frac{1}{24}(N^4 + 11 N^2 + 6 N^3 + 6 N)$$
which produces the sequence
$$1, 5, 15, 35, 70, 126, 210, 330, 495, 715, \ldots$$
which is OEIS A000332.
We also have for colorings using exactly $M$ colors
$$\frac{M!}{24}\left({4\brace M} + 11 {2\brace M}
+ 6 {3\brace M} + 6 {1\brace M}\right)$$
which yields the finite sequence
$$1, 3, 3, 1, 0,\ldots $$
so there is just one coloring using four colors. This is correct since
with all colors different we have $4!$ possible assignments and all
orbits have the same size, namely $24$, the number of permutations,
for a result of $4!/24$ or one possibility. We get three colorings
using exactly two colors, this corresponds to one coloring using two
instances of each and two colorings using three instances of one and
one instance of the other.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
I don't really understand this calc question What do you even do here? Take the other two variables to the RHS?
|
HINT:
Start by considering,
$$z^z = \dfrac{c}{x^xy^y}$$
Next, apply logarithm laws to obtain,
$$z\log (z) = \log (c) - x\log (x) - y\log (y)$$
Hopefully, from here you can apply ideas of partial differentiation to obtain your answer (the product rule will come in handy).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving $8yy'^2 - 2xy' + y = 0$ I'm solving the differential equation $8yy'^2 - 2xy' + y = 0$
My attempt:
We divide both sides by $x$, obtaining:
$$8\frac{y}{x}y'^2 - 2y' + \frac{y}{x} = 0$$
Then, we introduce $t = y'$, hence the differential equation becomes:
$$8\frac{y}{x}t^2 - 2t + \frac{y}{x} = 0$$
from which follows:
$$\frac{y}{x} = \frac{2t}{8t^2 +1}$$
Hence, in parametric equation we obtain:
$$\begin{cases} y' = t \\ \frac{y}{x} = \frac{2t}{8t^2 +1}\end{cases}$$
From the second equation, after differentiating with respect to $t$, we obtain:
$$\frac{dy}{dt} = \frac{dx}{dt}\frac{2t}{8t^2 + 1} + \frac{-16t^2 + 2}{(8t^2 +1)^2}$$
Hence:
$$dy = dx\frac{2t}{8t^2 + 1} + \frac{-16t^2 + 2}{(8t^2 +1)^2}dt$$
From the first equation we have $dy = tdx$
Therefore:
$$tdx= dx\frac{2t}{8t^2 + 1} + \frac{-16t^2 + 2}{(8t^2 +1)^2}dt$$
Or after rearranging:
$$dx = \frac{-2dt}{(8t^2+1)dt}$$
And by integrating:
$$x(t) = -2\ln|t| - \frac{1}{2}\ln|8t^2 + 1| + c$$
And by the first equation:
$$y(t) = \frac{2t}{8t^2+1}(-2\ln|t| - \frac{1}{2}\ln|8t^2 + 1| + c)$$
Hence, the solution in parametric equation, is:
$$\begin{cases} x(t) = -2\ln|t| - \frac{1}{2}\ln|8t^2 + 1| + c \\ y(t) = \frac{2t}{8t^2+1}(-2\ln|t| - \frac{1}{2}\ln|8t^2 + 1| + c) \end{cases}$$
Can someone verify whether this is correct? The answer my book gives is $$y^2 - 4cx + 32c^2 = 0$$ with singular integral $$8y^2 - x^2 = 0$$ How would I derive this answer?
Thanks in advance.
|
You lost the factor $x$ in the last term while differentiating the parametric equation.
It is even easier to multiply with $y$ and then substitute $u=y^2$ to get
$$
2u'^2-xu'+u=0\iff u=xu'-2u'^2
$$
which is a Clairaut differential equation. This has the lines
$$
u=cx-2c^2
$$
as solutions and their envelope which is the non-linear solution to $0=(x-4u')u''$. Thus inserting $u'=x/4$ gives
$$
u=\frac{x^2}4-\frac{x^2}8=\frac{x^2}8
$$
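One can sanity-check both families of solutions numerically (a minimal sketch; the names are ad hoc):

```python
# Check that u = c*x - 2*c**2 (general solution) and u = x**2/8 (singular
# solution) both satisfy the Clairaut equation u = x*u' - 2*u'**2.
def clairaut_residual(u, uprime, x):
    return u(x) - (x * uprime(x) - 2 * uprime(x) ** 2)

c = 1.5
line = (lambda x: c * x - 2 * c**2, lambda x: c)          # u' = c
parabola = (lambda x: x**2 / 8, lambda x: x / 4)          # u' = x/4

for u, up in (line, parabola):
    assert all(abs(clairaut_residual(u, up, x)) < 1e-12
               for x in (-3.0, -0.5, 0.0, 1.0, 4.0))
print("both solutions check out")
```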
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2255914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Concurrency of the heights of a tetrahedron with opposite edges perpendicular. Can anyone give me a vectorial solution to the following problem:
Prove that if each pair of opposite edges of the tetrahedron $ABCD$ is perpendicular (that is, $AB \perp CD$ and $AC \perp BD$ and $AD \perp BC$), then the heights of the tetrahedron are concurrent.
Here, the heights (also known as the altitudes) of tetrahedron $ABCD$ are the perpendicular from $A$ to the plane $BCD$, and three other similarly defined perpendiculars.
Tetrahedra satisfying the condition of this problem are called orthocentric, and this appears to be a known result.
|
That is a simple exercise in visualization. Imagine that $A,B,C$ are embedded in the $xy$ plane (the screen) and $D$ lies on the $z$-axis (orthogonal to the screen), so that the origin $O$ is the projection of $D$ on the plane through $A,B,C$. Since $DB\perp AC$ (in $3$D) we have $OB\perp AC$ (in $2$D). Similarly we get $OA\perp BC$ and $OC\perp AB$, hence $O$ is the orthocenter of $ABC$.
The orthocenter $H_A$ of the $BCD$ face lies on the line joining $D$ with its projection on $BC$, hence the projection of $H_A$ on the $ABC$ plane lies on the $AO$ line. In particular the lines $AH_A,BH_B,CH_C,DH_D$ are concurrent when projected on the $ABC$ plane. The same holds by replacing $ABC$ with any face of the tetrahedron, hence the lines $AH_A,BH_B,CH_C,DH_D$ are concurrent in the $3$D space.
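For a numerical illustration (a sketch, not a replacement for the proof), take the trirectangular tetrahedron with $D$ at the origin and $A,B,C$ on the coordinate axes; its opposite edges are perpendicular, and one can check that all four altitudes pass through a single point (here it turns out to be $D$ itself):

```python
def sub(p, q): return tuple(a - b for a, b in zip(p, q))
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dist_point_line(p, a, d):
    # distance from point p to the line through a with direction d
    c = cross(sub(p, a), d)
    return (dot(c, c) / dot(d, d)) ** 0.5

# A trirectangular tetrahedron: D at the origin, A, B, C on the axes.
A, B, C, D = (1, 0, 0), (0, 2, 0), (0, 0, 3), (0, 0, 0)

# opposite edges are perpendicular, so the hypothesis holds
assert dot(sub(B, A), sub(D, C)) == 0     # AB perpendicular to CD
assert dot(sub(C, A), sub(D, B)) == 0     # AC perpendicular to BD
assert dot(sub(D, A), sub(C, B)) == 0     # AD perpendicular to BC

# each altitude passes through the origin (the orthocenter is D itself here)
faces = {A: (B, C, D), B: (A, C, D), C: (A, B, D), D: (A, B, C)}
for vertex, (p, q, r) in faces.items():
    direction = cross(sub(q, p), sub(r, p))   # normal of the opposite face
    assert dist_point_line((0.0, 0.0, 0.0), vertex, direction) < 1e-12
print("all four altitudes concur")
```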
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Find the maximum and minimum value of $P =x+y+z+xy+yz+zx$ Let $x^2+y^2+z^2\leq27$ and $P = x+y+z+xy+yz+zx$. Find the value of $x, y, z$ such that $P$ is the maximum value and minimum value.
My attempt :
$$(x-y)^2 + (y-z)^2 + (z-x)^2 \geq 0$$
$$27 \geq x^2+y^2+z^2 \geq xy+yz+zx\tag{1}$$
$$(x+y+z)^2 \leq 3(x^2+y^2+z^2) \le 3 \cdot 27$$
$$(x+y+z)^2 \leq 81$$
$$x+y+z \leq 9\tag{2}$$
From $(1), (2)$, $ x+y+z+xy+yz+zx \leq 36$, so $P_{\text{max}} = 36$ with equality holding at $x=y=z=3$.
Please suggest how to find $P_{\text{min}}$.
|
You may use the same method to find the minimum. First, we obtain a lower bound for $P$:
$$
\begin{align}
P&=x+y+z+xy+yz+zx\\
&=\frac12 [ (x+y+z+1)^2 - (x^2+y^2+z^2) - 1 ]\\
&\ge\frac12 (0 - 27 - 1)\tag{1}\\
&= -14.
\end{align}
$$
Next, note that at $\left(\frac{\sqrt{53}-1}2,-\frac{\sqrt{53}+1}2,0\right)$, we have $x+y+z+1=0$ and $x^2+y^2+z^2=27$. Hence equality holds in $(1)$ and the lower bound $-14$ is attained.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Finding the preimage I want to find the preimage of $]-2,4]$ for the function $f(x)=x^2-x$
This is what I have done so far:
We have $0=x^2-x-y$ and therefore the inverse is:
$$f^{-1}(y)=\frac{1\pm \sqrt{1+4y}}{2}$$
And how do I find out the boundaries of the preimage? If I put the boundaries $-2$ and $4$ into the function $f^{-1}$, I will probably not get them. Also, I'm not sure which function I have to take when: the one with the plus sign or the one with the minus?
|
$$f(x) = x^2-x=x^2-x+\frac14-\frac14 = \left(x-\frac12\right)^2 - \frac14 \geq -\frac14 $$
you are asked to find $x$ such that $-2\leq f(x) \leq 4$, but $f \geq -\frac14$ so the first inequality doesn't restrict us in any way.
$$ f(x) \leq 4 \Rightarrow \left(x-\frac12\right)^2 -\frac14 \leq 4 \Rightarrow \left(x-\frac12\right)^2 \leq \frac{17}4\Rightarrow$$
$$ -\frac{\sqrt{17}}{2} \leq x-\frac12 \leq \frac{\sqrt{17}}{2} $$
$$ \frac{1-\sqrt{17}}{2} \leq x \leq \frac{1+\sqrt{17}}{2} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to find modulo using Euler theorem? I don't know how that's possible using phi, the question starts with this one:
a) Decompose 870 in prime factors and compute, ϕ(870)
I know how to resolve this, first 870 = 2*3*5*29 and ϕ(870)= 224
Now this is the question I don't know how to resolve:
b) Compute 77^225 modulo 870 [Using a)] <--- The above question
870 isn't even a prime number.
Thanks for any help.
|
Use Euler's theorem. If $(a,n)=1$, then
$$
a^{\varphi(n)}\equiv1\pmod{n}
$$
Since $77$ and $870$ are coprime (their prime factorizations have no prime in common)
$$
77^{225}
\equiv
77^{224}\times77\equiv1\times77\equiv77\pmod{870}
$$
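You can confirm all of this with Python's built-in three-argument pow, which does modular exponentiation:

```python
# phi(870) for 870 = 2 * 3 * 5 * 29
phi = (2 - 1) * (3 - 1) * (5 - 1) * (29 - 1)
print(phi)                   # 224

# Euler's theorem: 77^224 = 1 (mod 870), hence 77^225 = 77 (mod 870)
print(pow(77, 224, 870))     # 1
print(pow(77, 225, 870))     # 77
```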
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Heat equation with different boundary conditions Consider the heat equation
$$
u_t=u_{xx}
$$
on an interval $[-L,L]$
with Dirichlet, Neumann, and periodic boundary conditions.
Am I right that with Dirichlet b.c. all solutions are exponentially decaying in the $L_2$-norm (and that this corresponds to a spectrum in the left half-plane), while with the other two boundary conditions solutions are not decaying in the $L_2$-norm (and we have the spectrum in the right half-plane)?
|
The spectrum is determined by the $x$ equation and associated endpoint conditions after performing separation of variables. The equation in $X$ is
$$
-X''(x)=\lambda X(x)
$$
and there are two general types of conditions:
*
*Separated Conditions, which are described as a two-parameter family in real $\alpha,\beta$:
$$
\cos\alpha X(a) + \sin\alpha X'(a) = 0 \\
\cos\beta X(b) + \sin\beta X'(b) = 0.
$$
These include the Dirichlet conditions ($\alpha=\beta=0$) and the Neumann ($\alpha=\beta=\pi/2$) and the more general Robin types of conditions.
*Periodic Conditions
$$
X(a) = X(b),\;\; X'(a) = X'(b).
$$
There are other variants, but the above is the only practical one.
In all cases there are discrete eigenvalues $\lambda_0 < \lambda_1 < \lambda_2 < \cdots$ which tend to $\infty$ with the index $n$, and the PDE has solution
$$
u(t,x)=\sum_{n=0}^{\infty}C_n e^{-\lambda_n t}X_n(x)
$$
The constants $C_n$ are determined by the initial condition $u(0,x)=\sum_{n=0}^{\infty}C_nX_n(x)$.
You don't generally get decay in the periodic case because $\lambda_0=0$ is an eigenvalue with constant solution $X(x)\equiv 1$, and that term in the series solution remains stationary throughout time. The Dirichlet conditions $X(a)=X(b)=0$ give strictly positive eigenvalues $\lambda_n$ and, so, you do get exponential decay in the $L^2$ norm and pointwise as well. The Neumann problem $X'(a)=X'(b)=0$ also has a constant solution with $0$ eigenvalue.
For general separated conditions, there can be negative eigenvalues, which gives you an unstable, expanding solution. For example, $e^{x}$ is a solution of $-X''=-1\cdot X(x)$ and satisfies the conditions
$$
X(0)-X'(0) = 0,\;\; X(1)-X'(1)=0.
$$
So the heat equation with these conditions has a solution $u(t,x)=Ce^{t+x}$, which definitely is not stable in time. There can be two negative eigenvalues, depending on the Robin conditions imposed.
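As an illustration of the dichotomy (an ad-hoc finite-difference sketch, with arbitrary grid and initial data, not tied to the analysis above), one can watch the Dirichlet solution decay to zero while the Neumann solution settles to its preserved spatial mean:

```python
import math

def heat_step(u, r, bc):
    # one explicit Euler step of u_t = u_xx, with r = dt/dx^2 (needs r <= 1/2)
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
    if bc == "dirichlet":
        new[0] = new[-1] = 0.0
    elif bc == "neumann":              # crude zero-flux BC via ghost copies
        new[0], new[-1] = new[1], new[-2]
    return new

n = 50
x = [i / n for i in range(n + 1)]
r, steps = 0.25, 6000                  # dt = r * dx^2, final t = 0.6

dirich = [math.sin(math.pi * xi) for xi in x]      # vanishes at the ends
neum = [1 + math.cos(math.pi * xi) for xi in x]    # zero slope at the ends
for _ in range(steps):
    dirich = heat_step(dirich, r, "dirichlet")
    neum = heat_step(neum, r, "neumann")

print(max(abs(v) for v in dirich))     # ~ e^{-pi^2 * 0.6}, essentially 0
print(min(neum), max(neum))            # both near the preserved mean 1
```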
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Height of hill when angle of elevation for each vertex is same
The angle of elevation of the top of a hill from each of the vertices $A, B$ and $C$ of a horizontal triangle is $\alpha$. Prove that the height of the hill is $\frac{a}{2} \tan\alpha \csc(A)$.
Could someone help me approach this question? I am not sure how to start.
|
Let $H$ be the foot of the perpendicular from the top of the hill to the plane of $\triangle ABC$, and let $h$ be the height of the hill. Since the angle of elevation from each vertex is $\alpha$, we have $AH = BH = CH = h\cot\alpha$, so $H$ is the circumcentre of $\triangle ABC$ and this common distance is the circumradius $R = \dfrac{a}{2\sin A} = \dfrac{b}{2\sin B} = \dfrac{c}{2\sin C}$ (law of sines). Hence $h = R\tan\alpha = \dfrac{a}{2}\tan\alpha\csc A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Doubt over the proof of the Cayley-Hamilton theorem I am having some doubt about the proof of the Cayley-Hamilton theorem. This theorem says that every matrix is a root of its characteristic polynomial.
Proof goes as follows:
Let us assume that matrix $A$ is of order $n\times n$. If $P(\lambda)$ be its characteristic polynomial, then by the definition of the characteristic polynomial
$P(\lambda) = det (A - \lambda I) = P_0 + P_1\lambda + P_2 \lambda^2 +\ldots P_n \lambda^n$.
Next, let $Q(\lambda)$ be the adjoint (adjugate) matrix of $(A - \lambda I)$, such that
$Q(\lambda) =Q_0 + Q_1\lambda + Q_2 \lambda^2 +\ldots Q_k \lambda^k$.
I am not able to understand why the polynomial expression of $Q(\lambda)$ is of degree $k$? Can't I write $Q(\lambda)$ as follows (degree $n$ polynomial in $\lambda$)
$Q(\lambda) =Q_0 + Q_1\lambda + Q_2 \lambda^2 +\ldots Q_n \lambda^n$.
Thank you
|
If $Q(\lambda)$ is the adjugate of $A-\lambda I$, then by definition $Q(\lambda)(A-\lambda I)=\det(A-\lambda I)\, I$.
Now compare degrees on both sides of this equality. Each entry of $Q(\lambda)$ is, up to sign, the determinant of an $(n-1)\times(n-1)$ submatrix of $A-\lambda I$, hence a polynomial in $\lambda$ of degree at most $n-1$. Put differently: if some entry $q_{i,j}$ of $Q(\lambda)$ had degree $k$, then writing out the product on the left shows that the diagonal entries of $Q(\lambda)(A-\lambda I)$ would have degree $k+1$, while the right-hand side has degree $n$ on the diagonal. So $n=k+1$, i.e. $k=n-1$.
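As a computational companion (a sketch; the example matrix is arbitrary), the Faddeev-LeVerrier recursion builds the coefficients of $\det(\lambda I - A)$ from matrices that play the role of the $Q_k$ above, and one can then confirm $p(A)=0$ with exact arithmetic:

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    # Faddeev-LeVerrier: det(t*I - A) = t^n + c[n-1] t^(n-1) + ... + c[0]
    n = len(A)
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # M_1 = I
    c = [Fraction(0)] * n + [Fraction(1)]
    for k in range(1, n + 1):
        AM = matmul(A, M)
        c[n - k] = -sum(AM[i][i] for i in range(n)) / k
        M = [[AM[i][j] + (c[n - k] if i == j else 0) for j in range(n)]
             for i in range(n)]
    return c

def eval_poly_at_matrix(c, A):
    n = len(A)
    power = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    total = [[Fraction(0)] * n for _ in range(n)]
    for coeff in c:                      # c[0]*I + c[1]*A + ... + c[n]*A^n
        total = [[total[i][j] + coeff * power[i][j] for j in range(n)]
                 for i in range(n)]
        power = matmul(power, A)
    return total

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
c = char_poly(A)
print(c)                                  # coefficients -25, 3, -3, 1 (as Fractions)
print(eval_poly_at_matrix(c, A))          # the zero matrix: Cayley-Hamilton
```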
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2256914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
Show that if $E$ is not measurable, then there is an open set $O$ containing $E$ that has finite outer measure and for which $m^*(O-E)>m^*(O)-m^*(E)$ Let $E$ have finite (Lebesgue) outer measure. Now we need to show that if $E$ is not measurable, then there is an open set $O$ containing $E$ that has finite outer measure and for which $m^*(O-E)>m^*(O)-m^*(E)$. (Here $m^*(E)$ denotes the Lebesgue outer measure of $E$). The following is my attempt.
Suppose $E$ is not measurable. Assume that for all open sets $O$ containing $E$ that has finite outer measure we have $m^*(O-E)\leq m^*(O)-m^*(E)$. Let $\epsilon >0$. Since $m^*(E)=\inf\{m^*(U)|E\subseteq U\ \text{and}\ U\ \text{is open}\}$, $\exists U\ \text{open with}\ E\subseteq U\ \text{such that}\ m^*(U)-m^*(E)<\epsilon$. Then $U$ has finite outer measure. So by assumption it follows that $m^*(U-E)<\epsilon$. Therefore $E$ is measurable; contradiction. Hence the result.
Could someone please tell me if this proof is alright? Thanks.
|
How do you define $m^*(U)$ for $U$ open? In order to say that for any $\epsilon$, $\exists U \supseteq E$ open such that $m^*(U) - m^*(E) < \epsilon$, it seems like you need to make precise how you could find this $U$ with outer measure between $m^*(E)$ and $m^*(E) + \epsilon$.
Other than this point, the proof looks correct to me.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
Help with finding expectancy So I have $x\sim U(\{1,2,..., 20\})$
and I need to find $E(x^2)$.
I have tried searching our textbook but could not really understand the logic behind the steps they showed.
Where am I supposed to start solving something like this?
|
$x$ is uniformly distributed on the set $\{1, 2, \dots, 20\} $, so there are the same number of $1$'s as $2$'s as $3$'s, etc. Now, the variable we are investigating to find the expected value is $x^2$. So, the set $\{1, 4, 9, \dots, 400\}$ of squares will be uniformly distributed. The expected value is the average of that set. As Siong said, that average can be calculated using the formula $$\frac 1{n}\sum_{i=1}^{n}i^2=\frac{n(n+1)(2n+1)}{6\times n}$$ where $n=20$ in this case.
The formula $\sum_{i=1}^ni=\frac{n(n+1)}2$ in your textbook works for sets of the form $\{1, 2, 3, \dots, n\} $. The logic behind that function is that the terms can be paired when summing them all, so that there are $\frac n2$ pairs, whose sums each give $n+1$ (if $n$ is odd, the middle term equals $\frac {n+1}2$). The first and last terms give $1+n$; the second and second-last terms give $2+n-1=n+1$; and so on.
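Both routes, averaging the squares directly and plugging into the closed-form sum, can be checked in a couple of lines:

```python
n = 20
direct = sum(i * i for i in range(1, n + 1)) / n           # average of the squares
closed_form = n * (n + 1) * (2 * n + 1) / (6 * n)          # same thing via the formula

print(direct, closed_form)   # 143.5 143.5
```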
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
absolute values of algebraic numbers under Galois automorphism This could be very easy question, but I have no idea about it in depth.
Q. Let $\alpha$ be an algebraic integer. Let $\sigma$ be an automorphism in the Galois group of $\mathbb{Q}(\alpha)$ (over $\mathbb{Q}$). If, as a complex number, $|\alpha|<1$, is it necessary that $|\sigma(\alpha)|<1$?
|
In general, no.
For example, let $\alpha = 2-\sqrt{2}$, and consider the automorphism $\sigma$ of $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2})$ such that $\sigma(\sqrt{2}) = -\sqrt{2}$.
Then
$$|\alpha| = |2-\sqrt{2}| < 1$$
but
$$|\sigma(\alpha)| = |2+\sqrt{2}| > 1$$
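Numerically, with $\alpha = 2-\sqrt2$ (a root of $x^2-4x+2$, whose other root is its conjugate $2+\sqrt2$):

```python
r = 2 ** 0.5
alpha, sigma_alpha = 2 - r, 2 + r   # the two conjugate roots of x^2 - 4x + 2

for root in (alpha, sigma_alpha):
    assert abs(root**2 - 4*root + 2) < 1e-12   # both satisfy the minimal polynomial

print(abs(alpha))         # ~0.586 < 1
print(abs(sigma_alpha))   # ~3.414 > 1
```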
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is $f(x + dx) -f(x) = f'(x) \,\mathrm dx$ a valid equation? $\def\d{\mathrm{d}}$We know that it is true that$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) -f(x)}{\Delta x} = \frac{f(x + \d x) -f(x)}{\d x} = f'(x),$$
where $\d x$ is defined to be an infinitesimal.
Then we could rearrange the equation and say that$$f(x + \d x) -f(x) = f'(x) \,\d x.$$
Will this last equation be valid or correct?
Update. Do you agree that:
$$f'(x) \,\d x = \int_{x}^{x+\d x}f'(x)\,\d x.$$
Is this last equation valid or making sense? Does it even mean anything if you put a $dt$ in the limit? What I meant by valid is that would it be possible to apply it like in the context of the question here
|
Yes, it's valid. We use this result when we want to find an approximate value without using a calculator. For example, say we want the value of $\sqrt {64.1}$: define $f (x)=\sqrt {x}$; then, using your equation, $\sqrt {64.1} \approx \sqrt {64}+0.1 \cdot \frac {1}{2\sqrt {64}}=8+0.1/16=8.00625$.
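A quick check of how good this first-order estimate actually is:

```python
def sqrt_approx(a, dx):
    # f(x + dx) ~ f(x) + f'(x) dx with f = sqrt, f'(x) = 1 / (2 sqrt(x))
    return a ** 0.5 + dx / (2 * a ** 0.5)

estimate = sqrt_approx(64, 0.1)
actual = 64.1 ** 0.5
print(estimate)            # 8.00625
print(actual)              # 8.006247...
print(estimate - actual)   # tiny error
```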
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
Applying Fundamental theorem of calculus problem In the link provided is a question about the Fundamental Theorem of Calculus (I don't know how to use LaTeX). I somehow can't get the right answer; I'm using the chain rule and everything but still getting it wrong.
link
$$F(x)=\int_0^{x^3} 4\sin \pi t^2dt$$
Find $F(0)$ and $F'(x)$
|
The interesting thing is that what you put for your answer is right. There must be some formatting issue with your homework software. Have you tried something like changing the "pi" to a "Pi"?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How to find all the positive integers n of 4 digits such that all its digits are perfect squares and n is a multiple of 2, 3, 5 and 7? I was trying to use the divisibility rules of 2, 3, 5 and 7, but it becomes very tedious and I couldn't solve the problem. I think there could be a faster way to solve it or to apply those rules. Please help me, and thank you very much.
|
Hint: The last digit has to be $0$ to be divisible by $2$ and $5$ simultaneously. Thus, the number you are seeking is $ABC0$ where $A,B,C\in\{0,1,4,9\}$ and $A \ne 0$.
Also, $A+B+C=3k$ for divisibility by $3$ (Equation $(1)$).
For divisibility by $7$ there is no digit-sum rule analogous to Equation $(1)$, so it has to be checked directly.
In short, the number should be divisible by $\operatorname{lcm}(2,3,5,7)=210$.
Running through the four-digit multiples of $210$, the only one whose digits all lie in $\{0,1,4,9\}$ is $4410$.
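Since the search space is tiny, the hint can be checked exhaustively; a small Python sketch (assuming the divisibility-by-$210$ reduction above):

```python
# Digits must be perfect squares, i.e. in {0, 1, 4, 9}, and the number
# must be divisible by lcm(2, 3, 5, 7) = 210.
square_digits = set("0149")

hits = [n for n in range(1000, 10000)
        if n % 210 == 0 and set(str(n)) <= square_digits]
print(hits)  # [4410]
```

This confirms that $4410$ is the only four-digit number satisfying both conditions.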
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Does $x_n$ converge, given $\lim(3x_{n+1} - x_{n})=1 $ I want to prove that $x_n$ converges, given that $\lim (3x_{n+1} - x_n ) = 1$
Attempt: Since $\lim (3x_{n+1} - x_n ) = 1$, set $\epsilon > 0, $ such that $\forall n > N, |3x_{n+1} - x_n -1 | < \epsilon. $ Then,
$$|x_{n+1} - x_n|< \min(\frac{\epsilon + 1 - 2x_N}{3},\frac{1 -\epsilon - 2x_N}{3},1)\ .$$
So
$$|x_{n+k} - x_n| \leq |x_{n+k} - x_{n+k-1}| + |x_{n+k-1} - x_{n+k-2}| + ... + |x_{n+1}-x_n|< $$
$$< k \times \min(\frac{\epsilon + 1 - 2x_N}{3},\frac{1 -\epsilon - 2x_N}{3},1)$$
I tried showing that $x_n$ is Cauchy, but there seem to be some problems; any advice?
|
The limit, if it exists, is $\frac{1}{2}$, so I think it is easier to work with $u_n:=x_n-\frac{1}{2}$. The hypothesis then is $3 u_{n+1} - u_n \to 0$. We want to prove that $u_n\to 0$.
Let $\epsilon>0$. For big enough $n$ we can get $3 |u_{n+1}|\leqslant |u_n| +\epsilon$, and then
$$
|u_{n+k}|\leqslant \frac{1}{3^k}|u_n| + \frac{\epsilon}{3}\cdot\frac{1}{1-\frac{1}{3}}$$
which will give the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why is generating function proof of Fibonacci formula correct? The proof goes as follows:-
Let $F = 1 + x + 2x^2 + 3x^3 + 5x^4 + 8x^5 + ...$
Then
$$\begin{align} 1 + Fx + Fx^2 &= 1 + (x + x^2 + 2x^3 + 3x^4 + \dots) + (x^2 + x^3 + 2x^4 + 3x^5 + \dots) \\
1 + Fx + Fx^2 &= 1 + x + (x^2+x^2) + (2x^3+x^3) + (3x^4+2x^4) + \dots \\
1 + Fx + Fx^2 &= F \\
\frac{1}{1-x-x^2} &= F \end{align} $$
We rearrange the terms and we get this result which can be then manipulated further to find formula for nth Fibonacci term.
I want to focus on the rearrangement of terms in an infinite series. This is no different from results like $ 1 + 2 + 4 + 8 + ... = -1$.
The usual answer given is that we cannot treat an infinite summation like this like real numbers and apply normal addition and subtraction rules. We must apply sophisticated techniques like limits to evaluate these sums. So $ 1 + r + r^2 + r^3 + ... $ only makes sense when $|r| < 1$.
So going back to the Fibonacci proof above. We are applying normal addition rules for an infinite summation and also the result $\displaystyle F = \frac{1}{1-x-x^2} $ doesn't make sense for any value of $x$. But we still use this result to complete the proof. Why does it work?
Isn't this the same as using the result $ 1 + 2 + 4 + 8 + ... = -1 $ to prove other results? It would be absurd to use it as basis for other proofs.
Please shed some light on this. I am really confused.
|
One way to do this is to first prove that the series converges and then do these calculations, as Foobaz John outlines. But in fact, all these manipulations are perfectly valid without even taking convergence into consideration if you frame them in the right way - that is, in terms of formal power series.
A formal power series is just an arbitrary infinite sequence $(a_0, a_1, a_2, \ldots)$ of (let's say) real numbers. But we "decorate" this sequence using the notation $(a_0, a_1, a_2, \ldots ) = \sum_{k=0}^\infty a_k x^k$. Note that we are not using any notion of convergence here. We just the notation $$\sum_{k=0}^\infty a_k x^k$$ to mean the sequence $(a_0, a_1, a_2, \ldots)$. In this context infinite sums like $\sum_{k=0}^\infty k! x^k$ that are convergent nowhere except $x=0$ are perfectly fine, since it just refers to the sequence of numbers $(1,1,2,6,24, \ldots)$.
This would be perfectly useless unless we allowed to manipulate them in certain ways. So we make the definitions
$$\sum_{k=0}^\infty a_k x^k + \sum_{k=0}^\infty b_k x^k := \sum_{k=0}^\infty (a_k + b_k) x^k$$
$$(\sum_{k=0}^\infty a_k x^k ) \cdot ( \sum_{k=0}^\infty b_k x^k) := \sum_{k=0}^\infty \sum_{i=0}^k a_ib_{k-i} x^k. $$
Now you can go through and prove that all the usual rules of algebra apply, so that $F(G+H) = FG + FH$, $(FG)H = F(GH)$ etc. for formal power series $F,G,H$. This means that the set of formal power series forms a ring. You can sometimes divide: it's possible to prove that $F = \sum_{k=0}^\infty a_k x^k$ has a multiplicative inverse $G$ so that $FG = 1$ if and only if $a_0 \neq 0$. (Here $1 := \sum_{k=0}^\infty b_k x^k$ where $b_0 = 1$, $b_k = 0$ for $k > 0$.)
In the ring of formal power series all of these computations are valid.
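To make the formal viewpoint concrete, here is a small Python sketch that manipulates truncated coefficient lists only, with no notion of convergence anywhere, and checks that $(1-x-x^2)\cdot F = 1$ coefficient by coefficient:

```python
N = 12  # how many coefficients to track

def mul(a, b):
    """Cauchy product of two formal power series, truncated to N terms."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]

# Fibonacci coefficients: F = 1 + x + 2x^2 + 3x^3 + 5x^4 + ...
F = [1, 1]
while len(F) < N:
    F.append(F[-1] + F[-2])

g = [1, -1, -1] + [0] * (N - 3)  # the series 1 - x - x^2
print(mul(g, F))  # [1, 0, 0, ..., 0], i.e. the formal power series "1"
```

Every operation here is a finite computation on integers, which is exactly why the ring of formal power series sidesteps convergence questions entirely.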
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
}
|
Closed subset of $C([0,1],\mathbb R$)
Let $B=\{f\in C^1[0,1]:\Vert f\Vert _\infty \le A\}$. Is $B$ a closed subset of $C([0,1],\mathbb R)$?
Here's what I tried to do:
Let $\{f_n\}$ be a sequences of functions of $B$ such that $f_n \to f$, with $f\in C([0,1],\mathbb R)$.
If I prove that $f\in B$, then I finish the proof, i.e. $B$ will be closed in $C([0,1],\mathbb R)$.
But I don't know how could I do that.
Note: $C([0,1],\mathbb R)$ is the space of continuous functions with domain $[0,1]$.
|
Another simple example is the sequence of functions
$$ f_{n}(x) = \sqrt{x + \frac{1}{n}}, $$
which converges uniformly on $[0,1]$ to $f(x)= \sqrt x$, a function that is not differentiable at $x=0$. Since $\Vert f_n\Vert_\infty \le \sqrt 2$, this shows $B$ is not closed whenever $A \ge \sqrt 2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2257978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Show $f(x,y)=x^2\log(x^4+y^2)$ is differentiable at $\vec 0$ I have to show that $f$ is differentiable at $\vec 0$, where
$$
f(x,y)=x^2\log(x^4+y^2),
$$
and $f(0,0)=0$.
I’ve already shown that $f$ is continuous at $\vec 0$. I started off by calculating the first partial derivative:
$$
D_1f(\vec 0)=\lim_{t\to 0}t\log t^4.
$$
However, I don’t know how to calculate even this limit. I looked at the plot, and it seems that $D_1f(\vec 0)=D_2f(\vec 0)=0$, so apparently $t$ goes to zero faster than $\log t^4$ goes to minus infinity. How can I show this? Can I use Taylor? Should I evaluate then at $x=1$? This would yield:
$$
\log x=(x-1)-\frac{(x-1)^2}{2}+O((x-1)^3).
$$
Is this the way to go? I've never expanded $\log x$ before like this, and I'm unsure if it's correct.
|
For $t\ne 0$ we have $t\log t^4=4t\log |t|$, and $t\log|t|\to 0$ as $t\to 0$ is a standard limit from high school; hence $D_1f(\vec 0)=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
smallest number of socks to guarantee that the selection contains at least $10$ pairs A drawer in a darkened room contains $100$ red socks, $80$ green socks, $60$ blue socks and $40$ black socks.
A youngster selects socks one at a time from the drawer but is unable to see the color of the socks drawn.
What is the smallest number of socks that must be selected to guarantee that the selection contains at least $10$ pairs?
(A pair of socks is two socks of the same color. No sock may be counted in more than one pair.)
My attempt:
$100 = a $
$ 60 = b$
$ 40 = c$
$ 80 = d$
$a + b +c +d =280 $ socks
I know the probability of choosing each color on the first try are :
$p(a) = 0.3571;\,\, p(b) = 0.2142;\,\, p(c) = 0.1428;\,\, p(d) = 0.2857$
How can I find the smallest number of socks that must be selected to guarantee that the selection contains at least $10$ pairs?
|
Suppose you put $n$ socks into $4$ color boxes such that there are a total of exactly $k$ pairs of socks in the $4$ color boxes. Then $n$ must be at least $2k$ because that many socks are required for the $k$ pairs. The maximum possible value of $n$ is $2k+4$ because each of the $4$ color boxes can have an odd number of socks, and so there can be one unpaired sock in each color box. Thus, $2k \le n \le 2k+4$. If $k=9$, then $18 \le n \le 22$. Thus, the maximum number of socks we can have in the $4$ color boxes while still not having $10$ pairs is $22$. So the answer to your question is $23$.
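The counting argument can be verified by exhaustive search over all possible color splits of the drawn socks; a small Python sketch (the function name `min_pairs` is just for illustration):

```python
from itertools import product

caps = (100, 80, 60, 40)  # red, green, blue, black

def min_pairs(total):
    """Fewest pairs achievable over all ways the drawn socks split by color."""
    best = None
    ranges = [range(min(c, total) + 1) for c in caps]
    for counts in product(*ranges):
        if sum(counts) == total:
            pairs = sum(c // 2 for c in counts)
            best = pairs if best is None else min(best, pairs)
    return best

print(min_pairs(22))  # 9  -- e.g. the split (7, 7, 7, 1) has only 9 pairs
print(min_pairs(23))  # 10 -- with 23 socks, ten pairs can no longer be avoided
```

This matches the argument above: with $22$ socks all four color counts can be odd, wasting four socks, while $23$ socks force at least $10$ pairs.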
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
Definite Integral in the study of Prophet Inequalities I wish to integrate the function $\frac{1}{a + x - x \cdot \ln x}$ for $x$ going from $0$ to $1$, where $0<a<1$ is a constant. It's easy to see that when $a=0$ this function has a simple indefinite integral $-\ln(\ln x - 1)$. However, for non-zero $a$, solvers like Mathematica are failing to find the indefinite integral.
More particularly, for my work I am interested in finding $a$ that is the solution to the following equation:
$$ \int_{x=0}^{1} \frac{1}{a + x - x \cdot \ln x} dx = 1. $$
One can use numerical calculators to check that $a$ is close to $0.34148$, but I am hoping to obtain a "closed-form'' solution or find out if this cannot be simplified further. Thanks!
Remark: This integral appears in the study of Prophet Inequalities. See the following paper "Comparisons of Stop Rule and Supremum Expectations of I.I.D. Random Variables" of Hill and Kertz: http://www.jstor.org/stable/2243434
|
The bad news is that the integral actually diverges at $a=0$, despite the nice indefinite integral. The good news is that if you evaluate it (say, numerically) at one point ($a=1$ is good), you have a rapidly converging power series expansion. Namely, by differentiating Brevan's expression under the integral sign with respect to $a.$ Actually, you don't even have to do that. Writing Brevan's integrand as
$$\frac1 a \exp(-u+1) \frac{1}{1 + u\exp(-u+1)/a},$$ expand this in a geometric series, and note that each term is easily evaluated (and decays exponentially).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help needed with modulus addition and multiplication proof We have recently started working with modular arithmetic in my discrete mathematics course, and I found two problems in my textbook that I am having trouble with. What are these kinds of proofs called, and what is the usual approach that is undertaken? Lastly, how would you suggest tackling these proofs in particular? Thank you so much in advance!
1.(a mod m) + (b mod m) ≡ (a + b mod m)
2.(a mod m)(b mod m) ≡ (ab mod m)
|
The idea here is to write $a \bmod m$ as $a+k_1m$, where $k_1$ is some integer (and similarly $b \bmod m = b+k_2m$). Then we can construct a proof quite easily.
For 1, we have $(a+k_1m) + (b + k_2m) = a+b+(k_1+k_2)m \equiv a+b\mod m$
And for 2, we have $(a+k_1m)(b + k_2m) = ab+(bk_1+ak_2+k_1k_2m)m \equiv ab \mod m$
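Both congruences are easy to spot-check numerically; a small Python sketch (the sample sizes and ranges are arbitrary):

```python
import random

random.seed(42)
for _ in range(1000):
    m = random.randint(2, 50)
    a = random.randint(-1000, 1000)
    b = random.randint(-1000, 1000)
    # Python's % returns a representative in [0, m) for m > 0, so these
    # are exactly the two congruences proved above.
    assert (a % m + b % m) % m == (a + b) % m
    assert ((a % m) * (b % m)) % m == (a * b) % m
print("both congruences hold on all samples")
```

Such a check is of course no substitute for the proof, but it is a quick way to catch a misstated identity.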
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Why the equation of an arbitrary straight line in complex plane is $zz_o + \bar z \bar z_0 = D$ Why the equation of an arbitrary straight line in complex plane is $zz_o + \bar z \bar z_0 = D$ where D $\in R$
I understand that a vertical straight line can be defined by the equation $z+\bar z= D$, because if $z =x+yi$ then $\bar z = x-yi$; thus $z+\bar z = x+yi+x-yi=2x$, so the equation describes the vertical line $x = D/2$.
But why $zz_o + \bar z \bar z_0 = D$ is an arbitrary straight line in complex plane?
|
Hint: given any two points $z_1, z_2 \in \mathbb{C}\,$, then $z$ is collinear with $z_1, z_2$ iff there exists $\lambda \in \mathbb{R}$ such that $z-z_1 = \lambda(z-z_2)$. Eliminate $\lambda$ between the following, then define $z_0, D$ appropriately:
$$
\begin{cases}
\begin{align}
z-z_1 &= \lambda(z-z_2) \\
\bar z- \bar z_1 &= \lambda(\bar z- \bar z_2)
\end{align}
\end{cases}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
If $\int_{a}^{b}f(x)g^n(x)dx=0, \quad \forall n \in \mathbb{N}, \; $ then $\; f \equiv 0$ Let $g$ be continuous, non-negative, and strictly increasing on $[a, b]$. Prove that if $f$ is continuous and
$$\int_{a}^{b}f(x)g^n(x)dx=0, \quad \forall n \in \mathbb{N},$$
then $f\equiv 0$.
With a change of variable I have arrived here but I could not continue:
$$\int_{g(a)}^{g(b)}f(u)u^n \frac{du}{u'}=0, \quad \forall n \in \mathbb{N},$$
Two particular cases already resolved by the community are:
$g(x)=x$ and $x \in [0,1]$: $\quad \int_{0}^{1}f(x)x^ndx=0, \quad \forall n \in \mathbb{N}, \; $
then $f\equiv 0$.
$g(x)=e^x$: $\quad \int_{a}^{b}f(x)e^{nx} dx=0, \quad \forall n \in \mathbb{N}, \; $
then $f\equiv 0$.
|
Hagen von Eitzen above has indeed given an elegant solution valid when the given $g$ is strictly increasing; no assumption that $g$ be non-negative is needed or used. However, it is not necessary to assume that
$$\int_a^b f(x)\,g(x)^n\,dx=0 \tag{1}$$
for all non-negative integers $n=0,1,2,3,\dots$; it suffices to assume this for all integers $n \ge M$, where $M$ is any positive integer (by increasing $M$ we may assume $M$ is odd). We use the Stone–Weierstrass theorem instead of the Weierstrass theorem. Note that $(1)$ implies
$$\int_a^b f(x)\,g(x)^M\,p(g(x))\,dx =0$$
for every polynomial $p$. If $g$ is never $0$ on $[a,b]$, then the algebra of functions of the form
$$q(x)= g(x)^M\,p(g(x)) \tag{2}$$
separates the points of $[a,b]$ (since $g$ is strictly increasing) and does not vanish at any point, hence is dense in the continuous functions for the max norm. Taking a sequence of such functions converging uniformly to $f$, we see that $\int_a^b f(x)^2\,dx =0$, so $f$ vanishes identically, as before.
What happens if $g(c) = 0$ for some $c$ in $[a,b]$? (There is at most one such $c$, since $g$ is strictly increasing.) Then the functions of the form $C + q$ ($q$ as in $(2)$, $C$ a real constant) form a dense set. Now if $h$ is continuous with $h(c)=0$, take a sequence of functions $C_n+q_n$ converging uniformly to $h$; since every $q_n(c)=0$, the constants $C_n$ tend to $0$, so the functions $q_n = (C_n+q_n) - C_n$ tend to $h$ uniformly. Take $h = fT$, where $T$ is continuous with $0 \le T \le 1$, $T(c) =0$, and $T=1$ outside an interval of length $\varepsilon$ containing $c$; then $\int_a^b f(x)^2\,T(x)\,dx =0$. Letting $\varepsilon$ tend to $0$ gives $\int_a^b f(x)^2\,dx=0$, and hence $f \equiv 0$. Stuart M.N.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Is the meaning of the same thing different in mathematical logic and English? Is the meaning of "if and only if" different in its mathematical and everyday English usages?
let's take an example:
Example:I will go home if and only if it is not raining.
Now, according to me, in the English sense I cannot say anything about the rain if I did not go home,
but in the mathematical sense, I am sure that it is raining if I didn't go home, because in mathematics "if and only if" represents logical equivalence.
|
There is indeed a mismatch between the way we typically treat the 'if and only if' statement in English (or any other natural language) and the way we treat the logical $\leftrightarrow$. Take the following example:
'Mary lives in France if and only if Mary lives in Germany'
In any natural language we would immediately say that this statement is false.
But if we suppose that Mary lives in Nigeria, then following the logical biconditional, we would evaluate it as $False \leftrightarrow False = True$!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find how many complex roots the equation has How many complex roots has each of the equations:
$$z^3 = \overline{z}$$
$$z^{n-1} = i \overline{z}$$
Where $\overline{z}$ is the complex conjugate of $z$.
For the first one I tried writing $z = a + bi,\:a, b \in \mathbb R$ and finding the roots, but I think there must be a better approach, because for the second equation I can't do that.
|
For the first equation, we have $$|z|^3=|\bar z|\implies |z|^3 = |z|\implies |z| \in\{0,1\}.$$
If $z\neq 0$, we can multiply the first equation by $z$ to get $z^4 = z\bar z = |z|^2 = 1$ and we have four different solutions $\{\pm 1,\pm i\}$. Since $z=0$ is solution of the original equation as well, this gives total of five solutions.
For the second equation, if $n = 1$, then the equation reads $i\bar z = 1$, which has the unique solution $z = i$.
If $n>2$, just like in the first equation, we conclude that $|z| \in\{0,1\}$. So, if $z\neq 0$, multiply the equation to get $z^n = i$ which has $n$ different complex roots. Adding $z = 0$ gives total of $n+1$ solutions.
The case $n=2$ is special, as Marc van Leeuwen kindly pointed out in the comments. The equation becomes $z = i\bar z$. In this case we can't conclude that $|z| \in\{0,1\}$. Let $z=x+iy$ for $x,y\in\Bbb R$. The equation becomes $$x+iy = i(x-iy)\iff x+iy = y + ix \iff x = y$$ and thus, there are infinitely many solutions and they are of the form $r(1+i),\ r\in\Bbb R$.
More generally, these are equations of the form $z^{n-1}=a\bar z$ for some complex $a$, $|a|=1$. For $n=1$ solution is unique, and for $n>2$, we have $|z|\in\{0,1\}$. Again, either $z=0$ or we can multiply the equation by $z$ to get $z^n = a$ which has precisely $n$ different complex solutions. The case $n=2$ is again a special case and needs to be treated separately:
$$x+iy = (\alpha+i\beta)(x-iy)\iff \begin{align} (\alpha-1)x+\beta y &= 0\\ \beta x - (\alpha+1)y &= 0\end{align}$$
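The solution counts above can be verified numerically; a quick Python sketch checking the five solutions of $z^3 = \bar z$ and the line of solutions in the special case $n = 2$:

```python
# The five claimed solutions of z^3 = conj(z): 0, 1, -1, i, -i.
for z in (0, 1, -1, 1j, -1j):
    assert abs(z**3 - complex(z).conjugate()) < 1e-12

# For n = 2, every z = r(1 + i) with r real solves z = i * conj(z).
for r in (-2.5, 0.0, 0.3, 7.0):
    z = complex(r, r)
    assert abs(z - 1j * z.conjugate()) < 1e-12
print("all checks pass")
```

The second loop illustrates why $n=2$ is special: the solution set is a whole real line rather than a finite set of points.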
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Triangles within a Parallelogram ABCD is a parallelogram.
E is the point where the diagonals AC and BD meet.
Prove that triangle ABE is congruent to triangle CDE.
|
Since $ABCD$ is a parallelogram, $AB \parallel DC$ and $AB = CD$ (opposite sides of a parallelogram). Because $AB \parallel DC$, the alternate angles satisfy $\angle EAB = \angle ECD$ and $\angle EBA = \angle EDC$.
So triangles $ABE$ and $CDE$ have two pairs of equal angles and the included sides equal ($AB = CD$), and hence are congruent by the Angle-Side-Angle rule, even though one is a rotated copy of the other.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2258983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is it possible to compute Right Triangle's Legs starting from another Right Triangle with the same Hypotenuse? In an application of the Manhattan distance through the haversine formula, I got stuck on a problem that doesn't allow me to compute the right distance between two points in a space.
Despite the scope, it could be useful to many other applications, so I'm trying to find a "good enough" solution of this tedious problem.
Take a look at this simple picture to easily understand the problem:
right triangles with same hypotenuse
There are two right triangles, one red and one blue, which have the same hypotenuse but different legs and legs ratios.
The two legs of the red triangle are known, so it is easy to compute both hypotenuse and angles gamma and beta, but what is important for me is the computation of c and d which are the legs of the blue triangle.
There doesn't exist a common ratio between the red legs and the blue ones (such as 16:9 in TV monitors), so it is probably impossible to solve this problem, but maybe I'm wrong.
I spent some time trying to compute alpha and now I think that this is impossible. I know that putting alpha equal to 45° would give c = d, but this is not the solution that I want: as you can see, the blue legs differ from each other.
If you have any idea concerning this problem, please let me know your point of view. I would appreciate it, because I was not able to find any suggestion. THANK YOU
|
To know all sides of a triangle you must know either:
*
*One side and two angles, or
*Two sides and one angle, or
*Three sides
You only know one side and one angle. Ergo, you cannot compute any further sides in the blue triangle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
A question about the equivalence relation on the localization of a ring. Let $A$ be a ring and $S$ a multiplicative closed set. Then the localization of $A$ with respect to $S$ is defined as the set $S^{-1}A$ consisting of equivalence classes of pairs $(a, s)$ where to such pairs $(a,s), (b,t)$ are said to be equivalent if there exists some $u$ in $S$ such that
$$u(at-bs)=0$$
Now, in the Wikipedia article about the localization of a ring, it says that the existence of that $u\in S$ is crucial in order to guarantee the transitive property of the equivalence relation.
I've seen the proof that the equivalence relation defined above is indeed an equivalence relation, but I fail to see how crucial the existence of $u$ is. For example, why doesn't it work if we simply say that two pairs $(a,s),(b,t)$ are equivalent iff $at - bs = 0$? I tried to come up with a counterexample for such a case, but failed in the attempt.
|
For a more geometric example, let $A = \mathbb{R}[x,y] / (xy)$ and $S = \{ 1, x, x^2, x^3, \ldots \}$. (In algebraic geometry, $A$ represents a ring of functions on the union of the two axes in $\mathbb{R}^2$.) Then in $S^{-1} A$, $y/1 = 0/1$ even though $y \cdot 1 - 0 \cdot 1 \ne 0$ in $A$ - but in fact, $x (y \cdot 1 - 0 \cdot 1) = 0$.
Intuitively, the reason $y = 0$ must hold in $S^{-1} A$ is that $xy = 0$ is inherited from $A$, and then you can multiply both sides by $x^{-1}$. (The localization corresponds to the geometric operation of intersecting with the set where $x \ne 0$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
Mental division of two fractions? I've got a non-calc paper coming up, and when going through a test, this fraction came up:
$$
\frac{8}{-0.4} \equiv \frac{8}{\big(\frac{-2}{5}\big)}
$$
Going through the answers he says: $$8/2=4$$ I then assume he did -(4*5)
so: $$\frac{8}{\big(\frac{-2}{5}\big)} = -20$$
I can see what he's done, but I don't see what's happening mathematically:
$$\frac{a}{\frac{b}{c}} \equiv \frac{a}{b}\cdot c$$
|
What he is using is that $\frac{a}{b/c} = \frac{a}{b}\cdot c$, i.e. dividing by a fraction is the same as multiplying by its reciprocal. That way he could transform a division involving decimals into simple integer arithmetic: $\frac{8}{-2/5} = \frac{8}{-2}\cdot 5 = -4\cdot 5 = -20$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Line Integral and Residue Theorem
I know that applying the residue theorem to some integral shows that when you differentiate $(1+z)^{2n}$ $n$ times and evaluate at $0$ you get $\binom{2n}{n}$, but I don't understand how the highlighted step is taken as automatic.
|
HINT:
Apply the binomial theorem to $\left(\frac{z+z^{-1}}{2}\right)^n$ and exploit
$$\oint_{|z|=1}z^m\,dz=\begin{cases}0&,m \ne -1\\\\2\pi i&,m=-1\end{cases}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How can I find the area between the graphs of a function and its inverse? I have the following function $$f(x)=x\cdot e^{x^2-1} $$ and I want to find the area between this function and its inverse. I'm not sure how to calculate the integral, because I know that for this type of problem I need to find where the two functions intersect.
|
HINT: The functions intersect where $f(x)=x$.
HINT2: This is at $-1, 0, 1$
HINT3: The graph of the inverse function is the graph of $f$ reflected in the line $y=x$, so (since $f$ is increasing) the intersections of $f(x)$ and $f^{-1}(x)$ occur where $f(x)$ intersects $y=x$. Use this to integrate and find the area between $f(x)$ and the line $y=x$, then multiply by $2$ by symmetry.
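Following the hints (intersections with $y=x$ at $-1, 0, 1$, and the reflection symmetry), here is a small numeric sketch; the midpoint rule and the comparison with the closed form $2/e$, which one gets by integrating $|x e^{x^2-1} - x|$ directly, are my additions, not part of the hints:

```python
import math

f = lambda x: x * math.exp(x**2 - 1)

# Midpoint-rule approximation of  2 * integral of |f(x) - x| over [-1, 1],
# i.e. twice the area between the graph and the line y = x.
n = 20000
a, b = -1.0, 1.0
h = (b - a) / n
mid = (a + (i + 0.5) * h for i in range(n))
area = 2 * sum(abs(f(x) - x) * h for x in mid)

print(area)        # ~0.735759
print(2 / math.e)  # the exact value of the area
```

The agreement with $2/e$ confirms that the symmetry argument in HINT3 gives the full area between the two graphs.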
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Decomposition of the tensor product $\mathbb{Q}_p \otimes_{\mathbb{Q}} \mathbb{Q}[i]$ into a product of fields I am trying to solve the following problem:
For each rational prime $p$, describe the decomposition of the tensor product $\mathbb{Q}_p \otimes_{\mathbb{Q}} \mathbb{Q}[i]$ into a product of fields, where $\mathbb{Q}_p$ is the field of $p$-adic numbers.
I know that the tensor product of $2$ extensions of a field one of which is finite is Artinian and is therefore a product of Artinian local rings, but I do not know how to compute these Artinian local rings. Please if you can help me with this, I'll really appreciate it.
|
In general, tensoring a number field with $ \mathbf Q_p $ gives you an appropriate direct product of the completions of it at the different primes lying over $ p $. The key isomorphism is
$$ \mathbf Q_p \otimes \mathbf Q(i) \cong \mathbf Q_p \otimes \mathbf Q[x]/(x^2 + 1) \cong \mathbf Q_p[x]/(x^2 + 1) $$
Now, we have three cases. If $ p = 2 $, then $ X^2 + 1 $ remains irreducible in $ \mathbf Q_2[X] $, since it is irreducible modulo $ 4 $. If $ p \equiv 3 \pmod{4} $, $ X^2 + 1 $ is irreducible modulo $ p $, thus it is also irreducible in $ \mathbf Q_p $. Finally, if $ p \equiv 1 \pmod{4} $, then $ X^2 + 1 $ has a root modulo $ p $ and its derivative does not vanish at this root, so Hensel's lemma gives a root in $ \mathbf Q_p $. Therefore:
$$ \mathbf Q_p \otimes_{\mathbf Q} \mathbf Q(i) \cong \mathbf Q_p(\sqrt{-1}) \textrm{ if } p = 2 \textrm{ or } p \equiv 3 \pmod{4} $$
$$ \mathbf Q_p \otimes_{\mathbf Q} \mathbf Q(i) \cong \mathbf Q_p \times \mathbf Q_p \textrm{ if } p \equiv 1 \pmod{4} $$
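The case split on $p \bmod 4$ can be sanity-checked by searching for a square root of $-1$ modulo small primes; a quick Python sketch (the bound $200$ is arbitrary):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# x^2 + 1 has a root mod an odd prime p exactly when p = 1 (mod 4);
# by Hensel's lemma that is precisely when the factor Q_p x Q_p occurs.
for p in (q for q in range(3, 200) if is_prime(q)):
    has_root = any(a * a % p == p - 1 for a in range(p))  # a^2 = -1 (mod p)
    assert has_root == (p % 4 == 1)
print("x^2 + 1 has a root mod p exactly when p = 1 (mod 4), all odd p < 200")
```

The prime $p=2$ is excluded from the loop since, as noted above, it needs the separate mod-$4$ (or mod-$8$) argument rather than Hensel lifting of a simple root.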
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Galois Theory Quadratic Subfield Let $ζ_7=e^{i2\pi/7}$ be a 7th root of unity. The field $\Bbb{Q}(ζ_7)$ contains a quadratic subfield that can be expressed in the form of $\Bbb{Q}(\sqrt{D})$ where D is an integer. What is D?
I understand that there is a field extension of order 6 and therefore there will be a quadratic subfield, but how do we find out what D is?
|
Not every field extension of degree $ 6 $ has a quadratic subfield. For example, $ \mathbf Q(\sqrt{1 + \sqrt[3]{2}}) $ has no subfield that is quadratic over $ \mathbf Q $. It is, however, true that every Galois extension of degree $ 6 $ has a unique quadratic subfield. It has been pointed out in another answer how to find this subfield of $ \mathbf Q(\zeta_7)/\mathbf Q $. (In the present case, since $7 \equiv 3 \pmod 4$, the quadratic Gauss sum shows that the quadratic subfield of $ \mathbf Q(\zeta_7) $ is $ \mathbf Q(\sqrt{-7}) $, so $D = -7$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Markov Chains - understand proof that if x and y communicate then if x is recurrent then y must also be recurrent I am trying to understand the proof that if two states communicate and one of them is recurrent, then the other must also be recurrent.
The book I'm looking at has this proof:
Suppose $x$ is recurrent, and that $y$ communicates with $x$. So there exist integers $k,l \geq 1$ such that $p_k(x,y)>0$ and $p_l(y,x)>0$.
By Chapman-Kolmogorov, $p_{k+n+l}(y,y)\geq p_{l}(y,x)p_{n}(x,x)p_{k}(x,y)$
So that's my first question - how do we derive the above from the Chapman-Kolmogorov equations?
The proof goes on: so by the recurrence of x and a previous theorem stating that state $x$ is recurrent if and only if the expected number of visits to $x$ is infinite, $\sum ^\infty _{n=0}p_n(y,y) \geq p_{l} (y,x)p_{k}(x,y)\sum^\infty _{n=0}p_n(x,x)=\infty$
My second question is: what allows us to move from the previous inequality to those summations (especially the first summation, where we go from $p_{k+n+l}(y,y)$ to $\sum^\infty _{n=0}p_n(y,y)$)?
Thank you.
|
Regarding $p_{k+n+l}(y,y) \ge p_l(y,x) p_n(x,x) p_k(x,y)$ can be intuitively explained as follows:
*
*The left-hand side is the probability of starting at $y$ and after $k+n+l$ steps landing at $y$ again.
*The right-hand side is the probability of starting at $y$, then after $l$ steps landing at $x$, then after $n$ more steps landing at $x$ again, and then after $k$ more steps landing at $y$.
*The paths considered by the right-hand side each go from $y$ to $y$ in $l+n+k$ steps, but there are many other ways to do so, so the probability on the right-hand side is smaller.
More succinctly,
\begin{align}
&P(\text{go from $y$ to $y$ in $k+n+l$ steps})
\\
&= \sum_{a,b} P(\text{go from $y$ to $a$ in $l$ steps, then to $b$ in $n$ steps, then to $y$ in $k$ steps})
\\
&\ge P(\text{go from $y$ to $x$ in $l$ steps, then to $x$ in $n$ steps, then to $y$ in $k$ steps}).
\end{align}
For the second question,
$$\sum_{j=0}^\infty p_j(y,y) \ge \sum_{j=l+k}^\infty p_j(y,y) \ge p_l(y,x) p_k(x,y) \sum_{n=0}^\infty p_n(x,x),$$
where the last step comes from writing $j=l+n+k$.
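The Chapman–Kolmogorov lower bound is easy to check numerically for a random chain; a small sketch assuming NumPy is available (the state labels, step counts and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum(axis=1, keepdims=True)  # random row-stochastic transition matrix

x, y, k, n, l = 0, 2, 3, 5, 2
lhs = np.linalg.matrix_power(P, l + n + k)[y, y]
rhs = (np.linalg.matrix_power(P, l)[y, x]
       * np.linalg.matrix_power(P, n)[x, x]
       * np.linalg.matrix_power(P, k)[x, y])
assert lhs >= rhs  # p_{l+n+k}(y,y) >= p_l(y,x) p_n(x,x) p_k(x,y)
print(lhs, ">=", rhs)
```

The left-hand side sums over all intermediate states, of which the right-hand side picks out a single one, so the inequality holds for any stochastic matrix.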
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Maps to a Subobject Classifier Let $\Omega$ be the subobject classifier of a category $C$. Choose some map $m: X \rightarrow \Omega$. Is there necessarily a subobject $p: P \rightarrow X$ for which $m$ is the characteristic map?
It seems that in the definition of subobject classifier you can only go the other way around, i.e. from subobjects of $X$ to characteristic maps so that there may be more maps than subobjects. Is that possible?
|
Usually you only speak of a subobject classifier in a category that has all pullbacks. In that case, the pullback of the universal map $1\to\Omega$ along $m$ is a subobject $p:P\to X$ for which $m$ is the characteristic map. Really, the existence of at least all pullbacks of this form should be part of the definition of a subobject classifier, but it is sometimes not mentioned since the definition is only used in categories which have all pullbacks.
If you don't assume any pullbacks exist, then such a subobject need not exist. For instance, consider the category of nonempty sets. The usual subobject classifier for sets still satisfies the definition "every subobject has a unique characteristic map" and so gives a "subobject classifier" in this category by that definition. But a function which is the characteristic function of the empty subset is not the characteristic function of any subobject in this category.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2259937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find the real and imaginary parts of $ln(z)$ This is on my homework on differentials and partial differentiation, so I'm not sure what application these could have on the natural log of z
|
The answer of Iti Shree is correct, but under tacit assumptions, which I would
like to clarify here. In particular, $\Im(\ln(z)) = \arctan(\frac{y}{x})$ is
not defined if $x = 0$ and does not distinguish between opposite complex
numbers.
First, $\ln(z)$ needs to be defined properly. Notice that it cannot be defined
continuously on the whole complex plane, because of the branch point at
$z = 0$. But it can be defined on the complex plane without the negative real
numbers as the unique analytical continuation of the standard logarithm
defined on the positive real numbers. This definition is assumed here.
Also, $\arctan(x)$ needs to be defined properly. Let $\arctan(x)$ be defined
such that $-\frac{\pi}{2} < \arctan(x) < \frac{\pi}{2}$.
Finally, let $x$ denote $\Re(z)$ and $y$ denote $\Im(z)$.
Then, for $x > 0$ or $y \ne 0$:
$
\begin{aligned}[t]
\Re(\ln(z)) &= \ln(|z|) = \frac{\ln(x^2 + y^2)}{2} \\
\Im(\ln(z)) &= \arg(z) = 2 \cdot \arctan\left(\frac{y}{x + \sqrt{x^2 + y^2}}\right)
\end{aligned}
$
where $\arg(z)$ is defined such that $-\pi < \arg(z) \le \pi$.
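These formulas can be sanity-checked against Python's built-in complex logarithm, which uses the same principal branch $-\pi < \arg z \le \pi$ (the sample points below are arbitrary, avoiding the negative real axis):

```python
# Verify Re(ln z) = ln(x^2 + y^2)/2 and
# Im(ln z) = 2*arctan(y / (x + sqrt(x^2 + y^2))) against cmath.log.
import cmath
import math

for z in [1 + 2j, -0.5 + 3j, 2 - 1j, -1 - 4j, 5 + 1j]:
    x, y = z.real, z.imag
    w = cmath.log(z)
    assert abs(w.real - 0.5 * math.log(x * x + y * y)) < 1e-12
    assert abs(w.imag - 2 * math.atan(y / (x + math.hypot(x, y)))) < 1e-12
```

The half-angle identity $\tan(\theta/2) = y/(x + |z|)$ is what makes the $\arg$ formula work away from the negative real axis.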
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why are random walks in dimensions 3 or higher transient? I watched this PBS video a while ago (relevant part here) and have been trying to get my head around the idea of transient walks. The video says that a recurrent random walk is one that is guaranteed to return to it's starting position - all 1D and 2D walks - and a walk is transient if there is a positive probability that it never returns - 3D or higher. I've tried to have a think about this and looked some stuff up but I haven't had any breakthroughs.
What confuses me is this: A random walk in 3 dimensions can be split up into 3 independent random 1D walks. If each of these walks is guaranteed to return to the starting position infinitely many times we can say that there is a finite positive probability that they will return to the starting point on a given 'turn'. The product of the three finite probabilities is finite so isn't there a finite chance that any random walk in three dimensions will return to the start on any given 'turn' and hence they are guaranteed to return at some point?
I imagine I am just making incorrect assumptions about the nature of these infinite systems as is too easy to do but I'd like to know exactly where my intuition is wrong.
|
In 1 dimension, although the expected number of times that you will return to the origin before a give time approaches infinity as time approaches infinity, it varies sublinearly with time. When you have 3 independent 1-dimensional random walks all starting at the origin, although you can expect that each of them individually will return to the origin infinitely many times, it does not follow that there will ever be a time when all 3 of them are at the origin, although that might happen.
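To make this quantitative (my own addition, using the standard fact that a simple 1D walk is back at the origin at time $2n$ with probability $\binom{2n}{n}/4^n \sim 1/\sqrt{\pi n}$): the expected number of returns of one walk diverges, while the expected number of times three independent walks are simultaneously at their origins is the sum of the cubes, which converges:

```python
# Expected returns of one 1D walk vs simultaneous returns of three
# independent 1D walks.  p_{2n}(0,0) = C(2n,n)/4^n, updated iteratively
# via p_n = p_{n-1} * (2n-1)/(2n) to avoid huge intermediate numbers.
import math

N = 20000
one_d = three_d = 0.0
p = 1.0
for n in range(1, N + 1):
    p *= (2 * n - 1) / (2 * n)
    one_d += p           # grows like 2*sqrt(N/pi) as N grows
    three_d += p ** 3    # converges: terms ~ (pi*n)^(-3/2)

assert one_d > 100                                  # keeps growing with N
assert three_d < 0.5                                # stays bounded
assert abs(p * math.sqrt(math.pi * N) - 1) < 1e-3   # p ~ 1/sqrt(pi*N)
```

A finite expected number of simultaneous visits forces a positive probability that the three walks are never again at their origins at the same time, which is the transience described in the answer.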
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Am I supposed to use generating functions or combinatorics or something for this question? If there are 201 seats in the Parliamentary chamber, how many different ways are there for the numbers
of seats to be allocated amongst three political parties, subject to no one party having an overall
majority?
I could figure it out manually. 201/3 = 67 per party. That's one way. Then divide 67/2 (because you can move 2 from one party at a time to ensure the other 2 are equal) = 33.5, but use the lower bound as a person cannot be split in half. That makes it 100+100+1 = 201. So now that's 33 ways. But the party that gives up 2 each time can change. i.e. Party A gives up 2 members to B&C until 1 remains. Then party B gives up... then C. So would it be 1+(33*3) = 100 ways?
Is there a cleaner way to do this, say using generating functions or combinatorics of some sort? If not, I don't know why they would ask this question.
Any help appreciated.
|
We seek the number of integer solutions to the equation
$$
x_1+x_2+x_3=201\tag{1}
$$
where $ 0\leq x _i \leq 100$. You can solve this using generating functions by noting that this is equivalent to finding the coefficient of $x^{201}$ in the generating function of
$$
(1+x+\dotsb x^{100})^3
=\frac{(1-x^{101})^3}{(1-x)^3}
=(1-3x^{101}+3x^{202}+x^{303})\left(\frac{1}{(1-x)^3}\right).\tag{2}
$$
Use the identity
$$
\frac{1}{(1-x)^3}=\sum_{n=0}^\infty \binom{n+2}{2}x^n.\tag{3}
$$
(which can be obtained by repeatedly differentiating the geometric series) to find that the coefficient of $x^{201}$ is
$$
\binom{201+2}{2}-3\binom{(201-101)+2}{2}\tag{4}
$$
which can be obtained equivalently using inclusion exclusion.
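A brute-force count confirms the closed form in $(4)$, which evaluates to $\binom{203}{2} - 3\binom{102}{2} = 20503 - 15453 = 5050$:

```python
# Count integer solutions of x1 + x2 + x3 = 201 with 0 <= xi <= 100
# and compare with the generating-function / inclusion-exclusion answer.
from math import comb

brute = sum(1
            for x1 in range(101)
            for x2 in range(101)
            if 0 <= 201 - x1 - x2 <= 100)
closed = comb(203, 2) - 3 * comb(102, 2)
assert brute == closed == 5050
```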
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Ratio of sum of squares of Normal Distributions Find $$E[\frac{\sum_{i=1}^{51} X_i^2}{\sum_{i=51}^{101} X_i^2}], $$ if $X_1,X_2,...,X_{101}$ are independent and are distributed as N(0,1). This question appeared in my final exam of probability and stochastic processes, which occurred today. I couldn't answer this in the exam and would love to find out more. I did simplify this a bit though. Here's what I did:
$Y = \sum_{i=1}^{50} X_i^2$, $Z = \sum_{i=52}^{101} X_i^2,X = X_{51}$. Then,
$X \sim \gamma(1/2,1/2), Y \sim \gamma(25,1/2), Z \sim\gamma(25,1/2). $ Then we wish to find $E[\frac{X+Y}{X+Z}]$, where X,Y,Z are independent random variables. I was stuck after this, as there was this integral of $$\frac{x^{-1/2}}{x+z}e^{-25x}, 0\leq x < \infty$$ Any help will be appreciated.
|
You can write the ratio as $\frac{Y}{X+Z} + \frac{X}{X+Z}$. Then $\frac{Y}{X+Z}$ has an F-distribution with $d_1 = 50$ and $d_2 = 51$ degrees of freedom. And $\frac{X}{X+Z}$ has a Beta-distribution with parameters $\alpha = \frac{1}{2}, \beta = \frac{50}{2} = 25$ .
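Pushing this to a number (my own completion of the answer, so treat it as a sketch): since $Y$ is independent of $X+Z \sim \chi^2_{51}$ and $E[1/\chi^2_d] = 1/(d-2)$, the first term has mean $50/49$; the Beta$(\tfrac12, 25)$ mean is $\tfrac{1/2}{1/2+25} = \tfrac{1}{51}$, so the expectation should be $\tfrac{50}{49}+\tfrac{1}{51} \approx 1.0400$. A quick Monte Carlo check, using $\chi^2_k = \text{Gamma}(k/2,\ \text{scale } 2)$:

```python
# Monte Carlo estimate of E[(X+Y)/(X+Z)] with X ~ chi2_1 and
# Y, Z ~ chi2_50, all independent.
import random

random.seed(12345)
trials = 100_000
total = 0.0
for _ in range(trials):
    x = random.gammavariate(0.5, 2.0)    # chi-square with 1 df
    y = random.gammavariate(25.0, 2.0)   # chi-square with 50 df
    z = random.gammavariate(25.0, 2.0)   # chi-square with 50 df
    total += (x + y) / (x + z)

estimate = total / trials
exact = 50 / 49 + 1 / 51
assert abs(estimate - exact) < 0.02
```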
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Identity theorem for $\mathbb{R}^n$ This question is follow up to an interesting question I found here.
The results of this question states the following:
If $U$ is a domain, and $f,g$ are two real-analytic functions defined
on $U$, and if $V\subset U$ is a nonempty open set with $f\lvert_V
\equiv g\lvert_V$, then $f \equiv g$. If the domain is one-dimensional
(an interval in $\mathbb{R}$), then it suffices that $f\lvert_M \equiv
g\lvert_M$ for some $M\subset U$ that has an accumulation point in
$U$.
I have a few question about this theorem:
*
*I was looking for a reference for this. However, in the previously pointed out reference A Primer of Real Analytic Functions by Krantz and Parks, I was not able to locate this theorem. I would appriciate if someone could point me to proper reference of this theorem.
*My main question, is about the second part of the theorem. Specifically, I am interested in why there is such a difference going from $\mathbb{R}$ to $\mathbb{R}^n$. That is in one-dimension we can assume that $M$ is just a set with an accumulation point, but in $\mathbb{R}^n$ we have to assume that $M$ is an open set. I would really like see a counter example that demonstraes that assuming that $M$ is a set with accumulation points is not sufficient in $\mathbb{R}^n$.
*I would really like for you to speculate or suggest an extra assumptions on functions $f$ and $g$ such that it suffices to consider $M$ to be only a set with an accumulation point.
|
$M$ being open is not a necessary condition for $M$ to be a uniqueness set in higher dimensions. For example, if $f,g$ are real analytic on $\mathbb R^2$ and $f = g$ on $\cup_{k=1}^\infty \{(x,y):x=1/k\},$ then $f\equiv g.$ Also any set of positive measure, in any dimension, will give the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Which fallacy is this? Assume that all people are either right-handed or left-handed, and likewise either right-footed or left-footed. 90% of people are right-handed. 90% of right-handed people are right-footed, but only 50% of left-handed people are left-footed as well. Which is more common: left-handedness or left-footedness?
STOP here, and have a go at answering that question first before continuing.
And then read on:
The answer is easy enough to get if you calculate it. Yet it feels counter-intuitive. Two people I tested this on (another used maths and got it right) assumed that because right-footedness is so dominant among right-handers and common among left-handers, that right-footedness should be even more common than right-handedness. There's some sort of fallacy at work here: any idea of what it is?
|
https://en.wikipedia.org/wiki/Prosecutor's_fallacy
People will see that half of the the left handed people are left footed and assume that left handed people are more common, but they don't realize that a small set of a large population can be comparable to a large set of a small population.
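Plugging in the question's numbers makes the comparison explicit: left-footedness collects $10\%$ of the large right-handed group plus $50\%$ of the small left-handed group, and comes out ahead of left-handedness:

```python
# Left-footedness vs left-handedness with the stated percentages.
p_right_handed = 0.90
p_left_handed = 0.10
p_left_footed = p_right_handed * 0.10 + p_left_handed * 0.50  # 0.09 + 0.05

assert abs(p_left_footed - 0.14) < 1e-12   # 14% left-footed...
assert p_left_footed > p_left_handed       # ...vs only 10% left-handed
```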
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Find the exact value of $A(\beta)=8\pi-16\sin(2\beta)$ with $\tan(\beta)= \frac{1}{2}$
The picture below represents a semi-circumference of diameter [AB] and
center C. Point D belongs to the semi-circumference and is one of
the vertices of the triangle $ABD$. Consider that BÂD = $\beta (\beta
\in ]0,\frac{\pi}{2}[)$ and AC = 4.
The area of the pink part of the picture is given by $$A(\beta) =
8\pi-16\sin(2\beta)$$
Find the exact value of the area of the pink part with $\tan(\beta)=
\frac{1}{2}$
I tried:
$$\tan \beta = \frac{\sin(\beta)}{\cos(\beta)} = \frac{1}{2}\\ \Leftrightarrow \sin(\beta) = \frac{\cos(\beta)}{2} \\ \Leftrightarrow \sin(\beta) = \frac{\sqrt{1-\sin^2\beta}}{2}\\ \Leftrightarrow ???$$
What do I do next?
|
Given: $\tan \beta = \frac 12$, so $2\sin\beta = \cos \beta$
$\sin 2\beta = 2\sin\beta\cos\beta = \cos^2\beta = \frac 1{\sec^2 \beta} = \frac 1{1 + \tan^2 \beta} = \frac 1{ 1 + \frac 14} = \frac 45$
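A quick numerical check of this value and of the resulting area $A(\beta) = 8\pi - 16\sin(2\beta)$:

```python
# With tan(beta) = 1/2, sin(2*beta) should equal 4/5.
import math

beta = math.atan(0.5)
assert abs(math.sin(2 * beta) - 4 / 5) < 1e-12

area = 8 * math.pi - 16 * math.sin(2 * beta)
assert abs(area - (8 * math.pi - 64 / 5)) < 1e-12  # about 12.33
```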
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Show $\sum_{l=0}^\infty \sum_{i=0}^\infty x^{{2^l}(2i+1)}=\sum_{j=1}^\infty x^j$
Show $\sum_{l=0}^\infty \sum_{i=0}^\infty x^{{2^l}(2i+1)}=\sum_{j=1}^\infty x^j$ where $|x|<1$
Thus show it is the geometric series.
I know, I could just write out the sum and it would make sense, but is there a more formal proof for this equality?
|
If $|x| < 1$,
$$\sum_{i=0}^\infty x^{2^l(2i+1)} = x^{2^l} \sum_{i=0}^\infty x^{2^{l+1} i} = \frac{x^{2^l}}{1 - x^{2^{l+1}}}.$$
Now, intuitively, if you sum the first $L$ terms you get the geometric series except for terms where the power of $x$ is divisible by $2^{L+1}$. So, you would expect the $L$th partial sum of the double sum to be equal to:
$$\sum_{l=0}^L \sum_{i=0}^\infty x^{2^l(2i+1)} = \frac{x}{1-x} - \frac{x^{2^{L+1}}}{1 - x^{2^{L+1}}}.$$
You can now either prove this by induction, or observe that it's in the form of the result of a telescoping sum, so it's sufficient to show
$$\frac{x^{2^l}}{1 - x^{2^{l+1}}} = \frac{x^{2^l}}{1 - x^{2^l}} - \frac{x^{2^{l+1}}}{1 - x^{2^{l+1}}}.$$
Then, once you have this result, it's easy to take the limit as $L \to \infty$, again assuming $|x| < 1$.
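Combining the closed form of the inner sum with the telescoping identity, the partial-sum formula can be checked numerically (the sample values of $x$ and $L$ are arbitrary, with $|x|<1$):

```python
# Check: sum_{l=0}^{L} x^{2^l} / (1 - x^{2^{l+1}})
#        = x/(1-x) - x^{2^{L+1}} / (1 - x^{2^{L+1}}).
for x in [0.1, 0.5, 0.9, -0.7]:
    for L in range(6):
        partial = sum(x ** (2 ** l) / (1 - x ** (2 ** (l + 1)))
                      for l in range(L + 1))
        expected = x / (1 - x) - x ** (2 ** (L + 1)) / (1 - x ** (2 ** (L + 1)))
        assert abs(partial - expected) < 1e-12
```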
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Implication without if statement In logic, can we have an implication if there is no "if"?
Ex, John shall do $x$ regardless
Then is this an implication?
|
The word "regardless" in your example doesn't modify the meaning of the sentence. Its job is merely to explicitly point out that there is no implication or condition present.
It is up to the reader/listener to figure out from context which potential implication it is he should note isn't there.
In mathematics, where our ideal is that all conditions are stated explicitly, we have no need for representing such things in symbolic logic. Then, conditions are never just assumed to be there if they're not actually written down, so a symbolic form of "observe that there is no condition here" is not necessary. Of course, we can still ask the reader to think about a particular property of a formula, but we use words for that, not symbols.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2260965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Characterization for basis of generated topology
Definition: Given a set $X$ and $\mathcal{B}$ a collection of subsets of $X$, the topology generated by $\mathcal{B}$ is the set:
$$[\mathcal{B}]=\bigcap_{\tau\in T}\tau$$
where $T=\{\tau \in \mathscr{P}(X):\tau$ is a topology on $X$ and $\mathcal{B}\subset\tau\}$
Question: Show that if $\mathcal{B}$ satifies:
*
*$\mathcal{B}$ covers $X$, that is, $\forall x \in X$, there is a $B\in \mathcal{B}$ such that $x\in B$.
*$\forall B_1,B_2\in\mathcal{B}$ and $\forall x\in B_1\cap B_2$, there is a $C\in\mathcal{B}$ such that $x\in C\subset B_1\cap B_2$.
then $\mathcal{B}$ is a basis for the generated topology on $X$.
Almost everywhere I see the conditions in the question as the very definition of generated topology, and in the few places that said both definitions were equivalent, I didn't see a proof.
From 1, I got (fairly obviously) that for every open set $A\in [\mathcal{B}]$ we have:
$$A\subset\bigcup_{x\in A}B_x$$
where $x\in B_x\in\mathcal{B}$. But I got stuck when I tried to show the other inclusion for a subset $\mathcal{B}'\subset\mathcal{B}$.
I also tried by contradiction, only to hit another wall.
Any tips are appreciated.
|
The set $\tau$ of arbitrary unions of elements of $\mathcal{B}$ is a topology.
It is clear that $X\in\tau$, because of property 1; also $\emptyset\in\tau$ (union of the empty family).
It is also clear that $\tau$ is closed under finite intersections. Indeed, suppose
$$
U=\bigcup_{i\in I}B_i,
\qquad
V=\bigcup_{j\in J}C_j,
\qquad
(B_i,C_j\in\mathcal{B})
$$
Then
$$
U\cap V=\bigcup_{i,j}(B_i\cap C_j)
$$
so it's enough to prove that if $B,C\in\mathcal{B}$, then $B\cap C\in\tau$. This is obtained applying property 2.
Since $\tau$ is a topology and clearly it is included in every topology that includes $\mathcal{B}$, we are done.
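On a finite set the whole argument can be checked exhaustively (the particular $X$ and $\mathcal{B}$ below are my own example; $\mathcal{B}$ satisfies properties 1 and 2):

```python
# Verify that arbitrary unions of members of a basis B on X = {0,1,2}
# form a topology.
from itertools import chain, combinations

X = frozenset({0, 1, 2})
B = [frozenset({0}), frozenset({1}), frozenset({0, 1, 2})]

# all unions of subfamilies of B; the empty subfamily gives the empty set
subfamilies = chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))
tau = {frozenset().union(*fam) for fam in subfamilies}

assert frozenset() in tau and X in tau
for U in tau:
    for V in tau:
        assert U & V in tau   # closed under finite intersections
        assert U | V in tau   # closed under unions
```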
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to interpret limit notation $\lim\limits_{x \to a} f(x)= L$ is by most people intuitively thought of as "as $x$ gets close to $a$, $f(x)$ gets close to $L$"; however my lecturer said this is not correct. She told me to go away and somehow find out why, by the formal definition, the intuition "$f(x)$ is close to $L$, for all $x$ sufficiently close to $a$" is correct, not the former.
I went on to find examples. Simply consider $f(x) = x/|x|$ as $x$ tends to some number,
and recall, with emphasis: “As $x$ gets close to $a$, $f(x)$ gets close to $L$”.
The emphasis on gets is important, as it suggests some change towards $L$. However, when investigating, as $x$ tends to some number (like $0$), $f(x) = L$ no matter where on the domain you fly. There ceases to be a case in this function where $f(x)$ moves/gets close to $L$ anywhere.
“$f(x)$ is close to $L$, for all $x$ sufficiently close to $a$” includes the idea of ‘there exists some interval’ where $f(x)$ is close to $L$.
Is that a sufficient answer to the question? I can't find anything online.
|
The epsilon-delta definition is pretty straight-forward:
$$\lim _{{x\to c}}f(x)=L\iff (\forall \varepsilon >0)(\exists \ \delta >0)(\forall x\in D)(0<|x-c|<\delta \ \Rightarrow \ |f(x)-L|<\varepsilon )$$
What does this mean? Well, we break it down, part by part:
*
*$(\forall\varepsilon>0)\dots(\dots|f(x)-L|<\varepsilon)$
This means that $f(x)$ can get arbitrarily close to $L$.
*$(\exists \ \delta >0)(\forall x\in D)(0<|x-c|<\delta\dots)$
This means that the previous statement is true for every $x$ in the domain that is a certain distance from $c$, the value $x$ is approaching.
This is different from your definition in that it requires $f(x)$ to be close to $L$ with some maximum error $\varepsilon$ for all values $x$ close to $c$. $x$ does not merely approach $c$, but instead, we must have
$$|f(x)-L|<\varepsilon$$
for every $x$ close to $c$. The next requirement would then be that the distance between $f(x)$ and $L$ can keep getting smaller and smaller, and that it would still hold for every $x$ values a certain distance from $c$.
There is no such "$x$ approaches $c$" here.
So yes, $f(x)$ is close to $L$ for all $x$ sufficiently close to $a$ is the accurate statement.
In the example $x/|x|$, notice that no matter what $x$ value you choose, either the result will be $1$ or $-1$. Let's imagine taking the limit to $10$.
$$\lim_{x\to10}\frac x{|x|}\stackrel?=1$$
We then make a table of values:
$$\begin{array}{c|c}x&f(x)\\\hline9&1\\9.9&1\\9.99&1\\\vdots&\vdots\\10.01&1\\10.1&1\\11&1\end{array}$$
Notice that $f(x)$ does not approach anything. It is simply constant. I suppose you could then try to fix your statement with "as $x$ approaches $10$, $f(x)$ is close to $1$ within some amount of error that tends to zero."
But that misses the intuition you can get with the epsilon-delta definition:
Notice that $f(x)-L=0\forall x>0$. It thus follows that $|f(x)-L|=0<\varepsilon$, which holds when $\delta=10$. $\delta$ needn't get smaller. It simply needs to be small enough.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How does one evaluate $\lim _{n\to \infty }\left(\sqrt[n]{\int _0^1\:\left(1+x^n\right)^ndx}\right)$? I tried using Lebesgue's dominated convergence theorem and I'm getting $\lim _{n\to \infty }\left(1+\int _0^1x^ndx\:\right)$ which is $1$. But the answer should be 2.
|
Let $I_n$ be given by
$$I_n=\int_0^1(1+x^n)^n\,dx \tag1$$
From the binomial theorem, we can write
$$(1+x^n)^n=\sum_{k=0}^n\binom{n}{k}x^{nk}\tag 2$$
Using $(2)$ in $(1)$ reveals
$$\begin{align}
I_n&=\int_0^1 \sum_{k=0}^n\binom{n}{k}x^{nk}\,dx\\\\
&=\sum_{k=0}^n\binom{n}{k}\frac{1}{1+nk}\tag3
\end{align}$$
Clearly from $(3)$, we obtain the estimates for $(I_n)^{1/n}$
$$\frac{2}{(1+n^2)^{1/n}}\le (I_n)^{1/n}\le 2 \tag 4$$
whereupon applying the squeeze theorem to $(4)$ yields the coveted limit
$$\lim_{n\to \infty}\left(\int_0^1(1+x^n)^n\,dx\right)^{1/n}=2$$
as was to be shown!
NOTE:
The bounds given in $(4)$ follow by letting $k=0$ and $k=n$ in the term $\frac{1}{1+nk}$ in the binomial expansion of $(1+x^n)^n$. Since $x\le 1$ for $x\in [0,1]$, these bounds are tantamount to the bounds $2x^n\le 1+x^n\le 2$ in $(1)$ (as used by @didgogns), which lead immediately to $(4)$.
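The series $(3)$ and the squeeze bounds $(4)$ can be checked numerically (the values of $n$ below are arbitrary; convergence to $2$ is quite slow):

```python
# I_n = sum_k C(n,k)/(1+n*k); check  2/(1+n^2)^(1/n) <= I_n^(1/n) <= 2.
import math

def I(n):
    return sum(math.comb(n, k) / (1 + n * k) for k in range(n + 1))

for n in [10, 50, 200, 500]:
    root = I(n) ** (1 / n)
    lower = 2 / (1 + n * n) ** (1 / n)
    assert lower <= root <= 2

assert I(500) ** (1 / 500) > 1.9   # slowly creeping up to the limit 2
```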
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 3
}
|
A function is differentiable $n$ times. Assume there are $n+1$ distinct points. Prove that $\exists$ one point $y$ such that $f^{(n)}(y)=0$ So I'm stuck on this question, I have an idea on the question but I missed the lecture which it pertained to. So I'm unsure of the theory behind it so, it'd be appreciated if someone could help me out! :)
Question
Suppose the function $f : \mathbb{R}\rightarrow\mathbb{R}$ is $n$ times differentiable on $\mathbb{R}$. Assume there are $n+1$ distinct points {$x_1, x_2,...,x_n,x_{n+1}$} such that $x_1<x_2<...<x_n<x_{n+1}$ and $f(x_i)=0$ for all $i=1,2,...,n,n+1$. Prove that there exists at least one point $y$ such that $f^{(n)}(y)=0$.
Note
Unfortunately I don't really have an attempt as I've been sitting on it for 2 hours now unsure where to even start, because as I said I missed the lecture, however I have come to the conclusion that it could possibly involve doing Rolle's theorem multiple times but I don't really know how to actually apply it, etc. Anyway, ANY help would GREATLY be appreciated! :)
|
By Rolle's Theorem, there exist $c_i$ such that $x_i < c_i < x_{i+1}$ and $f'(c_i) = 0$. Now apply Rolle's Theorem in this way to the $c_i$'s. Keep doing this.
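A concrete illustration (my own example, not part of the proof): take $f(x) = (x-1)(x-2)(x-3)(x-4)(x-5)$, so $n = 4$ and $f$ has $n+1 = 5$ zeros; its fourth derivative is $120x - 360$, which vanishes at $y = 3 \in (1, 5)$:

```python
# Build the polynomial with roots 1..5, differentiate 4 times, and
# check that the 4th derivative vanishes inside (1, 5).

def poly_from_roots(roots):
    p = [1.0]                       # coefficients, lowest degree first
    for r in roots:
        shifted = [0.0] + p         # multiply by x
        for i in range(len(p)):
            shifted[i] -= r * p[i]  # subtract r * p
        p = shifted
    return p

def derivative(p):
    return [i * p[i] for i in range(1, len(p))]

d4 = poly_from_roots([1, 2, 3, 4, 5])
for _ in range(4):
    d4 = derivative(d4)

# d4 is linear: d4[0] + d4[1]*x
y = -d4[0] / d4[1]
assert 1 < y < 5
assert abs(y - 3.0) < 1e-9   # the mean of the roots, by symmetry
```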
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Evaluate $\int_0^\infty \frac{\ln^2(z)}{1+z^2}$dz by contour integration Background: This is part b of problem 12.4.3 from Arfken, Weber, Harris Math Methods for Physicists to show that $\int_0^\infty \frac{\ln^2(z)}{1+z^2}$dz$=4(1-\frac{1}{3^3}+\frac{1}{5^3}-\frac{1}{7^3}+\dots)$.
Part b of the question asks to show that this series evaluates to $\frac{\pi^3}{8}$ by contour integration. Where is my mistake? $\lim_{z \to 0}zf(z)=0$ and $\lim_{z \to \infty}zf(z)=0$, so the contributions of the big and little circles are $0$.
Drawing a branch cut along the positive x-axis and integrating counterclockwise: along the positive x-axis above the cut, around a big circle, back along the positive x-axis below the cut from infinity, and around the little circle:
Assume $I=\int_0^\infty \frac{\ln^2(x)}{1+x^2}\text{dx}$
We can add the components of along the contour and set that equal to the value of $2\pi i \text{Res}[f(z),i]$ evaluated at the poles $\pm i$
$$\int_0^\infty \frac{\ln^2(z)}{1+z^2}\text{dz}+\int_{\infty}^0 \frac{\ln^2(z)}{1+z^2}\text{dz}=2\pi i \text{Res}[f(z),\pm i]\tag{1}$$
$$\int_0^\infty \frac{\ln^2\mid x\mid}{1+x^2}\text{dx}-\int_0^{\infty} \frac{(\ln\mid x\mid+2i\pi)^2}{1+x^2}\text{dx}=2\pi i \left (\lim_{z \to i}\frac{\ln^2(z)}{2z}+\lim_{z \to -i}\frac{\ln^2(z)}{2z}\right )\tag{2}$$
$$\int_0^\infty \frac{\ln^2\mid x\mid}{1+x^2}\text{dx}-\int_0^{\infty} \frac{(\ln^2\mid x\mid+\color{red}{4\ln|x|i\pi}-4\pi^2)}{1+x^2}\text{dx}=2\pi i \left (\lim_{z \to i}\frac{\ln^2(z)}{2z}+\lim_{z \to -i}\frac{\ln^2(z)}{2z}\right )\tag{3}$$
$$0I+\color{red}{0}-\left[\tan^{-1}(x)\right]\mid^{\infty}_0(4\pi^2)\text{dx}=(2\pi i) \left (\frac{-\pi^2/4+9\pi^2/4}{2i}\right )\tag{4}$$
$$0I+2\pi^3=\frac{8\pi^3}{4}\tag{5}$$
For explanation of the red integral see here, here or here.
I found my error. It was a negative sign, and the two sides cancel to zero so you can't evaluate it this way, but I found an answer which evaluates it from negative to positive infinity so I'm marking the question as a duplicate. See dustin's answer at the link for the contour integration.
|
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
The contour is a key-hole $\ds{\,\mc{C}}$ which takes into acount the $\ds{\ln}$-branch cut
$$\ln\pars{z} = \ln\pars{\verts{z}} + \,\mrm{arg}\pars{z}\ic\,;\qquad z \not= 0\,,\quad
0 < \,\mrm{arg}\pars{z} < 2\pi
$$
when an integration of $\ds{I \equiv \oint_{\mc{C}}{\ln^{3}\pars{z} \over 1 + z^{2}}\,\dd z}$ is performed. The integrand has single poles at, according to the above branch cut, $\ds{\expo{\pi\ic/2}}$ and $\ds{\expo{3\pi\ic/2}}$.
\begin{align}
I & = 2\pi\ic\,\bracks{{\pars{3\pi\ic/2}^{3} \over -\ic - \ic} + {\pars{\pi\ic/2}^{3} \over \ic + \ic}} =
{13 \over 4}\,\pi^{4}\ic\label{1}\tag{1}
\end{align}
\begin{align}
I & =
\int_{0}^{\infty}{\ln^{3}\pars{x} \over 1 + x^{2}}\,\dd x +
\int_{\infty}^{0}{\bracks{\ln\pars{x} + 2\pi\ic}^{\,3} \over 1 + x^{2}}\,\dd x
\\[5mm] & =
\int_{0}^{\infty}{\ln^{3}\pars{x} \over 1 + x^{2}}\,\dd x -
\int_{0}^{\infty}{\ln^{3}\pars{x} + 3\ln^{2}\pars{x}\pars{2\pi\ic} + 3\ln\pars{x}\pars{2\pi\ic}^{2} + \pars{2\pi\ic}^{3} \over 1 + x^{2}}\,\dd x
\\[5mm] & =
-6\pi\ic\int_{0}^{\infty}{\ln^{2}\pars{x} \over 1 + x^{2}}\,\dd x +
12\,\pi^{2}\
\underbrace{\int_{0}^{\infty}{\ln\pars{x} \over 1 + x^{2}}\,\dd x}_{\ds{=\ 0}}\ +\
8\pi^{3}\ic\ \underbrace{\int_{0}^{\infty}{\dd x \over 1 + x^{2}}}
_{\ds{=\ {\pi \over 2}}}
\\[5mm] & =
-6\pi\ic\int_{0}^{\infty}{\ln^{2}\pars{x} \over 1 + x^{2}}\,\dd x + 4\pi^{4}\ic
\label{2}\tag{2}
\end{align}
With \eqref{1} and \eqref{2}:
\begin{align}
{13 \over 4}\,\pi^{4}\ic & =
-6\pi\ic\int_{0}^{\infty}{\ln^{2}\pars{x} \over 1 + x^{2}}\,\dd x + 4\pi^{4}\ic
\\[5mm]
\implies &
\bbx{\int_{0}^{\infty}{\ln^{2}\pars{x} \over 1 + x^{2}}\,\dd x =
{\phantom{^{3}}\pi^{3} \over 8}}
\end{align}
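As an independent numerical check of the final value (separate from the contour argument): substituting $x = e^t$ turns the integral into $\int_{-\infty}^{\infty} t^2/(2\cosh t)\,dt$, and a simple trapezoidal rule recovers $\pi^3/8 \approx 3.8758$:

```python
# Trapezoidal check of  int_0^inf ln(x)^2/(1+x^2) dx = pi^3/8,
# after the substitution x = e^t.
import math

def g(t):
    return t * t / (2.0 * math.cosh(t))

h, T = 0.001, 60.0            # step size and (generous) truncation
n = int(2 * T / h)
total = 0.5 * (g(-T) + g(T)) + sum(g(-T + i * h) for i in range(1, n))
integral = h * total

assert abs(integral - math.pi ** 3 / 8) < 1e-6
```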
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Convert statement from English to logic: "to pass philosophy it is not necessary to make notes every week" I saw this on a previous thread,
To pass philosophy it is not necessary to make notes every week.
Let $p = \text{Pass phil}$ and $m = \text{make notes}$,
Then basically what the sentence is is,
if you take notes or you don't take notes, then you will pass philosophy.
So is it
$m \lor \neg m \implies p$
?
|
You might use: $(m\land p) \lor (\neg m \land p)$ which is equivalent to $p$.
See truth table at: http://www.wolframalpha.com/input/?i=truth+table+((m%26p)or(~m%26p))
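The equivalence can also be checked exhaustively, without leaving Python:

```python
# (m AND p) OR (NOT m AND p) is logically equivalent to p.
for m in (False, True):
    for p in (False, True):
        assert ((m and p) or (not m and p)) == p
```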
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
The group corresponding to the Rubik's cube Why is this group never studied in a group theory course at university? Is it too complicated, or is it just not useful in connecting ideas to other systems like vector spaces, creating modules, etc.?
I would like to study this group but I did not find much useful info. Can you please point me to something easy to understand for a student who only took abstract algebra 1 and linear algebra 1?
|
The Rubik's cube group is studied in some universities:
W.D.Joyner's Lecture notes on the mathematics of the Rubik's cube
The Mathematics of the Rubik's cube
Group Theory and the Rubik’s Cube
Mathematics of the Rubik's Cube
Rubik’s Magic Cube
$\cdots$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Intuitive understanding of maximum value of quadratic function In trying to understand why the maximum area of a rectangle with a fixed perimeter occurs when the base is equal to the height, I got as far as this expression:
$A = (p/2)x - x^2$
from
$p = 2x + 2y,x + y = p/2, y = p/2 -x, A = x(p/2 - x)$
I know that I need to find the maximum value of the top expression, which can be generalised to $bx - x^2$
My question is, is there an intuitive (possibly visual) way to understand how to find the greatest possible value of this expression? Ideally I'd like to avoid anything but the most basic algebraic steps, and not have to refer to graphs or the quadratic equation.
|
A quadratic function
$$
q(x) = a x^2 + bx + c
$$
with non-zero coefficient $a$ has either a minimum or a maximum, depending on the sign of $a$.
If you plot the graph you will see the typical parabola shape.
One way to determine the extremum is to bring $q$ into the form
$$
q(x) = a (x - S)^2 + T
$$
where $(S, T)$ are the coordinates of the extremal point (vertex).
In your case we have
\begin{align}
A(x) &= -x^2 + (p/2) x \\
&= -(x^2 - (p/2) x) \\
&= -((x - p/4)^2 - (p/4)^2) \\
&= -(x- p/4)^2 + (p/4)^2
\end{align}
from which you can read the coordinates of the extremum.
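A quick numerical check of the vertex read off above, taking $p = 20$ as an arbitrary example (so the maximum should be $(p/4)^2 = 25$, attained at $x = p/4 = 5$):

```python
# Grid search for the maximum of A(x) = x * (p/2 - x) on [0, p/2].
p = 20.0

def A(x):
    return x * (p / 2 - x)

best = max(A(k / 1000.0) for k in range(10001))   # x in [0, 10]
assert abs(best - (p / 4) ** 2) < 1e-6
assert A(p / 4) == (p / 4) ** 2 == 25.0
```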
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2261884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Functional equations $f\left(\frac{x+y}{2}\right)=\frac{f(x)+f(y)}{2}$ and $f(x)=\frac{f\left(\frac{2}{3}x\right)+f\left(\frac{4}{3}x\right)}{2}$. Suppose $f$ is continuous and $f\left(\frac{x+y}{2}\right)=\frac{f(x)+f(y)}{2}$. Can we claim that $f(x)=kx$?
What if $f$ only satisfy $f(x)=\frac{f\left(\frac{2}{3}x\right)+f\left(\frac{4}{3}x\right)}{2}$?
This functional equation was called Jensen's equation on wiki, but there is no further discussion about it
|
For the first problem, fix $x, y \in \Bbb{R}$ and define the set $S_{x,y}$ by
$$S_{x,y} = \{\lambda \in [0, 1] : f(\lambda x + (1-\lambda) y) = \lambda f(x) + (1-\lambda)f(y) \}. $$
We easily check that $0, 1 \in S_{x,y}$ and if $\alpha, \beta \in S$ then $\frac{\alpha+\beta}{2} \in S_{x,y}$. It immediately follows that all the dyadic rationals (i.e. rationals of the form $k/2^n$ for some $k \in \Bbb{Z}$ and $n \geq 0$) in $[0, 1]$ are in $S_{x,y}$. By the continuity of $f$, this implies $S_{x,y} = [0, 1]$.
This is enough to conclude that $f$ is of the form $f(x) = ax + b$.
For the second problem, changing the parameter a bit gives a nowhere piecewise-linear example. Indeed, consider the functional equation
$$ f(x) = \frac{f(\alpha x) + f(\beta x)}{2}, \qquad \forall x \in \Bbb{R} \tag{*}$$
where $f$ is continuous, $\alpha \in (0, 1)$ and $\alpha + \beta = 2$. If we choose $\beta = \phi = \frac{1+\sqrt{5}}{2}$ (which is the golden ratio), then $\alpha = \phi^{-2}$ and the following function
$$ f(x) = |x|^{1+2\pi i/\log \phi} $$
solves the functional equation $\text{(*)}$.
I believe that this kind of solution does not appear in our case $\alpha = \frac{2}{3}$ since the set $\{\alpha^k \beta^l\}_{k, l \in \Bbb{Z}}$ is now dense in $[0,\infty)$. But I have no good idea to begin with.
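The claimed solution can be verified numerically: with $\beta = \varphi$ and $\alpha = \varphi^{-2}$ one has $\varphi^{s} = \varphi e^{2\pi i} = \varphi$ and $\varphi^{-2s} = \varphi^{-2}$, and $\varphi + \varphi^{-2} = 2$, so $f(\alpha x) + f(\beta x) = 2 f(x)$ for all $x \neq 0$:

```python
# Check f(x) = |x|^(1 + 2*pi*i/log(phi)) solves f(x) = (f(a*x)+f(b*x))/2
# with b = golden ratio phi, a = phi^(-2).
import cmath
import math

phi = (1 + math.sqrt(5)) / 2
alpha, beta = phi ** -2, phi
assert abs(alpha + beta - 2) < 1e-12

s = 1 + 2j * math.pi / math.log(phi)

def f(x):
    return cmath.exp(s * math.log(abs(x)))   # |x|^s, for x != 0

for x in [0.3, 1.0, 2.5, -4.0, 17.0]:
    assert abs(f(x) - (f(alpha * x) + f(beta * x)) / 2) < 1e-9
```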
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Basic question on countable intersection and union of sets I am just a beginner at measure theory, and I have a very basic question on the following fact found in Robert Ash, Probability and Measure Theory, page 7:
Now this strikes me as a little asymmetric.
Very informally speaking, and I know this makes no sense, but we seem to have to have a concept of "$b^-$" in that say $[a,b)=a...b^-$, a concept of {b} in $[a,b]=[a,b) \cup \{b\}$, but no corresponding concept of "$a..b^+$" which I would have defined exactly as per the first equation $\cap_1^{+\infty} [a,b+ {1\over n}) $
I ask because in David Williams "Probability with Martingales" 3.12, "Skorokhod representation of a random variable with prescribed distribution function", the distributions are perforce right-continuous - really, this is enforced (in my mind as it stands) by the 'asymmetry' above, as exhibited by the first equation.
Put in another way which might make more sense:
*
*Is it possible to build a sequence of strictly decreasing sets $F_{n+1} \subset F_n$ such that the countable intersection $\cap_n F_n=[a,b)$?
*(And just to make sure): does anything change if we allow uncountable intersections in (1) above?
Thank you in advance.
|
*
*Take $F_n=(a-1/n,b)$.
*Arbitrary (even uncountable) intersections of sets are perfectly well defined; it is only the notation $F_n$, indexed by the natural numbers, that forces countability. Allowing an uncountable family changes nothing in (1): for instance $\bigcap_{\varepsilon>0}(a-\varepsilon,b)=[a,b)$ as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
RSA Cryptography: Given $n$ and $\varphi(n)$, find $e$ such that $e=d$ The modulus, $n=8633$, is given (it's simple to find $p$ and $q$ such that $n=pq$, i.e. $p=89$ and $q=97$) and the task is to find an encoding exponent, $e$, such that the corresponding decoding exponent, $d$, is equal to $e$.
As $n=89 \times 97$, $\varphi(n)=88 \times 96 = 8448$. Also, if we require $e=d$, we need $$\gcd(e,8448)=1$$ and $$ed \equiv 1 \pmod{8448} \implies e^2 \equiv 1 \pmod{8448}.$$ How would I go about solving a problem like this?
|
Note that $\varphi(n)=8448=2^8\cdot 3\cdot 11$, so $e^2\equiv 1\pmod{8448}$ is equivalent to the system $e^2\equiv 1\pmod{256}$, $e^2\equiv 1\pmod{3}$ and $e^2\equiv 1\pmod{11}$. The last two give $e\equiv\pm 1$, while modulo $256$ there are four square roots of unity: $e\equiv\pm 1$ and $e\equiv 128\pm 1$. Take a nontrivial combination (for example, $e\equiv -1\pmod{256}$, $e\equiv 1\pmod{3}$ and $e\equiv 1\pmod{11}$) and use the Chinese remainder theorem to work out the congruence modulo $8448$; this particular choice gives $e=d=2047$.
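Since $\varphi(n)=8448$ is tiny by cryptographic standards, the self-inverse exponents satisfying $e^2\equiv 1\pmod{8448}$ from the question can also be enumerated by brute force; a minimal Python sketch (the value $2047$ is one concrete nontrivial solution):

```python
from math import gcd

phi = 88 * 96                                # phi(8633) = 8448 = 2^8 * 3 * 11
sols = [e for e in range(1, phi) if (e * e) % phi == 1]

assert all(gcd(e, phi) == 1 for e in sols)   # e^2 = 1 forces gcd(e, phi) = 1
assert 1 in sols and phi - 1 in sols         # the trivial choices e = ±1
assert 2047 in sols                          # a nontrivial self-inverse exponent
assert len(sols) == 16                       # 4 * 2 * 2 square roots of 1, by CRT
```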
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to get the inverse of this function, when we have a quadratic? $$f(x)=-x^2+6x-5$$
How do I find the inverse? I tried by making it
$$y=-x^2+6x-5$$
Then swapping $y$ with $x$ and solving for $y$, but I got $y^2$.
The domain is $x$ greater or equal to $m$, and in this case $m=5$.
After that we need to find the domain of the inverse.
|
The simplest approach is completing the square:
$f(x)=y=-((x-3)^2-4)$,
$4-y=(x-3)^2$,
$x=3\pm\sqrt{4-y}$.
Since the given domain is $x\ge 5>3$, take the plus sign: $f^{-1}(x)=3+\sqrt{4-x}$, with domain $x\le f(5)=0$.
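Given the stated domain $x\ge 5$ (where $f$ is one-to-one), the plus branch is the right one; a quick numerical check:

```python
def f(x):
    return -x**2 + 6*x - 5

def f_inv(y):
    # plus branch of x = 3 ± sqrt(4 - y), appropriate for x >= 3
    return 3 + (4 - y) ** 0.5

for x in [5, 6, 7.5, 10]:
    assert abs(f_inv(f(x)) - x) < 1e-9
```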
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Weak * lower semicontinuity I was asking myself the following question: let $X$ be a Banach space, the dual of a separable Banach space. If $I: X \to \mathbb{R}$ is a convex weakly lower semicontinuous functional, does it follow that $I$ is weak-* lower semicontinuous?
If not, is the claim true if we consider sequential lower semicontinuity?
Because in Dacorogna's book on direct methods of the calculus of variations there is a proof of weak-* sequential lower semicontinuity in $W^{1,\infty}$ of a particular functional which goes like this: the functional is strongly l.s.c., thus weak-* sequentially l.s.c., by applying the general theorem on Banach spaces which states that strong l.s.c. implies weak l.s.c.
Now I don't see how to say it easy for general spaces, essentially because the proof of strong l.s.c. implies weak l.s.c. is based on the fact that in every locally convex t.v.s. strongly closed convex sets are weakly closed. Any ideas?
|
The proof of the "general theorem" uses the following arguments:
*
*Since $I$ is convex, its epigraph is convex.
*Since $I$ is l.s.c., the epigraph is closed.
*Closed and convex sets are weakly closed.
*Hence, the epigraph is weakly closed and $I$ is weakly l.s.c.
The third step does not work with the weak-* topology: there are closed and convex sets which are not weak-* closed. (For instance, the kernel of a bounded linear functional on $X$ that does not come from the predual is norm-closed and convex, but not weak-* closed, since a linear functional is weak-* continuous iff its kernel is weak-* closed.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Exterior derivative of 2-forms and divergence I'm working through A Geometric Approach to Differential Forms. The deriative of a $2$-form $\omega$ in $\mathbb{R}^3$, denoted $d\omega$ and operating on vectors $U, V, W \in T_p \mathbb{R}^3$, is defined as
$$d\omega(U, V, W) = \nabla_U \omega(V, W) - \nabla_V \omega(U, W) + \nabla_W \omega(U, V)$$
where $\nabla_U \omega(V, W)$ denotes the directional derivative of $\omega(V, W)$ in the direction $U$.
Suppose that we have a $2$-form $\omega = f(x, y, z) \ dx \wedge dy + g(x, y, z) \ dy \wedge dz + h(x, y, z) \ dx \wedge dz$. One way to calculate $d\omega$ is to realize that it must have the form $d\omega = a(x, y, z) \ dx \wedge dy \wedge dz$. Geometrically, we can think of $d\omega$ as taking three vectors, calculating the signed volume of the parallelepiped formed by those three vectors, and then applying a scaling factor $a(x, y, z)$. So one way to determine $a(x, y, z)$ is to use the above definition of $d\omega(U, V, W)$ to see how it acts on vectors corresponding to a parallelepiped of signed volume $1$.
If I take $U = (1, 0, 0)$, $V = (0, 1, 0)$, and $W = (0, 0, 1)$, and I go through the computations, I arrive at
$$d\omega = \left( \frac{\partial g}{\partial x} - \frac{\partial h}{\partial y} + \frac{\partial f}{\partial z} \right) \ dx \wedge dy \wedge dz.$$
This seems to be right as far as I can tell. In particular, the second term is given by $\nabla_V \omega(U, W) = \partial h / \partial y$, with the negative sign coming from the alternating signs in the definition of $d\omega(U, V, W)$. Explicitly, my calculation is
$$\nabla_V \omega(U, W) = \nabla \left( \underbrace{f \begin{vmatrix} 1 & 0 \\ 0 & 0 \end{vmatrix}}_0 + \underbrace{g \begin{vmatrix} 0 & 0 \\ 0 & 1 \end{vmatrix}}_0 + \underbrace{h \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix}}_h \right) \cdot V = \frac{\partial h}{\partial y}.$$
Furthermore, I know that there's a connection between the exterior derivative of a $2$-form in $\mathbb{R}^3$ and the operation of the divergence. From, e.g., the Wikipedia description, it seems that divergence is
$$(g, h, f) \mapsto \frac{\partial g}{\partial x} + \frac{\partial h}{\partial y} + \frac{\partial f}{\partial z}$$
where I've ordered $f, g, h$ for consistency with the representation of $\omega$.
The difference here compared to my expression for $d\omega$ is that the term $\partial h / \partial y$ is positive rather than negative.
Whence the difference?
|
You are using $dx\wedge dz$ and the wiki page is using $dz\wedge dx$. So there is no difference.
Remark: in general, for an $(n-1)$-form one usually inserts some $(-1)$'s to deal with this: a general $(n-1)$-form $\alpha$ is written as
$$\alpha = \sum_{i=1}^n (-1)^{i-1} \alpha_i dx^1 \wedge \cdots \wedge\widehat{dx^i} \wedge\cdots \wedge dx^n$$
so that
$$d\alpha = \left( \frac{\partial \alpha_1}{\partial x^1}+ \cdots + \frac{\partial \alpha_n}{\partial x^n}\right) dx^1 \wedge \cdots \wedge dx^n. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Artin–Schreier polynomial Suppose we have the finite field $K=\Bbb{F}_{p^n}$ ($p$ prime and $n>0$) and an Artin–Schreier polynomial $f=x^p-x+\gamma \in K[x]$.
Suppose that $f$ is irreducible. How do we prove that $tr_K(\gamma)=\gamma+\gamma^p+\cdots +\gamma^{p^{n-1}} \neq 0$ ?
I think it helps to see that $tr_K(\gamma)=\sum_{\sigma \in Gal(K/\Bbb{F}_p)} \gamma^\sigma$. But I don't really see the relation between $f$ and the Galois group.
|
This follows from Hilbert's Satz 90 for finite cyclic extensions, i.e if $ L/K $ is a finite cyclic extension with Galois group generated by $ \sigma $, then for an $ x \in L $, we have that $ \textrm{Tr}_{L/K}(x) = 0 $ if and only if $ x = \sigma(y) - y $ for some $ y \in L $. The extension $ \mathbb F_{p^n}/\mathbb F_p $ has cyclic Galois group generated by the Frobenius automorphism $ X \to X^p $. Then, we have that $ \textrm{Tr}(\gamma) = \textrm{Tr}(-\gamma) = 0 $ if and only if $ -\gamma = y^p - y $ for some $ y \in K $. But then, $ y $ is a root of $ X^p - X + \gamma $ in $ K $, which therefore cannot be irreducible in $ K[X] $.
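For the base case $n=1$ (where $K=\mathbb{F}_p$ and $tr_K(\gamma)=\gamma$) this is easy to check by machine, using the fact that the roots of $x^p-x+\gamma$ differ by elements of $\mathbb{F}_p$, so the polynomial either splits completely over $\mathbb{F}_p$ or has no root there at all:

```python
p = 7   # any small prime works here

def has_root_in_Fp(gamma):
    # does x^p - x + gamma have a root a in F_p?
    # by Fermat, a^p - a = 0 in F_p, so this reduces to gamma = 0
    return any((a**p - a + gamma) % p == 0 for a in range(p))

# irreducible over F_p  <=>  no root in F_p  <=>  tr(gamma) = gamma != 0
for gamma in range(p):
    assert has_root_in_Fp(gamma) == (gamma % p == 0)
```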
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2262937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Multivariable Optimization - Distance Formula, Use of Square Root I'm working through a problem in which I am trying to find the point $P(x,y,z)$ closest to the surface $f(x,y)$. I am not concerned with the actual distance, I just want to find the closest point $P$.
To do this I am minimizing the distance between the surface and the point using the standard distance formula $\sqrt{x^2 + y^2 + f^2}$.
I believe, however, that the square root is not needed, for I am only concerned with the closest point, not the actual distance.
Question: Do I need to include the square root?
I am very confident that I don't need it, but I just wanted to make sure.
Thanks
|
You are correct in that you don't need the square root:
If $h(x,y,f)\geq 0$, then since $t\mapsto\sqrt{t}$ is strictly increasing on $[0,\infty)$, $\sqrt{h(x,y,f)}$ and $h(x,y,f)$ attain their minima at exactly the same points.
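A small numerical illustration (a hypothetical example: minimizing the distance from the point $(1,2)$ to the parabola $y=x^2$ over a grid) — the minimizer is the same with or without the square root:

```python
import math

def h(x):
    # squared distance from (x, x^2) on the parabola to the point (1, 2)
    return (x - 1) ** 2 + (x ** 2 - 2) ** 2

xs = [i / 1000 for i in range(-3000, 3001)]
argmin_h = min(xs, key=h)
argmin_d = min(xs, key=lambda x: math.sqrt(h(x)))
assert argmin_h == argmin_d   # same minimizer, square root or not
```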
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If $H/\Gamma$ is a compact Riemann surface, the generator of $\Gamma$ is not commutative In the book Compact Riemann Surfaces by Jürgen Jost, one of the exercises for Section 2.4 asks to prove the following:
Let $H/\Gamma$ be a compact Riemann surface. Show that each nontrivial abelian subgroup of $\Gamma$ is an infinite cyclic group. Here $H$ is the upper half-plane equipped with the hyperbolic metric $\frac{2}{(z-\bar{z})^2}dzd\bar{z}$.
Since $H/\Gamma$ is compact, $\Gamma$ is generated by a finite set $\{g_1,\dots,g_m\}$. So I thought it would suffice to prove that $g_i$ and $g_j$ do not commute for any $i\neq j$, but I have trouble finding the contradiction when assuming $g_ig_j=g_jg_i$. Furthermore, each generator maps one side of a fundamental polygon to another side, and different pairs of such sides are carried to each other by different elements of $\Gamma$.
Any help will be appreciated!
|
Here's a (long) hint. The automorphism group of the upper-half plane is isomorphic with $\operatorname{PSL}(2,\mathbb{R})$, which contains three types of elements: elliptics, parabolics, and hyperbolics, depending on whether the absolute value of the trace is less than, equal to, or greater than $2$. If $\Gamma$ contains an elliptic the quotient will not be a Riemann surface, and if it contains a parabolic there will be a cusp, and hence the quotient won't be compact. Thus $\Gamma$ consists only of hyperbolic elements.
Now you need to show that distinct hyperbolic isometries (one not being a power of the other) cannot commute. You can do this by showing that if two hyperbolic isometries commute, then they have the same fixed point set, and hence leave invariant the same axis in $\mathbb{H}$. Then one isometry is a power of the other precisely if their translation distances are integer multiples. And if their translation distances are not integer multiples, the group they generate contains elements of arbitrarily small translation distance, so that the quotient object isn't even Hausdorff.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Index of subgroup is 2 then for any $g$, $g^2$ belongs to subgroup If index of a subgroup $H$ is 2, then $g^2 \in H$ for every $g$ in G.
Proof:
Since the index is 2, there are only two distinct cosets. Now if $g \in H$, then the claim holds trivially because $H$ is a subgroup. So suppose $g \notin H$; then $g^{-1} \notin H$ as well, and since any coset other than $H$ must equal $gH$, we get
$gH=g^{-1}H$
$g^2H=H \implies g^2 \in H$
$\blacksquare$
|
Typically one shows that if $|G:H|=2$ then $H$ is normal in $G$. This follows directly since for $g\in G$ and $g\notin H$ we have $G = H \cup gH = H \cup Hg$ which implies $gH = Hg$.
Therefore $G/H \cong \mathbb{Z}_2$ since there is only one group of order 2. Hence $g^2H = H$ for any $g$ which implies $g^2 \in H$.
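A concrete check with $G=S_3$ and $H=A_3$ (index 2), using plain permutation tuples:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    # parity of a permutation: +1 if even, -1 if odd (via inversion count)
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

G = list(permutations(range(3)))           # S_3
H = [g for g in G if sign(g) == 1]         # A_3, a subgroup of index 2
assert len(G) == 2 * len(H)
assert all(compose(g, g) in H for g in G)  # g^2 lies in H for every g in G
```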
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Eigenvalues of a $2\times2$ matrix Let $a,b$ be distinct eigenvalues of a $2\times2$ matrix $A$. Then which of the following statements is true?
*
*$A^2$ has distinct eigenvalues.
*$A^3=\frac{a^3-b^3}{a-b}A-ab(a+b)I$
*Trace of $A^n$ is $a^n+b^n$ for every positive integer $n$.
*$A^n$ is not a scalar multiple of the identity matrix for any positive integer $n$.
I think the first option is wrong: if $A$ has the distinct eigenvalues $1,-1$, then $A^2$ has the repeated eigenvalue $1$. The third option is correct, as we have that result. I tried the second option; it is also right. For the fourth option, does the characteristic equation imply that $A^n$ cannot be a scalar multiple of the identity matrix for any positive integer $n$? Is that correct?
|
Hint: For option $4$, construct a matrix $A$ with the diagonal elements $1$ and $-1$. Then check $A^2$ = identity matrix.
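Following the hint, a quick numpy check:

```python
import numpy as np

A = np.diag([1, -1])            # distinct eigenvalues a = 1, b = -1
A2 = A @ A                      # A^2 = I, a scalar multiple of the identity
assert np.array_equal(A2, np.eye(2, dtype=int))   # so statement 4 fails
assert np.trace(A2) == 1**2 + (-1)**2             # trace(A^n) = a^n + b^n
```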
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How can we integrate integral(s) of this type? So I was able to free $dx$ from the power. Now only Wolfram can solve this integral. How can I do it on my own?
$$r=\int_0^1\left(\frac{x^{12}}{(1-x^4)^3}+1\right)^{1/4}~dx$$
|
Hint:
$\int_0^1\left(\dfrac{x^{12}}{(1-x^4)^3}+1\right)^\frac{1}{4}~dx$
$=\int_0^1\left(\dfrac{x^3}{(1-x)^3}+1\right)^\frac{1}{4}~d\left(x^\frac{1}{4}\right)$
$=\dfrac{1}{4}\int_0^1x^{-\frac{3}{4}}\left(\dfrac{x^3}{(1-x)^3}+1\right)^\frac{1}{4}~dx$
$=\dfrac{1}{4}\int_1^0(1-x)^{-\frac{3}{4}}\left(\dfrac{(1-x)^3}{x^3}+1\right)^\frac{1}{4}~d(1-x)$
$=\dfrac{1}{4}\int_0^1(1-x)^{-\frac{3}{4}}\left(\dfrac{3x^2-3x+1}{x^3}\right)^\frac{1}{4}~dx$
$=\dfrac{1}{4}\int_0^1x^{-\frac{3}{4}}(1-x)^{-\frac{3}{4}}(3x^2-3x+1)^\frac{1}{4}~dx$
This relates to the Appell hypergeometric function.
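The last simplification rests on the identity $(1-x)^3+x^3=3x^2-3x+1$; a pointwise numerical check of that step at a few sample values:

```python
for x in [0.1, 0.37, 0.5, 0.82, 0.99]:
    lhs = (1 - x) ** -0.75 * (((1 - x) / x) ** 3 + 1) ** 0.25
    rhs = x ** -0.75 * (1 - x) ** -0.75 * (3 * x * x - 3 * x + 1) ** 0.25
    assert abs(lhs - rhs) < 1e-9 * rhs   # the two integrands agree on (0, 1)
```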
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If the connected sum $A\#B$ is homeomorphic to $S^2$ then $A\cong B \cong S^2$ I was looking for this, but I can't find anything.
Problem. Let $A, B$ two compact surfaces such that $A\#B \cong S^2$ then $A\cong B \cong S^2$.
I considered the infinite connected sums $A\#B\#A\#\dotsb$ and $B\#A\#B\#\dotsb$. These are homeomorphic to $\mathbb{R}^2$ and, after adding the point at infinity, to $S^2$, but I can't finish the proof.
Can you give me a hint? Thanks.
Edit. Can I prove it using this? The connected sum of a non-orientable surface with any surface is non-orientable, so $A$ and $B$ are orientable; in fact, they are respectively connected sums of $n$ and $m$ tori. Hence the connected sum of $A$ and $B$ is homeomorphic to the connected sum of $n+m$ tori. Therefore $m+n=0$ and $m,n\geq 0$, so $m=n=0$.
|
As you have mentioned, $A\#(B\#A\#B\#\cdots)\simeq \mathbb{R}^2$ and $B\#A\#B\#\cdots\simeq \mathbb{R}^2$.
But observe that if $X$ is a compact surface, $X\#\mathbb{R}^2 \simeq X\setminus\{x_0\}$ (=$X$ minus a point) Therefore, it means that $A$ minus a point is homeomorphic to $\mathbb{R}^2$, and then it is easy to conclude that $A \simeq S^2$. That $B\simeq S^2$ follows easily by symmetry.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Parametrization of two surfaces $\frac{x^2}{a^2}-\frac{y^2}{b^2}-\frac{z^2}{c^2}=1$ and $\frac{x^2}{p}+\frac{y^2}{q}=2z$. Can someone please help me parametrize the following surfaces in terms of hyperbolic and trigonometric functions (for the second it might not be possible, but I need a more convenient set of parametric equations than mine):
$$\frac{x^2}{a^2}-\frac{y^2}{b^2}-\frac{z^2}{c^2}=1 $$ and $$\frac{x^2}{p}+\frac{y^2}{q}=2z$$
I have tried, but the sets of parametric equations I got were too complicated, as I have to use them in some further calculation, which makes the result very ugly.
For first equation the set of parametric equations is: $$x=a\sqrt{1+\frac{u^2}{c^2}}\cos v, \ \ y=b\sqrt{1+\frac{u^2}{c^2}}\sin v \ \ z=u$$
and for second: $$x=\sqrt{2pu} \cos v ,\ \ y=\sqrt{2qu} \sin v, \ \ z=u $$
|
For the first: $$\frac{x^2}{a^2}\color{red}{-}\left(\frac{y^2}{b^2}+\frac{z^2}{c^2}\right)=1$$
\begin{eqnarray}
&x&=a\cosh\theta,\\
&y&=b\cos\phi\sinh\theta,\\
&z&=c\sin\phi\sinh\theta.\\
\end{eqnarray}
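A numerical sanity check that this parametrization satisfies the surface equation (the sample values for $a,b,c,\theta,\phi$ are arbitrary; note it covers only the sheet with $x>0$ — use $x=-a\cosh\theta$ for the other):

```python
import math

a, b, c = 2.0, 3.0, 5.0
for theta, phi in [(0.7, 1.9), (-1.2, 0.3), (2.5, 4.0)]:
    x = a * math.cosh(theta)
    y = b * math.cos(phi) * math.sinh(theta)
    z = c * math.sin(phi) * math.sinh(theta)
    lhs = x**2 / a**2 - y**2 / b**2 - z**2 / c**2
    assert abs(lhs - 1) < 1e-9   # cosh^2 - sinh^2 (cos^2 + sin^2) = 1
```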
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
series solutions to 2nd order ODEs I am very confused about how to tell whether a point is ordinary or regular singular. I know the definitions, but I think I am carrying out the check wrong. Do you substitute the point into $p$ and $q$ and see if you get zero, or see if they are equal? Can you please explain, in the most basic manner, the steps used.
|
Let's look at the definitions: given a differential equation
$$ y'' + p(x) y' + q(x) y = 0, $$
(the leading coefficient must be $1$ to do this: if it isn't, divide by it!) the point $x=a$ is
*
*An ordinary point if $p(x)$ and $q(x)$ are regular at $x=a$ (continuous or bounded in a neighbourhood is good enough).
*A regular singular point if not an ordinary point and $(x-a)p(x)$ and $(x-a)^2 q(x)$ are bounded in a neighbourhood of $x=a$
*An irregular singular point if neither of these is true.
To check which one we have, normally the best way is first to check if $p(a)$ and $q(a)$ exist. If they do, it's an ordinary point. If not, find $\lim_{x \to a} (x-a)p(x)$ and $\lim_{x \to a} (x-a)^2q(x)$. If these exist, it is a regular singular point. If they don't, it's an irregular singular point.
It may be possible to spot what sort of singularities $p$ and $q$ have without needing to take the limit in simple cases.
Examples:
*
*$y'' + y'\sin{x}+y\cos{x} = 0$
$\sin{x}$ and $\cos{x}$ are analytic at every point, so every point is an ordinary point.
*$y'' + \frac{1}{x}y'+\frac{1}{x}y = 0$
$p(x) = 1/x$, $q(x)=1/x$. $p(0)$ and $q(0)$ are undefined, so $0$ is not an ordinary point. $xp(x) \to 1$ as $x \to 0$ and $x^2 q(x) \to 0$ as $x \to 0$, so $x=0$ is a regular singular point. Elsewhere $p(x)$ and $q(x)$ are defined, so we have ordinary points.
*$y'' + \frac{1}{x^2}y' + y = 0$
$p(x) = 1/x^2$. $\lim_{x \to 0}xp(x)$ does not exist, so $x=0$ is an irregular singular point.
*$ y'' + y\cot{x} = 0 $
$p(x)=0$, $q(x)=\cot{x}$. Since $\cos{x}/\sin{x} \approx 1/x$ as $x \to 0$, $x^2q(x) \to 0$ as $x \to 0$, so $0$ is a regular singular point. $\cot{x}$ has singularities whenever $\sin{x}=0$, so at $n\pi$. $\cot{(x+n\pi)} = \cot{x}$, so every singularity looks like the one at $x=0$. Hence $x=n\pi$ are regular singular points.
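The four examples can be checked mechanically with sympy (assuming it is available; this sketch uses one-sided limits, which is fine for these examples):

```python
import sympy as sp

x = sp.symbols('x')

def classify(p, q, a=0):
    # classify the point x = a for y'' + p y' + q y = 0
    if sp.limit(p, x, a).is_finite and sp.limit(q, x, a).is_finite:
        return "ordinary"
    lp = sp.limit((x - a) * p, x, a)
    lq = sp.limit((x - a) ** 2 * q, x, a)
    if lp.is_finite and lq.is_finite:
        return "regular singular"
    return "irregular singular"

assert classify(sp.sin(x), sp.cos(x)) == "ordinary"
assert classify(1 / x, 1 / x) == "regular singular"
assert classify(1 / x ** 2, sp.Integer(1)) == "irregular singular"
assert classify(sp.Integer(0), sp.cot(x)) == "regular singular"
```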
You may wonder why there is the distinction between types of singular points. Suppose $a=0$. Suppose that $y=x^{\sigma}u$, where $u$ is analytic at $a$ (has a power series expansion $a_0+a_1x+a_2x^2+\dotsb$). Then
$$ y' = \sigma x^{\sigma-1} u + x^{\sigma} u' \\
y'' = \sigma(\sigma-1) x^{\sigma-2}u+2\sigma x^{\sigma-1}u' + x^{\sigma} u'', $$
so the differential equation becomes
$$ ( x^2u'' + 2\sigma x u' + \sigma(\sigma-1) u + xp(x) (\sigma u+xu') + x^2q(x) u )x^{\sigma-2} = 0 $$
In particular, everything but $p$ and $q$ has lowest term constant in the vicinity of $x=0$. To avoid having terms with negative powers in the bracket, which won't necessarily cancel out, $xp(x)$ needs to have an expansion in nonnegative powers, and likewise $q(x)$. Hence a solution of the form $x^{\sigma} (a_0+a_1x+\dotsb)$ will only definitely work if $0$ is a regular singular point.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2263964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Longest, First Attempt, Chain of Heads on a Coin Flip So, taking the question:
If everyone in the world flipped a coin until they got tails what is the most likely longest chain of heads?
Assuming the population is $7.347\times10^9$,
and everyone has an ideal coin and the ability to flip it "randomly",
what would the answer be?
My thinking: out of two people, one is likely to get heads; I applied this reasoning to larger quantities, halving the population until I reached the smallest number $<1$, making it 32.
Also, how would you solve this using a compound distribution?
|
Suppose the population of the world is $k$.
Then the probability they all have at least one tail in up to $n$ flips is $$\left(1-\frac1{2^n}\right)^k \approx \exp\left(- \dfrac{k}{2^n}\right)$$ so the probability that the longest string of heads is exactly $n$ is $$\left(1-\frac1{2^{n+1}}\right)^k - \left(1-\frac1{2^{n}}\right)^k \approx \exp\left(- \dfrac{k}{2^{n+1}}\right)-\exp\left(- \dfrac{k}{2^{n}}\right)$$
giving the following figures for different $n$ when $k=7.347\times 10^{9}$:
n probability
29 0.00106637
30 0.03160525
31 0.14808332
32 0.24439811
33 0.22688430
34 0.15545051
35 0.09111492
36 0.04934329
37 0.02567859
38 0.01309898
39 0.00661543
40 0.00332433
41 0.00166633
making the most likely outcome $32$ flips as the longest string of heads worldwide.
The corresponding median would be $33$ and the expected value would be about $33.11$. All three are reasonably close to the order-of-magnitude estimate of about $\log_2(7.347\times 10^{9}) \approx 32.8$
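The table is straightforward to reproduce with the exponential approximation:

```python
import math

k = 7.347e9   # world population

def p_longest(n, k):
    # P(longest run of heads over k people is exactly n), approximated by
    # exp(-k / 2^(n+1)) - exp(-k / 2^n)
    return math.exp(-k / 2 ** (n + 1)) - math.exp(-k / 2 ** n)

probs = {n: p_longest(n, k) for n in range(29, 42)}
mode = max(probs, key=probs.get)
assert mode == 32
assert abs(probs[32] - 0.24439811) < 1e-6
```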
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2264120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proving the nth derivative of a polynomial of degree n-1 is zero using linear Algebra. I want to prove, using linear algebra only, that the $n$th derivative of a polynomial of degree $n-1$ is zero. My idea is to first prove that every $n\times n$ matrix $A$ whose only nonzero entries are those with $j=i+1$ satisfies $A^{n}=0$.
Then, noticing that for the derivative operator $D:P_{n}(\mathbb{R}) \to P_{n}(\mathbb{R})$ and the canonical basis $\beta= \lbrace 1, x,x^{2},...,x^{n} \rbrace$ of the vector space $P_{n}(\mathbb{R})$, the matrix $[D]_{\beta}$ has nonzero entries only where $j=i+1$, we get $([D]_{\beta})^{n}=0$, so the $n$th derivative of a polynomial can be seen in matrix representation as
$$ [D^{n}p(x)]_{\beta}=[D^{n}]_{\beta}[p(x)]_{\beta}=([D]_{\beta})^{n}[p(x)]_{\beta}=0 [p(x)]_{\beta}=0 $$
This proves that the $n$th derivative of a polynomial of degree $n-1$ is zero, but it also proves that the $n$th derivative of a polynomial of degree $n$ is zero, which is not true. So what am I doing wrong, and how can I finish the proof using only linear algebra? Thanks
|
Note that $[D]_{\beta}$ is an $(n+1)\times (n+1)$ matrix. What we actually have is that $([D]_{\beta})^{n+1} = 0$, not $([D]_{\beta})^n=0$.
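A quick numpy check for, say, $n=4$ (so the basis is $\{1,x,\dots,x^4\}$ and the matrix is $5\times 5$):

```python
import numpy as np

n = 4
# matrix of D in the basis {1, x, ..., x^n}: D(x^j) = j x^(j-1),
# so the superdiagonal holds 1, 2, ..., n
D = np.diag(np.arange(1, n + 1), k=1)
assert D.shape == (n + 1, n + 1)
assert np.linalg.matrix_power(D, n).any()          # D^n != 0 (e.g. D^n(x^n) = n!)
assert not np.linalg.matrix_power(D, n + 1).any()  # but D^(n+1) = 0
```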
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2264233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|