| Q | A | meta |
|---|---|---|
Is the equation $\phi(\pi(\phi^\pi)) = 1$ true? And if so, how? $\phi(\pi(\phi^\pi)) = 1$
I saw it on an expired flier for a lecture at the university. I don't know what $\phi$ is, so I tried asking Wolfram Alpha to solve $x \pi x^\pi = 1$ and it gave me a bunch of results with $i$, and I don't know what that is either.
|
It's a joke based on the use of the $\phi$ function (Euler's totient function), the $\pi$ function (the prime counting function), the constant $\phi$ (the golden ratio), and the constant $\pi$. Note $\phi^\pi\approx 4.5$, so there are two primes less than $\phi^\pi$ (they are $2$ and $3$), so $\pi(\phi^\pi)=2$. There is only one positive integer less than or equal to $2$ which is also relatively prime to $2$ (this number is $1$), so $\phi(2)=1$. Hence we have
$$\phi(\pi(\phi^\pi))=\phi(2)=1$$
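As a quick sanity check, the whole chain can be evaluated numerically; `prime_count` and `totient` are naive helpers written here just for illustration:

```python
import math

def prime_count(x):
    """pi(x): number of primes <= x, by trial division (fine for tiny x)."""
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, math.isqrt(k) + 1))
    return sum(is_prime(k) for k in range(2, int(x) + 1))

def totient(n):
    """Euler's phi(n): how many 1 <= k <= n are coprime to n."""
    return sum(math.gcd(k, n) == 1 for k in range(1, n + 1))

golden = (1 + math.sqrt(5)) / 2       # the constant phi (golden ratio)
inner = golden ** math.pi             # phi^pi, roughly 4.53
result = totient(prime_count(inner))  # phi(pi(phi^pi))
```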
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/861618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
}
|
The difference between a fiber and a section of a vector bundle If $ E_x := \pi^{-1}(x) $ is the fiber over $x$, where $(E,\pi,M)$ is the vector bundle, and the section is $s: M \to E $ with $\pi \circ s = id_M $, this seems to imply that $\pi^{-1} = s $ on $M$. So then what's the difference between a fiber over $x$ and the section restricted to $x$?
Thanks.
|
A section is any function $s$ that assigns to every point $p$ in the base an element of its fiber $E_p$. The restriction $s(p)$ assigns to $p$ a point in $E_p$. As an example, a section of the tangent bundle is a vector field, i.e., an assignment of a tangent vector to each tangent space $T_pM$; its restriction to $p$ assigns to $p$ a tangent vector $X_p \in T_pM$. The fiber over a point $p$ is the inverse image of $p$ under the specific map $\pi$. In a vector bundle, this fiber is a vector space. In the same example of the tangent bundle to a manifold $M$, with $\pi(x,p)=p$, $\pi^{-1}(p)=T_pM$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/861697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Cholesky factorization and non-positive definite matrices When Cholesky factorization fails, is there an alternative method to obtain the $\mathbf{L}$ matrix in:
$\mathbf{A}=\mathbf{L}\mathbf{L}^{*}$
I'm dealing with a matrix not guaranteed to be positive-definite; I'm wondering if there is a sure-fire way to find $\mathbf{L}$ (even if it's costly)?
|
For such a decomposition to exist, $A$ must be Hermitian ($A=A^H$) and positive semi-definite. Then we can diagonalize $A$ with a unitary matrix $U$ ($UU^H=I$):
$$
A = U^HDU.
$$
$D$ is a diagonal matrix with non-negative diagonal entries. Then we can write
$$
A = (U^HD^{1/2}U)(U^HD^{1/2}U),
$$
where $D^{1/2}$ is the diagonal matrix, where the diagonal entries are the square root of the diagonal entries of $D$.
Observe that $(U^HD^{1/2}U)^H=U^HD^{1/2}U$.
Now perform $QR$-decomposition of $U^HD^{1/2}U$:
$$
U^HD^{1/2}U = QR
$$
with $R$ upper triangular, $Q$ unitary.
Then we have
$$
A = (U^HD^{1/2}U)^H(U^HD^{1/2}U) = (QR)^H(QR)=R^HR,
$$
hence $L = R^H$ gives the desired decomposition.
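Here is a small numerical sketch of the construction for the 2×2 positive-semidefinite matrix $A=\begin{pmatrix}1&1\\1&1\end{pmatrix}$ (eigenvalues $2$ and $0$, so plain Cholesky can fail). The eigendecomposition is hard-coded for this toy example and the helpers are hand-rolled to stay dependency-free; real code would use a linear-algebra library, and the final QR step (omitted here) would make the factor triangular:

```python
import math

s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]                       # unitary: rows are eigenvectors of A
D_sqrt = [[math.sqrt(2), 0.0], [0.0, 0.0]]  # D^{1/2} for D = diag(2, 0)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# M = U^H D^{1/2} U is Hermitian with M^H M = U^H D U = A.
M = matmul(transpose(U), matmul(D_sqrt, U))
A_rebuilt = matmul(transpose(M), M)   # should reproduce A = [[1, 1], [1, 1]]
```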
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/861810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Limit of $a_k$ in $ \sum_{k=1}^n \frac{a_k}{(n+1-k)!} = 1 $ For $n = 1, 2, 3, \dots$ (natural numbers), let
$ \sum_{k=1}^n \frac{a_k}{(n+1-k)!} = 1 $
$ a_1 = 1, \ a_2 = \frac{1}{2}, \ a_3 = \frac{7}{12}, \cdots $
What is the limit of $\{ a_k \}$,
$ \lim_{k \to \infty} a_k = ? $
I have no idea where to start.
|
Note that
$$
\begin{align}
\frac{x}{1-x}
&=\sum_{n=1}^\infty x^n\\
&=\sum_{n=1}^\infty\sum_{k=1}^n\frac{a_k}{(n-k+1)!}x^n\\
&=\sum_{k=1}^\infty\sum_{n=k}^\infty\frac{a_k}{(n-k+1)!}x^n\\
&=\sum_{k=1}^\infty\sum_{n=0}^\infty\frac{a_k}{(n+1)!}x^{n+k}\\
&=\frac{e^x-1}{x}\sum_{k=1}^\infty a_kx^k\tag{1}
\end{align}
$$
Therefore,
$$
\sum_{k=1}^\infty a_kx^k=\frac{x^2}{(e^x-1)(1-x)}\tag{2}
$$
If $a_k$ tends to some limit $b$, then as $x\to1^-$ we would have
$$
\begin{align}
b
&=\lim_{x\to1^-}\frac{\displaystyle\sum_{k=1}^\infty a_kx^k}{\displaystyle\sum_{k=0}^\infty x^k}\\
&=\lim_{x\to1^-}\frac{x^2}{e^x-1}\\[9pt]
&=\frac1{e-1}\tag{3}
\end{align}
$$
Now that we have an idea of what the limit would be, let's try to prove it.
From the defining formula,
$$
a_n=1-\sum_{k=1}^\infty\frac{a_{n-k}}{(k+1)!}\tag{4}
$$
where we define $a_k=0$ for $k\le0$.
Thus, if we let $c_n=a_n-\frac1{e-1}$, then for $n\gt1$, we have
$$
c_n=-\sum_{k=1}^\infty\frac{c_{n-k}}{(k+1)!}\tag{5}
$$
Note that if $|c_k|\lt c\,(4/5)^k$ for $k\lt n$ then
$$
\begin{align}
|c_n|
&\le\sum_{k=1}^\infty\frac{c\,(4/5)^{n-k}}{(k+1)!}\\
&=c\,(4/5)^n\sum_{k=1}^\infty\frac{(4/5)^{-k}}{(k+1)!}\\
&=c\,(4/5)^n\frac45\left(e^{5/4}-1-\frac54\right)\\[9pt]
&\le c\,(4/5)^n\tag{6}
\end{align}
$$
Since $c_1=\frac{e-2}{e-1}$ and $c_n=-\frac1{e-1}$ for $n\le0$, we can use $c=\frac1{e-1}$ in $(6)$. Therefore,
$$
\begin{align}
\left|\,a_n-\frac1{e-1}\,\right|
&=|c_n|\\
&\le\frac{(4/5)^n}{e-1}\tag{7}
\end{align}
$$
Therefore,
$$
\lim_{n\to\infty}a_n=\frac1{e-1}\tag{8}
$$
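The recurrence from the defining formula is easy to iterate exactly with rational arithmetic, which confirms both the listed values and the limit (a quick numerical sketch; `a_seq` is my own helper name):

```python
from fractions import Fraction
from math import e, factorial

def a_seq(N):
    """a_1..a_N from the defining relation sum_{k=1}^n a_k/(n+1-k)! = 1."""
    a = {}
    for n in range(1, N + 1):
        # a_n = 1 - sum_{k=1}^{n-1} a_k / (n+1-k)!
        a[n] = 1 - sum(a[k] * Fraction(1, factorial(n + 1 - k))
                       for k in range(1, n))
    return a

a = a_seq(40)   # a[40] should already be within (4/5)^40/(e-1) of 1/(e-1)
```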
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/862914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Closed-form term for $\sum_{i=1}^{\infty} i q^i$ I am interested in the following sum
$$\sum_{i=1}^{\infty} i q^i$$
for some $q<1$.
Is there a closed-form expression for this? If yes, how does one derive it?
I am also interested in
$\sum_{i=x}^{\infty} i q^i$
for some $x>1$.
|
$$\sum_{i=1}^{\infty} i q^i=q\sum_{i=1}^{\infty} i q^{i-1}=q\frac{d}{dq}\sum_{i=1}^{\infty} q^{i}=q\frac{d}{dq}\sum_{i=0}^{\infty} q^{i}=q\frac{d}{dq}\frac{1}{1-q}=\frac{q}{(1-q)^2}$$
$$\sum_{i=x}^{\infty} i q^i=q\sum_{i=x}^{\infty} i q^{i-1}=q\frac{d}{dq}\sum_{i=x}^{\infty} q^{i}=q\frac{d}{dq}\left[\sum_{i=0}^{\infty} q^{i}-\sum_{i=0}^{x-1} q^{i}\right]=$$$$=q\frac{d}{dq}\left[\frac{1}{1-q}-\frac{1-q^{x}}{1-q}\right]=q\frac{d}{dq}\frac{q^{x}}{1-q}=\frac{(1-x)q^{x+1}+xq^{x}}{(1-q)^2}$$
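Both closed forms are easy to sanity-check numerically against truncated partial sums (illustrative values $q=0.3$, $x=4$; the truncation at $200$ terms is far below floating-point precision):

```python
# Sanity check of both closed forms against truncated sums.
q, x = 0.3, 4
full_sum = sum(i * q ** i for i in range(1, 200))
full_formula = q / (1 - q) ** 2
tail_sum = sum(i * q ** i for i in range(x, 200))
tail_formula = ((1 - x) * q ** (x + 1) + x * q ** x) / (1 - q) ** 2
```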
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/862966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
A single-segment Newton polygon implies henselian? I have a question about Newton polygons and henselian fields.
In p149 of Neukirch’s book(algebraic number theory:the beginning of Proposition 6.7), he says that
“We have just seen that the property of $K$ to be henselian follows from the condition that the Newton polygon of every irreducible polynomial $f(x)\in K[x]$ is a single segment.”
Question: Where is this fact shown? Is it in the proof of 6.6? Is this fact correct?
In Proposition 6.4, he showed that the uniqueness of extended valuations implies the condition of Newton polygons.
In Proposition 6.6, he showed that the equivalence between the uniqueness of extended valuations and henselianness.
However, I think that the proof of 6.6 uses not only the condition on Newton polygons but also the uniqueness (which I cannot remove).
On the other hand, there exists a monic irreducible polynomial such that its Newton polygon is a single segment but the splitting field of it has two extended valuations. (By an example $f(x)=x^2+1\in Q[x]$ and $p=5=(2+i)(2-i)$ given by Hagen at stackexchange/862893.)
|
Where do you see a problem?
Theorem 6.6 says that a field is henselian (that is Hensel's Lemma holds) if and only if the valuation extends uniquely to every algebraic extension.
The proof of the implication $\Rightarrow$ is already given in Theorem 6.2.
For the proof of the implication $\Leftarrow$ one assumes uniqueness, in which case the Newton polygon of an irreducible polynomial is a line. The latter is a consequence of Satz 6.4.
OK, I see now that the proof of Satz 6.7 really starts in a bit of a confusing way. The point here is that he shows that EVERY irreducible polynomial has a Newton polygon with only one segment, which by the remark three pages before is equivalent to the fact that all roots possess the same value for any extension of the valuation to the splitting field.
Now if for some algebraic extension of $K$ there are two different extensions of $v$, then one can find an irreducible polynomial having two roots with different values with respect to one of the two extensions of $v$.
That's it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Easy way to compute $Pr[\sum_{i=1}^t X_i \geq z]$ We have a set of $t$ independent random variables $X_i \sim \mathrm{Bin}(n_i, p_i)$.
We know that $$\mathrm{Pr}[X_i \geq z] = \sum_{j=z}^{\infty} { n_i \choose j } p_i^j (1-p_i)^{n_i -j}.$$
But is there an easy way to compute:
$$\mathrm{Pr}\left[\sum_{i=1}^t X_i \geq z\right]?$$
MY IDEAS:
This should have to do something with convolution, but I am not sure.
Is it easier to compute
$$\mathrm{Pr}\left[\sum_{i=1}^t X_i \leq z \right]?$$
What I thought of is maybe:
$$\mathrm{Pr}\left[\sum_{i=1}^t X_i \leq z \right]=\sum_{j=0}^{z} \mathrm{Pr}\left[\sum_{i=1}^t X_i = j \right]$$
but this seems to be quite hard with $t$ random variables.
I would appreciate any hint, and if you don't write an answer, I am interested in whether it is too difficult or too easy. Thank you.
|
I'll give you a general idea. If you want the details, look up an article by Tomas Woersch (here).
So you have
$$
S_m(n) = \sum_{k=0}^{m} \binom{n}{k}x^k = 1 + \binom{n}{1}x +\ldots + \binom{n}{m}x^m\\
\frac{S_m(n)}{\binom{n}{m}x^m} = 1 + \ldots + \frac{\binom{n}{1}x}{\binom{n}{m}x^m} + \frac{1}{\binom{n}{m}x^m} = 1 + \ldots + a_{m-1}x^{1-m} + a_m x^{-m}
$$
Now you need to do the following: obtain the ratios of binomial coefficients I denoted by $a_k$ and then approximate them using Stirling's approximation. Use the fact that
$$(1 - o(1))\sqrt{2 \pi n} (\frac{n}{e})^n \leq n! \leq (1+o(1))\sqrt{2 \pi n} (\frac{n}{e})^n$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Deriving marginal effects in multinomial logit model For the multinomial logit model, it holds that:
$$P[y_i=j]=\frac{\exp(\beta_{0,j} + \beta_1 x_{ij})}{\sum_h \exp(\beta_{0,h} + \beta_1 x_{ih})}.$$
Now my book states that the marginal effect is as follows:
$$\dfrac{\partial \operatorname{P}[y_i = j]}{\partial x_{ij}} = \operatorname{P}[y_i=j](1-\operatorname{P}[y_i=j])\beta_1$$
I tried deriving this but I did not find an easy way. Could anyone please help? Thanks in advance.
|
Cross multiply the equation to obtain:
$$\operatorname{P}(y_i=j)\sum_h \exp(\beta_{0,h} + \beta_1 x_{ih})=\exp(\beta_{0,j} + \beta_1 x_{ij})$$
Then differentiating with respect to $x_{ij}$ on both sides of the equality gives the following:
$$\dfrac{\partial \operatorname{P}(y_i = j)}{\partial x_{ij}}\sum_h \exp(\beta_{0,h} + \beta_1 x_{ih})+\operatorname{P}(y_i=j)\beta_1 \exp(\beta_{0,j} + \beta_1 x_{ij})= \beta_1\exp(\beta_{0,j} + \beta_1 x_{ij})$$
Now if we divide by $\sum_h \exp(\beta_{0,h} + \beta_1 x_{ih})$ we obtain:
$$\dfrac{\partial \operatorname{P}(y_i = j)}{\partial x_{ij}}+\operatorname{P}(y_i=j)\beta_1 \operatorname{P}(y_i=j)= \beta_1\operatorname{P}(y_i=j)$$
Rearranging, $\dfrac{\partial \operatorname{P}(y_i = j)}{\partial x_{ij}} = \operatorname{P}(y_i=j)(1-\operatorname{P}(y_i=j))\beta_1$, and the result is complete.
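A finite-difference check of the marginal effect, with made-up coefficient and regressor values for a three-alternative logit (all numbers here are hypothetical, chosen only for the test):

```python
import math

b0 = [0.2, -0.1, 0.4]     # beta_{0,h} (made up)
b1 = 0.7                  # beta_1 (made up)
x = [1.0, 2.0, -0.5]      # x_{ih} for one observation i (made up)

def prob_j(xs, j):
    """Multinomial-logit probability P[y_i = j]."""
    den = sum(math.exp(b0[h] + b1 * xs[h]) for h in range(3))
    return math.exp(b0[j] + b1 * xs[j]) / den

j = 0
h = 1e-6
x_up = list(x)
x_up[j] += h
x_dn = list(x)
x_dn[j] -= h
numeric = (prob_j(x_up, j) - prob_j(x_dn, j)) / (2 * h)  # central difference
p = prob_j(x, j)
analytic = p * (1 - p) * b1                              # book formula
```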
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Definite integral $\int_0^{2\pi}(\cos^2(x)+a^2)^{-1}dx$ How do I prove the following?
$$ I(a)=\int_0^{2\pi} \frac{\mathrm{d}x}{\cos^2(x)+a^2}=\frac{2\pi}{a\sqrt{a^2+1}}$$
|
If anyone wants to see a complex analysis solution.
Let $\gamma$ be the unit circle. This proof holds for all complex $a$ such that the integral exists.
$$ I(a)=\int_0^{2\pi}\!\!\! \frac{\mathrm{d}t}{\cos^2(t)+a^2}$$
Set $x = e^{it}$, so that
$$
\begin{align}
\sin(t) &= \frac{1}{2i}\left(x-\frac{1}{x}\right)\\
\cos(t) &=\frac{1}{2}\left(x+\frac{1}{x}\right)\\
\mathrm{d}t &= \frac{-i \, \mathrm{d}x}{x}
\end{align}
$$
$$\begin{align} I(a)&=\int_\gamma \frac{-i\,\mathrm{d}x}{x\frac{x^4+2x^2+4x^2a^2+1}{4x^2}} \\[.2cm] &=\int_\gamma \frac{-4ix\,\mathrm{d}x}{x^4+2x^2+4x^2a^2+1} \end{align}$$
The zeros of the denominator are
$$x= \pm\sqrt{-2 a^2-2\sqrt{a^2} \sqrt{a^2+1}-1},\quad x = \pm\sqrt{-2 a^2+2\sqrt{a^2}\sqrt{a^2+1}-1}$$
Only the second two roots will be inside the unit circle. Call these roots $x_+$ and $x_-$ and set $\alpha = x_+^2$.
$$I(a) = 2\pi i \left(\lim_{x\to x_+} \frac{-4ix(x-x_+)}{x^4+2x^2+4x^2a^2+1} + \lim_{x\to x_-} \frac{-4ix(x-x_-)}{x^4+2x^2+4x^2a^2+1}\right)$$
Factor out constants and apply L'Hôpital's rule to both of the limits:
$$\begin{align}I(a) &= 8\pi \left(\lim_{x\to x_+} \frac{2x-x_+}{4x^3+4x+8xa^2} + \lim_{x\to x_-} \frac{2x-x_-}{4x^3+4x+8xa^2}\right)\\
&= 8\pi \left(\frac{x_+}{4x_+^3+4x_++8x_+a^2} + \frac{x_-}{4x_-^3+4x_-+8x_-a^2}\right) \\
&=8\pi \left(\frac{1}{4x_+^2+4+8a^2} + \frac{1}{4x_-^2+4+8a^2}\right)\end{align}$$
Using the fact that $x_+^2 = x_-^2=\alpha$
$$\begin{align} I(a) &= 4\pi \left(\frac{1}{\alpha+1+2a^2}\right) \\ &= \frac{4\pi}{(-2 a^2+2\sqrt{a^2}\sqrt{a^2+1}-1)+1+2a^2} \\ &= \frac{2\pi}{\sqrt{a^2}\sqrt{a^2+1}}\end{align}$$
This is the result for $a \in \mathbb{C}$. For real $a > 0$ we have $\sqrt{a^2}=a$, so
$$I(a)=\int_0^{2\pi}\!\!\! \frac{\mathrm{d}t}{\cos^2(t)+a^2} = \frac{2\pi}{a\sqrt{a^2+1}}$$
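As a numerical cross-check of the closed form (a midpoint-rule sketch; the rule converges very fast here because the integrand is smooth and periodic):

```python
import math

def I_num(a, n=1000):
    """Midpoint rule on [0, 2*pi] for 1/(cos^2 t + a^2)."""
    h = 2 * math.pi / n
    return h * sum(1 / (math.cos((k + 0.5) * h) ** 2 + a * a)
                   for k in range(n))

a = 1.7   # arbitrary positive test value
closed_form = 2 * math.pi / (a * math.sqrt(a * a + 1))
```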
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Proving a lemma for the irrationality of $e$ Define
$$I_n = \int_0^1 e^tt^ndt$$ where $n$ is a non-negative integer.
In a related question here, I asked how the irrationality of $e$ can be proven based on the following definitions and results:
$$I_{n+1} = e - (n + 1)I_n$$
$$I_n = (-1)^{n + 1}n! + e\sum_{r = 0}^n (-1)^r\frac{n!}{(n-r)!}$$
$$\frac{1}{n+1}\le I_n < \frac{e}{n} ~~~\text{for all $n \ge 1$}$$
The thing is, how would one go about proving the third inequality? I know that $e^{-1} = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{k!}$ but it doesn't seem to be of much use.
|
Notice that if $n \geq 1$, then $e^tt^n \leq e^1t^n$ for all $t \in [0, 1]$. This implies that:
$$\int_0^1 e^tt^ndt \leq e\int_0^1 t^ndt = \frac{e}{n+1} < \frac{e}{n}$$
Hence, $\displaystyle I_n < \frac{e}{n}$. For the lower bound, note that $e^tt^n \geq t^n$ for $t \in [0, 1]$, so $\displaystyle I_n \geq \int_0^1 t^ndt = \frac{1}{n+1}$.
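Both sides of the sandwich can also be verified numerically for small $n$ (a midpoint-rule sketch):

```python
import math

def I_num(n, steps=5000):
    """Midpoint-rule approximation of int_0^1 e^t * t^n dt."""
    h = 1.0 / steps
    return h * sum(math.exp((k + 0.5) * h) * ((k + 0.5) * h) ** n
                   for k in range(steps))

# Check 1/(n+1) <= I_n < e/n for n = 1..7.
checks = [(1 / (n + 1) <= I_num(n) < math.e / n) for n in range(1, 8)]
```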
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Two functions whose order can't be equated - big O notation Our teacher talked today in the class about big O notation, and about order relations.
She mentioned that the set of orders of magnitude is not linearly ordered.
Meaning, there are functions $f,g$ such that $f$ is not $O(g)$ and $g$ is not $O(f)$, but she gave no such example and I'm having trouble coming up with one.
Just out of sheer curiosity, could anyone come up with two such functions?
|
$f(x)=x^2\sin x$ and $g(x)=x.$
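To see why neither bound can hold: along $x=\pi/2+2\pi n$ the ratio $f(x)/g(x) = x\sin x = x$ grows without bound, while at $x=\pi n$ we have $f=0$ but $g=\pi n>0$, so no constant works in either direction. A quick numerical illustration:

```python
import math

def f(x):
    return x * x * math.sin(x)

def g(x):
    return x

# f is not O(g): along x = pi/2 + 2*pi*n the ratio f/g = x*sin(x) = x grows.
peaks = [math.pi / 2 + 2 * math.pi * n for n in (1, 10, 100)]
ratios = [f(x) / g(x) for x in peaks]

# g is not O(f): at x = pi*n, f vanishes while g = pi*n > 0, so no constant
# C can give g(x) <= C*|f(x)| at those points.
near_zeros = [abs(f(math.pi * n)) for n in (1, 10, 100)]
```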
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Proof about AM-GM inequality generalized Note: I'm not sure this type of questions are welcome on the site. In case tell me.
Let's define the $p$ mean as
$$M_p(x_1, \dots, x_n) = \sqrt[p] { \frac 1n \sum_{i = 1}^n x_i^p}$$
for $x_1, \dots, x_n > 0$.
Your goal is to prove that
$$\dots \le M_{-2} \le M_{-1} = \mathcal{H} \le M_0 = \mathcal{G} \le M_1 = \mathcal{A} \le M_2 = \mathcal{Q} \le M_3 \dots$$
($M_0$ should be interpreted as the geometric mean, that is $M_0 = \sqrt[n]{\prod_{i = 1}^n x_i}$. The notation is justified if we consider $p \to 0$ )
Ideally I would like to have as many proofs as possible of the above, the more elegant the better. Extra points for not using any "nuclear bomb" in proving the result!
P.S. In case it is not clear, $\mathcal{H}, \mathcal{G}, \mathcal{A}, \mathcal{Q}$
are respectively the harmonic, geometric, arithmetic and quadratic means.
|
The magic word is convexity, and there is really not much more to it.
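The monotonicity in $p$ is at least easy to observe numerically (a sketch; `M` implements the definition above, with $p=0$ interpreted as the geometric mean):

```python
import math

def M(p, xs):
    """Power mean M_p; p == 0 is the geometric-mean limit."""
    n = len(xs)
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** p for x in xs) / n) ** (1 / p)

xs = [1.0, 2.0, 5.0, 9.0]                       # arbitrary positive data
means = [M(p, xs) for p in (-3, -2, -1, 0, 1, 2, 3)]
# Strictly increasing, since the data are not all equal.
increasing = all(a < b for a, b in zip(means, means[1:]))
```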
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
About Trigonometry Is there anything cool about trigonometry? I was just curious. I'm learning trig right now and I often find myself asking myself, "What's the point?" I feel if I knew what I was working on and why, I'd be more successful and goal-oriented.
|
*
*The beautiful rhythmic patterns of trigonometric identities.
*The fact that things like Fourier series exist. Look at what it led to: http://en.wikipedia.org/wiki/List_of_Fourier_analysis_topics
*And look at this list of cycles.
*The uses of trigonometry.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
}
|
Find the speed S by using radical equations There are two word problems that I cannot write as radical equations.
1. A formula that is used for finding the speed $s$, in mph, that a car was going from the length $L$, in feet, of its skid marks can be written as $s = 2\sqrt{5L}$. In 1964, a jet-powered car left skid marks nearly 6 miles long. According to the formula, how fast was the car going? ($5280$ feet $= 1$ mile)
2. A ship's speed $s$ (in knots) varies based on the equation $s = 6.5\sqrt[7]{p}$, where $p$ is the horsepower generated by the engine. If a ship is traveling at a speed of 25 knots, how much horsepower was the engine generating?
Thanks for your help.
|
For the first question, we can see that the speed is a function of the length of the skid mark. Therefore,
$$
S = f(L)
$$
We also know that the relation between $ S $ and $ L $ is that $ S $ is $2$ times the square root of $5L$:
$$
S = 2 \times \sqrt{5L}
$$
Similarly, for the next question,
$$
S = 6.5 \times \sqrt[7]{P}
$$
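Plugging in the given numbers answers both word problems (a quick sketch using the question's constant $6.5$ for the second part):

```python
import math

# Problem 1: L = 6 miles of skid marks, s = 2*sqrt(5*L) with L in feet.
L = 6 * 5280                 # 31680 feet
s1 = 2 * math.sqrt(5 * L)    # about 796 mph

# Problem 2: s = 6.5 * p^(1/7); solve for p when s = 25 knots.
p = (25 / 6.5) ** 7          # roughly 12,450 horsepower
```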
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/863924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that f is measurable Let $U$ be a open Set of $\mathbb{R} \times [0,\infty]$ and let f be defined as
$$f: \mathbb{R}\mapsto [0,\infty], \quad f(x) := \max\{0,\sup\{y| (x,y) \in U\}\} $$
How can I show that $f$ is measurable?
|
Define $g:\mathbb{R}\rightarrow\left[0,\infty\right]$ by $x\mapsto\sup\left\{ y\mid\left(x,y\right)\in U\right\} $ and let $c\in\mathbb R$ be a constant.
If $x\in\left\{ g>c\right\} $, then the set $U$ contains an element $\left(x,y\right)$ with $y>c$.
Find some $\epsilon>0$ s.t. $y-\epsilon>c$ and $\left(x-\epsilon,x+\epsilon\right)\times\left(y-\epsilon,y+\epsilon\right)$
is a subset of $U$.
This is possible because $U$ is open and leads
to: $\left(x-\epsilon,x+\epsilon\right)\subset\left\{ g>c\right\} $.
This proves that the set $\left\{ g>c\right\} $ is open, hence Borel-measurable.
This is true for any $c\in\mathbb{R}$ allowing the conclusion that
$g$ is Borel-measurable.
Then $f$, prescribed by $x\mapsto\max\left\{ 0,g\left(x\right)\right\} $, is also Borel-measurable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Use a proof by cases to show that $\lfloor n/2 \rfloor$ * $\lceil n/2 \rceil$ = $\lfloor \frac{n^2}{4} \rfloor$ for all integers $n$. Question
Use a proof by cases to show that $\lfloor n/2 \rfloor$ * $\lceil n/2 \rceil$ = $\lfloor \frac{n^2}{4} \rfloor$ for all integers $n$.
My Attempt:
I can only think of two cases,
*
*$n/2 \in \mathbb{Z}$
*$n/2 \notin \mathbb{Z}$
First case is straightforward:
$\lfloor n/2 \rfloor = \lceil n/2 \rceil = n/2$,
$\frac{n}{2}*\frac{n}{2} = \frac{n^2}{4}$
Second case troubled me,
$\lceil n/2 \rceil = \lfloor n/2 \rfloor + 1\\
\lceil n/2 \rceil = \lfloor n/2 + 1\rfloor$
$n/2 - 1 \leq \lfloor n/2 \rfloor < n/2\\
n/2 \leq \lfloor n/2 + 1 \rfloor < n/2 + 1$
I multiply both inequalities,
$\frac {n^2 - 2n}{4} \leq \lfloor n/2 \rfloor * \lfloor n/2 + 1 \rfloor < \frac{n^2 + 2n}{4}$
I need to prove that $\lfloor n/2 \rfloor * \lfloor n/2 + 1 \rfloor$ should be at least $n^2 /4$ and less than $n^2 /4 + 1$, this ensures that if I floor that, it will be $n^2/4$, but I'm lost.
My second attempt (I didn't think the above went anywhere). This time I used some epsilon $\epsilon \in (0, 1)$,
$\lfloor n/2 \rfloor = n/2 - \epsilon\\
\lceil n/2 \rceil = n/2 + 1 - \epsilon$
$\lfloor n/2 \rfloor * \lfloor n/2 + 1 \rfloor = (n/2 - \epsilon)*(n/2 + 1 - \epsilon)\\
= n^2/4 + n/2 - n*\epsilon/2 - n*\epsilon/2 - \epsilon + \epsilon ^ 2\\
= n^2/4 + n/2 - 2n\epsilon/2 + 2\epsilon^2/2\\
= n^2/4 + \frac{n-2n\epsilon - 2\epsilon + 2\epsilon^2}{2}$
The problem now is I need to prove that $\frac{n-2n\epsilon - 2\epsilon + 2\epsilon^2}{2}$ is between 0 and 1. I don't really think this one is the solution is either so I gave up.
|
So you've got the case when $n$ is even. When $n$ is odd, then $\lfloor n/2 \rfloor * \lceil n/2 \rceil = \frac{n - 1}{2} * \frac{n + 1}{2} = \frac{n^2 -1}{4}$.
We want to show that this equals the right hand side. Since $n^2/4 = (n/2)^2$, we can rewrite $n^2/4$ as $(m + 1/2)^2$ where $m$ is an integer, and $m = \frac{n-1}{2}$. Furthermore, $(m + 1/2)^2 = m^2 + m + 1/4$.
Observe that $\lfloor m^2 + m + 1/4 \rfloor = m^2 + m$ since $m$ is an integer. Let's go write $m^2 + m$ in terms of $n$:
$m^2 + m = \frac{(n-1)^2}{2^2} + \frac{n-1}{2} = \frac{n^2 - 2n + 1}{4} + \frac{2n - 2}{4} = \frac{n^2 -1}{4}$.
I hope that's not too roundabout. The important thing was to show that flooring $n^2/4$ is the same as subtracting $1/4$.
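The identity is easy to spot-check over a range of integers, including negatives:

```python
import math

# Check floor(n/2) * ceil(n/2) == floor(n^2 / 4) over a range of integers.
def identity_holds(n):
    return math.floor(n / 2) * math.ceil(n / 2) == math.floor(n * n / 4)

all_ok = all(identity_holds(n) for n in range(-50, 51))
```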
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Is there a finite non-solvable group which is $p$-solvable for every odd $p\in\pi(G)$? Let $G$ be a finite non-solvable group and let $\pi(G)$ be the set of prime divisors of order of $G$. Can we say that there is $r \in \pi(G)-\{2\}$ such that $G$ is not a $r$-solvable group?
|
Yes. A finite group $G$ is $p$-solvable if every nonabelian composition factor of $G$ has order coprime to $p$.
If $G$ is not solvable, then it has a nonabelian simple group as a composition factor. This nonabelian simple group must have order divisible by some odd prime $r$ (finite groups of order $2^k$ are solvable). Then $r$ also divides the order of $G$, so $G$ is not $r$-solvable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Lines in $\mathbb{CP}^n$ and in $\mathbb{CP}^2$ The question is, in how many points does a line in $\mathbb{CP}^n$ intersect $\mathbb{CP}^2$?
By a line in $\mathbb{CP}^n$, I mean a copy of $\mathbb{CP}^1$. I have tried with a system of equations (a line in $\mathbb{CP}^n$ is the common zero locus of $n-1$ independent polynomials of degree $1$), to see if that line passes through one point or two.
Thank you.
|
It depends on $n$, and on the line. $\def\C{\mathbb C}\def\P{\mathbb P}\def\CP{\C\P}$We consider the embedding:
$$ \CP^2 = \{[x_0:x_1:x_2: 0 :\cdots : 0] \mid [x_0:x_1:x_2] \in \CP^2\} \subseteq \CP^n $$
Lifting this to $\C^{n+1}$, we have $\C^3 \subseteq \C^{n+1}$ embedded as the first three coordinates. A line in $\CP^n$ corresponds to a 2-dimensional subspace $A\subseteq \C^{n+1}$. This subspace intersects $\C^3$ in $A$, or (possible for $n \ge 3$) in a one-dimensional subspace $L$, or only in $\{0\}$ (possible for $n \ge 4$).
Hence a line $[A]$ in $\CP^n$ intersects $\CP^2$ in the line $[A]$, a point $[L]$ (possible for $n \ge 3$) or in no point (possible for $n \ge 4$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Constant in Sobolev-Poincare inequality on compact manifold $M$; how does it depend on $M$? Let $M$ be a smooth compact Riemannian manifold of dimension $n$. Let $p$ and $q$ be related by $\frac 1p = \frac 1q - \frac 1n$. There is a constant $C$ such that for all $u \in W^{1,q}(M)$
$$\left(\int_M |u-\overline{u}|^p\right)^{\frac 1p} \leq C\left(\int_M |\nabla u|^q\right)^{\frac 1q}.$$
I'm not interested in the best constant $C$ but want to know how does $C$ depend on $M$? I am hoping for something like $C=C(|M|,n)$ where $|M|$ is the volume of $M$. Can someone refer me to this result? Thanks.
|
No, it's not nearly that simple. The constant $C$ quantifies the connectivity of the manifold. It can be imagined as the severity of traffic jams that occur when all inhabitants of the manifold decide to drive to a random place at the same time. For example, let $M$ be two unit spheres $S^2$ joined by a thin cylinder of radius $r\ll 1$ and length $1$. There is a smooth function $u$ that is equal to $1$ on one sphere, $-1$ on the other, and has gradient $\approx 2$ on the cylinder. For this function $\bar u=0$, so
$$\left(\int_M |u-\overline{u}|^p\right)^{\frac 1p} \approx (8\pi)^{1/p} $$
On the other hand, the gradient is supported on the cylinder, so
$$\left(\int_M |\nabla u|^q\right)^{\frac 1q} \approx (4\pi r)^{1/q} $$
The constant $C$ depends on $r$ and blows up as $r\to 0$, though the diameter and volume of the manifold hardly change.
See also: Poincaré Inequality.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to know if a function is onto? How do you figure out whether this function is onto?
$\mathbb{Z}_3\rightarrow \mathbb{Z}_6:f(x)=2x$
Onto of course means: for every element $b$ in the codomain there exists an element $a$ in the domain such that $f(a)=b$.
Here the codomain is the integers mod $6$.
So let $k\in\mathbb{Z}_6$
But I am not sure how to see if it is onto.
|
To put it a bit differently from Bananarama, the map gives you the values $\{f(0),f(1),f(2)\} \pmod 6$. Even if these values are all different, can the map be onto?
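Concretely, the image of the map can be listed (a quick sketch):

```python
# f : Z_3 -> Z_6, f(x) = 2x. List the image explicitly.
image = {(2 * x) % 6 for x in range(3)}
# The three values are all different, yet only 3 of the 6 codomain
# elements are hit, so f cannot be onto.
```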
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Decomposing a square matrix into two non-square matrices I have a matrix $A$ with dimensions $(m \times m)$ and it is positive definite. I want to find the matrix $B$ with dimensions $(n \times m), (n \ll m)$, which follows the following expression: $$A = B'B$$ Here $B'$ is the transpose of $B$.
I was also thinking of using the SVD to rewrite $A$ as: $$A = QDQ'$$ but I am not sure how to proceed from there onwards.
Any help to find the matrix $B$ is highly appreciated.
|
If $A$ is positive definite (all eigenvalues positive) then you will never be able to find a $B$ that gives you exactly $A = B'B$, because $A$ is invertible but $B'B$ is not (it has rank at most $n < m$). However, you can use the SVD to come up with an approximation $A \approx B'B$ with $B = \sqrt{D_n} Q'$, where $D_n$ is the sub-matrix of $D$ with all but the top $n$ singular values set to $0$. You can rewrite such $B$ as $n \times m$ as desired, and the approximation to $A$ will be as close as possible in terms of the $L_2$ norm.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Constructing a vector space of dimension $\beth_\omega$ I'm trying to solve Exercise I.13.34 of Kunen's Set Theory, which goes as follows (paraphrased):
Let $F$ be a field with $|F| < \beth_\omega$, and $W_0$ a vector space over $F$ with $\aleph_0 \le \dim W_0 < \beth_\omega$. Recursively let $W_{n+1} = W_n^{**}$ so that $W_n$ is naturally identified with a subspace of $W_{n+1}$. Then let $W_\omega = \bigcup_n W_n$. Show that $|W_\omega| = \dim W_{\omega} = \beth_\omega$.
Some useful facts:
*
*If $W$ is a vector space over $F$ with basis $B$, there is an obvious bijection between $W^*$ and ${}^{B}F$ (i.e. the set of functions from $B$ to $F$, denoted this way to avoid ambiguity with cardinal exponentiation). Hence $|W^*| = |F|^{\dim W}$.
*Asaf Karagila showed in this answer that $|W| = \max(\dim W, |F|)$.
*By the "dual basis" construction we have $\dim W^* \ge \dim W$. (There's an assertion on Wikipedia that the inequality is strict whenever $\dim W$ is infinite, but I don't immediately see how to prove that.)
One inequality is pretty easy. Using Fact 1, we get $|W^*| = |F|^{\dim W} \le |F|^{|W|}$. Now thanks to the simple fact that ${}^{\beth_n} \beth_m \subset \mathcal{P}(\mathcal{P}(\beth_m \times \beth_n))$ we have $\beth_m^{\beth_n} \le \beth_{\max(m,n)+2}$. So by induction it follows that $|W_n| < \beth_\omega$ for each $n$, and hence (using Kunen's Theorem 1.12.14) we get $|W_\omega| \le \beth_\omega$.
For the other direction, if $\dim W \ge |F|$ then Fact 3 gives us $\dim W^* \ge |F|$ and hence by Facts 1 and 2 $$\dim W^* = \max(\dim W^*, |F|) = |W^*| = |F|^{\dim W} \ge 2^{\dim W}.$$
So if $\dim W_0 \ge |F|$ then by induction we get $\dim W_n \ge \beth_{2n}$ and therefore $\dim W_\omega \ge \beth_\omega$. Since $|W_\omega| \ge \dim W_\omega$ we must have equality throughout.
But I am stuck on the case $\aleph_0 \le \dim W_0 < |F|$. Intuitively it still seems like $\dim W^*$ should be "much larger" than $\dim W$. We shouldn't really need to go through the cardinalities of the spaces themselves, but I can't see what to do. Any hints?
|
Hint: First prove the case when $F$ is countable. For the case $|F|<\beth_\omega$, consider the prime field $K$ of $F$. Let $W'_\omega$ be the space obtained using the construction above considering $W_0$ as a $K$-vector space, then $|W_\omega'|=\beth_\omega$. As there is a copy of $W_\omega'$ in $W_\omega$, we obtain $|W_\omega|=\beth_\omega$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Numerical integration of $\sin(p_{m})$ and $\cos(p_{m})$ for a polynomial $p_{m}$ I was wondering if anyone knew about any numerical methods specifically designed for integrating functions of the form $\sin(p_{m})$ and $\cos(p_{m})$ where $p_{m}$ is a polynomial of degree $m$. I don't think $m$ will be too large in the application, but it will probably be at least $4$. I was reading this thread about Laplace's method but I'm not really sure that's what I want.
|
I think you can use Gaussian quadrature with weight functions $w(x)=\sin(p_m)$ and $w(x)=\cos(p_m)$. For any weight function there is a family of orthogonal polynomials related to that function in $[a,b]$. This family can be obtained by Gram-Schmidt orthogonalization process on $1,x,x^2,\ldots,x^n$.
Let $w(x)$ be a weight function on $[a,b]$ and $\{p_0,p_1,\ldots,p_n\}$ be a family of orthogonal polynomials with $\deg p_k = k$. Let $x_1,x_2,\ldots,x_n$ be the roots of $p_n(x)$ and
$$
L_i(x)=\prod_{j=1, j\ne i}^{n}{\frac{x-x_j}{x_i-x_j}}
$$
be the Lagrange polynomials of these roots. Then the Gaussian quadrature is defined by
$$
\int_a^b{w(x)f(x)dx}\approx \sum_{i=1}^{n}{w_i f(x_i)}=\sum_{i=1}^{n}{\left(\int_a^b{w(x)L_i(x)dx} \right) f(x_i)}
$$
The above formula is exact for all polynomials of degree $\le 2n-1$. This is better than the Newton–Cotes formulas, since the $(n+1)$-point Newton–Cotes rule is exact only for polynomials of degree $\le n$ ($n$ odd) or $\le n+1$ ($n$ even).
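Here is a numerical sketch of the construction above, assuming numpy/scipy. The weight is taken as $w(x)=\cos(x^2)$ on $[0,1]$, where it stays positive; for sign-changing weights the Gram–Schmidt inner product is no longer positive definite and the recipe needs more care.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

def w(x):                       # weight cos(p_m) with p_m(x) = x^2; positive on [0, 1]
    return np.cos(x**2)

def inner(p, q):                # <p, q> = integral_0^1 w(x) p(x) q(x) dx
    return quad(lambda x: w(x) * P.polyval(x, p) * P.polyval(x, q), 0.0, 1.0)[0]

def orthogonal_polys(n):        # Gram-Schmidt on 1, x, ..., x^n (coeffs low-to-high)
    polys = []
    for k in range(n + 1):
        p = np.zeros(k + 1)
        p[k] = 1.0
        for q in polys:
            p = P.polysub(p, (inner(p, q) / inner(q, q)) * q)
        polys.append(p)
    return polys

n = 3
pn = orthogonal_polys(n)[n]
nodes = np.roots(pn[::-1]).real          # roots of p_n lie in (0, 1)

weights = []
for i, xi in enumerate(nodes):
    others = [xj for j, xj in enumerate(nodes) if j != i]
    Li = P.polyfromroots(others) / np.prod([xi - xj for xj in others])
    weights.append(quad(lambda x: w(x) * P.polyval(x, Li), 0.0, 1.0)[0])

f = lambda x: x**5                       # degree 5 <= 2n - 1, so the rule is exact
approx = sum(wi * f(xi) for wi, xi in zip(weights, nodes))
exact = quad(lambda x: w(x) * f(x), 0.0, 1.0)[0]
```

For a degree-$m$ polynomial $p_m$ inside the sine or cosine, only the positivity of the weight on $[a,b]$ matters for this sketch, not the value of $m$.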
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why is a random variable called so despite being a function? According to my knowledge, it's a function $P(X)$ which includes all the possible outcomes of a random event.
|
IMO, the random variable $X$ can be considered as a variable resulting from a function, and it is often used as a variable in $P(X)$. For example, $y=mx+c$ is a function, but $y$ can then be used as a variable in $f(y)=y^2$.
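To make the "function" point concrete: a random variable is literally a function on the sample space, and the probability lives on the outcomes, not in $X$ itself. A minimal sketch with a hypothetical two-coin-flip example:

```python
# sample space for two fair coin flips
omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def X(outcome):
    # X is literally a function Omega -> R: it counts heads in the outcome
    return sum(1 for side in outcome if side == "H")

# "P(X = 1)": the randomness lives in which outcome occurs; X is deterministic
p_one_head = sum(1 for o in omega if X(o) == 1) / len(omega)
```

Given a fixed outcome, $X$ returns a fixed number; "random" refers to how the input outcome is drawn.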
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
Is $-|x|\le\sin x\le|x|$ for all $x$ true? I have seen in Thomas' Calculus that says to prove $\lim_{x\rightarrow0}\sin x=0$, use the Sandwich Theorem and the inequality $-|x|\le\sin x\le|x|$ for all $x$.
My question is how could the inequality be true? If we derive from $-1\le\sin x\le1$, we could only get $-|x|\le|x|\sin x\le|x|$?
In the textbook it is written that it follows from the definition of $\sin x$ that $-|x|\le\sin x\le|x|$ for all $x$. Is there any easy way to show this?
Thanks for any help.
|
The unit circle is parametrized by $\theta\mapsto(\cos\theta,\sin\theta)$:
*(figure: the unit circle, with the red path of length $\theta$ and the purple vertical segment of length $\sin\theta$)*
Consider an angle $0\le\theta\le\frac{\pi}{2}$, as depicted above. The red line's length is $\theta$, and $\sin\theta$ is the purple length. Since they both travel the same vertical distance but the red one also travels horizontal distance, the red line is longer, hence $\sin\theta\le\theta$. (This can be proved formally with calculus if one so desires.) For $\theta>1$ the inequality is immediate, since $\sin\theta\le 1<\theta$. Since $\sin(-\theta)=-\sin\theta$, the claim follows for negative angles too.
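The calculus proof alluded to above fits in one line. Let $h(\theta)=\theta-\sin\theta$; then

```latex
h(0) = 0, \qquad h'(\theta) = 1 - \cos\theta \ge 0 \quad \text{for all } \theta,
```

so $h$ is nondecreasing and $h(\theta)\ge 0$ for every $\theta\ge 0$, i.e. $\sin\theta\le\theta$ there; oddness of $\sin$ then gives $-|\theta|\le\sin\theta\le|\theta|$ for all $\theta$.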
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Completing the following equation by the suitable method I got this linear-equations-in-two-variables problem at my school.
I understand the basics of a normal linear equation, but this seems different: instead of having a pure number after the "=", they have a ratio. Here is the problem.
$$X:2Y = 5:14$$
$$(X+4) : (3Y-21) = 2:3$$
What I've tried to do is just guess $X$ and $Y$ to solve it (it took a long time and I still haven't found it). For example, if $X$ is 5 and $Y$ is 7, it fits the first equation but won't fit the 2nd equation.
The ":" sign is for ratio, not "divided by".
Please explain how you solve it and don't just give the answer, because as you can see I'm trying to learn, not just trying to get my homework solved.
|
We can start by solving both equations for the same variable.
$$X:2Y = 5:14\implies Y = \frac{7 X}{5}$$
$$(X+4) : (3Y-21) = 2:3 \implies Y = \frac{X + 18}{2}$$
We now equate the two "solutions" of $Y$.
$$\frac{7 X}{5}=\frac{X + 18}{2}\\
\implies \frac{7 X}{5} = \frac{X}{2} + 9\\\implies 14X-5X =90\\
\implies X=10\qquad
\implies Y=\frac{7 X}{5}
=\frac{70}{5}=14$$
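A quick check of this solution with a CAS, a sketch assuming sympy is available:

```python
from sympy import Eq, solve, symbols

x, y = symbols("x y")
# X : 2Y = 5 : 14   cross-multiplies to 14 X = 10 Y
# (X + 4) : (3Y - 21) = 2 : 3   cross-multiplies to 3(X + 4) = 2(3Y - 21)
sol = solve([Eq(14 * x, 10 * y), Eq(3 * (x + 4), 2 * (3 * y - 21))], [x, y])
```

The cross-multiplied forms are exactly the two equations solved by hand above.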
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/864999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Greatest value of f If $f'(x)=6-x$ then which of the following has the greatest value?
*
*$f(2.01)-f(2)$
*$f(3.01)-f(3)$
*$f(4.01)-f(4)$
*$f(5.01)-f(5)$
*$f(6.01)-f(6)$
I know the answer is $f(2.01)-f(2)$, but how do I prove it?
|
Hint: Use the approximation
$$f'(x)\approx \frac{f(x+h)-f(x)}{h}$$ with $h=0.01$.
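To see the hint in action numerically, a sketch; the antiderivative $F$ below is an assumed choice, since $f$ is only determined up to a constant:

```python
def F(x):
    # an assumed antiderivative of f'(x) = 6 - x
    return 6 * x - x * x / 2

h = 0.01
diffs = {a: F(a + h) - F(a) for a in (2, 3, 4, 5, 6)}
largest = max(diffs, key=diffs.get)   # f(x + h) - f(x) ~ h (6 - x): largest at x = 2
```

Since $f(x+h)-f(x)\approx h\,(6-x)$, the smallest listed $x$ gives the greatest difference.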
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Big $O$, little $o$ Notation So I've gone through the typical undergraduate math sequence (two semesters of real analysis, two semesters of abstract algebra, some measure theory, but I haven't taken discrete math) and in various posts online, I keep on seeing things such as
$$O(x^2) $$
which I've never encountered in my formal education. What in the world does the big $O$ notation mean, to someone who's completely new to the notation, so that I understand how to use it? (It seems to me like a way to write "some polynomial stuff.")
I've also encountered "little" $o$ notation in studying for actuarial exams.
Also, why do we care about $O$ and $o$?
|
Big $O$ notation:
$f(n)=O(g(n))$:
$\exists \text{ constant } c>0 \text{ and } n_0 \geq 1 \text{ such that, for } n \geq n_0: \\ 0 \leq f(n) \leq c g(n)$
Little $o$ notation:
$f(n)=o(g(n))$:
$\text{ if for each constant } c>0 \text{ there is } n_0 \geq 1 \text{ such that } \\ 0 \leq f(n) < c g(n) \ \forall n \geq n_0$
In other words $\displaystyle{\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)}=0}$
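Both definitions can be checked mechanically for concrete functions. A sketch; the witnesses $c=4$ and $n_0=10$ are choices made for this example, not canonical values:

```python
# f(n) = 3n^2 + 10n is O(n^2): the witnesses c = 4, n0 = 10 work,
# because 10n <= n^2 exactly when n >= 10.
f = lambda n: 3 * n**2 + 10 * n
g = lambda n: n**2

ok_O = all(0 <= f(n) <= 4 * g(n) for n in range(10, 1000))

# n = o(n^2): for ANY c > 0, n < c * n^2 holds once n > 1/c,
# reflecting n / n^2 -> 0.
def little_o_holds(c):
    n0 = int(1 / c) + 1          # past this point n < c * n^2
    return all(n < c * n**2 for n in range(n0, n0 + 1000))

ok_o = all(little_o_holds(c) for c in (1.0, 0.1, 0.001))
```

The big-$O$ check uses one fixed constant; the little-$o$ check must succeed for every constant tried, which is the essential difference between the two definitions.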
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
How to find the limit of the following function? I found this problem in a test: $$\lim_{x \to 0+} \frac{1}{x} \int_0^{2x} (\sin{t})^t\,\mathrm{d}t.$$ The answer is 2, but I can't figure out a method for evaluating such a limit.
|
Using the L'Hospital rule (differentiating the numerator $\int_0^{2x}(\sin t)^t\,dt$ and the denominator $x$) we have: $$\lim_{x\to 0_+ } \frac{1}{x}\int_0^{2x} (\sin t)^t dt =\lim_{x\to 0_+ } 2(\sin 2x )^{2x},$$ and since $(\sin 2x)^{2x}=e^{2x\ln\sin 2x}\to e^0=1$ as $x\to 0_+$, the limit equals $2$.
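A numerical sanity check, a sketch assuming scipy: the expression indeed approaches $2$ as $x\to 0^+$.

```python
from math import sin

from scipy.integrate import quad

def averaged_integral(x):
    # (1/x) * integral_0^{2x} (sin t)^t dt; the claim is that this tends to 2
    val, _ = quad(lambda t: sin(t) ** t, 0.0, 2.0 * x)
    return val / x
```

Evaluating at progressively smaller $x$ shows the value creeping up toward $2$.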
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Showing $\ln(\sin(x))$ is in $L_1$
Prove $\ln[\sin(x)] \in L_1 [0,1].$
Since the problem does not require actually solving for the value, my strategy is to bound the integral somehow. I thought I was out of this one free since for $\epsilon > 0$ small enough, $$\lim_{\epsilon \to 0}\int_\epsilon^1 e^{\left|\ln(\sin(x))\right|}dx=\cos(\epsilon)-\cos(1) \to 1-\cos(1)<\infty$$
and so by Jensen's Inequality, $$e^{\int_0^1 \left| \ln(\sin(x))\right|\,dx}\le \int_0^1e^{\left|\ln(\sin(x))\right|}\,dx\le1-\cos(1)<\infty$$ so that $\int_0^1 \left|\ln(\sin(x))\right|\,dx<\infty$.
The problem, of course, is that the argument begs the question, since Jensen's assumes the function in question is integrable to begin with, and that's what I'm trying to show.
Any way to save my proof, or do I have to use a different method? I attempted integration by parts to no avail, so I am assuming there is some "trick" calculation I do not know that I should use here.
|
A simpler approach would be to observe that the function $x^{1/2}\ln \sin x$ is bounded on $(0,1]$, because it has a finite limit as $x\to 0$ -- by L'Hôpital's rule applied to $\dfrac{\ln \sin x}{x^{-1/2}}$. This gives $|\ln \sin x|\le Mx^{-1/2}$.
As Byron Schmuland noted, $e^{|\ln \sin x|} = 1/\sin x$, which is nonintegrable; this is fatal for your approach.
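The integrability claim and the $x^{-1/2}$ bound can both be sanity-checked numerically. A sketch, assuming scipy:

```python
from math import log, sin, sqrt

from scipy.integrate import quad

# integral_0^1 |ln sin x| dx is finite (about 1.06) despite the singularity at 0
total, err = quad(lambda x: abs(log(sin(x))), 0.0, 1.0)

# the bound |ln sin x| <= M x^{-1/2}: estimate M on a grid approaching 0
grid = [10.0 ** (-k) for k in range(1, 9)]
M = max(abs(log(sin(x))) * sqrt(x) for x in grid)
```

The product $|\ln\sin x|\sqrt{x}$ stays bounded as $x\to 0^+$, which is exactly the content of the L'Hôpital argument above.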
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Function is defined on the whole real line and $|f(x) -f(y)| \leq |x-y|^\alpha$, then.... Given: $f(x)$ is defined on $\mathbb{R}$ and $|f(x) -f(y)| \le |x-y|^\alpha$. Which of the following statements are true?
I. If $\alpha > 1$, then $f(x)$ is constant.
II. If $\alpha = 1$, then $f(x)$ is differentiable.
III. $0 < \alpha < 1$, then $f(x)$ is continuous.
Answer: I $-$ true, II $-$ false, III $-$ true.
I wonder how this result was obtained. Maybe somebody can give some explanations?
|
Hints: For (I) $0\le\vert\frac{f(x)-f(y)}{x-y}\rvert\le\vert x-y\rvert^{\alpha-1}$
For (II) Think about the function $x\mapsto\lvert x\rvert$
For (III) $0\le\lvert f(x)-f(y)\rvert\le\lvert x-y\rvert^{\alpha}$ let $x$ tend to $y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
What exactly is a number? We've just been learning about complex numbers in class, and I don't really see why they're called numbers.
Originally, a number used to be a means of counting (natural numbers).
Then we extend these numbers to instances of owing other people money for instance (integers).
After that, we consider fractions when we need to split things like 2 pizzas between three people.
We then (skipping the algebraic numbers for the purposes of practicality), we use the real numbers to describe any length; e.g. the length of a diagonal of a unit square.
But this is when our original definition of a number fails to make sense- when we consider complex numbers, which brings me to my main question: what is a rigorous definition of 'number'?
Wikipedia claims that "A number is a mathematical object used to count, label, and measure", but this definition fails to make sense after we extend $\mathbb{R}$.
Does anyone know of any definition of number that can be generalised to complex numbers as well (and even higher order number systems like the quaternions)?
|
There is no concrete meaning to the word number. If you don't think about it, then number has no "concrete meaning", and if you ask around people in the street what is a number, they are likely to come up with either example or unclear definitions.
Number is a mathematical notion which represents quantity. And as all quantities go, numbers have some rudimentary arithmetical structures. This means that anything which can be used to measure some sort of quantity is a number. This goes from natural numbers, to rational numbers, to real, complex, ordinal and cardinal numbers.
Each of those measures a mathematical quantity. Note that I said "mathematical", because we may be interested in measuring quantities which have no representation in the physical world. For example, how many elements are in an infinite set. That is the role of cardinal numbers, that is the quantity they measure.
So any system which has rudimentary notions of addition and/or multiplication can be called "numbers". We don't have to have a physical interpretation for the term "number". This includes the complex numbers, the quaternions, octonions and many, many other systems.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64",
"answer_count": 19,
"answer_id": 11
}
|
Finding a matrix projecting vectors onto column space I can't find $P$; for vectors you can do $P = A(A^{T}A)^{-1}A^T$. But here it's not working because the matrices seem to have dimensions that can't be multiplied. Help!
|
The dimensions of the matrices do match.
Matrix $A$ is 3x2, which matches with $(A^TA)^{-1}$, which is 2x2.
The result $A(A^TA)^{-1}$ is again 3x2.
When multiplying it with $A^T$, which is 2x3, you get a 3x3 matrix for $P$.
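For example, with an assumed sample 3x2 matrix $A$, the dimensions chain exactly as described. A sketch using numpy:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                 # assumed 3x2 matrix, independent columns

P = A @ np.linalg.inv(A.T @ A) @ A.T       # (3x2)(2x2)(2x3) -> 3x3
```

The result is a genuine projection onto the column space: it is symmetric, idempotent, and fixes the columns of $A$.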
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
I need help finding the critical values of this function. So $h(t)=t^{\frac{3}{4}}-7t^{\frac{1}{4}}$, and I need to set $h'(t)=0$. The farthest I've gotten in simplifying is $h'(t)=\frac{3}{4 \sqrt[4]{t}}-\frac{7}{4\sqrt[4]{t^3}}$, and that is as far as I can go. So I'm having a hard time solving $h'(t)=0$. Could someone show me how to properly simplify and find the critical values step by step? I would immensely appreciate it. From the most recent material we have covered in class, it seems that my biggest struggle comes from simplifying completely. Thanks in advance for the help.
|
Hint
To make your life easier, just define $x=t^{\frac{1}{4}}$. So $h=x^3-7x$ and then $$\frac{dh}{dt}=\frac{dh}{dx}\frac{dx}{dt}=(3x^2-7)\frac{dx}{dt}.$$ So $h'(t)=0$ if $x^2=\frac{7}{3}$, that is to say $\sqrt t=\frac{7}{3}$, and then $t=\frac{49}{9}$. (If your course also counts points where $h'$ is undefined as critical points, note that $t=0$ qualifies too.)
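A quick symbolic confirmation of the critical value, a sketch assuming sympy:

```python
from sympy import Rational, diff, solve, symbols

t = symbols("t", positive=True)
h = t**Rational(3, 4) - 7 * t**Rational(1, 4)
crit = solve(diff(h, t), t)          # solves 3 sqrt(t) = 7
```

Using exact `Rational` exponents keeps the answer as the exact fraction $49/9$ rather than a float.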
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
limit of a sum of powers of integers I ran across the following problem in my Advanced Calculus class:
For a fixed positive number $\beta$, find
$$\lim_{n\to \infty} \left[\frac {1^\beta + 2^\beta + \cdots + n^\beta} {n^{\beta + 1}}\right]$$
I tried manipulating the expression inside the limit but didn't come up with anything useful. I also noted that the numerator can be rewritten as
$$\sum_{i=1}^{n}i^\beta$$
which is a well-known formula with a closed form (Faulhaber's formula) but I don't fully understand that formula and we haven't talked about the Bernoulli numbers at all, so I think the author intended for the problem to be solved a different way. Any suggestions on how to tackle this would be much appreciated.
|
$$\frac{1^{\beta}+\cdots+n^{\beta}}{n^{\beta+1}}=\frac{1}{n}\Big(\Big(\frac{1}{n}\Big)^{\beta}+\cdots+\Big(\frac{n}{n}\Big)^{\beta}\Big)\to\int_0^1x^{\beta}dx=\frac{1}{\beta+1}.$$
If you have not covered integrals yet, use the Stolz–Cesàro theorem.
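A quick numerical check of the Riemann-sum limit, a sketch; the values of $\beta$ are arbitrary choices:

```python
def power_sum_ratio(beta, n):
    # (1^beta + 2^beta + ... + n^beta) / n^(beta + 1), a right Riemann sum of x^beta
    return sum(i**beta for i in range(1, n + 1)) / n ** (beta + 1)

approx_cubes = power_sum_ratio(3.0, 100_000)   # should approach 1/4
approx_roots = power_sum_ratio(0.5, 100_000)   # should approach 2/3
```

For $\beta=3$ the exact partial sum $(n(n+1)/2)^2/n^4$ makes the $\frac{1}{\beta+1}$ limit visible by hand as well.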
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
why is it that the conjugate of a+bi is a-bi? If $a+bi$ is an element of a group, then its conjugate is $a-bi$.
How can we prove this by using the fact that the conjugate of an element $g$ of a group is $h$ if there is an $x$ in the group such that $h=xgx^{-1}$?
|
Given a mathematical structure $S$ and a group of automorphisms $G \subseteq \mathrm{Aut}(S)$, we often say that the elements $\sigma(x)$ for $\sigma \in G$ are the conjugates of $x$. And if $\sigma$ is any particular automorphism, we might call $\sigma(x)$ the conjugate of $x$ by $\sigma$.
For the complex numbers, $\sigma(a+bi) = a-bi$ is an automorphism.
For your group theory, there is a standard homomorphism $G \to \mathrm{Aut}(G)$ that sends any element $g$ to the inner automorphism $\sigma_g(x) := gxg^{-1}$. This mapping from elements to inner automorphisms is so natural, that rather than talk about conjugation by $\sigma_g$, we simply call it conjugation by $g$.
More verbosely, the condition you are thinking of is "$h$ is a conjugate of $g$ by an inner automorphism". But the complex conjugation example is not an inner automorphism of any (reasonable) group structure on the complexes, so you can't think of it that way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$\forall$ At the beginning or at the end? I have a set of real numbers $x_1, x_2, \ldots, x_n$ and two functions $f:\mathbb{R} \rightarrow \mathbb{R}$ and $g:\mathbb{R} \rightarrow \mathbb{R}$.
What are the differences between the following statements?
*
*$\forall v \in \{1, \ldots, n\} ~f(x_v) = 0 \vee g(x_v) = 0$
*$f(x_v) = 0 \vee g(x_v) = 0 ~\forall v \in \{1, \ldots, n\} ~$
|
The second one is simply wrong, that's not how things should be written.
It's unambiguous here because there is only one quantifier, but if it had been something like $\exists yP(x,y)\forall x$ you wouldn't know whether $\forall x\exists yP(x,y)$ or $\exists y\forall xP(x,y)$ was meant. In my experience, the second option is what's usually meant. But I've seen it mean the first one too.
I reiterate, the second one is wrong, the fact that it's not really ambiguous (in some sense), doesn't make it right.
I personally find it acceptable to put quantifiers in the end only if you're mixing natural language with mathematical symbols. For example, "$\exists yP(x,y)$ for all $x$" is of the form "$Q(x)$ for all $x$" which should be translated to "$\forall xQ(x)$", that is, "$\forall x\exists yP(x,y)$".
What's the difference between this and the second (candidate) statement in the question? I didn't use '$\forall$', I actually wrote "for all".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Radius of $\sum a_n b_n x^n$ via radii of $\sum a_n x^n$ and $\sum b_n x^n$
Series $\sum a_n x^n$ and $\sum b_n x^n$ have radii of convergence of
1 and 2, respectively. Then radius of convergence R of $\sum a_n b_n x^n$ is
*
*2
*1
*$\geq 1$
*$ \leq 2$
My attempt was to apply ratio test:
$$\lim_{n \rightarrow \infty} \frac{a_{n} b_{n}}{a_{n+1} b_{n+1}} = \lim_{n \rightarrow \infty} \frac{a_{n}}{a_{n+1}} \cdot \lim_{n \rightarrow \infty} \frac{ b_{n}}{ b_{n+1}} = 1 \cdot 2 = 2$$ i.e. option (1).
But the answer is (3). Why?
|
The problem with the attempt is that we cannot, in general, use the ratio test, because we cannot be sure that the terms are different from $0$ (or that the ratio limits exist).
Since the radius of convergence of $\sum_n b_nx^n$ is $2$, the series converges at $x=1$, hence $b_n\to 0$ and in particular $(b_n)$ is bounded. This implies that $\sum_n |a_nb_nx^n|$ converges whenever $|x|<1$, so the radius of convergence of the product series is at least $1$, which is option 3.
Considering $a_n$ and $b_n$ such that $a_{2n}=0$, $a_{2n+1}=1$ and $b_{2n+1}=0$, with $b_{2n}$ chosen so that $\sum_n b_nx^n$ has a radius of convergence of $2$ (e.g. $b_{2n}=2^{-2n}$), the product series $\sum_n a_nb_nx^n$ is identically zero and has infinite radius of convergence; so 1., 2. and 4. do not necessarily hold.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/865944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Quaternion Group as Permutation Group I was recently, for the sake of it, trying to represent Q8, the group of quaternions, as a permutation group. I couldn't figure out how to do it.
So I googled to see if somebody else had put the permutation group on the web, and I came across this:
http://mathworld.wolfram.com/PermutationGroup.html
Not all groups are representable as permutation groups. For example, the quaternion group cannot be represented in terms of permutations.
This strikes me as a very odd statement, because I quickly checked this:
http://mathworld.wolfram.com/CayleysGroupTheorem.html
Every finite group of order n can be represented as a permutation group on n letters
So it seems Q8 should be representable a sa permutation group on 8 letters.
How can these two quotes be reconciled?
|
Follow Cayley's embedding: write down the elements of $Q_8=\{1,-1,i,-i,j,-j,k,-k\}$ as an ordered tuple, and left-multiply the entries of this tuple successively by each element of the group: each such left multiplication yields a permutation. E.g. multiplication from the left by $i$ sends the ordered tuple $(1,-1,i,-i,j,-j,k,-k)$ to $(i,-i,-1,1,k,-k,-j,j)$, which corresponds to the permutation $(1324)(5768)$. Etc. Can you take it from here? So it can be done, and the statement on the WolframMathWorld - Permutation Groups page must be wrong.
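The whole Cayley embedding can be verified mechanically. A sketch; the multiplication table encodes the standard relations $i^2=j^2=k^2=-1$, $ij=k$:

```python
# products of the unit quaternions 1, i, j, k as (sign, unit) pairs
base = {
    ("1", "1"): (1, "1"), ("1", "i"): (1, "i"), ("1", "j"): (1, "j"), ("1", "k"): (1, "k"),
    ("i", "1"): (1, "i"), ("i", "i"): (-1, "1"), ("i", "j"): (1, "k"), ("i", "k"): (-1, "j"),
    ("j", "1"): (1, "j"), ("j", "i"): (-1, "k"), ("j", "j"): (-1, "1"), ("j", "k"): (1, "i"),
    ("k", "1"): (1, "k"), ("k", "i"): (1, "j"), ("k", "j"): (-1, "i"), ("k", "k"): (-1, "1"),
}

def mul(a, b):
    (sa, ua), (sb, ub) = a, b
    s, u = base[(ua, ub)]
    return (sa * sb * s, u)

# the ordered set (1, -1, i, -i, j, -j, k, -k), as in the answer
Q8 = [(1, "1"), (-1, "1"), (1, "i"), (-1, "i"), (1, "j"), (-1, "j"), (1, "k"), (-1, "k")]

def sigma(g):
    # Cayley embedding: 0-based images of left multiplication by g
    return tuple(Q8.index(mul(g, x)) for x in Q8)

def compose(p, q):
    return tuple(p[q[n]] for n in range(len(Q8)))

faithful = len({sigma(g) for g in Q8}) == len(Q8)
homomorphism = all(sigma(mul(g, h)) == compose(sigma(g), sigma(h)) for g in Q8 for h in Q8)
sigma_i = sigma((1, "i"))   # should be (1 3 2 4)(5 7 6 8) in 1-based cycle notation
```

The check confirms both that the map is an injective homomorphism into $S_8$ and that left multiplication by $i$ is exactly the permutation $(1324)(5768)$ named above.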
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
}
|
What are the properties of the set of the Real Numbers without the Integers? This question came up in a lunchtime discussion with coworkers. None of us are professional mathematicians or teachers of math. I apologize for any incorrect math or sloppy terminology.
We were discussing getting from one number to another along the real number line. The challenge is that we removed the integers from the number line.
So can an ant get from 0.5 to 1.5 by crawling along this "broken" line?
Searching the web, we discovered this may be a problem in topology, somehow (possibly) related to something called the "Long Line". This is far outside of any of our knowledge so we would appreciate an explanation at roughly the basic calculus level.
|
The space you describe
$...(-(n+1),-n)...(-2,-1)(-1,0)(0,1)(1,2)...(n,n+1)...$
is homeomorphic to countably infinite ("$\omega$") many copies of $(0,1)$. The Long Line you refer to is made of uncountably many copies of $[0,1)$. It can be thought of as a generalization of the nonnegative reals $[0,\infty)$, because we can view this space as $\omega$-many copies of $[0,1)$:
$[0,1)[1,2)[2,3)...$
Your question about the ant is related to path connectedness (which is equivalent to connectedness here). There is no continuous path from a point in $(n-1,n)$ to a point in $(n,n+1)$ as the subspace $(n-1,n)(n,n+1)$ is not connected, that is, it is the union of two nonempty open subsets. In the reals this would not be a problem as $(n-1,n+1)$ is connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Prove or disprove the statement: $f(n)=\Theta(f(\frac{n}{2}))$ Prove or disprove the statement:
$$f(n)=\Theta(f(\frac{n}{2}))$$
where $f$ is an asymptotically positive function.
I have thought the following:
Let $f(n)=\Theta(f(\frac{n}{2}))$.Then $\exists c_1,c_2>0 \text{ and } n_0 \geq 1 \text{ such that } \forall n \geq n_0:$
$$c_1 f(\frac{n}{2}) \leq f(n) \leq c_2 f(\frac{n}{2})$$
Let $f(n)=2^n$.Then:
$$c_1 2^{\frac{n}{2}} \leq 2^n \leq c_2 2^{\frac{n}{2}} \Rightarrow c_1 \leq 2^{\frac{n}{2}} \leq c_2 $$
$$2^{\frac{n}{2}} \leq c_2 \Rightarrow 2^n \leq c_2^2 \Rightarrow n \leq \lg c_2^2 \text{ Contradiction}$$
Could you tell me if it is right?
|
Just set $f(n) = 2^{n}$, so that $f(\frac{n}{2}) = 2^{n/2}$. Since $\lim_n \frac{2^{n/2}}{2^n} = 0$, we have $2^{n/2}=o(2^n)$, and equivalently $\lim_n \frac{2^{n}}{2^{n/2}} = \infty$, i.e. $2^n=\omega(2^{n/2})$. Hence the right-hand inequality in the $\Theta$-definition can never be fulfilled: $\nexists\, c_2>0$ such that $2^n \le c_2\, 2^{n/2}$ for all $n \ge n_0$, so $f(n)=2^n$ is a counterexample.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
To what extent can I use trigonometric functions and properties with parametric curves? I have some know-how and a library for trigonometry and trigonometric operations, and I would like to know whether I can rely on trigonometry for parametric curves too, and how the trigonometry of the unit circle ($r = 1$) carries over.
My interest is mostly in measuring angles, distances and all the good stuff that trigonometric functions and properties reveal, like atan2 in C/C++, which given 2 points computes the angle between them (clockwise), and what kind of properties I get if I apply the same rules to a parametric curve.
To be honest, by parametric curve I mean Bézier curve, because that's what I have as input like 99.9% of the time (note that by Bézier I mean any kind of Bézier, mostly cubic and quadratic), but I would like to know the general idea about applying trigonometry to a parametric curve (if this is possible).
|
A few examples ...
If the vector $U = (u_x, u_y)$ is the derivative vector of a Bézier curve (or any other curve, actually), then $\text{atan2}(u_y,u_x)$ is the angle between the curve tangent and the $x$-axis.
Once you know to calculate this angle, you can find places where the tangent is horizontal or vertical, which is often useful.
If you calculate this angle at two points that are very close together, you can get an estimate of the curvature of the curve.
You can measure the total angle that the curve turns through, which sometimes indicates that it needs to be split in order for some computation to work reliably.
Since Bézier curves are parameterized using polynomials, trig functions do not come into the picture very much (less so than with circles, for example).
It sounds like your trig function software/expertise is a so-called "solution in search of a problem".
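For example, the tangent-angle computation for a cubic Bézier can be sketched as follows (hypothetical control points; note that C's atan2 takes its arguments as (y, x)):

```python
from math import atan2, isclose, pi

def bezier_tangent_angle(p0, p1, p2, p3, t):
    """Angle (radians, counterclockwise from the x-axis) of a cubic Bezier tangent at t."""
    u = 1.0 - t
    # derivative: B'(t) = 3(1-t)^2 (P1-P0) + 6(1-t)t (P2-P1) + 3t^2 (P3-P2)
    ux = 3*u*u*(p1[0] - p0[0]) + 6*u*t*(p2[0] - p1[0]) + 3*t*t*(p3[0] - p2[0])
    uy = 3*u*u*(p1[1] - p0[1]) + 6*u*t*(p2[1] - p1[1]) + 3*t*t*(p3[1] - p2[1])
    return atan2(uy, ux)        # C convention: atan2(y, x)

# at t = 0 the tangent points along P1 - P0
angle_start = bezier_tangent_angle((0, 0), (1, 1), (2, 1), (3, 0), 0.0)
# a symmetric arch is horizontal at its midpoint
angle_mid = bezier_tangent_angle((0, 0), (1, 1), (2, 1), (3, 0), 0.5)
```

Sampling this angle over $t$ also gives the horizontal/vertical tangents and the curvature estimate mentioned above.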
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Putnam Exam question Prove or disprove: if $x$ and $y$ are real numbers with $y\ge 0$ and $y(y+1)\le (x+1)^2$, then $y(y-1)\le x^2$.
How should I approach this proof? The solution starts with assuming $y\ge 0$ and $y\le 1$, but I'm not sure how to arrive at that second assumption or go from there. Thank you in advance!
|
Draw a diagram. For $y\ge0$, the curve
$$y(y+1)=(x+1)^2$$
is the upper half of a hyperbola with turning point at $(-1,0)$. One of the asymptotes of this hyperbola is $y=x+\frac{1}{2}$. The inequality
$$y(y+1)\le (x+1)^2$$
defines the region below this hyperbola. The hyperbola
$$y(y-1)=x^2$$
has turning point $(0,1)$ and is congruent to the first hyperbola; one of its asymptotes is also $y=x+\frac{1}{2}$; the second hyperbola is shifted upwards along the direction of this asymptote. From a diagram it is therefore clear that the region lying below the first hyperbola also lies below the second. Therefore, if the first inequality is true (and $y\ge0$) then so is the second.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Are the real numbers really uncountable? Consider the following statement
Every real number must have a definition in order to be discussed.
What this statement doesn't specify is how that loose-specific that definition is.
Some examples of definitions include:
"the smallest number that takes minimally 100 syllables to express in English" (which is indeed a paradox)
"the natural number after one" (2)
"the limiting value of the sequence $(1 + 1/n)^n$ as $n$ is moved towards infinity, whereas a limit is defined as ... (epsilon-delta definition) ... whereas addition is defined as ... (breaking down all the way to the basic set theoretic axioms) " (the answer to this being of course e)
Now here is something to consider
The set of all statements using the characters of the English language is a countable set. That means that every possible mathematical expression can eventually be reduced to an expression in English (which could be absurdly long if it is to remain formal), and therefore every mathematical expression, including that of every possible real number that can be discussed, is within this countable set.
The only numbers that are not contained in this countable set are...
That's a poor question to ask since the act of answering it is a violation of the initial assumption that the numbers exist outside of the expressions of our language.
Which brings up an interesting point. If EVERY REAL number that can be discussed is included here, then what exactly is it that is not included?
In other words, why are the real numbers actually considered to be uncountable?
|
The countably infinite set $c{\mathbb R}$ of computable real numbers is difficult to define and to handle. But it is embedded in the uncountable set ${\mathbb R}$, which is easy to handle and is characterised by a small set of reasonable axioms. Beginning with data from $c{\mathbb R}$, we freehandedly argue in the environment ${\mathbb R}$ and arrive at "guaranteed" elements of ${\mathbb R}$ (solutions of equations, such as $x=\tan x$, etc.) that are at once accepted as elements of $c{\mathbb R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 8,
"answer_id": 6
}
|
Is $\tan^2\theta+1=\large\frac{1}{\sin^2\theta}$ a Fundamental Identity Wrote this down during class, and I am wondering if I incorrectly transcribed from the board. Is this identity true? And if so, how?
|
It is not true. For example, let $\theta=\frac{\pi}{6}$ ($30$ degrees). Then $\tan^2\theta+1=\frac{1}{3}+1=\frac{4}{3}$ while $\frac{1}{\sin^2\theta}=4$.
But $\tan^2\theta+1=\frac{1}{\cos^2\theta}$ is true.
To show that the identity $\tan^2\theta+1=\frac{1}{\cos^2\theta}$ holds, recall that $\sin^2\theta+\cos^2\theta=1$ and divide both sides by $\cos^2\theta$.
Maybe what was written on the board is $\cot^2\theta+1=\frac{1}{\sin^2\theta}$. This can be proved in a way very similar to the way we proved the identity $\tan^2\theta+1=\frac{1}{\cos^2\theta}$.
Remark: Are the identities $1+\tan^2\theta=\sec^2\theta$ and $1+\cot^2\theta=\csc^2\theta$ fundamental? I do not think they are. For one thing, they are too close relatives of the Pythagorean Identity $\cos^2\theta+\sin^2\theta=1$, which is more basic, and more generally useful. However, $1+\tan^2\theta=\sec^2\theta$ does come up fairly often, particularly when we are integrating trigonometric functions. And $\tan\theta$ does come up naturally in geometry, and then from $1+\tan^2\theta=\sec^2\theta$ the other trigonometric functions can be calculated.
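A quick numerical check of all three identities at $\theta=\pi/6$, a sketch:

```python
from math import cos, isclose, pi, sin, tan

theta = pi / 6
# the identity as written on the board: 4/3 vs 4, so it fails
claimed = isclose(tan(theta) ** 2 + 1, 1 / sin(theta) ** 2)
# the correct Pythagorean relatives both hold
tan_identity = isclose(tan(theta) ** 2 + 1, 1 / cos(theta) ** 2)
cot_identity = isclose((cos(theta) / sin(theta)) ** 2 + 1, 1 / sin(theta) ** 2)
```

This reproduces exactly the counterexample computed by hand above.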
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Why can one take the power of $e$ directly? The definition of Euler's number $e$ raised to the power $x$, $e^x$, is
$$e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + {...}$$
And of course, we have the number $e$ defined as
$$e = \sum_{n=0}^\infty \frac{1}{n!} = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + {...}$$
or
$$e = \lim_{n\to \infty} (1+\frac{1}{n})^{n}$$
$e$ and $e^x$ here are expressed as sums of infinite series. When one calculates $e^x$, one doesn't go by the definition of $e^x$, but instead calculates the numerical value of $e$ and takes the power of that numerical value directly.
How can one simply take the power of the numerical value of $e$ directly, and be sure the answer is $e^x$? And what about in the context of arbitrary powers of $e$?
p.s There are also different definitions of $e$, like:
$$\int_1^{e^x}{\frac{1}{t}dt}=x$$
$$\frac{d}{dx}e^x = e^x$$
$$\frac{d}{dx}log_e{x}=\frac{1}{x}$$
But they do not address this concern either.
|
When you say calculate the value of $e$ and then take the power $e^x$, what does taking a power $a^x$ mean? By definition, we let
$$
a^x = \exp(x\log(a)),
$$
where $\exp$ is defined as the power series you mentioned and $\log$ is its left inverse. Thus,
$$
e^x = \exp(x\log(e)) = \exp(x\log(\exp(1))) = \exp(x),
$$
since $e$ is defined as $\exp(1)$.
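This consistency can also be observed numerically: computing the value of $e=\exp(1)$ from the series and then powering it directly agrees with evaluating the series at $x$. A sketch:

```python
from math import factorial

def exp_series(x, terms=30):
    # partial sum of the defining series e^x = sum_{n>=0} x^n / n!
    return sum(x**n / factorial(n) for n in range(terms))

e = exp_series(1.0)              # numerical value of e ~ 2.718281828...
direct_power = e ** 3            # "take the power of the numerical value directly"
by_definition = exp_series(3.0)  # the series definition of e^3
```

With 30 terms the truncation error is far below double precision, so the two routes agree to machine accuracy.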
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
The definition of the right regular representation I'm having difficulties understanding the definition of the right regular representation as it appears in Dummit & Foote's Abstract Algebra text. On page 132 it says
Let $\pi:G \to S_G$ be the left regular representation afforded by the action of $G$ on itself by left multiplication. For each $g \in G$ denote the permutation $\pi(g)$ by $\sigma_g$, so that $\sigma_g(x)=gx$ for all $x \in G$. Let $\lambda:G \to S_G$ be the permutation representation afforded by the corresponding right action of $G$ on itself, and for each $h \in G$ denote the permutation $\lambda(h)$ by $\tau_h$. Thus $\tau_h(x)=xh^{-1}$ for all $x \in G$ ($\lambda$ is called the right regular representation of $G$).
I can't make sense of that definition. Earlier, on page 129, the authors explain exactly what the right action corresponding to a given left action is:
For arbitrary group actions it is an easy exercise to check that if we are given a left group action of $G$ on $A$ then the map $A \times G \to A$ defined by $a \cdot g=g^{-1} \cdot a$ is a right group action. Conversely, given a right group action of $G$ on $A$ we can form a left group action by $g \cdot a=a \cdot g^{-1}$. Call these pairs corresponding group actions.
If I try to find the right group action corresponding to $g \cdot a=ga$, I get $a \cdot g:=g^{-1} \cdot a=g^{-1}a$. Hence it seems to me that the definition should be $\tau_h(x)=h^{-1}x$ and not $xh^{-1}$.
Are there any flaws with my reasoning?
Thanks!
|
You seem to worry about these sentences:
"Let $\lambda : G \to S_G$ be the permutation representation afforded by the corresponding
right action of $G$ on itself, and for each $h \in G$ denote the permutation $\lambda(h)$ by
$\tau_h$. Thus $\tau_h(x)=xh^{−1}$ for all $x \in G$ ($\lambda$ is called the right regular representation of $G$)."
What you did at the end of your post is the following:
You considered the left group action of $G$ on itself by left multiplication.
Then you constructed the corresponding right group action and what you get is, correctly, $x \leftharpoonup h = h^{-1} x$.
The authors do something else in the above paragraph.
They first consider the right action of $G$ on itself by right multiplication, i.e. $x \leftharpoonup g := xg$.
But they want to obtain a "permutation representation", which they denote by $\lambda : G \to S_G$.
Usually one wants this map to be a group homomorphism. In order to assure this, one needs to start with a left group action.
What one can do is therefore the following:
Consider the left group action of $G$ on itself corresponding to the above right group action of $G$ on itself by right multiplication.
By your second quote this is given by $g \rightharpoonup x = x \leftharpoonup g^{-1} = x g^{-1}$. And this is exactly what the authors claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Find y-coordinate on a line between two (known) points I am a little bit stuck with a simple task and hope to find some help here, since my days in school are long over and, to be honest, I can't remember very well how to do it.
I have a straight line between two points, let's say (8,20) and (300,50), and I want to figure out what's the y-value of (200,y). Now I think I need to find the slope by using (y2-y1)/(x2-x1). But from there I'm stuck.
Any help is appreciated.
|
Compute the gradient,
$$ m= \frac{y_2-y_1}{x_2-x_1} $$
Then the y-intercept is
$$ c=y_1 - mx_1 $$
or you could do $c=y_2-mx_2$ which would give the same answer.
Then to find $y$ for a particular $x$ (e.g. $x=200$), you just do
$$y = mx+c$$
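Not part of the original answer, but the three steps above can be sketched directly in Python (using the question's example points, which are my only inputs):

```python
def line_through(p1, p2):
    """Return (m, c) so that the line y = m*x + c passes through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # gradient
    c = y1 - m * x1            # y-intercept
    return m, c

# John at (8, 20), Pete at (300, 50); what is y at x = 200?
m, c = line_through((8, 20), (300, 50))
y_at_200 = m * 200 + c
```

For these points the gradient is $30/292$, so `y_at_200` comes out just under $40$.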
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Simplify [1/(x-1) + 1/(x²-1)] / [x-2/(x+1)] Simplify: $$\frac{\frac{1}{x-1} + \frac{1}{x^2-1}}{x-\frac 2{x + 1}}$$
This is what I did.
Step 1: I expanded $x^2-1$ into: $(x-1)(x+1)$. And got: $\frac{x+1}{(x-1)(x+1)} + \frac{1}{(x-1)(x+1)}$
Step 2: I calculated it into: $\frac{x+2}{(x-1)(x+1)}$
Step 3: I multiplied $x-\frac{2}{x+1}$ by $(x-1)$ as follows, and I think this part might be wrong:
*
*$x(x-1) = x^2-x$. Times $x+1$ because that's the denominator =
*$x^3+x^2-x^2-x = x^3-x$.
*After this I added the $+ 2$
*$\frac{x^3-x+2}{(x-1)(x+1)}$
Step 4: I canceled out the denominator $(x-1)(x+1)$ on both sides.
Step 5: And I'm left with: $\frac{x+2}{x^3-x+2}$
Step 6: Removing $(x+2)$ from both sides I got my incorrect answer: $\frac{1}{x^3}$
Please help me. What am I doing wrong?
|
It simplifies things a lot if you just multiply the numerator and denominator by $(x+1)(x-1)$
$$\frac{\frac{1}{x-1} + \frac{1}{x^2-1}}{x-\frac 2{x + 1}}\cdot\frac{\frac{(x+1)(x-1)}{1}}{\frac{(x+1)(x-1)}{1}} = \frac{(x+1)+1}{x(x+1)(x-1)-2(x-1)}=\frac{x+2}{(x-1)(x(x+1)-2)}$$
$$=\frac{x+2}{(x-1)(x^2+x-2)}=\frac{x+2}{(x-1)(x+2)(x-1)}=\frac{1}{(x-1)^2}$$
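A quick numerical sanity check (my addition, not part of the answer) that the original expression really equals $\frac{1}{(x-1)^2}$ wherever both sides are defined:

```python
def original(x):
    return (1/(x - 1) + 1/(x**2 - 1)) / (x - 2/(x + 1))

def simplified(x):
    return 1/(x - 1)**2

# sample points away from the excluded values x = 1, -1, -2
samples = [2.0, 3.5, -0.25, 10.0]
diffs = [abs(original(x) - simplified(x)) for x in samples]
```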
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/866935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Find $\lambda$ if $\int^{\infty}_0 \frac{\log(1+x^2)}{(1+x^2)}dx = \lambda \int^1_0 \frac{\log(1+x)}{(1+x^2)}dx$ Problem : If $\displaystyle\int^\infty_0 \frac{\log(1+x^2)}{(1+x^2)}\,dx = \lambda \int^1_0 \frac{\log(1+x)}{(1+x^2)}\,dx$ then find the value of $\lambda$.
I am not getting any clue how to proceed: if I substitute $1+x^2 = t$, then its derivative is not directly available. Please suggest how to proceed with this. Thanks.
|
Setting $x=\tan y,$
$$I=\int_0^\infty\frac{\ln(1+x^2)}{1+x^2}\ dx=\int_0^{\dfrac\pi2}\ln(\sec^2y)\ dy=-2\int_0^{\dfrac\pi2}\ln(\cos y)\ dy (\text{ as } \cos y\ge0 \text{ here})$$
which is available here : Evaluate $\int_0^{\pi/2}\log\cos(x)\,\mathrm{d}x$
The Right Hand Side can be found here : Evaluate the integral: $\int_{0}^{1} \frac{\ln(x+1)}{x^2+1} \mathrm dx$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
$\lim_{n \rightarrow \infty} n ((n^5 +5n^4)^{1/5} - (n^2 +2n)^{1/2})$ $$\lim_{n \rightarrow \infty} n ((n^5 +5n^4)^{1/5} - (n^2 +2n)^{1/2})$$
Please, help me to find the limit.
|
Use the general fact
$$n(n+a-\sqrt[k]{n^k +akn^{k-1}})\rightarrow \frac{k-1}{2}a^2$$ as $n\to \infty$
Writing the expression as $n\big((n+1)-\sqrt{n^2+2n}\big)-n\big((n+1)-\sqrt[5]{n^5+5n^4}\big)$, with $a=1$, $k=2$ in the first term and $a=1$, $k=5$ in the second, this gives a limit of
$$\frac{2-1}{2}-\frac{5-1}{2}=-\frac{3}{2}$$
For a proof of the above fact, let $A=\sqrt[k]{n^k +akn^{k-1}}$;
then $$n((n+a)-A)=n\frac{(n+a)^k-A^k}{(n+a)^{k-1}+\cdots +A^{k-1}}
=\frac{\binom{k}{2}a^2n^{k-1}+\cdots }{(n+a)^{k-1}+\cdots +A^{k-1}}\rightarrow \frac{\binom{k}{2}a^2}{k}=\frac{k-1}{2}a^2$$
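A numerical probe (my own) supports both the final value and the constant in the general fact: the bracketed quantity tends to $\frac{k-1}{2}a^2$, and the two terms with $a=1$ combine to $-\frac{3}{2}$:

```python
def general(n, k, a):
    # n * ((n + a) - (n^k + a*k*n^(k-1))^(1/k)); numerically -> (k - 1)/2 * a^2
    return n * ((n + a) - (n**k + a * k * n**(k - 1)) ** (1.0 / k))

def expr(n):
    # the expression from the question
    return n * ((n**5 + 5 * n**4) ** 0.2 - (n**2 + 2 * n) ** 0.5)

n = 10**5
fact_check = general(n, 3, 2)   # close to (3 - 1)/2 * 2^2 = 4
limit_check = expr(n)           # close to -3/2
```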
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
finite dimensional integral domain containg $\mathbb C$ Let $ R$ be an integral domain containing $\mathbb C$.
Suppose that $R$ is a finite dimensional $\mathbb C$-vector space . Show that $R=\mathbb C$.
One side $\mathbb C \subset R$ is obvious. What about the other?
Show me the right way. Thanks in advance.
|
Hint: If it's finite dimensional, over $\mathbb{C}$, then it must be a field. This follows from integrality, or more simply just write down a polynomial killing any non-zero element of $R$, and show how you can make an inverse from this equation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$\frac{1}{\sqrt{2\pi}}\int_{-\frac{1}{2}}^0\exp(-x^2/2)dx$ How do we analytically evaluate $J=\frac{1}{\sqrt{2\pi}}\int_{-\frac{1}{2}}^0\exp(-x^2/2)dx$?
This is what I tried:
$$ J^2=\frac{1}{{2\pi}}\int_\frac {-1}{2}^0\int_\frac {-1}{2}^0\exp(-(x^2+y^2)/2)dxdy \\
=\frac{1}{{2\pi}}\int_\pi ^\frac {3\pi}{2}\int_0 ^\frac {1}{2}\exp(-r^2/2)rdrd\theta \\
=\frac{1}{4}(1-\exp(-1/8))$$
From there, I got $J \approx 0.17$.
But this does not tally with the value from the normal distribution table. Am I doing it wrong? Is there a better way to evaluate the integral?
|
You could use
$$e^x=\sum_{i=0}^{\infty}\frac{x^i}{i!}$$
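Concretely (my own sketch, not from the thread): expanding $e^{-x^2/2}=\sum_{i\ge 0}(-1)^i x^{2i}/(2^i\, i!)$ and integrating term by term over $[-\frac12,0]$ reproduces the table value, here cross-checked against `math.erf`:

```python
import math

def J_series(terms=20):
    """(1/sqrt(2*pi)) * integral of exp(-x^2/2) over [-1/2, 0], term by term."""
    total = 0.0
    for i in range(terms):
        # integral of x^(2i) over [-1/2, 0] equals (1/2)^(2i+1) / (2i + 1)
        total += (-1) ** i / (2 ** i * math.factorial(i)) * 0.5 ** (2 * i + 1) / (2 * i + 1)
    return total / math.sqrt(2 * math.pi)

# reference: Phi(0) - Phi(-1/2) = (1/2) * erf(1 / (2*sqrt(2)))
reference = 0.5 * math.erf(0.5 / math.sqrt(2))
```

Both come out near $0.1915$, which matches the normal table; the polar-coordinate step in the question is where things went wrong, since the square $[-\frac12,0]^2$ is not a circular sector.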
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Let $p$ be a prime of the form $3k+2$ that divides $a^2+ab+b^2$ for some integers $a,b$. Prove that $a,b$ are both divisible by $p$. Let $p$ be a prime of the form $3k+2$ that divides $a^2+ab+b^2$ for some integers $a,b$. Prove that $a,b$ are both divisible by $p$.
My attempt:
$p\mid a^2+ab+b^2 \implies p\mid (a-b)(a^2+ab+b^2)\implies p\mid a^3-b^3$
So, we have, $a^{3k}\equiv b^{3k}\mod p$ and by Fermat's Theorem we have, $a^{3k+1}\equiv b^{3k+1}\mod p$ as $p$ is of the form $p=3k+2$.
I do not know what to do next. Please help. Thank you.
|
Suppose that $p=3k+2$ is prime and
$$
\left.p\ \middle|\ a^2+ab+b^2\right.\tag1
$$
then, because $a^3-b^3=(a-b)\left(a^2+ab+b^2\right)$, we have
$$
\left.p\ \middle|\ a^3-b^3\right.\tag2
$$
Case $\boldsymbol{p\nmid a}$
Suppose that $p\nmid a$, then $(2)$ says $p\nmid b$. Furthermore,
$$
\begin{align}
a^3&\equiv b^3&\pmod{p}\tag3\\
a^{3k}&\equiv b^{3k}&\pmod{p}\tag4\\
a^{p-2}&\equiv b^{p-2}&\pmod{p}\tag5\\
a^{-1}&\equiv b^{-1}&\pmod{p}\tag6\\
a&\equiv b&\pmod{p}\tag7\\
\end{align}
$$
Explanation
$(3)$: $\left.p\ \middle|\ a^3-b^3\right.$
$(4)$: modular arithmetic
$(5)$: $3k=p-2$
$(6)$: if $p\nmid x$, then $x^{p-2}\equiv x^{-1}\pmod{p}$
$(7)$: modular arithmetic
Then, because of $(1)$ and $(7)$,
$$
\begin{align}
0
&\equiv a^2+ab+b^2&\pmod{p}\\
&\equiv 3a^2&\pmod{p}\tag8
\end{align}
$$
which, because $p\nmid 3$, implies that $p\mid a$, which contradicts $p\nmid a$ and leaves us with
Case $\boldsymbol{p\mid a}$
If $p\mid a$, then $(2)$ says $p\mid b$ and we get
$$
\left.p^2\ \middle|\ a^2+ab+b^2\right.\tag9
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
Number theory proofs regarding perfect squares How do you prove that $3n^2-1$ is never a perfect square
|
Let $3n^2-1=b^2$ for some $b \in \mathbb{Z}$.
$$3n^2-1 \equiv -1 \equiv 2 \pmod 3$$
$$b=3k \text{ or } b=3k+1 \text{ or } b=3k+2$$
Then:
$$b^2=9k^2 \equiv 0 \pmod 3 \text{ or } b^2=9k^2+6k+1 \equiv 1 \pmod 3 \text{ or } b^2=9k^2+12k+4 \equiv 1 \pmod 3$$
We see that $b^2 \equiv 2 \pmod 3$ is impossible, so the equality $3n^2-1=b^2$ cannot hold.
Therefore, $3n^2-1$ is never a perfect square.
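Both steps of the argument are easy to confirm by machine (a quick check of mine, not part of the proof):

```python
import math

# exhaustive search for a counterexample with n up to 10000
found = [n for n in range(1, 10001)
         if math.isqrt(3 * n * n - 1) ** 2 == 3 * n * n - 1]

# residues: 3n^2 - 1 is always 2 mod 3, while squares are only ever 0 or 1 mod 3
residues = {(3 * n * n - 1) % 3 for n in range(1, 100)}
square_residues = {(b * b) % 3 for b in range(100)}
```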
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find the equation of a line tangent at a specific point I have to find an equation for the line tangent to the graph of
$\large\frac {\sqrt{x}}{6x+5}$
at the point $(4,f(4))$, and write it out in the form of $y=mx+b$
Using the quotient rule I get..
$$\frac{(6x+5)\cdot\frac12 x^{-{\frac12}} - 6\sqrt{x}}{(6x+5)^2}$$
I try plugging in $x=4$ to find the slope and solving for "$b$", but it is not coming out correctly.
I end up with..
$y=\frac{-4.75}{29^2}x+ \frac{2}{29}$
What am I doing wrong?
|
First you can simplify your derivative to: $$\dfrac{\mathrm d}{\mathrm dx}\left\{\dfrac{\sqrt{x}}{6x+5}\right\}=\dfrac{5-6x}{2\sqrt{x}(6x+5)^2},$$ which would make your calculations a little bit simpler. To find $b$ you just use the fact that $(4,f(4))$ lies in your tangent line, and you use the point-slope formula: $$y-y_0=m(x-x_0)\ \ \Rightarrow \ \ y-f(4)=f'(4)(x-4) \ \ \Rightarrow \ \ \ldots $$
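Carrying the computation through numerically (the numbers below are my own, obtained from the simplified derivative):

```python
def f(x):
    return x ** 0.5 / (6 * x + 5)

def fprime(x):
    return (5 - 6 * x) / (2 * x ** 0.5 * (6 * x + 5) ** 2)

x0 = 4
m = fprime(x0)        # slope of the tangent: -19/3364
b = f(x0) - m * x0    # y-intercept: 77/841
# tangent line: y = m*x + b
```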
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove a statement with elements for Set Theory I am stuck on this proof question and I would like some clarification.
Q: $A\subseteq B \iff A\cap B^{\prime} = \emptyset$
I already proved that LHS goes to RHS, but I am confused for the other way around because the textbook answer key gives a weird answer.
It says that for $A\cap B^{\prime}=\emptyset$, let $x$ be an element of $A$. If $x$ isn't an element of $B$, then $x$ is an element of $B^{\prime}$, therefore $x$ is an element of $A\cap B^{\prime} = \emptyset$. Hence $x$ is an element of $B$ and $A$ is a subset of $B$.
I am mainly confused about how they say $x$ isn't an element of $B$ and then all of a sudden say $x$ is an element of $B$...? How could it be both an element and not an element of $B$??
|
The point is that there's a contradiction here; this is how a standard proof by contradiction goes: Start by assuming something (that is hopefully false), and use it to get something you know is false. Then the original statement must be false too.
So to clarify the proof, I'll expand it a bit:
Suppose, intending a contradiction, that $x \in A$ but $x \notin B$. However, this implies that $x \in B^c$, so that $$x \in A \cap B^c = \emptyset$$
This is the desired contradiction, so $x$ couldn't have existed to start with. Thus if $x \in A$, we have $x \in B$ too, so that $A \subseteq B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Geometric series of matrices I am currently reading 'Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach' by J. Hubbard and B. Hubbard. In the first chapter, there is the proposition:
Let A be a square matrix. If $|A|<1$, the series $$S=I+A+A^2+\cdots$$
converges to $(I-A)^{-1}$.
The proof first shows that
$$S_k(I-A)=I-A^{k+1}$$
and similarly
$$(I-A)S_k=I-A^{k+1}$$
where $S_k$ is the sum of the first $k$ terms in the series. Then it shows that
$$|A^{k+1}|\leq|A|^{k+1}$$
and according to the proof in the book, it can be said from this that $\lim_{k\to \infty}A^{k+1}=0$ when $|A|<1$. Consequently, $S(I-A)=I$ and $(I-A)S=I$. Therefore $S=(I-A)^{-1}$.
However I do not understand how it can be said that $\lim_{k\to \infty}A^{k+1}=0$ when $|A|<1$. In addition, all similar propositions I have found on the internet state $|\lambda_i|<1$ as the necessary and sufficient condition. How does this relate to the condition, $|A|<1$?
|
Since $|A|<1$, and since you state that $\left|A^k\right|\le|A|^k$, you clearly have $\lim_{k\to\infty}\left|A^k\right|=0$.
I don't know which norm you are using, but for every norm this also means $\lim_{k\to\infty}A^k=0$.
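A small numerical illustration (mine, with a hand-picked $2\times 2$ example) of the partial sums converging to $(I-A)^{-1}$:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.2, 0.1],
     [0.0, 0.3]]                      # entries small enough that |A| < 1

S = [[1.0, 0.0], [0.0, 1.0]]          # running partial sum, starting at I
term = [[1.0, 0.0], [0.0, 1.0]]       # current power A^k
for _ in range(200):
    term = matmul(term, A)
    for i in range(2):
        for j in range(2):
            S[i][j] += term[i][j]

I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
check = matmul(I_minus_A, S)          # should be (numerically) the identity
```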
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
}
|
Double integral of a rational function Consider the region $D$ given by $1\leq x^2+y^2\leq2\land0\leq y\leq x$. Compute $$\iint_D\frac{xy(x-y)}{x^3+y^3}dxdy$$
Attempt: The region $D$ is part of a ring in the first quadrant below the line $y=x$.
Any hints are welcome.
|
Changing to polar coordinates, $x=\rho \cos\theta$, $y=\rho \sin\theta$, and the Jacobian of the transformation is $J=\rho$. Then:
$$\int_1^\sqrt2 \rho d\rho\int_0^\frac{\pi}{4}\frac{\sin\theta\cos\theta(\cos\theta-\sin\theta)}{\cos^3\theta+\sin^3\theta}d\theta$$
The first integral is immediate and yields $\frac{1}{2}$, so we'll multiply the answer given by the trigonometric integral by one half. For the trigonometric integral, let's use the substitution $u=\cos^3\theta +\sin^3\theta$, $du=(-3\cos^2\theta\sin\theta+3\sin^2\theta\cos\theta)d \theta=-3(\cos^2\theta\sin\theta-\sin^2\theta\cos\theta)d\theta$. The integral becomes:
$$-\frac{1}{3}\int_1^\frac{\sqrt2}{2}\frac{du}{u}=-\frac{1}{3}\log u\bigg|_{u=1}^{u=\frac{\sqrt2}{2}}=-\frac{1}{3} \log \frac{\sqrt 2}{2}$$
Multiplying by one half yields $I=-\frac{1}{6} \log \frac{\sqrt 2}{2}=\frac{\log 2}{12}$.
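As a cross-check (my addition), a crude midpoint Riemann sum over the same polar region lands on $\frac{\log 2}{12} \approx 0.0578$:

```python
import math

def integrand(rho, theta):
    x, y = rho * math.cos(theta), rho * math.sin(theta)
    return x * y * (x - y) / (x ** 3 + y ** 3) * rho   # trailing rho is the Jacobian

N = 300
h_r = (math.sqrt(2) - 1) / N          # rho runs from 1 to sqrt(2)
h_t = (math.pi / 4) / N               # theta runs from 0 to pi/4

total = 0.0
for i in range(N):
    for j in range(N):
        total += integrand(1 + (i + 0.5) * h_r, (j + 0.5) * h_t)
total *= h_r * h_t
```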
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Surface area of sphere $x^2 + y^2 + z^2 = a^2$ cut by cylinder $x^2 + y^2 = ay$, $a>0$ The cylinder is given by the equation $x^2 + (y-\frac{a}{2})^2 = (\frac{a}{2})^2$.
The region of the cylinder is given by the limits $0 \le \theta \le \pi$, $0 \le r \le a\sin \theta$ in polar coordinates.
We need to only calculate the surface from a hemisphere and multiply it by two. By implicit functions we have:
$$A=2\iint\frac{\sqrt{\left(\frac{\partial F}{\partial x}\right)^2 + \left(\frac{\partial F}{\partial y}\right)^2 + \left(\frac{\partial F}{\partial z}\right)^2}}{\left|\frac{\partial F}{\partial z} \right|} dA$$
where $F$ is the equation of the sphere.
Plugging in the expressions and simplifying ($z \ge 0$), we get:
$$A=2a\iint\frac{1}{\sqrt{a^2 - x^2 - y^2}} dxdy$$
Converting to polar coordinates, we have:
$$A = 2a \int_{0}^\pi \int_{0}^{a\sin(\theta)} \frac{r}{\sqrt{a^2 - r^2}} drd\theta$$
Calculating this I get $2\pi a^2$. The answer is $(2\pi - 4)a^2$. Where am I going wrong?
|
Given the equations
$$
x^2+y^2+z^2=a^2,
$$
and
$$
x^2+y^2 = ay,
$$
we obtain
$$
ay + z^2 = a^2.
$$
Using
$$
\begin{eqnarray}
x &=& a \sin(\theta) \cos(\phi),\\
y &=& a \sin(\theta) \sin(\phi),\\
z &=& a \cos(\theta),\\
\end{eqnarray}
$$
we obtain
$$
a^2 \sin(\theta) \sin(\phi) + a^2 \cos^2(\theta) = a^2
\Rightarrow \sin(\theta) = \sin(\phi) \Rightarrow \theta=\phi \vee \theta=\pi-\phi.
$$
For the surface we have
$$
\begin{eqnarray}
\int d\phi \int d\theta \sin(\theta) &=&
\int_0^{\pi/2}d\phi \int_0^\phi d\theta \sin(\theta) +
\int_{\pi/2}^{\pi}d\phi \int_0^{\pi-\phi} d\theta \sin(\theta)\\
&=& 2 \int_0^{\pi/2}d\phi \int_0^\phi d\theta \sin(\theta).
\end{eqnarray}
$$
We can calculate the surface as
$$
\begin{eqnarray}
4 a^2 \int_0^{\pi/2}d\phi \int_0^\phi d\theta \sin(\theta)
&=& 4 a^2 \int_0^{\pi/2}d\phi \Big( 1 - \cos(\phi) \Big)\\
&=& 4 a^2 \Big( \pi/2 - 1 \Big)\\
&=& a^2 \Big( 2\pi - 4 \Big).
\end{eqnarray}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/867961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Ramanujan type sum Let $$f_b(x)=\sum\limits_{a=1 , (a,b)=1}^{b}\frac{1}{1-e^{2\pi i \frac{a}{b}}x}$$
For example:
$$f_6(x) = \frac{1}{1-e^{2\pi i \frac{1}{6}}x}+\frac{1}{1-e^{2\pi i \frac{5}{6}}x}$$
I'm wondering if there is a simple closed for for my function. For instance, if we get rid of the $(a,b)=1$ condition, we see that
$$\sum\limits_{a=1 }^{b}\frac{1}{1-e^{2\pi i \frac{a}{b}}x}=\frac{b}{1-x^b}$$
Which is very nice. And when we add in the coprime condition, we are reducing the case to primitive roots of unity.
|
Define
$$g_b(x) = \sum\limits_{a=1 }^{b}\frac{1}{1-e^{2\pi i \frac{a}{b}}x}.
$$
Then it's clear that
$$
g_b(x) = \sum_{d\mid b} f_d(x)
$$
(where the sum is over all positive integers $d$ dividing $b$). By Mobius inversion, we conclude that
$$
f_b(x) = \sum_{d\mid b} \mu(b/d) g_d(x) = \sum_{d\mid b} \mu(b/d) \frac d{1-x^d}.
$$
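The inversion is easy to test numerically (a sketch of mine; the `mobius` helper is hand-rolled):

```python
import cmath
from math import gcd

def f_direct(b, x):
    """The original sum over 1 <= a <= b with gcd(a, b) = 1."""
    return sum(1 / (1 - cmath.exp(2j * cmath.pi * a / b) * x)
               for a in range(1, b + 1) if gcd(a, b) == 1)

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0              # repeated prime factor: mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def f_mobius(b, x):
    """Sum over d | b of mu(b/d) * d / (1 - x^d)."""
    return sum(mobius(b // d) * d / (1 - x ** d)
               for d in range(1, b + 1) if b % d == 0)
```

For example, $f_6(\frac12)$ comes out as $2$ either way.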
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is this Goldbach-type problem easy to solve? Problem: Given an odd prime number $p$, are there odd prime numbers $q$, $p'$, $q'$ such that $\{p,q\} \neq \{ p',q'\}$ and $p+q = p'+q'$ ?
This comment informs that it's an obvious corollary of the Polignac's conjecture.
This conjecture is still open, and my problem seems much weaker, so that I ask for a proof.
|
It seems likely that your result can be proven using methods like those used to bound the number of exceptions to the Goldbach conjecture. Let $E(x)$ be the number of even integers $\le x$ that cannot be written as a sum of two primes. It is known that $E(x) \in O(x^{1-\delta})$ for some $\delta>0$ (for instance, see references here). (That is, the number of exceptions, if there are any, grows relatively slowly.) Therefore, given a set $A\subseteq \mathbb{N}_{\text{even}}$ that is sufficiently dense (e.g., such that the number of its elements $\le x$ grows much faster than $x^{1-\delta}$), we can guarantee that some member of $A$ is a Goldbach number. In your case, let $A=\{p+q : q {\text{ is an odd prime}}\}$. This is a sufficiently dense set of even numbers: by the prime number theorem, the number of primes grows faster than $x^{1-\delta}$ for any $\delta>0$. So we have this:
For any odd composite integer $p$, there exist primes $q,p',q'$ such that $p+q=p'+q'$.
But to prove your statement when $p$ is prime, we need some member of $A$ to have not one but two distinct Goldbach partitions. Let $E_2(x)$ be the number of even integers $\le x$ that cannot be written as a sum of two distinct pairs of primes. (The only known exceptions are $6$, $8$, and $12$.) A proof that $E_2(x)\in O(x^{1-\delta})$ for some $\delta>0$ would imply your statement. Since the required bound is so weak, and the analogous result for $E(x)$ is long-known, it is plausible that this bound could be proven as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
How to call two subsets that can be deformed into each other? Given a topological space $X$, is there a canonical name for the equivalence relation generated by the following relation on the subsets of $X$?
$A \sim B :\Leftrightarrow \exists \text{ continuous } h:[0,1]\times X \to X,\; h(0, •) = id_X,\; h(1, A) = B$
that is $A$ can be “homotopically deformed” to match $B$.
|
This very closely matches the notion of homotopy. However, homotopy is slightly different, as it represents the notion of a map $h\colon [0,1] \times A \rightarrow X$ such that $h(0,-)$ is the inclusion of $A$ into $X$ and $h(1,-)$ has image $B$. This is basically a deformation of maps into $X$, but doesn't require that all of $X$ be mapped at each step. Technically, you wouldn't say the sets $A$ and $B$ are homotopic, because being homotopic is a property of maps, not sets, so you would say the inclusion of $A$ into $X$ is homotopic to a surjective map $A \rightarrow B$. But many people use the term "homotopic sets" as shorthand for this.
For many familiar spaces, being homotopic is equivalent to the condition you define. However, the notions are not equivalent for all spaces - for example, consider the union of the negative real numbers and the positive rationals. Then the singleton sets $\{0\}$ and $\{-1\}$ are homotopic, but do not satisfy your property.
There's also the notion of ambient isotopy, which does specify a map $h\colon [0,1] \times X \rightarrow X$ that covers the full space, but two sets being ambient isotopic is stronger than what you've stated, as it requires the maps $h(t, X)$ to be homeomorphisms for each fixed $t$. I don't know of a term that only requires them to be continuous.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
about the intersection of nested intervals Consider a sequence $\{a_n\}$ (we have no information about its convergence) and moreover consider a sequence of semi-open intervals of $\mathbb R$:
$$\left[\frac{a_0}{2^0},\frac{a_0+1}{2^0}\right[\supset \left[\frac{a_1}{2^1},\frac{a_1+1}{2^1}\right[\supset\cdots\supset\left[\frac{a_n}{2^n},\frac{a_n+1}{2^n}\right[\supset\cdots$$
Can I conclude that the intersection is only a point? Be aware of the fact that I can't use the Cantor intersection theorem since my intervals are not closed!
Many thanks in advance.
|
As Daniel Fischer noted, the length of the intervals shrinks to $0$ so the intersection is either empty or contains exactly one point.
If you choose $a_n = 0$ for each $n$, the intervals will be nested and their intersection is $\{0\}$. On the other hand, if you choose $a_n$ so that $\{\frac{a_n + 1}{2^n}\}$ is constant, i.e. $a_{n + 1} = 2a_n + 1$, and $a_0 ≥ 0$ then the intervals will be again nested but the intersection will be empty.
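To see the empty case concretely (my own illustration): $a_0 = 0$ with $a_{n+1} = 2a_n + 1$ gives $a_n = 2^n - 1$, so every interval is $[1 - 2^{-n}, 1)$; the right endpoint $1$ is always excluded, and nothing else lies in all of them:

```python
intervals = []
a = 0
for n in range(8):
    intervals.append((a / 2 ** n, (a + 1) / 2 ** n))  # [a_n / 2^n, (a_n + 1) / 2^n)
    a = 2 * a + 1                                     # a_{n+1} = 2*a_n + 1

left_endpoints = [lo for lo, hi in intervals]
right_endpoints = [hi for lo, hi in intervals]
```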
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Help with composite functions? Suppose that $u$ and $w$ are defined as follows:
$u(x) = x^2 + 9$
$w(x) = \sqrt{x + 8}$
What is:
$(u \circ w)(8) = $
$(w \circ u)(8) = $
I missed this in math class. Any help?
|
When the function isn’t too complicated, it may help to express it in words. So, your $u$ is “square the input, and then add $9$, to get your final output”. And your function $w$ is “add $8$ to your input, and then take the square root to get your final output”. And I’m sure you know that $u\circ w$ means to perform the $w$ process first, and then, using your output from $w$ as input to $u$, perform the $u$ process. That is exactly what @olive euler has done.
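Worked out explicitly (my sketch; the arithmetic follows straight from the definitions):

```python
def u(x):
    return x ** 2 + 9

def w(x):
    return (x + 8) ** 0.5

u_of_w = u(w(8))   # w(8) = sqrt(16) = 4, then u(4) = 16 + 9 = 25
w_of_u = w(u(8))   # u(8) = 64 + 9 = 73, then w(73) = sqrt(81) = 9
```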
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Integration by substitution, why do we change the limits? I've highlighted the part I don't understand in red. Why do we change the limits of integration here? What difference does it make?
Source of Quotation: Calculus: Early Transcendentals, 7th Edition, James Stewart
|
The original limits are for the variable $x$, and the new limits are for the new variable $u$. If you can find a primitive function
$$\frac{8}{27}\left(1+\frac{9}{4}x\right)^\frac{3}{2}$$
by observation, then changing the limits is unnecessary.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
}
|
Calculating the central point with minimal average distance to other points I work at an office with colleagues coming from all over the country. Our office is quite centrally located, but some colleagues have to travel quite a lot further than others. I often wondered how I could calculate a central point which minimizes the average traveling distance for each employee (traveling as the crow flies). So if we're ever going to relocate, that would be the ideal spot; we could all save on time and fuel.
Look at this image:
Here we can see that the employees live in different places. With 2 employees, it's rather simple. Let's pick John and Pete. The point halfway between John and Pete would be the perfect spot for them. But how about when we include a 3rd person or an n'th? I'm kind of lost there.
Bonus points for explaining it in a way an average but not expert mathematician understands. :)
|
The average distance is the sum of distances divided by the number of colleagues. Since the latter is fixed, you can as well ask for the point which minimizes the sum. Which, by the way, indicates that for two employees the situation is not as simple as you make it out to be, since any point on the segment connecting them will satisfy the requirement.
Taking the terms “point minimize distance sum” to Wikipedia, you can find that such a point is called the geometric median of your set of employee locations. If you continue reading, you will find that computing it might be tricky, but there has been work on the subject.
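If you want to actually compute it, Weiszfeld's algorithm (described on that Wikipedia page) is a simple iterative scheme; here is a rough sketch of mine, with made-up home coordinates:

```python
import math

def geometric_median(points, iterations=500):
    """Weiszfeld's algorithm: repeatedly take a distance-weighted average."""
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:                         # landed exactly on a data point
                return x, y
            num_x += px / d
            num_y += py / d
            denom += 1 / d
        x, y = num_x / denom, num_y / denom
    return x, y

homes = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0), (5.0, 5.0)]
office = geometric_median(homes)
```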
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Can two distinct formulae (or series of formulae) have the same Gödel number? As I am studying Gödel's incompleteness theorem I am wondering if two distinct formulae or series of formulae can have the same Gödel number? Or the function mapping each formula or series of formulae to a Gödel number is not invertible?
|
Two different formulas, or strings of formulas, cannot have the same index (Gödel number).
In the usual indexing schemes, not every natural number is an index. So the function is one to one but not onto.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
2 dimensional (graphical) topological representation of a sphere On page 37 of this pdf - Surfaces - it gives a graphical representation of a sphere in 2 dimensional topological format. I don't see how the image for a sphere here actually describes a sphere. Does anyone know how this image describes a sphere?
|
It might be easier to think about it the other way round. Imagine you have a sphere and you cut from the north pole down to the south pole and then pull it apart and try to lay it flat. If the material was suitably stretchy then it would be able to lay flat on a table and would be (topologically) a disk because there are no 'holes'.
Now, run this in reverse. Start with a disk, bend it a bit and then 'zip up' the boundary to get a sphere.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/868964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is it possible to describe $Q(x)$ as the extension field of $R$ freely generated by $\{x\}$? Given an integral domain $R$, the polynomial ring $R[x]$ can be defined as the commutative $R$-algebra freely generated by $\{x\}$. Also, let $Q$ denote the field of fractions associated to $R$. Then we can describe $Q(x)$ as the field of fractions of $R[x]$. But suppose we want to describe $Q(x)$ more directly. By an extension field of $R$, let us mean an $R$-algebra that just happens to be a field.
Question. Is it possible to describe $Q(x)$ as the extension field of $R$ freely generated by $\{x\}$?
What I mean is that okay, we've got a forgetful functor "extension fields of $R$" $\rightarrow \mathbf{Set}$. This should have a left-adjoint, call it $L$. Then I'm thinking that $Q(x)$ can be described as $L(\{x\})$. Is this correct? If not, what is the correct way to describe $Q(x)$?
|
In the category of extension fields of $R$, the morphisms $Q(x) \rightarrow S$ correspond bijectively to the elements of $S$ that are transcendental (over $Q$). So if you define $\mathcal{R}$ to be the functor that sends an extension field of $R$ to the set of elements that are transcendental (over $Q$), then its left adjoint $\mathcal{L}$ has the property that $\mathcal{L}(\{x\}) = Q(x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
A function $f$ such that $f(x)$ increases from $0$ to $1$ when $x$ increases from $0$ to infinity? I am looking for a function f(x) with a value range of [0,1].
f(x) should increase from 0 to 1 while its parameter x increases from 0 to +infinity.
f(x) increases very fast when x is small, then very slowly, and eventually approaches 1 as x goes to infinity.
Here is a figure. The green curve is what I am looking for:
Thanks.
It would be great if I can adjust the slope of the increase. Although this is not a compulsory requirement.
|
Simple version:
$$f(x)=1-\mathrm e^{-a\sqrt{x}}\qquad (a\gt0)$$
Slightly more elaborate version:$$f(x)=1-\mathrm e^{-a\sqrt{x}-bx}\qquad (a\gt0,\ b\geqslant0)$$
Every such function fits every requisite in the question, including the infinite slope at $0$. The parameter $a$ can help to tune the increase near $0$. The parameter $b$ can help to tune the increase near $+\infty$. To get even quicker convergence to $1$ when $x\to+\infty$, one can replace $-bx$ in the exponent by $-bx^n$ for some $b\gt0$ and $n\gt1$.
Once one understands the principle, a host of other solutions springs to mind.
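Numerically (my quick check of the stated properties: starts at $0$, rises fast, then creeps toward $1$):

```python
import math

def f(x, a=1.0, b=0.0):
    return 1.0 - math.exp(-a * math.sqrt(x) - b * x)

xs = [0.0, 0.1, 1.0, 10.0, 100.0]
values = [f(x) for x in xs]   # 0, then roughly 0.27, 0.63, 0.96, 0.99995
```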
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 8,
"answer_id": 3
}
|
Composition of an analytic function with a continuous function that is analytic If $f$ is a continuous function such that $g(z)=\sin{f(z)}$ is analytic, then is $f$ analytic?
I know we can take $f(z)=\bar{z}$ then $f$ is continuous but $g$ is not analytic. Same holds if we take $f(x+iy)=x$.
I tried letting $f(z)=u+iv$ then expanding $g(z)=\sin u\cosh v+i\cos u\sinh v$ taking the partial derivatives and using the Cauchy Riemann equations. That seems like a messy way to go.
|
For points where $f(z) \neq \left(k+\frac{1}{2}\right)\pi$, the sine is locally biholomorphic, and
$$f(z) = \arcsin \left(\sin f(z)\right)$$
is holomorphic in a neighbourhood of $z$ as a composition of two holomorphic functions.
It remains to deal with the points where $f(z) = \left(k+\frac{1}{2}\right)\pi$ for some $k \in \mathbb{Z}$.
Use the identity theorem (on $\sin\circ f$, since we don't know yet that $f$ is holomorphic everywhere) to deduce that (on each component of its domain) either $f$ is constant (follows from the continuity of $f$ if $\sin \circ f$ is constant), or these points are isolated. In the latter case, the Riemann removable singularity theorem tells you that $f$ is holomorphic also in these points.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
How to solve this elementary induction proof: $\frac{1}{1^2}+ \cdots+\frac{1}{n^2}\le\ 2-\frac{1}{n}$? This is a seemingly simple induction question that has me confused, perhaps about my understanding of how to apply induction.
the question;
$$\frac{1}{1^2}+ \cdots+\frac{1}{n^2}\ \le\ 2-\frac{1}{n},\ \forall\ n \ge1.$$
this true for $n=1$, so assume the expression is true for $n\le k$. which produces the expression,
$$\frac{1}{1^2} + \cdots + \frac{1}{k^2} \le\ 2-\frac{1}{k}.$$ now to show the expression is true for $k+1$,
$$\frac{1}{1^2}+\cdots+ \frac{1}{k^2} + \frac{1}{(k+1)^2} \le\ 2-\frac{1}{k}+\frac{1}{(k+1)^2}.$$
this is the part I am troubled by, because after some mathemagical algebraic massaging, I should be able to equate,
$$2-\frac{1}{k}+\frac{1}{(k+1)^2}=2-\frac{1}{(k+1)},$$
which would prove the expression is true for $k+1$, and I'd be done, right? But these two are not equal even for $k=1$: setting $k=1$ you wind up with $\frac{5}{4}=\frac{3}{2}$. So somewhere I am slipping up, and I'm not sure how else to show this. If someone has some insight into this induction that I'm not getting, thanks.
|
What you really need is the inequality $2 - \frac{1}{k} + \frac{1}{(k+1)^2} \leq 2 - \frac{1}{k+1}$, not an equality. It is equivalent to $\frac{1}{(k+1)^2} \leq \frac{1}{k} - \frac{1}{k+1} = \frac{1}{k(k+1)}$, which holds because $k(k+1) \leq (k+1)^2$.
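A quick numerical check of this inequality (a Python sketch) for the first thousand values of $k$:

```python
def step_ok(k):
    # The induction step needs 2 - 1/k + 1/(k+1)^2 <= 2 - 1/(k+1)
    return 2 - 1/k + 1/(k + 1)**2 <= 2 - 1/(k + 1)

checks = [step_ok(k) for k in range(1, 1000)]
```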
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Show that $\lim_{a\to 0^+} \int \frac{e^{-|x|/a}}{2a}f(x)dx=f(0)$ I'm trying to show $\displaystyle \lim_{a\to 0^+} \int \frac{e^{-|x|/a}}{2a}f(x)dx=f(0)$ where $f$ is continuous with compact support.
I've already shown that for any $a>0, \displaystyle\int\frac{e^{-|x|/a}}{2a}dx=1$ and that for any fixed $\delta>0$, $\displaystyle\lim_{a\to 0^+}\int_{|x|>\delta}\frac{e^{-|x|/a}}{2a}dx=0$.
Since the limit is supposed to be $f(0)$, I assume I'll need to use some sort of integration by parts/Fundamental Theorem of Calculus. None of the convergence theorems seem helpful since, at $x=0$, $\displaystyle\lim_{a\to 0^+}\frac{e^{0}}{2a}=\infty$. Is there a theorem or technique that I should consider for this? Thank you
|
Do you know about mollifiers? If you do, the last thing to see is that
$$\frac{e^{-|x|/a}}{2a} \ge 0 \qquad \forall a > 0$$
And then notice that
$$\int \underbrace{\frac{e^{-|x|/a}}{2a}}_{=: \phi_a} f(x) \mathrm dx = (\phi_a \ast f)(0)$$
Then use that $(\phi_a)_{a>0}$ is a mollifier family and use
$$\lim_{a\searrow 0} (\phi_a \ast g)(x) = g(x)$$
in all points of continuity (this works even for non-compactly supported $g$).
Using this result, the conditions on $f$ can be significantly weakened to $f\in L^1(\mathbb R)$ and $f$ is continuous at $0$.
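To see this concretely, here is a hedged numerical sketch (the test function $f(x)=1/(1+x^2)$, the truncation to $[-1,1]$, and the grid size are my own choices): for small $a$, the integral $\int \phi_a f\,\mathrm dx$ is close to $f(0)=1$.

```python
import math

def smoothed_value(a, f, lo=-1.0, hi=1.0, n=100000):
    # Midpoint Riemann sum of \int phi_a(x) f(x) dx with
    # phi_a(x) = exp(-|x|/a)/(2a); for small a the mass
    # outside [lo, hi] is negligible
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += math.exp(-abs(x) / a) / (2 * a) * f(x) * h
    return total

f = lambda x: 1.0 / (1.0 + x * x)  # continuous, f(0) = 1
approx = smoothed_value(0.01, f)
```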
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the GCD of two Gaussian integers How do you calculate the GCD of $6-17i$ and $18+i$ in $\Bbb Z [i]$?
|
You can use the Euclidean algorithm in $\Bbb Z[i]$: divide one number by the other to obtain a quotient and remainder, then repeat with the previous divisor and remainder, and so on. The quotient must of course be an element of $\Bbb Z[i]$, and the remainder must have norm less than that of the divisor: to ensure this we'll take the quotient to be the exact quotient, with real and imaginary parts rounded to the nearest integer. In this case we have
$$\frac{18+i}{6-17i}=\frac{91+312i}{325}$$
and so we take the first quotient to be $i$. Thus
$$\eqalign{
18+i&=(i)(6-17i)+(1-5i)\cr
6-17i&=(\cdots)(1-5i)+(\cdots)\cr
1-5i&=(\cdots)(\cdots)+(\cdots)\cr}$$
and so on. See if you can finish this.
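The whole computation can be automated. Here is a small Python sketch of the Euclidean algorithm in $\Bbb Z[i]$, representing $x+yi$ as a pair `(x, y)` and rounding the exact quotient componentwise to the nearest integer (the answer is only determined up to units, so the test below checks the norm of the result):

```python
def gdiv(a, b):
    # Nearest-Gaussian-integer quotient of a/b
    ax, ay = a
    bx, by = b
    n = bx * bx + by * by  # norm of b
    # a * conj(b) / N(b), rounded componentwise
    qx = round((ax * bx + ay * by) / n)
    qy = round((ay * bx - ax * by) / n)
    return (qx, qy)

def gmul(a, b):
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def gsub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def ggcd(a, b):
    # Euclidean algorithm: replace (a, b) by (b, a - q*b) until b = 0
    while b != (0, 0):
        q = gdiv(a, b)
        a, b = b, gsub(a, gmul(q, b))
    return a

g = ggcd((18, 1), (6, -17))       # gcd of 18 + i and 6 - 17i
norm = g[0] ** 2 + g[1] ** 2      # should be 13 (an associate of 3 - 2i)
```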
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that some set is a Borel set If $B$ is a Borel set, is $B+c$ a Borel set for any constant $c$? I know that it is not possible to characterize a general Borel set explicitly.
|
Yes. You know that the set $B+c$ is the inverse image of the measurable set, $B$, under the continuous function: $f(x)=x-c$, and since continuous functions are measurable, and the inverse image of a measurable set is measurable, $B+c$ is also measurable.
Note: there is nothing particularly special about Lebesgue measure here other than it being a Haar measure. I only use that the measure is a Haar measure on a locally compact group: in any such case translation is a continuous operation, so you can get the same result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How find the maximum of the $x^3_{1}+x^3_{2}+x^3_{3}-x_{1}x_{2}x_{3}$ Let $$0\le x_{i}\le i,\, i=1,2,3$$ be real numbers. Find the maximum of the expression
$$x^3_{1}+x^3_{2}+x^3_{3}-x_{1}x_{2}x_{3}$$
My idea: I guess
$$x^3_{1}+x^3_{2}+x^3_{3}-x_{1}x_{2}x_{3}\le 0^3+2^3+3^3-0\cdot2\cdot 3=35$$
But I can't prove it. Can you help ?
This problem is a special case of a more general $n$-variable problem.
|
Hint:
As the objective function is convex and continuous, and the domain of interest is compact and convex, so we must have maximum when each $x_i \in \{0, i\}$. This cuts down the problem (even the general $n$ variable case) to checking a few extreme cases, many of which are trivial...
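A brute-force check over the extreme points (a quick Python sketch) confirms the guessed maximum of $35$:

```python
from itertools import product

def objective(x1, x2, x3):
    return x1**3 + x2**3 + x3**3 - x1 * x2 * x3

# By convexity the maximum is attained with each x_i in {0, i}
candidates = [objective(*p) for p in product([0, 1], [0, 2], [0, 3])]
best = max(candidates)
```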
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
How to prove that $A^2=0$ if $AB-BA=A$
Let $A$ be a $2\times 2$ matrix and let $B$ be a square matrix of the same order, such that
$$AB-BA=A.$$ Show that
$$A^2=0.$$
My idea: since $$Tr(AB)=Tr(BA),$$ we get
$$Tr(A)=Tr(AB-BA)=Tr(AB)-Tr(BA)=0.$$
Question 2:
If $A$ is an $n\times n$ matrix and $B$ is a square matrix of the same order such that
$$AB-BA=A,$$
do we also have
$$A^2=0?$$
I can't continue from here. Thank you.
|
An alternative geometric approach:
*
*We have $A\in \mathfrak{sl}_2$ and it can be assumed that $B\in \mathfrak{sl}_2$ as well. Hence $A$, $B$ can be seen as vectors $\vec{a},\vec{b}\in \mathbb{C}^3$. In this picture, $[A,B]\sim \vec{a}\wedge \vec{b}$ and $\operatorname{Tr}AB\sim \vec{a}\cdot\vec{b}.$
*Now since $\vec{a}\wedge \vec{b}\sim \vec{a}$,
taking the scalar product with $\vec{a}$, we get $\operatorname{Tr}A^2\sim\vec{a}\cdot \vec{a}=0 $. As for $A\in\mathfrak{sl}_2$ one has $A^2=\frac{\operatorname{Tr}A^2}{2} I$, the result follows.
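A concrete numerical illustration (this specific pair is my own example, not part of the argument above): with $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ and $B=\begin{pmatrix}0&0\\0&1\end{pmatrix}$ one checks $AB-BA=A$ and $A^2=0$.

```python
def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [0, 1]]

commutator = mat_sub(mat_mul(A, B), mat_mul(B, A))  # should equal A
A_squared = mat_mul(A, A)                           # should be 0
```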
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Is it possible to find a square root using only rational numbers and elementary arithmetic operators? Suppose I have a number $a$.
How can I find its square root using only +, -, /, * and rational numbers?
If it is impossible, how do I prove it?
|
If you allow an infinite number of operations, then you can use some iterative algorithm.
One easy example is root searching via Newton's method. Here we do the iteration
$$x_{n+1} = \frac{a + x_n^2}{2x_n},$$
which eventually converges to $\sqrt{a}$ if $a$ and $x_0$ are positive.
See https://en.wikipedia.org/wiki/Methods_of_computing_square_roots for other methods.
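Here is a minimal Python sketch of this iteration. Using exact rational arithmetic (`fractions.Fraction`) emphasizes that each step really only needs +, * and / on rationals; the starting point $x_0=a$ is an arbitrary positive choice.

```python
from fractions import Fraction

def newton_sqrt(a, iterations=8):
    # x_{n+1} = (a + x_n^2) / (2 x_n), which converges to sqrt(a)
    # for positive rational a and positive starting point
    x = Fraction(a)
    for _ in range(iterations):
        x = (a + x * x) / (2 * x)
    return x

approx = newton_sqrt(Fraction(2))  # a rational very close to sqrt(2)
```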
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
}
|
Prove that $\lim_{x \to -1^-}\frac{5}{(x+1)^3} = -\infty $ using the $\delta M$ definition of infinite limits I am posting this for you guys to let me know whether it's wrong and/or give me any advice regarding the proof.
Thank you.
Given $ M < 0 $ we need $\delta > 0$ such that $ -1 -\delta< x < -1 \Rightarrow \frac{5}{(x+1)^3} < M$
Now $ -1 -\delta< x < -1 \Leftrightarrow -\delta < x + 1 < 0$
Now $ \frac{5}{(x+1)^3} < M \Leftrightarrow x + 1 > \sqrt[3]{\frac{5}{M}} $
So take $\delta = -\sqrt[3]{\frac{5}{M}}$
Then $-1 -\delta< x < -1 \Leftrightarrow -1 - (-\sqrt[3]{\frac{5}{M}})< x < -1 \Rightarrow \frac{5}{(x+1)^3} < M$
So $\lim_{x \to -1^-}\frac{5}{(x+1)^3} = -\infty $
|
Edit: The proof now starts out correctly. There are still issues.
For example, the line
$ \frac{5}{(x+1)^3} < M \Rightarrow$$x + 1 > \sqrt[3]{\frac{5}{M}}$
though correct, has the implication running in the wrong direction. We want to show that if $\delta$ is chosen appropriately, then $\frac{5}{(1+x)^3}$ is $\lt M$. That is not the same thing as showing that if $\frac{5}{(1+x)^3}\lt M$ then $\dots$.
Obsoleted answer below:
The argument starts out incorrectly. You need to show that for any $M$ there exists a $\delta$ such that $\dots$.
Remark: Your handling of inequalities indicates that you can undoubtedly modify things to give a correct proof. But if the solution as it stands were being graded, it is likely that the grader would look at the first line, put an X through it, and go on to the next question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/869966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
An integral inequality with inverse Let $f:[0,1]\to [0,1]$ be a non-decreasing concave function, such that $f(0)=0,f(1)=1$. Prove or disprove that :
$$ \int_{0}^{1}(f(x)f^{-1}(x))^2\,\mathrm{d}x\ge \frac{1}{12}$$
A friend posed this to me. He hopes to have solved it, but he is not sure. Can someone help? Thanks.
|
I am not so sure this works. Take $f(x)=nx$ for $x \in [0;\frac{1}{n+1}]$ and $f(x)=\frac{n}{n+1} + (x-\frac{1}{n+1})\frac{1-\frac{n}{n+1}}{1-\frac{1}{n+1}} = \frac{n}{n+1} + \frac{1}{n}(x-\frac{1}{n+1})$ on $]\frac{1}{n+1},1]$.
Then $f^{-1}(y)= \frac{y}{n}$ if $y \in [0,\frac{n}{n+1}]$ and $f^{-1}(y)= n(y-\frac{n}{n+1})+\frac{1}{n+1}$ if $y \in [\frac{n}{n+1},1]$
Then, noticing that $f(x)\leq 1$ and $f^{-1}(x)\leq 1$ for all $x$, we can write:
$\int_0^1 (f f^{-1})^2\leq \int_0^1 f^{-1}(x)^2\,dx \leq \int_0^{\frac{n}{n+1}}\frac{x^2}{n^2}\,dx + \int_{\frac{n}{n+1}}^1 1\,dx \leq \frac{1}{n^2} + \frac{1}{n+1},$ which converges to $0$, and in particular drops below $\frac{1}{12}$ for large $n$.
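A numerical evaluation of this counterexample (the choice $n=100$ and the quadrature grid are my own) confirms the integral drops well below $1/12 \approx 0.083$:

```python
def make_f(n):
    # Piecewise-linear concave f: slope n on [0, 1/(n+1)], slope 1/n after;
    # f(0) = 0, f(1) = 1; finv is its inverse as given above
    def f(x):
        if x <= 1 / (n + 1):
            return n * x
        return n / (n + 1) + (x - 1 / (n + 1)) / n

    def finv(y):
        if y <= n / (n + 1):
            return y / n
        return n * (y - n / (n + 1)) + 1 / (n + 1)

    return f, finv

def integral(n, steps=100000):
    # Midpoint Riemann sum of (f(x) * finv(x))^2 over [0, 1]
    f, finv = make_f(n)
    h = 1.0 / steps
    return sum((f((i + 0.5) * h) * finv((i + 0.5) * h)) ** 2 * h
               for i in range(steps))

value = integral(100)
```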
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
}
|
Find $\delta >0$ such that $\int_E |f| d\mu < \epsilon$ whenever $\mu(E)<\delta$ I am studying for a qualifying exam, and I am struggling with this problem since $f$ is not necessarily integrable.
Let $(X,\Sigma, \mu)$ be a measure space and let
$$\mathcal{L}(\mu) = \{ \text{ measurable } f \quad| \quad \chi_Ef \in L^1(\mu) \text{ whenever } \mu(E)<\infty\}.$$
Show that for any $f\in \mathcal{L}(\mu)$ and any $\epsilon >0$ there is $\delta >0$ such that $\int_E|f| d\mu < \epsilon$ whenever $\mu(E)< \delta$.
A technique I've used in other similar problems is to define $A_n = \{ x\in X \, | \, 1/n \leq |f(x)| \leq n \}$ and let $A = \displaystyle \bigcup_{n=1}^\infty A_n$. We can also define $A_0 = \{ x\in X \,|\, f(x) = 0\}$ and $A_\infty= \{x\in X\, | \, |f(x)| = \infty\}$. The part where I'm stuck is now that
$$\int_X|f|d\mu = \int_{A_0} |f|d\mu + \int_{A} |f| d\mu + \int_{A_\infty} |f|d\mu$$
where the first term on the right is zero, and I want the last term on the right to be zero.
Is there another way to go about this problem? Explanations are helpful to me since I'm studying and I don't want to confuse myself further. Thanks!
|
Suppose not: there is $\varepsilon_0\gt 0$ such that for each positive $\delta$, there is a measurable set $A$ such that $$\mu(A)\lt \delta\quad \mbox{ and }\quad \int_A|f|\mathrm d\mu\gt\varepsilon_0.$$
In particular, for each integer $k$ and $\delta:=2^{-k}$, there exists $A_k$ of measure smaller than $2^{-k}$ for which $\int_{A_k}|f|\mathrm d\mu\gt\varepsilon$. Define $A:=\bigcup_k A_k$ (a set of finite measure). We have
$$\varepsilon_0\lt \int_{A_k}|f|\mathrm d\mu\leqslant n\mu(A_k)+\int \chi_{A_k}\chi_{\{|f|\gt n\}}|f|\mathrm d\mu\\\leqslant n2^{-k}+\int \chi_A\chi_{\{|f|\gt n\}}|f|\mathrm d\mu,$$
hence
$$\varepsilon_0\leqslant\int \chi_A\chi_{\{|f|\gt n\}}|f|\mathrm d\mu,$$
and by monotone convergence, $\lim_{n\to\infty}\int \chi_A\chi_{\{|f|\gt n\}}|f|\mathrm d\mu=0$, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Proving that $nb \text{ (mod m)}$ reaches all values $\{0 \dots (m-1)\}$ if $n$ and $m$ are relatively prime I am trying to prove the frobenius coin problem which requires me to prove the following lemma:
If $n$ and $m$ are relatively prime and $b$ is any integer, then the set of all possible values of $$nb \text{ (mod m)} \text{ will be } \{0, 1, 2, \dots, (m-1)\}$$
I have so far that the set of all values of
$b \text{ (mod m)}$ for any $b$ will be $\{0, \dots, (m-1)\}$, and that
$$nb \text{ (mod m)} = [n \text{ (mod m)}\cdot b \text{ (mod m)]}\text{ (mod m)}$$
Let $n \text{ (mod m)} = p$. Then I need to prove that when the elements of the set $\{0, p, 2p, 3p, \dots, (m-1)p\}$ are reduced modulo $m$, the result is just $$\{0, 1, 2, \dots, (m-1)\}$$ However, I am unsure how to do this. Can someone give me a hint?
|
By Euclid, $\,(m,n)=1,\ m\mid nx\,\Rightarrow\, m\mid x,\,$ i.e. $\,nx\equiv 0\,\Rightarrow x\equiv 0\,$ in $\,\Bbb Z/m,$
i.e. the linear map $\,x\mapsto nx\,$ has trivial kernel, thus it is $\,\color{#c00}1$-$\color{#c00}1\,$ on $\,\Bbb Z/m.$
But $\,\Bbb Z/m\,$ is finite, thus, by $\rm\color{#c00}{pigeonhole}$, the map is $\color{#c00}{\rm onto}$, proving the claim.
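The claim can be checked exhaustively for small moduli (a Python sketch):

```python
from math import gcd

def multiples_mod(n, m):
    # The set { n*b mod m : b = 0, ..., m-1 }
    return {(n * b) % m for b in range(m)}

# Whenever gcd(n, m) == 1 the map b -> n*b mod m hits every residue
results = [multiples_mod(n, m) == set(range(m))
           for m in range(1, 30)
           for n in range(1, 100)
           if gcd(n, m) == 1]
```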
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Proof by contradiction using counterexample Why can't we use one counterexample as the contradiction to the contradicting statement?
Example:
Let A be the statement $a \to b$.
We can prove A is not true by finding a counterexample.
Now, in another space and time, let B be the new statement $a \to \lnot b$.
Why can't we prove B is not true by finding a counterexample,
thus obtaining a contradiction and concluding that A is true?
|
You can disprove the statement $a \implies \sim b$ using a counterexample if you can find one. However, this does not prove the statement $a \implies b$. This is because the negation of $a \implies \sim b$ isn't $a \implies b$.
Recall that the negation of an implication is not an implication. $\sim (a \implies b) \equiv a \wedge \sim b$. In words, this says that the negation of "a implies b" is equivalent to "a and not b".
You are right that if a statement $B$ is the negation of a statement $A$, then $A$ being false implies $B$ is true, and $B$ being false implies $A$ is true. The problem is, the negation of an implication is not an implication.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Showing that Gaussians are eigenfunctions of the Fourier transform I'm having a bit of trouble on this problem:
I've tried to evaluate the integral directly (using the trick from multivariable calculus where you "square" the integral and convert to polar coordinates), but that hasn't gotten me anywhere. Does anyone have a suggestion on where to start?
Just for context, this is for complex analysis.
|
You have $e^{-x^2/2} e^{-itx}$. The exponent is
$$
\begin{align}
-\frac{x^2}{2} - itx & = -\frac 1 2 (x^2 + 2itx) \\[10pt]
& = -\frac{1}2 \left((x^2+2itx+ (it)^2) - (it)^2\right) \tag{completing the square} \\[10pt]
& = -\frac 1 2 \left( (x+it)^2 + t^2\right)
\end{align}
$$
So you're integrating
$$
e^{-(1/2)(x+it)^2} \cdot \underbrace{{}\ \ e^{-t^2/2}\ \ {}}_{\text{no $x$ appears here}}
$$
The factor in which no $x$ appears can be pulled out of the integral when the variable with respect to which you're integrating is $x$.
Then the integral is $\text{constant}\cdot\int_{-\infty}^\infty e^{-(1/2)(x - \text{something})^2} \,dx$. The last thing to do is show that the value of the integral doesn't depend on the "something". You can write $u = (x-\text{something})$ and $du=dx$, etc.
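Carrying the computation to its end gives $\int_{-\infty}^{\infty} e^{-x^2/2}e^{-itx}\,dx=\sqrt{2\pi}\,e^{-t^2/2}$, which is why the Gaussian is an eigenfunction. This can be confirmed numerically (a Python sketch; the truncation to $[-20,20]$, the grid size, and the test value of $t$ are my choices):

```python
import cmath
import math

def gaussian_ft(t, lo=-20.0, hi=20.0, n=100000):
    # Midpoint rule for \int exp(-x^2/2) exp(-i t x) dx;
    # the tail beyond [lo, hi] is negligible
    h = (hi - lo) / n
    total = 0.0 + 0.0j
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += cmath.exp(-x * x / 2 - 1j * t * x) * h
    return total

t = 1.3
numeric = gaussian_ft(t)
exact = math.sqrt(2 * math.pi) * math.exp(-t * t / 2)
```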
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Parametrizing curve for complex analysis integral I'm trying to show that
$$
\int_{|z-z_0| = R} (z-z_0)^m \, dz = \begin{cases}0, & m \neq -1 \\ 2\pi i, & m =- 1. \end{cases}
$$
Here's my attempt at a solution:
We parametrize the curve at $z(\theta) = z_0 + Re^{i\theta}$ and therefore $dz = iRe^{i\theta} \, d\theta$. Substituting, we have
$$
\int_0^{2\pi} R^me^{im\theta} \, iRe^{i\theta} \, d\theta.
$$
However, I feel that this is wrong since there will be a dependence on $R$. Anyone have a suggestion?
|
For $m\geqslant 0$, $f(z) = (z - z_{0})^{m}$ is analytic on $\mathbb{C}$. Thus the line integral is $0$ (by the fundamental theorem of calculus for line integrals, since $f$ has the antiderivative $(z-z_0)^{m+1}/(m+1)$).
For $m = -1$, $f(z) = 1$ is analytic on the closed disc $\rvert z - z_{0}\rvert\leqslant R$, so by Cauchy's integral formula $$1 = \frac{1}{2\pi i}\int_{\rvert z - z_{0}\rvert = R}\frac{1}{z - z_{0}}\,\mathrm{d}z.$$ Therefore the integral is $2\pi i$.
For $m < -1$, apply Cauchy's integral formula for derivatives with $f(z) = 1$: all derivatives of a constant vanish, so $$0 = \frac{1}{2\pi i}\int_{\rvert z - z_{0}\rvert = R}(z - z_{0})^{m}\,\mathrm{d}z.$$
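The worry in the question about a dependence on $R$ can also be settled numerically (a Python sketch discretizing the parametrization $z = z_0 + Re^{i\theta}$; the choices of $z_0$, $R$ and grid size are arbitrary): the value is $2\pi i$ for $m=-1$ and $0$ otherwise, independent of $R$, because the $R^{m+1}$ factor multiplies an integral of $e^{i(m+1)\theta}$ that vanishes unless $m=-1$.

```python
import cmath
import math

def contour_integral(m, R, z0=0.5 + 0.25j, n=20000):
    # Midpoint rule for \int_{|z-z0|=R} (z-z0)^m dz
    h = 2 * math.pi / n
    total = 0.0 + 0.0j
    for i in range(n):
        theta = (i + 0.5) * h
        z = z0 + R * cmath.exp(1j * theta)
        dz = 1j * R * cmath.exp(1j * theta)  # dz/dtheta
        total += (z - z0) ** m * dz * h
    return total

vals = {m: contour_integral(m, 2.0) for m in (-3, -2, -1, 0, 1, 2)}
r_small = contour_integral(-1, 0.5)
r_big = contour_integral(-1, 7.0)
```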
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Images of Regions Under Cayley's Transformation I'm working on the following problem for my complex analysis course:
Problem $\bf 1$: Find the images of the followings under the Cayley's transformation: $a)$ imaginary axis $b)$ real axis $c)$ upper half plane $d)$ horizontal line through $i$
I can't seem to find Cayley's transformation anywhere in our textbook - could someone clarify to me what it is? I've done a Google search and have found mixed results.
Furthermore, how can I go about finding this image for each part? Like for this first part, should I just consider a complex number $z = ai$, see what it is mapped to, and then try to establish a general pattern?
Any help is greatly appreciated!
|
For the image of the imaginary axis under the Cayley's transformation: $$0 + yi\mapsto\frac{yi - i}{yi + i} = \frac{y - 1}{y + 1}\:\text{ ( the real-axis ) }$$
The map for the real-axis: $$x + 0i\mapsto\frac{x - i}{x + i}\:\text{ ( Note $\rvert\frac{x - i}{x + i}\rvert = \rvert\frac{\overline{x + i}}{x + i}\rvert = 1$, so this is the unit circle centered at the origin. ) }$$
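These two images can be spot-checked numerically (a Python sketch; the sample points are arbitrary):

```python
def cayley(z):
    # The Cayley transform z -> (z - i) / (z + i)
    return (z - 1j) / (z + 1j)

# Points on the imaginary axis (z = iy, y != -1) land on the real axis
imag_axis_images = [cayley(1j * y) for y in (-0.5, 0.0, 2.0, 10.0)]
# Points on the real axis land on the unit circle
real_axis_images = [cayley(float(x)) for x in (-3, -1, 0, 1, 7)]
```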
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A simple question in combinatorics. A university bus stops at some terminal where one professor, one student and one clerk have to ride the bus. There are six empty seats. How many possible seating arrangements are there?
My problem: I know that if there are six people to fill the 3 seats then there are $6\times5\times4=120= {}^6P_3$ possible arrangements.
If there are 3 people to fill six seats then there will be $3^6$ possible combinations. Am I right?
So the answer to my question should be $3^6$. Am I right?
Thanks in advance.
|
Let's see how you arrived at $3^6$.
Each seat can be occupied by $3$ different people, and there are $6$ seats, so the number of possibilities is $3 \times 3 \times 3 \times 3 \times 3 \times 3 = 3^6$.
So one of these $3^6$ possibilities is: First seat is occupied by Professor. Second seat has $3$ choices: Professor, Student, Clerk. Second seat selects Professor. Wait a minute, now the first and second seats are fighting for the professor. Professor can't occupy both seats!
So, no, $3^6$ is horribly wrong.
Is it $6^3$, then? Each person has $6$ choices of seats, and there are $3$ people, so $6 \times 6 \times 6 = 6^3$. Now we have the problem of the same seat being occupied by different people. So that's not it, either. But it can be fixed, by eliminating the selected seats. So, Professor selects one out of $6$ seats, then Student selects one out of $5$ seats, and Clerk selects one out of $4$ seats, so the answer is $6 \times 5 \times 4$ (exactly the same as when there are $6$ people and $3$ seats, because seats and people are not different in any relevant manner).
Another way is, you have three dummies to occupy three seats, but they are identical. So using the idea of permutation with repetition (see http://www.math.wsu.edu/students/aredford/documents/Notes_Perm.pdf, Page $6$), we have $\dfrac{6!}{3!} = 6 \times 5 \times 4$ possibilities.
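Brute-force enumeration in Python confirms the count of $6 \times 5 \times 4 = 120$:

```python
from itertools import permutations

# Each arrangement assigns 3 distinct seats (labelled 0..5),
# in order, to Professor, Student, Clerk
arrangements = list(permutations(range(6), 3))
count = len(arrangements)
```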
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Singular vector of random Gaussian matrix Suppose $\Omega$ is a Gaussian matrix with entries distributed i.i.d. according to normal distribution $\mathcal{N}(0,1)$. Let $U \Sigma V^{\mathsf T}$ be its singular value decomposition. What would be the distribution of the column (or row) vectors of $U$ and $V$? Would it be a Gaussian or anything closely related?
|
The original question suggested IID entries, which means that in the limit that the matrix gets big, the singular values follow a Marchenko-Pastur distribution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Is finiteness of rational points preserved by duality? Sorry if this is obvious. I don't know much about Abelian varieties.
Let $A/k$ be an abelian variety. Let's say $k$ has characteristic zero.
Let $\widehat{A}$ be the dual abelian variety.
Suppose that the set $A(k)$ of $k$-rational points is finite.
Is $\widehat{A}(k)$ also finite?
References will be much appreciated!
|
Since any abelian variety is isogenous to its dual, the answer is yes.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many ordered pairs of integers $(a, b)$ are needed to guarantee that there are two ordered pairs $(a_1, b_1)$ and $(a_2, b_2)$ such that $\dots$ Question:How many ordered pairs of integers $(a, b)$ are needed to guarantee that there are two ordered pairs $(a_1, b_1)$ and $(a_2, b_2)$ such that $a_1 \bmod 5 = a_2 \bmod 5$ and $b_1 \bmod 5 = b_2 \bmod 5$?
This is my first time with pigeon hole principle and this is a book exercise without an answer key. Thus I need someone kindly to point out if anything is wrong with my proof and how to make it more straight forward next time.
My Attempt:
For each of $a$ and $b$ in $(a, b)$, there are 5 different congruence classes modulo $5$. So we need to consider the 25 different congruence classes that the pair $(a, b)$ can form. Thus the problem is now reduced to a pigeonhole problem,
$$\lceil \dfrac{n}{25}\rceil = 2$$
By solving the inequality of the ceiling function (I don't have to, but just to be formal),
$$\lceil \dfrac{n}{25}\rceil-1 < \dfrac{n}{25} \leq \lceil \dfrac{n}{25}\rceil$$
$$2-1 < \dfrac{n}{25} \leq 2$$
$$25 < n \leq 50$$
Thus the smallest such $n$ has to be $25+1 = 26$ by the pigeonhole principle.
|
Yes, the answer is correct. There are $5$ options for the congruence of $a$ and $5$ options for the congruence of $b$ modulo $5$ so there are $25$ options for the congruence of the pair $(a,b)$. Clearly if you have $26$ then at least two will be in the same "congruence pair class". However with $25$ it could be we had one pair of each class, so the answer is 26 as you claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/870984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate: $\lim_{x \to \infty} \,\, \sqrt[3]{x^3-1} - x - 2$ Find the following limit $$\lim_{x \to \infty} \,\, \sqrt[3]{x^3-1} - x - 2$$
How do I find this limit? If I had to guess I'd say it converges to $-2$ but the usual things like L'Hôpital or clever factorisation don't seem to work in this case.
|
Hint: It is best to use series. However, we can do it with algebraic manipulation. Let $a=(x^3-1)^{1/3}$ and let $b=x+2$. Multiply top and (missing) bottom by $a^2+ab+b^2$.
Another way: Make the substitution $x=1/t$. We end up wanting
$$\lim_{t\to 0^+} \frac{\sqrt[3]{1-t^3} -1-2t}{t}.$$
Now one Hospital round does it.
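A quick numerical check (Python sketch) supports the claimed limit $-2$:

```python
def g(x):
    # cube root of x^3 - 1, minus x, minus 2
    return (x**3 - 1) ** (1 / 3) - x - 2

samples = [g(10.0 ** k) for k in range(2, 7)]  # x = 100, ..., 10^6
```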
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 8,
"answer_id": 3
}
|
How do I reduce radian fractions? For example, I need to know $\sin (19\pi/12)$.
I need to use the addition/subtraction formula. How do I get $(\text{what}) \pm (\text{what}) = 19\pi/12$? I am stuck on finding the radians.
Do I divide it by something? What is the process?
|
The denominator is $3\cdot4$ and you (should) know the values for denominators $3$ and $4$, so just decompose in two fractions
$$\frac{4\pi}3+\frac\pi4.$$
Now apply the addition formula.
This can be obtained by solving
$$\frac a3+\frac b4=\frac{19}{12}$$ or
$$4a+3b=19.$$
You can work this out by trial and error.
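A quick numerical check of the decomposition and the addition formula (Python sketch):

```python
import math

a, b = 4 * math.pi / 3, math.pi / 4
# 19*pi/12 = 4*pi/3 + pi/4, then sin(a + b) = sin a cos b + cos a sin b
lhs = math.sin(19 * math.pi / 12)
rhs = math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)
```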
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
Probability Help! (X,Y) ~ f(x,y) = 8xy $I_D(x,y)$ a) $f_X (x) =$ ?
b) $P( X + Y < \frac{1}{2}) =$ ?
c) $f_Y(y \,| \, X = \frac{3}{4}) =$ ?
d) $P( Y < \frac{1}{2} \, | \, X = \frac{3}{4}) = $ ?
Any help is greatly appreciated! Thanks!!
Here is my work so far...
a) To get marginal density of $x$, I need to integrate $f(x,y)$ once with respect to $y$. From the drawing, we see the appropriate bounds for integration is $0$ and $x$. So we have
$∫8xydy$ from $0$ to $x$, yielding $4x^3$ for $0<x<1$.
b) First we need to understand what exactly the event $X + Y < \frac{1}{2}$ is. Its boundary is the line between $(.5,0)$ and the point where it intersects $y=x$, so that I end up with an isosceles triangle. You can see that the appropriate double integral is
$∫∫8xydydx$, over $(0,.5-x)$ for $y$ and $(0,.5)$ for $x$. Then split the integral into two parts to make it easier ?
c) $f(y|X)= f(x,y)/f(x)= 8xy/(4x^3)$ Then plug in $x = .75$ ?
d) Now that I have part c), I can integrate $y$ from $0$ to $.5$ ?
|
$\operatorname{\bf I}_D(x,y)$ is an indicator function; a characteristic function that has the value of $1$ when the argument lies within the domain, and a value of $0$ when it does not. Here the argument is $(x,y)$ and the domain is $D$.
This is sometimes written as: ${\large\bf 1}_D(x,y)$.
It is compact notation for the piecewise function:
$$\operatorname{\bf I}_D(x,y) = \begin{cases}1 & \text{if } 0\leq y \leq x\leq 1 \\ 0 & \text{elsewise}\end{cases}$$
a) $f_X (x) =\int 8xy\operatorname{\bf I}_D(x,y) \operatorname{d} y = \int_0^x 8xy \operatorname{d} y \operatorname{\bf I}_{x\in[0,1]}(x)= 4x^3 \operatorname{\bf I}_{x\in[0,1]}(x)$
b) $P( X + Y < \frac{1}{2}) =$ ? : Simply observe what portion of the domain $D$ is below the line $x=\frac 1 2 - y$?
d) $P( Y < \frac{1}{2} \mid X = \frac{3}{4}) = $ ? : Find the line segment at $x=\frac 34$ what portion lies below $y=\frac 1 2$.
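These hints can be checked by numerical integration (a Python sketch; the Riemann grids, and the closed-form targets $1/96$ for b) and $4/9$ for d), are my own computations from the formulas above, not from the question):

```python
def joint(x, y):
    # f(x, y) = 8xy on the triangle 0 <= y <= x <= 1, else 0
    return 8 * x * y if 0 <= y <= x <= 1 else 0.0

def riemann2(pred, n=800):
    # Midpoint Riemann sum of joint(x, y) over {(x, y) : pred(x, y)}
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if pred(x, y):
                total += joint(x, y) * h * h
    return total

total_mass = riemann2(lambda x, y: True)         # should be 1
p_sum_half = riemann2(lambda x, y: x + y < 0.5)  # part b)

# Part d): f(y | x) = 8xy / (4x^3) = 2y / x^2 for 0 <= y <= x; x = 3/4
x0 = 0.75
h = 0.5 / 1000
p_cond = sum(2 * ((j + 0.5) * h) / x0**2 * h for j in range(1000))
```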
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Fresnel Integral multiplied with cosine term. $$I=\int_a^b \sin(\alpha-\beta x^2)\cos(x)\, dx.$$
Can anybody tell me, how to solve this integral ?
I know that this is related to Fresnel Integral if the $\cos(x)$ term is absent.
|
If $\beta>0$
$c+d=2\alpha-2\beta x^{2}$
$c-d=2x$
so $c=\alpha+x-\beta x^{2}=(\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}-\sqrt{\beta}x)^{2}$ and $d=\alpha-x-\beta x^{2}=(\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}+\sqrt{\beta}x)^{2}$
then using that $\sin(c)+\sin(d)=2\sin(\frac{1}{2}(c+d))\cos(\frac{1}{2}(c-d))$ we have:
$\int_{a}^{b}\sin(\alpha-\beta x^{2})\cos(x)dx$
$=\frac{1}{2}\int_{a}^{b}\sin((\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}-\sqrt{\beta}x)^{2})dx+\frac{1}{2}\int_{a}^{b}\sin((\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}+\sqrt{\beta} x)^{2})dx$
Notice that $\int_{s}^{t}\sin(c-u^{2})du=\int_{s}^{t}\sin(c)\cos(u^{2})du-\int_{s}^{t}\cos(c)\sin(u^{2})du$
$=\sin(c)(C(t)-C(s))-\cos(x)(S(t)-S(s))$
where $S(x)=\int_{0}^{x}\sin(p^{2})dp$ and $C(x)=\int_{0}^{x}\cos(p^{2})dp$
A similar procedure can be done for $\beta<0$ since $\sin(-x)=-\sin(x)$. If $\beta=0$ then this is just $\sin(\alpha)\int_{a}^{b}\cos(x)dx$.
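As a pointwise sanity check of the product-to-sum split and the completed squares above (a Python sketch; the values $\alpha=0.7$, $\beta=1.3$ and sample points are arbitrary):

```python
import math

alpha, beta = 0.7, 1.3

def lhs(x):
    return math.sin(alpha - beta * x * x) * math.cos(x)

def rhs(x):
    # (1/2)[sin(K - u_minus^2) + sin(K - u_plus^2)], K = alpha + 1/(4 beta)
    K = alpha + 1 / (4 * beta)
    u_minus = 1 / (2 * math.sqrt(beta)) - math.sqrt(beta) * x
    u_plus = 1 / (2 * math.sqrt(beta)) + math.sqrt(beta) * x
    return 0.5 * (math.sin(K - u_minus**2) + math.sin(K - u_plus**2))

diffs = [abs(lhs(x) - rhs(x)) for x in (-2.0, -0.3, 0.0, 1.1, 4.7)]
```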
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Frenet-Serret formula proof Prove that $$\textbf{r}''' = [s'''-\kappa^2(s')^3]\textbf{ T } + [3\kappa s's''+\kappa'(s')^2]\textbf{ N }+\kappa \hspace{1mm}\tau (s')^3\textbf{B}.$$
What is $\tau$? I can't figure that part out.
All ideas are welcome.
|
Take into account that, for a generic function $f$,
$$
f'=\frac{df}{dt}=\frac{df}{ds}\frac{ds}{dt}=s'\frac{df}{ds}
$$
so that
$$
\mathbf{N}'=s'\frac{d\mathbf{N}}{ds}
$$
and
$$
\frac{d\mathbf{N}}{ds}=-\kappa\mathbf{T}+\tau\mathbf{B}
$$
see Frenet-Serret formulas.
Also, your $\kappa'$ is indeed $\frac{d\kappa}{dt}$, while in the formula to prove it is intended as $\frac{d\kappa}{ds}$.
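The formula can also be sanity-checked numerically on a concrete curve (my own example, not from the question): the helix $\mathbf r(t)=(\cos t,\sin t,t)$, for which $s'=\sqrt 2$ is constant (so $s''=s'''=0$), $\kappa=\tau=\frac12$, and the $\kappa'$ term drops because $\kappa$ is constant.

```python
import math

def check(t):
    # Helix r(t) = (cos t, sin t, t): s' = sqrt(2), kappa = tau = 1/2
    sp = math.sqrt(2.0)
    kappa = tau = 0.5
    T = (-math.sin(t) / sp, math.cos(t) / sp, 1 / sp)
    N = (-math.cos(t), -math.sin(t), 0.0)
    # B = T x N
    B = (T[1] * N[2] - T[2] * N[1],
         T[2] * N[0] - T[0] * N[2],
         T[0] * N[1] - T[1] * N[0])
    # With s'' = s''' = 0 and kappa' = 0 the formula reduces to
    # r''' = -kappa^2 s'^3 T + kappa tau s'^3 B
    rhs = tuple(-kappa**2 * sp**3 * T[i] + kappa * tau * sp**3 * B[i]
                for i in range(3))
    r3 = (math.sin(t), -math.cos(t), 0.0)  # third derivative of r(t)
    return max(abs(rhs[i] - r3[i]) for i in range(3))

errors = [check(t) for t in (0.0, 0.9, 2.5)]
```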
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Evaluate $\int \frac{1}{(2x+1)\sqrt {x^2+7}}\,\text{d}x$. How to do this indefinite integral (anti-derivative)?
$$I=\displaystyle\int \dfrac{1}{(2x+1)\sqrt {x^2+7}}\,\text{d}x.$$
I tried doing some substitutions ($x^2+7=t^2$, $2x+1=t$, etc.) but it didn't work out.
|
Using Euler substitution by setting $t-x=\sqrt{x^2+7}$, we will obtain
$x=\dfrac{t^2-7}{2t}$ and $dx=\dfrac{t^2+7}{2t^2}\ dt$, then the integral turns out to be
\begin{align}
\int \dfrac{1}{(2x+1)\sqrt {x^2+7}}\ dx&=\int\frac{1}{t^2+t-7}\ dt\\
&=\int\frac{1}{\left(t+\dfrac{\sqrt{29}+1}{2}\right)\left(t-\dfrac{\sqrt{29}-1}{2}\right)}\ dt\\
&=-\int\left[\frac{2}{\sqrt{29}(2t+\sqrt{29}+1)}+\frac{2}{\sqrt{29}(-2t+\sqrt{29}-1)}\right]\ dt.
\end{align}
The rest can be solved by using substitution $u=2t+\sqrt{29}+1$ and $v=-2t+\sqrt{29}-1$.
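If you want to double-check the substitution before doing the partial fractions, a quick numerical comparison confirms that the two integrals agree (the limits $a=1$, $b=2$ are arbitrary choices on which the integrand stays finite):

```python
from math import sqrt

def integrate(f, a, b, n=100_000):
    # crude composite midpoint rule; accurate enough for this check
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: 1.0 / ((2 * x + 1) * sqrt(x * x + 7))  # original integrand
g = lambda t: 1.0 / (t * t + t - 7)                  # after the substitution
t = lambda x: x + sqrt(x * x + 7)                    # t = x + sqrt(x^2 + 7)

a, b = 1.0, 2.0
assert abs(integrate(f, a, b) - integrate(g, t(a), t(b))) < 1e-7
```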
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
finding right quotient of languages Can someone enumerate in detail the steps to find the right quotient of languages, i.e. $L_1/L_2$? Using an example would be great.
|
The right quotient of $L_1$ with $L_2$ is the set of all strings $x$ where you can pick some element $y$ from $L_2$ and append it to $x$ to get something from $L_1$. That is, $x$ is in the quotient if there is $y$ in $L_2$ for which $xy$ is in $L_1$.
Let's agree to write the quotient of $L_1$ by $L_2$ as $\def\Q{\operatorname{Quotient}}\def\Quot#1#2{{#1}/{#2}}\def\qq{\Quot{L_1}{L_2}}\qq$.
Here are some examples:
*
*Say that $L_1$ is the language $\{\mathtt{fish}, \mathtt{dog}, \mathtt{carrot}\}$ and that $L_2$ is the language $\{\mathtt{rot}\}$. Then $\qq$, the quotient of $L_1$ by $L_2$, is the language $\{\mathtt{car}\}$, because $\mathtt{car}$ is the only string for which you can append something from $L_2$ to get something from $L_1$.
*Say that $L_1$ is the language $\{\mathtt{carrot}, \mathtt{parrot}, \mathtt{rot}\}$ and that $L_2$ is the language $\{\mathtt{rot}\}$. Then $\qq$ is the language $\{\mathtt{car}, \mathtt{par}, \epsilon\}$. Say that $L_3 = \{\mathtt{rot}, \mathtt{cheese}\}$. Then $\Quot{L_1}{L_3}$ is also $\{\mathtt{car}, \mathtt{par}, \epsilon\}$.
*Say that $L_1 = \{\mathtt{carrot}\}$ and $L_2 = \{\mathtt{t}, \mathtt{ot}\}$. Then $\qq$ is $\{\mathtt{carro}, \mathtt{carr}\}$.
*Say that $L_1 = \{\mathtt{xab}, \mathtt{yab}\}$ and $L_2 = \{\mathtt{b}, \mathtt{ab}\}$. Then $\qq$ is $\{\mathtt{xa},\mathtt{ya},\mathtt{x},\mathtt{y}\}$.
*Say that $L_1 = \{\epsilon, \mathtt{a}, \mathtt{ab}, \mathtt{aba}, \mathtt{abab}, \ldots\}$ and $L_2 = \{\mathtt b, \mathtt{bb}, \mathtt{bbb}, \ldots\}$. Then $\qq$ is $ \{\mathtt a, \mathtt{aba}, \mathtt{ababa}, \ldots\}$.
*In general, if $L_2$ contains $\epsilon$, then $\qq$ will contain $L_1$.
*In general, if $L_2 = P\cup Q$, then $$\qq = (\Quot{L_1}P) \cup (\Quot{L_1}Q).$$
*In general, if $L_1 = P\cup Q$, then $$\qq = (\Quot P{L_2}) \cup (\Quot Q{L_2}).$$
*The two foregoing facts mean that you can calculate the right quotient of two languages $L_1$ and $L_2$ as follows: Let $s_1$ be some element of $L_1$ and let $s_2$ be some element of $L_2$. If $s_2$ is a suffix of $s_1$, so that $s_1 = xs_2$ for some string $x$, then $x$ is in the quotient $L_1/L_2$. Repeat this for every possible choice of $s_1$ and $s_2$ and you will have found every element of $L_1/L_2$.
I hope this is some help.
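The brute-force procedure in the last bullet is easy to code for finite languages. Here is a small sketch (my own illustration, using Python sets of strings, with $\epsilon$ represented as the empty string):

```python
def right_quotient(L1, L2):
    """All strings x such that x + y is in L1 for some y in L2."""
    return {s1[:len(s1) - len(s2)]
            for s1 in L1 for s2 in L2
            if s1.endswith(s2)}

# The examples from the answer:
assert right_quotient({"fish", "dog", "carrot"}, {"rot"}) == {"car"}
assert right_quotient({"carrot", "parrot", "rot"}, {"rot"}) == {"car", "par", ""}
assert right_quotient({"carrot"}, {"t", "ot"}) == {"carro", "carr"}
assert right_quotient({"xab", "yab"}, {"b", "ab"}) == {"xa", "ya", "x", "y"}
```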
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Matrices: $AB=0 \implies A=0 \text{ or } \ B=0$ When A and B are square matrices of the same order, and O is the zero square matrix of the same order, prove or disprove:-
$$AB=0 \implies A=0 \text{ or } \ B=0$$
I proved it as follows:-
Assume $A \neq O$ and $ B \neq O$:
then, $$ |A||B| \neq 0 $$
$$ |AB| \neq 0 $$
$$ AB \neq O $$
$$ \therefore A \neq O\ and\ B \neq O \implies AB \neq O $$
$$ \neg[ AB \neq O] \implies \neg [ A \neq O\ and\ B \neq O ] $$
$$AB=O \implies A=O \text{ or } \ B=O$$
But when considering
$$ A := \begin{pmatrix} 1&1 \\ 1&1 \end{pmatrix} \quad\text{and}\quad B := \begin{pmatrix} -1&1 \\ 1&-1 \end{pmatrix}, $$
then $AB=O$ and $A \neq O$ and $B \neq O$.
I can't figure out which one and where I went wrong.
|
You are saying that if $A \neq O$ then $\det(A) \neq 0$, which is false in general. Consider any diagonal matrix different from $O$ which has at least one zero on the diagonal.
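The counterexample from the question is easy to verify directly; here is a quick check using plain-list $2\times 2$ arithmetic (just an illustration):

```python
def matmul2(A, B):
    # 2x2 matrix product with plain nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

det2 = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 1], [1, 1]]
B = [[-1, 1], [1, -1]]

assert matmul2(A, B) == [[0, 0], [0, 0]]   # AB = O although A, B != O
assert det2(A) == 0 and det2(B) == 0       # both determinants vanish
```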
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Reduce distance computation overhead between a point and several rectangles We are given several rectangles in the plane, without loss of generality, assume there are three of them, namely $R_1$, $R_2$ and $R_3$.
For a point $P$, we can compute three distances $d_1$, $d_2$ and $d_3$ between $P$ and each rectangle respectively, and a final distance $d_{min}$ is defined as $d_{min}=\min (d_1,d_2,d_3)$.
There are $n$ such points randomly scattered in the plane, and we want to know $d_{min}$ for every point. The naive solution would require $3n$ distance computations.
I'm wondering whether we can do some preprocess to $R_1$, $R_2$ and $R_3$ to reduce the number of distance computation. Or, put it another way, can we compute $d_{min}$ directly according to some aggregated information of $R_1$, $R_2$ and $R_3$, instead of computing $d_1$, $d_2$ and $d_3$ at first?
|
If the rectangles are not too many, you can store the Voronoi diagram of your configuration in a convenient way (as a set of linear and quadratic inequalities, for instance), then find, for each point $P$, the index $i$ of the region where it belongs, in order to compute only $d(P,R_i)$.
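For comparison, the naive per-point computation is cheap when the rectangles are axis-aligned, since the distance to a rectangle is just a coordinate-wise clamp; a Voronoi-style preprocessing would replace the `min` over all rectangles with a single region lookup. A sketch (axis-aligned rectangles are my assumption here; the question does not specify their orientation):

```python
from math import hypot

def dist_to_rect(p, rect):
    # rect = (xmin, ymin, xmax, ymax), assumed axis-aligned;
    # clamp the point into the rectangle and measure the residual offset
    px, py = p
    xmin, ymin, xmax, ymax = rect
    dx = max(xmin - px, 0.0, px - xmax)
    dy = max(ymin - py, 0.0, py - ymax)
    return hypot(dx, dy)

def d_min(p, rects):
    # the naive O(len(rects)) minimum from the question
    return min(dist_to_rect(p, r) for r in rects)

rects = [(0, 0, 1, 1), (3, 0, 4, 1), (0, 3, 1, 4)]
assert d_min((2.0, 0.5), rects) == 1.0  # equidistant from R1 and R2
assert d_min((0.5, 0.5), rects) == 0.0  # inside R1
```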
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/871831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|