| Q | A | meta |
|---|---|---|
Integration and evaluation I have to evaluate this integral here
$$\int_x^\infty \frac{1}{1+e^t} dt$$
I've tried with the substitution $e^t=k$, but at the end I get $\log|k|-\log|1+k| $, which evaluated between the extrema doesn't converge... I hope that somebody can help!
|
Hint:
$$\frac { 1 }{ 1+{ e }^{ t } } =\frac { 1 }{ { e }^{ t }\left( 1+{ e }^{ -t } \right) } =\frac { { e }^{ -t } }{ 1+{ e }^{ -t } } \\ \int { \frac { dt }{ 1+{ e }^{ t } } } =-\int { \frac { d\left( 1+{ e }^{ -t } \right) }{ 1+{ e }^{ -t } } } $$
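The hint's antiderivative is $-\ln(1+e^{-t})$, so the integral evaluates to $\ln(1+e^{-x})$. A quick numerical sanity check (a Python sketch; the midpoint rule, the cutoff, and the step count are arbitrary choices of mine):

```python
import math

def integral_tail(x, cutoff=40.0, steps=100_000):
    # midpoint rule for ∫_x^cutoff dt / (1 + e^t); the integrand decays like
    # e^{-t}, so the part beyond the cutoff is negligible
    h = (cutoff - x) / steps
    return h * sum(1.0 / (1.0 + math.exp(x + (i + 0.5) * h)) for i in range(steps))

for x in (-2.0, 0.0, 1.0, 3.0):
    closed_form = math.log(1.0 + math.exp(-x))   # from -ln(1 + e^{-t}) at the two limits
    assert math.isclose(integral_tail(x), closed_form, rel_tol=1e-5)
```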
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1433517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
}
|
Rewrite a function of $x+iy$ in terms of $z$ only How can you rewrite a function like
$$f(x+iy)=e^{2xy}\cos(x^2-y^2)-i\left[e^{2xy}\sin(x^2-y^2)+C\right]$$
in terms of just $z$? Are there any tricks that you can use or is it just the fact that you need to recognize the functions when you see them?
|
Suppose we have an expression for $f$ in terms of $z$. If we restrict $z$ to be real, then we are effectively replacing the complex variable $z$ by the expression $x + 0i$ where $x$ is a real variable. What we end up with is an expression for $f$ along the real line in terms of $x$. What is miraculous is that we can reverse this process. That is, if we know an expression for $f$ when restricted to the real line in terms of the real variable $x$, we can recover an expression for $f$ on the complex plane in terms of the complex variable $z$. This follows from the Identity Theorem.
For example, given $f(x + iy) = e^{2xy}\cos(x^2-y^2) - i[e^{2xy}\sin(x^2-y^2) + C]$, when we restrict to the real axis (i.e. set $y = 0$), we obtain $f(x) = \cos(x^2)-i\sin(x^2) - iC = e^{-ix^2} -iC$. How do we get an expression for $f(z)$? Replace $x$ by $z$. That is, once we know that $f$ restricted to the real line is given by $f(x) = e^{-ix^2} - iC$ we know that $f(z) = e^{-iz^2} - iC$ for all $z \in \mathbb{C}$.
As Rob Arthan points out below, in order to use the method above, you need to know in advance that $f$ is holomorphic (that is, the expression you are trying to obtain is in terms of $z$ only rather than $z$ and $\bar{z}$); you can check this by verifying the Cauchy-Riemann equations. The reason you need to know that $f$ is holomorphic in advance is the use of the Identity Theorem.
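A quick numerical spot-check that the two expressions agree (a Python sketch; `f_xy` and `f_z` are my names for the two forms):

```python
import cmath
import math

def f_xy(x, y, C):
    # the given expression in terms of x and y
    e = math.exp(2 * x * y)
    return e * math.cos(x * x - y * y) - 1j * (e * math.sin(x * x - y * y) + C)

def f_z(z, C):
    # the recovered expression in terms of z alone
    return cmath.exp(-1j * z * z) - 1j * C

for x, y, C in [(0.3, -1.2, 2.0), (1.1, 0.7, -0.5), (-0.4, 0.9, 3.0)]:
    assert abs(f_xy(x, y, C) - f_z(complex(x, y), C)) < 1e-12
```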
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1433644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Given that $f:G\to G$ on a group $G$, defined by $f(x)=x^3$, is an isomorphism, how do I show that $G$ is abelian? Given that $f:G\to G$ on a group $G$, defined by $f(x)=x^3$, is an isomorphism, how do I show that $G$ is abelian, please?
|
Answer to the changed question (for $f(x)=x^{-1}$ an isomorphism): note that for any $x,y \in G$, $xy=f(x^{-1})f(y^{-1})=f(x^{-1}y^{-1})=f((yx)^{-1})=yx$. Hence $G$ is abelian.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1433731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $f$ is continuous Let $f$ be a non-constant function such that $f(x+y)=f(x)\cdot f(y),\forall x,y\in\mathbb{R}$ with $\lim\limits_{x\to 0}{f(x)}=l\in\mathbb{R}$. Prove that:
a) $\lim\limits_{x\to x_0}{f(x)}\in\mathbb{R},\forall x_0\in\mathbb{R}$
b) $l=1$
I have managed to prove a):
$$\lim\limits_{h\to 0}{f(x_0+h)}=\lim\limits_{h\to 0}{(f(x_0)\cdot f(h))}=l\cdot f(x_0)\Rightarrow \lim\limits_{x\to x_0}{f(x)}=l\cdot f(x_0)\in\mathbb{R}$$
But I cannot move further to b, except proving that $f(0)=1$:
For $y=0$ in the original relation, we get: $f(x)\cdot (f(0)-1)=0,\forall x\in\mathbb{R}$ and since $f$ is non-constant we get that $f(0)=1$. Any hint for b?
|
Assuming that you have established part a) and the fact that $f(0) = 1$ we can proceed to show that $l = 1$. Clearly we can see that $f(x)f(x) = f(2x)$ and hence taking limits when $x \to 0$ (so that $2x \to 0$) we get $l^{2} = l$. So either $l = 0$ or $l = 1$.
Again note that $f(x)f(-x) = f(0) = 1$ and taking limits as $x \to 0$ (and noting that $-x \to 0$) we get $l^{2} = 1$ so that $l = \pm 1$. Thus the only option is to have $l = 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1433808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $g$ is continuous at $x=0$ Given, $g(x) = \frac{1}{1-x} + 1$. I want to prove that $g$ is continuous at $x=0$. I specifically want to do an $\epsilon-\delta$ proof. Related to this Is $g(x)\equiv f(x,1) = \frac{1}{1-x}+1$ increasing or decreasing? differentiable $x=1$?.
My work: Let $\epsilon >0$ be given. The challenging part for me is to pick the $\delta$.
Let $\delta = 2 \epsilon$ and suppose that $|x-0| = |x| < \delta$ and $|\frac{1}{1-x}| < \frac{1}{2}$.
So $$|g(x) - g(0)|$$
$$=|\frac{2-x}{1-x} - 2|$$
$$=|\frac{x}{1-x}|$$
$$=|\frac{1}{1-x}||x|$$
$$<\frac{1}{2} 2 \epsilon = \epsilon$$.
So $g$ is continuous at $x=0$. Is my proof correct?
EDIT: Rough work on how I picked $\delta$. Suppose $|x-0|<\delta$ and since $\delta \leq 1$, we have $|x|<1$, so $-1<x<1$ Then this implies $0<1-x<2$. The next step I'm not too sure of, I have $0<\frac{1}{1-x}<\frac{1}{2} \implies |\frac{1}{1-x}|<\frac{1}{2}$.
|
Hint. See my comment to your post.
After doing your algebra, as you have shown,
$$\left|\dfrac{1}{1-x}+1-2\right| = \left|x\right|\left|\dfrac{1}{1-x}\right|\text{.}$$
We have $|x| < \delta$. Let $\delta = 1/2$, then
$$|x| < \delta \implies -\delta < x < \delta \implies 1-\delta < 1-x < 1 + \delta \implies \dfrac{1}{1+\delta} < \dfrac{1}{1-x} < \dfrac{1}{1-\delta}= 2\text{,}$$
thus implying $\left|\dfrac{1}{1-x}\right| < 2$ (notice $\dfrac{1}{1+\delta} = \dfrac{1}{1.5} \in (-2, 2)$, so we can make this claim).
Choose $\delta := \min\left(\dfrac{\epsilon}{2}, \dfrac{1}{2}\right)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1433919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
probability that exactly two envelopes will contain a card with a matching color Suppose that 10 cards, of which 5 are red and 5 are green, are placed at random in 10 envelopes, of which 5 are red and 5 are green. Determine the probability that exactly two envelopes will contain a card with a matching color.
clearly $|\Omega| = 10!$
if I have $\binom{5}{1}$ ways red and $\binom{5}{1}$ ways green, how do I get the other permutations? Some hints please.
|
Suppose both the cards and envelopes are rearranged (without changing the pairings) so that all red envelopes are on the left and all green envelopes are on the right. Now, focusing on the five cards on the left, we want the probability that exactly one of these is red (then, by symmetry, there will also be exactly one matching pair on the right).
There are $\binom{5}{1}=5$ ways of choosing one red card out of five, and $\binom{5}{4}=5$ ways of choosing four green cards out of five. There are $\binom{10}{5}$ ways of choosing the five cards on the left from the set of ten.
Hence, the probability is
$$\mathbf{Pr}[2\text{ cards match envelopes}]=\frac{\binom{5}{1}\binom{5}{4}}{\binom{10}{5}}=\frac{5\times5}{252}=\frac{25}{252}$$
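This can be confirmed by exact enumeration (a Python sketch; a placement is identified by which envelopes receive the red cards):

```python
from fractions import Fraction
from itertools import combinations

favourable = 0
total = 0
# Envelopes 0-4 are red, 5-9 are green; up to the irrelevant order within
# colours, a placement is determined by the positions of the 5 red cards.
for red_positions in combinations(range(10), 5):
    total += 1
    red_matches = sum(1 for p in red_positions if p < 5)   # red cards in red envelopes
    green_matches = 5 - (5 - red_matches)                  # green cards in green envelopes
    if red_matches + green_matches == 2:
        favourable += 1

assert Fraction(favourable, total) == Fraction(25, 252)
```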
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
}
|
Show that $H+K$ is a subspace of $V$ I am trying to solve this, please give me hints.
$H,K \subseteq V$, $V$ is a vector space.
$H+K=\{w:w=u+v:u \in H, v \in K \}$
Show that $H+K$ is a subspace of $V$.
|
A sufficient condition is that $H$ and $K$ are subspaces of $V$. In that case, it suffices to check that $aw_{1} + bw_{2} \in H+K$ for all scalars $a,b$ and all $w_{1},w_{2} \in H+K$.
Let $w_{1},w_{2} \in H+K$ and let $a,b$ be scalars. Then $w_{1} = h_{1} + k_{1}$, $w_{2} = h_{2}+k_{2}$ for some $h_{1},h_{2} \in H$ and some $k_{1}, k_{2} \in K$ by definition. Thus $aw_{1} + bw_{2} = ah_{1} + bh_{2} + ak_{1} + bk_{2}$. But $ah_{1}+bh_{2} \in H$ and $ak_{1} + bk_{2} \in K$ since $H$ and $K$ are subspaces, so by definition we have $aw_{1} + bw_{2} \in H+K$.
Note that if $H,K$ are simply subsets of $V$, then $H+K$ need not be a subspace of $V$; for if $V := \mathbb{R}$ (as a vector space over itself), $H:= \{ 0 \}$, and $K := \{1\}$, then $H+K = \{ 1 \}$, which is not a subspace of $V$, since it does not contain $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Algebra with complex numbers (i) My professor has assigned this problem:
Let $r$ be a real number. By using $r = r + 0i$, confirm that $r(a + bi) = (ra) + (rb)i$.
My understanding is that this looks at distributive property, but I do not understand how r = r + 0i relates. Could someone provide a jumping off point? I don't want/ need the entire problem solved... Just somewhere to start.
Thanks in advance.
|
I guess the professor defined "complex-multiplication" operation using "real-multiplication" as follows:
$(a+ib) \cdot (c+ id):= (ac-bd) + i(ad+bc)$.
Hence, $r\cdot (a+ib) = (r+i0)\cdot (a+ib)= (ra -0b) + i(rb + 0a)= (ra)+ i(rb)$.
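A quick check of this computation (a Python sketch; `cmul` is my name for the multiplication rule above, with complex numbers represented as pairs of reals):

```python
import random

def cmul(p, q):
    # complex multiplication defined from real arithmetic:
    # (a+ib)(c+id) := (ac - bd) + i(ad + bc)
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

random.seed(1)
for _ in range(100):
    r, a, b = (random.uniform(-10, 10) for _ in range(3))
    # multiplying by r + 0i scales both coordinates by r
    assert cmul((r, 0.0), (a, b)) == (r * a, r * b)
```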
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Bounds for the PDF of $(X,XY)$ if the PDF of $(X,Y)$ has support $0\leq X\leq Y\leq1$
Given the PDF $\,f_{XY}(x,y)=8xy,\, 0\leq x \leq y \leq 1$. Let $U=X$ and $V=XY$. Find the pdf of $(U,V)$.
My answer:
So, $X=U$ and $Y= V/U$, The Jacobian J of the inverse transformation would then equal:
$J=\frac{1}{u}$..., then
How do I find the bounds for the support of $f_{U,V}(u,v)= f_{X,Y}(u,v/u)\,|J|$?
|
$U=X$ so the range for $U$ is $0\lt U\lt 1$.
$V=XY$ has its maximum when $Y=1$, giving $V=X=U$. $V$ has its minimum when $Y=X$, giving $V=X^2=U^2$.
So the joint pdf for $U,V$ is
$$f_{U,V}(u,v) = |J|f_{X,Y}(x(u,v),\;y(u,v)) = \dfrac{8v}{u},\quad 0\lt u\lt 1\; \text{ and } u^2\lt v\lt u.$$
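As a sanity check, this density integrates to $1$ over the stated support (a numerical Python sketch; the grid sizes are arbitrary choices):

```python
import math

# pdf and support from the answer above
def f_uv(u, v):
    return 8.0 * v / u

n = 400
total = 0.0
for i in range(n):
    u = (i + 0.5) / n            # midpoint in u over (0, 1)
    lo, hi = u * u, u            # support in v: u^2 < v < u
    hv = (hi - lo) / n
    for j in range(n):
        v = lo + (j + 0.5) * hv  # midpoint in v
        total += f_uv(u, v) * hv / n

assert math.isclose(total, 1.0, rel_tol=1e-4)
```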
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
prove the combinatorial identity: $ {2n \choose n} + {2n\choose n-1} = (1/2){2n + 2 \choose n + 1}$ So the first thing I did was to multiply and then isolate the left-most term.
That leaves me with $$ 2{2n \choose n} ={2n + 2\choose n + 1} - 2{2n \choose n -1} $$
No specific reasoning except that it looks cleaner, at least to me. Then the way I thought about it was that we are trying to make two $n$ member committees out of $n$ koalas and $n$ pandas. I was told when seeing subtraction I should think of complement but I don't really know how to incorporate the complement into this. Any way, am I on the right path or completely off. Thank you in advance for your time.
|
When I see these types of identities, I don't look for a combinatorial argument; I think algebraically. Maybe longer, but always correct. We want to prove
$$2{2n \choose n} ={2n + 2\choose n + 1} - 2{2n \choose n -1}$$
We know that $${2n\choose n}+{2n\choose n-1}={2n+1\choose n}$$
Multiplying by $2$ $$2{2n\choose n}+2{2n\choose n-1}=2{2n+1\choose n}$$
Now we have to prove that $$2{2n+1\choose n}={2n+2\choose n+1}$$
$$2\cdot\frac{(2n+1)!}{(n+1)!\,n!}=\frac{(2n+2)!}{(n+1)!(n+1)!}$$
Cancelling the common factors, this is equivalent to
$$2=\frac{2n+2}{n+1}$$
Which is true. Hence, $$2{2n\choose n}+2{2n\choose n-1}={2n+2\choose n+1}$$
So this expression is equivalent to your question.
As long as the proof of ${2n\choose n}+{2n\choose n-1}={2n+1\choose n}$ is concerned, I will prove the more general case- $${n\choose k}+{n\choose k-1}={n+1\choose k}$$
Proof: The given equation is same as $$\frac{n!}{k!(n-k)!}+\frac{n!}{(k-1)!(n-k+1)!}=\frac{(n+1)!}{k!(n-k+1)!}$$
Factoring out $\frac{n!}{(k-1)!(n-k)!}$ on the left-hand side, this is the same as
$$\frac{n!}{(k-1)!(n-k)!}(\frac1k+\frac1{n-k+1})=\frac{(n+1)!}{k!(n-k+1)!}$$
By cancelling and adding on the LHS, $$\frac{n+1}{k(n-k+1)}=\frac{n+1}{k(n-k+1)}$$
So LHS=RHS. Hence proved.
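Each identity used above is easy to confirm numerically (a Python sketch using the standard library's `math.comb`; the ranges are arbitrary):

```python
from math import comb

# the target identity, the halving step, and Pascal's rule
for n in range(1, 300):
    assert comb(2 * n, n) + comb(2 * n, n - 1) == comb(2 * n + 1, n)
    assert 2 * comb(2 * n + 1, n) == comb(2 * n + 2, n + 1)
    assert comb(2 * n, n) + comb(2 * n, n - 1) == comb(2 * n + 2, n + 1) // 2
for n in range(1, 50):
    for k in range(1, n + 1):
        assert comb(n, k) + comb(n, k - 1) == comb(n + 1, k)
```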
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
The order of sum of powers? For example, the sum of the first $n$ integers is $n(n+1)/2$, whose dominating term is $n^2$ (let us say this is order $2$). For the sum of $n^2$, the order is $3$. Then for the sum of $n^k$, is the order $k+1$?
I have been looking at Faulhaber's formula and Bernoulli numbers, but I'm not sure what the order is there. It's much more complicated than I thought.
Any ideas? Thanks in advance
|
By the definition of Riemann integral, $\frac1n \sum_{k=1}^n (\frac{k}{n})^\alpha \rightarrow \int_0^1 t^\alpha dt=\frac1{\alpha+1}$.
Then $\sum_{k=1}^n k^\alpha \sim \frac{n^{\alpha+1}}{\alpha+1}$
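A quick numerical illustration that $\sum_{k=1}^n k^\alpha \sim \frac{n^{\alpha+1}}{\alpha+1}$ (a Python sketch; `power_sum_ratio` is my helper name):

```python
def power_sum_ratio(n, alpha):
    # ratio of sum_{k=1}^n k^alpha to n^(alpha+1)/(alpha+1); it tends to 1 as n grows
    s = sum(k ** alpha for k in range(1, n + 1))
    return s * (alpha + 1) / n ** (alpha + 1)

for alpha in (1, 2, 3, 5):
    r_small = power_sum_ratio(100, alpha)
    r_big = power_sum_ratio(100_000, alpha)
    # the ratio approaches 1, and the approximation improves with n
    assert abs(r_big - 1.0) < abs(r_small - 1.0) < 0.1
```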
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$G$ solvable $\implies$ composition factors of $G$ are of prime order. I'm trying to prove the equivalency of the following definitions for a finite group, $G$:
(i) $G$ is solvable, i.e. there exists a chain of subgroups $1 = G_0 \trianglelefteq G_1 \trianglelefteq \cdots \trianglelefteq G_s = G $ such that $G_{i+1}/ \ G_i$ is abelian for all $0 \leq i \leq s-1$
(ii) $G$ has a chain of subgroups $1 = H_0 \trianglelefteq H_1 \trianglelefteq \cdots \trianglelefteq H_s = G $ such that $H_{i+1} /\ H_i$ is cyclic for all $0 \leq i \leq s-1$.
(iii) All composition factors of $G$ are of prime order.
(iv) G has a chain of subgroups $1 = N_0 \trianglelefteq N_1 \trianglelefteq \cdots \trianglelefteq N_t = G $ such that each $N_i$ is a normal subgroup of $G$, and $N_{i+1}/ \ N_i$ is abelian for all $0 \leq i \leq t-1$.
It was simple to show that (iii) $\implies$ (ii) and (ii) $\implies$ (i). Now, I'm trying to prove that (i) $\implies$ (iii), and I'm getting stuck.
I know that If $G$ is an abelian simple group, then $|G|$ is prime, so all I need to show is that each $G_{i+1}/ \ G_i$ in the definition of solvable is simple.
If I knew that $1 = G_0 \trianglelefteq G_1 \trianglelefteq \cdots \trianglelefteq G_s = G $ were a composition series, then each $G_{i+1}/ \ G_i$ would be simple by definition. But the definition of solvable doesn't assume a composition series. I know every group has a unique composition series, but I'm not sure how to connect that to the subgroups in the definition of solvable.
More generally, I guess I just don't understand how being abelian connects to being of simple or of prime order.
Any pushes in the right direction would be appreciated!
|
Hint: Use induction on the order of $G$. If $G_{s-1}$ is a proper normal subgroup of $G$ such that $G/G_{s-1}$ is abelian, then there are two cases:

*Either $1 < G_{s-1}$; then by induction, the composition factors of $G_{s-1}$ and $G/G_{s-1}$ have prime orders.
*Or $1 = G_{s-1}$, in which case $G$ is itself abelian.
Can you figure out how the claim follows in each of the two cases?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
}
|
Show set is countable I want to show that the set
$A=\{n^2+m^2:n,m\in \mathbb{N}\}$
is countable.
Is it enough to state that, since $\mathbb{N}$ is closed under multiplication and addition, the set A must be a subset of $\mathbb{N}$; and since any subset of a countable set is countable, A must be countable?
|
Yes. Unless you define "countable" as "infinite and countable", in which case you need to prove that $A$ is infinite (which should be almost trivial).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Proving that $\pi(2n) < \pi(n)+\frac{2n}{\log_2(n)}$ Given that $\pi(x)$ is the prime-counting function, prove that, for $n\geq 3$,
1: $\pi(2n) < \pi(n)+\frac{2n}{\log_2(n)}$
2:$ \pi(2^n) < \frac{2^{n+1}\log_2(n-1)}{n}$
For $x\geq8$ a real number, prove that
$\pi(x) < \frac{2x\log_2(\log_2(x))}{\log_2(x)}$.
I understand that I should show what I've tried, but I have no idea what to do next.
The only thing I know about this function is the prime number theorem, but I don't see how that could help me here.
What's strange is that this question was part of a national olympiad exam, suggesting this can be solved using elementary mathematics.
Does anyone know if these can be solved easily, without going into too advanced mathematics? Hints would be preferred over full solutions.
Question comes from 1989 Irish Mathematical Olympiad.
|
We note that$$\binom{2n}{n} \le 4^n$$and is divisible by all primes such that $n < p < 2n$.
Prove that if $n \ge 3$ is an integer, then$$\pi(2n) < \pi(n) + {{2n}\over{\log_2(n)}}.$$
$\pi(2n) - \pi(n)$ is the number of primes $p$ such that $n < p < 2n$. Each such prime exceeds $n$, and the product of all of them divides $\binom{2n}{n} \le 4^n$. If there are $k$ such primes, we therefore have the inequality$$n^k < 4^n \implies k < {{2n}\over{\log_2 n}} \implies \pi(2n) < \pi(n) + {{2n}\over{\log_2n}}.$$
Prove that if $n \ge 3$ is an integer, then$$\pi(2^n) < {{2^{n+1}\log_2(n-1)}\over{n}}.$$
We proceed by induction. The base case $n = 3$ is trivial to check. By the first part (applied with $2^n$ in place of $n$), we have$$\pi(2^{n+1}) < \pi(2^n) + {{2^{n+1}}\over{n}} < {{2^{n+1}\log_2(n-1)}\over{n}} + {{2^{n+1}}\over{n}}.$$Let us prove$${{2^{n+1}\log_2(n-1)}\over{n}} + {{2^{n+1}}\over{n}} \le {{2^{n+2} \log_2 n}\over{n+1}}.$$Dividing by $2^{n+1}$ and exponentiating gives$$(2n-2)^{n+1} \le n^{2n},$$which is true since$$4n \le (n+1)^2 \text{ and } n \le 2^n.$$
Deduce that, for all real numbers $x \ge 8$,$$\pi(x) < {{4x \log_2(\log_2(x))}\over{\log_2(x)}}.$$
Let$$n=\lfloor\log_2x\rfloor.$$Then$$n \le \log_2x <n+1.$$Therefore,$$\pi(x) \le \pi(2^{n+1}) \le \frac{2^{n+2}\log_2n}{n+1} \le \frac{4x \log_2(\log_2x)}{\log_2x},$$as desired.
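The first inequality can also be checked directly for small $n$ with a sieve (a Python sketch; the limit of $10{,}000$ is an arbitrary choice):

```python
import math

def primepi_table(limit):
    # sieve of Eratosthenes; table[m] = pi(m)
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, limit + 1, p):
                is_prime[q] = False
    table, count = [0] * (limit + 1), 0
    for m in range(limit + 1):
        count += is_prime[m]
        table[m] = count
    return table

pi = primepi_table(20_000)
for n in range(3, 10_000):
    assert pi[2 * n] < pi[n] + 2 * n / math.log2(n)
```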
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1434928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Understanding one point compactification I just read about one point compactification and i am having some difficulty in grasping the concept.
Does one point compactification mean that we are simply adding a point to a non compact space to make it compact.
For example, my book says that $S^n$ is the one point compactification of $\mathbb R^n$; I don't quite follow this.
I need some help in understanding this concept, maybe with the help of the above stated example.
|
No, you don't just add a point: you also have to define the topology on the newly created space (set). In the case of $\mathbb{R}^n$ one thinks of this as adding a point at $\infty$. A topology may be defined by declaring a set to be a neighbourhood of this point if it contains the exterior of some closed ball $B_R(0)$ (such exteriors can be thought of as balls around the point at infinity), and by defining a set not containing the point at infinity to be open if it is open in $\mathbb{R}^n$. In the general case the process is similar in principle: the open sets containing the added point are the complements of compact sets. See, e.g., Compactification
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate the following definite integrals using the Fundamental Theorem of Calculus Evaluate the following definite integrals using the Fundamental Theorem of Calculus
$$ \int_{-10}^1 s | 25 - s^2 | \; \mathrm d s. $$
my work:
$$ s=\pm 5 $$
$$ \int^{-5}_{-10} f(s) + \int^5_{-5} f(s) + \int^1_5 f(s) $$
Stuck here. Can't move to next step. Help please
|
Hint: See what the sign of $25-s^2$ is on each of the intervals for $s$, and write out the absolute value $|25-s^2|$ accordingly.
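Following the hint, $|25-s^2| = s^2-25$ on $[-10,-5]$ and $|25-s^2| = 25-s^2$ on $[-5,1]$, and the two polynomial pieces can be integrated exactly (a Python sketch with rational arithmetic; the value $-\frac{6201}{4}$ is what this split yields):

```python
from fractions import Fraction

def poly_integral(coeffs, a, b):
    # ∫_a^b p(s) ds for p given by coeffs [c0, c1, ...] meaning c0 + c1 s + ...
    F = lambda s: sum(Fraction(c, k + 1) * Fraction(s) ** (k + 1)
                      for k, c in enumerate(coeffs))
    return F(b) - F(a)

# s|25 - s^2| equals s(s^2 - 25) on [-10, -5] and s(25 - s^2) on [-5, 1]
part1 = poly_integral([0, -25, 0, 1], -10, -5)   # s^3 - 25 s
part2 = poly_integral([0, 25, 0, -1], -5, 1)     # 25 s - s^3
assert part1 + part2 == Fraction(-6201, 4)
```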
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Show the Set is Connected Let $f: \mathbb{R} \to \mathbb{R}$ be continuous. Show the set
$$A:= \{\alpha \in \mathbb{R}: \exists \{x_n\}_{n \in \mathbb{N}} \subset \mathbb{R} \, \, \text{ with} \, \, \lim_{n \to \infty} f(x_n) = \alpha \}$$
is connected.
Proof idea:
Since this is a subset of $\mathbb{R}$. This is equivalent to showing that, given $\alpha_1, \alpha_2 \in A$, if $\alpha_1 < z < \alpha_2$ then $z \in A$. Let $\epsilon < \min\{d(\alpha_1,z), d(\alpha_2,z)\}$. Let $\{x_n\} \to \alpha_1$ and $\{x'_n\} \to \alpha_2$. There exists an $N \in \mathbb{N}$ so that, for $f((x_N,x'_N)) \subset (\alpha_1 -\epsilon, \alpha_2-\epsilon)$.
Since $f$ is continuous, the IVT gives that there exists $z_1 \in (x_N,x'_N)$ with $f(z_1) = z$. Iteratively, this gives a sequence $\{z_n\}$ with $f(z_n) = z_0$ and $\lim \limits_{n \to \infty} f(z_n) = z$. Ergo $z \in A$ so $A$ is connected.
Any issues with this proof? Any other ways to attack this problem?
|
$A$ is the closure of $f(\Bbb R)$, i.e. $A=\overline{f(\Bbb R)}$.
As $\Bbb R$ is connected and $f$ is continuous, $f(\Bbb R)$ is connected; hence its closure is connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Gaussian elimination involving parameters The problem is :Solve the given system of equations involving the parameter a :
$$x+y+az=1\\
x+ay+z=a\\
ax+y+z=a^2\\
ax+ay+az=a^3 .$$
I tried to solve this using the Gaussian method, but I'm stuck because this is a $4\times3$ system; isn't Gaussian elimination only for square matrices? Please help.
|
Substitute $y=1-x-az$ from the first equation and then $z= - (a - x + 1)$ from the
second equation, assuming $a\neq 1$. Then the third equation gives $x=(a^2+2a+1)/(a+2)$, the case $a=-2$ being impossible. Then the fourth equation gives
$$
a(a+1)(a-1)=0.
$$
For $a=0$ we have a $3\times 3$ system with unique solution $(x,y,z)=(\frac{1}{2},\frac{1}{2},-\frac{1}{2})$. For $a=1$ we obtain $(x,y,z)=(1-y-z,y,z)$ with two free parameters $y,z$. Otherwise it follows that $a=-1$ and $(x,y,z)=(0,1,0)$.
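A quick exact-arithmetic check of the case analysis (a Python sketch; `residuals` simply evaluates the four equations with everything moved to the left-hand side):

```python
from fractions import Fraction as F

def residuals(a, x, y, z):
    return (x + y + a * z - 1,
            x + a * y + z - a,
            a * x + y + z - a * a,
            a * x + a * y + a * z - a ** 3)

# a = 0: unique solution (1/2, 1/2, -1/2)
assert residuals(0, F(1, 2), F(1, 2), F(-1, 2)) == (0, 0, 0, 0)
# a = -1: unique solution (0, 1, 0)
assert residuals(-1, 0, 1, 0) == (0, 0, 0, 0)
# a = 1: every (1 - y - z, y, z) works; spot-check a grid of values
for y in range(-3, 4):
    for z in range(-3, 4):
        assert residuals(1, 1 - y - z, y, z) == (0, 0, 0, 0)
```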
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
If X and Y are equal almost surely, then they have the same distribution According to this question,
If $2$ r.v are equal a.s. can we write $\mathbb P((X\in B)\triangle (Y\in B))=0$
why is this so?
I get the equivalent statement $\mathbb P([(X\in B)\triangle (Y\in B)]^C)=1$ intuitively, but I don't see how to show rigorously that either follows from $P(\omega \in \Omega | X(\omega) - Y(\omega) \in \{ 0 \}) = 1$.
|
$$\{X=Y\} \subseteq [(X\in B)\triangle (Y\in B)]^C$$
Suppose $\omega \in \{X=Y\}$; we show that $\omega \in [(X\in B)\triangle (Y\in B)]^C$.
Since $[(X\in B)\triangle (Y\in B)]^C = \big(\{X \in B\} \cup \{Y \notin B\}\big) \cap \big(\{X \notin B\} \cup \{Y \in B\}\big)$, we have to show that $\omega$ belongs to both of the following sets:
$\{X \in B\} \cup \{Y \notin B\}$
$\{X \notin B\} \cup \{Y \in B\}$
For $\{X \in B\} \cup \{Y \notin B\}$, we have two cases according to $X$: $X(\omega) \in B$ or $X(\omega) \notin B$.

*If $X(\omega) \in B$, then $\omega$ belongs to $\{X \in B\} \cup \{Y \notin B\}$.
*If $X(\omega) \notin B$, then $Y(\omega) = X(\omega) \notin B$, and hence $\omega$ belongs to $\{X \in B\} \cup \{Y \notin B\}$.
For $\{X \notin B\} \cup \{Y \in B\}$, we again have two cases according to $X$: $X(\omega) \in B$ or $X(\omega) \notin B$.

*If $X(\omega) \in B$, then $Y(\omega) = X(\omega) \in B$, and hence $\omega$ belongs to $\{X \notin B\} \cup \{Y \in B\}$.
*If $X(\omega) \notin B$, then $\omega$ belongs to $\{X \notin B\} \cup \{Y \in B\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Prove or disprove that $K$ is a subgroup of $G$.
Let $H$ be a subgroup of $G$, let $a$ be a fixed element of $G$, and let $K$ be the set of all elements of the form $aha^{-1}$, where $h \in H$. That is $$K = \{x \in G~ : x=aha^{-1}~ \text{for some }h\in H \}$$ Prove or disprove that $K$ is a subgroup of $G$.
I have absolutely no idea how to start this question. I know that a subset $K$ of $G$ is a subgroup of $G$ if and only if $K$ is nonempty, closed and contains inverses for every element in $K$. I don't know how to show that in this case, however.
|
Consider the mapping $f\colon G\to G$ defined by
$$
f(x)=axa^{-1}
$$
Then, for $x,y\in G$,
$$
f(x)f(y)=(axa^{-1})(aya^{-1})=a(xy)a^{-1}=f(xy)
$$
Thus $f$ is a homomorphism and $K=f(H)$, so $K$ is a subgroup.
Actually, since $f$ is even an automorphism, you can say that $K=f(H)$ is a subgroup if and only if $H$ is a subgroup.
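The conclusion is easy to test on a small example such as $S_3$ (a Python sketch with permutations as tuples; the subgroup criterion used is closure under $ab^{-1}$):

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]] for permutations given as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

G = list(permutations(range(3)))            # the symmetric group S_3
H = {(0, 1, 2), (1, 0, 2)}                  # subgroup generated by the transposition (0 1)

def is_subgroup(S):
    # nonempty S is a subgroup iff a b^{-1} ∈ S for all a, b ∈ S
    return all(compose(a, inverse(b)) in S for a in S for b in S)

assert is_subgroup(H)
for a in G:
    K = {compose(compose(a, h), inverse(a)) for h in H}
    assert is_subgroup(K)                   # every conjugate a H a^{-1} is again a subgroup
```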
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Iterating limit points ad infinitum to obtain $\{0\}$ Let $X\subseteq\mathbb{R}$. We define $D(X):=\left\{x\in \mathbb{R}\mid \forall n\in\mathbb{N}\;\exists y\in X\left[0<|x-y|<\frac1{2^n}\right]\right\}$, i.e. the set of limit points of $X$.
We can iterate $D$ by: $D^{(2)}(X):= D(D(X)), \ldots$. We can then also obtain $$D^{(\omega)}(X):=\bigcap_{n\in\mathbb N} D^{(n)}(X).$$
I am now looking for a set $X$ such that $D^{(\omega)}(X)=\{0\}$. Let us first consider the case $D(X)=\{0\}$. This can be done by taking $X_1=\left\{\frac1{2^n}\mid n\in\mathbb{N}\right\}$.
The case $D^{(2)}(X_1)=\{0\}$ can then be solved by $X_2=\left\{\frac1{2^n}+\frac1{2^m}\mid n,m\in\mathbb{N}\right\}$. Here $D(X_2)=X_1$ and therefore $D^{(2)}(X_2)=\{0\}$.
Now back to $D^{(\omega)}(X)$. One could imagine to take the limit case $X_\infty:=\bigcup\limits_{n\in\mathbb{N}} X_n$, where $X_n =\left\{\frac1{2^{k_0}}+\ldots+\frac1{2^{k_{n-1}}}\mid k_0,\ldots,k_{n-1}\in\mathbb{N}\right\}$. This will however not solve anything, since $X_\infty=\left\{\frac n{2^m}\mid n,m\in\mathbb{N}\text{ and } n\leq2^m\right\}$, of which we can easily see that $D(X_\infty)=X_\infty$. What would be the right approach to find a suitable $X$?
|
We can solve this by looking at your $X_1 = \{ \frac{1}{2^n} \mid n \in \mathbb{N} \}$. For every element $\frac{1}{2^n}$ of $X_1$ we can find some $\epsilon_n > 0$ such that $[\frac{1}{2^n}, \frac{1}{2^n} + \epsilon_n] \cap X_1 = \{\frac{1}{2^n}\}$. We can then map $[0,1]$ onto $[\frac{1}{2^n}, \frac{1}{2^n} + \epsilon_n]$ by an affine bijection and take the image of $X_1$ as $X_{2,n}$. Now take the union over all $n$ as $X_2$. Repeat ad infinitum.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Evaluate $\int \tan^6x\sec^3x \ \mathrm{d}x$ Integrate $$\int \tan^6x\sec^3x \ \mathrm{d}x$$
I tried to split the integrand as $$\tan^6x\sec^2x\,\sec x$$ but no luck for me. Help, thanks.
|
Here is what I got from Wolfram Alpha, condensed a little bit:
$$\int \tan^6(x)\sec^3(x)\,dx$$
$$= \int (\sec^2(x)-1)^3\sec^3(x)\,dx$$
$$= \int \Big(\sec^9(x) -3\sec^7(x)+3\sec^5(x)-\sec^3(x)\Big)\,dx$$
$$= \int \sec^9(x)\,dx -3\int \sec^7(x)\,dx+3\int \sec^5(x)\,dx-\int \sec^3(x)\,dx$$
Since $\int \sec^m(x)\,dx = \frac{\sin(x)\sec^{m-1}(x)}{m-1} + \frac{m-2}{m-1}\int \sec^{m-2}(x)\,dx$,
$$\Rightarrow \frac{1}{8}\tan(x)\sec^7(x) -\frac{17}{8}\int \sec^7(x)\,dx+3\int \sec^5(x)\,dx-\int \sec^3(x)\,dx$$
$$ = \frac{1}{8}\tan(x)\sec^7(x) -\frac{17}{48}\tan(x)\sec^5(x)+\frac{59}{48}\int \sec^5(x)\,dx-\int \sec^3(x)\,dx$$
$$ = \frac{1}{8}\tan(x)\sec^7(x) -\frac{17}{48}\tan(x)\sec^5(x)+\frac{59}{192}\tan(x)\sec^3(x)-\frac{5}{64}\int \sec^3(x)\,dx$$
$$ = \frac{1}{8}\tan(x)\sec^7(x) -\frac{17}{48}\tan(x)\sec^5(x)+\frac{59}{192}\tan(x)\sec^3(x)-\frac{5}{128}\Big[\tan(x)\sec(x)+\int \sec(x)\,dx\Big]$$
$$ = \frac{1}{8}\tan(x)\sec^7(x) -\frac{17}{48}\tan(x)\sec^5(x)+\frac{59}{192}\tan(x)\sec^3(x)-\frac{5}{128}\tan(x)\sec(x)-\frac{5}{128}\log|\tan(x)+\sec(x)|$$
$$ = \frac{1}{384}\Big(48\tan(x)\sec^7(x) -136\tan(x)\sec^5(x)+118\tan(x)\sec^3(x)-15\tan(x)\sec(x)-15\log|\tan(x)+\sec(x)|\Big)$$
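One can verify the final antiderivative by differentiating it numerically and comparing against $\tan^6(x)\sec^3(x)$ (a Python sketch; the step size for the central difference is an arbitrary choice):

```python
import math

def F(x):
    # the antiderivative from the final line above
    t, s = math.tan(x), 1.0 / math.cos(x)
    return (48 * t * s**7 - 136 * t * s**5 + 118 * t * s**3
            - 15 * t * s - 15 * math.log(t + s)) / 384

def integrand(x):
    return math.tan(x) ** 6 / math.cos(x) ** 3

h = 1e-5
for x in (0.1, 0.5, 1.0, 1.3):
    derivative = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert math.isclose(derivative, integrand(x), rel_tol=1e-4)
```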
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
about necessary and sufficient condition, again Here I have a sentence picked up from a first year book:
The statement “if A then B” is equivalent to the statement “A is a sufficient condition for B” and to the statement “B is a necessary condition for A"
I understand the first part, but I cannot see how the statement “if A then B”
leads to the conclusion that “B is a necessary condition for A".
Is a sufficient condition always a necessary condition?
Hope anyone could help for some explanation and examples.
Thanks
|
Here's my understanding:
It means that $A$ necessarily entails $B$.
The first statement tells you that it's enough to have $A$ to get $B$; but you could have $B$ without $A$, so $A$ need not be necessary for $B$.
The second statement tells you that given $A$, you have to have $B$: you cannot have $A$ without $B$, which is exactly what "$B$ is a necessary condition for $A$" means.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
How do I prove the norm of this linear functional is $2$? $f$ is defined on $C[-1, 1]$ by $$f(x)=\int_{-1}^0 x(t) dt - \int_0^1 x(t) dt.$$
I can show that $\|f\| \le 2$. I don't know how to show $\|f\| \ge 2$.
|
Of course if you can plug in
$$x(t) = \begin{cases} 1 & \text{if } t\in [-1, 0] \\ -1 & \text{if } t\in (0,1]\end{cases}$$
Then this function satisfies $f(x) = 2$. But this $x$ is not continuous. However, you can approximate it by $x_n \in C([-1,1])$, where
$$x_n (t) = \begin{cases}
1 & \text{if } t\in [-1, -\frac 1n)\\
-1 & \text{if } t\in (\frac 1n, 1]\\
- nt & \text{if } t\in [-\frac 1n, \frac 1n]
\end{cases}$$
Then $\|x_n\| = 1$ and
$$f(x_n) = 1- \frac 1n + \frac 1{2n} + 1- \frac 1n + \frac 1{2n} = 2 - \frac{1}{n}$$
Thus $\|f\| \ge 2- \frac{1}{n}$ for all $n$.
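A quick numerical check that $f(x_n)=2-\frac1n$ (a Python sketch; I read the middle piece of $x_n$ as $-nt$):

```python
import math

def x_n(t, n):
    # the approximating functions: 1 on [-1, -1/n), -n t on [-1/n, 1/n], -1 on (1/n, 1]
    if t < -1.0 / n:
        return 1.0
    if t > 1.0 / n:
        return -1.0
    return -n * t

def f(x, n, steps=20_000):
    # midpoint rule for ∫_{-1}^0 x(t) dt - ∫_0^1 x(t) dt
    h = 1.0 / steps
    left = h * sum(x(-1.0 + (i + 0.5) * h, n) for i in range(steps))
    right = h * sum(x((i + 0.5) * h, n) for i in range(steps))
    return left - right

for n in (2, 5, 10, 100):
    assert math.isclose(f(x_n, n), 2.0 - 1.0 / n, abs_tol=1e-3)
```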
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1435937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Circular Banked Track Friction
I've tried answering this question by resolving forces, then finding an expression for friction and inserting the given data so I can prove $F = 0$. However, I never get an answer of $0$.
How do I do it?
|
Notice, mass of the car $m=1\ \text{tonne}=1000\ \text{kg}$,
speed of the car $v=72\ km/hr=20\ m/sec$,
radius of the circular path $r=160\ m$
$$\tan\alpha=\frac{1}{4}\iff \sin\alpha=\frac{1}{\sqrt {17}}, \cos \alpha=\frac{4}{\sqrt{17}}$$
$\color{red}{\text{Method 1}}$:Resolving components
Component of centrifugal force acting on the center of gravity of car parallel to & up the plane $$F_1=\frac{mv^2}{r}\cos \alpha$$
Component of gravitational force ($mg$) acting on the center of gravity of car parallel to & down the plane $$F_2=mg\sin \alpha$$
The lateral frictional force, $\color{red}{F_f}$ (parallel to & down the plane) acting between the tyres & the track $$F_f=F_1-F_2$$ $$=\frac{mv^2}{r}\cos \alpha-mg\sin \alpha=m\left(\frac{v^2}{r}\cos \alpha-g\sin \alpha\right)$$
$$=1000\left(\frac{20^2}{160}\frac{4}{\sqrt{17}}-10\frac{1}{\sqrt{17}}\right)$$
$$=1000\left(\frac{10}{\sqrt{17}}-\frac{10}{\sqrt{17}}\right)=\color{red}{0}$$
Hence, there is no lateral frictional force between tyres & the track.
$\color{red}{\text{Alternative method}}$: There is a direct condition for zero frictional force: $$\tan \alpha=\frac{v^2}{rg}$$ Setting the corresponding values, we get
$$\frac{1}{4}=\frac{20^2}{160\times 10}$$ $$\frac{1}{4}=\frac{1}{4}$$ The condition is satisfied hence, there is no lateral frictional force between tyres & the track.
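Both methods can be checked numerically with a short Python sketch (using $g=10\ m/s^2$, the value implicit in the answer):

```python
m = 1000.0   # kg (1 tonne)
v = 20.0     # m/s (72 km/h)
r = 160.0    # m
g = 10.0     # m/s^2, as used in the answer
sin_a = 1 / 17 ** 0.5
cos_a = 4 / 17 ** 0.5

# Method 1: F_f = F1 - F2
friction = m * (v ** 2 / r * cos_a - g * sin_a)
print(friction)          # ~0 (up to float rounding)

# Alternative method: tan(alpha) = v^2 / (r g)
print(v ** 2 / (r * g))  # 0.25 = tan(alpha)
```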
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is null matrix skew-symmetric My question is:
Is null matrix skew-symmetric?
I think it is true as A'=-A is trivially satisfied for a null matrix.
Please help.
|
To test if a matrix is skew symmetric, you need to show that:
$$-A=A^T$$
For example in the case of $2\times 2$ matrices, you need:
$$\begin{bmatrix}-a&-b\\-c&-d\end{bmatrix}=\begin{bmatrix}a&c\\b&d\end{bmatrix}$$
When we set $a=b=c=d=0$ this is trivially satisfied.
Also this is satisfied non-trivially for example in the reals, when $-a=a,-b=c,d=-d$, say:
$$M_c=\begin{bmatrix}0&-c\\c&0\end{bmatrix}:c\in \Bbb R$$
Clearly $-M_c = \begin{bmatrix}0&c\\-c&0\end{bmatrix}$ and $M_c^T = \begin{bmatrix}0&c\\-c&0\end{bmatrix}$
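As a small computational illustration (a Python sketch), one can test the defining condition $-A = A^T$ entrywise, i.e. $A_{ij} = -A_{ji}$, and see that the zero matrix passes trivially while the family $M_c$ passes non-trivially:

```python
def is_skew_symmetric(A):
    # checks A[i][j] == -A[j][i] for every entry, i.e. -A == A^T
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

zero = [[0, 0], [0, 0]]
M_c = [[0, -3], [3, 0]]        # the family M_c with c = 3
not_skew = [[1, 2], [3, 4]]

print(is_skew_symmetric(zero))      # True
print(is_skew_symmetric(M_c))       # True
print(is_skew_symmetric(not_skew))  # False
```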
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Parameters such that the matrix is diagonalizable Given the matrix
$$A= \begin{pmatrix}
1 & 0 & 0\\
k & 2 &0 \\
m& n & 1
\end{pmatrix}$$
where $k, m, n \in \mathbb{R}$, find the values of $k, m, n$ such that the matrix is diagonalizable.
Solution
The eigenvalues of $A$ are the solutions of the equation:
$$\begin{vmatrix}
1-\ell & 0 &0 \\
k& 2-\ell &0 \\
m& n &1-\ell
\end{vmatrix}=0 \Leftrightarrow \left ( 1-\ell \right )^2 \left ( 2-\ell \right )=0 $$
meaning that the eigenvalues of $A$ are $\ell_1=1$ (double) and $\ell_2=2$ (simple).
In order for the matrix to be diagonalizable we must have $\dim \mathcal{V}(1)=2$. Also, from the rank-nullity theorem we have:
$$\dim \mathcal{V}(1)+{\rm r}(A-\mathbb{I}_3)=3$$
implying that the rank of the $A-\mathbb{I}_3$ is $1$. Now on to find $\mathcal{V}(1)$. In order to do so we have to solve the system $(A-\mathbb{I}_3)X=0$ which is easily reduced down to the system:
$$\left\{\begin{matrix}
kx+y &=0 \\
mx+ ny&=0
\end{matrix}\right.$$
And I got stuck at this point. I don't get enough equations so that I can actually determine the values of the parameters. Any help?
|
The system of equations that defines the kernel of $A-I$ must have rank $1$ for $\ker(A-I)$ to have dimension $2$, because of the rank-nullity theorem. This means the condition is that the vectors $(k,1)$ and $(m,n)$ must be collinear. Explicitly:
$$kn=m.$$
You should be able to check that, given that relation, the minimal polynomial of $A$ is $(x-1)(x-2)$.
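That final claim is easy to verify computationally: with $m=kn$, the product $(A-I)(A-2I)$ vanishes, so the minimal polynomial divides $(x-1)(x-2)$ and $A$ is diagonalizable. A minimal Python sketch (my example values $k=2$, $n=3$ are arbitrary):

```python
def matmul(X, Y):
    # 3x3 matrix product
    return [[sum(X[i][t] * Y[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

def min_poly_is_x1_x2(k, m, n):
    # returns True iff (A - I)(A - 2I) = 0
    A = [[1, 0, 0], [k, 2, 0], [m, n, 1]]
    AmI  = [[A[i][j] - (i == j) for j in range(3)] for i in range(3)]
    Am2I = [[A[i][j] - 2 * (i == j) for j in range(3)] for i in range(3)]
    P = matmul(AmI, Am2I)
    return all(P[i][j] == 0 for i in range(3) for j in range(3))

print(min_poly_is_x1_x2(2, 6, 3))   # m = kn = 6 -> True
print(min_poly_is_x1_x2(2, 7, 3))   # m != kn   -> False
```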
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
differential equation $y'=\sqrt{|y|(1-y)}$ Consider the differential equation
$y'=\sqrt{|y|(1-y)}$ with $y\le1$
I know that $y=0$ and $y=1$ are solutions.
The problem is to compute the general solution for $0<y<1$. On which interval is the solution defined?
I don't know where or how to start.
$y$ is always positive, so the absolute bars can be left out, but that doesn't seem to make it easier.
Is there anyone who can give me a hint?
|
Given $$\displaystyle \frac{dy}{dx} = \sqrt{|y|\cdot (1-y)}=\sqrt{y(1-y)}\;,$$ because $\; 0<y<1$.
So $$\displaystyle \frac{dy}{\sqrt{y(1-y)}} = dx\Rightarrow \int \frac{1}{\sqrt{y(1-y)}}dy = \int dx$$
Now Put $$\displaystyle y=\left(z+\frac{1}{2}\right)\;,$$ Then $dy=dz$
So we get $$\displaystyle \int\frac{1}{\sqrt{\left(\frac{1}{2}\right)^2-z^2}}dz = \int dx$$
Now Put $\displaystyle z=\frac{1}{2}\sin \phi\;,$ Then $\displaystyle dz =\frac{1}{2}\cos \phi d\phi $
So we get $$\displaystyle \int\frac{\cos\phi}{\cos \phi}d\phi = \int dx\Rightarrow \phi=x+\mathcal{C}$$
So we get $$\displaystyle \sin^{-1}\left(2z\right) =x+\mathcal{C}$$
So we get $$\displaystyle \sin^{-1}\left(2y-1\right) = x+\mathcal{C}\Rightarrow 2y-1 = \sin(x+\mathcal{C})$$
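As a sanity check (a small Python sketch), the solution obtained from $z=\frac{1}{2}\sin(x+\mathcal{C})$ and $y=z+\frac12$, namely $y(x)=\frac{1+\sin(x+\mathcal{C})}{2}$, does satisfy $y'=\sqrt{y(1-y)}$ on intervals where $\cos(x+\mathcal{C})\ge 0$ (the constant $\mathcal{C}=0.3$ below is an arbitrary choice):

```python
import math

C = 0.3  # arbitrary constant of integration

def y(x):
    # candidate solution: y = z + 1/2 with z = (1/2) sin(x + C)
    return (1.0 + math.sin(x + C)) / 2.0

def dy(x, h=1e-6):
    # central-difference numerical derivative
    return (y(x + h) - y(x - h)) / (2.0 * h)

# compare y' with sqrt(y(1-y)) where cos(x + C) >= 0
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(x, dy(x), math.sqrt(y(x) * (1.0 - y(x))))
```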
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Hash Collision Probability Approximation If an item is chosen at random $k$ times from a set of $n$ items, the probability the chosen items are all different is exactly $\dfrac{n^\underline{k}}{n^k}=\dfrac{n!}{(n-k)!n^k}$. For large $n$, the expression is said to be approximately equal to $\exp\left(\dfrac{-k(k-1)}{2n}\right)$, which works out to a probability of collision of about $\dfrac{k^2}{2n}$ for $1 \ll k \ll n$. How does one derive the former approximation? Apparently the Stirling formula first, and then I see some terms that remind me of $\left(1+\dfrac1x\right)^x \approx e$, but it doesn’t quite work out for me.
|
The probability of choosing different items is
$$
\left(1-\frac0n\right)\left(1-\frac1n\right)\left(1-\frac2n\right)\cdots\left(1-\frac{k-1}n\right)\;.
$$
For $n\gg k^2$, we can keep only the terms up to first order in $\frac1n$ to obtain
$$
1-\frac1n\sum_{j=0}^{k-1}j=1-\frac{(k-1)k}{2n}\;.
$$
To get the exponential form, you can either approximate this directly as $\exp\left(\dfrac{-k(k-1)}{2n}\right)$ or first approximate the factors in the product:
$$
\exp\left(-\frac0n\right)\exp\left(-\frac1n\right)\exp\left(-\frac2n\right)\cdots\exp\left(-\frac{k-1}n\right)=\exp\left(-\frac1n\sum_{j=0}^{k-1}j\right)=\exp\left(-\frac{(k-1)k}{2n}\right)\;.
$$
The error in the exponential form is $O\left(k^3/n^2\right)$, whereas the error in the first form is $O\left(k^4/n^2\right)$, since the exponential form only has incorrect $\frac1{n^2}$ terms for each factor individually, whereas the first version drops an $\frac1{n^2}$ term for each pair of factors.
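A quick numeric comparison (Python sketch) of the exact no-collision probability with the exponential approximation, for example values of $n$ and $k$ of my own choosing:

```python
import math

def exact_no_collision(n, k):
    # product of (1 - j/n) for j = 0 .. k-1
    p = 1.0
    for j in range(k):
        p *= 1.0 - j / n
    return p

def approx_no_collision(n, k):
    return math.exp(-k * (k - 1) / (2.0 * n))

n, k = 10 ** 6, 100
print(exact_no_collision(n, k))   # ~0.99506
print(approx_no_collision(n, k))  # ~0.99506
```

The two agree to about $10^{-7}$ here, consistent with the stated $O(k^3/n^2)$ error.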
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Measurability of the supremum of a Brownian motion After reading some text books about Brownian Motion i often encountered the following object
$$
\sup_{t \in [0, T]} B_t,
$$
where $(B_t)_{t \geq 0}$ is a Brownian Motion.
But how do I see that this object is measurable? The problem is that the supremum is taken over an uncountable set. Am I missing something trivial, or why does no textbook mention why this object is well-defined as a random variable?
|
Typically a Brownian motion is defined to have continuous sample paths. If you take some countable, dense subset $S\subset [0,T]$, you then have
$$\sup_{t\in S} B_t = \sup_{t\in [0,T]} B_t$$
by continuity of $B_t$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
For what values $m \in \mathbb{N}$, $\phi(m) | m$, where $\phi(m)$ is the Euler function. I am working with elementary number theory and, although in theory the Euler $\phi$ function seems easy to understand, I am having some problems with the exercises.
For example, in this question:
For what values $m \in \mathbb{N}$, $\phi(m) | m$, where $\phi(m)$ is the Euler function,
I know two expressions for the function $\phi$, but I tried to use them to solve this problem and failed. Could someone help me?
Thanks a lot.
Here is what I tried:
What I did was:
$\phi(m) = (p_1^{\alpha_1} - p_1^{\alpha_1 - 1})\ldots (p_k^{\alpha_k} - p_k^{\alpha_k - 1}) \Rightarrow$
$\frac{m}{\phi(m)} = \prod_k \frac{p_k^{\alpha_k}}{p_k^{\alpha_k}(1-\frac{1}{p_k})}= \prod_k \frac{p_k}{p_k-1}.$
Then, I can't follow from here.
|
$m=1$ works. Let $m=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ with $2\le p_1<p_2<\cdots <p_k$. Then
$$\phi(m)=p_1^{\alpha_1-1}p_2^{\alpha_2-1}\cdots p_k^{\alpha_k-1}\left(p_1-1\right)\left(p_2-1\right)\cdots\left(p_k-1\right)$$
$$\phi(m)\mid m\iff \frac{p_1p_2\cdots p_k}{\left(p_1-1\right)\left(p_2-1\right)\cdots\left(p_k-1\right)}\in\Bbb Z$$
If $k=1$, then $$p_1-1\mid p_1\implies p_1-1\mid p_1-\left(p_1-1\right)\implies p_1-1\mid 1\implies p_1=2$$
Then $m=2^{\alpha_1}$, and indeed it's a solution.
If $k\ge 2$, then we'll prove $p_1=2$.
Assume for contradiction $p_1\ge 3$. Then exists a prime $q$ such that
$$q\mid p_1-1\mid p_1p_2\cdots p_k\implies q\mid p_1p_2\cdots p_k\implies$$
$$ \left(\left(q\mid p_1\right) \text{ or } \left(q\mid p_2\right) \text{ or } \ldots \text{ or } \left(q\mid p_k\right)\right)\implies q\in\{p_1,p_2,\ldots, p_k\}$$
But then $q$ is too large to divide $p_1-1$. Contradiction.
Therefore $p_1=2$. Then $$\frac{2p_2\cdots p_k}{\left(p_2-1\right)\cdots\left(p_k-1\right)}\in\Bbb Z$$
Let $p$ be a prime divisor of $p_2-1\ge 2$. Then
$$p\mid p_2-1\mid 2p_2\cdots p_k\implies p\mid 2p_2\cdots p_k\implies p\in\{2,p_2,\ldots, p_k\},$$
so $p=2$, because if $p\ge p_2$, then $p\nmid p_2-1$.
Therefore $p_2-1=2^h$ for some $h\in\Bbb Z^+$, so
$$2^h=p_2-1\mid 2p_2\cdots p_k\implies 2^{h-1}\mid p_2\cdots p_k$$
But $p_2\cdots p_k$ is odd, so $h=1$, so $p_2=3$.
Assume for contradiction $k\ge 3$. Then $4$ divides the denominator of $\frac{2p_2\cdots p_k}{\left(p_2-1\right)\cdots\left(p_k-1\right)}$ but not the numerator, so $\frac{2p_2\cdots p_k}{\left(p_2-1\right)\cdots\left(p_k-1\right)}$ is not an integer. Therefore $k=2$.
Then $m=2^{\alpha_1}3^{\alpha_2}$, which is indeed a solution.
Answer: $m\in\{1,2^t,2^a3^b\},\, t,a,b\ge 1$.
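The answer can be double-checked by brute force with a short Python sketch (the bound $2000$ is my arbitrary choice):

```python
from math import gcd

def phi(m):
    # Euler totient by direct counting (slow but dependency-free)
    return sum(1 for i in range(1, m + 1) if gcd(i, m) == 1)

def expected(m):
    # True iff m = 1, m = 2^t, or m = 2^a * 3^b with a, b >= 1
    if m == 1:
        return True
    if m % 2:
        return False
    while m % 2 == 0:
        m //= 2
    while m % 3 == 0:
        m //= 3
    return m == 1

solutions = [m for m in range(1, 2000) if m % phi(m) == 0]
assert solutions == [m for m in range(1, 2000) if expected(m)]
print(solutions[:12])  # [1, 2, 4, 6, 8, 12, 16, 18, 24, 32, 36, 48]
```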
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Support of a measure and set where it is concentrated I have some problems understanding the difference between the notion of support of a measure and saying that a measure is concentrated on a certain set.
If I have a $\sigma$-algebra and $\mu$ a measure, is it equivalent to ask the support of $\mu $ to be contained in a measurable set $A$ and $\mu$ to be concentrated on $A$?
|
Some supplementary information regarding Dominik's answer.
1) For most authors and in most cases of practical interest, the definition of support given by Dominik is appropriate. However, be aware that some authors do not distinguish between supports of a measure $\mu$ and sets where the measure $\mu$ concentrates ---the latter are also called carriers. See Billingsley (1995/1976, p. 23) and Davidson (1994).
2) The characterization of the support of a measure in terms of the smallest closed set with full measure is not the most general one because it does not always exist (e.g., Dudley 2002/1989, p. 238). For a Borel space $X$, the most general definition that I know is the following: The support of $\mu $ is the complement of the union of the $\mu$-null open sets, i.e., $\mathop{\mathrm{supp}} \mu:=\left(\bigcup\{O \subset X: O \text{ open and }\mu(O)=0\}\right)^c $.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1436928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is a permutation pattern? The wikipedia entry for permutation pattern gives this as an example:
For example, in the permutation π = 391867452, π1=3 and π9=2. A permutation π is said to contain the permutation σ if there exists a subsequence of (not necessarily consecutive) entries of π that has the same relative order as σ, and in this case σ is said to be a pattern of π, written σ ≤ π. Otherwise, π is said to avoid the permutation σ. For example, the permutation π = 391867452 contains the pattern σ = 51342, as can be seen in the highlighted subsequence of π = 391867452 (or π = 391867452 or π = 391867452). Each subsequence (91674, 91675, 91672) is called a copy, instance, or occurrence of σ. Since the permutation π = 391867452 contains no increasing subsequence of length four, π avoids 1234.
In that example, is a pattern of 51342 the same as a pattern of, say, 62453 or 92483? I mean, is it just the increase/decrease structure together with the relative magnitude of each digit with respect to every other digit in the sequence that makes a pattern? Or must a pattern be "reduced" so that the lowest value in the pattern is $1$ and consecutive values in the sorted pattern differ by exactly $1$?
|
Rather than thinking of a pattern as a single permutation, think of it as an equivalence class with respect to order isomorphism (i.e. if positions compare in the same way, then the entries in those positions compare in the same way, too). That equivalence class has a canonical representative where the $k$th smallest position/letter is $k$. In your example, $51342$, $62453$ and $92483$ are order-isomorphic, and the canonical representative of their equivalence class is $51342$.
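The "replace the $k$th smallest entry by $k$" map is easy to compute; a minimal Python sketch confirms that all three sequences from the question reduce to the same canonical representative:

```python
def canonical(perm):
    # replace the k-th smallest entry by k (entries assumed distinct)
    ranked = sorted(perm)
    return tuple(ranked.index(v) + 1 for v in perm)

print(canonical((5, 1, 3, 4, 2)))  # (5, 1, 3, 4, 2)
print(canonical((6, 2, 4, 5, 3)))  # (5, 1, 3, 4, 2)
print(canonical((9, 2, 4, 8, 3)))  # (5, 1, 3, 4, 2)
```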
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that the area under the curve of a measurabe function is Lebesgue measurable Problem: Let $E \subset R^n$ be a Lebesgue measurable set and $f : E \to [0,\infty)$ be a Lebesgue measurable function. Suppose
$$A = \{(x,y) \in R^{n+1} : 0 \le y \le f(x), x\in E\}.$$
Let $\lambda_1$ denote Lebsgue measure in $R^1$, $\lambda_n$ denote the Lebesgue measure in $R^n$, and $\lambda_{n+1}$ denote the Lebesgue measure in $R^{n+1}$.
(a) Show that the set $A$ is Lebesgue measurable on $R^{n+1}$.
(b) Show that
$$\lambda_{n+1}(A) = \int_E f(x) d\lambda_n (x) = \int_0^\infty \lambda_n (\{x \in E : f(x) \ge y\}) d\lambda_1(y).$$
My attempt at a solution: I know that this is just a basic application of the Fubini-Tonelli theorem, but I can't seem to wrap my head around it for some reason. For (a), I know that the set $A$ is the "area under the curve." But I'm not sure how to show that this is measurable from Fubini Tonelli. For part (b), the first equality seems obvious, and I don't know how much proof is necessary. For the second equality, I have
$$\int_E f(x)d\lambda_n(x) = \int_0^\infty \int_{R^n} [f(x) \cdot \chi_E(x)] d\lambda_n(x) d\lambda_1(y),$$
by the Tonelli theorem, since $f$ is non-negative. But I don't quite see how to pull off that inner integration to get the integrand we want.
I would really appreciate some hints/intuitive explanations to point me in the right direction. Thanks!
|
For $(a)$, notice that the result is obvious if $f$ is a simple function. Indeed the region under the graph of a non-negative simple function looks like the disjoint union of sets of the form $A_i \times [0,a_i]$, which are of course measurable. Now let $f$ be any non-negative measurable function and let $\{s_n\}$ be an increasing sequence of non-negative simple functions that converges to $f$ for almost every $x \in E$. To conclude it is enough to notice that, up to a set of measure $0$, the area under the graph of $f$ is the union of the areas under the graphs of the simple functions $s_n$. You should be able to write down all the details.
For $(b)$, as you mentioned in the question, all you need to do is to apply Tonelli's theorem to $$\int_{\mathbb{R}^{n + 1}}\chi_A\, d\lambda^{n+1}.$$
The two expressions that you are after are the two iterated integrals. I think it helps noticing that $$\chi_A(x,y) = \chi_E(x)\chi_{[0,f(x)]}(y).$$
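For intuition, here is a numeric illustration (a Python sketch) of the identity in $(b)$ for a concrete example of my own choosing, $f(x)=x^2$ on $E=[0,1]$, where both sides equal $\frac13$:

```python
N = 100000  # grid size for midpoint-rule quadrature

# left side: integral of f(x) = x^2 over [0,1]
lhs = sum(((i + 0.5) / N) ** 2 for i in range(N)) / N

# right side: {x in [0,1] : x^2 >= y} = [sqrt(y), 1] has measure 1 - sqrt(y)
rhs = sum(1.0 - ((i + 0.5) / N) ** 0.5 for i in range(N)) / N

print(lhs, rhs)  # both ~ 1/3
```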
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to factor$ x^2+2xy+y^2 $ I know the formula $x^2 + 2xy + y^2 = (x + y)^2$ by heart and in order to understand it better I am trying to solve the formula myself; however, I don't understand how step 4 is derived from step 3.
$$x^2+2xy+y^2\qquad1$$
$$x^2+xy+xy+y^2\qquad2$$
$$x(x+y) + y(x+y) \qquad3$$
$$(x+y)(x+y) \qquad 4$$
How does one get from step 3 to step 4? I don't know how to do that process.
|
$$x\color{blue}{(x+y)} + y \color{blue}{(x+y)} \qquad3$$
Let $\color{blue}{b=(x+y)}$
So, now you have.
$$x\color{blue}{(b)} + y\color{blue}{(b)} = x\color{blue}{b} +y\color{blue}{b} =\color{blue}{b}x+\color{blue}{b}y $$
Now factor out the $\color{blue}{b}$:
$$\color{blue}{b} (x+y)$$
Remember, $\color{blue}{b=x+y}$, so substitute:
$$\color{blue}{(x+y)}(x +y)=(x+y)^2$$
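If it helps to see the identity hold concretely, a tiny Python sketch checks $x^2+2xy+y^2=(x+y)^2$ exactly on a grid of integers:

```python
# exact integer spot-check of the factorization
for x in range(-20, 21):
    for y in range(-20, 21):
        assert x * x + 2 * x * y + y * y == (x + y) ** 2
print("x^2 + 2xy + y^2 == (x + y)^2 on the whole grid")
```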
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 3
}
|
Velocity in applications of integration The question reads as this:
Person A starts riding a bike at noon (t=0) from Niwot to Berthoud, a distance of 20km, with velocity $v(t)=15/(t+1)^2$.
Person B starts riding a bike at noon as well (t=0) from Berthoud to Niwot with velocity $u(t)=20/(t+1)^2$
Assume distance is measured in kilometers and time is in hours.
A. Make a graph of Person A's distance to Niwot as a function of time
B. Make a graph of Person B's distance to Berthoud as a function of time.
C. How far has each person traveled when they meet? When do they meet?
D. More generally, if the rider's speeds are $v(t)=A/(t+1)^2$ and $u(t)=B/(t+1)^2$ and the distance between the towns is D, what conditions on A,B,and D must be met to ensure the riders pass each other?
E.With the velocity functions given in part(d), make a conjecture about the maximum distance each person can ride, given unlimited time.
So when I started this problem, I first took the antiderivative of both velocity functions to get their respective position functions, getting $v(t)=-15/(t+1)$ and $u(t)=-20/(t+1)$. But if the position of each is set to 20 (20km to their destinations, the result I get in time is a negative number for both. Also, the two position functions never actually intersect, making me believe that the two riders never actually pass each other at one point. If anyone could help me find a way to approach this problem, it would be greatly appreciated.
|
Let's say that Niwot is at (0,0) and Berthoud is at (20,0) giving the following:
$$A.)\quad P_v = \int_{0}^{t} V_{vt} = \int_{0}^{t}\frac{15\,dt}{(t+1)^2} = \frac{15t}{t+1}$$
$$B.) \quad P_u = 20 - \int_{0}^{t} V_{ut} = 20 -\int_{0}^{t}\frac{20\,dt}{(t+1)^2} = 20-\frac{20t}{t+1}$$
$$C.) \quad P_u = P_v\,\, \therefore \,\,\frac{15t}{t+1} = 20-\frac{20t}{t+1}\,\, \therefore \,\, t = \frac{4}{3}$$
I am fairly certain these are correct, but I haven't checked my work very much... some fact checking would be nice! I'm relying on the fact that distance covered from $t=a$ to $t=b$ is $\int_{a}^{b} v_x dt$
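A quick exact check in Python (using rational arithmetic) confirms the meeting time $t=\frac43$ and also answers part C: both positions agree at $\frac{60}{7}$ km from Niwot, so Person A has ridden $\frac{60}{7}$ km and Person B has ridden $\frac{80}{7}$ km, which sum to $20$:

```python
from fractions import Fraction

t = Fraction(4, 3)            # claimed meeting time, in hours
P_v = 15 * t / (t + 1)        # Person A's distance from Niwot
P_u = 20 - 20 * t / (t + 1)   # Person B's distance from Niwot
print(P_v, P_u)               # both 60/7 km
print(20 * t / (t + 1))       # Person B has ridden 80/7 km
```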
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Explicitly give infinitely many rational numbers below the number $9.123412341234\ldots$ Explicitly give infinitely many rational numbers below the number $9.123412341234\ldots$, and infinitely many above it.
I know this number is $\frac{91225}{9999}$, but how can I give that set of numbers?
Also I have to give infinitely many irrational numbers below the number $1.989898989898\ldots$, and infinitely many above it.
Any ideas or hint?
|
The set of rationals $\frac{n}{9999}$ where $n < 91225$ and $n$ is an integer is countably infinite, with cardinality $\aleph_0$. Likewise, the set of rationals $\frac{n}{9999}$ where $n > 91225$ and $n$ is an integer is countably infinite, with cardinality $\aleph_0$. In general, if you have a rational number $\frac{a}{b}$, there are countably infinitely many rational numbers $\frac{a-n}{b}$ and $\frac{a+n}{b}$ less than and greater than $\frac{a}{b}$ respectively, for positive rational $n$.
The second decimal expansion you give is equal to $\frac{197}{99}$. Since that part asks for irrational numbers, perturb by irrational amounts instead: for example, $\frac{197}{99}-\frac{\sqrt 2}{n}$ and $\frac{197}{99}+\frac{\sqrt 2}{n}$ for positive integers $n$ give two infinite sets of irrationals below and above it.
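As a quick sanity check in Python (exact rational arithmetic), the two repeating decimals equal the stated fractions, and the family $\frac{91225\mp j}{9999}$ really does sit below/above the first number:

```python
from fractions import Fraction

a = Fraction(91225, 9999)
b = Fraction(197, 99)
print(float(a))  # 9.12341234...
print(float(b))  # 1.98989898...

below = [Fraction(91225 - j, 9999) for j in range(1, 6)]
above = [Fraction(91225 + j, 9999) for j in range(1, 6)]
assert all(q < a for q in below) and all(q > a for q in above)
```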
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
$C(\textbf{R})$ is a vector subspace of $\textbf{R}^\textbf{R}$? How do I see that $C(\textbf{R})$ is a vector subspace of $\textbf{R}^\textbf{R}$?
|
Let $\mathcal V$ be a vector space over a field $\mathbb F$. To check if a subset $\mathcal W\subset \mathcal V$ is a subspace of $\mathcal V$ it is sufficient to check:
*
*$0\in \mathcal W$
*$v,w\in\mathcal W\implies v+w\in\mathcal W$ for all $v,w\in\mathcal W$
*$v\in\mathcal W, a\in\mathbb F\implies a\cdot v\in\mathcal W$ for all $v\in\mathcal W,a\in\mathbb F$
In this case you have $\mathcal V=\mathbf{R}^{\mathbf{R}}$ and $\mathcal W=C(\mathbf{R})$ (which I assume to be the set of all continuous functions $f:\mathbf{R}\rightarrow\mathbf{R}$). Now apply your knowledge about continuous functions (e.g. the sum of two continuous functions) to check if $\mathcal W$ is a subspace of $\mathcal V$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How can the proof by induction be reliable when it depends on the number of steps? Yesterday, I got a math problem as follows.
Determine with proof whether $\tan 1^\circ$ is an irrational or a rational number?
My solution (method A)
I solved it with the following ways.
I can prove that $\tan 3^\circ$ is an irrational, the proof will be given later because it takes much time to type.
Let's assume that $\tan 1^\circ$ is a rational number. As a result,
$$ \tan 2^\circ = \frac{2 \tan 1^\circ }{1-\tan^2 1^\circ}$$ becomes a rational number. Here we don't know whether $\tan 2^\circ$ is an irrational or a rational. Let's consider each case separately as follows:
*
*If $\tan 2^\circ$ is actually an irrational then a contradiction appears in $\tan 2^\circ = \frac{2 \tan 1^\circ}{1 - \tan^2 1^\circ}$. Thus $\tan 1^\circ$ cannot be a rational.
*If $\tan 2^\circ$ is actually a rational then we can proceed to evaluate
$$\tan 3^\circ = \frac{\tan 2^\circ + \tan 1^\circ}{1 - \tan 2^\circ \tan 1^\circ}$$ A contradiction again appears here because I know that $\tan 3^\circ$ is actually an irrational. Thus $\tan 1^\circ$ cannot be a rational.
For both cases whether $\tan 2^\circ$ is rational or not , it leads us to the conclusion that $\tan 1^\circ$ is an irrational. End.
My friend's solution (Method B)
Assume that $\tan 1^\circ$ is a rational number. If $\tan n^\circ$ ($1\leq n\leq 88$, $n$ is an integer) is a rational number, then
$$ \tan (n+1)^\circ = \frac{\tan n^\circ + \tan 1^\circ}{1-\tan n^\circ \tan 1^\circ}$$ becomes a rational number.
Consequently, $\tan N^\circ$ ($1\leq N\leq 89$, $N$ is an integer) becomes a rational number.
But that is a contradiction, for example, $\tan 60^\circ = \sqrt 3$ that is an irrational number.
Therefore, $\tan 1^\circ$ is an irrational number.
My friend's solution with a shortened interval (Method C)
Consider my friend's proof and assume that we only know that $\tan 45^\circ =1$ which is a rational number. Any $\tan n^\circ$ for ($1\leq n\leq 44$, $n$ is an integer) is unknown (by assumption).
Let's shorten his interval from ($1\leq n\leq 88$, $n$ is an integer) to ($1\leq n\leq 44$, $n$ is an integer).
Use his remaining proof as follows.
For ($1\leq n\leq 44$, $n$ is an integer), $$ \tan (n+1)^\circ = \frac{\tan n^\circ + \tan 1^\circ}{1-\tan n^\circ \tan 1^\circ}$$ becomes a rational number.
Consequently, $\tan N^\circ$ ($1\leq N\leq 45$, $N$ is an integer) becomes a rational number.
Based on the assumption that we don't know whether $\tan n^\circ$ for ($1\leq n\leq 44$, $n$ is an integer) is rational or not, we cannot show a contradiction up to $45^\circ$.
Questions
*
*Can we conclude that $\tan 1^\circ$ is a rational number in method C, as there seems to be no contradiction?
*Is the proof by induction correctly used in method B and C?
*Is the proof by induction in method A the strongest?
|
Let $S_n$ be the statement that $\tan n^\circ$ is rational. Then
$$
\tan (n+1)^\circ = \frac{\tan n^\circ + \tan 1^\circ}{1-\tan n^\circ \tan 1^\circ}
$$
shows that $S_1,S_n\implies S_{n+1}$. So by assuming $S_1$ we have by induction.
$$
S_1\wedge S_1\implies S_1\wedge S_2\implies ...\implies S_1\wedge S_n\text{ etc.}
$$
Note how $S_1$ is always a component in each step. Thus if we ever encounter any $S_n$ which is obviously false, we conclude by contraposition that all the statements $S_1\wedge S_k$ for $1\leq k\leq n$ must have been false. This does not leave out the possibility of some $S_k$'s being true. Only $S_1\wedge S_k$ is still not true since $S_1$ is false (because in particular $S_1\wedge S_1$ is false).
Methode A: Corresponds to taking $S_3$ and thus $S_1\wedge S_3$ as contradiction.
Method B: Corresponds to taking $S_{60}$ and thus $S_1\wedge S_{60}$ as contradiction.
Method C: Corresponds to only noticing that $S_{45}$ is true, but not considering that $S_1\wedge S_{45}$ is actually false because $S_1$ is false.
So to conclude, methods A and B are very similar, although $S_3$ and $S_{60}$ may not be equally simple to falsify. Method C tells us virtually nothing regarding $S_1$.
Method B+C combined: We could, on the other hand, conclude from that fact that $S_{45}$ is true but $S_{60}$ is false, that since
$$
S_1\wedge S_{45}\implies S_1\wedge S_{60}
$$
where the latter is false, by contraposition $S_1\wedge S_{45}$ is false already implying $S_1$ to be false.
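The recurrence that drives the induction $S_1\wedge S_n\implies S_{n+1}$ can be checked numerically with a short Python sketch: iterating the tangent addition formula from $\tan 1^\circ$ reproduces $\tan 60^\circ=\sqrt 3$ to machine precision:

```python
import math

t1 = math.tan(math.radians(1))
t = t1
for n in range(1, 60):
    # after this step, t = tan((n+1) degrees) by the addition formula
    t = (t + t1) / (1 - t * t1)

print(t, math.sqrt(3))  # both ~1.7320508
```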
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
If $z$ lies on the circle $|z-1|=1$ then $\frac {z-2} z$ is a purely imaginary number If $z$ lies on the circle $|z-1|=1$ then $\frac {z-2} z$ is a purely imaginary number. This is what my book states. I didn't understand why. Can someone help?
Actually I was thinking of a more geometrical approach to the problem, as pointed out by @did and the answer below. I also remembered the method that if $\bar z = -z$ then $z$ is purely imaginary. But can it also be done by visual geometry? OK, I drew a circle centred at $1$ with radius $1$. The statement given implies $\frac {z-2} z$ equals any point on the semicircle. After that, how do we prove it's imaginary from there?
|
Let $z = x + iy$. Then $|z - 1|^2 = 1 \rightarrow (x-1)^2 + y^2 = 1$.
Then show that the real part of $\frac{z-2}{z}$ is $\frac{x(x - 2) + y^2}{x^2+y^2}$; the numerator equals $x^2-2x+y^2$, which is zero by the equation above.
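A numeric sketch in Python: parametrizing the circle as $z=1+e^{i\theta}$ and sampling a few points (avoiding $\theta=\pi$, where $z=0$), the real part of $(z-2)/z$ vanishes every time:

```python
import cmath

for theta in (0.3, 1.0, 2.0, -1.3, 2.9):
    z = 1 + cmath.exp(1j * theta)   # a point on |z - 1| = 1
    w = (z - 2) / z
    print(theta, w.real)            # ~0 each time
```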
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Confusing probability question I have got a task which seems quite confusing to me. It is simple: in a market, they sell eggs in egg holders, with $10$ eggs stored in each. There is a $60\%$ chance that all of the eggs are OK, a $30\%$ chance that exactly $1$ of them is broken, and a $10\%$ chance that exactly $2$ of them are broken (it is random which ones are broken).
We buy an egg holder, and after we grab our first egg, we are sad, because it is broken. What is the probability, that there is one more broken egg in our holder?
The "logical" way would be: $30$% of them have $1$ broken egg, $10$% of them have $2$, so, to have $2$ broken, the chance must be $\frac14$. But I am not really sure if that is the correct approach, since the broken egg can be anywhere, getting a broken one for first may be not that easy, or is that independent?(Maybe, I could use Bayes Theorem somehow)?
Any help appreciated.
|
I see this as analogous to the pancake problem ... specifically, I would argue that having randomly observed a broken egg, the chances that you are in case C have disproportionately increased.
Imagine that you had $100$ of your cartons, $60$ of type A, $30$ of type B, and $10$ of type C. You select one egg randomly from each. You'll get $3$ broken ones from the Bs and $2$ from the Cs. Hence your probability is $\frac 25$.
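The same counting argument, written out as an exact Bayes computation in Python:

```python
from fractions import Fraction

# priors over carton types and P(first egg drawn is broken | type)
priors = {"A": Fraction(6, 10), "B": Fraction(3, 10), "C": Fraction(1, 10)}
p_first_broken = {"A": Fraction(0, 10), "B": Fraction(1, 10), "C": Fraction(2, 10)}

evidence = sum(priors[c] * p_first_broken[c] for c in priors)
posterior_C = priors["C"] * p_first_broken["C"] / evidence
print(posterior_C)  # 2/5
```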
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 7,
"answer_id": 1
}
|
Does the inner product $\langle \cdot, \cdot \rangle$ induce any other norms other than the 2 norm? In the lecture my professor wrote that the standard inner product on $R^n$ is given by
$\langle x, y \rangle = x^Ty = \sum\limits_{i=1}^n x_i y_i$
which induces a norm $\sqrt{\langle x,x \rangle} = \|x\|_2$
My question is do inner products induce other types of norms...or rather are norms such as the 1-norm or the $\infty$-norm induces by some inner product?
|
This is a really interesting question, and here is a partial answer. The $1$ and $\infty$ norms do not come from inner products. For a norm to have an associated inner product actually gives you a lot of structure. For example (if the scalars are real for convenience),
$$\left\| x - y \right\|^2 = \langle x - y, x -y \rangle = \langle x, x \rangle - 2 \langle x, y \rangle + \langle y, y \rangle = \left\|x \right\|^2 - 2 \langle x, y \rangle + \left\| y \right\|^2$$
In fact it turns out that there is an identity called the parallelogram law
$$2 \left\|x\right\|^2 + 2\left\|y\right\|^2 = \left\|x + y \right\|^2 + \left\| x - y\right\|^2$$
A norm obeys this identity iff it has an associated inner product. You can verify that the $1$ and $\infty$ norms do not obey this identity (by finding examples), and therefore cannot have an inner product. In fact the $p$-norms on $\mathbb{R}^n$ only obey this identity when $p=2$.
Thanks to the comments for some additions. For a proof of the "iff" claim, see this related question. If you have a norm which obeys the parallelogram law, you can actually express the inner product directly in terms of the norm by (again real case for convenience)
$$\langle x, y \rangle = \frac{1}{4} \left( \left\| x + y \right\|^2 - \left\| x - y \right\|^2 \right)$$
See here for more information.
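To see the parallelogram law $\|x+y\|^2+\|x-y\|^2 = 2\|x\|^2+2\|y\|^2$ fail for $p\neq 2$ concretely, here is a small Python sketch with the standard counterexample $x=(1,0)$, $y=(0,1)$:

```python
def pnorm(v, p):
    # p-norm on R^2, with p = inf meaning the max norm
    if p == float("inf"):
        return max(abs(c) for c in v)
    return sum(abs(c) ** p for c in v) ** (1.0 / p)

x, y = (1.0, 0.0), (0.0, 1.0)
s, d = (1.0, 1.0), (1.0, -1.0)     # x + y and x - y
for p in (1, 2, float("inf")):
    lhs = pnorm(s, p) ** 2 + pnorm(d, p) ** 2
    rhs = 2 * pnorm(x, p) ** 2 + 2 * pnorm(y, p) ** 2
    print(p, lhs, rhs)             # equal only for p = 2
```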
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1437995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
How to find the maximum curvature of $y=e^x$? So I found the curvature to be $K = \dfrac{e^{x}}{(1+e^{2x})^{3/2}}$ but I don't know how to maximize this?
|
We can differentiate and set equal to zero to get
$$\frac{e^x-2e^{3x}}{(1+e^{2x})^{5/2}}=0\implies e^x=2e^{3x}\implies\frac{1}{2}=e^{2x}.\tag{1}$$
Then solve to obtain $x=-\frac{1}{2}\log 2$.
Note: the various algebraic operations in (1) are possible since $e^x\neq 0$ for all $x\in\mathbb{R}$. For similar questions in the future, it may be necessary to classify maxima/minima using the second derivative. So-called "saddle" points may also need addressing.
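A quick numeric confirmation (Python sketch) that $x^*=-\frac12\log 2$ is indeed a local maximum, where $K(x^*)=\frac{2}{3\sqrt 3}$:

```python
import math

def K(x):
    return math.exp(x) / (1 + math.exp(2 * x)) ** 1.5

x_star = -0.5 * math.log(2)
# K is smaller at nearby points on both sides
assert all(K(x_star + d) < K(x_star) for d in (-0.5, -0.1, -0.01, 0.01, 0.1, 0.5))
print(x_star, K(x_star))  # maximum value is 2/(3*sqrt(3)) ~ 0.3849
```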
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Are the perfect groups linearly primitive? A finite group $G$ is perfect if $G = G^{(1)} := \langle [G,G] \rangle$, or equivalently, if any $1$-dimensional complex representation is trivial.
A finite group $G$ is linearly primitive if it has a faithful complex irreducible representation.
Question: Are the perfect finite groups linearly primitive?
Remark: the finite simple groups are perfect and linearly primitive.
|
No. Any linearly primitive group must have cyclic center by Schur's lemma, but there's no reason a perfect group should have this property. I think that, for example, the universal central extension of $A_5 \times A_5$ is perfect but has center $C_2 \times C_2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Limit of a sequence of integrals and the integral of a limit For every $n \in \mathbb{Z}_{>0}$ define
$$
f_n(x) = \frac{x^n}{1+x^n}, \, \, \, \, \, x \in [0,1].
$$ I want to show that $$\lim_{n \to \infty} \int_{0}^{1} f_n(x) dx = \int_{0}^{1} \left(\lim_{n \to \infty} f_n(x) \right) dx.$$ I know that the limit function of $f_n$ is
$$
f(x)=\begin{cases}
0 &: \text{if $x \in [0,1)$}\\
\frac{1}{2} &: x = 1
\end{cases}
$$ so I expect the integral on the right to be $0$, but how can I bound the integral on the left appropriately to show this?
EDIT: Also, is there a general principle/theorem I can use to approach problems such as these, for sequences that are not uniformly convergent. For example, are there common ways to bound rational functions of this (or similar) form?
|
Note that $$\frac{x^n}{1+x^n}\leqslant 2$$ for all $x\in[0,1]$, and $$\int_0^1 2\ \mathsf dx = 2<\infty. $$
So by the dominated convergence theorem,
$$\lim_{n\to\infty}\int_0^1 f_n(x)\ \mathsf dx = \int_0^1 f(x)\ \mathsf dx = 0. $$
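As a numeric illustration (my addition, a rough Python sketch using the midpoint rule), the integrals on the left really do tend to $0$, as the dominated convergence theorem predicts:

```python
def I(n, steps=20000):
    """Midpoint-rule approximation of the integral of x^n/(1+x^n) over [0,1]."""
    h = 1.0 / steps
    return h * sum(((k + 0.5) * h) ** n / (1 + ((k + 0.5) * h) ** n)
                   for k in range(steps))

vals = [I(n) for n in (1, 2, 5, 20, 100)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # decreasing in n
assert vals[-1] < 0.01                             # heading to 0
```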
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluation of $ \int_{0}^{1}\left(\sqrt[4]{1-x^7}-\sqrt[7]{1-x^4}\right)dx$
Evaluation of $\displaystyle \int_{0}^{1}\left(\sqrt[4]{1-x^7}-\sqrt[7]{1-x^4}\right)dx$
$\bf{My\; Try::}$ We can write it as $$I = \displaystyle \int_{0}^{1}\sqrt[4]{1-x^7}dx-\int_{0}^{1}\sqrt[7]{1-x^4}dx$$
Now Using $$\displaystyle \bullet \int_{a}^{b}f(x)dx = -\int_{b}^{a}f(x)dx$$
So we get $$I = \displaystyle \int_{0}^{1}\left(1-x^7\right)^{\frac{1}{4}}dx+\int_{1}^{0}\left(1-x^4\right)^{\frac{1}{7}}dx$$
Now Let $$\displaystyle f(x) = \left(1-x^{7}\right)^{\frac{1}{4}}\;,$$ Then $$f^{-1}(x) = (1-x^4)^{\frac{1}{7}}$$ and also $f(0) = 1$ and $f(1) =0$
So Integral $$\displaystyle I = \int_{0}^{1}f(x)dx+\int_{f(0)}^{f(1)}f^{-1}(x)dx$$
Now let $f^{-1}(x) = z\;,$ Then $x=f(z)$ So we get $dx = f'(z)dz$
So Integral $$\displaystyle I =\int_{0}^{1}f(x)dx+\int_{0}^{1}z\cdot f'(z)dz$$
Now Integration by parts for second Integral, We get
$$\displaystyle I =\int_{0}^{1}f(x)dx+\left[z\cdot f(z)\right]_{0}^{1}-\int_{0}^{1}f(z)dz$$
So using $$\displaystyle \bullet\; \int_{a}^{b}f(z)dz = \int_{a}^{b}f(x)dx$$
So we get $$\displaystyle I =\int_{0}^{1}f(x)dx+f(1) -\int_{0}^{1}f(x)dx = f(1) =0$$
My question is: can we solve it in some $\bf{shorter\; way}$? If yes, then please explain here.
Thanks
|
Hint: Notice that both integrands represent the same geometric shape, namely $X^4+Y^7=1$.
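To back the hint up numerically (my addition, a Python sketch): the two integrands trace the same curve with the axes swapped, so the two areas cancel and the integral is $0$.

```python
def g(x):
    # fourth root of (1 - x^7) minus seventh root of (1 - x^4)
    return (1 - x ** 7) ** 0.25 - (1 - x ** 4) ** (1 / 7)

steps = 100000
h = 1.0 / steps
I = h * sum(g((k + 0.5) * h) for k in range(steps))  # midpoint rule
assert abs(I) < 1e-4
```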
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Assume this equation has distinct roots. Prove $k = -1/2$ without using Vieta's formulas. Given $(1-2k)x^2 - (3k+4)x + 2 = 0$ for some $k \in \mathbb{R}\setminus\{1/2\}$, suppose $x_1$ and $x_2$ are distinct roots of the equation such that $x_1 x_2 = 1$.
Without using Vieta's formulas, how can we show $k = -1/2$ ?
Here is what I have done so far:
$(1-2k)x_1^2 - (3k+4)x_1 + 2 = 0$
$(1-2k)x_2^2 - (3k+4)x_2 + 2 = 0$
$\to (1-2k)x_1^2 - (3k+4)x_1 + 2 = (1-2k)x_2^2 - (3k+4)x_2 + 2$
$\to (1-2k)x_1^2 - (3k+4)x_1 = (1-2k)x_2^2 - (3k+4)x_2$
$\to (1-2k)x_1^2 - (3k+4)x_1 = (1-2k)(1/x_1)^2 - (3k+4)(1/x_1)$
$\to (1-2k)[x_1^2 - (1/x_1)^2] - (3k+4)[x_1 - (1/x_1)] = 0$
$\to (1-2k)[x_1 - (1/x_1)][x_1 + (1/x_1)] - (3k+4)[x_1 - (1/x_1)] = 0$
$\to (1-2k)[x_1 + (1/x_1)] - (3k+4) = 0$ or $[x_1 - (1/x_1)] = 0$
$\to (1-2k)[x_1 + (1/x_1)] - (3k+4) = 0$, $x_1 = 1$ or $x_1 = -1$
Since the latter two cases violate the distinct-roots assumption, we have:
$(1-2k)[x_1 + (1/x_1)] - (3k+4) = 0$
This gives:
The answer is $x_1 = \frac{5-i\sqrt{39}}{8}$ or $x_1 = \frac{5+i\sqrt{39}}{8}$. How do I get that (without Vieta's)?
|
The problem is to find $k$, not $x_1$, isn't it? If so, take the equation you arrived at before you started doing computer algebra:
$$(1-2k)[x_1 + (1/x_1)] - (3k+4) = 0$$
and multiply by $x_1$ to get
$$(1-2k)x_1^2 - (3k+4) x_1 + (1 -2k) = 0$$
but you know that
$$(1-2k)x_1^2 - (3k+4) x_1 + 2 = 0$$
so you must have $1 - 2k = 2$, i.e., $k = -\frac{1}{2}$. (Using the Vieta formulas which just amount to $(x - x_1)(x - x_2) = x^2 - (x_1 + x_2)x + x_1x_2$ for a quadratic seems a much simpler approach to me.)
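A quick check of the conclusion (my addition, a Python sketch): with $k=-\frac12$ the equation becomes $2x^2-\frac52 x+2=0$, whose roots are the complex numbers $\frac{5\pm i\sqrt{39}}{8}$ quoted in the question: distinct, with product $1$.

```python
import cmath

k = -0.5
A, B, C = 1 - 2 * k, -(3 * k + 4), 2.0          # 2x^2 - (5/2)x + 2 = 0
d = cmath.sqrt(B * B - 4 * A * C)
x1, x2 = (-B + d) / (2 * A), (-B - d) / (2 * A)

assert abs(x1 * x2 - 1) < 1e-12                 # product of the roots is 1
assert abs(x1 - x2) > 1e-6                      # the roots are distinct
assert abs(x1 - (5 + 1j * 39 ** 0.5) / 8) < 1e-12
```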
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find the value of : $\lim_{n\to \infty} \frac{1}{\sqrt{n}} \left(\frac{1}{\sqrt{n+1}}+\dotso +\frac{1}{\sqrt{n+n}} \right)$ I am trying to evaluate $\lim_{n\to \infty} \frac{1}{\sqrt{n}} \left(\frac{1}{\sqrt{n+1}}+\dotso +\frac{1}{\sqrt{n+n}} \right)$. I suspect identifying an appropriate Riemann sum is the trick. However, after some toying with it I gave up on this suspicion and stumbled across the Stolz-Cesaro theorem, which I then used to calculate the limit as $\sqrt{2}$.
Does anybody see a way to do this as Riemann sum?
I tried putting it in this form
$\frac{1}{n} \sum_{k=1}^n \sqrt{\frac{n}{n+k}}$ but then I don't see how to carry on and identify, from the partition, the function to integrate.
Thank you for suggestions or comments.
|
This is a Riemann sum. Rewrite as
$$\frac1{n} \sum_{k=1}^n \frac1{\sqrt{1+\frac{k}{n}}} $$
which, as $n \to \infty$, becomes
$$\int_0^1 dx \frac1{\sqrt{1+x}} = 2 (\sqrt{2}-1)$$
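Numerically (my addition, a Python sketch) the sums do approach $2(\sqrt{2}-1)\approx 0.8284$, a useful cross-check against values obtained by other methods:

```python
import math

def s(n):
    # the sum (1/sqrt(n)) * (1/sqrt(n+1) + ... + 1/sqrt(n+n))
    return sum(1.0 / math.sqrt(n * (n + k)) for k in range(1, n + 1))

limit = 2 * (math.sqrt(2) - 1)
errs = [abs(s(n) - limit) for n in (100, 1000, 100000)]
assert errs[0] > errs[1] > errs[2]   # the error shrinks as n grows
assert errs[-1] < 1e-4
```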
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Why is this rotation "incorrect"? I've been trying to use the following formula for the rotation of a point around the origin:
$$
\begin{bmatrix}
x' \\ y'
\end{bmatrix} =
\begin{bmatrix}
\cos{\theta} & -\sin{\theta} \\
\sin{\theta} & \cos{\theta}
\end{bmatrix}
\begin{bmatrix}
x \\ y
\end{bmatrix}
$$
Now, I'm trying to apply this formula to the coordinate $(5,3)$ and rotating it $90$ degrees clockwise, and I ended up with the following result:
$$
\begin{bmatrix}
x' \\ y'
\end{bmatrix} =
\begin{bmatrix}
\cos{90} & -\sin{90} \\
\sin{90} & \cos{90}
\end{bmatrix}
\begin{bmatrix}
5 \\ 3
\end{bmatrix}
\\
=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
5 \\ 3
\end{bmatrix}
\\
=
\begin{bmatrix}
0(5) -1(3) \\
1(5) + 0(3)
\end{bmatrix} \\
=
\begin{bmatrix}
-3 \\ 5
\end{bmatrix}
$$
I ended up with the rotated coordinates $(-3,5)$. Unfortunately, this was wrong. Can anyone tell me what I'm doing wrong, and how I can do it correctly? I tried this method on other coordinate points, and all of them were wrong as well.
|
It is correct. What makes you think that it is wrong? Do you mean the convention that counterclockwise (CCW) is positive?
EDIT1:
What I meant is: the matrix as written implements a counterclockwise rotation, so with $\theta = 90^\circ$ it sends $(5,3)$ to $(-3,5)$. A clockwise rotation by $90^\circ$ corresponds to $\theta = -90^\circ$ and gives the diametrically opposite point $(3,-5)$. So the only likely source of error is the sign convention of the rotation.
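The convention can be checked directly (my addition, a plain-Python sketch): with $\theta=+90^\circ$ the matrix rotates counterclockwise and sends $(5,3)$ to $(-3,5)$; a clockwise rotation uses $\theta=-90^\circ$ and gives $(3,-5)$.

```python
import math

def rot(theta_deg):
    t = math.radians(theta_deg)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

p = [5, 3]
ccw = [round(c) for c in apply(rot(90), p)]    # theta = +90: counterclockwise
cw = [round(c) for c in apply(rot(-90), p)]    # theta = -90: clockwise
assert ccw == [-3, 5]
assert cw == [3, -5]
```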
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Calculation of a covariant derivative and exterior derivative
I am reading these notes on differential geometry from a course at MIT. I have been verifying the computations for myself and I have a concern about the expression for $de^k$ on the final line. When I evaluate $de^k(e_i, e_j)$ according to the lemma, I obtain $-{\Gamma^k_{ij}} - -{\Gamma^k_{ji}}$ which is zero because the Christoffel symbols are symmetric in the lower indices. This would imply that $de^k = 0$ but that is not what seems to be written. Is there something that I am not understanding?
|
It is, I think, a bit misleading to call those Christoffel symbols: in general, if we have local coordinates $(x^1, \cdots, x^n)$, then they give you a coordinate frame
$$\frac{\partial}{\partial x^1}, \cdots, \frac{\partial}{\partial x^n}$$
and the Christoffel symbols are defined by (write $\partial_i$ for the coordinate vector for simplicity)
$$\nabla_{\partial_i} \partial_j = \Gamma_{ij}^k \partial_k.$$
In this case, $\Gamma_{ij}^k = \Gamma_{ji}^k$ as by the torsion free condition,
$$\nabla_{\partial_i} \partial_j - \nabla_{\partial_j} \partial_i = [\partial_i , \partial_j] = 0.$$
However, if $e_1, \cdots, e_n$ is an arbitrary basis, and $\Gamma_{ij}^k$ is defined as
$$\nabla_{e_i} e_j = \Gamma_{ij}^k e_k,$$
then $\Gamma_{ij}^k\neq \Gamma_{ji}^k$ (as $[e_i, e_j]\neq 0$).
Going back to your calculation, you have
$$de^k(e_i, e_j) = -(\Gamma_{ij}^k - \Gamma_{ji}^k),\ \ \ \forall i, j, k.$$
This implies
$$de^k = -\sum_{a, b} \Gamma_{ab}^k e^a\wedge e^b$$
as
$$-\sum_{a, b} \Gamma_{ab}^k e^a\wedge e^b \ (e_i, e_j) = -\Gamma_{ij}^k- (- \Gamma_{ji}^k)$$
(as in the summation there are both $e^i\wedge e^j$ and $e^j\wedge e^i$.)
Remark: It is my habit that I write $\Theta_{ij}^k$ instead of $\Gamma_{ij}^k$ if $e_1, \cdots, e_n$ are not coordinate basis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Calculating which two points are the closest together given a series of points. I'm in a position where I need to calculate the distance between two points to determine which is the closest point. Now, the proper formula to do this is:
$$distance = \sqrt{(x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2}$$
That's all well and good, but it's rather slow to calculate. I don't need to know the distance, just that the calculated distance is the shortest. So, thinking about how to do this, if I throw away the powering and square root, would I be able to determine which distance is the shortest by comparing the results, even though it would most certainly not give an accurate distance?
|
You can remove the square root -- just compare squared distances, rather than actual distances.
The multiplications will take very little time compared to the square root, but you can avoid some of those, too.
Suppose the current minimum distance is $d$, and suppose we're checking the point $P_i = (x_i, y_i, z_i)$ to see if it gives us a smaller distance to the base point $P_0 = (x_0, y_0, z_0)$. If $|x_i-x_0|$ is larger than $d$, then we know immediately that the distance from $P_i$ to $P_0$ is greater than $d$, so there's no sense in doing any further calculations. You can check $|y_i-y_0|$ and $|z_i-z_0|$ similarly.
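A small sketch of both ideas in Python (my addition; the helper name `closest` is made up): compare squared distances only, and reject a candidate early when a single squared coordinate gap already exceeds the best squared distance, which avoids the square root entirely.

```python
def closest(base, points):
    """Index of the point nearest to base, comparing squared distances."""
    best_i, best_d2 = -1, float("inf")
    for i, p in enumerate(points):
        # cheap rejection: one coordinate gap alone already beats best_d2
        if any((p[j] - base[j]) ** 2 > best_d2 for j in range(3)):
            continue
        d2 = sum((p[j] - base[j]) ** 2 for j in range(3))
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i

pts = [(10, 0, 0), (1, 2, 2), (0, 0, 4), (1, 1, 1)]
assert closest((0, 0, 0), pts) == 3   # squared distances: 100, 9, 16, 3
```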
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1438895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Every translation of $f$ has no radial part
Let $f\in L^1(\mathbb{R}^n)$. Define the radial part of $f$ as
$$f_0(x)=\int_{S^{n-1}} f(||x||\omega)\,d\omega$$ where $\,d\omega$ is
the normalised surface integral over $S^{n-1}$. Define translation of
$f$ as $\ell_xf(y)=f(y-x)$. Suppose that $(\ell_xf)_0\equiv 0$ for all
$x\in\mathbb{R}^n$. Then show that $f\equiv 0$.
I can prove it for continuous functions, but $f\in L^1$ is creating problems. Any hint/suggestion is welcome.
|
Since the integral over every sphere is zero, it follows (by Fubini) that the integral over every ball is zero. Hence, $f=0$ at every Lebesgue point, which is a.e. point (see the Lebesgue differentiation theorem).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the area of a figure in a triangle Triangle $ABC$ in the figure has area $10$ . Points $D,E,$ and $F$, all distinct from $A,B,$ and $C$, are on sides $AB,BC,$ and $CA$ respectively, and $AD=2$ , $DB=3$ .
If triangle $ABE$ and quadrilateral $DBEF$ have equal areas,then what is that area?
Efforts made: I've tried to add some extra lines to see if I could get something useful, but I guess I didn't get anything. It seems like the problem asks for some creative step that I can't see.
|
$[DBEF]=[ABE]$ is equivalent, by subtracting $[DBE]$ to both sides, to $[DEF]=[DEA]$.
These triangles share the $DE$-side, hence $[DEF]=[DEA]$ implies $DE\parallel AF$, so:
$$ \frac{BE}{BC}=\frac{BD}{BA}=\frac{3}{5} $$
and the area of $[BDE]$, consequently, equals $\frac{9}{25}[ABC]=\frac{18}{5}$. Since $[ABE]=\frac{5}{3}[DBE]$,
$$ [ABE]=[DBEF]=\color{red}{6} $$
follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
minimal projections in matrix-algebras Consider $A=\{ \begin{pmatrix} T & 0 \\ 0 & T \end{pmatrix}: T\in M_2(\mathbb{C})\}\subseteq M_4(\mathbb{C})$ and $p= \begin{pmatrix} 1 & 0&0&0 \\ 0 & 0&0&0\\0 & 0&1&0\\ 0 & 0&0&0 \end{pmatrix}\in A.$
We defined $p$ to be a projection, if $p^2=p=p^*$. I have already shown that $p$ is a projection.
$p$ is a minimal projection in $A$, if: for all projections $0\neq q\in A$ such that $q\le p$, $\Rightarrow \; p=q$.
$q\le p$ means, that $p-q$ is a positive operator. Equivalent is: $image(q)\subseteq image(p)$.
The claim is: $p$ is minimal in $A$, but $p$ isn't minimal in $M_4(\mathbb{C})$.
-My solution for p minimal in A: It is $M_2(\mathbb{C})\cong A$ (Consider the linear, multiplicative, adjoint-preserving bijective map $\gamma:M_2(\mathbb{C})\to A, T\mapsto \begin{pmatrix} T & 0 \\ 0 & T \end{pmatrix}$). Therefore, T is minimal in $M_2(\mathbb{C}) \iff \begin{pmatrix} T & 0 \\ 0 & T \end{pmatrix}$ is minimal in $A$. Consider $T=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$ it is $\dim (image(T))=1$ and $T^2=T^*=T$, i.e. $T$ is a minimal projection in $M_2(\mathbb{C})$. It follows, that $\gamma(T)=p\in A$ is a minimal projection in $A$.
But why isn't $p$ minimal in $M_4(\mathbb{C})$?
Edit: Sorry, i was too fast. $p$ isn't minimal in $M_4(\mathbb{C})$, because consider $q= \begin{pmatrix} 1 & 0&0&0 \\ 0 & 0&0&0\\0 & 0&0&0\\ 0 & 0&0&0 \end{pmatrix}$. It is $q=q^*=q^2$ and $0\le q\le p$, but it isn't $p=q$.
|
Here is the answer again: $p$ isn't minimal in $M_4(\mathbb{C})$, because we can consider $q= \begin{pmatrix} 1 & 0&0&0 \\ 0 & 0&0&0\\0 & 0&0&0\\ 0 & 0&0&0 \end{pmatrix}$. It satisfies $q=q^*=q^2$ and $0\le q\le p$, but $p\neq q$, because $q$ has a strictly smaller image than $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is there more to explain why a hypothesis doesn't hold, rather than that it arrives at a contradiction? Yesterday, I had the pleasure of teaching some maths to a high-school student. She wondered why the following doesn't work:
$\sqrt{a+b}=\sqrt{a}+\sqrt{b}$.
I explained it as follows (slightly less formal)
1. For your hypothesis to hold, it should hold given an arbitrary set of operations performed on your equation.
2. For example, it should hold if we square the equation, and after that take the square root, i.e. (note that I applied her logic in the second line; I know it's not OK to do maths like that)
$\sqrt{a+b}=\sqrt{a}+\sqrt{b}\\
a+b=a+b+2\sqrt{ab}\\
\sqrt{a+b}=\sqrt{a}+\sqrt{b}+\sqrt{2\sqrt{ab}}$
3. We now arrive at a contradiction, which means that your hypothesis is false.
However, she then went on to ask 'But why then is it false? You only proved that it's false!'. As far as I'm concerned, my little proof is a perfect why explanation as far as mathematicians are concerned, but I had a hard time convincing her - the only thing I could think of is to say that the square root operator is not a linear operator, but I don't really think that adds much (besides, I really don't want to be explaining and proving linearity to a high school student).
So, my question: is there anything 'more' as to why the above doesn't work, or was I justified in trying to convince her that this is really all there is to it?
|
Here's another explanation approach:
Suppose that $$\sqrt{a+b}=\sqrt a + \sqrt b$$
If this is true, then we can square both sides of the equation:
$$(\sqrt {a+b})^2=a+b=(\sqrt a+\sqrt b)^2=a+b+2\sqrt {ab}$$
This resulting equality can then be manipulated as follows:
$$a+b=a+b+2\sqrt{ab}\\
0=2\sqrt {ab}\\
0=ab$$
By the rules of real numbers, which do not have zero-divisors, if $ab=0$ then either $a=0$ or $b=0$. Looking back at the original supposition, it will hold given either condition. So the answer is that the supposition is correct as long as at least one of $a,b$ is $0$, or in more explanatory language, "you can do that, as long as either $a$ or $b$ is zero."
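A numeric confirmation of the conclusion (my addition, a Python sketch): for non-negative $a,b$ the identity holds exactly when $ab=0$.

```python
import math

def holds(a, b, tol=1e-12):
    return abs(math.sqrt(a + b) - (math.sqrt(a) + math.sqrt(b))) < tol

assert holds(0, 7) and holds(5, 0) and holds(0, 0)   # ab = 0: identity holds
assert not holds(1, 1) and not holds(9, 16)          # ab != 0: it fails
```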
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 15,
"answer_id": 13
}
|
Solving the nonlinear Diophantine equation $x^2-3x=2y^2$ How can I solve (find all the solutions) the nonlinear Diophantine equation $x^2-3x=2y^2$?
I included here what I had done so far. Thanks for your help.
Note: The equation above can be rewritten into $x^2-3x-2y^2=0$ which is quadratic in $x$. By quadratic formula we have the following solutions for $x$.
\begin{equation}
x=\frac{3\pm\sqrt{9+8y^2}}{2}
\end{equation}
I want $x$ to be a positive integer so I will just consider:
\begin{equation}
x=\frac{3+\sqrt{9+8y^2}}{2}
\end{equation}
From here I don't know how to proceed but after trying out values for $1\leq y\leq 1000$ I only have the following $y$ that yields a positive integer $x$.
\begin{equation}
y=\{0,3,18,105,612\}.
\end{equation}
Again, thanks for any help.
|
I'll solve it in integers instead. $x^2-3x=2y^2$, by the quadratic formula, is equivalent to
$$x=\frac{3\pm\sqrt{9+8y^2}}{2}$$
That is, the problem is equivalent to solving $8y^2+9=m^2$ in integers.
Modulo $3$ this gives $2y^2\equiv m^2\pmod{3}$; since $h^2\equiv 0$ or $1\pmod{3}$ for any integer $h$, both sides must be $\equiv 0\pmod{3}$, so $(y,m)=\left(3y_1,3m_1\right)$ for some $y_1,m_1\in\Bbb Z$.
Substituting and dividing by $9$, the problem is equivalent to $m_1^2-8y_1^2=1$ with $m_1,y_1\in\Bbb Z$. This is a Pell's Equation and has infinitely many solutions given exactly by $\pm\left(3+\sqrt{8}\right)^n=m_n+y_n\sqrt{8}$ for $n\ge 0$ (because $(m_1,y_1)=(3,1)$ is the minimal non-trivial solution), so for example $\pm\left(3+\sqrt{8}\right)^2=\pm17\pm6\sqrt{8}$ and $(m_1,y_1)=(\pm 17,\pm 6)$ is another solution.
In general, Pell's Equation $x^2-dy^2=1$ with $d$ not a square has infinitely many solutions; which are given exactly by $\pm\left(x_m+ y_m\sqrt{d}\right)^n=x_n+y_n\sqrt{d}$, $n\ge 0$, where $(x_m,y_m)$ is the minimal solution, i.e. the solution with $x_m,y_m$ positive integers and minimal value of $x_m+ y_m\sqrt{d}$.
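The recursion behind $\pm(3+\sqrt8)^n$ can be run explicitly (my addition, a Python sketch): multiplying by $3+\sqrt8$ sends $(m_1,y_1)$ to $(3m_1+8y_1,\ m_1+3y_1)$, and scaling back by $3$ recovers exactly the $y$ values found by search in the question.

```python
m1, y1 = 3, 1                      # fundamental solution of m1^2 - 8 y1^2 = 1
xs, ys = [], []
for _ in range(4):
    assert m1 * m1 - 8 * y1 * y1 == 1
    m, y = 3 * m1, 3 * y1          # undo the division by 3
    x = (3 + m) // 2
    assert x * x - 3 * x == 2 * y * y       # the original Diophantine equation
    xs.append(x)
    ys.append(y)
    m1, y1 = 3 * m1 + 8 * y1, m1 + 3 * y1   # multiply by 3 + sqrt(8)

assert ys == [3, 18, 105, 612]     # matches the values found by search
assert xs == [6, 27, 150, 867]
```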
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
}
|
Probability of drawing an ace Lets say I've drawn 20 cards from a deck and The 20th was an ace. I'm trying to figure out the probability of drawing an ace in the 21st draw.
I'm sure this is just clever combinatorics but all my attemps have led nowhere. The probability of The first event is 4/52, right? And the second one is 3/32?
|
This problem can be solved with some conditional probability, though the method I will present is tedious and is probably not the most elegant.
We need to find $P(21st\ card\ is\ an\ Ace | 20th\ card\ is\ an\ Ace)$
We know this is equivalent to $\frac{\sum\limits_{i=2}^4 P(21st\ card\ is\ ith\ Ace\ \cap\ 20th\ card\ is\ (i-1)th\ Ace)}{P(20th\ card\ is\ an\ Ace)}$
Now it is an exercise in combinatorics to calculate both the numerator and denominator of this fraction.
I'll help you get started; let's calculate the probability in the denominator of this fraction:
By symmetry, the $20$th card is equally likely to be any one of the $52$ cards, so the probability that it is an Ace must be exactly $\frac{4}{52}=\frac{1}{13}$.
Hopefully this helps; I don't think it is too hard now to calculate the numerator of the aforementioned fraction.
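A simulation cross-check (my addition, a Python sketch): under the reading that we know only that the 20th card is an Ace (nothing about the first 19), the conditional probability should come out to $\frac{3}{51}=\frac{1}{17}$, by symmetry the same as if the Ace had been the very first card drawn.

```python
import random

random.seed(0)
deck = [1] * 4 + [0] * 48          # 1 marks an Ace
hits = trials = 0
for _ in range(150000):
    random.shuffle(deck)
    if deck[19]:                   # 20th card is an Ace
        trials += 1
        hits += deck[20]           # is the 21st card also an Ace?
p = hits / trials
assert abs(p - 3 / 51) < 0.01      # 3/51 = 1/17 ~ 0.0588
```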
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why $H^2U=UH^2$ implies $H$ and $U$ commute Why does $H^2U=UH^2$ imply that $H$ and $U$ commute, where $H$ is a Hermitian matrix and $U$ is a unitary matrix?
This comes from the book 'Theory of Matrices' on p277 http://www.maths.ed.ac.uk/~aar/papers/gantmacher1.pdf
Now I know the implication is false. How to prove the following:
If a matrix $A$ is normal, i.e. $AA^*=A^*A$, then the Hermitian and unitary factors of the polar decomposition of $A$, $A=UH$, commute.
PS: Actually the book was right; I forgot to mention that $H$ is not only Hermitian but also positive semi-definite.
|
That cannot be true. Take
$$
H=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)
$$
then $H^2=I$, so it commutes with everything; however,
$$U=\left(
\begin{array}{cc}
0 & -i \\
i & 0 \\
\end{array}
\right)$$
is unitary and does not commute with $H$.
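The counterexample is easy to verify by hand or in code (my addition, a plain-Python sketch with $2\times 2$ complex matrices):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[0, 1], [1, 0]]               # Hermitian
U = [[0, -1j], [1j, 0]]            # unitary

H2 = matmul(H, H)
assert H2 == [[1, 0], [0, 1]]             # H^2 = I ...
assert matmul(H2, U) == matmul(U, H2)     # ... so H^2 U = U H^2 trivially
assert matmul(H, U) != matmul(U, H)       # yet H U != U H
```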
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Application of Differentiation Prove that if the curve $y = x^3 + px + q$ is tangent to the x-axis, then
$$4p^3 + 27q^2 = 0$$
I differentiated $y$ and obtained the value $3x^2 + p$. If the curve is tangent to the x-axis, it implies that $x=0$ (or is it $y = 0$?). How do I continue to prove the above statement? Thanks.
If I substitute in $x=0$, I will obtain $y= q$? Are my above steps correct? Please guide me. Thank you so much!
|
Notice that we have $$y=x^3+px+q$$ $$\frac{dy}{dx}=3x^2+p$$ Since the x-axis is tangent to the curve, at the point of tangency we have $y=0$ and slope $\frac{dy}{dx}=0$; hence $$\left(\frac{dy}{dx}\right)_{y=0}=0$$ $$3x^2+p=0\iff x^2=\frac{-p}{3}\tag 1$$
Now, at the point of tangency with the x-axis we have $$y=0\iff x^3+px+q=0$$ $$x^3+px+q=0$$ $$(x^3+px)=-q$$$$ (x^3+px)^2=(-q)^2$$ $$x^6+2px^4+p^2x^2=q^2$$
$$(x^2)^3+2p(x^2)^2+p^2x^2=q^2$$ Setting the value of $x^2$ from (1), we get $$\left(\frac{-p}{3}\right)^3+2p\left(\frac{-p}{3}\right)^2+p^2\left(\frac{-p}{3}\right)=q^2$$ $$-\frac{p^3}{27}+\frac{2p^3}{9}-\frac{p^3}{3}=q^2$$ $$-\frac{4p^3}{27}=q^2\iff -4p^3=27q^2$$ $$\color{red}{4p^3+27q^2=0}$$
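One way to see the identity in action (my addition, a Python sketch): any cubic with a double root, $y=(x-r)^2(x+2r)=x^3-3r^2x+2r^3$, is tangent to the x-axis at $x=r$, and its coefficients $p=-3r^2$, $q=2r^3$ satisfy $4p^3+27q^2=0$ identically.

```python
import random

random.seed(1)
for _ in range(100):
    r = random.uniform(-5, 5)
    p, q = -3 * r ** 2, 2 * r ** 3     # from y = (x - r)^2 (x + 2r)
    assert abs(4 * p ** 3 + 27 * q ** 2) < 1e-6   # the stated identity
    assert abs(r ** 3 + p * r + q) < 1e-9         # y(r) = 0
    assert abs(3 * r ** 2 + p) < 1e-9             # y'(r) = 0: tangency
```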
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Solve for $x$ and represent the solutions on the trig circle? $$\sin(4x)\cos(x)+2\sin(x)\cos^2(2x) = 1- 2\sin^2(x)$$
I'm confused.
|
Since $1-2\sin^2(x)=\cos (2x)$ and $\sin (2\alpha)=2\sin (\alpha)\cos (\alpha)$ we have
$$2\sin(2x)\cos(2x)\cos(x)+2\sin(x)\cos^2(2x)=\cos(2x)$$
$$\cos (2x)[2\sin(2x)\cos(x)+2\sin(x)\cos(2x)-1]=0$$
Then either $\cos (2x)=0$ or $2\sin(2x)\cos(x)+2\sin(x)\cos(2x)-1=0$.
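The factorization, and the further simplification $2\sin(2x)\cos(x)+2\sin(x)\cos(2x)=2\sin(3x)$ (by the sine addition formula; my addition), can be confirmed numerically in Python. The equation thus reduces to $\cos(2x)=0$ or $\sin(3x)=\frac12$, which is easy to place on the trig circle.

```python
import math, random

def lhs(x):   # original equation moved to one side
    return (math.sin(4 * x) * math.cos(x)
            + 2 * math.sin(x) * math.cos(2 * x) ** 2
            - (1 - 2 * math.sin(x) ** 2))

def rhs(x):   # factored form, with the bracket simplified to 2 sin(3x) - 1
    return math.cos(2 * x) * (2 * math.sin(3 * x) - 1)

random.seed(2)
for _ in range(1000):
    x = random.uniform(-10, 10)
    assert abs(lhs(x) - rhs(x)) < 1e-9
assert abs(lhs(math.pi / 4)) < 1e-9   # cos(2x) = 0 gives a solution
```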
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1439980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How does $-(-1)^n$ become $(-1)^{n+1}$ I am currently studying how to prove the Fibonacci Identity by Simple Induction, shown here, however I do not understand how $-(-1)^n$ becomes $(-1)^{n+1}$. Can anybody explain to me the logic behind this?
|
The powers of $-1$ are $$-1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, \ldots$$ If $n$ is odd, then $(-1)^n = -1$, but if $n$ is even then $(-1)^n = 1$. So if you multiply $(-1)^n$ once by $-1$, it is the same as incrementing $n$. (Technically you can also decrement, but you can't generalize that to other negative numbers).
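The pattern is easy to confirm mechanically (my addition, a small Python sketch):

```python
powers = [(-1) ** n for n in range(8)]
assert powers == [1, -1, 1, -1, 1, -1, 1, -1]   # alternating signs
for n in range(8):
    assert -((-1) ** n) == (-1) ** (n + 1)      # one more factor of -1
```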
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 3
}
|
Prove if $P(A | B^c) = P(A | B)$ then the events A and B are independent. So I've started by saying that since $P(A | B^c) = P(A | B)$ we know that $\frac{P(A \cap B^c)}{P(B^c)} = \frac{P(A \cap B)}{P(B)}$. However I'm not sure where to go from there. Any help would be great!
|
The defining relation is: $$P(A\cap B)=P(A|B)P(B)$$
You could also write: $$P(A\mid B)=\frac{P(A\cap B)}{P(B)}$$ but that requires a separate treatment of the special case $P(B)=0$.
Let's say that $P(A\mid B)=c=P(A\mid B^c)$.
Then:
$$P(A)=P(A\mid B)P(B)+P(A\mid B^c)P(B^c)=cP(B)+cP(B^c)=c=P(A\mid B)$$
So: $$P(A\cap B)=P(A|B)P(B)=P(A)P(B)$$
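A tiny exact example (my addition, a Python sketch using `fractions`): for two fair coins, with $A$ = "first coin heads" and $B$ = "second coin heads", we have $P(A\mid B)=P(A\mid B^c)=\frac12$, and indeed $P(A\cap B)=P(A)P(B)$.

```python
from fractions import Fraction

outcomes = [(c1, c2) for c1 in "HT" for c2 in "HT"]   # each has probability 1/4
P = lambda ev: Fraction(sum(1 for o in outcomes if ev(o)), len(outcomes))

A = lambda o: o[0] == "H"
B = lambda o: o[1] == "H"
AB = lambda o: A(o) and B(o)
ABc = lambda o: A(o) and not B(o)

assert P(AB) / P(B) == P(ABc) / (1 - P(B)) == Fraction(1, 2)
assert P(AB) == P(A) * P(B)        # A and B are independent
```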
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Troubles with solving $\sqrt{2x+3}-\sqrt{x-10}=4$ I have been trying to solve the problem $\sqrt{2x+3}-\sqrt{x-10}=4$ and I have had tons problems of with it and have been unable to solve it. Here is what I have tried-$$\sqrt{2x+3}-\sqrt{x-10}=4$$ is the same as $$\sqrt{2x+3}=4+\sqrt{x-10}$$ from here I would square both sides $$(\sqrt{2x+3})^2=(4+\sqrt{x-10})^2$$
which simplifies to $$2x+3=16+x-10+8\sqrt{x-10}$$ I would then isolate the radical $$x-3=8\sqrt{x-10}$$ then square both sides once again $$(x-3)^2=(8\sqrt{x-10})^2$$ which simplifies to $$x^2-6x+9=8(x-10)$$ simplified again $$x^2-6x+9=8x-80$$ simplified once again $$x^2-14x+89=0$$ this is where I know I have done something wrong because the solution would be $$14 \pm\sqrt{-163 \over2}$$ I am really confused and any help would be appreciated
|
Note that when you square something like $a\sqrt{b}$ you get $a^2b$.
Thus, you should get:
$\begin{align}
x^2-6x+9 &= 64(x-10)\\
x^2-6x+9 &= 64x-640\\
x^2-70x+649 &= 0\\
(x-11)(x-59) &= 0\\
\therefore \boxed{x=11,59}.
\end{align}$
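Both roots survive the back-substitution check (my addition, a Python sketch), so neither is extraneous:

```python
import math

def f(x):
    return math.sqrt(2 * x + 3) - math.sqrt(x - 10)

assert abs(f(11) - 4) < 1e-12    # sqrt(25) - sqrt(1)  = 5 - 1  = 4
assert abs(f(59) - 4) < 1e-12    # sqrt(121) - sqrt(49) = 11 - 7 = 4
```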
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Help understanding the geometry behind the maximum principle for eigenvalues. Context:
I am learning about the maximum principle, which states that
If $A$ is a real symmetric matrix and $q(x)=\langle Ax, x \rangle$, the following statements hold:
1. $\lambda_1=\mathrm{max}_{||x||=1} q(x)=q(x_1)$ is the largest eigenvalue of the matrix $A$, and $x_1$ is the eigenvector corresponding to eigenvalue $\lambda_1$.
2. Let $\lambda_k=\mathrm{max}\, q(x)$ subject to the constraints $\langle x,x_j \rangle=0$ for $j=1,2,\dots ,k-1$, and $||x||=1$.
Then $\lambda_k=q(x_k)$ is the $k$th eigenvalue of $A$, $\lambda_1\geq \lambda_2 \geq \dots \geq \lambda_k$, and $x_k$ is the corresponding eigenvector of $A$.
Now the book states that geometrically, we can visualize this as follows:
To understand the geometric significance of the maximum principle, we view the quadratic form $q(x)=\langle Ax,x \rangle$ as a radial map of the sphere. That is, to each point on the sphere $||x||=1$, associate $q(x)$ with the projection of $Ax$ onto $x$, $\langle Ax,x \rangle x$.If $A$ is positive definite, the surface $q(x)x$ is an ellipsoid. The maximum of $q(x)$ occurs exactly along the major axis of the ellipsoid. If we intersect it with the plane orthogonal to the major axis, then restricted to this plane, the maximum of q(x) occurs along the semi-major axis and so on.
I've bolded the parts that I don't understand. I don't see how this radial mapping is transforming the sphere into an ellipsoid nor the parts about the maximums occuring along the major axis and "so on". Could anyone elucidate this geometric idea for me?
|
The eigenvectors of a real symmetric matrix form a complete orthonormal set; that is, we can take them as an orthonormal basis for $R^n$. So we can represent any $v$ on the unit $n$-sphere as $(v_j)_{1\le j \le n}$, in the sense that there is a unique $(v_j)_{1 \le j \le n}$ such that $v=\sum_j (v_j x_j)$, and furthermore we have $ \sum_j (v_j)^2=1$. That is, the lines from the origin through the eigenvectors form a system of mutually orthogonal co-ordinate axes. And $Av= \sum_j (\lambda_j v_j x_j).$
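A concrete instance (my addition, a plain-Python sketch): for $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$, with eigenvalues $3$ and $1$, the form $q$ on the unit circle equals $2+\sin 2t$, maximized at the eigenvector $(1,1)/\sqrt2$; restricted to the orthogonal direction, the maximum drops to the next eigenvalue.

```python
import math, random

A = [[2.0, 1.0], [1.0, 2.0]]               # eigenvalues 3 and 1

def q(v):
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    return Av[0] * v[0] + Av[1] * v[1]     # <Av, v>

random.seed(3)
for _ in range(5000):
    t = random.uniform(0, 2 * math.pi)
    assert q((math.cos(t), math.sin(t))) <= 3 + 1e-12   # never exceeds lambda_1

v1 = (1 / math.sqrt(2), 1 / math.sqrt(2))   # eigenvector for lambda_1 = 3
v2 = (1 / math.sqrt(2), -1 / math.sqrt(2))  # orthogonal eigenvector
assert abs(q(v1) - 3) < 1e-12
assert abs(q(v2) - 1) < 1e-12               # max on the orthogonal line
```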
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
A theorem for cubic-A generalization of Carnot theorem I found a theorem for cubic, the theorem is a generalization of the Carnot theorem for conic. I'm an electrical engineer, not a mathematician. I don't know how to prove this result.
Let $ABC$ be a triangle, let three points $A_1, A_2, A_3$ lie on $BC$, three points $B_1, B_2, B_3$ lie on $CA$, and three points $C_1, C_2, C_3$ lie on $AB$. There is a cubic passing through the nine points $A_1, A_2, A_3, B_1, B_2, B_3, C_1, C_2, C_3$ if and only if:
$$\frac{\overline{A_1B}}{\overline{A_1C}}.\frac{\overline{A_2B}}{\overline{A_2C}}.\frac{\overline{A_3B}}{\overline{A_3C}}.
\frac{\overline{B_1C}}{\overline{B_1A}}.\frac{\overline{B_2C}}{\overline{B_2A}}.\frac{\overline{B_3C}}{\overline{B_3A}}.
\frac{\overline{C_1A}}{\overline{C_1B}}.\frac{\overline{C_2A}}{\overline{C_2B}}.\frac{\overline{C_3A}}{\overline{C_3B}}=1$$
|
Let $S$ be a hypersurface of degree $d$ of equation $\phi(x) = 0$ in some $k^n$, and $a$ a point not on $S$. Consider a line $a + t v$ through $a$ that intersects $S$ in $d$ points, given by the equation in $t$
$$\phi(a + t v ) = 0$$
Now expanding the LHS with respect to the powers of $t$ we get
$$\phi(a) + \cdots + \phi_d(v) t^d =0$$
(where $\phi_d$ is the degree-$d$ homogeneous part of $\phi$). Therefore, the product of the roots of the above equation is (by Vieta)
$$(-1)^d\frac{\phi(a)}{\phi_d(v)}$$
Therefore: if $a$, $b$ are points not on $S$ and the line $ab$ intersects $S$ in $d$ points $c_1$, $\ldots$, $c_d$, then
$$\frac{ac_1 \cdot ac_2 \cdots ac_d}{b c_1\cdot bc_2 \cdots bc_d} = \frac{\phi(a)}{\phi(b)}$$
Consequence ( Carnot's theorem)
Let $a_1$, $\ldots$, $a_n$ be points not on $S$ (indices taken mod $n$, so $a_{n+1}=a_1$). For every $i$, denote the intersection points of the line $a_i a_{i+1}$ with $S$ by $c_{i1}$, $\ldots$, $c_{id}$. Then we have
$$\prod_{i=1}^n \frac{a_i c_{i1} \cdot a_i c_{i2} \cdots \cdot a_i c_{id}}{a_{i+1} c_{i1}\cdot a_{i+1} c_{i2} \cdots \cdot a_{i+1} c_{id}}=1$$
Indeed, the product equals
$$\prod_{i=1}^n \frac{\phi(a_i)}{\phi(a_{i+1})} = 1$$
Notes: the theorem about the product of chords determined by an algebraic curve is due to Newton ( it generalizes the power of a point)
The theorem of Carnot follows from the theorem of Newton, and generalizes results like Menelaus theorem.
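Both statements can be checked numerically for the unit circle $\phi(x,y)=x^2+y^2-1$ (my addition, a Python sketch; `chord_ratio` is a made-up helper name): the signed chord-ratio product along a line equals $\phi(a)/\phi(b)$, and the cyclic product around a triangle whose sides all cut the circle telescopes to $1$.

```python
import math

def phi(p):                       # unit circle
    return p[0] ** 2 + p[1] ** 2 - 1

def chord_ratio(a, b):
    """Signed product of (a c_i)/(b c_i) over the two intersection points
    c_i of line ab with the circle, via roots t_i of phi(a + t(b-a)) = 0."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    alpha = dx * dx + dy * dy
    beta = 2 * (a[0] * dx + a[1] * dy)
    gamma = phi(a)
    disc = beta * beta - 4 * alpha * gamma
    assert disc > 0               # the line really meets the circle
    r = math.sqrt(disc)
    t1, t2 = (-beta + r) / (2 * alpha), (-beta - r) / (2 * alpha)
    # a c_i = t_i |ab| and b c_i = (t_i - 1) |ab| as signed lengths
    return (t1 * t2) / ((t1 - 1) * (t2 - 1))

a, b = (2.0, 0.0), (-2.0, 0.5)
assert abs(chord_ratio(a, b) - phi(a) / phi(b)) < 1e-9   # Newton's theorem

# Carnot: an equilateral triangle of circumradius 1.2 (inradius 0.6 < 1,
# so every side crosses the circle); the cyclic product is 1
tri = [(1.2 * math.cos(2 * math.pi * i / 3), 1.2 * math.sin(2 * math.pi * i / 3))
       for i in range(3)]
prod = 1.0
for i in range(3):
    prod *= chord_ratio(tri[i], tri[(i + 1) % 3])
assert abs(prod - 1) < 1e-9
```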
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Evaluation of $\int_0^\infty \frac{1}{\left[x^4+(1+\sqrt{2})x^2+1\right]\cdot \left[x^{100}-x^{98}+\cdots+1\right]}dx$
Evaluation of $\displaystyle \int_0^\infty \frac{1}{\left[x^4+(1+\sqrt{2})x^2+1\right]\cdot \left[x^{100}-x^{98}+\cdots+1\right]}dx$
$\bf{My\; Try::}$ Let $$I= \int_{0}^{\infty}\frac{1}{\left[x^4+(1+\sqrt{2})x^2+1\right]\cdot \left[x^{100}-x^{98}+\cdots+1\right]}dx \tag 1 $$
Let $\displaystyle x=\frac{1}{t}\;;$ then $\displaystyle dx = -\frac{1}{t^2}\,dt$ and, after changing the limits, we get
So $$I = \int_0^\infty \frac{t^{102}}{\left[t^4+(1+\sqrt{2})t^2+1\right]\cdot \left[t^{100}-t^{98}+\cdots+1\right]}dt$$
So $$\displaystyle I = \int_0^\infty \frac{x^{102}}{\left[x^4+(1+\sqrt{2})x^2+1\right]\cdot \left[x^{100}-x^{98}+\cdots+1\right]}dx \tag 2$$
So we get $$2I = \int_0^\infty \frac{1+x^{102}}{\left[x^4+(1+\sqrt{2})x^2+1\right]\cdot \left[x^{100}-x^{98}+\cdots+1\right]} \,dx$$
Now Using Geometric Progression series,
We can write $$ 1-x^2+x^4-\cdots-x^{98}+x^{100} = \frac{x^{102}+1}{1+x^2}$$
so we get $$2I = \int_0^\infty \frac{1+x^2}{x^4+ax^2+1}dx\;,$$ where $a=(\sqrt{2}+1)$
So we get $$2I = \int_0^\infty \frac{1+\frac{1}{x^2}}{\left(x-\frac{1}{x}\right)^2+\left(\sqrt{a+2}\right)^2} dx = \frac{1}{\sqrt{a+2}}\left[\tan^{-1}\left(\frac{x^2-1}{x\cdot \sqrt{a+2}}\right)\right]_0^\infty$$
So we get $$2I = \frac{\pi}{\sqrt{a+2}}\Rightarrow I = \frac{\pi}{2\sqrt{3+\sqrt{2}}}$$
My question is: can we solve the integral $\bf{\displaystyle \int_0^\infty \frac{1+x^2}{x^4+ax^2+1}dx}$ using any other method,
for instance complex analysis?
Thanks.
|
I think your way of evaluating the integral is very short and elegant. Nevertheless, you can do partial fraction decomposition. You'll end up with (as long as I did not do any mistakes)
$$
\frac{1}{\sqrt{2+a}}\biggl[\arctan\Bigl(\frac{\sqrt{2-a}+2x}{\sqrt{2+a}}\Bigr)+\arctan\Bigl(\frac{\sqrt{2-a}-2x}{\sqrt{2+a}}\Bigr)\biggr]
$$
as a primitive.
Edit
That way of writing it is not so good, since $a>2$. It is better to write
$$
x^4+ax^2+1=\Bigl(x^2+\frac{a}{2}-\frac{\sqrt{a^2-4}}{2}\Bigr)\Bigl(x^2+\frac{a}{2}+\frac{\sqrt{a^2-4}}{2}\Bigr)
$$
and then do partial fraction decomposition. The resulting primitive then reads (watch out for typos!)
$$
\begin{aligned}
\frac{1}{\sqrt{2}\sqrt{a^2-4}}\biggl[&\frac{2-a+\sqrt{a^2-4}}{\sqrt{a-\sqrt{a^2-4}}}\arctan\Bigl(\frac{\sqrt{2}x}{\sqrt{a-\sqrt{a^2-4}}}\Bigr)\\
&\qquad+\frac{a-2+\sqrt{a^2-4}}{\sqrt{a+\sqrt{a^2-4}}}\arctan\Bigl(\frac{\sqrt{2}x}{\sqrt{a+\sqrt{a^2-4}}}\Bigr)\biggr].
\end{aligned}
$$
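As a numerical sanity check (independent of either derivation), note that the integrand $g(x)=\frac{1+x^2}{x^4+ax^2+1}$ is invariant under $x\mapsto 1/x$, so $\int_0^\infty g = 2\int_0^1 g$ and hence $I=\int_0^1 g$. A short script with composite Simpson's rule then compares this against the closed form:

```python
import math

a = 1 + math.sqrt(2)

def g(x):
    return (1 + x * x) / (x ** 4 + a * x * x + 1)

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += f(lo + k * h) * (4 if k % 2 else 2)
    return s * h / 3

# 2I = int_0^inf g = 2 * int_0^1 g (by the x -> 1/x substitution), so:
I_num = simpson(g, 0.0, 1.0)
I_closed = math.pi / (2 * math.sqrt(3 + math.sqrt(2)))
print(I_num, I_closed)
```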
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Transforming a set of equations from one set of variables to another? Assume $\theta_1,\dots,\theta_n$ are $n$ positive numbers such that $$\theta_1+\dots+\theta_n=1$$ Define $y_{ij}=\frac{\theta_i}{\theta_j}$ for all $i,j \in \{1,\dots,n\}$. Is there a way to transform the above equation in terms of $y_{ij}$?
|
No. If you multiply all $\theta_i$ by the same positive number $c\neq 1$, then all $y_{ij}$ stay the same, but $\theta_1+\dots+\theta_n$ changes.
In more details: assume that you have a condition or a set of conditions on $y_{ij}=\frac{\theta_i}{\theta_j}$ which is satisfied if and only if $\theta_1+\dots+\theta_n=1$. Let $\overline{\theta}_i$ be positive numbers such that $\overline{\theta}_1+\dots+\overline{\theta}_n=1$. Then, by assumption, the numbers $\overline{y}_{ij}=\frac{\overline{\theta}_i}{\overline{\theta}_j}$ satisfy that condition(s). Now choose a positive number $c\neq 1$ and set $\theta'_i = c\overline{\theta}_i$. Then $y'_{ij} = \frac{\theta'_i}{\theta'_j} = \overline{y}_{ij}$, so the numbers $y'_{ij}$ satisfy the condition(s) as well, but $\theta'_1+\dots+\theta'_n = c(\overline{\theta}_1+\dots+\overline{\theta}_n) = c\neq 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Rugby Union - Sum Of Binomial Random Variables I've been cracking my head at this for a few days now. Suppose a Rugby Union match is expected to have 5.9 tries on average, and tries follow a Poisson distribution. A conversion follows a try, where each conversion follows a binomial distribution with probability "p". Can I use a Poisson distribution to estimate if 0,1,2,3,4,5,6,7...etc. conversions will be scored in a game?
|
Yes, if tries occur with a Poisson arrival rate $\lambda$, and conversions follow each try if there is a Bernoulli success of rate $p$, then conversions occur with a Poisson arrival rate of $\lambda p$.
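This thinning property is easy to check by simulation. Below is a stdlib-only sketch; the conversion probability $p=0.7$ is an assumed, purely illustrative value (the question leaves $p$ unspecified). The empirical mean and the empirical probability of zero conversions should match a Poisson($\lambda p$) law:

```python
import math
import random

random.seed(0)

LAM, P = 5.9, 0.7   # try rate from the question; p = 0.7 is an illustrative assumption

def poisson(lam):
    # Knuth's method: count uniforms until their running product drops below e^{-lam}
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < limit:
            return k
        k += 1

N = 100_000
conversions = []
for _ in range(N):
    tries = poisson(LAM)                                     # tries ~ Poisson(LAM)
    conversions.append(sum(random.random() < P for _ in range(tries)))  # Bernoulli thinning

mean = sum(conversions) / N
p0 = conversions.count(0) / N
print(mean, p0)   # should be near LAM*P = 4.13 and e^{-LAM*P} respectively
```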
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1440898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate limit without L'hopital $$\lim_{x\to2}\dfrac{\sqrt{x^2+1}-\sqrt{2x+1}}{\sqrt{x^3-x^2}-\sqrt{x+2}}$$
Please help me evaluate this limit. I have tried rationalising it but it just can't work, I keep ending up with $0$ at the denominator... Thanks!
|
$$\displaystyle \lim_{x\rightarrow 2}\frac{\sqrt{x^2+1}-\sqrt{2x+1}}{\sqrt{x^3-x^2}-\sqrt{x+2}}\times \frac{\sqrt{x^2+1}+\sqrt{2x+1}}{\sqrt{x^2+1}+\sqrt{2x+1}}\times \frac{\sqrt{x^3-x^2}+\sqrt{x+2}}{\sqrt{x^3-x^2}+\sqrt{x+2}}$$
So we get $$\displaystyle \lim_{x\rightarrow 2}\left[\frac{x^2-2x}{x^3-x^2-x-2}\times \frac{\sqrt{x^3-x^2}+\sqrt{x+2}}{\sqrt{x^2+1}+\sqrt{2x+1}}\right]$$
So we get $$\displaystyle \lim_{x\rightarrow 2}\frac{x(x-2)}{(x-2)\cdot (x^2+x+1)}\times \lim_{x\rightarrow 2} \frac{\sqrt{x^3-x^2}+\sqrt{x+2}}{\sqrt{x^2+1}+\sqrt{2x+1}}$$
So we get $$\displaystyle = \frac{2}{7}\times \frac{4}{2\sqrt{5}} = \frac{4}{7\sqrt{5}}$$
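A quick numerical check of the value $\frac{4}{7\sqrt5}\approx 0.2556$, evaluating the original quotient just off $x=2$ where it is still well defined:

```python
import math

def f(x):
    num = math.sqrt(x * x + 1) - math.sqrt(2 * x + 1)
    den = math.sqrt(x ** 3 - x * x) - math.sqrt(x + 2)
    return num / den

target = 4 / (7 * math.sqrt(5))
approx = [f(2 + h) for h in (1e-3, 1e-5, -1e-3, -1e-5)]   # approach 2 from both sides
print(approx, target)
```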
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\int_0^\infty\frac{x\cos(x)-\sin(x)}{x\left({e^x}-1\right)}\,dx = \frac{\pi}2+\arg\left(\Gamma(i)\right)-\Re\left(\psi_0(i)\right)$ While I was working on this question, I've found that
$$
I=\int_0^\infty\frac{x\cos(x)-\sin(x)}{x\left({e^x}-1\right)}\,dx = \frac{\pi}2+\arg\left(\Gamma(i)\right)-\Re\left(\psi_0(i)\right),
$$
where $\arg$ is the complex argument, $\Re$ is the real part of a complex number, $\Gamma$ is the gamma function, $\psi_0$ is the digamma function.
How could we prove this? Are there any more simple closed-form?
A numerical approximation:
$$
I \approx -0.3962906410900101751594101405188072631361627457\dots
$$
|
Too long for a comment: After cancelling the factor of $x$ in the integrand, write the $($new$)$
numerator as $\bigg(1-\dfrac{\sin x}x\bigg)-(1-\cos x),~$ then, for the latter, use Euler's formula
in conjunction with an exponential substitution to $($formally$)$ express the second
integral as a sum of harmonic numbers of complex argument, whose connection
with the digamma function is well-known. Now “all” that's left to do is showing that $$\int_0^\infty\frac{1-\dfrac{\sin x}x}{e^x-1}~dx~=~\frac\pi2+\gamma+\arg\Big(\Gamma(i)\Big).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Kuhn Tucker conditions, and the sign of the Lagrangian multiplier I was under the impression that under the Kuhn-Tucker conditions for a constrained optimisation, with inequality constraints the multipliers must follow a non-negativity condition.
i.e. $$\lambda \geq 0$$
Operating with complementary slackness with the constraint $g(x,y)<c$.
i.e. $$\lambda * [g(x,y)-c] = 0$$
However I was just attempting to solve this problem,
And according to the solutions the related condition for the Lagrangian multiplier is $\lambda \leq 0$, whereas I had incorrectly formulated it as $\lambda \geq 0$
I have otherwise managed to solve the question, finding if the constraint is active, $$x^*= (2a-1)/5 $$ $$y^*=(a+2)/5$$ $$\lambda=(2a-6)/5$$
and $(1,1)$ if the constraint was not active.
However as my Lagrangian inequality was the wrong way round I stated the constraint would be active if $a>3$ and inactive $a \leq 3$, which I can see is incorrect if I plot the graphs.
Can someone please explain in what circumstances the multiplier condition would be $\leq 0$ rather than $\geq 0$, is it because it's a minimisation problem rather than a maximisation problem?
Also as a sub point, if I understand correctly, when minimising using the lagrangian method with inequality constraints one is meant to multiply the objective function by -1, and then maximise it, I didn't do that, I just plugged it in as it is, though I fail to see how that would effect the condition of the multiplier, and my optimising points still seem to agree with the solutions.
Any guidance on the matter is greatly appreciated.
|
The reason that you get the multiplier condition $\lambda \leq 0$ rather than $\lambda \geq 0$ is that you have a minimization problem, as you correctly indicate.
Intuitively, if the constraint $g(x,y)\leq c$ is active at the point $(x^*,y^*)$, then $g(x,y)$ is increasing as you move from $(x^*,y^*)$ and out of the domain, and $g(x,y)$ is decreasing as you move from $(x^*,y^*)$ and into the domain. If $(x^*,y^*)$ is the solution of the minimization problem, then $f(x,y)$ has to be increasing when you move from $(x^*,y^*)$ into the domain and decreasing when you move from $(x^*,y^*)$ and out of the domain. Since $g(x,y)$ and $f(x,y)$ are increasing in opposite directions, the Lagrange multiplier cannot be positive.
To show this more formally, it is convenient to use properties of gradient vectors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Noetherian Rings and Ideals Associated with $\text{Ann}(M)$ If $M$ is a finitely generated $A$-module where $A$ is Noetherian and $I$ is an ideal of $A$ such that the support of $M$, $\mathrm{Supp}(M)$ is a subset of $V(I)$ the set of prime ideals containing $I$. How do i know if there exists an $n$ such that $I^n$ is a subset of $\mathrm{Ann}(M)$?
It is relevant that if $M$ is a finitely generated module then $Supp(M)=V(\text{Ann}(M))$.
|
Hint. $V(J)\subset V(I)\implies \mathrm{rad}(I) \subset \mathrm{rad}(J)$, so $I \subset \mathrm{rad}(J)$. Now use that $I$ is finitely generated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Statistics dice question $6$ people, $A, B, C, D, E, F$ sit in a circle to play a dice game.
Rule of the game: when you roll the die and get a number which is a multiple of $3$, give the die to the person on the right (counterclockwise). If the number is not a multiple of $3$, give it to the person on the left.
What is the probability that $B$ has the die after five trials starting from $A$?
My approach:
The probability that the die would go right is $\frac 13$ and left is $\frac 23$. Am I supposed to find all the possible outcomes and add the probabilities?
For example, it could go $A-B-C-D-C-B$. So $\frac 13\times \frac 13 \times \frac 13 \times \frac 23 \times \frac 23$.
Do I find all the paths that it could take and add all these probabilities? Or is there another way to solve this??
|
Label the seats A=0, B=1, C=2, D=3, E=4, F=5.
You start at A (i.e. 0), you have 5 rolls, and each roll moves the die $+1$ with probability $X(=1/3)$ or $-1$ with probability $Y(=2/3)$, positions taken mod 6. Writing one $\pm 1$ term per roll, the die ends at B exactly when
$$v + w + x + y + z \equiv 1 \pmod 6,$$
i.e. the sum is $1$ or $-5$.
Case $v + w + x + y + z = 1$: three of the terms are $+1$ and two are $-1$, so the solution is of the form
$(v + w + x) + (y + z) = 1$, where the first bracket holds the positive terms and the second bracket the negative terms. This can be arranged in $L = \binom{5}{3} = 10$ ways.
Therefore the probability is $\left(\tfrac13\right)^3 \left(\tfrac23\right)^2 \times L = 10 \times \tfrac{1}{27} \times \tfrac{4}{9} = \tfrac{40}{243} \approx 0.165$.
(As per the suggestion from comments)
Case $v + w + x + y + z = -5$: flipping signs, this is
$v' + w' + x' + y' + z' = 5$, which forces all terms to be positive, so there is $\binom55 = 1$ arrangement, and the probability is $\left(\tfrac23\right)^5 = \tfrac{32}{243} \approx 0.132$.
Therefore the net probability is $\tfrac{40}{243} + \tfrac{32}{243} = \tfrac{72}{243} = \tfrac{8}{27} \approx 0.296$.
P.S. Feel free to ask or correct me!
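The case analysis can be confirmed by exact enumeration of all $2^5$ roll sequences, following the same sign convention as above (a multiple-of-3 roll is a $+1$ step with probability $1/3$; seats A=0 through F=5, taken mod 6):

```python
from fractions import Fraction
from itertools import product

p_plus, p_minus = Fraction(1, 3), Fraction(2, 3)   # +1 step vs -1 step

prob_B = Fraction(0)
for steps in product((+1, -1), repeat=5):
    if sum(steps) % 6 == 1:                        # die ends at seat B = 1
        p = Fraction(1)
        for s in steps:
            p *= p_plus if s == +1 else p_minus
        prob_B += p

print(prob_B, float(prob_B))
```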
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
A limit problem where $x\to 0$ I can't figure out how to compute this limit: $$\lim_{x\to 0}\dfrac{e^{3x^2}-e^{-3x^2}}{e^{x^2}-e^{-x^2}}$$ Any advice?
|
$$\lim_{x\to 0}\dfrac{e^{3x^2}-e^{-3x^2}}{e^{x^2}-e^{-x^2}}$$
$$=\lim_{x\to 0}\dfrac{e^{3x^2}-1-\left(e^{-3x^2}-1\right)}{e^{x^2}-1-\left(e^{-x^2}-1\right)}$$
$$=\dfrac{3\cdot\lim_{x\to 0}\dfrac{e^{3x^2}-1}{3x^2}-\left(\lim_{x\to 0}\dfrac{e^{-3x^2}-1}{-3x^2}\right)(-3)}{\lim_{x\to 0}\dfrac{e^{x^2}-1}{x^2}-\left(\lim_{x\to 0}\dfrac{e^{-x^2}-1}{-x^2}\right)(-1)}$$
Now use $\lim_{h\to0}\dfrac{e^h-1}h=1$
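Numerically the quotient indeed approaches $3$, consistent with the two standard limits above; a quick check:

```python
import math

def q(x):
    t = x * x
    return (math.exp(3 * t) - math.exp(-3 * t)) / (math.exp(t) - math.exp(-t))

values = [q(x) for x in (0.1, 0.01, 0.001)]
print(values)   # each value is close to 3
```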
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
The identity $x/y-y/x = (y+x)(y^{-1}-x^{-1}).$ Difference of perfect square asserts that the expression $y^2-x^2$ factorizes as $(y+x)(y-x)$. On the train home last night, I noticed a variant on this. Namely, that $x/y-y/x$ factorizes as $(y+x)(y^{-1}-x^{-1}).$ Explicitly:
Proposition. Let $R$ denote a ring, and consider $x,y \in R$ such that both $x$ and $y$ are units. Then the following identity holds. $$\frac{x}{y}-\frac{y}{x} = (y+x)(y^{-1}-x^{-1})$$
For example, in the rational numbers, we have:
$$\frac{3}{2}-\frac{2}{3} = 5\cdot \left(\frac{1}{2}-\frac{1}{3}\right)$$
Questions.
Q0. Does this variant on "difference of perfect squares" have a name?
Q1. Is it part of a larger family of identities that includes $y^2-x^2 = (y-x)(y+x)$, and other variants?
Q2. Does it have any interesting uses or applications, e.g. in calculus?
If anyone can think of better tags, please retag.
|
Not as far as I know, but notice that your identity is
$$ (x+y)\frac{x-y}{xy} = (xy)^{-1}(x+y)(x-y) = \frac{x^2-y^2}{xy} = \frac{x}{y}-\frac{y}{x}, $$
so I'm not sure I'd say that it's really distinct from difference-of-two-squares.
As far as generalisation goes, you've got the old
$$ x^n-y^n = (x-y)(x^{n-1}+x^{n-2}y+\dotsb+ xy^{n-2}+y^{n-1}). $$
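Both identities are easy to spot-check with exact rational arithmetic (a throwaway script, not part of the argument):

```python
from fractions import Fraction
import random

random.seed(1)
for _ in range(100):
    # random nonzero rationals, so all the divisions are legal and exact
    x = Fraction(random.randint(1, 50), random.randint(1, 50))
    y = Fraction(random.randint(1, 50), random.randint(1, 50))
    assert x / y - y / x == (y + x) * (1 / y - 1 / x)   # the identity in question
    assert x ** 2 - y ** 2 == (x - y) * (x + y)          # difference of squares
print("both identities hold on 100 random rational pairs")
```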
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What mistakes, if any, were made in Numberphile's proof that $1+2+3+\cdots=-1/12$? This is not a duplicate question because I am looking for an explanation directed to a general audience as to the mistakes (if any) in Numberphile's proof (reproduced below). (Numberphile is a YouTube channel devoted to pop-math and this particular video has garnered over 3m views.)
By general audience, I mean the same sort of audience as the millions who watch Numberphile. (Which would mean, ideally, making little or no mention of things that a general audience will never have heard of - e.g. Riemann zeta functions, analytic continuations, Casimir forces; and avoiding tactics like appealing to the fact that physicists and other clever people use it in string theory, so therefore it must be correct.)
Numberphile's Proof.
The proof proceeds by evaluating each of the following:
$S_1 = 1 - 1 + 1 - 1 + 1 - 1 + \ldots$
$S_2 = 1 - 2 + 3 - 4 + \ldots $
$S = 1 + 2 + 3 + 4 + \ldots $
"Now the first one is really easy to evaluate ... You stop this at any point. If you stop it at an odd point, you're going to get the answer $1$. If you stop it at an even point, you get the answer $0$. Clearly, that's obvious, right? ... So what number are we going to attach to this infinite sum? Do we stop at an odd or an even point? We don't know, so we take the average of the two. So the answer's a half."
Next:
$S_2 \ \ = 1 - 2 + 3 - 4 + \cdots$
$S_2 \ \ = \ \ \ \ \ \ \ 1 - 2 + 3 - 4 + \cdots$
Adding the above two lines, we get:
$2S_2 = 1 - 1 + 1 - 1 + \cdots$
Therefore, $2S_2=S_1=\frac{1}{2}$ and so $S_2=\frac{1}{4}$.
Finally, take
\begin{align}
S - S_2 & = 1 + 2 + 3 + 4 + \cdots
\\ & - (1 - 2 + 3 - 4 + \cdots)
\\ & = 0 + 4 + 0 + 8 + \cdots
\\ & = 4 + 8 + 12 + \cdots
\\ & = 4S
\end{align}
Hence $-S_2=3S$ or $-\frac{1}{4}=3S$.
And so $S=-\frac{1}{12}$. $\blacksquare$
|
I've argued this with others unsuccessfully, but I'm a glutton for punishment.
What you are saying is
assume that
$$1 - 1 + 1 - 1 + 1 - 1 + \ldots$$ is a convergent infinite
series
and let $$S_1 = 1 - 1 + 1 - 1 + 1 - 1 + \ldots$$
The assumption that
$$ 1 - 1 + 1 - 1 + 1 - 1 + \ldots$$ is a real number is a false hypothesis. Convergent series are very rigorously defined and $1 - 1 + 1 - 1 + 1 - 1 + \ldots$ isn't one.
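To see concretely why, look at the partial sums of $S_1$: the sequence whose limit would have to exist oscillates forever between $1$ and $0$, so no limit exists.

```python
partial, sums = 0, []
for n in range(1, 11):
    partial += (-1) ** (n + 1)   # the n-th term of 1 - 1 + 1 - 1 + ...
    sums.append(partial)
print(sums)   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```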
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 5,
"answer_id": 1
}
|
Is this a valid proof of the squeeze theorem? I'm self-studying Spivak's calculus, and I have no way of checking my solutions. One of the problems asks for a proof of the squeeze theorem. Here is what I have figured:
Proof. Suppose there exist two functions $f(x)$ and $g(x)$ such that $(\forall x)(g(x) \geq f(x))$ and both $\lim _{x\to a}f(x) = l_1$ and $\lim _{x\to a}g(x) = l_2$ exist. Now let $h(x) = g(x) - f(x)$ so that $h(x) \geq 0$. Assume that $l_2$ $<$ $l_1$.
By previous results, $(\forall \epsilon > 0)(\exists \delta > 0): (\forall x)($if $\lvert x - a \rvert < \delta$, then $\lvert h(x) - (l_2 - l_1) \rvert < \epsilon)$.
But if $\epsilon \leq l_1 - l_2$, no $\delta$ exists such that $\lvert h(x) - (l_2 - l_1) \rvert < \epsilon$, thus, a contradiction.
$\therefore l_2 \geq l_1$
Given this result, define three functions $f_1(x)$, $f_2(x)$, and $f_3(x)$ such that $(\forall x)(f_1(x) \leq f_2(x) \leq f_3(x))$ and $\lim _{x \to a}f_1(x) = \lim _{x \to a}f_3(x) = L$.
Because $f_2(x) \geq f_1(x)$, it must be true that $\lim_{x \to a}f_2(x) \geq \lim_{x \to a}f_1(x)$ and similarly, because $f_3(x) \geq f_2(x)$, it must also be true that $\lim_{x \to a}f_3(x) \geq \lim_{x \to a}f_2(x)$.
It follows that $\lim_{x \to a}f_1(x) \leq \lim_{x \to a} f_2(x) \leq \lim_{x \to a} f_3(x)$, which in turn gives $L \leq \lim_{x \to a} f_2(x) \leq L$.
$\therefore \lim_{x \to a}f_2(x) = L$ $\blacksquare$
I'm mostly unsure about the validity of the first part, specifically the "no $\delta$ exists" bit. Also, I would appreciate suggestions regarding formal proof-writing as I'm new to this. Thanks a lot in advance!
|
Why does no such $\delta$ exist? This is not immediately obvious. you need to explain it.
A second problem with your proof is that the squeeze theorem only assumes that $f_1$ and $f_3$ converge. That the squeezed function $f_2$ also has a limit is part of what must be proved. But your proof simply assumes this is so.
For a better approach, consider this: if $\epsilon > 0$, then there is a $\delta_1 > 0$ such that for $|x - a| < \delta_1$, $f_1(x) > L - \epsilon$. And there is a $\delta_2 > 0$ such that for $|x - a| < \delta_2$, $f_3(x) < L + \epsilon$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If $f$ is integrable on $[0,1]$, and $\lim_{x\to 0^+}f(x)$ exists, compute $\lim_{x\to 0^{+}}x\int_x^1 \frac{f(t)}{t^2}dt$. If $f$ is integrable on $[0,1]$, and $\lim_{x\to 0}f(x)$ exists, compute $\lim_{x\to 0^{+}}x\int_x^1 \frac{f(t)}{t^2}dt$.
I'm lost about what the value is for this limit in the first place. How can I make a guess for this kind of limit?
|
To make a guess, first consider the simplest kind of function for which the limit exists, say $f=1$. Then $\lim_{x\to 0^{+}}x\int_x^1 \frac{f(t)}{t^2}dt=\lim_{x\to 0^{+}}x(1/x-1)=1$.
So clearly, if $\lim_{x\to 0} f(x)=l$, then $\lim_{x\to 0^{+}}x\int_x^1 \frac{f(t)}{t^2}dt=l$.
Now let's prove this, using the condition that $\lim_{x\to 0} f(x)=l$.
Given any $\epsilon \gt 0$, there is some $\delta \gt 0$ such that $|f(t)-l|\lt \epsilon$ for $0\lt t\lt \delta$.
Then
$|x\int_x^1 \frac{f(t)-l}{t^2}dt|\le x\int_x^{\delta} \frac{|f(t)-l|}{t^2}dt+x|\int_{\delta}^1 \frac{f(t)-l}{t^2}dt|\le x\epsilon \int_x^{\delta}\frac{dt}{t^2}+x|\int_{\delta}^1 \frac{f(t)-l}{t^2}dt|$.
Hence we get
$|x\int_x^1 \frac{f(t)}{t^2}dt+xl-l|\le \epsilon-\frac{\epsilon x}{\delta}+x|\int_{\delta}^1 \frac{f(t)-l}{t^2}dt|$.
Finally, we have
$|x\int_x^1 \frac{f(t)}{t^2}dt-l|=|x\int_x^1 \frac{f(t)}{t^2}dt+xl-l-xl|\le |x\int_x^1 \frac{f(t)}{t^2}dt+xl-l|+|xl|\le \epsilon-\frac{\epsilon x}{\delta}+x|\int_{\delta}^1 \frac{f(t)-l}{t^2}dt|+|xl|$.
Now let $x\to 0^{+}$ on both sides; then we have
$\lim_{x\to 0^{+}}|x\int_x^1 \frac{f(t)}{t^2}dt-l|\le \epsilon$, and since $\epsilon\gt 0$ is arbitrary, this limit is $0$.
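The result can be illustrated numerically with a concrete choice such as $f(t)=2+t$ (so $l=2$); here the inner integral is approximated with Simpson's rule on $[x,1]$. This is a rough sketch, not part of the proof:

```python
def f(t):
    return 2 + t          # f(0+) = 2, so the limit should be 2

def simpson(g, lo, hi, n=200_000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += g(lo + k * h) * (4 if k % 2 else 2)
    return s * h / 3

def G(x):
    return x * simpson(lambda t: f(t) / t ** 2, x, 1.0)

vals = [G(x) for x in (0.1, 0.01, 0.001)]
print(vals)               # the values approach 2 as x shrinks
```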
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1441888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
How to solve simultaneous modulus / diophantine equations Does anyone know how i'd solve a set of equations like this?
$(x_0\%k_0)\%2=0$
$(x_0\%k_1)\%2=1$
$(x_0\%k_2)\%2=0$
$(x_0\%k_3)\%2=1$
$(x_1\%k_0)\%2=0$
$(x_1\%k_1)\%2=0$
$(x_1\%k_2)\%2=1$
$(x_1\%k_3)\%2=1$
I'm trying to find valid values of $x_i$ and $k_i$. I've found some solutions using a program to brute force it, but I want to up the number of $x$'s, $k$'s which rules out brute force as a practical solution.
For instance, there are two $x$ values above, which translate to $2^x$ $k$ values, and $x\cdot 2^x$ equations. I'd like to take the number of $x$ values up to 16, or 32 if possible, which results in huge numbers of $k$'s and equations.
Anyone able to help at all, even to point me in some direction?
I do know about the chinese remainder theorem, multiplicative modular inverse and the extended euclidean algorithm, among some other basic modulus math techniques, but I'm not really sure how to make any progress on this.
Thanks!
Edit:
To clarify a bit, Ideally I'd like to find all solutions to this problem, but if there is a way to find a subset of solutions, like if the equations below could be solved that would be fine too. Or, if there is some way to find solutions numerically which is much faster than brute force permuting the $x_i$ and $k_i$ values and testing if they fit the constraints, that'd be helpful too.
$x_0\%k_0=0$
$x_0\%k_1=1$
$x_0\%k_2=0$
$x_0\%k_3=1$
$x_1\%k_0=0$
$x_1\%k_1=0$
$x_1\%k_2=1$
$x_1\%k_3=1$
|
I came up with a solution that I'm happy with so far, since it lets me find decent solutions that lets me move on with my reason for wanting to solve these equations.
I'm solving these equations:
$x_0\%k_0=0$
$x_0\%k_1=1$
$x_0\%k_2=0$
$x_0\%k_3=1$
$x_1\%k_0=0$
$x_1\%k_1=0$
$x_1\%k_2=1$
$x_1\%k_3=1$
I solve them by using the Chinese remainder theorem for each $x_i$, and I use prime numbers for the $k_i$ values, where each $k_i$ gets its own prime number. This ensures that the $k_i$ values are pairwise coprime when running them through the Chinese remainder theorem algorithm.
This works well and answers are calculated extremely fast, so I'm happy :P
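A sketch of that approach in Python; the primes and the residue patterns below are illustrative choices, and `pow(x, -1, m)` (the modular inverse) needs Python 3.8+:

```python
from math import prod

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise-coprime moduli
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

ks = [3, 5, 7, 11]             # one distinct prime per k_i (illustrative choice)
x0 = crt([0, 1, 0, 1], ks)     # x0 % k_i follows the pattern 0, 1, 0, 1
x1 = crt([0, 0, 1, 1], ks)     # x1 % k_i follows the pattern 0, 0, 1, 1
print(x0, x1)
```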
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
If $x_{1},x_{2}$ are two roots of $nx-x^n=a$, show that $|x_{1}-x_{2}|<\dfrac{a}{1-n}+2$
Assume that $n$ is a positive integer and $a$ is a real number, and that the equation $$nx-x^n=a$$ has positive real roots $x_{1},x_{2}$; show that
$$|x_{1}-x_{2}|<\dfrac{a}{1-n}+2$$
By condition, I showed that
$$(x_{1}-x_{2})[n-(x^{n-1}_{1}+x^{n-2}_{1}x_{2}+\cdots+x^{n-1}_{2})]=0$$
But I don't know what to do then. Please help, Thanks.
|
Let's study the function $f(x)=nx-x^n$ for $x>0$. $f'(x)=n(1-x^{n-1})$, so there is a global maximum at $1$. The function is monotonically increasing on $[0,1]$ and decreasing on $[1,\infty)$. The function attains the same value $a$ twice if and only if $a\in [0,n-1]$, and then both points lie in $[0,\sqrt[n-1]{n}]$. So for any such $x_1,x_2$ we have in fact $$|x_1-x_2|\leq\sqrt[n-1]{n}\leq 2\leq \frac{a}{n-1}+2.$$
Equality case: we must have $a=0$ and $2^{n-1}=n$, that is $n=1,2$. For $n=1$ $f(x)=0$ identically (and the text doesn't really make sense), so we might want to suppose $n\geq 2$ from the beginning, and for $n=2$ we have $f(x)=2x-x^2$: here $x_1=0$ and $x_2=1$ give us equality. In all other cases we have strict inequality.
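A numeric spot check of the bound, using bisection on each monotone branch of $f(x)=nx-x^n$; the values $n=5$, $a=2$ are an arbitrary test case with $a\in[0,n-1]$:

```python
n, a = 5, 2                          # arbitrary test case with a in [0, n-1]

def f(x):
    return n * x - x ** n - a        # roots of f are the solutions of nx - x^n = a

def bisect(g, lo, hi, iters=200):
    # plain bisection, assuming g changes sign on [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (g(lo) > 0) == (g(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

top = n ** (1 / (n - 1))             # f is increasing on [0, 1], decreasing on [1, top]
x1 = bisect(f, 0.0, 1.0)             # root on the increasing branch
x2 = bisect(f, 1.0, top)             # root on the decreasing branch
print(x1, x2, abs(x1 - x2), a / (n - 1) + 2)
```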
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Give a counterexample to $\bigcup\limits_{t \in T} (A_t \cap B_t)= \bigcup\limits_{t \in T} A_t \cap \bigcup\limits_{t \in T} B_t$ Let $\{A_t\}_{t \in T}$ and $\{B_t\}_{t \in T}$ be two non-empty indexed families of set.
Find a counterexample to
$$\bigcup\limits_{t \in T} (A_t \cap B_t)= \bigcup\limits_{t \in T} A_t \cap \bigcup\limits_{t \in T} B_t.$$
It is clear that $T$ must be an infinite set but I have no idea about the families $\{A_t\}_{t \in T}$ and $\{B_t\}_{t \in T}.$
|
To make it more concrete, perhaps, let $T=\{\alpha,\beta\}$, where $\alpha\neq\beta$. Now, let $A_\alpha=\{\alpha\}$, $A_\beta=\{\beta\}$, $B_\alpha=\{\beta\}$, and $B_\beta=\{\alpha\}$. Then we have that
$$
\bigcup_{t\in T}(A_t\cap B_t)=(A_\alpha\cap B_\alpha)\cup(A_\beta\cap B_\beta)=\varnothing\cup\varnothing=\varnothing,
$$
but we also have that
$$
\bigcup_{t\in T}A_t\cap \bigcup_{t\in T}B_t=(A_\alpha\cup A_\beta)\cap(B_\alpha\cup B_\beta)=\{\alpha,\beta\}\cap\{\beta,\alpha\}=\{\alpha,\beta\}\neq\varnothing,
$$
and this serves as a counterexample.
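The counterexample is small enough to verify mechanically with Python sets, using the strings `'a'` and `'b'` for $\alpha,\beta$:

```python
alpha, beta = "a", "b"
T = [alpha, beta]
A = {alpha: {alpha}, beta: {beta}}   # A_alpha = {alpha}, A_beta = {beta}
B = {alpha: {beta}, beta: {alpha}}   # B_alpha = {beta}, B_beta = {alpha}

lhs = set().union(*(A[t] & B[t] for t in T))                      # union of intersections
rhs = set().union(*(A[t] for t in T)) & set().union(*(B[t] for t in T))
print(lhs, rhs)   # lhs is empty, rhs is {'a', 'b'}
```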
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Help with radical logs. $\log_ 2\sqrt{32}$ I am having trouble understanding logarithms. Specifically I can't understand this equation.
$$\log_2{\sqrt{32}}$$
I know the answer is $\left(\frac{5}{2}\right)$ but I don't know how it is the answer. If somebody could please explain the process step by step for solving this log and perhaps give examples of other radical logs I would really appreciate it.
|
Using the logarithm rules
$$\bullet \log_{b}(m)^n = n\cdot \log_{b}(m)\;,$$ where $m>0$, $b>0$, $b\neq 1$, and $$ \bullet \log_{m}(m) =1\;,$$ where $m>0$ and $m\neq 1$
So $$\displaystyle \log_{2}\sqrt{32} = \log_{2}(32)^{\frac{1}{2}} =\log_{2}\left(2^5\right)^{\frac{1}{2}}= \log_{2}(2)^{\frac{5}{2}} = \frac{5}{2}\log_{2}(2) = \frac{5}{2}$$
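The same computation in one line of Python (floating point, so the result is $2.5$ up to rounding):

```python
import math

value = math.log2(math.sqrt(32))   # log base 2 of sqrt(32) = 2^(5/2)
print(value)                       # approximately 2.5
```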
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Isomorphism between subspaces Let $G,H$ be subspaces of a vector space $I$ such that $G \cap H = \{0\}$. Prove that $G + H \cong G\times H$.
I've defined $\phi: G + H \to G\times H$ as $\phi(g + h) = (g,h)$. Showing bijectivity as:
*
*$\phi(g + h) = \phi(j + k) \implies (g,h) = (j,k) \implies g = j, h = k$, so $g + h = j + k$.
*For all $g,h \in G + H$, we have $\phi(g + h) = (g,h)$ so $\phi$ is onto.
I'm stuck on showing $\phi(hg) = \phi(h)\phi(g)$.
Is my $\phi$ bad?
|
You have a short exact sequence:
\begin{alignat*}4
0\longrightarrow G\cap H && \longrightarrow G\oplus H &\longrightarrow G+H \longrightarrow 0 \\
x&&\longmapsto (x,x)\\
&&(x,y)&\longmapsto x-y
\end{alignat*}
As $G\cap H=\{0\}$ by hypothesis, the map $\; G\oplus H \longrightarrow G+H $ is an isomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving $[(p\leftrightarrow q)\land(q\leftrightarrow r)]\to(p\leftrightarrow r)$ is a tautology without a truth table I came across the following problem in a book:
Show that if $p, q$, and $r$ are compound propositions such that $p$ and $q$ are logically equivalent and $q$ and $r$ are logically equivalent, then $p$ and $r$ are logically equivalent.
The book's solution certainly makes sense:
To say that $p$ and $q$ are logically equivalent is to say that the truth tables for $p$ and $q$ are identical; similarly, to say that $q$ and $r$ are logically equivalent is to say that the truth tables for $q$ and $r$ are identical. Clearly if the truth tables for $p$ and $q$ are identical, and the truth tables for $q$ and $r$ are identical, then the truth tables for $p$ and $r$ are identical. Therefore $p$ and $r$ are logically equivalent.
I decided to "symbolically translate" the problem in the book:
Show that $[(p\leftrightarrow q)\land(q\leftrightarrow r)]\to(p\leftrightarrow r)$ is a tautology.
I wrote out a truth table and everything checks out, as expected (and as mentioned in the book's solution). My question is whether or not there is a more "algebraic" solution using equivalences (not resorting to CNF or DNF).
Any ideas?
|
Here is a proof which is slightly more algebraic than the "just look at the truth tables" one while not ballooning out into something lengthy and potentially unreadable:
\begin{array}l
(p \leftrightarrow q) \land (q \leftrightarrow r) \\
= \{\mbox{definition of $\leftrightarrow$}\} \\
(p \rightarrow q) \land (q \rightarrow p) \land (q \rightarrow r) \land (r \rightarrow q) \\
= \{\mbox{commutativity of $\land$}\} \\
(p \rightarrow q) \land (q \rightarrow r) \land (r \rightarrow q) \land (q \rightarrow p) \\
\implies \{\mbox{transitivity of $\rightarrow$}\} \\
(p \rightarrow r) \land (r \rightarrow p) \\
= \{\mbox{definition of $\leftrightarrow$}\} \\
p \leftrightarrow r
\end{array}
The only step in this chain that you may consider ill-founded is the "transitivity of $\rightarrow$" step; but we can pretty easily prove that separately if that's wanted:
\begin{array}l
(p \rightarrow q) \land (q \rightarrow r) \\
= \{\mbox{definition of $\rightarrow$}\} \\
(\lnot p \lor q) \land (\lnot q \lor r) \\
= \{\mbox{distributivity}\} \\
(\lnot p \land \lnot q) \lor (\lnot p \land r) \lor (q \land \lnot q) \lor (q \land r) \\
\implies \{\mbox{weakening}\} \\
\lnot p \lor \lnot p \lor (q \land \lnot q) \lor r \\
= \{\mbox{simplification}\} \\
\lnot p \lor r \\
= \{\mbox{definition of $\rightarrow$}\} \\
p \rightarrow r
\end{array}
The line marked "simplification" actually uses a number of algebraic facts -- but I think they are clear enough to the human reader.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Why Dominated Convergence Theorem is not applicable in this case? Suppose $\omega$ is distributed uniformly over $(0,1]$. Define random variables $$X_n:=n\mathbf{1}_{(0,1/n]}.$$
Obviously, $X_n\rightarrow X=\mathbf{0}$ and $\lim_{n}E[X_n]$ is not equal to $E[X]$. In this case the Dominated Convergence Theorem (DCT) is not applicable.
I was wondering how to show that we could not find an integrable random variable $Y$ (i.e. $E[|Y|]<\infty$) such that $|X_n| < |Y| \;a.e.$
At first I thought that if $Y$ is integrable, then it should be essentially bounded, but afterwards I found that this is not true. Then I got stuck...
|
Consider $f=\max\limits_{n\in\mathbb{N}}n\mathbf{1}_{(0,1/n]}$, which is given by
$$
f(x)=n\qquad\text{if }x\in\left(\frac1{n+1},\frac1n\right]
$$
This would be the smallest candidate for a dominating function.
However, notice that
$$
\int_{\frac1{n+1}}^{\frac1n}f(x)\,\mathrm{d}x=\frac1{n+1}
$$
Thus,
$$
\int_0^1f(x)\,\mathrm{d}x
$$
diverges since the Harmonic Series diverges.
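Two quick numerical illustrations of what goes wrong (a sketch, not a proof): $E[X_n]=n\cdot\tfrac1n=1$ for every $n$ even though $X_n\to 0$ pointwise, and the candidate dominating function integrates to a tail of the harmonic series, whose partial sums grow roughly like $\ln N$:

```python
# E[X_n] = n * P((0, 1/n]) = 1 for every n, yet X_n(w) -> 0 for each fixed w in (0, 1].
expectations = [n * (1 / n) for n in range(1, 10)]

# Partial sums of sum 1/(n+1): this is what integrating f over (0, 1] produces.
def partial(N):
    return sum(1 / (n + 1) for n in range(1, N))

sums = [partial(N) for N in (10, 1000, 10 ** 6)]
print(expectations, sums)   # sums keep growing, roughly like ln(N)
```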
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1442989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Why is $f$ constant in this case? Reference: Conway - Functions of one complex variable p.55 Exercise 17.
Let $\epsilon>0$.
Let $f:B(0,\epsilon)\rightarrow \mathbb{C}$ be an analytic function such that $|f(z)|=|f(w)|$ for each $z,w\in B(0,\epsilon)$.
In this case, how do I prove that $f$ is constant?
I'm completely lost where to start..
|
Suppose that $f$ is not constant. Then $f'$ is not identically zero. Suppose, for instance (and without loss of generality), that $f'(0)\neq 0$ and $f(0)=1$. Now use the definition of the derivative to estimate $f(z)$ when $z$ is very close to $0$. Can you show there must be some $z$ such that $|f(z)|\neq 1$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Given a Fourier series $f(x)$: What's the difference between the value the expansion takes for given $x$ and the value it converges to for given $x$? The question given to me was: Find the Fourier series for $f(x) = e^x$ over the range $-1\lt x\lt 1$ and find what value the expansion will have when $x = 2$?
The Fourier series for $f(x)=e^x$ is $$f(x)=e^x=\sinh1\left(1+2\sum_{n=1}^{\infty}\frac{(-1)^n}{(n\pi)^2+1}\left(\cos(n\pi x)-n\pi\sin(n\pi x)\right)\right)$$
My attempt to compute these values:
$$f(2)=\sinh1\left(1+2\sum_{n=1}^{\infty}\frac{(-1)^{n}}{(n\pi)^2+1}\right)$$
$$f(0)=e^0=\sinh1\left(1+2\sum_{n=1}^{\infty}\frac{(-1)^{n}}{(n\pi)^2+1}\right)$$
The answer given was: The series will converge to the same value as it does at $x = 0$, i.e. $f(0) = 1$.
Could someone please explain to me the difference between:
What value will the expansion take at a given $x$ and the value the series converges to for a given $x$?
Many Thanks
|
That answer is wrong.
Define the function $f$ to be $\exp(x)$ on $[-1, 1)$ and continue it periodically. Then by the Dirichlet condition the Fourier series of $f$ converges for all $x$ to $\frac{1}{2}(f(x+) + f(x-))$. Since the function has a jump-type discontinuity at $x = 1$, the Fourier series at this point will converge to $\frac{1}{2}(e + e^{-1})$.
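As a numeric sanity check of the claimed jump value (my own sketch, truncating the series at a large $N$): at $x=1$ the sine terms vanish and $\cos(n\pi)=(-1)^n$, so the partial sums should settle near $\tfrac12(e+e^{-1})=\cosh 1$.

```python
import math

def partial_sum(x, N):
    """Partial sum of the Fourier series of e^x on (-1, 1) from the question."""
    s = sum((-1)**n / ((n * math.pi)**2 + 1)
            * (math.cos(n * math.pi * x) - n * math.pi * math.sin(n * math.pi * x))
            for n in range(1, N + 1))
    return math.sinh(1) * (1 + 2 * s)

# At the jump x = 1 the series settles at (e + 1/e)/2 = cosh(1).
assert abs(partial_sum(1.0, 20000) - math.cosh(1)) < 1e-3
```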
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Proving that if $\lim_{n \rightarrow \infty} a_n = a$ then $\lim_{n \rightarrow \infty} a_n^2 = a^2$ If we have a real sequence $(a_n)$ such that $\lim_{n \rightarrow \infty} a_n = a$, how do we prove (by an $\epsilon - N$ argument) that $\lim_{n \rightarrow \infty} a_{n}^{2} = a^2$?
I know you can use algebra to do to the following:
$$\left|a_n^2 - a^2\right| =\left|(a_n - a)(a_n + a)\right|$$
Where I feel like you can use the implication that $\lim_{n \rightarrow \infty} a_n = a$ to show that $(a_n-a) < a$ or something.
What's the proper way to go about this?
|
If for every $\varepsilon > 0$ there is some $N \geq 1$ such that $|a_{n}-a| < \varepsilon$ for all $n \geq N$, then, given any $\varepsilon > 0$ and such an $N$, the triangle inequality yields $|a_{n}+a| \leq |a_{n}-a| + 2|a| < \varepsilon + 2|a|$, and hence
$$
|a_{n}^{2}-a^{2}| = |a_{n}-a||a_{n}+a| < 2|a|\varepsilon+\varepsilon^{2}
$$
for all $n \geq N$.
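To see the bound $2|a|\varepsilon+\varepsilon^{2}$ in action, here is a small numeric sketch (my own; the sequence $a_n = a + 1/n$ is just an illustrative choice):

```python
a = 3.0

def a_n(n):
    """A sample sequence with a_n -> a and |a_n - a| = 1/n."""
    return a + 1.0 / n

for n in (10, 100, 1000):
    eps = abs(a_n(n) - a)
    # the bound from the answer: |a_n^2 - a^2| < 2|a| eps + eps^2
    assert abs(a_n(n)**2 - a**2) <= 2 * abs(a) * eps + eps**2 + 1e-12
```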
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How can you algebraically determine if a curve is symmetric about $y=-x$? If I have a curve implicitly defined by say $x^2+xy+y^2=1$, then it is clear that it is symmetric about $y=x$ because if I interchange x's with y's, then I have the exact same equation.
However, how would one adapt a similar kind of mentality to show that a curve is symmetric about $y=-x$?
It seems awfully tempting to say that you replace $x$ with $-y$ and $y$ with $-x$, but I am not sure if this is valid. It's worked with the few examples I've thought of so far like $xy=1$ and $x^2+xy+y^2=1$.
|
Symmetry in line $y=-x$ maps point $(x,y)$ to $(-y,-x)$ (and vice versa), so you indeed can simply replace $x$ by $-y$, $y$ by $-x$ and check whether you get the same equality.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Playing with closure and interior to get the maximum number of sets Can you find $A \subset \mathbb R^2$ such that $A, \overline{A}, \overset{\circ}{A}, \overset{\circ}{\overline{A}}, \overline{\overset{\circ}{A}}$ are all different?
Can we get even more sets be alternating again closure and interior?
|
According to Kuratowski's closure-complement problem, the monoid generated by the complement operator $a$ and the closure operator $b$ has $14$ elements and is presented by the relations $a^2 = 1$, $b^2 = b$ and $(ba)^3b = bab$. Now you are interested by the submonoid generated by the closure operator $b$ and by the interior operator $i = aba$. This submonoid has only $7$ elements:
$1$, $b$, $i$, $bi$, $ib$, $bib$ and $ibi$. You can use Kuratowski's example
$$K = {]0,1[} \cup {]1,2[} \cup \{3\} \cup ([4,5] \cap \mathbb{Q})$$
to generate the $14$ sets and hence the $7$ sets you are interested in. This is an example in $\mathbb{R}$, but $K \times \mathbb{R}$ should work for $\mathbb{R}^2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
How do you explain that in $y=x^2$, y is proportional to the square of x? My understanding is that all proportional relationships are linear relationships. If this is indeed the case, how is it that we can also say that in a non linear equation like $y = x^2$, y is proportional to $x^2$?
|
The definition of proportionality is that $A \propto B \iff A = kB$ for some constant $k$. So $A$ is linearly related to $B$, and (when $k \neq 0$) $A=0$ iff $B=0$.
In your case, $y = (1)\,x^2$, so $y$ is indeed proportional to $x^2$.
I think you are confused by "y is linear in $x$ squared". "Linear" means that the power is $1$. But when I say that $y$ is linear in $x^2$, I mean that if you treat $x^2$ as a single variable, say $z$, then $y = z$, which is linear.
Linear also means that the graph is a straight line. If you draw a graph of $y$ against $x$, it is not a straight line, so $y$ is not linearly related to $x$. But if you relabel the horizontal axis as $x^2$, the graph becomes a straight line, and then we can say $y$ is linearly related to $x^2$.
Clarification for the graph:
Suppose we graph $y = x^2$ against $x$. The result is a parabola. But if you plot $y$ on one axis and $x^2$ on the other, that is, treat $x^2$ as a single variable and plot it along an axis, you get a straight line.
For example, let $x$ run from $1$ to $5$. On the horizontal axis plot the numbers $x^2$, i.e. $(1,4,9,16,25)$, and on the vertical axis plot $y=x^2$, i.e. $(1,4,9,16,25)$: the points lie on a straight line.
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Can't calculate an eigenvector because the system has no solutions. Find all eigenvalues of the matrix $A=\begin{pmatrix}
2&2&-2\\
2&5&-4\\
-2&-4&5\\
\end{pmatrix}$
and find matrix $U$ such that $U^TU=I$ and $U^TAU$ is diagonal.
I calculated and checked that $\lambda_1=10,\lambda_{2,3}=1$. The eigenvector for $\lambda_1=10$ is $(1,2,-2)^T$; for $\lambda_{2}=1$ an eigenvector is $(0,1,1)^T$. How do I find the third eigenvector? (If I try to calculate it by solving $(A-I)x=(0,1,1)^T$, the system has no solutions.)
|
What you need is a second (linearly independent) eigenvector associated with $\lambda = 1$. That is, you need to find another solution $x$ to
$$
(A - I)x = 0
$$
What you were looking for was a solution to
$$
(A - I)x = v_1
$$
which would be a generalized eigenvector. Generalized eigenvectors are only useful when the matrix in question fails to be diagonalizable, which is not the case here (since we are meant to diagonalize it).
In fact, in order to have $U^TU = I$, you must select the eigenvectors for $\lambda = 1$ to be both length $1$ and mutually orthogonal.
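As a numeric cross-check (my own sketch, using NumPy; `eigh` returns an orthonormal set of eigenvectors for a symmetric matrix, which is exactly the $U$ asked for):

```python
import numpy as np

A = np.array([[ 2.,  2., -2.],
              [ 2.,  5., -4.],
              [-2., -4.,  5.]])

# A second eigenvector for lambda = 1, independent of (0, 1, 1)^T:
v = np.array([2., -1., 0.])
assert np.allclose((A - np.eye(3)) @ v, 0)

w, U = np.linalg.eigh(A)        # orthonormal eigenvectors of a symmetric matrix
assert np.allclose(sorted(w), [1., 1., 10.])
assert np.allclose(U.T @ U, np.eye(3))          # U^T U = I
assert np.allclose(U.T @ A @ U, np.diag(w))     # U^T A U is diagonal
```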
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Boundedness of a set Let $$S = \{(x, y) \in \mathbb{R}^2: x^2 + 2hxy + y^2 =1\}$$
For what values of $h$ is the set $S$ nonempty and bounded?
For $h = 0,$ it is surely bounded, the curve being the unit circle. What for other $h$?
Please help someone.
|
Hint:
$x^2+2hxy+y^2-1=0$ is a conic, and it is an ellipse if its discriminant $B^2-4AC=4(h^2-1)$ is negative.
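Equivalently, $x^2+2hxy+y^2$ is the quadratic form of the symmetric matrix $\begin{pmatrix}1&h\\h&1\end{pmatrix}$, with eigenvalues $1\pm h$, so $S$ is bounded exactly when $|h|<1$ (and $S$ is always nonempty, since $(1,0)\in S$). A quick numeric sketch of that eigenvalue criterion (my own, not part of the hint):

```python
import numpy as np

def is_bounded(h):
    """The level set x^2 + 2hxy + y^2 = 1 is an ellipse (hence bounded)
    iff the quadratic form is positive definite."""
    eigenvalues = np.linalg.eigvalsh(np.array([[1.0, h], [h, 1.0]]))
    # eigenvalues are 1 - h and 1 + h; small tolerance for the degenerate case
    return bool(np.all(eigenvalues > 1e-9))

assert is_bounded(0.0) and is_bounded(0.9) and is_bounded(-0.9)
assert not is_bounded(1.0) and not is_bounded(-2.0)
```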
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Looking for Taylor series expansion of $\ln(x)$ We know that the expansion of $\sin(x)$ is $$x/1!-x^3/3!+\cdots$$
Without using Wolfram alpha, please help me find the expansion of $\ln(x)$.
I have my way of doing it, but am checking myself with this program because I am unsure of my method.
|
$f(x)=\ln(x)$.
The Taylor expansion around $a$ is
$f(x)=\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n.$
Take $a = 1$. The derivatives are
$f'(x) = \frac{1}{x}$, $f''(x) = \frac{-1}{x^2}$, $f^{(3)}(x) = \frac{2}{x^3}$, and in general
$f^{(n)}(x) = \frac{(n-1)!\,(-1)^{n-1}}{x^n}$ for $n \geq 1$.
Since $f(1)=0$,
$f(x)=\sum_{n=0}^{\infty} \frac{f^{(n)}(1)}{n!}(x-1)^n=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}(x-1)^n,$
so
$\ln(x)=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}(x-1)^n.$
The more familiar form is obtained by substituting $1+x$ for $x$:
$\ln(1+x)=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}x^n=\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}x^n,$
valid for $|x|<1$.
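A quick numeric check of the series (my own sketch): partial sums should match $\ln$.

```python
import math

def ln_series(x, N=200):
    """Partial sum of sum_{n>=1} (-1)^(n-1) (x-1)^n / n, the series derived above."""
    return sum((-1)**(n - 1) * (x - 1)**n / n for n in range(1, N + 1))

assert abs(ln_series(1.5) - math.log(1.5)) < 1e-12
assert abs(ln_series(0.5) - math.log(0.5)) < 1e-12
```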
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Behaviour of $f(x)=e-\left(1+\frac{1}{x} \right)^{x}$ when $x\to+\infty$ This is from an MCQ contest.
For all $x\geq 1$ let $f(x)=e-\left(1+\dfrac{1}{x} \right)^{x}$ then we have :
*
*$f(x)\mathrel{\underset{_+\infty}{\sim}}\dfrac{e}{x}$ and $f$ is integrable on $[1,+\infty[$
*$f(x)\mathrel{\underset{_+\infty}{\sim}}\dfrac{e}{2\sqrt{x}}$ and $f$ is not integrable on $[1,+\infty[$
*$f(x)\mathrel{\underset{_+\infty}{\sim}}\dfrac{e}{\sqrt{2}x}$ and $f$ is integrable on $[1,+\infty[$
*$f(x)\mathrel{\underset{_+\infty}{\sim}}\dfrac{e}{2x}$ and $f$ is not integrable on $[1,+\infty[$
My thoughts:
I don't know how to prove this. For example, to show that 1. is true, should we proceed like this:
$$\lim_{x\to +\infty} \dfrac{e-e^{ x\ln(1+\frac{1}{x}) } }{ \dfrac{e}{x}}=\lim_{x\to +\infty}x\left( 1-e^{x\ln(1+\frac{1}{x})-1}\right)?$$
|
Answers 1. and 3. can be excluded a priori, because if $f(x)\sim k/x$ then $f$ cannot be integrable on $[1,+\infty[$.
Then it is quite obvious that we can make a Taylor expansion in $1/x$, which will never yield a $\sqrt{x}$ term: if so 2. can be excluded and the correct answer should be 4.
In fact, using $\ln(1+t)=t-t^2/2+O(t^3)$ one gets
$$
e-e^{x\ln(1+1/x)}=e - e^{1-1/(2x)+O(1/x^2)}=
e\left(1-e^{-1/(2x)+O(1/x^2)}\right)=e\left({1\over2x}+O\left({1\over x^2}\right)\right).
$$
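A numeric check of answer 4. (my own sketch): the ratio $x\,f(x)/e$ should approach $\tfrac12$.

```python
import math

def f(x):
    """f(x) = e - (1 + 1/x)^x, the function from the question."""
    return math.e - (1 + 1 / x)**x

for x in (1e2, 1e3, 1e4):
    ratio = x * f(x) / math.e          # should tend to 1/2
    assert abs(ratio - 0.5) < 5 / x    # the next-order term is O(1/x)

print(x * f(x) / math.e)               # ~ 0.49995 for x = 1e4
```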
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that as a $\Bbb Q$ vector space $\Bbb R^n$ is isomorphic to $\Bbb R$
Prove that as a $\mathbb{Q}$ vector space, $\mathbb{R}^n$ is isomorphic to $\mathbb{R}.$
I came across this statement somewhere and I have been trying to prove it ever since, but I just don't know how to start or what to define as the map.
|
You can use the facts that both $\Bbb R$ and $\Bbb R^n$ are infinite (indeed uncountable) dimensional as vector spaces over $\Bbb Q$, and that two vector spaces over the same field with the same dimension are isomorphic. For an infinite-dimensional vector space $V$ over a field $F$ one has $|V|=\max(\dim V,|F|)$; since $|\Bbb Q| < |\Bbb R|$, this gives $\dim(\Bbb R)= |\Bbb R|= |\Bbb R^n|=\dim(\Bbb R^n)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Is there a name for the rule $a \div (b \times c) = a \div b \div c$? Edit, because I should have looked it up before I posted the question:
Is there a name for the rule $a \div (b \div c) = a \div b \times c$ ? I ran across this in Liping Ma's book, Knowing and Teaching Mathematics, and I have searched the internet for a name for this rule to no avail. It is not the distributive law, but it is rather similar. Thank you!
From Ma's book, p. 59 discussing "dividing by a number is equivalent to multiplying by its reciprocal":
"We can use the knowledge that students have learned to prove the rule that dividing by a fraction is equivalent to multiplying by its reciprocal. They have learned the commutative law. They have learned how to take off and add parentheses. They have also learned that a fraction is equivalent to to the result of a division, for example, $ \frac{1}{2} = 1 \div 2 $ . Now, using these, we can rewrite the equation this way:
$ 1\frac{3}{4} \div \frac{1}{2} \to $
$1\frac{3}{4} \div (1 \div 2) \to $
$1\frac{3}{4} \div 1 \times 2 \to $ (This is the step my question is about.)
$1\frac{3}{4} \times 2 \div 1 \to $ (and I'd like an explicit explanation of this step, too.)
$1\frac{3}{4} \times 2\to$
$1\frac{3}{4} \times (2 \div 1) \to $
|
I suppose that $a,b,c$ are rational (or real) numbers. In this case your starting expression is equivalent to:
$$
\dfrac{a}{b\times c}=a\times \dfrac{1}{b}\times \dfrac{1}{c}=\left(a \times \dfrac{1}{b} \right)\times \dfrac{1}{c}=\dfrac{a \times \dfrac{1}{b}}{c}=\dfrac{ \dfrac{a}{b}}{c}
$$
so you can see that this property does not need a special name since it is simply the application of the definition of inverse and of associativity for the product.
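Both the title's rule and the step questioned above can be checked mechanically with exact rational arithmetic; a small sketch (my own, using Python's `fractions`):

```python
from fractions import Fraction

a, b, c = Fraction(7, 4), Fraction(3, 2), Fraction(5, 3)

assert a / (b * c) == a / b / c       # the rule in the title
assert a / (b / c) == a / b * c       # the rule used in the quoted steps

# Ma's worked example: 1 3/4 divided by 1/2 equals 1 3/4 times 2.
assert Fraction(7, 4) / Fraction(1, 2) == Fraction(7, 4) * 2 == Fraction(7, 2)
```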
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1443978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 0
}
|
Prove that $|x+y+z| \le |x|+|y| +| z|$ for all $x, y, z$
Prove that $$\lvert x+y+z \rvert \le \lvert x \rvert + \lvert y \rvert + \lvert z\rvert $$ for all $x, y, z$.
|
You can prove it by using the triangle inequality for two numbers. Consider $(y+z)$ as a single entity: $|x+(y+z)|\le |x|+|y+z|$. Now use the triangle inequality on $|y+z|$, and you will get your answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1444046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Writing $\sqrt[\large3]2+\sqrt[\large3]4$ with nested roots Let $C\subset\Bbb R$ be the smallest set containing $0$ and closed under whole number addition/subtraction, whole number exponents, and whole number roots. That is, for all $c\in C$ and $n\in\Bbb N$, we have $c\pm n\in C$, $c^n\in C$, and $c^{1/n}\in C$.
We know that $\sqrt2+\sqrt3\in C$, since it's equal to $\sqrt{\sqrt{24}+5}$. Similarly, Ramanujan proved that:
$$\sqrt[\large3]{\sqrt[\large3]{1458}-9}-1=\sqrt[\large3]4-\sqrt[\large3]2$$
so $\sqrt[\large3]4-\sqrt[\large3]2\in C$. (Well, Ramanujan proved something equivalent.)
I found that $\left(\sqrt[\large3]2-1\right)^{-1}-1=\sqrt[\large3]2+\sqrt[\large3]4$, but I'm not allowed to use negative exponents/roots, so this doesn't prove that $\sqrt[\large3]2+\sqrt[\large3]4\in C$. So:
Is it true that $\sqrt[\large3]2+\sqrt[\large3]4\in C$? If not, why not?
|
Note
$$(\sqrt[3]2+\sqrt[3]4 -1)^2= 5- \sqrt[3]{4}
$$
which yields
$$\sqrt[3]2+\sqrt[3]4 =\sqrt{5-\sqrt[3]{4}}+1$$
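A quick floating-point check of this identity (my own sketch, not a proof):

```python
lhs = 2**(1/3) + 4**(1/3)             # cbrt(2) + cbrt(4)
rhs = (5 - 4**(1/3))**0.5 + 1         # sqrt(5 - cbrt(4)) + 1
assert abs(lhs - rhs) < 1e-12         # both are approximately 2.84732

# The squared form used above: (cbrt(2) + cbrt(4) - 1)^2 = 5 - cbrt(4)
assert abs((lhs - 1)**2 - (5 - 4**(1/3))) < 1e-12
```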
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1444163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
How to prove $f(n)=\lceil\frac{n}{2}\rceil$ is one-to-one and onto? Edit: Both domain and codomain are the set of all integers ($\mathbb{Z}$)
I proved $f(n) = \lceil\frac{n}{2}\rceil$ was one-to-one this way:
$\forall{x_1}\forall{x_2}[(f(x_1) = f(x_2)) \to(x_1 = x_2)]$
Let $f(n) = f(x) = y$
Let $x_1 = 2$ and Let $x_2 = 1$
Then $f(x_1) = \lceil\frac{x_1}{2}\rceil$ and $f(x_2) = \lceil\frac{x_2}{2}\rceil$
$f(x_1) = f(x_2)$ becomes $\lceil\frac{2}{2}\rceil = \lceil\frac{1}{2}\rceil$, which in turn becomes $1 = 1$, but $x_1\neq x_2$.
$\therefore$ this function is not one-to-one.
1 -- Is this proof coherent and correct?
2 -- Next, I'm trying to prove onto by letting $f(x) = y = f(n)$ and $x=n$, then solving for $x$ in $y=\lceil\frac{x}{2}\rceil$. But I don't know how to do this, because I'm not sure if the ceiling function allows associativity, nor how that would work if it does.
I've been trying to prove this based on the (book's) logical definition of onto as $\forall{y}\in{Y}\exists{x}\in{X}|[f(x)=y]$ but have been coming up short.
Any help would be greatly appreciated.
|
In general when you prove functions are 1-1 and onto, you need to start by specifying the domain and codomain for your function. In this case both your domain and codomain are $\mathbb{Z}$, so you can write your ceiling function as $f:\mathbb{Z} \to \mathbb{Z}$.
To prove $f$ is onto, you want to show for any number $y$ in the codomain, there exists a preimage $x$ in the domain mapping to it. To show this, start by taking an arbitrary $y \in \mathbb{Z}$. Does $y$ have something in the domain mapping to it? Sure, take $x=2y$ (which we know is in the domain, because if $y$ is an integer, then so is $2y$), and observe that
\begin{equation*}f(x)=\lceil \frac{x}{2} \rceil = \lceil \frac{(2y)}{2} \rceil = \lceil y \rceil =y
\end{equation*}
as desired.
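Both facts, that $f$ is onto via the preimage $x=2y$ and that it is not one-to-one (e.g. $f(1)=f(2)$), can be spot-checked numerically (my own sketch):

```python
import math

def f(n):
    """f(n) = ceil(n/2); float division is exact for the small integers used here."""
    return math.ceil(n / 2)

# Onto: every integer y in this range has the preimage x = 2y.
assert all(f(2 * y) == y for y in range(-100, 101))

# Not one-to-one: x_1 = 2 and x_2 = 1 map to the same value.
assert f(1) == f(2) == 1
```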
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1444281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|