| Q | A | meta |
|---|---|---|
Proof that every number ≥ $8$ can be represented by a sum of fives and threes. Can you check if my proof is right?
Theorem. $\forall x\geq8, x$ can be represented by $5a + 3b$ where $a,b \in \mathbb{N}$.
Base case(s): $x=8 = 3\cdot1 + 5\cdot1 \quad \checkmark\\
x=9 = 3\cdot3 + 5\cdot0 \quad \checkmark\\
x=10 = 3\cdot0 + 5\cdot2 \quad \checkmark$
Inductive step:
$n \in \mathbb{N}\\a_1 = 8, a_n = a_1 + (x-1)\cdot3\\
b_1 = 9, b_n = b_1 + (x-1)\cdot3 = a_1 +1 + (x-1) \cdot 3\\
c_1 = 10, c_n = c_1 + (x-1)\cdot3 = b_1 + 1 + (x-1) \cdot 3 = a_1 + 2 + (x-1) \cdot 3\\
\\
S = \{x\in\mathbb{N}: x \in a_{x} \lor x \in b_{x} \lor x \in c_{x}\}$
Basis stays true, because $8,9,10 \in S$
Let's assume that $x \in S$. That means $x \in a_{n} \lor x \in b_{n} \lor x \in c_{n}$.
If $x \in a_n$ then $x+1 \in b_x$,
If $x \in b_x$ then $x+1 \in c_x$,
If $x \in c_x$ then $x+1 \in a_x$.
I can't prove that but it's obvious. What do you think about this?
|
Any number is of the form $5k+1$, $5k+2$, $5k+3$, $5k+4$ and $5k+5$.
If $n=5k+3$ then nothing to prove.
$n=5k+1 \implies n=5(k-1)+2\cdot3$
$n=5k+2 \implies n=5(k-2)+4\cdot3$
$n=5k+4\implies n=5(k-4)+8\cdot3$
$n=5k+5\implies n=5(k-2)+5\cdot 3$
Hence for all $k\ge5$ the result holds.
Hence for all $n\ge 25$ the result holds.
What is left is to check the remaining cases and that is also true.
Hence the proposition is true.
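For what it's worth, a quick brute-force check of the statement itself (a sketch, not part of either proof above): every $n$ from $8$ up to a large bound is of the form $5a+3b$ with $a,b$ nonnegative integers.

```python
# Brute-force check (a sketch): every n >= 8 up to the bound is 5*a + 3*b with a, b >= 0.
def representable(n):
    return any((n - 5*a) % 3 == 0 for a in range(n//5 + 1))

print(all(representable(n) for n in range(8, 10_000)))  # True
```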
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54",
"answer_count": 13,
"answer_id": 8
}
|
Prove $7$ divides $13^n- 6^n$ for any positive integer I need to prove $7|13^n-6^n$ for $n$ being any positive integer.
Using induction I have the following:
Base case:
$n=0$: $13^0-6^0 = 1-1 = 0, 7|0$
so, generally you could say:
$7|13^k-6^k , n = k \ge 1$
so, prove the $(k+1)$ situation:
$13^{(k+1)}-6^{(k+1)}$
$13 \cdot 13^k-6 \cdot 6^k$
And then I'm stuck....where do I go from here?
|
Write $13=6+7$ to expand $13*13^k-6*6^k$.
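For reference, one way the hint plays out (a sketch of a possible completion, not necessarily the intended wording):
$$13\cdot 13^k-6\cdot 6^k=(6+7)\cdot 13^k-6\cdot 6^k=6\,(13^k-6^k)+7\cdot 13^k,$$
and both terms on the right are divisible by $7$: the first by the induction hypothesis, the second visibly.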
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 3
}
|
Intersection between two planes and a line? What are the coordinates of the point where the planes $3x-2y+z-6=0$ and $x+y-2z-8=0$ and the line $(x, y, z) = (1, 1, -1) + t(5, 1, -1)$ intersect with each other?
I've tried letting the line where the two planes intersect each other be equal to the given line; this results in no solutions.
I have tried inserting the line's $x$, $y$ and $z$ values into the planes' equations; this, too, results in no solutions.
According to the answer sheet the correct solution is: $\frac{1}{2}(7,3,-3)$
|
From the line equation you know that $x$ (as well as $y$) is a function of $z$:
when $z = -1-t$, then $x = 1+5t$ and hence $x = -4-5z$.
This gives you the third equation you need:
\begin{align}
3x-2y+\phantom{1}z-6&=0\\
\phantom{1}x-\phantom{1}y-2z-8&=0\\
\phantom{1}x+0y+5z+4&=0.
\end{align}
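For a quick numerical confirmation of the answer-sheet value (a sketch, not part of the original answer), one can solve this $3\times3$ system directly:

```python
# Solve the three linear equations above; the result should be (7, 3, -3)/2.
import numpy as np

A = np.array([[3.0, -2.0,  1.0],
              [1.0,  1.0, -2.0],
              [1.0,  0.0,  5.0]])
b = np.array([6.0, 8.0, -4.0])
print(np.linalg.solve(A, b))  # [ 3.5  1.5 -1.5]
```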
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Stuck when tackling the computation of $\Phi_n(\zeta_8)$ My current way of calculating $\Phi_n(\zeta_8)$, where $\Phi_n(x)$ is the $n$-th cyclotomic polynomial and $\zeta_8=\cos(\frac{2\pi}{8})+i\sin(\frac{2\pi}{8})$, leaves me stuck at the problem of calculating $$\prod_{d|n}\sin(\frac{d2\pi}{16})^{\mu(\frac{n}{d})}$$ the product running over all integer divisors of $n$. $\mu(n)$ is the Möbius function.
My tests with WolframAlpha Online for e.g. prime $n$ and squarefree $n$ with at most $2$ prime divisors gave the (to me astonishing) results $\pm 1$, $\pm \cot(\frac{\pi}{8})$, $\pm \cot^2(\frac{\pi}{8})$, $\pm \tan(\frac{\pi}{8})$, $\pm \tan^2(\frac{\pi}{8})$ so far.
I see no way to find a closed form for the product, which the CAS calculations lead me to conjecture exists.
|
It's a classic and elementary fact that since $x^n-1=\prod_{d\mid n}\Phi_d(x)$, by applying $\log$s to both sides followed by Mobius inversion and then exponentiating back we must have
$$\Phi_n(x)=\prod_{d\mid n}(x^d-1)^{\mu(n/d)}.$$
This is mentioned in almost any source that covers cyclotomic polynomials. Furthermore,
$$\begin{array}{ll} \displaystyle \prod_{d\mid n}(x^d-1)^{\mu(n/d)} & \displaystyle =\prod_{d\mid n}x^{d\mu(n/d)/2}(x^{d/2}-x^{-d/2})^{\mu(n/d)} \\ & \displaystyle =x^{\large\frac{1}{2}\left[\sum\limits_{d\mid n}d\mu(n/d)\right]}\prod_{d\mid n}(x^{d/2}-x^{-d/2})^{\mu(n/d)}. \end{array}$$
Here is a well-known technique utilized in the number theory of arithmetic functions: since $\sum_{d\mid n}d\mu(n/d)$ is a convolution of multiplicative functions it is itself multiplicative, hence equals
$$\prod_{p^e\|n}\left(\sum_{r=0}^e p^r\mu(p^e/p^r)\right)=\prod_{p^e\|n}(p^e-p^{e-1})= \varphi(n).$$
The notation $p^e\|n$ when $p$ is a prime means that $p^e\mid n$ but $p^{e+1}\nmid n$, or in other words that $p^e$ is the precise power of $p$ present in $n$'s prime factorization. Thus we have
$$\Phi_n(x)=x^{\varphi(n)/2}\prod_{d\mid n}(x^{d/2}-x^{-d/2})^{\mu(n/d)}.$$
If $x=e^{2\pi i\frac{k}{m}}$ then $x^{d/2}-x^{-d/2}=2i\sin(\pi\frac{kd}{m})$ by Euler's formula, and so by factoring out all of the terms $(2i)^{\mu(n/d)}$ and then using the fact $\sum_{d\mid n}\mu(n/d)=0$ if $n>1$ we get
$$\Phi_n(e^{2\pi i\frac{k}{m}})=e^{\pi i\frac{\varphi(n)k}{m}}\prod_{d\mid n}\sin\left(\pi\frac{kd}{m}\right)^{\mu(n/d)}.$$
Now let's specialize to the case $k/m=1/8$ and $n$ odd. It is geometrically "obvious" that
$$\begin{array}{|c|rrrrrrrr|}\hline d\bmod 16 & 1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 \\ \hline
\sin(\pi\frac{d}{8}) & s & c & c & s & -s & -c & -c & -s \\ \hline \end{array} $$
where $s=\sin(\frac{\pi}{8})~\left(=\frac{1}{2}\sqrt{2-\sqrt{2}}\,\right)$ and $c=\cos(\frac{\pi}{8})~\left(=\frac{1}{2}\sqrt{2+\sqrt{2}}\,\right)$. Therefore
$$\Phi_n(e^{2\pi i/8})=e^{i\pi \varphi(n)/8}(-1)^{v_{9,11,13,15}(n)}\sin(\frac{\pi}{8})^{v_{1,7,9,15}(n)}\cos(\frac{\pi}{8})^{v_{3,5,11,13}(n)} \tag{1}$$
using the ad hoc functions $v_S(n)=\sum_{d\mid n,d\in S\bmod16}\mu(n/d)$. Let's simply abbreviate this
$$\Phi_n(e^{2\pi i/8})=e^{i\pi\varphi(n)/8}(-1)^{\gamma(n)}\sin(\frac{\pi}{8})^{\alpha(n)}\cos(\frac{\pi}{8})^{\beta(n)}.$$
The fact $\sum_{d\mid n}\mu(n/d)=0$ (given $n>1$) tells us $\alpha(n)+\beta(n)=0$. Simplifying we get
$$\Phi_n(e^{2\pi i/8})=e^{i\pi\varphi(n)/8}(-1)^{\gamma(n)}\tan(\frac{\pi}{8})^{\alpha(n)}. $$
Notice something special: the residues $\overline{1},\overline{7},\overline{9},\overline{15}$ form an index two subgroup of $U(16)$, so they are the kernel of some homomorphism $\theta:U(16)\to\{\pm1\}$. Indeed we have
$$\theta(x)=\begin{cases}1 & x^2\equiv1 \mod{16} \\ -1 & x^2\not\equiv 1\mod{16}.\end{cases} \tag{2}$$
Therefore with some trickery we can rewrite $\alpha(n)$ via
$$\alpha(n)=\frac{\alpha(n)-\beta(n)}{2}=\frac{1}{2}\sum_{d\mid n}\mu(n/d)\theta(\overline{d})=\frac{1}{2}\prod_{p^e\|n}\left(\sum_{r=0}^e\mu(p^e/p^r)\theta(\overline{p})^r\right)$$
$$=\frac{1}{2}\prod_{p^e\|n}\left(\theta(\overline{p})^e-\theta(\overline{p})^{e-1}\right)=\frac{\theta(\overline{n})}{2}\prod_{p\mid n}(1-\theta(\overline{p})) $$
$$\alpha(n)=\begin{cases}\theta(\overline{n})2^{\omega(n)-1} & {\rm if}~p^2\not\equiv1\bmod16~{\rm for~each~prime~}p\mid n \\ 0 & {\rm if}~p^2\equiv1\bmod16~{\rm for~any~~prime~}p\mid n. \end{cases} \tag{3}$$
The function $\omega(n)$ counts the number of prime factors of $n$. And so we conclude
Theorem. $\Phi_n(e^{2\pi i/8})=e^{i\pi\varphi(n)/8}(-1)^{\gamma(n)}\tan(\frac{\pi}{8})^{\alpha(n)}$ where $\alpha,\gamma$ are defined in $(1),(2),(3)$, for any odd integer $n>1$.
In particular, the first $n$ for which the product of sines fails to have absolute value $\tan(\frac{\pi}{8})^e$ for an exponent $|e|\le2$ occurs at $n=3\cdot5\cdot11=165$, where $|\Phi_{165}(e^{2\pi i/8})|=\tan(\frac{\pi}{8})^{-4}$.
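A numerical spot check of that last claim (a sketch, not part of the original answer), assuming SymPy's `cyclotomic_poly`:

```python
# Compare |Phi_165(e^{2*pi*i/8})| with tan(pi/8)**(-4); they should agree (about 33.97).
from sympy import symbols, cyclotomic_poly, exp, I, pi, tan, Abs, N

x = symbols('x')
zeta8 = exp(2*pi*I/8)
val = cyclotomic_poly(165, x).subs(x, zeta8)
print(N(Abs(val)), N(tan(pi/8)**(-4)))
```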
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If people who work at 100% of a program, do the full program in 42 weeks, how many weeks does someone who works at 60% capacity need?
If people who work at 100% of a program, do the full program in 42 weeks, how many weeks does someone who works at 60% capacity need?
I said 75.6 weeks and here's how I got it:
*
*people who work at 100% do 42 weeks
*people who work at 50% capacity do 84 weeks (double it)
*halfway between 50 and 100 is 75% and halfway between 42 and 84 is 63, so people who work at 75% capacity do 63 weeks
*halfway between 50% and 75% is 62.5% and halfway between 84 weeks and 63 weeks is 73.5 weeks, so people who work at 62.5% capacity do 73.5 weeks
*Someone who works at 60% capacity does 75.6 weeks (or round up to 76 weeks)
Everyone else I asked is giving an answer of 70 weeks (they are doing 42 divided by 0.6).
I feel like I am right and they are using the wrong equation. Can you either tell me the equation that will give me my answer (and vindicate me) or call me an idiot and I will accept that, please?
|
My friend, you yourself have stated that one working at 50% needs 84 weeks. (You got this by multiplying 100/50 and 42)
So similarly, one working at 60% needs 42*(10/6)=70 weeks.
Also, one working at 75% needs 42*(100/75)= 56 weeks; you make the mistake here itself, hence your calculations for 60% are wrong.
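Put differently (a sketch, not part of the original answer): the time needed is inversely proportional to capacity, $\text{time}=42/\text{capacity}$, and an inverse proportion is not a straight line, which is why interpolating halfway between $50\%$ and $100\%$ goes wrong.

```python
# time = 42 / capacity (a sketch); note the values are not evenly spaced.
for capacity in (1.0, 0.75, 0.6, 0.5):
    print(capacity, 42 / capacity)   # 42.0, 56.0, 70.0, 84.0 weeks
```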
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Getting two consecutive $6$'s A standard six-sided die is rolled until two consecutive 6s appear. Find the expected number of rolls. Please see that this is a language problem. What do they mean by expected number of rolls? Should the answer be 36? Or 42?
|
Let $E$ be the expected number of rolls. This is the mean number of rolls we expect to take to witness the event.
To obtain two consecutive sixes, you must roll until you get a six followed immediately by a six (obviously).
Let the expected number of rolls until you get one six be : $F$. Can you find what $F$ is? (Hint: Geometric Distribution)
Then you will either get a six on the next roll (the seventh), or you don't and have to start all over from there. There is a $5/6$ chance you don't, so we have a recursive definition for the expected number: $$E = F + 1 + \frac 5 6 E$$
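For anyone who wants to check the resulting value $E=42$ empirically, here is a small Monte Carlo sketch (not part of the original answer):

```python
# Estimate the expected number of rolls until two consecutive sixes; should be close to 42.
import random

def rolls_until_two_sixes():
    count, prev_six = 0, False
    while True:
        count += 1
        if random.randint(1, 6) == 6:
            if prev_six:
                return count
            prev_six = True
        else:
            prev_six = False

trials = 200_000
print(sum(rolls_until_two_sixes() for _ in range(trials)) / trials)
```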
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
square root / factor problem $(A/B)^{13} - (B/A)^{13}$ Let
$A=\sqrt{13+\sqrt{1}}+\sqrt{13+\sqrt{2}}+\sqrt{13+\sqrt{3}}+\cdots+\sqrt{13+\sqrt{168}}$ and
$B=\sqrt{13-\sqrt{1}}+\sqrt{13-\sqrt{2}}+\sqrt{13-\sqrt{3}}+\cdots+\sqrt{13-\sqrt{168}}$.
Evaluate $(\frac{A}{B})^{13}-(\frac{B}{A})^{13}$.
By Calculator, I have $\frac{A}{B}=\sqrt{2}+1$ and $\frac{B}{A}=\sqrt{2}-1$.
But, I don't know how. Has someone any idea about this.
|
Let
$$A=\sum_{n=1}^{168}\sqrt{13+\sqrt{n}},B=\sum_{n=1}^{168}\sqrt{13-\sqrt{n}}$$
since
$$\sqrt{2}A=\sum_{n=1}^{168}\sqrt{26+2\sqrt{n}}=\sum_{n=1}^{168}\left(\sqrt{13+\sqrt{169-n}}+\sqrt{13-\sqrt{169-n}}\right)=A+B$$ so we have $x=\dfrac{A}{B}=\dfrac{1}{\sqrt{2}-1}$, then we have
$$x=\sqrt{2}+1,\dfrac{1}{x}=\sqrt{2}-1\Longrightarrow x+\dfrac{1}{x}=2\sqrt{2}$$
let
$$a_{n}=x^n-x^{-n}$$ use this well-known identity
$$a_{n+2}=(x+\dfrac{1}{x})a_{n+1}-a_{n}\Longrightarrow a_{n+2}=2\sqrt{2}a_{n+1}-a_{n}$$
$$a_{1}=2,a_{2}=4\sqrt{2}$$
so
$$a_{3}=2\sqrt{2}a_{2}-a_{1}=16-2=14$$
$$a_{4}=2\sqrt{2}a_{3}-a_{2}=28\sqrt{2}-4\sqrt{2}=24\sqrt{2}$$
$$a_{5}=2\sqrt{2}a_{4}-a_{3}=96-14=82$$
$$\cdots$$
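Continuing the recurrence numerically (a sketch, not part of the original answer) gives the requested value $a_{13}$:

```python
# Iterate a_{n+2} = 2*sqrt(2)*a_{n+1} - a_n from a_1 = 2, a_2 = 4*sqrt(2),
# and compare with x**13 - x**(-13) for x = sqrt(2) + 1.
from math import sqrt, isclose

x = sqrt(2) + 1
a_prev, a_cur = 2.0, 4*sqrt(2)          # a_1, a_2
for n in range(3, 14):
    a_prev, a_cur = a_cur, 2*sqrt(2)*a_cur - a_prev
print(round(a_cur), isclose(a_cur, x**13 - x**(-13)))  # 94642 True
```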
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1181905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 0
}
|
Behaviour of roots of a polynomial with function coefficients Let $(-1+c_4(h))x^4 +c_3(h)x^3+c_2(h)x^2+c_1(h)x+c_0(h)=0$ be an equation with variable coefficients, depending smoothly on $h$. Also let $0\le c_4(h)\le 1-\epsilon$ for some $\epsilon>0$ and $c_0(h)>\epsilon'$ for some $\epsilon'>0$. One can easily see that for any $h$, this equation has two real solutions, one positive and one negative. Let $z(h)$ be the positive root. The question is under what conditions I can say that $z(h)$ is a differentiable function of $h$?
|
In fact, you can have one or two positive roots. Using the IFT,
$$
0\ne\frac{\partial}{\partial x}\left[(-1+c_4(h))x^4+c_3(h)x^3+c_2(h)x^2+c_1(h)x+c_0(h)\right]\Big\vert_{x=z(h)}
$$
$$
=4(-1+c_4(h))x^3+3c_3(h)x^2+2c_2(h)x+c_1(h)\Big\vert_{x=z(h)}
$$
while
$$(-1+c_4(h))x^4 +c_3(h)x^3+c_2(h)x^2+c_1(h)x+c_0(h)\vert_{x=z(h)}=0,$$
i.e., the root(s) $z(h)$ ($z_1(h)$, $z_2(h)$) is (are) simple.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove $3a^4-4a^3b+b^4\ge0$ . $(a,b\in\mathbb{R})$ $$3a^4-4a^3b+b^4\ge0\ \ (a,b\in\mathbb{R})$$
We must factorize $3a^4-4a^3b+b^4\ge0$ and get an expression with an even power like $(x+y)^2$ and say an expression with an even power can not have a negative value in $\mathbb{R}$.
But I don't know how to factorize it since it is not in the shape of any standard formula.
|
If $a=b$, the expression is zero, so take the factor $a-b$:
$$3a^4-4a^3b+b^4=(a-b)(3a^3-a^2b-ab^2-b^3)$$
Again, the cubic is zero if $a=b$:
$$=(a-b)^2(3a^2+2ab+b^2)$$
Now try to show the quadratic is always positive or zero.
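One way to finish (a sketch): $3a^2+2ab+b^2=2a^2+(a+b)^2\ge0$, so the whole product $(a-b)^2\,(3a^2+2ab+b^2)$ is non-negative.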
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Perpendiculars to two vectors So this is more than likely a simple question that has been asked before, but if have 2 lines described by the formula:
$V = (x,y,z) + L (i,j,k)$
i.e. a line described by a position and a length along a unit vector
How would I find the line that is perpendicular to both lines?
|
Hint: Take the cross product. First write each direction in the form $(a,b,c)$ by choosing $t$ equal to $0$ and $1$.
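A small sketch of the recipe with two hypothetical lines (the direction vectors below are chosen for illustration only, they are not from the question):

```python
# The cross product of the two direction vectors is perpendicular to both lines.
import numpy as np

d1 = np.array([1.0, 2.0, 0.0])     # direction of the first line (hypothetical)
d2 = np.array([0.0, 1.0, 3.0])     # direction of the second line (hypothetical)
n = np.cross(d1, d2)
print(n, n @ d1, n @ d2)           # both dot products are 0
```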
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
What is this motivation for changing variables to standardize a pde? In a question I am looking at it says -
Find a change of variables $u \rightarrow v$ of the form $u = ve^{ax + by}$ that would transform the PDE
$$\frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2} + \frac{\partial u}{\partial x} + 2\frac{\partial u}{\partial y} + 3u = 0$$
into
$$\frac{\partial^2v}{\partial x^2} + \frac{\partial^2v}{\partial y^2} + Av = 0$$
where $A$ is a constant.
What is the motivation for the change of variables being of the form $u = ve^{ax + by}$ that lets us get the pde in 'nicer' form?
|
The motivation comes from solving ODE with the method of integration factors. Suppose you're to solve $y'+2y=f(x)$. A natural approach is to let $z=e^{ax}y$ and compute $z'=e^{ax}(y'+ay)$. In order to match the original ODE, we let $a=2$. Then $z'=e^{2x}f(x)$, which we can integrate to solve $z$, and hence $y=e^{-2x}z$.
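For completeness, a sketch of the same idea carried out on the PDE in the question (this is my computation, not part of the original answer): writing $u=ve^{ax+by}$ gives
$$u_{xx}+u_{yy}+u_x+2u_y+3u=e^{ax+by}\Bigl[v_{xx}+v_{yy}+(2a+1)v_x+(2b+2)v_y+\bigl(a^2+b^2+a+2b+3\bigr)v\Bigr],$$
so choosing $a=-\tfrac12$ and $b=-1$ kills the first-order terms and leaves $A=a^2+b^2+a+2b+3=\tfrac74$.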
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If both $x+10$ and $x-79$ are perfect squares, what is $x$?
If both $x+10$ and $x-79$ are perfect squares, what is $x$?
That is:
$x+10=$perfect square (or can be square such that the result of it squared is a positive integer)
$x-79=$perfect square (or can be square such that the result of it squared is a positive integer)
I have no idea how to solve this.I tried inequality like
$$x+10>x-79$$
$$x-x>-79-10$$
$$0>-89$$
Note: You are not allowed to use trial and error or guess and check to solve the question.
|
Assuming you are looking for positive integers...
Consecutive squares differ by consecutive odd amounts. The first differences of $\{0,1,4,9,16,\ldots\}$ are $\{1,3,5,7,\ldots\}$.
You have two squares that are $89$ apart. They may or may not be consecutive. But for some string of consecutive odd numbers, the sum must be $89$.
$$
\begin{align}
(2k+1)+\left(2(k+1)+1\right)\cdots+\left(2(k+h)+1\right)&=89\\
2k(h+1)+h(h+1)+h+1&=89\\
(2k+h+1)(h+1)&=89\\
\end{align}$$
Since $89$ is prime, either $h=0$ and $k=44$, or well, there is no other possibility.
So $k=44$ and there is only one consecutive odd number in our sequence, meaning the two squares must be consecutive squares. We must be dealing with the difference between $44^2$ and $45^2$. So $x=45^2-10$.
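A brute-force confirmation (a sketch, not part of the original answer) that $x=45^2-10=2015$ is the only such positive integer in a reasonable range:

```python
# Find all x with both x+10 and x-79 perfect squares; by the gap argument above,
# 2015 is in fact the only one.
from math import isqrt

hits = [x for x in range(80, 10_000)
        if isqrt(x + 10)**2 == x + 10 and isqrt(x - 79)**2 == x - 79]
print(hits)  # [2015]
```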
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Integration question on $\int \frac{x}{x^2-10x+50} \, dx$ How would I integrate
$$\int \frac{x}{x^2-10x+50} \, dx$$
I am not sure on how to start the problem
|
I would complete the square in the denominator first.
$$ \displaystyle \int \dfrac{x}{x^2-10x+25+25}dx = \int \dfrac{x}{(x-5)^2+25}dx $$
Let $u=x-5$,
$$ \displaystyle \int \dfrac{u+5}{u^2+25}du = \int \dfrac{u}{u^2+25}du + \int \dfrac{5}{u^2+25}du$$
the first integral you can solve by doing one more substitution, the second is just arctangent.
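For reference, a quick SymPy check of one closed form consistent with these steps (a sketch, not part of the original answer):

```python
# Differentiate the candidate antiderivative and compare with the integrand; prints 0.
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 2)*sp.log(x**2 - 10*x + 50) + sp.atan((x - 5)/5)
print(sp.simplify(sp.diff(F, x) - x/(x**2 - 10*x + 50)))
```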
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Hatcher 2.2.26 Show that if $A$ is contractible in $X$ then $H_n(X,A) =H_n(X) \oplus H_{n-1} (A)$
Show that if $A$ is contractible in $X$ then $H_n(X,A) \approx \tilde H_n(X) \oplus \tilde H_{n-1}(A)$
I know that $\tilde H_n(X \cup CA) \approx H_n(X \cup CA, CA) \approx H_n(X,A)$.
And $(X \cup CA)/X = SA$, where $SA$ is the suspension of $A$. So
$H_n((X \cup CA)/X) = H_n(SA)$. But $SA \simeq A$, and homology is homotopy invariant, so we have $H_n((X \cup CA)/X) = H_n(A)$.
I have seen this discussion in one of the posts on this site but have no clue how to use it in the problem. I can compute the long exact sequence, but why is $H_n(X \cup CA, CA) \approx H_n(X,A)$? And where is the suspension used, and how does one split the long exact sequence into the direct sum?
|
Hatcher suggests to use the fact that $(X\cup CA)/X\simeq SA$: in order to do that you can consider what you obtained in the point (a) of the exercise.
From (a) you know that $A$ is contractible in $X$ iff $X$ is a retract of $X\cup CA$. Since $X$ is a retract of $X\cup CA$ you have that the following sequence splits:
$$0\to \tilde H_n(X)\to \tilde H_n(X\cup CA)\to \tilde H_n(X\cup CA,X)\to 0,$$
hence
$$\tilde H_n(X\cup CA)\approx \tilde H_n(X)\oplus \tilde H_n(X\cup CA,X).\label{a}\tag{1}$$
Now,
$$\tilde H_n(X\cup CA,X)\approx \tilde H_n((X\cup CA)/X)\approx \tilde H_n(SA)\approx \tilde H_{n-1}(A).$$
In order to obtain the desired result you just need to recognize $\tilde H_n(X\cup CA)$ as $\tilde H_n(X,A)\approx H_n(X,A)$ and ($\ref{a}$) becomes
$$H_n(X,A)\approx \tilde H_n(X)\oplus \tilde H_{n-1}(A).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
To show every sylow p-subgroup is normal in G Let $G$ be a finite group and suppose that $\phi$ is an automorphism of $G$ such that $\phi^3$ is the identity automorphism. Suppose further that $\phi(x)=x$ implies that $x=e$. Prove that for every prime $p$ which divides $o(G)$, the $p$-Sylow subgroup is normal in G.
Its an problem from Herstein, prob 19 in 2.12,2nd edition.I need some hint to solve this.Thank you.
|
This is not exactly an easy exercise! The proof outline below is from Burnside's book, "The Theory of Groups of Finite Order".
*
*Show that $G = \{x^{-1}\phi(x) : x \in G \}$.
*If $g$ and $\phi(g)$ are conjugate in $G$, then $g=1$. (Proof. Suppose $\phi(g) = h^{-1}gh$. By 1, $h = x^{-1}\phi(x)$ for some $x$, so $\phi(g) = \phi(x^{-1})xgx^{-1}\phi(x) \Rightarrow \phi(xgx^{-1})=xgx^{-1} \Rightarrow xgx^{-1}=1 \Rightarrow g=1$.)
*$g\phi(g)\phi^2(g)=1$ for all $g \in G$. (Let $x = g\phi(g)\phi^2(g)$. Then $x$ and $\phi(x)$ are conjugate, so it follows from 2.)
*Similarly $\phi^2(g)\phi(g)g = 1$, so $g$ and $\phi(g)$ commute for all $g \in G$.
*Any two conjugate elements in $G$ commute. (Proof. Let $g,h \in G$. By 1, $h=x^{-1}\phi(x)$ for some $x \in G$. Now by 4, $xgx^{-1}$ commutes with $\phi(xgx^{-1})$, so $g$ commutes with $x^{-1}\phi(x)\phi(g)\phi(x^{-1})x = h\phi(g)h^{-1}$. So $\phi(g)$ commutes with $h^{-1}gh$ and similarly $\phi^2(g)$ commutes with $h^{-1}gh$, and hence, by 3, so does $g$.)
*Now, if $p$ is a prime dividing $|G|$, and $g \in G$ has order $p$, then the conjugates of $g$ in $G$ generate a normal (abelian) subgroup $N$ of $G$ of order a power of $p$. Now the largest normal subgroup $O_p(G)$ of $G$ of order a power of $p$ is characteristic in $G$ and hence left invariant by $\phi$ so we can complete the proof by applying induction to $G/N$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Solving $\lim\limits_{n\to\infty}\frac{n^n}{e^nn!}$ I was solving a convergence of a series and this limit popped up:
$$\lim\limits_{n\to\infty}\frac{n^n}{e^nn!}$$
I needed this limit to be $0$ and it is in fact (according to WolframAlpha), but I just don't see how to get the result.
|
Almost as good: write the expression as
$$
L = e^{n \log n - n - \log n!} = e^{n \log n -n -\sum_{k=1}^{n} \log k}
$$
and use the bounds on the sum:
$$
\int_{1}^{n} \log x dx < \sum_{k=1}^{n} \log k < \int_{1}^{n+1} \log x dx
$$
to get the same result without Stirling.
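A quick numerical sanity check (a sketch, not part of the original answer); consistent with Stirling, the quantity decays roughly like $1/\sqrt{2\pi n}$:

```python
# Evaluate n^n / (e^n * n!) through logarithms to avoid overflow.
from math import lgamma, exp, log

for n in (10, 100, 1000, 10_000):
    print(n, exp(n*log(n) - n - lgamma(n + 1)))
```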
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
If a group $G$ is generated by $n$ elements, can every subgroup of $G$ by generated by $\leq n$ elements? I wonder if the statement in the title is true, so if:
$$G=\langle g_1,g_2,\ldots,g_n\rangle$$
means that we can write every subgroup $H$ of $G$ as
$$H=\langle h_1,h_2,\ldots,h_m\rangle$$
for some $h_i\in G$ and $m\leq n$.
What I'm thinking so far: if $n=1$, then:
$$H=\langle h_1,h_2\rangle=\langle g^a,g^b\rangle=\langle g^{\gcd(a,b)}\rangle$$
So the statement is true for $n=1$.
However for $n>1$ I'm having trouble proving/disproving it. I think it comes down to some number theory stuff that I'm not aware of. For example for the case $n=2$:
Say $|g_1|=n, |g_2|=m$
$$H=\langle h_1,h_2,h_3\rangle =\langle g_1^{a_1} g_2^{a_2}, g_1^{b_1} g_2^{b_2}, g_1^{c_1} g_2^{c_2}\rangle$$
So I think that proving that this can be generated by only $2$ of the elements comes down to proving that there are $x,y$ s.t.
$$xa_1+yb_1\equiv c_1\bmod n$$
$$xa_2+yb_2\equiv c_2\bmod m$$
However, I don't know the conditions that have to be satisfied for this system to have solutions.
By the way, I'm not sure/given that this is actually true.
edit: by the way, I think that in writing down that system I accidentally assumed the group to be Abelian. Now I'm interested in the result both for Abelian and non-Abelian groups
|
The statement is true for abelian groups: every abelian group generated by $n$ elements is a quotient of $\mathbb{Z}^n$, and every subgroup of $\mathbb{Z}^n$ is free, generated by at most $n$ elements. This can be found in any text on elementary group theory.
It is false for non-abelian groups. For example, the permutation group $S_n$ is generated by two elements: the cycle $(12\ldots n)$ and the transposition $(12)$. Every finite group $G$ is a subgroup of some $S_n$, but not every finite group is generated by two elements.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Use the definition of differentiation on a piecewise function. I need to find the derivative at $x=0$.
$$ f(x)= \begin{cases} x^2\sin(1/x) & \text{if } x\neq 0 \\
0 & \text{if } x = 0 \end{cases} $$
Using the definition, I know that it's equal to $0$. However, I also need to prove that $f'(x)$ is not continuous at $x=0$. Do I need the entire equation for that?
|
Indeed $f'(0) = 0$. We have, then: $$f'(x) = \begin{cases} 2x\sin(1/x) - \cos(1/x),& \text{if }x\neq 0 \\ 0, & \text{if } x = 0\end{cases}$$
Just check that $\lim_{x \to 0}f'(x)$ does not exist and you are done.
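A small numerical illustration of that last step (a sketch, not part of the original answer): along $x_k=1/(2\pi k)$ the derivative tends to $-1$, while along $x_k=1/((2k+1)\pi)$ it tends to $+1$, so $\lim_{x\to0}f'(x)$ cannot exist.

```python
# f'(x) = 2x*sin(1/x) - cos(1/x) evaluated along two sequences tending to 0.
from math import sin, cos, pi

fprime = lambda x: 2*x*sin(1/x) - cos(1/x)
for k in (10, 100, 1000):
    print(fprime(1/(2*pi*k)), fprime(1/((2*k + 1)*pi)))  # about -1 and about +1
```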
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1182997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Orthonormal Basis Proofs Suppose $u, v, w$ are three differentiable functions from $R \to R^3$ such that for every $t ∈ R$ the vectors $u(t), v(t), w(t)$ form an orthonormal basis in $R^3$.
i) Prove that $u′(t) ⊥ u(t)$ and $u′(t)·v(t) = −u(t)·v′(t)$ for all $t$.
I did
$\theta=cos^{-1}((u_1'(t),u_2'(t),u_3'(t))\bullet(u_1(t),u_2(t),u_3(t)))$
$=cos^{-1}(u_1'(t)u_1(t)+u_2'(t)u_2(t)+u_3'(t)u_3(t))\ne0$.
Aren't they meant to be parallel because the tangent vector of an orthonormal basis takes the same direction as the basis itself?
For the second part, since $\frac{d}{dt} (u(t) \bullet v(t))=0 =u′(t)·v(t) + u(t)·v′(t) $
Therefore $u′(t)·v(t) = −u(t)·v′(t)$
|
$\{u(t),v(t),w(t)\}$ orthonormal basis implies $u(t)\cdot u(t)=1.$ Then
$$0=\frac{d}{dt}1=\frac{d}{dt}(u(t)\cdot u(t))=u'(t)\cdot u(t)+u(t)\cdot u'(t)=2u'(t)\cdot u(t)\implies u'(t)\cdot u(t)=0.$$
$\{u(t),v(t),w(t)\}$ orthonormal basis implies $u(t)\cdot v(t)=0.$ Then
$$0=\frac{d}{dt}0=\frac{d}{dt}(u(t)\cdot v(t))=u'(t)\cdot v(t)+u(t)\cdot v'(t)\implies u'(t)\cdot v(t)=-u(t)\cdot v'(t).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Certificate of primality based on the order of a primitive root Reading my textbook, it tells me that to prove $n$ is prime, all that is necessary is to find one of its primitive roots and verify that the order of one of these primitive roots is $n-1$.
Now, why exactly does this work? It's not making much sense to me. Also, will any primitive root work?
|
If there exists a primitive root, this already means that $(\mathbb Z/n)^{\times} \cong \mathbb Z/\varphi(n)$ is cyclic, where $\varphi$ is Euler's totient function. If $\varphi(n) = n-1$, then $n$ is prime.
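Phrased as an algorithm, this is the classical Lucas primality certificate; here is a sketch (the example values are mine, not from the original answer): exhibit $a$ with $a^{n-1}\equiv1\pmod n$ and $a^{(n-1)/q}\not\equiv1\pmod n$ for every prime $q\mid n-1$, so $a$ has order $n-1$ and $n$ must be prime.

```python
# Lucas-style certificate check; needs the prime factorization of n-1.
def is_certified_prime(n, a, prime_factors_of_n_minus_1):
    if pow(a, n - 1, n) != 1:
        return False
    return all(pow(a, (n - 1)//q, n) != 1 for q in prime_factors_of_n_minus_1)

print(is_certified_prime(101, 2, [2, 5]))  # True: 2 has order 100 modulo 101
```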
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Find the coordinate of third point of equilateral triangle. I have two points A and B whose coordinates are $(3,4)$ and $(-2,3)$ The third point is C. We need to calculate its coordinates.
I think there will be two possible answers, as the point C could be on the either side of line joining A and B.
Now I put AB = AC = BC.
We calculate AB by distance formula : $\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$
= $ \sqrt{5^2 + 1^2} = \sqrt{26}$
I plug this value into the distance of AC and BC and finally equate to get :
$5x + y = 6$
Now what can I do? There are two variables, I am getting equation of a line! How can I solve this?
|
If you like vector approach:
Displace (shift/translate) point A by vector(-3,-4) to come to the origin.
Multiply the new radius vector $ab$ with $ e^ {i \pi/3} , e^ {-i \pi/3}$ (once clockwise and once anticlockwise) to obtain new points $ C_1 $ and $ C_2 $. The multiplying factor is $ (1/2 \pm i \sqrt 3/2) $.
Displace these points back to original positions translating by $ (3,4) $.
If you multiply thrice, all points of a hexagon would also be reached.
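A small numerical sketch of the recipe with the question's points (not part of the original answer):

```python
# Rotate B around A by +/- 60 degrees using complex numbers to get the two candidates for C.
from cmath import exp, pi

A, B = complex(3, 4), complex(-2, 3)
ab = B - A                                  # shift so A sits at the origin
C1 = A + ab * exp(1j*pi/3)                  # rotate by +60 degrees, shift back
C2 = A + ab * exp(-1j*pi/3)                 # rotate by -60 degrees, shift back
print(C1, C2)
```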
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
One-sided nilpotent ideal not in the Jacobson radical? Problem XVII.5a of Lang's Algebra, revised 3rd edition, is:
Suppose $N$ is a two-sided nilpotent ideal of a ring $R$. Show that
$N$ is contained in the Jacobson radical $J: = \{ \cap\, I: I \text{ a
maximal left ideal of } R \}$.
I put my solution below the fold. My question is: can't we generalize a bit more? It seems that all we need is that $N$ is a nil ideal; further, I don't see why $N$ can't be merely a one-sided ideal. I assume there's some error in my thinking here...
Solution: Take $y \in N$, and show that $1-xy$ has a left inverse for all $x\in R$ (this is an equivalent characterization of the Jacobson radical, see here). The way to construct the left inverse is to note that $xy \in N$, so $\exists k$ s.t. $(xy)^k= 0$, so $(1 + xy + \dotsb + (xy)^{k-1})(1-xy)=1$.
|
That is totally correct.
If you require further validation, then check out Lam's First course in noncommutative rings pg 53, lemma 4.11 which has exactly the generalization you describe, with the same proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
What is a "right" automorphism? Let $B_n$ be the braid group with $n$ strands and let $F_n$ be the free group of rank $n$ generated by $x_1,\ldots,x_n$. The classical Artin Representation Theorem reads:
If an automorphism of $F_n$ is an isomorphism from $F_n$ to itself, then what is a right automorphism?
|
Automorphisms are special cases of group actions - the automorphism group acts on the underlying set of some first group. In particular this is a left group action. If $A$ is a group and we have a set $X$, recall a left action of $A$ on $X$ is a map $A\times X\to X$ denoted $(a,x)\mapsto ax$ which satisfies the "associativity" relation $a(bx)=(ab)x$ for all $a,b\in A$ and $x\in X$. Similarly then we can define a right group action as a map $X\times A\to X$ with $(xa)b=x(ab)$ for all $a,b,x$. Even though functions are normally written on the left (e.g. $f(x)$) sometimes they can be on the right side instead (e.g. $(x)f$). It is somewhat unusual though. The right automorphism group is then the set of all right functions of $G$ which are automorphisms of $G$.
The point of right actions is to keep track of how the actions compose together in a consistent and correct manner; sometimes naturally occurring actions are not left actions. For instance we know that $S_n$ acts on $\{1,\cdots,n\}$ and so it acts on $\{(x_1,\cdots,x_n):x_i\in X\}$ for any set $X$ by permuting the coordinates. If $\sigma$ is to put $x_i$ into the $\sigma(i)$-coordinate though, that means $(\sigma x)_{\sigma(i)}=x_i$, or in other words (via the substitution of $\sigma^{-1}(i)$ for $i$) $\sigma(x_1,\cdots,x_n)=(x_{\sigma^{-1}(1)},\cdots,x_{\sigma^{-1}(n)})$. The fact that $(x_1,\cdots,x_n)\sigma=(x_{\sigma(1)},\cdots,x_{\sigma(n)})$ is a right action, not a left action, often trips many people up, even authors of lecture notes in my experience.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What function satisfies the following equation? $$f(x)e^{-x}\Gamma(x/\pi)=f(\pi/2-x)e^{x-\pi/2}\Gamma(1/2-x/\pi)$$
I think it should be similar to Zeta function, but what is it exactly?
|
You are right about the similarity of $f(x)$ to the Riemann Zeta function.
Using the following identity that can be found here
$$\pi^{-\frac{z}{2}}\Gamma\left(\frac{z}{2}\right)\zeta(z)=\pi^{-\frac{1-z}{2}}\Gamma\left(\frac{1-z}{2}\right)\zeta(1-z)$$
and setting $z=\frac{2x}{\pi}$ results in
$$\color{red}{\pi^{-\frac{x}{\pi}}}\Gamma\left(\frac{x}{\pi}\right)\color{red}{\zeta\left(\frac{2x}{\pi}\right)}=\color{red}{\pi^{-\frac{1}{2}-\frac{x}{\pi}}}\Gamma\left(\frac{1}{2}-\frac{x}{\pi}\right)\color{red}{\zeta\left(1-\frac{2x}{\pi}\right)}$$
Comparing this expression (paying particular attention to the parts highlighted in red) with the relationship given in the question, we can deduce that
$$f(x)=\pi^{-\frac{x}{\pi}}\zeta\left(\frac{2x}{\pi}\right)e^{x}$$
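A numerical spot check of this $f$ (a sketch, not part of the original answer), using mpmath:

```python
# Both sides of the relation in the question, evaluated at an arbitrary point x = 0.7.
from mpmath import mp, mpf, pi, exp, gamma, zeta, power

mp.dps = 30
f = lambda x: power(pi, -x/pi) * zeta(2*x/pi) * exp(x)
x = mpf('0.7')
lhs = f(x) * exp(-x) * gamma(x/pi)
rhs = f(pi/2 - x) * exp(x - pi/2) * gamma(mpf('0.5') - x/pi)
print(lhs, rhs)   # the two values should agree
```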
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convergence series problem. How to show that $$\sum_{n=3}^{\infty }\frac{1}{n(\ln n)(\ln \ln n)^p}$$ converges if and only if $p>1$ ?
By integral test,
$$\sum_{n=3}^{\infty }\frac{1}{n(\ln n)(\ln \ln n)^p}$$
$$f(x)=\frac{1}{x(\ln x)(\ln \ln x)^P}$$
$$\int_{3}^{\infty }\frac{1}{x(\ln x)(\ln \ln x)^p}$$
I stucked at here.
|
Another way is to use the Cauchy condensation test:
For a non-negative, non-decreasing sequence $a_n$ of reals, we have
$$\sum_{n=1}^\infty a_n<\infty\;\;\Leftrightarrow\;\;\sum_{n=0}^\infty 2^n a_{2^n}<\infty$$
Applied to your situation we get that your series converges if and only if
$$\sum_{n=2}^\infty \frac{1}{n (\ln n)^p}$$
converges, which by applying the Cauchy condensation test again converges if and only if
$$\sum_{n=1}^\infty \frac{1}{n^p}$$
converges. Now for this one you should know that it converges iff $p>1$.
Note that, for the sake of clarity, I ignored constants coming from $\ln 2$ that pop up.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Finding dimension or basis of linear subspace Let $\;F\;$ be some field and take a column vector $\;\vec x=\begin{pmatrix}x_1\\x_2\\\ldots\\x_m\end{pmatrix}\in F^m\;$ . We defined
$$W=\left\{\;A\in V=M_{n\times m}(F)\;:\;\;A\vec x=\vec 0 \;\right\}$$
It is asked: is $\;W\;$ a linear subspace of $\;V\;$ and if it is find its dimension.
This is what I did: if $\;A,B\in W\;,\;\;k\in F\;$ , then
$$\begin{align}&(A+B)\vec x=A\vec x+B\vec x=\vec 0+\vec 0=\vec 0\implies A+B\in W\\{}\\&
(kA)\vec x=k(A\vec x)=k\vec 0=\vec 0\implies kA\in V\end{align}$$
Since the zero matrix is in $\;W\;$ we get it is a linear subspace.
They gave us hint for second part: that the answer to dimension of $\;W\;$ depends on $\;\vec x\;$ , which I can understand since if $\;\vec x=\vec0\;$ then $\;W=V\implies \dim W=nm\;$, but I can't see how to do in general case.
For example, if $\;\vec x=\begin{pmatrix}1\\0\\0\end{pmatrix}\in\Bbb R^3\;$ and $\;V=M_{n\times 3}(\Bbb R)\;$ , then
$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\
a_{21}&a_{22}&a_{23}\\\ldots&\ldots&\ldots\\a_{n1}&a_{n2}&a_{n3}\end{pmatrix}\begin{pmatrix}1\\0\\0\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}\iff a_{11}=a_{21}=\ldots=a_{n1}=0$$
which gives us $\;\dim W=3n-n=2n\;$ , and if we take instead $\;\vec x=\begin{pmatrix}1\\1\\0\end{pmatrix}\;$ , then
$$a_{i1}=-a_{i2}\;\;\forall\;i=1,...,n\;$$ ...and again I get $\;\dim W=2n\;$ ! This confuses me, and any help will be very appreciated.
|
Here's one way to think about it: If $Ax = 0$, then each of the rows of $A$ is orthogonal to $x$. The collection of vectors orthogonal to a nonzero vector is an $(m-1)$-dimensional subspace**. Thus to specify a matrix $A$ with $Ax = 0$, we must choose $n$ rows each from this $(m-1)$-dimensional space, which implies that $\text{dim}(W) = n(m-1)$. Note that this agrees with your examples, since you took $m = 3$.
**
We want to find the dimension of the subspace $U = \{a \in F^m : a_1 x_1 + \cdots + a_m x_m = 0\}$, where $x$ is a fixed nonzero vector. Since $x \neq 0$ then $x_i \neq 0$ for some $i \in \{1, \ldots, m\}$. Then
$$
a_i = \frac{1}{x_i} (-a_1 x_1 - a_2 x_2 - \cdots - a_{i-1} x_{i-1} - a_{i+1} x_{i+1} - \cdots - a_m x_m)
$$
so we may write
\begin{align*}
\begin{pmatrix}
a_1\\
a_2\\
a_3\\
\vdots\\
a_{i-1}\\
a_i\\
a_{i+1}\\
\vdots\\
a_m
\end{pmatrix} =
\begin{pmatrix}
a_1\\
a_2\\
a_3\\
\vdots\\
a_{i-1}\\
-\frac{1}{x_i} \sum_{\substack{k=1\\ k\neq i}}^m a_k x_k\\
a_{i+1}\\
\vdots\\
a_m
\end{pmatrix}=
a_1
\begin{pmatrix}
1\\
0\\
0\\
\vdots\\
0\\
-x_1/x_i\\
0\\
\vdots\\
0
\end{pmatrix}
+
a_2
\begin{pmatrix}
0\\
1\\
0\\
\vdots\\
0\\
-x_2/x_i\\
0\\
\vdots\\
0
\end{pmatrix}
+ \cdots +
a_m
\begin{pmatrix}
0\\
0\\
0\\
\vdots\\
0\\
-x_m/x_i\\
0\\
\vdots\\
1
\end{pmatrix} \, .
\end{align*}
Thus $U$ is spanned by the $m-1$ vectors $w_k$ with a $1$ in the $k^\text{th}$ spot and $-x_k/x_i$ in the $i^\text{th}$ spot, for $1 \leq k \leq m$, $k \neq i$. One can easily show that these vectors are linearly independent, so $\text{dim}(U) = m-1$.
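A quick numerical check of the dimension count (a sketch, not part of the original answer): the map $A\mapsto Ax$ is linear, and for a random nonzero $x$ its kernel has dimension $n(m-1)$.

```python
# Vectorizing A row-wise, the map A -> A x has matrix kron(I_n, x); compare kernel dimensions.
import numpy as np

n, m = 4, 3
x = np.random.randn(m)
M = np.kron(np.eye(n), x)          # shape (n, n*m)
rank = np.linalg.matrix_rank(M)
print(n*m - rank, n*(m - 1))       # both equal n*(m-1) = 8 here
```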
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1183921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that the complex expression is real Let $|z_1|=\dots=|z_n|=1$ on the complex plane.
Prove that:
$$
\left(1+\frac{z_2}{z_1}\right)
\left(1+\frac{z_3}{z_2}\right)
\dots
\left(1+\frac{z_n}{z_{n-1}}\right)
\left(1+\frac{z_1}{z_n}\right)
\in\mathbb{R}
$$
I have tried induction and writing every "subexpression" as $(1+e^{i(\theta_n-\theta_{n-1})})$.
Any ideas?
|
Writing $z_k = e^{i\theta_k}$, $k = 1,2,\ldots, n$, we write the above expression as
\begin{align}&(1 + e^{i(\theta_2 - \theta_1)})(1 + e^{i(\theta_3 - \theta_2)})\cdots (1 + e^{i(\theta_n - \theta_{n-1})})(1 + e^{i(\theta_1 - \theta_n)})\\
&= e^{-i(\theta_2 - \theta_1)/2}e^{-i(\theta_3 - \theta_2)/2}\cdots e^{-i(\theta_n - \theta_{n-1})/2}(e^{-i(\theta_2- \theta_1)/2} + e^{i(\theta_2 - \theta_1)/2})\cdots (e^{-i(\theta_1 - \theta_n)/2} + e^{i(\theta_1 - \theta_n)/2})\\
&= e^{-i[(\theta_2 - \theta_1) + (\theta_3 - \theta_2) + \cdots + (\theta_1 - \theta_n)]/2}\cdot 2^n\cos((\theta_2 - \theta_1)/2)\cos((\theta_3 - \theta_2)/2)\cdots \cos((\theta_1 - \theta_n)/2)\\
&= 2^n\cos((\theta_2 - \theta_1)/2)\cos((\theta_3 - \theta_2)/2)\cdots \cos((\theta_1 - \theta_n)/2).
\end{align}
The last expression is a real number.
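A numerical spot check (a sketch, not part of the original answer) that the product is indeed real and matches the cosine formula:

```python
# Random unit-modulus z_1,...,z_5; the cyclic product has (numerically) zero imaginary part.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, 5)
z = np.exp(1j*theta)
prod = np.prod(1 + np.roll(z, -1)/z)
diffs = np.roll(theta, -1) - theta
print(prod, 2**5 * np.prod(np.cos(diffs/2)))
```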
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
}
|
An example of a Ring with many zero divisors Is there an example of a commutative ring $R$ with identity such that all its elements distinct from $1$ are zero-divisors?
I know that in a finite ring all the elements are units or zero-divisors. Is there a finite ring with the property I've required?
Obviosuly I'm requiring that $|R|\geq 3$.
|
Hint $\ $ If $\,1\,$ is the only unit then $\,-1 = 1\,$ so the ring is an algebra over $\,\Bbb F_2.\,$ With that in mind, it is now easy to construct many examples.
Such commutative rings - where every element $\ne 1$ is a zero-divisor - were called $0$-rings by Paul M. Cohn. Clearly they include Boolean rings, i.e. rings where every element is idempotent, $\,x^2 = x,\,$ since then $\,x(x-1) = 0.\,$ Kaplansky asked about the existence of non-Boolean $0$-rings. Paul M. Cohn answered the question in Rings of Zero-divisors. There he gave a simple proof that every commutative ring R can be embedded in a commutative ring S such that every element is either a unit of R or a zero-divisor (and if R is an algebra over a field F then so is S). The proof shows further that every proper ideal of R survives (remains proper) in S, with nontrivial annihilator. Cohn then proceeded to prove
${\bf Theorem\ 3\,\ }$ Let $R\,$ be an algebra over $F$ in which every element not in $F$ is a zero-divisor. Then $R$ is a subdirect product of extension fields of $F,\,$ and every $\,x\in R\,$ which is not in $\,F\,$ is transcendental over $F$, except if $\,F = \Bbb F_2$ and $\,x\,$ is idempotent. Moreover, if $R$ has finite dimension over $F$ then either $R=F$ or $R\,$ is a Boolean algebra.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 3
}
|
Simplify the expression $\sin 4x+\sin 8x+\cdots+\sin 4nx$
Simplify the expression $\sin 4x+\sin 8x+\cdots+\sin 4nx$
I have no idea how to do it.
|
Alternatively,
$\sum_{k=1}^n \sin(4kx) = \Im\left\{\sum_{k=1}^n \exp(4kxi)\right\} = \Im\left\{\frac{1-\exp(4(n+1)xi)}{1-\exp(4xi)}\right\} = \Im\left\{\frac{(1-\exp(4(n+1)xi)\exp(-2xi))}{\exp(-2xi)-\exp(2xi))}\right\} = \frac{\cos(2x)(1-\cos(4(n+1)x))-\sin(2x)\sin(4(n+1)x)}{2\sin(2x)}$
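A quick numerical check of the closed form (a sketch, not part of the original answer):

```python
# Compare the partial sum with the formula at an arbitrary point.
from math import sin, cos

def lhs(n, x):
    return sum(sin(4*k*x) for k in range(1, n + 1))

def rhs(n, x):
    return (cos(2*x)*(1 - cos(4*(n+1)*x)) - sin(2*x)*sin(4*(n+1)*x)) / (2*sin(2*x))

print(lhs(7, 0.3), rhs(7, 0.3))   # should agree up to rounding
```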
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Solving the system of differential equations I need to solve the system of equations
$$tx'=\begin{pmatrix}
2 & -1 \\
3 & -2 \\
\end{pmatrix}x$$
where $t>0$. I can solve this system if there was not any $t$ there. How do I treat that $t$?
Any help would be appreciated!
Thanks in advance!
|
Change variable $\tau=\log(t)$
Then
$$
t dx/dt= dx/d(\log(t))=dx/d\tau
$$
Solve the system with respect to $\tau$ and finally replace $\tau$ by $\log(t)$.
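Carrying this out for the given matrix (a sketch, not part of the original answer): in the variable $\tau$ the system has constant coefficients with eigenvalues $\pm1$ and eigenvectors $(1,1)^T$ and $(1,3)^T$, so for $t>0$
$$x(t)=c_1\begin{pmatrix}1\\1\end{pmatrix}t+c_2\begin{pmatrix}1\\3\end{pmatrix}\frac1t,\qquad c_1,c_2\in\mathbb R,$$
which one can verify satisfies $tx'=Ax$ directly.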
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that if $A \subseteq B$ and $B \subset C$, then $A \subset C$. Prove that if $A \subseteq B$ and $B \subset C$, then $A \subset C$.
Proof:
$A \subseteq B \Longrightarrow \forall x\in A, x \in B.$ Since $B \subset C$, it follows that $x \in B \Longrightarrow x \in C$ but $\exists c \in C \ni c \notin B.$
Since $A \subseteq B$, it follows that $c \notin A$, thus $A \subset C$.
Is this good enough?
|
If $A$ is a subset of $B$, then every element of $A$ is an element of $B$. Since $B$ is a subset of $C$, every element of $B$ is an element of $C$. Hence every element of $A$ is an element of $C$, so $A \subseteq C$; and because $B \subset C$ is proper, some element of $C$ lies outside $B$, hence outside $A$, so $A \subset C$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
If a group $G$ contains an element a having exactly two conjugates, then $G$ has a proper normal subgroup $N \ne \{e\}$ If a group $G$ contains an element a having exactly two conjugates, then $G$ has a proper normal subgroup $N \ne \{e\}$
So my take on this is as follows: If we take $C_G(S)$ of S. This is a subgroup of G. If $C_G(S)=G$, then S has no conjugate but itself, so therefore $C_G(S)$ is a proper subgroup. If we suppose $C_G(S)=\{e\}$,then in order for there to be exactly two conjugates of S, then
For every $a \ne b \in G \ \{e\}, bxb^{-1} =axa^{-1}$ but $bxb^{-1}=axa^{-1} \to (a^{-1}b)xb^{-1}=xa^{-1} \to (a^{-1}b)x(a^{-1}b)^{-1}=x \to a^{-1}b \in C_G(S)$
Which means that $C_G(S)$ is actually nontrivial or that $a^{-1}b=e$ if and only if $a=b$, which would be a contradiction. Thus $C_G(S)$ is a nontrivial proper subgroup. Since there are exactly 2 conjugacy classes of S and they are in one to one correspondence with cosets of S, its centralizers' index $[G:C_G(S)]=s$. Subgroups of index $2$ are normal, so $C_G(S)$ is a proper nontrivial normal subgroup.
This approach seemed very different from other examples I have seen so I guess I am wondering if this approach makes sense.
|
Simplifying $a,b$ argument a little:
Let $g$ be the element with exactly two conjugates. Suppose $C_G(g) = \{e\}$. Since $g \in C_G(g)$, this means $g = e$.
By Lagrange's Theorem, $[G : C_G(g)] = 2$ implies $\lvert G \rvert = 2$, so $G = \{e, h\}$. The two conjugates of $g$ are itself and $hgh^{-1}$, but both are $e$, contradicting $g$ having two conjugates.
Conclude $C_G(g) \ne \{e\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Prove $(1 +\frac{ 1}{n}) ^ {n} \ge 2$ Using induction, I proved the base case and then proceeded to prove: $$(1 + \frac{1}{n+1}) ^ {n+1} \ge 2$$ given $$(1 + \frac{1}{n}) ^ n \ge 2$$ However, I'm stuck at this point and have no clue how to go about it. Other than induction, I tried simple algebraic transformations but couldn't prove this inequality. Any pointers on how to prove this will be appreciated.
[PS: This is my first question on stackexchange, so I'm sorry if there's anything wrong with this post and will be happy to edit if needed].
|
Assuming $\left ( 1+\dfrac{1}{n}\right)^n \geq 2$, we want to prove $\left ( 1+\dfrac{1}{n+1} \right)^{n+1}\geq 2$.
You may start by saying: $\left ( 1+\dfrac{1}{n+1}\right)^{n+1} = \left ( 1+\dfrac{1}{n+1} \right)^n\left ( 1+\dfrac{1}{n+1}\right).$
But if $\left( 1+\dfrac{1}{n}\right)^n\geq 2, $ then $\left( 1+\dfrac{1}{n+1}\right)^n\geq 2$.
$\left( 1+\dfrac{1}{n+1}\right)^n\left ( 1+\dfrac{1}{n+1}\right)\geq 2\left ( 1+\dfrac{1}{n+1}\right)$.
It is easy to see that $\left ( 1+\dfrac{1}{n+1}\right) \geq 1$, therefore conclusion follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Hartshorne's Algebraic geometry Chapter III ex. 9.10 I'm struggling with the exercise said in the title (Hartshorne, III, ex. 9.10). No problems in showing that $\mathbb{P}^1$ is rigid. In the second part, we want to show that $X_0$ being rigid does not imply that $X_0$ does not admit global deformations.
The problem asks us to do so building a flat, proper morphism $f:X\to \mathbb{A}^2$ over an algebriacally closed field $k$, with $\mathbb{P}^1$ in the central fiber, but such that for no neighbourhood $U$ of the origin one has $f^{-1}(U)\simeq U\times \mathbb{P}^1$.
I'd like to get hints or an example for this part. I know that some properties of the fibres have to be preserved, in particular dimension, degree and arithmetic genus (Hart III, Cor 9.10). I guess this also implies that we want some of the fibres to be singular (otherwise the geometric genus being equal to the arithmetic would force an isomorphism with $\mathbb{P}^1$), but I don't know how to obtain these singularities, or if the problem can be avoided considering a field different from the complex numbers.
I haven't tried to think about the third part of the problem yet, but any hints about that one would be well accepted anyway.
|
Consider a field of characteristic $\ne 2$ , and the projective subscheme $X \subset \mathbb A^2_k\times_k\mathbb P^2_k $ given by the equation $(a-1)x^2+(b-1)y^2+z^2=0$.
The projection morphism $f:X\to \mathbb A^2_k$ is smooth above $$S=D(a-1,b-1)=\operatorname {Spec}k[a,(a-1)^{-1},b,(b-1)^{-1}]\subset \mathbb A^2_k\quad (S\cong\mathbb G_m\times_k \mathbb G_m)$$ with all fibers isomorphic to $\mathbb P^1_k$ .
However the projection $X|S\to S$ is not locally trivial near any point of $S$ (in particular not locally trivial near $a=0,b=0$) because the generic fiber of $X|S\to S$ is the conic $(a-1)x^2+(b-1)y^2+z^2=0$ seen as having coefficients $a-1,b-1,1 \in k(a,b)$, and that conic has no rational point over the field $k(a,b)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
if G is generated by {a,b} and ab=ba, then prove G is Abelian if G is generated by {a,b} and ab=ba, then prove G is Abelian
All elements of G will be of the form $a^kb^j,\,\, k,j \in \mathbb{Z}^+$
so I need to get $a^kb^j = b^ja^k$
|
First notice that $G=<a,b>$ does not imply directly that all elements of $G$ are of the form $a^kb^j$.
Here we have two different assertions: the first one is $G=\langle a,b\rangle \,\,\,(1)$ and the second one is $ab=ba\,\,\, (2)$. If these two assertions are true then $G$ is Abelian. To understand the consequences of $(2)$ for $G$ we first need to understand what $G$ looks like without $(2)$, and then, assuming $(2)$, we simplify $(1)$.
The definition of the group $<a,b>$ is the following :
$$\langle a,b\rangle=\big\{a^{i_1}b^{j_1}\cdots a^{i_n}b^{j_n}\;\Big/\; n\in \mathbb{N},\ (i_1,j_1,\cdots,i_n,j_n)\in \mathbb{Z}^{2n}\big\} $$
Let's suppose that $ab=ba$, we pick two elements $x,y$ of the group $<a,b>$ we can write :
$$x=a^{i_1}b^{j_1}\cdots a^{i_n}b^{j_n}\\y=a^{l_1}b^{k_1}\cdots a^{l_m}b^{k_m} $$
and we want to prove that $xy=yx$ , the idea is to prove it by induction on $$n(x)=\sum_{i=1,\cdots,n}|i_k|+|j_k|$$
*
*$n(x)=0$ implies that $x$ is the neutral element.
*$n(x)=1$ then $x=a$ or $x=b$; because of the symmetric role of $a$ and $b$ we can assume WLOG that $x=a$. We have $ab=ba$, so $ab^k=(ab)b^{k-1}=(ba)b^{k-1}=b(ab^{k-1})$; using this repeatedly, we can prove for all $k\in \mathbb{Z}$ that $ab^k=b^ka$, hence:
$$ay=a.a^{l_1}b^{k_1}\cdots a^{l_m}b^{k_m}=a^{l_1}.a.b^{k_1}\cdots a^{l_m}b^{k_m}=a^{l_1}b^{k_1}.a.\cdots a^{l_m}b^{k_m}\\ =a^{l_1}b^{k_1}\cdots a^{l_m}b^{k_m}.a=y.a $$
because $a$ commutes with the powers of both $a$ and $b$, so $xy=yx$
*Now suppose that the result is true for all $t$ such that $n(t)=k$, Let $x$ be an element of $<a,b>$ such that $n(x)=k+1$ so we can write either $x=tb$, $x=tb^{-1}$, $x=ta$ or $x=ta^{-1}$ for some $t$ such that $n(t)=k$,and we apply the induction hypothesis in all four cases to conclude that $xy=yx$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1184981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluating the improper integral $\int_0^1 \frac{\log (x \sqrt{x})}{\sqrt{x}} \,dx$ I am supposed to solve this integral but I have no idea how:
$$\int_0^1 \frac{\log (x \sqrt{x})}{\sqrt{x}} \,dx$$
Since one limit is $0$ it will be divided by zero.
Can someone please explain this to me (I really want to understand) and guide me through step by step. Thanks! :)
|
Use the transformation $x=z^2$, to get $$I=6\int_{0}^1 \ln z dz $$ and then use integration by parts to obtain $I=-6$.
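A quick SymPy confirmation (a sketch, not part of the original answer), using $\log(x\sqrt x)=\tfrac32\log x$:

```python
# The integral evaluates to -6.
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.Rational(3, 2)*sp.log(x)/sp.sqrt(x), (x, 0, 1)))
```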
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Tricky 3d geometry problem We have a cube with edge length $L$, now rotate it around its major diagonal (a complete turn, that is to say, the angle is 360 degrees), which object are we gonna get?
Astoundingly the answer is D. And here is a demonstration:
Well now I'm required to calculate the volume of this monster. It's far beyond my capability. I don't know how to analytically describe the curve between the two cones (although I believe it is a hyperbola). And I'm still not sure why it should be a curve rather than a segment or something. Could you help me? Thanks in advance.
|
If we place the cube with its main diagonal from $(0,0,0)$ to $(1,1,1)$ and three edges along the axes, then we can parametrize two edges and a diagonal:
$$
\begin{align}
edge_1&:s\mapsto\begin{pmatrix}s\\0\\0\end{pmatrix},\quad s\in[0,1]\\
edge_2&:s\mapsto\begin{pmatrix}1\\0\\s\end{pmatrix},\quad s\in[0,1]\\
diag&:x\mapsto\frac{1}{\sqrt 3}\begin{pmatrix}x\\x\\x\end{pmatrix},\quad x\in[0,\sqrt 3]
\end{align}
$$
For a given $s\in[0,1]$ one can minimize the quadratic expression (just pick the vertex)
$$
|diag(x)-edge_1(s)|^2
$$
with respect to $x$ to find that $s=\sqrt 3 x$ and with this the distance $f(x)$ between the point $diag(x)$ on the diagonal and the point $edge_1(s)$ on $edge_1$ is
$$
f(x)=\sqrt 2 x
$$
Similarly, one may deduce that for
$$
|diag(x)-edge_2(s)|^2
$$
to be minimized wrt. $x$ for a fixed $s\in[0,1]$ we must have $s=\sqrt 3 x-1$ and so the distance $g(x)$ between the diagonal and $edge_2$ is
$$
g(x)=\sqrt{2(x^2-\sqrt 3x+1)}
$$
By symmetry, we may conclude that the curve we are rotating is
$$
h(x)=
\begin{cases}
\sqrt 2 x&\text{ for }x\leq\tfrac13\sqrt 3\\
-\sqrt 2(x-\sqrt 3)&\text{ for }x\geq \tfrac23\sqrt 3\\
\sqrt{2(x^2-\sqrt 3x+1)}&\text{ in between}
\end{cases}
$$
defined on the domain $x\in[0,\sqrt 3]$ which is illustrated here:
Remark: Fixing $s$ and varying $x$ fixes a point on an edge and varies a point on the diagonal until the nearest point is found. Doing it the other way around would result in a wrong construction of fixing a point on the diagonal and finding the nearest point on the given edge, which minimizes distance orthogonal to an edge instead of orthogonal to the diagonal/axis of rotation.
To demonstrate how it fits, here is an overlay in a dynamic 3D-model of it:
The red curve is the function $h(x)$ derived above corresponding to the "union" case of the solid formed by the uncountable union of all positions of a full rotation of the cube. The purple lines describe the "intersection" case, the uncountable intersection of all positions in a full rotation of the cube.
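Given the curve $h$ above for the "union" solid, the requested volume follows by the disk method, $V=\pi\int_0^{\sqrt3}h(x)^2\,dx$ (with edge length $L=1$); here is a sketch of that computation (not part of the original answer):

```python
# Disk-method volume of the solid of revolution described by h(x); prints sqrt(3)*pi/3.
import sympy as sp

x = sp.symbols('x')
s3 = sp.sqrt(3)
V = sp.pi * (sp.integrate((sp.sqrt(2)*x)**2, (x, 0, s3/3))
             + sp.integrate(2*(x**2 - s3*x + 1), (x, s3/3, 2*s3/3))
             + sp.integrate((sp.sqrt(2)*(x - s3))**2, (x, 2*s3/3, s3)))
print(sp.simplify(V))   # about 1.81 for a unit cube
```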
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 3
}
|
How to explain the formula for the sum of a geometric series without calculus? How to explain to a middle-school student the notion of a geometric series without any calculus (i.e. limits)? For example I want to convince my student that
$$1 + \frac{1}{4} + \frac{1}{4^2} + \ldots + \frac{1}{4^n} = \frac{1 - (\frac{1}{4})^{n+1} }{ 1 - \frac{1}{4}}$$
at $n \to \infty$ gives 4/3?
|
What about multiplying the LHS by $(1 - \frac{1}{4})$?
Or is that what you wanted to avoid?
I mean it is not so difficult to understand that nearly all terms cancel... ?
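For concreteness, the cancellation the hint refers to is (a sketch):
$$\Bigl(1-\tfrac14\Bigr)\Bigl(1+\tfrac14+\tfrac1{4^2}+\cdots+\tfrac1{4^n}\Bigr)=1-\tfrac1{4^{n+1}},$$
so the sum equals $\dfrac{1-(1/4)^{n+1}}{1-1/4}$, which gets as close to $\dfrac{1}{3/4}=\dfrac43$ as we like once $n$ is large.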
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 13,
"answer_id": 3
}
|
simplifying an expression which concludes $e^x$ I am solving a high school question right now (well, I'm a high-schooler) and I don't understand how they simplify the expression:
$$\frac{-e^{x}+\frac{e^{2x}}{\sqrt{1+e^{2x}}}}{-e^{x}+\sqrt{1+e^{2x}}}$$
To be the expression $$\frac{-e^{x}}{\sqrt{e^{2x}+1}}.$$
I tried to do $$\frac{\frac{e^{2x}+\left(\sqrt{1+e^{2x}}\right)(-e^{x})}{\sqrt{1+e^{2x}}}}{-e^{x}+\sqrt{1+e^{2x}}}=\frac{e^{2x}+\left(\sqrt{1+e^{2x}}\right)(-e^{x})}{\sqrt{1+e^{2x}}\left(-e^{x}+\sqrt{1+e^{2x}}\right)}=\frac{e^{2x}+\left(\sqrt{1+e^{2x}}\right)(-e^{x})}{\sqrt{1+e^{2x}}(-e^{x})+1+e^{2x}}$$ But I don't understand how to continue from here.
|
Look at the numerator of your big fraction, which is $$-e^x+\frac {e^{2x}}{\sqrt{1+e^{2x}}}$$
Put this over a common denominator $$\frac {-e^x\sqrt{1+e^{2x}}+e^{2x}}{\sqrt{1+e^{2x}}}$$
Extract a factor $-e^x$ from the numerator
$$-e^x\frac {\sqrt{1+e^{2x}}-e^{x}}{\sqrt{1+e^{2x}}}$$
Now divide this by the original denominator and cancel.
This involves fewer and simpler steps than the other methods proposed, but involves spotting that the large factor will cancel in a straightforward way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Find the volume V of the solid bounded by the cylinder $x^2 +y^2 = 1$, the xy-plane and the plane $x + z = 1 $. Find the volume V of the solid bounded by the cylinder $x^2 +y^2 = 1$, the xy-plane and the
plane $x + z = 1 $.
Hi all, I can't seem to get the correct answer for this question. The answer is $\pi$ but I got $2\pi$. I was hoping that someone could check to see what I'm doing wrong. Thanks in advance.
I tried doing this with polar coordinates, so $x^2+y^2=r^2$, $x=r\cos\theta$, $y=r\sin\theta$,
$0\le \theta \le 2\pi$, $0 \le r \le 1$ and $0 \le z \le 1 - r\cos\theta$.
Did the integration like this:
$\int_0^{2\pi}\int_0^1\int_0^{1-r\cos\theta} dz\, dr\, d\theta$
= $\int_0^{2\pi}\int_0^1 (1-r\cos\theta)\, dr\, d\theta$
= $\int_0^{2\pi} (1- \frac{\cos\theta}{2})\, d\theta$
= $2\pi$
edit: the mistake was that the $r$ was missing, e.g. $\int_0^{2\pi}\int_0^1\int_0^{1-r\cos\theta} r\, dz\, dr\, d\theta$
|
Project the whole situation onto the $(x,z)$-plane, and draw a figure showing this plane. You will then realize that the body $B$ in question is half of the cylinder $$\bigl\{(x,y,z)\>|\>x^2+y^2\leq 1, \ 0\leq z\leq2\bigr\}\ .$$
It follows that ${\rm vol}(B)={1\over2}\cdot \pi\cdot 2=\pi$.
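As a cross-check, redoing the question's computation with the missing Jacobian $r$ gives the same value:
$$\int_0^{2\pi}\!\!\int_0^1\!\!\int_0^{1-r\cos\theta} r\,dz\,dr\,d\theta=\int_0^{2\pi}\!\!\int_0^1 \bigl(r-r^2\cos\theta\bigr)\,dr\,d\theta=\int_0^{2\pi}\left(\frac12-\frac{\cos\theta}{3}\right)d\theta=\pi.$$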
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Topology, Basis of a given topology. We defined basis for a topology, and there is something that I do not understand. Here is how we defined the basis.
Given a topological space $\left(X,\mathscr T\right)$
we defined basis for the topology to be the set $\mathscr B$
, consisting of subsets of $X$
if it satisfies 2 conditions.
First, for all $x\in X$
there exists $B\in\mathscr B$
such that $x\in B$
. Secondly if $x\in B_{1}\cap B_{2}$
, for $B_{1},B_{2}\in\mathscr B$
, then there exists $B_{3}$
such that $x\in B_{3}\subseteq B_{1}\cap B_{2}$
.
So, I am studying from the book Topology, by Munkres. And it stated that the basis is a subset of the topology. But, if I choose $X=\left\{ a,b\right\}$
and $\tau=\left\{ \emptyset,X\right\}$
, and I can define $\mathbb{B}=\left\{ \left\{ a\right\} ,\left\{ b\right\} \right\} $
. The set $\mathbb{B}$
satisfies the conditions of the definitions. However it's not a subset of the topology. What am I missing here?
|
If a topology is already given on a set, then a basis for that topological space must be a subset of the given topology, and taking arbitrary unions of basis elements must generate the whole topology. So if you choose a topology on a set and try to find a basis for it, the collection of subsets you pick must satisfy the two conditions you mentioned from Munkres and it must be contained in (and generate) that topology. In your example, $\{a\}$ and $\{b\}$ are not open in $\tau=\{\emptyset,X\}$, so $\mathbb{B}$ is not a basis for $\tau$: it does satisfy the two conditions, but the topology it generates is the discrete topology on $X$, not $\tau$.
Now, if we have $X$ just as a set and we take a collection of its subsets satisfying the two conditions of the definition, then this collection generates a topology on $X$, which might be different from any other topology on $X$ generated by some other basis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Why is $I^1=\varphi*I^0$?
Why is $I^1=\varphi*I^0$ ?
where, $I^j(n)=n^j$ and $(f*g)(n)=\sum\limits_{d\in\mathbb N, d\mid n}f(d)g(\frac{n}d)$
$\varphi$ is the Euler-totient function, $\varphi(c)=\left(\prod\limits_{p\text{ prime},\atop p\mid c}\frac{p-1}{p}\right)c$
So $I^1(n)=n\overset!=\varphi*I^0(n)=\sum\limits_{d\in\mathbb N,\atop d\mid n}\left(\prod\limits_{p\text{ prime},\atop p\mid d}\frac{p-1}{p}\right)d\cdot \underbrace{I^0(\frac{n}d)}_{=1}$; now it becomes complicated, can you help?
(Latex question: How can I write several conditions under a product or sum one above the other)
|
The convolution $\varphi * I^0$ is commutative, so
$$(\varphi * I^0)(n) =\sum_{d\mid n}\varphi\left(\frac{n}{d}\right) d^{\,0}= \sum_{d\mid n} \varphi\left(\frac{n}{d}\right).$$
It is convenient to work with this last expression, rather than the original one, because it is easier to see what is going on.
The Euler totient function counts the number of elements in $\Bbb{Z}/n\Bbb{Z}$ that are coprime with $n$, hence gives $\left| (\Bbb{Z}/n\Bbb{Z})^{\times}\right|$ as value for $\varphi(n)$, because each elements that is coprime with $n$ generates $\Bbb{Z}/n\Bbb{Z}$. The Euler totient function is therefore defined as
$$\varphi(n):= \sum_{m=1}^{n} \left\lfloor \frac{1}{\gcd(n,m)} \right\rfloor.$$
Generally if $\gcd(n,k)=d$ then $\gcd\left(\frac{n}{d},\frac{k}{d} \right)=1$, so $\varphi\left(\frac{n}{d}\right)$ counts the positive numbers $k \le n$ such that $\gcd(n,k)=d$. Now
$$\sum_{d\mid n}\varphi\left(\frac{n}{d}\right)$$
counts, over all divisors $d$ of $n$, the positive integers $k\le n$ such that $\gcd(n,k)=d$. Every $k\in\{1,\dots,n\}$ has exactly one such $d$, so this total is exactly $n$, i.e. $I^1(n)= n = \sum_{d\mid n} \varphi\left(\frac{n}{d}\right) = \sum_{d\mid n} \varphi(d)$.
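For instance, a quick check with $n=6$, whose divisors are $1,2,3,6$:
$$\sum_{d\mid 6}\varphi(d)=\varphi(1)+\varphi(2)+\varphi(3)+\varphi(6)=1+1+2+2=6=I^1(6).$$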
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
total variation of uniformly bounded function If a function is uniformly bounded and has finite variation, is the finite variation less than the uniform bound times a constant?
Thank you.
|
No.
Take functions in $C([0,1])$, for example. Take $f_n(0)=0$ and $f_n'(x)=1$ if $\frac{2k}{n}\leq x < \frac{2k+1}{n}$ for some integer $k$, and $f_n'(x)=-1$ otherwise. These functions are saw-shaped; you can easily show that $||f_n||_\infty = \frac{1}{n}$ while the total variation of $f_n$ is always one.
Conversely, if $f(x)=1 \; \forall x$, the total variation is $0$ but the uniform norm is $1$. As you can see, the total variation isn't a norm on functions of bounded variation. However, $N_x: f \mapsto |f(x)|+V(f)$ (where $V(f)$ is the total variation of $f$) is in fact a norm with the following property: $N_x(f)\geq ||f||_\infty$.
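A sketch of why no such constant can exist, using the sawtooth: on each of the $n$ subintervals of length $\frac1n$ the function $f_n$ is monotone with slope $\pm 1$, so
$$V(f_n)=\sum_{k=0}^{n-1}\left|f_n\!\left(\tfrac{k+1}{n}\right)-f_n\!\left(\tfrac{k}{n}\right)\right|=n\cdot\frac1n=1,\qquad \|f_n\|_\infty=\frac1n\longrightarrow 0,$$
so no constant $C$ can give $V(f)\le C\,\|f\|_\infty$ for all bounded functions of finite variation.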
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to get value of $\lim\limits_{n \to \infty} \frac{\sqrt[n]{n!}}{n}$ without using Stirling-formula? The problem $$\lim\limits_{n \to \infty}\frac{\sqrt[n]{n!}}{n}$$ was a for-fun-only exercise given by our Calculus Professor. I was able to solve it quite easily with the use of the Stirling-formula, but can't figure out if it can be done in a different way, possibly by brute force spiced with recognizing the Euler-number in the process. I got to $$\lim\limits_{n \to \infty} \sqrt[n]{1 \cdot (1-1/n) \cdot (1-2/n) \cdot ... \cdot (1-(n-1)/n) }$$ by simply bringing the $n$ inside the n-th root sign, but I believe this is not the correct path.
I have no other ideas.
|
Start as
$$ \frac{(n!)^{1/n}}{n} = e^{ \frac{1}{n}\ln n! - \ln n } \sim e^{ \frac{n\ln n-n+1}{n}-\ln n }= e^{-1+1/n} \longrightarrow e^{-1}. $$
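Alternatively, a sketch that finishes the question's own rewriting without Stirling: taking logarithms of $\sqrt[n]{\prod_{k=0}^{n-1}(1-k/n)}$ gives a Riemann sum,
$$\frac1n\sum_{k=0}^{n-1}\ln\!\left(1-\frac{k}{n}\right)\;\longrightarrow\;\int_0^1\ln(1-t)\,dt=-1,$$
so the original limit is $e^{-1}$ (the integral is improper at $t=1$ but convergent, so a little care is needed to justify passing to the limit).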
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Probability proof of inversion formula for Laplace transform Let $f:[0, \infty[\longrightarrow \mathbb{R}$ be bounded and continuous and define $L(\lambda)=\int_0^\infty e^{-\lambda x}f(x)dx$.
Let $X_n$ be a sequence of independent random variables with exponential distribution of rate $\lambda$.
Using the fact that the sum $S_n=X_1+...+X_n$ has Gamma distribution we can see that
$$ (-1)^{n-1}\frac{\lambda^nL^{(n-1)}(\lambda)}{(n-1)!} =Ef(S_n)$$
where $L^{(n-1)}$ is the $n-1$ derivative of $L$.
How can we use this to
prove $$ f(y)=\lim_{n}(-1)^{n-1}\frac{(\frac{n}{y})^nL^{(n-1)}(\frac{n}{y})}{(n-1)!} $$
We know that by the strong law of large numbers $S_n/n$ converges to $E(X_1)$ almost everywhere, so I could consider
the parameter $\lambda=f(y)$ but that doesn't seem useful..
|
Old question, but I was working on the same problem recently. Here's an idea:
The functions $g_n$ defined by $g_n(x) = f(x/n)$ are bounded and continuous. Let $L_h$ denote the Laplace transform of a function $h$, and let the independent exponential random variables $X_1,X_2,\dots$ have rate $\frac{1}{y}$ (using $y\neq 0$). By the first part of the problem
$$
\mathbb{E}\left(f\left(\frac{S_n}{n}\right)\right) = \mathbb{E}(g_n(S_n)) = (-1)^{n-1}\frac{\frac{1}{y^n}L^{(n-1)}_{g_n}\left(\frac{1}{y}\right)}{(n-1)!}
$$
Yet the strong law from Chapter 7 gives
$$\lim_{n\to\infty}\frac{S_n}{n} = \mathbb{E}(X_1) = y \quad \text{almost surely}$$
and since $f$ is bounded the expected values converge:
$$
\lim_{n\to\infty} (-1)^{n-1}\frac{\frac{1}{y^n}L^{(n-1)}_{g_n}\left(\frac{1}{y}\right)}{(n-1)!} = \lim_{n\to\infty} \mathbb{E}\left(f\left(\frac{S_n}{n}\right)\right) = f(y)
$$
The rest is just a matter of rewriting the terms $L_{g_n}^{(n-1)}$ in terms of $L_f^{(n-1)}$. For any bounded continuous (say) function $h$ one can compute for each $\lambda > 0$ that
$$
\quad L_h^{(n-1)}(\lambda) = (-1)^{n-1}\int_0^\infty e^{-\lambda x} x^{n-1} h(x)\,dx
$$
In our case, then,
$$
L_{g_n}^{(n-1)}(\lambda) = (-1)^{n-1}\int_0^\infty e^{-\lambda x} x^{n-1} f(x/n) \, dx \mathrel{\underset{u:=x/n}{=}} (-1)^{n-1}\int_0^\infty e^{-n\lambda u} (nu)^{n-1} f(u)\, n \, du = n^n L_f^{(n-1)}(n\lambda)
$$
Setting $\lambda = \frac{1}{y}$ yields
$$
f(y) = \lim_{n\to\infty} (-1)^{n-1}\frac{\frac{1}{y^n}L^{(n-1)}_{g_n}\left(\frac{1}{y}\right)}{(n-1)!} = \lim_{n\to\infty} (-1)^{n-1}\frac{\frac{n^n}{y^n}L^{(n-1)}_{f}\left(\frac{n}{y}\right)}{(n-1)!}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Negative modulus In the programming world, modulo operations involving negative numbers give different results in different programming languages and this seems to be the only thing that Wikipedia mentions in any of its articles relating to negative numbers and modular arithmetic. It is fairly clear that from a number theory perspective $-13 \equiv 2 \mod 5$. This is because a modulus of $5$ is defined as the set $\{0,1,2,3,4\}$ and having $-13 \equiv -3 \mod 5$ would contradict that because $-3 \not\in \{0,1,2,3,4\}$. My question is then regarding a negative modulus. What definition of a modulus of $-5$ would make the most sense in number theory? One in which $13 \equiv -2 \mod -5$, one in which $13 \equiv 3 \mod -5$, or something else entirely?
|
Negating the modulus preserves the congruence relation, by $\ m\mid x\color{#c00}\iff -m\mid x,\,$ so
$\quad a\equiv b\pmod m\iff m\mid a\!-\!b \color{#c00}{\iff} -m\mid a\!-\!b\iff a\equiv b\pmod {\!{-}m}$
Structurally, a congruence is uniquely determined by its kernel, i.e. the set of integers $\equiv 0.\,$ But this is a set of the form $\,m\Bbb Z,\,$ which is invariant under negation $\,-m\Bbb Z\, =\, m\Bbb Z$
When you study rings you can view this as a special case of the fact that ideals are preserved under unimodular basis transformations, e.g. $\,aR + bR\, =\, cR + dR \ $ if $\, (a,b)M = (c,d)\,$ for some matrix $\,M\,$ having $\ \det M = \pm1,\, $ e.g. $\ a\Bbb Z + b \Bbb Z\, =\, a\Bbb Z + (b\!-\!a)\,\Bbb Z,\,$ which is the inductive step in the Euclidean algorithm for the gcd (it computes the modulus $\,m=\gcd(a,b)\,$ corresponding to the congruence generated by $\,a\equiv 0\equiv b,\,$ i.e. $\,a\Bbb Z + b\Bbb Z = m\Bbb Z).\,$ When the ideal $= a\Bbb Z$ then this amounts to multiplying $\,(a)\,$ by a $\,1\times 1\,$ matrix $\,[u]\,$ with $\det = \pm1,\,$ i.e. $\, u = \pm1,\,$ yielding $\,a\Bbb Z = -a\Bbb Z,\,$ precisely the original equality.
As for the choice of canonical reps for the congruence classes, it is your freedom to choose which system you find most convenient. For example, in manual computations it often proves most convenient to choose reps of least magnitude, e.g. $\, 0,\pm1,\pm2\,$ mod $\,5;\,$ the capability to use $\,-1$ simplifies many things, e.g. compare $(-1)^n$ vs. $\,4^n.$
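So, for the concrete question: since $-5\,\Bbb Z = 5\,\Bbb Z$, working mod $-5$ is literally the same relation as working mod $5$; for example
$$13\equiv 3\equiv -2\pmod{-5},$$
and whether one prefers the representative $3$ (least non-negative) or $-2$ (least magnitude) is purely a matter of convention.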
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1185921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Polynomial subspace Wondering about this set: $E=\{p(X) \in \mathbb{R}[X] : Xp(X)+p'(X)=0\}$, is it a subspace of $\mathbb{R}[X]$?
I definitely think it is, because it only includes the zero polynomial, but how could we prove it? I usually take $u$ and $v$ which are in the set and then prove that $\lambda u+v \in$ the set, but I don't see how to proceed here.
|
Hint: Find all elements of the set $E$.
Hint 2: Look at the degrees and leading coefficients.
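Following those hints, a sketch of the argument: if $p\neq 0$ has degree $n\ge 0$ with leading coefficient $a_n\neq 0$, then
$$\deg\bigl(Xp(X)\bigr)=n+1>\deg p'(X),$$
so $Xp(X)+p'(X)$ has degree $n+1$ with leading coefficient $a_n\neq 0$ and cannot be the zero polynomial. Hence $E=\{0\}$, and the zero subspace is indeed a subspace of $\mathbb{R}[X]$.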
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
$|G:H|=2$, $K \leq G$ with an element not in $H$. Prove $HK=G$ I know it has something to do with normal subgroups and cosets partitioning, but I don't know how to deal with cosets of $HK$.
|
Since $H$ has index $2$, it is a normal subgroup. This implies that $HK$ is a subgroup (in general it is only a subset). But $H \subsetneq HK \subseteq G$, where the first inclusion is strict because $K$ has an element not in $H$. Comparing indices, $2=[G:H]=[G:HK]\,[HK:H]$ with $[HK:H]>1$, so $[G:HK]=1$, that is, $G=HK$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to show $\lim_{k\rightarrow \infty} \left(1 + \frac{z}{k}\right)^k=e^z$ I need to show the following:
$$\lim_{k\rightarrow \infty} \left(1 + \frac{z}{k}\right)^k=e^z$$
For all complex numbers z. I don't know how to start this. Should I use l'Hopitals rule somehow?
|
Presumably you know that $\lim\limits_{n\to\infty} \left(1+\frac{1}{n}\right)^n = e$. Try setting $n = \frac{k}{z}$ where $z$ is a constant and see where that gets you.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Proof of aperiodic Markov Convergence Theorem for null recurrent case. Status quo:
We consider a irreducible, aperiodic Markov chain $(X_n)_{n\in\mathbb{N}}$ on a countable set $S$ with tranistion function $p(\cdot,\cdot)$.
Now we want to examine $\underset{n\rightarrow\infty}{\text{lim}}~p^n(i,j)$ for arbitrary $i,j\in S$.
If $j$ is transient we can show that $\sum_{n=0}^\infty p^n(i,j)<\infty$ and therefore we know that $\underset{n\rightarrow\infty}{\text{lim}}~p^n(i,j)=0$. If $j$ is positive recurrent we can show by nice coupling argument (see for example Durrett) that $\underset{n\rightarrow\infty}{\text{lim}}~p^n(i,j)=\pi(j)$, where $\pi$ is the unique stationary distribution of the chain.
The question:
My concern is the null recurrent case. I looked through a bunch of textbooks and I could not find a proof for the fact that $\underset{n\rightarrow\infty}{\text{lim}}~p^n(i,j)=0$ in this case. Most textbooks mention it but none of them gives a proof, and I can't prove it either.
Any ideas, thoughts and especially references are highly appreciated. Thanks!
|
After some more research I found a nice answer in the book by Norris Theorem 1.8.5. It also makes use of a Coupling argument with the choice of a shifted measure.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to solve a congruence system? Here is a tricky congruence system to solve, I have tried to use the Chinese Remainder Theorem without success so far.
$$2x \equiv3\;(mod\;7)\\
x\equiv8\;(mod\;15)$$
Thank you very much
Li
|
$$ 2x \equiv 3 \pmod 7 $$
$$ 2x \equiv 3 \equiv 10 \pmod 7 $$
$$ x \equiv 5 \pmod 7 $$
$$ x \equiv 8 \pmod{15} $$
$$ x = 15k_1 + 8 $$
$$ x = 7k_2 + 5 $$
$$ x = 105k_3 + c $$
$$ 0 \le c < 7\cdot 15 $$
$$ c \equiv 5 \pmod 7 $$
$$ c = 15k_4+8 $$
$$ 0 \le k_4 < 7 $$
$$ 15k_4+8 \equiv 5 \pmod 7 $$
$$ k_4 + 1 \equiv 5 \pmod 7 $$
$$ k_4 = 4 $$
$$ c=15\cdot 4+8 \implies c=68 $$
$$ x = 105k_3 + 68, \qquad k_3 \in \mathbb Z $$
(Check: $2\cdot 68 = 136 = 19\cdot 7 + 3$, so $2x\equiv 3\pmod 7$, and $68 = 4\cdot 15 + 8$, so $x\equiv 8\pmod{15}$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
primitive roots problem. that integer n can never have exactly 26 primitive roots. Show that no integer $n$ can have exactly 26 primitive roots.
I know that if $n$ has primitive roots then it has exactly $\phi(\phi(n))$ primitive roots.
I think the proof has to use contradiction.
Suppose $n$ is an integer and has exactly 26 primitive roots then $\phi(\phi(n))=26$.
How do I carry on and show that $\phi(\phi(n))$ is not 26.
Please explain me.
|
Because 26 is what is called a "nontotient." With only two exceptions, $\phi(n)$ is never odd. There are also some even values that never occur as $\phi(n)$, and 26 is one of them.
Remember that $\phi(p) = p - 1$ if $p$ is prime. But $27 = 3^3$, so 26 is not a totient this way. How about $\phi(p^\alpha) = (p - 1)p^{\alpha - 1}$? Or some combination of primes and powers of primes? Since $26 = 2 \times 13$, we're looking for $n$ a multiple of 3 or 4. But what to make of the 13? If you're still not convinced, you can try testing $\phi(n)$ for $26 < n < 162$ by brute force, e.g., Select[Range[27, 161], EulerPhi[#] == 26 &].
So, if $\phi(n) = 26$ has no solution, then $\phi(\phi(n)) = 26$ is absolutely hopeless.
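For instance, one way to make the non-totient claim precise: suppose $\phi(n)=26$. Since $13\mid\phi(n)=\prod_{p^a\|n}p^{a-1}(p-1)$, either some prime $p\mid n$ satisfies $13\mid p-1$, or $p=13$ divides $n$ with exponent at least $2$. In the first case $p-1$ divides $26$ and is a multiple of $13$, so $p\in\{14,27\}$, neither of which is prime. In the second case $13^2\mid n$, so $\phi(n)\ge\phi(169)=156>26$. Both cases are impossible, so $\phi(n)=26$ has no solution, and a fortiori neither does $\phi(\phi(n))=26$.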
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
What multiples of $d$ are still multiples of $d$ when they have their digits reversed? I teach at a school for 11 to 18 year olds. Every term I put up a Challenge on the wall outside my classroom.
This question is one that I have devised for that audience. I think that it is quite interesting and I would like to share it with a wider audience. I am not aware of it being a copy of another question.
An underlying assumption to this question is that we are writing numbers in base 10. A further question could be to investigate the same question in other bases.
Define the reverse of a number as the number created by writing the digits of the original number in the opposite order. Leading zeroes are ignored, although you will see that this makes very little difference to my question if this stipulation is removed.
Thus $Reverse(1456)=6541$ and $Reverse(2100)=12$
My question is this: For a given value $d$, what numbers $x$ have the property that $x$ is a multiple of $d$ and $Reverse(x)$ is a multiple of $d$?
I call such numbers "reversible multiples of $d$."
Clearly there are "brute force" ways to investigate this question, but I am looking for more subtle answers.
A good answer should address the following:
a) For some values of $d$ all multiples are reversible multiples. List those values, with a proof of why this is so.
b) For other values there are certain properties that the multiple must have for it to be a reversible multiple. Explain these.
c) For at least one value there is an algorithm that can be used to construct reversible multiples. Describe such an algorithm.
Enjoy!
|
I think this question is fascinating, thanks for posting it. In addition to $3$ and $9$, I might add that multiples of $11$ are also always reversible. Therefore, multiples of $33$ and $99$ are also always reversible. One might hope that squaring an always reversible number leads to an always reversible number, but that's not true: $81$ and $121$ multiples are not always reversible. Similarly, $33^2 = 1089 \ $ is not always reversible. However, 1089 is one of my favorite numbers because it is reversible for the first ten multiples, which is pretty unusual for a 4 digit number!
Now, $1001$ is reversible for much more than just the first ten multiples. It is reversible for so many multiples that one might be tempted to conjecture it is always reversible. However, a quick Mathematica search reveals $1001 \cdot 1009 \ $ is not reversible, even though any multiple less than $1009$ is reversible!
Is $1, 3, 9, 11, 33, 99 \ $ the complete list of numbers whose multiples are always reversible? I do not know ... (Edit: actually, this question is answered here: https://oeis.org/A018282/a018282.pdf )
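A quick reason for the always-reversible ones, for what it's worth: reversing the digits leaves the digit sum unchanged, so divisibility by $3$ and $9$ is preserved; and it either preserves or negates the alternating digit sum (depending on the parity of the number of digits), so divisibility by $11$ is preserved as well. Combining these handles $33$ and $99$ too; trailing zeros cause no trouble since dropping a factor $10^k$ does not affect divisibility by $3$, $9$ or $11$.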
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 0
}
|
Closure of Intersection Given a topological space $\Omega$.
By minimality it holds:
$$\overline{A\cap B}\subseteq\overline{A}\cap\overline{B}$$
Especially one has equality for:
$$A=\overline{A}:\quad\overline{A\cap B}=A\cap\overline{B}$$
How to check this quickly?
|
What if $A= [0,1]$ and $B=(1,2)$? Then $A \cap B = \emptyset$, so $\overline{A \cap B} = \emptyset$, but $A \cap \overline{B}= [0,1] \cap [1,2]=\{1\}$. So you don't have the equality.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the point on the paraboloid $z = \frac{x^2}{4}+ \frac{y^2}{25}$ that is closest to the point $(3, 0, 0)$ Find the point on the paraboloid $z = \frac{x^2}{4}+ \frac{y^2}{25}$
that is closest to the point $(3, 0, 0)$
Hi all, could someone give me a hint on how to start doing the above question?
|
Set $P=(3,0,0)$.
The function of interest:
$$f(x,y)=\left(\frac{x}{2}\right)^2+\left(\frac{y}{5}\right)^2$$
Its graph is given by:
$$\phi (u,v)=(u,v,f(u,v))$$
We calculate the normal-vector as follows:
$$\frac{\partial \phi }{\partial u}\times \frac{\partial \phi }{\partial v}(u,v)=\left(-\frac{u}{2},-\frac{1}{25} (2 v),1\right)$$
The normal vector is perpendicular to the surface.
We now consider the straight-line:
$$\phi (u,v)+t \left(-\frac{u}{2},-\frac{1}{25} (2 v),1\right)$$
and ask where this line meets $P$. We are going to solve
$$\phi (u,v)+t \left(-\frac{u}{2},-\frac{1}{25} (2 v),1\right)=P$$
We found these values:
$$u=2,v=0,t=-1$$
There exist other solutions too, but they are not real.
Because of
$$\phi (2,0)-P=(-1,0,1)$$
this shortest distance is $\sqrt{2}$.
Have a look at what we've done:
The red line is the $x$-axis; the blue line is our shortest segment.
Where they meet is the point $P$.
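As a quick sanity check, restricting to the plane $y=0$ (which contains $P$): on the paraboloid there $z=\frac{x^2}{4}$, and minimizing
$$g(x)=(x-3)^2+\left(\frac{x^2}{4}\right)^2,\qquad g'(x)=2(x-3)+\frac{x^3}{4},\qquad g'(2)=-2+2=0,$$
confirms the critical point $(2,0,1)$, at distance $\sqrt{(2-3)^2+1^2}=\sqrt2$ from $P$.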
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Method of finding a p-adic expansion to a rational number Could someone go though the method of finding a p-adic expansion of say $-\frac{1}{6}$ in $\mathbb{Z}_7?$
|
The short answer is, long division.
Say you want to find the $5$-adic expansion of $1/17$. You start by writing
$$
\frac{1}{17}=k+5q
$$
with $k \in \{0,1,2,3,4\}$ and $q$ a $5$-adic integer (that is, a rational number with no powers of $5$ in the denominator). Then $k$ is the first term in the expansion, and you repeat the process with $q$ to find the remaining terms.
In this case, we have
$$
\frac{1}{17}=3+5\left(-\frac{10}{17}\right)
$$
and so the first term is a $3$. Continuing similarly,
$$
-\frac{10}{17}=5\left(-\frac{2}{17}\right)
$$
and so the second term is a $0$,
$$
-\frac{2}{17}=4 + 5\left(-\frac{14}{17}\right)
$$
and so the third term is a $4$,
$$
-\frac{14}{17}= 3 + 5\left(-\frac{13}{17}\right)
$$
and so the fourth term is a $3$, and so on.
Eventually, you'll hit a remainder for the second time, and then the expansion will start repeating (just as when you're computing the repeating decimal expansion of a fraction using ordinary long division).
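Applied to the number in the question, the same long division is particularly pleasant: in $\Bbb{Z}_7$, since $6\cdot 6=36\equiv 1\pmod 7$, we have $-\frac16\equiv -6\equiv 1\pmod 7$, and
$$-\frac16 = 1 + 7\left(-\frac16\right),$$
so every step produces the digit $1$ and hands back $-\frac16$ again. Hence $-\frac16 = 1+7+7^2+7^3+\cdots=\ldots1111_7$, consistent with the geometric series $\sum_{n\ge0}7^n=\frac{1}{1-7}=-\frac16$.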
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1186967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 0
}
|
Boolean Algebra: Simplifying $\;xyz + x'y + xyz'$ Given the following expression: $xyz + x'y + xyz'\,$ where ($'$) means complement, I tried to simplify it by first factoring out a y so I would get $\;y(xz + x' + xz').\,$
At this point, it appears I have several options:
A) Use two successive rounds of distributive property:
$\begin{align} y( (x + x')(z + x') + xz') )
&= y ( z + x' + xz')\\ & = y ( z + (x' + x)(x' + z') )\\ &= y ( z + x' + z') \\ &= y ( x') \\ &= yx'\end{align}$
B) Or I could use absorption,
$\begin{align}y ( xz + xz' + x' )
&= y ( x (z+z') + x') \\
& = y ( x + x' )\\
&= y ( 1) \\
&= y\end{align}$
I believe the second answer is correct. What am I doing wrong with option A ?
|
Using the distributive property (first method), we get:
$$\begin{align} xyz + x'y + xyz' & = xy(z + z') + x'y \\ &= xy + x'y \\&= (x + x')y \\&= y\end{align}$$
You erred when you went from $ y ( z + x' + z') $ to $yx'$. You should have $$y((z+z')+x')= y(1+x') = y\cdot 1 = y$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Laplace Transform for a difficult function The Laplace Transform I'm having trouble with is:
$$f(t) = 6te^{-9t}\sin(6t)$$
I'm not sure what the protocol is for multiplying t into it.
The Laplace Transform for $f(t) = 6e^{-9t}\sin(6t)$ is $\dfrac 6{(s+9)^2 - 36}$.
Can't figure out how to add in the $t$.
Thanks in advance for your help.
|
Here is a partial solution:
\begin{align}
\mathcal{L}\{f(t)\}&=\mathcal{L}\{e^{-9t}[6t\sin(6t)]\} \\
&=\mathcal{L}\{6t \sin(6t)\} \vert_{s \to s+9} & \text{First Translation Theorem}\\
&= 6 \mathcal{L}\{t\sin(6t)\} \vert_{s \to s+9} & \text{linearity} \\
&=-6\frac d{ds} \mathcal{L}\{\sin(6t)\} \vert_{s \to s+9} & \text{transform derivative principle} \\
&= -6 \left. \frac d{ds} \frac {6}{s^2+36} \right\vert_{s \to s+9}
\end{align}
Now evaluate $\frac d{ds} \frac {6}{s^2+36}$, then replace all the $s$ in your derivative with $s+9$.
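Carrying out that last step, as a check of the final value:
$$\frac{d}{ds}\,\frac{6}{s^2+36}=\frac{-12s}{(s^2+36)^2},\qquad\text{so}\qquad \mathcal L\{6te^{-9t}\sin(6t)\}=-6\cdot\frac{-12(s+9)}{\bigl((s+9)^2+36\bigr)^2}=\frac{72\,(s+9)}{\bigl((s+9)^2+36\bigr)^2}.$$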
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How many ways are there to divide elements into equal unlabelled sets? How many ways are there to divide N elements into K sets where each set has exactly N/K elements.
For the example of 6 elements into 3 sets each with 2 elements, I started by selecting the elements that would go in the first set ($\binom{6}{2}$ ways), then those that would go into the second ($\binom{4}{2}$ ways), and then the 2 remaining elements into the third set. This gives $\binom{6}{2}\binom{4}{2}$. In general
$$\binom{N}{N/K}\binom{N-N/K}{N/K}\binom{N-2N/K}{N/K}\cdots 1$$
|
Hint: You are close. As the sets are unlabeled, choosing $\{a,b\},\{c,d\},\{e,f\}$ is the same as choosing $\{e,f\},\{c,d\},\{a,b\}$, but you have counted them both.
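In other words, a sketch of where this leads: each unordered partition into $K$ equal blocks is counted $K!$ times by the product above, so the count is
$$\frac{1}{K!}\binom{N}{N/K}\binom{N-N/K}{N/K}\cdots\binom{N/K}{N/K}=\frac{N!}{\bigl((N/K)!\bigr)^{K}\,K!},$$
e.g. $\frac{1}{3!}\binom{6}{2}\binom{4}{2}\binom{2}{2}=\frac{15\cdot 6\cdot 1}{6}=15$ for the $6$-element example.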
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Gromov's boundary at infinity, drop the hypothesis on hyperbolicity It's an easy result that if we have two quasi isometric hyperbolic spaces, then their Gromov boundaries at infinity are homeomorphic.
I found online these notes where at page 8, prop 2.20, they seem to drop the hypothesis on hyperbolicity. They give two references (French articles) for the proof but I didn't find anything.
Hence, can someone give me a reference/proof/counterexample to the following statement?
Let X,Y proper geodesic spaces, and let $f\colon X \to Y$ be a quasi isometry between them, then $\partial_pX \cong \partial_{f(p)}Y$
|
Counter-examples are due to Buyalo in his paper "Geodesics in Hadamard spaces" and to Croke and Kleiner in an unpublished preprint. See also the later paper of Croke and Kleiner entitled "Spaces with nonpositive curvature and their ideal boundaries" which cites their preprint and gives further information.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Computing Residue for a General, Multiple-Poled function? I'm trying to compute the residue of the following function at $a$. I'm having a little trouble seeing which poles are relevant:
Compute $\,Res_f(a)$ for the following function:
$$f(z) = \frac{1}{(z - a)^3}\ \tanh z $$
What I'm confused about is the ambiguity surrounding $a$. Indeed, there is a pole of order 3 at $a$, but since $a$ isn't clearly defined, couldn't $a$ assume a value whereby there's a pole in $\tanh z = \frac{\sinh z}{\cosh z}$, in addition to the pole already caused by the $\frac{1}{(z-a)^3}$ term? Initially, I just assumed I could use the residue derivative formula, but even then, things get a bit hairy with the second derivative of $\tanh z$.
Should I just assume that $a$ never achieves a value that would put a pole in the denominator of $\tanh z$ and use the derivative formula to compute the residue of a third order pole at $a$?
|
You need to separate between a few cases:
$a=i(\frac{\pi}{2}+\pi k)$
In this case we have $\lim_{z \to a}(z-a)^4f(z)$ is finite and different from $0$ so $a$ is a pole of order $4$
$a=\pi ki$
In this case $\lim_{z \to a}(z-a)^2f(z)$ is finite and different from $0$ so $a$ is a pole of order $2$
$a \ne i(\frac{\pi}{2}+\pi k), \space \pi ki$
In this case $\lim_{z \to a}(z-a)^3f(z)$ is finite and different from $0$ so $a$ is a pole of order $3$
You can use the formula in Panja's answer to calculate the residue at $a$ in each case.
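For instance, in the generic third case (pole of order $3$) that formula gives
$$\operatorname{Res}_{z=a}\frac{\tanh z}{(z-a)^3}=\frac{1}{2!}\,\frac{d^2}{dz^2}\tanh z\Big|_{z=a}=\frac12\bigl(-2\tanh a\,\operatorname{sech}^2 a\bigr)=-\frac{\tanh a}{\cosh^2 a}.$$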
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Question about one-to-one correspondence between left coset and image of Homomorphism Let $G$ be a finite group and $K$ be a homomorphism such that $K : G\rightarrow G^*$ and $H=\ker K$.
Then there is a one-to-one correspondence between the number of left cosets of $\ker K$ and the number of elements in $K[G]$.
But, it seems that if there are $a,b\in G$, then there are two distinct left cosets, namely $aH$ and $bH$.
How can we prove that $aH$ is not equal to $bH$?
Since $H=\ker K$, then $a\in aH$. Isn't it possible that there may be some $h\in H$ such that $bh=a$?
|
The Emperor of Ice Creams' answer is letter-perfect from a mathematical point of view. Since my comment is rather long, I'm writing it as an answer (but it's only an "illustration").
Suppose $G = D_4 = \langle r,s: r^4 = s^2 = e, sr = r^3s\rangle$ (this is the dihedral group of order 8), and that $G' = \langle a,b: a^2 = b^2 = e, ab = ba\rangle$. This is a non-cyclic group of order $4$.
So explicitly $G = \{e,r,r^2,r^3,s,rs,r^2s,r^3s\}$, and $G' = \{e',a,b,ab\}$.
Define: $\phi: G \to G'$ by: $\phi(r^ks^m) = a^kb^m$. Explicitly:
$\phi(e) = e'$
$\phi(r) = a$
$\phi(r^2) = e'$
$\phi(r^3) = a$ (since in $G',\ a^3 = a^2a = e'a = a$)
$\phi(s) = b$
$\phi(rs) = ab$
$\phi(r^2s) = b$
$\phi(r^3s) = ab$.
To see that $\phi$ is a homomorphism, it suffices (why?) to show that:
$\phi(r)^4 = \phi(s)^2 = \phi(e) = e'$ (which is clear), and that:
$\phi(s)\phi(r) = \phi(r)^3\phi(s)$, that is:
$ba = a^3b$ (again this should be clear because $a^3 = a$, and $G'$ is abelian).
Now the kernel of $\phi$ is $N = \{e,r^2\}$. We compute the cosets:
$N = r^2N = \{e,r^2\}$
$rN = r^3N = \{r,r^3\}$ (Note that $r(r^3)^{-1} = r^2$ is indeed in $N$)
$sN = r^2sN = \{s,r^2s\}$ ($sr^2 = (sr)r = (r^3s)r = r^3(sr) = r^3(r^3s) = r^6s = r^2s$).
$rsN = r^3sN = \{rs,r^3s\}$ ($rsr^2 = r(sr^2) = r(r^2s) = r^3s$).
Note how beautifully this all works out, $\phi$ is constant on every coset:
$\phi(N) = \{e\}$
$\phi(rN) = \{a\}$
$\phi(sN) = \{b\}$
$\phi(rsN) = \{ab\}$
This means we have a bijection (let's call it $[\phi]: G/N \to G'$). I leave it to you, to PROVE $[\phi]$ is a homomorphism (and thus an isomorphism), the crucial step is noting that:
$[\phi](rsN) = ab = [\phi](rN)[\phi](sN)$.
Note well that even though $s \neq r^2s \in G$, we still have $sN = r^2sN$. This is because cosets "clump" elements of $G$ together. When we say $sN$, the COSET of $N$ we mean is clear, but although $s$ "identifies" it, $r^2s$ would do just as well. And as far as $[\phi]$ is concerned, it cannot tell them apart (And why is this? Because $\phi$ sends both to the same target).
What is going on, here, geometrically? Algebraically, we can see we've "collapsed" the subgroup $N$ to a single element (our new identity). Since $N$ has two elements, this halved the size of our group. Now $D_4$ is the symmetry group of the square, and the geometric consequence of this, is that we have "identified" two opposite ends of a diagonal, collapsing our square down to a line segment (sometimes called a "di-gon").
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do I define a block of a $(0,1)$-matrix as one that has no proper sub-blocks? I'm struggling to come up with a definition of a "block" in a $(0,1)$-matrix $M$ such that we can decompose $M$ into blocks, but the blocks themselves don't further decompose. This is what I've got so far:
Given any $r \times s$ $(0,1)$-matrix $M$, we define a block of $M$ to be a submatrix $H$ in which: (a) every row and every column of $H$ contains a $1$, (b) in $M$, there are no $1$'s in the rows of $H$ outside of $H$, (c) in $M$, there are no $1$'s in the columns of $H$ outside of $H$, and (d) no proper submatrix of $H$ satisfies (a)-(c).
I want to define blocks in such a way that there are no proper sub-blocks in blocks. But I feel item (d) is difficult to parse. I can't just say "there's no proper sub-blocks" because this is a circular definition.
Q: Could the community suggest a better way of phrasing this definition?
My conundrum reminds me of this comic:
|
I would suggest breaking this up into two definitions... maybe call one of them a "block", and another a "simple block". That is:
Given any $r \times s$ $(0,1)$-matrix $M$, we define a block of $M$ to be a submatrix $H$ in which: (a) every row and every column of $H$ contains a $1$, (b) in $M$, there are no $1$'s in the rows of $H$ outside of $H$, (c) in $M$, there are no $1$'s in the columns of $H$ outside of $H$.
A sub-block of a block $H$ is a block $J$ of the submatrix $H$. If $J \neq H$, then $J$ is a proper sub-block.
We say that $H$ is a simple block if it has no proper sub-block.
How's that? Alternatively, call the first one a "pre-block", and the second a "block" (cf. pre-sheaf vs. sheaf).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Need clarification on a Taylor polynomial question $$f(x) = 5 \ln(x)-x$$
second Taylor polynomial centered around $b=1$ is $-1 + 4(x-1) - (5/2)(x-1)^2$
let $a$ be a real number : $0 < a < 1$
let $J$ be closed interval $[1-a, 1+a]$
find upper bound for the error $|f(x)-T_2(x)|$ on interval $J$
answer in terms of a
So I got to the point where I have $|f(x)-T_2(x)| \le (10/6)(x-1)^3$; however, the answer is
$(5/3)(a/(1-a))^3$
How did the $a/(1-a)$ get there? If I sub in the max $x$, which would be $1 + a$, then I
would get $(5/3)(a)^3$. Why is it divided by $(1-a)$?
|
We are given:
$$f(x) = 5 \ln(x) - x$$
The second Taylor polynomial centered around $b=1$ is given by:
$$T_2(x) = -1+4(x-1)-\frac{5}{2}(x-1)^2$$
We are told to let $a$ be a real number such that $0 \lt a \lt 1$ and let $J$ be the closed interval $[1 −a, 1 +a]$.
We are then asked to use the Quadratic Approximation Error Bound to find an upper bound for the error $|f(x) − T_2(x)|$ on the interval $J$.
The error term is given by:
$$R_{n+1} = \dfrac{f^{(n+1)}(c)}{(n+1)!}(x-b)^{n+1} \le \dfrac{M}{(n+1)!}(x-b)^{n+1}$$
We have $b=1, n= 2$, and have to find the max error for two items, thus:
$$\dfrac{d^3}{dx^3} (5 \ln(x)-x) = \dfrac{10}{x^3}$$
So the maximum of $f^{(3)}(x)$ is given by:
$$\displaystyle \max_{J} \left|f'''(x)\right| = \max_{ 1-a \le x \le 1+a} \left|\dfrac{10}{x^3}\right|$$
The max occurs at the left endpoint because $0 \lt a \lt 1$, so the maximum is given by:
$$\dfrac{10}{(1-a)^3}$$
Next, we have to repeat this and find the maximum of $(x-1)^3$. In this case, the maximum occurs at the rightmost endpoint, hence the maximum is:
$$((1+a) - 1)^3 = a^3$$
Putting this together yields:
$$\left|f(x)-T_2(x)\right| = R_{3} = \dfrac{f^{(3)}(c)}{3!}(x-1)^{3} \le \dfrac{M}{3!}(x-1)^{3} = \dfrac{10}{3!(1-a)^3}a^3 = \dfrac{5}{3} \left(\dfrac {a}{1-a}\right)^3$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Derivative tests question Show that $k(x) = \sin^{-1}(x)$ has $0$ inflections, $2$ critical points, and $0$ max/min.
I find that the first derivative is $$\frac{1}{\sqrt{1-x^2}}$$
Second derivative is $$\frac{x}{(1-x^2)\sqrt{1-x^2}}$$
I don't know why it has $2$ critical points; I can find only one from the second derivative, which is $x = 0$. I don't know the other one.
There is no min/max because I can't find $x$ values from the first derivative.
|
In mathematics, a critical point or stationary point of a differentiable function of a real or complex variable is any value in its domain where its derivative is 0 or undefined
As you can easily see, $f'(x)$ is undefined on $x=1$ and $x=-1$.
I think you might have misunderstood the concept of a critical point for that of an inflection point.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1187991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Equation of plane parallel to a vector and containing two given points
I'm not sure how to solve this.
I started by finding the equation of the line AB.
|
Hints:
Let the wanted plane be $\;m: ax+by+cz+d=0\;$ , then it must fulfill:
$$\begin{align}&0=(a,b,c)\cdot(1,-5,-3)\implies&a-5b-3c=0\\{}\\
&A\in m\implies&3a-2b+4c+d=0\\{}\\
&B\in m\implies&2a-b+7c+d=0\end{align}$$
Well, now solve the above linear system. Pay attention to the fact that you shall get a parametrized answer and you can choose values so as to get nice values for $\;a,b,c,d\;$ (i.e., you do not need a fourth independent equation).
The line through $\;A,\,B\;$ is $\;\ell:\;A+t\vec{BA}\;,\;\;t\in\Bbb R$, so find the value of $\;t\;$ for which the corresponding point of this line minimizes the distance to $\;C\;$ (further hint: it must be the point $\;H\in\ell\;$ for which $\;\vec{HC}\;$ is perpendicular to $\;\vec{BA}$), or use one of the formulas to evaluate the distance from a point to a line in space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Series solution to first order differential equation I need to find a series solution to the following simple differential equation
$$x^2y'=y$$
Assuming the solution to be of the form $y=\sum a_nx^n$ and equating the coefficients on both the sides, all the coefficients turn out to be zero which is definitely wrong.
Please guide me to the correct solution.
Any help is appreciated. Thanks!
|
Well,
\begin{eqnarray}
\int \frac{dy}{y} &=& \int \frac{dx}{x^{2}}\\
\implies \ln y &=& -\frac{1}{x}+c
\end{eqnarray}
Hence,
\begin{equation}
y=Ae^{-\frac{1}{x}}
\end{equation}
Is a series solution necessary?
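This also explains the asker's computation, for what it's worth: the nonzero solutions $Ae^{-1/x}$ are not analytic at $x=0$ (they have no power series expansion there), so the only solution of the form $\sum a_nx^n$ is $y\equiv 0$, and finding all coefficients equal to zero is the correct outcome of the series ansatz rather than a mistake.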
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Connection between algebraic geometry and complex analysis? I've studied some complex analysis and basics in algebraic geometry (let's say over $\mathbb C $). GAGA was mentioned to us, but nothing in detail. Anyway, from my beginner's point of view, I already see many parallels between both areas of study, like
*
*compact riemann surfaces vs. projective curves
*regular functions being holomorphic
*ramified coverings
*globally holomorphic/regular maps on projective varieties/compact riemann surfaces being constant
Also, the field of elliptic functions on a torus is finitely generated, reminiscent of function fields of varieties (in fact the torus algebro-geometrically is an elliptic curve).
However, I don't see the precise correspondence. After all, the notion of morphism should be weaker than that of a holomorphic map. Or can one get a 1:1 correspondence? How would algebraic geometry help in studying all holomorphic functions and not just those that happen to be regular? Maybe you could give me some intuition on the topic or GAGA in basic terms. Thank you
|
May I recommend Mme. Raynaud: Géométrie Algébrique et Géometrie Analytique, Exposé XII. The GAGA-relation is made precise by a functor
$an: \underline{Al}_\mathbb{C} \xrightarrow{} \underline{An},\ X \mapsto X^{an},$
from the category of schemes over $\mathbb{C}$ which are locally of finite type and the category of complex spaces. $X^{an}$ can be considered as $X(\mathbb{C})$, the set of complex points of $X$ provided with a canonical complex structure: Raynaud shows that $X^{an}$ solves a universal problem, i.e. represents a certain functor. Hence $X^{an}$ is uniquely determined which facilitates many GAGA-constructions.
GAGA prompts the question which complex spaces $Y$ have the form $Y=X^{an}$, i.e. can be investigated by methods from algebraic geometry. Here Kodaira's embedding theorem for compact manifolds $Y$ with a positive line bundle was the first result, a far-reaching generalization of the embedding theorem for compact Riemann surfaces. Kodaira's theorem has been generalized by Grauert to compact complex spaces.
Raynaud's paper provides several lists of properties of schemes and morphisms which carry over by the functor $an$. The paper is well written.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
random variables independence I need to check if Z and W are dependent or not.
X,Y ~ Exp(2)
Then I define: Z=X-Y , W=X+Y.
Now how can I check whether Z and W are dependent or not?
I know from the theory that I should show that f(z,w) = f(z)f(w), but I don't see how to do it here.
Thanks for help.
|
If $Z$ and $W$ were indeed independent then e.g. $P(Z>1,W<1)=P(Z>1)P(W<1)>0$.
This contradicts the fact that $P(Z<W)=1$ (indeed $W-Z=2Y>0$ almost surely), as mentioned in the comment of @Did.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
suppose $|a|<1$, show that $\frac{z-a}{1-\overline{a}z}$ is a Möbius transformation that sends $B(0,1)$ to itself. Suppose $|a|<1$, show that $f(z) = \frac{z-a}{1-\overline{a}z}$ is a Möbius transformation that sends $B(0,1)$ to itself.
To make such a Möbius transformation I tried to send 3 points on the boundary to 3 points of the boundary. So, filling $i,1,-1$ into $f(z)$, we should land on the boundary of the unit ball. But I don't seem to know how to calculate these exactly:
$$f(1) = \frac{1-a}{1-\overline{a}}$$
$$f(-1) = \frac{-1-a}{1+\overline{a}}$$
$$f(i) = \frac{i-a}{1-\overline{a}i}$$
I don't seem to see how I could rewrite these formulas in such a way that I can tell they lie on the boundary of the circle.
Can anyone help me?
Kees
|
If $|z|=1$, then $\overline z=1/z$. So,
$$\left|{z-a\over 1-\overline az}\right|=\left|{z-a\over1-\overline{a/z}}\right|=\left|{\overline z(z-a)\over\overline z-\overline a}\right|=|\overline z|\left|{z-a\over\overline{z-a}}\right|=|\overline z|=1.$$
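To finish the argument, a sketch: when $a\neq 0$ the only pole of $f$ is at $z=1/\overline a$, which lies outside the closed unit disc since $|1/\overline a|=1/|a|>1$ (for $a=0$ the map is the identity). So $f$ is holomorphic on $\overline{B(0,1)}$, and the maximum modulus principle together with $|f|=1$ on the boundary gives $|f(z)|<1$ for $|z|<1$ (strictly, since $f$ is non-constant). Applying the same reasoning to the inverse map $w\mapsto\frac{w+a}{1+\overline a w}$, which has the same form with $a$ replaced by $-a$, shows that $f$ maps $B(0,1)$ onto itself.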
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Finding integer part of a polynomial root If a polynomial $f(x)=x^3+2x^2+x+5$ has only one real root $\omega$ then integer part of $\omega = -3$.
It appeared in my school exam and I couldn't solve it completely but somehow tried to find the range in which the root lies. But it was tough to show that $[\omega] = -3$.
|
$f'(x)=3x^2+4x+1$
The derivative vanishes for $x=-1,-\frac{1}{3}$. It is positive for $x\in(-\infty,-1)$.
$f(-3)=-7$
$f(-2)=3$
Thus, by the intermediate value theorem there must be a root $\omega\in(-3,-2)$ with $f(\omega)=0$; since this is the only real root, its integer part is $-3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Solve $y = x^N + z^N$ for N? Is there a way to solve equations of the form
$y = x^N+z^N$ for N?
I've been looking and I don't think there is a way, but I'm not sure if there's some obscure algebra rule that will help me here.
|
As already said, there is no closed form for the solution (except for very specific cases) and only numerical methods would solve the problem.
Let us admit that $x,y,z$ are positive numbers and $N$ be a real number and consider Newton method which, starting from a reasonable guess $N_0$, will update it according to $$N_{k+1}=N_k-\frac{f(N_k)}{f'(N_k)}$$ So, if $$f(N)=x^N+z^N-y$$ $$f'(N)=x^N \log (x)+z^N \log (z)$$ Concerning $N_0$, you could look at the graph to get a rough estimate or, even simpler, start in the middle of the interval suggested by marty cohen.
For illustration purposes, let us use $x=2,z=5,y=100$. Using the bounds suggested by marty cohen, we find $$\frac{\log (50)}{\log (5)}<N<\frac{\log (50)}{\log (2)}$$ So, let us start at the middle point, that is to say $N_0=4$ and start iterating. The following iterates will then be generated : $3.46804$, $3.06212$, $2.85830$, $2.81737$, $2.81598$ which is the solution for six significant figures.
We can do better (obtaining a much more linear function) if instead we search for the root of $$g(N)=\log(x^N+z^N)-\log(y)$$ $$g'(N)=\frac{x^N \log (x)+z^N \log (z)}{x^N+z^N}$$ Starting again with $N_0=4$, the iterates are : $2.82901$, $2.81598$.
Function $g(N)$ is so close to linearity that computing its value at the bounds and assuming that a straight line joins the points gives a very good estimate of the solution. In the case used above for illustration purposes, this would give $N\approx 2.80472$ which is indeed very close to the solution. So, if $N$ is supposed to be an integer, this could be a very fast method.
Doing the same using now $y=\frac{1}{100}$, the iterations would start at $N_0=-5.5$ and the iterates are $-6.64344$, $-6.64712$ which is the solution for six significant figures. Note that the estimate given by the above described secant would provide an estimate equal to $-6.66046$.
Edit
By the way, you could apply the same method for solving $$\sum_{i=1}^m a_i^N=y$$ for any number of terms. Admitting $a_1<a_2<\cdots<a_m$, the solution will be such that $$\frac{\log (y/m)}{\log (a_m)}<N<\frac{\log (y/m)}{\log (a_1)}$$ Let us try using $m=4,a_1=2,a_2=3,a_3=5,a_4=7,y=6000$. The secant gives an estimate which is $\approx 4.66956$ from where we can start Newton iterations for the function $$g(N)=\log\Big(\sum_{i=1}^m a_i^N\Big)-\log(y)$$ and obtain as iterates : $4.35247$, $4.35166$ which is the solution for six significant figures.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Related rates problem. Determine the speed of the plane given the following information.
I started by noting that I am looking for $\displaystyle\frac{dx}{dt}$ and I am given $\displaystyle\frac{d\theta}{dt}=1.5^{\circ}/\mbox{s}$.
I then related $\theta$ to $x$ by $\displaystyle\cot(\theta)=\frac{x}{6000\mbox{m}}$, then taking their derivatives with respect to $t$, I get $\displaystyle\frac{dx}{dt}=-6000\mbox{m}\csc^2(\theta)\frac{d\theta}{dt}$, but I'm not sure if this is correct; this number seems too big, and it is also negative. Thoughts?
|
You need to use radians, so you need to convert your $\frac{d\theta}{dt} = 1.5^\circ/\mathrm{s} = \frac{1.5 \pi}{180}\ \mathrm{rad/s}$. Putting that in, $$\frac{dx}{dt} = -6000 \csc^2(\pi/3)\, \frac{1.5 \pi}{180} = -6000 \times \frac{4}{3} \times \frac{1.5 \pi}{180} \approx -209.44\ \mathrm{m/s}. $$ The negative sign only says that $x$ is decreasing as $\theta$ increases (recall $\cot\theta = x/6000$); the speed of the plane is the magnitude, about $209.44\ \mathrm{m/s}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Does the square or the circle have the greater perimeter? A surprisingly hard problem for high schoolers An exam for high school students had the following problem:
Let the point $E$ be the midpoint of the line segment $AD$ on the square $ABCD$. Then let a circle be determined by the points $E$, $B$ and $C$ as shown on the diagram. Which of the geometric figures has the greater perimeter, the square or the circle?
Of course, there are some ways to solve this problem. One method is as follows: assume the side lengths of the square is $1$, put everything somewhere on a Cartesian coordinate system, find the midpoint of the circle using the coordinates of $E$, $B$ and $C$, then find the radius of the circle, and finally use the radius to calculate the circle's circumference and compare it to the perimeter of the square.
The problem with that method is that ostensibly this problem is supposed to be very simple; it shouldn't require the student to know the formula for the midpoint of a circle given three coordinates. Therefore the question here is: does there exist a simple way to solve the problem without knowing any complicated geometric formulas?
|
Let the circle be the unit circle. Consider the square of equal perimeter: it has side length $\pi/2$. Align this square so that the midpoint of its left side touches the leftmost point of the unit circle. Where is the upper right corner of the square? It has location $(-1+\pi/2,\pi/4)$, which is at distance $\sqrt{1-\pi+5\pi^2/16}<1$ (since $\sqrt{1-\pi+5\pi^2/16}=\sqrt{1-\pi(1-5\pi/16)}$, and $5\pi/16<5*3.2/16=1 \implies 1-5\pi/16>0 \implies 1-\pi(1-5\pi/16)<1$) from the center of the circle. Hence you must make this square bigger to get it to touch the circle. Thus the square of the original problem is bigger, and so it has bigger perimeter than the circle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "251",
"answer_count": 26,
"answer_id": 8
}
|
Prove 3 is prime in the ring $\Bbb{Z[i]}$. I am not sure what is the right phrase for primes in some ring. The definition I was gave is that in a ring $A$, $p\in A$ is prime if $\forall x,y \in A$ such that $p\mid xy$, $p\mid x$ or $p\mid y$.
I don't know how to start.(I arrived at odd expressions but too long ones) and it is complex for me... I would really appreciate your help.
|
Suppose that $3=rs$. Then $9=N3=N(rs)=NrNs$. It follows then that say $Nr$ (up to sign) must be $1,3$ or $9$. I claim it cannot be $3$. Note that in such case, we can write $3=a^2+b^2$ for some integers $a,b$. But the squares modulo $4$ are $0,1$; and they add up to $0,1,2$, and $3$ is equivalent to none of $0,1,2$ modulo $4$. More generally, if $p$ is a prime with $p\equiv 3\mod 4$, then $p$ is prime in $\Bbb Z[i]$. Conversely, if $p$ is a prime with $p\equiv 1 \mod 4$, then $p$ is not prime in $\Bbb Z[i]$ (for example, $5=(2+i)(2-i)$ by the square decomposition $5=2^2+1^2$.)
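To connect this with the divisibility definition in the question, a sketch of the remaining step: if $Nr=1$ then $r$ is a unit, and if $Nr=9$ then $Ns=1$ and $s$ is a unit, so ruling out $Nr=3$ shows that $3$ is irreducible in $\Bbb Z[i]$. Since $\Bbb Z[i]$ is a Euclidean domain with respect to the norm $N(a+bi)=a^2+b^2$, hence a unique factorization domain, every irreducible element is prime; that is, $3\mid xy$ forces $3\mid x$ or $3\mid y$.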
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1188892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Why is the notation $f=f(x)$ mathematically correct? Looking at the equation, which is written all over my textbooks, $f=f(x)$ or $\mathbf r = \mathbf r(x,y,z)$ or what-have-you, I can't help but think that that is just wrong. On the LHS is a function. On the RHS is the value of that function evaluated at some arbitrary point in the domain. I understand that what it's supposed to mean is something like, "$f$ is a function of one argument which we'll be calling $x$", however, it looks to me like it is mathematically unsound because there are different types of objects on the LHS and RHS.
Is my interpretation correct? And if so, why do we use this notation?
|
This notation tells you that the function will be called by two different names in the sequel. When the argument is important, it will be called $f(x)$. However when the argument is not particularly important, it will also be called $f$.
In this expression, the symbol $=$ is not used mathematically, but meta-mathematically, as shorthand for "which we will also call".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Proof or counterexample : Supremum and infimum If $(A_n)_{n \in \mathbb N}$ are sets such that each $A_n$ has a supremum and $\bigcap_{n \in \mathbb N} A_n \neq \emptyset$, then $\bigcap_{n \in \mathbb N} A_n$ has a supremum.
How to prove this?
|
The statement is false.
I bring you a counter example.
consider $\mathbb{Q}$ as the universal set with the following set definitions:
$$A_1=\{x\in \mathbb{Q}|x^2<2\}\cup{\{5\}}$$
$$A_2=\{x\in \mathbb{Q}|x^2<2\}\cup{\{6\}}$$
$$A_n=\{x\in \mathbb{Q}|x^2<2\}\cup{\{4+n\}}$$
In this example, the supremum of $A_1$ is 5 and for $A_2$ it is 6.
However, $A_1 \cap A_2$ has no supremum:
$$A_1 \cap A_2=\{x\in \mathbb{Q}|x^2<2\}$$
And so on:
$$A_1 \cap A_2 \cap ... =\{x\in \mathbb{Q}|x^2<2\}$$
However, in $\mathbb{R}$ as a universal set, any bounded non-empty set has a supremum.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Hausdorff measure of $f(A)$ where $f$ is a Holder continuous function. Let $f\colon \mathbb R^d\to \mathbb R^k$ be a $\beta-$ Holder continuous function ($\beta \in (0,1)$) and $A\subset \mathbb R^d$. As for a Lipschitz function $g$ it holds that $H^s(g(A))\leq Lip(g)^s H^s(A)$, also for $f$ should hold a similar inequality. I want to know if what I've done is correct.
So, let $f$ be such that $|f(x)-f(y)|\leq L|x-y|^\beta$ for each $x,y$.
For each $\epsilon > 0$ there exists a covering $\{E_j\}_j$ of $A$ with $diam(E_j)\leq \delta$, such that $$\sum_j \alpha(\beta s)\Big(\frac{diam (E_j)}{2}\Big)^{\beta s}\leq H^{\beta s}_\delta(A)+\epsilon \quad (*). $$
Now $\{f(E_j)\}$ is a covering of $f(A)$ and $diam f(E_j)\leq L (diam(E_j))^\beta:=\delta'$, so $$H^s_{\delta'}(f(A))\leq \sum_j \alpha(s)\Big(\frac{diam f(E_j)}{2}\Big)^{s}\leq \sum_j \alpha(s) L^s \frac{(diam (E_j))^{\beta s}}{2^s}.$$
But $$\sum_j \alpha(s) L^s \frac{(diam (E_j))^{\beta s}}{2^s}=\frac{\alpha(s) L^s}{\alpha(\beta s) 2^{s-\beta s}}\sum_j \alpha(\beta s) \Big(\frac{(diam (E_j))}{2}\Big)^{\beta s}$$.
Now from $(*)$, we get that $$H^s_{\delta'}(f(A))\leq \frac{\alpha(s) L^s}{\alpha(\beta s) 2^{s-\beta s}} (H^{\beta s}_\delta(A)+\epsilon),$$ so $$H^s(f(A))\leq \frac{\alpha(s) L^s}{\alpha(\beta s) 2^{s-\beta s}} H^{\beta s}(A).$$
Is it correct? Can I get a better estimate?
|
Your argument is correct, and there is no way to improve the estimate. A simple example can be given in the context of abstract metric spaces: let $X $ be the metric space $ (A,d_\beta)$ where $d_\beta(x,y)=|x-y|^\beta$ is a metric. Let $f$ be the identity map from $A$ to $X$; it has the property $|f(x)-f(y)| = |x-y|^\beta$ for all $x,y$.
Unwinding the definition of $H^s$, one obtains
$$H^s(X) = \frac{\alpha(s) }{\alpha(\beta s) 2^{s-\beta s}} H^{\beta s}(A)$$
which shows that equality holds in your estimate.
The abstract metric space $ (A,d_\beta)$ can be realized as a subset of a sufficiently high-dimensional Euclidean space (i.e., it's isometric to such a subset); this is a consequence of a theorem of Schoenberg on isometric embeddings into Euclidean spaces.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Is there a Turing Machine that can distinguish the Halting problem among others? Can there be a Turing machine, that given two oracles, if one of them is the Halting problem, then this machine can output the Halting problem itself?
Clearly, if the first oracle is always the Halting problem, then such machine exists, just copy the first oracle. But if the Halting problem can either be the first or the second oracle, can a Turing machine distinguish which one is the Halting problem (for all such pairs of oracles)?
|
No. Using programs where both oracles give the same output, you obviously cannot distinguish them. Therefore the only way to distinguish the oracles is to analyse programs on which both oracles give different results.
However, if the two oracles give different results, the only way to decide which oracle is the one answering the halting problem is to determine whether that algorithm halts.
Since your Turing machine is intended to answer the question for any pair of oracles, it has to be able to answer that question for any arbitrary algorithm, in order to distinguish the oracle that gives a different answer at only that one algorithm from the halting oracle.
But that means, quite literally, that the Turing machine must be able to solve the halting problem, which we know is impossible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Find the derivative using the chain rule and the quotient rule
$$f(x) = \left(\frac{x}{x+1}\right)^4$$ Find $f'(x)$.
Here is my work:
$$f'(x) = \frac{4x^3\left(x+1\right)^4-4\left(x+1\right)^3x^4}{\left(x+1\right)^8}$$
$$f'(x) = \frac{4x^3\left(x+1\right)^4-4x^4\left(x+1\right)^3}{\left(x+1\right)^8}$$
I know the final simplified answer to be:
$${4x^3\over (x+1)^5}$$
How do I get to the final answer from my last step? Or have I done something wrong in my own work?
|
Not wrong, only written in a different form: factor $4x^3(x+1)^3$ out of the numerator to get $4x^3(x+1)^3\left[(x+1)-x\right]=4x^3(x+1)^3$, then cancel against the $(x+1)^8$ in the denominator to obtain $\dfrac{4x^3}{(x+1)^5}$.
It's much better to exploit the chain rule: if you call
$$
g(x)=\frac{x}{x+1},
$$
then $f(x)=(g(x))^4$ and so
$$
f'(x)=4(g(x))^3g'(x)=
4\left(\frac{x}{x+1}\right)^{\!3}\frac{1(x+1)-x\cdot 1}{(x+1)^2}=
\frac{4x^3}{(x+1)^5}
$$
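As a quick sanity check (a SymPy illustration only, not part of the argument), one can verify symbolically that the quotient-rule form, the chain-rule form, and the target expression all agree:
```
import sympy as sp

x = sp.symbols('x')
f = (x / (x + 1))**4
quotient_form = (4*x**3*(x + 1)**4 - 4*x**4*(x + 1)**3) / (x + 1)**8
chain_form = 4*(x / (x + 1))**3 * ((x + 1) - x) / (x + 1)**2
target = 4*x**3 / (x + 1)**5

assert sp.simplify(sp.diff(f, x) - target) == 0
assert sp.simplify(quotient_form - target) == 0
assert sp.simplify(chain_form - target) == 0
print("all three forms agree")
```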
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
}
|
Compact subset of locally compact $\sigma$-compact Let $X$ be a non-compact, locally compact space. Also suppose that there is a sequence of compact non-empty sets $\{K_n\}_{n\in N}$ such that $$X=\bigcup_{n\in N} K_n,\quad K_n\subset K_{n+1}.$$
Now could we say that for any compact subset $K\subset X$ there is a number $n\in N$ such that $K\subset K_n$? If not, which conditions can help? Or is it possible for a suitably chosen sequence of compact sets? How about topological groups?
|
No, not as stated.
Let $X = [0,1)$ be the half-open unit interval, which is locally compact but not compact. Let $K_n = \{0\} \cup [\frac{1}{n}, 1-\frac{1}{n}]$ which is compact. Then we have $K_n \subset K_{n+1}$ and $X = \bigcup_n K_n$. But the compact set $K = [0, \frac{1}{2}]$ is not contained in any of the $K_n$.
However, under these assumptions here is something we can prove.
There exists a sequence of compact sets $K_n'$ such that $\bigcup_n K_n' = X$ and $K_{n-1}' \subset (K_{n}')^\circ$ for each $n$. In particular, for any compact set $K$, there is some $n$ with $K \subset K_n'$.
Proof. By local compactness, for each $x \in X$ there is an open set $U_x$ such that $x \in U_x$ and $\overline{U_x}$ is compact. We construct $K_n'$ recursively. To get started, let $K_0'=\emptyset$. Now to construct $K_n'$, suppose $K_{n-1}'$ is already constructed. Since $K_n \cup K_{n-1}'$ is compact, there exist $x_1, \dots, x_r$ such that $K_n \cup K_{n-1}' \subset U_{x_1} \cup \dots \cup U_{x_r}$. Set $K_n' = \overline{U_{x_1}} \cup \dots \cup \overline{U_{x_r}}$ which is compact. Then $K_{n-1}' \cup K_n \subset (K_n')^\circ$.
In particular, $\bigcup_n K_n' \supset \bigcup (K_n')^\circ \supset \bigcup_n K_n = X$. So the $K_n'$ cover $X$.
Moreover, the sets $(K_n')^\circ$ are an increasing open covering of $X$. So if $K$ is any compact set, we must have $K \subset (K_n')^\circ \subset K_n'$ for some $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
}
|
For $x, y \in \Bbb R$ such that $(x+y+1)^2+5x+7y+10+y^2=0$. Show that $-5 \le x+y \le -2.$ I have a problem:
For $x, y \in \Bbb R$ such that $(x+y+1)^2+5x+7y+10+y^2=0$. Show that
$$-5 \le x+y \le -2.$$
I have tried:
I write $(x+y+1)^2+5x+7y+10+y^2=(x+y)^2+7(x+y)+(y+1)^2+10=0.$
Now I'm stuck :(
Any help will be appreciated! Thanks!
|
Since $(y+1)^2\ge 0$ we must have $(x+y)^2+7(x+y)+10=(x+y+5)(x+y+2)\leq 0$ then, solving this inequality for $x+y$, we get $$-5\le x+y \le -2$$
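A small SymPy check (illustration only; the symbol names are my own) confirms the rewriting used in the question, the factorization above, and the resulting range for $x+y$:
```
import sympy as sp

x, y, s = sp.symbols('x y s', real=True)
original = (x + y + 1)**2 + 5*x + 7*y + 10 + y**2
rewritten = (x + y)**2 + 7*(x + y) + (y + 1)**2 + 10

assert sp.expand(original - rewritten) == 0                     # the rewriting is an identity
assert sp.expand((s + 5)*(s + 2) - (s**2 + 7*s + 10)) == 0      # the factorization used above
print(sp.solveset((s + 5)*(s + 2) <= 0, s, domain=sp.S.Reals))  # Interval(-5, -2)
```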
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to do multiplication in $GF(2^8)$? I am taking an Internet Security Class and we received some practice problems and answers, but I do not know how to do these problems , an explanation would be greatly appreciated
Try to compute the following value (the numbers are in hexadecimal and each represents a polynomial in GF($2^8$)):
1) {02} * {87}
answer:
{02} = {0000 0010} = x
{87} = {1000 0111} = x^7 + x^2 + x + 1
-> {02} * {87} = (0000 1110) XOR (0001 1011) = 0001 0101 = {15}
|
To carry out the operation, we need to know the irreducible polynomial that is being used in this representation. By reverse-engineering the answer, I can see that the irreducible polynomial must be $x^8+x^4+x^3+x+1$ (Rijndael's finite field). To carry out a product of any two polynomials then, what you want to do is multiply them and then use the relation $x^8+x^4+x^3+x+1\equiv 0$, or in other words $x^8\equiv x^4+x^3+x+1$, to eliminate any terms $x^k$ where $k\geq 8$, reducing modulo 2 as you go along.
The binary {0000 0010} corresponds to the polynomial $x$ (i.e., $0x^7+0x^6+0x^5+0x^4+0x^3+0x^2+1x^1+0x^0$), while the binary {1000 0111} corresponds to $x^7+x^2+x+1$ (i.e., $1x^7+0x^6+0x^5+0x^4+0x^3+1x^2+1x^1+1x^0$). So to do the multiplication, we calculate
\begin{align*}
x*(x^7+x^2+x+1) &= x^8 +x^3+x^2+x \\
&\equiv (x^4+x^3+x+1) + x^3+x^2+x \\
&= x^4+2x^3+x^2+2x+1 \\
&\equiv x^4+x^2+1
\end{align*}
which is represented in binary as {0001 0101}.
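For readers who want to experiment, here is a small Python sketch of the same multiplication. The helper gf256_mul below is my own illustration (it assumes the Rijndael reduction polynomial $x^8+x^4+x^3+x+1$, i.e. 0x11B, identified above), not code from the course:
```
def gf256_mul(a, b, poly=0x11B):
    """Multiply two bytes as elements of GF(2^8) with the given reduction polynomial."""
    result = 0
    while b:
        if b & 1:          # add (XOR) a for each set bit of b
            result ^= a
        a <<= 1            # multiply a by x
        if a & 0x100:      # reduce modulo x^8 + x^4 + x^3 + x + 1
            a ^= poly
        b >>= 1
    return result

print(hex(gf256_mul(0x02, 0x87)))  # 0x15, matching the worked example
```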
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Understanding controllability indices I'm teaching myself linear control systems through various online materials and the book Linear Systems Theory and Design by Chen. I'm trying to understand controllability indices. Chen says that looking at the controllability matrix $C$ as follows
$$ C = \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix} = \begin{bmatrix} b_1 & \cdots & b_p & | & A b_1 & \cdots & A b_p & | & \cdots & | & A^{n-1} b_1 & \cdots & A^{n-1} b_p \end{bmatrix}$$
that reading left to right, the columns are linearly independent until some column $A^ib_m$, which is dependent on the columns to the left - this makes sense - but that subsequent columns associated with $b_m$ (e.g. $A^{i+k}b_m$) are also linearly dependent on their preceding columns, or that "once a column associated with $b_m$ becomes linearly dependent, then all columns associated with $b_m$ thereafter are linearly dependent". Unfortunately this last point isn't intuitive to me at all - if anyone can shed some light on this or help me see it I'd be very grateful!
Thanks in advance for any help.
Luke
|
I am studying the same subject now, from the same book and had exactly the same question you had.
The answer by @Pait doesn't show that the resulting set $\{\lambda_i A b_i\}$ is a subset of the LHS vectors relative to $A^2b_1$ in the controllability matrix. However, I worked it out myself, and I will post it here for reference.
Say $\pmb{A^ib_m}$ is the first vector associated with $\pmb{b_m}$ that is dependent on some of the vectors $\pmb{v_k}$ at its left hand side (LHS).
That is: $$\pmb{A^ib_m}= \sum_{k=1}^{i*m-1}\alpha_k\pmb{v_k} \ \ \ (1)$$
Now, for the next vector associated with $\pmb{b_m}$, i.e. $\pmb{A^{i+1}b_m}$, we have: $$\pmb{A^{i+1}b_m}= \sum_{k=1}^{i*m-1}\alpha_k\pmb{Av_k} \ \ \ (2)$$
Now we show (or clarify) that the set $\{\pmb{Av_k}\ |\ k\in\Bbb{Z},\ 1\le k \le (i*m-1)\}$ is a subset of the vectors at LHS of $\pmb{A^{i+1}b_m}$ in the controllability matrix.
The sum in (1) can be split into:
$$\pmb{A^ib_m} = \underbrace{\sum_{k=0}^{i-1}\alpha_{mk}^{'}\pmb{A^kb_m}}_{Vectors\ which\ are\\ associated\ with\ \pmb{b_m}\\ but\ are\ on\ LHS} + \underbrace{(\sum_{k=0}^i\alpha_{1k}^{'}\pmb{A^kb_1} +\sum_{k=0}^i\alpha_{2k}^{'}\pmb{A^kb_2}+\cdots+\sum_{k=0}^i\alpha_{(m-1)k}^{'}\pmb{A^kb_{m-1}})}_{Vectors\ associated\ with\ \pmb{b_j}\ columns\\ which\ are\ before\ \pmb{b_m}\ in\ the\ \pmb{B}\ matrix}$$
Now by multiplying by $\pmb{A}$ on both sides:
$$\pmb{A^{i+1}b_m} = \sum_{k=1}^{i}\alpha_{mk}^{'}\pmb{A^kb_m} + (\sum_{k=1}^{i+1}\alpha_{1k}^{'}\pmb{A^kb_1} +\sum_{k=1}^{i+1}\alpha_{2k}^{'}\pmb{A^kb_2}+\cdots+\sum_{k=1}^{i+1}\alpha_{(m-1)k}^{'}\pmb{A^kb_{m-1}})$$
Clearly, the vectors in the summations are a subset of the vectors that are at the LHS of $\pmb{A^{i+1}b_m}$ in the controllability matrix.
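A small numerical illustration may help (the matrices $A$, $B$ below are my own example, chosen so that the controllable subspace is a proper subspace): scanning the columns of the controllability matrix left to right, once a column associated with a given $b_m$ fails to increase the rank, every later column associated with that $b_m$ also fails to increase it.
```
import numpy as np

# Example system: A shifts e1 -> e2 -> e3 -> 0 and kills e4; b1 = e1, b2 = e1 + e2.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[1, 1],
              [0, 1],
              [0, 0],
              [0, 0]], dtype=float)
n, p = A.shape[0], B.shape[1]

cols, labels = [], []
for i in range(n):
    AiB = np.linalg.matrix_power(A, i) @ B
    for m in range(p):
        cols.append(AiB[:, m])
        labels.append(f"A^{i} b_{m + 1}")

stacked, rank_left = [], 0
for j, c in enumerate(cols):
    stacked.append(c)
    rank_now = np.linalg.matrix_rank(np.column_stack(stacked))
    # no rank increase means the column depends on the columns to its left
    print(labels[j], "dependent" if rank_now == rank_left else "independent")
    rank_left = rank_now
# Output pattern: b1's columns go independent, then dependent forever (from A^1 b_1 on),
# and b2's columns go independent, independent, then dependent forever (from A^2 b_2 on).
```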
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1189957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Given a drawing of a parabola is there any geometric construction one can make to find its focus? This question was inspired by another one I asked myself these days: Given a drawing of an ellipse is there any geometric construction we can do to find its foci?
I think this is harder; I can't even find its axis or vertex.
|
As @AchilleHui mentions, the midpoints of two parallel chords lead to the point of tangency ($T$) with a third parallel line. Note that the line of midpoints is parallel to the axis of the parabola. By the reflection property of conics, the line of midpoints and the line $\overleftrightarrow{TF}$ make congruent angles with the tangent line.
So, two sets of parallel chords determine two points of tangency and two lines-of-midpoints, which determine two lines that meet at focus $F$. $\square$
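Here is a numerical illustration of the construction (the parabola $y=x^2$, the chord slopes, and the helper name are my own choices; the focus of $y=x^2$ is $(0,\tfrac14)$): midpoints of parallel chords share the same $x$-coordinate, so the line of midpoints is parallel to the axis, and intersecting the two reflected lines recovers the focus.
```
import numpy as np

def tangency_and_reflected_direction(m):
    """For chords of slope m on y = x^2: tangency point T of the parallel tangent,
    and the reflection of the axis direction (0, 1) in the tangent line at T."""
    T = np.array([m / 2.0, m * m / 4.0])
    u = np.array([1.0, m]) / np.hypot(1.0, m)   # unit tangent direction at T
    v = np.array([0.0, 1.0])                    # direction of the parabola's axis
    r = 2 * np.dot(v, u) * u - v                # v reflected in the tangent line
    return T, r

# Midpoints of parallel chords (slope m) all have x-coordinate m/2.
m = 1.7
for a in (-2.0, -0.5, 1.0):
    b = m - a                                   # endpoints a, b give chord slope a + b = m
    print("midpoint x =", (a + b) / 2.0)        # always 0.85 = m/2

# Intersect the reflected lines coming from two different chord directions.
T1, r1 = tangency_and_reflected_direction(1.7)
T2, r2 = tangency_and_reflected_direction(-0.8)
t = np.linalg.solve(np.column_stack([r1, -r2]), T2 - T1)
print("recovered focus:", T1 + t[0] * r1)       # approximately [0.   0.25]
```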
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Does $\{(1, -1),(2, 1)\}$ span $\mathbb{R}^2$? Please correct me. Can anyone please correct me? My problem is in the proof part below.
Q: Does $\{(1, -1),(2, 1)\}$ spans $\mathbb{R}^2$?
A:
$$c_1(1, -1) + c_2(2, 1) = (x, y)$$
$$c_1 + 2c_2 = x$$
$$-c_1 + c_2 = y$$
$$c_1 = x - 2c_2$$
$$-(x - 2c_2) + c_2 = y$$
$$-x + 2c_2 + c_2 = y$$
$$c_2 = \frac{x + y}{3}$$
$$c_1 + 2\frac{x + y}{3} = x$$
$$c_1 = x - 2\frac{x + y}{3}$$
$$c_1 = x - \frac{2x-2y}{3}$$
Conclusion: it can reach any point $(x, y)$ in $\mathbb{R}^2$
Proof: Get to $(4, 7)$:
so, $c_1 = 6$, $c_2 =\frac{11}{3}$.
$$6(1, -1)+\frac{11}{3}(2, 1) = \left(\frac{40}{3}, -\frac{7}{3}\right)$$
!!! it should've been equal to $(4, 7)$
|
Perhaps easier:
$$\begin{cases}&\;\;\;c_1+2c_2=x\\{}\\&-c_1+c_2\;\;=y\end{cases}\stackrel{\text{sum both eq's}}\implies3c_2=x+y\implies c_2=\frac{x+y}3$$
and substituting back, say in equation two:
$$c_1=\frac{x-2y}3$$
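A quick numerical check of these formulas (illustration only), including the point $(4,7)$ from the question:
```
import numpy as np

v1, v2 = np.array([1.0, -1.0]), np.array([2.0, 1.0])
x, y = 4.0, 7.0
c1, c2 = (x - 2*y) / 3, (x + y) / 3
print(c1, c2)                                # -10/3 and 11/3 (not 6 and 11/3)
print(c1 * v1 + c2 * v2)                     # [4. 7.]

# Same coefficients via a linear solve, with the two vectors as columns.
M = np.column_stack([v1, v2])
print(np.linalg.solve(M, np.array([x, y])))  # [-3.3333...  3.6666...]
```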
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Solve the initial value problem $u_{xx}+2u_{xy}-3u_{yy}=0,\ u(x,0)=\sin{x},\ u_{y}(x,0)=x$
Solve the partial differential equation $$u_{xx}+2u_{xy}-3u_{yy}=0$$ subjet to the initial conditions $u(x,0)=\sin{x}$, $u_{y}(x,0)=x$.
What I have done
$$
3\left(\frac{dx}{dy}\right)^2+2\frac{dx}{dy}-1=0
$$
implies
$$\frac{dx}{dy}=-1,\frac{dx}{dy}=\frac{1}{3}
$$
and so
$$
x+y=c_{1},\ 3x-y=c_{2}.
$$
Let $ \xi=x+y$, $\eta=3x-y$. Then
\begin{align}
u_{xx}&=u_{\xi\xi}+6u_{\xi\eta}+9u_{\eta\eta} \\ u_{yy}&=u_{\xi\xi}-2u_{\xi\eta}+u_{\eta\eta} \\
u_{xy}&=u_{\xi\xi}+2u_{\xi\eta}-3u_{\eta\eta}.
\end{align}
Applying substitutions,
$$
u_{\xi\eta}=0.
$$
Thus,
\begin{align}
u(\xi,\eta)&=\varphi(\xi)+\psi(\eta) \\
u(x,y)&=\varphi(x+y)+\psi(3x-y).
\end{align}
Applying the initial value condition,
\begin{align}
u(x,0)&=\varphi(x)+\psi(3x)=\sin{x} \\
u_{y}(x,0)&=\varphi'(x)-\psi'(3x)=x
\end{align}
Therefore,
\begin{align}
\varphi(x)&= \frac{1}{2} \left(\sin{x}+\int_{x_{0}}^{x} \tau \, d\tau \right)+\frac k2 \\
\psi(3x)&=\frac{1}{2} \left(\sin{x}-\int_{x_{0}}^{x}\tau \, d\tau \right)-\frac k2.
\end{align}
I have no idea how to get $ψ(x)$. Does anyone could help me to continue doing this question? Thanks very much!
|
We use the Laplace transform method and the free CAS Maxima
http://maxima.sourceforge.net/
Answer:
$$u=\frac{\sin(x+y)}{4}+\frac{y^2}{3}+xy+\frac34\sin\left(x-\frac{y}{3}\right)$$
Second method
*
*$D_x^2+2D_xD_y-3D_x^2=(D_x-D_y)(D_x+3D_y)$
*General solution of $u_x-u_y=0$ is $u_1=f(x+y)$
*General solution of $u_x+3u_y=0$ is $u_2=g\left(x-\frac{y}{3}\right)$
*$u=u_1+u_2=f(x+y)+g\left(x-\frac{y}{3}\right)$
*From initial conditions we get
$$f(x)+g(x)=\sin(x),\\f'(x)-\frac13g'(x)=x$$
*After the integration of the second equation we get
$$f(x)+g(x)=\sin(x),\\f(x)-\frac13g(x)=\frac{x^2}{2}+c$$
*Then
$$f(x)=\frac{\sin{(x)}}{4}+\frac{3 {{x}^{2}}}{8}+\frac{3 c}{4},\\
g(x)=\frac{3 \sin{(x)}}{4}-\frac{3 {{x}^{2}}}{8}-\frac{3 c}{4}
$$
*$$u=f(x+y)+g\left(x-\frac{y}{3}\right)\\=\frac{\sin{\left( x+y\right) }}{4}+\frac{3 {{\left( x+y\right) }^{2}}}{8}-\frac{3 {{\left( x-\frac{y}{3}\right) }^{2}}}{8}-\frac{3 \sin{\left( \frac{y}{3}-x\right) }}{4}\\=
\frac{\sin(x+y)}{4}+\frac{y^2}{3}+xy+\frac34\sin\left(x-\frac{y}{3}\right)
$$
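A SymPy verification (illustration only) that this closed form satisfies the PDE and both initial conditions:
```
import sympy as sp

x, y = sp.symbols('x y')
u = sp.sin(x + y)/4 + y**2/3 + x*y + sp.Rational(3, 4)*sp.sin(x - y/3)

pde = sp.diff(u, x, 2) + 2*sp.diff(u, x, y) - 3*sp.diff(u, y, 2)
assert sp.simplify(pde) == 0                             # u_xx + 2 u_xy - 3 u_yy = 0
assert sp.simplify(u.subs(y, 0) - sp.sin(x)) == 0        # u(x, 0) = sin(x)
assert sp.simplify(sp.diff(u, y).subs(y, 0) - x) == 0    # u_y(x, 0) = x
print("PDE and initial conditions check out")
```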
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
Can someone provide a simple example of the "pre-image theorem" in differential geometry? I only have a background in engineering calculus. A problem I am currently working on relates to something called a "pre-image theorem"
The theorem roughly states:
Let $f: N \to R^{m}$ be a $C^{\infty} $ map on a manifold N of
dimension n. Then a nonempty regular level set $S = f^{-1}(c)$ is a
regular submanifold of dimension n-m of N
I am confounded by the language used in this theorem, but I really wish to understand this. Can someone translate this into a simple case where $f$ is some three dimensional function i.e. $f = x^2 + y^2 + z^2$ can show how this theorem applies?
Thanks!
|
A few examples might make things clear:
(1) Let $n=2$ and $m=1$. Define $f:\mathbb{R}^2 \to \mathbb{R}$ by $f(x,y) = x^2 + y^2$. Then $S = f^{-1}(c) = \{(x,y) \in \mathbb{R}^2 : x^2 + y^2 =c\}$, which is a circle with radius $\sqrt{c}$ (for $c>0$). We have $n-m=1$, and a circle is indeed a one-dimensional manifold.
(2) Let $n=3$ and $m=1$. Define $f:\mathbb{R}^3 \to \mathbb{R}$ by $f(x,y,z) = x + y + 2z$. Then $S = f^{-1}(c) = \{(x,y,z) \in \mathbb{R}^3 : x + y + 2z = c\}$, which is a plane. We have $n-m=2$, and a plane is indeed a two-dimensional manifold.
(3) Let $n=3$ and $m=2$. Define $f:\mathbb{R}^3 \to \mathbb{R}^2$ by $f(x,y,z) = (x^2 + y^2, y^2 + z^2)$. Then
$$
S = f^{-1}(a,b) = \{(x,y,z) \in \mathbb{R}^3 : x^2 + y^2 =a \text{ and } y^2 + z^2 = b\},
$$
which is the curve(s) of intersection of two circular cylinders. We have $n-m=1$, and the set of intersection curves is indeed a one-dimensional manifold.
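If it helps, here is a small SymPy sketch (my own illustration, not part of the theorem statement) of the regularity condition behind example (3): the Jacobian of $f$ has rank $2$ at a typical point of the level set, which is what the theorem needs.
```
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Matrix([x**2 + y**2, y**2 + z**2])
J = f.jacobian([x, y, z])
print(J)                                   # Matrix([[2*x, 2*y, 0], [0, 2*y, 2*z]])
print(J.subs({x: 1, y: 1, z: 1}).rank())   # 2, so (1, 1, 1) is a regular point of f
```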
Since you're not a mathematician, I don't recommend that you spend much time thinking about this theorem. In two or three dimensional space, ignoring all the corner cases and pathologies, all it says is that an equation of the form $f(x,y) = 0$ defines a curve, and an equation of the form $f(x,y,z) = 0$ defines a surface. But those facts are intuitively obvious, anyway, so the theorem doesn't tell us anything new.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Radius of convergence of a power series whose coeffecients are "discontinuous" I have a power series:
$s(x)=\sum_0^\infty a_n x^n$
with
$a_n=
\begin{cases}
1, & \text{if $n$ is a square number} \\
0, & \text{otherwise}
\end{cases}$
What is the radius of convergence of this series?
I tried to use ratio test, but as$\lim\limits_{n \to \infty} \frac{a_n x^n}{a_{n+1} x^{n+1}}$does not exist, I don't know how to apply the ratio test on the problem.
Thanks in advance!
|
Although $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}$ and $\lim_{n\to\infty}\sqrt[n]{a_n}$ don't exist, you can still find $\limsup_{n\to\infty} \sqrt[n]{a_n}$. Since
$$
\sup_{m\ge n} \sqrt[m]{a_m}=1
$$
for all $n\in\mathbb{N}$, $\limsup_{n\to\infty}\sqrt[n]{a_n}=1$ and so the radius of convergence is $1$.
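A numerical illustration (not a proof, and the sample points are my own choice): partial sums of $\sum_n x^{n^2}$ settle down for $|x|<1$ and blow up for $|x|>1$, consistent with radius of convergence $1$.
```
def partial_sum(x, terms):
    # the coefficients a_n are 1 exactly when n is a perfect square
    return sum(x**(n*n) for n in range(terms))

for x in (0.9, 0.99, 1.01):
    print(x, partial_sum(x, 50), partial_sum(x, 150))
```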
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Stirling number of the second kind and combinations The Stirling number of the second kind, denoted by $S(n,r)$, is defined as the number of $r$-partitions on a set of $n$ elements. Let $\binom{n}{r}$ (which is read $n$ choose $r$) denotes the number of $r$-element subsets of an $n$-element set. How to show that
$$
\binom{n}{r}\leq S(n,r)
$$
for $2\leq r\leq n$.
|
We know that,
${n \choose r} = {n \choose {n-r}}$,
So it's sufficient to prove that,$${n \choose {n-r}} \le S(n,r)$$.
proof
For any $(n-r)$-subset of an $n$-element set we can create (at least) one $r$-partition in the following way:
1) Put all the $(n-r)$ elements in a single block.
2) Partition the remaining (if any) $r$ elements into $r-1$ sets, which can be done in $S(r,r-1)$ ways ($r \ge 1$).
Also, there are partitions that do not contain any block of size $n-r$ if $r \notin \{1,n\}$.
By this you can conclude.
Now, check what happens when $r=2$, $n=4$. Are there any more cases like that?
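A brute-force check for small $n$ (illustration only; the helper stirling2 below is mine and uses the standard recurrence $S(n,r)=rS(n-1,r)+S(n-1,r-1)$):
```
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, r):
    """Number of partitions of an n-element set into r nonempty blocks."""
    if n == r:
        return 1
    if r == 0 or r > n:
        return 0
    return r * stirling2(n - 1, r) + stirling2(n - 1, r - 1)

for n in range(2, 9):
    for r in range(2, n + 1):
        assert comb(n, r) <= stirling2(n, r), (n, r)
print("C(n, r) <= S(n, r) verified for 2 <= r <= n <= 8")
```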
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What initial value do I have to take at the beginning? In my lecture notes there is the following example on which we have applied the method of characteristics:
$$u_t+2xu_x=x+u, x \in \mathbb{R}, t>0 \\ u(x,0)=1+x^2, x \in \mathbb{R}$$
$$$$
$$(x(0), t(0))=(x_0,0)$$
We will find a curve $(x(s), t(s)), s \in \mathbb{R}$ such that $\sigma '(s)=\frac{d}{ds}\big(u(x(s), t(s))\big)=u_x(x(s), t(s))x'(s)+u_t(x(s), t(s))t'(s)$
$$x'(s)=2x(s), x(0)=x_0 \\ t'(s)=1, t(0)=0 \\ \sigma '(s)=2xu_x+u_t=x(s)+u(s), \sigma(0)=u(x(0), t(0))=1+x_0^2$$
$$\dots \dots \dots \dots \dots$$
$$t(s)=s \\ x(s)=x_0e^{2s} \\ \sigma(s)=x_0 e^{2s}+(1+x_0^2-x_0)e^s$$
If $\overline{s}$ is the value of $s$ such that $(x(\overline{s}), t(\overline{s}))=(x_1, t_1)$ then we have $$x_0e^{2\overline{s}}=x_1 \ , \ \overline{s}=t_1 \\ \Rightarrow x_0=x_1e^{-2t_1}, \overline{s}=t_1$$
So for $s=\overline{s}$ we have $$\sigma(\overline{s})=u(x_1, t_1)=x_1+e^{t_1}+x_1^2e^{-3t_1}-x_1e^{-t_1}$$
$$$$
I want to apply this method at the following problem:
$$u_x(x, y)+(x+y)u_y(x, y)=0, x+y>1 \\ u(x, 1-x)=f(x), x \in \mathbb{R}$$
What initial value do I have to take at the beginning?
$(x(0), y(0))=(x_0, 1-x_0)$ ?
Or something else?
|
Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dx}{dt}=1$ , letting $x(0)=0$ , we have $x=t$
$\dfrac{dy}{dt}=x+y=t+y$ , we have $y=y_0e^t-t-1=y_0e^x-x-1$
$\dfrac{du}{dt}=0$ , letting $u(0)=F(y_0)$ , we have $u(x,y)=F(y_0)=F((x+y+1)e^{-x})$
$u(x,1-x)=f(x)$ :
$F(2e^{-x})=f(x)$
$F(x)=f(\ln2-\ln x)$
$\therefore u(x,y)=f(\ln2-\ln((x+y+1)e^{-x}))=f(x-\ln(x+y+1)+\ln2)$
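A short SymPy check (illustration only): the argument $w=x-\ln(x+y+1)+\ln 2$ is constant along characteristics, so $u=f(w)$ solves the PDE for any differentiable $f$, and the initial condition on $y=1-x$ is met.
```
import sympy as sp

x, y = sp.symbols('x y', positive=True)   # enough to keep x + y + 1 > 0 here
f = sp.Function('f')
w = x - sp.log(x + y + 1) + sp.log(2)

print(sp.simplify(sp.diff(w, x) + (x + y)*sp.diff(w, y)))  # 0, so u = f(w) solves the PDE
print(f(w).subs(y, 1 - x))                                 # f(x), the initial condition
```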
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Problem in understanding models of hyperbolic geometry I recently started reading The Princeton Companion to Mathematics. I am currently stuck in the introduction to hyperbolic geometry and have some doubts about its models.
Isn't the hyperbolic space produced by rotating a hyperbola? That is, isn't hyperbolic geometry carried out on a hyperboloid?
Also, how is the half plane model generated? Is it generated by taking the crossection of the hyperboloid on the upper half of the complex plane or by taking a projection?
Also, can someone explain to me how "hyperbolic distances become larger and larger, relative to Euclidean ones, the closer you get to the real axis". Is the axis referred to here the real axis of the complex plane or some other axis. And why do distances become larger the closer we get to the real axis? The part of the line close to the real axis simply looks like a part of a circle whose distance does not increase abnormally.
Any visuals will be appreciated.
|
It looks like you are talking about the Poincaré half-plane model of hyperbolic geometry (see https://en.wikipedia.org/wiki/Poincar%C3%A9_half-plane_model )
Be aware this is only a model of hyperbolic geometry.
For doing "real-life" hyperbolic geometry you need a "surface with a constant negative curvature " or pseudo spherical surface see https://en.wikipedia.org/wiki/Pseudosphere
Also the hyperboloid is only a model of hyperbolic geometry (the curvature of an hyperboloid is positive and not constant, so wrong on both counts of the curvature )
Having said all this you seem to investigate the Poincaré half-plane model of hyperbolic geometry and indeed the scale of this model becomes 0 when you get near the x=0 line
also see
https://en.wikipedia.org/wiki/Hyperbolic_geometry#Models_of_the_hyperbolic_plane
for more information
Hope this helps
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Union/intersection over index families of intervals I am currently reading "Introduction to Topology" by Bert Mendelson (third edition).
At Chapter 1, Section 4 - Indexed Families of Sets. For Exercise 5, he asks the following:
Let $I$ be the set of real numbers that are greater than $0$.
For each $x \in I$, let $A_x$ be the open interval $(0,x)$.
a) Prove that $\cap _{x \in I}\ A_x = \emptyset$
b) Prove that $\cup _{x \in I}\ A_x = I$.
For each $x \in I$, let $B_x$ be the closed interval $[0,x]$.
c) Prove that $\cap _{x \in I}\ B_x = \{0\}$
d) Prove that $\cup _{x \in I}\ B_x = I \cup \{0\}$.
Maybe I have misread it, but some of this doesn't make sense to me.
I am fine with Part a as for the smallest $x \in I$, the open interval $(0, x) = \emptyset$ given that $I$ is the set of real numbers greater than $0$.
Part c seems wrong: for each $x \in I$, the set $B_x$ should contain both $0$ and the smallest $x \in I$ - because $B_x$ is the closed interval $[0, x]$, where $0 \in [0, x]$ and $x \in [0, x]$. As $\cap _{x \in I}\ B_x$ will contain all common elements of $B_x$ (for each $x \in I$), it seems to me that it should be: $\cap _{x \in I}\ B_x = \{0, smallest(I)\}$.
As I was writing this question I originally had trouble with b and d too, however I have since changed my mind - as I was forgetting that $I$ is an infinite set. I was thinking up issues when considering the largest $x \in I$.
So my question is: am I mistaken with part c? and how so?
|
There is no smallest member of $I$. Given any $x \in I$, you'll eventually come across $[0, \frac{x}{2}]$ in your intersection. The only common point to all those intervals is $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Let $R=\{(x,y): x=y^2\}$ be a relation defined in $\mathbb{Z}$. Is it reflexive, symmetric, transitive or antisymmetric? Let $R=\{(x,y): x=y^2\}$ be a relation defined in $\mathbb{Z}$. Is it reflexive, symmetric, transitive or antisymmetric?
I'm having most trouble determining if this relation is symmetric, how can I tell?
|
*
*The relation is not reflexive: $(2,2)\notin R$
*The relation is not symmetric: $(4,2)\in R$, but $(2,4)\notin R$
*The relation is not transitive: $(16,4)\in R$, $(4,2)\in R$, but $(16,2)\notin R$
Now, let's see if the relation is antisymmetric. Suppose $(x,y)\in R$ and $(y,x)\in R$. Then $x=y^2$ and $y=x^2$, which implies $x=x^4$. Can you now finish proving that the relation is antisymmetric?
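A brute-force illustration over a finite window of integers (not a proof, since the window is finite, but it matches the conclusions above):
```
# Check the four properties of R = {(x, y) : x = y^2} on integers in [-20, 20].
rng = range(-20, 21)
R = {(x, y) for x in rng for y in rng if x == y * y}

reflexive = all((x, x) in R for x in rng)
symmetric = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
antisymmetric = all(x == y for (x, y) in R if (y, x) in R)

print(reflexive, symmetric, transitive, antisymmetric)  # False False False True
```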
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1190965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How can I show logically equivalence without a truth table Show that $(p \rightarrow q) \wedge (p \rightarrow r)$ and $p \rightarrow (q \wedge r)$ are logically equivalent.
I tried to do this making a truth table but I think my teacher wants me to solve it using the different laws of Logical Equivalences.
Can anyone help me?
|
Here is an approach
$$(p \to q) \wedge (p \to r) \equiv (\neg p \vee q) \wedge (\neg p \vee r) \equiv \neg p \vee (q \wedge r) \equiv p \to (q \wedge r)$$
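The algebraic derivation above is the intended argument; purely as a sanity check, a brute-force pass over all eight truth assignments confirms it:
```
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    lhs = implies(p, q) and implies(p, r)
    rhs = implies(p, q and r)
    assert lhs == rhs
print("logically equivalent on all 8 assignments")
```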
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Irrationality of $\sqrt[n]2$ I know how to prove the result for $n=2$ by contradiction, but does anyone know a proof for general integers $n$ ?
Thank you for your answers.
Marcus
|
Suppose that $\sqrt[n]2$ is rational for some $n\ge 3$ (the case $n=2$ is the classical argument you already know). Then, for some positive integers $p,q$,
$$\sqrt[n]2=\frac{p}{q}\implies 2=\frac{p^n}{q^n}\implies p^n=2q^n=q^n+q^n.$$
This contradicts Fermat's Last Theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Cohomology of Segre varieties Let $\Sigma_{n,m}$ be a Segre variety, i.e. the image of the Segre map $\mathbb{P}^n\times\mathbb{P}^m\to\mathbb{P}^{(n+1)(m+1)-1}$.
Then how can I calculate the first cohomology group of its tangent bundle, i.e. $H^1(\Sigma_{n,m},\mathcal{T})$?
|
Forget about Segre (the Segre map is an isomorphism onto its image, so $\mathcal{T}\cong T_{\mathbb P^n\times \mathbb P^m}$) and use that the tangent bundle of a product splits as $T=T_{\mathbb P^n\times {\mathbb P^m}}=p^*T_{\mathbb P^n}\oplus q^*T_{\mathbb P^m}$, where $p,q$ are the two projections, together with the Künneth formula for each summand, obtaining:$$ H^1(\mathbb{P}^n\times\mathbb{P}^m,T)\\=[ H^1(\mathbb P^n,T_{\mathbb P^n})\otimes H^0(\mathbb{P}^m,\mathcal O)]\oplus [H^0(\mathbb P^n,T_{\mathbb P^n}) \otimes H^1(\mathbb{P}^m,\mathcal O)]\oplus[ H^1(\mathbb P^n,\mathcal O)\otimes H^0(\mathbb{P}^m,T_{\mathbb P^m})]\oplus [H^0(\mathbb P^n,\mathcal O) \otimes H^1(\mathbb{P}^m,T_{\mathbb P^m})]\\=[0\otimes \operatorname {(whatever)}]\oplus[\operatorname {(whatever) }\otimes 0]\oplus[0\otimes \operatorname {(whatever)}]\oplus[\operatorname {(whatever) }\otimes 0]\\=\Large {0} $$ I have used that for any projective space $\mathbb P^N$ we have $H^1(\mathbb P^N, \mathcal O_{\mathbb P^N})=0$ and $H^i(\mathbb P^N, T_{\mathbb P^N})=0 \; \operatorname {for all}\: i\geq1$; the latter follows from the long cohomology exact sequence associated to the Euler exact sequence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How do I change the order of integration of this integral? $$\int\limits_1^e\int\limits_{\frac{\pi}{2}}^{\log \,x} - \sin\,y\, dy\,dx$$
I don't understand how to change the order because of the $\log\,x$ as the upper bound for the inner integral
How do I change it so it looks like $$\int \int f(x,y)\,dx\,dy$$
Thanks
|
Your region is $S=\{(x,y): 1\leq x\leq e, \log x\leq y\leq \pi/2 \}$; note that since the inner limits run from $\pi/2$ down to $\log x$, we have $\int_{\pi/2}^{\log x}-\sin y\,dy=\int_{\log x}^{\pi/2}\sin y\,dy$, so over $S$ the integrand is $\sin y$. You split $S$ in two regions, $S_1=\{(x,y):1\leq x\leq e, 1\leq y\leq \pi/2\}$ (a rectangle) and $S_2=\{(x,y):0\leq y\leq 1, 1\leq x\leq e^y\}$. Then you write your integral as a sum over each of these regions, and here you have the integration in the order you want.
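A SymPy cross-check (illustration only) that the original iterated integral equals the sum of the two reversed-order pieces over $S_1$ and $S_2$:
```
import sympy as sp

x, y = sp.symbols('x y')
original = sp.integrate(sp.integrate(-sp.sin(y), (y, sp.pi/2, sp.log(x))), (x, 1, sp.E))
# Reversed order: the flipped inner limits turn -sin(y) into +sin(y) over S.
piece1 = sp.integrate(sp.integrate(sp.sin(y), (x, 1, sp.E)), (y, 1, sp.pi/2))
piece2 = sp.integrate(sp.integrate(sp.sin(y), (x, 1, sp.exp(y))), (y, 0, 1))
print(sp.simplify(original - (piece1 + piece2)))  # 0
```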
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Maximal Function Estimate Suppose $\psi$ is a rapidly decreasing function; i.e. for all $N>0$ there exists a constant $C_{N}$ such that $\left|\psi(x)\right|\leq C_{N}(1+\left|x\right|)^{-N}$. Define a family of functions $\left\{\psi_{j,k}\right\}_{j,k\in\mathbb{Z}}$ by $\psi_{j,k}(x)=2^{j/2}\psi(2^{j}x-k)$. For each $j,k\in\mathbb{Z}$, let $D(j,k)=\lfloor{2+\left|2^{j}x-k\right|}\rfloor$, where $x\in\mathbb{R}$ is fixed.
In a paper that I'm reading, the author states that the following inequality is a consequence of the rapid decrease of $\psi$ and standard estimates on approximations to the identity:
$$\left|a_{jk}\right|=\left|\int_{\mathbb{R}}f(y)\overline{\psi_{j,k}(y)}dy\right|\leq C2^{-j/2}D(j,k)Mf(x), \ \forall f\in L^{p} (1<p<\infty)$$
where $C$ is some constant which depends only on $\psi$.
I have tried proving this estimate by using the rapid decrease hypothesis to get a continuous, integrable majorant of $\psi_{jk}$ and then using (Lebesgue-Stieltjes) integration by parts, but with no luck. For one thing, I'm not sure how to pick up a factor of $Mf(x)$ instead of $Mf(0)$. Any suggestions on how to proceed would be appreciated.
|
So this is not as nice as I would have liked. I'm hoping somebody will come up with something cleaner.
Set $\delta:=(1+\left|2^{j}x-k\right|)$ for $j,k\in\mathbb{Z}$. Without loss of generality, assume $f\geq 0$. Observe that
$$\left|a_{jk}\right|\leq\int_{\left|2^{j}y-k\right|\leq\delta}f(y)\left|\psi_{jk}(y)\right|dy+\sum_{l=0}^{\infty}\int_{2^{l}\delta\leq\left|2^{j}y-k\right|<2^{l+1}\delta}f(y)\left|\psi_{jk}(y)\right|dy$$
By the triangle inequality, $\left|2^{j}(y-x)\right|\leq 2\delta$, whence the first term is majorized by
$$2^{j/2}\left\|\psi\right\|_{\infty}\int_{\left|y-x\right|\leq 2^{-j+1}\delta}f(y)dy\leq2^{-j/2+2}\delta\left\|\psi\right\|_{\infty}Mf(x)$$
Using the hypothesis that $\left|\psi(y)\right|\leq C_{N}(1+\left|y\right|)^{-N}$, we see that the general term in the series is majorized by
$$C_{N}2^{j/2}(1+2^{l}\delta)^{-N}\int_{\left|y-x\right|\leq 2^{l-j+2}\delta}f(y)dy\leq C_{N}2^{j/2}(1+2^{l}\delta)^{-N}2^{l-j+3}\delta Mf(x)$$
Take $N>1$ above. Since $\delta\geq 1$, we see that the series is majorized by
$$C_{N}2^{-j/2+3}\delta Mf(x)\sum_{l=0}^{\infty}(2^{N-1})^{-l}$$
Clearly, $\delta\leq\lfloor{2+\left|2^{j}x-k\right|}\rfloor$. Taking
$$C=\max\left\{4\left\|\psi\right\|_{\infty}, 8C_{N}\sum_{l=0}^{\infty}(2^{N-1})^{-l}\right\}$$
completes the proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Probability and elementary set theory proofs (i) suppose P(A) = $0 $ or $ 1$, prove that A and any subset B of Ω are independent.
I did: $P(A)+P(B)+P(C)=1 $ since $ P(Ω)=1.$
If $P(A)=0, $ then $P(B)=1-P(C)=P(B|A)$
Similarly, if $P(A)=1, $ then $P(B)=P(C)=P(B|A)$
Therefore, A and any subset B of Ω are independent. Is this correct and/or sufficient?
|
This is a bad assumption:
$$P(A)+P(B)+P(C)=1.$$
In the case $P(A) = 1,$ you cannot use this to prove something about
"any subset $B$ of $\Omega$", because the assumption implies that $P(B) = 0$.
We know there are subsets of $\Omega$ that have non-zero probabilities,
and you have discarded all of them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|