Finding a differentiable function that cannot be bounded by a quadratic function Is there any function from $\mathbb R$ to $\mathbb R$ such that its value at $0$ is $0$, its derivative at $0$ is also $0$, and it can never be bounded by any quadratic function in any neighborhood of the origin?
I have been thinking about this for a long time but got no clue.
|
What about $f(x) = |x|^{3/2}$?
It satisfies your condition, but $\lim_{x\to 0} \dfrac{f(x)}{x^2} = \infty$, so $f(x)$ is larger than $cx^2$ near $x=0$ for every $c>0$.
(You may have another kind of "bound" in mind though.)
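A quick numeric sanity check of the blow-up (a sketch in Python; the function name f is mine, not from the answer):

def f(x):
    # f(x) = |x|^(3/2); the ratio f(x)/x^2 = |x|^(-1/2) grows without bound
    return abs(x) ** 1.5

for x in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(x, f(x) / x**2)   # ratios 10, 100, 1000, 10000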
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1731921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate $\Re[\cos(1+i)]$ Evaluate $\Re[\cos(1+i)]$.
The trigonometric function in the expression is throwing me in a loop and need some guidance on how to evaluate this.
Thanks.
|
$$2\cos(a+ib)=e^{i(a+ib)}+e^{-i(a+ib)}=e^{-b}e^{ia}+e^be^{-ia}$$
Use Euler's identity: $e^{ix}=\cos x+i\sin x$.
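Following this through, the real part is $\Re[\cos(a+ib)]=\cos a\cosh b$, so $\Re[\cos(1+i)]=\cos 1\cosh 1$. A quick check with Python's standard library (a verification sketch, not part of the derivation):

import cmath, math

# Re[cos(1+i)] should equal cos(1)*cosh(1)
print(cmath.cos(1 + 1j).real)      # 0.83373...
print(math.cos(1) * math.cosh(1))  # 0.83373...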
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1732221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Convergence of a Branching process Consider the Branching process: $\{ \xi_i^n , n \ge 1, i \ge 1\}$ are i.i.d. taking values $0, 1, \ldots$ and $Z_0 := 1, \; Z_{n+1} := \sum\limits_{i=1}^{Z_n} \xi_i^{n+1}$. Assume $\mu := \mathbb{E}[\xi_i^n]>1$. Assume $\sigma^2 := \operatorname{Var}(\xi_i^n) < \infty$. Denote $X_n := \frac{Z_n}{\mu^n}$. I need to show that $\lim\limits_{n \longrightarrow \infty} \mathbb{E}[|X_n - X_\infty|^2] = 0$ and that $P(X_\infty > 0) > 0$.
I can show that $X_n$ defined as such, is a martingale and that $X_n \longrightarrow X_\infty$ a.s. for some random variable $X_{\infty}$. Furthermore, I derived $\mathbb{E}[|X_n - X_{n-1}|^2] = \frac{\sigma^2}{\mu^{n+1}}$. Since $X_n$ is a martingale, I can interpret $\mathbb{E}[|X_n - X_{n-1}|^2]$ as the conditional variance $\operatorname{Var}(X_n \mid X_{n-1})$. I am trying to use this latter result to prove the convergence.
I appreciate any insights on this.
|
Suppose the conditional expectation of $X_n$ given $X_{n-1}$ is $X_{n-1}$, as in a martingale. Then the law of total variance tells us that
$$
\operatorname{var}(X_n) = \operatorname{var}(\operatorname{E}(X_n \mid X_{n-1})) + \operatorname{E}(\operatorname{var}(X_n \mid X_{n-1})). \tag 1
$$
In cases where the conditional variance of $X_n$ given $X_{n-1}$ does not depend on $X_{n-1}$, the second term on the right side in $(1)$ is that conditional variance, and typically the first term, $\operatorname{var}(X_{n-1})$, is positive, not $0$. Therefore the expression on the left side is not the conditional variance of $X_n$ given $X_{n-1}$, but a larger quantity. In particular, $\operatorname{E}[|X_n-X_{n-1}|^2] = \operatorname{E}(\operatorname{var}(X_n \mid X_{n-1}))$ is the expected conditional variance, not the conditional variance itself, which is a random variable.
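A small simulation may make the convergence of $X_n$ concrete (a sketch; the Poisson offspring law and all names here are my choice, not from the question — it has mean $\mu>1$ and finite variance, as required):

import numpy as np

rng = np.random.default_rng(0)
mu = 1.5          # mean offspring: supercritical, finite variance

Z = 1
for n in range(1, 31):
    if Z == 0:
        break                                  # extinction: X_infty = 0
    Z = int(rng.poisson(mu, size=Z).sum())     # Z_n = sum of Z_{n-1} offspring counts
    if n % 5 == 0:
        print(n, Z, Z / mu**n)                 # X_n = Z_n/mu^n stabilizes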
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1732389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $M$ is a connected manifold, does $M\setminus\{p\}$ have finitely many components?
Let $M$ be a connected manifold and $p\in M$. Is it true that $M\setminus\{p\}$ has only finitely many connected components?
(We can also suppose $M$ is compact if that helps.)
I think this is true but I can't prove it yet. This is what I thought: $M$ looks the same as some $\mathbb{R}^n$ locally. Let $U\subseteq M,V\subseteq\mathbb{R}^n$ be homeomorphic open sets with $p\in U$ and $V$ some open ball. If $M\setminus\{p\}$ had infinitely many components, would that imply that $V\setminus\{x\}$ ($x$ being the image of $p$) must also have infinitely many components? Since an open ball minus a point has only finitely many components, that would prove that $M\setminus \{p\}$ must have only finitely many components.
What do you think?
Thank you.
|
I think your argument is quite all right. The crucial part of it is proving the implication "$M - \{p\}$ has infinitely many components $\implies$ $V - \{x\}$ has infinitely many components", and I think you should focus on making sure you argue it convincingly.
Note though that if $\dim M \geq 2$, $M - \{p\}$ is connected whenever $M$ is -- connected manifolds are also path connected, and you can make any path omit $p$ by rerouting it around $p$ inside a Euclidean neighbourhood.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1732583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Neumann condition for Poisson equation Solving $ \nabla^2u = 1 $ for spherically symmetric $u$ in the region $r < a$ (with $a > 0$), with the following conditions at $r = a$ (separately):
(a) $u = 0$
(b) $\nabla u \cdot n = 0 $ where $n$ is the outward normal of the region.
So in general $ u(r) = \frac16 r^2 + Ar^{-1} + B $ where $A,B$ are arbitrary. For (a) I set $A = 0$ to remove the singularity at $r=0$ and then applied the condition to get $u(r) = \frac16 (r^2 - a^2) $. However, I'm not sure how to approach (b). I thought that the normal is the unit radial vector, so $\nabla u\cdot n = u'(r)$, but then unless we keep the singularity at $r=0$ we get $\frac13 a = 0 $, which is clearly wrong. Keeping the singularity gives $A = \frac13 a^3 $, which would seem to make sense since then the solution is unique up to a constant as required, but why is it then OK for the singularity to persist?
|
Note that a necessary condition for the existence of a solution to the Neumann problem is that
$$\begin{align}\int_{r\le a} (1)\,dV&=\int_{r\le a}\nabla^2 u(r)\,dV\\\\
&=\oint_{r=a}\nabla u(r)\cdot \hat n\,dS \tag 1
\end{align}$$
The left-hand side of $(1)$ is simply $\frac{4\pi a^3}{3}$ while if $u'(a)=0$, the right-hand side of $(1)$ is zero. Therefore, there exists no solution to the Neumann problem in this case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1732694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Least involved proof for continuous $\Rightarrow$ uniformly continuous on $[a,b]$ I have been looking at this proof in my textbook and seem to always get lost in its logic; it's roughly 3 pages long. The proof is:
If f is continuous on a closed interval [a,b], then f is uniformly continuous on [a,b].
Do any of you know of a nice, clean and quick proof for this? Asking my professor has only resulted in more confusion, and google helps little to none.
I greatly appreciate your help in advance!
|
Suppose for contradiction that $f$ is continuous but not uniformly continuous on $[a,b]$. Then there exists $\varepsilon > 0$ such that for every $n \in \mathbb{N}$ there exist $x_n, y_n \in [a,b]$ with $|x_n - y_n| < \frac{1}{n}$ and $|f(x_n) - f(y_n)| \ge \varepsilon$. By the Bolzano-Weierstrass theorem, the sequence $(x_n)$ has a convergent subsequence $(x_{n_k})$; say $x_{n_k} \to x \in [a,b]$. Now you have
$$
|y_{n_k} - x| \le |y_{n_k} - x_{n_k}| + |x_{n_k} - x| \le \frac{1}{n_k} + |x_{n_k} - x| \to 0
$$
and so $y_{n_k} \to x$ as well.
Since $f$ is continuous at $x$ and since $x_{n_k}, y_{n_k} \to x$, we have $f(x_{n_k}) \to f(x)$ and $f(y_{n_k}) \to f(x)$. In particular, there exists $k$ such that $|f({x_{n_k}}) - f(x)| < \varepsilon/2$ and $|f(y_{n_k}) - f(x)| < \varepsilon/2$. So the triangle inequality forces $|f(x_{n_k}) - f(y_{n_k})| < \varepsilon$, which contradicts the choice of the sequences $x_n$ and $y_n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1732833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that a homomorphism is injective or trivial Let A,B be groups, and assume that |A| = 29. Let φ:A→B be a homomorphism.
a) Prove that either φ is injective or trivial. (φ is trivial if for all a∈A φ(a) = e)
b) If |B|=80, prove that φ is trivial.
Now I know that a homomorphism is injective iff the kernel is trivial. But I can't seem to figure out how to start this question. Should I assume by contradiction that φ is not injective and not trivial and try to arrive at a contradiction? Or should I show that the kernel is trivial so φ has to be injective.
Any hints or suggestions would be really appreciated. Thanks!
|
By the first isomorphism theorem, $A/\ker\varphi \cong \varphi(A)$. In particular: \begin{equation}\frac{29}{|\ker\varphi|}=|\varphi(A)|.\end{equation} Since 29 is a prime number, either $\ker\varphi=\{e\}$ (which implies $\varphi$ is injective) or $\ker\varphi=A$ (i.e. $\varphi$ is trivial).
If $|B|=80$ then since $\varphi(A)$ is a subgroup of $B$, $|\varphi(A)|$ divides $80$ by Lagrange's theorem. But $|\varphi(A)|$ also divides 29, so $|\varphi(A)|=1$ and $|\ker\varphi|=29$, which implies $\varphi$ is trivial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1732931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
The quintic equation: Why is there no closed formula? We know that polynomials up to fourth degree have closed solutions using radicals. And we know that starting from the quintic no polynomial will have a closed solution using radicals.
Question 1: What I want to know is, why does this happen for the fifth order polynomial? What is so special about the number 5? I don't want a proof. I am looking for an intuitive explanation.
Question 2: Is there a closed formula for higher order polynomials using other functions and operations? I heard that Felix Klein did something like that, but I could not find a closed formula for the solution.
|
If by ‘closed formula’, you mean a formula with radicals and ordinary arithmetic operations, the general answer comes from Galois theory:
One considers the Galois group of the equation, i.e. the group of automorphisms of the splitting field of the polynomial. Suppose the polynomial has degree $n$. As an automorphism of this splitting field maps a root of the equation to another root, the Galois group is a subgroup $G$ of the symmetric group $S_n$.
Now the equation is ‘solvable by radicals’ if and only if $G$ is a solvable group.
This means there is a sequence of subgroups:
$$\{e\}=G_0\subset G_1\subset\dots\subset G_r=G$$
such that each $G_i$ is normal in $G_{i+1}$ and $G_{i+1}/G_i$ is abelian.
It happens that for the general polynomial of degree $n$ (i.e. with indeterminate coefficients) the Galois group is $S_n$, and that the alternating group $A_n$ is simple for $n\ge 5$ and, of course, not abelian. Since for $n\ge 5$ the only nontrivial proper normal subgroup of $S_n$ is $A_n$, no such chain with abelian quotients can exist: $S_n$ is not solvable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $f(x)=\frac{1}{1-x-x^2}$, find $\frac{f^{(10)}(0)}{10!}$ My math teacher posed this question to my calculus class. If $f(x)=\frac{1}{1-x-x^2}$, find $\frac{f^{(10)}(0)}{10!}$.
At first, I began by taking the first few derivatives, but it soon got out of hand with the repeated quotient rules. I'm sure that I could continue for $10$ derivatives, but I believe that there must be an easier solution.
|
If you don't know the theory of generating functions, the trick is to get the Taylor expansion with partial fractions. Write
$$
\frac{1}{1-x-x^2}=\frac{a}{1-\alpha x}+\frac{b}{1-\beta x}
$$
that gives the relations
$$
\begin{cases}
a+b=1\\[4px]
a\beta+b\alpha=0\\[4px]
\alpha+\beta=1\\[4px]
\alpha\beta=-1
\end{cases}
$$
Now you can use
$$
\frac{a}{1-\alpha x}=a\sum_{n\ge0}\alpha^nx^n
$$
so
$$
\frac{f^{(10)}(0)}{10!}=a\alpha^{10}+b\beta^{10}
$$
Once you compute $a$, $b$, $\alpha$ and $\beta$, you'll notice that
$$
a\alpha^n+b\beta^n=\frac{\varphi^n-\bar{\varphi}^n}{\sqrt{5}}
$$
where
$$
\varphi=\frac{1+\sqrt{5}}{2},\qquad
\bar{\varphi}=\frac{1-\sqrt{5}}{2}
$$
which is the Binet formula for the Fibonacci numbers; in particular $\frac{f^{(10)}(0)}{10!}=F_{11}=89$.
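As a sanity check (a sketch using sympy; it just confirms that the coefficient is the Fibonacci number $F_{11}=89$):

import sympy as sp

x = sp.symbols('x')
f = 1 / (1 - x - x**2)

# f^(10)(0)/10! is the coefficient of x^10 in the Taylor series
print(sp.series(f, x, 0, 11).removeO().coeff(x, 10))    # 89
print(sp.diff(f, x, 10).subs(x, 0) / sp.factorial(10))  # 89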
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Does a finite sum of distinct prime reciprocals always give an irreducible fraction?
If we add any finite number of any distinct prime reciprocals, will the result always be an irreducible fraction?
If not, is there any bound on the value of a greatest common divisor for the numerator and denominator of such a fraction?
This boils down to the question about the greatest common divisor for:
$$Q=p_1p_2 \cdots p_n~~~~\text{and}~~~~P=\sum_{k=1}^{n} \frac{Q}{p_k}$$
Here $\{p_k\}$ is any finite subset of primes.
If this question is trivial, I apologise in advance. I checked this quickly for small primes and got only irreducible fractions. Elementary number theory is not my strong point.
|
As you've shown, this comes down to the question of whether
$$P=\sum_{k=1}^n p_1p_2\ldots\hat p_k\ldots p_n$$ and $p_1\ldots p_n$ are coprime, where $\hat a$ means that the product excludes $a$.
The only candidates for common prime factors are the $p_i$. But
$$P\equiv p_1p_2\ldots\hat p_i\ldots p_n\pmod {p_i},$$ which is nonzero since the remaining factors are distinct primes, none equal to $p_i$. Hence $p_i\nmid P$ for each $i$, and the fraction is irreducible.
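A quick experimental confirmation (a sketch in Python):

import math
from itertools import combinations

primes = [2, 3, 5, 7, 11, 13]

for subset in combinations(primes, 3):
    Q = math.prod(subset)             # Q = p_1 p_2 ... p_n
    P = sum(Q // p for p in subset)   # P = sum_k Q/p_k
    print(subset, math.gcd(P, Q))     # gcd(P, Q) is always 1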
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
Newton's method for square roots 'jumps' through the continued fraction convergents I know that Newton's method approximately doubles the number of the correct digits on each step, but I noticed that it also doubles the number of terms in the continued fraction, at least for square roots.
Explanation. If we start Newton's iterations with some partial convergent of the simple continued fraction for the square root, we get another convergent of the same continued fraction on each step, with double the number of CF terms.
Example. Finding the square root of $2$.
1) Start with $\frac{3}{2}$ which is also the first continued fraction convergent. Listing the values on each step and the number of terms in the continued fraction, we get:
$$ \begin{array}{ccc} \frac{3}{2} & 1 & \\ \frac{17}{12} & 3 & +2 \\ \frac{577}{408} & 7 & +4 \\ \frac{665857}{470832} & 15 & +8 \\ \cdots & 31 & +16 \end{array} $$
As you can see, the number of CF terms (the position of this fraction in the list of all partial convergents) doubles on each step.
2) Start with $\frac{7}{5}$ which is the second continued fraction convergent.
$$ \begin{array}{ccc} \frac{7}{5} & 2 & \\ \frac{99}{70} & 5 & +3 \\ \frac{19601}{13860} & 11 & +6 \\ \cdots & 23 & +12 \end{array} $$
The same happens for other square roots I checked.
How does Newton's method 'jump' through the continued fraction this way, exactly doubling the number of CF terms at each step?
Can we prove this observation using the recurrence relations for the continued fraction?
Just in case, Newton's method:
$$\frac{p_{n+1}}{q_{n+1}}=\frac{p_n^2+bq_n^2}{2p_nq_n}$$
$$\lim_{n \to \infty} \frac{p_{n}}{q_{n}}=\sqrt{b}$$
And the continued fraction recurrence:
$$\frac{P_{n+1}}{Q_{n+1}}=\frac{a_n P_n+P_{n-1}}{a_n Q_n+Q_{n-1}}$$
$$P_1=a_0,~~~Q_1=1,~~~P_0=1,~~~Q_0=0,~~~P_{-1}=0,~~~Q_{-1}=1$$
$$\lim_{n \to \infty} \frac{P_{n}}{Q_{n}}=a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cdots}}$$
How can we even relate these two?
A related question here. But I'm not asking about the digits, I'm asking about continued fraction terms.
|
First some experimentation might be in order. I wrote up a program that could check for chains like this:
program nr
implicit none
integer, parameter :: ik16 = selected_int_kind(38)
integer, parameter :: N = 200
integer(ik16) p(N), q(N)
integer D
integer sqD
integer r, s, a
integer i, j, k
integer(ik16) e, b, c
write(*,'(a)',advance='no') 'Enter the value of D:> '
read(*,*) D
sqD = sqrt(D+0.5d0)
r = 0
s = 1
a = (sqD+r)/s
p(1) = a
q(1) = 1
r = a*s-r
s = (D-r**2)/s
a = (sqD+r)/s
p(2) = a*p(1)+1
q(2) = a
do i = 3, N
r = a*s-r
s = (D-r**2)/s
a = (sqD+r)/s
p(i) = a*p(i-1)+p(i-2)
q(i) = a*q(i-1)+q(i-2)
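! stop once the 128-bit integers overflow (values stop increasing)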
if(p(i) <= p(i-1) .OR. q(i) <= q(i-1)) exit
end do
write(*,'(*(g0))') 'There are ',i-1,' convergents computed'
outer: do j = 1, 5!(i-1)/2
write(*,'(*(g0))') 'Starting convergent: ', j
e = p(j)
b = q(j)
k = j
do
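! one Newton step for sqrt(D): e/b -> (e**2 + D*b**2)/(2*e*b),
! then scan the remaining convergents for a match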
c = e**2+D*b**2
if(c <= e) exit
b = 2*e*b
e = c
do k = k+1, i-1
if(b == q(k)) then
if(e == p(k)) then
write(*,'(*(g0))') 'Matching convergent: ', k
exit
else
write(*,'(a)') 'Matched denominator but not numerator'
stop
end if
else if(b < q(k)) then
write(*,'(*(g0))') 'Missed convergent: ', k-1
cycle outer
end if
end do
end do
end do outer
end program nr
So it always seemed to hit chains for $D=2$, $D=5$, $D=10$, $D=26$, $D=37$, and $D=50$ but for other values of $D$ sometimes it hit and sometimes it missed. My program has indices $1$ bigger than in the original question, so I'm seeing $n\rightarrow2n$ rather than $n\rightarrow2n+1$. Even the infamous $D=61$, which has a pretty large nontrivial solution to Pell's equation, has a chain. When another convergent is hit, it always seems to be exactly $2n$, never $2n-2$ or $2n+2$. It can't be $2n-1$ or $2n+1$ because Newton-Raphson approaches roots from above.
To prove a chain once found, it's probably useful to compare to solutions of Pell's equation. Probably writing out the solutions in exponential form would lead to a proof for a given chain.
EDIT: Just after posting it occurred to me that
$$\begin{align}(p^2+Dq^2)^2-D(2pq)^2 & =p^4+2Dp^2q^2+D^2q^4-4Dp^2q^2\\
& =p^4-2Dp^2q^2+D^2q^4\\
& =(p^2-Dq^2)^2\end{align}$$
So every solution to Pell's equation $p^2-Dq^2=1$ leads to an infinite chain of solutions via Newton-Raphson and if $|p^2-Dq^2|\ne1$, then this metric grows exponentially so that the roots computed by Newton-Raphson will not be convergents.
EDIT: Relevant link to Newton-Raphson and Pell's equation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
}
|
The closure of a subset in a finer topology is always a subset of the closure of that subset in the coarser one. In Appendix A of Naber's book $\textit{The Geometry of Minkowski Spacetime}$ there is a claim in Lemma A.3.3. It says that if we have a set $M$ endowed with two different topologies, say $(M,O_A)$ and $(M,O_B)$ where $O_A$ is finer than $O_B$, then for any subset $U$ of $M$ the closure of $U$ in $O_A$ is always a subset of the closure of $U$ in $O_B$ ($Cl_A(U)$ $\subseteq$ $Cl_B(U)$).
I can show that this is true, but I'm not sure my proof is valid.
Here's my proof; if you find it invalid or unsatisfying, please let me know.
The claim is that $Cl_A(U)$ $\subseteq$ $Cl_B(U)$. So we must show that if $ x \in Cl_A(U)$, then $x \in Cl_B(U)$; equivalently, if $x \notin Cl_B(U)$ then $x \notin Cl_A(U)$.
$x \notin Cl_B(U) \iff x \in M-Cl_B(U)$. Because the closure of any subset is closed, its complement must be open. So $M-Cl_B(U)$ is open in $O_B$. But $O_A$ is finer than $O_B$, so $M-Cl_B(U)$ must be open in $O_A$ too.
Now $M-Cl_B(U)=Ext_B(U)$, the exterior of $U$ in $O_B$, i.e. the largest $O_B$-open set disjoint from $U$. Since $Ext_B(U)$ is open in $O_A$ and disjoint from $U$, it is contained in the largest $O_A$-open set disjoint from $U$, which is $Ext_A(U)$. So we have shown that
$$x \notin Cl_B(U) \implies x \in Ext_A(U) = M-Cl_A(U) \iff x \notin Cl_A(U).$$
|
Easier proof: $Cl_B(U)$ is $B$-closed, hence also $A$-closed (its complement is $O_B$-open, hence $O_A$-open), and it contains $U$. Therefore $Cl_B(U)$ must also contain $Cl_A(U)$, since $Cl_A(U)$ is by definition the smallest $A$-closed set containing $U$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Supremum/Maximum and Infimum/minimum of a given set Determine $\sup E$, $\inf E$, and (where possible) $\max E$, $\min E$ for the set $E = \{ \sqrt[n]{n}: n \in \mathbb{N}\}$.
Attempt: I've written that $\inf E = 1 = \min E$.
When it comes to finding $\sup E$, I've noticed punching in increasing values of n on my calculator, the elements of $E$ seem to never go past about $1.4\ldots$, but I still don't know what $\sup E$ is. How do I figure this out?
|
The function $f(x)=x^{\frac{1}{x}}$ has derivative
$$ f^{\prime}(x)=x^{\frac{1}{x}}\frac{1-\log x}{x^2}$$
Therefore $f$ has its global maximum on $[1,\infty)$ at $x=e$, and is increasing on $[1,e)$ and decreasing on $(e,\infty)$. Therefore the only values of $n$ you need to check are $n=2$ and $n=3$. Since $3^{\frac{1}{3}}>\sqrt{2}$, it follows that $\sup E=\max E=\sqrt[3]{3}$.
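For a quick numeric confirmation (a sketch):

# n**(1/n) peaks at n = 3 among the positive integers
for n in range(1, 8):
    print(n, n ** (1 / n))   # 1.0, 1.4142..., 1.4422..., 1.4142..., ...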
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Is there an elegant way to solve $\int \frac{(\sin^2(x)\cdot \cos(x))}{\sin(x)+\cos(x)}dx$? The integral is:
$$\int \frac{(\sin^2(x)\cdot \cos(x))}{\sin(x)+\cos(x)}dx$$
I used the Weierstraß substitution
$$t:=\tan(\frac{x}{2})$$
$$\sin(x)=\frac{2t}{1+t^2}$$
$$\cos(x)=\frac{1-t^2}{1+t^2}$$
$$dx=\frac{2}{1+t^2}dt$$
Got this:
$$\int \frac{8t^4-8t^2}{t^8−2t^7+2t^6−6t^5−6t^3−2t^2−2t−1}dt$$
and with partial fraction expansion the final answer is:
$$\frac{1}{4}\left[\ln(\sin(x)+\cos(x))-\cos(x)\cdot(\sin(x)+\cos(x))\right]+C$$
It is a long way, and I am fairly convinced there is a shorter one; maybe you know of it? Thanks
|
If you multiply numerator and denominator by $\cos x-\sin x$, the numerator can be rewritten as
$$
\sin x\cos x(\sin x\cos x-\sin^2x)
$$
Now use $\sin x\cos x=\frac{1}{2}\sin 2x$ and
$$
\sin^2x=\frac{1-\cos2x}{2}
$$
so finally we get
$$
\frac{1}{4}\sin2x(\sin2x-1+\cos2x)=
\frac{1}{4}(1-\cos^22x-\sin2x+\sin2x\cos2x)
$$
and the integral becomes
$$
\frac{1}{4}\int\left(
\frac{1}{\cos2x}-\frac{\sin2x}{\cos2x}-\cos2x+\sin2x
\right)\,dx
$$
which should pose few problems.
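One can confirm numerically that the rewritten integrand agrees with the original one (a verification sketch, separate from the argument above):

import math

def original(x):
    return math.sin(x)**2 * math.cos(x) / (math.sin(x) + math.cos(x))

def rewritten(x):
    # (sec 2x - tan 2x - cos 2x + sin 2x)/4
    return (1/math.cos(2*x) - math.tan(2*x)
            - math.cos(2*x) + math.sin(2*x)) / 4

for v in (0.3, 0.7, 1.1):
    print(abs(original(v) - rewritten(v)))   # ~1e-16 each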
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
}
|
Proof that exact form are path independent seems to imply the same for merely closed forms A singular $k$-cube on some set $A \subseteq \mathbb R^n$ is a continuous map $c : [0,1]^k \to A$. Consider the following exercise:
Let $c_1, c_2$ be singular $1$-cubes in $\mathbb R^2$ with $c_1(0) = c_2(0)$ and $c_1(1) = c_2(1)$. Show that $\int_{c_1} \omega = \int_{c_2} \omega$ if $\omega$ is exact. Give a counter-example on $\mathbb R^2 - 0$ if $\omega$ is merely closed.
Here $\omega$ denotes a differential $1$-form on $\mathbb R^2$, i.e. for each $p \in \mathbb R^2$ we have that $\omega(p)$ is a linear map on $\mathbb R^2_p$, i.e. the tangent space at the point $p$. This is exercise 4-32 (a) from Spivak: Calculus on Manifolds (page 105), and a solution could be found here. The solution goes like this:
First it is shown that there exists a $2$-cube $c : [0,1]^2\to \mathbb R^2$ such that $\partial c = c_1 - c_2 + c_1(1) - c_1(0)$ (the last two terms denoting degenerate constant $1$-cubes), and then one uses that the integral over constant $1$-cubes vanishes, together with Stokes' theorem:
Suppose $\omega$ is exact, hence closed. Then by Stokes Theorem we have $\int_{c_1 - c_2} \omega = \int_{\partial c} \omega = \int_c d\omega = \int_c 0 = 0$ (since $d\omega = 0$ as it is closed), and so $\int_{c_1} \omega = \int_{c_2}\omega$.
So as I see it just uses the fact that $\omega$ is closed, but not the more stronger property of exactness. So this proof would also work if $\omega$ is merely closed. What have I overlooked here?
|
You are in a sense right: the proof would work if you only knew that $\omega$ were closed, provided that $\omega$ were defined not just on $\partial C$, but on all of $C$. But in that case, all closed forms on $C$ are exact.
That is not the case on $\mathbb{R}^2 \setminus \{ 0\}$. If you pick $C_1$ and $C_2$ to bound a region $C$ that does not contain $0$, then it is true that $\int_{C_1}\omega = \int_{C_2}\omega$ for every closed form $\omega$. However, on such a region, all closed forms are exact. The problem occurs when the region $C$ contains $0$ - in that situation $\omega$ (and hence $d\omega$) is not defined on all of $C$, so Stokes' theorem cannot be applied.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1733914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Strengthening Poincaré Recurrence Let $(X, B, \mu, T)$ be a measure preserving system. For any set $B$ of positive measure, $E = \{n \in \Bbb N |\; \mu(B \; \cap \;T^{-n}B) > 0\}$ is syndetic.
This exercise comes from Einsiedler and Ward. The exercise before it is the "uniform" mean ergodic theorem, which is proved basically the same way as the mean ergodic theorem, and they say it should be used in the proof. Can someone help me get started? Thanks in advance!
|
Hint: If the set was not syndetic, then the sequence
$$
\frac1n\sum_{k=0}^{n-1}\mu(B\cap T^{-k}B)
$$
would have zero as an accumulation point (take larger and larger gaps). But
$$
\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}\mu(B\cap T^{-k}B)=\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}\int_B(\chi_B\circ T^k)\,d\mu=\int_B\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}(\chi_B\circ T^k)\,d\mu,
$$
using the dominated convergence theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Nowhere $0$ form on the sphere? Consider the differential form on $\mathbb R^3$ given by $ x dy \wedge dz + y dz \wedge dx + z dx \wedge dy$. I converted this to spherical coordinates using a laborious calculation, and when I'm done, by some miracle (it would be cool if someone could explain exactly how that works), I get something really compact and elegant: $\rho^3 \sin \phi\, d\phi \wedge d\theta$.
If we call this form $\Omega$, my task is to compute $i^*\Omega,$ where $i$ represents the inclusion from $\mathbb S^2 \rightarrow \mathbb R^3$, which seems to be pretty easy also - on the unit sphere we just have $\rho = 1$.
Now why is this nowhere $0$ on the sphere? It looks like it's $0$ along the half plane where $\phi = 0$!
|
First of all, there's just one point on the unit sphere where $\phi=0$ — the north pole (but there's also the south pole, where $\phi=\pi$, to worry about). But remember that spherical coordinates actually fail to give a coordinate system at these points (and we can debate what happens when $\theta = 0$ or $2\pi$).
In the original cartesian coordinates, you see that at the poles the $2$-form is given by $z\,dx\wedge dy = \pm dx\wedge dy$, and, since the tangent plane to the sphere is the $xy$-plane, this $2$-form is definitely non-zero on the sphere at those points.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Can an alternating series ever be absolutely convergent? Can an alternating series EVER be absolutely convergent?
I am examining practice problems in my calculus book and I haven't yet come across a case where this is so. It might be because they are simple, but I'm genuinely curious.
|
A series is absolutely convergent if $\sum |a_n| < \infty$.
If a series is absolutely convergent, then every sub-series is convergent.
So yes: consider an alternating series $\sum (-1)^n|a_n|$ with $\sum |a_n| < \infty$, for instance $\sum (-1)^n/2^n$ or $\sum (-1)^n/n^2$.
The sum of the even terms converges and the sum of the odd terms converges, so the series is both alternating and absolutely convergent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
}
|
Eigenvalues within unit circle Let
$$A = \begin{bmatrix} -1 & -N\\ 6 & 0\end{bmatrix}$$
from the state space realization of an LTI system. For this system to be stable, all eigenvalues must be within the unit circle, i.e., for all eigenvalues $|\lambda_i|<1$ must be satisfied. Matrix $A$ has eigenvalues
$$\lambda_{1,2} = \frac{-1 \pm \sqrt{1-24N} }{2}$$
How can I derive $N$ in such a way that both eigenvalues lie within the unit circle? The solution should probably be $$0<N<1/6$$ This seems obvious, but since the eigenvalues can be complex I don't know how to interpret this.
|
The two roots are
$$
\lambda_1(N) = \frac{-1+ \sqrt{1-24N}}{2}, \;\;\;
\lambda_2(N) = \frac{-1- \sqrt{1-24N}}{2}.
$$
It is easily seen that the condition for real roots is $N \le \frac{1}{24}$.
We consider the real-eigenvalue case first. The real eigenvalues are within the unit circle if
$$
-1 < \lambda_i(N) < 1, \;\; i=1,2.
$$
As $N$ decreases from $\frac{1}{24}$, the root $\lambda_1(N)$ increases monotonically and $\lambda_2(N)$ decreases monotonically.
The boundary condition $\lambda_1(N) < 1$ requires $\sqrt{1-24N} < 3$, i.e.
$$
N > -\frac{1}{3},
$$
which is compatible with the condition for real roots.
The condition $\lambda_2(N) > -1$ requires $\sqrt{1-24N} < 1$, i.e.
$$
N > 0,
$$
which is also compatible with the condition for real roots. However, as $N$ decreases, $\lambda_2(N)$ hits its boundary $-1$ first (at $N=0$, before $\lambda_1(N)$ hits $1$ at $N=-\frac13$), so this is the binding constraint.
Thus, the required condition in the real-eigenvalue case is
$$
0 < N \le \frac{1}{24} .
$$
For complex eigenvalues, i.e. $N > \frac{1}{24}$, the two eigenvalues are complex conjugates, so their moduli are equal:
$$
|\lambda_1(N)| = |\lambda_2(N)| .
$$
The moduli have to be strictly less than unity for the eigenvalues to be inside the unit circle. The condition is
$$
|\lambda_{1,2}(N)|^2 = \frac{1+ (\sqrt{|1-24N|})^2}{4} < 1,
$$
equivalently,
$$
\frac{1+|1-24N|}{4} < 1 .
$$
Since $1-24N$ is negative in the range we are interested in, we obtain the condition for the complex roots to be inside the unit circle as
$$
\frac{1+24N-1}{4} < 1,
$$
which gives us
$$
N < \frac{1}{6} .
$$
Incorporating the condition for complex roots, the complete solution in the complex case is
$$
\frac{1}{24} < N < \frac{1}{6} .
$$
Thus, the complete solution for the eigenvalues (real or complex) to be strictly inside the unit circle is
$$
0 < N < \frac{1}{6} .
$$
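A numerical scan supports this (a sketch using numpy; the boundary values themselves give $|\lambda|=1$ and are therefore excluded):

import numpy as np

def stable(N):
    A = np.array([[-1.0, -N], [6.0, 0.0]])
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

for N in [-0.01, 0.0, 0.01, 0.1, 0.16, 0.17, 0.2]:
    print(N, stable(N))   # True exactly for 0 < N < 1/6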
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Can I approximate a series as an integral to find its limit and determine convergence? Find $\lim \limits_{n \to \infty} (a_n)$, where $a_n=\frac{1}{n^2}+\frac{2}{n^2}+\frac{3}{n^2}+...+\frac{n}{n^2}$.
So I can solve it like this: $a_n=\frac{1+2+3+\cdots+n}{n^2}=\frac{\frac{1}{2}n(n+1)}{n^2}=\frac{1}{2}(1+\frac{1}{n})$. Clearly $\lim \limits_{n \to \infty} (a_n)=\frac{1}{2}$ so the sequence converges to $\frac{1}{2}$.
But can I approximate the series as an integral?
$$a_n=\sum_{i=1}^n \frac{i}{n^2} \approx\int_{1}^n \frac{x}{n^2}dx=\frac{1}{n^2}\int_{1}^n x \,dx=\frac{1}{2}-\frac{1}{2n^2}$$
Now, when $n$ tends to infinity, $a_n$ tends to $\frac{1}{2}$ so the sequence converges to $\frac{1}{2}$. This produced the same result as using the first method. The only thing I am unsure of is that the final sums are different despite the fact that they both converge to the same number. This is because in the first method we sum only integers but in the second we sum all real $x$'s in the given interval, right?
Is this approach also valid?
|
This can be written as
$$\lim_{n \to \infty}\frac{1}{n} \sum_{r=1}^n {r\over n}$$
This is of the form
$$\lim_{n \to \infty}\frac{1}{n} \sum_{r=1}^n f\left ({r\over n} \right) $$
So it can be written as
$$\int_0^1 f(x)dx$$
$$=\int_0^1x dx$$
$$=\frac{1}{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
When is $\sum_{N=1}^{\infty}\exp\left(\ln\left(\frac{\sqrt{N}\ln(N)}{g(N)}\right)-\frac{(g(N))^2}{\ln(N)}\right)<\infty$? Consider the series
\begin{align}
\sum_{N=1}^{\infty}\exp\left(\ln \left(\frac{\sqrt{N}\ln(N)}{g(N)}\right)-\frac{(g(N))^2}{\ln(N)}\right)
\end{align}
Question:
Let $g(N)=a(\ln(N))^{t}$ where $a \geq 0$ is some constant of your choice. I am interested to know what is the smallest $t$ such that the above expression is finite. Is there a $t$ and $g(N)=a(\ln(\ln(N)))^{t}$ such that the above expression is finite? How do I go about solving such a problem?
My attempt: I know $\sum_{N=1}^{\infty}\frac{1}{N^S}< \infty$ for $S>1$ so I want to manipulate the above expression into this form. Let $g(N)=a\ln(N)$. Then
\begin{align}
\sum_{N=1}^{\infty}\exp\left(\ln \left(\frac{\sqrt{N}\ln(N)}{g(N)}\right)-\frac{(g(N))^2}{\ln(N)}\right) &=\sum_{N=1}^{\infty}\exp\left(\frac{1}{2}\ln (N)-\ln(a)-a^2\ln(N)\right) \\
&=a^{-1}\sum_{N=1}^{\infty} \frac{1}{N^{a^2-1/2}}
\end{align}
and the above expression is finite for any $a>\sqrt{3/2}$.
Edit: I'd like to add one more function. What about $g(N)=aN^{t}$?
My attempt:
\begin{align}
\sum_{N=1}^{\infty}\exp\left(\ln \left(\frac{\sqrt{N}\ln(N)}{g(N)}\right)-\frac{(g(N))^2}{\ln(N)}\right) &=\sum_{N=1}^{\infty}
\exp\left((\frac{1}{2}-t)\ln(N)+\ln(\ln (N))-\ln(a)-a^2\frac{N^{2t}}{\ln(N)}\right) \\
&=a^{-1}\sum_{N=1}^{\infty} \frac{\ln(N)}{N^{t-1/2}}\exp\left(-a^2\frac{N^{2t}}{\ln(N)}\right)
\end{align}
Now I think that if $t>1/2$ then this is finite because the exponential term will kill everything but not sure how to deal with $t<1/2$.
|
If $g(N)=a(\ln(\ln(N)))^{t}$, rewrite terms in decreasing asymptotic magnitude and get rid of the insignificant ones.
$\displaystyle \ln \left(\frac{\sqrt{n}\ln(n)}{g(n)}\right)=\frac 12 \ln n+\ln \ln n -t\ln \ln \ln n - \ln a$
$\displaystyle \frac{(g(n))^2}{\ln(n)}= \frac{a^2(\ln\ln n)^{2t}}{\ln n}=o(1)$
Hence $$\exp\left(\ln \left(\frac{\sqrt{n}\ln(n)}{g(n)}\right) - \frac{(g(n))^2}{\ln(n)}\right)=\exp\left( \frac 12 \ln n+\ln \ln n -t\ln \ln \ln n - \ln a+o(1)\right)=\frac{\sqrt{n}\ln n}{(\ln\ln n)^t}\left(\frac 1a +o(1)\right) \to \infty$$
The series always diverges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Function that maps the "pureness" of a rational number? By pureness I mean a number that measures how small the numerator and denominator are.
E.g. $\frac{1}{1}$ is purest, $\frac{1}{2}$ is less pure (but the same as $\frac{2}{1}$), $\frac{2}{3}$ is less pure than the previous examples, $\frac{53}{41}$ is worse, .... $\pi$ isn't pure at all (as well as e...).
|
$f(\frac{a}{b})=\frac{1}{|a|+|b|}$, taking $\frac{a}{b}$ in lowest terms.
For the following values of $x$, $f(x)$ is either zero or not defined:

* $x$ irrational
* $x=0$

Higher output values imply higher purity.
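A sketch in Python (Fraction keeps $a/b$ in lowest terms automatically):

from fractions import Fraction

def pureness(x: Fraction) -> Fraction:
    # Fraction stores a/b reduced, so |a| + |b| is well defined
    return Fraction(1, abs(x.numerator) + abs(x.denominator))

for q in [Fraction(1, 1), Fraction(1, 2), Fraction(2, 3), Fraction(53, 41)]:
    print(q, pureness(q))   # 1/2, 1/3, 1/5, 1/94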
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 9,
"answer_id": 6
}
|
Differentiate $\sqrt{1+e^x}$ using the definition of a derivative This is the progress I've made so far.
$$\lim_{h \to 0} \frac{\sqrt{1+e^{x+h}}-\sqrt{1+e^{x}}}{h}$$
$$= \lim_{h \to 0} \frac{\left(1+e^{x+h}\right)-\left(1+e^{x}\right)}{h\left(\sqrt{1+e^{x+h}}+\sqrt{1+e^{x}}\right)}$$
$$= \lim_{h \to 0} \frac{e^x\left(e^h-1\right)}{h\left(\sqrt{1+e^{x+h}}+\sqrt{1+e^{x}}\right)}$$
$$= \lim_{h \to 0} \frac{e^x}{\sqrt{1+e^{x+h}}+\sqrt{1+e^{x}}} \frac{e^h-1}{h}$$
I can see how as h tends to 0 the fraction on the left will tend to the desired result, but I'm not sure how to deal with the fraction on the right.
|
$$\lim_{h \to 0} \frac{e^x}{\left(\sqrt{1+e^{x+h}}+\sqrt{1+e^{x}}\right)} \cdot \frac{\left(e^h-1\right)}{h}$$
Since $$e^h=\frac{h^0}{0!}+\frac{h}{1!}+\frac{h^2}{2!}+\frac{h^3}{3!}+......$$
$$e^h=1+\frac{h}{1!}+\frac{h^2}{2!}+\frac{h^3}{3!}+......$$
$$e^h-1= \frac{h}{1!}+\frac{h^2}{2!}+\frac{h^3}{3!}+......$$
$$\frac{\left(e^h-1\right)}{h}= \frac{1}{1!}+\frac{h}{2!}+\frac{h^2}{3!}+......$$
Thus $$\lim_{h \to 0}\frac{\left(e^h-1\right)}{h}= 1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Find a recurrence relation whose solution involves $\sin$ or $\cos$. Hi, I'm a first-year computer science student.
My teacher has asked us to try to find a recurrence equation for the closed-form expression:
$$f(n) = 2^n + 3^n \cos\left(\frac{n\pi}{2}\right) $$
and I think I need a bit of help.
First, I tried to reverse the process of solving a recurrence, following the steps of an example recurrence with the complex roots $1+i$ and $1-i$, but I have tried for hours without success.
Second, I tried computing $f(0), f(1), f(2), \dots$ and looking for patterns, and I only realized that for odd $n$, $f(n)=2^n$; but with the even numbers things get difficult.
Finally, I've been trying to solve some recurrences whose characteristic equation has complex roots like $3i$ or $-3i$, but in the end I get an expression that I can't turn into a simple sine or cosine.
My teacher has shown us some ways to transform an expression like $\cos\bigl(\frac{n \pi}{2}\bigr) + \sin\bigl(\frac{n \pi}{2}\bigr)$ into $\sqrt2\sin\bigl(\frac{n \pi}{2}+\frac{\pi}{4}\bigr)$, but when I have only one complex root, I have no idea how to transform it into a simple cosine, so I'm unable to overcome this challenge.
Moreover, I can't find good books or lessons about this inverse process of solving a recurrence, and even less about what to do when a cosine appears (the only examples in my notes are for the roots $1+i$ and $1-i$, which is the same example found everywhere).
So, if someone could help me, I would be very grateful.
Actually, I'm not asking for someone to solve the problem, but for someone to help me find the solution. If you could recommend a good book to learn this from, or tell me about recurrences involving cosines, or whether it is possible to solve a recurrence with a single complex root, I'll be grateful.
Greetings.
|
You are so close! Since $e^{\frac{n\pi}2i}=\cos\left(\frac{n\pi}2\right)+i\sin\left(\frac{n\pi}2\right)$, you need $$e^{\frac{n\pi}2i}=\left(e^{\frac{\pi}2i}\right)^n=\left(\cos\left(\frac{\pi}2\right)+i\sin\left(\frac{\pi}2\right)\right)^n=(0+i)^n=i^n$$
Thus your second term is $3^ni^n=(3i)^n$ so you need the root of the characteristic equation $r_2=3i$ and to get real answers also its complex conjugate $r_3=-3i$. Already I think you were aware of the root $r_1=2$. So your characteristic equation reads
$$(r-r_1)(r-r_2)(r-r_3)=(r-2)(r-3i)(r+3i)=(r-2)(r^2+9)=r^3-2r^2+9r-18=0$$
My reading of your question is that this is all the help you wanted. If you need more, ask for an edit.
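A quick check (a sketch) that $f(n)=2^n+3^n\cos\left(\frac{n\pi}{2}\right)$ really satisfies the recurrence $a_n=2a_{n-1}-9a_{n-2}+18a_{n-3}$ read off from this characteristic equation:

def f(n):
    # cos(n*pi/2) cycles through 1, 0, -1, 0 for integer n
    return 2**n + 3**n * [1, 0, -1, 0][n % 4]

for n in range(3, 12):
    print(n, f(n) == 2*f(n-1) - 9*f(n-2) + 18*f(n-3))   # True every time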
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1734987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Coloring classes of $\{1,2,3,\dots,n\}$ I'm trying to prove the following statement
There is an integer $n_0$ such that for any $n\ge n_0$, in every $9$-coloring of $\{1,2,3,\dots,n\}$, one of the $9$ color classes contains $4$ integers $a,b,c,d$ such that $a+b+c=d$.
I thought about taking $\{1,2,\dots,n\}$ to be vertices of a graph where edges connect vertices of the same color. Then I tried to use Ramsey theorem, but to no avail. I thought also about Schur theorem and tried to find some generalization of it, but got nothing.
How should one prove the statement?
Please help, thanks!
|
Isn't this a direct consequence of Rado's single equation theorem?
The theorem (see the book Ramsey Theory over the Integers by B. Landman and A. Robertson) says that an equation $c_1x_1+\cdots+c_kx_k=0$ with nonzero integer coefficients is regular if and only if some nonempty subset $D$ of the coefficients sums to zero.
Here $c_1=c_2=c_3=1$, $c_4=-1$ for your statement (written as $a+b+c-d=0$), and we can take e.g. $D=\{c_1, c_4\}$. By "regular", it means your statement is true for all $r$-colorings with $r\ge1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
functional, compact operator I am working on a homework problem (Analysis now E3.3.7) and I have no idea on how to solve it. Can anyone give some thoughts? Many thanks.
Assume that the Hilbert space $H$ is separable and prove that an operator $T$ in $B(H)$
has the form $U|T|$ for some unitary operator $U$, where $|T|=(T^*T)^{\frac{1}{2}}$, if and only if $\operatorname{index} T = 0$.
I know that if $T$ is invertible, then the $U$ in the polar decomposition is unitary. But how can I prove $T$ is invertible if $\operatorname{index} T = 0$? And I have no idea about the other direction.
|
Remaining spaces:
$$\overline{\mathcal{R}|A|}^\perp=\mathcal{N}|A|=\mathcal{N}A,\qquad\mathcal{N}A^*=\overline{\mathcal{R}A}^\perp$$
For equal dimensions:
$$\dim\mathcal{N}A=\dim\mathcal{N}A^*\implies UU^*=1$$
For more details: Polar Decomposition
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Continuous functions Let $X$ and $Y$ be topological spaces and $f$ a function from $X$ into $Y$. Show that $f$ is continuous if and only if it is continuous as a function from $X$ onto the subspace $f(X)$ of $Y$.
I'm proceeding like this:
First, assume that $f$ is continuous from $X$ into $Y$, and let $f(X) \cap A$ be an open set of $f(X)$, where $A$ is open in $Y$.
So $f^{-1} (f(X)\cap A)=f^{-1}(f(X)) \cap f^{-1}(A)$; here $f^{-1}(A)$ is open because $f$ is continuous, and $X=f^{-1}(f(X))$ because $f$ is a function from $X$ into $Y$, so $f^{-1} (f(X)\cap A)$ is open.
Am I proceeding correctly? And how can I prove the converse? I know it's an easy problem, but I'm stuck.
|
If you like to be pedantic, one could say this is an immediate consequence of the characterisation of continuity of maps into a space with the initial topology. That is, suppose we have $f: X \rightarrow Y$, let $i: f[X] \rightarrow Y$ be the inclusion map, and let $\tilde{f}: X \rightarrow f[X]$ be the image restriction of $f$, so that $\tilde{f}(x) = f(x)$ for all $x \in X$. Then $i \circ \tilde{f} = f$, and $f$ is continuous iff $\tilde{f}$ is, by the initial topology fact ($f[X]$ has the initial topology with respect to $i$, being a subspace topology).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Solving $e^x = 6x$ for $x$ without a graph. Throughout my high school career I was always told that an equation of this sort ( $e^x = 6x$ for example) couldn't be solved algebraically. However I feel that there may be a way (and you may be out there saying "of course there is a way") I know that it can be solved graphically, but is there any other way(s) to solve this equation: $$e^x = 6x$$
**Without graphing or using an equation solver**
|
I am a fan of fixed point iteration because I find it easy to set up. You want to work your equation into the form $x=f(x)$ where the derivative of $f(x)$ at the root is small and certainly less than $1$ in absolute value. Then pick a reasonable starting value for $x_0$ and iterate $x_{i+1}=f(x_i)$. As logs are slowly varying I would write this as
$$x=\log (6x)=\log (6) + \log (x)$$
A starting value of $x_0=\log(6)$ looks reasonable. After a couple dozen iterations in a spreadsheet (love copy-down) it has converged to about $2.833148$
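In code the iteration is only a few lines (a sketch):

import math

x = math.log(6)                      # starting value
for _ in range(40):
    x = math.log(6) + math.log(x)    # x = log(6x)
print(x)                             # about 2.833148
print(math.exp(x), 6 * x)            # both about 17.0, so e^x = 6x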
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Prove by strong induction that $3^n$ divides $a_n$ for all integers $n \ge 1$
Let $a_1 = 3, a_2 = 18$, and $a_n = 6a_{n-1} − 9a_{n-2}$ for each integer $n \ge 3$. Prove by strong induction that $3^n$ divides $a_n$ for all integers $n \ge 1$
I've done the base step and ih however I am stuck on the Inductive Step. I'm thinking it's something like $a_{k+1} = 6a_k - 9a_{k-1}$ but I don't know how to follow that.
Thanks in advance for the help with the Inductive Step.
|
Use strong induction: assume $a_{n - 2} = 3^{n - 2}\lambda_{n - 2}$ and $a_{n - 1} = 3^{n - 1}\lambda_{n - 1}$. We then have:
\begin{align}
a_n =&\ 2\cdot 3\cdot 3^{n - 1}\lambda_{n - 1} - 3^2\cdot 3^{n - 2}\lambda_{n - 2}\\
=&\ 2\lambda_{n - 1}3^n - \lambda_{n - 2}3^n \\
=&\ \left(2\lambda_{n - 1} - \lambda_{n - 2}\right)3^n
\end{align}
$2\lambda_{n - 1} - \lambda_{n - 2}$ is surely an integer (integers are closed under addition and multiplication) therefore $a_n$ is a multiple of $3^n$. The only thing left is to show that $a_1$ and $a_2$ satisfy this condition.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
limit of product exists and one limit exists Question is to check :
If $\lim_{n\rightarrow \infty}a_nb_n$ exists and $\lim_{n\rightarrow \infty}a_n$ exists implies $\lim_{n\rightarrow \infty}b_n$ exists.
Considering $a_n=\frac{1}{n}$ and $b_n=n$ then we see that $\lim_{n\rightarrow \infty}a_nb_n$ exists, equals to $1$ and
$\lim_{n\rightarrow \infty}a_n$ exists and equals to $0$. In this case $\lim_{n\rightarrow \infty}b_n$ does not exists..
So the answer to the question is: not always.
Now, what if $\lim_{n\rightarrow \infty}a_n$ exists and is non zero and $(b_n)$ is bounded?
Suppose that $\lim_{n\rightarrow \infty}a_nb_n=M$ with $\lim_{n\rightarrow \infty}a_n=P\neq 0$ and $|b_n|\leq A$ for all $n\in \mathbb{N}$.
I claim that $\lim_{n\rightarrow \infty}b_n=\frac{M}{P}$
Consider $|b_n-\frac{M}{P}|$; we estimate this. Given $\epsilon>0$ there exists $N\in \mathbb{N}$ such that $|a_nb_n-M|<\epsilon$ and $|a_n-P|<\epsilon$ for all $n\geq N$.
$$|b_n-\frac{M}{P}|=\frac{1}{|P|}|Pb_n-M|=\frac{1}{|P|}|Pb_n-a_nb_n+a_nb_n-M|\leq \frac{1}{|P|}|b_n||a_n-P|+\frac{1}{|P|} \epsilon$$
As $(b_n)$ is bounded, we have for all $n\geq N$
$$|b_n-\frac{M}{P}|\leq \frac{1}{|P|}A\epsilon+\frac{1}{|P|} \epsilon=\epsilon\left(\frac{A+1}{|P|}\right)$$
Thus, we are done.
I am just wondering if i can relax any of the conditions that i have assumed. Help me to know more about this.
|
If
$$\;\lim_{n\to\infty}a_n=L\neq0\;,\;\;\lim_{n\to\infty}a_nb_n= K\;,\;\;\text{then since for almost all indexes}\;\;a_n\neq0\,,$$
we get that for all indexes except a finite number of them, from arithmetic of limits:
$$b_n=\frac{a_nb_n}{a_n}\xrightarrow[n\to\infty]{}\frac KL$$
and all this is well-defined and always finite since $\;L\neq0\;$ . No need to require a priori boundedness for $\;\{b_n\}\;$ .
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Norm of element $\alpha$ equal to absolute norm of principal ideal $(\alpha)$ Let $K$ be a number field, $A$ its ring of integers, $N_{K / \mathbf{Q}}$ the usual field norm, and $N$ the absolute norm of the ideals in $A$.
In some textbooks on algebraic number theory I have seen the fact: $\vert N_{K / \mathbf{Q}}(\alpha) \vert = N(\alpha A)$ for any $\alpha \in A$. However, I wasn't able to find a proof (neither in books nor by myself).
Can someone explain to me, why this is true?
Thanks!
|
I have written up the proof in Lemma $3.3.3$ of my lecture notes in algebraic number theory, on page $35$. It uses three different $\mathbb{Q}$-bases of the number field $K$, $\mathbb{Z}$-bases for the ring of integers $\mathcal{O}_K$, and the determinant for the commutative diagram given.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1735863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Cut-off function construction Let $f:I=[0,1]\cup[2,3]\to \mathbb{R}$, defined by
$$f(x)= \begin{cases} 0 & \text{if } x\in [0,1] \\ 1 & \text{if } x\in [2,3] \end{cases} $$
How do I construct a $C^1$ function $\tilde{f}: \mathbb{R}\to \mathbb{R}$ such that $\tilde{f}(x)=f(x)$ for all $x\in I$?
|
I will show you how to fill in the gap $(1,2)$ and leave you to get the rest. The easiest method is to use a polynomial to fill in the gap. It will need to have a derivative of zero at $x=1$ and $x=2$, so the polynomial needs to be at least cubic, and its derivative has the form $$p'(x)=a(x-1)(x-2)=ax^2-3ax+2a,$$ where $a$ is a constant. The derivative has to be positive between $1$ and $2$, so $a$ is negative. By taking an antiderivative, we get $$p(x)=\frac{a}{3}x^3-\frac{3a}{2}x^2+2ax+c.$$ We should have $p(1)=0$ and $p(2)=1$. Therefore $$\frac{a}{3}-\frac{3a}{2}+2a+c=0,$$ $$\frac{8a}{3}-6a+4a+c=1.$$ Now solve this system of equations to find what $a$ and $c$ should be.
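For reference, here is a sketch that solves the system with sympy (one step beyond where the answer deliberately stops):

import sympy as sp

a, c, x = sp.symbols('a c x')
p = a*x**3/3 - sp.Rational(3, 2)*a*x**2 + 2*a*x + c

sol = sp.solve([p.subs(x, 1), p.subs(x, 2) - 1], [a, c])
print(sol)                        # {a: -6, c: 5}
print(sp.expand(p.subs(sol)))     # -2*x**3 + 9*x**2 - 12*x + 5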
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Residue Theorem: compute the integral $\int_0^\infty \frac{x \sin x}{x^4+4a^4}\,dx$
Compute the integral $$\int_0^\infty \frac{x \sin x}{x^4+4a^4}\,dx.$$
Since the integrand is an even function, I can rewrite the expression as $$\frac{1}{2}\int_{-\infty}^\infty \frac{x \sin x}{x^4+4a^4}\,dx.$$
In the previous part I found that the integral along the upper semicircle tends to $0$ as $R \rightarrow \infty$: $$\frac{1}{2}\int_{C} \frac{z e^{iz}}{z^4+4a^4}\,dz\rightarrow 0.$$
I'm left to evaluate $$\oint \frac{z\sin z}{z^4+4a^4}\,dz.$$ Should I compute the residues of $\sin$ directly, or should I rewrite $$\sin z=\frac{e^{iz}-e^{-iz}}{2i}?$$
I found the roots of $z^4+4a^4$ to be $z=\sqrt2\, ae^{i(\pi/4+k\pi/2)}$ for $k=0,1,2,3$.
But I can't seem to be able to compute the answer.
|
Write the integral of interest $I(a)$ as
$$\begin{align}
I(a)&=\frac12\int_{-\infty}^\infty \frac{x\sin(x)}{x^4+4a^4}\,dx\\\\
&=\text{Im}\left(\frac12\int_{-\infty}^\infty \frac{xe^{ix}}{x^4+4a^4}\,dx\right) \tag 1\\\\
&=\lim_{R\to \infty}\text{Im}\left(\frac12\oint_{C_R}\frac{ze^{iz}}{z^4+4a^4}\,dz\right)
\tag 2\\\\
\end{align}$$
where $C_R$ is the closed contour in the upper-half plane comprised of the line segment from $-R$ to $R$ and the semicircle centered at the origin with radius $R$. The equivalence of $(1)$ and $(2)$ is guaranteed since, as already determined in the OP, the contribution from the integral over the semi-circle vanishes as $R\to \infty$.
Therefore, we have
$$I(a)=\text{Im}\left(\frac12\,(2\pi i) \,\sum \text{Res}_{\text{Im}(z)>0}\left(\frac{ze^{iz}}{z^4+4a^4}\right)\right)$$
where the residues in the upper-half plane are at $z=4^{1/4}ae^{i\pi/4}$ and $z=4^{1/4}ae^{i3\pi/4}$
Can you finish now?
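For reference, carrying out the remaining computation (an added step; the answer deliberately stops here): each pole is simple, so the residue at $z_0$ is $\frac{z_0e^{iz_0}}{4z_0^3}=\frac{e^{iz_0}}{4z_0^2}$. With $z_0=a(1+i)$ and $z_0=a(-1+i)$ we get $z_0^2=2ia^2$ and $z_0^2=-2ia^2$ respectively, so
$$\sum \text{Res}=\frac{e^{-a}}{8ia^2}\left(e^{ia}-e^{-ia}\right)=\frac{e^{-a}\sin(a)}{4a^2},$$
and therefore
$$I(a)=\text{Im}\left(\pi i\,\frac{e^{-a}\sin(a)}{4a^2}\right)=\frac{\pi e^{-a}\sin(a)}{4a^2}.$$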
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Function as an eigenvector for a matrix? So I am currently going through this paper: Link Here
And in section 2.2, it defines $K$ to be a weighted adjacency matrix for a certain rectangular $n$ by $m$ graph, where the weights are all either $1$ or $i$. It then goes on to say that for fixed $j$ and $k$, the function:
$$ f(x,y)=\sin\frac{\pi j x}{m+1}\sin\frac{\pi k y}{n+1}$$
is an eigenvector of $K$. My question is, what does this mean? $K$ is a matrix with strictly $\mathbb{C}$ valued entries, so my understanding is that it can only act on vectors in $\mathbb{C}^n$ for some $n$. What does it mean for this matrix to act on a function?
For example, when $m=2$ and $n=3$, we have the matrix:
$$ K = \begin{bmatrix}
0 & 0 & 0 & i & 1 & 0 \\
0 & 0 & 0 & 1 & i & 1 \\
0 & 0 & 0 & 0 & 1 & i \\
i & 1 & 0 & 0 & 0 & 0 \\
1 & i & 1 & 0 & 0 & 0 \\
0 & 1 & i & 0 & 0 & 0 \\
\end{bmatrix}$$
In this case, what are the eigenvectors that it is referring to?
|
It appears that the standard basis of the vector space the linear operator $K$ acts on is most easily enumerated by two indexes $x,y$ where $x\in\{1,\ldots,m\}$ and $y\in\{1,\ldots,n\}$. To specify a vector in this space, we need to specify the component this vector has for each possible combination of $x$ and $y$ from these sets. That is, to specify a vector, we may specify a function $f\colon\{1,\ldots,m\}\times\{1,\ldots,n\}\to\Bbb C$. And that is precisely what happens here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Efficiently evaluating $\int x^{4}e^{-x}dx$ The integral I am trying to compute is this:
$$\int x^{4}e^{-x}dx$$
I got the right answer but I had to integrate by parts multiple times. Only thing is it took a long time to do the computations. I was wondering whether there are any more efficient ways of computing this integral or is integration by parts the only way to do this question?
Edit: This question is similar to the question linked but slightly different because in the other question they are asking for any method to integrate the function which included integration by parts. In this question I acknowledge that integration by parts is a method that can be used to evaluate the integral but am looking for the most efficient way. This question has also generated different responses than the question linked such as the tabular method.
|
Here is a nice little trick to integrate it without using partial integration.
$$
\int x^4 e^{-x} \,\mathrm dx = \left. \frac{\mathrm d^4}{\mathrm d \alpha^4}\int e^{-\alpha x} \,\mathrm dx \right|_{\alpha=1} = \left.- \frac{\mathrm d^4}{\mathrm d \alpha^4} \frac{1}{\alpha} e^{-\alpha x}\right|_{\alpha=1}
$$
The idea is to introduce a variable $\alpha$ in the exponent and write the $x^4$ term as the fourth derivative with respect to $\alpha$. This is especially helpful when you want to calculate the definite integral $\int_0^\infty$ because in this case the differentiation greatly simplifies.
$$
\int\limits_0^\infty x^n e^{-x} \,\mathrm dx = (-1)^n \left. \frac{\mathrm d^n}{\mathrm d \alpha^n} \frac{1}{\alpha} \right|_{\alpha=1} = n!\stackrel{n=4}{=} 24
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 3,
"answer_id": 2
}
|
Trouble with dependent matrix solution I'm determining the eigenvector for $\lambda = 6$. Here is the matrix $A - 6I$:
0 0 0 0
0 1 0 1
0 0 0 0
0 0 0 0
Thus the corresponding equation: $x_2 = -x_4$.
Thus $x_1,x_3,x_4$ are free.
How do I express this in terms of eigenvectors? I'm a little confused.
|
A simpler example to show what can happen.
$$\left[\begin{array}{cc}2&1\\0&2\end{array}\right]$$
You will get $(2-\lambda)^2 = 0$ to solve for eigenvalues. So two eigenvalues at $2$.
$$\left[\begin{array}{cc|c}
0&1&0\\
0&0&0
\end{array}\right]$$
We see that the only requirement is $x_2 = 0$ while $x_1$ can be whatever it wants. Only one eigenvector!
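For the matrix in the question itself (an added worked step, following the same recipe): the single equation $x_2=-x_4$ with $x_1,x_3,x_4$ free means the eigenspace for $\lambda=6$ is three-dimensional. Set one free variable to $1$ and the others to $0$, then read off the constrained entry, giving the basis of eigenvectors
$$\begin{bmatrix}1\\0\\0\\0\end{bmatrix},\qquad \begin{bmatrix}0\\0\\1\\0\end{bmatrix},\qquad \begin{bmatrix}0\\-1\\0\\1\end{bmatrix}.$$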
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Different names for model with parameters specified and not? Say I have a general model:
$$y=\beta_{1}x_{1}+\beta_{0}$$
or
$$y=\beta_{1}x_{1}+\beta_{2}x_{2}+\beta_{0}$$
I might be performing some operations to determine which general model to choose.
Say I have decided on a general model structure and want to define a specialised model, like:
$$y=5 x_{1} +2$$
What is the vocabulary for (1) a model without parameter values specified and (2) a model with specific parameter values? (Can the latter even be called a model, or are the names function or equation more appropriate?)
|
The coefficients $\beta_1,\beta_2,\beta_0$ in this context are often called parameters. Particular values of the parameters determine a particular member of a parametrized family of models.
In the context of linear regression, one sometimes says those three parameters are fixed (as opposed to random) and unobservable, so that they must be estimated by using least squares. The variables $x_1,\,x_2$ are typically observable and often treated as fixed rather than random, even though they may change when a new sample is taken. The rationale for treating them as not random is that one may be concerned with the conditional distribution of $y$, or of the least-squares estimates of the parameters, given the $x$-values. On the other hand, $y$, although observable, would be treated as random.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Dimension of two subspaces Good evening. I'm trying to show that if the sum of the dimensions of two subspaces of a vector space exceeds the dimension of the space, then these subspaces have a nonzero vector in common. I am having trouble building the proof, because I do not see how this is possible.
|
Let $V$ be the vector space, and $W$, $U$ be the two subspaces. Choose bases $\{w_1,\dots,w_m\}$ and $\{u_1,\dots,u_n\}$ for $W$ and $U$, respectively.
If $m+n>\dim V$, then the set $\{w_1,\dots,w_m,u_1,\dots,u_n\}$ is linearly dependent, so there are scalars $c_1,\dots,c_{m+n}$ not all zero such that
$$ c_1w_1+\dots+c_mw_m+c_{m+1}u_1+\dots+c_{m+n}u_n=0 $$
Therefore
$$ c_1w_1+\dots+c_mw_m=-(c_{m+1}u_1+\dots+c_{m+n}u_n) $$
and neither side can be zero: if one side were zero, the linear independence of $\{w_1,\dots,w_m\}$ would force $c_1=\dots=c_m=0$, and then the independence of $\{u_1,\dots,u_n\}$ would force the remaining coefficients to vanish as well, contradicting the fact that not all the $c_i$ are zero.
Thus $c_1w_1+\dots+c_mw_m$ is a non-zero vector in $U\cap W$.
By the way, a refinement of this argument can show that
$$ \dim(W+U)=\dim(W)+\dim(U)-\dim(W\cap U)$$
which will immediately solve the problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Non-Changing Determinant When Adding (Seemingly) Arbitrary Entries Question:
I've found that adding what seem to be arbitrary values in the 4th row don't change the value of the determinant. Why is that?
A = $\begin{bmatrix}
0 & 2 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 \\
0 & 0 & 5 & 0 & 0 \\
3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}$
B = $\begin{bmatrix}
0 & 2 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 \\
0 & 0 & 5 & 0 & 0 \\
3 & -12 & 7 & 14 & 41 \\
0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}$
Thanks!
|
Replace those entries by unknowns, say $a,b,c,d$, then calculate the determinant by expanding along the first column. See what you get.
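Carrying out that suggestion in SymPy (the unknowns $a,b,c,d$ stand for the seemingly arbitrary entries):

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    A = sp.Matrix([
        [0, 2, 0, 0, 0],
        [0, 0, 0, 4, 0],
        [0, 0, 5, 0, 0],
        [3, a, b, c, d],
        [0, 0, 0, 0, 1],
    ])
    print(A.det())  # 120, independent of a, b, c, d

Expanding along the first column, the only nonzero entry is the $3$ in row four, and its minor does not involve $a,b,c,d$ at all, which is why those entries never affect the determinant.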
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Extension of natural transformation from a dense subcategory I'm trying to prove that the free cocompletion of a small category $\mathcal{C}$ gives an equivalence of categories
$$Cat_+[\widehat{\mathcal{C}},\mathcal{D}] \longrightarrow Cat[\mathcal{C},\mathcal{D}]$$
by precomposition with the Yoneda embedding $h$ (take $\widehat{\mathcal{C}}$ as the presheaf category on $\mathcal{C}$, and $Cat_+[\widehat{\mathcal{C}},\mathcal{D}]$ means the category of cocontinuous functors and natural transformations between them). For example, see https://qchu.wordpress.com/2014/04/01/the-free-cocompletion-i/.
What isn't clear to me is that this functor is full. I see that given a natural transformation $$\varepsilon : Fh \Rightarrow Gh : \mathcal{C} \rightarrow \mathcal{D}$$
there is a unique way to define the components of a potential natural transformation $$\alpha : F \Rightarrow G : \widehat{\mathcal{C}} \rightarrow \mathcal{D}$$
such that $\alpha h = \varepsilon$, but why must $\alpha$ be natural?
|
A teacher of mine gave me this answer.
Define for each presheaf $P$, the morphism $\alpha_P : FP \rightarrow GP$ in the unique possible way to make the naturality square of $\alpha$ commutative for all $\lambda_{C,p}$ (these $\lambda_{C,p} : [-,C] \rightarrow P$ form the colimiting cone that establishes $P$ as a colimit of representable functors). Then the naturality square of $\alpha$ commutes for any arrow $[-,C] \rightarrow P$ because this arrow is $\lambda_{C,p}$ if $p \in PC$ is obtained by the Yoneda bijection.
Now fix any $f : P \Rightarrow Q$. To see that $\alpha_Q Ff = Gf \alpha_P$, we can precompose with the colimiting cone $F\lambda_{C,p} : F[-,C] \rightarrow FP$ (F is cocontinuous), so it remains to be seen that $\alpha_Q F(f\lambda_{C,p}) = G(f\lambda_{C,p}) \varepsilon_C$, but this holds because of naturality of $\alpha$ with respect to any arrow $[-,C] \rightarrow P$.
This works for any dense subcategory.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1736967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Puzzle About Cubes (from the book thinking mathematically) I want to confirm my solution to the given problem (solutions were not available in the book)
I have eight cubes. Two of them are painted red, two white, two blue
and two yellow, but otherwise they are indistinguishable. I wish to
assemble them into one large cube with each color appearing on each
face. In how many different ways can I assemble the cube?
*
*The answer I got was 96; is this correct?
*I also tried to generalize the question such that, given $n$ cubes and $\sqrt n$ colors, I came up with
$$n!\cdot2(n-\sqrt{n})!\cdot(n-2\sqrt{n})!\cdot2(n-(4\sqrt{n}-4))!$$
is the above generalization correct?
Thanks
EDIT
well, to further explain my question: the rationale behind the answer to the first question was that we have 6 faces, and if we take one of the faces we have $4!$ possibilities to arrange the 4 colors. That gives us 24 possibilities. Moving to the other faces, we have 2 faces with $2!$ possibilities and 3 faces with $1!$ possibilities, thus the total number of arrangements is $4!\times2!\times2!\times1!\times1!\times1! = 96$.
For the second question: yes, the number of colors should be $n^{1/3}$ in terms of the number of cubes, but in the generalization $n$ should be the number of colors.
for example if we take the current question
n=4
*
*if we take the front face as the base face we have $n! = 24$
*if we take the left face, because of step 1 we have $(n-\sqrt n)! = 2$
*if we take the back face, because of step 2 we again have $(n-\sqrt n)! = 2$
*if we take the right face, because of steps 1 and 3 we have $(n-2\sqrt n)! = 1$
*if we take the top face, because of steps 1, 2, 3, 4 we have $(n-(4\sqrt n-4))! = 1$
*if we take the bottom face, again because of steps 1, 2, 3, 4 we have $(n-(4\sqrt n-4))! = 1$
thus the result $4!\times2!\times2!\times1!\times1!\times1! = 96$
|
The requirement is to have each color on each face of the composed cube.
What that means is that every pair of cubes of the same color must be arranged "diagonally", that is, touching corner to corner; in other words, if I put a red cube at the front-bottom-left, then the other red cube must be placed at the rear-top-right.
Is that clear until now? If not, then try to think why other arrangements won't satisfy the requirement above.
Now, apparently rotating the composed cube doesn't make it any different from the non-rotated version. With that in mind, let's see how many options we have to arrange the cubes on the "front" half.
- At first it may seem that we have 4 options to choose where to place the first cube, and 3 options for the next one. And we can go that way... but there's an easier one:
Let's ask: how many ways are there to arrange 4 different colors on the "front" half? The answer is $3\cdot2$. Why? Because it doesn't matter where we put the first chosen cube (whatever color it is), since we can rotate the cube. Therefore, what really matters is how many ways there are to arrange the remaining 3 colors. So we are left with 3 places for the second color, 2 places for the third, and only 1 for the last color.
And what about the "back" half? It will arrange in a "flipped" and "mirrored" manner. And what about the sides - again - they are the mirrors of the "front". So we've found the simple method for counting only the "front" side and it greatly simplified our lives.
So the answer is 3! = 6.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
}
|
Evaluating Line Integral with Green's Theorem I'm given a line integral
$$\int_{C} \left(\frac{\sin(3x)}{x^2+1}-6x^2y\right) dx + \left(6xy^2+\arctan\left(\frac{y}{7}\right)\right) dy$$
where C is the circle $$x^2+y^2=8$$ oriented in the counter clockwise direction. I'm supposed to solve it with Green's Theorem.
What I have so far is the parametrization of C (note the radius is $\sqrt{8}=2\sqrt{2}$):$$\vec{r}=\left \langle 2\sqrt{2}\cos(t), 2\sqrt{2}\sin(t) \right \rangle , 0\leq t\leq 2\pi$$
What I am confused about is how to proceed after this step. When I try to substitute $x$ and $y$ into the integral, I get a very complicated integral. I feel like I am missing something to simplify the integral to something more nice to solve. Any insights would be greatly appreciated.
|
Green's theorem converts your line integral into a double integral over the region bounded by the (closed) curve, in your case the circle.
$$\oint_{C^+} (L\, dx + M\, dy) = \iint_{D} \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right)\, dx\, dy$$
Calculating the line integral itself is hard, but notice that the integrand becomes a lot simpler since in your case:
$$\frac{\partial M}{\partial x} = 6y^2 \quad \mbox{and} \quad \frac{\partial L}{\partial y} = -6x^2$$
The integral is then simply:
$$6 \iint_{D} \left( x^2+y^2 \right) \, dx\, dy$$
where $D$ is the disc $x^2+y^2 \le 8$; this is begging for polar coordinates!
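For completeness, in polar coordinates ($x=r\cos\theta$, $y=r\sin\theta$, $dx\,dy=r\,dr\,d\theta$) the disc has radius $\sqrt8=2\sqrt2$, and
$$6 \iint_{D} \left( x^2+y^2 \right) dx\, dy = 6\int_0^{2\pi}\!\int_0^{2\sqrt{2}} r^2\cdot r \, dr\, d\theta = 6\cdot 2\pi\cdot\frac{(2\sqrt{2})^4}{4} = 192\pi.$$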
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Combinatorial interpretation of the sum $\sum s1(n, i+j) \cdot {i + j \choose i} $ I'm trying to figure out a combinatorial interpretation of the following sum:
$\sum\limits_{i,j} s1(n, i+j) \cdot {i + j \choose i} $
and then a compact formula. The function $s_1$ denotes the Stirling numbers of the first kind (i.e. the number of $n$-permutations with $i+j$ cycles).
For fixed $i$, it looks like choosing a permutation with at least $i$ cycles and then choosing $i$ of them, but I can't see a closed formula from this.
|
\begin{align}
\sum_{i,j}s_1(n,i+j)\binom{i+j}i
&=
\sum_ks_1(n,k)\sum_{i=0}^k\binom ki=\sum_ks_1(n,k)2^k=(-1)^n(-2)_n=(n+1)!\;,
\end{align}
where $(x)_n$ is the falling factorial $x(x-1)\cdots(x-n+1)$.
So the number of subsets of cycles taken from permutations of $n$ elements is the number of permutations of $n+1$ elements. I don't see a bijective proof of that, but I'll think about it.
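A brute-force check of this identity for small $n$ (the helper counts the cycles of a permutation given in one-line form):

    from itertools import permutations

    def num_cycles(p):
        seen, c = set(), 0
        for i in range(len(p)):
            if i not in seen:
                c += 1
                j = i
                while j not in seen:
                    seen.add(j)
                    j = p[j]
        return c

    for n in range(1, 7):
        total = sum(2**num_cycles(p) for p in permutations(range(n)))
        print(n, total)  # total equals (n+1)! for each n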
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
An inequality involving $\frac{x^3+y^3+z^3}{(x+y+z)(x^2+y^2+z^2)}$
$$\frac{x^3+y^3+z^3}{(x+y+z)(x^2+y^2+z^2)}$$
Let $(x, y, z)$ be non-negative real numbers such that $x^2+y^2+z^2=2(xy+yz+zx)$.
Question: Find the maximum value of the expression above.
My attempt:
Since $(x,y,z)$ can be non-negative, we can take $x=0$, then equation becomes
$$y^2 + z^2=2xy$$
This implies that $(y-z)^2=0$.
So this implies that the required value is $$\frac{y^3 + z^3}{(y+z)(y^2 + z^2)}=\frac{1}{2}$$
But this wrong as the correct answer is $\frac{11}{18}$.
What is wrong with my method?
|
Let $P$ be the expression we want to maximise.
Using the following notation: $S_1=x+y+z$, $S_2=xy+xz+yz$ and $S_3=xyz$.
From the hypothesis we get that $S_1^2=4S_2$.
So the expression we want to maximise is:
$P=\dfrac{x^3+y^3+z^3}{(x+y+z)(x^2+y^2+z^2)}=\dfrac{S_1^3-3S_1S_2+3S_3}{2S_1S_2}$
Then, simplify it using the hypothesis, in a way such that we only get in terms of $S_1$ and $S_3$ :
$P=\dfrac{1}{2}+\dfrac{6S_3}{S_1^3}$
Now, consider a polynomial with roots $x,y,z$. Of course it is $F(x)=x^3-S_1x^2+S_2x-S_3$. Then, for the roots to be real, the discriminant of this polynomial must be $\geq 0$, so:
$S_1^2S_2^2-4S_2^3-4S_1^3S_3+18S_1S_2S_3-27S_3^2 \geq 0$
Again, using the hypothesis, this reduces to $S_3\left(\dfrac{1}{2}S_1^3-27S_3\right) \geq 0$. If $S_3=0$ then $P=\dfrac12<\dfrac{11}{18}$, so assume $S_3>0$ and divide by $S_3$:
$\dfrac{1}{2}S_1^3-27S_3 \geq 0$
Hence, $\dfrac{S_3}{S_1^3} \leq \dfrac{1}{54}$
Finally, $P \leq \dfrac{1}{2}+6\left(\dfrac{1}{54}\right)=\boxed{\dfrac{11}{18}}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Problem applying Simpson's rule I am having a problem applying composite Simpson's rule for the integral $$I=\int_0^2\dfrac{1}{x+4}dx$$ with $n=4$.
The exact value of the integral is about $0.405$, however, Simpson's is giving $0.8$, and by increasing the number $n$ up to $8$, Simpson's gives $1.6$ !!
The formula I'm using is $$I\approx \frac{1}{3}[f(0)+f(2)+2(f(1))+4(f(0.5)+f(1.5))]$$Can anyone help me figure out the problem ?
|
The factor should be
$$
\frac{(b-a)}{3n}
$$
with $b-a=2$ and $n=4$ you get $1/6$ and not $1/3$. One half of $0.8$ is $0.4$ which is close to the exact value.
You can quickly check the correctness of the factor by using the constant function $f=1$.
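A quick Python check with the corrected factor $\frac{b-a}{3n}=\frac16$:

    from math import log

    f = lambda x: 1/(x + 4)
    a, b, n = 0, 2, 4
    h = (b - a)/n
    xs = [a + i*h for i in range(n + 1)]
    # composite Simpson: (h/3) [f0 + 4 f1 + 2 f2 + 4 f3 + f4]
    S = h/3*(f(xs[0]) + f(xs[n])
             + 4*sum(f(xs[i]) for i in range(1, n, 2))
             + 2*sum(f(xs[i]) for i in range(2, n, 2)))
    print(S, log(3/2))  # 0.40547... vs the exact value ln(3/2) = 0.40546...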
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Number of pairs of rational numbers that satisfy the given relation The number of pairs $(x,y)$ that satisfy : $2x^2 + y^2 + 2xy - 2y + 2 = 0$ is
a.) $0$
b.) $1$
c.) $2$
d.) None of the foregoing numbers
My attempt : I am not well versed in number theory , thus I took the most basic approach that I could see , that is I tried to divide the given equation into squares to and see if i could so something from that however I got stuck at
$ (x+y)^2 + x^2 - 2y + 2 $
Also I tried putting x = 0 and realised that there exists no real number y which could form the required pair with x = 0 atleast , similarly i could observe the same thing with y = 0.
Please suggest me a solution as well as a more general approach towards solving these kind of problems
My background is a degree in Electrical Engineering , however I have never taken any specific course in number theory.
|
$2x^2 + y^2 + 2xy - 2y + 2 = 0$
$2\left(x+\frac{y}{2}\right)^2+\frac12(y-2)^2=0$
A sum of two non-negative terms is zero only when both terms vanish, so the answer is unique:
$y=2,\ x=-\frac{y}{2}=-1$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
All possible ways to order numbers in an array with decreasing rows and columns Given positive integer numbers $1,2,\ldots,N\cdot M$. How many ways are there to order them in an $N\times M$ array given that the values decrease in each row from left to right and in each column from top to bottom? For small arrays one can just count, but I can't find a general rule. Thanks for any help.
|
This is the number of standard Young tableaux for a Young diagram with $N$ rows and $M$ columns. By the hook length formula, this is
$$
\frac{(NM)!}{\prod_{i=1}^M\prod_{j=1}^N(i+j-1)}\;.
$$
This is OEIS sequence A060854. That entry gives the alternative formula
$$
(NM)!\prod_{k=0}^{N-1}\frac{k!}{(M+k)!}\;.
$$
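As a quick check, for $N=M=2$ the hook length formula gives
$$\frac{4!}{(1)(2)(2)(3)}=2\;,$$
matching the two admissible fillings $\begin{pmatrix}4&3\\2&1\end{pmatrix}$ and $\begin{pmatrix}4&2\\3&1\end{pmatrix}$.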
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
How to simplify the expression $(\log_9 2 + \log_9 4)\log_2 (3)$ Our test asked to simplify $(\log_9 2 + \log_9 4)\log_2 (3)$.
I simplified the first parenthesis to be $\log_9 (8)$.
So, now I have $\log_9 (8) \cdot \log_2 (3)$ and I can change to base $10$ and get, $$\frac{\log 8}{\log 9} \cdot \frac{\log 3}{\log 2}$$ However, the problem shows that this should without a calculator simplify to $1.5$. Without using a calculator, I don't see how I'm supposed to get $3/2$ from the multiplication of these $2$ logs. Any thoughts?
|
$$(\log_{9}2+\log_9{4})\log_{2}3$$
this is equal to (using $8=2^3$ and $9=3^2$):
$$\log_{9}8\cdot \log_{2}3=\frac{3\log2}{2\log3}\cdot\frac{\log3}{\log2}=\frac{3}{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1737990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Potential Frobenius automorphism question Let $F$ be a finite field of characteristic $p$ of size $p^n$
for $n \ge 1$ with the base field $K \cong Z_p$.
I'm attempting to prove that the map $\phi: F → F$ sending $u$ to $u^p$ for each $u \in F$ is a $K$-automorphism of $F$ of degree $n$.
The thing is, I'm fairly certain $\phi$ is the Frobenius automorphism (correct me if I'm wrong), but doesn't that imply it's a $K$-automorphism? And if not, then I have no idea how to approach proving that it is.
|
You are correct that $\phi$ is the Frobenius automorphism. Of course, you have to show that it actually is an automorphism. If $\mathbb{F}$ is a field of characteristic $p$, the map $\phi : \mathbb{F} \to \mathbb{F}$ is always a field homomorphism, however it is not always an automorphism.
Show that $\phi$ is a field homomorphism. Any field homomorphism is injective. Since $F$ is finite and $\phi$ is injective, $\phi$ must then be surjective and hence $\phi$ is an automorphism. Finally, $\phi$ fixes the base field $K \cong \mathbb{Z}_p$ pointwise, since $a^p = a$ for every $a \in \mathbb{Z}_p$ by Fermat's little theorem, so $\phi$ is indeed a $K$-automorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is the Klein bottle homeomorphic to the union of two Mobius bands attached along boundary circle? Question: Determine whether the Klein bottle is homeomorphic to the union of two Mobius bands attached along their boundary circles.
The Klein bottle is the quotient space
$$
K=I^2 /{\sim}, \quad (x,0)\sim(x,1), \; (0,y)\sim(1,1-y), \; \forall x,y\in I
$$
The Möbius band is the quotient space
$$
M=I^2 /{\sim}, \quad (0,y)\sim(1,1-y)
$$
What would be a good way to approach this question? I have not had any success constructing a map between the spaces.
Edit: I remember that a homeomorphism must preserve orientability, so this could be used to disprove a homeomorphism.
The Möbius band is non-orientable, as is the Klein bottle. What I am not sure about is whether, if we take the union of two non-orientable Möbius bands and attach their boundary circles, we still get a non-orientable surface.
I think the boundary-gluing step may switch the orientability. Then we would have an orientable surface, which therefore cannot be homeomorphic to the non-orientable Klein bottle.
I am unsure how to prove this in a formal way (with equations and notation etc.)
Would appreciate your help
|
This is a diagram of the Klein bottle; note that the diagonal lines divide it into 2 Möbius strips sharing a boundary:
So the answer is yes.
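For an explicit decomposition that avoids the picture (using horizontal cuts instead of the diagonals): in the square model of $K$ above, the strip $M_1=\{(x,y): \tfrac14\le y\le \tfrac34\}$ has only its vertical ends identified, by $(0,y)\sim(1,1-y)$, i.e. with a flip, so $M_1$ is a Möbius band. The two remaining strips $\{y\le\tfrac14\}$ and $\{y\ge\tfrac34\}$ are first joined into a single rectangle by $(x,0)\sim(x,1)$, and that rectangle's ends are again identified with a flip, giving a second Möbius band $M_2$. The boundary segments $y=\tfrac14$ and $y=\tfrac34$ are identified into a single circle, so $K=M_1\cup M_2$ glued along that boundary circle.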
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Upper half-plane $\overline{\mathbb{H}}$ with two boundary punctures Consider $\overline{\mathbb H}$ with two puncture $P_1$ and $P_2$ on the real line, with coordinates $z = x_1$ and $z = x_2$, respectively. Consider another copy of $\overline{\mathbb H}$ with two punctures $P_1$ and $P_2$ on the real line, with coordinates $z = x_1'$ and $z = x_2'$, respectively. Are these two surfaces the same Riemann surface?
Idea. I suspect they are, and it would suffice to exhibit the conformal map that takes the punctures into each other while preserving $\overline{\mathbb H}$. But I am not sure how to do this, could anybody help?
|
It's enough to consider the case $x_1^{\prime}=0, x_2^{\prime}=\infty$. If $x_1>x_2\in \mathbb{R}$, the map
$$ z\mapsto \frac{z-x_1}{z-x_2} $$
sends $x_1$ to $0$ and $x_2$ to $\infty$, and is conformal because the determinant of the corresponding matrix is $x_1-x_2>0$. And since
$$ \Im\frac{z-x_1}{z-x_2}=\frac{(x_1-x_2)\Im z}{|z-x_2|^2}$$
it follows that the map sends $\mathbb{H}$ into itself and $\mathbb{R}\cup\{\infty\}$ into itself.
If for instance $x_2=\infty$, then the map $z\mapsto z-x_1$ sends $x_1$ to $0$ and $x_2$ to $\infty$, and clearly preserves $\mathbb{H}$ and $\mathbb{R}\cup\{\infty\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Probability question using no-memory property of exponential distribution
A customer must be served first by server 1, then by server 2, and
finally by server 3. The amount of time required for service by server
$i$ is an exponential random variable with rate $\mu_i$, for $i = 1, 2, 3$.
Suppose you enter the system when it contains a single customer who is
being served by server 3.
Find the expected amount of time that you spend in the system. (Whenever you encounter a busy server, you must wait for the service in progress to end before you can enter service.)
Attempt:
$$\frac{1}{\mu_1}+\frac{1}{\mu_2}+\frac{1}{\mu_3}+\frac{\mu_1}{\mu_1+\mu_2}\,\frac{\mu_2}{\mu_2+\mu_3}\,\frac{1}{\mu_3}$$
|
Let $S$ be the remaining service time of the customer at server 3 when you enter, and let $T_i$ be your service time at server $i$. You only incur an extra wait at server 3 on the event $\{S>T_1+T_2\}$, and on that event the residual of $S$ is again exponential with rate $\mu_3$ by memorylessness. Conditioning on that event, the expected time in the system is $E(T_1+T_2+T_3)+E(\text{extra wait}\mid S>T_1+T_2)\,P(S>T_1+T_2)=\frac{1}{\mu_1}+\frac{1}{\mu_2}+\frac{1}{\mu_3}+\frac{\mu_1}{\mu_1+\mu_3}\,\frac{\mu_2}{\mu_2+\mu_3}\,\frac{1}{\mu_3}$, since the extra wait is $0$ on $\{S<T_1+T_2\}$.
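A quick Monte Carlo sanity check of this formula (rates chosen arbitrarily; this is a verification sketch, not part of the derivation):

    import random

    mu1, mu2, mu3 = 1.0, 2.0, 3.0
    E = lambda mu: random.expovariate(mu)

    N, total = 10**6, 0.0
    for _ in range(N):
        T1, T2, T3 = E(mu1), E(mu2), E(mu3)
        # by memorylessness, the remaining service at server 3 is again Exp(mu3)
        S = E(mu3)
        total += T1 + T2 + max(S - T1 - T2, 0) + T3
    print(total/N)  # simulated mean time in system

    print(1/mu1 + 1/mu2 + 1/mu3
          + mu1/(mu1 + mu3)*mu2/(mu2 + mu3)*1/mu3)  # formula above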
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Canonical Projection I hope you are well. I have some doubts, as I am new to algebra.
Let $V/W$ be the quotient vector space with the usual sum and product, where $W$ is a subspace of $V$ and the equivalence relation is the one defined by $W$.
How can I prove that the canonical projection, which sends a vector $v$ of $V$ to its equivalence class in $V/W$, is not injective but is surjective?
|
I recommend investigating the universal property of the quotient vector space.
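Concretely (assuming $W\neq\{0\}$, otherwise the projection is injective): the projection $\pi:V\to V/W$, $\pi(v)=v+W$, is surjective because every class $v+W \in V/W$ is, by definition, the image of its representative $v$; it is not injective because any nonzero $w\in W$ satisfies $\pi(w)=w+W=0+W=\pi(0)$ while $w\neq 0$.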
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Some commutator identities On the way to study Lang's algebra, I cannot solve this problem. See page 69.
Let $G$ be a group and denote the commutator of $x$ and $y$ by $[x,y]=xyx^{-1}y^{-1}$.
I want to prove that if $[x,y]=y, [y,z]=z, [z,x]=x$ then $x=y=z=e$.
I tried before posting it, but I don't have a clue. Please give me some hints or solution.
Thanks in advance.
|
Here is the approach I outlined in the comments above.
From the identity $[x,y]=y$, one sees that $xyx^{-1} = y^2$. I will write $y^x$ from now on, for $xyx^{-1}$; similarly, I write ${}^xy$ for $x^{-1}yx$.
We also see from this commutator relation that ${}^yx=yx$ and $x^y=y^{-1}x$. Similar identities can be deduced from the other two commutator relations.
So a quick check shows $[x,y^{-1}]=y^{-1}$, and so
\begin{align}
[[x,y^{-1}],z] &= y^{-1}(zyz^{-1}) \\
&= y^{-1}z^{-1}y
\end{align}
(The second equality comes from $y^z=z^{-1}y$). This means that
$$ [x,y^{-1},z]^y=z^{-1} $$
Similarly, we get
$$ [y,z^{-1},x]^z = x^{-1} $$
and
$$ [z,x^{-1},y]^x = y^{-1} $$
The Hall-Witt identity then shows $z^{-1}x^{-1}y^{-1}=1$, or
$$ z = x^{-1}y^{-1} $$
Let's put that value for $z$ into the final commutator relation; we get
\begin{align}
x &= [z,x] \\
&= [x^{-1}y^{-1},x] \\
&= x^{-1}y^{-1}xyxx^{-1} \\
&= x^{-1}(y^{-1}xy) \\
&= x^{-1}yx
\end{align}
(The last equality comes from ${}^yx=yx$). Conjugating both sides by $x$ shows $x=y$. Thus $y=[x,y]=1$, and hence $x=y=1$, and so $z=1$ as well; the group is trivial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Expanding $E[N^2]-E[N]$ I'm trying to prove $$\sum_{i=0}^{\infty}iP[N>i]=\frac{1}{2}(E[N^2]-E[N])$$
by expanding both the RHS and LHS and showing that they are equal. The first thing I did was multiply both sides by $2$ to get $$2\sum_{i=0}^{\infty}iP[N>i]=E[N^2]-E[N]$$
which made it simpler to expand the LHS. I ended up getting the following equality:
$$2\sum_{i=0}^{\infty}iP[N>i] = 2P[N=2] + 6P[N=3] + 14P[N=4] + \cdots$$
When trying to expand the LHS, however, I did not know how to proceed once I saw $E[N^2]$. So, I have 2 questions:
(1) Is the following true: $E[N^2]-E[N] = E[N^2 - N] = E[N(N-1)]?$
(2) Can $E[N^2]$ be expanded in a similar way to $E[N]$? i.e. $E[N^2] = 0^2P[N=0] + 1^2P[N=1] + 2^2P[N=2]+\cdots$? I know this is 100% incorrect, but I can't figure out how else it would be expanded.
Thank you!
|
Yes, what you wrote $\Bbb E[N^2]-\Bbb E[N]=\Bbb E[N(N-1)]$ is correct. You can expand it as $$\Bbb E\left[N(N-1)\right]=\sum_{i=1}^{\infty}i(i-1)P(N=i)=2P(N=2)+6P(N=3)+12P(N=4)+\dots$$ Now, go back and check where this $14P(N=4)$ comes from in the expansion of the LHS. It should be $12P(N=4)$.
Your expansion of $\Bbb E\left[N^2\right]$ is correct as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $a_n=\left(1-\frac{1}{\sqrt{2}}\right)\ldots\left(1-\frac{1}{\sqrt{n+1}}\right)$ then $\lim_{n\to\infty}a_n=?$ I have an objective type question:-
If $$a_n=\left(1-\frac{1}{\sqrt{2}}\right)\ldots\left(1-\frac{1}{\sqrt{n+1}}\right)$$ then $\lim_{n\to\infty}a_n=?$:-
A)$0$
B)limit does not exist
C)$\frac{1}{\sqrt\pi}$
D)1
My approach: as the product contains $1/\sqrt2$, the overall product would be irrational, and since the product also converges, by option elimination the answer would be $$C) \frac{1}{\sqrt \pi},$$ as it is the only irrational number among the options.
Now I want to ask: is my solution right? If it is right, then what is the proper method to actually solve the question? But if it is wrong, then what is the right solution?
A doubt also suddenly arises: is $0$ an irrational number?
|
$\displaystyle \ln a_n = \sum_{i=2}^{n+1} \ln(1-\frac{1}{\sqrt{i}})$ and $\displaystyle \ln(1-\frac{1}{\sqrt{i}})\sim -\frac{1}{\sqrt{i}}$
Therefore $\sum_{i=2}^{n+1} \ln(1-\frac{1}{\sqrt{i}})$ diverges to $-\infty$
Hence $\ln a_n\to -\infty$
Hence $a_n\to 0$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1738921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Prove that $\varphi(m)+ \tau(m)\leqslant m+1$
Prove that $\varphi(m)+ \tau(m)\leqslant m+1$ where $m\in \mathbb N$
I wrote $m:=p_1^{\alpha_1}\cdots p_s^{\alpha_s}$
$$\varphi(m)=p_1^{\alpha_1-1}(p_1-1)\cdots p_s^{\alpha_s-1}(p_s-1)$$
$$\tau(m)=(\alpha_1+1)\cdots(\alpha_s+1)$$
Then $\varphi(m)+ \tau(m)=\prod\limits_{i=1}^{s}p_i^{\alpha_i-1}(p_i-1)+\prod\limits_{i=1}^{s}(\alpha_i+1)\overset{?}{\leqslant}\prod\limits_{i=1}^{s} p_i^{\alpha_i}+1$
and I'm stuck here: how can I prove that $\prod\limits_{i=1}^{s}p_i^{\alpha_i-1}(p_i-1)+\prod\limits_{i=1}^{s}(\alpha_i+1)\overset{?}{\leqslant}\prod\limits_{i=1}^{s} p_i^{\alpha_i}+1$,
or maybe there is an easier way to prove that $\varphi(m)+ \tau(m)\leqslant m+1$?
|
Fix $m\in\mathbb{N}$ and let $A=\{a\in\mathbb{N}:a\leq m,\gcd{(a,m)}=1\},$ $B=\{a\in\mathbb{N}:a\mid m\}$. Now $\lvert A \rvert = \varphi(m)$ and $\lvert B \rvert = \tau(m)$. Hence $$\varphi(m)+\tau(m) = \lvert A \rvert + \lvert B \rvert = \lvert A \cup B \rvert + \lvert A\cap B \rvert.$$
But now note that $A\cup B\subseteq\{1,2,\ldots,m\}$ and $A\cap B = \{1\}$, so $\lvert A\cup B \rvert \leq m$ and $\lvert A\cap B \rvert = 1$. The result follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Finding the Dimension of a given space $V$ I am unsure how to solve this problem:
If $\vec{v}$ is any nonzero vector in $\mathbb{R}^2$, what is the
dimension of the space $V$ of all $2 \times 2$ matrices for which
$\vec{v}$ is an eigenvector?
What I have so far is:
$$ \left[
\begin{array}{ c c }
a & b \\
c & d
\end{array} \right]
\left[
\begin{array}{ c }
v_{1} \\
v_{2}
\end{array} \right] = \left[
\begin{array}{ c }
\lambda v_{1} \\
\lambda v_{2}
\end{array} \right] $$
And solving for this I get two equations with four unkowns.
$$ av_{1} + bv_{2} = \lambda v_{1}$$
$$cv_{1} + dv_{2} = \lambda v_{2} $$
I am not sure where to go from here. At first I solved for $a$ and $c$ in terms of $b$, $v_1$ and $v_2$ and $d$, $v_1$, and $v_2$ respectively and got dim = 2, but the answer is dim = 3. Any hints why this is?
|
Another solution (using the fact that the vector is non-zero, and we are in dimension 2) is to express the fact that
$ \left[\begin{array}{ c c } a & b \\ c & d \end{array} \right]
\left[\begin{array}{ c } v_1 \\ v_2 \end{array} \right]$
and $\left[\begin{array}{ c } v_1 \\ v_2 \end{array} \right]$
are collinear, by requiring both to be orthogonal to the same complement, which is spanned by $\left[\begin{array}{ c } v_2 \\ -v_1 \end{array} \right]$. This means that $a,b,c,d$ must satisfy the single equation (parametric in $v_1,v_2$)
$$ 0 = \left[\begin{array}{ c c } v_2 & -v_1 \end{array} \right]
\left[\begin{array}{ c c } a & b \\ c & d \end{array} \right]
\left[\begin{array}{ c } v_1 \\ v_2 \end{array} \right]
= v_2^2b+v_1v_2(a-d)-v_1^2c,$$
which is non-trivial because $v_1,v_2$ are not both $0$. The dimension of the space of solutions is therefore $4-1=3$.
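As a quick sanity check, take $\vec v=\left[\begin{array}{ c } 1 \\ 0 \end{array} \right]$: the equation reduces to $c=0$, so $V$ is the space of upper triangular matrices $\left[\begin{array}{ c c } a & b \\ 0 & d \end{array} \right]$, which is indeed $3$-dimensional, and each such matrix has $\vec v$ as an eigenvector with eigenvalue $a$.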
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
A question about Rudin's proof of Lusin's theorem In page 56 of Rudin's Real and Complex Analysis, it's stated:
[I]f $f$ is a complex measurable function and $B_n=\{x:|f(x)|>n\}$, then $\bigcap B_n= \varnothing$.
My question is why?
|
By definition, a complex function is a map $X\to\Bbb C$. If $f$ is complex, then the set $\bigcap B_n$ is the set of all points $x$ such that $|f(x)| > n$ for every $n\in\Bbb N$. There is no complex number $z$ with the property that $|z|>n$ for every natural number $n$. Hence there is no such $x\in\bigcap B_n$, so $\bigcap B_n = \varnothing$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Asymptotic estimate for the sum $\sum_{n\leq x} 2^{\omega(n)}$ How to find an estimate for the sum $\sum_{n\leq x} 2^{\omega(n)}$, where $\omega(n)$ is the number of distinct prime factors of $n$.
Since $2^{\omega(n)}$ is multiplicative, computing its value at prime power, we see that $2^{\omega(n)}=\sum_{d\mid n}\mu^2(d)$. Then
\begin{align}
\sum_{n\leq x}2^{\omega(n)}&=\sum_{n\leq x}\sum_{d\mid n}\mu^2(d)=\sum_{d\leq x}\mu^2(d)\sum_{\substack{n\leq x\\ d\mid n}} 1\\
&=\sum_{d\leq x}\mu^2(d)\left\lfloor \frac{x}{d}\right\rfloor=\sum_{d\leq x}\mu^2(d)\left(\frac{x}{d}+O(1)\right)\\
&=x\sum_{d\leq x}\frac{\mu^2(d)}{d}+O\left(\sum_{d\leq x}\mu^2(d)\right)
\end{align}
I get stuck here, the series $\sum_{n=1}^\infty\frac{\mu^2(n)}{n}$ is not convergent, I don't know how to estimate the first term.
|
It is not difficult to see that $$\sum_{d\leq x}\mu^{2}\left(d\right)=x\frac{6}{\pi^{2}}+O\left(\sqrt{x}\right)\tag{1}$$ (for a proof see here) so by Abel's summation we have $$\sum_{d\leq x}\frac{\mu^{2}\left(d\right)}{d}=\frac{\sum_{d\leq x}\mu^{2}\left(d\right)}{x}+\int_{1}^{x}\frac{\sum_{d\leq t}\mu^{2}\left(d\right)}{t^{2}}dt
$$ hence using $(1)$ we have $$
\begin{align*}
\sum_{d\leq x}\frac{\mu^{2}\left(d\right)}{d}= & \frac{6}{\pi^{2}}+O\left(\frac{1}{\sqrt{x}}\right)+\frac{6}{\pi^{2}}\int_{1}^{x}\frac{1}{t}dt+O\left(\int_{1}^{x}\frac{1}{t^{3/2}}dt\right)\\
= & \frac{6}{\pi^{2}}\log\left(x\right)+O\left(1\right)
\end{align*}
$$ hence $$\sum_{n\leq x}2^{\omega\left(n\right)}=\frac{6}{\pi^{2}}x\log\left(x\right)+O\left(x\right).$$
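A quick numerical illustration of the main term (the $O(x)$ error is still clearly visible at this scale):

    from math import log, pi
    from sympy import primefactors

    x = 10**5
    s = sum(2**len(primefactors(n)) for n in range(1, x + 1))
    print(s, 6/pi**2 * x * log(x))  # same order of magnitude; the ratio tends to 1 slowly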
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Why does this way of solving inequalities work? Here is what I had to prove.
Question: For positive reals $a$ and $b$ prove that $a^2+b^2 \geq 2ab$.
Here is how my teacher did it:
First assume that it is, in fact, true that $a^2+b^2 \geq 2ab$. Therefore $a^2+b^2-2ab \geq 0$. This says that $(a-b)^2$ is greater than or equal to zero, which is true. Hence what was assumed originally is true.
Why does this method work?
I cannot understand how you can first assume that it is true and then prove that it is.
|
You cannot, actually. You should rather start with $(a-b)^2 \geq 0$ and then show that $a^2+b^2 \geq 2ab$, not the other way round. (That said, every step in your teacher's argument is reversible, each line being equivalent to the next, which is why the computation can legitimately be read backwards; starting from $(a-b)^2 \geq 0$ makes the logic explicit.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 10,
"answer_id": 1
}
|
Find all pairs of prime numbers $p, q$ such that $p+q = 18(p−q)$. Find all pairs of prime numbers $p, q$ such that $p+q = 18(p−q)$.
It is clear that both primes must be odd, since if we take $q=2$ there is no solution; hence $p-q$ is even. I got $p=19$ and $q=17$ as one pair, considering the fact that $p+q$ must be a multiple of $18$. So I am considering the numbers $18\cdot 2$, $18\cdot4$, $18\cdot6$, ... as $p+q$ and then testing whether such a pair of odd primes exists or not.
Is this the right approach since I am stuck up?
|
$p+q = 18p - 18q$
$17p = 19q$
Since $17$ and $19$ are coprime, $19 \mid p$ and $17 \mid q$; therefore $p = 19n$ and $q=17n$ for some positive integer $n$.
Then $p$ and $q$ are both prime only when $n=1$.
Hence the only solution in primes is $(p,q) = (19,17)$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Noether normalization lemma proof I would like to prove the following statement without using Noether normalization lemma (cause it is actually the base case in the induction process of the proof of this lemma).
Let $k$ a field with an infinity of elements, and $A=k[a_1]$ a finitely generated $k$-algebra. Then there exist $b_1\in A$ such that
*
*$\lbrace b_1 \rbrace$ is algebraically independent over $k$; and
*$A$ is a finite $k[b_1]$-module.
Let $\varphi : k[X_1]\to k[a_1]$ defined by $\varphi (X_1)=a_1$. $\varphi$ is surjective so using the isomorphism theorem we have
$$k[X_1]/\ker \varphi \cong k[a_1].$$
$\bullet$ If $\ker\varphi =\lbrace 0\rbrace$ then $b_1=a_1$ works.
$\bullet$ If not, there exists $P\in\ker\varphi$; since $k[X_1]$ is a principal ideal domain and $k$ is a field, we can assume that $P$ is a monic polynomial with $\deg P \geq 1$ and $\ker\varphi = \langle P\rangle$.
To prove $2.$, I note that $k\subset k[a_1]$ and, by definition of $\ker\varphi$, there exists a monic polynomial $P$ such that $P(a_1)=0$, so $a_1$ is integral over $k$. That implies that $A$ is a finitely generated $k$-module. So here I would choose $b_1\in k^*$ to get $k[b_1]=k$ and hence $2.$ But if I do so, $1.$ is not true.
To prove $1.$: $\lbrace b_1 \rbrace$ being algebraically independent over $k$ means that the map $k[X_1]\to k[a_1]$ defined by $X_1\mapsto b_1$ is injective, i.e. for $P\in k[X_1]$, $P(b_1)=0\implies P=0$. Here I don't know how to choose $b_1$.
Any help will be greatly appreciate.
|
Noether normalization says:
Let $k$ be an infinite field, $A = k[a_1, ... , a_n]$ a finitely generated $k$-algebra. Then for some $0 \leq r \leq n$, there exist $r$ elements $b_1, ... , b_r \in A$, algebraically independent over $k$, such that $A$ is finitely generated as a module over $k[b_1, ... , b_r]$.
If $S \subseteq A$, the ring $k[S]$ is by definition the intersection of all subrings of $A$ containing $k$ and $S$. If it happens that $r = 0$, then $k[b_1, ... , b_r]$ just means $k[\emptyset] = k$, so Noether normalization just says that $A$ is already finitely generated as a module over $k$. You need to consider the possibility that $r = 0$ when you formulate what Noether normalization is saying in the case $n = 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Fisherman Combinations This is a real problem. Ten fishermen are going fishing for nine days. Each day, the ten will split into five pairs. For example, on Day 1 Fisherman A will fish with B, C with D, E with F, G with H, I with J. How should the fishermen pair off each day so that each fisherman fishes with every other fisherman exactly once over the nine day trip?
I figured it out easily by trial and error for six fishermen over five days and for eight fishermen over seven days, but I can't see the pattern that would allow me to generalize to larger groups, and trial and error isn't getting me there.
|
The problem is $n$ fisherman, $n-1$ days.
We need even $n$ for this to make sense, the following construction works: Take a regular polygon with $n-1$ sides. Then pick one color for each of the $n-1$ sides of the polygon, color each diagonal with the color of the side that is parallel to it. Then every vertex will have exactly one diagonal of each color coming out of it, except for the color of the edge opposite to it (it will have no diagonals of this color). So now take an $n$'th vertex and color the line from the new vertex to every old vertex with the unique color the old vertex is missing. This gives you a solution. To see this, associate every fisherman with a vertex, and every color with one of the days.
Here is a drawing of the construction:
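For anyone who wants an explicit schedule, a minimal Python sketch of the equivalent "circle method" (fix one fisherman and rotate the rest one step per day):

    def round_robin(n):
        """n even: returns n-1 days of perfect pairings."""
        players = list(range(1, n + 1))
        days = []
        for _ in range(n - 1):
            days.append([(players[i], players[n - 1 - i]) for i in range(n // 2)])
            # keep the first player fixed, rotate the others
            players = [players[0]] + [players[-1]] + players[1:-1]
        return days

    for day, pairs in enumerate(round_robin(10), 1):
        print("Day", day, pairs)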
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$\int_{\bigcup_{n=1}^{\infty}E_n}f=\sum_{n=1}^{\infty}\int_{E_n}f$ given $f$ positive and measurable I'm learning about measure theory (specifically Lebesgue intregation) and need help with the following problem:
Let $f:\mathbb{R}\rightarrow[0,+\infty)$ be measurable and let $\{E_n\}$ be a collection of pairwise disjoint measurable sets. Prove that $\int_{\bigcup_{n=1}^{\infty}E_n}f=\sum_{n=1}^{\infty}\int_{E_n}f.$
For convenience I set $E=\bigcup_{n=1}^{\infty}E_n$.
This problem looks like an application of the monotone convergence theorem but I'm having a hard time applying it. I need to find a sequence of functions that is positive an nondecreasing but I don't know how to define it.
|
Set $f_N=\sum_{n=1}^{N} f\chi_{E_n}$.
As $f\chi_{E_n}\geq 0$ for each $n$, $f_1\leq f_2 \leq f_3 \leq \cdots$.
Now observe that $f_N \rightarrow f \chi_{E}$ pointwise as $N\rightarrow \infty$, where $E=\bigcup_{n=1}^{\infty}E_n$ (here the pairwise disjointness of the $E_n$ is used). Thus by the MCT we obtain the following equality:
$\lim_{N} \int f_N \,d\mu= \int f\chi_{E} \,d\mu$.
Since the $E_n$ are disjoint, $\int f_N \,d\mu=\sum_{n=1}^{N}\int_{E_n}f\,d\mu$ by finite additivity, so letting $N\to\infty$ gives the conclusion you seek.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1739995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
question on vector calculus notation I just have a question about the vector calculus notation:
$$(u \cdot \nabla)u$$ Is that the same as $( \nabla \cdot u)u$?
|
No, these are not the same. The vector $(u\cdot\nabla)u$ is the directional derivative of $u$ in the direction of $u$. It may not be (and probably isn't) parallel to $u$ at each point. The vector $(\nabla \cdot u)u$ is $u$ multiplied by its divergence. It is always parallel to $u$ at each point.
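To see the difference concretely, in components:
$$\big((u \cdot \nabla)u\big)_i=\sum_j u_j\,\frac{\partial u_i}{\partial x_j}, \qquad \big((\nabla \cdot u)\,u\big)_i=\Big(\sum_j \frac{\partial u_j}{\partial x_j}\Big)\,u_i.$$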
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How would I find the characteristic equation of this Recurrence Relation? Find and solve a recurrence relation for the number of $n$-digit ternary sequences with no consecutive digits being equal.
Since the sequences are ternary, meaning only $3$ possible entries for each position, e.g. $0$, $1$, $2$, the first position has $3$ possible choices and then each of the remaining positions has $2$ choices (anything except the digit just placed), so I got the following recurrence relation:$$a_n=2\cdot a_{n-1}$$
Now I am trying to solve the homogeneous linear recurrence model, but I am stuck in finding a characteristic equation. How would I do this?
|
In the usual way...just assume that $a_n = A\cdot r^n$ therefore you have:
$$
a_n = 2a_{n-1} \rightarrow Ar^n = 2A\cdot r^{n - 1} = \frac{2A\cdot r^n}{r}\\
1 = \frac{2}{r} \\
r = 2
$$
So $a_n = A\cdot2^n$ (which makes perfect sense, since that recursion relation is clearly a simple exponential with common ratio $2$). With the initial condition $a_1=3$ this gives $A=\frac{3}{2}$, i.e. $a_n=3\cdot 2^{n-1}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Question about idempotent matrices.
Let $E$ be the $m \times m$ matrix that extracts the "even part" of an $m$-vector $Ex = (x+Fx)/2$, where $F$ is the $m\times m$ matrix that flips $[x_1,\dotsc ,x_m]^{T}$ to $[x_m,\dotsc ,x_1]^T$. Is $E$ idempotent?
|
Yes, of course. What happens if you extract the "even part" of a vector which is already "even"?
$$E x=\left[\begin{matrix}(x_1+x_m)/2 \\ (x_2+x_{m-1})/2 \\ \vdots \\ (x_m+x_1)/2 \end{matrix}\right]$$
$$E^2 x=\left[\begin{matrix}((x_1+x_m)/2 + (x_m+x_1)/2)/2 \\ ((x_2+x_{m-1})/2 + (x_{m-1}+x_2)/2)/2 \\ \vdots \\ ((x_m+x_1)/2+(x_1+x_m)/2)/2 \end{matrix}\right]=\left[\begin{matrix}(x_1+x_m)/2 \\ (x_2+x_{m-1})/2 \\ \vdots \\ (x_m+x_1)/2 \end{matrix}\right]=E x$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
row space and kernel of a matrix A Given a real $m \times n$ matrix $A$ and a vector $z \in R^n$, how can I show that
$\big(x^Tz = 0 \text{ for every } x \in \ker{A}\big) \quad \Rightarrow \quad \exists y \in R^m : z = A^Ty $ ?
I thought to start with
$Ax= 0$ and left multiply each side by a vector $ y \in R^m$ to obtain $y^TAx= 0$. The equation is now scalar and we can transpose both sides into $x^TA^Ty= 0$. Now I don't know how to go on.
|
You have to show that $z\in \text{Im}(A^T)$. Recall that $\ker(A)=\text{Im}(A^T)^{\perp}$. Since $x^Tz=0$, i.e. $\left\langle x,z\right\rangle =0$, for every $x\in\ker(A)$, the vector $z$ is orthogonal to all of $\ker(A)=\text{Im}(A^T)^{\perp}$. Hence $z\in (\text{Im}(A^T)^{\perp})^{\perp}=\text{Im}(A^T)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Prove that the determinant of an invertible matrix $A$ is equal to $±1$ when all of the entries of $A$ and $A^{−1}$ are integers.
Prove that the determinant of an invertible matrix $A$ is equal to $±1$ when all of the entries of $A$ and $A^{−1}$ are integers.
I can explain the answer but would like help translating it into a concise proof with equations.
My explanation:
The fact that $\det(A) = ±1$ implies that when we perform Gaussian
elimination on $A$, we never have to multiply rows by scalars. This means
that for each column, the pivot entry is created by the previous column’s
row operations and can be brought into place by swapping rows.
(And the first column must already contain a $1$). Therefore, we never need to
multiply by a non-integral value to perform Gaussian elimination.
|
Let $A\in\mathbb Z^{n\times n}$ such that $A^{-1}\in\mathbb Z^{n\times n}$. Note that the determinant of an integer matrix is an integer, so $\det\colon\mathbb Z^{n\times n}\to \mathbb Z$. Now, $1=\det(\mathbb I)=\det(A\cdot A^{-1})=\det(A)\cdot\det(A^{-1})$. Since both $\det(A)$ and $\det(A^{-1})$ are integers, they can only be $1$ or $-1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Find $\int_{-1}^{1} \frac{\sqrt{4-x^2}}{3+x}dx$ I came across the integral $$\int_{-1}^{1} \frac{\sqrt{4-x^2}}{3+x}dx$$ in a calculus textbook. At this point in the book, only u-substitutions were covered, which brings me to think that there is a clever substitution that one can use to knock off this integral.
I was able to find the answer using $x= 2 \sin \theta$, doing a bit of polynomial long division and then Weiestrass substitution. However, this solution was rather ugly and I don't think this was what the author had in mind.
What else could I try here? Wolfram gives a closed form $$\pi + \sqrt{5} \left [ \tan ^{-1} \left ( \frac{1}{\sqrt{15}} \right ) - \tan ^{-1} \left ( \frac{7}{\sqrt{15}} \right ) \right ]$$
and the indefinite integral was
$$\sqrt{4-x^2} - \sqrt{5} \tan ^{-1} \left ( \frac{3x+4}{\sqrt{5}\sqrt{4-x^2}}\right ) + 3 \sin ^{-1} \left ( \frac{x}{2} \right )+C $$
|
One option is to substitute
$$x=2\sin(u)$$
$$dx=2\cos(u)du $$
This means our integral is now
$$
=\int _{-\pi/6}^{\pi/6} \frac{2 \cos(u)\cdot 2(\cos^2(u))^{1/2}}{2\sin(u)+3}\,du $$
If we simplify (on this interval $\cos(u)\geq 0$):
$$4 \int _{-\pi/6}^{\pi/6} \frac{\cos^2(u)}{2\sin(u)+3}\,du $$
Now we carry out another substitution of
$$s=\tan(u/2)$$
$$ds=1/2 \sec^2(u/2) du$$
This will give the "more" manageable integral of
$$8 \int _{\sqrt{3}-2}^{2-\sqrt{3}} \frac{(s^2-1)^2}{(s^2+1)^2(3s^2+4s+3)}\,ds$$
We can now use partial fractions, which gives three terms (one of which requires another substitution); two of them are of the correct form for $\tan^{-1}$, and one is odd, so it vanishes over this symmetric interval. This should now give
$$=\pi + \sqrt{5}\left(\tan^{-1}\frac{1}{\sqrt{15}}-\tan^{-1}\frac{7}{\sqrt{15}}\right)$$
That you said wolfram agreed with.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Does excluding or including zero from the definitions of "positive" and "negative" make any consequential difference in mathematics? I was absolutely certain that zero was both positive and negative. And zero was neither strictly positive nor strictly negative.
But today I made a few Google searches, and they all say the same thing: zero is neither positive nor negative.
I suppose that the definitions of "positive" and "negative" depend on which country we're living in. In the U.S., "positive" and "negative" exclude zero. In France, "positive" and "negative" include zero.
My question therefore is: does excluding or including zero from the definitions of "positive" and "negative" make any consequential difference in mathematics?
|
When multiplying value A by a positive value B, the sign of the result is identical to the sign of A:
*
*If A is positive, then the result is positive
*If A is negative, then the result is negative
When multiplying value A by a negative value B, the sign of the result is opposite to the sign of A:
*
*If A is positive, then the result is negative
*If A is negative, then the result is positive
Let's prove by contradiction that $0$ is not positive:
*
*Assume that $0$ is positive
*$-1$ is negative
*Therefore $(-1)\cdot0$ is negative
*But $(-1)\cdot0=0$, and $0$ is positive
Let's prove by contradiction that $0$ is not negative:
*
*Assume that $0$ is negative
*$-1$ is negative
*Therefore $(-1)\cdot0$ is positive
*But $(-1)\cdot0=0$, and $0$ is negative
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
if $f(x)=x^2$ and $g(x)=x\sin x+\cos x$ then number of points where $f(x)=g(x)$? The question is if $f(x)=x^2$ and $g(x)=x\sin x+\cos x$ then number of points where $f(x)=g(x)$?
My approach:-
$$f(x)=g(x)\implies x^2=x\sin x+\cos x\implies x^2-x\sin x-\cos x=0$$
Let $$h(x)=x^2-x\sin x-\cos x$$Now we have to find out roots of $h(x)$.
To do it I differentiate it and get the minimum point of $h(x)$ as follows:-
$$h'(x)=x(2-\cos x)=0\implies x=0 \ \text{ or } \ \cos x=2,$$ which is impossible. At $x=0$, $h(x)=-1<0$. So from the graph we have one piece of information: the function attains its minimum at $0$. Now where do I go from here; what is the next step?
|
Equality occurs at the roots of
$$h(x):=f(x)-g(x)=x^2-x\sin(x)-\cos(x).$$
For a differentiable function, the minima and maxima alternate, and there is at most one root between consecutive extrema, at most one before the first, and at most one after the last. A root exists exactly when the function changes sign in the corresponding interval.
Then
$$h'(x)=2x-x\cos(x)=x(2-\cos(x))$$
has a single root at $x=0$ (the other factor cannot change sign).
From
$$h(-\infty)=\infty,h(0)=-1,h(\infty)=\infty$$ we conclude that there are exactly two roots.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proof of $[(a \; \text{mod} \; n)+(b \; \text{mod} \; n)] \equiv (a+b)\; \text{mod}\; n$ I'm currently self-studying a course in cryptography, and I understand the importance of understanding modular arithmetic fully. I have proved many operations on modular arithmetic, but one I am stuck on is why:
$[(a \; \text{mod} \; n)+(b \; \text{mod} \; n)]=(a+b)\; \text{mod}\; n$, and the full proof of this.
I have had a few ideas on it, but have not proved it fully. It may be obvious, but I am only $15$.
Thanks for any help.
|
Let
$$a=q_an+r_a$$
$$b=q_bn+r_b$$
for quotients $q_a,q_b$ and remainders $0\le r_a,r_b<n$ of $a,b$ modulo $n$.
Then
$$\begin{align}
a+b&=(q_a+q_b)n+(r_a+r_b)\\
&=\left(q_a+q_b+\delta\right)n
+\left(r_a+r_b-\delta n\right)
\end{align}$$
for $$\delta=\left\lfloor\frac{r_a+r_b}{n}\right\rfloor$$ where
$\lfloor x\rfloor$ is the greatest integer (less than or equal to $x$) or
floor function
and $$\left(r_a+r_b-\delta n\right)=(a+b)\text{ mod }n.$$
Now equality only holds if $\delta=0$:
it is not true in general that $r_a+r_b=r_a+r_b-\delta n$.
For example, try $a=b=1,n=2$.
What does hold is congruence modulo $n$:
$$(a \; \text{mod} \; n)+(b \; \text{mod} \; n)
\equiv (a+b)\; \pmod n$$
which we have just proved.
In mathematics, we say
$$a\equiv b\pmod n \qquad\iff\qquad n | b-a$$
i.e. if their difference is divisible by $n$, but in some computer science contexts,
$$r_a=a\text{ mod }n$$
means that $r_a$ is the remainder on dividing $a$ by $n$ using the
division algorithm,
with either $0\le r_a<n$ or sometimes $-\lfloor\frac n2\rfloor\le r_a<\lfloor\frac n2\rfloor$ or even $-n<r_a<n$, which can be a source of confusion.
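In code, using the example $a=b=1$, $n=2$ (Python's % gives the remainder for non-negative operands):

    a, b, n = 1, 1, 2
    print((a % n) + (b % n))                        # 2: equality fails...
    print((a + b) % n)                              # 0
    print(((a % n) + (b % n)) % n == (a + b) % n)   # True: ...but the congruence holds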
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1740989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Inverse Vectorization Vec^-1 Hope that you will find this post in good health.
I am Mr. Adnan from Pakistan, with a research background in control systems. I am working on a problem in which Hadamard weights are used.
While solving that problem, I am facing a difficulty in calculating the inverse vectorization/stacking operator, i.e. $\text{vec}^{-1}$. I would be highly thankful if you could please guide me on how to calculate, for example,
$$\text{vec}^{-1}(Wq)$$
where $W$ is non singular matrix and $q$ is controller having state space representation
\begin{align}
X'&=Ax+Bu, \\
Y&=Cx+Du
\end{align}
I just have two queries to calculate above inverse
*
*what should the order of $q$ be in order to calculate the inverse vectorization operator $\text{vec}^{-1}$
*if the order of the controller $q$ is known, then please tell me how to solve for $\text{vec}^{-1}$: any formula, relation or command.
I know these two commands, $\text{vec}$ and $\text{vec}^{-1}$, but that is not working in my problem.
-Thanks in Advance
|
I do not really understand what your problem is.
Let $A$ be defined as:
$$A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}$$
Then $\operatorname{vec}(A)$ is defined as
$$\operatorname{vec}(A) = \begin{bmatrix} a_{11} \\ a_{21}\\ a_{12} \\ a_{22} \end{bmatrix}$$
As a result the inverse is defined as
$$\operatorname{vec}^{-1}(\operatorname{vec}(A)) = A$$
Now, when I typed this down, I think I understand your problem. You have the matrix $B = Wq$ which is vectorized and want to inverse it. This will only work if you know the size of $W$ and $q$ before they were vectorized. Then you know size which the result should be after the inverse vectorization.
Furthermore, what is $q$ exactly? You say that it is a state space system? What do you imagine the result of $Wq$ to be then?
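To make the shape requirement concrete, a minimal NumPy sketch of $\text{vec}$ and its inverse (NumPy's Fortran order matches the column-stacking convention):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])

    v = A.flatten(order="F")           # vec(A): stack the columns -> [1 3 2 4]
    B = v.reshape(A.shape, order="F")  # vec^{-1}: only possible knowing the target shape
    print(np.array_equal(A, B))        # True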
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is a clever way to show that if $0 \leq x \leq 1$ then $x^n \leq x$ for every $n > 1$ ? ($n \in \mathbb{R}$) What is a clever way to show that if $0 \leq x \leq 1$ then $x^n \leq x$ for every $n > 1$ ? ($n \in \mathbb{R}$)
I tried to do it with derivatives but I didn't manage to show why this is true...
|
Use induction on $n$:
*
*$n=1$: ok
*$n \to n+1$: By induction, $x^n \le x$. Multiply by $x \le 1$ and get $x^{n+1} \le x$. Note that $x\ge0$ is essential here.
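For a real exponent $n>1$, as the question allows, one can argue directly instead of by induction: for $0<x\le 1$ we have $\ln x\le 0$, hence $x^n=e^{n\ln x}\le e^{\ln x}=x$ because $t\mapsto e^t$ is increasing; the case $x=0$ is immediate.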
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
the appropriateness of t-test Two different Universities record data on students who are unable to attend classes due to illness. University 1 recorded absences over ten consecutive days. This data is recorded as N1 below. University 2 recorded absences over six consecutive days. This data is recorded as N2 in below.
$N1: 9, 9, 5, 5, 5, 6, 16, 8, 8, 7$ (10 points)
$N2: 13, 11, 14, 13, 12, 11$ (6 points)
a) Assuming equal variances, using a two sampled t-test, determine whether there is a difference between means.
I have done this one and found that: The absolute value of the test statistic in this example, 3.2066, is greater than the critical value of 2.1448, so we reject the null hypothesis and conclude that the two population means are different at the 0.05 significance level.
b) Could you please comment on the appropriateness of the test?
c) In a related study, you observe that when students are ill, they take on average five days off. How would this affect the appropriateness of using a t-test described above.
Could anyone give me some advices/insights on solving the b, c questions?
|
b) The t-test (or z-test for that matter) is generally only used if the samples come from a population that is normally distributed, or if the samples are relatively large. In this case, there is no indication from the question that the number of absences per day follows a normal distribution, and the sample sizes are rather small, so it may be more appropriate to use a non-parametric test (such as the Wilcoxon rank-sum test).
c) If a student who is sick takes an average of 5 days off, then the number of absences from one day to the next are not independent. This further makes the t-test inappropriate, as the t-test is designed for cases where the sample items are independent from one another.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that the g.c.d is equal to the prime factorization raised to the minimum of two powers for the prime factorization of $a$ and $b$ as $$a = p_1^{\alpha_1}p_2^{\alpha_2}\cdots{p_t}^{\alpha_t}$$ and $$b = p_1^{\beta_1} p_2^{\beta_2} \cdots p_s^{\beta_s}.$$ I want to prove that $d = \gcd(a,b)$ is equal to $${p_1}^{\min(\alpha_1, \beta_1)}{p_2}^{\min(\alpha_2, \beta_2)}\cdots {p_r}^{\min(\alpha_r,\beta_r)}$$ so I begin by proving that $d$ is a common divisor of $a$ and $b$.
$$\frac{a}{d} = \frac{p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_t^{\alpha_t}}{p_1^{\min(\alpha_1,\beta_1)}p_2^{\min(\alpha_2,\beta_2)} \cdots {p_s}^{\min(\alpha_s,\beta_s)}}$$ and
$$\frac{b}{d} = \frac{p_1^{\beta_1}p_2^{\beta_2} \cdots p_s^{\beta_s}}{p_1^{\min(\alpha_1,\beta_1)}{p_2}^{\min(\alpha_2,\beta_2)} \cdots p_s^{\min(\alpha_s,\beta_s)}}$$
I don't know if $a$ is always divisible by $d$ in $\mathbb{Z^+}$. I'm also very new to proofs; can I have some guidance?
|
Remember what divisibility means: $d$ divides $a$ if I can find some integer $c$ such that $dc = a$.
Let's look at one term of your $\frac{a}{d}$ fraction: $\frac{p_1^{\alpha_1}}{p_1^{\min(\alpha_1, \beta_1)}}$.
Since $\min(\alpha_1, \beta_1) \le \alpha_1$, this fraction has an integer value of $p_1^{\alpha_1 - \min(\alpha_1, \beta_1)}$
You can then do this for every prime $p$ in your expansions of both $\frac{a}{d}$ and $\frac{b}{d}$
You're not quite done, though: you've only shown that $d$ is a common divisor of $a$ and $b$ after you're done with these steps. You still need to show that it's the greatest common divisor: this is true if a common divisor of $a$ and $b$, call it $z$, divides $d$. If every common divisor divides $d$ then you can be sure that $d$ is your greatest common divisor. I'll leave this step to you.
Let me know in a comment if you have any questions!
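If it helps to see the statement in action, here is a small Python sketch (assuming SymPy's `factorint` for the exponent maps) that builds $d$ from the minima of the exponents and checks it against `math.gcd`:
```python
# Sketch: gcd(a, b) from prime factorizations via min of exponents.
from math import gcd
from sympy import factorint  # factorint(n) -> {prime: exponent}

def gcd_from_factorizations(a, b):
    fa, fb = factorint(a), factorint(b)
    d = 1
    for p in set(fa) & set(fb):          # primes dividing both a and b
        d *= p ** min(fa[p], fb[p])      # exponent is min(alpha_i, beta_i)
    return d

for a, b in [(360, 84), (2**5 * 3**2, 2**2 * 3**4 * 7), (17, 19)]:
    assert gcd_from_factorizations(a, b) == gcd(a, b)
print("all checks passed")
```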
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Given three coordinates $(a,b,c)$, $(d,e,f)$, and $(l,m,n)$, what is the center $(h,k,i)$ of the circle in 3D that contains these three points? I have tried the following:
$$(a-h)^2+(b-k)^2+(c-i)^2=r^2$$
$$(d-h)^2+(e-k)^2+(f-i)^2=r^2$$
$$(l-h)^2+(m-k)^2+(n-i)^2=r^2$$
Subtracted equation 2 from 1, equation 3 from equation 2, and equation 3 from equation 1 to get:
$$2(a-d)h+2(b-e)k+2(c-f)i=a^2-d^2+b^2-e^2+c^2-f^2$$
$$2(d-l)h+2(e-m)k+2(f-n)i=d^2-l^2+e^2-m^2+f^2-n^2$$
$$2(a-l)h+2(b-m)k+2(c-n)i=a^2-l^2+b^2-m^2+c^2-n^2$$
Closer inspection revealed to me that the left side forms a $3 \times 3$ matrix and the right side a $3 \times 1$ matrix.
I tried solving for $(h,k,i)$ by taking the inverse of the left-side matrix and multiplying it by the right-side $3 \times 1$ matrix, but it turns out the left-side matrix's rows are linearly dependent, so its determinant is $0$ (it is singular); hence, it has no inverse.
Please guide me on how to solve this problem.
Thank you in advance.
|
For points $(a,b,c)$ and $(d,e,f)$, their perpendicular bisector can be found by:
$$\begin{align*}
(x-a)^2 + (y-b)^2+(z-c)^2 &= (x-d)^2 + (y-e)^2 + (z - f)^2\\
a^2-2ax+b^2-2by+c^2 -2cz &= d^2 - 2dx + e^2 - 2ey + f^2 - 2fz\\
2(a-d)x +2(b-e)y + 2(c-f)z &= a^2+b^2+c^2-d^2-e^2-f^2
\end{align*}$$
Do the same and find the perpendicular bisector of $(d,e,f)$ and $(l,m,n)$.
For three points $(a,b,c)$, $(d,e,f)$ and $(l,m,n)$, their common plane $px+qy+r = z$ can be found by:
$$\begin{pmatrix}
a&b&1\\d&e&1\\l&m&1
\end{pmatrix}\begin{pmatrix}
p\\q\\r
\end{pmatrix} = \begin{pmatrix}
c\\f\\n
\end{pmatrix}$$
This might be a problem if their common plane is perpendicular to the $xy$-plane, but in that case you can switch to using $x$ or $y$ as the subject of the equation and solve again. You are using a computer anyway.
Now you have three equations in the form of $tx + uy + vz = w$, and you should be able to solve for their intersection $(x,y,z) = (h,k,i)$.
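Here is a numerical sketch of the whole procedure (assuming NumPy; it uses the cross product of two edge vectors as the plane's normal, which sidesteps the vertical-plane caveat above):
```python
# Sketch: center of the circle through three 3D points via three plane equations.
import numpy as np

def circumcenter_3d(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)          # normal of the common plane
    # Rows: two perpendicular-bisector planes and the common plane.
    A = np.array([2 * (p1 - p2), 2 * (p2 - p3), n], dtype=float)
    b = np.array([p1 @ p1 - p2 @ p2, p2 @ p2 - p3 @ p3, n @ p1], dtype=float)
    return np.linalg.solve(A, b)

p1, p2, p3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
c = circumcenter_3d(p1, p2, p3)
print(c, [np.linalg.norm(c - np.asarray(p)) for p in (p1, p2, p3)])
# Expected center (1/3, 1/3, 1/3), with equal distances to all three points.
```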
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Why do division algebras always have a number of dimensions which is a power of $2$? Why do number systems always have a number of dimensions which is a power of $2$?
* Real numbers: $2^0 = 1$ dimension.
* Complex numbers: $2^1 = 2$ dimensions.
* Quaternions: $2^2 = 4$ dimensions.
* Octonions: $2^3 = 8$ dimensions.
* Sedenions: $2^4 = 16$ dimensions.
|
The particular family of algebras you are talking about has dimension over $\Bbb R$ a power of $2$ by construction: the Cayley–Dickson construction, to be precise. Each application of the construction doubles the dimension of the algebra, so starting from $\Bbb R$ you get dimensions $1, 2, 4, 8, 16, \ldots$ (Note, by the way, that the sedenions contain zero divisors, so of the algebras you list only $\Bbb R$, $\Bbb C$, $\Bbb H$, and $\Bbb O$ are actually division algebras.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Joint distribution of multivariate normal distribution So the question asks: let $X = (X_1, \ldots, X_{2n}) \sim N(0, \Sigma)$ (multivariate normal distribution with mean vector $(0,\ldots,0)$ and covariance matrix $\Sigma$), where $n \ge 1$. Find the joint distribution of the vector $(X_1 + \cdots + X_n,\; X_{n+1} + \cdots + X_{2n})$.
So so far I got:
The random vector $X = (X_1, ... ,X_{2n})$ will have a multivariate Gaussian distribution if the joint distribution of $X_1, ... ,X_{2n}$ has density
$$\begin{align}f_X(x_1, \ldots ,x_{2n}) &= \frac 1{(2\pi)^{2n/2} \det(\Sigma)^{1/2}}\exp\left(-\tfrac12 (x - \mu)^T\Sigma^{-1} (x - \mu)\right) \\ &= \frac 1{(2\pi)^{n} \det(\Sigma)^{1/2}}\exp\left(-\tfrac12 (x - \mu)^T\Sigma^{-1} (x - \mu)\right)\end{align}$$
Is this right? And how am I supposed to find the joint distribution of the vector $(X_1 + \cdots + X_n,\; X_{n+1} + \cdots + X_{2n})$?
|
Write
\begin{align}
\begin{pmatrix}
Y_1 \\
Y_2
\end{pmatrix} :=
\begin{pmatrix}
X_1 + \cdots + X_n \\
X_{n + 1} + \cdots + X_{2n}
\end{pmatrix} =
\begin{pmatrix}
1 & \ldots & 1 & 0 & \ldots & 0 \\
0 & \ldots & 0 & 1 & \ldots & 1
\end{pmatrix}X := AX.
\end{align}
Then use the fact that if $X \sim N(\mu, \Sigma)$, then $AX \sim N(A\mu, A\Sigma A^T)$. So the remaining task is to find $A\Sigma A^T$; to simplify the result, you may partition $\Sigma$ according to the blocking structure of $A$ as
$$\Sigma = \begin{pmatrix}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{pmatrix}.$$
As a note, for problems of this type you usually don't need to work with the density function directly; the fact that the normal family is closed under linear transformations in general suffices.
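A quick numerical sanity check of the $A\Sigma A^T$ computation (a sketch assuming NumPy; the covariance matrix is just a random positive-definite example):
```python
# Sketch: check that Cov(Y) = A Sigma A^T for Y = (X1+...+Xn, X_{n+1}+...+X_{2n}).
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((2 * n, 2 * n))
Sigma = M @ M.T                      # a random positive-definite covariance

A = np.zeros((2, 2 * n))
A[0, :n] = 1                         # first row sums X_1..X_n
A[1, n:] = 1                         # second row sums X_{n+1}..X_{2n}

cov_Y = A @ Sigma @ A.T
# Block check: entries are the elementwise sums of Sigma_11, Sigma_12, etc.
blocks = np.array([[Sigma[:n, :n].sum(), Sigma[:n, n:].sum()],
                   [Sigma[n:, :n].sum(), Sigma[n:, n:].sum()]])
print(np.allclose(cov_Y, blocks))    # True
```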
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solve $\cos 2x - 3\sin x - 1 = 0$ using addition formula
Solve $\cos 2x - 3\sin x - 1 = 0, \quad 0^{\circ} \le x \le 360^{\circ}$
\begin{align} \cos 2x - 3\sin x - 1 = 0
&\iff 1 - 2\sin^2 x - 3\sin x - 1 = 0 \\
&\iff- 2\sin^2 x - 3\sin x = 0 \\
&\iff2\sin^2 x + 3\sin x =0\\
&\iff\sin x(2\sin x + 3) =0 \\
&\iff\sin x = 0 \lor 2 \sin x + 3= 0
\end{align}
I could go on but the book gives the answer, $0^{\circ}, 180^{\circ}, 360^{\circ}$ and I am mystified as to where these answers have come from.
|
$\sin x = 0 \iff x = 180^{\circ}n$ for an integer $n$, which gives $0^{\circ}, 180^{\circ}, 360^{\circ}$ in the given range.
$2\sin x + 3 = 0 \Rightarrow \sin x = -\frac 32$, which is impossible, since $-1 \le \sin x \le 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Basis & Dimension for Joint Subspaces Assumption: Assume that $V_1$ and $V_2$ are subspaces of $\mathbb{R}^\mathbb{3}$
Question: "Suppose that $V_1$ is the subspace of $\mathbb{R}^\mathbb{3}$ given by
$V_1 = \{(2t-s, 3t, t+2s): t, s \in \mathbb{R}\}$ and
$V_2$ is the subspace of $\mathbb{R}^\mathbb{3}$ given by $V_2 = \{(s, t, t): t, s \in \mathbb{R}\}$. Find a basis for $V_1 \cap V_2$ and $dim(V_1 \cap V_2)$."
Where I'm currently at:
I have found a basis for $V_1$ and a basis for $V_2$,
which are respectively $\{(2,3,1), (1,0,0)\}$ for $V_1$ and $\{(-1,0,2), (0,1,1)\}$ for $V_2$, and
that $\dim(V_1 \cap V_2)$ should be less than or equal to each of $\dim(V_1)$ and $\dim(V_2)$.
Can you kindly help with the basis of $V_1 \cap V_2$ and $dim(V_1 \cap V_2)$ ?
Any help will be very appreciated
Thank you
|
The bases for $V_1$ and $V_2$ should be $\{(2,3,1), (-1,0,2)\}$ and $\{(1,0,0), (0,1,1)\}$, respectively. These are two-dimensional planes, and their intersection should in general be a $1$-dimensional line.
To find the intersection, you can transform them to equations in terms of $x,y,z$. For $V_1$, we have $x=2t-s, y=3t, z=t+2s$. Some manipulation should give us $z=\frac{5}{3}y-2x$.
For $V_2$, we see that it is $y=z$.
Plugging this into the equation for $V_1$, we get the intersection $x=\frac{1}{3}y$. Combining with $y=z$, the basis is $\{(\frac{1}{3}, 1, 1)\}$ since the vectors can be written as $\{(\frac{1}{3}t, t, t)\}$.
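For a machine check, here is a short SymPy sketch verifying that $(\frac13, 1, 1)$ lies in both subspaces:
```python
# Sketch: verify that (1/3, 1, 1) lies in both subspaces.
from sympy import Matrix, Rational, symbols, solve

t, s = symbols('t s')
v = Matrix([Rational(1, 3), 1, 1])

# V1 = {(2t - s, 3t, t + 2s)}: solve for the parameters t, s.
sol1 = solve([2*t - s - v[0], 3*t - v[1], t + 2*s - v[2]], [t, s])
# V2 = {(s, t, t)}: first coordinate is free, second equals third.
in_V2 = (v[1] == v[2])

print(sol1, in_V2)   # {t: 1/3, s: 1/3} and True -> v spans the intersection
```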
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1741950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
subsets of non regular language I know that there are many languages that are context-free but not regular,
like $\{a^n b^n : n>0\}.$
But I want to know whether every context-free, non-regular language has infinitely many non-regular subsets.
Thank you.
|
Yes.
If $L$ is context-free but non-regular, then for every positive integer $n$, let $L_n\subseteq L$ be a language obtained by removing exactly $n$ strings from $L$, so that $|L\setminus L_n|=n$. If some $L_n$ were regular, then $L$ would be regular too, since an automaton deciding $L_n$ could be expanded (non-deterministically, for simplicity) to also accept the finitely many strings removed from $L$, and the resulting automaton would decide $L$. Thus we conclude that no $L_n$ is regular, and since the $L_n$ are pairwise distinct, $L$ has infinitely many non-regular subsets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
A basic inequality: $a-b\leq |a|+|b|$ Do we have the following inequality:
$$a-b\leq |a|+|b|$$
I have considered $4$ cases:
* $a\leq0,b\leq0$
* $a\leq0,b>0$
* $a>0,b\leq0$
* $a>0,b>0$
and in each case the inequality holds. However, I want to make sure about that.
|
Use the triangle inequality:
$$
a - b \leq \vert a - b \vert \leq \vert a \vert + \vert b \vert.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Conditional probability with "at least" We split 8 colored (and distinguishable from each other[each ball is unique]) balls to 4 kids, 2 balls for each kid.
There are 2 blue balls, 2 red balls, 2 yellow balls, 2 green balls [still each ball is unique]
A) It is known that Amy got at least 1 red ball, what is the
probability that also John got at least 1 red ball?
B) It is known that Amy got balls of different colors, what is the
probability that also John got balls of different colors?
What I have done is as follows:
A) Let $A$ be "Amy got at least 1 red ball" ($B$ the same for John); hence $P(A)=1-{6\over8}\cdot{5\over7}={13\over28}$ (all the cases minus the case in which neither the first nor the second ball is red).
So $P(B|A)={{{(2\cdot{2\over8}\cdot{6\over7})}^2}\over{13\over28}}=0.395$.
This is because there is a $2\over8$ chance of getting a red ball first and then a $6\over7$ chance of getting a non-red ball; this is multiplied by $2$ since we could do it the other way around (first non-red, second red), and then I square it all since the same chance applies to John.
I feel like I may have made this too complicated. Is it even the right answer? I'm not sure.
B) The first ball can be any color ($8\over8$), and the second ball has to be different from the first, so there is a $6\over7$ chance of that.
But when I tried to find the new $P(B|A)$ ($A$ is now "Amy got balls of different colors", and similarly $B$ for John), I got something weird like ${({6\over7}\cdot{6\over7})\over{6\over7}}={6\over7}$, which I really don't feel good about.
|
We solve the first problem. Let $A$ be the event Amy got at least one red, and $B$ the event John got at least one red. We are asked to find $\Pr(B\mid A)$, which by definition is equal to $\Pr(A\cap B)/\Pr(A)$.
We compute the two required probabilities. You found $\Pr(A)$ correctly. Now we need $\Pr(A\cap B)$. This is the probability Amy and John each got one red.
Imagine that Amy drew a ball, then another, then John drew a ball, then another. The probability Amy got exactly one red is $\frac{2}{8}\cdot \frac{6}{7}+\frac{6}{8}\cdot\frac{2}{7}$, that is, $\frac{24}{56}$.
Given that Amy got exactly one red, the probability John got a red is $\frac{1}{6}\cdot \frac{5}{5}+\frac{5}{6}\cdot \frac{1}{5}$, that is, $\frac{10}{30}$.
Thus $\Pr(A\cap B)=\frac{24}{56}\cdot \frac{10}{30}=\frac{1}{7}$. Now we can find $\Pr(B\mid A)=\frac{1/7}{13/28}=\frac{4}{13}\approx 0.308$.
It might be a little smoother to use binomial coefficients. For example, the probability that Amy gets exactly one red is $\frac{\binom{2}{1}\binom{6}{1}}{\binom{8}{2}}$.
The second problem is in a sense somewhat easier than the first. Change the meanings of $A$ and $B$ in the obvious way. You can find $\Pr(B\mid A)$ directly, without finding $\Pr(A)$ and $\Pr(A\cap B)$.
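Since there are only $8! = 40320$ orderings of the balls, part (a) can also be checked by exact enumeration (a sketch; Amy is taken to hold positions 0 and 1 of each ordering and John positions 2 and 3):
```python
# Sketch: exact enumeration over all orderings of the 8 distinguishable balls.
from itertools import permutations
from fractions import Fraction

balls = ['R1', 'R2', 'B1', 'B2', 'Y1', 'Y2', 'G1', 'G2']
a_count = ab_count = 0
for perm in permutations(balls):
    amy, john = perm[0:2], perm[2:4]
    A = any(b.startswith('R') for b in amy)    # Amy has at least one red
    B = any(b.startswith('R') for b in john)   # John has at least one red
    a_count += A
    ab_count += A and B

print(Fraction(ab_count, a_count))   # 4/13, matching (1/7) / (13/28)
```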
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Closed form solution for generating function The recursion formula for some probability $P_n(s)$ is $$P_{n+1}(s) = qP_n(s+1) + pP_n(s-1).$$ Define the generating function $$G(z,n) = \sum_{s=-\infty}^{\infty} z^s P_n(s)$$ and prove the recursion relation $$G(z,n+1) = (pz + qz^{-1}) G(z,n)$$ Obtain a closed form solution for $G(z,n)$.
Attempt: I've proven the recursion relation for $G$; what I am unsure of is how to obtain the closed-form solution. I thought I could take derivatives of $G$ with respect to $z$, say, and generate a differential equation whose solution would correspond to $G$ itself, but I have not managed this. I could also try an ansatz and fix its parameters through the recursion relation, but again I didn't manage to obtain a complete solution.
Thanks for any tips!
|
Observe that, for fixed $z$, $G(z,n)$ is a geometric sequence in $n$; hence $G(z,n)=(pz+qz^{-1})^nG(z,0)$. In particular, if the process starts at $s=0$, i.e. $P_0(s)=\delta_{s,0}$, then $G(z,0)=1$ and $G(z,n)=(pz+qz^{-1})^n$.
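A sketch with SymPy: expanding $(pz+qz^{-1})^n$ and reading off the coefficient of $z^s$ recovers the binomial probabilities $P_n(s)$ of the corresponding random walk (the value $p=1/3$ is just an arbitrary example):
```python
# Sketch: P_n(s) as the coefficient of z^s in (p*z + q/z)^n (walk started at 0).
from sympy import symbols, expand, Rational, binomial, Poly

z = symbols('z')
p = Rational(1, 3); q = 1 - p
n = 4
# Multiply by z^n so negative powers of z become a genuine polynomial.
H = Poly(expand((p*z + q/z)**n * z**n), z)

for s in range(-n, n + 1, 2):
    up = (n + s) // 2                        # number of +1 steps in the walk
    assert H.coeff_monomial(z**(s + n)) == binomial(n, up) * p**up * q**(n - up)
print("coefficients of z^s match the binomial walk probabilities")
```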
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why can we work with a countable transitive model of some finite fragment of $\mathrm{ZFC}$, and why does one exist? When we say "let $M$ be a countable transitive model of some finite fragment of $\mathrm{ZFC}$": why can we work with such a model $M$, and why does it exist?
Can someone explain these questions, or point me to where I can find information about them?
Thanks
|
If $T$ is a finite fragment of $\mathsf{ZFC}$, then by the reflection theorem there are infinitely many ordinals $\alpha$ such that $V_\alpha \models T$. Then by the downward Löwenheim–Skolem theorem, there is a countable elementary substructure $N \prec V_\alpha$, so $N \models T$. Let $M$ be the Mostowski collapse and $\pi : N \rightarrow M$ be the Mostowski isomorphism. $M$ is then a countable transitive model of $T$.
The question "why can you work with a countable transitive model ..." depends on exactly what you are doing.
I presume you are using countable transitive models for some type of consistency result in conjunction with forcing arguments.
For example, if you are trying to show $\mathsf{ZFC + CH}$ is consistent, you could start by assuming it is not consistent. Then there is some finite $T \subseteq \mathsf{ZFC}$ so that $T \cup \{\mathsf{CH}\}$ proves a contradiction (since proofs are finite). (In order to get forcing to work, you may need to extend $T$ to a bigger finite theory $T'$.) Then, by the above argument, let $M$ be a countable transitive model of $T'$. Cohen's technique of forcing shows that if $M \models T'$, then the forcing extension satisfies $M[G] \models T \cup \{\mathsf{CH}\}$. But this is a contradiction, since you produced a model of $T \cup \{\mathsf{CH}\}$ even though you assumed $T \cup \{\mathsf{CH}\}$ was inconsistent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Unique solution for circuits in Linear Algebra A standard application of Linear Algebra is circuits and Kirchhoff's Laws. Does anyone know of a proof of uniqueness of a solution of a system given by these laws? There are many, many examples, but little theory regarding why there is always a unique solution.
For reference (Wikipedia)
* At any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node.
* The algebraic sum of the products of the resistances of the conductors and the currents in them in a closed loop is equal to the total emf available in that loop.
My thoughts are as follows:
Initially these laws set up two systems, $Ax = 0$ and $Bx= b$, respectively. If there are $n$ nodes and $m$ currents, then $A$ is an $n \times m$ matrix. If there are $l$ loops in the circuit, then $B$ is an $l \times m$ matrix. I tried to work with the augmented matrix
$$ \left[\begin{matrix} A \\B\end{matrix} \right|\left.\begin{matrix}0\\b\end{matrix}\right]$$
But I see no reason why this always has a unique solution.
|
There is a statement that works for your purposes (Proposition 9.4) presented in Markov Chains and Mixing Times (Levin, Peres, Wilmer), which you can follow by reading pages 115–118.
Perhaps there are other, better references for your purpose, but this is the first that comes to mind.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Finding conditions on the eigenvalues of a matrix Consider the $2\times2$ matrix
$$A = \begin{pmatrix}a&b\\c&d\end{pmatrix}$$
where $a,b,c,d\ge 0$. Show that $\lambda_1\ge\max(a,d)>0$ and $\lambda_2\le\min(a,d)$.
So the eigenvalues are given by the characteristic polynomial
$$(a-\lambda)(d-\lambda)-bc=0\implies \lambda^2 - (a+d)\lambda + ad - bc=0$$
And so the solutions to this equation are
$$\lambda_{1,2} = \frac{(a+d)\pm\sqrt{(a+d)^2-4(ad-bc)}}2 = \frac{(a+d)\pm\sqrt{(a-d)^2+4bc}}2$$
We may therefore simplify this to:
$$\lambda_1+\lambda_2=a+d$$
But now how would one simplify this into the conditions above?
|
Note that by the formula you derived one of the conditions implies the other. So suppose $\lambda_1\geq \lambda_2$. Now you know that
$$ \begin{pmatrix}
a-\lambda_1 & b\\
c & d-\lambda_1
\end{pmatrix} $$
is singular, i.e. there is a real number $r$ s.t.
$$ \begin{pmatrix}
a-\lambda_1 \\
c
\end{pmatrix} =r\begin{pmatrix}
b\\
d-\lambda_1
\end{pmatrix} . $$
The case $a=d$ is easy, because then $2\lambda_1\geq \lambda_1+\lambda_2=2a$, which implies your claim.
Now suppose that $a > d$ and $a>\lambda_1\geq d$. It is enough to consider this case, since $a>d\geq \lambda_1$ would imply that $\lambda_2\geq a>d\geq\lambda_1$, contradicting $\lambda_1 \geq \lambda_2$.
Now you know that $a-\lambda_1=rb$, and therefore $b\neq 0$, so $r=(a-\lambda_1)/b$. But this is a positive number, and in order to have $c=r(d-\lambda_1)$ with $c\geq 0$ you must conclude that $c=d-\lambda_1=0$ and therefore $d=\lambda_1$. But then $\lambda_2=a+d-\lambda_1=a>\lambda_1$, which contradicts $\lambda_1 \geq \lambda_2$.
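As a side note, both inequalities also follow directly from the explicit formula in the question, since $\sqrt{(a-d)^2+4bc}\ge|a-d|$ when $bc\ge 0$. A quick random check (sketch, assuming NumPy):
```python
# Sketch: random check of lambda_1 >= max(a, d) and lambda_2 <= min(a, d).
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10_000):
    a, b, c, d = rng.uniform(0, 10, size=4)      # nonnegative entries
    # bc >= 0 makes the discriminant nonnegative, so eigenvalues are real
    lam2, lam1 = np.sort(np.linalg.eigvals(np.array([[a, b], [c, d]])).real)
    assert lam1 >= max(a, d) - 1e-9
    assert lam2 <= min(a, d) + 1e-9
print("both inequalities held in all trials")
```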
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1742811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Is there an analogue of Frucht's theorem for sandpile groups? In other words, is it the case that for every abelian group $G$, there exists a graph $H$ such that the sandpile group of $H$ is isomorphic to $G$? If not, is the truth of falsity of this still an open question?
|
If $G=\bigoplus_{i=1}^N \mathbb{Z}_{k_i}$, then $G$ is the sandpile group of the (multi)graph on $N$ non-sink vertices in which the $i$-th vertex is joined to the sink by $k_i$ edges (and there are no edges between the non-sink vertices, so the reduced Laplacian is $\operatorname{diag}(k_1,\ldots,k_N)$).
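For a concrete check: the sandpile group is the cokernel of the reduced Laplacian, read off from its Smith normal form. A sketch assuming SymPy's `smith_normal_form`:
```python
# Sketch: sandpile group of the star multigraph via the reduced Laplacian.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

k = [2, 3, 4]   # vertex i joined to the sink by k_i edges, no other edges
L_reduced = Matrix.diag(*k)          # reduced Laplacian (sink row/column removed)
print(smith_normal_form(L_reduced, domain=ZZ))
# diag(1, 2, 12): the group is Z_2 x Z_12, isomorphic to Z_2 x Z_3 x Z_4 as claimed.
```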
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1743148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Topology/ Metric on possibly unbounded functions I am trying to think of a topology (possibly metric, as I am more used to think about things in metric spaces) on possibly unbounded functions (on $\mathbb{R}$) such that
1) convergence in that defined topology implies pointwise convergence and
2) the limit of continuous functions is continuous itself.
I know that a sequence of functions converges to a function $f$ under the metric derived from the uniform norm if and only if converges to $f$ uniformly. And I know the existence of the Uniform limit theorem. (link ) Also convergence in uniform norm implies pointwise convergence.
The problem is that the sup norm/ uniform norm is defined on bounded functions, whereas, I want to think about general functions.
Could I use the metric $d'=\dfrac{d}{1+d}$, where $d$ is the uniform metric, setting $d'=1$ when $d$ is infinite? The "normalization" makes sure that I do not get infinite values when computing the distance between functions, and given how this new metric is defined, it seems it would satisfy 1) and 2). Do you think I am on the right track?
Also, is there a more obvious metric that I am missing?
|
Here is a metric which describes uniform convergence on $\mathbb R$: $D(f,g)=\sup\lbrace \min\lbrace|f(x)-g(x)|,1\rbrace: x\in\mathbb R\rbrace$. Since uniform convergence implies pointwise convergence, and a uniform limit of continuous functions is continuous, this metric satisfies both 1) and 2). (Your proposed $d/(1+d)$ works for the same reason: both truncations describe exactly uniform convergence.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1743388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How do I find the equation of an osculating circle when I'm given the parabola? This is a question given out by my calculus professor, and I'm completely stumped as to how I need to go about solving it.
Let the parabola $y=x^2$ be parameterized by $r(t)=ti+t^2j$. Find the equation of the osculating circle for the parabola at $t=1$ by performing the following steps.
a) Find the formula for $\kappa(t)$, the curvature of the parabola and compute for $\kappa(1)$
b) The radius of the osculating circle we want is $\rho={1\over \kappa(1)}$. Find the center of the osculating circle by computing the unit normal $N(1)$ and calculating the sum $C=r(1)+\rho N(1)$.
c) Use the center and the radius of the osculating circle to write the equation of the circle in standard form.
To begin with I'm not sure how to find the formula for the curvature of the parabola, and even from there I don't know what to do. Where do I begin?
|
As the comment from Rory Daulton said, you can look up the formulas for calculating curvature here. These same formulas must be in your class notes or textbook, too, or else your teacher wouldn't be asking you this question.
There are two choices for the curvature formula: one if you choose to think of the curve in parametric form $\mathbf{r}(t) = \big(x(t), y(t)\big)=(t,t^2)$, and a different one if you choose to think of the curve as the graph of $y=x^2$.
Anyway, use whichever formula you want to calculate curvature $\kappa$ at $t=1$.
Then, the radius of the osculating circle is $\rho = 1/\kappa$.
Find the tangent at $t=1$. It's in the direction $(1,2)$. So the unit tangent is $(1/\sqrt5, 2/\sqrt5)$.
Rotate 90 degrees to get the unit normal, so this is $(-2/\sqrt5, 1/\sqrt5)$.
To get to the center of the osculating circle, travel a distance $\rho$ along this normal vector from the point $(1,1)$.
Now you know the center and radius of the osculating circle, so you can write down its equation.
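Carrying the steps out numerically (a sketch; it uses the standard parametric curvature formula $\kappa = |x'y''-y'x''|/(x'^2+y'^2)^{3/2}$):
```python
# Sketch: osculating circle of y = x^2 at t = 1.
import math

t = 1.0
xp, yp = 1.0, 2*t          # r'(t) = (1, 2t)
xpp, ypp = 0.0, 2.0        # r''(t) = (0, 2)

kappa = abs(xp*ypp - yp*xpp) / (xp**2 + yp**2)**1.5   # = 2 / 5^(3/2)
rho = 1 / kappa                                       # = 5*sqrt(5)/2

speed = math.hypot(xp, yp)
N = (-yp/speed, xp/speed)            # unit tangent rotated 90 degrees
center = (t + rho*N[0], t**2 + rho*N[1])

print(kappa, rho, center)
# kappa ≈ 0.1789, rho ≈ 5.5902, center = (-4.0, 3.5)
# Circle: (x + 4)^2 + (y - 7/2)^2 = 125/4.
```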
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1743515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to calculate intersection and union of probabilities? Let's say I have a switch A with 3 legs; each leg has a $0.8$ chance of being connected (and then electricity will flow), and we need only $1$ connected leg for A to transfer the electricity. (Sorry, I didn't explain this that well; I'm having a hard time translating this problem.)
So I calculated the chance of A transferring electricity as $1-(0.2)^3 = \frac{124}{125}$, which I think is correct.
The problem is that I wanted to say that A will transfer electricity if at least $1$ of its legs is connected, so it's like saying $0.8 + 0.8 + 0.8$, which is obviously wrong (over $1$), so I used the formula that says to do this: $0.8 + 0.8 + 0.8 - (0.8)^2 -(0.8)^2 - (0.8)^2 + (0.8)^3 = {124\over125}$ too.
My only problem is that I didn't understand why I had to use that formula, and why I could multiply probabilities for the intersection but couldn't just sum them for the union.
Thanks in advance
|
Hint: whether one of the three legs is connected doesn't depend on the other two legs, so the events are independent, and for an intersection of independent events we multiply their probabilities. For a union we cannot simply add, because the events can occur simultaneously; the subtracted and added terms in your formula (inclusion–exclusion) correct for exactly that overlap.
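A tiny check that the complement method and inclusion–exclusion agree (sketch):
```python
# Sketch: P(at least one of three independent legs connected), p = 0.8 each.
p = 0.8
complement = 1 - (1 - p)**3              # 1 - 0.2^3
incl_excl = 3*p - 3*p**2 + p**3          # union needs the correction terms
print(complement, incl_excl, 124/125)    # all equal 0.992
```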
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1743631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
How to show that $\lim_{n \to \infty}\frac{2^{n^2}}{n!} = \infty$? I know $\lim \limits _{n \to \infty}\frac{2^{n^2}}{n!} = \infty$, but I need to prove it using the definition of the limit: show that for every $M$ there is an $N$ such that $a_n>M$ for all $n \ge N$.
I tried looking at $\frac{a_{n+1}}{a_n}$ and found that $a_{n+1}>\frac{1}{2}a_n$, but I don't know how to proceed from there; can someone help?
|
Let's prove that $2^{n^2}\ge(n+1)!$, for every $n$. The case $n=0$ is obviously true. We also have
$$
2^{(n+1)^2}=2^{n^2}\cdot 2^{2n+1}\ge 2^{2n+1}(n+1)!
$$
and we're done if we show that $2^{2n+1}\ge n+2$. Again, the base step is trivial; moreover
$$
2^{2n+3}=4\cdot 2^{2n+1}\ge 4(n+2)=4n+8\ge n+3
$$
Therefore
$$
\frac{2^{n^2}}{n!}\ge n+1,
$$
so given any $M$, we have $\frac{2^{n^2}}{n!} > M$ for all $n \ge M$, which is exactly the definition of the limit being $\infty$.
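A quick sanity check of the bound $2^{n^2}\ge(n+1)!$ for small $n$ (sketch; Python integers make the comparison exact):
```python
# Sketch: verify 2^(n^2) >= (n+1)! exactly for small n.
from math import factorial

for n in range(15):
    assert 2**(n*n) >= factorial(n + 1)   # hence 2^(n^2)/n! >= n+1
print("bound holds for n = 0..14")
```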
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1743804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Let $\{ y_k \}$ satisfy $ y_k\le {2^k\over M}y_{k-1}^\beta$; then $\lim_{k\to \infty}y_k=0$. Let $\{ y_k \}^\infty _{k=0} \subset (0,\infty) $ be a sequence that satisfies
$$ y_k\le {2^k\over M}y_{k-1}^\beta , $$
where $k=1,2,...$, and $\beta\gt 1$ , $M\gt0$.
Prove that if $M\gt2^{\beta\over \beta-1}y_0^{\beta-1}$, then $\lim_{k\to \infty}y_k=0$.
|
Try to compare with
$$
ca^ky_k=u_k\le u_{k-1}^β= (ca^{k-1}y_{k-1})^β
\\\iff\\
y_k\le c^{β-1}a^{(k-1)β-k}y_{k-1}^β=(c^{β-1}a^{-β})(a^{β-1})^ky_{k-1}^β
$$
which is successful by identifying the relations $2=a^{β-1}$, $M^{-1}= a^{-β}c^{β-1}$, that is
$$
a=2^{\frac1{β-1}},\quad c=M^{-\frac1{β-1}}2^{\frac{β}{(β-1)^2}}
$$
It follows that
$$
u_n\le u_0^{β^n}
$$
which converges to zero for $0\le u_0<1$, i.e.,
$$
0\le M^{-\frac1{β-1}}2^{\frac{β}{(β-1)^2}}y_0<1
$$
which can be transformed into the given condition.
Since $a>1$ the sequence $y_k$ converges to zero even if $u_k$ is bounded, so that also equality in the condition is permissible.
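A numerical sketch of the extreme case $y_k = \frac{2^k}{M}y_{k-1}^\beta$ (equality), with $M$ chosen just above the threshold $2^{\beta/(\beta-1)}y_0^{\beta-1}$ (the values $\beta=2$, $y_0=1$ are arbitrary examples):
```python
# Sketch: iterate the worst case y_k = (2^k / M) * y_{k-1}^beta.
beta, y0 = 2.0, 1.0
M = 1.01 * 2**(beta / (beta - 1)) * y0**(beta - 1)   # just above the threshold

y = y0
for k in range(1, 21):
    y = (2**k / M) * y**beta
    print(k, y)
# y_k decays to 0, super-exponentially once it drops well below the threshold.
```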
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1743905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Making groups of 2: probability of getting a certain group? Let us say we have $n$ people $p_1, \ldots, p_n$, where $n$ is even, and we pair them up in a uniformly random way. We are interested in events such as "$p_i$ is grouped with $p_j$" or "$p_i$ is grouped with $p_k$". Say we have $m$ such 'groups' whose occurrence we wonder about. How can we calculate this probability? For example, what is the probability that some group is $(p_1,p_2)$ OR $(p_2,p_3)$?
|
There are $n-1$ equally likely choices for $p_2$'s partner. Of these, $2$ are "favourable" ($p_1$ and $p_3$), and the two events are mutually exclusive, so the probability is $\frac{2}{n-1}$.
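A quick simulation for $n=8$ (sketch): shuffling the people and pairing consecutive entries produces a uniformly random perfect matching:
```python
# Sketch: estimate P(p2 paired with p1 or p3) for n = 8; exact answer 2/7.
import random

n, trials, hits = 8, 200_000, 0
for _ in range(trials):
    people = list(range(1, n + 1))
    random.shuffle(people)
    pairs = {frozenset(people[i:i+2]) for i in range(0, n, 2)}
    if frozenset({1, 2}) in pairs or frozenset({2, 3}) in pairs:
        hits += 1
print(hits / trials, 2 / (n - 1))   # both ≈ 0.2857
```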
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1744001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Two multi-variable limit problems $$\lim_{(x,y)\to(0,0)}\frac{2x^2y}{x^4+3y^2}$$
I'm getting that the limit does not exist, because along $(0,y)\to(0,0)$ it is $0$ but along $(x,x^2)\to(0,0)$ it is $1/2$. Since $0$ does not equal $1/2$, the limit does not exist.
$$\lim_{(x,y)\to(0,0)}\frac{x^2y^2}{2x^2+y^2}$$
I'm getting that the limit does not exist, because along $(0,y)\to(0,0)$ it is $0$ but along $(x,x^2)\to(0,0)$ it is $1$. Since $0$ does not equal $1$, the limit does not exist.
Can someone check my answers and tell me if my reasoning is correct?
Thanks!
|
Your reasoning for the first limit is correct: along $(0,y)$ the function is identically $0$, while along $(x,x^2)$ it equals $\frac{2x^4}{4x^4}=\frac12$, so the limit does not exist. (To be fully formal you would phrase this via the definition of the limit, but exhibiting two paths with different limits is enough.)
For the second limit, check your computation along $(x,x^2)$: there the function equals $\frac{x^2\cdot x^4}{2x^2+x^4}=\frac{x^4}{2+x^2}\to 0$, not $1$. In fact this limit exists and equals $0$: for $x\neq 0$ we have $0\le \frac{x^2y^2}{2x^2+y^2}\le \frac{x^2y^2}{2x^2}=\frac{y^2}{2}\to 0$, and the expression is $0$ when $x=0$.
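A numerical sketch probing both functions along a few paths toward the origin:
```python
# Sketch: evaluate both functions along the paths (t, t^2), (t, t).
f1 = lambda x, y: 2*x**2*y / (x**4 + 3*y**2)
f2 = lambda x, y: x**2*y**2 / (2*x**2 + y**2)

for t in [0.1, 0.01, 0.001]:
    print(t, f1(t, t**2), f2(t, t), f2(t, t**2))
# f1 along y = x^2 stays at 1/2 (limit DNE); both f2 columns tend to 0.
```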
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1744104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Problems with integrating a (step) probability density function I've been sitting an embarrassing amount of time on this problem:
I am given a probability density function $f(x)$ like this:
$$f(x) = \begin{cases} 1/6 & \text{for } x \in [-2,-1] \\ 2/6 & \text{for } x \in [-1,1] \\ 1/6 & \text{for } x \in [1,2] \\ 0 & \text{otherwise} \end{cases}$$
My task is to find out how probable it is to get a result between $0.5$ and $1.5$.
My solution for this is:
$$
\int_{0.5}^1\;\frac26\;dx\;+\;\int_1^{1.5}\;\frac16\;dx\;=\;\frac3{12}
$$
My professor's solution is 1/3, so I am probably doing something wrong. I have done the calculation several times so I hope it's not just that. Any help is greatly appreciated.
Thank you!
|
Your answer is correct, as $\int_{0.5}^1 \frac{2}{6}\, dx = 0.5 \cdot \frac{2}{6} = \frac{1}{6}$ and $\int_1^{1.5} \frac{1}{6}\, dx = 0.5 \cdot \frac{1}{6} = \frac{1}{12}$.
The sum of the two is $\frac{1}{6} + \frac{1}{12} = \frac{3}{12} = \frac{1}{4}$. (The given density does integrate to $1$, so nothing is missing; the stated answer of $\frac13$ appears to be wrong.)
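A quick numerical check (a sketch assuming SciPy's `quad`; the breakpoints are passed via `points` so the quadrature handles the jumps):
```python
# Sketch: integrate the step density over [0.5, 1.5].
from scipy.integrate import quad

def f(x):
    if -2 <= x <= -1 or 1 <= x <= 2:
        return 1/6
    if -1 <= x <= 1:
        return 2/6
    return 0.0

total, _ = quad(f, -3, 3, points=[-2, -1, 1, 2])   # density integrates to 1
prob, _ = quad(f, 0.5, 1.5, points=[1.0])
print(total, prob)                                 # 1.0 and 0.25
```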
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1744188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Linear functional is continuous $\implies$ it is bounded Let $f:X \rightarrow \mathbb R$ be a continuous linear functional. Prove $f$ is bounded.
Since $f$ is continuous, $\forall \varepsilon >0$, there exists $\delta >0$ such that $|f(x)-f(y)|=|f(x-y)|=|f(z)|< \varepsilon$ whenever $|x-y|<\delta$. We let $z=x-y$.
Can we just now let $\varepsilon = C \|x\|_X$ for some $C>0$ and then it is bounded?
|
Since $f$ is continuous (at $0$), there is a neighbourhood $U$ of $0$ such that $f(U)\subset(-1,1)$. Choose $\delta>0$ such that $\{x\in X \mid \|x\|\leq\delta\}\subseteq U$. Then, if $x\in X$ is such that $\|x\|\leq \delta$, we have $x\in U$, and hence $|f(x)|\leq 1$. Since $\left\|\frac{\delta x}{\|x\|}\right\|=\delta$, it follows that for every nonzero $x\in X$ we have
$$
1\geq\left|f\left(\frac{\delta x}{\|x\|}\right)\right|=\frac{\delta}{\|x\|}|f(x)|\implies|f(x)|\leq\frac{1}{\delta}\|x\|.
$$
For $x=0$ the bound holds trivially, since $f(0)=0$ by linearity. Therefore, $f$ is bounded.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1744323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|