Solving quadratic equations in modular arithmetic Is there a general way to solve quadratic equations modulo n? I know how to use Legendre and Jacobi symbols to tell me if there's a solution, but I don't know how to get a solution without resorting to guesswork. Some examples of my problems:
$x^2 = 8$ mod 2009
$x^2 + 3x + 1 = 0$ mod 13
Thanks
|
The answer depends on the value of the modulus $n$.
*
*in general, if $n$ is composite, then solving modulo $n = \prod p_i^{e_i}$ is equivalent to solving modulo each $p_i^{e_i}$ separately and recombining via the Chinese remainder theorem. However, this requires knowing the factorization of $n$, which is computationally hard in general: there are cryptosystems based on this.
*modulo a prime $p \neq 2$, you may simply complete the square and proceed in exactly the same way as in the reals.
*modulo the prime $p = 2$, it is impossible to complete the square. Instead, the relevant way to solve quadratic equations is through Artin-Schreier theory: basically, instead of $x^2 = a$, your “standard” quadratic equation is here $x^2-x = a$. (Well, this is useful for extensions of the field $\mathbb Z/2\mathbb Z$, but not so much for this field itself, since you can then simply enumerate the solutions...).
*modulo a power $p^e$, you start by computing an “approximate” solution, that is, a solution modulo $p$. You may then refine this solution to a solution modulo $p^e$ by using Hensel's lemma. Note that this works for both the equations $x^2 = a \pmod{p \neq 2}$ and $x^2 - x = a \pmod{2}$ as stated above.
This means that the only remaining problem is how to compute a square root modulo a prime $p$. For this, the relevant reference would be the Tonelli-Shanks algorithm; see for instance Henri Cohen's A Course in Computational Algebraic Number Theory, 1.5.1.
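For concreteness, here is a minimal Python sketch of Tonelli-Shanks (my own illustration, not taken from Cohen's book); it assumes $p$ is an odd prime and $a$ is a quadratic residue mod $p$. (For the composite modulus $x^2=8 \pmod{2009}$ from the question, one would first factor $2009 = 7^2\cdot 41$, solve modulo each prime, lift with Hensel, and recombine.)
def tonelli_shanks(a, p):
    # square root of a modulo an odd prime p; assumes a is a quadratic residue
    assert pow(a, (p - 1) // 2, p) == 1
    q, s = p - 1, 0
    while q % 2 == 0:                 # write p - 1 = q * 2^s with q odd
        q //= 2
        s += 1
    if s == 1:                        # p = 3 (mod 4): direct formula
        return pow(a, (p + 1) // 4, p)
    z = 2                             # find any quadratic non-residue z
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2i = 0, t                 # least i with t^(2^i) = 1
        while t2i != 1:
            t2i = t2i * t2i % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b % p, r * b % p
    return r

print(tonelli_shanks(8, 41))          # 34, and indeed 34**2 % 41 == 8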
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1257648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $(S^\perp)^\perp=\overline {\operatorname{span}(S)}$ . Let $H$ be a Hilbert Space. $S\subseteq H$ be a finite set .Show that $(S^\perp)^\perp=\overline {\operatorname{span} (S)}$ .
Now $\operatorname{span}(S)$ is the smallest subspace which contains $S$ and $\overline{\operatorname{span}(S)}$ is the smallest closed subspace containing $S$. Also $S^{\perp\perp}$ is a closed subspace containing $S$. Thus $\overline{\operatorname{span}(S)}\subset S^{\perp\perp}$ .
How to do the reverse?
|
You don't need $S$ to be a finite set. This is true for any subset $S\subseteq \mathscr{H}$.
First of all notice that $S^{\perp}$ is a closed subspace of $\mathscr{H}$ (using continuity of inner product) and $S \subseteq (S^{\perp})^{\perp}$ for any subset $S \subseteq \mathscr{H}$. Thus $Span(S) \subseteq (S^{\perp})^{\perp} \Rightarrow \overline{Span(S)} \subseteq (S^{\perp})^{\perp}$. Further notice that if $C \subseteq D$, then $D^{\perp} \subseteq C^{\perp}$. So, we have
\begin{align*}
S \subseteq \overline{Span(S)} & & \Longrightarrow & & \left(\overline{Span(S)}\right)^{\perp} \subseteq S^{\perp} & & \Longrightarrow & & (S^{\perp})^{\perp} \subseteq \left(\left(\overline{Span(S)}\right)^{\perp}\right)^{\perp}.
\end{align*}
But $\left(\left(\overline{Span(S)}\right)^{\perp}\right)^{\perp} = \overline{Span(S)}$ (because if $M$ is a closed subspace of a Hilbert space $\mathscr{H}$, then $(M^{\perp})^{\perp} = M.$) Thus, $(S^{\perp})^{\perp} = \overline{Span(S)}.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1257748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Finding coefficients, Legendre polynomials. Say I have a function
$f(\theta) = 1 + \cos^2(\theta)$
that can be expressed in terms of the Legendre polynomials. When calculating coefficients, should I change the Legendre polynomials from the variable $x$ to $\theta$? E.g., the third Legendre polynomial, usually written:
$(0.5(3x^2-1))$ would have theta rather than $x$? (since my function is a function of theta not $x$).
If I am correct would it also make sense to then change the limits of integration $-1, 1$ to $-\pi, \pi...$
My third coefficient equation then looks like (sorry I don't know how to write this out correctly):
$c_3 = \frac52 \int_{-\pi} ^ \pi (1+\cos^2(\theta))(0.5(3\theta^2-1))d \theta$
Sorry if this is hard to read. Any help?
|
The answer depends on what you're seeking. Legendre polynomials $P(x)$ form an orthogonal basis on $[-1,1]$, so any nice function $f(x)$ on $[-1,1]$ can be written as a linear combination of them. Your $f(\theta)=1+\cos(\theta)^2$, while nice, will have a pretty awful expansion in terms of $P(\theta)$, but a much nicer expansion in terms of $P(\cos(\theta))$.
Most likely, the question asks you to expand in $P(\cos(\theta))$ because these are the basis functions on the unit circle for the Newtonian potential. To make your life easier, write $x=\cos(\theta)$ so you are trying to expand $1+x^2$ in terms of $P(x)$. Then yes, you can compute the coefficients $c_n$ by taking inner products $\int_{-1}^1(1+x^2)P_n(x)\,dx$ (and dividing by $\|P_n\|^2=\frac{2}{2n+1}$).
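As a quick check (a sympy sketch of my own), the normalized coefficients $c_n = \frac{2n+1}{2}\int_{-1}^1 f(x)P_n(x)\,dx$ of $f(x)=1+x^2$ come out as $c_0=4/3$, $c_2=2/3$ and $0$ otherwise:
import sympy as sp

x = sp.symbols('x')
f = 1 + x**2                     # this is 1 + cos^2(theta) after x = cos(theta)
for n in range(4):
    Pn = sp.legendre(n, x)
    cn = sp.Rational(2*n + 1, 2) * sp.integrate(f * Pn, (x, -1, 1))
    print(n, cn)                 # c0 = 4/3, c2 = 2/3, all other coefficients 0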
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1257843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$R/I^n$ is a local ring
Let $I$ be a two sided ideal of a ring $R$ such that $I$ is maximal as a right ideal. I need to show that $R/I^n$ is a local ring, for every $n \geqslant 1$.
For $n=1$ I was able to show that the quotient $R/I$ is a division ring and so it is a local ring (because the non-invertible elements form an ideal, namely $\{0\}$).
For $n>1$ I tried to use induction, but got stuck. Am I on the right track? Do you have any suggestions? Thanks.
|
You need to show that for each $n$, $R/I^n$ has a unique maximal ideal (just the definition). The ideals of $R/I^n$ are in bijective, inclusion-preserving correspondence with the ideals of $R$ which contain $I^n$ (the bijection is induced by the quotient map). Any maximal ideal containing $I^n$ is prime, hence contains $I$; therefore the image of $I$ in $R/I^n$ is the unique maximal ideal, and $R/I^n$ is local.
That is at least how it works for a commutative ring, where we don't have to care about left and right-ideals. I am not sure if this changes when we assume $I$ to only be maximal as a right-ideal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1257946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Statistics Problems, I don't understand what this means.. P(A)=0.46 and P(B)=0.42
If P(B∣A)= 0.174
what is P(A∩B)?
|
One may recall that
$$
P(B|A)=\frac{P(A\cap B)}{P(A)}
$$ giving
$$
P(A\cap B)=P(A)\times P(B|A)
$$Here you then have
$$
P(A\cap B)=0.46\times 0.174=0.08004.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show a random walk is transient I was going through some problems related to Markov chains and I got stuck on this bit:
We are given a random walk on $Z$, defined by the transition matrix $p_{i,i+1}=p$ and $p_{i,i-1}=1-p$. How to show that if $p\neq 0.5$ the walk is transient?
|
Since the process is irreducible, we can assume without loss of generality that $X_0=0$, and it suffices to show that $\mathbb P(N_0<\infty)<1$, where
$$N_0=\inf\{n>0: X_n=0\}$$
is the time until the first return to $0$. Let
$$F(s) = \mathbb E\left[s^{N_0}\right]$$
be the generating function of $N_0$. It can be shown through some computation that
$$F(s) = 1 - \sqrt{1-4p(1-p)s^2}.$$
It follows that
$$\mathbb P(N_0<\infty) = F(1) = 1 - \sqrt{1-4p(1-p)}<1.$$
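A quick Monte Carlo sanity check (my own sketch; the walk is truncated at a finite horizon, which slightly biases the estimate downward) agrees with $F(1)=1-\sqrt{1-4p(1-p)}=1-|2p-1|$:
import random

def return_prob(p, trials=20_000, horizon=1_000):
    # estimate the probability that the walk started at 0 returns to 0
    returns = 0
    for _ in range(trials):
        x = 0
        for _ in range(horizon):
            x += 1 if random.random() < p else -1
            if x == 0:
                returns += 1
                break
    return returns / trials

print(return_prob(0.6))   # ~ 0.8 = 1 - |2*0.6 - 1|
print(return_prob(0.5))   # ~ 1, the recurrent case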
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
formula for the $n$th derivative of $e^{-1/x^2}$
$f(x) = \begin{cases} e^{-1/x^2} & \text{ if } x \ne 0 \\ 0 & \text{ if } x = 0 \end{cases}$
so
$\displaystyle f'(0) = \lim_{x \to 0} \frac{f(x) - f(0)}{x - 0} = \lim_{x \to 0} \frac {e^{-1/x^2}}x = \lim_{x \to 0} \frac {1/x}{e^{1/x^2}} = \lim_{x \to 0} \frac x {2e^{1/x^2}} = 0$
(using l'Hospital's Rule and simplifying in the penultimate step).
Similarly, we can use the definition of the derivative and l'Hospital's Rule to show that $f''(0) = 0, f^{(3)}(0) = 0, \ldots, f^{(n)}(0) = 0$, so that the Maclaurin series for $f$ consists entirely of zero terms. But since $f(x) \ne 0$ except for $x = 0$, we can see that $f$ cannot equal its Maclaurin series except at $x = 0$.
This is part of a proof question. I don't think the answer sufficiently proves that any $n$th derivative of $f(x)$ is $0$. Would anyone please expand on the answer?
ps: I promise this is not my homework :)
|
Show by induction that $f^{(n)}(x)=P_n\!\left(\frac 1 x\right) e^{-\frac 1 {x^2}}$ with $P_n$ a polynomial function of degree $3n$, and then compute (again by induction if you want) $\lim \limits _{x \to 0^+} \space f^{(n)}(x)$. You'll have to use l'Hospital's theorem.
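As a concrete check of the claimed shape (a sympy sketch of my own), the first few derivatives are indeed a polynomial in $1/x$ times $e^{-1/x^2}$, with one-sided limit $0$ at the origin:
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x**2)
for n in range(1, 4):
    d = sp.diff(f, x, n)
    print(sp.simplify(d))            # a polynomial in 1/x times exp(-1/x^2)
    print(sp.limit(d, x, 0, '+'))    # 0 in every case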
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Arithmetic progression - terms divisible by a prime.
If $p$ is a prime and $p \nmid b$, prove that in the arithmetic progression $a, a+b, a +2b, $ $a+3b, \ldots$, every $p^{th}$ term is divisible by $p$.
I am given the hint that because $\gcd(p,b)=1$, there exist integers $r$ and $s$ satisfying $pr+bs=1$. Put $n_k$ = $kp-as$ for $k = 1,2,...$ and show that $p \mid a + n_k b$.
I get how to solve the problem once I apply the hint, but I am unsure how to prove the hint. How do I know what to set $n_k$ to?
|
Apply the extended Euclidean algorithm to find $r$ and $s$.
Suppose $p=31$, $b=23$
$$\begin{array}{c|c|c}
pr+bs & r & s \\
\hline p=31 & 1 & 0 \\
b=23 & 0 & 1 \\ \hline
8 & 1 & -1 \\
7 & -2 & 3 \\
1 & 3 & -4 \\
\end{array}$$
$31\cdot 3 + 23 \cdot -4 = 93-92 = 1 \quad \checkmark$
The process works by, at each step, subtracting a multiple of the last line from the line above it to get a smaller number. So $23-2\times 8 = 7$, for example, and the same operations are applied to the $r$ and $s$ values.
Then looking at $n_k=kp-as$, we see that
$\begin{align}a+n_kb &= a+bkp-bas \\
&= a+bkp-a(1-rp) \\
&= a+bkp-a+arp \\
&= p(bk+ar)\\
\end{align}$
and so $p\mid a{+}n_kb$ as required. Note that successive values of $n_k$ differ by exactly $p$, which completes the proof.
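The same coefficients can be produced mechanically by the recursive extended Euclidean algorithm (a minimal Python sketch of my own):
def extended_gcd(a, b):
    # return (g, r, s) with a*r + b*s = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, r, s = extended_gcd(b, a % b)
    return g, s, r - (a // b) * s

print(extended_gcd(31, 23))   # (1, 3, -4): 31*3 + 23*(-4) = 1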
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is the multiplicative order of a number always equal to the order of its multiplicative inverse? Is it true that $ord_{n}(a)=ord_{n}(\bar{a})$ $\forall n$?
Here, $\bar{a}$ refers to the multiplicative inverse of $a$ modulo $n$ and $ord_{n}(a)$ refers to the multiplicative order of $a$ modulo $n$.
|
Yes, it is. Since $a \bar{a}=1$, it follows that for any positive integer $k$ we have $a^k (\bar{a})^k=1$. It follows that $a^k=1$ if and only if $(\bar{a})^k=1$. In particular, if $k$ is the smallest positive integer such that $a^k=1$, then $k$ is the smallest positive integer such that $(\bar{a})^k=1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Area between $ 2 y = 4 \sqrt{x}$, $y = 4$, and $2 y + 4 x = 8 $ Sketch the region enclosed by the curves given below. Then find the area of the region.
$ 2 y = 4 \sqrt{x}$, $y = 4$, and $2 y + 4 x = 8 $
Attempt at solution:
I guess I'm supposed to divide the areas into several parts, and then sum up the areas of those parts.
Wolfram alpha shows the sketched area... and I don't think those are the correct sketched areas, because the resulting answer of 14 isn't correct.
So can someone tell me which sketched areas I am even looking at?
|
Simplify your boundary equations:
$$y = 2 \sqrt{x}$$
$$y = 4$$
$$y = -2x +4$$
Sketch the area. You ought to try hand-sketching it to verify.
Split into two double integrals.
$$\text{Area} = \int_{x=0}^1\int_{y=\text{lower curve}}^{\text{upper curve}} dy\,dx+ \int_{x=1}^4\int_{y=\text{lower curve}}^{\text{upper curve}}dy\,dx.$$
UPDATE/EDIT: You ought to have solved it by now. For reference, here's the full integration and solution.
$$= \int_{x=0}^1\int_{y=-2x+4}^{4} dydx+ \int_{x=1}^4\int_{y=2\sqrt{x}}^{4}dydx$$
$$= \int_{x=0}^1 4 - (-2x+4) dx+ \int_{x=1}^4 4- 2\sqrt{x}\ dx$$
$$= 2\int_{x=0}^1 x\ dx+ \int_{x=1}^4 4- 2\sqrt{x}\ dx$$
$$= x^2 \bigg|_0^1 + 4x \bigg|_1^4 - \frac{4}{3}x^{\frac{3}{2}} \bigg|_1^4 $$
$$= 11/3 $$
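A quick symbolic check of the two integrals (a sympy one-liner of my own) confirms the value:
import sympy as sp

x = sp.symbols('x')
area = (sp.integrate(4 - (-2*x + 4), (x, 0, 1))
        + sp.integrate(4 - 2*sp.sqrt(x), (x, 1, 4)))
print(area)   # 11/3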
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Steinhaus theorem for topological groups $G$ is a locally compact Hausdorff topological group, $m$ is a (left) Haar measure on $G$, and $A$ and $B$ are two subsets of $G$ of finite positive measure, that is, $m(A)>0$ and $m(B)>0$.
My question is:
Can we conclude that $AB= \{ab, a\in A, b\in B\}$ contains some non-empty open set of G?
Is this question right?
Or is this right just for $G=\mathbb R^n$, the Euclidean space, with $m$ the Lebesgue measure on $\mathbb R^n$? If so, how to prove it?
Thanks a lot.
|
Here is another proof, using regularity of the measure instead of convolution.
Claim: The result holds when $B=A^{-1}$.
Proof: By regularity there is a compact set $K$ and an open set $U$ such that $K\subset A\subset U$ and such that $m(U)<2m(K)$. The multiplication map sends $\{1\}\times K$ into $U$, so by continuity of multiplication and compactness of $K$ there is a neighbourhood $V$ of $1$ such that multiplication sends $V\times K$ into $U$. But then if $x\in V$ the sets $K$ and $xK$ are each more than half of $U$, so $K\cap xK$ is nonempty, so $x\in KK^{-1}$. Thus $KK^{-1}$ contains a neighbourhood $V$ of $1$. $\square$
Claim: The result holds in general.
Proof: By regularity we may assume both $A$ and $B$ are compact. For $x$ running over $G$ we have
$$\int m(A\cap xB^{-1}) \,dx = \int\int 1_A(y) 1_B(y^{-1}x) \,dy\,dx = m(A)m(B)>0$$
by Fubini's theorem, so there is some $x$ such that $m(A\cap xB^{-1})>0$. Now apply the previous result to $A\cap xB^{-1}$. Since
$$(A\cap xB^{-1})(A\cap xB^{-1})^{-1} \subset ABx^{-1},$$
we deduce that $AB$ contains a neighbourhood of $x$. $\square$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
What is the intuition/motivation behind compact linear operator. Compact Linear Operator is defined such that the operator will map any bounded set into a relatively compact set. Why is this property so special that it can be named as "compact"? Does it share some similar properties as compact sets? What is the motivation to define and study such a set?
|
The set of compact operators (in a Hilbert space) is exactly the set of norm limits of finite rank operators. This is perhaps a more natural definition than the one you indicate. Many of the nice properties of finite rank operators have analogues for compact operators. You can view compactness as a slight generalization of being finite rank that preserves (or only mildly weakens) most of these properties.
Above all, as Mariano notes, compact operators appear all the time "in the wild," so it is useful to develop their theory. You may be interested in the book History of Functional Analysis, which describes how functional-analytic abstractions like compactness arose from concrete physical problems and PDEs.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Poisson power series We have a Poisson power series of
$$Y=\sum\limits_{k=0}^{\infty}e^{-\pi\lambda v^2}\frac{(\pi\lambda v^2)^k}{k!}(A)^k $$
If we have a disk with radius $v$,
where $A$ is defined as the density of the distance of a node placed uniformly at random inside the disk from the origin, $A=\frac{2x}{v^2}$.
If i try to plug in k = 0 first, then we have
$$Y=e^{-\pi\lambda v^2}\frac{(\pi\lambda v^2)^0}{0!}(A)^0$$
$$Y=e^{-\pi\lambda v^2}$$
next plug in k=1 and so on, then
$$Y=e^{-\pi\lambda v^2}+e^{-\pi\lambda v^2}\frac{(\pi\lambda v^2)^1}{1!}(A)^1+....+e^{-\pi\lambda v^2}\frac{(\pi\lambda v^2)^{\infty}}{\infty!}(A)^{\infty}$$
however, something raised to the power infinity is undefined; how can we simplify the series so that we can have a result such as
$$\exp \{-\pi\lambda v^2+\pi\lambda v^2 (A)\}$$
I try to answer this, rewrite
$Y=e^{-\pi\lambda v^2} \sum\limits_{k=0}^\infty \frac{(\pi\lambda v^2)^k}{k!}A^k$
$Y=e^{-\pi\lambda v^2} \sum\limits_{k=0}^\infty \frac{(\pi\lambda v^2A)^k}{k!}$
based on series formula $e^x = \sum\limits_{k=0}^\infty \frac{x^k}{k!}$ we have
$Y = e^{-\pi\lambda v^2}e^{\pi\lambda v^2A}$ , hence
$Y=\exp\bigg\{-\pi\lambda v^2 + \pi\lambda v^2A\bigg\}$
Although the end result is the same, I'm still not sure about the process. Is it correct?
|
You need to learn a little about convergence of infinite series
before you tackle this.
First step: Consider the geometric series:
$$A = 1/2 + 1/4 + 1/8 + \cdots = \sum_{k=1}^\infty 1/2^k.$$
It can be evaluated as follows:
$$(1/2)A = 1/4 + 1/8 + \cdots.$$
So $A - (1/2)A = 1/2$ and $A = 1.$
You can get very close to the correct answer by summing the
first 20 terms, which can be done in Matlab or R. In R,
k = 1:20; sum(1/2^k)
## 0.999999
Next step: Look at a calculus book or math handbook or online
to find the famous infinite series that converges to
$e = 2.718282\dots.$ This is called a Taylor or Maclauren series.
A more general series is
$$e^a = \frac{a^0}{0!} + \frac{a^1}{1!} + \frac{a^2}{2!} + \frac{a^3}{3!} + \cdots = \sum_{k=0}^\infty \frac{a^k}{k!}.$$
This one is directly related to your series. It converges very
quickly also, meaning that summing just a few terms gets you
very close to the correct value. Again in R:
a = 1; k=0:20; sum(a^k/factorial(k))
## 2.718282
a = 1.5; k=0:20; sum(a^k/factorial(k))
## 4.481689
exp(1.5)
## 4.481689 # e^1.5
Something like this is probably programmed into your calculator
for the $e^x$ key.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Given $L = L_1 \cap L_2$ where $L_1 \in NP$ and $L_2 \in coNP$, how do I express L as a symmetric difference of 2 sets in NP? My ultimate goal is to show that $L \in PP$, but I need to figure out the title question first as an intermediary step. Any help is appreciated, thanks in advance.
|
Recall that NP is closed under intersection, and that $\overline{L_2}\in\text{NP}$ since $L_2\in\text{coNP}$. Hence $L_1\cap\overline{L_2}$ is in NP. Finally, since $L_1\cap\overline{L_2}\subseteq L_1$, the symmetric difference of this set and $L_1$ is $L_1\setminus(L_1\cap\overline{L_2})$, which is exactly $L_1\cap L_2$.
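The underlying set identity is easy to sanity-check on small finite sets (a Python toy example of my own; it illustrates only the identity, not the complexity-theoretic content):
A = {1, 2, 3, 4}          # stands in for L1
B = {3, 4, 5, 6}          # stands in for L2
X = A - B                 # L1 intersected with the complement of L2
print(X ^ A == A & B)     # True: the symmetric difference of X and L1 is L1 ∩ L2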
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Product of rings: If $K$ is an ideal of $R\times S$, then there exists $I$ ideal of $R$, $J$ ideal of $S$ such that $K=I\times J$. Let $R$ and $S$ be two rings. We consider the product $R\times S$.
It is a ring with operations of sum and product defined coordinate by coordinate, i.e.
$$(r_1, s_1) + (r_2, s_2) = (r_1 + r_2, s_1 + s_2) \text{ and } (r_1, s_1) · (r_2, s_2) = (r_1 · r_2, s_1 · s_2)$$
The element $1$ of the ring $R \times S$ is $(1, 1)$ and the element $0$ is $(0, 0)$.
$(a)$ Let $I$ be a two-sided ideal of $R$ and let $J$ be a two-sided ideal of $S$. Show that $I \times J$ is a two-sided ideal of $R \times S$.
$(b)$ Show that the converse holds. If $K$ is an ideal of $R\times S$, then there exists $I$ ideal of $R$, $J$ ideal of $S$ such that $K=I\times J$.
I have been able to show part $(a)$ by
*
*$I$ is nonempty, $J$ nonempty so $I\times J$ is nonempty
*If $(i,j)$, $(i',j') \in I\times J$, then $(i,j) + (i',j') \in I\times J$
*If $(i,j) \in I\times J$, $(a,b)\in R\times S$ then $(i,j)\cdot (a,b) \in I\times J$
However I am not sure how to prove $(b)$. Any help would be appreciated.
|
For (a) you should also note that $(a,b)(i,j) \in I \times J$ (you want $I \times J$ to be two-sided).
For (b): Let $I := \{i \in R \mid \exists s \in S : (i,s) \in K\}$ and $J := \{j \in S \mid \exists r \in R: (r,j) \in K\}$. Then $I$ is an ideal: $I$ is non-empty, if $i,i' \in I$, say $(i,s), (i',s') \in K$, then $(i+i', s+s') \in K$, hence $i+i' \in I$. For $i \in I$, $r \in R$, say $(i, s) \in K$, we have $(ir,s), (ri,s) \in K$, hence $ir, ri \in I$, so $I$ is a two-sided ideal; the same works for $J$.
It remains to show that $I \times J = K$. On one hand, if $(i,j) \in K$, then $i \in I$, $j \in J$ by definition of $I$ and $J$, that is $K \subseteq I \times J$. Now suppose $i \in I$, $j \in J$. Then for some $r \in R$, $s \in S$, we have $(i,s), (r,j) \in K$. That is
$$ (i,j) = (i,0) + (0,j) = (i,s) \cdot (1,0) + (r,j) \cdot (0,1) \in K $$
Hence $I \times J \subseteq K$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1258998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Image of point of codimension one has codimension one? I'm working on the following exercise (7.2.3) from Liu's Algebraic Geometry and Arithmetic Curves:
Let $f: X \rightarrow Y$ be a morphism of Noetherian schemes. We suppose that either $f$ is flat or $X,Y$ are integral and $f$ is finite surjective.
Let $x \in X$ be a point of codimension $1$, and $y = f(x)$. Show that $\dim \mathcal{O}_{Y,y} = 1$ if $f$ is finite surjective...
Is this statement true? The going-up theorem only shows that $\dim V(y) = \dim V(x)$, which is not the same thing, since it is not always true that $\dim V(y) + \dim \mathcal{O_{Y,y}} = \dim Y$. It seems to me like we need the going-down theorem, which requires an additional normality hypothesis on $Y$. I think the problem is that $\mathcal{O}_{Y,y} \rightarrow \mathcal{O}_{X,x}$ is not necessarily finite.
|
No, this is not true. There is a famous example of Nagata of a Noetherian local domain $A$ of dimension $2$ which has a finite overring $A \subset B$ with two maximal ideals $\mathfrak m, \mathfrak n \subset B$ with $\dim(B_{\mathfrak m}) = 2$ and $\dim(B_{\mathfrak n}) = 1$. Let $X$ be the spectrum of $B$ and $Y$ be the spectrum of $A$ and let $x$ be the point corresponding to the prime ideal $\mathfrak n$.
The example can be found in Appendix, Example 2 of Nagata's book "Local rings", published in 1962. See also Tag 02JE.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Computing Ancestors of # for Stern-Brocot Tree Reading about the Stern-Brocot tree, the article gives this example:
using 7/5 as an example, its closest smaller ancestor is 4/3, so its left child is (4 + 7)/(3 + 5) = 11/8, and its closest larger ancestor is 3/2, so its right child is (7 + 3)/(5 + 2) = 10/7.
Getting the left and right children seem clear to me:
*
*left - mediant of # (7/5) and closest smaller ancestor (4/3)
*right- mediant of # (7/5) and closest larger ancestor (3/2)
But, how can I figure out the closest smaller and larger ancestors of 7/5?
|
"But, how can I figure out the closest smaller and larger ancestors of 7/5?"
Here is a method using a version of the subtractive Euclidean algorithm:
A : 7 (1) - 5 (0) = 7
B : 7 (0) - 5 (1) = -5
C : 7 (1) - 5 (1) = 2 A + B
D : 7 (1) - 5 (2) = -3 B + C Adding smallest positive to 'smallest' (lowest absolute value) negative result
E : 7 (2) - 5 (3) = -1    C + D   Repeating above procedure
F : 7 (3) - 5 (4) = 1 C + E
G : 7 (5) - 5 (7) = 0 E + F
At each stage the smallest positive and 'smallest' (i.e. closest-to-zero, smallest absolute value) results are added.
The bracketed coefficients at each stage represent convergent fractions.
The coefficients of the equations with equal and smallest positive and negative results (E and F) represent the Bezout identity and are the 'parents' in the Stern-Brocot tree. The results of these equations being $\pm1$, $1$ is the greatest common divisor of 7 and 5. So the 'parents' are $\frac{3}{4}$ and $\frac{2}{3}$: they are the two fractions of which $\frac{5}{7}$ is the mediant. (In the orientation of the question, the parents of $\frac{7}{5}$ are the reciprocals, $\frac{4}{3}$ and $\frac{3}{2}$.)
Disregarding A and B - which are the 'set-up' for the algorithm - the positive and negative results are in a sequence of one positive, two negative, one positive and zero. The interpretation of this 'flipping' is that the continued fraction of $\frac{5}{7}$ is [0; 1, 2, 1, 1] or (which is the same thing) [0; 1, 2, 2].
A longer discussion of this adaptation of the subtractive Euclidean algorithm is here: https://simplyfractions.blogspot.com/p/blog-page_5.html
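Equivalently, the two parents can be found by descending the Stern-Brocot tree between $\frac01$ and $\frac10$ (a Python sketch of my own, reproducing the $\frac75$ example from the question):
from fractions import Fraction

def stern_brocot_parents(x):
    # descend the tree to x, keeping the closest smaller and larger ancestors
    lo, hi = (0, 1), (1, 0)          # fractions as (num, den); 1/0 acts as infinity
    while True:
        med = (lo[0] + hi[0], lo[1] + hi[1])
        if Fraction(med[0], med[1]) == x:
            return lo, hi
        if Fraction(med[0], med[1]) < x:
            lo = med
        else:
            hi = med

print(stern_brocot_parents(Fraction(7, 5)))   # ((4, 3), (3, 2)), i.e. 4/3 and 3/2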
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
range of $m$ such that the equation $|x^2-3x+2|=mx$ has 4 real answers. Find range of $m$ such that the equation $|x^2-3x+2|=mx$ has 4 distinct real solutions $\alpha,\beta,\gamma,\delta$
To show how I got the wrong answers.
From $|x^2-3x+2|=mx$
I got the two case $x^2-3x+2=mx$ when $x>2 $ or $ x<1$
and $x^2-3x+2=-mx$ when $1<x<2$
also $m\neq0$ (because if $m=0$, this would give only 2 answers, not 4)
try to find the first two answers $x^2-3x+2=mx$ when $x>2 $ or $ x<1$
$x^2-(3+m)x+2=0$ when $x>2 $ or $ x<1$
Using the quadratic formula gives $x= \frac{3+m\pm \sqrt{(3+m)^2-4\times2}}{2}$
$x$ will be real with two distinct answers if $\sqrt{(3+m)^2-4\times2} > 0$
$m^2+6m+1>0$, got that $-3-2\sqrt{2}<m<-3+2\sqrt{2}$
and on the another case where $x^2-3x+2=-mx$ when $1<x<2$
$x^2-(3-m)x+2=0$ when $1<x<2$
Using the quadratic formula gives $x= \frac{3-m\pm \sqrt{(3-m)^2-4\times2}}{2}$
$x$ will be real with the other two distinct answers if $\sqrt{(3-m)^2-4\times2} > 0$
$m^2-6m+1>0$, got that $3-2\sqrt{2}<m<3+2\sqrt{2}$
So, I believe that the answers should be $(-3-2\sqrt{2}<m<-3+2\sqrt{2}) \cup (3-2\sqrt{2}<m<3+2\sqrt{2})$
However, the book's right answer is $0 < m < 3-2\sqrt{2}$
Please show me the method to obtain the right answers.
|
There is some positive value $m$ such that $y=mx$ is tangent to $y=-(x^2-3x+2)$.
This value must make $0$ the discriminant of the equation
$$x^2-3x+2=-mx$$
That is, $$m^2-6m+1=0$$
The least root of this equation is $$3-2\sqrt2$$
So the line is tangent to the middle arch when $m=3-2\sqrt2$. For $m>0$ the outer branch $x^2-3x+2=mx$ always contributes two valid solutions (the quadratic $x^2-(3+m)x+2$ is negative at both $x=1$ and $x=2$, so one root lies below $1$ and the other above $2$), hence four solutions occur exactly when the line also cuts the middle arch twice: $0<m< 3-2\sqrt 2$.
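A numerical root count (a numpy sketch of my own, counting sign changes of $|x^2-3x+2|-mx$ on a fine grid) confirms that four solutions occur exactly on this interval:
import numpy as np

def count_solutions(m):
    xs = np.linspace(-1, 6, 2_000_001)
    g = np.abs(xs**2 - 3*xs + 2) - m*xs
    return int(np.sum(np.sign(g[:-1]) * np.sign(g[1:]) < 0))

t = 3 - 2*2**0.5                        # ~ 0.1716
for m in (0.05, t - 1e-3, t + 1e-3, 0.5):
    print(m, count_solutions(m))        # 4, 4, 2, 2 solutions respectively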
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $l^2$ is a Hilbert space
Let $l^2$ be the space of square summable sequences with the inner product $\langle x,y\rangle=\sum\limits_{i=1}^\infty x_iy_i$.
(a) Show that $l^2$ is a Hilbert space.
To show that it's a Hilbert space I need to show that the space is complete. For that I need to take an arbitrary Cauchy sequence and show it converges with respect to the norm. However, I find it confusing to work with a Cauchy sequence of sequences.
|
A typical proof of the completeness of $\ell^2$ consists of two parts.
Reduction to series
Claim: Suppose $ X$ is a normed space in which every absolutely convergent series converges; that is, $ \sum_{n=1}^{\infty} y_n$ converges whenever $ y_n\in X$ are such that $ \sum_{n=1}^{\infty} \|y_n\|$ converges. Then the space $X$ is complete.
Proof. Take a Cauchy sequence $ \{x_n\}$ in $X$. For $ j=1,2,\dots$ find an integer $ n_j$ such that $ \|x_n-x_m\|<2^{-j}$ as long as $ n,m\ge n_j$. (This is possible because the sequence is Cauchy.) Also let $ n_0=1$ and consider the series $ \sum_{j=1}^{\infty} (x_{n_{j}}-x_{n_{j-1}})$. This series converges absolutely, by comparison with $\sum 2^{-j}$. Hence it converges. Its partial sums simplify (telescope) to $ x_{n_j}-x_1$. It follows that the subsequence $ \{x_{n_j}\}$ has a limit. It remains to apply a general theorem about metric spaces: if a Cauchy sequence has a convergent subsequence, then the entire sequence converges. $\quad \Box$
Convergence of absolutely convergent series in $\ell^2$
Claim:: Every absolutely convergent series in $ \ell^2$ converges
Proof. The elements of $ \ell^2$ are functions from $ \mathbb N$ to $ \mathbb C$, so let's write them as such: $ f_j: \mathbb N\to \mathbb C$. (This avoids confusion of indices.) Suppose the series $ \sum_{j=1}^{\infty} \|f_j\|$ converges. Then for any $ n$ the series $ \sum_{j=1}^{\infty} f_j(n)$ converges, by virtue of comparison $|f_j(n)| \le \|f_j\|$.
Let $ f(n) = \sum_{j=1}^{\infty} f_j(n)$. So far the convergence is only pointwise, so we are not done. We still have to show that the series converges in $ \ell^2$, that is, its tails have small $ \ell^2$ norm: $ \sum_{n=1}^\infty |\sum_{j=k}^{\infty} f_j(n)|^2 \to 0$ as $ k\to\infty$.
What we need now is a dominating function/sequence (sequences are just functions with domain $\mathbb{N}$), in order to apply the Dominated Convergence Theorem. Namely, we need a function $ g: \mathbb N\to [0,\infty)$ such that
$$ \sum_{n=1}^{\infty} g(n)^2<\infty \tag{1}$$
$$ \left|\sum_{j=k}^{\infty} f_j(n)\right| \le g(n) \quad \text{for all } \ k,n \tag{2} $$
Set $ g(n) = \sum_{j=1}^{\infty} |f_j(n)| $. Then (2) follows from the triangle inequality. Also, $ g$ is the increasing limit of functions $ g_k(n) = \sum_{j=1}^k |f_j(n)| $. For each $k$, using the triangle inequality in $ \ell^2$, we get
$$ \sum_n g_k(n)^2 = \|g_k\|^2\le \left(\sum_{j=1}^k \|f_j\|\right)^2 \le S^2$$
where $S= \sum_{j=1}^\infty\|f_j\|$. Therefore, $ \sum_n g(n)^2\le S^2$ by the Monotone Convergence Theorem.
To summarize: we have shown the existence of a summable function $g^2$ that dominates the square of any tail of the series $\sum f_j$. This together with the pointwise convergence of said series yield its convergence in $\ell^2$. $\quad\Box$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 5,
"answer_id": 0
}
|
Prove that a continuous function of compact support defined on $R^n$ is bounded. I am working through a few sets of notes I found on the internet and I came across this exercise. How do I prove that a continuous function $f$ of compact support defined on $R^n$ is bounded?
It seems believable that it is true for $f$ in $R$ because I can visualize it but how do I prove this properly for $R^n$? Any ideas?
|
Put $K = {\rm supp}(|f|)$. Since $K$ is compact and $|f|$ is continuous, $\sup_{K} |f| < \infty$ and the supremum is attained on $K$ (a continuous real-valued function on a compact set attains its maximum). Since $f$ is zero off $K$, we are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
probability of chess clubs problem The chess clubs of two schools consists of, respectively, 8 and 9 players. Four members from each club are randomly chosen to participate in a contest between the two schools. The chosen players from one team are then randomly paired with those from the other team, and each pairing plays a game of chess. Suppose that Rebecca and her sister Elise are on the chess clubs at different schools. What is the probability that (a) Rebecca and Elise will be paired?
Can someone explain step in step? Because from the book, the solution is
$$\frac{\binom{7}{3}\binom{8}{3}3!}{\binom{8}{4}\binom{9}{4}4!}$$
I really don't know what is going on at all, especially $\dfrac{3!}{4!}$???
|
There are $4! = 24$ ways to pair the $4$ chosen members of the two school teams, and $3! = 6$ ways to pair them while ensuring that Rebecca and Elise are paired. Hence the probability that Rebecca and Elise are paired, given that they were both chosen, is $\frac{3!}{4!}$. Now we must multiply this by the probability that both Rebecca and Elise are chosen, to get the probability that they are both chosen and paired together. This means that we multiply $\frac{3!}{4!}$ with $\frac{\binom{7}{3}\binom{8}{3}}{\binom{8}{4}\binom{9}{4}}$, which is exactly $\frac{\binom{7}{3}\binom{8}{3}\,3!}{\binom{8}{4}\binom{9}{4}\,4!}$.
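A short simulation (my own Python sketch; index 0 stands for Rebecca in one club and Elise in the other) agrees with the exact value $\frac{\binom{7}{3}\binom{8}{3}3!}{\binom{8}{4}\binom{9}{4}4!}=\frac1{18}\approx 0.0556$:
import random
from math import comb, factorial

def simulate(trials=200_000):
    hits = 0
    for _ in range(trials):
        a = random.sample(range(8), 4)   # school A picks 4 of its 8 (Rebecca = 0)
        b = random.sample(range(9), 4)   # school B picks 4 of its 9 (Elise = 0)
        if 0 in a and 0 in b:
            random.shuffle(b)            # random pairing: a[i] plays b[i]
            if b[a.index(0)] == 0:
                hits += 1
    return hits / trials

exact = comb(7, 3) * comb(8, 3) * factorial(3) / (comb(8, 4) * comb(9, 4) * factorial(4))
print(exact, simulate())   # both ~ 0.0556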
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Orthogonality lemma sine and cosine I want to know how much is the integral $\int_{0}^{L}\sin(nx)\cos(mx)dx$ when $m=n$ and in the case when $m\neq n$. I know the orthogonality lemma for the other cases, but not for this one.
|
If $n=m$, then we have
$$\intop_{x=0}^{L}\sin\left(nx\right)\cos\left(nx\right)\mathrm{d}x
=\left[\frac{\sin^2\left(nx\right)}{2n}\right]_{x=0}^{x=L}
=\frac{\sin^2\left(nL\right)}{2n},$$
and for $n\neq m$ we have
$$\intop_{x=0}^{L}\sin\left(nx\right)\cos\left(mx\right)\mathrm{d}x
=\frac{1}{2}\left(\intop_{x=0}^{L}\sin\left(\left(n+m\right)x\right)\mathrm{d}x+\intop_{x=0}^{L}\sin\left(\left(n-m\right)x\right)\mathrm{d}x\right)$$
$$=\frac{1}{2}\left(\left[\frac{\cos\left(\left(n+m\right)x\right)}{n+m}\right]_{x=L}^{x=0}+\left[\frac{\cos\left(\left(n-m\right)x\right)}{n-m}\right]_{x=L}^{x=0}\right)
=\frac{1}{2}\left(\frac{1-\cos\left(\left(n+m\right)L\right)}{n+m}+\frac{1-\cos\left(\left(n-m\right)L\right)}{n-m}\right)$$
$$=\frac{1}{2}\cdot\frac{2n-2n\cos\left(nL\right)\cos\left(mL\right)-2m\sin\left(nL\right)\sin\left(mL\right)}{n^2-m^2}.$$
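A symbolic spot-check of the $n\neq m$ formula (a sympy sketch of my own, with the hypothetical sample values $n=3$, $m=5$):
import sympy as sp

x, L = sp.symbols('x L')
n, m = 3, 5
lhs = sp.integrate(sp.sin(n*x) * sp.cos(m*x), (x, 0, L))
rhs = sp.Rational(1, 2) * (2*n - 2*n*sp.cos(n*L)*sp.cos(m*L)
                           - 2*m*sp.sin(n*L)*sp.sin(m*L)) / (n**2 - m**2)
print(sp.simplify(lhs - rhs))   # 0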
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What's wrong with my permutation logic? The given question:
In how many ways the letters of the word RAINBOW be arranged, such that A is always before I and I is always before O.
I gave it a try and thought below:
Letters A, I and O should appear in that order. Then there are four gaps (before A, between A and I, between I and O, and after O), each with four slots, in which the remaining four letters can appear. It means there are a total of 16 places: $$*_1*_2*_3*_4\quad A\quad*_5*_6*_7*_8\quad I\quad*_9*_{10}*_{11}*_{12}\quad O \quad *_{13}*_{14}*_{15}*_{16}$$
Each of the four remaining letters can then appear in any of these 16 places, giving a total of $^{16}P_4 = 43680$ possible arrangements.
But the given answer is just 840 with following explanation:
All the 7 letters of the word RAINBOW can be arranged in 7! ways and 3 particular letters (A,I,O) can be arranged in 3! ways. But the given condition is satisfied by only 1 out of 6 ways. Hence required number of arrangements
$$=\frac{7!}{3!} = 840$$
So what's wrong with my logic? I must be counting certain arrangements multiple times or my logic must be at fault in large. But what's that exactly?
|
You would be entirely correct if there really were 16 places (19 positions in all, counting A, I, O) and the places not filled with letters were filled with spaces, so that
R***A****I*N*BO**W*
was truly distinct from
*R**A****I*N*BO***W (I have used * because spaces are hard to count)
What $ ^{16} P _4$ counts is the number of four-tuples with non-repeating elements from 1 to 16.
You might have the first element represent the position ( in your scheme of 16 positions ) of the letter "R" and the other three the positions of "N", "B", and "W" respectively.
so that R***A****I*N*BO**W* would be represented as (1, 10, 12, 15)
and *R**A****I*N*BO***W would be represented as (2, 10, 12, 16).
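A brute-force enumeration (a two-line Python check of my own) confirms the book's count:
from itertools import permutations

count = sum(1 for p in permutations("RAINBOW")
            if p.index("A") < p.index("I") < p.index("O"))
print(count)   # 840 = 7!/3!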
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 3
}
|
Find a nontrivial unit polynomial in $\mathbb Z_4[x]$ Find a unit $p(x)$ in $\mathbb{Z}_4[x]$ such that $\deg p(x)>1$
What I know:
A unit is an element with an inverse: when the unit is multiplied by its inverse we get the identity element.
But I am confused by the concept of degree
"If $n$ is the largest nonnegative number
for which an $a_n \neq 0$, we say that the degree of $f$ is $n$."
So I am looking for a polynomial in $\mathbb{Z}_4[x]$ that has an inverse..
Also, I am not sure exactly what $\mathbb{Z}_4[x]$ is.
Any ideas on how to get started on this problem?
|
Take $p(x) = 2x^2+1$. Observe that $p(x)^2 = (2x^2+1)^2 = 4x^4+4x^2+1 = 1$, and $p(x)$ has degree $2$.
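You can verify this with a tiny polynomial multiplication mod 4 (a Python sketch of my own, coefficients listed from the constant term up):
def polymul_mod(p, q, m):
    # multiply coefficient lists (lowest degree first) modulo m
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    return out

p = [1, 0, 2]                  # 1 + 2x^2 in Z_4[x]
print(polymul_mod(p, p, 4))    # [1, 0, 0, 0, 0], i.e. the constant polynomial 1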
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Four 6-sided dice are rolled. What is the probability that at least two dice show the same number? Am I doing this right? I split the problem up into the cases of 2 same, 3 same, 4 same, but I feel like something special has to be done for 2 of the same, because what if there are 2 pairs (like two 3's and two 4's)?
This is what I have:
For 2 of the same: $5\times 5\times 6\times {4\choose 2}=900$
For 3 of the same: $5\times 6\times {4\choose 3}=120$
For 4 of the same: $6\times {4\choose 4}=6$
Combined: $900+120+6=1026$
Total possibilities: $6^4=1296$
Probability of at least 2 dice the same: $\frac {1026}{1296}\approx 79.17$%
Confirmation that I'm right, or pointing out where I went wrong would be appreciated. Thanks!
Sorry if the formatting could use work, still getting the hang of it.
|
I split the problem up into the cases of 2 same, 3 same, 4 same, but I feel like something special has to be done for 2 of the same, because what if there are 2 pairs (like two 3's and two 4's)?
Yes. Your cases are
*
*1 quadruplet: $\binom{4}{4}\times \binom{6}{1}$ arrangements.
*1 triplet, 1 singleton: $\binom{4}{3,1}\times \binom{6}{1}\times \binom{5}{1}$ arrangements.
*1 doublet, 2 singletons: $\binom{4}{2,1,1}\times \binom{6}{1}\times \binom{5}{2}$ arrangements.
*2 doublets: $\binom{4}{2,2}\times \binom{6}{2}$ arrangements.
And the complement is the remaining case of 4 singletons: $\binom{4}{1,1,1,1}\times\binom{6}{4}$ arrangements.
Finally, the total space is, of course, of $6^4$ arrangements.
NB: $\dbinom{4}{2,1,1} = \dfrac{4!}{2!\,1!\,1!}$ is a multinomial coefficient.
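A brute-force enumeration (a short Python check of my own) confirms these counts: the complement (four distinct values) has $\binom{4}{1,1,1,1}\binom{6}{4} = 360$ arrangements, so the probability is $936/1296\approx 72.2\%$; the $1026$ in the question counts every two-doublet outcome twice.
from itertools import product
from collections import Counter

rolls = list(product(range(1, 7), repeat=4))
hits = sum(1 for r in rolls if max(Counter(r).values()) >= 2)
print(hits, len(rolls), hits / len(rolls))   # 936 1296 0.7222...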
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1259965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Are derivatives linear maps? I am reading Rudin and I am very confused what a derivative is now. I used to think a derivative was just the process of taking the limit like this
$$\lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{(x+h)-x}$$
But between Apostol and Rudin, I am confused in what sense total derivatives are derivatives.
Partial derivatives much more resemble the usual derivatives taught in high school
$$f(x,y) = xy$$
$$\frac{\partial f}{\partial x} = y$$
But the Jacobian doesn't resemble this at all. And according to my books it is a linear map.
If derivatives are linear maps, can someone help me see more clearly how my intuitions about simpler derivatives relate to the more complicated forms? I just don't understand where the limits have gone, why its more complex, and why the simpler forms aren't described as linear maps.
|
If $f:M\to N$ is some (possibly nonlinear) function (here I have in mind a diffeomorphism), then the Jacobian $J_f$ can be viewed as a linear map taking tangent vectors at some point $p \in M$ and returning a tangent vector at $f(p) \in N$.
Let's consider the case where $M$ and $N$ are both $\mathbb R^3$. $f$ is some vector field (or perhaps it could be viewed as a transformation of the vector space), and the Jacobian $J_f$ tells us about directional derivatives of $f$. Given a direction $v$ at a point $p$, $J_f(v) = (v \cdot \nabla) f|_p$.
If $M$ and $N$ are both $\mathbb R^1$, then what do we have? There's only one linearly independent tangent vector at each point, so $J_f$ is determined by its action on a unit vector $\hat x$, and we get $J_f(\hat x) = \frac{df}{dx} \hat x$. The Jacobian here just tells us how a unit length is stretched or shrunk when we view $f$ as a transformation of the real line.
This is the way in which a Jacobian is a linear map: it tells us how directions in the domain correspond to directions in the range. And even 1d derivatives can be seen in this way. The components of the Jacobian are still the partial derivatives you're familiar with. We're just using those partial derivatives to talk about transformations of directions under some function, some map.
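To make this concrete (a small sympy sketch of my own, with an arbitrary map $f:\mathbb R^2\to\mathbb R^2$): the Jacobian at a point is a matrix, i.e. a linear map, and applying it to a direction $v$ produces the directional derivative of $f$:
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x*y, x + sp.sin(y)])   # some map R^2 -> R^2
J = f.jacobian([x, y])                # the linear map, as a 2x2 matrix of partials
p = {x: 1, y: 2}
v = sp.Matrix([3, 4])                 # a tangent direction at p
print(J.subs(p) * v)                  # the directional derivative (v . grad) f at p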
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
Help with factorial inequality induction proof: $3^n + n! \le (n+3)!$ So I'm asked to prove $3^n + n! \le (n+3)!$, $\forall \ n \in \mathbb N$ by induction. However I'm getting stuck in the induction step.
What I have is:
(n=1)
$3^{(1)} + (1)! = 4$ and $((1) + 3)! = (4)! = 24$
So $4 \le 24$ and the statement holds for n = 1.
(n → n+1)
Assume $3^n + n! \le (n+3)!$ for some $n \in \mathbb N$
So I get $3^n + 2 \cdot 3^n + n!(n + 1)$ and on the RHS $(n + 4)(n + 3)!$ but I can't think/find any way to reduce/manipulate either side by multiplying or adding anything.
The only help my professor gave was:
see if this helps: if $a + b \lt c$ and $a \gt 0$ and $b \gt 0$ then $a \lt c$ and $b \lt c$
But I can't figure out how that helps out either. Thanks in advance for any help and I hope I didn't mess up my first post here.
|
The proof for n = 1 is correct.
Now, let's go on to the inductive step. Let us consider that $\color{blue}{3^n + n!} \le \color{green}{(n+3)!}$ holds true for some integer n. Then the next step is to check what happens to the inequality for some integer n+1:
$3^{n+1}+(n+1)!=$
$3 \times 3^n + (n+1) \times n!=$
$(3^n + 3^n + 3^n) + \underbrace{n! + ... + n!}_{n+1 \space times}=$
$3^n+n!+3^n+n!+3^n+n!+\underbrace{n! + ... + n!}_{n+1-3 \space times}=$
$3 \times (\color{blue}{3^n + n!}) + \underbrace{n! + ... + n!}_{n-2 \space times}$
Now, substituting the inequality obtained for $n$,
$\le 3 \times \color{green}{(n+3)!} + (n-2)\,n!$
$\le (n+4)(n+3)!$, since $(n-2)\,n! \le (n+1)(n+3)!$
$= (n+4)!$
QED
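As a quick numeric sanity check of the statement itself (one line of Python, my own):
from math import factorial

print(all(3**n + factorial(n) <= factorial(n + 3) for n in range(1, 30)))   # True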
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Compute $\lim\limits_{n\to\infty}a_n$ where $a_{n+2}=\sqrt{a_n\,a_{n+1}}$. I managed to show that the limit exists, but I don't know how to compute it.
EDIT:
There are initial terms: $a_1=1$ and $a_2=2$.
|
Note that $$ a_{n+2}\sqrt{a_{n+1}}=a_{n+1}\sqrt{a_n} =\cdots =a_2\sqrt{a_1}=2$$
Hence, if $L$ denotes the limit, letting $n\to\infty$ gives $L\sqrt{L}=2$, so $L=2^{\frac{2}{3}}$.
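Iterating the recurrence numerically (a quick Python check of my own) shows the convergence to $2^{2/3}\approx 1.5874$:
a, b = 1.0, 2.0                   # a_1 = 1, a_2 = 2
for _ in range(60):
    a, b = b, (a * b) ** 0.5      # a_{n+2} = sqrt(a_n * a_{n+1})
print(b, 2 ** (2 / 3))            # both ~ 1.587401...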
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
A valid method of finding limits in two variables functions? I was wondering if in finding the limit of a two variables function (say, $F(x,y)$), I can choose the path by let $y=f(x)$, then find the limit in the same way of that in one variable functions.
For example,
$$
\lim_{(x,y) \to (0,0)} \frac{xy}{x^2+xy+y^2}
$$
(It has no limit there by choosing first $y=0$ and then $y=x$)
So I'm asking if the following procedures are correct:
Let $y=f(x)$ where $f(0)=0$ since the function passes $(0,0)$
The function then becomes:
$$
\frac{xf(x)}{x^2 + xf(x) +f(x)^2}
$$
Then it's an indeterminate form $[0/0]$, so I differentiate,
$$
\frac{xf'(x)+f(x)}{2x + xf'(x)+f(x) +2f(x)f'(x)}
$$
Then it's still $[0/0]$ so I differentiate again,
$$\frac{xf''(x)+2f'(x)}{2+xf''(x)+2f'(x)+2f(x)f''(x)+2f'(x)^2}$$
By substituting $x=0$, I get
$$\lim F(x,y) = \frac{2f'(0)}{2+2f'(0)+2f'(0)^2}$$
Since $f'(0)$ depends on the path I choose, the limit depends on the path I choose also.
Thus, the limit at $(0,0)$ does not exist.
So that's all, the question I have are
*
*is this a valid method to determine existence of limits?
*is this a valid method to find the limit?
(My teacher says it won't work for 2., but I'm still unclear about his explanation.)
(Sorry if I made any mistake or this is a very stupid question, I'm very new to this site and this is my first question, thank you in advance!)
|
If the limit does not exist, you can find two paths that disagree. However, in case the limit does exist, your failure to find such paths is not a proof of anything. You will need to show that the limits are the same for all paths.
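For the example in the question this is easy to see symbolically (a sympy sketch of my own): along the line $y=kx$ the value depends on the slope $k$, so the limit cannot exist:
import sympy as sp

x, k = sp.symbols('x k')
F = (x * (k * x)) / (x**2 + x * (k * x) + (k * x)**2)
print(sp.simplify(F))   # k/(k**2 + k + 1), which depends on the path's slope k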
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to show real analyticity without extending to complex plane Suppose we have some $f \in C^\infty(\mathbb{R},\mathbb{R}).$ For example, $$f(x)=(1+x^2)^{-1}.$$ Using complex analysis, we can easily show $f$ is real analytic. Is there an easy, general method to show this which doesn't use complex analysis?
|
The most popular general method is to calculate the general term $f^{(n)}(a)$, and if that's possible, for every $a$ find an interval $[a-h,a+h]$ such that, if
$$
M_n=\max_{x\in[a-h,a+h]}\lvert \,f^{(n)}(x)\rvert,
$$
then
$$
\limsup_{n\to\infty} \left(\frac{M_n}{n!}\right)^{\!1/n}=L<\infty.
$$
The $M_n$ can be estimated using the Taylor expansion remainder.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How many ordered pairs are there in order for $\frac{n^2+1}{mn-1}$ to be an integer? For how many ordered pairs of positive integers like $(m,n)$ the fraction
$\frac{n^2+1}{mn-1}$
is a positive integer?
|
We have:
$$n^2+1=kmn-k$$
so, reducing mod $n$, we get $-k\equiv 1\pmod n$, i.e. $n$ divides $k+1$; we can write $k+1=nt$, so that $$n^2+1=(nt-1)(mn-1)$$
but if $m,t,n>1$ we have $(nt-1)(mn-1)\geq (2n-1)^2>n^2+1$, which is impossible.
If either $t=1$ or $m=1$, then in both cases $n-1$ divides $n^2+1$; but $\gcd(n^2+1,n-1)=\gcd(2,n-1)\leq 2$, so $n=1$, $n=2$ or $n=3$, and here you have finitely many cases in which to find $m$.
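A brute-force search over a modest range (a Python sketch of my own) turns up exactly the six pairs this case analysis predicts:
sols = [(m, n) for m in range(1, 300) for n in range(1, 300)
        if m * n > 1 and (n * n + 1) % (m * n - 1) == 0]
print(sols)   # [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]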
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $f=x^4-4x^2+16\in\mathbb{Q}[x]$ is irreducible
Prove that $f=x^4-4x^2+16\in\mathbb{Q}[x]$ is irreducible.
I am trying to prove it with Eisenstein's criterion, but without success: for $p=2$, it divides $-4$ and the constant coefficient $16$ and does not divide the leading coefficient $1$, but its square $4$ divides the constant coefficient $16$, so it doesn't work. Therefore I tried to find a shift $f(x\pm c)$ to which the criterion applies:
$f(x+1)=x^4+4x^3+2x^2-4x+13$, but $13$ has only the divisors $1$ and $13$, so there is no prime $p$ satisfying the first condition $p|a_i, i\ne n$; the same problem occurs for $f(x-1)=x^4+...+13$.
For $f(x+2)=x^4+8x^3+20x^2+16x+16$ we run into the same problem we started with: if we set $p=2$, then $2|8$, $2|20$, $2|16$, and $2$ does not divide the leading coefficient $1$, but its square $4$ divides the constant coefficient $16$; again, it doesn't work, and the same happens for $f(x-2)$.
Now I'll check $f(x\pm3)$, but I think it will fail too... I suspect that checking every shift $f(x\pm c)$ won't make this method work, so does anyone have an idea how to prove that $f$ is irreducible?
|
$$x^4-4x^2+16=(x^2-(2+\sqrt{-12}))(x^2-(2-\sqrt{-12}))$$
No rational roots and no factorization into quadratics over the rationals. The polynomial is irreducible over the rationals
edit for those who commented that this is not enough: I factorized over $\mathbb{C}[X]$ and thus proved that there are no rational roots, i.e. no degree $1$ factors. The only factorization possible is therefore into two quadratics. $\mathbb{C}[X]$ is a UFD and therefore we have
$$x^4-4x^2+16=(x-\sqrt{2+\sqrt{-12}})(x+\sqrt{2+\sqrt{-12}})(x-\sqrt{2-\sqrt{-12}})(x+\sqrt{2-\sqrt{-12}})$$
And this is unique. So combining the degree 1 factors in pairs is the only way to factorize in quadratics and there are three different ways to combine and none is rational
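Both halves can be confirmed with a CAS (a sympy check of my own): the polynomial does not factor over $\mathbb Q$, while adjoining $\sqrt3$ splits it as $(x^2-2\sqrt3\,x+4)(x^2+2\sqrt3\,x+4)$:
import sympy as sp

x = sp.symbols('x')
p = x**4 - 4*x**2 + 16
print(sp.factor(p))                         # unchanged: irreducible over Q
print(sp.factor(p, extension=sp.sqrt(3)))   # (x**2 - 2*sqrt(3)*x + 4)*(x**2 + 2*sqrt(3)*x + 4)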
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 9,
"answer_id": 8
}
|
Analytical solution for $\max{x_1}$ in $(x_n)_{n\in\mathbb{N}}$ Let $x_1,x_2,x_3,\ldots$ be a sequence of positive integers. Suppose the following conditions are true for all $n\in\mathbb{N}$:
*
*$n|x_n$
*$|x_n-x_{n+1}|\leq 4$
Find the maximum value of $x_1$.
I can't solve this analytically. I started with $x_1=50$, constructed a sequence that satisfies the conditions, so $\max{x_1}\geq 50$, and repeated this. I found that $60\leq\max{x_1}\leq 63$.
So the question is: how to solve it? Searching case by case through $x_1=10,30,40,50,60,61,62,63$ is not elegant.
$\textbf{Edit:}$
*
*If we take $x_1=50$, then $x_2\in\{46,\not{47},48,\not{49},50,\not{51},52,\not{53},54\}$; my strategy is to go down in numbers so as to have, for some $k$, $x_k=4k$. Example: $x_2=46$, $x_3=42$, $x_4=40$, $x_5=40$, $x_6=36$, $x_7=35$, $x_8=4\times8=32$, so from here we take $x_n=4n$ for $n>8$ and the sequence exists, so $\max{x_1}\geq50$.
|
If $n\ge 9$ then there is at most $1$ multiple of $n+1$ within distance $4$ of $x_n$, and there is at most $1$ multiple of $n$ within distance $4$ of $x_{n+1}$, so a term determines and is determined by the next term. Hence the terms of the sequence appearing for $n\ge 9$ are injectively determined by $x_9$
We clearly have $x_n \le x_1 + 4n$. For $n > x_1$, $(x_1+4n)/n < 5$ and so we must have $x_n/n \in \{1;2;3;4\}$. By the previous point, this implies that $(x_9,x_{10},x_{11},\ldots)$ must be one of the $4$ sequences $(9a,10a,11a,\ldots)$ for $a \in \{1;2;3;4\}$
This leaves you finitely many choices for $x_9$ ($9,18,27,36$). Now you can go back step by step and determine all the possible values for $x_8,x_7,$ and so on down to $x_1$.
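This backward determination is easy to mechanize (a Python sketch of my own; it assumes, per the argument above, that the only feasible values at a large index $N$ are $x_N=aN$ with $a\in\{1,2,3,4\}$). It reports $58$, attained e.g. by $58,54,51,48,45,42,42,40,36$ and then $4n$ onward, which suggests the bracketing $60\le\max x_1\le 63$ in the question overshoots:
def max_x1(N=100):
    feasible = {a * N for a in range(1, 5)}          # assumed values of x_N
    for n in range(N - 1, 0, -1):
        feasible = {x for v in feasible
                    for x in range(max(n, v - 4), v + 5)
                    if x % n == 0}                   # multiples of n within 4 of some x_{n+1}
    return max(feasible)

print(max_x1())   # 58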
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to factor the polynomial $x^4-x^2 + 1$ How do I factor this polynomial: $$x^4-x^2+1$$
The solution is: $$(x^2-x\sqrt{3}+1)(x^2+x\sqrt{3}+1)$$
Can you please explain what method is used there and what methods can I use generally for 4th or 5th degree polynomials?
|
Actually you have:
$$x^4-x^2+1=x^4+2x^2+1-3x^2=(x^2+1)^2-(\sqrt3 x)^2 $$
and use the identity $a^2-b^2=(a-b)(a+b)$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
Lindenbaum's Lemma seemingly inconsistent with Gödel's incompleteness theorem? Lindenbaum's Lemma: Any consistent first order theory $K$ has a consistent complete extension.
First Incompleteness Theorem: Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete.
If the hypothesis of the First Incompleteness Theorem holds for a theory $K$, why doesn't an application of Lindenbaum then yield a contradiction?
|
What you've shown, as Asaf points out, is that Gödel's incompleteness theorem implies that $PA$ has no computable completion.
This addresses your question completely. However, at this point it's reasonable to ask, "How complicated must a completion of $PA$ be?"
It turns out the answer to this question is extremely well-understood (google "PA degree"). One interesting and very important aspect of this is the following. Let $0'$ be the set of all (indices for) computer programs that halt. It's easy to see that $0'$ can be used to compute a completion of $PA$ (or any computably axiomatizable theory), since the question "Is $\varphi$ consistent with $T$?" can be phrased as "Does the machine which searches for proofs of "0=1" from the axioms $T+\varphi$ ever halt?" So there will be some completion of $PA$ which is no more complicated than $0'$.
However, it turns out we can do substantially better. There is a certain class of sets called "low" - low sets are not computable, but are "close to" computable in a precise sense. Roughly speaking, it is no harder to tell if a low theory is consistent than it is to tell if a computable theory is consistent. By the Low Basis Theorem, every computably axiomatizable theory actually has a low completion!
So even though we can't always get computable completions of computable theories, we can always get "not too incomputable" completions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1260994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Dimension of $m\times n$ matrices I'm a bit confused on the notion of the dimension of a matrix, say $\mathbb{M}_{mn}$. I know how this applies to vector spaces but can't quite relate it to matrices.
For example take this matrix:
$$ \left[
\begin{array}{ccc}
a_{11}&\cdots&a_{1n}\\
\vdots & \vdots & \vdots \\
a_{m1}&\cdots&a_{mn}
\end{array}
\right] $$
Isn't the set of matrices in $\mathbb{M}_{mn}$ with exactly one entry $a_{ij}$ set to $1$ and all other entries $0$, of which there are $m\times n$ in total, a basis for $\mathbb{M}_{mn}$? In the sense that we can take some linear combination of them and add them up to create:
$$ a_{11} \left[
\begin{array}{cccc}
1&\cdots&\cdots&0\\
\vdots & \vdots & \vdots & \vdots \\
0&\cdots&\cdots&0
\end{array}
\right] + a_{12} \left[
\begin{array}{cccc}
0 &1 &\cdots&0\\
\vdots & \vdots &\vdots & \vdots \\
0&0&\cdots&0
\end{array}
\right]\\ + \cdots +
a_{mn} \left[
\begin{array}{cccc}
0 &0 &\cdots&0\\
\vdots & \vdots &\vdots & \vdots \\
0&0&\cdots&1
\end{array}
\right]
= \left[
\begin{array}{ccc}
a_{11}&\cdots&a_{1n}\\
\vdots & \vdots & \vdots \\
a_{m1}&\cdots&a_{mn}
\end{array}
\right] $$
So the dimension of all $\mathbb{M}_{mn}$ is $m\times n$?
|
The term "dimension" can be used for a matrix to indicate the number of rows and columns, and in this case we say that an $m\times n$ matrix has "dimension" $m\times n$.
But if we think of the set of $m\times n$ matrices with entries in a field $K$ as a vector space over $K$, then the matrices with exactly one entry equal to $1$, in different positions, and all other entries zero form a basis, as found in the OP, and the vector space has dimension $mn$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Corollary of Gauss's Lemma (polynomials) I am trying to prove the following result. I have outlined my attempt at a proof but I get stuck.
Any help would be welcome!
Theorem:
Let $R$ be a UFD and let $K$ be its field of fractions.
Suppose that $f \in R[X]$ is a monic polynomial.
If $f=gh$ where $g,h \in K[X]$ and $g$ is monic, then $g \in R[X].$
Proof Attempt:
Clearly we have $gh=(\lambda \cdot g_0)(\mu\cdot h_0)$ for some $\lambda, \mu \in K$ and $g_0, h_0 \in R[X]$ primitive.
Write $\lambda=a/b$ and $\mu=c/d$ for some $a,b,c,d \in R$.
Clearing denominators yields $(bd) \cdot f = (ac) \cdot g_0h_0$ where both sides belong to $R[X]$.
By Gauss's lemma $g_0h_0$ is primitive and so taking contents yields $\lambda\mu=1$.
This proves that $f=g_0h_0$ is a factorization in $R[X]$ but not necessarily that $g \in R[X]$.
I can't seem to get any further than this - any help greatly appreciated?
|
why does $g$ and $h$ being monic imply that $k$ and $t$ are in $R$?
Because $kg$ and $th$ are primitive. In particular, they belong to $R[x]$. Since the highest coefficient of $kg$ is $k$, and the highest coefficient of $th$ is $t$, both $k$ and $t$ belong to $R$.
why does $k$ and $t$ being invertible in $R$ imply that $g$ and $h$ are in $R[x]$?
Because $kg\in R[x]$, we get $g=k^{-1}kg\in R[x]$ too. The same for $h$.
If the proof is still not clear to you, feel free to ask more questions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Generalized way to solve $x_1 + x_2 + x_3 = c$ with the constraint $x_1 > x_2 > x_3$? On my example final exam, we are given the following problem:
How many ways can we pick seven balls of three colors red, blue, yellow given
also that the number of red balls must be strictly greater than blue and
blue strictly greater than yellow?
The solution I used (and was given) was a brute force counting. In particular, fix the number of red balls for $0, 1, \dots, 7$ and see how many viable cases we procure each time.
However, I wanted to try and find a more clever way to do it, but couldn't. Is there a better/general way to do this problem when the numbers get larger?
If possible, it would be even better if we solve the following more generalized form:
$$x_1 + \dots + x_n = c, x_1 > \dots > x_n \geq 0$$
|
For the case $n = 3$, since $x_2 > x_3 \geq 0 \Rightarrow x_2 \geq x_3+1 \Rightarrow x_2=x_3+1+r, r \geq 0$, and similarly, $x_1 > x_2 \Rightarrow x_1 \geq x_2+1 \Rightarrow x_1=x_2+1+s = (x_3+1+r)+1+s = x_3+2+r+s$. Substituting these into the equation: $x_3+2+r+s + x_3 + 1 + r + x_3 = c \Rightarrow 3x_3+2r+s = c-3$. From this we can divide into cases with $x_3 = 0, 1,2,...,\lfloor \frac{c-3}{3}\rfloor$. This can generalize to $n$.
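A brute-force enumeration confirms both the direct count and the substitution for the original balls problem ($c=7$); this is a sketch in plain Python:

```python
from itertools import product

c = 7
# Direct enumeration: x1 + x2 + x3 = c with x1 > x2 > x3 >= 0.
direct = sum(1 for x1, x2, x3 in product(range(c + 1), repeat=3)
             if x1 + x2 + x3 == c and x1 > x2 > x3)
# Via the substitution: 3*x3 + 2r + s = c - 3 with x3, r, s >= 0;
# for each valid (x3, r), the value of s is determined uniquely.
subst = sum(1 for x3 in range((c - 3) // 3 + 1)
            for r in range((c - 3 - 3 * x3) // 2 + 1))
print(direct, subst)   # 4 4
```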
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
almost sure convergence for non-measurable functions Let $(\Omega,\mathscr{F},P)$ be a probability space. Assume for each $n$, $Y_n:\Omega\rightarrow\mathbb{R}$ is a function but $Y_n$ is not necessarily $\mathscr{F}$-measurable. In this case, is it still meaningful to talk about almost sure convergence of $Y_n$? Conceptually, we can define almost sure convergence as $$\exists\hat{\Omega}\in\mathscr{F}\quad\mathrm{such\,\,that}\quad P(\hat{\Omega})=1\quad\mathrm{and}\quad \{Y_n(\omega)\}\,\,\mathrm{converges}\,\,\forall \omega\in\hat{\Omega}.$$
In every probability textbook I have, they all define almost sure convergence for "random variables". But I think what I mentioned might arise naturally in some situations. For example, if for each $n$, $\{X^n_\lambda\}_{\lambda\in \Lambda}$ is a class of random variables where $\Lambda$ is uncountable, then $$Y_n\equiv\sup_{\lambda}X_\lambda^n$$ is not necessarily measurable, but still we sometimes want to talk about convergence property of $\{Y_n\}$.
|
Not only a.s. convergence but pointwise convergence, as well, can be defined in the case of sequences of non measurable functions. Let, for instance, $$([0,1],\ \mathscr A=\left\{\emptyset,[0,1/2],(1/2,1],[0,1]\right\},\ \mathbb P([0,1/2])=\mathbb P((1/2,1])=1/2)$$
be a probability space, and let $$X_n(\omega)=\frac{\omega}{n}, \text{ if }\ \omega\in[0,1].$$
Obviously $X_n$ converges pointwise to $0$ on $[0,1]$. So far so good. However, there is no answer to important$^*$ questions. Consider only the following example:
$$\mathbb P\left(X_3<\frac{1}{5}\right)=\ "\mathbb P"\left(\left\{\omega:0\le \omega<\frac{3}{5}\right\}\right)=??$$
There is no answer because $\mathscr A$ and $\mathbb P$ could be extended many different ways.
$^*$ Philosophical-BTW: What is important at all?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
user friendly proof of fundamental theorem of calculus Silly question. Can someone show me a nice easy to follow proof on the fundamental theorem of calculus. More specifically,
$\displaystyle\int_{a}^{b}f(x)dx = F(b) - F(a)$
I know that by just googling fundamental theorem of calculus, one can get all sorts of answers, but for some odd reason I have a hard time following the arguments.
|
The key fact is that, if $f$ is continuous, the function $G(x)=\int_a^xf(t)\,dt$ is an antiderivative for $f$. For this,
$$
\frac1h\,\left(\int_a^{x+h}f(t)\,dt-\int_a^xf(t)\,dt\right)
=\frac1h\,\int_x^{x+h}f(t)\,dt\xrightarrow[h\to0]{}f(x).
$$
The justification of the limit basically plays on the fact that $f$ is continuous. A formal proof requires dealing with the formal definition of continuity. Namely, given $\varepsilon>0$ by definition of continuity there exists $\delta>0$ such that $|f(x)-f(t)|<\varepsilon$ whenever $|x-t|<\delta$. If we choose $h<\delta$, then $|f(t)-f(x)|<\varepsilon$ for all $t\in [x,x+h]$. Then
\begin{align}
f(x)&=\frac1h\int_x^{x+h}f(x)\,dt\leq\frac1h\int_x^{x+h}(\varepsilon +f(t))\,dt=\varepsilon + \frac1h\int_x^{x+h}f(t)\,dt\\[0.3cm]
&\leq\varepsilon+\frac1h\int_x^{x+h}(\varepsilon+f(x))\,dt=2\varepsilon+f(x).
\end{align}
Thus
$$
f(x)-\varepsilon\leq \frac1h\int_x^{x+h}f(t)\,dt\leq f(x)+\varepsilon,
$$
showing the convergence.
Now $G(a)=0$, and $G(b)-G(a)=G(b)=\int_a^bf(t)\,dt$. If $F$ is any other antiderivative of $f$, we have $F'=f=G'$, so $(G-F)'=G'-F'=0$, i.e. $G(x)-F(x)=c$ for some constant $c$. That is, $F(x)=G(x)-c$. Then
$$
F(b)-F(a)=(G(b)-c)-(G(a)-c)=G(b)-G(a)=\int_a^bf(t)\,dt.
$$
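To illustrate the statement numerically (a sketch, assuming `scipy` is available; this is not part of the proof), compare a numerical integral with $F(b)-F(a)$ for a sample $f$:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 2.0
integral, _ = quad(np.cos, a, b)        # numerical value of the integral of cos on [a, b]
print(integral, np.sin(b) - np.sin(a))  # both ~0.9092974, since F(x) = sin(x)
```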
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
}
|
Is it possible to prove this? $\ln(\frac{x}{x-1}) < \frac{100}{x} $ for $ x > 1$ $-\ln(1-(\frac{1}{x})) < \frac{100}{x} $ for $ x > 1$ is what I want to prove. I pulled a negative sign out and I got $\ln(\frac{x}{(x-1)}) < \frac{100}{x} $ for $ x > 1$.
How do I continue with this proof?
Or is it actually possible to prove this
Edit: I want this proof done algebraically; calculus is allowed.
|
It is not true for $x=1+e^{-100}$. We then have
$$ \log\frac{x}{x-1} = \log(x)+100 > 100 $$
but
$$ \frac{100}{x} < 100 $$
because $x>1$.
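Ordinary double precision cannot even represent $1+e^{-100}$, but the counterexample is easy to check with arbitrary precision (a sketch, assuming `mpmath` is available):

```python
from mpmath import mp, exp, log

mp.dps = 150               # enough precision to resolve 1 + e^(-100)
x = 1 + exp(-100)
print(log(x / (x - 1)))    # 100.000...: strictly greater than 100
print(100 / x)             # 99.999...: strictly less than 100
```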
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
}
|
Can you help me subtract intervals? I was reading my abstract math textbook and they subtracted
$[3, 6] - [4, 8) = [3, 4)$. I was wondering if someone could write out how they got to $[3, 4)$. I looked at wikipedia and it said I should go
$[a, b] - [c, d] = [a-d, b-c]$. When I did this, I got $[-5, 2)$. I would be thankful for an explanation-- the book doesn't explain so it's probably really obvious-- but I don't know.
|
You are looking at two different definitions of $A-B$:
Set difference: $A - B = \{ x\in A \, \mid \, x \notin B \}$ which in this case gives $$[3,6]−[4,8) = [3,4)$$
Interval arithmetic: $A - B = \{ x-y \in \mathbb{R} \, \mid \, x\in A, \,y \in B \}$ which in this case gives $$[3,6]−[4,8) = (-5,2]$$
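Both readings can be reproduced mechanically; here is a sketch of the set-difference one with `sympy`, whose set objects use `-` for set difference:

```python
from sympy import Interval

A = Interval(3, 6)            # [3, 6]
B = Interval.Ropen(4, 8)      # [4, 8)
print(A - B)                  # Interval.Ropen(3, 4), i.e. [3, 4)
```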
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Explain this inequality, related to logarithms I am trying to understand a proof of Stirling's formula.
One part of the proof states that, 'Since the log function is increasing on the interval $(0,\infty)$, we get
$$\int_{n-1}^{n} \log(x) dx < \log(n) < \int_{n}^{n+1} \log(x) dx$$ for $n\geq 1$.'
Please could you explain why this is true?
In particular, I am struggling to visualise this inequality graphically/geometrically.
|
$$\int_{n-1}^n\log(x) dx<\int_{n-1}^{n}\log( n) dx=\log(n)$$
using $\log(n)>\log(x)$ for $n-1\leq x<n$.
Similarly:
$$\int_n^{n+1}\log(x)dx>\int_n^{n+1}\log(n)dx=\log(n)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
ADMM formalization I found lots of examples of ADMM formalization of equality constraint problems (all with single constraint). I am wondering how to generalize it for multiple constraints with mix of equality and inequality constraints.
I have a problem
Minimize $||A_x||_1 + \lambda ||A_y||_2 $, such that:
$$A_x X = A_x$$
$$\mathrm{diag}(A_x) = 0$$
$$A_x \ge 0$$
$$A_y \le 0$$
How can I write this in ADMM form?
|
One way to formulate the problem using ADMM is to let the ADMM-variable $X$ contain $A_x$ and $A_y$, i.e. $X = [A_x; A_y]$ (semi-colon denotes stacking, as in Matlab etc.), and let $Z=[Z_1; Z_2; Z_3; Z_4]$ contain four blocks corresponding to $A_x$, $A_x$, $A_x$ and $A_y$ respectively. (I will write $Q$ for $X-I$, where this $X$ is the given matrix from your first constraint, not the ADMM variable; thus $A_x X = A_x$ becomes $A_x Q=0$.)
$X$ and $Z$ would then be linked by
$$
\left[\begin{array}{cc}
I & 0 \\
I & 0 \\
I & 0 \\
0 & I
\end{array}\right]
\left[ \begin{array}{c} A_x \\ A_y \end{array}\right]
= \left[\begin{array}{c} Z_1 \\ Z_2 \\ Z_3 \\ Z_4 \end{array}\right]
$$
where $I$ and $0$ are identity matrices and zero matrices of suitable sizes (this defines the matrices $A$ and $B$ of ADMM).
Now updating $A_x$ amounts to computing the proximal operator for the 1-norm (is the norm $\|\cdot\|_1$ the entry-wise sum of absolute values of elements, or the operator norm induced by the 1-vector norm?). Updating $A_y$ amounts to evaluating the proximal operator of the 2-norm.
The function $g(Z)$ should be $g(Z) = g_1(Z_1) + \ldots + g_4(Z_4)$, where the $g_i$ encode the four constraints $Q A_x=0$, $diag(A_x)=0$, $A_x\geq 0$, $A_y\leq 0$. More precisely, they are indicator functions for the respective sets, so the proximal operators become projections. The norm for these projections is the Frobenius norm (the "vector 2-norm for matrices").
Thus you need to compute the projections on the respective sets, which should be manageable.
Does this make sense?
/Tobias
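For concreteness, here is a rough sketch of the three projections mentioned above (Python with `numpy`; the function names and shapes are hypothetical, and this is not a full ADMM solver):

```python
import numpy as np

def proj_nonneg(Z):
    # Frobenius projection onto {Z : Z >= 0}: clip negative entries.
    return np.maximum(Z, 0.0)

def proj_zero_diag(Z):
    # Frobenius projection onto {Z : diag(Z) = 0}: zero out the diagonal.
    W = Z.copy()
    np.fill_diagonal(W, 0.0)
    return W

def proj_right_nullspace(Z, Q):
    # Frobenius projection onto {Z : Z Q = 0}: remove, from each row,
    # its component in the column space of Q (via the pseudoinverse).
    return Z - Z @ Q @ np.linalg.pinv(Q)

rng = np.random.default_rng(0)
Z, Q = rng.standard_normal((4, 4)), rng.standard_normal((4, 2))
print(np.allclose(proj_right_nullspace(Z, Q) @ Q, 0))            # True
print(np.diag(proj_zero_diag(Z)), proj_nonneg(Z).min() >= 0)     # zeros, True
```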
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Would this theorem also work for any integer $n$, not necessarily a prime ?
Would this theorem also work for any integer $n$, not necessarily a prime ?
I don't see why it should not, can you verify it or do you have an counterexample for a nonprime integer ?
|
In short, it depends on the notion of irreducibility.
In commutative rings that are not domains, there are problems with divisibility - or, the situation is simply a bit more complicated: one gets several different notions of associated elements, thus several notions of irreducible elements, etc.
Just to demonstrate: In this case what can go wrong is that in general $\mathbb{Z}_n[x]$, it is no longer true that $fg=h$ implies $\deg f, \deg g \leq \deg h$. For example, over $\mathbb{Z}_8[x],$ one has
$$(4x^2+4x+2)(4x^{100}+4x+2)=4.$$
Thus, from the fact that $\bar{f}(x)$ has bigger degree than, say $\bar g(x)$, it does not simply follow that $\bar{f}(x)$ does not divide $\bar g(x)$.
So one must treat this more carefully. I am, however, convinced that, if done right, the statement will hold even for general $n$. (I can give more details if anyone wishes.)
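The $\mathbb{Z}_8[x]$ product above is easy to verify by machine (a sketch using `numpy`'s polynomial convolution, reducing the coefficients mod $8$):

```python
import numpy as np

f = np.array([4, 4, 2])                # 4x^2 + 4x + 2 (decreasing degree)
g = np.zeros(101, dtype=int)           # a degree-100 polynomial
g[0], g[-2], g[-1] = 4, 4, 2           # 4x^100 + 4x + 2
product = np.polymul(f, g) % 8
print(set(product[:-1]), product[-1])  # {0} 4: the product is the constant 4 in Z_8[x]
```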
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the Fourier Transform of $2x/(1+x^2)$ I tried doing this the same way you would find the Fourier transform for $1/(1+x^2)$ but I guess I'm having some trouble dealing with the 2x on top and I could really use some help here.
|
Hint: Taking the derivative with respect to $k$ of $$F(k)=\int_{-\infty}^{\infty}\frac{1}{1+x^2}e^{ikx}dx$$
yields
$$F'(k)=i\int_{-\infty}^{\infty}\frac{x}{1+x^2}e^{ikx}dx$$
Thus, the Fourier Transform of $\frac{2x}{1+x^2}$ is $-2i$ times the derivative with respect to $k$ of the Fourier Transform of $\frac{1}{1+x^2}$.
$$\mathscr{F}\left(\frac{2x}{1+x^2}\right)(k)=-2i\,\frac{d}{dk}\,\mathscr{F}\left(\frac{1}{1+x^2}\right)(k)$$
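For completeness: assuming the convention $\mathscr{F}(f)(k)=\int_{-\infty}^{\infty}f(x)e^{ikx}\,dx$, under which $\mathscr{F}\left(\frac{1}{1+x^2}\right)(k)=\pi e^{-|k|}$, this gives
$$\mathscr{F}\left(\frac{2x}{1+x^2}\right)(k)=-2i\,\frac{d}{dk}\left(\pi e^{-|k|}\right)=2\pi i\,\operatorname{sgn}(k)\,e^{-|k|}.$$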
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1261977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Derivatives - optimization (minimum of a function) For which points of $x^2 + y^2 = 25$ the sum of the distances to $(2, 0)$ and $(-2, 0)$ is minimum?
Initially, I did $d = \sqrt{(x-2)^2 + y^2} + \sqrt{(x+2)^2 + y^2}$, and, by replacing $y^2 = 25 - x^2$,
I got $d = \sqrt{-4x + 29} + \sqrt{4x + 29}$, which derivative does not return a valid answer.
Where did I commit a mistake?
Thank you!!
|
For better readability, $$S=\sqrt{29+4x}+\sqrt{29-4x}$$
$$\dfrac{dS}{dx}=\dfrac{2}{\sqrt{29+4x}}-\dfrac{2}{\sqrt{29-4x}}$$
For the extreme values of $S,$ we need $\dfrac{dS}{dx}=0\implies29+4x=29-4x\iff x=0$.
Note, however, that on the circle $-5\le x\le 5$, and $x=0$ gives the maximum $S=2\sqrt{29}$; the minimum is attained at the endpoints, $S(\pm5)=\sqrt{9}+\sqrt{49}=10$, i.e. at the points $(\pm 5,0)$.
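A quick numerical confirmation (a sketch in Python with `numpy`, sampling points on the circle):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 100001)
x, y = 5 * np.cos(t), 5 * np.sin(t)
S = np.hypot(x - 2, y) + np.hypot(x + 2, y)   # sum of distances to (2,0) and (-2,0)
print(S.min(), x[S.argmin()])   # ~10.0   at x = +/-5
print(S.max(), x[S.argmax()])   # ~10.770 at x = 0  (2*sqrt(29))
```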
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1262073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Area enclosed by an equipotential curve for an electric dipole on the plane I am currently teaching Physics in an Italian junior high school. Today, while talking about the electric dipole generated by two equal charges in the plane, I was wondering about the following problem:
Assume that two equal charges are placed in $(-1,0)$ and $(1,0)$.
There is an equipotential curve through the origin, whose equation is
given by:
$$\frac{1}{\sqrt{(x-1)^2+y^2}}+\frac{1}{\sqrt{(x+1)^2+y^2}}=2 $$ and
whose shape is very lemniscate-like:
Is there a fast&tricky way to compute the area enclosed by such a curve?
Numerically, it is $\approx 3.09404630427286$.
|
Here is another method based on the curvilinear coordinates introduced by Achille Hui. He introduced the following change of variables
$$\begin{align}
\sqrt{(x+1)^2+y^2} &= u+v\\
\sqrt{(x-1)^2+y^2} &= u-v
\end{align} \tag{1}$$
Then solving for $x$ and $y$ we shall get
$$\begin{align}
x &= u v\\
y &= \pm \sqrt{-(u^2-1)(v^2-1)}
\end{align} \tag{2}$$
required that
$$-(u^2-1)(v^2-1) \ge 0 \tag{3}$$
It does not look familiar but in fact it is! Taking into account the equations $(2)$ and $(3)$, we can consider the following as a parameterization for the first quadrant of the $xy$ plane
$$\boxed{
\begin{array}{}
x=uv & & 1 \le u \lt \infty \\
y=\sqrt{-(u^2-1)(v^2-1)} & & 0 \le v \le 1
\end{array}} \tag{4}$$
I tried to draw the coordinate curves of these curvilinear coordinates and I just noticed that they are exactly the Elliptic Coordinates and nothing else! You can show this analytically by the change of variables
$$\begin{align}
u &= \cosh p \\
v &= \cos q
\end{align} \tag{5}$$
I leave the further details in this avenue to the reader.
Let us go back to the problem of calculating the area. The equation of the $\infty$ curve was
$$\frac{1}{\sqrt{(x-1)^2+y^2}}+\frac{1}{\sqrt{(x+1)^2+y^2}}=2 \tag{6}$$
so combining $(1)$ and $(6)$ leads to
$$v=\pm \sqrt{u^2-u} \tag{7}$$
and hence the parametric equation of the $\infty$ curve in the first quadrant by considering $(4)$ and $(7)$ will be
$$\boxed{
\begin{array}{}
x=u\sqrt{u^2-u} & & 1 \le u \lt \phi \\
y=\sqrt{-(u^2-1)(u^2-u-1)}
\end{array}}
\tag{8}$$
and finally the integral for the area is
$$\begin{align}
\text{Area} &=4 \int_{0}^{\phi} y dx \\
&=4 \int_{1}^{\phi}y \frac{dx}{du}du \\
&=2 \int_{1}^{\phi} (4u-3)\sqrt{-u(u+1)(u^2-u-1)}du \\
&\approx 3.09405
\end{align} \tag{9}$$
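The integral in $(9)$ is easy to check numerically (a sketch with `scipy`; here $\phi$ denotes the golden ratio, and the `max(..., 0)` guards against tiny negative values from rounding near $u=\phi$):

```python
import numpy as np
from scipy.integrate import quad

phi = (1 + np.sqrt(5)) / 2

def integrand(u):
    # (4u - 3) * sqrt(-u(u+1)(u^2 - u - 1)), nonnegative on [1, phi]
    return (4 * u - 3) * np.sqrt(max(-u * (u + 1) * (u * u - u - 1), 0.0))

area, _ = quad(integrand, 1.0, phi)
print(2 * area)   # ~3.09405
```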
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1262174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 2
}
|
How can I solve the integral $ \int {1 \over {x(x+1)(x-2)}}dx$ using partial fractions? $$ \int {1 \over {x(x+1)(x-2)}}dx$$
$$ \int {A \over x}+{B \over x+1}+{C \over x-2}dx $$
I then simplified out and got:
$$1= x^2(A+B+C) +x(C-2B-A) -2A$$
$$A+B+C=0$$
$$C-2B-A=0$$
$$A=-{1 \over 2}$$
However, I'm stuck because I don't know how to solve for B and C now, if I even did the problem correctly.
|
Generally you want to avoid simultaneous equations. So rather than collect coefficients of powers of $x$ as you have done, write it as $1=A(x+1)(x-2)+Bx(x-2) +Cx(x+1)$. Since this is an identity, you can substitute any value of $x$ into this. Therefore substitute values which will make brackets disappear. For example, putting $x=-1$ will give you the value of $B$, putting $x=2$ will give you the value of $C$, and putting $x=0$ will give you the value of $A$. An even quicker way is to use the Cover-Up Rule. Do you know this?
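The resulting decomposition (and the integral) can be checked with `sympy` (a sketch):

```python
from sympy import symbols, apart, integrate

x = symbols('x')
expr = 1 / (x * (x + 1) * (x - 2))
print(apart(expr))          # terms with A = -1/2, B = 1/3, C = 1/6
print(integrate(expr, x))   # the corresponding logarithms
```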
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1262263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
How to find a basis of a linear space, defined by a set of equations Problem
Find a basis of the intersection $P\cap Q$ of subspaces $P$ and $Q$ given by:
$$ P:
\begin{cases}
x_1 - 2 x_2 + 2 x_4=0,\\
x_2 - x_3 + 2 x_4 = 0
\end{cases}
\qquad Q:
\begin{cases}
-2 x_1 + 3 x_2 + x_3 -6 x_4=0,\\
x_1 - x_2 - x_3 + 4 x_4 = 0
\end{cases}
$$
Attempted solution
The intersection of these 2 sets can be written by joining the sets of equations into 1:
$$
\begin{cases}
x_1 - 2 x_2 + 2 x_4=0,\\
x_2 - x_3 + 2 x_4 = 0\\
-2 x_1 + 3 x_2 + x_3 -6 x_4=0,\\
x_1 - x_2 - x_3 + 4 x_4 = 0\\
\end{cases}
$$
After solving it I got the following matrix:
$$
\begin{pmatrix}
1 & 0 & -2 & 6\\
0 & 1 & -1 & 2\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
\end{pmatrix}
$$
So, the intersection is the set of vectors, satisfying these 2 equations:
$$
\begin{cases}
x_1 = 2 x_3 - 6 x_4,\\
x_2 = x_3 - 2 x_4\\
\end{cases}
$$
We have 2 independent and 2 dependent variables. What to do next to find the basis?
|
Having:
$$
\begin{cases}
x_1 = 2 x_3 - 6 x_4,\\
x_2 = x_3 - 2 x_4\\
\end{cases}
$$
We could set
1) For the first element of basis:
\begin{split}
x_3=1,\quad x_4=0:\\
x_1 = 2\cdot 1 - 6\cdot 0 = 2,\\
x_2 = 1 - 2 \cdot 0 = 1
\end{split}
So, we get: (2, 1, 1, 0).
2) Second element of basis
\begin{split}
x_3=0,\quad x_4=1:\\
x_1 = 2 \cdot 0 - 6\cdot 1 = -6,\\
x_2 = 0 - 2 \cdot 1 = -2
\end{split}
So, we get: (-6, -2, 0, 1).
Solution
So the basis is $(2, 1, 1, 0), \; (-6, -2, 0, 1)$
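A quick machine check of the result (a sketch with `numpy`): both vectors satisfy all four defining equations and are independent.

```python
import numpy as np

# Rows: the four equations defining P and Q.
M = np.array([[ 1, -2,  0,  2],
              [ 0,  1, -1,  2],
              [-2,  3,  1, -6],
              [ 1, -1, -1,  4]])
basis = np.array([[2, 1, 1, 0], [-6, -2, 0, 1]])
print(M @ basis.T)                   # all zeros: both vectors lie in P ∩ Q
print(np.linalg.matrix_rank(basis))  # 2: the two vectors are independent
```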
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1262342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
The residue of $9^{56}\pmod{100}$ How can I complete the following problem using modular arithmetic?
Find the last two digits of $9^{56}$.
I get to the point where I have $729^{18} \times 9^2 \pmod{100}$. What should I do from here?
|
By Carmichael's function, the multiplicative order of any residue coprime to $100$ divides $\lambda(100)=20$ (the same value as for $25$). We see that $9$ is a square, so the order of $9 \bmod 100$ divides $10$. This gives
$$9^{56}\equiv 9^6 \bmod 100$$
Then we can simply multiply a few small powers:
$$9^2 \equiv 81,\quad 9^4 \equiv 61, \quad 9^6 \equiv 41 \equiv 9^{56}$$
(This result also shows - since $9^6 \not\equiv 9$ - that the order of $9 \bmod 100$, not being $2$ or $5$, must be $10$, as others have shown by direct calculation)
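Python's built-in three-argument `pow` confirms the computation instantly:

```python
print(pow(9, 56, 100))   # 41
print(pow(9, 6, 100))    # 41, consistent with 9 having order 10 mod 100
```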
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1262409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Set theory (containing Power Set) Need Help in a proof I am confirming whether my proof is correct or not and need help.
If $ A \subseteq 2^A , $ then $ 2^A \subseteq 2^{2^A} $
Proof:
Given: $ \forall x ($ $ x\in A \rightarrow \exists S $ where $ S \in 2^A \wedge x \in S )$ --($0$)
Goal: $ \forall S \forall x ( $ $ S\in 2^A \wedge x \in S \rightarrow \exists F $ such that $F \in 2^{2^A} \wedge \exists S' $ such that $ S' \in$ $F \wedge x \in S')$
$ \forall S \forall x \text { }S \in 2 ^A \wedge x \in S $ adding to the given. -(1)
New Goal: $\exists F $ such that $F \in 2^{2^A} \wedge \exists S' $ such that $ S' \in$ $F \wedge x \in S' $
By universal instantiation (1) ,
$ A \in 2 ^A \wedge x \in A $
From the above step we have $x \in A$, Hence,
By existential instantiation ($0$) ,
$ A \in 2 ^A \wedge x \in A $
Now I am taking negation of the new goal. (Proof by contradiction)
$\exists F \text { such that } F \in 2^{2^A} \rightarrow \exists S ' \text { such that } S' \in F \rightarrow x \not \in S' $
--($2$)
By existential instantiation of $F$ and $S'$ in ($2$)
$F_0 \in 2^{2^A} \rightarrow A \in F_0 \rightarrow x \not\in A $
S' should be A.
How would I prove that $F_0 \in 2^{2^A}$
PS: Guidance using rule of inference is much appreciated.
|
Here is how I would write down this proof, in a way which makes clear the inherent symmetry. (Ignore the red coloring for now, I will use that below.)$
\newcommand{\calc}{\begin{align} \quad &}
\newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}}
\newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} }
\newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & }
\newcommand{\endcalc}{\end{align}}
\newcommand{\ref}[1]{\text{(#1)}}
\newcommand{\then}{\Rightarrow}
\newcommand{\followsfrom}{\Leftarrow}
\newcommand{\true}{\text{true}}
\newcommand{\false}{\text{false}}
\newcommand{\P}[1]{2^{#1}}
$
For every $\;X\;$, we have
$$\calc
X \in \P{A}
\op\equiv\hint{definition $\ref 0$ of $\;\P{\cdots}\;$}
X \subseteq A
\op\then\hint{using assumption $\;A \subseteq \color{red}{\P{A}}\;$, since $\;\subseteq\;$ is transitive $\ref 1$}
X \subseteq \color{red}{\P{A}}
\op\equiv\hint{definition $\ref 0$ of $\;\P{\cdots}\;$}
X \in \P{\color{red}{\P{A}}}
\endcalc$$
By the definition of $\;\subseteq\;$, this proves that $\;\P{A} \subseteq \P{\color{red}{\P{A}}}\;$.
Above I used the definition of $\;\P{\cdots}\;$ in the following form: for all $\;X\;$ and $\;A\;$ we have
$$
\tag 0
X \in \P{A} \;\equiv\; X \subseteq A
$$
And transitivity of $\;\subseteq\;$ is just
$$
\tag 1
A \subseteq B \;\land\; B \subseteq C \;\then\; A \subseteq C
$$
for all $\;A,B,C\;$.
The nice thing is that the above proof does not use the internal structure of the right hand side $\;\color{red}{\P{A}}\;$. Therefore we can replace it with $\;\color{red}{B}\;$ throughout, resulting in a proof of the following more general theorem:
$$
\tag 2
A \subseteq \color{red}{B} \;\color{green}{\then}\; \P{A} \subseteq \P{\color{red}{B}}
$$
Finally, note that we also can prove the even stronger
$$
\tag 3
A \subseteq B \;\color{green}{\equiv}\; \P{A} \subseteq \P{B}
$$
but let me leave that as an exercise for the reader.
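The general fact $(2)$ can also be illustrated on a tiny concrete instance (a sketch with Python frozensets; here $A=\{\emptyset\}$, which satisfies $A\subseteq 2^A$):

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

A = frozenset({frozenset()})   # A = {∅}: every element of A is a subset of A
PA = powerset(A)
PPA = powerset(PA)
print(A <= PA)     # True: the hypothesis A ⊆ 2^A
print(PA <= PPA)   # True: the conclusion 2^A ⊆ 2^{2^A}
```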
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1263483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How to calculate the integral $I=\int\limits_0^1\frac{x^n-1}{\ln(x)} \,\mathrm dx$ How can we calculate this integral:
$$I=\int\limits_0^1\frac{x^n-1}{\ln(x)}\,\mathrm dx$$
I believe that integral is equal to $\ln(n+1)$, but I don't know how to prove it.
|
Let: $$I(n)=\int_0^1\dfrac{x^n-1}{\ln x}dx$$
Then: \begin{align}I'(n)&=\dfrac{d}{dn}\int_0^1\dfrac{x^n-1}{\ln x}dx=\int_0^1\dfrac{\partial}{\partial n}\left[\dfrac{x^n-1}{\ln x}\right]dx\\
&=\int_0^1 x^n dx=\left.\dfrac{x^{n+1}}{n+1}\right\vert_0^1\\
&=\dfrac{1}{n+1} \end{align}
Therefore: $$I(n)=\int I'(n)\,dn=\ln(n+1)+C,$$ and since $I(0)=\int_0^1 0\,dx=0$, the constant is $C=0$, so $I(n)=\ln(n+1)$.
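A numerical spot-check of $I(n)=\ln(n+1)$ (a sketch, assuming `scipy` is available; the integrand extends continuously to the endpoints, so `quad` handles it directly):

```python
import numpy as np
from scipy.integrate import quad

for n in [1, 2, 5]:
    I, _ = quad(lambda t, n=n: (t**n - 1) / np.log(t), 0, 1)
    print(I, np.log(n + 1))   # the two values agree for each n
```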
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1263568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Is it possible to prove $g^{|G|}=e$ in all finite groups without talking about cosets? Let $G$ be a finite group, and $g$ be a an element of $G$. How could we go about proving $g^{|G|}=e$ without using cosets? I would admit Lagrange's theorem if a proof without talking about cosets can be found.
I have a proof for abelian groups which basically consists in taking the usual proof of Euler's theorem and using it in a group, I do not know if it can be modified to work in arbitrary finite groups.
The proof is as follows: the function from $G$ to $G$ that consists of mapping $h$ to $gh$ is a bijection. Therefore
$\prod\limits_{h\in G}h=\prod\limits_{h\in G}gh$ but because of commutativity $\prod\limits_{h\in G}gh=\prod\limits_{h\in G}g\prod\limits_{h\in G}h=g^{|G|}\prod\limits_{h\in G}h$.
So we have $\prod\limits_{h\in G}h=g^{|G|}\prod\limits_{h\in G}h$.
The cancellation property yields $e=g^{|G|}$.
I am looking for some support as to why it may be hard to prove this result without talking about cosets, or if possible an actual proof without cosets.
Thank you very much in advance, regards.
|
Let me take a shot-
Let $o(g)=n$ for some arbitrary $g \in G$; then $g^n=e$ (and $n$ is the least such positive integer). Now suppose $g^{|G|}\neq e$. Then there exists $t \in \mathbb{Z}$ greater than $1$ such that $g^{|G|t} = e$, but then by the division algorithm there exist $q,r \in \mathbb{Z}$ such that $|G|t=nq+r$ and $0\leq r <n \implies g^{nq+r}=g^r=e \implies r=0 \implies |G| = \frac{nq}{t} \implies g^{|G|}=g^{n(q/t)} \neq e$ (by hypothesis), but $g^n=e$.
Now the question is why $t$ has to divide $q$; I argue (avoiding the fact that the order of an element divides the order of $G$, which is the question itself) that it must, since once $g^n=e$, raising $e$ to the power $\frac{q}{t}$ doesn't make sense if $t$ does not divide $q$.
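Whatever proof one prefers, the statement itself is easy to test exhaustively on a small nonabelian group; here is a sketch with $S_3$ represented as permutation tuples:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]], with permutations stored as tuples
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))   # S_3, a nonabelian group with |G| = 6
e = tuple(range(3))
for g in G:
    power = e
    for _ in range(len(G)):
        power = compose(power, g)  # build up g^{|G|}
    assert power == e
print("g^|G| = e holds for every g in S_3")
```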
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1263674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Why must $n$ be even if $2^n+1$ is prime? This is a necessary step in a problem I am working on.
|
$n$ has to be a power of $2$.
suppose $n$ is not a power of $2$; then we can write $n=dk$ with $d$ odd and $d>1$.
Then $2^n+1=(2^k)^d+1=(2^k)^d+1^d$ ,now use the high school factorization for $x^d +y^d$ which is true when $d$ is odd that says $x^d+y^d=(x+y)(x^{d-1}-x^{d-2}y+x^{d-3}y^2\dots +y^{d-1})$.
In this case $x=2^k$ and $y=1$.
We get:
$(2^k)^d+1=(2^k+1)((2^k)^{d-1}-(2^{k})^{d-2}+(2^k)^{d-3}\dots +1)$ This tells us our number is not prime.
Fenyang Wang gives a simpler way to prove that $n$ must be even: if $n$ were odd, then $2^n+1\equiv(-1)^n+1\equiv0\pmod 3$, and the only prime multiple of three is three itself.
However it is useful to note the exponent has to be a power of two, since powers of two are a lot scarcer than even numbers. Also notice that in my solution you can exchange the base $2$ for any integer greater than $1$.
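A quick empirical check in the small range below (a sketch, assuming `sympy` is available) shows primality of $2^n+1$ occurring only at powers of $2$:

```python
from sympy import isprime

for n in range(1, 33):
    if isprime(2**n + 1):
        print(n)   # 1, 2, 4, 8, 16: all powers of 2 (the known Fermat primes)
```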
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1263781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Invertible skew-symmetric matrix I'm working on a proof right now, and the question asks about an invertible skew-symmetric matrix. How is that possible? Isn't the diagonal of a skew-symmetric matrix always $0$, making the determinant $0$ and therefore the matrix is not invertible?
|
No, the diagonal being zero does not mean the matrix must be non-invertible. Consider $\begin{pmatrix} 0 & 1 \\ -1 & 0 \\ \end{pmatrix}$. This matrix is skew-symmetric with determinant $1$. Edit: as a brilliant comment pointed out, it is the case that if the matrix is of odd order, then skew-symmetric will imply singular. This is because if $A$ is an $n \times n$ skew-symmetric matrix, we have $\det(A)=\det(A^T)=\det(-A)=(-1)^n\det(A)$. Hence in the instance when $n$ is odd, $\det(A)=-\det(A)$; over $\mathbb{R}$ this implies $\det(A)=0$.
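Both observations are easy to check numerically (a sketch with `numpy`):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.linalg.det(J))   # 1.0: even order, invertible

B = np.random.randn(5, 5)
A = B - B.T               # a random 5x5 (odd order) skew-symmetric matrix
print(np.linalg.det(A))   # ~0 up to floating-point error
```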
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1263887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluating trigonometric limit: $\lim_{x \to 0} \frac{ x\tan 2x - 2x \tan x}{(1-\cos 2x)^2}$
Evaluate $\lim_{x \to 0} \cfrac{ x\tan 2x - 2x \tan x}{(1-\cos 2x)^2} $
This is what I've tried yet:
$$\begin{align} & \cfrac{x(\tan 2x - 2\tan x)}{4\sin^4 x} \\
=&\cfrac{x\left\{\left(\frac{2\tan x}{1-\tan^2 x} \right) - 2\tan x\right\}}{4\sin^4 x}\\
=& \cfrac{2x\tan x \left(\frac{\tan^2 x}{1 - \tan^2 x}\right) }{4\sin^4 x} \\
=& \cfrac{x\tan^3 x}{2\sin^4 x (1-\tan^2 x)} \\
=& \cfrac{\tan^3 x}{2x^3\left(\frac{\sin x}{x}\right)^4(1-\tan^2 x)} \\
=& \cfrac{\frac{\tan^3 x}{x^3} }{2\left(\frac{\sin x}{x}\right)^4(1-\tan^2 x)}\end{align}$$
Taking limit of the above expression, we've :
$$\lim_{x\to 0} \cfrac{\frac{\tan^3 x}{x^3} }{2\left(\frac{\sin x}{x}\right)^4(1-\tan^2 x)} = \lim_{x\to 0} \cfrac{\cos^2x}{2\cos 2x} = \cfrac{1}{2} $$
Firstly, is my answer right or am I doing somewhere wrong?
Secondly, this seems a comparatively longer method than expected for objective type questions. I'm seeking for a shortcut method for such type of questions. Is there any method I should've preferred?
|
Well, let's try something different from using power series expansions. Here, we simplify using trigonometric identities to reveal that
$$\frac{x\tan 2x-2x \tan x}{(1-\cos 2x)^2}=\frac{2x}{\sin 4x }=\frac{1}{2\text{sinc}(4x)}$$
where the sinc function is defined as $\text{sinc}(x)=\frac{\sin x}{x}$.
The limit as $x \to 0$ is trivial since $\text{sinc}(4x) \to 1$ . The limit is $1/2$ as expected.
NOTE $1$: Establishing the identity
Using standard trigonometric identities, we can write
$$\begin{align}
x\tan 2x-2x \tan x&=\frac{2x\sin x\cos x}{\cos 2 x}-2x\frac{\sin x}{\cos x}\\\\
&=2x \sin x \frac{\sin^2x}{\cos x\cos 2x}\\\\
&=2 \frac{\sin^4 x}{\text{sinc}( 4 x)}
\end{align}$$
and
$$(1-\cos 2x)^2=4\sin^4 x$$
Putting it together reveals that
$$\frac{x\tan 2x-2x \tan x}{(1-\cos 2x)^2}=\frac{1}{2\text{sinc}(4x)}$$
NOTE $2$: Series expansion is facilitated by simplifying using trigonometric identities
We can use the Laurent series for the cosecant function
$$\csc x=\frac1 x+\frac16 x+\frac{7}{360}x^3+\frac{31}{15120}x^5+O(x^7)$$
to establish that
$$\begin{align}
\frac{x\tan 2x-2x \tan x}{(1-\cos 2x)^2}&=\frac{2x}{\sin 4x }=2x\text{csc}(4x)\\\\
&=\frac12 +\frac43 x^2 +\frac{112}{45}x^4+\frac{3968}{945}x^6+O(x^7)
\end{align}$$
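A quick numeric sanity check of the limit (a sketch in Python):

```python
import numpy as np

for x in [0.1, 0.01, 0.001]:
    val = (x * np.tan(2 * x) - 2 * x * np.tan(x)) / (1 - np.cos(2 * x))**2
    print(x, val)   # tends to 0.5, consistent with the series 1/2 + (4/3)x^2 + ...
```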
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1263968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
The definition of span In Linear Algebra by Friedberg, Insel and Spence, the definition of span (pg-$30$) is given as:
Let $S$ be a nonempty subset of a vector space $V$. The span of $S$,
denoted by span$(S)$, is the set containing of all linear
combinations of vectors in $S$. For convenience, we define
span$(\emptyset)=\{0\}$.
In Linear Algebra by Hoffman and Kunze, the definition of span (pg-$36$) is given as:
Let $S$ be a set of vectors in a vector space $V$. The subspace
spanned by $S$ is defined to be intersection $W$ of all subspaces of
$V$ which contain $S$. When $S$ is finite set of vectors, $S =
\{\alpha_1, \alpha_2, ..., \alpha_n \}$, we shall simply call $W$ the
subspace spanned by the vectors $\alpha_1, \alpha_2, ..., \alpha_n$.
I am not able to understand the second definition completely. How do I relate "set of all linear combinations" and "intersection $W$ of all subspaces"? Please help.
Thanks.
|
Let $S$ be a non-empty subset of a vector space $V$. The the set of all linear combinations of finite sets of elements of $S$ is called the linear span of $S$ and is denoted by $L(S)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 4
}
|
Defining Equivalence relations So I am not really comfortable with equivalence relations, so this example from Wikipedia gives me trouble. Here is what it says:
Let the set $\{a,b,c\}$ have the equivalence relation $\{(a,a),(b,b),(c,c),(b,c),(c,b)\}$.
So it can also be $\lbrace(a,a), (b,b), (c,c) \rbrace$ if I so choose?
|
Your relation is indeed an equivalence relation:
Reflexivity clearly holds, as symmetry does. Transitivity does hold since
$$\forall x,y,z\in X: xRy \wedge yRz \implies xRz $$
is a true statement: $xRy \wedge yRz$ is true in this case if and only if $x=y=z$, in which case clearly (reflexivity) $xRz$.
EDIT: Simpler argument (without symbolic logic)
You want to prove that the relation $R=\lbrace(a,a), (b,b), (c,c) \rbrace$ defined on the set $X=\{a,b,c\}$ is an equivalence relation, so you must prove three properties:
*
*For every $x\in X$, $(x,x)\in R$. (Reflexivity)
*For every $x,y\in X$, if $(x,y)\in R$ then necessarily $(y,x)\in R$ (Symmetry)
*For every $x,y,z\in R$, if $(x,y)\in R$ and $(y,z)\in R$, then necessarily $(x,z)\in R$. (Transitivity).
We now prove those properties.
*
*Take any element $x$ of $X$, we have only three possibilities, namely $x$ equals $a$, $b$ or $c$, in any case, $(x,x)\in R$ so the relation is reflexive.
*Take any element $x$ and $y$ of $X$, we must prove that if $(x,y)\in R$, then $(y,x)\in R$. We only need to focus on the cases where $(x,y)\in R$ (if $(x,y)\notin R$ there's nothing to prove!), so the only cases are $x=y=a, b, c$ where the symmetry clearly holds.
*Here's the point, we must take any three elements $x,y,z$ of $X$ and prove that if $(x,y)\in R$ and $(y,z)\in R$, then $(x,z)\in R$. Again, we only focus on the case $(x,y)\in R$ and $(y,z)\in R$, but this happens only if $x=y=z$ and therefore, clearly, $(x,z)\in R$.
Please notice not every relation you define on $X$ becomes an equivalence relation. For example, if you choose to add (only) $(a,b)$ to $R$, then symmetry is lost.
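The three properties can also be checked mechanically for any finite relation (a sketch in Python):

```python
def is_equivalence(X, R):
    reflexive = all((x, x) in R for x in X)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, w) in R
                     for (x, y) in R for (z, w) in R if y == z)
    return reflexive and symmetric and transitive

X = {'a', 'b', 'c'}
print(is_equivalence(X, {('a','a'), ('b','b'), ('c','c')}))             # True
print(is_equivalence(X, {('a','a'), ('b','b'), ('c','c'), ('a','b')}))  # False: not symmetric
```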
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Is it true that $H = H_1 \times H_2\times \dots \times H_r$? Suppose that $G= G_1 \times \dots\times G_r$ be a decomposition of group $G$ into its normal subgroups. Let $H_i \leq G_i$ for every $i$. We know that for every $i \neq j$, we have $[G_i, G_j]=1$ and so $[H_i, H_j]=1$.
a) Is it true that $H := H_1 H_2 \cdots H_r$ is a subgroup of $G$?
b) Is it true that $H = H_1 \times H_2 \times \dots \times H_r$?
|
Yes, note that that set is closed under the group operation and inverses since all the subgroups commute with each other, so it is a subgroup.
More explicitly consider $(h_1\cdots h_r)(k_1\cdots k_r)= (h_1k_1)(h_2k_2)\cdots(h_rk_r)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can we have a one-one function from [0,1] to the set of irrational numbers? Since both of them are uncountable sets, we should be able to construct such a map. Am I correct?
If so, then what is the map?
|
Both sets $[0,1]$ and $[0,1]\setminus\mathbb Q$ have the same cardinality $\mathfrak c=2^{\aleph_0}$, so there is a bijection between them.
If you want write down some explicit bijection, you can use basically the standard Hilbert's hotel argument which shows that if $|A|\ge\aleph_0$ then $|A|+\aleph_0=|A|$.
So let us try to describe some bijection $f \colon [0,1] \to [0,1]\setminus\mathbb Q$.
*
*Choose some infinite sequence $x_n$, $n=0,1,2,3,\dots$ of distinct irrational numbers in the interval $[0,1]$.
*Choose some bijection $g\colon \mathbb N\to\mathbb Q\cap[0,1]$.
Then you can define $f$ as:
*
*$f(x_n)=x_{2n}$;
*$f(g(n))=x_{2n+1}$;
*$f(x)=x$ for $x\in [0,1] \setminus \left(\{x_n; n=0,1,2,\dots\}\cup\mathbb Q\right)$
Let me add links to some posts where a very similar ideas can be used to construct a bijection between two given sets:
*
*How to define a bijection between $(0,1)$ and $(0,1]$?
*Construct some explicit bijective functions
*Constructing a bijection
*Construct a bijection from $\mathbb{R}$ to $\mathbb{R}\setminus S$, where $S$ is countable
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 8,
"answer_id": 5
}
|
Adding rows to calculate the determinant.
Evaluate the determinants given that $\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=-6.$
*
*$\begin{vmatrix} a+d & b+e & c+f \\ -d & -e & -f \\ g & h & i \end{vmatrix}$
*$\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g+3a & h+3b & i+3c \end{vmatrix}$
Here is what I have tried:
1.
$\begin{vmatrix} a+d & b+e & c+f \\ -d & -e & -f \\ g & h & i \end{vmatrix}\stackrel{\text{add row 2 to row 1}}=\begin{vmatrix} a & b & c \\ -d & -e & -f \\ g & h & i \end{vmatrix}\stackrel{\text{factor out $-1$}}=-\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=-(-6)=6.$
2.
$\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g+3a & h+3b & i+3c \end{vmatrix}\stackrel{\text{row 1 times -3, add to row 3}}{=}\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g & h & i \end{vmatrix}\stackrel{\text{factor out 2}}{=}2\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=2(-6)=-12.$
Did I do these correctly?
I've tried some cases with numbers where adding a multiple of one row to another and found that it doesn't not change the value of the determinant. But I can't seem to grasp the intuition as to why this is so from numeric calculations.
Why is this so?
|
Geometrically, the fact that you can add multiples of rows to each other while keeping the determinant the same is a reflection of the fact that the determinant can be seen as the volume of the parallelepiped with the rows or columns as its edge vectors.
Adding a multiple of one row to another has the effect of simply skewing the parallelepiped. Much like skewing a parallelogram does not change its area (since neither the base nor the height changes length), the same holds for a parallelepiped, hence the determinant stays the same.
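The same fact is easy to see numerically (a sketch with `numpy`): a row operation of the form "add a multiple of another row" preserves the determinant, while scaling a row scales it.

```python
import numpy as np

A = np.random.randn(3, 3)
B = A.copy(); B[2] += 5 * A[0]   # add 5 * (row 1) to row 3: det unchanged
C = A.copy(); C[1] *= 2          # scale row 2 by 2: det doubles
print(np.linalg.det(A), np.linalg.det(B))       # equal (up to rounding)
print(2 * np.linalg.det(A), np.linalg.det(C))   # equal (up to rounding)
```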
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Finding particular solution to $y'' + 2y' - 8y = e^{2x} $ $$y'' + 2y' - 8y = e^{2x} $$
How do I find the particular solution?
I tried setting: $y = Ae^{2x} => y' = 2Ae^{2x} => y''= 4Ae^{2x}$
If I substitute I get: $4Ae^{2x} + 4Ae^{2x} - 8Ae^{2x} = e^{2x} => 0 = e^{2x}$
What am I doing wrong?
|
You already received answers.
When you have a doubt (and when your work leads to something like $0=e^{2x}$, as you honestly pointed out), just do what you did but considering now that $A$ is a function of $x$. So, $$y=A\,e^{2x}$$ $$y'=A'\,e^{2x}+2A\,e^{2x}$$ $$y''=A''\,e^{2x}+4A'\,e^{2x}+4A\,e^{2x}$$ Plug all of that in the differential equation, divide both sides by $e^{2x}$ and get $$A''+6A'=1$$ So $A$ cannot be a constant. The simplest choice is $A=\frac{x}6$. (Your constant guess failed because $e^{2x}$ already solves the homogeneous equation: the characteristic polynomial $r^2+2r-8=(r-2)(r+4)$ has the root $r=2$.)
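The computation can be confirmed symbolically (a sketch, assuming `sympy` is available):

```python
from sympy import symbols, Function, exp, dsolve, Eq, simplify

x = symbols('x')
y = Function('y')
ode = Eq(y(x).diff(x, 2) + 2 * y(x).diff(x) - 8 * y(x), exp(2 * x))
print(dsolve(ode))   # general solution contains the particular term x*exp(2*x)/6

# Direct check that y_p = x*exp(2x)/6 works:
yp = x * exp(2 * x) / 6
print(simplify(yp.diff(x, 2) + 2 * yp.diff(x) - 8 * yp))   # exp(2*x)
```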
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Find the value of the given limit.
The value of $\lim_{x\to \infty} (x+2) \tan^{-1} (x+2) - x\tan^{-1} x $ is $\dots$
a) $\frac{\pi}{2} $ $\qquad \qquad \qquad$ b) Doesn't exist $\qquad \qquad \qquad$ c) $\frac{\pi}{4}$ $\qquad \qquad$ d)None of the above.
Now, this is an objective question and thus, I expect that there must be an easier way to do it either by analyzing the options or something else. I'm not sure about this though!
What I've done yet is trying to apply : $\tan^{-1}a - \tan^{-1}b$ formula but it requires certain conditions to hold true which are not specified here. I'm not sure how we will approach this! Any kind of help will be appreciated.
|
Set $1/x=h$ to get $$\lim_{h\to0^+}\dfrac{(1+2h)\tan^{-1}\dfrac{1+2h}h-\tan^{-1}\dfrac1h}h$$
$$=\lim_{h\to0^+}\dfrac{\tan^{-1}\dfrac{1+2h}h-\tan^{-1}\dfrac1h}h+2\lim_{h\to0^+}\tan^{-1}\dfrac{1+2h}h$$
Now,
$$\tan^{-1}\dfrac{1+2h}h-\tan^{-1}\dfrac1h=\tan^{-1}\left[\dfrac{\dfrac{1+2h}h-\dfrac1h}{1+\dfrac{1+2h}h\cdot\dfrac1h}\right]=\tan^{-1}\dfrac{2h^2}{(h+1)^2}$$
$$\implies\lim_{h\to0^+}\dfrac{\tan^{-1}\dfrac{1+2h}h-\tan^{-1}\dfrac1h}h=\lim_{h\to0^+}O(h)=0$$
Finally, $$\lim_{h\to0^+}\tan^{-1}\dfrac{1+2h}h=\tan^{-1}(+\infty)=+\dfrac\pi2$$
so the limit equals $0+2\cdot\dfrac\pi2=\pi$, i.e. option (d).
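A numeric check confirms the value $\pi$ (a sketch in Python):

```python
import math

for x in [1e2, 1e4, 1e6]:
    print(x, (x + 2) * math.atan(x + 2) - x * math.atan(x))   # approaches pi
```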
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Inverse Rule for Formal Power Series I am just really starting to get into formal power series and understanding them. I'm particularly interested in looking at the coefficients generated by the inverse of a formal power series:
$$\left(\sum_{n\ge 0}a_nx^n\right)^{-1}=\sum_{n\ge 0}b_nx^n$$
I first thought that my approach would be looking at
$$\frac{1}{\sum_{n\ge 0}a_nx^n}$$
But I'm more thinking that since we know that a series is invertible in the ring if $a_0$ is invertible in the ring of coefficients. Thus, since if we assume it is, and since the unit series is $\{1,0,0,0,....\}$ then we have
$$\left(\sum_{n\ge 0}a_nx^n\right)\left(\sum_{n\ge 0}b_nx^n\right)=1$$
Thus we know that $a_0b_0=1$ and thus $b_0=\frac1{a_0}$. And for the remaining terms we are just looking at the convolution generated by the Cauchy Product and so
$$0=\sum_{j=0}^ka_jb_{k-j}$$
$$-a_0b_k=\sum_{j=1}^ka_jb_{k-j}$$
$$b_k=\frac{-1}{a_0}\sum_{j=1}^ka_jb_{k-j}$$
And thus we have a recursive definition.
Is there another approach that defines the numbers $b_k$ without recursive means? Are you forced to only recursive methods when operating on the ring of formal power series to calculate coefficents?
|
As closed a form as I could get I posted here. It looks pretty ugly... but I'm not sure how much prettier it can get.
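For what it's worth, the recursion from the question is immediate to implement (a sketch in Python with exact rational arithmetic):

```python
from fractions import Fraction

def series_inverse(a, N):
    # b_0 = 1/a_0;  b_k = -(1/a_0) * sum_{j=1}^{k} a_j * b_{k-j}
    b = [Fraction(1) / a[0]]
    for k in range(1, N):
        s = sum(a[j] * b[k - j] for j in range(1, k + 1) if j < len(a))
        b.append(-s / a[0])
    return b

# Example: 1/(1 - x) = 1 + x + x^2 + ...
print(series_inverse([Fraction(1), Fraction(-1)], 5))   # [1, 1, 1, 1, 1]
```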
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 1
}
|
What does $\sim$ in $X\sim \mathcal{N}(\mu,\sigma^{2})$ really mean? This is a bit of a silly question, but I can't seem to find the answer anywhere.
I feel like $X\sim \mathcal{N}(\mu,\sigma^{2})$ means that $\sim$ is a relation, but if it is a relation, what precisely is this relation?
If this is a relation, could I instead write $X\in \mathcal{N}(\mu,\sigma^{2})/ \sim$?
One definition I think would be reasonable is to say that $X \sim Y$ if $X$ and $Y$ have the same characteristic or moment generating function, but it seems a little heavy handed to be the "right" definition.
Another definition I would guess is to put a topology on something which characterizes the random variables and say $X\sim Y$ if they satisfy a homeomorphism?
From the homeomorphism viewpoint, I would want to conclude that if $X$ is a random variable whose image is always non-negative, then $X^{2}$ could be normally distributed since you can take the positive square root and exhibit a bijection between $X$ and $X^{2}$ without going into $\mathbb{C}$, but I know this to not be true since it is Chi-Squared, so this can't be a reasonable definition.
|
It means "distributed like". It's just a shorthand way of saying "$X$ assumes a normal distribution with these parameters."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Let $a_n$ be defined inductively by $a_1 = 1, a_2 = 2, a_3 = 3$, and $a_n = a_{n−1} + a_{n−2} + a_{n−3}$ for all $n \ge 4$. Show that $a_n < 2^n$.
Suppose that the numbers $a_n$ are defined inductively by $a_1 = 1, a_2 = 2, a_3 = 3$, and $a_n = a_{n−1} + a_{n−2} + a_{n−3}$ for all $n \geq 4$. Use the Second Principle of Finite Induction to show that $a_n < 2^n$ for every positive integer $n$.
Source: David Burton's Elementary Number Theory, p. 8, question 13 of "Problems 1.1".
|
Hint: Consider instead the sequence defined by the same initial values and $\displaystyle a_n = \sum_{k=1}^{n-1} a_k$ for $n \geq 4$.
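A quick numerical check of the claimed bound (a sketch in Python):

```python
a = [1, 2, 3]                                       # a_1, a_2, a_3
while len(a) < 30:
    a.append(a[-1] + a[-2] + a[-3])
print(all(a[n - 1] < 2**n for n in range(1, 31)))   # True
```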
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $f_n(x)=\frac{1-(x/b)^n}{1+(a/x)^n}$ is uniformly convergent on the interval $[a+\epsilon ; b - \epsilon]$ Title says it all; I have to prove that the function sequence $f_n(x)=\dfrac{1-(x/b)^n}{1+(a/x)^n}$ is uniformly convergent on the interval $[a+\epsilon ; b - \epsilon]$, with $0<\epsilon<(b-a)/2$ - I've already shown that the sequence is pointwise convergent on $f(x)=1$, and intuitively it's pretty obvious, that for a high enough $N \in \mathbb{N}$ that $\sup\{|f_n(x)-1|\}<\epsilon$ for $n \geq N$ - but I am having trouble actually proving it, that is writing down a proper proof of it. Any hints?
Much appreciated
|
For all $x \in [a + \epsilon, b - \epsilon]$,
$$|f_n(x) - 1| = \frac{(a/x)^n + (x/b)^n}{1 + (a/x)^n} \le (a/x)^n + (x/b)^n \le \left(\frac{a}{a + \epsilon}\right)^n + \left(\frac{b - \epsilon}{b}\right)^n.$$
Thus
$$\sup_{x\in [a+\epsilon,b-\epsilon]} |f_n(x) - 1| \le \left(\frac{a}{a+\epsilon}\right)^n + \left(\frac{b-\epsilon}{b}\right)^n.$$
Now finish the argument.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
I cannot solve this limit $$
\lim_{n\to\infty}\frac{(\frac{1}{n}+1)^{bn+c+n^2}}{e^n}=e^{b-\frac{1}{2}}
$$
I am doing it like this, and I cannot find the mistake:
$$
\lim_{n\to\infty}\frac{1}{e^n}e^{n+b+c/n}=
\lim_{n\to\infty}e^{n+b-n+c/n}=e^b
$$
|
hint: $a_n=\left(\dfrac{\left(1+\frac{1}{n}\right)^n}{e}\right)^n \Rightarrow \ln (a_n) = \dfrac{\ln(1+\frac{1}{n})-\frac{1}{n}}{\frac{1}{n^2}}\to -\frac{1}{2}$ by L'Hôpital's rule with $x = \frac{1}{n} \to 0 \Rightarrow a_n \to e^{-1/2}$. Given that the answer is $e^{b-1/2}$, can you figure out the other part?
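The corrected value of $a_n$ is also visible numerically (a sketch in Python; `log1p` keeps the cancellation accurate):

```python
import math

for n in [10**3, 10**5, 10**7]:
    log_an = n * n * math.log1p(1 / n) - n   # log of ((1 + 1/n)^n / e)^n
    print(n, math.exp(log_an))               # approaches e^(-1/2) ≈ 0.6065
```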
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1264971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Entropy Calculation and derivation of logarithm I have probabilities as
$$p_1 = 0.4,\ p_2 = 0.3,\ p_3=0.2,\ p_4=0.1$$
hence entropy is given by:
$$H(x) = -\big(0.4\cdot \log_2(0.4) + 0.3\cdot \log_2(0.3) + 0.2\cdot \log_2(0.2) + 0.1\cdot \log_2(0.1)\big)$$
I derive this to
$$H(x) = -\big(1 - \log_2(10) + 0.3\cdot \log_2(3)\big)$$
and I am unable to derive it further
can you please say if I just need to use calculator or it is possible to use log tricks.
|
You could take $$\begin{align}-\big(1 - \log_2(10) + 0.3\cdot \log_2(3)\big) = -\big(1 - \log_2(2\cdot 5) + 0.3\cdot \log_2(3)\big) \\ = -\big(1 - 1-\log_2(5) + 0.3\cdot \log_2(3)\big) \\ =\log_2(5)-0.3\cdot \log_2(3)\end{align}$$ I don't think there is much left you can do with this besides sticking it into a calculator.
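Numerically (a sketch in Python), both expressions agree:

```python
import math

p = [0.4, 0.3, 0.2, 0.1]
H = -sum(q * math.log2(q) for q in p)
print(H, math.log2(5) - 0.3 * math.log2(3))   # both ≈ 1.84644
```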
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can the range of a variable be inclusive infinity? Can a range be $[0, \infty]$ or must it be $[0, \infty)$ because you can never quite reach infinity?
Clarification:
$[0, 1]$ means $0 \leqslant x \leqslant 1 $, while $(0, 1)$ means $0 < x < 1 $. My question is whether infinity can be written as inclusive when stating the range.
|
An example of where this is actually used is measure theory. For instance, the Lebesgue measure $\lambda$ on (certain subsets of) $\Bbb R$ has the property that $$\lambda((a,b)) = b-a $$
when $a,b$ are finite, while, for example, $$\lambda(\Bbb R) = \infty.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Why represent a complex number $a+ib$ as $[\begin{smallmatrix}a & -b\\ b & \hphantom{-}a\end{smallmatrix}]$? I am reading through John Stillwell's Naive Lie Algebra and it is claimed that all complex numbers can be represented by a $2\times 2$ matrix $\begin{bmatrix}a & -b\\ b & \hphantom{-}a\end{bmatrix}$.
But obviously $a+ib$ is quite different from $\begin{bmatrix}a & -b\\ b & \hphantom{-}a\end{bmatrix}$, with the latter being quite clumsy to use and seldom seen in any applications I am aware of. Furthermore, it complicates simple operations such as multiplication, whereby you have to go one extra step and extract the complex number after doing the matrix multiplication.
Can someone explain what exactly is the difference (if there is any) between the two different representations? In what instances is a matrix representation advantageous?
|
Matrix representation of complex numbers is useful and advantageous because we can discover and explore new concepts, like this:
$\begin{bmatrix}\hphantom{-}a & b\\ -b & a\end{bmatrix}$ ---> complex numbers
$\begin{bmatrix}\hphantom{-}a & b\\ \hphantom{-}0 & a\end{bmatrix}$ ---> dual numbers
$\begin{bmatrix}\hphantom{-}a & b\\ \hphantom{-}b & a\end{bmatrix}$ ---> split complex numbers
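The defining property, that these matrices add and multiply exactly like the numbers they represent, is easy to verify (a sketch with `numpy`, using the question's convention $a+ib\mapsto\begin{bmatrix}a&-b\\b&\hphantom{-}a\end{bmatrix}$):

```python
import numpy as np

def M(z):
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 2 + 3j, -1 + 4j
print(np.allclose(M(z) @ M(w), M(z * w)))   # True: multiplication agrees
print(np.allclose(M(z) + M(w), M(z + w)))   # True: addition agrees
```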
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 11,
"answer_id": 5
}
|
Write 100 as the sum of two positive integers
Write $100$ as the sum of two positive integers, one of them being a multiple of $7$, while the other is a multiple of $11$.
Since $100$ is not a big number, I followed the straightforward reasoning of writing all multiples up to $100$ of either $11$ or $7$, and then finding the complement that is also a multiple of the other. So then
$100 = 44 + 56 = 4 \times 11 + 8 \times 7$.
But is it the smart way of doing it? Is it the way I was supposed to solve it? I'm thinking here about a situation with a really large number that turns my plug-in method sort of unwise.
|
While certainly not the ideal solution, this problem is squarely in the realm of Integer Programming. As plenty of others have pointed out, there are more direct approaches. However, I suspect ILP solvers would operate quite efficiently in your case, and this route requires less 'thought capital'.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 8,
"answer_id": 4
}
|
Using trigonometry to predict future position Intro
I'm currently creating an AI for a robot whose aim is to shoot another robot.
All I want to do is to be able to calculate at what angle to shoot my bullet, so that it hits my enemy, with the assumption that the enemy continues moving at the same bearing and velocity.
Variables
Here are the variables that are known
*
*Bullet speed is: $bulletSpeed$ m/s
*$x$ and $y$ coordinates of both mine $(x_1, y_1)$ and the
enemy robot $(x_2, y_2)$
*The angle between me and the enemy: $a^\circ$
*The heading of the enemy robot: $b^\circ$
*The distance between me and the enemy robot: $d$ metres
*The velocity of the robot: $v$ m/s
[Note: this may be more information than needed. Also all angles range from 0 to 360, where North is $0^\circ$ and East is $90^\circ$ and so on]
Diagram
Here's a diagram to help illustrate the problem, where the arrows show what direction each robot is facing and $travel$ is the distance of the robot has travelled and $bullet$ is the distance the bullet has travelled in $t$ time:
Objective
I wish to find the angle: $\theta$ to shoot my bullet such that it hits the robot and in the minimum amount of time.
The answer should be a formula such that $\theta$ is the subject, so I can substitute the $bulletSpeed$, $a^\circ$, $b^\circ$, $x_1$, $y_1$, $x_2$ and/or $y_2$ to get an answer for $\theta$. i.e.:
$$ \theta = ... $$
What I got so far
Here's what i have got so far:
$$bullet = bulletSpeed * t $$
$$travel = v * t $$
As you can see, it's not much.
Please help, as I really need to get this section of my AI done by the end of today and really struggling.
Please also explain your solution so I can understand it. Many Thanks
|
Name the point where your robot shoots from $P$, the point of departure of the enemy robot $Q$, and the point where the bullet hits the enemy $R$. In triangle $PQR$ we then have
$$
\angle Q=90^o+(180^o-90^o-a)+b=180^o+b-a
$$
and from the law of sines we know that
$$
\frac{q}{\sin Q}=\frac{p}{\sin\theta}\iff \frac{\sin\theta}{\sin Q}=\frac pq=\frac{enemyspeed\times t}{bulletspeed\times t}
$$
and if you look close enough, $\theta$ is the only unknown variable in this expression as $t$ cancels out.
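In code this becomes a one-liner (a sketch in Python; the function and argument names are hypothetical, and the angle $Q$ is assumed already computed as above):

```python
import math

def lead_angle_deg(v, bullet_speed, Q_deg):
    # sin(theta) / sin(Q) = (v * t) / (bullet_speed * t); the time t cancels.
    return math.degrees(math.asin(v / bullet_speed * math.sin(math.radians(Q_deg))))

# Example: enemy moving at half the bullet speed, Q = 120 degrees.
print(lead_angle_deg(v=5.0, bullet_speed=10.0, Q_deg=120.0))   # ~25.66
```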
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Result of solving an unsolved problem? I know that when some of the previously unsolved problems were solved they created new fields in mathematics. May someone explain to me what would be the result of a major problem like the Hodge Conjecture being solved vs a "smaller" problem like "Do quasiperfect numbers exist?" in today's society.
Thank you in advance.
|
Most of the time, the actual result isn't as important as the theory. The reason why problems are unsolved is that either the math doesn't exist yet, or some connection between current fields has not been established yet.
Either way, creating new math and connecting existing math are the real reasons why solving open problems is important.
For example, if the word of god came down and told us that yes, indeed the Hodge conjecture was true/false, it would be nice but not nearly as groundbreaking as a proof for it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Chromatic number $\chi(G)=600$, $P(\chi(G|_S)\leq 200) \leq 2^{-10}$ I am learning martingale and Hoeffding-Azuma inequality recently but do not how to apply the those inequality or theorem here.
Let $G=(V,E)$ be a graph with chromatic number 600,i.e. $\chi(G)=600$. Let $S$ be a random subset uniformly chosen from $V$. Denote $G|_S$ the induced subgraph of $G$ on $S$. Prove that
$$P(\chi(G|_S)\leq 200 )\leq 2^{-10}.$$
I am not sure how to approach ones, especially for the condition $\chi(G)=600$. I am thinking that for a 600 vertices complete graph, the probability to be computed is just the ratio $$\frac{\sum_{i=0}^{200}C_i^{600} }{2^{600}},$$
meaning the ratio between the number of all subsets with at most $200$ vertices and the total number of subsets of $V$. But is it enough? Even this ratio is hard to compute.
|
Hints:
*
*If you choose a set at random, then with probability at least 1/2 the chromatic number is at least 300 (why?).
*Azuma's inequality shows that the chromatic number of $G|_S$ is concentrated around its mean.
*Since the chromatic number is always between 0 and 600, and its median is above 300, its mean can't be too low.
*Hence the probability that the chromatic number of $G|_S$ is as low as 200 is small.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
A formal proof that the function $ x \mapsto x^{2} $ is continuous at $ x = 4 $. Problem: Show $f(x)=x^2 $ is continuous at $ x = 4$.
That is to say, find delta such that:
$ ∀ε>0$ $ ∃δ>0 $ such that $ |x-a|<δ ⇒ |f(x)-f(a)|<ε$
Where $a=4$, $f(x)=x^2$,and $f(a)=16$.
So in order to do to do these delta/epsilon proofs, I was originally taught to solve |f(x)-f(a)|<ε by setting it as:
$-ε < f(x) - f(a) < ε$
Which, in this case would be:
$-ε < x^2-16 < ε$
$16-ε < x^2 < 16+ε$
$\sqrt{16-ε} < x< \sqrt{16+ε}$
Now, since we're looking for |x-a|<δ, set $δ=\sqrt{16+ε}-4$ from the equation above.
For the proof itself:
For $ ∀ε>0$, choose $δ=\sqrt{16+ε}-4$ and then $δ>0$ exists and implies:
$|x-a|<δ$
$ |x-4|<δ= \sqrt{16+ε}-4$
$|x-4|+4<\sqrt{16+ε}$
$|x|<\sqrt{16+ε}$
$|x^2|<16+ε$
$|x^2-16|<ε$
Q.E.D.
The problems i'm running into are mainly with the last step. In the solutions, it uses the method of splitting up $|x^2-16|<ε$ into $|(x-4)(x+4)|<ε$ and then going from there in order to get $δ=min(1,ε/9)$. I understand how their solution works but as I was taught my way a long time ago, I'm trying to figure out why mine does/doesn't work and if it doesn't if there's a way to modify it so that it does.
Thanks for your help!
|
$|x^2|<16+ε$ does not imply $|x^2-16|<ε$
For example, take $x= 0 , \epsilon = 1$.
Then $|x^2| = 0 < 17 = 16+ε$
But $|x^2-16| = |0^2-16| = 16 \geq 1 = ε $.
So in the current form, your proof is invalid.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Can someone explain very quickly what $ |5 x + 20| = 5 $ actually means? I don't mean to bother the community with something so easy, but for the life of me, I can't remember how to do these. I even forgot what they are called, and I'm referring to the use of the "$ | $" symbol in math. I think I may have been studying too much all day. This is my last section, and if I can't even remember the name, then I can't look it up.
Can someone please tell me what these types of math problems are called and how to solve them? I just need to see someone do it. I remember it was easy, but it's been some time since I last saw these. I really do appreciate everyone helping me study today. I couldn't have done it without ya! ^_^
Problem. Solve $ |5 x + 20| = 5 $ for $ x $.
|
$$|5x+20|=5$$
The expression inside the absolute value can equal either $5$ or $-5$, so we solve both cases.
$$5x+20=5$$
$$5x=-15$$
$$\boxed{x=-3}$$
$$5x+20=-5$$
$$5x=-25$$
$$\boxed{x=-5}$$
If we plug $-3$ and $-5$ back into the original equation, each gives a true statement, so both check out. The final answer is $\boxed{x\in \{-5,-3\}}$.
If you would like more information on solving absolute value equations and inequalities, please check this out.
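As a quick sanity check, here is a small sketch (my own, using SymPy, which is an assumption on my part rather than part of the answer):

```python
# Solve |5x + 20| = 5 symbolically and confirm both branches.
import sympy as sp

x = sp.symbols('x', real=True)
solutions = sp.solve(sp.Eq(sp.Abs(5 * x + 20), 5), x)
print(solutions)  # expected: [-5, -3]
```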
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1265871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Showing that $\sin(x) + x = 1$ has one, and only one, solution Problem:
Prove that the equation $$\sin(x) + x = 1$$ has one, and only one, solution. Additionally, show that this solution lies in the interval $[0, \frac\pi2]$. Then solve the equation for $x$ to an accuracy of 4 digits.
My progress:
I have no problems visualizing the lines of the LHS and the RHS.
Without plotting, I can see that $\sin(x) + x$ would grow, albeit not strictly (it would have its derivative zero at certain points), and being continuous, it would have to cross $y=1$ at least once.
However, my problem lies in the fact that I can imagine the equation having 1, 2, or 3 solutions, and I'm not able to eliminate the possibility of there being 2 or 3 solutions. This is of course wrong on my part.
Also, I'm not able to prove that the solution must exist on the given interval.
Any help appreciated!
P.S. As far as actually calculating the solution, I'm planning on using Newton's Method which should be a trivial exercise since they've already provided an interval on which the solution exists.
|
The derivative of the LHS is $1+\cos(x)\ge 0$, and it vanishes only at the isolated points where $\cos(x) = -1$, i.e. $x = 2n\pi+\pi$. So the function is strictly increasing and can cross the level $1$ at most once. Since $\sin (x) + x$ equals $0$ at $x=0$ and exceeds $1$ at $x=\frac{\pi}{2}$, by the intermediate value theorem it has exactly one solution, and that solution lies in this interval.
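Since the question already plans to use Newton's method, here is a minimal sketch of that computation (my own code, offered only as an illustration):

```python
# Newton's method for f(x) = sin(x) + x - 1 on [0, pi/2];
# f'(x) = cos(x) + 1 > 0 there, so the iteration is well behaved.
import math

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda t: math.sin(t) + t - 1,
              lambda t: math.cos(t) + 1,
              x0=0.5)
print(round(root, 4))  # approximately 0.511
```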
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 2
}
|
Where is the mistake in proving 1+2+3+4+… = -1/12? https://www.youtube.com/watch?v=w-I6XTVZXww#t=30
As I watched the video on YouTube of proving sum of $$1+2+3+4+\cdots= \frac{-1}{12}$$
Even we know that the series does not converge.
First, I still can't see what is wrong in the claimed infinite sum $$1-1+1-1+1-\cdots= \frac{1}{2},$$
and I want to know more about what mistakes were made in the video.
I did research on our forum about this topic. But still not clearly understand about of that proving.
Thank you.
(Sorry about my bad English )
|
In proving that $1-1+1-1+\cdots = 1/2$, one adds divergent series, an operation that can sometimes happen to produce a convergent series but is not valid in general. Writing $I = 1-1+1-1+\cdots$, adding $I$ to itself term by term gives $I+I = 2I = 2-2+2-2+\cdots$, which still diverges; yet by shifting one copy before adding, the video gets $2I = 1$. Two different ways of adding the same divergent series give different results, a contradiction.
That is the main problem with this proof.
Hope this helped!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Famous Problems the Experts Could not Solve After Yitang Zhang stunned the mathematics world by establishing the first finite bound on gaps between prime numbers, it got me thinking about the following question:
$\underline{\text{Question}}:$ What are other examples of proofs, provided by younger or less accomplished mathematicians, of problems that the experts of the time could not solve or did not attempt to solve?
For example, in 1979 the American mathematician Thomas Wolff created a new proof of the Corona problem whilst still a graduate student at Berkeley. He solved the equations in a nonanalytic way and then modified the solution to make it analytic and bounded (which leads to the equation $\overline{\partial}u=f$).
|
Kurt Heegner proved the Stark-Heegner theorem. At the time he wasn't affiliated with any university; in fact, hardly anyone looked at his proof until Stark established the same result. He wasn't young, but then Yitang Zhang wasn't either.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding the inverse of a number under a certain modulus How does one get the inverse of 7 modulo 11?
I know the answer is supposed to be 8, but have no idea how to reach or calculate that figure.
Likewise, I have the same problem finding the inverse of 3 modulo 13, which is 9.
|
${\rm mod}\ 11\!:\,\ \dfrac{1}7\equiv \dfrac{12}{-4}\equiv -3\ $ (see Gauss's algorithm for an algorithmic version of this).
Or, compute the Bezout identity $\,\gcd(11,7) = 2(11)-3(7) = 1\,$ by the Extended Euclidean Algorithm $ $ (see here for a convenient version). $ $ Thus $\ {-}3(7)\equiv 1\pmod{11}$
${\rm mod}\ 13\!:\,\ \dfrac{1}3\equiv \dfrac{-12}{3}\equiv -4\ $
Generally inverting $\,a\,$ mod $\,m\,$ is easy if $\,a\mid m\pm1,\ $ i.e. $\ m = ab\pm 1$
${\rm mod}\ ab-1 \!:\qquad ab\equiv 1\,\Rightarrow\, a^{-1}\equiv b$
${\rm mod}\ ab+1 \!:\,\ a(-b)\equiv 1\,\Rightarrow\, a^{-1}\equiv -b$
Above are special cases: $\,3\mid 13\!-\!1\ $ and $\ 7\equiv -4\mid 11\!+\!1$
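For completeness, here is a small sketch of the extended Euclidean algorithm in code (my own implementation, not part of the answer):

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    g, x, _ = ext_gcd(a % m, m)
    if g != 1:
        raise ValueError("a is not invertible mod m")
    return x % m

print(mod_inverse(7, 11))  # 8
print(mod_inverse(3, 13))  # 9
```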
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
}
|
Proving Convergence and Absolute Convergence of Power Series How do you prove the following claim?
If a power series $\sum_{n=0}^{\infty} a_n (x-a)^n$ converges at some point $b ≠ a$, then this power series converges absolutely at every point closer to $a$ than $b$ is.
Here's what I tried so far. Does this proof make sense? What would you change to hone it further?
Proof:
Suppose that $\sum_{n=0}^{\infty} a_n (x-a)^n$ converges at $b ≠ a$.
Then $\lim\limits_{n\to \infty} |a_n (b-a)^n|=0$, since the terms of a convergent series must tend to $0$.
For $ɛ=1>0$, $∃N$ such that, if $n>N$, then $-ɛ<|a_n (b-a)^n|<ɛ$
That is to say, $|a_n (b-a)^n|<1 (*)$
Take any $x$ such that $|x-a| < |b-a|$. Implicitly, $\frac{|x-a|}{|b-a|}<1 \ (**)$
For $n>N$, notice that $|a_n (x-a)^n| = |a_n (x-a)^n| × \frac{|a_n (b-a)^n|}{|a_n (b-a)^n|}$
Rearranging, we get:
$|a_n (b-a)^n| × \frac{| (x-a)^n|}{|(b-a)^n|} < 1× \frac{| (x-a)^n|}{|(b-a)^n|} $ because of $(*)$
$\sum_{n=0}^{\infty} |a_n (x-a)^n| =\sum_{n=0}^{N} |a_n (x-a)^n| +\sum_{n=N+1}^{\infty} |a_n (x-a)^n| < \sum_{n=0}^{N} |a_n (x-a)^n| +\sum_{n=N+1}^{\infty} \frac{| (x-a)^n|}{|(b-a)^n|}$
But, $\sum_{n=N+1}^{\infty} \frac{| (x-a)^n|}{|(b-a)^n|}$ is a geometric series, with ratio $<1$ (from **)
Therefore, $\sum_{n=N+1}^{\infty} \frac{| (x-a)^n|}{|(b-a)^n|}$ converges.
Hence, $\sum_{n=0}^{\infty} |a_n (x-a)^n| $ also converges (by the comparison test).
Consequently, by definition, $\sum_{n=0}^{\infty} a_n (x-a)^n$ converges absolutely.
|
There's something a bit strange about your proof. Look at the line after "Rearranging, we get:". The inequality you attribute to (*) is, after cancelling $\frac{|(x-a)^n|}{|(b-a)^n|}$ from both sides, simply equivalent to $|a_n(b-a)^n|<1$. So you don't need to go to all that work to get this inequality.
Also, how do you get $\sum_{n=N+1}^{\infty} |a_n (x-a)^n| < \sum_{n=N+1}^{\infty} \frac{| (x-a)^n|}{|(b-a)^n|}$? I don't think that follows from what you showed previous to that line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove that $I+A^{T}A$ is invertible
Let $A$ be any $m\times n$ matrix and $I$ be the $n\times n$ identity.
Prove that $I+A^{T}A$ is invertible.
|
The matrix $I+A^TA$ is invertible because it is positive definite: if $v\in\mathbb{R}^n$ is nonzero, then
$$
v^T(I+A^TA)v=v^Tv+v^TA^TAv=|v|^2+|Av|^2\geq |v|^2>0.
$$
In particular, all the eigenvalues are positive, so their product, the determinant, is also positive and hence nonzero.
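A quick numerical illustration of this fact (my own sketch; the random test matrix is an arbitrary choice):

```python
# The eigenvalues of A^T A are >= 0, so those of I + A^T A are >= 1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))      # an arbitrary m x n matrix
M = np.eye(3) + A.T @ A              # symmetric positive definite
print(np.linalg.eigvalsh(M).min())   # >= 1, so M is invertible
print(abs(np.linalg.det(M)) > 0)     # True
```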
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What are "tan" and "atan"? As the title says, I'm confused about what tan and atan are. I'm writing a program in Java and I came across these two mathematical functions. I know tan stands for tangent, but if possible could someone please explain this to me? I have not taken trigonometry yet (I've taken up to Algebra 1), so I don't need a very in-depth explanation; just a simple one so I can move on with my program would be great! Thanks in advance. Also, if possible, could someone give me a link to an image/example of a tangent and atan?
|
A quick google search of "java atan" would tell you that it stands for "arctangent", which is the inverse of tangent. Tangent is first understood as a ratio of non-hypotenuse sides of a right triangle. Given a non-right angle $x$ of a right triangle, $\tan(x)$ is the ratio $\frac{o}{a}$ where $o$ is the length of the leg of the triangle opposite $x$ and $a$ is the length of the leg adjacent to $x$. Arctangent takes the ratio as an input and returns the angle.
I don't think any single answer to your post will give you a complete understanding of tangent and arctangent. To get that, I recommend spending some time with sine, cosine, tangent and the unit circle. Here is a link to an image http://www.emanueleferonato.com/images/trigo.png that uses an angle $A$ and triangle sides $a,o,h$. In the context of that photo, $\arctan\left(\frac{\text{opposite}}{\text{adjacent}}\right) = A$
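To see the inverse relationship concretely, here is a small sketch (in Python; Java's `Math.tan` and `Math.atan` behave the same way, and note that both work in radians, not degrees):

```python
import math

angle = 0.3                   # an angle in radians (arbitrary choice)
ratio = math.tan(angle)       # opposite / adjacent for that angle
recovered = math.atan(ratio)  # arctangent undoes tangent
print(ratio)                  # about 0.3093
print(recovered)              # 0.3, the original angle
```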
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Is a polynomial ring over a UFD in countably many variables a UFD? Let $R$ be a UFD. It is well known that $R[x]$ is also a UFD, and hence $R[x_1,x_2,\cdots,x_n]$ is a UFD for any finite number of variables. Is $R[x_1,x_2,\cdots,x_n,\cdots]$ in countably many variables also a UFD? If not, what about $\tilde{R} = R[x_1,\cdots,x_n,\cdots]$, where each polynomial may involve only finitely many of the variables?
|
Hint: first show that for any $n\in\mathbb N$ and any $f\in R[x_1,x_2,\cdots,x_n]\subset R[x_1,x_2,\cdots]$, $f$ is irreducible in $R[x_1,x_2,\cdots]$ iff it's so in $R[x_1,x_2,\cdots,x_n]$; then notice any element of $R[x_1,x_2,\cdots]$ lies in $R[x_1,x_2,\cdots,x_n]$ for some $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Finding the eigenvectors of a matrix that has one eigenvalue of multiplicity three This is a simple question, which hopefully has a quick answer. I have a given matrix A, such that
\begin{equation} A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \end{equation}
Since it's fairly straightforward, I'll just state that the eigenvalue of this matrix is $\lambda = 1$ with algebraic multiplicity $3$. To find the eigenvectors of this matrix, all I have to do is find the kernel of $(A-\lambda I)$. Thus,
\begin{equation} (A-\lambda I) = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} -(1)\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}\end{equation}
The kernel of this matrix, according to my work and Wolfram Alpha, is $Ker(A-\lambda I) = \{(-1, 0, 1)^T, (0, 1, 0)^T\}$.
However, MATLAB and my calculator say that the eigenvectors are $(0, 1, 0)^T, (0,-1,0)^T, (0, -1, 0)^T$.
My question, then, is where did I go wrong? I looked through my book, and it does indeed cover eigenvalues with multiplicity, but it doesn't treat them any differently than the case with no multiplicity. Did I commit an algebraic error somewhere?
NOTE: I should be a little more precise. Finding the kernel of $(A-\lambda I)$ gives the eigenspace of the corresponding eigenvalue, which consists of the eigenvectors (together with the zero vector).
|
Your answer is correct, and Matlab is being problematic. Here's some discussion of why this isn't so surprising:
Defective eigenvalue problems are numerically problematic. This is because diagonalizable matrices are dense in the space of all matrices. Consequently an arbitrarily small arithmetic error can make a nondiagonalizable matrix into a diagonalizable matrix. So unless you use a solver which uses exact arithmetic, your solver will assume that your matrix is diagonalizable and attempt to find three independent eigenvectors.
The problem occurs when, in the background, some of the approximate eigenvectors converge to one another. If you use a solver with exact arithmetic, then no issues occur. Indeed, Matlab's jordan command gives the correct output for this problem.
Apparently Matlab's eig command is having trouble with this problem in particular. One thing that makes this problem especially bad is that the eigenvector Matlab is successfully finding gets split into two independent vectors when you perturb your matrix into a diagonalizable matrix (for instance by replacing all of the $0$s with $10^{-6}$). Somehow this causes that eigenvector to get weighted much more heavily, and makes the other one "invisible" to the algorithm used by eig.
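To illustrate the exact-arithmetic point, here is a small sketch using SymPy (my own code; the exact printed format may differ between versions):

```python
# Exact arithmetic recovers the two-dimensional eigenspace found by hand.
import sympy as sp

A = sp.Matrix([[1, 0, 0],
               [1, 1, 1],
               [0, 0, 1]])
# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis)
print(A.eigenvects())
# expected: eigenvalue 1, multiplicity 3, basis (0, 1, 0) and (-1, 0, 1)
```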
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$\langle A,B\rangle = \operatorname{tr}(B^*A)$ "define the inner product of two matrices $A$ and $B$ in $M_{n\times n}(F)$ by $$\langle A,B \rangle = \operatorname{tr}(B^*A), $$ where the {conjugate transpose} (or {adjoint}) $B^*$ of a matrix $B$ is defined by $B^*_{ij} = \overline{B_{ji}}.$
Prove that $\langle B,A \rangle = \overline{\langle A,B \rangle}$
This is left to the reader in my linear algebra textbook, and I can't seem to work out a solution. Any help would be much appreciated.
|
Hint: $\operatorname{tr}(AB)=\operatorname{tr}(BA)$.
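Spelling this out, one possible route (which really only needs $\overline{\operatorname{tr}(M)} = \operatorname{tr}(M^*)$ together with $(B^*A)^* = A^*B$):
$$\overline{\langle A,B \rangle} = \overline{\operatorname{tr}(B^*A)} = \operatorname{tr}\big((B^*A)^*\big) = \operatorname{tr}(A^*B) = \langle B,A \rangle.$$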
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1266928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Something screwy going on in $\mathbb Z[\sqrt{51}]$ In $\mathbb Z[\sqrt{6}]$, I can readily find that $(-1)(2 - \sqrt{6})(2 + \sqrt{6}) = 2$ and $(3 - \sqrt{6})(3 + \sqrt{6}) = 3$. It looks strange but it checks out.
But when I try the same thing for $3$ and $17$ in $\mathbb Z[\sqrt{51}]$ I seem to run into a wall. I can't solve $x^2 - 51y^2 = \pm3$ in integers, nor $x^2 - 51y^2 = \pm17$. I've had six decades in which to get rusty at solving equations in two variables, so maybe I have managed to overlook solutions to both of these. Or could it really be possible that $3$ and $17$ are actually irreducible in this domain?
|
There is one crucial difference between $\mathbb{Z}[\sqrt{6}]$ and $\mathbb{Z}[\sqrt{51}]$: one is a unique factorization domain, the other is not. You have to accept that some of the tools that come in so handy in UFDs are just not as useful in non-UFDs.
One of those tools is the Legendre symbol. Given distinct primes $p, q, r \in \mathbb{Z}^+$, if $\mathbb{Z}[\sqrt{pq}]$ is a unique factorization domain, then $\left(\frac{pq}{r}\right)$ reliably tells you whether $r$ is reducible or irreducible in $\mathbb{Z}[\sqrt{pq}]$. Also, both $p$ and $q$ are then reducible in $\mathbb{Z}[\sqrt{pq}]$, because otherwise you'd have $pq$ and $(\sqrt{pq})^2$ as valid distinct factorizations of $pq$ into irreducibles, contradicting that $\mathbb{Z}[\sqrt{pq}]$ is a unique factorization domain.
Of course if $\left(\frac{pq}{r}\right) = -1$, then $r$ is irreducible regardless of the class number of $\mathbb{Z}[\sqrt{pq}]$. But if $\mathbb{Z}[\sqrt{pq}]$ has class number 2 or greater, then a lot of expectations break down. It can happen that $\left(\frac{pq}{r}\right) = 1$ and yet $r$ is nonetheless irreducible, e.g., $\left(\frac{51}{5}\right) = 1$, yet 5 is irreducible.
And in a non-UFD it can also happen that $pq$ and $(\sqrt{pq})^2$ are valid distinct factorizations of $pq$ into irreducibles. This is by no means unique to $\mathbb{Z}[\sqrt{51}]$. It happens with 2 and 5 in $\mathbb{Z}[\sqrt{10}]$, 3 and 5 in $\mathbb{Z}[\sqrt{15}]$, 2 and 13 in $\mathbb{Z}[\sqrt{26}]$, etc. In fact, I'd be very surprised if someone showed me an example of a non-UFD $\mathbb{Z}[\sqrt{pq}]$ in which both $p$ and $q$ are reducible.
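As a concrete check of the original difficulty, a quick exhaustive search (my own sketch; a finite search of course proves nothing by itself, but it is consistent with 3 and 17 being irreducible here):

```python
# Look for small solutions of x^2 - 51*y^2 = t for t in {3, -3, 17, -17}.
import math

def small_solutions(n, targets, y_max=10_000):
    hits = []
    for y in range(y_max + 1):
        for t in targets:
            s = n * y * y + t          # candidate value of x^2
            if s >= 0:
                x = math.isqrt(s)
                if x * x == s:
                    hits.append((x, y, t))
    return hits

print(small_solutions(51, (3, -3, 17, -17)))  # expected: []
```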
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
I can't find the critical points for this function. I showed my work :) So, I have to find Critical Points of $y=\frac{1}{(x^3-x)}$
I know the derivative.
The derivative is $y' = -\dfrac{3x^2-1}{(x^3-x)^2}$.
To find the critical points, I set the derivative equal to $0$ and get:
$x=1/\sqrt3$ and $x=-1/\sqrt3 $
But critical points are supposed to be the max and min values of the graph... and this graph is a little tricky... I don't know what to do, because in my opinion there are no critical points: the function goes to $\infty$ and $-\infty$.
Opinions? Help please!
|
The points you found are local extrema (one a local maximum, the other a local minimum), not global ones. Regarding the $+\infty$ and $-\infty$: the function is unbounded near its vertical asymptotes, so it has no global maximum or minimum, but that does not stop $x=\pm 1/\sqrt3$ from being critical points.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Differential Equations: Recursive Functions Functions I have some familiarity with look like this: $y^\prime(x) = \tan(x+2)$, a straightforward expression of the first derivative of $y$ as a function of $x$.
But say I have a function, $y^\prime(x) = \cos{(y)}$? I'm not sure what 'y' is supposed to signify when it's being called recursively like this: $y(x), \ y^\prime(x), \ $ or something else.
If it's not saying, $y^\prime(x)= \cos{\left(\int y^\prime(x) \ \text{d}x \right)}$, what is it saying? Any insights would be greatly appreciated...
|
You have $\dfrac{dy}{dx} = \cos y \implies \dfrac{dy}{\cos y} = dx \implies x = \displaystyle \int \sec y\,dy$. Can you continue? More generally, when you have a separable ODE $y' = f(y)$, the way to solve it is to "separate" the $x$ and the $y$, like I did here.
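For completeness, the remaining integral has the standard antiderivative
$$x = \int \sec y \, dy = \ln\lvert\sec y + \tan y\rvert + C,$$
which defines $y(x)$ implicitly.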
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to approximate Heaviside function by polynomial I have a smooth approximation of the Heaviside function, defined as
$$H_{\epsilon}=\frac {1}{2} [1+\frac {2}{\pi} \arctan(\frac {x}{\epsilon})]$$
I want to use a polynomial to approximate this Heaviside function. Could you suggest a solution? Thanks
UPDATE: This is Bombyx mori's result (blue line); my expected result is the red line.
|
It is not possible to approximate the Heaviside step function well on the whole real line with a polynomial: any nonconstant polynomial tends to infinity at both $+\infty$ and $-\infty$, while the Heaviside function does not. Moreover, you would need the value to be (approximately) zero for all negative arguments, and even approximate flatness over a longer and longer interval forces larger and larger degree.
In that sense the only real option is a rational function, i.e. a quotient of two polynomials.
If you still insist on a single polynomial expression, you can use ordinary Lagrange/Newton interpolation with as many points as you want. This gives some accuracy within some region, and for the jump around $0$ you need to take more points near $0$.
Here is the polynomial for:
$$(0,1/2) (-1,0) (-2,0) (-3,0) (-4,0) (1,1) (2,1) (3,1) (4,1) (5,1)$$
$$\frac{x^9}{51840}-\frac{13x^7}{12096}+\frac{71x^5}{3456}-\frac{107x^3}{648}+\frac{1627x}{2520}+\frac{1}{2}$$
Is it possible to get the coefficients directly?
Sure, just set up a linear system and solve it for the unknown coefficients. Notice also that there is an "infinite polynomial" given by the Taylor expansion of any of the smooth approximations, such as the one using $\tanh(kx)$, but its convergence is extremely slow.
In essence, since we are asking a polynomial to be too flat, a polynomial is a bad approximation to a step function no matter what we do. A rational function is a much better choice.
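Here is a small sketch of the interpolation/least-squares idea in code (my own; the node placement and degree are arbitrary choices, and `numpy.polyfit` stands in for solving the linear system by hand):

```python
import numpy as np

eps = 0.1
x = np.linspace(-5, 5, 41)                        # sample nodes (arbitrary)
h = 0.5 * (1 + (2 / np.pi) * np.arctan(x / eps))  # the smoothed Heaviside

coeffs = np.polyfit(x, h, deg=9)   # degree-9 least-squares polynomial
p = np.poly1d(coeffs)

# Reasonable away from 0, but it oscillates near the jump, which is
# exactly why a single polynomial is a poor global approximation.
for t in (-3.0, -0.5, 0.0, 0.5, 3.0):
    print(t, round(float(p(t)), 3))
```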
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
prime numbered currency The unit of currency is the Tao (T); the value of each coin is a prime number of Taos. The coin with the smallest value is 2T, and there are coins of every prime value under 50.
Help! I don't understand whether the question means that one coin equals 2T.
Some pointers in the right direction would be very helpful.
Thanks
|
The T after the number 2 isn't a variable in this case; it's the unit, so "2T" just means a coin worth two Taos. Is that what the problem was?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find $P + Q + R$ I was doing questions from a previous year's exam paper and I'm stuck on this one.
Suppose $P, Q, R$ are positive integers such that $$PQR + PQ + QR + RP + P + Q + R = 1000$$ Find $P + Q + R$.
It seems easy, but I don't see where to start. Thank you.
|
Note that
$(1 + P) (1 + Q) (1 + R) = 1 + P + Q + R + PQ + PR + QR + PQR = 1 + 1000 = 1001, \tag{1}$
by the hypothesis on $P$, $Q$, $R$; also,
$1001 = 7 \cdot 11 \cdot 13, \tag{2}$
all primes; thus we may take
$P = 6; \;\; R = 10; \;\; Q = 12, \tag{3}$
or some permutation thereof; in any event, we have
$P + Q + R = 28. \tag{4}$
Note Added in Edit, Saturday 12 August 2017 10:04 PM PST: The solution is unique up to a permutation of $(P, Q, R) = (6, 10 ,12)$ by virtue of the fact that $1001$ may be factored into three non-unit positive integers in exactly one way, $1001 = 7 \cdot 11 \cdot 13$, since $7$, $11$, and $13$ are all primes. This follows from the Fundamental Theorem of Arithmetic. These remarks added at the suggestion of user21820 made 5 May 2015. Sorry about the delayed response. End of Note.
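A brute-force confirmation of the uniqueness claim (my own sketch):

```python
# Enumerate factorizations 1001 = a*b*c with 2 <= a <= b <= c and
# report (P, Q, R) = (a-1, b-1, c-1).
n = 1001
sols = []
for a in range(2, n + 1):
    if n % a:
        continue
    for b in range(a, n // a + 1):
        if (n // a) % b:
            continue
        c = n // (a * b)
        if c >= b:
            sols.append((a - 1, b - 1, c - 1))
print(sols)  # [(6, 10, 12)], so P + Q + R = 28
```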
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
}
|
Universal quadratic formula? Is there any way to write the quadratic formula such that it works for $ac= 0$ without having to make it piecewise?
The traditional solution of $x = (-b \pm \sqrt{b^2 - 4ac}) / 2a$ breaks when $a = 0$, and the less-traditional solution of $x = 2c / (-b \pm \sqrt{b^2 - 4ac})$ breaks when $c = 0$... so I'm wondering if there is a formula that works for both cases.
My attempt was to make the formula "symmetric" with respect to $a$ and $c$ by substituting $$x = y \sqrt{c/a}$$ to get $$y^{+1} + y^{-1} = -b/\sqrt{ac} = 2 w$$
whose solution is
$$y = w \pm \sqrt{w^2 - 1}$$
which is clearly symmetric with respect to $a$ and $c$, but which doesn't really seem to get me anywhere if $ac = 0$.
(If this is impossible, it'd be nice if I could get some kind of theoretical explanation for it instead of a plain "this is not possible".)
|
Answering my own question, but I just realized this algorithm on Wikipedia works if we cheat a little and don't consider $\operatorname{sgn}(x) = |x| \div x$ a "piecewise" function:
$${\begin{aligned}x_{1}&={\frac {-b-\operatorname{sgn}(b)\,{\sqrt {b^{2}-4ac}}}{2a}}\\x_{2}&={\frac {2c}{-b-\operatorname{sgn}(b)\,{\sqrt {b^{2}-4ac}}}}\end{aligned}}$$
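Here is a minimal sketch of that algorithm in code (my own; the convention $\operatorname{sgn}(0)=1$ and the `None` return for a missing root are assumptions on my part):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, tolerating a == 0 or c == 0."""
    sgn_b = 1 if b >= 0 else -1
    q = -b - sgn_b * math.sqrt(b * b - 4 * a * c)
    x1 = q / (2 * a) if a else None   # classic formula
    x2 = (2 * c) / q if q else None   # "citardauq" variant
    return x1, x2

print(solve_quadratic(0, 2, -6))  # (None, 3.0): the linear case 2x - 6 = 0
print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
print(solve_quadratic(1, -2, 0))  # (2.0, 0.0): c == 0 handled too
```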
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Are sections of vector bundles $E\to X$ over schemes necessarily closed embeddings? A brief question: if $\pi\colon E\to X$ is a vector bundle over a scheme $X$, is it automatic that any section $s\colon X\to E$ is always a closed embedding?
|
First, a general observation:
If $p : Y \to X$ is a separated morphism of schemes and $s : X \to Y$ satisfies $p \circ s = \mathrm{id}_X$, then $s$ is a closed embedding.
Indeed, under the hypotheses, we have the following pullback diagram,
$$\require{AMScd}
\begin{CD}
X @>{s}>> Y \\
@V{s}VV @VV{\Delta}V \\
Y @>>{\langle s \circ p, \mathrm{id}_Y \rangle}> Y \times_X Y
\end{CD}$$
and $\Delta : Y \to Y \times_X Y$ is a closed embedding by definition, so $s : X \to Y$ is indeed a closed embedding.
Next, we should show that the projection of a vector bundle is a separated morphism. But vector bundle projections are affine morphisms, hence separated in particular.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Galois group of the simple extension Let $K=\mathbb{Q}(\sqrt3,\sqrt5)$. Show that the extension $K/\mathbb{Q}$ is simple and also a Galois extension. Determine its Galois group.
I showed that the extension is simple because $K=\mathbb{Q}(\sqrt{3}+\sqrt{5})$.
But I cannot find its Galois group.
|
Hint: Notice that any $\sigma \in \operatorname{Gal}(K/\mathbb{Q})$ is determined by the images $\sigma (\sqrt{3})$ and $\sigma (\sqrt 5)$; the possibilities are $\sigma (\sqrt 3) \in \{-\sqrt 3, \sqrt 3 \}$ and $\sigma (\sqrt 5) \in \{-\sqrt 5, \sqrt 5\}$.
Can you see how this is related to $\mathbb Z_2 \times \mathbb Z_2$?
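Spelling the hint out: the four automorphisms are $\sigma_{\pm,\pm}\colon \sqrt3 \mapsto \pm\sqrt3,\ \sqrt5 \mapsto \pm\sqrt5$, and every nonidentity element squares to the identity, so $\operatorname{Gal}(K/\mathbb Q)\cong \mathbb Z_2 \times \mathbb Z_2$, the Klein four-group.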
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1267913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Gravitational force inside a uniform solid ball - evaluation of the integral in spherical coordinates - mistake I have been reading this PDF document: www.math.udel.edu/~lazebnik/BallPoint.pdf
While trying Case A I found a small error (a $2$ was missing), but I was able to follow the argument and arrive at the solution.
My problem is Case 2:
After substituting $u=\sqrt{a^2 \cos^2 \phi + R^2 - a^2}$
$$2\pi G\delta m(\int_0^\pi a \cos^2 \phi \sin \phi \, d\phi + \int_0^\pi \sqrt{a^2 \cos^2 \phi + R^2 - a^2} \cos\phi \sin \phi \, d\phi)$$
should evaluate to $$Gm\frac{4}{3}\pi a^3 \delta \cdot \frac{1}{a^2}$$
How do I get to that?
I have tried:
Evaluating the first integral, I get $$\int_0^\pi a \cos^2 \phi \sin \phi \, d\phi = \frac{2}{3}a.$$
Evaluating the second integral, I get $$\int_0^\pi \sqrt{a^2 \cos^2 \phi + R^2 - a^2} \cos\phi \sin \phi \, d\phi$$ $$\int \sqrt{a^2 \cos^2 \phi + R^2 - a^2} \cos\phi \sin \phi \, d\phi=-\frac{u^3}{3 a^2}+ C$$ Now I have to change the limits: If $\phi=0$ then $u=R$. I already used this in Case A. But if $\phi = \pi$ then $u=R$, so the second integral is zero $$\int_0^\pi \sqrt{a^2 \cos^2 \phi + R^2 - a^2} \cos\phi \sin \phi \, d\phi= 0$$
Summing the two parts I would get
$$2\pi G\delta m(\int_0^\pi a \cos^2 \phi \sin \phi \, d\phi + \int_0^\pi \sqrt{a^2 \cos^2 \phi + R^2 - a^2} \cos\phi \sin \phi \, d\phi)=2\pi G\delta m \cdot \frac{2}{3}a$$ which is obviously wrong.
Where is my mistake?
|
By symmetry,
$$
\int_{0}^{\pi} \sqrt{a^{2}\cos^{2} \phi + R^{2} - a^{2}} \cos\phi \sin\phi\, d\phi = 0.
$$
Loosely, the radical and $\sin\phi$ are "even-symmetric" on $[0, \pi]$, while $\cos\phi$ is "odd-symmetric". That is, substituting $\psi = \phi - \frac{\pi}{2}$ gives
$$
\int_{0}^{\pi} \sqrt{a^{2}\cos^{2} \phi + R^{2} - a^{2}} \cos\phi \sin\phi\, d\phi
= -\int_{-\pi/2}^{\pi/2} \sqrt{a^{2}\sin^{2} \psi + R^{2} - a^{2}} \cos\psi \sin\psi\, d\psi,
$$
the integral of an odd function over a symmetric interval.
The rest of your calculation is correct; that is,
$$
2\pi G\delta m \int_{0}^{\pi} a\cos^{2}\phi \sin\phi\, d\phi
= 2\pi G\delta m \cdot \frac{2}{3} a
= Gm \frac{4}{3} \pi a^{3} \delta \cdot \frac{1}{a^{2}},
$$
as desired.
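A numerical sanity check of the symmetry claim (my own sketch, with the arbitrary sample values $a = 1$, $R = 2$):

```python
import numpy as np
from scipy.integrate import quad

a, R = 1.0, 2.0  # arbitrary values with R > a so the radicand stays nonnegative
integrand = lambda phi: (np.sqrt(a**2 * np.cos(phi)**2 + R**2 - a**2)
                         * np.cos(phi) * np.sin(phi))
value, error = quad(integrand, 0, np.pi)
print(value)  # essentially 0, within quadrature error
```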
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1268052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Differentiate $\sin^{-1}\left(\frac {\sin x + \cos x}{\sqrt{2}}\right)$ with respect to $x$
Differentiate $$\sin^{-1}\left(\frac {\sin x + \cos x}{\sqrt{2}}\right)$$ with respect to $x$.
I started like this: consider $$\frac {\sin x + \cos x}{\sqrt{2}},$$ substitute $\cos x$ with $\sin \left(\frac {\pi}{2} - x\right)$, and proceed with the simplification. Finally I get $\cos \left(x - \frac {\pi}{4}\right)$. After this I could not proceed. Any help would be appreciated. Thanks in advance!
|
An alternative approach is to use Implicit Differentiation:
\begin{equation}
y = \arcsin\left(\frac{\sin(x) + \cos(x)}{\sqrt{2}} \right) \rightarrow \sin(y) = \frac{\sin(x) + \cos(x)}{\sqrt{2}}
\end{equation}
Now differentiate with respect to '$x$':
\begin{align}
\frac{d}{dx}\left[\sin(y) \right] &= \frac{d}{dx}\left[\frac{\sin(x) + \cos(x)}{\sqrt{2}} \right] \\
\cos(y)\frac{dy}{dx} &= \frac{\cos(x) - \sin(x)}{\sqrt{2}} \\
\frac{dy}{dx} &= \frac{\cos(x) - \sin(x)}{\sqrt{2}\cos(y)}
\end{align}
Thus:
\begin{equation}
\frac{dy}{dx} = \frac{d}{dx}\left[\arcsin\left(\frac{\sin(x) + \cos(x)}{\sqrt{2}} \right) \right] = \frac{\cos(x) - \sin(x)}{\sqrt{2}\cos\left(\arcsin\left(\frac{\sin(x) + \cos(x)}{\sqrt{2}}\right) \right)}
\end{equation}
This method is unnecessarily complicated here in comparison to those already presented. It is, however, good to know in case the relevant identity is not known.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1268153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
How to integrate $\int \cos^2(3x)dx$ $$\int \cos^2(3x)dx$$
The answer according to my instructor is:
$${1 + \cos(6x) \over 2} + C$$
But my book says that:
$$\int \cos^2(ax)dx = {x \over 2} + {\sin(2ax) \over 4a} + C$$
I'm not really sure which one is correct.
|
There are two methods you can use: integration by parts (then solving for the integral), or the half-angle formula.
Remember that the half angle formula is given by $\cos^2(x) = \frac12 (1+\cos(2x))$ and also $\sin^2(x) = \frac12(1-\cos(2x))$.
Thus $$\int \cos^2(3x) dx = \frac12 \int (1+\cos(6x)) dx = \frac12 \left(x + \frac{\sin(6x)}{6}\right) + C.$$
The method of using integration by parts goes like this (I am changing $\cos(3x)$ to $\cos(x)$ for this example).
$$\int \cos^2(x) dx = \cos(x)\sin(x) + \int \sin^2(x) dx$$ we used $u=\cos(x)$ and $dv = \cos(x)dx$. Now using the pythagorean identity we have:
$$\int \cos^2(x) dx = \cos(x)\sin(x) + \int (1-\cos^2(x)) dx$$
Which leads to:
$$2 \int \cos^2(x) dx =\cos(x)\sin(x) + \int 1 dx.$$
So we have:
$$\int \cos^2(x) dx = \frac12 \left( x + \cos(x)\sin(x) \right) +C.$$
Finally using the double angle formula for $\sin(x)$ we have:
$$\int \cos^2(x) dx = \frac12 \left( x + \frac{\sin(2x)}{2} \right) +C.$$
This is the form we would find if we used the half angle formula.
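A quick symbolic check of this computation (my own sketch using SymPy; it may print the antiderivative in the equivalent form $x/2 + \sin(3x)\cos(3x)/6$):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.cos(3 * x)**2, x)
print(F)
# Confirm it matches x/2 + sin(6x)/12 up to simplification:
print(sp.simplify(F - (x / 2 + sp.sin(6 * x) / 12)))  # 0
```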
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1268278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|