About the least upper bound of a non-empty family of topologies on a non-empty set $X$
How can I prove what is stated: that the topology generated as in Theorem D (as in the image) equals the least upper bound of a non-empty family of topologies on a non-empty set $X$?
Definitions:
The least upper bound of a non-empty family of topologies on $X$ is the topology that arises as the intersection of all topologies stronger than each topology in the family.
A topology $T$ is stronger than $R$ if $R$ is contained in $T$.
I tried to prove that each set is contained in the other by showing that every open set of one lies in the other, but I could not, especially because of the union of the topologies. Does anyone know how to prove it, or have a hint? Thank you very much.
|
If you want to prove it in detail, all you have to show is that last statement:
the class of all unions of finite intersections forms a topology
which is to say, this class is closed under finite intersections and arbitrary unions.
Doesn't it follow by construction?
(Notice that $\bigcup \varnothing = \varnothing$ and $\bigcap \varnothing = X$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2072807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Calculating the limit of $\frac{x^2\sin(\frac{1}{x})}{\sin x}$ $$ \lim _{x\to 0} \frac{x^2\sin(\frac{1}{x})}{\sin x}$$
Okay, since sine is bounded, ${x^2\sin(\frac{1}{x})} \to 0$
and $\sin x\to 0$, so we can apply l'Hôpital's rule.
Applying l'Hôpital's rule we get:
$$\lim_{x\to 0}\frac{\sin(\frac{1}{x})(1-2x)}{\cos x}$$ Here I can't find a way out. What would you advise me to do? Is there any way to compute the limit?
Does this imply that the limit doesn't exist, by the way? No, I don't think so; I think the conditions of l'Hôpital's rule are just not met.
|
Recall that
$$\lim_{x\to0}\frac{\sin(x)}x=1$$
and
$$-|x|\le x\sin(1/x)\le|x|$$
Thus,
$$\lim_{x\to0}\frac{x^2\sin(1/x)}{\sin(x)}=\lim_{x\to0}\frac x{\sin(x)}\cdot x\sin(1/x)=0$$
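A quick numerical sanity check of this squeeze (illustrative only; the sample points are my own choice):

```python
import math

def f(x):
    # the original quotient x^2 sin(1/x) / sin(x)
    return x**2 * math.sin(1 / x) / math.sin(x)

# Since x/sin(x) -> 1 and |x sin(1/x)| <= |x|, f(x) -> 0 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(x, f(x))
```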
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2072914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 3
}
|
Make a $2$ unit bottle from $4$ and $3$ unit bottles Assume that we have only two bottles. The first one's volume is $3$ units and the second one's volume is $4$ units. We can perform $3$ kinds of operations :
*
*Fill an empty bottle with water.
*Empty a full bottle.
*Pour the contents of one bottle into the other.
How can we end up with exactly $2$ units of water in one of the bottles?
|
The way to solve this kind of problem is generally similar.
Given $a$ and $b$ with $\gcd(a,b)=1$ we can apply Bézout's identity to find $(u,v)$ such that $au+bv=1$.
Here $1$ (i.e. 1 liter) is not what we want, but similarly there exists $(u,v)$ such that $au+bv=2$.
Here $4\cdot(+2)+3\cdot(-2) = 8-6=2$; the signs are also of interest:
*
*$+$ indicates the container which is filled
*$-$ indicates the container which is emptied (this is also the one we pour into).
\begin{array}{c|ccc}
action & \text{4 lit.} & \text{3 lit.} \\
\hline
& 0 & 0 \\
fill & 4 & 0 & + \\
pour & 1 & 3 \\
empty & 1 & 0 & - \\
pour & 0 & 1 \\
fill & 4 & 1 & +\\
pour & 2 & 3 \\
empty & 2 & 0 & -
\end{array}
So you see we filled the 4-liter bottle twice and emptied the 3-liter bottle twice to finally get $2$ liters in the 4-liter bottle.
Let's examine another situation: a 7-liter bottle and an 11-liter bottle, where I want 6 liters.
Here $7\cdot(+4)+11\cdot(-2)=28-22=6$: the signs indicate we fill the 7-liter bottle 4 times, pour into the 11-liter bottle, and empty it 2 times. Let's go...
\begin{array}{c|ccc}
action & \text{7 lit.} & \text{11 lit.} \\
\hline
& 0 & 0 \\
fill & 7 & 0 & + \\
pour & 0 & 7 \\
fill & 7 & 7 & + \\
pour & 3 & 11 \\
empty & 3 & 0 & - \\
pour & 0 & 3 \\
fill & 7 & 3 & + \\
pour & 0 & 10 \\
fill & 7 & 10 & + \\
pour & 6 & 11 \\
empty & 6 & 0 & -
\end{array}
So next time you have a problem with two bottles of capacities $a,b$ and you want $c$, first try to solve the equation $au+bv=c$. In general it can be found by hand quite quickly.
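This bookkeeping can also be checked mechanically. The sketch below (my own brute-force breadth-first search over bottle states, not the Bézout reasoning above) confirms that 2 units are attainable with 4- and 3-unit bottles, and 6 liters with 7- and 11-liter ones:

```python
from collections import deque

def reachable_amounts(a, b):
    """All water amounts attainable in either bottle, starting from (0, 0),
    using only the fill, empty, and pour operations."""
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        t1 = min(x, b - y)  # amount we can pour from the a-bottle into the b-bottle
        t2 = min(y, a - x)  # amount we can pour from the b-bottle into the a-bottle
        for state in [(a, y), (x, b), (0, y), (x, 0),
                      (x - t1, y + t1), (x + t2, y - t2)]:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return {x for x, _ in seen} | {y for _, y in seen}

print(sorted(reachable_amounts(4, 3)))  # [0, 1, 2, 3, 4]
```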
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Two mappings with constant composition Can you tell me if my answer is correct? Thank you so much!!!
Here is the problem:
If $f,g$ are mappings of $S$ into $S$ and $f\circ g$ is a constant function, then
(a) What can you say about $f$ if $g$ is onto?
(b) What can you say about $g$ if $f$ is 1-1.
Original Image
As $f\circ g$ is a constant function, $f\circ g$ is neither onto nor 1-1.
If $g$ is onto, $f$ cannot be onto because if $f$ is onto and $g$ is onto, $f\circ g$ would be onto.
If $f$ is $1-1$, $g$ cannot be 1-1 because if $f$ is 1-1 and $g$ is 1-1, $f\circ g$ would be 1-1.
In any case, the range of $g$ must be a subset of the domain of $f$.
|
An informal, yet correct, answer (provided you know what all the terms mean) can be:
That $g$ is onto means $g(S) = \{g(x)\mid x \in S\}$ is all of $S$, so $g(x)$ can, and will, take any value. Yet $f(g(x))$ is constant, so if $g(x)=y$ is anything at all, $f(y)$ is that constant. So $f$ is constant.
That $f$ is 1-1 means different inputs to $f$ give different outputs. So if $f(g(x)) = f(g(y)) = c$ is constant, then the output of $f$ is the same for all possible inputs $g(x)$ and $g(y)$, and the only way that can happen is if the inputs are the same. So all the values $g(x)$ and $g(y)$ are equal no matter what $x$ and $y$ are. So $g$ is constant.
It's informal, but those are the arguments and reasoning.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Finding the $k^{th}$ root modulo m We know the method of finding the $k^{th}$ root modulo $m$: if $$x^k\equiv b\pmod m,\tag {$\clubsuit$}$$ with $\gcd (b,m)=1$ and $\gcd(k,\varphi(m))=1$, then $x\equiv b^u\pmod m$ is a solution to $(\clubsuit)$, where $ku-v\varphi(m)=1$. Because
$$\begin{array}
{}x^k &\equiv \left(b^u\right)^k\pmod m\\
&\equiv b^{uk}\pmod m\\
&\equiv b^{1+v\varphi (m)}\pmod m\\
&\equiv b\cdot b^{v\varphi(m)}\pmod m\\
&\equiv b\cdot \left(b^{\varphi (m)}\right)^v\pmod m\\
&\equiv b\pmod m
\end{array}$$
Thus $x\equiv b^u\pmod m$ is a solution to $(\clubsuit)$.
Here we use $\gcd(b,m)=1$, since we used Euler's theorem that $b^{\varphi(m)}\equiv1\pmod m$.
But I am asked to prove that if $m$ is the product of distinct primes, then $x\equiv b^u \pmod m$ is always a solution, even if $\gcd (b,m)\gt1.$
What I did, is say $m=p_1p_2$. Then $\varphi(m)=(p_1-1)(p_2-1)$
$$\begin{array}
{}b^{uk}&\equiv b\cdot b^{\varphi (m)}\pmod m\\
&\equiv b\cdot b^{(p_1-1)(p_2-1)}\pmod m
\end{array}$$
Now, we just have to compute $b^{(p_1-1)(p_2-1)}\pmod {p_i}$. Here I got stuck, because I can't use Fermat's little theorem for every $p_i$, since some $p_i$ may divide $b$.
Can someone help me?
|
$b^{\large ku}\!\equiv b\pmod{\!pq}\,$ is case $\,i,j,k=1\,$ of this generalization of the Fermat Euler $\color{blue}{\rm (E)}$ theorem.
${\bf Theorem}\,\ \ n^{\large k+\phi}\equiv n^{\large k}\pmod{\!p^i q^j}\ \ $ if $\,p\ne q\,$ are prime, $ \ \color{#0a0}{\phi(p^i),\phi(q^j)\mid \phi},\, $ $\, i,j \le k\ \ \ $
${\bf Proof}\,\ \ p\nmid n\,\Rightarrow\, {\rm mod\ }p^{\large i}\!:\ n^{\large \phi}\!\equiv 1\,\Rightarrow\, n^{\large k + \phi}\equiv n^{\large k},\ $ by $\,\ n^{\large \color{#0a0}\phi} = (n^{\color{#0a0}{\large \phi(p^{\Large i})}})^{\large \color{#0a0}\ell}\overset{\color{blue}{\rm (E)}}\equiv 1^{\large \ell}\equiv 1$
$\qquad\quad\ \ \color{#c00}{p\mid n}\,\Rightarrow\, {\rm mod\ }p^{\large i}\!:\ n^{\large k}\!\equiv 0\,\equiv\, n^{\large k + \phi}\ $ by $\ n^{\large k} = n^{\large k-i} \color{#c00}n^{\large i} = n^{\large k-i} (\color{#c00}{mp})^{\large i}$ and $\,k\ge i$
So $\ p^{\large i}\mid n^{\large k+\phi}\!-n^{\large k}.\,$ By symmetry $\,q^{\large j}$ divides it too, thus so too does their lcm $ = p^{\large i} q^{\large j}\,\ $ QED
Remark $\ $ The above proof immediately extends to an arbitrary number of primes, see this answer. See also Carmichael's Lambda function, a generalization of Euler's phi function.
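As a concrete numeric illustration of the case $i=j=k=1$ (the modulus, exponent, and base below are my own example values): with squarefree $m$, $x=b^u$ solves $x^k\equiv b\pmod m$ even when $\gcd(b,m)>1$.

```python
def kth_root(b, k, m, phi):
    """Return x = b^u mod m, where u is the inverse of k modulo phi = φ(m)
    (requires gcd(k, phi) = 1); then x^k ≡ b (mod m) for squarefree m."""
    u = pow(k, -1, phi)   # modular inverse (Python 3.8+)
    return pow(b, u, m)

# m = 15 = 3 * 5 is squarefree, φ(15) = 8, k = 3, and gcd(5, 15) = 5 > 1.
x = kth_root(5, 3, 15, 8)
print(x, pow(x, 3, 15))   # x^3 ≡ 5 (mod 15) indeed holds
```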
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
The set of linear transformations. Let $S=\{T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n} : T$ is a linear transformation with $T(1,0,1)=(1,2,3)$, $T(1,2,3)=(1,0,1)\}$. Then $S$ is
(a) A Singleton Set
(b) A finite set containing more than one element
(c) A countable set
(d) An uncountable set
MY APPROACH:
$T(1,0,1)=(1,2,3)\Longrightarrow T(e^{1}+e^{3})=e^{1}+2e^{2}+3e^{3}$
$\Longrightarrow T(e^{1})+T(e^{3})=e^{1}+2e^{2}+3e^{3}$. Similarly, $T(e^{1})+2T(e^{2})+3T(e^{3})=e^{1}+e^{3}$.
I can't think of anything beyond this.
|
Hint/Solution: A linear transformation is determined by its values on a basis, hence $S$ is uncountable. (Of course you need to fill in some details.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Horospherical ham sandwich Let $B_1, \ldots B_n$ be Borel sets in $\mathbb{H}^n$, and $\mu$ (absolutely continuous w.r.t) the hyperbolic volume. Is there a horosphere $H$ that cuts each of the $B_i$'s into two parts with equal $\mu$-measure ?
(For Euclidean space with hyperplanes in place of horosphere this is the folklore Ham Sandwich theorem)
At first I thought the same proof would do, namely apply Borsuk-Ulam theorem to a well-chosen sphere, but I am stuck on this (perhaps wrong) track: the space $\mathcal{H}^+$ of horoballs in $\mathbb{H}^n$ is topologically $S^n$ with two points removed, as well as the space $\mathcal{H}^-$ of complementary sets of horoballs ; glue $\mathcal{H}^+$ and $\mathcal{H}^-$ together by adding the points $\mathbb{H}^n$ and $\emptyset$ and equip the result (say $X$) with the $\mathbb{Z}_2$ action defined by taking the complementary. A solution will not occur near the gluing points, so one can perturb slightly $X$ near them to turn it into $S^n$. Sadly this $S^n$ does not have the adequate $\mathbb{Z}_2$ action needed for Borsuk-Ulam theorem...
|
Nice question. This fails, however, already in the case $n=2$: take $B_1, B_2$ to be two concentric balls of suitable radii $r_1, r_2$, with $r_1$ sufficiently small. To prove this, consider first the case $r_1=0$, where the measure is a positive mass concentrated at one point, the center. Once you understand this case, argue by contradiction for $r_1>0$, taking a suitable limit as $r_1\to 0^+$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
A basis for an orthogonal set Let $(E, \langle \cdot, \cdot \rangle)$ be an $n$-dimensional Hilbert space and $A,B \colon E \to E$ linear isomorphisms.
Does there exist a basis $\{e_{1},...,e_{n}\}$ of $E$ such that $\mathcal{B}=\{A(e_{1}),...,A(e_{n}),B(e_{1}),...,B(e_{n})\}$ is an orthogonal set?
Hints or solutions are greatly appreciated.
Related: Find a basis for two operators
|
No, it doesn't (for $n\neq 0$), because an orthogonal set of non-zero vectors is linearly independent, and since your space is $n$-dimensional there are at most $n$ linearly independent vectors.
Indeed, let $\{v_1, \dots, v_m \}\subseteq E$ be an orthogonal set of non-zero vectors, and suppose
$$ \sum_{j=1}^m a_j v_j=0,$$
then we get
$$ 0 = \langle 0, v_i\rangle = \langle \sum_{j=1}^m a_j v_j, v_i\rangle
= \sum_{j=1}^m a_j \langle v_j , v_i \rangle = a_i \langle v_i, v_i \rangle.$$
As $v_i\neq 0$, we have $\langle v_i, v_i \rangle \neq 0$ and thus $a_i=0$. Hence, the vectors are linearly independent.
Thus $2n\leq n$, which immediately implies $n=0$. In that case the statement is true (the empty set is a basis of the trivial vector space, and the empty set is orthogonal).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $f$ is Riemann-integrable on $[a,b]$ Assume that $\{r_n\}$ is an enumeration of the rational numbers in the interval $[a,b]$ and $\{v_n\}$ is a sequence of non-zero real numbers which converges to $0$.
Define $f:[a,b] \to \mathbb R$ this way :
If $x=r_n$ , $f(x)=v_n$
If $x \notin \mathbb Q \cap [a,b]$ , $f(x)=0$
Prove that $f$ is Riemann-integrable on $[a,b]$.
My try :
I observed that $f$ is discontinuous on every interval and not monotonic on any interval, which makes the statement hard to prove. I don't know what to do next...
|
I think it should work this way:
Consider, for the sake of simplicity, $v_n >0$ for each $n \in \mathbb{N}$. Let $1 > \epsilon >0$. By definition there exists $n_m \in \mathbb{N}$ such that $v_n < \epsilon$ for each $n \ge n_m$.
Now observe that one can assume $ a = r_1 < r_2 < \dots < r_{n_m} = b$. Take $\delta = \epsilon\min\{ | r_i - r_j |\}/2$ and consider the partition $P = r_1< r_1 + \delta/2 < r_2 < \dots < r_{n_m - 1} + \frac{\delta}{2^{n_m - 1}} < r_{n_m} $, which I now rename $ x_{1}^0 < x_{1}^1 < \dots < x_{n_m}^0$. We have $U(P) = \sum_{i=1}^{n_m-1} v_i ( x_{i}^1 - x_{i}^0) + \sum_{i=2}^{n_m} \sup_{[ x_{i-1}^1, x_{i}^0]}f(x)\, (x_{i}^0 - x_{i-1}^1)$. Now you can estimate this upper Riemann sum by observing:
*
*our choice of $n_m$ gives $\sup_{[ x_{i-1}^1, x_{i}^0]}f(x) < \epsilon$
*our choice of the partition gives $\sum_{i=1}^{n_m-1} v_i ( x_{i}^1 - x_{i}^0) \le \max\{v_1, \dots,v_{n_m}\}\ \epsilon$
Thus you get $U(P) \le \max\{v_1, \dots,v_{n_m}\}\ \epsilon + \epsilon (b - a)$, and you can conclude (using that a convergent sequence of real numbers is bounded) that $U(P)$ can be made arbitrarily small; this proves the claim, since it's obvious that $L(P) \ge 0$. (I supposed $v_n \ge 0$; otherwise you have to do the same thing for lower sums.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Prove $10^{n+1}+3\cdot 10^n+5$ is divisible by $9$? How do I prove that an integer of the form $10^{n+1}+3\cdot 10^{n}+5$ is divisible by $9$ for $n\geq 1$? I tried proving it by induction and could prove the base case $n=1$, but got stuck on the general case. Any help on this? Thanks.
|
$10^{n+1}+3\cdot 10^{n}+5=10^{n}(10+3)+5=1300\cdots05$ has digit sum equal to $9$ and so is a multiple of $9$.
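Indeed $10^{n+1}+3\cdot10^{n}+5=13\cdot 10^n+5$, with digit sum $1+3+5=9$. A quick sketch confirming this for small $n$:

```python
# Verify 10^(n+1) + 3*10^n + 5 is divisible by 9 and has digit sum 9.
for n in range(1, 20):
    value = 10**(n + 1) + 3 * 10**n + 5
    assert value % 9 == 0
    assert sum(int(d) for d in str(value)) == 9
print("holds for n = 1..19")
```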
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 1
}
|
$\lim \limits_{n \to \infty} \frac{\sin x_n}{x_n}$ if $\lim \limits_{n \to \infty} x_n =0$ How to easily prove that
$$\lim \limits_{n \to \infty} \frac{\sin x_n}{x_n}=1,$$
if $\lim \limits_{n \to \infty} x_n =0$?
I proved it using inequality
$$ 1-\frac{x^2}{2}<\frac{\sin x}{x}<1$$
therefore,
$$1\xleftarrow[\text{$x_n \to 0$}]{}1-\frac{x_n^2}{2}<\frac{\sin x_n}{x_n}<1 \longrightarrow 1$$
|
$$
\lim \limits_{x \to 0} \frac{\sin x}{x}=\lim \limits_{x\to0} \frac{\sin x-\sin 0}{x-0}=(\sin x)'|_{x=0}=\cos 0=1.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Stuck at evaluating $\sum\limits_{d \mid n} \tau(d)$ It seems the number of nonnegative integer solutions to the equation $xyz=n$ is given by
$$\sum\limits_{d \mid n} \tau(d)$$
$\tau$ is the number of divisors function. I'm wondering if there is a way to simplify this sum. Really appreciate any kind of help. Thank you.
Here is my attempt so far
$$xyz = n$$
$x$ can be any of the divisors of $n$, and the product $yz$ will then be $n/x$.
Since $n/x$ ranges over all the divisors of $n$ as $x$ does, the number of solutions to $xyz=n$ is the sum, over the divisors $d$ of $n$, of the number of ways to write $d=yz$, which is $\sum_{d\mid n}\tau(d)$.
Edit: Special thanks to @Tryss for identifying an error in the formula. I've fixed it now.
|
Factor $n$ as $p_1^{\alpha_1}\cdot\ldots\cdot p_k^{\alpha_k}$. Then every solution is associated with three vectors (the exponents in the factorizations of $x,y,z$) with non-negative integer components and sum given by $(\alpha_1,\ldots,\alpha_k)$. By stars and bars, it follows that the number of solutions is given by
$$ \prod_{h=1}^{k}\frac{(\alpha_h+2)(\alpha_h+1)}{2}.$$
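The stars-and-bars count can be double-checked by brute force; this sketch (test values of my own choosing) compares the closed form with a direct enumeration:

```python
def solutions_brute(n):
    """Count ordered triples of positive integers (x, y, z) with x*y*z = n."""
    return sum(1 for x in range(1, n + 1) for y in range(1, n + 1)
               if n % (x * y) == 0)

def solutions_formula(n):
    """Product of (a+2)(a+1)/2 over the prime exponents a of n."""
    result, p = 1, 2
    while n > 1:
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        result *= (a + 2) * (a + 1) // 2
        p += 1
    return result

print(solutions_brute(12), solutions_formula(12))  # 18 18
```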
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Maximum number of components in a graph containing $n$ vertices and $k$ edges I know that in a graph $G$ containing $n$ vertices and $k$ edges, $G$ will contain at least $n-k$ components.
Explanation:
A graph containing $n$ vertices and $0$ edges has $n$ components. Each added edge reduces the number of components by at most $1$. Thus with $k$ edges there are at least $n-k$ components.
This is the minimum number.
I am interested in finding the maximum number of components. Please help me out!
|
The maximum number of components comes from packing the edges into a complete subgraph. For instance, with two edges you necessarily lose two components, but with three edges you can form a triangle, so the third edge loses no additional component. The result is $n-l$, where $l$ is the lowest integer satisfying $k\le \frac{l(l+1)}2$, $k$ is the number of edges, and $n$ is the number of nodes.
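A direct translation of this rule into code (variable names are mine):

```python
def max_components(n, k):
    """Maximum number of connected components of a simple graph with n
    vertices and k edges: pack all k edges into the smallest possible
    clique and leave the remaining vertices isolated."""
    l = 0
    while k > l * (l + 1) // 2:   # smallest l with k <= l(l+1)/2
        l += 1
    return n - l

print(max_components(10, 0), max_components(10, 1), max_components(10, 3))
# 10 isolated vertices; one edge merges two of them; a triangle uses three
```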
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2073995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How many ways can 8 teachers be distributed among $4 $ schools? There are several ways that the teachers can be divided amongst $4$ schools, namely here are the possible choices I came up with:
$1) 1 1 1 5$
$2) 1 1 2 4$
$3) 1 1 3 3$
$4) 1 2 2 3$
$5) 2 2 2 2$
Now, given that, say, $2213$ is the same as $1223$, such repeats were omitted. Without repeats I believe these $5$ are the only possibilities.
1)
${8 \choose 5} \times {3 \choose 1} \times {2 \choose 1} \times {1 \choose 1}$:
$\frac{8!}{5!3!} \times \frac{3!}{1!2!} \times \frac{2!}{1!1!} \times 1$
Which comes out to
$56 \times 3 \times 2 \times 1 = 336$
2)
${8 \choose 4} \times {4 \choose 2} \times {2 \choose 1} \times {1 \choose 1}$:
$\frac{8!}{4!4!} \times \frac{4!}{2!2!} \times \frac{2!}{1!1!} \times 1$
Which comes out to
$70 \times 6 \times 2 \times 1= 840$
3)
${8 \choose 3} \times {5 \choose 3} \times {2 \choose 1} \times {1 \choose 1}$
$\frac{8!}{3!5!} \times \frac{5!}{3!2!} \times \frac{2!}{1!1!} \times 1$
Which comes out to
$56 \times 10 \times 2 = 1,120$
4)
${8 \choose 3} \times {5 \choose 2} \times {3 \choose 2} \times {1 \choose 1}$
$\frac{8!}{3!5!} \times \frac{5!}{2!3!} \times \frac{3!}{2!1!} \times \frac{1!}{1!0!}$
Which comes out to:
$56 \times 10 \times 3 \times 1= 1,680$
5)
${8 \choose 2} \times {6 \choose 2} \times {4 \choose 2} \times {2 \choose 2}$
$\frac{8!}{2!6!} \times \frac{6!}{2!4!} \times \frac{4!}{2!2!} \times \frac{2!}{2!0!}$
Which comes out to:
$28 \times 15 \times 6 \times 1 = 2,520$
What am I missing?
|
Assuming distinct teachers, distinct schools, and having identified the $5$ patterns, a foolproof mechanical way is to sum up the product of two multinomial coefficients for each case, one for the pattern, the other for the frequencies of singletons, doubles, triples, etc, viz.
$\binom{8}{1,1,1,5}\binom{4}{3,1} + \binom{8}{1,1,2,4}\binom{4}{2,1,1} + \binom{8}{1,1,3,3}\binom{4}{2,2} + \binom{8}{1,2,2,3}\binom{4}{1,2,1} + \binom{8}{2,2,2,2}\binom44 = 40824$
And if preferring permutations to multinomial coefficients, the equivalent expression
$\frac{8!}{1!1!1!5!}\cdot\frac{4!}{3!1!} + \frac{8!}{1!1!2!4!}\cdot\frac{4!}{2!1!1!} + \frac{8!}{1!1!3!3!}\cdot\frac{4!}{2!2!}+\frac{8!}{1!2!2!3!}\cdot\frac{4!}{1!2!1!}+ \frac{8!}{2!2!2!2!}\cdot\frac{4!}{4!} = 40824 $
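Since every listed pattern gives each school at least one teacher, the total also equals the number of surjections from $8$ teachers onto $4$ schools, which can be brute-forced (a sketch, my own check):

```python
from itertools import product

# Count assignments of 8 distinct teachers to 4 distinct schools
# in which every school receives at least one teacher.
count = sum(1 for assignment in product(range(4), repeat=8)
            if len(set(assignment)) == 4)
print(count)  # 40824
```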
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
The valid interval of the maclaurin series for $\frac{1}{1+x^2}$ The Maclaurin series for $\frac{1}{1-x}$ is $1 + x + x^2 + \ldots$ for $-1 < x < 1$.
To find the Maclaurin series for $\frac{1}{1+x^2}$, I replace $x$ by $-x^2$.
The Maclaurin series for $\frac{1}{1+x^2} = 1 - x^2 + x^4 - \ldots$.
This is valid for $-1 < -x^2 < 1$ if I replace $x$ by $-x^2$. So if I multiply each side by $-1$, I get $-1 < x^2 < 1$. If I take the square roots, I get $i < |x| < 1$. And I am stuck.
And my book says that this Maclaurin series is valid for $-1 < x < 1$ anyway. There is no further explanation.
How can I derive $-1 < x < 1$ from $i < |x| < 1$? Please help.
|
The Maclaurin series for $\frac{1}{1-x}$ converges when $|x| < 1$. If we substitute $-x^2$ for $x$, the resulting series converges when $|-x^2| < 1 \implies |x|^2 < 1 \implies -1 < x < 1$.
Hope this helps!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
the New Year will be 2017: how many pairs of integer solutions to $x^2 + y^2 = (2017)^3$? We're almost in 2017. I wonder how many pairs of integer solutions has the following diophantine equation:
$$x^2 + y^2 = (2017)^3$$
Thanks in advance.
|
Borrowing from this answer here
The answer is just $\frac{3+1}{2}=2$ (or $4$ if the order matters).
I checked it with this code:
#include <cstdio>
#include <cmath>
typedef long long lli;

// returns 1 if N is a perfect square (guarding against sqrt rounding)
int isq(lli N){
    lli s = (lli)sqrtl((long double)N);
    for(lli t = (s > 0 ? s - 1 : 0); t <= s + 1; ++t)
        if(t * t == N) return 1;
    return 0;
}

int main(){
    lli N = 2017;
    N = N * N * N;                // 2017^3
    int res = 0;
    // count ordered pairs (i, j) with i, j >= 0 and i^2 + j^2 = N
    for(lli i = 0; i * i < N; i++)
        if(isq(N - i * i)) res++;
    printf("%d\n", res);
    return 0;
}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
}
|
Converse Truth Table I do not understand how the converse ($B \Rightarrow A$) truth table is logical.
For instance, take the statement, "If I am in Paris, then I am in France".
If I am in Paris, then I am in France. Therefore, $A \Rightarrow B$, since if I am in Paris, then I must also be in France. However, ($B \not \Rightarrow A$), since it can be true that I am in France, but that does not necessarily mean I am in Paris specifically.
I would greatly appreciate it if someone could tell me why my understanding is incorrect.
Thank you.
|
Let's assume you have a number, for example $5$, and two nested circles, $B$ inside $A$. When you are asked which circle $5$ is in, you say it is in $B$. Then by this statement one can also say your number is in $A$.
But let's assume you have the number $3$, and you are asked which circle it is in. You say it is in circle $A$. So one can't say the number is in $B$!
This case is the same: take circle $B$ to be Paris and the region $A$ to be France, and reread your statement.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the Fourier transform of H(-t)? I want to start from the definition of the Fourier transform. It is
$$\mathcal{F}[H(-t)]=\int_{-\infty}^0 e^{-j{\omega}t}\,dt =\left.\frac{1}{-j\omega}e^{-j{\omega}t}\right|_{-\infty}^0 $$
But when $t \rightarrow{-\infty}$, the boundary term does not converge, right?
|
As stochasticboy321 mentioned, taking the "Fourier transform" of $H(t)$ requires some slightly more sophisticated machinery. Details are given in this post: Heaviside step function fourier transform and principal values.
The idea is that one considers generalized functions which are not defined by pointwise values, but by their "action" on rapidly decreasing functions (functions in the Schwartz class). One defines genuine functions to act by integration and then one defines derivatives and Fourier transforms of generalized functions by abstracting what occurs in the genuine function case. In this way you can compute a Fourier transform of $H(t)$.
To compute the Fourier transform then of $H(-t)$, just use the time reversal property of Fourier transforms.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find the value of $\theta$, which satisfy $3 − 2 \cos\theta − 4 \sin\theta − \cos 2\theta + \sin2\theta = 0$. We have to find the value of $\theta$, which satisfy $3 − 2 \cos\theta − 4 \sin\theta − \cos 2\theta + \sin2\theta = 0$.
I could not get any start how to solve it .
|
HINT:
$$4-2\cos x+2\sin x\cos x-4\sin x=1+\cos2x$$
$$-(\cos x-2)+\sin x(\cos x-2)=\cos^2x$$
$$-(\cos x-2)(1-\sin x)=1-\sin^2x$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\text{Prove That,}\;f(n) = \prod\limits_{i=1}^{n}(4i - 2) = \frac{(2n)!}{n!}$
$$\text{Prove That,}\;f(n) = \prod_{i=1}^{n}(4i - 2) = \frac{(2n)!}{n!}$$
This is a problem from Elementary Number theory.
My Work:
The statement is true for $n=1$. So, assuming it is true for $n=k$, I must show it is true for $n = k+1$. I am stuck here; any hint will be helpful.
|
$$\prod_{i = 1}^n (4i - 2) = \prod_{i = 1}^n 2 \, (2i - 1) = 2^n \, \prod_{i = 1}^n (2i - 1) \frac{\prod\limits_{i = 1}^n 2i}{\prod\limits_{i = 1}^n 2i} = 2^n \frac{\prod\limits_{i = 1}^{2n} i}{2^n \, \prod\limits_{i = 1}^n i} = \frac{(2n)!}{n!}$$
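A quick numeric check of the identity for small $n$ (illustrative):

```python
from math import factorial, prod

def lhs(n):
    return prod(4 * i - 2 for i in range(1, n + 1))

def rhs(n):
    return factorial(2 * n) // factorial(n)

assert all(lhs(n) == rhs(n) for n in range(1, 12))
print(lhs(3), rhs(3))  # 120 120
```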
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Solve $\cos x-\sin(2x)=0$ Solve $\cos x-\sin(2x)=0$
I did:
$$\cos x=\color{blue}{\sin(\pi /2-x)}$$
therefore:
$$\color{blue}{\sin(\pi /2-x)}=\sin(2x)$$
Can I do that?
Now, to solve only $\pi/2-x=2x$,
so $x=\pi/6+2\pi k$
|
The first step is ok, but for the second we have that
$$\sin A=\sin B\iff A+B=(2k+1)\pi\text{ or }A-B=2k\pi$$
Then, from the equality $\sin(\pi/2-x)=\sin 2x$ we get two sets of solutions:
*
*$\pi/2-x+2x=(2k+1)\pi$
*$2x-(\pi/2-x)=2k\pi$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Do we assume $f_n$'s map into $\Bbb{R}$ or $\Bbb{C}$ in Theorem 7.8 of Rudin's *Principles of Mathematical Analysis*?
Theorem 7.8 The sequence of functions $\{f_n\}$ defined on $E$ converges uniformly on $E$ if and only if for every $\epsilon > 0$ there exists an integer $N$ such that $m \geq N, n \geq N, x \in E$ implies
\begin{equation}
|f_n(x)-f_m(x)| \leq \epsilon
\end{equation}
For the backwards direction, since the codomain of $f$ is not given, how can we use Theorem 3.11 (Cauchy sequence in a compact metric space (or $\mathbb{R}^k$) converges to some point in the metric space) to prove pointwise convergence of $f$?
|
For each $x \in E$, the sequence $(f_n(x))_{n \in \mathbb N}$ is a Cauchy sequence in $\mathbb R$ (or $\mathbb C$) and hence converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
How to differentiate $\frac{x}{1-\ln(x-1)}$? I'm working on the following problem found in James Stewart's Calculus Early Transcendentals, 7th Ed., Page 223, Exercise 27. I'd just like to know where my work had gone wrong?
Please differentiate: $f(x)=\frac{x}{1-\ln(x-1)}$
My work is below. First I apply quotient rule and chain rules.
$$f'(x)=\frac{\left(1-\ln(x-1)\right)(1)-(x)\left(-\frac{1}{x-1}\right)(1)}{\left(1-\ln(x-1)\right)^2}$$
My algebraic simplification:
$$f'(x)=\frac{\left(1-\ln(x-1)\right)-(x-\left(\frac{1}{x-1}\right))}{\left(1-\ln(x-1)\right)^2}$$
$$f'(x)=\frac{\left(1-\ln(x-1)\right)+(-x+\left(\frac{1}{x-1}\right))}{\left(1-\ln(x-1)\right)^2}$$
$$f'(x)=\frac{1-\ln(x-1)-x+1}{\left(1-\ln(x-1)\right)^2(x-1)}$$
$$f'(x)=\frac{-x+2-\ln(x-1)}{\left(1-\ln(x-1)\right)^2(x-1)}$$
However the solution is:
$$f'(x)=\frac{\left(2x-1-(x-1)\ln(x-1)\right)}{(1-\ln(x-1))^2(x-1)}$$
Just need to know where my work is incorrect. Thank you for your help!
|
The following is correct:
$$f'(x)=\frac{\left(1-\ln(x-1)\right)(1)-(x)\left(-\frac{1}{x-1}\right)(1)}{\left(1-\ln(x-1)\right)^2}$$
The following is incorrect:
$$f'(x)=\frac{\left(1-\ln(x-1)\right)-(x-\left(\frac{1}{x-1}\right))}{\left(1-\ln(x-1)\right)^2}$$
On the right term in the numerator, you added $x$ and $-\frac{1}{x-1}$, but you were actually supposed to multiply them.
I'll go off of the top to finish. Simplify the numerator:
$$f'(x)=\frac{1-\ln(x-1)-\frac{-x}{x-1}}{\left(1-\ln(x-1)\right)^2}$$
Get rid of the double negative:
$$f'(x)=\frac{1-\ln(x-1)+\frac{x}{x-1}}{\left(1-\ln(x-1)\right)^2}$$
Multiply both the numerator and denominator by $x-1$. When you did this above, you did not multiply the $1$ and $-\ln(x-1)$ terms by $x-1$, which was another mistake, so remember to distribute the $x-1$ all the way through:
$$f'(x)=\frac{1(x-1)-\ln(x-1)(x-1)+x}{\left(1-\ln(x-1)\right)^2(x-1)}$$
Simplify the numerator:
$$f'(x)=\frac{2x-1-\ln(x-1)(x-1)}{\left(1-\ln(x-1)\right)^2(x-1)}$$
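A numeric spot check of this final formula against a central-difference derivative (the sample point $x=3$ is arbitrary):

```python
import math

def f(x):
    return x / (1 - math.log(x - 1))

def fprime(x):
    # the closed form derived above
    return (2 * x - 1 - (x - 1) * math.log(x - 1)) / (
        (1 - math.log(x - 1)) ** 2 * (x - 1))

x, h = 3.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(fprime(x), numeric)  # the two values agree closely
```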
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Differential equation in polar coordinates I have the following system:
$\frac{dx}{dt} = 3x + y - x(x^2+y^2)$
$\frac{dy}{dt} = -x +3y -y(x^2+y^2)$
Converting this to polar coordinates gives us:
$\frac{dr}{dt} = r(3-r^2)$
$\frac{d\theta}{dt} = -1$
This gives the solution $\theta(t) = -t + \theta_0$. What would the solution for $r(t)$ be, though?
|
You have
$$\frac{1}{r (\sqrt{3}-r)(\sqrt{3}+r)} dr = dt.$$ Use partial fractions on the left hand side and integrate.
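Carrying this out (a sketch; equivalently, substitute $u=r^2$, which turns the equation into the logistic ODE $u'=6u-2u^2$) gives, for an initial value $r(0)=r_0>0$,
$$r(t)=\frac{\sqrt3}{\sqrt{1+Ce^{-6t}}},\qquad C=\frac{3-r_0^2}{r_0^2},$$
so every such solution approaches the limit cycle $r=\sqrt3$ as $t\to\infty$.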
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2074967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Applications of complex numbers to solve non-complex problems Recently I asked a question regarding the diophantine equation $x^2+y^2=z^n$ for $x, y, z, n \in \mathbb{N}$, which to my surprise was answered with the help of complex numbers. I find it fascinating that for a question which only concerns integers, and whose answers can only be integers, such an elegant solution comes from the seemingly unrelated complex numbers - looking only at the question and solution one would never suspect that complex numbers were lurking behind the curtain!
Can anyone give some more examples where a problem which seems to deal entirely with real numbers can be solved using complex numbers behind the scenes? One other example which springs to mind for me is solving a homogeneous second order differential equation whose coefficients form a quadratic with complex roots, which in some cases gives real solutions for real coefficients but requires complex arithmetic to calculate.
(If anyone is interested, the original question I asked can be found here: $x^2+y^2=z^n$: Find solutions without Pythagoras!)
EDIT:
I just wanted to thank everyone for all the great answers! I'm working my way through all of them, although some are beyond me for now!
|
All of the trigonometric identities easily follow from $e^{bi} = \cos b + i \sin b$. Sine and cosine sum from $e^{(a+b)i} = e^{ai}e^{bi}$, De Moivre's theorem, and so on.
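A quick numerical illustration of reading the sum formulas off $e^{i(a+b)}=e^{ia}e^{ib}$ (a sketch; the angle values are arbitrary):

```python
import cmath, math

a, b = 0.7, 1.9
lhs = cmath.exp(1j * (a + b))              # e^{i(a+b)}
rhs = cmath.exp(1j * a) * cmath.exp(1j * b)
assert abs(lhs - rhs) < 1e-12

# real and imaginary parts give the cosine and sine sum formulas
assert abs(lhs.real - (math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))) < 1e-12
assert abs(lhs.imag - (math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))) < 1e-12
print("sum formulas recovered from e^{ib} = cos b + i sin b")
```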
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58",
"answer_count": 26,
"answer_id": 10
}
|
Example of stronger programming formulation. So I was revising the course combinatorial optimization and I found the statement $$P_1 \text{ is a stronger formulation than } P_2 \text{ if } P_1 \subset P_2. $$ Can someone give me an example of two formulations for which one is stronger than the other? And why is it useful?
|
Usually in combinatorial optimization, for example in integer linear programming, we want to relax the feasible set so that the problem becomes easier, for instance by relaxing it to a linear programming problem.
The feasible set becomes larger that way. If the relaxed feasible set is too large, the approximation becomes crude. Hence we want the relaxation to be small.
For example, say you want to maximize $x$ such that $x \in P$ where $P=\{x: x\le 11.5,\ x \in \mathbb{Z} \}$.
Suppose we relax the integrality constraint and solve the problem of maximizing $x$ such that $x \in Q$, $Q=\{x: x\le 11.5 \}$; then the optimal solution of this relaxed problem is $11.5$.
In contrast, if we further note that there is no integer between $11$ and $11.5$ and solve the problem of maximizing $x$ such that $x \in R$, $R=\{x: x\le 11 \}$, then the optimal solution of this relaxed problem is $11$. This solution is closer to the optimal solution of the original problem; in fact they are equal.
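A toy sketch of this example in Python (the truncation of the integer grid at $\pm 1000$ is just to keep the set finite):

```python
# maximize x over the integer set P and over two relaxations Q and R
P = [k for k in range(-1000, 1001) if k <= 11.5]   # integer feasible set
opt_P = max(P)          # true integer optimum
opt_Q = 11.5            # optimum over the loose relaxation Q = {x : x <= 11.5}
opt_R = 11.0            # optimum over the stronger formulation R = {x : x <= 11}

assert opt_P == 11
assert opt_Q >= opt_R >= opt_P   # the stronger formulation gives a tighter bound
assert opt_R == opt_P            # here the strong formulation is exact
```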
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $A(z_1)$ and $B(z_2)$ are two points in the Argand plane, find $\angle ABO$ If $A(z_1)$ and $B(z_2)$ are two points in the Argand (complex) plane such that $$\frac{z_1}{z_2}+\frac{\overline{z_1}}{\overline{z_2}}=2,$$ find the value of $\angle ABO$ where $O$ is the origin.
Using the given condition, I found that the real part of $\frac{z_1}{z_2}$ is $1$, but I am not able to use this to find $\angle ABO$. Could someone help me with this?
|
Let $z_1 / z_2=a+bi$ with $a,b \in \mathbb{R}$. Then $\bar z_1 / \bar z_2=a-bi$ and the given condition gives $(a+bi)+(a-bi) = 2 a = 2 \iff a = 1 \iff z_1/z_2 = 1 + bi$.
The angle $\angle ABO = \arg((z_1-z_2) / z_2)=\arg(z_1/z_2-1)=\arg(1+bi-1)=\arg(bi) = \pm \pi / 2$. Ignoring orientation, $\angle ABO = \pi / 2\,$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Prove $\sum\limits_{k=1}^n a_k\sum\limits_{k=1}^n \frac1{a_k}\le\left(n+\frac12\right)^2$ then $\max a_k \le 4\min a_k$
Prove that
if
\begin{align}
&0<a_1,a_2,\dots,a_n \in \mathbb R,
&\left(\sum_{k=1}^n a_k\right)\left(\sum_{k=1}^n \frac1{a_k}\right)\le\left(n+\frac12\right)^2\\
\end{align}
then
$$\max_k \space a_k \le 4\times\min_k\space a_k$$
What I've tried was
$$\left(\sum_{k=1}^n a_k\right)\left(\sum_{k=1}^n \frac1{a_k}\right)=n+\sum_{i\ne j}\frac{a_i}{a_j}=n+\sum_{i< j}\left(\frac{a_i}{a_j}+\frac{a_j}{a_i}\right)$$
which didn't help at all. Thanks.
|
Hint:
\begin{align*}
\min_j a_j &\leq a_k \quad \text{ for every } 1 \leq k \leq n\\
\sum_{k = 1}^n \min_j a_j &\leq \sum_{k = 1}^n a_k\\
n \, \min_j a_j &\leq \sum_{k = 1}^n a_k
\end{align*}
Similarly
$$
n \frac{1}{\max\limits_j a_j} \leq \sum_{k = 1}^n \frac{1}{a_k}
$$
Hence
$$
n^2 \frac{\min\limits_j a_j}{\max\limits_i a_i} \leq \left( \sum_{k = 1}^n a_k \right)\left( \sum_{k = 1}^n \frac{1}{a_k} \right) \leq \left( n + \frac{1}{2} \right)^2 \leq \ldots
$$
This should lead to something.
Also, obviously $\frac{\max\limits_i a_i}{\min\limits_j a_j} \geq 1$. (I like having different indices $i, j$ but these are of course dummy variables)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Given $\lim_{n\to\infty} a_n = a$ what is the limit $\lim \limits_{n \to \infty}\frac{a_n}{3^1}+\frac{a_{n-1}}{3^2}+\ldots+\frac{a_1}{3^n}$?
Given $\lim \limits_{n \to \infty}a_n = a$ then I need to find the limit $\lim \limits_{n \to \infty} \frac{a_n}{3} + \frac{a_{n-1}}{3^2} + \frac{a_{n-2}}{3^3} + \dotso + \frac{a_1}{3^n}$.
It seems this problem can be tackled by Stolz–Cesàro theorem. Unfortunately, I don't know how to pick $x_n$ and $y_n$.
|
Fix $\epsilon > 0$. Since $\displaystyle\lim_{n \to \infty}a_n = a$, there exists an $N \in \mathbb{N}$ such that $|a_n-a| < \epsilon$ for all $n \ge N$.
Let $S_n := \dfrac{a_n}{3}+\dfrac{a_{n-1}}{3^2}+\cdots+\dfrac{a_2}{3^{n-1}}+\dfrac{a_1}{3^n}$. Then, $S_{n+1} = \dfrac{1}{3}S_n+\dfrac{1}{3}a_{n+1}$ for all $n \in \mathbb{N}$.
We can rewrite this as $S_{n+1} - \dfrac{a}{2} = \dfrac{1}{3}(S_n-\dfrac{a}{2})+\dfrac{1}{3}(a_{n+1}-a)$. Then, for $n \ge N$ we have:
\begin{align}
\left|S_{n+1} - \dfrac{a}{2}\right| & = \left|\dfrac{1}{3}(S_n-\dfrac{a}{2})+\dfrac{1}{3}(a_{n+1}-a)\right|
\\
&\le \dfrac{1}{3}\left|S_n-\dfrac{a}{2}\right|+\dfrac{1}{3}|a_{n+1}-a|
\\
&\le \dfrac{1}{3}\left|S_n-\dfrac{a}{2}\right|+\dfrac{1}{3}\epsilon.
\end{align}
Now, use induction to show that $\left|S_n-\dfrac{a}{2}\right| \le \left(\left|S_N-\dfrac{a}{2}\right|-\dfrac{1}{2}\epsilon\right) \cdot 3^{-(n-N)}+\dfrac{1}{2}\epsilon$ for all $n \ge N$.
Then, pick $N' > N$ such that $\left|\left|S_N-\dfrac{a}{2}\right|-\dfrac{1}{2}\epsilon\right| \cdot 3^{-(n-N)} \le \dfrac{1}{2}\epsilon$ for all $n \ge N'$.
With this choice of $N'$, we have $\left|S_n-\dfrac{a}{2}\right| \le \epsilon$ for all $n \ge N'$.
This can be done for any $\epsilon > 0$. Thus, $\displaystyle\lim_{n \to \infty}S_n = \dfrac{a}{2}$.
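A numerical illustration of the result (a sketch with the arbitrary choice $a_n = 2 + 1/n$, so that $a = 2$ and $S_n \to a/2 = 1$):

```python
def S(n):
    # S_n = a_n/3 + a_{n-1}/3^2 + ... + a_1/3^n for a_k = 2 + 1/k
    a_seq = [2.0 + 1.0 / k for k in range(1, n + 1)]      # a_1 .. a_n
    return sum(a_seq[n - 1 - j] / 3.0 ** (j + 1) for j in range(n))

assert abs(S(200) - 1.0) < 0.01                # close to a/2 = 1
assert abs(S(400) - 1.0) < abs(S(100) - 1.0)   # error shrinks with n
print(S(200))
```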
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find $\lim\limits_{x\to\pi/4} \frac{1-\tan^2(x)}{\sqrt{2}\,\cos(x)-1}$ without using L'Hôpital's rule. Find $$\lim_{x\to\pi/4} \frac{1-\tan^2(x)}{\sqrt{2}\,\cos(x)-1}$$ without using L'Hôpital's rule.
I can solve it using L'Hôpital's rule, but is it possible to solve it without using L'Hôpital's rule?
|
Just another way to do it.
Let $x=y+\frac \pi 4$, expand and simplify. You should get $$\lim_{x\to\pi/4} \frac{1-\tan^2(x)}{\sqrt{2}\times \cos(x)-1}=\lim_{y\to 0}\frac{2 (\cos(y)-\sin (y)+1)}{(\cos (y)-\sin (y))^2}$$ which looks very simple.
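Indeed, plugging $y=0$ into the simplified form gives $\frac{2(1-0+1)}{1^2}=4$. A numerical sanity check of the original expression near $\pi/4$ (a sketch, just evaluating close to the limit point from both sides):

```python
import math

def f(x):
    return (1.0 - math.tan(x) ** 2) / (math.sqrt(2.0) * math.cos(x) - 1.0)

# the values settle near 4 as x -> pi/4 from either side
for h in (1e-3, 1e-4, -1e-4, -1e-3):
    assert abs(f(math.pi / 4 + h) - 4.0) < 0.02, (h, f(math.pi / 4 + h))
```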
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
proving $t^6-t^5+t^4-t^3+t^2-t+0.4>0$ for all real $t$ proving $t^6-t^5+t^4-t^3+t^2-t+0.4>0$ for all real $t$
for $t\leq 0,$ every term of the left side is nonnegative and the constant term is positive, so the expression is $>0$
for $t\geq 1,$ the left side $t^5(t-1)+t^3(t-1)+t(t-1)+0.4$ is $>0$
I wasn't able to prove it for $0<t<1$; could someone help me with this?
|
Let $p(t) = t^6 - t^5 + t^4 - t^3 + t^2 - t +2/5$. Observe that
$$ p(t) = \begin{bmatrix} 1\\t\\t^2\\t^3\end{bmatrix}^\intercal \begin{bmatrix}2/5&-1/2&0&0\\-1/2&1&-1/2&0\\0&-1/2&1&-1/2\\0&0&-1/2&1\end{bmatrix}\begin{bmatrix} 1\\t\\t^2\\t^3\end{bmatrix}
$$
The matrix in the middle is positive definite, from which it follows immediately that $p(t) > 0$ for all $t$.
Edit: Positive definiteness can be determined by mechanically calculating the matrix's minors, which is easy.
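A numerical check of both claims (a sketch using NumPy; by Sylvester's criterion, positive leading principal minors imply positive definiteness):

```python
import numpy as np

M = np.array([[0.4, -0.5, 0.0, 0.0],
              [-0.5, 1.0, -0.5, 0.0],
              [0.0, -0.5, 1.0, -0.5],
              [0.0, 0.0, -0.5, 1.0]])

# leading principal minors: all positive => positive definite
minors = [np.linalg.det(M[:k, :k]) for k in range(1, 5)]
assert all(m > 0 for m in minors)

# spot-check the quadratic form against p(t) at a few points
for t in (-1.3, 0.0, 0.5, 2.0):
    v = np.array([1.0, t, t ** 2, t ** 3])
    p = t**6 - t**5 + t**4 - t**3 + t**2 - t + 0.4
    assert abs(v @ M @ v - p) < 1e-9
```

The minors come out to $0.4,\ 0.15,\ 0.05,\ 0.0125$, all positive.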
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Number of zeroes of solution to $y''(x)+e^{x^2}y(x)=0$ in $[0,3π]$ The question is to investigate the number of zeroes of $y''(x)+e^{x^2}y(x)=0$ in $[0,3π]$.
Solving this ODE would not be an easy task as one has to use the power series solution and then investigating the zeroes of the solution will require more analysis. I thought it to compare this ODE with the standard $y''(x)+y(x)=0$ whose solution has three or four zeroes in the interval $[0,3π]$.
Since the coefficient of $y(x)$ is $e^{x^2}\ge 1$ for $x\in [0,3π]$, the solution of the given ODE must have at least three zeroes in $[0,3π]$. However, what I thought of was in the light of the Sturm comparison theorem, so I am not sure.
Am I correct to interpret this?
|
(Moved from a deleted duplicate question, answered Feb 18 '17 at 8:44, since it contains a more elementary approach)
See the Sturm-Picone comparison theorem which tells you that you have at least as many roots as $\cos x$ on $[0,3π]$.
You could apply it to the segments $[0,π]$, $[π,2π]$ and $[2π,3π]$ separately to get a better lower bound for the root numbers as you then compare to $y''+e^{(k\pi)^2}y=0$, $k=0,1,2$ so that you get on the respective intervals at least as many roots as $\cos(e^{k^2\pi^2/2}x)$ where the frequencies have numerical values $1,\; 139.045636661,\;
373791533.224$.
With a finer subdivision one can drive this lower bound up to $6.5·10^{17}$ roots inside the interval.
Details on the application of the Sturm-Picone comparison theorem (2/21/17): On $[0,3\pi]$ use $q_1(x)=1$ and $q_2(x)=e^{x^2}$. Then $q_1\le q_2$ and $p_1=1=p_2$, so the theorem applies and any solution $v$ of $v''+q_2v=0$ has at least one root between any two consecutive roots $x_k=k\pi$, $k=0,1,2,3$ of the solution $u(x)=\sin x$ of $u''+q_1u=0$. The roots of $\cos x$ have this property, which is why one can say that $v$ has at least as many roots as $\cos x$ in that interval.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Why is the solution to $x-\sqrt 4=0$ not $x=\pm 2$? If the equation is $x-\sqrt 4=0$, then $x=2$.
If the equation is $x^2-4=0$, then $x=\pm 2$.
Why is it not $x=\pm 2$ in the first equation?
|
Actually, while solving a quadratic equation we drop one step to shorten the answer (or to save time, whatever it may be).
THE STEP IS: $$x^2-4=0\tag{Step $1$}$$ $$x^2=4\tag{Step $2$}$$ $$x=\pm\sqrt4\tag{Step $3$}$$ $$x=\pm2\tag{Step $4$}$$ In our solution we drop the $3^{rd}$ step. The point is that, just as a $(+)$ changes side as a $(-)$ and a $(\times)$ changes side as a $(\div)$, the $(^2)$ [square] changes side as $(\pm\sqrt{\ \ })$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 13,
"answer_id": 4
}
|
$P$ is projection matrix iff $A$ is reflection matrix? I have the following definition
An $n \times n$ matrix $A$ is a reflection matrix if and only if $A^2 = I$ and $A^T= A$. A projection matrix is $P = 1/2(A+I)$.
I was wondering if I can conclude that $P$ is a projection matrix if and only if $A$ is a reflection matrix. If it can be said can you please explain why?
Assuming that this is true can I say that $A^2 = I$ and $A^T = A$ if and only if $P^2 = P$ and $P^T=P$?
|
Since you’re restricting $P$ to an orthogonal projection, consider one of the standard ways to construct the (orthogonal) reflection of a vector relative to some subspace of $\mathbb R^n$: find the orthogonal rejection of the vector from that subspace and reverse it. That is, if $W\subset\mathbb R^n$ is a subspace and $\pi_W$ is orthogonal projection onto $W$, then the reflection of a vector $v$ in $W$ is $\rho_Wv=\pi_Wv-(v-\pi_Wv)=2\pi_Wv-v$, or, in matrix form, $Av=(2P-I)v$, from which your equation $P=(A+I)/2$ follows.
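A quick numerical illustration of the correspondence (a sketch; projecting onto the span of a random vector is just one convenient choice of $P$):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=4)
P = np.outer(u, u) / (u @ u)      # orthogonal projection onto span{u}
A = 2 * P - np.eye(4)             # the associated reflection

assert np.allclose(A @ A, np.eye(4))        # A^2 = I
assert np.allclose(A, A.T)                  # A symmetric
assert np.allclose(P, (A + np.eye(4)) / 2)  # P = (A + I)/2
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```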
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
A graph problem. Two friends $A$ & $B$ are initially at points $(0,0)$ & $(12,7)$ respectively on the infinite grid plane. $A$ takes steps of size $4$ units and $B$ takes steps of size $6$ unit along the grid lines. Show that it is not possible for them to meet at a point.
This is my problem. I can find the number of ways to go from $A$ to $B$, but how do I prove or disprove it? Thank you.
|
I assume that the definition of "meet" by the OP is that both $A$ and $B$ must end a move at the same point.
Consider a move that $A$ makes from $(x_1,y_1)$ to $(x_2,y_2)$
Notice that the sum of the "net movement" of $A$ in the $x$ and $y$ directions (i.e. $|x_1 - x_2| + |y_1 - y_2|$, where $|x|$ is the absolute value of $x$) is always even. We can prove this by considering moves in opposite directions. Notice that a move in one direction cancels a move in the opposite direction, and that this "cancellation" occurs in pairs of moves. Thus the net movement (i.e. the non-cancelled moves) must have a sum equal to (total moves) $-$ (cancelled moves). Since both numbers are even, the sum of the net movement is even. Similarly for $B$.
Thus if we sum the $x$ and $y$ coordinates for any point where $A$ can be, the result must have the same parity as the sum of $A$'s starting $x$ and $y$ coordinates, since every move has an even sum of net movement, preserving the parity. Similarly for $B$.
Since in order for them to meet, the sum of the $x$ and $y$ coordinate of the meeting square has to be the same for both, thus the parity has to be the same.
However, the sum of the $x$ and $y$ coordinates of $(0,0)$ is an even number, while the sum of the $x$ and $y$ coordinates of $(12,7)$ is odd.
This implies that they cannot land on a square together since one of the parities must change, which is disallowed by the moves.
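A small simulation of the parity invariant (a sketch; straight axis-aligned steps are assumed here, but any path of $4$ or $6$ unit moves changes $x+y$ by an even amount, so the invariant is the same):

```python
import random

def walk(start, step, moves=200, seed=1):
    # one player moving along grid lines in steps of the given size
    rng = random.Random(seed)
    x, y = start
    parities = set()
    for _ in range(moves):
        parities.add((x + y) % 2)
        dx, dy = rng.choice([(step, 0), (-step, 0), (0, step), (0, -step)])
        x, y = x + dx, y + dy
    return parities

# each move changes x + y by an even amount, so the parity never changes
assert walk((0, 0), 4) == {0}      # A stays on even-sum points
assert walk((12, 7), 6) == {1}     # B stays on odd-sum points
```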
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2075915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to solve this using definite integral? I already have asked $2$ questions on similar topics. In that questions I have equations of parabola and line. And equations have $y$ and $x$ variable.
But in this question I have functions.
$$f(x) = |x| - 1 \quad\mbox{and}\quad g(x) = 1 - |x|.$$
a) Sketch their graphs.
b) Using integration find the area of the bounded region.
In other two questions I know how to find values of $x$ and $y$ using substitution. But in this question because of $f(x)$ and $g(x)$. I am clueless how to start.
Please provide answer in detail.
My attempt so far:
$f(x) = x - 1$ for $x \ge 0$, and $f(x) = -x - 1$ for $x < 0$;
$g(x) = 1 - x$ for $x \ge 0$, and $g(x) = 1 + x$ for $x < 0$.
|
I presume you know how to sketch the region. Then
$$A=2\int_{0}^1[(1-x)-(x-1)]dx=\int_0^1(4-4x)dx=2.$$
In set notation, the region $R$ is given by
$$R=\{(x,y):-1\leq x\leq 1,f(x)\leq y\leq g(x)\}.$$
The line $x=0$ or known as the $y$-axis serves the line of symmetry.
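A numerical cross-check of the area (a sketch using the midpoint rule; with an even number of subintervals the kink at $x=0$ falls on a grid point, so the rule is exact on the linear pieces up to rounding):

```python
# area between g(x) = 1 - |x| and f(x) = |x| - 1 over [-1, 1]
N = 20000                      # even, so x = 0 is a grid point
a, b = -1.0, 1.0
h = (b - a) / N
mids = (a + (i + 0.5) * h for i in range(N))
area = sum(((1 - abs(x)) - (abs(x) - 1)) * h for x in mids)
assert abs(area - 2.0) < 1e-6
print(area)
```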
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Show that the fractional power of a linear operator is closed Let $H$ be a $\mathbb R$-Hilbert space and $(\mathcal D(A),A)$ be a linear operator.
Assume $(e_n)_{n\in\mathbb N}\subseteq\mathcal D(A)$ is an orthonormal basis of $H$ with $$Ae_n=\lambda_ne_n\;\;\;\text{for all }n\in\mathbb N\tag 1$$ for some $(\lambda_n)_{n\in\mathbb N}\subseteq(0,\infty)$ with $$\lambda_{n+1}\ge\lambda_n\;\;\;\text{for all }n\in\mathbb N\;.\tag 2$$
Let $\alpha\in\mathbb R$, $$\mathcal D(A^\alpha):=\left\{x\in H:\sum_{n\in\mathbb N}\lambda_n^{2\alpha}\left|\langle x,e_n\rangle_H\right|^2<\infty\right\}$$ and $$A^\alpha x:=\sum_{n\in\mathbb N}\lambda_n^\alpha\langle x,e_n\rangle_He_n\;\;\;\text{for }x\in\mathcal D(A^\alpha)\;.$$
Let $(x_n)_{n\in\mathbb N}\subseteq\mathcal D(A^\alpha)$ and $x,y\in H$ with $$\left\|x_n-x\right\|_H\xrightarrow{n\to\infty}0\tag 3$$ and $$\left\|A^\alpha x_n-y\right\|_H\xrightarrow{n\to\infty}0\;.\tag 4$$ I want to show that
*
*$x\in\mathcal D(A^\alpha)$
*$y=A^\alpha x$
How can we do that?
|
The projection of $x_n -x$ onto the $e_k$ component must converge to zero. Multiply this projection by $\lambda_k^\alpha$ to get that
$$\lambda_k^\alpha \langle x_n,e_k\rangle\to\lambda_k^\alpha \langle x,e_k\rangle .\tag{1}$$
The left-hand side of $(1)$ is the same as $\langle A^\alpha x_n,e_k\rangle$ which must converge to the $e_k$ component of $y$. This means $\langle y,e_k\rangle = \lambda_k^\alpha\langle x,e_k\rangle$. Since $\sum_k \langle\cdot,e_k\rangle e_k$ is the identity (convergence of sum in SOT) this gives:
$$y=\sum_k \langle y,e_k\rangle e_k=\sum_k \lambda_k^\alpha\langle x,e_k\rangle e_k\tag{2}$$
Most importantly, the right-hand side converges in $H$: its coefficients are those of $y$, so $\sum_k \lambda_k^{2\alpha}\left|\langle x,e_k\rangle\right|^2 = \|y\|_H^2 < \infty$. This means $x\in \mathcal D(A^\alpha)$ and also $y=A^\alpha x$, as can be read off of $(2)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is it proper form to write $f'(x)$ in terms of $a$ when using the $x \to a$ method? For first principles derivatives, I solve for $f'(x)$ by doing the following: $$f'(x) = \lim_{x\to a}\frac{f(x)-f(a)}{x-a}$$
Since $x$ approaches $a,$ the final answer will be in terms of $a.$ Is this considered proper form? I had always assumed that if you take the derivative of $f(x),$ you would get an answer in terms of $x.$ Is the $x \to a$ definition of the derivative above (the equation) incorrect? How would I rewrite it so the final answer is in terms of $x,$ if necessary?
|
Since $x$ approaches $a$, the final answer will be in terms of $a$. Is this considered proper form?
No, this is not proper. The left-hand side is looking for an answer in terms of $x$ while the right-hand side is in terms of $a$. This really doesn't make any sense. In order to fix this, you should make $a$ approach $x$, like so:
$$f'(x)=\lim_{a \to x} \frac{f(x)-f(a)}{x-a}$$
Now, the right-hand side will give you an answer in terms of $x$, so the above is correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What were some major mathematical breakthroughs in 2016? As the year is slowly coming to an end, I was wondering which great advances have there been in mathematics in the past 12 months. As researchers usually work in only a limited number of fields in mathematics, one often does not hear a lot of news about advances in other branches of mathematics. A person who works in complex analysis might not be aware of some astounding advances made in probability theory, for example. Since I am curious about other fields as well, even though I do not spend a lot of time reading about them, I wanted to hear about some major findings in distinct fields of mathematics.
I know that the question posed by me does not allow a unique answer since it is asked in broad way. However, there are probably many interesting advances in all sorts of branches of mathematics that have been made this year, which I might have missed on and I would like to hear about them. Furthermore, I think it is sensible to get a nice overview about what has been achieved this year without digging through thousands of different journal articles.
|
The Non Existent Complex 6 Sphere by Michael Atiyah
"The possible existence of a complex structure on the 6-sphere has been a famous unsolved problem for over 60 years. In that time many "solutions" have been put forward, in both directions. Mistakes have always been found. In this paper I present a short proof of the non-existence, based on ideas developed, but not fully exploited, over 50 years ago. The only change in v2. is in section 3, where the notation has been clarified."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "186",
"answer_count": 6,
"answer_id": 0
}
|
An equation with two variables is unsolvable for either one, but how can I know if it's unsolvable as an expression for both? Weird title perhaps, so let me illustrate with the question that got me thinking about this problem:
You are buying a laptop and have two to choose from. What is the
difference between the original prices of the two laptops?
What you know: after the laptops have come down in price by 35% and 45% respectively, the difference in price between them is $50.
Can this answer be determined?
Intuitively, I would say no. If I write the problem algebraically I get:
0.65a - 0.55b = 50
It's obvious we can never work out a or b from this, but that's not what they're asking. They're asking what a - b (or rather |a - b|) is.
I understand that this answer still can't be determined, but if the percentages were 50 (or any same number) I could instead write:
0.5a - 0.5b = 50
Which simplifies to:
a - b = 100
I'm not quite understanding why I can't solve for a - b from any equation that includes a and b. What's the intuition to help me understand why this question is unsolvable in the case of different percentages, but solvable in the case of the same?
|
I believe it's because the difference between $a$ and $b$ depends on their values in every case, and it just so happens that the one case where the difference does not depend on them is when both drop by the same percentage. Let me illustrate using the example you've given.
$$0.65a - 0.55b = 50$$
$$0.55a - 0.55b = 50 - 0.10a$$
$$a-b = \frac{50-0.10a}{0.55}$$
As you can see, the difference now depends on $a$, and this is generally what happens when the percentage drops are not equal. Intuitively, think about it this way: if $a$ and $b$ both drop by the same percentage, the difference drops by that percentage as well; but if they don't, the difference depends on how much of an impact the discount on $a$ has relative to $b$, which depends on their values.
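A small numerical illustration (a sketch; the sample values of $a$ are arbitrary):

```python
def b_from_a(a):
    # solve 0.65*a - 0.55*b = 50 for b
    return (0.65 * a - 50.0) / 0.55

# unequal discounts: the difference a - b changes with a
d1 = 100.0 - b_from_a(100.0)
d2 = 200.0 - b_from_a(200.0)
assert abs(d1 - d2) > 1.0       # so |a - b| is not determined

# equal discounts: 0.5*a - 0.5*b = 50 forces a - b = 100 for every a
for a in (120.0, 300.0, 555.0):
    b = a - 100.0
    assert abs(0.5 * a - 0.5 * b - 50.0) < 1e-9
```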
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Given a graph with distinct edge weights and a not-minimum ST, there always exist another ST of lesser total weight that differs only by one edge I have to show that, if all the edge weights of a graph are distinct, given a spanning tree $T$ that is not a MST, there always exist a spanning tree $T'$ of lesser total weight, s.t. $T'$ differs from $T$ only by one edge.
I started reasoning from this question, but it's not helpful for my case and I cannot go over.
|
This is true even if not all the edge weights are distinct.
Define
$$ w'(e) = \begin{cases} 2 \cdot w(e) &\text{ if } e \text{ belongs to } T \\ 2 \cdot w(e) + 1 & \text{otherwise}\end{cases} $$
then sort all the edges according to $w'$ (rather than $w$). Observe that it produces the same ordering as sorting according to $w$, only that it prioritizes edges of $T$ where possible. Now run Kruskal's algorithm using that new ordering and denote by $e$ the first edge not in $T$ that was chosen by the algorithm.
Let $e'$ be the heaviest edge on the cycle created by $e$ in $T \cup \{e\}$. Observe that $e$ does not create a cycle with edges of strictly smaller $w'$, otherwise it would not be chosen by the algorithm. Thus $w'(e) \leq w'(e')$, and so $w(e) < w(e')$, because $e$ is not in $T$ while $e'$ is. Therefore we can set $T' = T \setminus \{e'\} \cup \{e\}$.
I hope this helps $\ddot\smile$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $e^x, xe^x,$ and $x^2e^x$ are linearly independent over $\mathbb{R}$
Question: Prove that $e^x, xe^x,$ and $x^2e^x$ are linearly independent over $\mathbb{R}$.
Generally we proceed by setting up the equation
$$a_1e^x + a_2xe^x+a_3x^2e^x=0_f,$$
which simplifies to $$e^x(a_1+a_2x+a_3x^2)=0_f,$$ and furthermore to
$$a_1+a_2x+a_3x^2=0_f.$$
From here I think it's obvious that the only choice to make the sum the zero function is to let each scalar equal 0, but this is very weak reasoning.
As an undergraduate we learned to test for independence by determining whether the Wronskian is not identically equal to 0. But I can only use this method if the functions are solutions to the same linear homogeneous differential equation of order 3. In other words, I cannot use this method for an arbitrary set of functions. I was not given a differential equation, so I determined it on my own and got that they satisfy $$y'''-3y''+3y'-y = 0.$$
I found the Wronskian, $2e^{3x}\neq0$ for any real number. Thus the set is linearly independent. But it took me some time to find the differential equation and even longer finding the Wronskian so I'm wondering if there is a stronger way to prove this without using the Wronskian Test for Independence.
|
Suppose $a_1e^x + a_2xe^x+a_3x^2e^x=0 $ for all $x$.
Setting $x=0$ shows that $a_1 = 0$.
Now note that $a_2xe^x+a_3x^2e^x=0 $ for all $x$ and hence
$a_2e^x+a_3xe^x=0 $ for all $x \neq 0$. Taking limits as $x \to 0$ shows
that $a_2 = 0$, and setting $x=1$ shows that $a_3 = 0$.
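The same evaluation idea can be checked mechanically: if the combination vanished for all $x$, it would vanish at $x=0,1,2$ in particular, and the resulting $3\times3$ linear system in $a_1,a_2,a_3$ has nonzero determinant, forcing $a_1=a_2=a_3=0$. A sketch:

```python
import math

# rows of the system a1*e^x + a2*x*e^x + a3*x^2*e^x = 0 at x = 0, 1, 2
xs = [0.0, 1.0, 2.0]
rows = [[math.exp(x), x * math.exp(x), x * x * math.exp(x)] for x in xs]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

assert abs(det3(rows)) > 1e-6   # nonzero determinant: only the zero solution
```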
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2076908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
How to calculate Limit of $(1-\sin x)^{(\tan \frac{x}{2} -1)}$ when $x\to \frac{\pi}{2}$.
How to calculate Limit of $(1-\sin x)^{(\tan \frac{x}{2} -1)}$ when $x\to \frac{\pi}{2}$.
We can write our limit as $\lim_{x\to \frac{\pi}{2}}e^{(\tan \frac{x}{2} -1) \log(1-\sin x)}$, but I cannot use L'Hôpital's rule.
Is there another way?
|
Using (elementary) Taylor series, to low order.
As you noticed, $$
(1-\sin x)^{(\tan \frac{x}{2} -1)}=
\exp\left( (\tan \frac{x}{2} -1) \ln (1-\sin x)\right)
$$
Now, since I am much more comfortable with limits at $0$ than at other points, let us write $x = \frac{\pi}{2}+h$ and look at the limit of the exponent when $h\to 0$:
$$
(\tan\left(\frac{\pi}{4}+\frac{h}{2}\right) -1) \ln (1-\sin(\frac{\pi}{2}+h))
=
(\tan\left(\frac{\pi}{4}+\frac{h}{2}\right) -1) \ln (1-\cos h)
$$
Now, using Taylor series at $0$:
*
*$\cos u = 1- \frac{u^2}{2} + o(u^2)$
*$\tan\left(\frac{\pi}{4}+u\right) = 1+\tan'\left(\frac{\pi}{4}\right) u + o(u) = 1+2u+o(u)$
so
$$
(\tan\left(\frac{\pi}{4}+\frac{h}{2}\right) -1) \ln (1-\sin(\frac{\pi}{2}+h))
=
(h + o(h)) \ln\left(\frac{h^2}{2} + o(h^2)\right) \operatorname*{\sim}_{h\to0} 2h \ln h
$$
and the RHS converges to $0$ when $h\to0$. By continuity of the exponential, we then have
$$
\exp\left( (\tan \frac{x}{2} -1) \ln (1-\sin x)\right)
\xrightarrow[x\to \frac{\pi}{2}]{} e^0 =1.
$$
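A numerical sanity check (a sketch; by the estimate above the exponent behaves like $2h\ln|h|$, so the convergence to $1$ is slow but visible):

```python
import math

def f(x):
    # base 1 - sin(x) is nonnegative near pi/2, so the real power is defined
    return (1.0 - math.sin(x)) ** (math.tan(x / 2.0) - 1.0)

# values creep toward 1 like exp(2h ln|h|) as x = pi/2 + h, h -> 0
for h in (1e-3, 1e-4, -1e-4, -1e-3):
    assert abs(f(math.pi / 2 + h) - 1.0) < 0.02, (h, f(math.pi / 2 + h))
```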
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Determinant of odd matrix Given a matrix $A = \{{a_{i,j}}\} \in M_{7\times7}(\Bbb R)$
It is said that
$a_{i,j} = 0$ if $i$,$j$ are both odd.
Show that $det(A) = 0$
Any hints?
|
Let $e_1,\dots,e_7$ denote the standard basis of $\Bbb R^7$. Let $P$ be the permutation matrix
$$
P = \pmatrix{e_1&e_3&e_5&e_7&e_2&e_4&e_6}
$$
Then $P^TAP$ can be written in the form
$$
M = P^TAP = \pmatrix{0_{4 \times 4} & M_{12}\\M_{21}&M_{22}}
$$
$$
$M_{21}$ is $3 \times 4$, so the first four columns of $M$ are supported on only three rows and hence linearly dependent. Therefore $\det M = 0$, and since $\det M = \det(P^T)\det(A)\det(P) = \det A$, we get $\det A = 0$.
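A numerical illustration (a sketch; random integer entries with the prescribed zero pattern, using $0$-based indices $0,2,4,6$ for the $1$-based odd rows and columns):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.integers(-9, 10, size=(7, 7)).astype(float)
for i in (0, 2, 4, 6):          # 1-based odd rows
    for j in (0, 2, 4, 6):      # 1-based odd columns
        A[i, j] = 0.0

# the four "odd" columns are supported on only three rows,
# so they are linearly dependent and det(A) = 0
assert abs(np.linalg.det(A)) < 1e-6
assert np.linalg.matrix_rank(A) <= 6
```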
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to prove $\sqrt{1000} < x < 1000$? I have been given that
$$x = \frac 21 \times \frac 43 \times \frac 65 \times \frac 87 \times \cdots \times \frac {996}{995} \times \frac{998}{997} \times \frac {1000}{999}$$
How can I prove that $\sqrt{1000} < x < 1000$?
|
\begin{align}
x^2 &= \left(\frac 21 \times \frac 21\right) \times \left(\frac 43 \times \frac 43\right) \times \cdots \times \left(\frac{1000}{999} \times \frac {1000}{999}\right) \\
&\ge \left(\frac 21 \times \frac 32\right) \times \left(\frac 43 \times \frac 54\right) \times \cdots \times \left(\frac{1000}{999} \times \frac {1001}{1000}\right) \\
&= 1001
\end{align}
so $x \ge \sqrt{1001} > \sqrt{1000}$. For the upper bound, pair the factors the other way: for $k\ge 2$, $\left(\frac{2k}{2k-1}\right)^2 \le \frac{2k}{2k-1}\times\frac{2k-1}{2k-2}=\frac{2k}{2k-2}$, so $x^2 \le \left(\frac 21\right)^2\prod_{k=2}^{500}\frac{2k}{2k-2} = 4\times 500=2000$, hence $x\le\sqrt{2000}<1000$.
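A numerical cross-check (a sketch; the float product of $500$ factors is comfortably within double precision here):

```python
import math

x = 1.0
for k in range(1, 501):
    x *= (2 * k) / (2 * k - 1)

assert x * x >= 1001               # matches the pairing bound above
assert math.sqrt(1000) < x < 1000  # the two required inequalities
print(x)
```

The product comes out near $39.6$, consistent with the Wallis-product estimate $x\approx\sqrt{500\pi}$.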
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Finding the Tangent Line to a Surface
What is the equation of the tangent line to the intersection of the surface $z = \arctan (xy)$ with the plane $x=2$, at the point $(2,\frac{1}{2}, \frac{\pi}{4})$
The intersection of $x=2$ and $z= \arctan (xy)$ produces the curve $z = \arctan (2y)$ in the $yz$-plane. Thus, the partial derivative is $\frac{\partial z }{\partial y} = \frac{\partial }{\partial y} \arctan(2y) = \frac{2}{1 +4y^2}$, and the slope of the line passing through $y=1/2$ is $\frac{\partial z }{\partial y} = 1$
My first question is, is there a conceptual/logical error in plugging the $x=2$ before taking the (partial) derivative. In the book I am using for review, the author computes the partial derivative of $z= \arctan (xy)$ wrt $y$ and then plugs in the point $(2, \frac{1}{2})$. I realize that I obtained the same answer, but that does not necessarily imply it's a valid way of solving the problem.
Now, $\frac{\partial z }{\partial y} = 1$ gives us the slope of the 2-D version of the line whose equation we interested in finding. Having a little trouble determining the equation, I consulted the book and this is what it says:
"Since tangent line is in the plane $x=2$, this calculation [namely, the calculation of $\frac{\partial z }{\partial y}$] shows that the line is parallel to the vector $v = (0,1,1)$."
I don't see how it follows from $\frac{\partial z}{\partial y} = 1$ that the "slope" vector (not exactly sure what it is called) is $v=(0,1,1)$.
|
For your first question:
is there a conceptual/logical error in plugging the $x=2$ before taking the (partial) derivative.
No, and the reason is that you're plugging in a value for $x$ and you're taking the partial derivative with respect to (wrt) $y$. When you take the partial wrt $y$, you treat $x$ as a constant anyway. So it won't matter if you replace $x$ with a constant before or after taking the partial wrt $y$. Note that it will matter if you take the partial wrt $x$ instead, because replacing $x$ with a constant will force the partial wrt x to be zero.
For your second question:
I don't see how it follows from $\frac{\partial z}{\partial y} = 1$ that the "slope" vector (not exactly sure what it is called) is $v=(0,1,1)$.
The vector $v$ doesn't have any special name with respect to $\frac{\partial z}{\partial y}$. It's simply a vector that's parallel to the tangent line. Anyway, the calculation gives us
$$
\frac{\partial z}{\partial y} = \frac2{4y^2+1}.
$$
And remember we're dealing with the tangent line at the point $(2, 1/2, \pi/4)$. So $y = 1/2$, which means
$$
\frac{\partial z}{\partial y}\bigg|_{(x,y,z) = (2,1/2,\pi/4)} = \frac2{4(\frac12)^2+1} = 1.
$$
So the line has slope $1$ and passes through $(x,y,z) = (2,\frac12, \frac\pi4)$. Since we can view this as a 2D line because we're working in the plane $x=2$, then the equation of our line is $z - \frac\pi4 = 1(y - \frac12)$, i.e., $$z = y + \frac\pi4 - \frac12.$$
So, why is this parallel to the vector $v = \langle 0,1,1 \rangle$? Notice that the line has slope $1$. That is, every time $y$ increases by $1$ unit, $z$ also increases by $1$ unit. The same is true of the vector $v = \langle 0,1,1 \rangle$. Recall that vectors really only tell us two things: magnitude and direction. Location is irrelevant, so for simplicity we can assume vectors begin at the origin. Then we can view $v = \langle 0,1,1 \rangle$ as the line segment between $(0,0,0)$ and $(0,1,1)$. (Technically it's a directed line segment but that's not relevant here.) So we can actually view our vector $v$ as a piece of the line $z=y$ in the plane $x=0$. Do you agree that $z=y$ and $z=y+ \frac\pi4 - \frac12$ are parallel?
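As a quick numeric sanity check: the original surface isn't quoted in this excerpt, but $z = \arctan(xy)$ is a hypothetical surface consistent with the computed partial ($\partial z/\partial y = 2/(4y^2+1)$ along $x=2$), and a central difference at $(2, \tfrac12)$ recovers the slope $1$:

```python
import math

def z(x, y):
    # hypothetical surface (an assumption; the excerpt does not name it), chosen
    # because it reproduces dz/dy = 2/(4y^2 + 1) along the plane x = 2
    return math.atan(x * y)

h = 1e-6
# central difference approximation of dz/dy at (x, y) = (2, 1/2)
dz_dy = (z(2, 0.5 + h) - z(2, 0.5 - h)) / (2 * h)
```

The point $(2, \tfrac12)$ indeed sits at height $z = \arctan(1) = \pi/4$ on this surface, matching the point of tangency used above.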
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove this relation for areas: $2[\triangle BOD] = [\square COME]$ In the figure, $D$ and $M$ are the midpoints of $AB$ and $AC$, respectively. Then prove that $2\left[\triangle BOD\right] = \left[\square COME\right]$
My Attempt
*
*$\left[\triangle COM\right]=\left[\triangle CME\right]$
*$\left[\triangle BCD\right]=\left[\triangle CDA\right]$
I could not move forward. Please help me to complete
|
Extend $AO$ to meet $BC$ at $N$.
$AN$ is a median, and $O$ lies on it since $O$ is the centroid. A median divides a triangle into two triangles of equal area, so $\left[\Delta ANB\right]=\left[\Delta ANC\right]$ and $\left[\Delta ONB\right]=\left[\Delta ONC\right]$.
$$\Rightarrow \left[\Delta ANB\right]-\left[\Delta ONB\right] = \left[\Delta ANC\right]-\left[\Delta ONC\right]$$
$$\Rightarrow \left[\Delta AOB\right] = \left[\Delta AOC\right]$$
Since $D$ and $M$ are midpoints of $AB$ and $AC$, $OD$ and $OM$ are medians of $\Delta AOB$ and $\Delta AOC$, so halving both sides gives
$$\Rightarrow \left[\Delta BOD\right] = \left[\Delta COM\right] $$
$$\Rightarrow 2\left[\Delta BOD\right] = \left[\square COME\right]$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $|z^2-1|=|z|^2+1$, show that $z$ lies on imaginary axis If $|z^2-1|=|z|^2+1$, how do we show that $z$ lies on imaginary axis ?
I understand that I can easily do this if I substitute $z=a+ib$. How do we solve it using algebra of complex numbers without the above substitution ?
My Attempt:
$$
|z|^2+|1|^2=|z-1|^2+2\mathcal{Re}(z)=|z^2-1|\\
2\mathcal{Re}(z)=|z^2-1|-|z-1|^2=|(z+1)(z-1)|-|z-1|.|z-1|\\=|z+1|.|z-1|-|z-1|.|z-1|
$$
How do I proceed further and prove $\mathcal{Re}(z)=0$ ?
|
Use the fact that $|z|^2 = z\bar{z}$.
Squaring both sides of the given equality yields
\begin{align}
|z^2-1|^2 &= (z\bar{z} + 1)^2\\
(z^2 - 1)(\bar{z}^2 - 1) &= (z\bar{z} + 1)(z\bar{z}+1)\\
z^2 + 2z\bar{z} + \bar{z}^2 &= 0\\
(z + \bar{z})^2 &= 0\\
z &= -\bar{z}
\end{align}
from which it follows that the real part of $z$ is $0$. (I skipped some simple algebra steps above.)
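A quick numeric check of the conclusion (a sanity check, not a replacement for the algebra): the equality $|z^2-1| = |z|^2+1$ holds on the imaginary axis and fails strictly off it.

```python
def gap(z):
    # |z^2 - 1| - (|z|^2 + 1): zero exactly when the given equality holds
    return abs(z * z - 1) - (abs(z) ** 2 + 1)

# purely imaginary z: the two sides agree
on_axis = [gap(1j * t) for t in (-3.0, -0.5, 0.0, 0.5, 3.0)]
# nonzero real part: |z^2 - 1| <= |z^2| + 1 is then strict
off_axis = [gap(complex(a, b)) for a, b in ((1.0, 0.0), (0.5, 2.0), (-1.5, 0.3))]
```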
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
How to simplify an expression that does not have a common factor I am trying to simplify this expression :
$$9a^4 + 12a^2b^2 + 4b^4$$
So I ended up having this :
$$(3a^2)^2 + 2(3a^2)(2b^2) + (2b^2)^2$$
However, after that I don't know how to keep on simplifying the equation, it is explained that the answer is $(3a^2 + 2b^2)^2$ because the expression is equivalent to $(x + y)^2$ but I don't understand how they get to that ?
|
By the binomial formula $$(x+y)^2=x^2+2xy+y^2$$ with $x=3a^2$ and $y=2b^2$, we get: $$(3a^2+2b^2)^2=9a^4+12a^2b^2+4b^4$$
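A numeric spot-check of the identity (this just evaluates both sides at a few integer points; it is not a proof):

```python
def both_sides(a, b):
    # left: the expanded form; right: the claimed factorization
    return (9 * a**4 + 12 * a**2 * b**2 + 4 * b**4,
            (3 * a**2 + 2 * b**2) ** 2)

samples = [both_sides(a, b) for a in (-2, 0, 1, 3) for b in (-1, 0, 2, 5)]
```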
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Beta distribution, as $\epsilon \to 0$, $(-\epsilon\,\log(B_\epsilon), -\epsilon\,\log(1 - B_\epsilon)) \implies (\xi E_a, (1 - \xi)E_b)$. Fix $a$, $b > 0$. For $\epsilon > 0$, let $B_\epsilon$ be distributed according to a Beta distribution with parameters $\epsilon a$ and $\epsilon b$. Now, I wish to show that as $\epsilon \to 0$,$$(-\epsilon\,\log(B_\epsilon), -\epsilon\,\log(1 - B_\epsilon)) \implies (\xi E_a, (1 - \xi)E_b)$$where $\xi$ is a Bernoulli $(0, 1)$-valued random variable with $\mathbb{P}(\xi = 1) = b/(a + b)$, and $E_a$, $E_b$ are independent (of each other and of $\xi$) exponential random variables with rates $a$ and $b$.
But I'm not sure on where to start. Is it possible somebody could give me a hint, get me started in the right direction?
|
To show convergence in distribution, we must show for $x, y \in \mathbb{R}$:
$$\lim_{\epsilon \to 0}\mathbb{P}(-\epsilon \text{log}(B_{\epsilon}) \leq x, -\epsilon \text{log}(1 - B_{\epsilon}) \leq y) = \mathbb{P}(\zeta E_{a} \leq x, (1 - \zeta)E_{b} \leq y)$$
PART I (Left-hand side)
We start be rewriting some expressions on the left-hand side:
$$ -\epsilon \text{log}(B_{\epsilon}) \leq x \Rightarrow B_{\epsilon} \geq e^{-x/ \epsilon}$$
$$ -\epsilon \text{log}(1 - B_{\epsilon}) \leq y \Rightarrow B_{\epsilon} \leq 1 - e^{-y / \epsilon}$$
We use these back in the left-hand side of our original expression:
$$ \mathbb{P}(-\epsilon \text{log}(B_{\epsilon}) \leq x, -\epsilon \text{log}(1 - B_{\epsilon}) \leq y) = \mathbb{P}(e^{-x / \epsilon} \leq B_{\epsilon} \leq 1 - e^{-y / \epsilon})$$
Without loss of generality, assume $e^{-x / \epsilon} \leq 1 - e^{-y / \epsilon}$ for $x,y \geq 0$. The above becomes:
$$ \frac{1}{B(\epsilon a, \epsilon b)} \int_{e^{-x/ \epsilon}}^{1 - e^{-y / \epsilon}} t^{a \epsilon - 1}(1 - t)^{b \epsilon - 1} dt $$
$$ = \frac{1}{B(\epsilon a, \epsilon b)}[\int_{0}^{1} t^{a \epsilon - 1}(1 - t)^{b \epsilon - 1} dt - \int_{1 - e^{-y / \epsilon}}^{1} t^{a \epsilon - 1}(1 - t)^{b \epsilon - 1} dt - \int_{0}^{e^{-x / \epsilon}} t^{a \epsilon - 1}(1 - t)^{b \epsilon - 1} dt] $$
$$ = 1 - \frac{1}{B(\epsilon a, \epsilon b)}(\int_{0}^{e^{-y / \epsilon}} t^{b \epsilon - 1}(1 - t)^{a \epsilon - 1} dt - \int_{0}^{e^{-x / \epsilon}} t^{a \epsilon - 1}(1 - t)^{b \epsilon - 1} dt)$$
by integration by substitution; here, $B$ is the beta function. Now, let us look at the quantity $\int_{0}^{z} t^{\alpha - 1}(1 - t)^{\beta - 1}dt$, where $0 < z \leq 1$ and $0 < \alpha, \beta, < 1$. We expand $(1 - t)^{\beta - 1}$ by the binomial theorem:
$$\int_{0}^{z} t^{\alpha - 1}(1 - t)^{\beta - 1}dt$$
$$ = \int_{0}^{z} (t^{\alpha - 1} + \sum_{n = 1}^{\infty}(-1)^{n}\frac{(\beta - 1)(\beta - 2)\dots (\beta - 1 - (n - 1))}{n!}t^{n + \alpha - 1} )dt$$
$$ = \int_{0}^{z} t^{\alpha - 1}dt + \sum_{n = 1}^{\infty} (-1)^{n}\frac{(\beta - 1)(\beta - 2)\dots (\beta - 1 - (n - 1))}{n!} \int_{0}^{z}t^{n + \alpha - 1}dt, \text{ by interchanging integral and sum}$$
$$ = z^{\alpha}(\frac{1}{\alpha} + \sum_{n = 1}^{\infty} (-1)^{n}\frac{(\beta - 1)(\beta - 2)\dots (\beta - 1 - (n - 1))}{n!(n + \alpha)} z^{n}) \text{ (*)}$$
When $z = 1, \alpha = \epsilon a, \beta = \epsilon b$, the above is $B(\epsilon a, \epsilon b)$, and from how $B(\alpha, \beta) = B(\beta, \alpha)$, $(*)$ can be written as either:
$$ \frac{1}{\epsilon a} + \sum_{n = 1}^{\infty} (-1)^{n}\frac{(\epsilon b - 1)(\epsilon b - 2)\dots (\epsilon b - 1 - (n - 1))}{n!(n + \epsilon a)}$$
or
$$ \frac{1}{\epsilon b} + \sum_{n = 1}^{\infty} (-1)^{n}\frac{(\epsilon a - 1)(\epsilon a - 2)\dots (\epsilon a - 1 - (n - 1))}{n!(n + \epsilon b)}$$
From these two expressions, another way to write $B(\epsilon a, \epsilon b)$ is:
$$ B(\epsilon a, \epsilon b) = \frac{1}{2}(B(\epsilon a, \epsilon b) + B(\epsilon b, \epsilon a))$$
$$ = \frac{a + b}{\epsilon ab} + \frac{1}{2} \sum_{n = 1}^{\infty} \frac{(-1)^{n}}{n!} (\frac{(\epsilon b - 1)(\epsilon b - 2)\dots (\epsilon b - 1 - (n - 1))}{n + \epsilon a} + \frac{(\epsilon a - 1)(\epsilon a - 2)\dots (\epsilon a - 1 - (n - 1))}{n + \epsilon b} )$$
When $z = e^{-y / \epsilon}, \alpha = \epsilon b, \beta = \epsilon a$, $(*)$ becomes
$$ e^{-by}(\frac{1}{\epsilon b} + \sum_{n = 1}^{\infty} (-1)^{n} \frac{(\epsilon a - 1) \dots (\epsilon a - n)}{n!(n + \epsilon b)}e^{-yn / \epsilon})$$
and when $z = e^{-x / \epsilon}, \alpha = \epsilon a, \beta = \epsilon b$, $(*)$ becomes
$$ e^{-ax}(\frac{1}{\epsilon a} + \sum_{n = 1}^{\infty} (-1)^{n} \frac{(\epsilon b - 1) \dots (\epsilon b - n)}{n!(n + \epsilon a)}e^{-xn / \epsilon})$$
We now compute the following limits:
$$ \lim_{\epsilon \to 0} \frac{\int_{0}^{e^{-y/ \epsilon}}t^{b \epsilon - 1}(1 - t)^{a \epsilon - 1} dt}{B(\epsilon a, \epsilon b)} $$
$$ = e^{-by}\lim_{\epsilon \to 0}\frac{\frac{1}{b} + \epsilon \sum_{n = 1}^{\infty} (-1)^{n} \frac{(\epsilon a - 1) \dots (\epsilon a - n)}{n!(n + \epsilon b)}e^{-yn / \epsilon}}{\frac{a + b}{ab} + \frac{1}{2} \epsilon \sum_{n = 1}^{\infty} \frac{(-1)^{n}}{n!} (\frac{(\epsilon b - 1)(\epsilon b - 2)\dots (\epsilon b - 1 - (n - 1))}{n + \epsilon a} + \frac{(\epsilon a - 1)(\epsilon a - 2)\dots (\epsilon a - 1 - (n - 1))}{n + \epsilon b} )}$$
Note the following:
1) For fixed $n$ and $y > 0$, $e^{-yn/ \epsilon} \to 0$ as $\epsilon \to 0$, and the terms decay geometrically in $n$; together these give $\epsilon \sum_{n = 1}^{\infty} (-1)^{n} \frac{(\epsilon a - 1) \dots (\epsilon a - n)}{n!(n + \epsilon b)}e^{-yn / \epsilon} \to 0$ as $\epsilon \to 0$.
2) $\frac{1}{2} \epsilon \sum_{n = 1}^{\infty} \frac{(-1)^{n}}{n!} (\frac{(\epsilon b - 1)(\epsilon b - 2)\dots (\epsilon b - 1 - (n - 1))}{n + \epsilon a} + \frac{(\epsilon a - 1)(\epsilon a - 2)\dots (\epsilon a - 1 - (n - 1))}{n + \epsilon b} ) \to 0$ as $\epsilon \to 0$; this should be able to be proved by writing $\epsilon$ as $\frac{1}{1/ \epsilon}$ and using l'Hopital's rule; one will have to differentiate under the series.
From these observations, the above is $e^{-by}\frac{1/b}{(a + b)/ab} = \frac{a}{a + b}e^{-by}$. A similar computation shows $\lim_{\epsilon \to 0}\frac{\int_{0}^{e^{-x/ \epsilon}}t^{a \epsilon - 1}(1 - t)^{b \epsilon - 1}dt}{B(\epsilon b, \epsilon a)} = \frac{b}{a + b}e^{-ax}$. Plugging all of this back into our original computation shows:
$$ \lim_{\epsilon \to 0}\mathbb{P}(-\epsilon \text{log}(B_{\epsilon}) \leq x, -\epsilon \text{log}(1 - B_{\epsilon}) \leq y) = 1 - \frac{1}{a + b}(ae^{-by} + be^{-ax})$$
PART II (Right-hand side)
Now, let us look at the right-hand side of our original expression. Using that $\zeta$, $E_{a}$, and $E_{b}$ are independent and how $\zeta$ is a Bernoulli $(0,1)$-random variable, and assuming that $x, y \geq 0$:
$$\mathbb{P}(\zeta E_{a} \leq x, (1 - \zeta)E_{b} \leq y) = \mathbb{P}(E_{a} \leq x)\mathbb{P}(\zeta = 1) + \mathbb{P}(E_{b} \leq y)\mathbb{P}(\zeta = 0)$$
$$ = \frac{b}{a + b}(1 - e^{-ax}) + \frac{a}{a + b}(1 - e^{-by})$$
$$ = 1 - \frac{1}{a + b}(be^{-ax} + ae^{-by})$$
$$ = \lim_{\epsilon \to 0}\mathbb{P}(-\epsilon \text{log}(B_{\epsilon}) \leq x, -\epsilon \text{log}(1 - B_{\epsilon}) \leq y), \text{what we have shown in part I}$$
Thus, $(-\epsilon \text{log}(B_{\epsilon}), -\epsilon \text{log}(1 - B_{\epsilon}))$ converges to $(\zeta E_{a}, (1 - \zeta)E_{b})$ in distribution as $\epsilon \to 0$.
(Note to Ivan Corwin, who posed this problem: this question was asked and answered on Math Stack Exchange on 12/30, well after the final exam.)
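A Monte Carlo sanity check of the Part I limit (not part of the original answer): sample $B_{\epsilon}$ with a small but nonzero $\epsilon$ and compare the empirical probability with the limiting CDF. The samples are clamped away from $0$ and $1$ only to avoid `log(0)` in floating point; the clamped cases land far outside the thresholds used here, so the indicator is unaffected.

```python
import math
import random

random.seed(0)
a, b, eps = 1.0, 2.0, 0.02
x = y = 0.2
n_samples = 40000

hits = 0
for _ in range(n_samples):
    B = random.betavariate(eps * a, eps * b)
    B = min(max(B, 1e-300), 1.0 - 1e-16)  # guard against log(0)
    if -eps * math.log(B) <= x and -eps * math.log(1.0 - B) <= y:
        hits += 1

empirical = hits / n_samples
# the limiting value 1 - (a e^{-by} + b e^{-ax}) / (a + b) from Part I
limit = 1.0 - (a * math.exp(-b * y) + b * math.exp(-a * x)) / (a + b)
```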
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Algorithm for Random irregular polygon in between two shapes This is not a homework problem. It is meant as a challenge for people who really enjoy math and have time to spare.
Background Info
Suppose you have a 2D Cartesian coordinate system. There are three shapes: R, C, and P.
R is a large rectangle. Its left side is along the vertical axis, and its bottom side is along the horizontal axis, such that its bottom-left corner is at the origin (0, 0).
C is a small circle that is located somewhere inside of R. The center of C is not necessarily at R's geometric center. C's border cannot intersect with any part of R's border.
P is an irregular polygon of N sides. It is a simple, convex polygon (not self-intersecting, all angles under 180 degrees). R surrounds P, and P surrounds C. In other words, P's corners and sides exist in the region between C's border and R's border. The corners of P do not necessarily touch the sides of R. Any of P's sides may be tangent to C, but none of P's sides may overlap inside of C.
Objective
Design an algorithm that generates a random variation of P's corners. The corners of P are placed at random distances and random angles relative to C's center. The algorithm's output is an ordered set of Cartesian coordinates, arranged by counter-clockwise position around C.
You are given the following constant values:
*
*the width and height of the bounding rectangle R
*the radius and center of the circle C
*the number N of corners for polygon P
*the maximum distance between the center of C and any of P's corners
If this is solvable, how would you implement this algorithm?
Or if this is not solvable, can you explain why not? What would need to change so that it becomes solvable?
|
Without some geometric condition on what it means for $R$ to be "large" and $C$ to be "small," there may not be a solution for $n=3$, i.e., $P$ a triangle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
what are the different applications of group theory in CS? What are some applications of abstract algebra in computer science an undergraduate could begin exploring after a first course?
Gallian's text goes into Hamming distance, coding theory, etc., I vaguely recall seeing discussions of abstract algebra in theory of computation / automata theory but what else? I'm not familiar with any applications past this.
Bonus: What are some textbooks / resources that one could learn about said applications?
|
The theory of Grobner Basis is a way to solve simultaneous multivariable polynomial equations. The key component underlying this is something called Buchberger's Algorithm (which given an ideal $I$ of some ring $R = k[x_1,\dots,x_n]$) computes the Grobner basis, an especially nice way to represent the ideal that makes it MUCH easier to answer certain questions about it).
The algorithm is interesting in that naively it's not clear that it terminates. Fortunately, by working with something called a monomial order we can guarantee that at each iteration of the algorithm we get "closer" to a solution, so it must terminate.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 11,
"answer_id": 1
}
|
Replacing floor operation with modulus I have this array of characters
aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeeeffffffffffgggggggggghhhhhhhhhhiiiiiiiiiijjjjjjjjjjkkkkkkkkkkllllllllllmmmmmmmmmmnnnnnnnnnnooooooooooppppppppppqqqqqqqqqqrrrrrrrrrrssssssssssttttttttttuuuuuuuuuuvvvvvvvvvvwwwwwwwwwwxxxxxxxxxxyyyyyyyyyyzzzzzzzzzz
it's the alphabet in order, with 10 of each character, so that's 260 characters.
Given an index between 0 and 259, how can I create a function that can return the expected letter with only knowledge of the alphabet itself (a-z)?
One simple way would be:
index_of_letter = floor(index/10);
so if index is 9, it would yield floor(.9), which is 0, which would be the letter a in the standard western alphabet. If the index was 259, that would yield floor(25.9) which would in turn yield z.
However, isn't there a way to do this with modulus instead of using a floor operator?
I can't figure out how to do this with mod, but I am sure there is a way to do it! Does anyone know how I could do this with mod?
|
Sure. For positive $x, k$ integers, and % being the usual mod operator
floor(x/k) = (x - x % k)/k
As dxiv notes, this might be redundant (depending on the language you're using) - often, for integer types, $x/y$ is already a $floor$ operation.
One issue here is that various languages will disagree about the results when $x<0$. For example some languages believe $(-1)$ % $3 = -1$; while others insist that $(-1)$ % $3 = 2$. So that should be kept in mind.
Also, this relationship can be used when the language does not support % for non-integers; because
x % k = x - k*floor(x/k)
(again, keep track of what you want to happen with $x<0$).
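A small check of both identities in Python, whose `//` and `%` follow the floored-division convention (so `(-1) % 3 == 2`). For the original letter problem this means `index // 10` and `(index - index % 10) // 10` agree:

```python
def floor_div_via_mod(x, k):
    # floor(x/k) expressed with %, matching the identity in the text
    return (x - x % k) // k

checks = [(x, k) for x in range(-25, 26) for k in (1, 2, 3, 7, 10)]
all_match = all(floor_div_via_mod(x, k) == x // k for x, k in checks)

# Python's % takes the sign of the divisor (a C-style % would give -1 here)
py_mod = (-1) % 3

# the letter-index example from the question: indices 0..259 -> letters 0..25
letter_9, letter_259 = floor_div_via_mod(9, 10), floor_div_via_mod(259, 10)
```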
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2077996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
A property similar to paracompactness Definition 1: A family $\{A_s\}_{s\in S}$ of subsets of a topological space $X$ is locally finite if for every point $x\in X$ there exists a neighborhood $U$ of $x$ such that the set $\{s\in S : U \cap A_s\neq \emptyset\}$ is finite.
Definition 2: A topological space $X$ is called a *-space if $X$ is a Hausdorff space and every open cover of $X$ has a locally finite subcover.
Are *-spaces well known? Are there any equivalent conditions for them? (Note that this is not the same as paracompactness.)
|
Even more can be said, suppose $X$ has the property that every open cover $\mathcal{U}$ has a point-finite subcover. Then $X$ is compact. It's clear that $\ast$-spaces have this property (as locally finite implies point-finite).
Proof: let $\mathcal{U}$ be any open cover of $X$. Let $U_0$ be any non-empty open set from $\mathcal{U}$ (and $p \in U_0$). Define the open cover
$$\mathcal{V} = \{ U \cup U_0: U \in \mathcal{U} \}\text{.}$$
Clearly $\mathcal{V}$ is also an open cover of $X$ so by assumption has a point-finite subcover $\mathcal{V}'$. As every member of $\mathcal{V}$ contains $p$, this subcover can only be point-finite if it is finite. But then finitely many members of $\mathcal{U}$ also cover $X$, showing $X$ is compact.
So demanding subcovers instead of refinements reduces almost all such variations to plain old compactness.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Proof without induction of the inequalities related to product $\prod\limits_{k=1}^n \frac{2k-1}{2k}$ How do you prove the following without induction:
1)$\prod\limits_{k=1}^n\left(\frac{2k-1}{2k}\right)^{\frac{1}{n}}>\frac{1}{2}$
2)$\prod\limits_{k=1}^n \frac{2k-1}{2k}<\frac{1}{\sqrt{2n+1}}$
3)$\prod\limits_{k=1}^n2k-1<n^n$
I think AM-GM-HM inequality is the way, but am unable to proceed. Any ideas. Thanks beforehand.
|
Notice that in problem #$1$ if you raise each side to the $n$th power, then it is equivalent to showing that the product of the $n$ factors of the form
$$\left(1-\frac{1}{2k}\right)\tag{1}$$
is greater than $\left(\frac{1}{2}\right)^n$. But each factor in equation $(1)$ is at least $\frac{1}{2}$, and strictly greater than $\frac{1}{2}$ for every $k\ge 2$, so the product exceeds $\left(\frac{1}{2}\right)^n$ for all $n\ge 2$ (for $n=1$ the two sides are equal).
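A quick numeric check of this argument, computing the geometric mean of the factors directly (at $n=1$ it equals $\tfrac12$ exactly, so the strict inequality starts at $n = 2$):

```python
def geo_mean(n):
    # geometric mean of (2k-1)/(2k) for k = 1..n
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2 * k - 1) / (2 * k)
    return prod ** (1.0 / n)

values = [geo_mean(n) for n in range(2, 50)]
```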
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Distribute balls to cells problem I couldn't understand how they arrived at the solution to (b).
Can someone please give an explanation?
Thanks!
|
An explanation of the expression has already been given. I'd only like to add that since you are likely to encounter many problems of the balls-in-cells type (and worked out differently in different books!), you might like to standardize the method.
The one I use for counting the arrangements is a multiplication of two multinomial coefficients, one for the number of balls in each cell, and the other for the frequency with which nulls, singles, doubles, ... occur, e.g. for the two patterns possible here, viz.
$3-1-1-1-1-0-0: \binom7{3,1,1,1,1}\binom7{1,4,2}$ or the equivalent expression $\frac{7!}{3!1!1!1!1!}\cdot\frac{7!}{1!4!2!}$
$2-2-1-1-1-0-0:\binom7{2,2,1,1,1}\binom7{2,3,2}$ or the equivalent expression $\frac{7!}{2!2!1!1!1!}\cdot\frac{7!}{2!3!2!}$
Add up, and divide by the sample space $7^7$
This way the process requires least thought, and is less error prone.
NOTE
Talking of error proneness, the part of the book's expression that reads
$\frac{7!}{2!2!3!1!1!}$ is wrong! Can you tell why?
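The two multinomial products can be verified by brute force, under the assumption (suggested by the $7^7$ sample space) that the underlying problem drops 7 distinguishable balls uniformly into 7 cells and asks about the occupancy patterns $3{-}1{-}1{-}1{-}1{-}0{-}0$ and $2{-}2{-}1{-}1{-}1{-}0{-}0$:

```python
from itertools import product
from math import factorial as f

target_patterns = {(0, 0, 1, 1, 1, 1, 3), (0, 0, 1, 1, 1, 2, 2)}

# by symmetry over where ball 1 lands, fix it in cell 0 and multiply by 7
brute = 0
for rest in product(range(7), repeat=6):
    occ = [0] * 7
    occ[0] = 1
    for cell in rest:
        occ[cell] += 1
    if tuple(sorted(occ)) in target_patterns:
        brute += 1
brute *= 7

# the two products from the text: balls-per-cell multinomial times the
# multinomial counting how often each cell occupancy occurs
formula = (f(7) // (f(3) * f(1) ** 4)) * (f(7) // (f(1) * f(4) * f(2))) \
        + (f(7) // (f(2) ** 2 * f(1) ** 3)) * (f(7) // (f(2) * f(3) * f(2)))
```

Both counts come out to $352800$, and dividing by $7^7 = 823543$ gives the probability.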
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
limit using Taylor's theorem I need to find $\lim _{x \to 0}f(x)$, where
$f(x)=\frac{\left (\sinh \left (x \right ) \right )^{n}-x^{n}}{\left (\sin \left (x \right ) \right )^{n}-x^{n}}$
I tried this: $f(x)=\frac{(\frac{\sinh (x)}{x} )^{n}-1}{(\frac{(\sin (x)}{x})^{n}-1}$
and with Taylor's theorem $\lim _{x \to 0}f(x)=\frac{(1+\frac{x^{2}}{6}+\epsilon (x^{3}))^{n}-1}{(1-\frac{x^{2}}{6}+\epsilon (x^{3}))^{n}-1}$
which is still an indeterminate form.
|
Using the binomial theorem at this point, you get
$$
\lim_{x\rightarrow 0}f(x)=\lim_{x\rightarrow 0}\frac{1+\frac{nx^2}{6}+\epsilon(x^3)-1}{1-\frac{nx^2}{6}+\tilde\epsilon(x^3)-1}
=\lim_{x\rightarrow 0}\frac{\frac{nx^2}{6}+\epsilon(x^3)}{-\frac{nx^2}{6}+\tilde\epsilon(x^3)}.
$$
(Here $\epsilon$ and $\tilde\epsilon$ represents functions whose terms all include a factor of $x^k$ where $k>3$.) Now, use the standard technique of dividing the numerator and denominator by the lowest power of $x$ in the denominator to get
$$
=\lim_{x\rightarrow 0}\frac{\frac{1}{x^2}\left(\frac{nx^2}{6}+\epsilon(x^3)\right)}{\frac{1}{x^2}\left(-\frac{nx^2}{6}+\tilde\epsilon(x^3)\right)}
=
\lim_{x\rightarrow 0}\frac{\frac{n}{6}+\epsilon_1(x)}{-\frac{n}{6}+\tilde\epsilon_1(x)}.
$$
(Here $\epsilon_1(x)=\frac{1}{x^2}\epsilon(x)$ and similarly for $\tilde\epsilon_1$.) As $x$ approaches $0$, $\epsilon_1(x)$ and $\tilde\epsilon_1(x)$ approach $0$, and you are left with
$$
\frac{\frac{n}{6}}{-\frac{n}{6}}=-1.
$$
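The computed limit can be sanity-checked numerically by evaluating $f$ near $0$ for a few values of $n$:

```python
import math

def f(x, n):
    return (math.sinh(x) ** n - x ** n) / (math.sin(x) ** n - x ** n)

# as x -> 0 the ratio should approach -1 for every n
vals = [f(1e-3, n) for n in (1, 2, 5)]
```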
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Studying convergence of recursively defined sequence $a_1=2\text{,}\; a_{n+1}=2\sin(a_n)$ First, sorry for the duplicate if this was asked before; I couldn't find it.
This sequence is not monotone, but it seems convergent; I have plotted it with Maple. Any hints to prove this is a Cauchy sequence? Or another method?
By the way, the sequence seems divergent if I increase the factor 2. For example, $a_1=2\text{,}\; a_{n+1}=2.25\cdot\sin(a_n)$ is divergent.
Thanks..
|
Hints:
*
*Show that $|a_n|\le2$ for all $n$
*Show that if $a_n\to L$, then $L=2\sin L$
*Determine the stability of the fixed points of the map $x\to 2\sin x$
*Deduce the convergence behaviour of $a_n$
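Carrying out the hints numerically: iterating $x \mapsto 2\sin x$ from $a_1 = 2$ settles on the positive fixed point $L = 2\sin L \approx 1.8955$, where $|2\cos L| < 1$ makes the fixed point attracting.

```python
import math

a = 2.0
for _ in range(200):
    a = 2.0 * math.sin(a)      # the recursion a_{n+1} = 2 sin(a_n)

L = a
residual = abs(L - 2.0 * math.sin(L))   # how close L is to a fixed point
contraction = abs(2.0 * math.cos(L))    # |derivative of the map| at L
```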
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How many factors of $N$ are a multiple of $K$? How many factors of $N = 12^{12} \times 14^{14} \times 15^{15}$ are a multiple of $K = 12^{10} \times 14^{10} \times 15^{10}$ ?
Is there a general approach to such questions?
|
Look at it this way:
$$N=2^{38}3^{27} 5^{15} 7^{14}$$
Now ask yourself how many factors are a multiple of:
$$K=2^{30}3^{20} 5^{10} 7^{10}$$
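Following the hint: since $K \mid N$, the factors of $N$ that are multiples of $K$ correspond exactly to the divisors of $N/K$, so the count is $\prod_p \big(e_N(p) - e_K(p) + 1\big)$. A short script to confirm:

```python
from collections import Counter

def factorize(n):
    # trial-division prime factorization (fine for the small bases here)
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def prime_exponents(base_powers):
    total = Counter()
    for base, power in base_powers:
        for p, e in factorize(base).items():
            total[p] += e * power
    return total

N = prime_exponents([(12, 12), (14, 14), (15, 15)])   # 2^38 3^27 5^15 7^14
K = prime_exponents([(12, 10), (14, 10), (15, 10)])   # 2^30 3^20 5^10 7^10

count = 1
for p, e in N.items():
    count *= e - K[p] + 1   # divisors of N/K; valid because K divides N
```

This gives $(9)(8)(6)(5) = 2160$ such factors.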
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Question involving Householder Matrix This is an exam question that I'm having trouble solving.
Given a unit vector $v^Tv=1$, the Householder matrix is defined as $H=I-2vv^T$.
The first question is: given column vector $x$, if $Hx=c\cdot e_1$ where $c$ is constant and $e_1$ is the first vector of the canonical basis, find $c$.
After some algebra, I found that
$$c=x_1-2v_1\sum_{i=1}^n{v_ix_i}$$
I was unable to simplify it further.
The second question that I've been unable to solve is:
What is $v$, so that $H=I-2vv^T$ satisfies $Hx=c\cdot e_1$?
|
It might be a little simpler to look at what $H$ does. If $x || v$ then
$Hx = -x$ and if $x \bot v$, we have $Hx = x$. So to invert, we just apply $H$ again.
To confirm, check that $H^2 = I$, in fact, $H$ is orthogonal, which gives it
desirable numerical properties.
I suspect the purpose of the question was not to have you perform the
algebraic manoeuvre $c=e_1^T Hx= e_1^T(x-2v^Tx v) = x_1 - 2v_1(v^T x)$, but to note that
$\|Hx\| = \|ce_1\|$ from which we get $c = \pm \|x\|$.
To compute a relevant $v$, note that we want to reflect the vector $x$
onto the vector $\pm \|x\| e_1$. Use $y=\pm e_1$ to represent the
desired target unit vector. Then we want to find the hyperplane that
bisects the angle between $x$
and $\|x\| y$,
we can do this by choosing direction $u = x - \|x\|y$, and letting $v= {u \over \|u\| }$. Then a quick calculation (using
$\|x-\|x\|y\|^2 = 2 ( \|x\|^2- \|x\| x^Ty)$) shows that $Hx = y$.
We choose $\pm$ so that $u \neq 0$.
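A numeric sketch of the construction (plain Python, no linear-algebra library): build $v$ from $u = x - \|x\|y$ with the sign chosen to avoid $u = 0$ (and cancellation), and apply $H$ via $Hx = x - 2(v^Tx)v$.

```python
import math

def householder_to_e1(x):
    # reflect x onto c * e1 with c = -sign(x[0]) * ||x||; this sign choice
    # keeps u = x - c*e1 away from zero
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    sign = 1.0 if x[0] >= 0 else -1.0
    y = [-sign * norm_x] + [0.0] * (len(x) - 1)
    u = [xi - yi for xi, yi in zip(x, y)]
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    v = [ui / norm_u for ui in u]
    vtx = sum(vi * xi for vi, xi in zip(v, x))
    Hx = [xi - 2.0 * vtx * vi for xi, vi in zip(x, v)]
    return Hx, y[0]

Hx, c = householder_to_e1([3.0, 4.0, 0.0])   # ||x|| = 5, so c = -5
```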
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Complex Functions using polar form Given that $Z_1 Z_2 \ne 0$,
use the polar form to prove that
$\operatorname{Re}(Z_1\bar Z_2)= |Z_1|\,|Z_2|$ iff $\theta_1 - \theta_2 = 2n\pi$,
$n = 0, \pm1, \pm2, \dots$,
where $\theta_1 = \operatorname{Arg}(Z_1)$ and $\theta_2 = \operatorname{Arg}(Z_2)$.
|
Hint: given that $\operatorname{Re} z = \frac{1}{2}(z+\bar z)$, $\operatorname{Im} z = \frac{1}{2i}(z-\bar z)$ and $|z|^2=z \bar z\,$:
$$
\begin{align}
\operatorname{Re}(z_1\bar z_2) = |z_1| |z_2| \;\;& \iff\;\;(z_1 \bar z_2+\bar z_1 z_2)^2 = 4\, z_1 \bar z_1 z_2 \bar z_2 \\
& \iff\;\; (z_1 \bar z_2-\bar z_1 z_2)^2 = 0 \\
& \iff\;\; \operatorname{Im}(z_1 \bar z_2) = 0 \\
& \iff\;\; \arg(z_1 \bar z_2) = \arg(z_1) - \arg(z_2) = k\,\pi
\end{align}
$$
Since $\operatorname{Re}(z_1\bar z_2) = |z_1| |z_2| \gt 0$ it follows that $\arg(z_1) - \arg(z_2) = 2\,n\,\pi$.
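A numeric illustration of the equivalence using `cmath`: the quantity $\operatorname{Re}(z_1\bar z_2) - |z_1||z_2| = |z_1||z_2|(\cos(\theta_1-\theta_2) - 1)$ is $0$ exactly when the arguments differ by a multiple of $2\pi$, and strictly negative otherwise.

```python
import cmath

def gap(r1, t1, r2, t2):
    z1, z2 = cmath.rect(r1, t1), cmath.rect(r2, t2)
    return (z1 * z2.conjugate()).real - abs(z1) * abs(z2)

# argument difference a multiple of 2*pi: equality (up to rounding)
same = [gap(2.0, 0.7, 3.0, 0.7), gap(1.5, -1.2, 4.0, -1.2 + 2 * cmath.pi)]
# otherwise Re(z1 conj(z2)) falls strictly short of |z1||z2|
diff = [gap(2.0, 0.7, 3.0, 1.9), gap(1.0, 0.0, 1.0, cmath.pi / 2)]
```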
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Taking the derivative inside the integral (Leibniz Rule for differentiation under the integral sign) I have a function I would like to differentiate but am wondering if my method is allowable:
If $\displaystyle f(x)= \int_{a}^{b} h(t) \:\mathrm{d}t$, what is the derivative of $f$ with respect to $x$, if $x$ occurs in the expression of $h(t)$.
Can I solve this by simply taking the derivative of $h(t)$ as I would normally any function while treating instances of $t$ as constants? Or must I account for the integration before taking the derivative. I understand that if I could compute the integral I would end up with an expression for $f(x)$ in terms of $x$ and that the integral is kind of just a place holder for that expression, but I am unsure of whether or not taking the derivative before computing the integral would change the result. Any advice would be appreciated as well as any suggested readings on this type of problem!
|
To me the interesting result is the measure theory statement which intuitively requires:
*
*$f_x$ exists (the derivative of $f$ wrt $x$)
*$f_x$ remains measurable.
Then:
$$ \frac{d}{dx} \int_{\Omega} f(x, \omega) d\omega = \int_{\Omega} f_x(x, \omega) d\omega$$
is true.
Formally according to wikipedia:
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Verify the solution of the wave equation with Heaviside initial condition. I am interested in solving the following wave equation in three dimensional space:
$$ \begin{cases}
u_{tt} & = c^2\Delta u\\
u(x,0) & = 0\\
u_t(x,0) & = h(|x|),
\end{cases} $$
where $h(r) = H(1-r)$ for $r>0$, $H(\cdot)$ being the Heaviside function. I do know that the solution to this problem is given by the expression:
$$ u(x,t) = \frac{1}{4\pi c^2t} \int_{\partial B_{ct}(x)} h(y)\, d\sigma(y),$$
where $B_{ct}(x)$ is the open ball with centre $x$ and radius $ct$. With this in mind, the solution to this problem has the form
$$ u(x,t) = \begin{cases}
\frac{1}{4\pi c^2t}\int_{\partial B_{ct}(x)}\, d\sigma(y) && \ \ \textrm{ if }\, |x|<1,\\
0 && \ \ \textrm{ otherwise}.
\end{cases} $$
Thus it remains to calculate the surface area of the $B_{ct}(x)$.
|
Since the initial condition is spherically symmetric we have $u=u(r)$ where $r$ is the radial coordinate. The wave-equation in (3D) spherical coordinates can be written $v_{tt} = c^2v_{rr}$ where $v(r,t) = ru(r,t)$ so $v(r,0) = 0$ and $v_t(r,0) = rH(1-r)$. Since this is just the standard one-dimensional wave equation we can write down the solution directly:
$$u(r,t) = \frac{1}{2cr} \int_{r-ct}^{r+ct}sH(1-s){\rm d}s \\= \frac{H(1-r-ct)}{4} \left((r+ct-1) (r+ct+1)H(1-r+ct)-\left((r-ct)^2-1\right)\right)~~~\text{for}~~~t > 0$$
Below are plots of the solution $v(r,t) = ru(r,t)$ generated with Mathematica (animation frames omitted here):
(* Generate solution and make GIF of solution *)
g[r_] = r HeavisideTheta[1 - r];
v[r_, t_] = 1/(2c) Integrate[g[s], {s,r-c t,r+c t}, GenerateConditions -> False] /. c -> 1;
data = Table[Plot[{v[r, t]}, {r, 0, 5}, PlotRange -> {-1, 1}], {t, 0, 4, 0.25}];
Export["wave.gif", data]
(* Generate 3D plot of solution *)
data2 = Table[Plot3D[{v[Sqrt[x^2 + y^2], t]}, {x, -5, 5}, {y, -5, 5}, PlotRange -> {-0.5, 0.3}], {t, 0, 4, 0.25}]
Export["wave2.gif", data2]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Inverse of a factorial I'm trying to solve hard combinatorics that involve complicated factorials with large values.
In a simple case, such as $8Pr = 336$ where we must find the value of $r$, it is easy to rewrite it as $$\frac{8!}{(8-r)!} = 336.$$
Then $(8-r)! = \frac{8!}{336} = 120$ and by inspection, clearly $8-r = 5$ and $r = 3$.
Now this is all well and good, and I know the factorial doesn't have an inverse function the way sin, cos, tan, etc. do, but how would you solve an equation involving much larger values than the problem above without tedious guessing and checking?
Edit: For e.g. if you wanted to calculate a problem like this (it's simple I know but a good starting out problem)
Let's say 10 colored marbles are placed in a row, what is the minimum number of colors needed to guarantee at least $10000$ different patterns? WITHOUT GUESS AND CHECKING
Any method or explanation is appreciated!
|
I just wrote this answer to an old question. Using $a=1$, we get a close inverse for the factorial function:
$$
n\sim e\exp\left(\operatorname{W}\left(\frac1{e}\log\left(\frac{n!}{\sqrt{2\pi}}\right)\right)\right)-\frac12\tag{1}
$$
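Formula $(1)$ can be evaluated with a hand-rolled Newton solver for $\operatorname{W}$ (a sketch: `lambert_w` below is a minimal iteration for $we^w = t$, not a library routine):

```python
import math

def lambert_w(t, iters=50):
    # Newton's method for w * e^w = t, assuming t > 0
    w = math.log(t + 1.0)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - t) / (ew * (w + 1.0))
    return w

def inverse_factorial(fact):
    # formula (1) with n! replaced by the given value
    # (float conversion limits this to inputs up to about 170!)
    t = math.log(fact / math.sqrt(2.0 * math.pi)) / math.e
    return math.e * math.exp(lambert_w(t)) - 0.5

n10 = inverse_factorial(math.factorial(10))
n30 = inverse_factorial(math.factorial(30))
```

Rounding the output recovers the original $n$ for moderate inputs.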
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2078997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 5,
"answer_id": 2
}
|
If $x(b-c)+y(c-a)+z(a-b)=0$ then show that .. I am stuck with the following problem that says:
If $x(b-c)+y(c-a)+z(a-b)=0$ then show that
$$\frac{bz-cy}{b-c}=\frac{cx-az}{c-a}=\frac{ay-bx}{a-b}$$ where $a \neq b \neq c.$
Can someone point me in the right direction? Thanks in advance for your time.
|
HINT:
$$-(a-b)z=x(b-c)+y(c-a)$$
$$\frac{bz-cy}{b-c}=\dfrac{-b\{x(b-c)+y(c-a)\}-c(a-b)y}{(b-c)(a-b)}=?$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Permutations : A person picks 4 numbers from a set of 9, what are the total ways he can Win? In a casino, a person picks 4 numbers from a set of 9 numbers (3 Even, 6 Odd). A person wins if at least one of those 4 numbers is Even. What are the total ways he can Win ?
There are two approches I believe.
1) One is to find all possible ways of picking 4 from 9 (9C4) and subtract the number of ways of getting only odd (6C4) ie:
9C4 - 6C4
2) Pick one even from the set of 3 Evens and pick 3 from the remaning 8. ie :
3C1 * 8C3
What is the difference between the above two? In my opinion the 2nd method does the same thing, so why is it wrong?
|
The second method is wrong because you are over counting.
Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\cdots,o_6$.
When you choose a number from the set of three evens, suppose you get $e_1$. Out of the remaining $8$, you choose $3$, let them be $e_2,o_1,o_2$.
So you have $e_1,e_2,o_1,o_2$ and you win.
But you could also have selected them this way. When you chose one out the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.
However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.
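To see the over-counting numerically, here is a small brute-force enumeration (my own script; the labels $e_i$, $o_j$ follow the answer's notation):

```python
from itertools import combinations
from math import comb

evens = {'e1', 'e2', 'e3'}
odds = {'o1', 'o2', 'o3', 'o4', 'o5', 'o6'}
numbers = evens | odds

# Method 1: all 4-number picks minus the all-odd picks.
wins = [s for s in combinations(sorted(numbers), 4)
        if any(x in evens for x in s)]
print(len(wins))                  # 111 distinct winning sets
print(comb(9, 4) - comb(6, 4))    # 111, agreeing with method 1

# Method 2 over-counts: it builds (chosen even, remaining three) pairs,
# and a winning set containing k evens is produced k separate times.
pairs = sum(1 for e in evens
            for rest in combinations(sorted(numbers - {e}), 3))
print(pairs)                      # 3 * C(8,3) = 168 > 111
```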
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Why Is Inverse of a Polynomial Still a Polynomial? The fourth paragraph of the wikipedia article algebraic extension states that $K[a]$ is a field, which means every polynomial has an inverse. The inverse has to be a polynomial over $K$ as well. It seems it requires $K$ to be a non-integral domain. How do we resolve the confusion? How do we prove this paragraph?
|
Polynomials over a field $K$, that is, elements of $K[x]$, don't generally (unless they have degree zero) have an inverse.
$K[a]$ is a different object, it is isomorphic to the quotient ring $K[x]/(m)$ where $m$ is the minimal polynomial of $a$ (you can see this via the first isomorphism theorem using the morphism $\varphi:K[x]\to K[a]$ which sends $\sum k_i x^i\mapsto\sum k_i a^i$), assuming $a$ lives inside a field extension of $K$ its minimal polynomial will be irreducible, so the ideal $(m)$ it generates will be maximal and the quotient $K[x]/(m)$ will be a field.
Inverses in this field can be computed via the usual euclidean algorithm, take a polynomial $f\in K[x]$ with $\gcd(f,m)=1$, then by Bezout's theorem there are $h$ and $g$ such that $fh+gm=1$, but in the quotient ring $gm=0$ so $h$ is the inverse of $f$
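To make the last paragraph concrete, here is a sketch of that Euclidean-algorithm computation over $\mathbb{Q}$ (all helper names are mine; polynomials are stored as coefficient lists, lowest degree first). As an example, the inverse of $x+1$ in $\mathbb{Q}[x]/(x^2-2)$ — i.e. of $1+\sqrt2$ — comes out as $x-1$:

```python
from fractions import Fraction

def strip(p):
    p = p[:]
    while p and p[-1] == 0:
        p.pop()
    return p

def polymul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polysub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else Fraction(0)) -
            (b[i] if i < len(b) else Fraction(0)) for i in range(n)]

def polydivmod(a, b):
    """Quotient and remainder of a by b (b nonzero), over Q."""
    a, b = strip(a), strip(b)
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and a:
        shift = len(a) - len(b)
        c = a[-1] / b[-1]
        q[shift] = c
        a = strip([ai - c * (b[i - shift] if 0 <= i - shift < len(b) else 0)
                   for i, ai in enumerate(a)])
    return q, a or [Fraction(0)]

def poly_inverse(f, m):
    """Inverse of f modulo m in Q[x], assuming gcd(f, m) = 1 (Bezout)."""
    r0, r1 = m, f
    s0, s1 = [Fraction(0)], [Fraction(1)]  # s tracks the coefficient of f
    while any(r1):
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, polysub(s0, polymul(q, s1))
    c = strip(r0)[0]                 # gcd is a nonzero constant by coprimality
    _, h = polydivmod([x / c for x in s0], m)  # reduce to degree < deg m
    return h

f = [Fraction(1), Fraction(1)]                # 1 + x
m = [Fraction(-2), Fraction(0), Fraction(1)]  # x^2 - 2
print(poly_inverse(f, m))  # coefficients of x - 1, since (x+1)(x-1) = 1 mod m
```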
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
A category is $\mathbf{J}$-complete? Let $\mathbf{C}$ be a category and $\mathbf{J}$ be an index category. What does it mean to say that $\mathbf{C}$ is $\mathbf{J}$-complete? Is it just saying that all $\mathbf{J}$ shaped diagrams in $\mathbf{C}$ have a limit?
|
Yes, this is exactly what it means. A category is $\mathbf{J}$-complete when it has all $\mathbf{J}$-shaped limits.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can't recognize the series. Can somebody take a look? I am solving a question as part of which I got the below mentioned series. I tried a lot but couldn't recognize this.
$$
^m C_m m^n - {^m C_{m-1} (m - 1)^n} + \cdots \pm {^mC_1} 1^n
$$
I am sure it's expansion of some famous series.
Can somebody help?
|
We use the notation $\binom{m}{j}$ instead of $^mC_j$ and we also use the coefficient extraction operator $[z^n]$ to denote the coefficient of $z^n$ in a series. This way we can write e.g.
\begin{align*}
n![z^n]e^{jz}=j^n
\end{align*}
We obtain for $m\geq 1$
\begin{align*}
\sum_{j=1}^m\binom{m}{j}(-1)^{m-j}j^n
&=\sum_{j=1}^m\binom{m}{j}(-1)^{m-j}n![z^n]e^{jz}\\
&=n![z^n]\sum_{j=1}^m\binom{m}{j}\left(e^z\right)^j(-1)^{m-j}\\
&=n![z^n]\left((e^z-1)^m-(-1)^m\right)\\
&=n![z^n]\left(\left(z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots\right)^m-(-1)^m\right)\\
&=\begin{cases}
-(-1)^m&\qquad n=0\\
0&\qquad 1\leq n < m\\
m!&\qquad n=m\\
m!\,S(n,m)&\qquad n>m
\end{cases}
\end{align*}
Here the $(-1)^m$ is the omitted $j=0$ term of the binomial expansion, and $S(n,m)$ denotes the Stirling numbers of the second kind: for $n\geq 1$ the sum equals $m!\,S(n,m)$, the number of surjections from an $n$-element set onto an $m$-element set (which is $0$ for $n<m$ and $m!$ for $n=m$).
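A brute-force check (my own script; note that for $n\geq 1$ the alternating sum equals $m!\,S(n,m)$, where $S(n,m)$ are the Stirling numbers of the second kind, i.e. the surjection count divided by $m!$):

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, m):
    """Stirling numbers of the second kind, with S(0, 0) = 1."""
    if n == 0 or m == 0:
        return 1 if n == m else 0
    return m * S(n - 1, m) + S(n - 1, m - 1)

def alt_sum(n, m):
    return sum(comb(m, j) * (-1) ** (m - j) * j ** n
               for j in range(1, m + 1))

for m in range(1, 6):
    assert alt_sum(0, m) == -(-1) ** m                   # the n = 0 case
    for n in range(1, 10):
        assert alt_sum(n, m) == factorial(m) * S(n, m)   # n >= 1: m! S(n,m)
print("all cases check out")
```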
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Limit question related to integration Find the limit,
$$L=\lim_{n\to \infty}\int_{0}^{1}(x^n+(1-x)^n)^{\frac{1}{n}}dx$$
My try:
$$ \int_{0}^{\frac{1}{2}}2^{\frac{1}{n}}xdx+ \int_{\frac{1}{2}}^{1}2^{\frac{1}{n}}(1-x)dx< \int_{0}^{1}(x^n+(1-x)^n)^{\frac{1}{n}}dx< \int_{0}^{\frac{1}{2}}2^{\frac{1}{n}}(1-x)dx+ \int_{\frac{1}{2}}^{1}2^{\frac{1}{n}}xdx$$
Now taking the limit I get that,
$$\frac{1}{4}<L<\frac{3}{4}$$
But, how can I get the exact answer!!
This is Problem 11941 from the American Mathematical Monthly.
|
Substitute $x=\tfrac12+y$, with $y$ running from $-\tfrac12$ to $\tfrac12$. The integrand is symmetric about $y=0$, so the positive and negative halves contribute equally and it suffices to consider $y\in[0,\tfrac12]$. As $n\to\infty$, $(\tfrac12-y)^n$ is negligible compared to $(\tfrac12+y)^n$, so the $n$-th root approaches $\tfrac12+y$ and the integral approaches $2\int_0^{1/2}\left(\tfrac12+y\right)dy=\tfrac12+\tfrac14=\tfrac34$.
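A numerical sanity check (my own script, not part of the answer): a midpoint Riemann sum of the integrand for large $n$, computed as $\max(x,1-x)\,(1+r^n)^{1/n}$ with $r=\min(x,1-x)/\max(x,1-x)$ to avoid floating-point underflow, lands close to $3/4$:

```python
def integrand(x, n):
    hi, lo = max(x, 1 - x), min(x, 1 - x)
    r = lo / hi                       # in (0, 1]; r**n underflows harmlessly
    return hi * (1 + r ** n) ** (1.0 / n)

def midpoint_integral(n, steps=20000):
    h = 1.0 / steps
    return h * sum(integrand((k + 0.5) * h, n) for k in range(steps))

val = midpoint_integral(2000)
print(val)   # close to 0.75
```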
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Solve the inequality and show solution sets on the real line $$-\frac{x+5}{2} \le \frac {12+3x}{4}$$
I always have issues with problems like this. I chose to ignore the negative sign at the beginning and got the answer. Is that a good method? When solving this inequality what is the best method for no mistakes?
My answer/ method:
$$-(2)(12+3x) \le (4)(x+5)$$
$$-24-6x \le 4x+20$$
$$-24-20 \le 4x+6x$$
$$-44\le10x$$
$$-\frac{44}{10}\le x$$
$$-\frac{22}{5} \le x$$
$$\left(\infty,-\frac{22}{5}\right]$$
|
$$-\frac{x+5}{2} \le \frac {12+3x}{4}$$
Multiply $4$ both sides:
$$-2(x+5) \le 12+3x$$
$$-2x-10 \le 12 +3x$$
$$-22 \le 5x$$
$$x \geq \frac{-22}{5}$$
Your mistake: the algebra in your work actually leads to the correct inequality $x \ge -\frac{22}{5}$, but the interval at the end is written backwards — $x \ge -\frac{22}{5}$ describes a set extending to $+\infty$, not one ending at $-\frac{22}{5}$:
$$x \ge -\frac{22}{5} \iff x \in [\frac{-22}{5},\infty)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Can't come even near to the solution.. Can somebody take a look? Fermat's little theorem states that if $p$ is a prime number, then for any integer $a$, the number $a^p − a$ is an integer multiple of $p$. In the notation of modular arithmetic, this is expressed as $$a^p \equiv a \mod p.$$
Use Fermat's little theorem to prove that:
Given a prime number $p$, show that if there are a positive integer $x$ and a prime number $a$ such that $p$ divides
$\frac{x^a - 1}{x - 1}$, then either $a = p$ or $p \equiv 1 \pmod a$.
I tried to connect $\frac{x^a – 1}{x – 1}$ to the theorem, but without any success..
Anything will help..
Thanks in advance
|
We have that $p \mid x^a - 1$, or $x^a \equiv 1 \pmod{p}$. So first of all $x \not\equiv 0 \pmod{p}$, so that $x^ p - x \equiv 0 \pmod{p}$ implies $x^{p-1} \equiv 1 \pmod{p}$. (As the prime $p$ divides $x^ p - x = x (x^{p-1} - 1)$, and $p \nmid x$.)
And then, $a$ being prime, $x$ has order either $1$ or $a$ modulo $p$. Of course order $1$ means $x \equiv 1 \pmod{p}$. But then
$$
\frac{x^a - 1}{x - 1} = x^{a-1} + \dots + x + 1 \equiv a \equiv 0 \pmod{p},
$$
so that $p \mid a$ and thus $p = a$.
If $x$ has order $a$ modulo $p$, then $a$ must divide the order $p-1$ of the group of invertible integers modulo $p$. I guess this is where Fermat's Little Theorem might come in. First of all,
$$
x^a \equiv 1 \pmod{p}, \quad x^{p-1} \equiv 1 \pmod{p}
$$
imply that if $p-1 = a q + r, 0 \le r < a$ (so that $r$ is the remainder of the division of $p-1$ by $a$) then
$$
x^{p-1} \equiv x^{a q + r} \equiv x^r \equiv 1 \pmod{p},
$$
so that $r = 0$, as $r$ is less than the order $a$ of $x$ modulo $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Is it possible to find the sum of the infinite series $1/p + 2/p^2 + 3/p^3 + \cdots + n/(p^n)+\cdots$, where $p>1$? Is it possible to find the sum of the series:
$$\frac{1}{p} + \frac{2}{p^2} +\frac{3}{p^3} +\dots+\frac{n}{p^n}\dots$$
Does this series converge? ($p$ is finite number greater than $1$)
|
Hint:
$$\frac{1}{1-x}=1+x+x^2+\dots$$
Differentiate this formula (it is uniformly convergent for $|x|<1$) and multiply with x. Finally set $x=1/p$.
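Carried to completion (this final step is mine, not part of the original hint):
$$\frac{1}{(1-x)^2}=\sum_{n\ge 1} n x^{n-1}\quad\Longrightarrow\quad \sum_{n\ge 1} n x^{n}=\frac{x}{(1-x)^2},$$
so at $x=\frac1p$ (the series converges, since $|1/p|<1$ for $p>1$):
$$\sum_{n\ge 1}\frac{n}{p^n}=\frac{1/p}{(1-1/p)^2}=\frac{p}{(p-1)^2}.$$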
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Closed subset of quasi-separated space is quasi-separated? Is every closed subset of quasi-separated space quasi-separated?
A topological space is called quasi-separated if the intersection of two quasi-compact opens is quasi-compact. Here quasi-compact means every open cover has a finite subcover.
|
Let $X$ be a space with the following property: each point has a neighbourhood basis consisting of quasi-compact opens.
Then (since any closed subset of a quasi-compact set is quasi-compact), we deduce that any closed subset $Z$ of $X$ inherits this property.
Furthermore, one easily sees that any quasi-compact open subset $V$ of $Z$ may be written in the form $V = U \cap Z,$ where $U$ is a quasi-compact open subset of $X$. (Without making our initial assumption on $X$, this property need not hold; e.g. consider a compact subset $Z$ of $X = \mathbb R^n$.)
Suppose now that $X$ is also quasi-separated, and let $Z$ be a closed subset of $X$. If $V_1$ and $V_2$ are two quasi-compact open subsets of $Z$, we may write $V_i = U_i \cap Z$, where $U_i$ is quasi-compact open in $X$. Then $U_1 \cap U_2$ is quasi-compact by assumption, and so $V_1 \cap V_2 = U_1 \cap U_2 \cap Z$ is also quasi-compact; hence $Z$ is quasi-separated.
(I'm not sure what's true if we don't make the additional assumption on $X$; have you looked in the topology section of the Stacks Project?)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Compute a canonical divisor Consider $C,C'$ two cubics in the plane with $C$ smooth. There are $9$ basepoint on the linear system generated by $C$ and $C'$ so if we blow them we get a map $X \to\mathbb P^1$, where $X$ is $\mathbb P^2$ blown-up at 9 points. Now is my question : how to compute $K_X$ ? I saw that $K_X = - C$. But I don't understand how to get it. Is the following argument correct ? $K_X = -c_1(X) = -c_1(f^*\mathbb P^2) = - f^*c_1(\mathbb P^2) = - f^*3H$ where $H$ is an hyperplane section. Now $3H \sim C$ and $f^*C$ it the strict transform of $C$. Thanks in advance for any remarks !
|
Here is a formula you should learn (say from Hartshorne). If $Y$ is a smooth surface, $p\in Y$ is a point and $\pi:X\to Y$ the blow up of $p$ and $E$ the exceptional divisor, then $K_X=\pi^*K_Y+E$.
In your case, $K_X=f^*K_{\mathbb{P}^2}+\sum E_i$. Since $K_{\mathbb{P}^2}=-C$, we get $K_X=-f^*C+\sum E_i$ and thus $K_X$ is the negative of the proper transform of $C$. ($K_X=-C$ is not meaningful, since $C$ is not a curve on $X$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Compute the $n$-th power of triangular $3\times3$ matrix I have the following matrix
$$
\begin{bmatrix}
1 & 2 & 3\\
0 & 1 & 2\\
0 & 0 & 1
\end{bmatrix}
$$
and I am asked to compute its $n$-th power (to express each element as a function of $n$). I don't know at all what to do. I tried to compute some values manually to see some pattern and deduce a general expression, but that didn't give me anything (especially for the top right entry). Thank you.
|
Define
$$J = \begin{bmatrix}
0 & 2 & 3\\
0 & 0 & 2\\
0 & 0 & 0
\end{bmatrix} $$
so that the problem is to compute $(I+J)^n$. The big, important things to note here are
* $I$ and $J$ commute
* $J^3 = 0$
which enables the following powerful tricks: the first point lets us expand it with the binomial theorem, and the second point lets us truncate to the first few terms:
$$ (I+J)^n = \sum_{k=0}^n \binom{n}{k} I^{n-k} J^k = I + nJ + \frac{n(n-1)}{2} J^2 $$
More generally, for any function $f$ that is analytic at $1$, (such as any polynomial), if you extend it to matrices via Taylor expansion, then under the above conditions, its value at $I+J$ is given by
$$ f(I+J) = \sum_{k=0}^\infty f^{(k)}(1) \frac{J^k}{k!} = f(1) I + f'(1) J + \frac{1}{2} f''(1) J^2 $$
As examples of things whose result you can check simply (so you can still use the method even if you're uncomfortable with it, because you can check the result), you can compute the inverse by
$$ (I+J)^{-1} = I - J + J^2 = \begin{bmatrix}
1 & -2 & 1\\
0 & 1 & -2\\
0 & 0 & 1
\end{bmatrix} $$
and if you want a square root, you can get
$$\sqrt{I+J} = I + \frac{1}{2} J - \frac{1}{8} J^2
= \begin{bmatrix}
1 & 1 & 1\\
0 & 1 & 1\\
0 & 0 & 1
\end{bmatrix} $$
(These are actually special cases of $(I+J)^n$ by the generalized binomial theorem for values of $n$ that aren't nonnegative integers)
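A quick numerical check of the closed form (my own verification script, not from the answer):

```python
import numpy as np

I = np.eye(3, dtype=np.int64)
J = np.array([[0, 2, 3],
              [0, 0, 2],
              [0, 0, 0]], dtype=np.int64)
A = I + J  # the matrix from the question

def power_formula(n):
    # A^n = I + nJ + n(n-1)/2 J^2, using the truncated binomial expansion
    return I + n * J + (n * (n - 1) // 2) * (J @ J)

for n in range(8):
    assert np.array_equal(power_formula(n), np.linalg.matrix_power(A, n))

print(power_formula(5))
# [[ 1 10 55]
#  [ 0  1 10]
#  [ 0  0  1]]
```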
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2079950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 6,
"answer_id": 3
}
|
Linearity of expectations - Why does it hold intuitively even when the r.v.s are correlated? An experiment - say rolling a die, is performed a large number of times, $n$. Let $X$ and $Y$ be two random variables that summarize this experiment.
Intuitively(by the law of large numbers), if I observe the values of $X$, over a large number of trials, take their mean, $m_{X}=\frac{1}{n}\sum_{i}{x_{i}}$, and observe the values of $Y$, take their mean $m_{Y}=\frac{1}{n}\sum_{i}{y_{i}}$ and the add the two column means, this is very close to $E(X)+E(Y)$.
If we observe the values of $X+Y$ in a third column, and take their arithmetic mean, $m_{X+Y}$, this will be very close to $E(X+Y)$.
Therefore, linearity of expectation, that $E(X+Y)=E(X)+E(Y)$ emerges as a simple fact of arithmetic (we're just adding two numbers in different orders).
I know linearity of expectations holds, even when the $X$ and $Y$ are dependent. For example, the binomial and hypergeometric expectation is $E(X)=np$, although in the binomial story, the $Bern(p)$ random variables are i.i.d., but in the hypergeometric story, they are dependent.
If two random variables are correlated, wouldn't that affect the average of their sum, than if they were uncorrelated? Any insight or intuition would be great!
|
For intuition, suppose the sample space consists of a finite number of equally probable outcomes (this is of course not true for all probability spaces, but many situations can be approximated by something of this form). Then
$$ E(X+Y) = \frac{(x_1+y_1)+(x_2+y_2)+\cdots+(x_n+y_n)}n $$
and
$$ E(X)+E(Y) = \frac{x_1+x_2+\cdots+x_n}n + \frac{y_1+y_2+\cdots+y_n}n $$
which is obviously the same.
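The arithmetic identity above can be watched in simulation even with strongly dependent variables (my own script; here $Y = 7 - X$ is completely determined by the die roll $X$):

```python
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]

X = rolls
Y = [7 - x for x in rolls]            # perfectly dependent on X

mean = lambda v: sum(v) / len(v)
mX, mY = mean(X), mean(Y)
mXY = mean([x + y for x, y in zip(X, Y)])

# Linearity holds sample by sample: it is just regrouped addition.
print(mXY)        # exactly 7.0, since X + Y = 7 on every roll
print(mX + mY)    # the same value, up to float rounding
```

The dependence between $X$ and $Y$ affects quantities like $\operatorname{Var}(X+Y)$, but never the mean of the sum.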
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 3
}
|
How can an isolated point be an open set? I have the following definition:
In a metric space $(X,d)$ an element $x \in X$ is called isolated if $\{x\}\subset$ X is an open subset
But how can $\{x\}$ be an open subset? There has to exist an open ball with positive radius centered at $x$ and at the same time this open ball has to be a subset of $\{x\}$ but how can this be if there is only one element?
I'm trying to wrap my head around this, but I can't figure it out. It doesn't make sense for metrics on $\mathbb{R}^n$ since each open ball with some positive radius has to contain other members of $\mathbb{R}^n$.
The only thing I could think of was that we have some $x$ with 'nothing' around it and an open ball that contains only $x$ and 'nothing' (even though a positive radius doesn't make sense since there is nothing), so therefore the open ball is contained in $\{x\}$. But I'm not even sure we can define such a metric space, let alone define an open ball with positive radius containing only $x$ and 'nothing'.
|
Let $X$ be any non-empty set, and define a function $d:X\times X\to\Bbb R$ as follows: for $x,y\in X$,
$$d(x,y)=\begin{cases}
0,&\text{if }x=y\\
1,&\text{if }x\ne y\;.
\end{cases}$$
You can easily check that this function $d$ is a metric on $X$; it is commonly called the discrete metric on $X$. Now observe that if $x\in X$ and $0<r\le 1$, then
$$B(x,r)=\{y\in X:d(x,y)<r\}=\{x\}\;:$$
the set $\{x\}$ is the open $r$-ball centred at $x$ provided that $0<r\le 1$.
For a less trivial example, consider the set
$$Y=\{0\}\cup\left\{\frac1n:n\in\Bbb Z^+\right\}$$
with the metric that it inherits from the usual metric on $\Bbb R$. You can check that for each $n\in\Bbb Z^+$ we have
$$B\left(\frac1n,r\right)=\left\{\frac1n\right\}$$
provided that $0<r\le\frac1{n(n+1)}$; this is because the point of $Y$ closest to $\frac1n$ is $\frac1{n+1}$, and the distance between them is
$$\frac1n-\frac1{n+1}=\frac1{n(n+1)}\;.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
}
|
Constructing two matrices that do not commute I need to construct square matrices $A$ and $B$ such that $AB=0$ but $BA \neq 0$.
I know matrix multiplication is not commutative, but I don't know how to construct such matrices. Thanks in advance.
Edit: looking for some simple way
|
Pick $\mathrm u, \mathrm v, \mathrm w \in \mathbb R^n \setminus \{0_n\}$ such that $\neg (\mathrm u \perp \mathrm v)$ and $\mathrm v \perp \mathrm w$. Define
$$\mathrm A := \mathrm u \mathrm v^{\top} \qquad \qquad \qquad \mathrm B := \mathrm w \mathrm v^{\top}$$
whose traces are
$$\mbox{tr} (\mathrm A) = \mathrm v^{\top} \mathrm u \neq 0 \qquad \qquad \qquad \mbox{tr} (\mathrm B) = \mathrm v^{\top} \mathrm w = 0$$
Hence
$$\mathrm A \mathrm B = \mathrm u \underbrace{\mathrm v^{\top} \mathrm w}_{= \mbox{tr} (\mathrm B)} \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm u \mathrm v^{\top} = \mbox{tr} (\mathrm B) \cdot \mathrm A = \mathrm O_n$$
$$\mathrm B \mathrm A = \mathrm w \underbrace{\mathrm v^{\top}\mathrm u}_{= \mbox{tr} (\mathrm A)} \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm w \mathrm v^{\top} = \mbox{tr} (\mathrm A) \cdot \mathrm B \neq \mathrm O_n$$
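A concrete instance of this construction (the specific vectors are my choice): take $u=(1,0)$, $v=(1,0)$, $w=(0,1)$, so that $v^\top u = 1 \neq 0$ and $v \perp w$:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 0.0])   # v . u = 1 != 0
w = np.array([0.0, 1.0])   # v . w = 0

A = np.outer(u, v)         # [[1, 0], [0, 0]]
B = np.outer(w, v)         # [[0, 0], [1, 0]]

print(A @ B)               # the 2x2 zero matrix, since tr(B) = 0
print(B @ A)               # equals tr(A) * B = B here, which is nonzero
```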
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 8,
"answer_id": 5
}
|
Why is $-\log(x)$ integrable over the interval $[0, 1]$ but $\frac{1}{x}$ not integrable? I don't understand why some functions that contain a singularity in the domain of integration are integrable but others are not.
For example, consider $f(x) = -\log(x)$ and $g(x) = \frac{1}{x}$ on the interval $[0, 1]$. These functions look very similar when they are plotted but only $f(x)$ can be integrated.
* What is the precise mathematical reason that makes some functions with singularities integrable while others are not?
* Are $\log$ functions the only functions with singularities that can be integrated or are there other types of functions with singularities that can be integrated?
|
Think about it this way - what's the inverse?
$$y = \frac{1}{x}; x = \frac{1}{y}$$
$$y = -\log x; x = e^{-y}$$
Looking at it this way, it's clear that as $y$ shoots off to infinity, $x$ approaches zero much faster in one case than in the other.
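Numerically (my own illustration, using the exact antiderivatives): $\int_\varepsilon^1 (-\ln x)\,dx = 1-\varepsilon+\varepsilon\ln\varepsilon \to 1$, while $\int_\varepsilon^1 \frac{dx}{x} = -\ln\varepsilon \to \infty$ as $\varepsilon\to 0^+$:

```python
import math

def tail_log(eps):
    """Integral of -ln(x) from eps to 1, via the antiderivative x - x ln x."""
    return 1 - eps + eps * math.log(eps)

def tail_recip(eps):
    """Integral of 1/x from eps to 1."""
    return -math.log(eps)

for eps in (1e-2, 1e-4, 1e-8, 1e-12):
    print(eps, tail_log(eps), tail_recip(eps))
# tail_log converges to 1; tail_recip grows without bound
```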
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 1
}
|
Derivative of nuclear norm of $xx^T-V$ The function is
$$f(x) = \| x x^T - V \|_*$$
where $\| \cdot \|_*$ denotes the nuclear norm and $V$ is a given matrix. $x$ is a vector. Please tell me how to differentiate $f(x)$. And, if it is possible, please show me how to compute the 2nd derivative of $f(x)$.
|
Define a new matrix variable $$M=xx^T-V$$Then find the differential of the function in terms of this new variable
$$\eqalign{
f &= \operatorname{tr}\sqrt{M^TM} \cr
\cr
df &= \frac{1}{2}(M^TM)^{-1/2}:d(M^TM) \cr
&= \frac{1}{2}(M^TM)^{-1/2}:(dM^TM+M^TdM) \cr
&= (M^TM)^{-1/2}:M^TdM \cr
&= M(M^TM)^{-1/2}:dM \cr
&= M(M^TM)^{-1/2}:d(xx^T) \cr
&= M(M^TM)^{-1/2}:(dx\,x^T+x\,dx^T) \cr
&= \big(M(M^TM)^{-1/2} + (M^TM)^{-1/2}M^T\big)\,x:dx \cr
\cr
\frac{\partial f}{\partial x} &= \big(M(M^TM)^{-1/2} + (M^TM)^{-1/2}M^T\big)\,x \cr\cr
}$$
This is the gradient. To find the hessian you must differentiate again wrt $x$. But it's going to be very messy since each term $M$ contains two $x$'s inside of it.
And we can't use the "trace trick" again
$$\operatorname{tr}(f(X))= f^\prime(X^T):dX$$
since there are no traces left in the gradient.
If there is other information that would simplify the problem (e.g. $V$ is symmetric), then you might be able to find an explicit formula for the hessian.
If you want the hessian in order to use something like Newton's method, I would suggest that you try a gradient-based method instead.
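As a sanity check on the gradient formula (my own script, not part of the answer): I compute $M(M^TM)^{-1/2}$ as $UV^T$ from the SVD $M=U\Sigma V^T$, which agrees with it whenever $M$ is invertible, and compare the formula against central finite differences at a random point:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 4))   # a generic (non-symmetric) given matrix
x = rng.normal(size=4)

def f(x):
    M = np.outer(x, x) - V
    return np.linalg.svd(M, compute_uv=False).sum()   # nuclear norm

def grad(x):
    M = np.outer(x, x) - V
    U, _, Vt = np.linalg.svd(M)
    G = U @ Vt                    # = M (M^T M)^{-1/2} for invertible M
    return (G + G.T) @ x

h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(4)])
print(np.max(np.abs(fd - grad(x))))   # should be small
```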
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Counting the total number of possible passwords I'm working from Kenneth Rosen's book "Discrete Mathematics and its applications (7th edition)". One of the topics is counting, he gives an example of how to count the total number of possible passwords.
Question: Each user on a computer system has a password, which is six to eight characters long, where each character is an uppercase letter or digit. Each password must contain at least one digit. How many possible passwords are there?
His solution: Let $P$ be the total number of possible passwords, and let $p_6, p_7$ and $p_8$ denote the number of possible passwords of length 6, 7, and 8, respectively. By the sum rule, $P = p_6 + p_7 + p_8$.
$p_6 = 36^6 - 26^6 = 2,176,782,336 - 308,915,776 = 1,867,866,560$
Similar process for $p_7$ and $p_8$.
This is where I'm confused, what is the logic for finding $p_6$? If I was given the question, I would have done as follows:
$p_6 = 36^5 * 10$, because 5 of the 6 characters can be a letter or a number, so 36 possible values for each character. One character has to be numerical, so it has 10 possible values. All multiplied together, gives you $p_6$. Obviously I'm wrong, but why is he right?
I'd just like to understand the thinking behind Rosen's solution, as he does not make that clear in the book.
|
The number of passwords with at least one digit plus the number of passwords with no digit equals the total number of passwords.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Integrate $\int x e^{x} \sin x dx$
Evaluate:
$$\int x e^{x} \sin x dx$$
Have you ever come across such an integral? I have no idea how to start with the calculation.
|
This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):
We will use the results of the following standard integrals:
$$
∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C
$$
$$
∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C
$$
and
$$
∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C
$$
Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors ($x$, $e^x$ and $\sin (x)$). However, using basic logarithm and exponential properties we can transform the expression into one with just two factors:
$$
x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x}
$$
this leaves us with
\begin{equation*}
I =
\int x e^x \sin (x)\, dx =
\int e^{\ln (x) + x} \sin (x)\, dx =
\end{equation*}
integrate by parts with substituting:
$$
\begin{equation*}
\left[
\begin{alignedat}{2}
u &= \sin (x) \quad & du &=\cos (x) \\
dv &= e^{\ln (x)+x} \quad & v &= xe^x-e^x
\end{alignedat}\,
\right]
\end{equation*}
$$
$$
= e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx
$$
$$
= e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx
$$
Now, another three factor integral appears, which can also be integrated by parts using the previous trick.
$$
∫ xe^x\cos (x)dx =
$$
$$
\begin{equation*}
= ∫ e^{\ln (x)+x}\cos (x)dx =
\left[
\begin{alignedat}{2}
u &= \cos (x) \quad & du &=-\sin (x) \\
dv &= e^{\ln (x)+x} \quad & v &= xe^x-e^x
\end{alignedat}\,
\right] =
\end{equation*}
$$
$$
= xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx =
$$
$$
= xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx
$$
Back to the previous equation, plug in the result above flipping the signs accordingly:
$$
I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ e^x \sin (x) dx
$$
Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation:
$$
2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx
$$
and plug in the results of simpler integrals:
$$
2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C
$$
now just simplify the right side and divide by 2 to obtain the final result:
$$
I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 3
}
|
A function that is not a derivative of any derivable function That's basically it. I need to find a function on the interval $[0,1]$ that isn't a derivative of any derivable function.
I've found one possible solution which sets $0$ for every $x \in \mathbb{Q}$, and $1$ for every $x$ from $\mathbb{R}\setminus \mathbb{Q}$, but I don't really understand it and am not sure if it is correct.
|
Yes. To expand on MathematicsStudent1122's reply: derivatives have the Darboux property (see Darboux's theorem on Wikipedia); that is, if $f:I\to\mathbb{R}$ is differentiable on the interval $I$ and $f'$ takes two values, then it takes all the values in between. So the function $g$ which is $0$ on the rationals and $1$ on the irrationals cannot be the derivative of any function $f$ on any subinterval.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Coincidence set closed equivalent to diagonal closed Prove
For any topological space $Y$ and any continuous maps $f, g : Y → X$, the set
$\{y ∈ Y : f(y) = g(y)\}$ is closed in $Y$
is equivalent to
The diagonal $∆ = \{(x, x) : x ∈ X\}$ is a closed subset of $X × X$, in the product
topology.
I've proved that the diagonal being closed is equivalent to $X$ being Hausdorff, and also proved that the diagonal closed implies the coincidence set is closed but can't do the reverse implication, any hints? Have tried to show the complement is open, ie there is an open neighbourhood about each point, but can't seem to find a useful way to use the coincidence set being closed. The functions $f, g$ seem to be mapping the 'wrong way' as it were, so not sure how their continuity can be used.
|
Note that $\Delta$ is the coincidence set of the two projections from $X\times X$ to $X$. The projections are $p_1 : X\times X\to X,\;(x,x')\mapsto x$ and $p_2 : X\times X\to X,\;(x,x')\mapsto x'$.
Indeed, it's the set of points $(x,x')$ s.t. $x=x'$...
Thus the diagonal is closed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2080863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Where does this proof that the abs. value of the determinant of a 2x2 matrix A is the area of the image of the unit square under A go wrong? If we have a matrix $A\in M_2(\mathbb{R})$, $A=\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, then $det(A)=ad-bc$. $A$ sends the unit square to the parallelogram whose sides are the vectors $(a, c)$ and $(c, d)$. Since the area of a parallelogram is the product of its two side lengths, the area of this parallelogram is $\|(a, c)\|\|(b,d)\|=\sqrt{a^2+c^2}\sqrt{b^2+d^2}=\sqrt{(ab)^2+(ad)^2+(cb)^2+(cd)^2}$. If this is the absolute value of $det(A)$, then it should equal $\mid ad-bc \mid$. But I just don't see how it does . . .
|
The area of a parallelogram isn't the product of the two side lengths; it's the product of its base and height. If $\theta$ is the angle between adjacent sides, then the area becomes $||(a,c)||\, ||(b,d)||\sin(\theta)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Other ways to evaluate $\lim_{\theta\to 0} \frac{\sin2\theta}{2\theta}$? What steps should be taken to find the limit:
$$\lim_{\theta\to 0} \frac{\sin2\theta}{2\theta}$$?
I went about evaluating the limit using the fundamental rules of limits. I noticed that $\lim_{\theta\to 0} \frac{\sin\theta}{\theta}=1$ and that $\lim_{x\to a} cx=ca$.
Therefore, the limit evaluates as follows: $\frac{\sin2\theta}{2\theta} \to \frac{2\times1}{2} \to 1$.
I was wondering if I evaluated the function correctly, or did I get to this by luck? Is there another way to evaluate this equation?
|
Well, since very fortunately L'Hopital rule is not prohibited here, why don't we use it:
$$
\lim_{x\to 0 }\frac{\sin (2x)}{2x}=\lim_{x\to 0}\frac{2\cos 2x}{2}=1.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
A question about neighboring fractions. I have purchased I.M. Gelfand's Algebra for my soon-to-be high school student son, but I am embarrassed to admit that I am unable to answer seemingly simple questions myself.
For example, this one:
Problem 42. Fractions $\dfrac{a}{b}$ and $\dfrac{c}{d}$ are called neighbor fractions if their difference $\dfrac{ad - bc}{bd}$ has numerator $\pm1$, that is, $ad - bc = \pm 1$.
Prove that
(a.) in this case neither fraction can be simplified (that is, neither has any common factors in numerator and denominator);
(b.) if $\dfrac{a}{b}$ and $\dfrac{c}{d}$ are neighbor fractions then $\dfrac{a + c}{b + d}$ is between them and is a neighbor fraction for both $\dfrac{a}{b}$ and $\dfrac{c}{d}$; moreover, ...
Here is the snapshot from the book online (click on Look Inside on the Amazon page):
So, (a) is simple, but I have no idea how to prove (b). It just does not seem right to me. Embarrassing. Any help is appreciated.
|
We have to prove that for positive integers $a,b,c,d$, we either have
$$\frac{a}{b}<\frac{a+c}{b+d}<\frac{c}{d}$$
or
$$\frac{c}{d}<\frac{a+c}{b+d}<\frac{a}{b}$$
First of all, $\frac{a}{b}=\frac{a+c}{b+d}$ is equivalent to $ab+ad=ab+bc$, hence $ad=bc$, which contradicts the assumption $ad-bc=\pm 1$. We can
disprove $\frac{a+c}{b+d}=\frac{c}{d}$ in the same manner.
In the case of $\frac{a}{b}<\frac{a+c}{b+d}$, we get $ad<bc$, hence $ad-bc=-1$. The condition $\frac{a+c}{b+d}<\frac{c}{d}$ is equivalent to $ad+cd<bc+cd$, hence $ad<bc$, which implies $ad-bc=-1$ again. So, $\frac{a}{b}<\frac{a+c}{b+d}$ is equivalent to $\frac{a+c}{b+d}<\frac{c}{d}$.
So, if we have $\frac{a}{b}>\frac{a+c}{b+d}$, we must have $\frac{a+c}{b+d}>\frac{c}{d}$ (which can also be proven directly analogue to the calculation above)
To show that the middle fraction is a neighbor-fraction to both fractions, just use the definition of neighbor-fractions.
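A small Python check of part (b) on a concrete pair of neighbor fractions (the example pair $1/2$, $2/3$ is my own choice, not from the book):

```python
from fractions import Fraction

def is_neighbor(a, b, c, d):
    """True if a/b and c/d satisfy a*d - b*c = ±1."""
    return abs(a * d - b * c) == 1

a, b, c, d = 1, 2, 2, 3           # 1/2 and 2/3, since 1*3 - 2*2 = -1
m_num, m_den = a + c, b + d       # the mediant (a+c)/(b+d) = 3/5

assert is_neighbor(a, b, c, d)
assert Fraction(a, b) < Fraction(m_num, m_den) < Fraction(c, d)
assert is_neighbor(a, b, m_num, m_den)
assert is_neighbor(m_num, m_den, c, d)
print("mediant", m_num, "/", m_den, "lies between and is a neighbor of both")
```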
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 1
}
|
Integrating $\int\frac{5x^4+4x^5}{(x^5+x+1)^2}dx $ In the following integral:
$$I = \int\frac{5x^4+4x^5}{(x^5+x+1)^2}dx $$
I thought of using partial fractions and then solving it, but I am not able to form the partial fractions.
|
$\displaystyle \int\frac{5x^4+4x^5}{(x^5+x+1)^2}dx = \int\frac{5x^4+4x^5}{x^{10}(1+x^{-4}+x^{-5})^2}dx = \int\frac{5x^{-6}+4x^{-5}}{(1+x^{-4}+x^{-5})^2}dx$
Now put $t = 1+x^{-4}+x^{-5}$, so that $dt = -(5x^{-6}+4x^{-5})\,dx$ and the integral becomes $\displaystyle\int -\frac{dt}{t^2}$.
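Carrying the substitution to the end gives the antiderivative $\frac{x^5}{x^5+x+1}+C$ (this closed form is my own completion of the hint, not stated in the answer); a quick numerical derivative check in Python:

```python
def F(x):
    # Candidate antiderivative: with t = 1 + x**-4 + x**-5 the integral
    # becomes ∫ -dt/t**2 = 1/t = x**5/(x**5 + x + 1).
    return x**5 / (x**5 + x + 1)

def integrand(x):
    return (5 * x**4 + 4 * x**5) / (x**5 + x + 1) ** 2

h = 1e-6
for x in [0.5, 1.0, 2.0]:
    numeric_dF = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(numeric_dF - integrand(x)) < 1e-5
print("F'(x) matches the integrand at the sampled points")
```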
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Evaluating $\int\frac{\cos^2x}{1+\tan x}\,dx$ I'd like to evaluate the following integral:
$$\int \frac{\cos^2 x}{1+\tan x}dx$$
I tried integration by substitution, but I was not able to proceed.
|
This one always works for rational functions of $\sin x$ and $\cos x$ but can be a bit tedious. Set:
$$ z = \tan x / 2$$
so that
$$ \mathrm{d}x = \frac{2\,\mathrm{d} z}{1 + z^2}$$
$$ \cos x = \frac{1 - z^2}{1 + z^2}$$
$$\sin x = \frac{2z}{1 + z^2}$$
Now, you have a rational fraction in $z$ that you can integrate by standard methods (partial fraction decomposition).
There are often simpler (and trickier) substitutions for this kind of integral, but this one will always do the job.
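The three substitution identities above can be spot-checked numerically in Python (the test angles are arbitrary, avoiding $x=\pi$ where $\tan(x/2)$ blows up):

```python
import math

# Spot-check the Weierstrass substitution identities at a few angles.
for x in [0.3, 1.0, 2.5, -1.2]:
    z = math.tan(x / 2)
    assert math.isclose(math.cos(x), (1 - z**2) / (1 + z**2), abs_tol=1e-12)
    assert math.isclose(math.sin(x), 2 * z / (1 + z**2), abs_tol=1e-12)
print("identities hold at the sampled angles")
```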
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Greens function with non-zero boundary condition When solving a differential equation using a Green's function, is it possible to solve a problem with non-zero boundary conditions directly? For example, when solving a problem with non-zero boundary conditions I've broken the problem into multiple pieces, using the Green's function to solve the inhomogeneous part and then solving the homogeneous part using some other method, taking advantage of linearity. But is it possible to solve it without breaking the problem up?
|
It's possible to recombine the pieces into a single formula for solution of the equation $\Delta u=f$ in $\Omega$, $u=g$ on $\partial\Omega$. Namely,
$$
u(x) = \int_\Omega G(x,y)f(y)\,dy + \int_{\partial\Omega} \frac{\partial G(x,y)}{\partial n}g(y)\,dy
$$
(With multiplicative constants subject to the normalization of $G$.) See, for example, Russell L. Herman's lecture notes on PDE.
The formula still contains two terms, reflecting the fact that interior sources and boundary sources are different in nature and affect the solution in different ways.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How do we write the derivation of distance formula of two points from two different quadrants in a cartesian plane? I learnt the derivation of the distance formula for two points in the first quadrant, i.e., $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$, where it is easy to find the legs of the right triangle whose hypotenuse is the distance between the two points, since the first quadrant has no negative coordinates. The same formula also applies when the two points come from two different quadrants of the Cartesian plane, but the derivation I saw is based only on the first-quadrant case. Can you please explain the DERIVATION of the formula when the points lie in different quadrants? Please
|
Consider the following diagram:
Now, to find the wanted distance by the Pythagorean theorem, we need to know the lengths of the edges $BC$ and $AC$.
Suppose $A$ and $B$ are: $$A(x_1,y_1),\qquad B(x_2,y_2)$$
To find $AC$, we need to subtract the length of $AD$ from $CD$. Observe that $C$ has the same $y$-coordinate as the point $B$. So:
$$AC =\vert y_2 - y_1 \vert$$ In order to keep only the length and not worry about its sign, we take the absolute value of this quantity.
The same argument gives the length of $BC$: $$BC =\vert x_2 - x_1 \vert$$
Now we use our friend Pythagoras. $ABC$ is a right triangle, so $${AB}^2={AC}^2+{BC}^2$$ and with these last results we have:
$$d=\sqrt{{\vert x_2-x_1\vert}^2 +{\vert y_2-y_1\vert}^2}$$
Now notice that the square of the absolute value of a number is just equal to the square of that number
(the absolute value only changes the number's sign, not its size, and after squaring the sign is positive anyway, so they are equal).
So we can drop the absolute values in our formula:
$$d= \sqrt{{(x_2-x_1)}^2 +{(y_2-y_1)}^2}$$
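A small Python check that the same formula works regardless of quadrant (the function name is my own):

```python
import math

def distance(p, q):
    """Euclidean distance between points p = (x1, y1) and q = (x2, y2)."""
    (x1, y1), (x2, y2) = p, q
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

# Points deliberately taken from different quadrants: the formula is unchanged.
assert distance((3, 4), (0, 0)) == 5.0       # first quadrant to origin
assert distance((-3, -4), (0, 0)) == 5.0     # third quadrant to origin
assert distance((-1, 2), (2, -2)) == 5.0     # second to fourth quadrant
print("same formula works in every quadrant")
```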
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Simple Polynomial Algebra question to complete the square $4y^2+32y = 0$
$4(y^2+8y+64-64) = 0$
$4(y+4)^2 = 64$
Is that correct?
|
I made a comment, and I would do things slightly differently, so
$$4y^2+32y=0$$
Divide by $4$:
$$y^2+8y=0$$
Add $\left(\frac 82\right)^2=4^2=16$ to both sides:
$$y^2+8y+16=16$$
Rewrite the left-hand side as a square:
$$(y+4)^2=16$$
If you want the $4$ back you can multiply through by $4$ at the end. If you want a pure square this is $(2y+8)^2=64$. The version you have put in your question does not express the left-hand side as a square.
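A quick Python check (mine, not the answerer's) that the completed-square form has the same roots as the original equation:

```python
# Both forms should vanish for exactly the same integers y.
roots_original = [y for y in range(-20, 21) if 4 * y**2 + 32 * y == 0]
roots_completed = [y for y in range(-20, 21) if (y + 4) ** 2 == 16]
assert roots_original == roots_completed == [-8, 0]
print("roots:", roots_original)
```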
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2081864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Show that the line through $P$ and $Q$ is perpendicular to the surface at $P$. Consider a smooth surface given by the function $z = f(x,y)$, such that the partial
derivatives of $f(x,y)$ exist. Suppose $Q$ is a point that does not lie on the surface,
and $P$ is the nearest point on the surface to $Q$. Show that the line through $P$ and
$Q$ is perpendicular to the surface at $P$.
|
The question is incorrect I am afraid. $Q$ also must be at a minimum distance to the surface.
The required normal direction comes from the cross product of the tangent vectors $(1, 0, \partial f/\partial x)$ and $(0, 1, \partial f/\partial y)$, i.e. it is parallel to $(-\partial f/\partial x,\, -\partial f/\partial y,\, 1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove $ \frac{1}{2} (\arccos(x) - \arccos(-x)) = -\arcsin(x)$ The identity I need to prove is this, and I am very close but I am missing a negative sign which I cannot find.
$$ \frac{1}{2} (\arccos(x) - \arccos(-x)) = -\arcsin(x)$$
I started off by using the $\cos(A-B)$ angle-difference formula, with $\arccos(x)$ and $\arccos(-x)$ as $A$ and $B$ respectively. Substituting into the formula, I get that $$\cos(A-B)=1-2x^2$$
Then, from this I say let $x=\sin{y}$, so that I end up with the large expression $$\cos(\arccos(\sin{y})-\arccos(-\sin{y}))=1-2\sin^2(y)$$
Now we can use the cosine double angle formula on the RHS so this becomes $$\cos(\arccos(\sin{y})-\arccos(-\sin{y}))=\cos(2y)$$
Taking the inverse cosine on both sides leads to $$\arccos(\sin{y})-\arccos(-\sin{y})=2y$$
Now reversing the substitution, saying that $y=\arcsin{x}$, we get that $$\arccos(x) - \arccos(-x) = 2\arcsin(x)$$ which is nearly the identity I need to prove except that I have made an error in some step, causing the expression to be wrong. I cannot find this mistake.
|
Maybe using the following identities will help: $$\arccos a-\arccos b= \arccos (ab +\sqrt {(1-a^2)(1-b^2)}) \tag {1}$$ $$\arcsin a +\arcsin b= \arcsin (a\sqrt {1-b^2} +b\sqrt {1-a^2}) \tag{2}$$ After applying $(1)$, use the triangle formula to convert $\arccos $ to $\arcsin $ component and thus LHS=RHS. Hope it helps.
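Independently of which identities you use, the target identity itself is easy to sanity-check numerically in Python:

```python
import math

# Check 0.5*(arccos(x) - arccos(-x)) == -arcsin(x) at sample points in (-1, 1).
for x in [-0.9, -0.3, 0.0, 0.5, 0.99]:
    lhs = 0.5 * (math.acos(x) - math.acos(-x))
    assert math.isclose(lhs, -math.asin(x), abs_tol=1e-12)
print("identity verified at the sampled points")
```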
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to solve this series: $\sum_{k=0}^{2n+1} (-1)^kk $ $$\sum_{k=0}^{2n+1} (-1)^kk $$
The answer given is $-n-1$. I have searched for how to do it, but I have problems simplifying the sum and solving it. How do you go about solving this?
|
Another approach is to exploit the sum $$\sum_{k=0}^{2n+1}x^{k}=\frac{1-x^{2n+2}}{1-x}.$$ Taking the derivative (and multiplying by $x$) we have $$\sum_{k=0}^{2n+1}kx^{k}=\sum_{k=1}^{2n+1}kx^{k}=x\,\frac{-\left(2n+2\right)x^{2n+1}\left(1-x\right)+1-x^{2n+2}}{\left(1-x\right)^{2}}$$ hence taking $x=-1$ we get $$\sum_{k=0}^{2n+1}k\left(-1\right)^{k}=\color{red}{-\left(n+1\right)}$$ as wanted.
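The closed form is easy to confirm for small $n$ in Python:

```python
def alternating_sum(n):
    # Sum of (-1)**k * k for k = 0, ..., 2n+1.
    return sum((-1) ** k * k for k in range(2 * n + 2))

for n in range(10):
    assert alternating_sum(n) == -(n + 1)
print("sum equals -(n+1) for n = 0..9")
```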
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 4
}
|
Solving First Order ODE using the integrating factor approach
I am trying to solve the differential equation, but I do not understand the method. Here is my working:
|
The step from line 3 to line 4 isn't correct: $\frac{di}{dt} e^t + ie^t = \frac{d}{dt}(ie^t)$, so line 4 should read $\frac{d}{dt}(ie^t) = 10t$ which can then be integrated directly.
In general, the ideas behind an integrating factor are outlined here as well as I could explain (if not better).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Commutative ring $R\neq \{0\}$ in which every subring is finite is a field question on proof
Assume we are given a commutative ring $R \neq \{0\}$ with no zero
divisors (not necessarily with a unit element) in which every proper subring only has finitely many elements. Show
that $R$ is a field.
I am aware of the fact that there is a similar question found here, but my questions are of a different nature. First of all, the formulation "...in which every subring only has finitely many elements..." could be simply substituted with $R$ is finite. Am I right?
Then in the proof, we construct a mapping $$\rho_y : \begin{cases} R \to (y)\\x \mapsto xy\end{cases}$$ for some $y \in R\setminus \{0\}$. This is clearly a bijection (the kernel is trivial by the cancellation law). By $(y) \subseteq R$ we get that $R = (y)$. In the solutions in my book it is argued that this holds since $(y)$ is finite by assumption. But does this not also hold if $R$ or any of its subrings is necessarily finite?
|
Assuming you're talking about possibly nonunital rings and that the statement is about proper subrings being finite, we can observe that each ideal is a subring. Also I assume $R\ne\{0\}$.
Suppose $xR$ is proper for every $x\in R$, $x\ne0$. Being a finite commutative ring with no zero divisors, $xR$ is a field, so it has an identity $xe\ne0$. Then $xexr=xr$, for every $r\in R$, so $exr=r\in xR$ and $xR=R$. A contradiction.
Hence, for some $x\ne0$, we have $xR=R$. Then there is $y\in R$ with $xy=x$. In particular, $xR\subseteq yR$, so also $yR=R$ and there exists $z$ with $yz=y$.
If $r\in R$, then $ry=ryz$, so $r=rz$. Hence $R$ has an identity $1=z$.
Now, let $r\in R$, $r\ne0$. Then the minimal subring $S$ of $R$ containing $1$ and $r$ is either finite or $R$. In the first case $S$ is a field, so $r$ is invertible in $S$ and hence in $R$ (they share the identity).
Assume $S=R$. Let $P$ be the prime subring of $R$; then there is a surjective homomorphism $P[X]\to R$, sending $X$ to $r$. If this homomorphism is injective, then the image of $P[X^2]$ is a proper subring of $R$ and is infinite: a contradiction. Then the homomorphism has a nontrivial kernel and so $R$ is a finite field.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Limit problem involving bijective function Let $f: \Bbb N^{\star}\to \Bbb N^{\star}$ be a bijective function such that $$\lim _{n \to \infty} {\frac {f(n)} {n}}$$ exists. Find the value of this limit.
I noticed that, if $f(n)=n$, then the limit is $1$. I couldn't make more progress. Can you help me?
|
Call the limit $L$.
Since $f$ is injective, for every $n$ the values $f(1),\dots,f(n)$ are $n$ distinct positive integers, so $\max_{k\le n}f(k)\ge n$; that is, some $k\le n$ satisfies $f(k)\ge n\ge k$, hence $f(k)/k\ge 1$. Letting $n\to\infty$ produces infinitely many such $k$, so $L\ge 1$.
Since $f$ is surjective, for every $n$ each of the values $1,\dots,n$ is attained, and their $n$ distinct preimages cannot all be smaller than $n$; so some $m\ge n$ satisfies $f(m)\le n\le m$, hence $f(m)/m\le 1$. Again this happens for infinitely many $m$, so $L\le 1$.
Therefore $L=1$.
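A quick illustration in Python, using a concrete bijection of my own choosing (swap each odd $n$ with $n+1$), showing $f(n)/n$ approaching $1$:

```python
def f(n):
    # A bijection of the positive integers: swap each odd n with n + 1.
    return n + 1 if n % 2 == 1 else n - 1

# f is a bijection on {1, ..., 1000} (an even-sized initial segment).
assert sorted(f(n) for n in range(1, 1001)) == list(range(1, 1001))

# f(n)/n is close to 1 for large n.
assert abs(f(10**6) / 10**6 - 1) < 1e-5
assert abs(f(10**6 + 1) / (10**6 + 1) - 1) < 1e-5
print("f(n)/n is close to 1 for large n")
```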
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|
Determine two changing variables only knowing the result So, about a decade ago my company came up with pricing for some banners that we sell. the prices are as follows.
$43.68 for a 3x4 banner
$44.52 for a 3x6 banner
$46.36 for a 3x8 banner
$50.00 for a 3x10 banner
$52.54 for a 3x12 banner
and I can not figure out where these prices came from. The guy who wrote them up quit before I started, and I need to figure out the equation to extend the pricing up and down.
Here's what I DO know.
The equation is based off two things
The cost of the banner per square foot
The cost of labor
I do not need to figure out the factors that went into pricing for either, I just need to know what numbers they are.
Best guess for labor was 63 dollars, it might not be, but if that works, it sounds good to me.
my attempt was to figure it out using substitution with a system of equations.
12(sqft) * X($/sqft) + 63($/hour) * Y (hours) = 43.68 and
18x + 63y = 44.52
with a second set of
24x + 63y = 46.36 and
30x + 63y = 50.00
BUT the first set gives me
x=0.14
y=0.66667
and the second set gives me
x=0.606667
y=0.504762
which leads me to believe that the hours per banner change. Meaning the y in each equation is different. Is there a way to determine what these two variables are, even though one changes, probably linearly? If not, I'll just do a whole new equation, the only issue is the number of variables going into each of these variables.
Thanks.
|
Each time you go up a size you add 2 feet to the length. The added cost for these 2 feet varies from 0.84 to 3.64, which is quite a variation. This shows you will not be able to generate a formula of the form $A + B\cdot(\text{length})$ that fits the old data exactly, since the added cost per extra 2 feet would have to be the constant $2B$. You can do a least-squares fit, shown below, which gives $\text{price} = 38.14 + 1.16\cdot(\text{length})$
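The quoted least-squares fit can be reproduced in plain Python (variable names are mine; the data are the five prices from the question, with length in feet):

```python
lengths = [4, 6, 8, 10, 12]
prices = [43.68, 44.52, 46.36, 50.00, 52.54]

# Ordinary least squares for a line y = intercept + slope * x.
n = len(lengths)
mean_x = sum(lengths) / n
mean_y = sum(prices) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, prices))
sxx = sum((x - mean_x) ** 2 for x in lengths)

slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(f"price = {intercept:.2f} + {slope:.2f} * length")
```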
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2082696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|