Finding a probability on an infinite set of numbers I have $\Omega=\{1,2,3,...\}$ and the probability of each outcome is $P(\{n\}) = 2^{-n}$, $n=1,2,3,...$
I have to prove $P(\Omega) = 1$ . I can understand that from the graph, but how do I actually prove it?
|
$P(Ω) = \sum_{n=1}^{\infty} P(n) = \sum_{n=1}^{\infty} 2^{-n} = \frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^3} + ..... = \frac{1/2}{1-1/2} =1 $
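To make the last equality fully explicit, one can use the finite geometric sum and then let the number of terms go to infinity; a short worked step (editor's addition):
$$\sum_{n=1}^{N} 2^{-n} \;=\; \frac{\tfrac12\left(1-2^{-N}\right)}{1-\tfrac12} \;=\; 1-2^{-N} \;\xrightarrow[N\to\infty]{}\; 1,$$
so $P(\Omega)=\lim_{N\to\infty}\sum_{n=1}^{N}2^{-n}=1$ by countable additivity.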
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Let f,g be bilinear forms on a finite dimensional vector space. Show that there exist unique linear operators Let $f,g$ be bilinear forms on a finite dimensional vector space.
(a) Suppose $g$ is non-degenerate. Show that there exist unique linear operators $T_{1}, T_{2}$ on $V$ such that $f(a,b)=g(T_{1} a,b)=g(a,T_{2} b)$ for all $a,b$.
(b) Show that this result may not be true if $g$ is degenerate.
My attempt:
Although I am clueless about this question, I was wondering if it has something to do with the adjoint of a linear operator. If we have a linear operator $T$ on a hermitian space $V$ then $ \langle Tv,w \rangle= \langle v,T^{*}w \rangle$ and $ \langle T^{*} v,w \rangle= \langle v,Tw \rangle$ for all $v,w$ in $V$. But then how would you relate it to the other bilinear form and how would you use the fact that one of the bilinear forms is non-degenerate?
|
Denote the $k$-vector space in question by $V$. As $V$ is finite dimensional, $V$ and $V^* := L(V,k)$ (the dual space) have the same dimension. Define a map $\Theta_g \colon V \to V^*$ by $\Theta_g a = g(a, \cdot)$. As $g$ is non-degenerate, $\Theta_g$ is one-to-one: If $a \in \ker \Theta_g$, we have $g(a,b) = \Theta_ga(b) = 0$ for all $b \in V$, hence $a = 0$. As $\dim V = \dim V^*$, $\Theta_g$ is an isomorphism. Now define $T_1 \colon V \to V$ by
$$ T_1 a := \Theta_g^{-1}\bigl(f(a, \cdot)\bigr) $$
Then, for all $a, b \in V$, we have
\begin{align*}
g(T_1 a, b) &= \Theta_g(T_1 a)b\\
&= \Theta_g \Theta_g^{-1}\bigl(f(a,\cdot)\bigr)b\\
&= f(a,\cdot)(b)\\
&= f(a,b)
\end{align*}
The existence of $T_2$ is proved along the same lines.
Addendum: $T_1$ is unique, as if $T$ is any operator with $f(a,b) = g(Ta, b)$ for every $a, b \in V$, we have
$$ f(a,\cdot) = g(Ta, \cdot) = \Theta_g(Ta) \iff Ta = \Theta_g^{-1}\bigl(f(a,\cdot)\bigr) = T_1 a $$
Uniqueness of $T_2$ follows along the same lines.
If $g$ is degenerate, take for example $g = 0$. Then $g(T_1 a, b) = 0$ for any $T_1 \colon V \to V$; that is, if $f \ne 0$, such a $T_1$ will not exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find number of functions such that $f(f(a))=a$ Let X ={1, 2, 3, 4}. Find the number of functions $f : X \rightarrow X$ satisfying $f(f(a)) = a$ for all $1 \le a \le 4$.
I took $f(x) =x$, and then there is 1 possibility. But the answer is given as 10. How is it 10?
|
HINT: If $f(x)=x$, then of course we’ll have $f(f(x))=x$ as well, but there’s another possibility: if $f(x)=y$ and $f(y)=x$, then $f(f(x))=f(y)=x$ and $f(f(y))=f(x)=y$, so both $x$ and $y$ behave correctly. These are the only possibilities, however. Let’s give each such function a code describing which elements of $X$ it leaves fixed and which elements it swaps: if $f(x)=x$, we’ll write $(x)$, and if $f(x)=y\ne x$ and $f(y)=x$, we’ll write $(xy)$. The code for the identity function is therefore $(1)(2)(3)(4)$. The code for the function that sends $1$ to itself, $2$ to $3$, $3$ to $2$ and $4$ to itself is $(1)(23)(4)$.
Now the question becomes: How many such codes are there?
* We can have $0$ swaps; that's $(1)(2)(3)(4)$, the identity function.
* We can have $1$ swap and two fixed points, like $(1)(23)(4)$; how many of those are there?
* We can have $2$ swaps, like $(13)(24)$; how many of those are there?
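For readers who want to confirm the final count of $10$, here is a small brute-force check (an editor-added sketch in Python, not part of the original hint):
```python
from itertools import product

X = [1, 2, 3, 4]
# represent f as the tuple (f(1), f(2), f(3), f(4)) and test f(f(a)) = a for every a
count = sum(
    1
    for f in product(X, repeat=4)
    if all(f[f[a - 1] - 1] == a for a in X)
)
print(count)  # 10 = 1 identity + 6 single swaps + 3 double swaps
```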
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
How many such strings are there? How many 50 ($=n$) digit strings ($f(n)$ of them), composed of only zeros and ones, are there such that all "ones", if they occur, appear in groups of at least 3 (the grouping is shown explicitly in the examples), and the groups are separated by at least one "zero"?
Valid Examples:
$$(111)0(111),\;\;\;(1111)00(111)0(11111)0(111)000$$
Invalid Examples:
$$(111)(11),\;\;(111)(111),\;\;(111)0(11)$$
It is easy to find for low $n$ by writing the strings out explicitly. It also seems that $f(n)$ satisfies a recursion, but I can't write it down.
I also tried using stars and bars with no progress.
An Explanatory Example:
$$f(7)=17$$
$$\begin{array}{|c|c|c|c|}\hline 1&0000000&10&00(1111)0\\2&(111)0000&11&000(1111)\\3&0(111)000&12&(11111)00\\4&00(111)00&13&0(11111)0\\5&000(111)0&14&00(11111)\\6&0000(111)&15&(111111)0\\7&(111)0(111)&16&0(111111)\\8&(1111)000&17&1111111\\9&0(1111)00&&\\\hline\end{array}$$
|
So here's a more elementary way to find a recursion:
Denote by $a(n)$ the number of such strings that end in a 0.
Denote by $b(n)$ the number of such strings that end in a 1.
Now, we find $f(n)=a(n)+b(n)$.
Also, easily we obtain $a(n+1)=f(n)=a(n)+b(n)$ since we can add a 0 to any string.
Now, we are left to find a recursion for $b(n+1)$.
For any such string, the last three digits need to be "ones". So if at the end we have exactly $k$ "ones" (here $k \ge 3$) we have $a(n+1-k)$ ways to choose the remaining string.
So $b(n+1)=a(n-2)+a(n-3)+\dotsc+a(1)+1$ i.e. $b(n+1)=b(n)+a(n-2)$.
Now, we are able to solve for $f(n)$.
Using our first recursion, we find $b(n)=a(n+1)-a(n)$.
Plugging this into the second recursion, we find
$$a(n+2)=2a(n+1)-a(n)+a(n-2)$$
Since we established $a(n)=f(n-1)$ above, we further conclude
$$f(n+2)=2f(n+1)-f(n)+f(n-2)$$
This together with the starting values $f(0)=f(1)=f(2)=1, f(3)=2$ gives the correct values
$f(4)=4, f(5)=7, f(6)=11, f(7)=17$ so it seems to work...
Of course, solving the characteristic equation $x^4-2x^3+x^2-1=0$ we can find a closed formula for $f(n)$ but this does not give much more insight than the recursion.
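As a sanity check, the recursion can be compared with brute-force enumeration for small $n$ (an editor-added sketch in Python; the function names are arbitrary):
```python
from itertools import product
import re

def brute(n):
    """Count 0/1 strings of length n whose maximal runs of 1s all have length >= 3."""
    total = 0
    for bits in product("01", repeat=n):
        runs = re.findall("1+", "".join(bits))
        total += all(len(r) >= 3 for r in runs)
    return total

def f(n):
    # f(m) = 2 f(m-1) - f(m-2) + f(m-4), with f(0) = f(1) = f(2) = 1, f(3) = 2
    vals = [1, 1, 1, 2]
    while len(vals) <= n:
        vals.append(2 * vals[-1] - vals[-2] + vals[-4])
    return vals[n]

print([f(n) for n in range(8)])      # [1, 1, 1, 2, 4, 7, 11, 17]
print([brute(n) for n in range(8)])  # [1, 1, 1, 2, 4, 7, 11, 17]
print(f(50))                         # the 50-digit count asked for in the question
```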
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proof that every vector field on a Lie group is left-invariant I am just starting a course on Lie groups and I'm having some difficulty understanding some of the ideas to do with vector fields on Lie groups.
Here is something that I have written out, which I know is wrong, but can't understand why:
Let $X$ be any vector field on a Lie group $G$, so that $X\colon C^\infty(G)\to C^\infty(G)$.
Write $X_x$ to mean the tangent vector
$X_x\in T_x G$ coming from evaluation at $x$, that is, define
$X_x(-)=(X(-))(x)$ for some $-\in C^\infty(G)$.
We also write $L_g$ to mean the left-translation diffeomorphism $x\mapsto gx$.
Now
\begin{align}
X_g(-) = (X(-))(g) &= (X(-))(L_g(e))\\
&= X(-\circ L_g)(e)\\
&= X_e(-\circ L_g) \\
&= ((DL_g)_eX_e)(-).
\end{align}
Using this we can show that $((L_g)_*X)_{L_g(h)}=X_{L_g(h)}$ for all $h\in G$, and thus $(L_g)_*X=X$, i.e. $X$ is left-invariant.
I'm sure that the mistake must be very obvious, but I'm really not very good at this sort of maths, so a gentle nudge to help improve my understanding would be very much appreciated!
|
Your problem is with the equality $(X(f))(L_g(e))= X(f\circ L_g)(e).$ Note that in $(X(f))(L_g(e))$ you first get the derivative of $f$ with respect to $X$ evaluated at $g.$ But, in $X(f\circ L_g)(e)$ you modify the function by a left translation. It holds that $(f\circ L_g)(e)=f(g)$ but you cannot say anything at nearby points, which is essential to get $X(f\circ L_g)(e).$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate the limit without using the L'Hôpital's rule $$\lim_{x\to 0}\frac{\sqrt[5]{1+\sin(x)}-1}{\ln(1+\tan(x))}$$
How to evaluate the limit of this function without using L'Hôpital's rule?
|
I thought that it would be instructive to present a way forward that circumvents the use of L'Hospital's Rule, asymptotic analysis, or other derivative-based methodologies. To that end, herein, we use some basic inequalities and the squeeze theorem.
In THIS ANSWER for $x>-1$, I showed
$$\frac{x}{x+1}\le\log (1+x)\le x \tag 2$$
using only Bernoulli's Inequality and the limit definition of the exponential function.
And here, the inequality
$$|x\cos x|\le |\sin x|\le |x| \tag 1$$
was established by appealing to geometry only.
Using $(1)$ and $(2)$, we have for $x>0$
$$\frac{(1+x\cos x)^{1/5}-1}{\tan x}\le\frac{(1+\sin x)^{1/5}-1}{\log (1+\tan x)}\le\frac{(1+x)^{1/5}-1}{\frac{\tan x}{1+\tan x}} \tag 3$$
Note that the right-hand side of $(3)$ can be written
$$\begin{align}
\frac{(1+x)^{1/5}-1}{\frac{\tan x}{1+\tan x}}& =(1+\tan x)\left(\frac{x\cos x}{\sin x}\right)\\\\
&\times \left(\frac{1}{1+(1+x)^{1/5}+(1+x)^{2/5}+(1+x)^{3/5}+(1+x)^{4/5}}\right)\\\\
&\to \frac 15\,\,\text{as}\,\,x\to 0
\end{align}$$
Similarly, the left-hand side of $(3)$ can be written
$$\begin{align}
\frac{(1+x\cos x)^{1/5}-1}{\tan x}& =(\cos^2 x)\left(\frac{x}{\sin x}\right)\\\\
&\times \left(\frac{1}{1+(1+x\cos x)^{1/5}+(1+x\cos x)^{2/5}+(1+x\cos x)^{3/5}+(1+x\cos x)^{4/5}}\right)\\\\
&\to \frac 15\,\,\text{as}\,\,x\to 0
\end{align}$$
Therefore, by the squeeze theorem
$$\lim_{x\to 0^+}\frac{(1+\sin x)^{1/5}-1}{\log (1+\tan x)}=\frac15$$
A similar development for $x<0$ results in the same limit. Therefore, we have
$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to 0}\frac{(1+\sin x)^{1/5}-1}{\log (1+\tan x)}=\frac15}$$
and we are done without using anything other than standard inequalities!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How to evaluate the limit $\lim_{x\to 0} \frac{1-\cos(4x)}{\sin^2(7x)}$ I am lost in trying to figure out how to evaluate the $$\lim_{x\to 0} \frac{1-\cos(4x)}{\sin^2(7x)}.$$
So far, I have tried the following:
Multiply the numerator and denominator by the numerator's conjugate $1+\cos(4x)$, which gives $\frac{\sin^2(4x)}{(\sin^2(7x))(1+\cos(4x))}$. However, I am not sure what to do after this step.
Could anyone please help point me in the right direction as to what I am supposed to do next?
All help is appreciated.
|
HINT:
Using $\cos2A=1-2\sin^2A,$
$$\dfrac{1-\cos4x}{\sin^27x}=2\cdot2^2\cdot\left(\dfrac{\sin2x}{2x}\right)^2\cdot\dfrac1{7^2\cdot\left(\dfrac{\sin7x}{7x}\right)^2}$$
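For completeness (editor's note): since $\frac{\sin t}{t}\to 1$ as $t\to 0$, the hinted factorization evaluates to
$$\lim_{x\to 0} \frac{1-\cos 4x}{\sin^2 7x}=2\cdot 2^2\cdot 1^2\cdot\frac{1}{7^2\cdot 1^2}=\frac{8}{49}.$$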
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Nat Deduction Proof - Distributive property I need to prove (P v Q) ^ (P v R) |- P v (Q ^ R) using natural deduction and propositional logic. I should be able to do it using only AND and OR rules, but I am stuck on how to assume Q and R. This is what I have:
(P v Q) ^ (P v R)________premise
P v Q________________^e1___1
P v R________________^e2___1
...P__________________assumption
...P v (Q ^ R)___________vi1___4
I know this is right so far. I just don't know how to introduce my assumptions for Q and R. Do I assume ~P? If so, which rule is applied, and what is the process, to conclude Q ^ R? Thanks for any help you can give me
|
Assume $\rm \neg P$, then from 2 and 3 derive $\rm Q$ and $\rm R$. From that you have $\rm Q\wedge R$, and so on...
The rule, $\rm \neg P,\, P\vee Q \,\vdash\, Q$ , is called Disjunctive Syllogism. It is sometimes also known as disjunctive elimination ($\rm\vee e$), or historically as modus tollendo ponens.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
existence of a linear map Let $V$ and $W$ be finite dimensional spaces. Given a positive integer $m$ and vectors $v_1, .., v_m \in V$ and $w_1, .., w_m \in W$. We assume that for every linear combination $\sum_{i=1}^m a_i v_i = 0$, we have $\sum_{i=1}^m a_i w_i = 0$. I want to show that there exists a linear map $T : V \to W$ satisfying $T v_j = w_j$.
I say the following : Denote $V' = span(v_1,...,v_m)$ and $W' = span(w_1,...,w_m)$. V' has a basis $\{e_1, ..., e_ p \}$ and W' has a basis $\{f_1, ..., f_ q \}$ with $p,q \leq m$.
There exist some numbers $\alpha_{ij}$ and $\beta_{ik}$ such that
$v_i = \sum_{j=1}^p \alpha_{ij} e_j$ and $w_i = \sum_{k=1}^q \beta_{ik} f_k$.
Now I want to use the hypothesis, to justify that we can find $T$
such that
$$
\sum_{j=1}^p \alpha_{ij} Te_j = \sum_{k=1}^q \beta_{ik} f_k
$$
which boils down into finding some numbers $T_{jk}$ such that
$$
\sum_{j=1}^p \alpha_{ij} \left (\sum_{k=1}^q T_{jk} f_k \right ) = \sum_{k=1}^q \beta_{ik} f_k.
$$
How the hypothesis helps to justify that the numbers $T_{jk}$ exist?
Thanks.
|
It is simpler if you take the basis $e_1,\ldots,e_p$ to be a subset of $v_1,\ldots,v_m$. Without loss of generality, you can take $e_i=v_i,\quad i=1,\ldots, p.$ Let $T_0:V'\to W$ be the unique linear transformation satisfying $T_0(e_i)=w_i,\quad i=1,\ldots,p$, and extend it to a linear map $T:V\to W$ (for instance, let $T$ vanish on a complement of $V'$). It follows from the hypothesis that $T$ is the required transformation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Smallest n such that there's a polynomial in $\Bbb Z_n [X]$ of degree 4 that has 8 roots in $\Bbb Z_n$
I'm looking for the smallest positive integer $n$ such that there's a quartic polynomial in $\Bbb Z_n [X]$ that has 8 distinct roots in $\Bbb Z_n$.
I have n equal to 15 with roots 1, 2, 4, 7, 8, 11, 13, 14 and equation $x^4-1=0$ but I'm unsure if you can go lower (is it possible to have n equal to 8?)
Also, my math experience barely touches the surface of number theory and abstract algebra, so I'm not necessarily looking for a full proof, but I would like to understand why my claim of the minimality of $n$ is true (or why yours, for an $n$ less than 15, is), which I currently do not.
|
Every number raised to the fifth power ends in the same digit as the number itself, so $X^5-X$ has 10 roots in $\Bbb Z_{10}$. From this kind of collapsing one can in fact build a quartic with 8 roots in $\Bbb Z_{10}$, for example $x^4+5x+4$ (obtained via CRT from $x^4-1$ over $\Bbb Z_5$ and $x^4+x$ over $\Bbb Z_2$), so $15$ is not the answer.
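A quick check of the quartic mentioned above (an editor-added sketch in Python; $x^4+5x+4$ is just one CRT candidate):
```python
n = 10
roots = [x for x in range(n) if (x**4 + 5 * x + 4) % n == 0]
print(roots, len(roots))  # [1, 2, 3, 4, 6, 7, 8, 9] 8  -> a quartic with 8 roots in Z_10
```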
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Integration of Legendre Polynomial I need to evaluate the following integral
\begin{equation} \int_{-1}^1 \frac{d^4P_l(x)}{dx^4}P_n(x)dx\end{equation}. Of course the answer I need is in terms of $l$ and $n$. Does anyone have any idea how to proceed?
|
The facts below should allow you to compute the integral in question in terms of $l$ and $n$:
* The Legendre polynomials $P_0, \dots, P_n$ form a basis for the space of polynomials of degree at most $n$.
* The Legendre polynomials are orthogonal:
$$
\int_{-1}^{1} P_m(x) P_n(x)\,dx = {2 \over {2n + 1}} \delta_{mn}
$$
* $\dfrac{d^4P_l(x)}{dx^4}$ is a polynomial of degree $l-4$ if $l\ge 4$ or the zero polynomial otherwise.
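As a concrete illustration of these three facts, here is a short symbolic check (an editor-added sketch, assuming SymPy is available; the helper name is arbitrary):
```python
import sympy as sp

x = sp.symbols("x")

def I4(l, n):
    """Integral over [-1, 1] of the 4th derivative of P_l times P_n."""
    return sp.integrate(sp.diff(sp.legendre(l, x), x, 4) * sp.legendre(n, x), (x, -1, 1))

print(I4(6, 2))  # 1386: nonzero, since 2 <= 6 - 4 and 6 - 2 is even
print(I4(6, 3))  # 0: P_3 is orthogonal to every polynomial of degree <= 2
print(I4(3, 0))  # 0: the 4th derivative of P_3 is the zero polynomial
```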
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
A paradox between intuition and equations regarding the surface of revolution? On the surface of revolution $$\sigma(u,v)=(f(u)\cos v,f(u)\sin v, g(u))$$ by the geodesic equations (i.e. $(1)\ \ddot{u}=f(u) \dfrac{df}{du} \dot{v}^2$ and $(2)\ \dfrac{d}{dt}(f(u)^2\dot{v})=0 $) we can show that every meridian (i.e. $v=$ constant) is a geodesic, and that a parallel $u=$ constant is a geodesic only in the case $df/du=0$. If $u=u_0$ and $df/du\ne 0$, then according to Eq. $(1)$ the curve is not a geodesic, but considering the geometry it seems it must be, because the shortest path between two points both lying on the circle $u=u_0$ appears to be an arc of that very circle. Is this a contradiction?
Thanks a lot.
PS - Equations and Pic. are taken from Section 8.3. of Elementary Differential Geometry by Pressley.
|
To give you an example, consider the sphere, which is a surface of revolution given by
$$(\sin u \cos v, \sin u \sin v, \cos u),$$
that is, $f(u) = \sin u$ and $g(u) = \cos u$, $u\in (0, \pi)$. Note that $f'(u) = \cos u$ is $0$ if and only if $u = \pi/2$, which corresponds to the great circle. Note that if $u_0 \neq \pi/2$, the curve given by $u = u_0$ is not a great circle.
Note that geodesics are curves that correspond to critical points of the length functional. We can check that the curve $u= u_0$ cannot be a critical point of the length functional if $f'(u_0) \neq 0$. To see this, let
$$\gamma_{u_0} (v) = (f( u_0) \cos v, f( u_0) \sin v, g (u_0))$$
be a parametrization of the curve $u = u_0$. Then the length $L(\gamma_{u_0})$ of this curve is given by $L(\gamma_{u_0}) = 2\pi f(u_0)$. So if we vary this $u_0$, then
$$\frac{d}{du} L(\gamma_u) = 2\pi f'(u),$$
thus there is a family of curves in the surface so that
$$\frac{d}{du} L(\gamma_u)\bigg|_{u=u_0} = 2\pi f'(u_0) \neq 0.$$
Thus the curve $\gamma_{u_0}$ cannot be a geodesic.
Geometrically, you might think of wrapping a rubber band on this surface of revolution (to represent the curve $u = u_0$). The rubber band would be stable only when $f'(u_0) =0$; if $f'(u_0)\neq 0 $, the rubber band would tend to move to reduce its length.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find an increasing sequence of rationals that converges to $\pi$ I am not sure how to construct a sequence that would convey convergence to $\pi$.
Except maybe $a_n=\{\pi + 1/n\}$ but the terms would not be rational.
I am looking for an adequate sequence and a way to show it satisfies the three conditions (rational terms, increasing, converging to $\pi$).
|
The sequence in graydad's answer can be written in terms of the floor function as
$$n \mapsto 10^{-n} \lfloor 10^n \pi \rfloor.$$
This suggests some cute generalizations. Instead of the decimal expansion for $\pi$, we can use the expansion in any base $q$:
$$n \mapsto q^{-n} \lfloor q^n \pi \rfloor$$
converges to $\pi$ for any whole number $q \ge 2$. In fact,
$$n \mapsto a_n^{-1} \lfloor a_n \pi \rfloor$$
converges to $\pi$ for any increasing sequence $n \mapsto a_n$ of natural numbers!
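A few terms of these sequences, for the curious (an editor-added sketch in Python; floating-point $\pi$ is accurate enough for these small values of $n$):
```python
from math import floor, pi

# decimal truncations
print([floor(10**n * pi) / 10**n for n in range(6)])
# [3.0, 3.1, 3.14, 3.141, 3.1415, 3.14159]

# the same idea with a_n = 2**n (binary truncations)
print([floor(2**n * pi) / 2**n for n in range(1, 7)])
# [3.0, 3.0, 3.125, 3.125, 3.125, 3.140625]
```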
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1469915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Show that $C_3 \times C_3$ is not isomorphic to $C_9$ I am trying to show that $C_3 \times C_3$ is not isomorphic to $C_9$. I am new to group-theory so forgive my foolish intuitions. One natural thought I had was to show that they are of differing cardinality but intuitively, why can't we map the elements $c_1,c_2,c_3 \in C_3$ to $C_9$ like this:
$f: C_3 \times C_3 \rightarrow C_9$ defined by $f(c_i, c_j) = c_{i+j} \in C_9$.
|
Each isomorphism preserves the orders of elements of the respective groups. But $C_9$ has an element of order $9$ whereas $C_3\times C_3$ does not. Hence they are not isomorphic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Existence of a discrete family of sets Given that in a metric space $(X,d)$, there exists a sequence $(x_n)$ which does not cluster in $X$. Then how to construct a discrete family of sets $V_n$ such that each $V_n$ is a closed neighbourhood of $x_n$? By a discrete family of sets, we mean a family such that every point of $X$ has a neighbourhood which intersects at most one element of the family.
What is the corresponding result for general topological space?
Thank you
|
For $n\in\Bbb N$ let $r_n=\frac13\inf\{d(x_n,x_k):k\ne n\}$; since $x_n$ is not a cluster point of the sequence, $r_n>0$. Now for $n\in\Bbb N$ let $V_n$ be the closed ball of radius $\frac{r_n}{2^n}$ centred at $x_n$. I’ll leave it to you to check the details; the choice of $r_n$ ensures that the sets $V_n$ are pairwise disjoint, and the factor of $2^{-n}$ ensures that any limit point of $\bigcup_{n\in\Bbb N}V_n$ is a cluster point of the sequence.
In general topological spaces it may not be possible to find such nbhds. Let
$$\begin{align*}
L&=\left\{\left\langle\frac1n,0\right\rangle:n\in\Bbb Z^+\right\}\;,\\
Y&=\left\{\left\langle\frac1n,\frac1m\right\rangle:m,n\in\Bbb Z^+\right\}\;,\text{ and}\\
p&=\langle 0,0\rangle\;,
\end{align*}$$
and let $X=\{p\}\cup L\cup Y$. Let $\tau_e$ be the Euclidean topology on $X$, and let $\tau$ be the topology generated by the base $\tau_e\cup\{U\setminus L:U\in\tau_e\}$. You can easily check that this is the usual topology on $L\cup Y$, and that a set $U\subseteq X$ is a nbhd of $p$ if and only if $p\in U$, and $U$ contains all but finitely many points of $\left\{\frac1n\right\}\times\left\{\frac1m:m\in\Bbb Z^+\right\}$ for all but finitely many $n\in\Bbb Z^+$.
Now let $x_n=\left\langle\frac1n,0\right\rangle$ for $n\in\Bbb Z^+$. The sequence $\langle x_n:n\in\Bbb Z^+\rangle$ has no cluster point in $X$, but if $V_n$ is a closed nbhd of $x_n$ for each $n\in\Bbb Z^+$, every open nbhd of $p$ intersects infinitely many of the sets $V_n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Euler number zero for odd dimensional compact manifolds I need to prove that
every compact manifold of odd dimension has Euler number zero.
The Euler number of $M$ compact and oriented is
$$
e(M):=\int_Ms_0^*\phi(TM)
$$
where $s_0$ is the zero section of $TM$ and $\phi(TM)$ is its Thom class.
We also proved that
$$
e(M)=\sum_q (-1)^qh_{DR}^q(M)
$$
where $h_{DR}^q(M):=dim H_{DR}^q(M)$ and $H_{DR}^q(M)$ is the $q-th$ cohomology ring of $M$.
|
Let's suppose $M$ is orientable, and let $\dim(M)=2n+1$.
By Poincaré duality, we get
$$
H^q(M)\cong (H^{2n+1-q}(M))^*
$$
for every $q$.
Since every compact manifold is of finite type (hence its cohomology rings are finite dimensional) and since every finite dimensional space is isomorphic to its dual, we get
$$
h^q(M)=h^{2n+1-q}(M)
$$
for every $q$.
Let's apply the formula
$$
e(M)=\sum_q(-1)^qh^q(M)
$$
We see that the terms $(-1)^qh^q(M)$ and $(-1)^{2n+1-q}h^{2n+1-q}(M)$ cancel in pairs, so the alternating sum is zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is the function $y=xe^x$ invertible? I'm wondering if the equation $re^r=se^s$ has any solutions. If there is a solution with $r=-1+v$, $s=-1-v$, in which $v$ is a positive real number, what can we say about $v$?
Thank you in advance.
|
Let $r=-1+v$ and $s=-1-v$. You are looking for zeros of the function:
$$
f(v) = (-1+v)e^{-1+v} - (-1-v)e^{-1-v}.
$$
Observe that one solution is $v=0$ which corresponds to $r=s=-1$.
Compute the derivative
$$
f'(v) = ve^{-1-v} (-1+e^{2 v})
$$
and notice that it is never negative and it is zero only for $v=0$. Hence $f$ is strictly increasing. This means $f$ is injective and there cannot be other solutions. So you can say that $v=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Rank of a matrix of binomial coefficients This question arose as a side computation on error correcting codes.
Let $k$, $r$ be positive integers such that $2k-1 \leqslant r$ and let $p$ a prime number such that $r < p$. I would like to find the rank of the following $k \times k$ matrix with coefficients in the field $F_p$
$$
\begin{pmatrix}
\binom {r}{k} & \binom {r}{k+1} & \dotsm & \binom {r}{2k-1}\\
\vdots & \\
\binom {r}{2} & \binom {r}{3} & \dotsm & \binom {r}{k+1} \\
\binom {r}{1} & \binom {r}{2} & \dotsm & \binom {r}{k}
\end{pmatrix}
$$
where all binomial coefficients are taken modulo $p$. I conjecture the rank should be $k$ but I have no formal proof. I am aware of Lucas's theorem, but it didn't help so far.
|
Write $A(r,k)$ for the matrix you describe. Then
$$
\det A(r,k) =\prod_{i,j=1}^k \frac{r-k-1+i+j}{i+j-1}.
$$
This follows from equation (2.17) in Krattenthaler's Advanced determinant calculus (with $a=n=k$, $b=r-k$). For $p\geq r+k$ every factor in the product has numerator coprime to $p$, so the reduction of $A(r,k)$ has full rank.
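A small symbolic check of the identity, for the matrix laid out exactly as in the question (an editor-added sketch, assuming SymPy; `i`, `j` below are 0-based):
```python
import sympy as sp

def check(r, k):
    # row i from the top, column j: binomial(r, k - i + j), matching the layout above
    A = sp.Matrix(k, k, lambda i, j: sp.binomial(r, k - i + j))
    rhs = sp.prod(sp.Rational(r - k - 1 + i + j, i + j - 1)
                  for i in range(1, k + 1) for j in range(1, k + 1))
    return sp.simplify(A.det() - rhs) == 0

print(check(3, 2), check(4, 2), check(5, 3))  # True True True
```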
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Existence of linear functionals
Let $f$ be any skew-symmetric bilinear form on $\Bbb R^3$. Prove that there exist linear functionals $L$ and $M$ on $\Bbb R^3$ such that $$f(a,b)=L(a)M(b)-L(b)M(a)$$
I'm having a similar problem in showing existence of linear functionals in other questions as well. How do I proceed in such questions?
|
Using the standard basis $e_1, e_2, e_3$ of $\mathbb R^3$, $f$ is represented by the skew-symmetric matrix
$$ \pmatrix{0 & a & b\cr -a & 0 & c\cr -b & -c & 0\cr}$$
where $a = f(e_1, e_2)$, $b = f(e_1, e_3)$, $c = f(e_2,e_3)$.
The case $a=b=c=0$ is easy, so suppose at least one (wlog $a$) is nonzero.
Then one solution is
$$ \matrix{L(e_1) = 0, & L(e_2) = -a, & L(e_3) = -b\cr
M(e_1) = 1, & M(e_2) = 0, & M(e_3) = -c/a\cr }$$
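A quick numerical check that these values of $L$ and $M$ reproduce $f$ (an editor-added sketch; the sample values of $a, b, c$ are arbitrary, with $a \ne 0$):
```python
import numpy as np

a, b, c = 2.0, -1.0, 3.0                            # arbitrary values with a != 0
F = np.array([[0, a, b], [-a, 0, c], [-b, -c, 0]])  # matrix of f in the standard basis
L = np.array([0.0, -a, -b])                         # L(e_1), L(e_2), L(e_3)
M = np.array([1.0, 0.0, -c / a])                    # M(e_1), M(e_2), M(e_3)
G = np.outer(L, M) - np.outer(M, L)                 # entries L(e_i)M(e_j) - L(e_j)M(e_i)
print(np.allclose(F, G))                            # True
```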
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Height, Time of the Ball The height $h$ in feet of a baseball hit $3$ feet above the ground is given by $h=3+75t-16t²$ where $t$ is the time in seconds.
a. Find the height of the ball after two seconds
b. Find the time where the ball hits the ground in the field
c. What is the maximum height of the ball?
|
Hints:
* a) replace $t$ by $2$ and evaluate $h$.
* b) replace $h$ by $0$ and find the value of $t$.
* c) find the vertex of the parabola.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Definition of derivative of real function in baby Rudin Let $f$ be defined (and real-valued) on $[a,b]$. For any $x\in [a,b]$ form a quotient $$\phi(t)=\dfrac{f(t)-f(x)}{t-x} \quad (a<t<b, t\neq x),$$ and define $$f'(x)=\lim \limits_{t\to x}\phi(t),$$ provided this limit exists in accordance with Definition 4.1.
I have one question. Why Rudin considers $t\in (a,b)$? What would be if $t\in [a,b]?$
|
In short, this is an example of Rudin's sublime succinctness. His definition includes the usual two-sided definition when $x \in (a,b)$ and also the one-sided definition when $x = a$ or $x = b$. To get the one-sided definition, notice that he requires $t \in (a,b)$.
Rudin was perhaps over-succinct in that he failed to mention that when $x = a$ or $x = b$, you have to interpret the limit as being one-sided. (Does Thm 4.1 include this?)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Matrix exponential, computation. What is $e^A$, where$$A = \begin{pmatrix} 0 & 1 \\ 4 & 0 \end{pmatrix}?$$
|
You first have to find a basis in which the matrix has the form $B= D+N$, where $D$ is a diagonal matrix, $N$ is nilpotent and $DN=ND$. Then, since $D$ and $N$ commute,
$$\mathrm e^{B}=\mathrm e^{D}\mathrm e^{N}$$
The exponential of a diagonal matrix $D$ is the diagonal matrix with the exponentials of the diagonal elements of $D$ on the diagonal. The exponential of a nilpotent matrix can be computed with the series expansion of the exponential, which is really a finite sum for a nilpotent matrix.
Finally, let $P$ the change of basis matrix from the canonical basis to the new basis. We have $A=PBP^{-1}$, whence
$$\mathrm e^{A}=P\mathrm e^{B}P^{-1}.$$
Here $A$ is diagonalisable: one finds the eigenvalues are $\pm 2$ and a basis of eigenvectors is $\;\bigl\{(1,2),(1,-2)\bigr\}$, hence
$$P=\begin{bmatrix}1&1\\2&-2\end{bmatrix},\quad P^{-1}=\frac14\begin{bmatrix}2&1\\2&-1\end{bmatrix}$$
and finally
$$\mathrm e^{A}=\begin{bmatrix}1&1\\2&-2\end{bmatrix}\begin{bmatrix}\mathrm e^2&0\\0&\mathrm e^{-2}\end{bmatrix}\cdot\frac14\begin{bmatrix}2&1\\2&-1\end{bmatrix} =\begin{bmatrix}\cosh 2&\frac12\sinh 2\\2\sinh2&\cosh2\end{bmatrix}.$$
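A numerical cross-check of the final formula (an editor-added sketch, assuming SciPy is available):
```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [4.0, 0.0]])
closed_form = np.array([[np.cosh(2.0), 0.5 * np.sinh(2.0)],
                        [2.0 * np.sinh(2.0), np.cosh(2.0)]])
print(np.allclose(expm(A), closed_form))  # True
```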
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1470908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What kind of graph is this? A pie chart? Besides a gross use of Comic Sans, what kind of graph is this? I've seen it used for population statistics before, but I don't know what it's called.
My first instinct was to just say "bullet chart" due to process of elimination. After researching what a bullet chart is, which includes thresholds and targets, I'm now leaning toward some kind of unorthodox pie chart since it's dealing solely of percentage makeup of the whole.
I'm still not convinced that this is simply called a "pie chart," but it's all I got.
|
e) None of the previous ones.
The graph is a scatter plot of filled squares, also known as a fragmented partition chart.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Show that $\{\frac{n^2}{n^2+n}\}$ is convergent
Use only the Archimedean Property of $\mathbb{R}$ to show $\{\frac{n^2}{n^2+n}\}$ is convergent
Let $a_n=\{\frac{n^2}{n^2+n}\}=\{\frac{n}{n+1}\}=\{1-\frac{1}{n+1}\}=\{1\}-\{\frac{1}{n+1}\}$.
$\{1\}$ is convergent to $1$. Show $\{\frac{1}{n+1}\}$ is convergent. So let $\epsilon>0$. By the Archimedean Property, there exists an index $N$ such that $1/N<\epsilon$. So for all $n\geq N$, we have $1/n\leq 1/N$. Then we have $$\frac{1}{n+1}\leq\frac{1}{n}\leq\frac{1}{N}<\epsilon$$
Then we have $\vert1/(n+1)-0\vert=1/(n+1)<\epsilon$, which shows that $1/(n+1)$ converges to $0$; thus $a_n$ converges to $1$.
I am not sure that it is valid to do $\{1-\frac{1}{n+1}\}=\{1\}-\{\frac{1}{n+1}\}$, even though it looks right to me. If that is not valid, can anyone give me a hint or suggestion to show the sequence is convergent? Thanks
|
You need to know that the sum of two convergent sequences is convergent for your argument to be 100% okay. This is probably "not known" now (if it is, your proof is fine).
You are on the right track, your proof only needs to show that $\left|(1-\frac{1}{n+1})-1\right|<\epsilon$ which you basically did!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
prove the sequence is increasing I'm asked to show the sequence $a_{n+1}=\sqrt{3+2a_{n}}$, where $a_{1}=0$, is increasing and bounded and therefore convergent.
I don't even know how to start the proof. I'm sure it increases because it's essentially adding a really small number each time, and that number gets very small since it's being square-rooted a bunch of times, but I'm having a real issue showing that it actually is increasing for all $n$.
$$
a_{2}=\sqrt{3} \\
a_{3}=\sqrt{3+2\sqrt{3}} \\
a_{4}=\sqrt{3+2\sqrt{3+2\sqrt{3}}}
$$
Everything I try to do leads nowhere and makes no sense. I've seen some proofs that go about it by squaring both sides and then solving the resulting quadratic equation assuming the sequence has a limit, but I'm not sure how that shows that it is increasing. If anyone has any tips that might push me in the right direction, it would be greatly appreciated.
|
Note that $a_n \ge 0$ and
$$a_{n+1}^2 - a_n^2 = 3 + 2a_n - 3 -2a_{n-1} = 2(a_n - a_{n-1})$$
thus one can argue by induction that $a_n$ is increasing.
Although you didn't ask, you can also prove by induction that $a_n \le 3$.
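Numerically, the monotone increase and the bound are easy to see (an editor-added sketch):
```python
a = 0.0  # a_1
for n in range(2, 12):
    a = (3 + 2 * a) ** 0.5
    print(n, round(a, 6))
# the terms increase toward 3, the positive root of a^2 = 3 + 2a
```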
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Demonstrate $f(a,b)=2^a (2b+1)-1$ is surjective using induction I am trying to show that $f:\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N}$, where $f(a,b)=2^a (2b+1)-1$, is surjective using induction (possibly strong induction). In the case $n=0$, it is easy to see that $(0,0)$ does the trick.
Suppose there exist elements of $\mathbb{N}\times\mathbb{N}$ which map to each $n\in\{0,1,\dots,k\}$, consider the case $k+1$.
All I have is:
$k+1=f(x,y)+1$ for some $(x,y)\in\mathbb{N}\times\mathbb{N}$, but I do not know how to continue. Any help is appreciated, thank you.
|
It is more convenient to show that every positive integer $n$ can be represented in the form $2^a(2b+1)$, where $a$ and $b$ are non-negative integers. We use strong induction. The result is clearly true for $n=1$. Let us suppose it is true for all $k\lt n$, and show it is true for $n$.
Perhaps $n$ is odd, say $2b+1$. Then we can take $a=0$.
Perhaps $n$ is even, say $n=2m$. By the induction hypothesis, there exist natural numbers $a_1$ and $b$ such that $m=2^{a_1}(2b+1)$. Then $n=2^a(2b+1)$, where $a=a_1+1$.
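The induction mirrors the usual procedure of splitting off the factors of $2$; a small check over the first few values of $n$ (an editor-added sketch):
```python
def preimage(n):
    """Return (a, b) with 2**a * (2*b + 1) - 1 == n."""
    m, a = n + 1, 0
    while m % 2 == 0:
        m //= 2
        a += 1
    return a, (m - 1) // 2

print(all(2**a * (2 * b + 1) - 1 == n
          for n in range(10000) for a, b in [preimage(n)]))  # True
```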
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Implicit Differentiation of Two Functions in Four Variables Can someone please help with this homework problem: Given the equations
\begin{align}
x^2 - y^2 - u^3 + v^2 + 4 &= 0 \\
2xy + y^2 - 2u^2 + 3v^4 + 8 &= 0
\end{align}
find $\frac{\partial u}{\partial x}$ at $(x,y) = (2,-1)$.
In case it may be helpful, we know from part (a) of this question (this is actually part (b)) that these equations determine functions $u(x,y)$ and $v(x,y)$ near the point $(x,y,u,v) = (2, -1, 2, 1)$.
While I am taking a high-level multivariable calc class, I have not yet taken linear algebra or diffeq yet, so please refrain from using such techniques in your solutions.
Thanks a lot!
|
Consider $u(x,y), v(x,y)$ as functions and $x,y$ as variables. Take the $x$ derivative of both equations ($y$ is a constant). Use the chain rule to differentiate $u$ and $v$. Then set $x=2, y = -1$ and solve. You have 4 equations and 4 unknowns ($u, v, \frac{\partial u}{\partial x}, \frac{\partial v}{\partial x}$), so you should be able to do that.
If you have seen it, depending on the level of rigor of your course, you should use the implicit function theorem to show that you can solve for $u,v$ and that everything is well-defined.
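If a symbolic cross-check is wanted, the chain-rule computation can be carried out in SymPy (an editor-added sketch; the symbols `u_x`, `v_x` stand for the unknown partial derivatives):
```python
import sympy as sp

x, y, ux, vx = sp.symbols("x y u_x v_x")
u = sp.Function("u")(x, y)
v = sp.Function("v")(x, y)

F1 = x**2 - y**2 - u**3 + v**2 + 4
F2 = 2*x*y + y**2 - 2*u**2 + 3*v**4 + 8

# differentiate each equation with respect to x, then rename the partials
eqs = [sp.diff(F, x).subs({sp.Derivative(u, x): ux, sp.Derivative(v, x): vx})
       for F in (F1, F2)]
# plug in the point (x, y, u, v) = (2, -1, 2, 1) and solve the 2x2 linear system
eqs = [e.subs({u: 2, v: 1}).subs({x: 2, y: -1}) for e in eqs]
sol = sp.solve(eqs, [ux, vx])
print(sol[ux])  # 13/32
```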
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Compute $\lim_{n \to \infty} n(\frac{{\pi}^2}{6} -( \sum_{k=1}^{n} \frac{1}{k^2} ) ) $ Basel series
$$ \lim_{n\to \infty} \sum_{k=1}^{n} \frac{1}{k^2} = \frac{{\pi}^2}{6} $$
is well known. I'm interested in computing the limit value
$$
\lim_{n \to \infty} n\left(\frac{{\pi}^2}{6} - \sum_{k=1}^{n} \frac{1}{k^2} \right) $$
although I am not sure even whether this limit exists or not..
Does this limit exists? If exists, how can we compute this?
Thanks in advance.
|
By the Stolz rule, for $a_n =\frac{\pi^2}{6} - \sum\limits_{k=1}^n \frac{1}{k^2}$ and $b_n = \frac{1}{n}\downarrow 0$,
$$
\lim_{n\to\infty} \frac{a_n}{b_n} = \lim_{n\to\infty} \frac{a_{n-1}-a_n}{b_{n-1}-b_n} = \lim_{n\to\infty} \frac{\frac{1}{n^2}}{\frac{1}{n-1}-\frac{1}{n}} = 1.
$$
This also can be used to get further asymptotics, e.g.
$$
\lim_{n\to\infty} \frac{\frac{\pi^2}{6} - \sum\limits_{k=1}^n \frac{1}{k^2} - \frac{1}{n}}{\frac{1}{n^2}} = \lim_{n\to\infty} \frac{\frac{1}{n^2} - \frac{1}{n(n-1)}}{\frac{1}{(n-1)^2} - \frac{1}{n^2}} = -\frac{1}{2}
$$
et cetera.
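A quick numerical check of both limits (an editor-added sketch; `math.fsum` keeps the rounding error of the partial sum small enough):
```python
import math

n = 10**5
partial = math.fsum(1.0 / k**2 for k in range(1, n + 1))
tail = math.pi**2 / 6 - partial

print(n * tail)                  # ~ 0.999995  (first limit: 1)
print(n * n * (tail - 1.0 / n))  # ~ -0.49999  (second limit: -1/2)
```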
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
}
|
A combinatoric/probability puzzle A horse race has four horses A, B, C and D. The probabilities for each horse to finish first or second are P(A)=80%, P(B)=60%, P(C)=40% and P(D)=20%. The probabilities add up to 200% because one horse will finish first with 100% probability and one horse will finish second with 100% probability. What are the probabilities of all possible six combinations of results for first and second place, i.e.
* Horse A and horse B in 1st and 2nd place?
* Horse A and horse C in 1st and 2nd place?
* Horse A and horse D in 1st and 2nd place?
* Horse B and horse C in 1st and 2nd place?
* Horse B and horse D in 1st and 2nd place?
* Horse C and horse D in 1st and 2nd place?
I've got a little Monte Carlo simulation that tells me the answer, but surely there's a simple analytic formula?
|
Although there is no unique solution, I think useful answers can be derived by searching for the maximum entropy probability distribution :-)
https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
find the solution set of the following inequality with so many radicals I get lost
$\sqrt[4]{\frac{\sqrt{x^{2}-3x-4}}{\sqrt{21}-\sqrt{x^{2}-4}}}\geqslant x-5$
Edit
I get online with wolfram
-5 < x <= -2 || x == -1 || 4 <= x < 5
|
First let us check where the above formula is defined:
$ x^2-3x-4 \geq 0 \; \; \Leftrightarrow \; \; x\in ] -\infty , -1] \cup [4,+\infty[$.
$x^2-4 \geq 0 \; \; \Leftrightarrow \; \; x\in ] -\infty , -2] \cup [2,+\infty[ $.
$ \sqrt{21 } - \sqrt{ x^2-4} > 0 \; \; \Leftrightarrow \; \; x\in ]-5,-2[ \cup ]2,5[.$
Thus your set of solutions must be a subset of $]-5, -2] \cup [4,5[ $. Note that whenever $x \in ]-5, -2] \cup [4,5[ $ the inequality holds true, as $x-5 < 0 $ and the other side of the inequality is surely non-negative.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Required number of simulation runs I have the following problem:
One wants to estimate the expectation of a random variable X. A set of 16 data values (i.e. simulation outputs) is given, and one should determine roughly how many additional values should be generated for the standard deviation of the estimate to be less than 0.1.
If $k$ is the total number of values required, I think one should solve $S_k/\sqrt{k} < 0.1$ for $k$ where $S_k$ is the sample stadard deviation based on all values.
The problem is that only 16 values are given, and therefore it seems not so reasonable to use the sample standard deviation computed from them as an approximation for $S_k$. How should one proceed?
|
If $X$ is a member of the normal family and you are estimating $\mu$, then
$Q = 15S_{16}^2/\sigma^2 \sim Chisq(15).$
Thus, $$P(Q > L) = P(\sigma^2 < 15 S_{16}^2/L) = 0.95,$$
where $L \approx 7.26$ cuts 5% of the probability from the lower tail
of $Chisq(15).$
Then you have a pretty good (worst case) upper bound for $\sigma^2,$
and upon taking the square root, for $\sigma.$ Conservatively, you could use
that value instead of $S_{16}$ in your formula for $k$.
Similar strategies would work for other distributional families.
However, my guess is that you are just supposed to assume
$S_{16} \approx \sigma$ and forge ahead.
In practice, you can always do a reality check at the end of
the simulation by using $2S_k/\sqrt{k}$ as an approximate 95%
margin of simulation error for the estimate (where $S_k$
is the SD of the $k$ simulated values of the estimate). This
works as long as the estimator is asymptotically normal and
$k$ is reasonably large.
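For concreteness, the chi-square bound from the first paragraph can be computed as follows (an editor-added sketch, assuming SciPy; `s16` is a placeholder for the pilot-sample standard deviation):
```python
import numpy as np
from scipy.stats import chi2

s16 = 1.0                                 # sample SD of the 16 pilot values (placeholder)
L = chi2.ppf(0.05, df=15)                 # ~ 7.26, lower 5% point of Chisq(15)
sigma_upper = np.sqrt(15 * s16**2 / L)    # conservative upper bound for sigma
k = int(np.ceil((sigma_upper / 0.1)**2))  # runs needed so that sigma_upper / sqrt(k) < 0.1
print(L, sigma_upper, k)
```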
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1471959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Check if a point is inside a rectangular shaped area (3D)? I am having a hard time figuring out if a 3D point lies in a cuboid (like the one in the picture below). I found a lot of examples to check if a point lies inside a rectangle in a 2D space, for example this one, but none for 3D space.
I have a cuboid in 3D space. This cuboid can be of any size and can have any rotation. I can calculate the vertices $P_1$ to $P_8$ of the cuboid.
Can anyone point me in a direction on how to determine if a point lies inside the cuboid?
|
The three important directions are $u=P_1-P_2$, $v=P_1-P_4$ and $w=P_1-P_5$. They are three perpendicular edges of the rectangular box.
A point $x$ lies within the box when the three following constraints are respected:
*
*The dot product $u.x$ is between $u.P_1$ and $u.P_2$
*The dot product $v.x$ is between $v.P_1$ and $v.P_4$
*The dot product $w.x$ is between $w.P_1$ and $w.P_5$
EDIT:
If the edges are not perpendicular, you need vectors that are perpendicular to the faces of the box. Using the cross-product, you can obtain them easily:
$$u=(P_1-P_4)\times(P_1-P_5)\\
v=(P_1-P_2)\times(P_1-P_5)\\
w=(P_1-P_2)\times(P_1-P_4)$$
then check the dot-products as before.
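Here is a compact implementation of the dot-product test (an editor-added sketch in Python; the vertex labels follow the answer, with $P_2$, $P_4$, $P_5$ adjacent to $P_1$):
```python
import numpy as np

def point_in_box(x, p1, p2, p4, p5):
    """True if x lies in the rectangular box with vertex p1 and adjacent vertices p2, p4, p5."""
    x, p1, p2, p4, p5 = (np.asarray(v, float) for v in (x, p1, p2, p4, p5))
    for q in (p2, p4, p5):
        d = q - p1                        # edge direction p1 -> q
        lo, hi = sorted((d @ p1, d @ q))  # dot products on the two opposite faces
        if not (lo <= d @ x <= hi):
            return False
    return True

# unit cube with p1 at the origin
print(point_in_box([0.5, 0.5, 0.5], [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))  # True
print(point_in_box([1.5, 0.0, 0.0], [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))  # False
```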
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
}
|
Can you choose -1 as the multiplicative unit? And what is a positive number? If one starts with the cyclic group of integers and wants to introduce multiplication, the ordinary choice of multiplicative identity is the generator 1. But since 1 and its inverse -1 are sort of the same in the cyclic group, in that there is an automorphism that exchanges them, it should be possible to use -1 as the multiplicative unit?
How can "a positive number" be defined? In the ordinary setting starting with the integers as a cyclic (additive) group it seems that once you choose the multiplicative unit there is a split in positive and negative numbers.
In what sort of structures does it make sense? For all Z/nZ it seems possible to do with one sort of representative, so for practical purposes we could do arithmetic on the whole number in Z/nZ with a sufficiently large n.
|
* There is an automorphism of $\mathbb{Z}$ the abelian group that exchanges $1$ and $-1$, but it doesn't respect multiplication: that is, it isn't an automorphism of $\mathbb{Z}$ the ring, which has no nontrivial automorphisms.
* Positivity comes from an order, which in general neither exists nor is unique. $\mathbb{Z}/n\mathbb{Z}$ does not have an order (compatible with its group structure).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the name for the mathematical property involving addition or subtraction of fractions over a common denominator? For any number $x$ where $x\in\Bbb R$ and where $x\ne0$, what is the mathematical property which states that:
$${1-x^2\over x} = {1\over x} - {x^2\over x}$$
|
Just the computation rules for fractions:
$$
\frac{a - b}{c} = \frac{a + (-b)}{c} \\
\frac{a + b}{c} = \frac{a}{c} + \frac{b}{c} \\
\frac{(-b)}{c} = -\frac{b}{c}
$$
for $a = 1$, $b = x^2$ and $c = x \ne 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why do we use square in measuring a qubit with probability? A superposition is as follow:
$$ \vert\psi\rangle = \alpha\vert 0\rangle + \beta\vert 1\rangle. $$
When we measure a qubit we get either the result 0, with probability $\vert \alpha\vert^{2},$ or the result 1, with probability $\vert \beta\vert^{2}.$
Why does the $\alpha$ have the power $2$ ?
|
Depending on how you are setting up your quantum states, it is usually taken as an axiom of the system that if a state $|\psi\rangle$ is a linear superposition of eigenstates $\{ |e_n\rangle \}$ of some observable where
$$|\psi\rangle = \sum_n \alpha_n |e_n\rangle \ , \ \alpha_n \in \mathbb C \text{ and the coefficients are normalized: } \sum_n |\alpha_n|^2 = 1$$
then when we make a measurement with respect to that observable, the state is observed in state $|e_i\rangle$ with probability $|\alpha_i|^2$. Whatever else the probability is, it cannot be $\alpha_i$, as that is complex. $|\alpha_i|$ is a possible choice. But it will turn out that the mathematics and physics of the model are much more fruitful if we say the probability is $|\alpha_i|^2$.
In other words, it's a model which has made an arbitrary choice about how to interpret it and use it. And it turns out it's a very successful model.
(To be convinced of that last point, keep on using the model in your course or elsewhere!)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Logic - Quantifiers Why is this proposition always true?
$$\forall x\,\forall y\,\exists z\big((x<z)\to (x\ge y)\big)$$
And where's the flaw in my thinking:
You can always choose a $z$ larger than $x$. So the problem can be reduced to:
For all $x$ and all $y$, it is such that $x\ge y$,
which obviously isn't true...
|
Why is this proposition always true?
$\forall x\,\forall y\,\exists z\big((x<z)\implies (x\ge y)\big)$
Suppose we have $x$, $y$ in $\mathbb{R}$
By the irreflexive property of $\lt$, we have $\neg x\lt x$
Introducing $\lor$, we have $\neg x\lt x \lor x \ge y$
Applying the definition of $\implies $, we have $\neg\neg x\lt x \implies x\ge y$
Removing the double negation, we have $x\lt x \implies x\ge y$
Introducing $\exists$, we have $\exists z\in \mathbb{R}:[x\lt z \implies x\ge y]$
Generalizing, we have $\forall x,y \in \mathbb{R}: \exists z\in \mathbb{R}:[x\lt z \implies x\ge y]$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Finding a Real Number using epsilon Fix a real number $x$ and $\epsilon>0$. If $|x-1|\le \epsilon$ show $|2-x|\ge 1- \epsilon$
I think we were supposed to use the triangle inequality to show this.
If we use the (reverse) triangle inequality then $||x|-|1||\le |x-1|\le \epsilon$, so then $x\le \epsilon+1$, so then $-x\ge-\epsilon-1$; adding 2 to each side we get $2-x\ge 1-\epsilon$.
does that make sense?
|
In general, if $a,b\in\mathbb C$ then $$\big||a|-|b|\big|\leqslant |a-b|. $$
For
$$|a| \leqslant |a-b| + |b|\implies |a|-|b|\leqslant|a-b| $$
and similarly
$$|b| \leqslant |a-b| + |a|\implies |b|-|a|\leqslant|a-b|.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Swapping limit at infinity with limit at 0 I am trying to calculate:
$$\lim_{x\rightarrow\infty}x^2\sin\left(\frac{1}{x^2}\right)
$$
I am pretty sure that this is equivalent to calculating:
$$\lim_{k\rightarrow0}\frac{\sin(k)}{k}=1
$$
Since $k=\dfrac{1}{x^2}$ and $\lim\limits_{x\rightarrow\infty}k=0$.
Is there any way I can make this formal?
|
Your way is perfect !
An other way :
Since $$\sin\frac{1}{x^2}=\frac{1}{x^2}+\frac{1}{x^2}\varepsilon(x)$$
where $\varepsilon(x)\to 0$ when $x\to\infty $, you get
$$\lim_{x\to\infty }x^2\sin\frac{1}{x^2}=\lim_{x\to \infty }\frac{\sin\frac{1}{x^2}}{\frac{1}{x^2}}=\lim_{x\to\infty }\frac{\frac{1}{x^2}+\frac{1}{x^2}\varepsilon(x)}{\frac{1}{x^2}}=\lim_{x\to \infty }(1+\varepsilon(x))=1.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Is $\frac{4x^2 - 1}{3x^2}$ an abundancy outlaw? Let $\sigma(x)$ denote the sum of the divisors of $x$. For example, $\sigma(6) = 1 + 2 + 3 + 6 = 12$.
We call the ratio $I(x) = \sigma(x)/x$ the abundancy index of $x$. A number $y$ which fails to be in the image of the map $I$ is said to be an abundancy outlaw.
My question is:
Is $$\frac{4x^2 - 1}{3x^2}$$ an abundancy outlaw?
In this paper, it is mentioned that if $r/s$ is an abundancy index with $\gcd(r,s) = 1$, then $r \geq \sigma(s)$.
Note that
$$\gcd(4x^2 - 1, 3x^2) = \gcd(x^2 - 1, 3x^2) = 1$$
if $3 \mid x$.
Hence, if $(4x^2 - 1)/3x^2$ is an abundancy index with $3 \mid x$, then $4x^2 - 1 \geq \sigma(3x^2)$.
Here is where I get stuck. (I would like to affirmatively answer my question via the contrapositive.)
Lastly, if $(4x^2 - 1)/3x^2$ were an abundancy index $I(y)$, then
$${3x^2}\sigma(y) = y(4x^2 - 1),$$
so that
$${3x^2}\left(\sigma(y) - y\right) = y(x^2 - 1).$$
Since $\gcd(x^2, x^2 - 1) = 1$, I am sure that $x^2 \mid y$ and $(x^2 - 1) \mid \left(3(\sigma(y) - y))\right)$. So I can write
$$y = kx^2$$
and
$$3(\sigma(y) - y) = m(x^2 - 1).$$
I am not too sure though, how this can help in answering my question.
Update [October 10, 2015 - 1:05 PM]: Additionally, $I(y) = (4x^2 - 1)/3x^2 < 4/3$, so that $3(\sigma(y) - y) < y$.
|
A partial answer.
Let $I(y)=\frac{4x^2 - 1}{3x^2}$.
If $\gcd(4x^2 - 1, 3x^2) = 1$, then $3x^2$ divides $y$, so $3$ is a factor of $y$, contradicting the fact that $I(y)<\frac{4}{3}$ (since $3\mid y$ would force $I(y)\ge I(3)=\frac43$).
Therefore $\gcd(4x^2 - 1, 3x^2) = 3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Where am I going wrong in solving this exponential inequality? $$(3-x)^{ 3+x }<(3-x)^{ 5x-6 }$$
Steps I took:
$$(3-x)^{ 3 }\cdot (3-x)^{ x }<(3-x)^{ -6 }\cdot (3-x)^{ 5x }$$
$$\frac { (3-x)^{ 3 }\cdot (3-x)^{ x } }{ (3-x)^{ 3 }\cdot (3-x)^{ x } } <\frac { \frac { 1 }{ (3-x)^{ 6 } } \cdot (3-x)^{ 5x } }{ (3-x)^{ 3 }\cdot (3-x)^{ x } } $$
$$1<\frac { 1 }{ (3-x)^{ 9 } } \cdot (3-x)^{ 4x }$$
$$(3-x)^{ 9 }<(3-x)^{ 4x }$$
$$4x>9$$
$$x>2.25$$
This answer seems to be wrong. I am not sure where I went wrong in the steps that I took. What did I do wrong?
|
Hint: use the fact that $a^x$ is decreasing if $0<a<1$ and increasing if $a>1$.
Now consider first $2<x <3$. In that case the inequality is satisfied iff $2<x <9/4$. If $x>3$ there's no solution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Incorrect General Statement for Modulus Inequalities $$|x + 1| = x$$
This, quite evidently, has no solution.
Through solving many inequalities, I came to the conclusion that,
If,
$$|f(x)| = g(x)$$
Then,
$$f(x) = ±g(x)$$
And this was quite successful in solving many inequalities. However, applying the above to this particular inequality:
$$|x + 1| = x$$
$$x + 1 = ±x$$
When ± is +, there is no solution. However, when ± is -:
$$x + 1 = -x$$
$$2x = -1$$
$$x = -\frac{1}{2}$$
Which does not make sense. What is wrong with the supposedly general statement that seemed to work flawlessly?
|
The definition of the absolute value is
$$
\lvert x \rvert =
\begin{cases}
x & \text{if $x\geqslant 0$} \\
-x & \text{if $x < 0$},
\end{cases}
$$
so
$$
\lvert f(x) \rvert = g(x)
\iff
\begin{cases}
f(x) = g(x) \\
f(x) \geqslant 0
\end{cases}
\qquad\text{or}\qquad
\begin{cases}
-f(x) = g(x) \\
f(x) < 0.
\end{cases}
$$
In particular,
$$
\begin{align*}
\lvert x+1 \rvert = x
&\iff
\begin{cases}
x + 1 = x \\
x + 1 \geqslant 0
\end{cases}
\qquad\text{or}\qquad
\begin{cases}
- (x + 1) = x \\
x + 1 < 0
\end{cases} \\
&\iff
\begin{cases}
1 = 0 \\
x \geqslant -1
\end{cases}
\qquad\text{or}\qquad
\begin{cases}
x = -1/2 \\
x < - 1.
\end{cases}
\end{align*}
$$
As $1 = 0$ and $-1/2 < -1$ are not true, we conclude that the equation $\lvert x+1 \rvert = x$ has no real solutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1472999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Intuitive proof of the formula ${}_nC_r + {}_nC_{r-1} = {}_{n+1}C_r$ I came across this formula in combination— ${}_nC_r + {}_nC_{r-1} = {}_{n+1}C_r$. Even though I know its rigorous mathematical proof, I want a logical and elegant proof of this.
For example, the famous formula of combination ${}_nC_r = {}_nC_{n-r}$ says selecting $r$ objects out of $n$ objects is same as rejecting $(n-r)$ objects.
So, I am looking for such kind of intuitive proof of the formula ${}_nC_r + {}_nC_{r-1} = {}_{n+1}C_r$, which I am unable to get.
The thought of the wise man who said "writing a correct equation but not being able to interpret its result is the same as writing a grammatically correct sentence without knowing what it means!!!" is not helping me either!!
Note— ${}_nC_r$ means combination of $n$ objects taken $r$ at a time.
|
Assume the $n+1$ objects are balls, with only one of them being red and the rest white.
A particular selection of $r$ balls has to either contain the red ball or not.
Those selections omitting the red ball are $^{n}C_r$. Those selections containing the red ball have choice in which $r-1$ white balls have to be chosen, and this is given by $^{n}C_{r-1}$.
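The identity is also easy to spot-check numerically (an editor-added sketch):
```python
from math import comb

print(all(comb(n, r) + comb(n, r - 1) == comb(n + 1, r)
          for n in range(1, 30) for r in range(1, n + 1)))  # True
```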
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Solving another bessel equation by substitution For a problem solving class I need to find the general solution of ODE $x^2y''+(\frac{3}{16}+x)y=0$ in terms of $J_{\nu}$ and $J_{-\nu}$, if possible. $\nu$ represents the Bessel parameter.
A hint is given, namely that useful substitutions would be $y=2u \sqrt{x}$ and $\sqrt{x}=z$; this should lead to the ODE being reduced to a Bessel equation. Substituting this value and applying the chain rule leads to the ODE $z^2u''+zu'+(4z^2-(\frac{1}{4})^2)u=0$, which almost has the form of a Bessel equation with $\nu=\frac{1}{4}$, apart from the coefficient 4 in front of the $z^2$ in the $u$ coefficient.
What is the proper way of solving this ODE by reduction to a Bessel equation?
Can this be done?
|
The previous remark answers your question, keeping track of the expansions
\begin{gather*}
\cos z = J_0(z)-2J_2(z)+2J_4(z)-\cdots;\\
\sin z= 2J_1(z)-2J_3(z)+\cdots.
\end{gather*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Permutations and Combinations Heads/Tails Out of all of the potential sequences of 73 Heads/Tails games, each being Heads or Tails, how many sequences contain 37 tails and 36 heads?
Express the output in terms of factorials.
Because these are sequences of 73 games, my initial thought is that the answer would be $\frac{73!}{36!\cdot 37!}$.
Would this be correct?
|
Yes that is correct.
You can basically assume that you have 73 boxes and you have to fill 36 boxes with heads and 37 boxes with tails. Thus you can fill heads first and then fill the remaining ones with tails. Thus the answer is $\dbinom{73}{36}$.
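For the record, the number itself is easy to evaluate (an editor-added sketch):
```python
from math import comb, factorial

assert comb(73, 36) == factorial(73) // (factorial(36) * factorial(37))
print(comb(73, 36))  # 73! / (36! * 37!)
```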
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Show that $f(0)=f(-1)$ is a subspace Let $F(\Bbb R,\Bbb R)$ be the vector space of functions $f: \Bbb R\to \Bbb R$. Is the set of functions satisfying $f(0)=f(-1)$ a subspace?
If it's possible I would like some example of functions such that $f(0)=f(-1)$.
|
$$F(\mathbb{R},\mathbb{R})=\left\{\mathcal{f}:\mathbb{R}\rightarrow \mathbb{R} | \mathcal{f}(0) = \mathcal{f}(-1) \right\}$$
We have to show-
*
*It is not an empty set, i.e., the null element (the zero function) belongs to this set.
*If $\mathcal{f},\mathcal{g} \in F$, then $\left(\mathcal{f}+\mathcal{g}\right) \in F$.
*If $\mathcal{f} \in F$ and $\alpha \in \mathbb{R}$, then $\alpha \mathcal{f} \in F$.
Part-$1$ $\mathcal{f}(x)=0 \ \forall x \in \mathbb{R}$, is the null element. So, $\mathcal{f}(0)=\mathcal{f}(-1)=0$. So, $\mathcal{f} \in F$.
Part-$2$ $$\mathcal{f},\mathcal{g} \in F$$ $$\left(\mathcal{f}+\mathcal{g}\right) (x)=\mathcal{f}(x)+\mathcal{g}(x) $$
$$\left(\mathcal{f}+\mathcal{g}\right) (0)=\mathcal{f}(0)+\mathcal{g}(0)=\mathcal{f}(-1)+\mathcal{g}(-1)=\left(\mathcal{f}+\mathcal{g}\right) (-1)$$
So, $\left(\mathcal{f}+\mathcal{g}\right) \in F$.
Part-$3$ $$\mathcal{f} \in F \text{ and } \alpha \in \mathbb{R}$$
$$(\alpha \mathcal{f})(x)=\alpha \mathcal{f}(x)$$
$$(\alpha \mathcal{f})(0)=\alpha \mathcal{f}(0)=\alpha \mathcal{f}(-1)=(\alpha \mathcal{f})(-1)$$
So, $\alpha \mathcal{f}\in F$.
Ex: $\mathcal{f}(x)=\left|x+\frac{1}{2}\right|$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How discontinuous can the limit function be? While I was reading an article on Wikipedia which deals with pointwise convergence of a sequence of functions I asked myself how bad can the limit function be? When I say bad I mean how discontinuous it can be?
So I have these two questions:
1) Does there exist a sequence of continuous functions $f_n$ defined on the closed (or, if you like you can take open) interval $[a,b]$ (which has finite length) which converges pointwise to the limit function $f$ such that the limit function $f$ has infinite number of discontinuities?
2) Does there exist a sequence of continuous functions $f_n$ defined on the closed (or, if you like you can take open) interval $[a,b]$ (which has finite length) which converges pointwise to the limit function $f$ such that the limit function $f$ has infinite number of discontinuities and for every two points $c\in [a,b]$, $d\in [a,b]$ in which $f$ is continuous there exist point $e\in [c,d]$ in which $f$ is discontinuous?
I stumbled upon Egorov's theorem which says, roughly, that pointwise convergence on some set implies uniform convergence on some smaller set, and I know that uniform convergence on some set implies continuity of the limit function on that set, but I do not know whether these two questions can be resolved only with Egorov's theorem or with some of its modifications, so if someone can help me or point me in the right direction that would be nice.
|
The following is a standard application of Baire Category Theorem:
The set of continuity points of a pointwise limit of continuous functions from a
Baire space to a metric space is a dense $G_\delta$, and hence cannot be countable.
Another result is the following:
Any monotone function on a compact interval is a pointwise limit of continuous functions.
Such a function can have countably infinite set of discontinuities. For example in $[0,1]$ consider the distribution function of the measure that gives probability $1/2^n$ to $r_n$ where $(r_n)$ is any enumeration of rational numbers in $[0,1]$. The set of discontinuity points of this function is $\mathbb{Q}\cap[0,1]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Convergence of difference of series Let $f(x)$ be a probability density function, i.e $f(.) \geq 0$ and $\int_{\mathbb{R}}f(x) \ dx=1$. Let $I_{n,k}= \big[\frac{k}{n}, \frac{k+1}{n}\big), \ k \in \mathbb{Z}$ and $p_{n,k}$ denote the area of $f(.)$ in the interval $I_{n,k}$, i.e,
$$
p_{n,k}=\int_{\frac{k}{n}}^{\frac{k+1}{n}} f(x) \ dx
$$
We know that $\displaystyle \sum_{k \in \mathbb{Z}} p_{n,k}=1=\displaystyle \sum_{k \in \mathbb{Z}} p_{n,k+1}$. Define $\{a_{n,m}\}$ as the convolution of $\{p_{n,k}\}$ with itself,
$$
a_{n,m}=\displaystyle \sum_{k \in \mathbb{Z}} \ p_{n,m-k} \ p_{n,k}
$$
I am trying to prove that $\sum_{m \in \mathbb{Z}}|a_{n,m+1}-a_{n,m}| \rightarrow 0$ as $n \rightarrow \infty$
Is this true? At least intuitively I feel that as our partitions become smaller and smaller the difference should converge to $0$, because $\{a_{n,m}\}$ is also a probability distribution on the integers.
|
This is indeed true. We compute
\begin{eqnarray*}
\sum_{m}\left|a_{n,m+1}-a_{n,m}\right| & = & \sum_{m}\left|\sum_{k}\left[p_{n,\left(m+1\right)-k}-p_{n,m-k}\right]p_{n,k}\right|\\
& \leq & \sum_{k}\left[p_{n,k}\cdot\sum_{m}\left|p_{n,\left(m+1\right)-k}-p_{n,m-k}\right|\right]\\
& \overset{\ell=m-k}{=} & \sum_{k}\left[p_{n,k}\sum_{\ell}\left|p_{n,\ell+1}-p_{n,\ell}\right|\right]\\
& \overset{\sum_{k}p_{n,k}=1}{=} & \sum_{\ell}\left|\int_{\left(\ell+1\right)/n}^{\left(\ell+2\right)/n}f\left(y\right)\,{\rm d}y-\int_{\ell/n}^{\left(\ell+1\right)/n}f\left(x\right)\,{\rm d}x\right|\\
& \overset{x=y-\frac{1}{n}}{=} & \sum_{\ell}\left|\int_{\ell/n}^{\left(\ell+1\right)/n}\left[f\left(x+\frac{1}{n}\right)-f\left(x\right)\right]\,{\rm d}x\right|\\
& \leq & \sum_{\ell}\int_{\ell/n}^{\left(\ell+1\right)/n}\left|f\left(x+\frac{1}{n}\right)-f\left(x\right)\right|\,{\rm d}x\\
& = & \int_{\mathbb{R}}\left|f\left(x+\frac{1}{n}\right)-f\left(x\right)\right|\,{\rm d}x\\
& \xrightarrow[n\to\infty]{} & 0.
\end{eqnarray*}
Here, the last step used continuity of translation with respect to
the $L^{1}$ norm, see e.g. Continuity of $L^1$ functions with respect to translation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
The names of 8 students are listed randomly. What is the probability that Al, Bob, and Charlie are all in the top 4? My attempt at a solution: There are $C_{4,3}$ ways to arrange Al, Bob, and Charlie in the top 4, and $C_{5,4}$ ways to arrange the other 5 people. All of this is over $C_{8,4}$, giving $$\frac{C_{4,3}C_{5,4}}{C_{8,4}}=\frac{2}{7}$$ Is this correct?
|
$4$ slots are up for grabs, so a simple way is to just compute $\dfrac48\cdot\dfrac37\cdot\dfrac26= \dfrac{1}{14}$
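If you want to convince yourself numerically, a brute-force sketch over all $8!$ orderings (my own illustration; persons $0,1,2$ play the roles of Al, Bob and Charlie) gives the same $\frac1{14}$:

```python
from itertools import permutations

hits = total = 0
for order in permutations(range(8)):        # all 8! random listings of the 8 students
    total += 1
    hits += {0, 1, 2} <= set(order[:4])     # Al, Bob, Charlie all among the top 4
print(hits / total)                          # 0.0714... = 1/14
```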
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
checking if a number is a prime I was reading Wikipedia, and it was given that "all primes are of the form 6k ± 1" (other than 2 and 3), where k = 1,2,3,4,...
Is this statement correct? If yes, can we use this to test if a given number is a prime number? For instance, we can say that 41 is a prime number, since there exists an integer K (K >= 1), such that 6k - 1 = 41 ==> k = 7.
I am confused why this test cannot be used to test if a number is prime?
Thanks,
Sekhar
|
For $k\ge 1$ no prime can be of the form $6k$ (obvious), $6k+2=2(3k+1)$, $6k+3=3(2k+1)$, $6k+4=2(3k+2)$. There remains $6k+1$ and $6k+5=6(k+1)-1$.
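To see concretely why the converse test in the question fails, here is a small Python sketch (illustrative only): being of the form $6k\pm1$ is necessary for primes larger than $3$, but not sufficient.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Composite numbers of the form 6k +/- 1: the form alone does not certify primality.
counterexamples = [n for n in range(5, 200) if n % 6 in (1, 5) and not is_prime(n)]
print(counterexamples[:5])  # [25, 35, 49, 55, 65]
```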
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1473860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Conjugacy classes for rotations of $D_{2n}$ It says in my notes that for a dihedral group $D_{2n}$, if $n$ is even, then each conjugacy class has at most $1$ element. It says that for the conjugacy class $C_{r^k}$ where $r^k$ is some reflection, the conjugacy class is $\{r^{\frac{n}{2} = k}\}$ (or just $\{r^{k}\}$).
I didn't follow this. Say we have $D_8$. Then $n = 4$, so $k = 2$. Then $r^2$ is just moving a vertex on an octagon over two vertices. But what about $r^1$, $r^3$, $r^5$, and $r^7$? $1$, $3$, $5$, and $7$ are not even numbers, so this says that they are not in any conjugacy classes since they are not of the form $r^2$, and the only conjugacy class of $D_8$ is $\{r^{2}\}$. But we know that each element should belong to at least one conjugacy class, so how is this possible?
|
Something is certainly incorrect in your notes: a group is abelian if and only if every conjugacy class has size one, and the dihedral group $D_{2n}$ is not abelian for $n>2$. Possibly what is meant is that the center of $D_{2n}$ consists of at most one non-trivial element; and for $n$ even, this element is $r^{n/2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1474146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
exponential distribution involving customers
Customers arrive at a certain shop according to a poisson process at
the rate of $20$ customers per hour. what is the probability that the
shop keeper will have to wait more than $5$ minutes (after opening)
for the arrival of the first customers?
So I am trying to do this problem with the exponential distribution; however, I'm having some trouble figuring out the pdf.
I was told $F(x)=20e^{-20x}$ if $x>0$, however the pdf on my paper says: $\frac{1}{\theta}e^{-\frac{x}{\theta}}$ so shouldnt it be: $\frac{1}{20}e^{-\frac{x}{20}}$.
I am just confused. Any explanations?
|
Let the discrete random variable $N=$ the number of customers arriving in the first $5$ minutes.
$N\sim Po(\lambda)$, and working in minutes the rate is $\frac{20}{60}=\frac13$ customer per minute, so over a $5$-minute window $\lambda = 5\cdot\frac13=\frac53$ and $$N\sim Po\left(\tfrac{5}{3}\right)$$ The shopkeeper waits more than $5$ minutes exactly when no customer arrives in that window, so $$P\left(N =0\right)=e^{-5/3} \approx \color{purple}{0.189}$$ Equivalently, the waiting time $T$ for the first arrival is exponential with rate $20$ per hour, so $P(T>t)=e^{-20t}$ for $t$ in hours and $P\left(T>\tfrac1{12}\right)=e^{-5/3}$. In the $\frac{1}{\theta}e^{-x/\theta}$ parametrisation, $\theta$ is the mean waiting time, which here is $\frac1{20}$ hour, so the pdf in hours is $20e^{-20x}$ and not $\frac{1}{20}e^{-x/20}$.
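A quick Monte Carlo sketch (purely illustrative) agrees with $e^{-5/3}\approx 0.189$:

```python
import math
import random

rate_per_min = 20 / 60                         # 20 customers per hour = 1/3 per minute
trials = 200_000
waits = (random.expovariate(rate_per_min) for _ in range(trials))
estimate = sum(w > 5 for w in waits) / trials  # fraction of waits longer than 5 minutes
print(estimate, math.exp(-rate_per_min * 5))   # both close to 0.189
```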
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1474225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How many integers between 1000 and 9999 inclusive consist of How many integers between $1000$ and $9999$ inclusive consist of
(a) Distinct odd digits,
(b) Distinct digits,
(c) From the number of integers obtained in (b), how many are odd integers?
(a) ${}^5P_4 = 120$
(b) $9 \times 9 \times 8 \times 7 = 4536$
(c) I don't know how to find the odd integers in $4536$
Are the answers correct? Can you guys help solve me the (c)?
|
If you place $0,2,4,6,$ or $8$ in the units place, it becomes even. Hence, you will have to place $1,3,5,7,$ or $9$ in the units place to make it odd. So in the units place, you have $5$ choices. In the thousand's place we have $8$ choices ($8$ because we have left out $0$ and the number in the units place). Similarly we have $8$ and $7$ choices in the hundreds and tens place, respectively . Hence, the answer should be $8\times 8 \times 7 \times 5 = 2240$.
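A brute-force check in Python (illustrative only) confirms both counts:

```python
# Four-digit numbers with all digits distinct, and the odd ones among them.
distinct = [n for n in range(1000, 10000) if len(set(str(n))) == 4]
print(len(distinct))                 # 4536, part (b)
print(sum(n % 2 for n in distinct))  # 2240, part (c)
```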
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1474463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
$\cot(x+110^\circ)=\cot(x+60^\circ)\cot x\cot(x-60^\circ)$ How to find the solution of this trigonometric equation
$$\cot(x+110^\circ)=\cot(x+60^\circ)\cot x\cot(x-60^\circ)$$
I have used the formulae $$\cos(x+60^\circ)\cos(x-60^\circ)=\cos^2 60^\circ - \sin^2x$$
$$\sin(x+60^\circ)\sin(x-60^\circ)=\sin^2x-\sin^260^\circ$$
How to move further? What is the least positive value of $x$?
|
$$\cot(x+110^\circ)=\cot(x+60^\circ)\cot x\cot(x-60^\circ)$$
$$\frac{\cot(x+110^\circ)}{\cot x} = \cot(x+60^\circ)\cdot \cot(x-60^\circ)$$
$$\frac{\cos(x+110^\circ)\cdot \sin x}{\sin (x+110^\circ)\cdot \cos x} = \frac{\cos(x+60^\circ)\cdot \cos(x-60^\circ)}{\sin(x+60^\circ)\cdot \sin(x-60^\circ)}$$
Now Using Componendo and Dividendo, We get
$$\frac{\cos(x+110^\circ)\cdot \sin x+\sin(x+110^\circ)\cdot \cos x}{\cos(x+110^\circ)\cdot \sin x-\sin(x+110^\circ)\cdot \cos x} = \frac{\cos(x+60^\circ)\cdot \cos(x-60^\circ)+\sin(x+60^\circ)\cdot \sin(x-60^\circ)}{\cos(x+60^\circ)\cdot \cos(x-60^\circ)-\sin(x+60^\circ)\cdot \sin(x-60^\circ)}$$
$$-\frac{\sin \left[(x+110^\circ)+x\right]}{\sin\left[(x+110^\circ)-x\right]} = \frac{\cos[(x+60^\circ)-(x-60^\circ)]}{\cos[(x+60^\circ)+(x-60^\circ)]}$$
So we get $$-\frac{\sin(2x+110^\circ)}{\sin (110^\circ)} = \frac{\cos (120^\circ)}{\cos (2x)}$$
$$\sin (2x+110^\circ)\cdot \cos (2x) = \frac{1}{2}\sin (110^\circ)$$
So we get $$2\sin (2x+110^\circ)\cdot \cos (2x) = \sin(110^\circ)$$
$$\sin(4x+110^\circ)+\sin(110^\circ) = \sin (110^\circ)$$
So we get $$\sin(4x+110^\circ) =0\Rightarrow 4x+110^\circ = n\times 180^\circ\;,$$ where $n\in \mathbb{Z}$. In particular, $x=\frac{n\times 180^\circ-110^\circ}{4}$, and taking $n=1$ gives the least positive value $x=17.5^\circ$.
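As a numerical sanity check (illustrative only), the candidate least positive solution $x=17.5^\circ$ does satisfy the original equation:

```python
import math

def cot(t):
    return math.cos(t) / math.sin(t)

x = math.radians(17.5)
lhs = cot(x + math.radians(110))
rhs = cot(x + math.radians(60)) * cot(x) * cot(x - math.radians(60))
print(lhs, rhs)   # both about -0.767: the two sides agree
```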
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1474586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
A question about gradient. The gradient of a function of two variable $f(x,y)$ is given by
$$\left( \frac{\partial f}{\partial x} ,\frac{\partial f}{\partial y}\right). $$
It is also evident that gradient points in the direction of the greatest increase or decrease of a function at a point. My question is that whether the gradient vector is tangent to the function $f(x,y)$ at a point or not. The tangent here means that if you cut the graph of the function f(x,y) along the direction of the gradient vector then you will have a 2D curve formed where the function is cut. So is the gradient vector tangent to that curve at the particular point.
|
The answer is that your question doesn't actually make sense: the graph of $f(x,y)$ lives in $\mathbb{R}^2 \times \mathbb{R}$, whereas $\nabla f$ is in $\mathbb{R}^2$.
What is true is that $\nabla f(a,b)$ is normal to the level curve of $f$ passing through $(a,b)$: this is because, taking a local parametrisation of a level curve, $(x(t),y(t))$, we have
$$ f(x(t),y(t)) = c \\
(x(0),y(0))=(a,b),$$
the tangent to this curve at $(a,b)$ is $ (x'(0),y'(0)), $ and differentiating the $f(x(t),y(t))=c$ equation gives
$$ 0 = \left. \frac{d}{dt} f(x(t),y(t)) \right|_{(a,b)} = x'(0) \frac{\partial f}{\partial x}(a,b) + y'(0) \frac{\partial f}{\partial y}(a,b) = (x'(0),y'(0)) \cdot \nabla f(a,b), $$
so $$ \nabla f \perp (x'(0),y'(0)). $$
On the other hand, we can consider the surface in $\mathbb{R}^3$ defined by $g(x,y,z) = f(x,y)-z=0$, which is essentially the graph of $f$ in three dimensions. Then
$$ \nabla g = (\nabla f, -1), $$
where I've abused notation slightly. By exactly the same idea as before, I can show that $\nabla g$ is normal to the surface $g$ (and hence the graph of $f$): take a parametrisation $(x(t),y(t),z(t))$ of a curve in the surface through $(a,b,c)$, and then differentiate $g(x(t),y(t),z(t))=0$.
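A small numerical illustration (my own, using the sample function $f(x,y)=x^2+y^2$) of the first fact, that $\nabla f$ is perpendicular to the tangent of a level curve:

```python
import numpy as np

# On the level curve f = 1 (the unit circle), grad f should be
# perpendicular to the curve's tangent vector at every sample point.
t = np.linspace(0, 2 * np.pi, 7)[:-1]          # a few points on the circle
x, y = np.cos(t), np.sin(t)                    # parametrisation of the level curve
grad = np.stack([2 * x, 2 * y])                # grad f = (2x, 2y)
tangent = np.stack([-np.sin(t), np.cos(t)])    # derivative of the parametrisation
print(np.abs(np.sum(grad * tangent, axis=0)).max())  # ~0: the dot products vanish
```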
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1474685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
homeomorphism of the closed disc on $\mathbb R ^2$ I just found a statement that for any closed disc on $\mathbb R^2$ and any two points inside it, there is a homeomorphism taking one point to another and is identity on the boundary. But I can't write down the homeomorphism myself. Can anyone help? Many thanks in advance.
|
For each $\mathbf v_0=(x_0,y_0)\in D$, the unit disk, I will define a specific homeomorphism $f:D\to D$ so that $f_{\mathbf v_0}(\mathbf 0)=\mathbf v_0$ which fixes the boundary $S^1$.
Now, for $\mathbf v\neq 0$, we have $\mathbf v=r\mathbf s$ for a unique $(r,\mathbf s)$ for $r\in(0,1]$ and $\mathbf s\in S^1$. This is essentially representing $\mathbf v$ in polar coordinates.
Then define $$f_{\mathbf v_0}(\mathbf v)=r(\mathbf s-\mathbf v_0) +\mathbf v_0.$$
This is clearly well-defined on $D\setminus \{\mathbf 0\}$, and continuous.
Show it is one-to-one and onto $D\setminus\{\mathbf v_0\}$.
Pretty obviously, $f_{\mathbf v_0}(\mathbf v)\to \mathbf v_0$ as $\mathbf v\to \mathbf0$. So we can extend $f_{\mathbf v_0}$ to $f_{\mathbf v_0}(\mathbf0)=\mathbf v_0$. It takes a little more to prove this extension is a homeomorphism $D\to D$. It obviously fixes each $\mathbf s\in D$ with $r=1$, i.e. the boundary.
Now if you need to send $\mathbf v_1\to\mathbf v_0$, do so by composing homeomorphisms: $f_{\mathbf v_0}\circ f_{\mathbf v_1}^{-1}$.
This exact argument shows the same result in $n$ dimensional closed balls, or even any $n$-dimensional compact convex set.
Given any convex and compact set $D$ in $n$-dimension, let $B$ be the boundary, and $x\in D\setminus B$ be a point in the interior. Then every element of $D\setminus \{\mathbf x\}$ has a unique representation as $r(\mathbf b-\mathbf x)+\mathbf x$ with $r\in(0,1]$ and $\mathbf b\in B$.
This also lets you prove that any convex and compact subset of $\mathbb R^n$ with non-empty interior is homeomorphic to a closed ball.
This is related to the notion of a topological cone. If $D$ is convex and compact, and $B$ is the boundary, then the space $C(B)$, the cone on $B$, is homeomorphic to $D$, and we can define that homeomorphism so that the "tip" of $C(B)$ is sent to any point of $D$ we choose.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1474903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is differentiating on both sides of an equation allowed? Let's say we have $x^2=25$
So we have two real roots ie $+5$ and $-5$.
But if we were to differentiate on both sides with respect to $x$ we'll have the equation $2x=0$ which gives us the only root as $x=0$.
So does differentiating on both sides of an equation alter it? If it does, then how do we conveniently do it in Integration by substitutions?
If not then what exactly is going on here ?
|
mcb's answer suggests that the equal sign doesn't always have the same meaning. I disagree, the difference is in the role of $x$. When we write $(x-1)(x+1)=x^2-1$, we mean that for all $x$, $(x-1)(x+1)=x^2-1$. When we write $x^2=25$, it means for which $x$ is $x^2=25$ ? (Answer: $x=-5$ and $x=+5$).
In the first case, $x$ is used to denote any value from the function domain, whereas in the second example $x$ denotes a specific value from the function domain which corresponds to a given value in the codomain.
How does this apply to the question? Differentiating means calculating $\frac{f(x+dx)-f(x)}{dx}$ as $dx$ goes to zero. This means evaluating $f(x)$ at and around $x$. Now if $f(x)$ and $g(x)$ are only equal at a specific point $x$, then they're not equal at $x+dx$. That means $f'(x) \neq g'(x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 9,
"answer_id": 4
}
|
What is the formula for $\int \cos(\pi x)dx$? I want to evaluate the integral $$\int \cos(\pi x)dx$$
I thought it would simply be $\sin(\pi x) +C$
However in the solutions book the answer is $\frac{1}{\pi}\sin(\pi x)$
This is an integral where it seems as if the chain rule is being performed?
|
Let's take the derivative of $\frac{1}{\pi}\sin(\pi x)$
to see that we get the function being integrated.
Remember that $$\frac{d}{dx}f(g(x))= g'(x)f'(g(x))$$
$g(x)$, here, the inside function, is simply $\pi x$. The outer function, $f(x)$, is $\frac{1}{\pi}\sin(x)$.
$$g'(x)=\frac{d}{dx}(\pi x)=\pi$$
$$f'(g(x))=\frac{d}{dx}\frac{1}{\pi}\sin(g(x))=\frac{1}{\pi}\cos(g(x))$$
Multiplying these, and letting $g(x)=\pi x$:
$$(\pi)(\frac{1}{\pi}\cos(\pi x))= \cos (\pi x)$$
Which is what we wanted to show.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to differentiate $x^{\log(x)}$? I want to differentiate $x^{\log(x)}$ with respect to $x$.
By using the chain rule ($x\log(x)\cdot\frac{1}{x}$), I got the answer $\log x$. Is it right?
|
$$y=x^{\log x}$$
$$\log y=\log x \log x=\log^2 x$$
$$\frac{y'}{y}=\frac{2}{x}\log x$$
$$y'=\frac{2y}{x}\log x$$
$$y'=2\frac{x^{\log x}}{x}\log x$$
$$y'=2x^{\log x-1}\log x$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Can a unbounded sequence have a convergent sub sequence? I have been using the Bolzano-Weierstrass Theorem to show that a sequence has a convergent sub sequence by showing that it is bounded but does that mean that if a sequence is not bounded then it does not have a convergent sub sequence?
The sequence I am struggling is $((-1)^n \log(n))$ for all $n$ in the natural numbers. Now because the sequence is unbounded I am unsure how to prove whether it does or does not have a convergent sub sequence?
|
Yes, an unbounded sequence can have a convergent subsequence.
The Bolzano–Weierstrass theorem says that a bounded sequence always has a convergent subsequence, but it does not rule out the possibility that an unbounded sequence also has a convergent subsequence.
For example-
if we consider the sequence $S=(100,1,100,2,100,3,100,4,100,5,....)$, it is unbounded but there is a subsequence (consider only the odd-indexed terms), i.e. $(100,100,100,\dots)$, which is convergent.
So, an unbounded sequence may have a convergent subsequence, though this is not always the case. For your particular sequence $((-1)^n \log(n))$, however, $|(-1)^n\log(n)|=\log(n)\to\infty$, so every subsequence is unbounded and therefore no subsequence converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Proving the nested interval theorem
Theorem:
Let $\{I_n\}_{n \in \mathbb N}$ be a collection of closed intervals with the following properties:
*
*$I_n$ is closed $\forall \,n$, say $I_n = [a_n,b_n]$;
*$I_{n+1} \subseteq I_n$ $\forall \,n$.
Then $\displaystyle\bigcap_{n=1}^{\infty} I_n \ne \emptyset$.
Pf: Let $I_n $ be intervals that satisfy 1 and 2. Say $I_n = [a_n, b_n] \forall n\ge 1$.
Let the sets $A$ and $B$ be defined by $A = \{a_n\}$ and $B = \{b_n\}$.
Therefore $\forall n,k \ge 1$, $a_k \le b_n$.
Case 1: $k \le n$
Then $[a_n,b_n] \subset [a_k, b_k]$ therefore $b_n \in [a_k,b_k]$ and $a_k \le b_n \le b_k$ therefore $a_k \le b_k$.
Case 2: $k>n$
Therefore $I_k \subset I_n$. By nestedness, $[a_k,b_k] \subset [a_n,b_n]$, therefore $a_k \le b_n$.
Claim: $\sup A \le \inf B$.
Proof of claim: Let $A$ and $B$ be sets such that for all $a \in A$ and for all $b \in B$, $a \le b$ Therefore $\sup A \le b$ and $a \le \inf B$ therefore $\sup A \le \inf B$.
Now we must prove that either $\bigcap_{n=1}^{\infty} I_n = [\sup A, \inf B]$ or $\bigcap_{n=1}^{\infty} I_n = \emptyset$.
First we will show $[\sup A, \inf B] \subset \bigcap I_n$. Let $x \in [\sup A, \inf B]$. Therefore $\sup A \le x \le \inf B$ and $\forall n $, $a_n \le \sup A \le x \le \inf B \le b_n$ or $a_n \le x \le b_n$ and thus $x \in I_n$.
Now we will show that $\bigcap I_n \subset [\sup A, \inf B]$. Let $y \in \bigcap I_n$. Show $\sup A \le y \le \inf B$. We know that $\forall n \ge 1$, $a_n \le y \le b_n$. Since $a_n \le y$, we see that $\sup A \le y$.
Similarly since $y \le b_n$, we see that $y \le \inf B$.
As you can see, this proof is very long. Does anyone have any advice to shorten this?
|
Here is a non-constructive proof.
Construct a sequence by choosing an element $x_n \in I_n$ for every $n$; you can do this however you like. Since this sequence is bounded, the Bolzano-Weierstrass theorem says there exists a convergent subsequence $x_{n_k}$ converging to some real number $c$. Suppose $c \not\in \bigcap_{n=1}^{\infty} I_n$. Then, by the nesting, there exists some $N$ for which $c \not\in I_n$ for all $n > N$, and since these intervals are closed there is some number $\varepsilon >0$ such that $|x_{n_k} - c| \geq \varepsilon$ for all $n_k > N$, contradicting convergence. Thus $c \in \bigcap_{n=1}^{\infty} I_n$ and so it is nonempty.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Maximizing $\sin \beta \cos \beta + \sin \alpha \cos \alpha - \sin \alpha \sin \beta$ I need to maximize
$$ \sin \beta \cos \beta + \sin \alpha \cos \alpha - \sin \alpha \sin \beta \tag{1}$$
where $\alpha, \beta \in [0, \frac{\pi}{2}]$.
With numerical methods I have found that
$$ \sin \beta \cos \beta + \sin \alpha \cos \alpha - \sin \alpha \sin \beta \leq 2 \sin \frac{\alpha + \beta}{2} \cos \frac{\alpha + \beta}{2} - \sin^2 \frac{\alpha + \beta}{2}. \tag{2} $$
If $(2)$ is true then I can denote $x = \frac{\alpha + \beta}{2}$ and prove (using Cauchy inequality) that
$$ 2 \sin x \cos x - \sin^2 x \leq \frac{\sqrt{5}-1}{2}. \tag{3}$$
But is $(2)$ true? How do I prove it? Maybe I need to use a different idea to maximize $(1)$?
|
As Dan suggested, solve for the stationary points of the gradient. Setting the partial derivatives to zero gives $\cos 2\alpha=\cos\alpha\sin\beta$ and $\cos 2\beta=\sin\alpha\cos\beta$; subtracting them forces either $\alpha=\beta$ or $\sin(\alpha+\beta)=\tfrac12$, and checking both cases shows that the largest value at an interior stationary point is $\frac{\sqrt5-1}{2}$, attained at $\alpha=\beta=\tfrac12\arctan 2\approx 31.7^\circ$. The boundary sub-problems are much simpler (by the symmetry you only have to solve two of them) and give at most $\tfrac12$, which is smaller, so the maximum over the domain is $\frac{\sqrt5-1}{2}$, consistent with the bound $(3)$.
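A quick numerical sketch (illustrative grid search, not a proof) supports this:

```python
import numpy as np

a = np.linspace(0, np.pi / 2, 1001)
A, B = np.meshgrid(a, a)
F = np.sin(B) * np.cos(B) + np.sin(A) * np.cos(A) - np.sin(A) * np.sin(B)
i = np.unravel_index(F.argmax(), F.shape)            # location of the grid maximum
print(F.max(), np.degrees(A[i]), np.degrees(B[i]))
# about 0.618 = (sqrt(5) - 1)/2, attained near alpha = beta = arctan(2)/2, i.e. 31.7 degrees
```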
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Prove that $(A-B)\cup (B-A)=(A\cup B)-(A\cap B)$ Prove that:
$(A-B)\cup (B-A)=(A\cup B)-(A\cap B)$
My Attempt:
$x\in (A-B)\cup (B-A)$
so $x\in (A-B)\vee x\in (B-A)$
$(x\in A \wedge x\notin B)\vee (x\in B\wedge x\notin A)$
I suppose I should try different cases from here? I haven't been able to make progress after this step. Thanks for your help!
|
Looks like you're on the right track. You could make two cases now based on your "or" statement.
Case $1$: $x \in A$ and $x\notin B$. Then obviously $x\notin A\cap B$.
Case $2$: $x \in B$ and $x\notin A$. Then again $x \notin A\cap B$.
In either case (that is, whether we start with $x\in A$ or $x\in B$, i.e. $x \in A\cup B$) we know $x\notin A\cap B$, so $x\in (A\cup B)\setminus (A\cap B)$ and you can conclude that $$(A-B)\cup (B-A)\subseteq (A\cup B)-(A\cap B)$$ Can you complete the second half of the proof?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
homeomorphic topology of quotient space of $S^1$ I got stuck on the problem about quotient space from General Topology of Stephen Williard. Here is the problem:
Let $\sim$ be the equivalence relation $x \sim y$ iff $x$ and $y$ are diametrically opposite, on $S^1$. Which topology is the quotient space $S^1/\sim$ homeomorphic to?
I tried to build a continuous function on $S^1$ such that any two diametrically opposite points have the same image, but I couldn't find one. Each point in $S^1$ can be written as $(\cos(\phi), \sin(\phi))$; what is a function which satisfies the previous requirement? Can anyone help me with this? I really appreciate it.
|
The space you will get is $\mathbb{RP}^1$, the one dimensional real projective space, which is by definition the set of all lines through the origin in $\mathbb{R}^2$. It is in fact homeomorphic to $S^1$ itself: viewing $S^1$ as the unit complex numbers, the map $z\mapsto z^2$ identifies exactly the antipodal pairs, so it descends to a homeomorphism $S^1/{\sim}\;\to S^1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Permutation group conjugacy proof.
I tried this example in $S_3$ and saw that this held but I couldn't figure out how to prove it for all $S_n$. Any help would be appreciated.
|
First, you should show that if $\sigma$ is a cycle, then $\alpha\sigma\alpha^{-1}$ is also a cycle of the same length. In fact, you can actually write out what this permutation has to be.
Take an arbitrary permutation $\sigma = \sigma_1 \sigma_2 \dots \sigma_n$, written in disjoint cycle notation. Then $\alpha\sigma\alpha^{-1} = \alpha \sigma_1 \sigma_2 \dots \sigma_n \alpha^{-1}$. Can you find a way to rewrite this that takes advantage of the previous fact?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1475914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Computing the dimension of a vector space of matrices that commute with a given matrix B, This is part $2$ of the question that I am working on.
For part $1$, I showed that the space of $5\times 5$ matrices which commute with a given matrix $B$, with the ground field = $\mathbb R$ , is a vector space.
But how can I compute its dimension?
Thanks,
|
There is a theorem by Frobenius:
Let $A\in {\rm M}(n,F)$ with $F$ a field and let $d_1(\lambda),\ldots,d_s(\lambda)$ be the invariant factors $\neq 1$ of $\lambda I-A$, and let $n_i=\deg d_i(\lambda)$. Then the dimension of the vector space of matrices that commute with $A$ is $$N=\sum_{j=1}^s (2s-2j+1)n_j$$
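As an illustration (my own sketch, not part of the theorem statement), the dimension can also be computed numerically as the nullity of the map $X\mapsto BX-XB$. For the sample diagonal matrix below, with eigenvalue multiplicities $3$ and $2$, the result $13=3^2+2^2$ matches Frobenius' formula with invariant-factor degrees $1,2,2$: $5\cdot1+3\cdot2+1\cdot2=13$.

```python
import numpy as np

n = 5
B = np.diag([2., 2., 2., 3., 3.])                      # sample 5x5 matrix
# X -> BX - XB acts on vec(X) as kron(I, B) - kron(B.T, I); its nullity is the answer.
L = np.kron(np.eye(n), B) - np.kron(B.T, np.eye(n))
print(n * n - np.linalg.matrix_rank(L))                # 13
```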
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Do uniformly grey sets of positive density exist? Let us call a set $A\subset \mathbb{R}^2$ uniformly grey if the measure of its sections is constant, but not full. (There may be a standard name for this; I would be glad if someone tells me.)
Formal definition: There are intervals $[a_1,b_1]$ and $[a_2,b_2]$ and constants $\mu_1\in(0,\lambda(A_2))$ and $\mu_2\in(0,\lambda(A_1))$ such that $\lambda(\{x_2:(x_1,x_2)\in A\}) = \mu_1$ for any $x_1\in (a_1,b_1)$ and $\lambda(\{x_1:(x_1,x_2)\in A\}) = \mu_2$ for any $x_2\in (a_2, b_2)$; $\lambda$ is the Lebesgue measure.
A simple example is $A = \{(x_1,x_2)\in[0,1]^2: (x_2-x_1)\mod 1\in(0,1/2)\}$.
Well, this does not look grey at all, so in order to make it look grey, let us add an assumption of positive density: for any $x = (x_1,x_2)\in (a_1,b_1)\times (a_2,b_2)$ and any $r>0$, $\lambda_2(B(x,r)\cap A)>0$, where $B(x,r)$ is the ball of radius $r$ centered at $x$.
So the question is:
Does there exist a uniformly grey set of positive density?
|
Take a fat Cantor set $C$ (such as the Smith-Volterra-Cantor set) in $[0,1]$ of measure $\frac{1}{2}$. Then $C \times C$ will satisfy your section condition.
Does it have positive density? The points in the fat cantor set have positive density on $\mathbb{R}$, and so given any ball around our point we can find a square within that ball having positive intersection with our set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How many roots are rational?
If $P(x) = x^3 + x^2 + x + \frac{1}{3}$, how many roots are rational?
EDIT:
$3x^3 + 3x^2 + 3x + 1 = 0$, if any rat roots then,
$x = \pm \frac{1}{1, 3} = \frac{-1}{3}, \frac{1}{3}$, and none of these work. Complete?
|
The polynomial $3P(x) = 3x^3 + 3x^2 + 3x + 1$ has integer coefficients. So, if $p/q$ with $\operatorname{gcd}(p,q) = 1$ is a rational root, necessarily $p$ divides the constant term $1$, and $q$ divides the leading coefficient $3$. So we have four candidates $-1$, $-1/3$, $1/3$ and $1$. Clearly, the roots of $P(x)$ are negative, so we just have to test $-1$ and $-1/3$. The polynomial $P(x)$ has no rational roots.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
$\lim_{x\to 0} \frac{1-\cos (1-\cos (1-\cos x))}{x^{a}}$ is finite then the max value of $a$ is? I know the formula $\lim_{x\to 0}\frac{1-\cos x}{x^{2}}=\dfrac{1}2$.
But how do I use it here? I tried L'Hôpital's rule with no success, and applying it repeatedly becomes very lengthy. How do I approach this?
|
Hint.
$$ \frac{1 - \cos\bigl(1 - \cos(1-\cos x)\bigr)}{x^a}
= \frac{1 - \cos\bigl(1 - \cos(1-\cos x)\bigr)}{\bigl(1 - \cos(1 - \cos x)\bigr)^2} \cdot \left(\frac{1 - \cos(1 - \cos x)}{(1 - \cos x)^2}\right)^2 \cdot \frac{(1 - \cos x)^4}{x^a} $$
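To complete the hint informally: by $\lim_{u\to0}\frac{1-\cos u}{u^2}=\frac12$ the first two factors have finite nonzero limits, so everything hinges on the last factor $\frac{(1-\cos x)^4}{x^a}\sim\frac{x^8}{16x^a}$, which suggests the maximum is $a=8$, with overall limit $\frac1{128}$. A quick numerical check (illustrative only):

```python
import math

# Near 0 the numerator behaves like x**8 / 128, so the ratio below should settle near 1/128.
for x in (0.5, 0.2, 0.1, 0.05):
    val = 1 - math.cos(1 - math.cos(1 - math.cos(x)))
    print(x, val / x ** 8)   # tends to 1/128 = 0.0078125 as x shrinks
```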
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Differential Equation (First order with separable variable) Given that $\frac{dy}{dx}=xy^2$. Find the general solution of the differential equation.
My attempt,
$\frac{\frac{dy}{dx}}{y^2}=x$
$\int \frac{\frac{dy}{dx}}{y^2} dx=\int x dx $
$-\frac{1}{y}=\frac{x^2}{2}+c$
$y=-\frac{2}{x^2+2c}$
Why the given answer is $y=-\frac{2}{x^2+c}$? What did I do wrong?
|
The constant is arbitrary. You can equally well have $3c,-c$, or even a different letter, like $A$; they are all equivalent. The given answer has just chosen a simpler-looking form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Laplace's Method for Integral asymptotes when g(c) = 0 I am using these notes as my reference, but I am running into some questions.
Say I am trying to find, for large $\lambda$
$$I(\lambda)=\int_0^{\pi/2}dxe^{-\lambda\sin^2(x)}$$
This has our maximum at $c=0$, where g(c)=0 and g'(c)$\neq$0. So when I pull out the $e^{\lambda g(c)}$, that is just 1, so I continue to expand g(x) about x = 0, and then change the bounds from -Infinity to Infinity.
$$I(\lambda)\approx\int_{-\infty}^\infty dx\exp[-\lambda(x^2-x^4/3+...)]$$
To first order, this is just $\sqrt{\pi/\lambda}$, but the second order diverges.
The reason this stumps me is because I can find an exact solution that depends on $e^x$ so I am not sure where i am going wrong in computing the behavior.
|
Note that you have an endpoint maximum, and hence the integral should only be one-sided; this is the sort of case where Watson's lemma applies.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How many (3x3) square arrangements?
In how many ways can we place 9 identical squares - 3 red, 3 white and 3 blue, in a 3x3 square in a way that each row and each column has squares of each three colours?
Let the colors be $RGB$, let the white=green for simplicity, then the top row has the following cases:
$RGB, RBG, BRG, BGR, GRB, GBR = 6$ cases.
Take the $RGB$ configuration first: we have:
$$R G B $$
Below $R$ there are two possibilities, $G, B$, suppose you choose $G$ so now we have:
$R G B$
$G$ ==> Now there are two choices there: $R, B$ then one choice for the last one, and for the whole row beneath, one choice each.
If you chose $B$ then you would have had:
$R G B$
$B$ ==> Only one choice left $(R)$ and then one again for last, and the one choice for each of the spots in the last row.
In total, there are $2 + 1 = 3$ possible arrangements here, it would have been the same for the other cases ($GBR, RBG$, etc...).
The final answer should be $6(3) = 18$, but the actual answer is $9$?
|
"In total, there are 2+1=32+1=3 possible arrangements here"
No, you don't add them. You multiply them. 2 choices for the second row and each choice yields 1 choice for the third.
"In total, there are 2x1=2 possible arrangements here"
=======
There are six ways you can do the first row.
In the second row there are 2 choices of color you can put under the red square in the first row. Call that color x. There is only one choice to put under the x square in the first row (not red and not x). Then there's only one choice for the last square of the 2nd row. So there are 2 choices for the second row.
There is only 1 choice for the third row.
So 12 possibilities.
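A brute-force Python sketch (illustrative only) over all $3^9$ fillings confirms the count of $12$:

```python
from itertools import product

count = 0
for grid in product(range(3), repeat=9):               # all 3^9 colourings of the 3x3 board
    rows = [set(grid[3 * i:3 * i + 3]) for i in range(3)]
    cols = [set(grid[i::3]) for i in range(3)]
    count += all(len(s) == 3 for s in rows + cols)     # every row and column shows all 3 colours
print(count)  # 12
```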
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Moves to P-positions in Nim Let $A$ be an N-position in Nim such that all moves to P-positions require exactly $k$ tokens to be removed. What can we say about $A$?
|
As you may know, a winning move is one that changes the binary XOR of the heap sizes to $0$, which means to XOR one of the existing heap sizes with the current XOR sum $s$. By assumption, for each heap size $n_i$ in $A$, we have either $n_i\operatorname{XOR} s>n_i$ or $n_i\operatorname{XOR} s=n_i-k$. The first case occurs exactly when $n_i$ has a zero bit at the msb bit of $s$.
As $x\operatorname{XOR}y=x+y-2(x\operatorname{AND}y)$, we conclude that all $n_i$ with a $1$ at the msb bit of $s$ (which must be an odd number of heaps) have the same AND with $s$, i.e., they agree at all bit positions occurring in $s$. And vice versa, from any odd number of heaps having a few bits in common in this way, we can construct an $N$-position by adding one or more heaps of suitable size.
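Here is a small illustrative sketch (my own, not part of the argument above) of the winning-move computation: a move to a P-position replaces some heap $n_i$ by $n_i\operatorname{XOR} s$. In the sample position $[3,5,7]$ every winning move removes exactly $k=1$ token, and indeed all three heaps agree at the bit positions of $s=1$.

```python
def winning_moves(heaps):
    """All moves (heap index, tokens removed) that lead to a P-position (XOR sum 0)."""
    s = 0
    for h in heaps:
        s ^= h                       # nim-sum of all heap sizes
    moves = []
    for i, n in enumerate(heaps):
        target = n ^ s               # replacing n by n XOR s zeroes the nim-sum
        if target < n:               # legal only when tokens are actually removed
            moves.append((i, n - target))
    return moves

print(winning_moves([3, 5, 7]))      # [(0, 1), (1, 1), (2, 1)]: every winning move removes 1 token
```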
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1476899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is the Matrix Diagonalizable if $A^2=4I$ I have two question:
Let $A$ be a non-scalar matrix, $A_{k \times k} \in \Bbb R$, and $A^2=4I$. Is the matrix A, always diagonalizable in $\Bbb R$?
Answer
I know that the answer is yes:
$$A^2=4I \rightarrow A^2-4I=0$$
Then by the Cayley Hamilton theorem I know that the matrix satisfies the equation above.
Now I don't know how to explain the fact that the characteristic polynomial is $P_A=(\lambda-2)(\lambda+2)$, then the characteristic polynomial has two different roots, and no more. what's the reason for it?
Second Question - Irrelevant to the first question
When we say that a matrix is diagonalizable if it has different linear roots, it means that for if I have the following characteristic polynomial (for example) $(t-1)^2(t-2)$ then the matrix is not diagonalizable since the root 1 appears twice in the characteristic polynomial?
|
The matrix $A$ is a root of the polynomial $t^2-4$, hence the minimal polynomial of $A$, a divisor of $t^2-4$, has only simple roots. This is equivalent to $A$ being diagonalisable.
Answer to the second question: a matrix over a field $K$ is diagonalisable over $K$ if and only if its minimal polynomial splits over $K$ into a product of distinct linear factors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Is my proof of $-(-a)=a$ correct? I'm trying to prove theorem $1.4$ with the following:
$$a=a$$
$$a-a=\stackrel{0}{\overline{a-a}}\tag{Ax.5}$$
$$a-a=0\tag{Inverse def.}$$
$$a+(-a)=0\tag{Thm 1.3}$$
Here I thought about packing the $a$ with a minus sign and then it would yield a new minus sign on Its inverse due to the inverse definition.
$$(-a)+(-(-a))=0$$
Adding $a$ to both sides.
$$\stackrel{0}{\overline{a+(-a)}}+(-(-a))=a$$
$$-(-a)=a$$
Is my proof correct?
|
Better wording:
By theorem 1.2, for every a there is a unique -a such that a + (-a) = 0. Likewise for (-a) there is a unique -(-a) such that (-a) + (-(-a)) = 0.
Therefore:
(-a) + (-(-a)) = 0
a + (-a) + (-(-a)) = a+ 0 = a
By associativity:
[a + (-a)] + (-(-a)) = a so
0 + (-(-a)) = (-(-a)) = a.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
How do you factorize quadratics when the coefficient of $x^2 \gt 1$? So I've figured out how to factor quadratics with just $x^2$, but now I'm kind of stuck again at this problem:
$2x^2-x-3$
Can anyone help me?
|
Depending on your level, you may not have seen the quadratic formula that JMoravitz and BLAZE posted. That formula is often the way to find the zeroes of and factor a quadratic.
But what Adam suggests may be more up your alley, and it's what I'll discuss as well: You can make educated guesses and then check to see if it works. The problem you have posted does not have many options to guess, so this method is fairly quick.
Consider your quadratic and what you expect the factored form to look like:
$$2x^2-x-3 = (\color{red}{\Box}x \color{red}{\pm} \color{red}{\Box})(\color{red}{\Box}x\color{red}{\pm}\color{red}{\Box})$$
You know that the factored form requires you to fill in those blanks, choosing the appropriate sign and number. What numbers can possibly satisfy the equation?
Start with the $2x^2$ term on the left. This one is straightforward. In order to get $2x^2$, the coefficients of the $x$'s on the right hand side must be $2$ and $1$ if we are to have all integer coefficients.
$$2x^2-x-3 = (\color{green}{2}x \color{red}{\pm} \color{red}{\Box})(\color{green}{1}x\color{red}{\pm}\color{red}{\Box})$$
$$2x^2-x-3 = (2x \color{red}{\pm} \color{red}{\Box})(x\color{red}{\pm}\color{red}{\Box})$$
Now what about the $-3$ on the right? It's factors are $\pm1, \pm3$. We need to select two of these numbers and have them fill the remaining sections. You can try all combinations until you have the answer, but we can be smarter than that.
Since $-3$ is negative, one of these numbers will be positive and the other will be negative. Furthermore, the two factors will need to add to $-1$, after one of them is multiplied by $2$, since we have $-x$ on the left.
Notice that $2-3 = -1$. That's what we want. So the $2x$ multiplies a $1$ and the $x$ multiplies a $-3$.
$$2x^2-x-3 = (2x \color{green}{-3})(x\color{green}{+1})$$
We do a quick check:
$$(2x-3)(x+1) = 2x^2 +2x -3x -3 = 2x^2 -x -3$$
It works.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 3
}
|
Distribution of $Y= \frac{X_1}{|X_2|}$? If $X_1$ and $X_2$ are independent and identically distributed Gaussian random variables with parameters $0$ and $\sigma^2$, how do I find the distribution of $Y= \frac{X_1}{|X_2|}$?
I'm not supposed to use the method using the Jacobian but I'm not sure what the "easier" way to go about this is.
|
If $X_1$ and $X_2$ are independent and identically distributed Gaussian random variables then their common distribution is circularly symmetrical around the origin. Let this distribution be denoted by $N(u,v)$. Then the cdf of Y can be calculated as follows
$$F_Y(y)=P\left(\frac{X_1}{|X_2|}<y\right)=P(X_1<y\ |X_2|)=\iint_{\color{red}{A_y}}N(v,u)\ dv \ du.$$
Here the red region is
$$\color{red}{A_y=\{(v,u): u<y|v| \}}$$
as shown in the following figure:
So, we have to integrate the circularly symmetric $N(v,u)$ over the red region. The result is
$$F_Y(y)=P\left(\frac{X_1}{|X_2|}<y\right)=1-\frac1{2\pi}\color{red}{\alpha}=1-\frac1{\pi}\left(\frac{\pi}2-\arctan(y)\right).\tag1 $$
Here I used the following two facts:
*
*the integral over the white area is proportional to the angle $\alpha$ because of the circular symmetry of $N$ around the origin.
*the integral of $N$ over the whole plane is $1$ because $N$ is a probability density function.
The density of $Y$ can be determined by differentiating $F_Y$:
$$f_Y(y)=\frac1{\pi}\frac1{1+y^2}.$$
This is the standard Cauchy distribution. There are three notes to be taken.
*
*The distribution of $Y$ does not depend on $\sigma$.
*The distribution of $Y=\frac{X_1}{|X_2|}$ is the same as that of $Y'=\frac{X_1}{X_2}$ which is also Cauchy.
*If $X_1$ and $X_2$ were not independent then the argumentation based on the circular symmetry of $N(v,u)$ would be false.
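A Monte Carlo sketch (purely illustrative) of note 1 and of the cdf obtained in $(1)$:

```python
import math
import random

sigma, trials, y = 3.0, 200_000, 1.5            # sigma is arbitrary; the result should not depend on it
samples = (random.gauss(0, sigma) / abs(random.gauss(0, sigma)) for _ in range(trials))
empirical = sum(v <= y for v in samples) / trials
print(empirical, 0.5 + math.atan(y) / math.pi)  # both close to 0.813, the standard Cauchy cdf at 1.5
```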
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Understaning a problem involving measurable sets of a bounded sequence of measures Hi guys I am trying to understand my problem,
We have a sequence of measureable sets $\{A_n\}$ Where $\sum _{n=1} ^{\infty}m(A_n) < \infty$
Define a set $B= \{x \in \mathbb R: \# \{ n:x \in A_n \}=\infty \}$
We want to show m(B)=0
My question is: what exactly is $B$? The way I am understanding it, it is the set of $x$ that lie in infinitely many of the $A_n$, but that does not quite make sense to me.
|
Let $f_i = \chi_{A_i}$ be the characteristic function of $A_i$ and $f = \sum_{i=1}^\infty f_i $. Then
$$\int f \,d\mu = \sum_{i=1}^\infty m(A_i) < \infty $$
(The equality can be checked using, e.g., the monotone convergence theorem.) Thus $f$ is integrable, and hence finite a.e. This is what we want, as $B = \{ f= \infty\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Let $G$ be a graph with $n$ vertices and $e$ edges. Let $m$ be the smallest positive integer such that $m \ge 2e/n$. Let $G$ be a graph with $n$ vertices and $e$ edges. Let $m$ be the smallest positive integer such that $m \ge 2e/n$.
Prove that $G$ has a vertex of degree at least $m$.
My approach is sum of all the degree of a graph is $2e$.
$d(v_1)+d(v_2)+......+d(v_n)=2e$.
then if we take the average, will get
$d(v)=2e/n$.
now this will be in between minimum degree and maximum degree in the graph.
Now $m$ also lies in between maximum and minimum degree of the graph.
From here how can we conclude the answer.
Please help me.
|
Suppose that there does not exist a vertex $v$ such that $\deg(v) \geq m$. Then the sum of the degrees of all vertices in $G$ satisfies $\sum_{i=1}^{n}\deg(v_i) \leq (m-1)n$. Since $\sum_{i=1}^{n}\deg(v_i)=2e$, we have $2e\leq(m-1)n$. Divide both sides by $n$ to get $\frac{2e}{n} \leq m-1$. So, we have $\frac{2e}{n}+1 \leq m$. This is a contradiction because if $\frac{2e}{n}$ is an integer, then $m=\frac{2e}{n}$ and if $\frac{2e}{n}$ is not an integer, $m=\frac{2e}{n}+c$ for some rational number $0<c<1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to solve this limit: $\lim_{n \to \infty} \frac{(2n+2) (2n+1) }{ (n+1)^2}$ $$
\lim_{n\to\infty}\frac{(2n+2)(2n+1)}{(n+1)^{2}}
$$
When I expand it gives:
$$
\lim_{n\to\infty} \dfrac{4n^{2} + 6n + 2}{n^{2} + 2n + 1}
$$
How can this equal $4$? Because if I replace $n$ with infinity it goes $\dfrac{\infty}{\infty}$ only.
|
HINT: rewrite it in the form
$$\frac{4+\frac{6}{n}+\frac{2}{n^2}}{1+\frac{2}{n}+\frac{1}{n^2}}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1477946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
}
|
What are polyhedrons? Polyhedrons or three dimensional analogues of polygons were studied by Euler, who observed that if one lets $f$ be the number of faces of a polyhedron, $n$ the number of solid angles and $e$ the number of joints where two faces come together side by side, then $n-e+f=2$.
It was later seen that a serious defect in this definition (and in the proof supplied by Euler) is that it is not at all clear what is a polyhedron in the first place. For example if we consider a cube nested within another cube as a polyhedron then $n-e+f=4$, a counter example to Euler's result.
What will be the modern definition of that polyhedron which will comply with Euler's result?
|
The polyhedra that Euler studied were not the same as what we call polyhedra today. Today a polyhedron is a body with (only) flat polygonal faces.
Those that Euler studied were a subset of these: basically those whose vertices and edges form a planar graph.
There are some complications, allowed by the general definition, that Euler did not consider.
The most striking is that it allows for cutting out the inside of the body, that is, its surface need not be connected.
Another is that you may have donut-like polyhedra, i.e. take a polyhedron and make a prism-shaped hole through it.
A third is that the edges and vertices might not form a connected graph. For example if you glue together two differently sized cubes at one face (so that the smaller sits on a face of the larger).
The second construction will decrease the Euler characteristic and the third will increase it. The consequence is that you could combine these complications to produce a polyhedron that Euler didn't consider, but that nevertheless satisfies his formula.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Simple Proof of FT of Algebra Which proof of the Fundamental Theorem of Algebra requires minimum mathematical maturity and has the best chances to be understood by an amateur with knowledge of complex numbers and polynomials?
|
Suppose the polynomial has no zeros. Then every value of $f(z)$ has a direction.
Draw a large circle, where $z^n$ dominates over the other terms in the polynomial. As you travel around the circle, the direction of $f(z)$ is near the direction of $z^n$, so it rotates $n$ times as $z$ goes once around the circle.
Now gradually shrink the circle, until it is nearly nothing. $f(z)$ is nearly constant as you go around the circle, so it doesn't rotate at all as you go around the circle.
The number of rotations must change from $n$ to $0$ at some point. But unless $f(z)=0$ somewhere, the number of rotations is continuous. So $f(z)=0$ somewhere.
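A numerical illustration of the rotation count (my own sketch, using a sample cubic): track the argument of the polynomial's values around a large circle and a small circle and count the turns.

```python
import numpy as np

def turns(coeffs, radius, samples=20000):
    """How many times p(z) winds around 0 as z runs once around |z| = radius."""
    t = np.linspace(0, 2 * np.pi, samples)
    w = np.polyval(coeffs, radius * np.exp(1j * t))
    theta = np.unwrap(np.angle(w))            # continuous choice of the direction of p(z)
    return (theta[-1] - theta[0]) / (2 * np.pi)

p = [1, 0, -2, 5]                             # sample polynomial p(z) = z^3 - 2z + 5
print(round(turns(p, 10)), round(turns(p, 0.01)))   # 3 and 0: the rotation count changes
```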
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to construct an n-gon by ruler and compass? Since $\cos[\frac{2\pi}{15}] $ is algebraic and equal to $\frac{1}{8}(1+\sqrt{5}+\sqrt{30-6\sqrt{5}})$ we know that the regular 15-gon is constructible by ruler and compass.
Although I know how to construct a hexagon by ruler and compass and have seen the construction of a pentagon done in a youtube video, I can't find a description of a general approach to constructing n-gons where $n>7$ anywhere.
Is there a general approach, geometric algorithm if you like, to constructing an n-gon by ruler and compass?
|
In order to construct an angle equal to $\frac{2\pi}{15}$, you just need to construct an equilateral triangle and a regular pentagon, since:
$$ \frac{2\pi}{15} =\frac{1}{2}\left(\frac{2\pi}{3}-\frac{2\pi}{5}\right).$$
Have a look at the Wikipedia page about contructible regular polygons.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find the max, min, sup and inf of a sequence
Let $\{a_n\}=\{x\mid x\in\mathbb {Q},x^2 <2\}$, find the max, min, sup and inf of a sequence
Clearly, sup is $\sqrt {2} $ and inf is $-\sqrt {2} $, so we have $-\sqrt {2}<a_n <\sqrt {2} $. Since $\mathbb {Q} $ is dense in $\mathbb {R} $, max and min both don't exist, because there are always more numbers between any element of the set and the sup or inf.
After I look at the solution, the answer for max and min both are not "DNE", can anyone tell me why cause I don't see it. Thanks.
|
Okay, I guess my first answer wasn't clear.
If you have a bounded set A one of three things can happen.
1) A has a maximum element x. If so then x is also the sup of A and the sup of A is a member of the set A. (sup means least upper bound and if x is maximal it is a least upper bound.) So max A = sup A = x; x $\in$ A.
The same is true about minimum elements and the inf.
2) A doesn't have a maximum element. Then if A has a sup, x, the sup is not a member of the set. If the metric space has the least upper bound property (as R does) then the sup must exist. So max A does not exist; sup A = x; x $\notin$ A.
The same is true about minimum elements and the inf.
3) If the metric space does not have the least upper bound property (as Q does not) then it is possible (but not always true) that A has neither a maximum element nor a sup. max A does not exist, sup A does not exist; metric space X does not have the least upper bound property.
So if A = {$a_n$} = {$x| x \in Q, x^2 < 2$} is bounded and, I presume, we are viewing it in R which has the least upper bound property, so 3 isn't possible.
So if A has a max element then max = sup = x and $x^2 < 2$. Is this possible?
If not, then max does not exist and sup A is the smallest real number that is larger than all of the elements of A. What real number is that?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How many ways are there of rolling 6 different coloured 6-sided dice so that exactly 3 different values are showing? Any help with this problem would be greatly appreciated. I have tried solving this problem like this:
1st die: 6 choices,
2nd die: 5 choices,
3rd die: 4 choices,
4th die: 3 choices,
5th die: 3 choices,
6th die: 3 choices
But this isn't getting me anywhere! Thanks.
|
Basically, you need to choose 3 distinct #s from 6,
choose patterns of 4-1-1 of a kind, 3-2-1 of a kind and 2-2-2 of a kind and permute them.
$4-1-1$ of a kind: $\binom31\binom63\cdot\frac{6!}{4!} = 1800$
$3-2-1$ of a kind: $\binom31\binom21\binom63\cdot\frac{6!}{3!2!} =7200$
$2-2-2$ of a kind: $\binom63\cdot\frac{6!}{2!2!2!} = 1800$
Add up to get a total of $10800$ ways
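A brute-force check (illustrative only) over all $6^6$ outcomes gives the same total:

```python
from itertools import product

count = sum(1 for roll in product(range(1, 7), repeat=6)   # all 6^6 outcomes of the six dice
            if len(set(roll)) == 3)                         # exactly 3 distinct values showing
print(count)  # 10800
```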
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Prove by Induction of a Total Function I'm given a total function that spans Positive Integers ($\mathbb Z^+$ to Positive Real numbers $\mathbb R^+$)
I'm given that $S(1) = 7$
Also I'm given that
$S(2^k) = S(2^{k-1}) +5$ // equivalent to $2^{k-1} + 5$ (Right hand side)
where $k$ is an element of positive Integers
I'm told to prove that :
$S(2^k) = 5k+7$
I started doing the Bases step:
Since $S(1) = 7$, LHS when $k=0$ $= S(1)$, RHS when $K = 0 = 5(0) + 7 = 7$
Thus, $S(1)$ Holds
Then I did the Induction Hypothesis:
Assume true for $S(n)$
$S(2^n) = 5n+7$
But I'm not sure how to do the inductive step, I have:
Try for $n = k+1$
$S(2^{k+1}) = 5(k+1) +7$ //left hand side reads $2^{k+1}$
Any help would be much appreciated.
|
You need to prove it for $n+1$ assuming that it's true for $n$.
$$S(2^{n+1})=S(2^{n})+5=5n+7+5=5(n+1)+7$$
Where I have used the given condition in the first equality and the induction hypothesis in the second one. Doing this you have proved the given formula using mathematical induction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Square root of two How would you find root 2? I have been told to use a number line. I have tried to visualize it on a number line using triangles. But am unsure of where to go from there.
|
The square root of two can not be expressed as a fraction of whole numbers or as a terminating decimal. It is the classic example of an irrational number. The real numbers are limits of sequences of rationals, but an irrational cannot be written down completely in any finite form. Geometrically, the diagonal of a square with sides of one unit will be the square root of two units; marking off that diagonal on the number line with a compass locates the point $\sqrt2$ exactly. The square root can be estimated as about 1.414... but the decimal never ends and never repeats cyclically.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1478996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Can I construct a bijection between (0,1) and [0,1] using sets and sup/inf? (Proof verification) This question comes from a class I'm in currently:
Find a 1-1 onto map between $(0,1)$ and $[0,1]$.
Let $r \in (0,1)$. Now, define the function $f: (0,1) \rightarrow S$ such that
$$ f(r) = s = \begin{cases}
\lbrace x \in \mathbb{R} \mid r \leq x <1/2 \rbrace, \quad r <1/2 \\
\lbrace x \in \mathbb{R} \mid 1/2 < x \leq r \rbrace, \quad r \geq 1/2
\end{cases}
$$
This is clearly a bijection, for each element of $(0,1)$ is mapped to a unique element $s$, for every $s \in S$. Now, let us define the function $g: S \rightarrow [0,1]$ such that, for $s \in S$,
$$ g(s) =\begin{cases}
\inf (s), \quad s \cap (1/2,1]= \emptyset\\
\sup (s), \quad s \cap [0,1/2)=\emptyset
\end{cases}
$$
This function $g$ is 1-1, for if it were not, then there would exist two unequal infimum or supremum for the same set. This $g$ is also onto, for $g$ maps $S$ onto the whole of $[0,1] = (0,1) \cup \lbrace 0,1\rbrace$. Therefore, $g$ is a bijection, implying that the composition $g \circ f$ is also a bijection between $(0,1)$ and $[0,1]$.
|
First, note that there's a problem: if $r={1\over 2}$, then $f(r)=\emptyset$, so $g(f(r))$ is undefined.
However, that's not the only problem. Let's look at $g\circ f$ directly. If $r<{1\over 2}$, then $f(r)=[r, {1\over 2})$, which has empty intersection with $({1\over 2}, 1]$, so $g(f(r))=r$. Meanwhile, if $r>{1\over 2}$, then $f(r)=({1\over 2}, r]$, which has empty intersection with $[0, {1\over 2})$, so $g(f(r))=r$.
To sum up: $g(f(r))=r$, for all $r$! (Except $r={1\over 2}$ where it isn't defined, but can be easily fixed.) Your statement that $g$ is onto is unjustified.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that if $\gcd(a, b)\mid c$, then the equation $ax + by = c$ has infinitely many integer solutions for $x$ and $y$. Show that if $\gcd(a, b)\mid c$, then the equation $ax + by = c$ has infinitely
many integer solutions for $x$ and $y$.
I understand that if there is one, solution for $ax+by =c$, then there are infinitely many solutions, just because you can solve it in different ways. However, I am not sure how to show this in a proof format.
|
Euclidean algorithm says there is at least one solution.
if $ax + by = c$ then $a(x - b/gcd(a,b)) + b(y + a/gcd(a,b)) = ax + by = c$. Inductively there are infinitely many further solutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Two point form of solving straight line problems in coordinate geometry If we find the slope of a line via two point form we find that when the for different points in the same straight line we have different equations. Why is it so? Say, if the points were $3,5$ and $6,10$ then the equation was $2x-y=1$ and when the points were $4,6$ and $6,11$ then the equation of line was $2x-y=2$. Why this deviation?
|
The two st lines you mentioned are compeltely different so of course you will have two different equations !!
For the first one passing through $ (3,5)$ and $(6,10)$, its equation is $y=5/3x$.
For the second one passing through $ (4,6)$ and $(6,11)$, its equation is $ y=5/2x-4$.
Here is their plot :
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The cardinality of the set Let $\mathbb{G} =\{ a^b + \sqrt{c}: a,b,c\in \mathbb Q \}$
I guess the set $\mathbb{G}$ is countable set, but I can't show it properly.
How to start the proof?
|
The proof is basically the same as for $\mathbb Q$ being countable. You start with an enumerations $a_j$, $b_k$ and $c_l$ (for $a$, $b$ and $c$) in your formula and iterate the triples by first taking those where $j+k+l=0$ then $j+k+l=1$, then $j+k+l=2$ and so on. For each level you have finitely many triples and by this construct all triples will be enumerated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Confused about Euclidean Norm I am trying to understand that the Euclidean norm $\|x\|_2 = \left(\sum|x_i|^2\right)^{1/2}$ is in fact a norm and having trouble with the triangle inequality.
All the proofs I have referred to involve the Cauchy-Schwarz inequality. But it seems that this inequality is proved in an inner product space, which has additional properties to a normed space.
So, my question is whether starting with any (possibly infinite dimensional) vector space over $\mathbb{C}$ and taking any algebraic basis for it, can the triangle inequality be proved for the Euclidean norm without making assumptions about an inner product or an orthonormal basis ?
(I don't think that infinite dimensionality should be a problem as any two vectors have finite representations in an algebraic basis).
Addendum after 2 answers and comments.
Can one take the Cauchy-Schwarz inequality "out of context" as an algebraic statement about two finite lists $(x_i) $ and $(y_i)$ and then apply it to the complex coefficients of any algebraic basis to say that $\sum \left|x_iy_i^*\right|\leq (\sum|x_i|^2)^{1/2} (\sum|y_i|^2)^{1/2}$ and then complete the proof of the triangle inequality ?
|
Hint:
Note that the Euclidean norm is a particular case of a $p$-norm and for these norms the triangle inequality can be proved using the Minkowky inequality.
Anyway, the Euclidean norm is the only $p$-norm that satisfies the parallelogram identity ( see: Determining origin of norm), so it is coming from an inner product.
About the addendum.
In an $n$ dimensional real space we can prove the C-S inequality with simply algebraic methods (see here). So, yes, in this case we can proof the triangle inequality without explicitly using an inner product space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
Chain Rule for Differentials I'm trying this problem from Lee's Smooth Manifolds, but I'm not sure where I'm making a mistake:
Problem: Let $M$ be a smooth manifold, and let $f,g\in C^{\infty}(M)$. If $J\subset\mathbb{R}$ containing the image of $f$, and $h:J\to\mathbb{R}$ is a smooth function, then $d(h\circ f)=(h'\circ f)df$.
My attempt: Let $X\in T_pM$ and $\frac{d}{dt}\in T_{f(p)}\mathbb{R}$. We want equality of $d(h\circ f)X$ and $(h'\circ f)dfX$
\begin{align}
d(h\circ f)X&=d(f^*h)X\\&=f^*(dh)X\\&=dh(f_*X)\\&=dh((dfX)\frac{d}{dt})\\&=(dfX)dh(\frac{d}{dt})\\&=\frac{dh}{dt}dfX
\end{align}
So there's an $f$ missing somewhere but running through that calculation, I don't see where I lost it.
|
In the beginning, it is better to track the base points of all the objects involved in order to make sure that everything "compiles" correctly. Here,
$$ d(h \circ f)|_p (X) = d(f^{*}(h))|_p (X) = (f^{*}(dh))|_p(X) = dh|_{f(p)}(df|_p(X)) = h'(f(p)) \cdot df|_p(X) = ((h' \circ f) \cdot df)|_p(X) $$
so you lost your $f$ by not keeping track how the pullback $f^{*}$ acts.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$F(F(x)+x)^k)=(F(x)+x)^2-x$ I have no idea about this problem. But I feel we have to use chain rule of differentiation here.
The Function $F(x)$ is defined by the following identity:
$F(F(x)+x)^k)=(F(x)+x)^2-x$
The value of $F(1)$ is such that a finite number of possible values of $F'(1)$ can be determined solely from the above information.The maximum value of $k$ such that $F'(1)$ is an integer can be expressed as $\frac{a}{b}$, where $a , b$ are co-prime integers. What is the value of $a+b$
|
$$F((F(x)+x)^k) = (F(x)+x)^2 - x
$$
differentiate both sides to get
$$F'((F(x)+x)^k)\cdot k (F(x)+x)^{k-1}\cdot (F'(x)+1) = 2(F(x)+x)\cdot(F'(x)+1) -1
$$
Let denote $u(x)=F(x)+x$ then
$$ F'(u(x)^k) \cdot k u(x)^{k-1} u'(x) = 2u(x)u'(x) -1
$$
The only appropriate value for $F(1)$ I could think is $F(1)=0$, i.e. $u(1)=1$ , then
$$ F'(1) k u'(1) = 2u'(1) -1$$
that is
$$ k F'(1) (F'(1)+1)=k (F'(1)^2 +F'(1)) = 2(F'(1)+1) -1 = 2F'(1)+1$$
or
$$ k F'(1)^2 + (k - 2) F'(1)-1 = 0$$
Thus, we got finitely many solutions for $F'(1)$ as required:
$$ F'(1)_{\pm}=\frac{2-k \pm\sqrt{k^2+4}}{2 k}$$
Wrong solution
One takes limit of this expression as $k\to\infty$ and get $-1$ for $F'(1)_-$ ( and $0$ for $F'(1)_+$ which we'll drop). Thus, we got $a=-b$ and therefore $a+b=0$.
Correct Solution
(thanks to @A.S. whose comment triggered me to finish it)
The requirement is that $F'(1)$ be integer, let $n$ be that integer, i.e.
$$\frac{2-k \pm\sqrt{k^2+4}}{2 k}=n$$
$$(2kn-2+k)^2=k^2+4$$
$$(2kn-2+k)^2-k^2=4$$
$$((2kn-2+k)-k)((2kn-2+k)+k)=4$$
$$4(kn-1)(kn-1+k)=4$$
$$(kn-1)(k(n+1)-1)=1$$
Solve it for $k\ne 0$ (since $0$ is not the larges solution) to get $$k=\frac{2 n+1}{n (n+1)}.$$
Obviously, the maximum $k$ for an integer $n$ (in the domain of definition) is achieved at $n=1$, thus $$k=\frac{a}b=\frac{3}2$$ and therefore $$a+b=5$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to solve $e^x=kx + 1$ when $k > 1$? It's obvious that $x=0$ is one of the roots. According to the graphs of $e^x$ and $kx + 1$, there's another root $x_1 > 0$ when $k > 1$. Is there a way to represent it numerically.
|
If $t = -x - 1/k$, we have $$t e^t = (-x - 1/k) e^{-x} e^{-1/k} = - e^{-1/k}/k $$
The solutions of this are $t = W(-e^{-1/k}/k)$, i.e. $$x = - W(-e^{-1/k}/k) - 1/k$$ where $W$ is one of the branches of the Lambert W function.
If $k > 1$, $-e^{-1/k}/k \in (-1/e,0)$, and there are two real branches: the $0$ branch (which gives you $x=0$), and the $-1$ branch, which gives you the solution you want.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1479894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Using arithmetic mean>geometric mean Prove that if a,b.c are distinct positive integers that
$$a^4+b^4+c^4>abc(a+b+c)$$
My attempt:
I used the inequality A.M>G.M to get two inequalities
First inequality
$$\frac{a^4+b^4+c^4}{3} > \sqrt[3]{a^4b^4c^4}$$
or
$$\frac{a^4+b^4+c^4}{3} > abc \sqrt[3]{abc}$$ or new --
first inequality
$$\frac{a^4+b^4+c^4}{3abc} > \sqrt[3]{abc}$$
second inequality:
$$\frac{a+b+c}{3} > \sqrt[3]{abc}$$
I am seeing the numbers in the required equation variables here but am not able to manipulate these to get the inequality I want?? Please direct me on which step should I take after this??
|
I just want to add another way to solve the problem.
$$\begin{align} & a^4 + b^4 + c^4 = (a^2)^2+(b^2)^2+(c^2)^2 \\& \ge a^2 b^2 + b^2c^2 + c^2a^2 \text{ (From Cauchy-Schwarz)}\\&= (ab)^2 + (bc)^2 + (ca)^2 \\ & \ge abbc + bcca + caab\ (\because a^2 + b^2 + c^2 \ge ab + bc + ac) \\ &= abc(a+b+c), \end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
What did this sum become: $\left(\sum_{n=a}^{b} z_n\right)^2 $? Can I see something meaningfull about this sum? Where is it equal to?
$$\left(\sum_{n=a}^{b} z_n\right)^2$$
Is it equal to: $\sum_{n=a^2}^{b^2} z_n^2$ or something else, I've no idea how to deal with it.
Thanks for the help.
|
$$\left(\sum_{n=a}^b z_n\right)^2 = \left(\sum_{j=a}^b z_j\right)\left(\sum_{k=a}^b z_k\right) = \sum_{j,k=a}^b z_jz_k$$
If you want, you can pull the squares and reduce repetitive terms into the following form:
$$\left(\sum_{n=a}^b z_n\right)^2 = \sum_{n=a}^b z_n^2 + 2\sum_{j=a+1}^b\left[\sum_{k=a}^{j-1} z_jz_k\right]$$
Unfortunately, there aren't many simplifications (in general) beyond this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Express the following product as a single fraction: $(1+\frac{1}{3})(1+\frac{1}{9})(1+\frac{1}{81})\cdots$
I'm having difficulty with this problem:
What i did was:
I rewrote the $1$ as $\frac{3}{3}$
here is what i rewrote the whole product as:
$$\left(\frac{3}{3}+\frac{1}{3}\right)\left(\frac{3}{3}+\frac{1}{3^2}\right)\left(\frac{3}{3}+\frac{1}{3^4}\right)\cdots\left(\frac{3}{3}+\frac{1}{3^{2^n}}\right)$$
but how would i proceed after this?
|
Write it in base-3 notation. The factors are $$1.1,\; 1.01,\;1.0001,\;1.00000001\,,...,\;1.(2^n-1\text{ zeros here})1,...\;.$$They are easy to multiply: you get successively$$1.1,\; 1.111,\;1.1111111\,,...,\;1.(2^n-1\text{ ones here}),...\;.$$The limit is $1.111...$, which you probably recognize as the limit of the geometric series with first term $1$ and common ratio $\frac13$, namely $\frac32$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Proving Logarithims I'm trying to figure out how to go about proving this statement:
$$n^{\log(a)} = a^{\log(n)}$$
I'm told that I cannot prove from both sides. I tried to $\log$ the first side to get:
$\log(n)\log(a)$
But I'm not sure where to go from here, any ideas would be much appreciated.
|
Suppose $n=a^x$, then $\log n=x\log a$ by taking logs.
Then, by substitution, $n^{\log a}=(a^x)^{\log a}=\dots$
How to find the method - well you start with $n$ to some power, and you want to end with $a$ to some power, so it is natural to express $n$ as a power of $a$, and once this is done the result drops out.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $\forall z \in \Bbb C : \lvert \Re(z) \rvert \le \lvert z \rvert \le \lvert \Re(z) \rvert + \lvert \Im(z)\rvert$ I'm having difficulties trying to prove these two complex inequalities :
$\forall z \in \Bbb C :$
$$\lvert \Re(z) \rvert \le \lvert z \rvert \le \lvert \Re(z) \rvert + \lvert \Im(z)\rvert$$
$$\lvert \Im(z) \rvert \le \lvert z \rvert \le \lvert \Re(z) \rvert + \lvert \Im(z) \rvert$$
By the defintion of the modulus of a complex number we have :
$$\lvert z \rvert = \lvert x+iy \rvert = \sqrt{x^2+y^2} = [(\Re(z))^2 + (\Im(z))^2]^{1/2}$$
If I square both side of the equality I get : $${\lvert z \rvert}^2 = (\Re(z))^2 + (\Im(z))^2$$ but I don't know how to continue from here.
|
You have
$$ |z| = \sqrt{(\Re z)^2 + (\Im z)^2} \geq \sqrt{(\Re z)^2} = |\Re z|$$
and
$$ |z| = \sqrt{(\Re z)^2 + (\Im z)^2} \leq \sqrt{|\Re z|^2 +2|\Re z| |\Im z|+ |\Im z|^2} = \sqrt{(|\Im z| + |\Re z|)^2} = |\Im z| + |\Re z|. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Given Column Space and Null Space construct a matrix. Construct a matrix whose column space contains (1 1 1) and (0 1 1) and whose null space contains (1 0 1) and (0 1 0), or explain why none can exist.
So I am not sure if I did this right, but after applying the null space to A I got that
$A=\begin{bmatrix}
a& 0 &-a\\
b& 0 &-b\\[0ex]
c& 0 &-c
\end{bmatrix}.$
But now I am not sure how to apply the column space to find a matrix. I couldn't find any examples of how to do this so I am just kind of guessing.
|
Before beginning, I just consider the case where the matrix we want to construct is a $3\times 3$ matrix (since you tried to construct such a matrix). I claim that we cannot construct such a matrix.
Indeed, since the $2$ vectors which belong to the column space are linearly independent, we have that $$2\le \text{ rank } A.\tag 1$$
Also, since the $2$ vectors which belong to the nullspace of $A$ are linearly independent, we have that:
$$2\le \text{ null } A\tag 2.$$
Adding $(1),(2)$ yields:
$$4\le \text{ rank } (A) + \text{ null }(A).\tag 3$$
By rank-nullity theorem we have: $$\text{rank } A + \text{ null } A = 3.$$
Thus, $(3)$ cannot hold!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.