H: Weak convergence definition
I have the following question, for example: let $f_n\in L^2$ be a bounded sequence of real functions. Then we know that there is some $f$ (in $L^2$?) such that $f_n$ converges weakly to $f$ in $L^2$, i.e. ..., and now I do not know if I understand it properly. Does weak convergence mean:
a) for all $g\in L^2$ scalar product $(f_n,g)=\int f_n(x)g(x)\,dx\rightarrow \int f(x)g(x)\,dx=(f,g)$
or
b) for all $x$, $(f_n(x),g(x))=f_n(x)\cdot g(x)\rightarrow f(x)g(x)=(f(x),g(x))$?
After writing this, I see that I really do not understand this...
AI: In general, if $X$ is a normed space then $x_n\to x$ weakly if for every bounded linear functional $\varphi\in X^*$ we have $\varphi(x_n)\to \varphi(x)$.
Now, $L^2$ is a Hilbert space, so by Riesz representation theorem all the bounded linear functionals have the form $f\to (f,g)=\int_{\mathbb{R}}fg$ for some $g\in L^2$. So here $f_n\to f$ weakly means $(f_n,g)\to (f,g)$ for all $g\in L^2$.
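To see concretely why (a) is the right reading, here is a small numerical sketch (my own illustration, not from the question), using the standard example $f_n(x)=\sin(nx)$ on $[0,2\pi]$: it is bounded in $L^2$ and converges weakly to $f=0$ by the Riemann-Lebesgue lemma, yet $\sin(nx)$ has no pointwise limit for almost every $x$, so (b) fails.

```python
# Sketch: the inner products (f_n, g) tend to 0 for a fixed g in L^2([0, 2*pi]),
# even though f_n(x) = sin(n*x) does not converge pointwise.
import numpy as np
from scipy.integrate import quad

def g(x):
    return np.exp(-x)  # an arbitrary fixed element of L^2([0, 2*pi])

for n in (1, 10, 100, 1000):
    # weight='sin' tells quad to integrate g(x)*sin(wvar*x), accurate for large n
    inner, _ = quad(g, 0, 2 * np.pi, weight='sin', wvar=n)
    print(f"n = {n:4d}   (f_n, g) = {inner:9.6f}")  # equals n(1 - e^{-2 pi})/(n^2 + 1) -> 0
```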
|
H: Question about the second derivative of functions at a local max/min
Is the following true: if $f(x)$ is a twice differentiable function with a local maximum/minimum at $a$, then the function $g(x)=\frac{d^2f}{dx^2}$ will attain a local minimum/maximum at $a$.
Q: Is the above statement true? If so, how would you prove it? If not, what function would be a counterexample? For most functions it seems to be true, and if we had to prove it somehow I think you would have to use some property from analysis. By the second derivative test the sign of $g(x)$ is clear, but it does not tell us anything else. And maybe you would have to use the fact that $f'(a)=0$.
AI: For a counterexample, consider $f(x)=x^3-x$ and $a=1/\sqrt{3}$. Then $f$ has a local minimum at $a$ but $f''(x)=6x$, so $f''$ does not have any minima or maxima.
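A quick symbolic verification of this counterexample (a sketch with sympy):

```python
# f(x) = x**3 - x has a local minimum at a = 1/sqrt(3), but f'' = 6x is
# strictly increasing and so has no local extrema at all.
import sympy as sp

x = sp.symbols('x')
f = x**3 - x
a = 1 / sp.sqrt(3)

print(sp.diff(f, x).subs(x, a))     # 0  -> a is a critical point
print(sp.diff(f, x, 2).subs(x, a))  # 2*sqrt(3) > 0 -> local minimum of f
print(sp.diff(f, x, 2))             # 6*x: monotonic, no max or min anywhere
```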
|
H: If $S$ and $M$ are positive definite matrices, why do $S$ and $M^{-1/2}SM^{-1/2}$ share the same eigenvalues?
Following an argument to find the maximum of the generalized Rayleigh quotient, $\frac {x^tSx}{x^tMx}$, I found the statement:
if $S$ and $M$ are positive definite, then $S$ and $M^{-1/2}SM^{-1/2}$ share the same eigenvalues.
How can it be proved?
AI: The statement is obviously false. As a counterexample, consider any positive definite $S,M$ for which $S \neq I$ and $S = M$.
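(The statement that does hold in the Rayleigh-quotient argument is that $M^{-1}S$ and $M^{-1/2}SM^{-1/2}$ are similar via conjugation by $M^{1/2}$, hence share eigenvalues.) A numeric check of the counterexample above, with the assumed concrete choice $S=M=\operatorname{diag}(1,4)$:

```python
# S = M = diag(1, 4): both positive definite, S != I, yet M^{-1/2} S M^{-1/2} = I.
import numpy as np

S = np.diag([1.0, 4.0])
M = S
M_inv_half = np.diag(np.diag(M) ** -0.5)   # M^{-1/2} for a diagonal M

print(np.linalg.eigvalsh(S))                            # [1. 4.]
print(np.linalg.eigvalsh(M_inv_half @ S @ M_inv_half))  # [1. 1.]: different spectrum
```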
|
H: Gauss Lemma - Do Carmo's Riemannian geometry, use of parallel transport?
I was having exactly the same doubt as this question. I don't understand specifically why
$$
(d \exp_p)_v(v)=v
$$
I worked out exactly the same math as wikipedia and I ended up with
$$
(d \exp_p)_v(v) = \frac{d}{dt}\left. \left(\gamma((t+1),p,v) \right) \right|_{t=0}
$$
The equation is based on the use of the curve $\alpha(t) = (t+1)v$ where $v \in T_p M$. Apparently the key to filling the gap is to somehow use parallel transport, but I couldn't figure that out from the given answer.
The specific bit I can't figure out is that apparently, from one of the comments, the result of $(d \exp_p)_v(v)$ is actually the parallel transport of $v$ along the geodesic passing through $\exp_p(v)$.
Can anyone clarify?
AI: It might be easier than you think if you stick to do Carmo's text. The key claim is that
$$
\langle d(\text{exp}_p)_v(v), d(\text{exp}_p)_v(w_T) \rangle = \langle v,w_T \rangle,
$$
for any $w_T = av \in T_pM$. By the fact that $\gamma(t) = \text{exp}_p(tv)$, we can compute $d(\text{exp}_p)_v(v)$ by taking $v$ as the initial velocity of the curve $\alpha(t) = v+vt$ starting at $v$. We will obtain $d(\text{exp}_p)_v(v) = \gamma'(1)$. So we have
\begin{align}
\langle d(\text{exp}_p)_v(v), d(\text{exp}_p)_v(w_T) \rangle &= a \, \langle d(\text{exp}_p)_v(v), d(\text{exp}_p)_v(v) \rangle \\
&=a \, \langle \gamma'(1),\gamma'(1) \rangle \\
&= a\, \langle \gamma'(0),\gamma'(0) \rangle\\
&= a\, \langle v,v \rangle\\
&= \langle v,w_T \rangle,
\end{align}
where the third equality holds because $\langle \gamma'(t),\gamma'(t) \rangle$ is constant along geodesic $\gamma$.
|
H: Solve the following system of non-linear equations: $x^2+4xy+y^2=13$, $2x^2+3xy=8$.
Solve the following system of non-linear equations: $$\begin{align}x^2+4xy+y^2&=13\\ 2x^2+3xy&=8.\end{align}$$
I started off this problem by rearranging and substituting, but I'm stuck. I think I have to reduce this system to a single equation in one variable, but I don't know how to do it; please help me.
AI: Get rid of $13$ and $8$ by multiplying the first equation by $-8$ and the second one by $13$. Add both. Factorizing gives $(9x+8y)(2x-y)=0$.
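Finishing from the hint (a sketch): the branch $2x-y=0$, i.e. $y=2x$, substituted into the second equation gives $8x^2=8$ and hence the real solutions $(1,2)$ and $(-1,-2)$, while the branch $9x+8y=0$ forces $y^2<0$ and yields no real solutions. A sympy check:

```python
# Solve the system symbolically; with real symbols, only the 2x - y = 0
# branch survives.
import sympy as sp

x, y = sp.symbols('x y', real=True)
eqs = [x**2 + 4*x*y + y**2 - 13, 2*x**2 + 3*x*y - 8]
print(sp.solve(eqs, [x, y]))   # [(-1, -2), (1, 2)]
```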
|
H: True / false question about a Cauchy sequence in Real Analysis
I am solving assignments in real analysis but I am unable to see how to solve this question.
Let $f: ( 0, \infty ) \to \mathbb{R}$ be a continuous function. Does $f$ map every Cauchy sequence to a Cauchy sequence?
I tried taking a Cauchy sequence $\{x_n\}$ and using $|x_n - x_m| < \epsilon$
for all $n, m > N$ ($N$ a natural number). But how can I transfer the Cauchy property of $\{x_n\}$ to $\{f(x_n)\}$? I am unable to see it.
Any hints please?
AI: Take the map $x \mapsto \frac{1}{x}$ and the Cauchy sequence $\{\frac{1}{n}\}_{n \ge 1}$: its image $\{n\}_{n \ge 1}$ is not Cauchy.
|
H: image of differentiable manifold is again a manifold
Let $1\leq \mu <n,\, M\subset \mathbb{R}^n$ be a $\mu$-dimensional differentiable submanifold of $\mathbb{R}^n$ (that is, all the charts $\varphi$ are immersions: $\varphi\in C^1$ and the rank of $D\varphi$ equals $\mu$). Now let $f\in C^1(\mathbb{R}^n,\mathbb{R}^n)$ be invertible with $f^{-1}\in C^1(\mathbb{R}^n,\mathbb{R}^n)$.
Show that $f(M)$ is a $\mu$-dimensional submanifold of $\mathbb{R}^n$.
The fact that given $x\in f(M)$ we can find a $C^1$-homeomorphism $\psi:=f\circ\varphi$ follows almost immediately from the assumptions. However, my problem is showing that such a $D(f\circ\varphi)$ has rank $\mu$ and I haven't been able to show this so far.
AI: By the chain rule, $D(f\circ \varphi) = Df\circ D\varphi$. Since $f$ is $C^1$ with $C^1$ inverse, by the chain rule, $D(\mathrm{Identity})= D(f\circ f^{-1}) = Df\circ D(f^{-1})$, so $Df$ is an isomorphism. The rest is a question about linear algebra: if $T$ is an isomorphism, and $S$ is a linear map of rank $\mu$ then $T\circ S$ has rank $\mu$. Apply this to $T = Df$ and $S = D\varphi$.
|
H: Does there exist an operation that could turn the set of all negative real numbers into an abelian group?
The answer is no for the familiar operations of addition and multiplication. But could there exist any other operation that turns the set of all negative real numbers into an abelian group? If yes, what is it? If no, how could I prove it?
AI: If you genuinely allow any operation, then the answer is yes for a silly (but important!) reason: we can lift structure along bijections. Specifically, fix some bijection $f:\mathbb{R}\rightarrow\mathbb{R}_{<0}$. Then we can define an addition map $\oplus$ as follows: $$a\oplus b=f(f^{-1}(a)+f^{-1}(b)).$$ The map $f$ shows that "$\mathbb{R}$ with $+$ looks identical to $\mathbb{R}_{<0}$ with $\oplus$" - or, in more precise language, the two structures $(\mathbb{R};+)$ and $(\mathbb{R}_{<0};\oplus)$ are isomorphic. A fortiori they have the same general algebraic properties: in particular, $(\mathbb{R}_{<0};\oplus)$ is an abelian group since $(\mathbb{R};+)$ is.
A notational comment: the expression "$(A; [\DeclareMathOperator{\stuff}{stuff}\stuff])$" indicates that $A$ is the underlying set of the structure involved and $[\stuff]$ is the list of operations and relations on that set, with different things in $[\stuff]$ being separated by commas (in contrast with the semicolon separating $A$ and $[\stuff]$). So, for example, "$\mathbb{R}$ as an ordered ring" would be written as "$(\mathbb{R};+,\cdot,<)$."
Basically, when we ask "Does such-and-such structure exist on the set $X$?," all that really matters is the cardinality of $X$: whenever $X_1,X_2$ are in bijection with each other, the answer for $X=X_1$ will be the same as the answer for $X=X_2$.
Things get more interesting if we ask for the desired structure to satisfy some additional properties. For example, we might want the relevant operations - the (binary) group operation $\oplus$ and the corresponding (unary) inverse operation - to be continuous with respect to the usual topology on $\mathbb{R}_{<0}$. Now it's not the case that any old bijection $\mathbb{R}\rightarrow\mathbb{R}_{<0}$ will do, since a really messy bijection might turn the continuous $+$ into something highly discontinuous; we need to be a bit more careful. As a matter of fact, however, we can find one which does the job (consider the map $f(x)=-e^x$).
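A tiny numeric sketch of that last remark, with the suggested $f(x)=-e^x$: the transported operation works out to $a\oplus b=-ab$, with identity $f(0)=-1$ and the inverse of $a$ equal to $1/a$, all staying inside $\mathbb{R}_{<0}$ and all continuous.

```python
# Transported group law on the negative reals via f(x) = -exp(x).
import math

def oplus(a, b):
    # f(f^{-1}(a) + f^{-1}(b)) with f^{-1}(y) = log(-y); simplifies to -(a*b)
    return -math.exp(math.log(-a) + math.log(-b))

a, b, c = -2.0, -0.5, -7.0
print(oplus(a, b))        # ~ -1.0, i.e. -(a*b)
print(oplus(a, -1.0))     # ~ -2.0: -1 is the identity element
print(oplus(a, 1 / a))    # ~ -1.0: 1/a is the inverse of a
print(math.isclose(oplus(oplus(a, b), c), oplus(a, oplus(b, c))))  # True: associativity
```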
|
H: True /False question about Compact sets and continuous functions
I am trying some assignment questions and I am unable to think on how can I solve this problem.
Question: Let $K$ be a subset of $\mathbb{R}^{n}$ such that every real-valued continuous function on $K$ is bounded. Is $K$ then compact?
I think this statement is true: if it were false, it seems impossible to give a counterexample, since the condition has to be verified for every real-valued continuous function.
But I have no clue how to prove it.
Please give hint.
AI: Hint: $K$ is compact if and only if it is closed and bounded. You should devise a continuous function such that if $K$ is not bounded, the function is unbounded. Likewise, if $K$ is not closed, you can construct a continuous function on $K$ which is unbounded.
|
H: The smallest integers having $2^n$ divisors
Problem: For any integer $d > 0,$ let $f(d)$ be the smallest possible integer that has exactly $d$ positive divisors (so for example we have $f(1)=1, f(5)=16,$ and $f(6)=12$). Prove that for every integer $k \geq 0$ the number $f\left(2^k\right)$ divides $f\left(2^{k+1}\right).$
My Solution: We begin by observing that there must exist a prime $p$ such that $2^k-1=v_p(f(2^n))<v_p(f(2^{n+1}))=2^{\ell}-1$. Otherwise, we would have $v_p(f(2^n))\ge v_p(f(2^{n+1}))$ for all primes $p$. But this isn't possible, since $f(2^{n+1})$ has more divisors than $f(2^{n})$ (by definition). Now consider the number $N=\frac{f(2^{n+1})}{p^{2^{\ell-1}}}$. This number has $2^n$ divisors, so we must have $N\ge f(2^n)$. Now consider the number $f(2^n)p^{2^k}$. This number has $2^{n+1}$ divisors. So we must have $f(2^n)p^{2^k}\ge f(2^{n+1})=N\cdot p^{2^{\ell-1}}\ge f(2^n)p^{2^{\ell-1}}$. Thus we must have $N=f(2^n)$ and $\ell=k+1$, and hence $f(2^n)\mid f(2^{n+1})$.
But if my solution is correct, then there is nothing special about $2$. Thus I am skeptical about whether my proof is correct. Can someone please point out any error?
AI: You do use something special about $2$, namely that it is prime:
If you replace $2$ with a different prime $q$,
you end up with
$$ f(q^n)p^{q^{k+1}-q^k}\ge f(q^{n+1})=Np^{q^{\ell}-q^{\ell-1}}\ge f(q^n)p^{q^{\ell}-q^{\ell-1}},$$
so $k\ge \ell-1$ and again $k=\ell-1$ and $f(q^{n+1})=p^{(q-1)q^k}f(q^n)$.
|
H: Solve $y'=-3x^2\cdot y^2e^{-\frac{1}{y}}$
Solve the initial value problem $y'(x)=-3x^2\cdot y(x)^2e^{-\frac{1}{y(x)}}$ with $y(e^{1/3})=1$ and, for each solution $\varphi$, find the maximal interval such that $\varphi$ is defined and solves the differential equation.
I've tried all the common methods, like separation of variables, but they seem to fail pretty badly. What's more, the above ODE is not linear, so that already rules out a lot of approaches one could use to solve it, which is why I haven't found a solution so far (nor the corresponding maximal interval(s)).
Edit: As you can see from the answers below, separation of variables does in fact work.
AI: $$\frac{dy}{dx}=-3x^2y^2e^{-\frac1y}$$
$$\frac 1{y^2}e^\frac1ydy=-3x^2dx$$
When you integrate the left-hand side, use the substitution $z=\frac 1y$.
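For completeness, a sketch of the rest: with $z=\frac1y$ the left-hand side integrates to $-e^{\frac1y}$, so
$$-e^{\frac1y}=-x^3+C\implies e^{\frac1y}=x^3+\tilde C.$$
The initial condition $y(e^{1/3})=1$ gives $e=e+\tilde C$, hence $\tilde C=0$, $\frac1y=\ln(x^3)=3\ln x$, and
$$\varphi(x)=\frac{1}{3\ln x},$$
which solves the initial value problem on the maximal interval $(1,\infty)$ (the solution blows up as $x\to1^+$).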
|
H: Compact and convergent countable filter base
Prove that if a metric space $(S,d)$ is compact, then every countable filter base is included in a convergent one.
My efforts:
I suppose the "one" in the problem is also countable; otherwise I can extend the base to an ultrafilter and directly use the theorem relating convergent ultrafilters and compactness.
Given a countable filter base $\{B_n\}$, we can pick a point $x_n$ from the element $B_n$. Since $(S,d)$ is a compact metric space, every sequence has a convergent subsequence. Thus $\{x_n\}$ has a convergent subsequence, say converging to $x$. We can add the open balls $B(x, 1/n)$ to $\{B_n\}$. The new collection is still countable, but it may not be a filter base.
AI: Even if you interpret convergent one as convergent countable filter base, you can use the easy solution with a small modification. First extend the countable filter base $\mathscr{B}$ to an ultrafilter $\mathscr{U}$, and let $p$ be the point to which $\mathscr{U}$ converges. Then $B\left(p,\frac1n\right)\in\mathscr{U}$ for each $n\in\Bbb Z^+$, so just extend $\mathscr{B}$ to $\mathscr{B}\cup\left\{B\left(p,\frac1n\right):n\in\Bbb Z^+\right\}$.
|
H: Change of speed problem (differential equations)
Problem
Let $V: \mathbb{R}^n \to \mathbb{R}^n$ be a Lipschitz continuous vector field, and let $k: \mathbb{R}^n \to (0,\infty)$ be a positive Lipschitz continuous function. Let ${x_t}$ be a solution to $\dot{x} = V(x)$ with initial condition $\xi$, and let ${y_t}$ be a solution to $\dot{x}=k(x)V(x)$, also with initial condition $\xi$. Show that $y_t=x_{I(t)}$, where $$I(t)=\int_{0}^{t} k(y_s)ds$$
[Note: $x_{I(t)}$ means $x(I(t))$.]
Attempt:
By the uniqueness theorem, we just have to show that $x_{I(t)}$ satisfies $\dot{x}=k(x)V(x)$ (that it then satisfies the initial condition is obvious).
Let $z(t)\equiv x(I(t))$.
$\dot{z}(t)= \dot{x}(I(t))k(y(t))=V(x_{I(t)})k(y(t))$ where the first equality follows by the chain rule and the fundamental theorem of calculus, and the second by definition of $x$. We need to show this equals $V(z(t)) \cdot k(z(t)) = V(x_{I(t)}) \cdot k(x_{I(t)})$. I know I'm going to have to use the definition of $\dot{y}$ somewhere but I'm not seeing where. I guess I can try to show $k(y(t))=k(z(t))$ but I don't know if that's any easier than the original.
The intuitive idea is that $k$ speeds up the trajectory so $y(t)$ will be like $x(I(t))$ where $I(t)$ scales up $t$ based on an average of the speeding up factor $k$ along the path.
Hints welcome, thanks!
AI: The choice of notation is almost guaranteed to cause confusion.
Let $x$ be the solution of $\dot{a}(t) = V(a(t))$ subject to $x(0) = \xi$.
Let $y$ be the solution of $\dot{a}(t) = k(a(t))V(a(t))$ subject to $a(0) = \xi$.
Let $w$ be the solution of $\dot{a}(t) = k(y(t))V(a(t))$ subject to $a(0) = \xi$, where $y$ is the $y$ from the previous paragraph.
Let $I(t) =\int_0^t k(y(s))ds$.
Note that $w=y$.
Let $z(t) = x(I(t))$, we want to show that $z=y$.
Note that $\dot{z}(t) = \dot{x}(I(t))\dot{I}(t) = V(x(I(t)))k(y(t)) = k(y(t))V(z(t)) $ and $z(0) = \xi$.
Hence $z=w=y$.
|
H: Solving $x+x\ln(x)+\ln(x)=y$ for $x$
For $x,y\in\mathbb{R^+}$ , consider the equation:
$x+x\ln(x)+\ln(x)=y$
with constant $y$,
which is the same as
$x+\ln(x^{x+1})=y$
How do I solve for $x$?
AI: By differentiating, we can discover that $y$ is strictly increasing as a function of $x$, and it's easy enough to see that $y \to -\infty$ as $x \to 0$ and that $y \to \infty$ as $x \to \infty$; so the function is bijective with range $\mathbb{R}$. But I'd be absolutely astonished if there were a closed form for it. Certainly Mathematica couldn't find one.
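Given the monotonicity, a numerical inversion is straightforward; a sketch with scipy's bracketing root finder (the bracket below covers, e.g., $y=5$; widen it for more extreme $y$):

```python
# Solve x + x*log(x) + log(x) = y numerically on (0, infinity).
import math
from scipy.optimize import brentq

def F(x):
    return x + x * math.log(x) + math.log(x)

y = 5.0
x = brentq(lambda t: F(t) - y, 1e-12, 1e6)  # F is strictly increasing, so the root is unique
print(x, F(x))                              # x ~ 2.29..., and F(x) reproduces y
```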
|
H: Question on linear span of subsets of $\Bbb{R}^3$
Let $V$ = $\Bbb{R}^3$. Consider
$S_1 = \{(x,y,z) \mid x^2 + y^2 + z^2 = 1\}$ and
$S_2 = \{(x,y,z) \mid z = 0\}$. Then choose the correct option(s):
1. $L(S_1 \cup S_2) = V$
2. $L(S_1)$ is a subspace of $L(S_2)$
3. $\dim(L(S_1 \cap S_2)) = 2$
4. $L(S_1) \cap L(S_2) = L(S_1 \cap S_2)$
This has been my attempt.
Since $S_2$ is a subspace, $L(S_2)$ is just $S_2$.
Since $(0,0,0)$ does not belong to $S_1$, $S_1$ is not a subspace.
Now $L(S_1)$ contains $3$ linearly independent vectors, therefore its dimension is $3$ and hence its linear span is $V$.
Now, as $S_2$ is two-dimensional, the $2$nd option is incorrect.
How to check the remaining options?
AI: $S_1$ is the sphere of radius $1$ centred at $0$, and $S_2$ is the $xy$-plane. The intersection $S_1 \cap S_2$ is the unit circle centred at $0$ in the $xy$-plane. $L(S_1 \cap S_2)$ is the whole $xy$-plane, since the circle contains the standard basis vectors of the plane; the same set is the span of $S_2$. $L(S_1)$ is the whole space by the same logic, and hence so is the span of $S_1 \cup S_2$. $L(S_1) \cap L(S_2)$ is also the $xy$-plane, by the above. So only option 2 is wrong.
|
H: Show that if $HK$ is an abelian group ($H$,$K$ have coprime orders), then $H$ is in $N_G(K)$
My question stems from this excerpt from Dummit. I follow all of it, except the claim that $PQ$ is a subgroup of the normalizer. I feel like the solution is some simple algebraic one but I cannot seem to find it.
AI: Let $h\in H$. To show $h\in N_G(K)$ we have to show that $hK=Kh$. So let $x\in hK$. Then we have $x=hk$ for some $k\in K$. Since $HK$ is abelian, also $x=hk=kh\in Kh$. So this shows $hK\subseteq Kh$. Similarly we can show the reverse inclusion.
|
H: The meaning of "around" in the uniqueness and existence theorem
While learning the existence and uniqueness theorem, I started to solve some examples and I noticed that the following phrase is always repeated: "$f(x,y)$ and $\frac{\partial f}{\partial y}$ are continuous around $(x_0,y_0)$".
I do not understand what the mathematical meaning of "around" is, and why it is allowed to be written.
For example, take $y'=\frac{1}{y-1}$ with initial condition $y(0)=2$. Here $f(x,y)$ is not continuous at any point $(x,1)$, but is it possible to say that it is continuous "around" another point, $(0,2)$ for example? And how do I know that $(0,1)$ is not "around" $(0,2)$?
*The idea here is not to solve the equation but to understand the meaning of "around".
AI: It usually means "topological neighborhood". It means there is an open set $U$ in the topology that contains the point $x$, and you are considering a set $N\supset U$ that is generally open, called a neighborhood.
The idea is that you want the properties of an open ball, but you don't want to confine all of your definitions to open balls. Another way to think about it, is that the author is claiming there is a reasonably large open set (it contains a ball with radius $\varepsilon>0$) around $x$ for which the statement is true, but they are not going to solve for the set for you. Why is this happening? Exactly specifying the domain on which a theorem is true is often tedious or impossible or depends on the context, while restricting all theorems specifically to open balls would be misleadingly specific. Neighborhoods are in between.
|
H: Why do we need any test function to be infinitely many times differentiable?
I have started learning the very basic distribution theory and I encountered the idea of a test function and distribution. I am not entirely sure why the following definition of a test function is necessary.
A test function is a $C^{\infty}$ function such that it admits a compact support.
I was wondering why we need it to be in $C^{\infty}$? Is it because if $f$ is a generalised function we would want $f^{(n)}$ to also be a generalised function for any $n$? But even if that is true, what is the benefit of this?
AI: I would say that it also allows us to have the largest possible space: the smaller the space of test functions, the bigger the dual space. For example, bounded measures can be seen as the dual of the continuous functions, and so there are fewer measures than distributions. But yes, it also allows you to differentiate as many times as you want; this is often useful in applications, such as partial differential equations. If you prefer, knowing that the set of distributions is stable under differentiation tells you that distributions can be as irregular as you want.
Take, for example, the Dirac delta. You can see the Dirac delta as a functional on continuous functions: $\delta_0\in (C^0)'$ defined for every $\varphi\in C^0$ by
$$
\langle \delta_0,\varphi\rangle = \varphi(0)
$$
If you do not need to take derivatives of it, this approach is sufficient. If you need to take its first derivative, you can use the same definition with $\varphi\in C^1$, and then you can define its derivative $\delta_0'$. One of the goals of distribution theory is to have one big space containing all distributions, so as not to have to keep track of the precise space. Since the space is bigger, you have more objects available in it, and so it is easier to prove existence theorems, for PDEs for example. In exchange, however, you lose knowledge about the regularity of your solution.
Then, to study regularity, one often uses Sobolev spaces $W^{s,p}$ (or more refined scales). The set of distributions of order $n$ (i.e. the distributions for which you just need $\varphi\in C^n$) contains the spaces $W^{-n,p}$.
This is also useful for generalizing the Fourier transform. The Fourier transform of a function as simple as $\mathbf{1}_{\mathbb{R}_+}$ is a distribution of order $1$, and in general it is not simple to know in exactly which space the Fourier transform of a function not in $L^p$ with $p\leq 2$ will live, so it is good not to have to worry about regularity in this case.
|
H: Is my proof of $|a| \leq b \iff -b \leq a \leq b$ correct?
Background
Hello, I'm teaching myself proofs, and am unsure whether or not my proof of $\forall a,b \in \mathbb{R}(|a| \leq b \iff -b \leq a \leq b)$ is correct. Your feedback is greatly appreciated.
Proof
Proof. $(\rightarrow)$ Suppose $a,b \in \mathbb{R}$ and $ |a| \leq b$. We consider both cases.
Case 1. $a \geq 0$. Then $|a| = a \leq b$ by definition of absolute value. Since $a \geq 0$, then $-a \leq 0$ by multiplying the inequality by $-1$. Similarly, multiplying $a \leq b$ by $-1$ gives us $-a \geq -b$, or equivalently $-b \leq -a$. Since $-a \leq 0$ and $a \geq 0$, or equivalently $0 \leq a$, then $-a \leq a$ by transitivity. We now have the following inequality,
$$
-b \leq -a \leq a \leq b
$$
Which means $-b \leq a \leq b$ by transitivity.
Case 2. $a < 0$. Then, $|a| = -a \leq b$ by definition of absolute value. Multiplying both sides of the inequality by $-1$, we get $a \geq -b$, or equivalently, $-b \leq a$. Since $-b \leq a$ and $a < 0$, then $-b < 0$. Also, since $-b < 0$, multiplying by $-1$ means $b > 0$, or equivalently $0 < b$. But since $a < 0$, then $-a > 0$. By transitivity, $a < 0 < -a$, means $a < -a$. Note that $-a \leq b$. So we have,
$$
-b \leq a < -a \leq b
$$
$\color{blue}{\text{I am unsure about this part. How do I introduce equality?}}$ Therefore, $-b \leq a < -a \leq b$. Since $a < 0$ and $-a > 0$, the only time $a = -a$ is when $a = -a = 0$. So,
$$
-b \leq 0 \leq b
$$
Or, $-b \leq a \leq b$ by substitution.
Since we've exhausted all cases, if $|a| \leq b$ then $-b \leq a \leq b$ for all $a,b \in \mathbb{R}$.
$(\leftarrow)$ Suppose $a,b \in \mathbb{R}$ and $-b \leq a \leq b$. Then, $a \geq -b$ and $a \leq b$. We must show $|a| \leq b$. We consider two cases.
Case 1. $a < 0$. Multiplying $a \geq -b$ by $-1$, we get $-a \leq b$. By definition of absolute value, $-a = |a| \leq b$.
Case 2. $a \geq 0$. Since $a \leq b$, then $a = |a| \leq b$ by definition of absolute value.
Therefore, $|a| \leq b$ when $-b \leq a \leq b$ for all $a,b \in \mathbb{R}$. $\qquad \Box$
Question
I'm really self-concious about my ability to do this sort of stuff, so I hope I haven't butchered this. The blue highlighted part is where I'm most uncertain. I've seen some books (e.g. Rosen's Discrete Math book), use the following definition for absolute value:
$$
|a| = a \text{ when } a \geq 0 \text{, and } -a \text{ when } a \leq 0.
$$
In other words, he uses the $\geq$ and $\leq$ relations in both cases. Is this okay? I've always seen the definition as $a < 0$ for one case, and $a \geq 0$ for the other. Does this matter? If it does, how do you introduce the equality like I had to for Case 2 in the $(\rightarrow)$ proof? Also, when using transitivity, and you have something like
$$
-b \leq a < -a \leq b
$$
I'm guessing you can't conclude $-b \leq a \leq b$. Since we haven't established that $a = -a$, so we can't say more than what we're given, which is that $a < -a$, so we could only say $-b \leq a < b$. Is my reasoning correct here?
AI: The two definitions of absolute value are equivalent. Some would argue that the one that you’re using is better style, because the cases are disjoint, but the one that you found in Rosen is also acceptable, since the two parts of the definition agree when both apply, i.e., when $a=0$.
There is no need to introduce equality after you arrive at
$$-b\le a<-a\le b\;:\tag{1}$$
that immediately implies that $a<b$, and if $a<b$, then certainly $a\le b$, so from $(1)$ you can immediately conclude that $-b\le a\le b$.
I would probably have organized the proof of $(\leftarrow)$ a little differently, doing Case 2 first, since it’s truly trivial: if $a\ge 0$, then $|a|=a$, and we’re assuming that $-b\le a\le b$, so $-b\le|a|\le b$. That, however, is a matter of taste.
|
H: Getting into university - two variables distribution
If students want to be accepted to university, the chance of passing each stage is $0.5$. If you fail the first stage, you can't go on to the second stage and can't enter the program. If you fail the second stage, you can't enter the program either. We will assume a case of two people:
$X$ - The number of people failed in the first stage
$Y$ - The number of people failed to enter the program.
I need to find $P(X+Y≤2|Y≥1)$.
So $X$ involves $0.5$ per person and $Y$ can involve $0.5$ or $0.25$. Also, to calculate $P(Y≥1)$, I did $1-P(Y<1)$ and got $0.75$. I'm not sure how to proceed to calculate $X+Y$ jointly. I thought to use the binomial distribution to choose $0$ out of $2$, $1$ out of $2$, and $2$ out of $2$.
AI: The final solution is the following
$$P(X+Y\leq 2 |Y \geq1)=\frac{\frac{7}{16}}{1-\frac{1}{16}}=\frac{7}{15}$$
and this because:
the probability $P(Y \geq1)$ is the complement of the probability that both are admitted, while the probability $P[(X+Y \leq2) \cap (Y \geq 1)]$ is the sum over the cases circled in red in the table below.
This is a table of the bivariate distribution for your understanding (the yellow cells are the ones for which $X+Y \leq 2$)
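For reference, since the picture is not reproduced here, the joint distribution can be reconstructed from the model (per applicant, independently: fail stage one with probability $\frac12$, pass stage one but fail stage two with probability $\frac14$, be admitted with probability $\frac14$):
$$\begin{array}{c|ccc}
P(X=x,\,Y=y) & y=0 & y=1 & y=2\\\hline
x=0 & \frac1{16} & \frac18 & \frac1{16}\\
x=1 & 0 & \frac14 & \frac14\\
x=2 & 0 & 0 & \frac14
\end{array}$$
The cells with $Y\ge1$ and $X+Y\le2$ are $(x,y)=(0,1),(0,2),(1,1)$, giving $\frac18+\frac1{16}+\frac14=\frac7{16}$, while $P(Y\ge1)=1-P(X=0,Y=0)=1-\frac1{16}$.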
|
H: Show that $\forall a,x,y \in G:ax=ay\Longrightarrow x=y$ with $(G,\cdot)$ being a group
Let $(G,\cdot)$ be a group, with $G$ being a finite set.
Show that $\forall a,x,y \in G:ax=ay\Longrightarrow x=y$
Since $(G,\cdot)$ is a group $a \in G \Longrightarrow a^{-1} \in G$ with $a^{-1}a=e$
So we now define the automorphism (bijection!):
$\mathit{l}_{a^{-1}}:G\longrightarrow G:g\mapsto a^{-1}g$
(that the map is bijective was proven in the chapter before)
Now:
$$\mathit{l}_{a^{-1}}(ax)=a^{-1}ax=x$$
$$\mathit{l}_{a^{-1}}(ay)=a^{-1}ay=y$$
Since the map is injective $\forall a,b \in G:a=b \Longrightarrow\mathit{l}_{a^{-1}}(a)=\mathit{l}_{a^{-1}}(b)$
So $ax=ay\Longrightarrow \mathit{l}_{a^{-1}}(ax)=\mathit{l}_{a^{-1}}(ay)\Longrightarrow x=y$
$\Box$
Could someone verify if my solution is correct? And if not, give me some feedback :)? thank you
AI: It’s basically correct apart from some terminology (for which see FiMePr’s answer), but you’re working much harder than necessary: if $ax=ay$, then
$$x=ex=(a^{-1}a)x=a^{-1}(ax)=a^{-1}(ay)=(a^{-1}a)y=ey=y\;.$$
|
H: Is this a Gaussian vector?
Can anyone help me with this probability question?
Given two independent random variables $Y \sim N(\mu,\sigma)$ and $Z \sim U[0,3]$, we define
$X=Y+Z$
Prove or disprove: The vector $(Z,X)^T$ is a Gaussian vector.
I would appreciate any help :)
Thanks
AI: The vector $V=(V_1,V_2)^T=(Z,-(-1)^I |Z|)^T=(Z,\pm |Z|)$ is not a 2-dimensional random Gaussian vector.
If it were, the linear combination $V_1-V_2$ would have a Gaussian distribution. But with probability $1/4$ we have $I=1$ and $Z>0$, in which case $V_1=V_2$. Hence $P(V_1=V_2)\ge 1/4$. Similarly with probability $1/4$ we have $I=0$ and $Z>0$, in which case $V_1=-V_2$, so $P(V_1\ne V_2)\ge 1/4$, so $P(V_1=V_2)\le 3/4$. These imply that $V_1-V_2$ is a non-degenerate random variable whose distribution has a discrete component. Hence it is not Gaussian. Hence the vector $V$ is not a bivariate Gaussian vector.
That is to say, since $I$ and $Z$ are independent, $P(I=1\text{ and }Z>0)=P(I=1)\times P(Z>0)$, which, by your assumptions, equals $1/2\times 1/2=1/4$. Thus, the chance that $V_1-V_2=0$ is at least $1/4$. Similarly, the chance that $V_1\ne V_2$ is at least equal to the chance that $Z>0$ and $I=0$, which is also $1/4$. Gaussian distributions do not assign positive probability to singleton points like this.
Your distribution forces $|V_1|=|V_2|$; its support is the union of two lines in the plane. Gaussian distributions in the plane, however, are supported on a singleton point, on a line, or on the whole plane.
|
H: $\mathbb{Z}\oplus \mathbb{Z}\cong \mathbb{Z}\oplus A \implies \mathbb{Z}\cong A$
Let $A$ be a $\mathbb{Z}$-module and suppose that $\mathbb{Z}\oplus \mathbb{Z}\cong \mathbb{Z}\oplus A$. Do we have $\mathbb{Z}\cong A$?
I know the result is true if $A$ is finitely generated as $\mathbb{Z}$-module. Clearly, $A$ can be viewed as a submodule of $\mathbb{Z}\oplus \mathbb{Z}$.
Maybe a submodule of $\mathbb{Z}\oplus \mathbb{Z}$ is finitely generated?
AI: If $\Bbb Z \oplus A$ is finitely generated, then $A$ must be finitely generated. So, it indeed holds that if $\Bbb Z \oplus \Bbb Z \cong \Bbb Z \oplus A$, then $A$ is finitely generated and your previous observation applies. So, $A \cong \Bbb Z$.
|
H: Topology of sequentially open sets is sequential?
Let $(X,\tau)$ be a topological space. The collection of all sequentially open subsets of $X$ (i.e. the complements of the sequentially closed subsets) is itself a topology $\tau_\text{seq}$, equal to $\tau$ if $X$ is a sequential space, and a strictly finer topology otherwise. (See the wikipedia article for details.)
Question: Is $(X,\tau_\text{seq})$ a sequential space?
AI: Suppose that $U$ is sequentially open with respect to $\tau_{\text{seq}}$, i.e., that every sequence converging in $\tau_{\text{seq}}$ to a point of $U$ is eventually in $U$. $\langle X,\tau_{\text{seq}}\rangle$ and $\langle X,\tau\rangle$ have the same convergent sequences, so $U$ is sequentially open with respect to $\tau$ and is therefore in $\tau_{\text{seq}}$. Thus, $\langle X,\tau_{\text{seq}}\rangle$ is sequential.
|
H: Finite group $G$ has $61$ Sylow $3$-subgroups
Suppose that the finite group $G$ has $61$ Sylow $3$-subgroups. Then I want to prove that there exist two Sylow $3$-subgroups $P$ and $Q$ satisfying $|P: P\cap Q|=3 $.
Since $n_{3}=61$, the order of $G$ must be of the form $61\cdot 3^{n}\cdot p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}$. But I couldn't see how to use $61$ to show the existence of $P$ and $Q$ with the desired property.
What should I look at first to prove this claim?
AI: Consider the conjugation action of a Sylow $3$-subgroup $P$ on the set of all Sylow $3$-subgroups.
There is one fixed point, namely $P$ itself, and the other orbits have length $3^k$ for some $k>0$.
Since $60$ is not divisible by $9$, the remaining orbits cannot all have length divisible by $9$, and so there must be at least one orbit of length $3$. Let $Q$ be in such an orbit. Then the stabilizer of $Q$ in $P$ is $N_P(Q) = P \cap Q$. So $|P:P \cap Q|=3$.
|
H: Root of Complex Number in Polar Representation with Negative "r"
A friend had been looking at, as an example, $z^3=-8\operatorname{cis}(\frac{\pi}{2})$ and ran into a phenomenon he struggled to explain to himself; he approached me for assistance and I wasn't sure, either.
If I look at the "$-$" as a $\operatorname{cis}(\pi)$ and perform a complex multiplication, I end up with $z^3 = 8(\operatorname{cis}(\pi)\cdot \operatorname{cis}(\frac{\pi}{2})) = 8\operatorname{cis}(\frac{3\pi}{2}) \Rightarrow z = 2\operatorname{cis}(\frac{\pi}{2}+\frac{2\pi}{3}\cdot k)$ with $k$ being $0,1,2$. Wolfram tells me this is the correct answer, and the whole process feels quite logical and intuitive to me.
But we tried another thing: taking the root without converting the minus to a $\operatorname{cis}(\pi)$, basically treating it as a "negative modulus". The actual idea was: if I take the third root of the whole expression, then I can split it into the third root of minus one times the third root of $8\operatorname{cis}(\frac{\pi}{2})$, which should give me the same result (or so we thought), but this leads me to $-2\operatorname{cis}(\frac{\pi}{6}+\frac{2\pi}{3}\cdot k)$, and even if I do perform the minus-to-cis conversion now, I get a different result.
I've been trying to do two things:
Explain to myself intuitively why this is wrong, or what the meaning of the alternative answer that I get even is.
Write out and examine the proof I know for the formula of a complex number's $n$th root, and attempt to explain algebraically why a negative "$r$" might invalidate it (or: why I must first convert the number into the form modulus (which has to be positive) times $\operatorname{cis}(\text{angle})$, and strictly that form, before taking the root).
Neither of those was I very successful with doing. I'd really appreciate assistance.
AI: It's true that one cannot assume that roots are multiplicative over the complex numbers. But I don't think that's an issue here.
You don't show your second computation. I have
$$
-\operatorname{cis}(\tfrac\pi2)=\operatorname{cis}(\pi)\,\operatorname{cis}(\tfrac\pi2).
$$
If you write the cubic roots the usual way, you get
$$
\operatorname{cis}(\tfrac\pi3+\tfrac{2k\pi}3)\operatorname{cis}(\tfrac\pi6+\tfrac{2\ell\pi}3)=\operatorname{cis}(\tfrac\pi3+\tfrac\pi6+\tfrac{2(k+\ell)\pi}3)=\operatorname{cis}(\tfrac\pi2+\tfrac{2(k+\ell)\pi}3)
$$
|
H: Integrating $\int_{0}^{2 \pi} \cos^{2020}(x)dx$, $\int_{0}^{\pi/2} \frac{1}{\tan^{\sqrt{2020}}(x)+1}dx$, $\int_{0}^{\infty} x^5 e^{-x^4}dx$
I've been working through the 2020 MIT Integration Bee qualifier questions (20 total) for fun, and there are three that I haven't been able to crack yet. (The complete list of problems and answers is online (PDF link via mit.edu); however, there are no worked solutions with them.)
(9) $\quad\displaystyle\int_{0}^{2 \pi} \cos^{2020}(x) \, dx = 2^{-2019}\pi\binom{2020}{1010}$
(15) $\quad\displaystyle\int_{0}^{\pi/2} \frac{1}{\tan^{\sqrt{2020}}(x)+1} \, dx = \frac{\pi}{4}$
(20) $\quad\displaystyle\int_{0}^{\infty} x^5 e^{-x^4} \, dx = \frac{\sqrt\pi}{8}$
I think the binomial theorem might be needed for (9) since $2020 \choose 1010$ appears in the solution. I've tried substitution and integration by parts for (20) with no luck. Haven't made too much progress with (15), probably need a clever algebra trick. Any ideas would be much appreciated.
AI: An approach to #20 without use of the gamma function as other comments/answers show but with knowledge of the Gaussian integral is to let $t=x^2$:
$$\frac{1}{2} \int_0^{\infty} t^2 e^{-t^2} \; dt$$
Now, use integration by parts with $dv=te^{-t^2} \; dt$ and $u=t$:
$$=\frac{1}{2} \left(-\frac{1}{2}te^{-t^2} \bigg \rvert_0^{\infty}+ \frac{1}{2} \int_0^{\infty} e^{-t^2} \; dt \right)$$
$$=\frac{1}{4} \int_0^{\infty} e^{-t^2} \; dt$$
$$=\boxed{\frac{\sqrt{\pi}}{8}}$$
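A numerical spot-check of all three closed forms (a sanity check, not a proof; the reference value for (9) is formed with exact integer arithmetic, since $\binom{2020}{1010}$ and $2^{-2019}$ overflow and underflow doubles separately):

```python
import numpy as np
from math import comb, pi
from fractions import Fraction
from scipy.integrate import quad

i9, _ = quad(lambda x: np.cos(x) ** 2020, 0, 2 * np.pi, points=[np.pi], limit=200)
print(i9, float(Fraction(comb(2020, 1010), 2 ** 2019)) * pi)  # both ~ 0.11153

i15, _ = quad(lambda x: 1.0 / (np.tan(x) ** np.sqrt(2020) + 1.0), 0, np.pi / 2)
print(i15, np.pi / 4)                                         # both ~ 0.785398

i20, _ = quad(lambda x: x ** 5 * np.exp(-x ** 4), 0, np.inf)
print(i20, np.sqrt(np.pi) / 8)                                # both ~ 0.221557
```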
|
H: Do all finite groups contain an odd number of elements?
Hello, I've got a maybe weird and primitive question, which purely comes from my status as a beginner. Thinking about groups and their properties, with $\forall a \in G\;\exists a^{-1} \in G:aa^{-1}=e$ and the existence of a neutral element $e \in G$, doesn't this give us the information that any group, regardless of how large, has an odd number of elements (the identity plus pairs $\{a,a^{-1}\}$)? And can we translate something to non-finite groups? This really interests me. Maybe someone can give me insight on how to look at this, and how we can or cannot get any information for non-finite sets.
AI: No. For any $n\in\mathbb{N}$ the group $\mathbb{Z_n}=\{0,1,2,...,n-1\}$ where the operation is addition modulo $n$ is a group with $n$ elements.
Note that some elements satisfy $a=a^{-1}$, so $a$ and $a^{-1}$ are not always $2$ different elements. This is your mistake.
|
H: The derivative of $f(x)=\frac{3 \sin x}{2+\cos x}$
My solution:
$$\begin{align} &3\frac{d}{dx}\left(\frac{\sin(x)}{2+\cos(x)}\right)\\
&=3\frac{\frac{d}{dx}(\sin(x))(2+\cos(x))-\frac{d}{dx}(2+\cos(x))\sin(x)}{(2+\cos(x))^2}\\
&=3\frac{\cos(x)(2+\cos(x))-(-\sin(x))\sin(x)}{(2+\cos(x))^2}\\
&=\frac{3+6\cos(x)}{(2+\cos(x))^2}\end{align}$$
However, I plugged $f(x)=\dfrac{3 \sin x}{2+\cos x}$ into the derivative calculator in wolfram alpha and received the following calculation:
$$\frac{3\left(\sin^2(x)+\cos^2(x)+2\cos(x)\right)}{(2+\cos(x))^2}$$
Is my solution incorrect and the one from wolfram alpha correct? If so, where did I go wrong?
AI: Your answer is correct. Use the trig identity $\sin^2(x)+\cos^2(x)=1$:
$$\therefore \frac{3(\sin^2(x)+\cos^2(x)+2\cos(x))}{(2+\cos(x))^2}=\frac{3(1+2\cos(x))}{(2+\cos(x))^2}=\frac{3+6\cos(x)}{(2+\cos(x))^2}$$
The above matches your solution.
|
H: Proof verification: If factor group $G/Z(G)$ is cyclic then $G$ is Abelian
Let $G$ be a group. If $G/Z(G)$ is cyclic, then $G$ is Abelian.
I tried to prove it like this: I'll make use of the theorem that $G$ is Abelian $\iff Z(G)=G$.
Hence, it must be shown that $G=Z(G)$
Let $G/Z(G)=\langle gZ(G) \rangle$ for some $g\in G$.
On the contrary, assume that $Z(G)\ne G$
Case 1: If $g\in Z(G)$, then $Z(G)=gZ(G)$. Let $x\in G-Z(G)$; then $\exists m$ such that
$xZ(G)=(gZ(G))^m= (Z(G))^m=Z(G) \implies xZ(G)=Z(G)\iff x\in Z(G)$, which is a contradiction.
Case 2: Now assume that $g\in G-Z(G)$ and $x\in G-Z(G)$. There exists $r$ such that $xZ(G)=(gZ(G))^r=g^r Z(G)\implies (g^r)^{-1}x\in Z(G) \implies \exists z\in Z(G)$ such that $x=g^rz$. Since $z\in Z(G)$, we have $z\in C(g)$, where $C(g)$ is centralizer of $g$ and clearly $g^r\in C(g)$ and hence $x\in C(g)$. Since $x$ is an arbitrary element of $G-Z(G)$, we have shown that $g$ commutes with every element in $G-Z(G)$ and this implies that $g$ commutes with every element of $G$ and hence $g\in Z(G)$, which is a contradiction.
In both the cases above, we have a contradiction and hence $Z(G)=G$, whence it follows that $G$ is Abelian.
Is my proof correct? Please let me know. Thanks a lot for your time.
AI: Well, it looks fine. There is also a shorter solution. If $G/Z(G)$ is cyclic then there is some $g\in G$ such that $G/Z(G)=\langle gZ(G)\rangle$. Now let $x,y\in G$ be any two elements. Every element belongs to some left coset of $Z(G)$, so we can write $x=g^iz$ and $y=g^jw$ for some $i,j\in\mathbb{Z}$ and $z,w\in Z(G)$. But then:
$xy=g^izg^jw=g^{i+j}zw=g^jg^iwz=g^jwg^iz=yx$
We just used the fact that $z$ and $w$ commute with everything. So $xy=yx$ for any two elements, hence $G$ is Abelian.
|
H: Does algebraic factorization imply the same 0 factoring pattern for non-polynomial expressions?
Suppose one has $e^{f(x)}(g(x)-e^{h(x)}) = 0.$ Can it then be inferred that either $e^{f(x)}=0$ or $g(x) - e^{h(x)} = 0$ makes this statement true, similar to how $x(1-x^2)=0$ would yield that $x=0$ or $x=\pm 1$ are solutions?
AI: Yes, of course. At least one of those factors must be zero: assume one of them is nonzero and divide both sides by it (this is perfectly legal since it is nonzero); then the other factor is equal to zero.
For instance, assume $e^{f(x)} \ne 0$. Then you can divide both sides by it to conclude that $g(x) = e^{h(x)}$. Similarly, you can do the same if you assume $g(x) - e^{h(x)} \ne 0$. Divide both sides by it and one concludes $e^{f(x)} = 0$.
|
H: $90$ Degree Piping Cuts on Cylinders
I'm trying to find the math behind laying out piping cuts. For instance, when one piece of pipe tees (at $90$ degrees) into another piece of pipe, how do you cut the pipe? I'm trying to figure out the math of creating a flat template that I can wrap around a pipe to draw a line, and then cut the pattern with a torch, for both intersecting parts.
How do you draw the pattern when it's not a $90$, but a $45$, $30$, or $60$ degree tee?
AI: To make a $ 90^{\circ}$ bend one needs to cut the tube at half that angle, i.e., $45^{\circ}$: a full section for an $L$, a half section for a tee.
The intersection of a pipe and a plane is an ellipse. The development of a pipe of diameter $D$ cut by a plane making angle $\alpha$ with the diametral plane is always a cosine curve.
That is, when the thin flexible truncated pipe is laid flat, the ellipse boundary assumes the shape of a cosine curve.
This is the development of a sectioned or truncated cylinder. The development curve is drawn on paper to make the template: the length is $\pi D$ and the cosine wave height is $D\tan \alpha$. The paper is wrapped around the tube and used as a guide to hand-saw at angle $\alpha$.
Powered mitre saws are for larger production volumes. Any desired angle can be mitred. You only need to clamp the tube at desired $\alpha$ and lower the running saw.
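If it helps, here is a short sketch that tabulates the template ordinates from that rule (the diameter and angle are example values to replace with your own):

```python
# Flat template for a pipe of diameter D cut at angle alpha: unrolled length
# is pi*D and the profile is a cosine of peak-to-peak height D*tan(alpha).
import numpy as np

D = 60.0                    # pipe diameter, mm (example)
alpha = np.radians(45)      # cut angle to the diametral plane (example)

s = np.linspace(0, np.pi * D, 13)                        # distance around the pipe
h = (D / 2) * np.tan(alpha) * (1 - np.cos(2 * s / D))    # cut height above the low point

for si, hi in zip(s, h):
    print(f"{si:6.1f} mm around -> mark {hi:5.1f} mm up")
```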
|
H: Hasse's Theorem: min and max values with or without point of infinity?
I have to calculate the minimum and maximum possible number of points on an elliptic curve over $\mathbb{F}_q$.
Min: $\lceil{q+1-2 \sqrt{q}}\rceil$
Max: $\lfloor{q+1+2 \sqrt{q}}\rfloor$
according to Hasse. BUT the exercise says that the min and max should be found together with the point at infinity. So should I say min$+1$ and max$+1$?
Hope you get my question.
AI: Hasse's theorem is usually stated as
$$ | \# E(\mathbb{F}_q) - (q + 1) | \leq 2\sqrt{q}$$
When we talk about the points on an elliptic curve $E/K$ where $K$ is a field, we are always talking about the points on the projective curve (that is, including the point at infinity). Thus if $E/\mathbb{F}_q$ is given by
$$f(x,y) = y^2 + a_1xy + a_3y - (x^3 + a_2x^2 + a_4x + a_6) = 0$$
when we talk about $E(\mathbb{F}_q)$ we really mean
$$\{(x,y) \in \mathbb{F}_q^2 : f(x,y) = 0 \} \cup \{(0:1:0)\}$$
|
H: Find basis to $\begin{bmatrix}1&-4&3&-1\cr2&-8&6&-2\end{bmatrix}$
I want to find a basis for the solution space of
$$x_1-4x_2+3x_3-x_4=0$$
$$2x_1-8x_2+6x_3-2x_4=0$$
so I set up the matrix:
$\begin{bmatrix}1&-4&3&-1\cr2&-8&6&-2\end{bmatrix}$
to get $\begin{bmatrix}1&-4&3&-1\cr0&0&0&0\end{bmatrix}$
Then I would get $\begin{bmatrix}x_1\cr x_2\cr x_3\cr x_4\end{bmatrix}=\begin{bmatrix}4b-3c+d \cr b\cr c\cr d\end{bmatrix}$. The issue is that I don't know how to find the basis from there. Someone said I have to get $=b\begin{bmatrix}4\cr 1\cr 0\cr 0\end{bmatrix}+c\begin{bmatrix}-3\cr 0\cr 1\cr 0\end{bmatrix}+d\begin{bmatrix}1\cr 0\cr 0\cr 1\end{bmatrix}$ and each of these vectors belongs to the basis set. But I wasn't sure how we get the last step?
AI: You find a basis for a vector space, not for a linear equation.
According to what you did, you are looking for a basis for the kernel of the coefficient matrix, or equivalently, a basis for the solution space of the homogeneous linear equation.
You have already found it: writing the general solution as $b(\cdot)+c(\cdot)+d(\cdot)$ simply groups the entries of the solution vector by the free variables $b$, $c$, $d$. The three coefficient vectors are linearly independent (look at their last three entries) and span the solution space, so they form a basis:
$\left\{\begin{bmatrix}4\cr 1\cr 0\cr 0\end{bmatrix},
\begin{bmatrix}-3\cr 0\cr 1\cr 0\end{bmatrix},
\begin{bmatrix}1\cr 0\cr 0\cr 1\end{bmatrix}\right\}$
|
H: Failed solution for solving $\cos(\theta) = -\sin(-\theta)$
I'm trying to solve $\cos(\theta) = -\sin(-\theta)$ on the interval $[0, 2\pi)$, but having trouble identifying what I'm doing wrong
$$\cos(\theta) = -\sin(-\theta)$$
By even-odd identities:
$$\sin(-\theta)=-\sin(\theta)$$
$$\cos(\theta)= -(-\sin(\theta))$$
$$\cos(\theta)=\sin(\theta)$$
Square both sides
$$\cos^2(\theta)=\sin^2(\theta)$$
By Pythagorean identities: $\sin^2(\theta)=1-\cos^2(\theta)$
$$\cos^2(\theta)=1-\cos^2(\theta)$$
$$2\cos^2(\theta)=1$$
$$\cos^2(\theta)=\frac{1}{2}$$
$$\cos(\theta)=\frac{1}{\sqrt2}$$
$$\theta = \frac{\pi}{4}, \frac{7\pi}{4}$$
I know the correct solutions are $\dfrac{\pi}{4}, \dfrac{5\pi}{4}$. Why am I missing $\dfrac{5\pi}{4}$ and in its place have $\dfrac{7\pi}{4}$ instead?
AI: Two of your steps cause issues
It is true that $\cos(\theta)=\sin(\theta) \implies \cos^2(\theta)=\sin^2(\theta)$, but it is also true that $\cos(\theta)=-\sin(\theta) \implies \cos^2(\theta)=\sin^2(\theta)$. This introduced the possibility of spurious results such as $\frac{7\pi}4$ or $\frac{3\pi}4$, which is why it is always worth checking results in the original expression.
It is not true $\cos^2(\theta)=\frac12 \implies \cos(\theta)=\frac1{\sqrt{2}}$. What is true is $\cos^2(\theta)=\frac12 \implies \cos(\theta)=\frac1{\sqrt{2}} \text{ or }\cos(\theta)=-\frac1{\sqrt{2}}$. The second of these leads to $\frac{5\pi}4$ and the spurious $\frac{3\pi}4$.
|
H: In a given field $F$, if $ab=0$, is it true that at least $a$ or $b$ has to be zero?
The answer is true for integral domains, but I wonder if it is also true for any general field. Thanks.
AI: Without loss of generality, assume ${a\neq 0}$. Then ${a}$ has a multiplicative inverse. Multiply both left hand sides by this multiplicative inverse:
$${\Rightarrow a^{-1}\left(ab\right)=0}$$
$${\Rightarrow b=0}$$
So indeed, at least one of the elements must be $0$.
|
H: Missing solutions from $\tan(\theta)=2\sin(\theta)$
I'm trying to solve $\tan(\theta)=2\sin(\theta)$ on the interval $[0,2π)$, but having trouble identifying what I'm doing wrong
$$\tan(\theta)=2\sin(\theta)$$
Using quotient identity: $$\tan(\theta)= \frac{\sin(\theta)}{\cos(\theta)}$$
$$\frac{\sin(\theta)}{\cos(\theta)}=2\sin(\theta)$$
Divide both sides by $\sin(\theta)$
$$\frac{1}{\cos(\theta)}=2$$
Reciprocal identity: $$\frac{1}{\cos(\theta)}=\sec(\theta)$$
$$\sec(\theta)=2$$
$$\theta =\frac{\pi}{3}, \frac{5\pi}{3}$$
However, I know that I'm missing solutions $0$ and $\pi$.
I have seen a solution elsewhere online that moves everything to the LHS and then uses the zero-product-property to solve:
$$\tan(\theta)=2\sin(\theta)$$
$$\frac{\sin(\theta)}{\cos(\theta)}=2\sin(\theta)$$
$$\frac{\sin(\theta)}{\cos(\theta)}-2\sin(\theta)=0$$
Factor out $\sin(\theta)$
$$\sin(\theta)\left(\frac{1}{\cos(\theta)}-2\right)=0$$
Use zero-product-property
$\sin(\theta)=0$ or $\dfrac{1}{\cos(\theta)}-2=0$
From here the solutions are $\theta =0, \dfrac{\pi}{3}, \pi, \dfrac{5\pi}{3}$
Still, I don't understand why solutions were missing from the first method and how might I avoid such a mistake in the future?
AI: $\tan\theta=2\sin\theta$ has a solution at $\theta=0$ and $\theta=\pi$, which is also when $\sin\theta=0$.
Because of this, when you divided both sides by $\sin\theta$, you divided by zero, and this is illegal. That is why you missed some solutions.
To avoid this in future, try not to divide if possible but use the other method you posted, move everything to the LHS and use the zero-product property.
If you ever divide both sides by something, check if that thing can be zero.
In your case, check if $\sin\theta=0$ is a solution to your equation before dividing by it. It is, so first you must check when $\sin\theta=0$, which is at $0,\pi$. After considering these solutions and excluding them you may safely divide by $\sin\theta$.
|
H: Need help understanding this recursion via pseudocode
Given the recursive algorithm in this pseudocode:
RTC(n)
    Input: A nonnegative integer, n
    Output: A numerator or denominator (depending on parity of n) in an approximation of
    If n < 3
        Return (n + 1)
    If n >= 3
        t := RTC(n – 1)
        If n is odd
            s := RTC(n – 2)
            Return (s + t)
        If n is even
            r := RTC(n – 3)
            Return (r + t)

If n is even
    print ‘Your approximation is ‘, RTC(n), ‘/’, RTC(n – 1), ‘.’
What is the output for the algorithm if the input n is 6?
The answer is: Your approximation is 17/12.
I'm finding myself stuck on how the recursive value is passed back up once I hit the base case. Take the variable t, for example: with the function called as RTC(6), it makes sense to me that t gets assigned RTC(5), which then calls the function with argument 5, getting to t = RTC(4), etc. Once I get to my base case of RTC(2) and the return value is n + 1, i.e. 3, how do I pass that back up the recursion? Do I add? Do I multiply? Why?
As a side note, is it just me, or is there a lot of recursion going on in this snippet? This problem is from a bank of questions that should generally be possible to evaluate fairly quickly, not requiring more than a few minutes per question, certainly not much more than 5 minutes.
AI: There is indeed a lot of recursion going on if you trace the operation of the algorithm, but it's easy if you start from the small values of $n$ and go up.
$\def \op #1{\operatorname {#1}}
\def \RTC {\op{RTC}}
\RTC(1) = 2\\
\RTC(2) = 3\\
\RTC(3) = \RTC(2)+\RTC(1)=5\\
\RTC(4) = \RTC(3)+\RTC(1)=7\\
\RTC(5) = \RTC(4)+\RTC(3)=12\\
\RTC(6) = \RTC(5)+\RTC(3)=17
$
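To make the flow of values explicit (each call just adds the two sub-results it receives; nothing is multiplied), here is a direct Python transcription of the pseudocode. The $\sqrt2$ reading of the truncated output line is my assumption, suggested by the convergents $7/5$ and $17/12$.

```python
def RTC(n):
    if n < 3:                  # base cases: RTC(0..2) = 1, 2, 3
        return n + 1
    t = RTC(n - 1)
    if n % 2 == 1:             # n odd
        return RTC(n - 2) + t  # s + t
    else:                      # n even
        return RTC(n - 3) + t  # r + t

print([RTC(n) for n in range(7)])                   # [1, 2, 3, 5, 7, 12, 17]
print(f"Your approximation is {RTC(6)}/{RTC(5)}.")  # 17/12 = 1.41666..., close to sqrt(2)
```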
|
H: Find all the $x$ such that $N \mod x = 1$
Find all the $x$ such that $N \mod x = 1,$ where $x < N$ and $N$ is a big integer.
Please help me find a solution, or tell me if this is not a simple task to do.
AI: $N \bmod x=1$ means $x $ divides $N-1$,
so finding all the $x$ such that $N\bmod x=1$ is as hard as factoring $N-1$.
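So a direct sketch, feasible only when $N-1$ is small enough to factor by trial division, is:

```python
# All x with N mod x == 1: the divisors of N - 1 that exceed 1.
def solutions(N):
    M = N - 1
    divs = set()
    d = 1
    while d * d <= M:
        if M % d == 0:
            divs.update((d, M // d))
        d += 1
    return sorted(x for x in divs if x > 1)   # exclude x = 1, since N mod 1 = 0

print(solutions(101))   # [2, 4, 5, 10, 20, 25, 50, 100]: all divisors of 100 above 1
```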
|
H: Does there exist a number that makes this work?
I am wondering if there is some kind of polynomial somewhere out there that makes the following functional equation true:
$y(x) = y(x-1) - \tfrac{1}{2}x.$ I don't normally work on this type of problem, so I don't know what the standard procedure is. If I try testing, say, $a x^2$, then I can arrange that as $0 = a(x-1)^2-ax^2-\tfrac{1}{2}x.$ For some reason, instead of giving me scalar values for $a,$ Wolfram gives me $a = x/(2 x^2 - 6 x + 2)$ and $x^2 + 1 \neq3 x.$ I just want a scalar, like $a = \sqrt{2}/2 - 1$ or something. Why isn't this working?
AI: We have
$$ y(x) = y(x-1) - \tfrac12 x. \tag1$$
You can substitute $x-1$ for $x$ and it must still be true:
$$ y(x-1) = y(x-2) - \tfrac12(x-1). \tag2$$
Use Equation $(2)$ to substitute into equation $(1)$:
$$ y(x) = y(x-2) - \tfrac12(x-1) - \tfrac12 x.$$
If we choose $x$ a positive integer, we can repeat this process as many times as needed to "unroll" the recursion:
\begin{align}
y(x) &= y(x-1) - \tfrac12 x \\
&= y(x-2) - \tfrac12(x-1) - \tfrac12 x \\
&= y(x-3) - \tfrac12(x-2) - \tfrac12(x-1) - \tfrac12 x \\
&\vdots\\
&= y(0) - \tfrac12(1) - \tfrac12(2) - \cdots - \tfrac12(x-2) - \tfrac12(x-1) - \tfrac12 x \\
&= y(0) - \tfrac12 \sum_{t = 1}^x t.
\end{align}
Now use the fact that $\sum_{t = 1}^x t = \frac{(x+1)x}{2},$
and we have that
$$y(x) = y(0) - \tfrac14 x^2 - \tfrac14 x.$$
This is a polynomial, in fact a quadratic polynomial, but not a quadratic polynomial of the form $ax^2.$
That's why you don't get a solution when you assume the solution has to be $ax^2.$ There is no value of $a$ that will make up for the missing $\frac14 x$ term.
Note that the derivation above assumes $x$ is a positive integer, so if we want the equation to be true for all real $x$ we still have to show it.
But now that we have a formula, we can just try it for any real $x$:
\begin{align}
y(x) - y(x - 1)
&= \left(y(0) - \tfrac14 x^2 - \tfrac14 x \right) - \left(y(0) - \tfrac14 (x-1)^2 - \tfrac14 (x-1) \right) \\
&= - \tfrac14 x^2 - \tfrac14 x + \tfrac14 (x-1)^2 + \tfrac14 (x-1) \\
&= - \tfrac14 x^2 - \tfrac14 x + \tfrac14 (x^2 - 2x +1) + \tfrac14 (x-1) \\
&= -\tfrac12 x,
\end{align}
which (fortunately) is exactly what we need in order to establish the desired equation.
(Until this verification step, we do not really know that the formula for $y(x)$ is correct; it is only an inspired guess up to that point.)
Notice that this works regardless of the value of $y(0),$ so the recursion does not define a unique function; it defines a set of quadratic functions in which you can set the constant term to anything you like.
Another method, instead of working downward from a large positive integer, is to work upward from zero:
\begin{align}
y(1) &= y(0) - \tfrac12(1). \tag3 \\
y(2) &= y(1) - \tfrac12(2). \tag4\\
\end{align}
Use $(4)$ to substitute into $(3)$:
$$ y(2) = y(0) - \tfrac12(1) - \tfrac12(2) . $$
Continue like that, using the equation for each value of $x$ in $y(x)$ to substitute into the equation for the next larger $x$:
\begin{align}
y(3) &= y(2) - \tfrac12(3)
= y(0) - \tfrac12(1) - \tfrac12(2) - \tfrac12(3). \\
y(4) &= y(3) - \tfrac12(4)
= y(0) - \tfrac12(1) - \tfrac12(2) - \tfrac12(3)- \tfrac12(4). \\
\end{align}
By now you may see a pattern and be able to make a guess about a formula,
which you can check using the same checking method as before.
A third alternative: if you suppose initially that the formula is quadratic,
then you only need to relate three values to each other, and this gives you three points on the curve relative to one of the points.
If you are really inspired you might choose $x = -1, 0, 1$:
\begin{align}
y(0) &= y(-1) - \tfrac12(0) = y(-1). \\
y(1) &= y(0) - \tfrac12(1) = y(0) - \tfrac12. \\
\end{align}
The equation $y(0) = y(-1)$ tells us that the graph of the function is a parabola with axis $x = -\tfrac12$ (halfway between $x=-1$ and $x=0$), so the equation has the form
$$ y(x) = a \left(x + \tfrac12\right)^2 + h. $$
(This also explains why the answer cannot be of the form $y(x) = ax^2,$ because that would be a parabola with an axis at $x=0$.)
You can then use the fact that $y(1) = y(0) - \tfrac12$ to find the possible value of $a.$ The value of $h$ is arbitrary. You should still check that the formula works for all $x$ and not just these three values.
I am not aware of any general method for solving all recursions of the form $y(x) = y(x-1) + f(x)$ for any known function $f.$
For $f$ a polynomial you will get a polynomial one degree higher (a fact that I did not use in the first two solution methods), but that's only one class of functions.
The first two methods in this answer can be summarized as "try a few consecutive values and see what happens", which is a technique you can try when you don't know anything better for a particular problem.
It is not guaranteed to work but it does often help find a solution when a problem is not too hard.
|
H: Proving limit by definition with absolute value
This is what I have:
$\lim_{x \to 0} x \cdot |x|=0$
And I know:
$$|x|=\begin{cases}x,&\text{if }x\ge 0\\-x,&\text{if }x <0\;.\end{cases}$$
Case 1:
$\lim_{x \to 0^+} x\cdot|x|=0$
$(0 < |x - 0| < \delta \Rightarrow |x-0|< \epsilon)$
since:
$0 < x-0< \epsilon \Rightarrow \fbox{$\delta=\epsilon$}$
So:
$\forall \epsilon > 0, \delta =\epsilon, 0<|x-0|< \delta \Rightarrow |f(x) - 0| < \epsilon $
Case 2:
$\lim_{x \to 0^-} x \cdot |x|=0$
And I know:
$\lim_{x \to 0^+}=0$
$(0 < |x - 0| < \delta \Rightarrow |-x-0|< \epsilon)$
And I'm assuming that (because adding or subtracting zero is not relevant):
$(x+0 \Leftrightarrow -(x-0))$
And for absolute value property:
$0 < |x-0| < \epsilon$
So: $\fbox{$\delta = \epsilon$}$
With this I know the limit is the same; now I do the same with $\lim_{x \to 0} x = 0$:
$(0 < |x - 0| < \delta \Rightarrow |x-0|< \epsilon)$
With this I know: $\fbox{$\delta = \epsilon$}$
This is by far the hardest one in my book, which doesn't contain the solution. Am I on the right track?
AI: To start with, let's ignore any value of $x \ge 1$. We only need to consider function values close to $0$, so for our intents and purposes, $f(x) = |x|$. The only thing we need to be careful of is ensuring that $\delta \le 1$. That way, if $0 < |x - 0| < \delta$, then $|x| < 1 \implies x < 1$, and so $f(x) = |x|$ for all the $x$ we have to worry about.
So, we are showing $\lim_{x \to 0} x|x| = 0$. We have $|x|x| - 0| = |x|^2$, and this is less than $\varepsilon$ if and only if $|x| < \sqrt{\varepsilon}$. Thus, we can take $\delta = \min\{\sqrt{\varepsilon}, 1\}$, to ensure that $\delta \le 1$ as we needed above. We get,
\begin{align*}
0 < |x - 0| < \min\{\sqrt{\varepsilon}, 1\} &\implies |x| < \sqrt{\varepsilon} \text{ and }|x| < 1 \\
&\implies |x|^2 < \varepsilon \text{ and } x < 1 \\
&\implies |x|x| - 0| < \varepsilon \text{ and } f(x) = |x| \\
&\implies |xf(x) - 0| < \varepsilon,
\end{align*}
completing the proof.
|
H: Will a tortoise moving $1$ unit at a time around a circle meet a hare moving $2$ units at a time?
Consider a circle with $100$ points numbered $1,2,3,4,\dotsc,99,100$. Now there is a tortoise at the point numbered $1$ and a hare at point numbered $2$. Both can only move in clockwise direction, and only on numbered points. The tortoise moves one point at time e.g., from $1$ to $2$, from $2$ to $3$, and so on. The hare moves two points at a time e.g., from $2$ to $4$, from $4$ to $6$ and so on. They start moving and move at the same time i.e, when the tortoise moves from $1$ to $2$, then hare also moves from $2$ to $4$. They continue to move like this until they meet at some point. Either find the point at which they meet, or if they do not meet at some point, explain why.
This seems to be very easy, but I just can't figure out how to proceed. I know that for them to meet at a point, the hare must be at some point $x-1$ when the tortoise is at point $x$; only then will they meet at point $x+1$ on the next turn. Any help would be appreciated.
AI: We can think of moving around the 100 points in the circle as the set $\mathbb{Z}/100\mathbb{Z}$, that is the integers modulo $100$. So for example, when the tortoise has moved 154 times from it's starting position, it's really just 54 points from the start, i.e. at 55.
The way I approach is as follows: the hare moves twice as fast as the tortoise, and starts 1 spot ahead of it. If the tortoise has taken $n$ steps, then the hare has taken $2n+1$ steps, both $\mod 100$. So asking when the hare and tortoise meet is equivalent to asking, when is the following true:
$$
2n+1 \equiv n \mod 100.
$$
Evidently, $n=99$ is the first occurrence (and so they meet at point 100).
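A brute-force simulation confirms the modular argument (a Python sketch; positions are numbered 1 to 100):

    # Tortoise starts at 1 and moves 1 point per turn; hare starts at 2 and moves 2.
    tortoise, hare = 1, 2
    for step in range(1, 1000):
        tortoise = (tortoise - 1 + 1) % 100 + 1
        hare = (hare - 1 + 2) % 100 + 1
        if tortoise == hare:
            print(f"they meet after {step} moves at point {tortoise}")  # 99 moves, point 100
            break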
|
H: Invertibility of Matrix mapping $f(X) = DX + XD$
Consider the set of $n\times n$ matrices.
If $D$ is a diagonal matrix, and the linear transformation from the set of $n\times n$ matrices to itself is defined as
$$f(X) = DX + XD,$$ is the mapping invertible?
Here are some observations I made. Obviously, if all diagonal entries are 0, then it's not injective.
If $$D = \left [ \begin{matrix}
1 & 0 \\
0 & 0 \\
\end{matrix} \right ]$$ Then applying $f(X)$ to any matrix will lose the element on the lower right corner of $X$.
So my guess is that every element in $D$ should be nonzero. But is that enough for our mapping to be invertible?
edit: Existence of differentiable matrix maps $M(3,\mathbb{R}) \rightarrow M(3,\mathbb{R})$ This is where the relevant question comes from. I'm trying to figure out how this might be related.
AI: An alternative way to check whether this linear transformation is invertible is to check whether its corresponding matrix with respect to a basis is invertible.
Assume $$D_{n\times n}=\begin{bmatrix}\lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\0 & 0 & \cdots & \lambda_n\end{bmatrix}$$, and choose conventional orthonormal basis $\{E_{ij}\},1\leq i\leq n,1\leq j\leq n$, in which $$E_{ij}=\left[e_{xy}\right],e_{xy}=\begin{equation}\begin{cases}1, & x=i,y=j \\ 0, & \text{otherwise}\end{cases}\end{equation}$$
Then $$f(E_{ij})=(\lambda_i+\lambda_j)E_{ij}$$
Thus its corresponding matrix under $\{E_{ij}\}$ is
$$
\begin{bmatrix}
\lambda_1+\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_1 + \lambda_2 & \cdots & 0 \\
\cdots & \cdots & \cdots & \cdots \\
0 & 0 & \cdots & \lambda_n+\lambda_n \end{bmatrix}_{n^2\times n^2}$$
For this matrix to be invertible, it is easy to see that it is equivalent to require $$\lambda_i+\lambda_j\neq 0,\quad 1\leq i\leq n,\ 1\leq j\leq n.$$
This should be the condition.
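As a quick numerical sanity check of the eigenvalue computation $f(E_{ij})=(\lambda_i+\lambda_j)E_{ij}$ (a Python sketch; the sample spectrum is arbitrary):

    import numpy as np

    lam = np.array([1.0, 2.0, -3.0])   # arbitrary sample diagonal of D
    D = np.diag(lam)
    n = len(lam)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            # f(E_ij) = D E_ij + E_ij D should equal (lam_i + lam_j) E_ij
            assert np.allclose(D @ E + E @ D, (lam[i] + lam[j]) * E)
    print("f is invertible iff lam[i] + lam[j] != 0 for all i, j")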
|
H: Evaluate using Residues $\int_0^{2\pi}\frac{d\theta}{1+a\sin\theta}$
I need to evaluate the following using residues: $\int_0^{2\pi}\frac{d\theta}{1+a\sin\theta}$ where $-1<a<1$.
I suppose the $a$ in front of $\sin\theta$ is throwing me off. I was thinking I could let $z=e^{i\theta}$ and so $\sin\theta=\frac{z-z^{-1}}{2i}$ and $dz=izd\theta$. So, the integral becomes: $\int_{|z|=1}\frac{dz}{iz(1+a(\frac{z-z^{-1}}{2i}))}$. After some, hopefully mistake-free, algebra, we'd get:
$2\int_{|z|=1}\frac{dz}{az^2+2iz-a}$. Now, we can use the quadratic formula (again, hopefully mistake-free) to get $z=\frac{-i\pm\sqrt{a^2-1}}{a}$.
From here, I'm not really sure where to go. Do I just plug and chug and find residues using these two poles, or is there something sneaky going on? Or, did I make a mistake somewhere earlier?
Any help is appreciated :) Thank you.
AI: I guess nothing prevents you from exploiting some symmetry before switching to the computation of residues.
$$\int_{0}^{2\pi}\frac{d\theta}{1+a\sin\theta}=\int_{0}^{\pi}\frac{d\theta}{1+a\sin\theta}+\int_{0}^{\pi}\frac{d\theta}{1-a\sin\theta}=\int_{0}^{\pi}\frac{2\,d\theta}{1-a^2\sin^2\theta}$$
equals
$$ 4\int_{0}^{\pi/2}\frac{d\theta}{1-a^2\sin^2\theta}=4\int_{0}^{\pi/2}\frac{d\theta}{1-a^2\cos^2\theta} $$
or, by letting $\theta=\arctan u$,
$$ 4\int_{0}^{+\infty}\frac{du}{(1+u^2)-a^2}=2\int_{-\infty}^{+\infty}\frac{du}{(1-a^2)+u^2} $$
which equals
$$ 4\pi i\operatorname*{Res}_{u=i\sqrt{1-a^2}}\frac{1}{(1-a^2)+u^2}=\frac{2\pi}{\sqrt{1-a^2}}. $$
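As a sanity check (not a proof), numerical quadrature agrees with $\frac{2\pi}{\sqrt{1-a^2}}$ for a few sampled values of $a$; here is a small Python sketch:

    import numpy as np
    from scipy.integrate import quad

    for a in (-0.9, -0.3, 0.0, 0.5, 0.9):
        val, _ = quad(lambda t: 1.0 / (1.0 + a * np.sin(t)), 0.0, 2.0 * np.pi)
        assert np.isclose(val, 2.0 * np.pi / np.sqrt(1.0 - a**2), rtol=1e-7)
    print("quadrature matches 2*pi/sqrt(1 - a^2)")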
|
H: When defining ordered pairs, are there any important distinctions between $\{\{a\},\{a,b\}\}$ and $\{a,\{b\}\}$?
The formal Kuratowski definition of ordered pair is that $\langle a,b\rangle = \{\{a\},\{a,b\}\}$.
While I think I understand the above definition well I wanted to check if below definition also works just fine (and hence is "equivalent" to Kuratowski definition)
$$\langle a,b\rangle = \{a,\{b\}\}.$$
I think that both definitions are just fine, but maybe I'm missing a subtle point. Also, is there any reason to prefer Kuratowski's definition over the latter one?
AI: Unfortunately, with the new definition, both $\langle\{0\},1\rangle$ and $\langle\{1\},0\rangle$ equal $\{\{0\},\{1\}\}$. Thus this definition is not suitable for ordered pairs.
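The collision is easy to check mechanically; here is a small Python sketch (treating $0$ and $1$ as urelements and using frozensets, since sets of sets must be hashable):

    def pair(a, b):                      # the proposed definition <a,b> = {a, {b}}
        return frozenset({a, frozenset({b})})

    lhs = pair(frozenset({0}), 1)        # <{0}, 1>
    rhs = pair(frozenset({1}), 0)        # <{1}, 0>
    print(lhs == rhs)                    # True: both encode as {{0}, {1}}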
|
H: $l_1$ has no infinite dimensional subspace that is reflexive.
How to show that $l_1$ does not contain an infinite dimensional subspace that is reflexive.
AI: See Schur's theorem and the Eberlein-Smulian theorem. Suppose that $Y \le \ell_1$ is a subspace, and let $B_Y$ be the closed unit ball of $Y$, i.e. $B_{\ell_1} \cap Y$. Then
\begin{align*}
Y \text{ is reflexive} &\iff B_Y \text{ is weakly compact} \\
&\iff B_Y \text{ is sequentially weakly compact} &\text{(Eberlein-Smulian)} \\
&\iff B_Y \text{ is (sequentially) norm compact} &\text{(Schur)} \\
&\iff Y \text{ is finite-dimensional}. &\text{(Riesz)}
\end{align*}
|
H: Convergence of the sum constructed by approximation of the integral of an $L^1$ function
Let $f \in L^1( \mathbb{R}^n)$. Does the sum
$$S(x) = \sum_{k \in \mathbb{Z}^n} f(k+x) $$
converge for almost every $x$?
Intuitively I'm approximating the integral (which is finite), so I think this should be true. Maybe $C||f||_{L^1}$ can be a valid upperbound for some $C>0$?
I think we can suppose wlog $n=1$ (because we can iterate) and $f\geq 0$ (because we can split $f$ in its positive and negative parts)
AI: For $n=1$ consider $ \sum_k \int_{[0,1)} |f(x+k)| dx=\sum_k \int_{[k,k+1)} |f(x)| dx=\int_{\mathbb R} |f(x)|dx$. By Tonelli's theorem we can write this as $ \int_{[0,1)} \sum_k|f(x+k)| dx=\int_{\mathbb R} |f(x)|dx$. Hence $\sum_k|f(x+k)| <\infty$ almost everywhere on $[0,1)$. By periodicity of the sum this implies that the series converges absolutely almost everywhere on $\mathbb R$.
An almost identical argument works in $\mathbb R^{n}$: just replace $[0,1)$ by $[0,1)\times [0,1)\times \cdots\times [0,1)$.
|
H: Show that $|\sum_{i,j} a_{ij} x_i y_j|\le \max_i |x_i|\cdot \max_j |y_j|$ is equivalent to $|\sum_{i,j} a_{i,j} x_i y_j |\le 1$
Show that $|\sum_{i,j} a_{ij} x_i y_j|\le \max_i |x_i|\cdot \max_j |y_j|$ for all $x_i,y_j \in \mathbb{R}$ is equivalent to
$$
\bigg|\sum_{i,j} a_{i,j} x_i y_j \bigg|\le 1\quad \forall x_i,y_j \in \{+1,-1\}.$$
Source: this is exercise 3.5.2 from Vershynin's book, which is supposed to be very simple, but I spent a long time on it.
AI: We have 2 formulations. The first is:
$$
\forall x,y\in\mathbb{R}^n,\qquad \left\lvert \sum_{1\leq i,j\leq n} a_{ij} x_i y_j\right\rvert \leq \lVert x\rVert_\infty \lVert y\rVert_\infty \tag{1}
$$
We want to prove that this is equivalent to the following:
$$
\forall x,y\in\{-1,1\}^n,\qquad \left\lvert \sum_{1\leq i,j\leq n} a_{ij} x_i y_j\right\rvert \leq \lVert x\rVert_\infty \lVert y\rVert_\infty = 1 \tag{2}
$$
Now, it is easy to check that (1) is equivalent to (3):
$$
\forall x,y\in[-1,1]^n,\qquad \left\lvert \sum_{1\leq i,j\leq n} a_{ij} x_i y_j\right\rvert \leq 1 \tag{3}
$$
(One direction is clear, again, and for the other we can just consider $x'=x/\lVert x\rVert_\infty$ and $y'=y/\lVert y\rVert_\infty$ and apply (3) to them in order to get (1).)
So we want to prove that (2) and (3) are equivalent. (3) implies (2), so it remains to prove that (2) implies (3). To do so, assume (2), and pick any $x,y\in[-1,1]^n$: we want to show that $\left\lvert \sum_{1\leq i,j\leq n} a_{ij} x_i y_j\right\rvert \leq 1$. Without loss of generality, assume that $\sum_{1\leq i,j\leq n} a_{ij} x_i y_j\geq 0$, and rewrite
$$
0 \leq \sum_{1\leq i,j\leq n} a_{ij} x_i y_j = \sum_{i=1}^n x_i \left( \sum_{j=1}^n a_{ij}y_j \right)
$$
Define $u\in\{-1,1\}^n$ by
$$
u_i = \begin{cases}
1 &\text{ if } \sum_{j=1}^n a_{ij}y_j \geq 0\\
-1&\text{ if } \sum_{j=1}^n a_{ij}y_j < 0
\end{cases}
$$
and observe that
$$
0 \leq \sum_{i=1}^n x_i \left( \sum_{j=1}^n a_{ij}y_j \right)
\leq \sum_{i=1}^n \left| \sum_{j=1}^n a_{ij}y_j \right|
= \sum_{i=1}^n u_i \left( \sum_{j=1}^n a_{ij}y_j \right)
= \sum_{i=1}^n \sum_{j=1}^n a_{ij}u_i y_j \tag{4}
$$
Similarly, define $v\in\{-1,1\}^n$ by
$$
v_j = \begin{cases}
1 &\text{ if } \sum_{i=1}^n a_{ij}u_i \geq 0\\
-1&\text{ if } \sum_{i=1}^n a_{ij}u_i < 0
\end{cases}
$$
and observe that
$$
0 \leq \sum_{i=1}^n \sum_{j=1}^n a_{ij}u_i y_j
= \sum_{j=1}^n y_j \left( \sum_{i=1}^n a_{ij}u_i \right)
\leq \sum_{j=1}^n \left| \sum_{i=1}^n a_{ij}u_i \right|
= \sum_{j=1}^n v_j \left( \sum_{i=1}^n a_{ij}u_i \right) \tag{5}
$$
Combining (4) and (5), we get
$$
0 \leq \sum_{i=1}^n \sum_{j=1}^n a_{ij}x_iy_j
\leq \sum_{i=1}^n \sum_{j=1}^n a_{ij}u_i y_j
\leq \sum_{i=1}^n \sum_{j=1}^n a_{ij}u_i v_j
\leq 1 \tag{6}
$$
the very last inequality by assumption (2). This proves that (2) implies (3), as we wanted.
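As a numeric illustration of the argument (the maximum of the bilinear form over the cube is attained at sign vectors), here is a small Python sketch with an arbitrary random matrix:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    # maximum of |x^T A y| over the vertices {-1,1}^n x {-1,1}^n ...
    vertex_max = max(abs(np.array(x) @ A @ np.array(y))
                     for x in product([-1, 1], repeat=n)
                     for y in product([-1, 1], repeat=n))
    # ... dominates the value at random interior points of [-1,1]^n x [-1,1]^n
    sample_max = max(abs(x @ A @ y)
                     for x, y in (rng.uniform(-1, 1, (2, n)) for _ in range(20_000)))
    print(vertex_max >= sample_max - 1e-9)   # True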
|
H: Does a statement of the form "for all $X>0$ there exists $x > X$ satisfying some condition" evaluate to "the condition must be true for all $x>0$"?
Question is essentially as the title states. I was inspired by the form which the negation of the Cauchy Criterion takes.
If I have a statement saying "$P$ is only true if for all $X > 0$, there exists an $x > X$ for which $Q$ is true," does this logically evaluate to "$P$ is only true if $Q$ is true for all $x > 0$"?
The original may take the form of $$P\iff \forall X>0\;\exists x> X :Q$$
In theory, one may sketch a proof proceeding by induction, where one might say that since for all $X$ there must be an $x > X$ making $Q$ true, $Q$ must be true for $X = 0.5$ and $x = 1$, and $X = 1$ and $x = 2$, etc. Thus $Q$ must be true for all $x$.
Is this logically permissible?
AI: Here is a counterexample. Say $P(x)$ is true only if $x$ is an even integer. ($P$ and $Q$ could even both be “$x$ is even.” ) Then it is true that, for all $X$, there is $x>X$ such that $P(x)$, but it is not true that, for all $X$, $P(X)$.
|
H: If $T$ is an operator such that $T^2 = \lambda T$, and $\|Tx\| = \|x\|$ for some $x \neq 0$, then prove $\lambda = \pm 1$
Let $T$ be a linear transformation on $\mathbb R^n$ over $\mathbb R$ such that $T^2=\lambda T$ for some non zero $\lambda $ in $\mathbb R.$ If $\|Tx\|=\|x\|$ for some non zero vector $x$ in $\mathbb R^n$ then prove that $\lambda = \pm 1$.
If $x$ is an eigenvector then the result is easily proved by me. How do I prove the result for $x$ in $\mathbb R^n$ which is not an eigenvector? Please help.
AI: I presume $\|\cdot\|$ is the usual Euclidean norm. Consider the matrix
$$T=\pmatrix{2&0\\0&0}.$$
Then $T^2=2T$ but $\|Tx\|=\|x\|$ for
$$x=\pmatrix{1\\\sqrt3}.$$
|
H: Explain this math statement
Quoting a paragraph from Timothy Gowers's The Princeton Companion To Mathematics,
(9) ∀n ∃m (m>n) ∧ (m ∈ P).
In words, this says that for every n we can find some m that is both
bigger than n and a prime. If we wish to unpack sentence (9) further,
we could replace the part m ∈ P by
(10) ∀a,b ab = m ⇒ ((a = 1) ∨ (b = 1)).
Here in $(m ∈ P)$, Gowers means that $m$ belongs to the set of primes. How can $(m ∈ P)$ be rewritten as sentence (10)? As I understand it, sentence (10) explained in English says: for all $a$ and $b$, $ab=m$ implies that either $a=1$, or $b=1$, or both are $1$. So $a=4$, $b=1$ satisfies sentence (10). But $ab = m = 1\cdot 4 = 4$ is not a prime number. This is clearly wrong. What am I missing here?
AI: What you are missing is that $a,b$ is preceded by $\forall$, meaning that for all possible pairs of (presumably positive) integers $a, b$ such that their product equals $m$, either $a = 1$, or $b = 1$. This prevents $m = 4$ because there exists the choice $a = b = 2$ for which $ab = 4$ but neither $a = 1$ nor $b = 1$.
|
H: What is the definition of a variety in Mumford's red book?
In Mumford's red book, prevariety is defined (in II.3) as follows:
If $k$ is an algebraically closed field, a prevariety over $k$ is a reduced and irreducible prescheme of finite type over $k$.
Mumford then starts mentioning variety later on and I wasn't sure what it was exactly. I can not seem to find the definition in the book. Any clarification is appreciated. Thank you.
AI: From the book, definition 2 in chapter I section 6 in the second edition (pg 37):
Definition 2 (Chapter I section 6, pg 37). Let $X$ be a prevariety. $X$ is a variety if for all prevarieties $Y$ and for all morphisms $$f,g:Y\rightrightarrows X$$ $\{y\in Y\mid f(y)=g(y)\}$ is a closed subset of $Y$.
The next proposition explains things a little:
Proposition 4 (Chapter I section 6, pg 37). Let $X$ be a prevariety. Then $X$ is a variety if and only if $\Delta(X)$ is closed in $X\times X$.
In chapter II, we have the same thing happening with schemes and preschemes:
Definition 2 (Chapter II section 6, pg 118): $f(x)\equiv g(x)$ if $f\circ i_x=g\circ i_x$, where $i_x:\operatorname{Spec} k(x)\to K$ is the canonical morphism. Equivalently, this means that $f(x)=g(x)$, and that the 2 maps $f^*_x,g^*_x:k(f(x))\to k(x)$ are equal.
Proposition 4 (Chapter II section 6, pg 118): For all $f,g:K\to X$, $$\{x\in K\mid f(x)\equiv g(x)\}$$ is locally closed.
Definition 3 (Chapter II section 6, pg 118): A prescheme $X$ is a scheme if for all preschemes $K$ and all $K$-valued points $f,g$ of $X$, $\{x\in K\mid f(x)\equiv g(x)\}$ is closed.
Proposition 5 (Chapter II section 6, pg 118): If $X$ is a prescheme over a ring $R$, then the criterion for $X$ to be a scheme is satisfied for all $K,f,g$ if it is satisfied in the case: $K=X\times_{\operatorname{Spec} R} X$, $f=p_1$, $g=p_2$.
Corollary 1 (Chapter II section 6, pg 119): If $k$ is an algebraically closed field, then a prevariety over $k$ is a variety in the sense of Ch. I if and only if it is a scheme.
In the modern language, this is the statement that $X$ is separated. So for Mumford, a variety is a separated prevariety. You'll find some other examples of this in older literature (use of pre-variety or pre-scheme to denote a possibly nonseparated variety or scheme), but this language has been out of fashion for many, many years.
|
H: Query related to mathematical reasoning
I was studying mathematical reasoning, and I guess my book did not explain the rules properly.
There is this statement-:
All natural numbers are even or odd.
This statement seems kind of correct, because a natural number is even or odd. But if you try to break it into two component statements, then
Statement 1 is "All natural numbers are even", which is false.
Statement 2 is "All natural numbers are odd", which again is false.
So in an "and" conjunction, if both component statements are false then the statement must be false.
So where did I go wrong? And if this statement is actually false in mathematical language, then how should we say "all naturals are even or odd"? I mean, I want to say that there are only 2 cases for a natural number: it is even or odd.
AI: Yes, the statement "All natural numbers are even AND all natural numbers are odd" is indeed a false statement. It is equivalent to saying "All natural numbers are even AND odd", which is false, because in fact there is no natural number that is both even and odd. The problem is the "All natural numbers ..." part: it is the quantifier that needs to be changed. A true statement would be "There exists a natural number that is even", together with the analogous statement for odd natural numbers. With this, you have two numbers, $n_1$ and $n_2$, that are even and odd respectively. Then, showing that modulo $2$ every number in $\mathbb{N}$ is congruent to either $n_1$ (which is even) or $n_2$ (which is odd), you have shown that every natural number is even or odd.
|
H: Show that an open interval $(a,b)$ is an open ball in $\mathbb{R}$
My attempt:
Let $(a,b)$ be the open interval. We probably need to show that $(a,b)$ is an open ball so something in this form $B(x,r)$.
So I chose $x=(a+b)/2$. Now let $y \in (a,b)$. Then, $(a-b)/2<(a+b)/2-y<(b-a)/2$.
Let $r = \min\left\{0,\left|\frac{a+b}{2}-b\right|,\left|\frac{a+b}{2}-a\right|\right\}$. So $y \in B(x,r)$.
This has been my attempt,I know it doesn't make any sense but I tried this.
AI: You can simply say that$$(a,b)=B\left(\frac{a+b}2,\frac{b-a}2\right),$$since\begin{align}x\in B\left(\frac{a+b}2,\frac{b-a}2\right)&\iff\left|x-\frac{a+b}2\right|<\frac{b-a}2\\&\iff\frac{a-b}2<x-\frac{a+b}2<\frac{b-a}2\\&\iff a<x<b.\end{align}Concerning your approach, note that $\min\left\{0,\left|\frac{a+b}2-b\right|,\left|\frac{a+b}2-a\right|\right\}=0$.
|
H: Estimating probability of experiencing 100th percentile: is this a valid approach?
Say you work on a website. When someone accesses the website in their browser, then the website makes 5 more calls to different services to load the remaining content. The issue is, until all content is loaded, it takes a long time - the service calls are slow.
We need to improve it. People measure, calculate the average response times and the 100th percentile for the service calls.
But what should we optimise? Try to reduce the average or lower the 100th percentile?
One argument is, the 100th percentile is an outlier, it is rare. Average is what is "usually" experienced.
But if our website makes instead of 5 maybe 25 or 50 or 150 calls, is "experiencing" the 100th percentile of response time really that uncommon? I was trying to find a model to approximate how rare this is.
On each service call you have a $1/100$ chance to be in the highest percentile, since we group our response time measures into $100$ percentiles (from 1st percentile to 100th percentile).
That is, you have a $1-1/100$ chance of not being in the highest percentile.
Assume you make $N$ service calls, then you have a $(1 - 1/100)^N$ chance of not experiencing the highest percentile response time.
Some examples:
N P(not experiencing highest percentile) P(experiencing highest percentile)
1 0.99 0.01
5 0.95 0.05
50 0.60 0.40
...
We can see the larger $N$ becomes, the more likely it gets we at least once experience the highest percentile load time.
The model confirms the idea that the highest percentile is not that rare if you do many calls. But I'm still wondering if this is the correct way to calculate it. Especially step 1, the assumption to have a 1/100 chance to experience the highest percentile still makes me wonder if one can think about it this way or not.
AI: I feel like you are mixing quantiles of a continuum of i.i.d. random variables with a finite number of draws in a way that leads to results that don't add up. This is like "all the children in Lake Wobegon are above average."
You might want to think instead about order statistics. There are $N$ calls in total, but a given person only makes $p = k/N$ of them. If you take $N$ large, each person is making a vanishingly small fraction of the total calls. The asymptotics will be very different than the computations you did, and will be similar to the computation of quantiles (https://en.wikipedia.org/wiki/Order_statistic#Large_sample_sizes). You could probably model each person as a fixed proportion of calls like the $p=k/N$, or as a Poisson distribution to randomize the number of calls they make, compute a distribution of times for fixed $(k,N)$, then take the number of people $N$ to infinity to get a distribution of wait times that is more in line with what you expected.
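To make the two viewpoints concrete, here is a quick Monte Carlo sketch of the asker's model (each call's percentile rank treated as an independent uniform draw; this is an assumption, not data):

    import numpy as np

    rng = np.random.default_rng(42)
    for N in (1, 5, 50):
        ranks = rng.random((100_000, N))           # uniform percentile ranks per call
        hit = (ranks > 0.99).any(axis=1).mean()    # at least one call in the top 1%
        print(N, round(hit, 3), round(1 - 0.99**N, 3))   # empirical vs. (1 - 0.99^N)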
|
H: Does the least value of $\cos A + \cos B + \cos C$ exist, where $A$, $B$, $C$ are angles of a triangle?
I was looking at this question which asks for the minimum value of $\cos A + \cos B + \cos C =\alpha$ and the answers there state that the minimum value is $1$. This value exists for a degenerate triangle.
But in a similar question which asks for the minimum area ($A$), for a given semi-perimeter ($s$), the possible values of $A$ are
$$\frac {s^2}{3\sqrt 3} \tag 1$$
in the case when $a=b=c$,
and
$$\frac {s^2}{4} \tag 2$$
in the case when $s=s-a=s-b=s-c$.
But we reject $(2)$ here as then the triangle wouldn't be an "appropriate" triangle (as my teacher said).
Even though in both $\alpha$'s as well as $(2)$'s case the triangles become degenerate but one of them is acceptable and the other is not. Why so?
Also, if possible, please point out the corrections to any conceptual mistakes that I may have made.
Thank you!
AI: The minimum does not exist.
For $a=b=1$ and $c\rightarrow2^-$ we see that
$$\cos\alpha+\cos\beta+\cos\gamma\rightarrow1.$$
We'll prove that $$\cos\alpha+\cos\beta+\cos\gamma>1.$$
Indeed, we need to prove that:
$$\sum_{cyc}\frac{b^2+c^2-a^2}{2bc}>1$$ or
$$\sum_{cyc}(a^2b+a^2c-a^3)>2abc$$ or
$$(a+b-c)(a+c-b)(b+c-a)>0,$$ which is obvious.
The value $1$ does not occur, which says that the minimal value does not exist.
Now, we see that $\sum\limits_{cyc}\cos\alpha\rightarrow1^+$ for any $a+b\rightarrow c^+$.
Your case with $s-a=s-b=s-c\rightarrow s$ is impossible because $a$, $b$ and $c$ are positive.
For $a=b=c$ we get the maximal value of the sum.
|
H: On the two definitions of weak convergence of measures, which one is true?
Let $X$ be a metric space and $\mu, \mu_1,\mu_2,\ldots$ Borel probability measures on $X$. I have met the following two definitions of the weak convergence of $\{\mu_n\}$:
1. We say $\{\mu_n\}$ converges weakly to $\mu$ if $\int f d\mu_n\to \int fd\mu$ for any $f\in C(X)$, where $C(X)=\{f:X\to R: \text{$f$ is continuous and bounded}\}$;
2. We say $\{\mu_n\}$ converges weakly to $\mu$ if $\int f d\mu_n\to \int fd\mu$ for any $f\in C_c(X)$, where $C_c(X)=\{f:X\to R: \text{$f$ is continuous and has compact support}\}$.
But, generally speaking, $C_c(X)$ and $C(X)$ are not necessarily equal on a metric space. So, which one is true? Or, these are actually two different definitions with the same name?
AI: These are not equivalent. In K L Chung's book the second one is called vague convergence. The first definition is the standard one.
|
H: Prove that $g$ is also continuous at $x=0$.
Let $f:\mathbb{R} \to \mathbb{R}$ be such that $f$ is continuous at $x=0$. For any $r\in\mathbb{R}$, define $g(x) = f(rx)\;\forall x \in \mathbb{R}$. Prove that $g$ is also continuous at $x=0$.
Proof:
If $f(x)$ is continuous at $x=0$, then
$$\forall \epsilon\;\exists \delta\; s.t\;|x-0|<\delta \implies |f(x)-f(0)|<\epsilon$$
Let $x = ry$.
$$\forall \epsilon\;\exists \delta\; s.t\;|ry-0|<\delta \implies |f(ry)-f(0)|<\epsilon$$
$$\therefore g(y)=f(ry) \text{ is continuous at } y=0.$$
Is this proof correct? If not, please explain how to prove the result.
AI: No, it is not correct. You did not prove that there is a $\delta>0$ such that $|x|<\delta\implies|g(x)-g(0)|<\varepsilon$.
If $r=0$, then the statement is trivial. Otherwise, given $\varepsilon>0$, take $\eta>0$ such that $|x|<\eta\implies|f(x)-f(0)|<\varepsilon$. Let $\delta=\frac\eta{|r|}$. Then\begin{align}|x|<\delta&\iff|rx|<\eta\\&\implies|f(rx)-f(0)|<\varepsilon\\&\iff|g(x)-g(0)|<\varepsilon.\end{align}
|
H: Find the P.D.F for these Variables(Jacobian transformation)
$Q)$ Let $X$ and $Y$ be continuous random variables following the uniform distribution on $D=\{(x,y) \vert 0 < x < y < 1 \}$.
Find the joint probability density function of the variables $Y$ and $Z$ (here $Z = {X \over Y}$).
I tried this with a Jacobian transformation $(X,Y) \to (Y,Z)$.
Since $f(X,Y) = 2$ on $D$, the answer would be $2\vert J\vert$.
Then we only have to find $\vert J \vert$ (here $J$ is the determinant of the Jacobian matrix), where $J = \begin{vmatrix} x_y & x_z \\ y_y & y_z \end{vmatrix}$.
So since $Z = {X \over Y}$, $J = \begin{vmatrix} Z & Y \\ 1 & -{X \over Z^2} \end{vmatrix} = 2Y$
(here $Y={X \over Z}$, $X = ZY$).
Therefore my answer is $f(Y,Z)= 4Y$.
But the answer sheet claimed $J = \begin{vmatrix} x_y & x_z \\ y_y & y_z \end{vmatrix} = \begin{vmatrix} Z & Y \\ 1 & 0 \end{vmatrix} = -Y$, so the answer was $f(Y,Z)= 2Y$.
At what point did I make a mistake? I have a doubt about this claim, since the variable $Y$ itself depends on $Z$. So do I have to differentiate $Y= {X \over Z}$, not just $Y$?
Any help would be appreciated. thanks.
AI: Define the following system
$$\begin{cases}
z=\frac{x}{y} \\
v=y
\end{cases}
$$
that is
$$\begin{cases}
x=zv \\
y=v
\end{cases}
$$
The Jacobian is $|J|=v$,
so the joint density
$f_{ZV}(z,v)=2v$
Of course, you also have to define the $(Z,V)$-domain; otherwise the joint density is not a nice density.
Once this is understood, it is straightforward to show that
$Z\sim U(0;1)$
$f_Y(y)=2y \mathbb{1}_{(0;1)}(y)$
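A Monte Carlo sketch to convince yourself (sampling two uniforms and sorting them gives a uniform point on the triangle $D$; the expected values $1/2$ and $2/3$ follow from $Z\sim U(0,1)$ and $f_Y(y)=2y$):

    import numpy as np

    rng = np.random.default_rng(1)
    pts = rng.random((2_000_000, 2))
    x, y = pts.min(axis=1), pts.max(axis=1)   # uniform on 0 < x < y < 1
    z = x / y
    print(np.mean(z), np.mean(y))             # approx 0.5 and 0.667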
|
H: How to check $f(x) = x^{-\alpha}, x \in (0,1]$ is Lebesgue integrable?
Let $f(x) = x^{-\alpha}, ~ x \in (0,1], ~ \alpha \in \mathbb{R}$, then how can we show that $f$ is Lebesgue integrable?
I can show that $f$ is measurable but I don't know how to proceed further. I think I need to use monotone convergence theorem to find a sequence such that integral will be finite, but how can I construct such a sequence?
AI: Your function is integrable iff $1-\alpha >0$, i.e. $\alpha <1$.
Let $f_n(x)=x^{-\alpha}$ for $x >\frac 1n$ and $0$ for $x \leq \frac 1 n$. The integral of $f_n$ is actually a Riemann integral and it can be evaluated explicitly. Now apply Monotone Convergence Theorem to see when $f$ is integrable.
[$(f_n)$ is a sequence of non-negative measurable functions increasing to $f$ at every point. Hence $\int_0^{1}f=\lim \int_0^{1} f_n=\lim \int_{1/n}^{1} f_n=\lim \frac{x^{1-\alpha}}{1-\alpha}\Big|_{1/n}^{1}$ for $\alpha\neq 1$ (the case $\alpha=1$ gives $\ln x$ instead). I will let you finish the argument.]
|
H: K(subset of C[0,1]) does not attain the minimal norm
Let $V=C[0,1]$ and $K\subset V$ be defined by $K= \{f \in V | \int_{0}^{1/2} f(t) dt - \int_{1/2}^{1} f(t) dt = 1\}$.
Show that $K$ does not admit an element with minimal norm.
AI: Hints:
Show that if $\|f\| < 1$ then $f \notin K$. Hence the distance is at least one.
Let $f_n$ be the function whose graph is given by joining the points
$(0,{1 \over 1-{1 \over n}}), ({1 \over 2}-{1 \over n},{1 \over 1-{1 \over n}}), ({1 \over 2}+{1 \over n},-{1 \over 1-{1 \over n}}), (1,-{1 \over 1-{1 \over n}})$ and show that $f_n \in K$ and $\|f_n\| \to 1$.
Hence $\inf_{f \in K} \|f\| = 1$.
Now suppose $\|f\| = 1$ and $f \in K$ and show that this contradicts continuity of $f$.
To see the latter, let $a=\int_{0}^{1/2} f(t) dt, b=\int_{1/2}^{1} f(t) dt$, note that
$|a|,|b| \le {1 \over 2}$ and $a-b=1$. Hence
$a=1+b \ge {1 \over 2}$ and so $a={1 \over 2}, b = -{1 \over 2}$.
Now note that $\int_{0}^{1/2} (1-f(t)) dt = 0$ and since $f(t) \le 1$ we see that $f(t) = 1$ for $t \in [0, {1 \over2}]$. Repeating for the second half, we see that $f(t) = -1$ for $t \in [{1 \over2},1]$. This gives a contradiction, since $f({1 \over 2})$ cannot be both $1$ and $-1$.
|
H: Why do the two methods give different answers?
Question
If the kinetic energy of a body is increased by $300\%$, its momentum will increase by:
Method 1:Using proportionality
$$Ke=\frac{P^2}{2m}$$ where $Ke$ is kinetic energy and $P$ is momentum.
Since mass is constant,
$$\frac{Ke_i}{Ke_f}=\frac{P_i^2}{P_f^2}$$
$$\frac{Ke_i}{4Ke_i}=\frac{P_i^2}{P_f^2}$$
$$P_f=2P_i$$
$$\Delta P=2P_i-P_i=P_i$$
Change in momentum is $100\%$.
METHOD 2:Using differentiation
$$Ke=\frac{P^2}{2m}$$
Differentiating both sides
$$dKe=\frac{2P\,dP}{2m}$$
Approximating $$dKe=\Delta Ke$$
$$\Delta Ke=3Ke$$
$$3Ke=\frac{2P\,dP}{2m}$$
$$\frac{3Ke\cdot 2m}{2P}=dP$$
$$Ke\cdot 2m=P^2$$
Substituting,
$$\frac{3P^2}{2P}=dP$$
$$\frac{dP}{P}\times 100=\frac32\times 100=150\%$$
Change in momentum $=150\%$.
Why do the two methods give different answers?
AI: Notice, $dKe$ is an infinitesimally small change in kinetic energy, i.e. $dKe\to 0$, while $\Delta Ke$ is a change in kinetic energy much, much larger than $0$, i.e. $\Delta Ke \gg 0$ (in this case). Thus
$$dKe\ll\Delta Ke.$$
Therefore using $dKe=\Delta Ke$ gives a large difference in the results due to overestimation.
The first method computes $\%\Delta P$ correctly by taking the actual difference $\Delta Ke$ (without any approximation), while the second method computes $\%\Delta P$ incorrectly due to the overestimation from taking $dKe=\Delta Ke$.
|
H: A non-empty subset of integers bounded above has a maximum
Suppose the set $\mathrm A \neq \emptyset$, $\mathrm A \subseteq \Bbb Z$, is bounded above. Then, since $\Bbb Z \subseteq \Bbb R$, I know by the completeness axiom that there exists a supremum of the set $\mathrm A$, say $s = \sup(\mathrm A)$. But I need to show this is in fact the maximum of the set $\mathrm A$. For that, I know it has to be an element of the set $\mathrm A$. How can we show that $s \in \mathrm A$?
AI: Let $A \subseteq \Bbb N$ be bounded above, say by $M \in \Bbb N$ ("bounded above" is unspecified, so by default it will mean bounded in $\Bbb N$; if "bounded above in $\Bbb R$" is what is meant, by some bound $B \in \Bbb R$, find an integer larger than or equal to $B$ by the Archimedean property of $\Bbb R$, so such an $M$ exists in either case).
Then $B=\{n \in \Bbb N: \forall a \in A: a \le n\}$ (the set of all upper bounds of $A$) is a non-empty subset of $\Bbb N$, so it has a minimum $m_0$ by the well-foundedness of $\Bbb N$. If $m_0 \notin A$, then $m_0 -1$ is also in $B$: if $n \in A$ then $n < m_0$ ($n \le m_0$ holds by definition, and $n = m_0$ would give $m_0 \in A$, contrary to assumption), and so $n \le m_0 -1$; as $n$ is arbitrary, $m_0 -1 \in B$. But this contradicts the minimality of $m_0$. So $m_0 \in A$ and is equal to $\max(A)$.
So all we need is the well-foundedness of $\Bbb N$ and $n < m \to n \le m-1$ as an essential order property of $\Bbb N$. Completeness of $\Bbb R$ is "overkill".
|
H: Definition of Crossed homomorphism
Suppose a group $G$ is acting on an abelian group $M$. Then a mapping $\phi: G \rightarrow M$ is called a crossed homomorphism if it satisfies the condition: $\phi(gh)=\phi(g)(g\cdot \phi(h))$ for every $g,h\in G$. My question is, how we will specify the action of $G$ is left or right in the definition of a crossed homomorphism from $G$ to $M$? I found these definitions:
If the action is left then we write $\phi(gh)=\phi(g)(g\cdot \phi(h))$. If the action is right, then we write $\phi(gh)=(\phi(g)\cdot h)\phi(h)$.
My doubt is, if the action is right, Why not $\phi(gh)=\phi(g)(\phi(h)\cdot g)$ ? Can anyone please clear this concept of left or right?
AI: Normally the easiest way to deal with such problems is to avoid ever considering right group actions. Suppose that we are given a group $G$ and a right $G$-module $M$ with action $\star$.
Then we may make $M$ a left $G$-module by defining a left $G$-action via
$$g \cdot m = m \star g^{-1}$$
Edit: With the clarifying comments (thank you @Derek Holt, and my apologies to those who enjoy right actions) it seems useful to add further explanation.
If $\phi : G \to M$ is a cocycle under this induced left action then the induced map $\psi : G \to M$ (remembering that we first apply the inversion map $G \to G : g \mapsto g^{-1}$) satisfies the following condition
\begin{align*}
\psi(gh) = \phi((gh)^{-1}) &= \phi(h^{-1}g^{-1}) \\
&= h^{-1} \cdot \phi(g^{-1}) + \phi(h^{-1}) \\
&= \psi(g)\star h + \psi(h)
\end{align*}
which is precisely the cocycle condition we are claiming.
|
H: PDE with change of variable
In my text book there is an example of PDE with change of variable like this:
$$\frac{\partial^2f}{\partial t^2}-c^2\frac{\partial ^2f}{\partial x^2}=0$$
with $\left\{\begin{matrix}
u=x+ct & \\
v=x-ct &
\end{matrix}\right.$
I can follow my textbook until here: $\frac{\partial f}{\partial v} = g(v)$. Then my book suggests that we should integrate both sides with respect to $v$, but it doesn't explain why the result becomes $f(u,v)=G(v)+H(u)$, where $G$ is an antiderivative of $g$ and $H$ is an arbitrary function of one variable. When I integrate I get something like this: $$\int \frac{\partial f}{\partial v}\, dv = F(u,v) + H(u)$$ $$\int g(v)\, dv = G(v) + C $$ $$F(u,v) + H(u)=G(v) + C$$
$$ F(u,v)= G(v) - H(u) + C$$ I wonder why my textbook wrote it as $F(u,v) = G(v) + H(u)$. Can someone please explain it to me? I must say that I am a Calc 2 student and don't have much experience with PDEs.
Can someone please explain to me, I must say that i am a calc 2 student and don't have much experience in PDE.
AI: Recall that the function $f$ depends on the two variables $u$ and $v$.
Integrating $g(v)$ with the absence of $u$ would lead to $G(v) + C$, where $C$ is a real constant and $G$ is an antiderivative of $g$.
But since you here also have the $u$, the $C$ is not enough. $C$ has to be replaced by an arbitrary function $H(u)$, because taking the partial derivative of $H(u)$ with respect to $v$ yields $0$. Of course, in $H(u)$ all constants $C$ are included as a special case. Hence
$$\int g(v) \ \mathrm dv = G(v) + H(u). \tag{1}$$
Your approach with
$$F(u, v) = G(v) - H(u) + C \tag{2}$$
is equivalent to this. In (2), include the $C$ in $H(u)$ and replace $-H(u)$ with $H(u)$ and you obtain (1).
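If it helps, the general solution can also be verified symbolically; here is a small sympy sketch (with $H$ and $G$ left as arbitrary functions):

    import sympy as sp

    x, t, c = sp.symbols('x t c')
    H, G = sp.Function('H'), sp.Function('G')
    f = H(x + c*t) + G(x - c*t)     # f(u,v) = H(u) + G(v) pulled back to (x,t)
    print(sp.simplify(sp.diff(f, t, 2) - c**2 * sp.diff(f, x, 2)))   # 0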
|
H: A question in proof theorem 12.9 of Tom M Apostol ( Mathematical Analysis)
While self studying Mathematical Analysis from Tom M Apostol, I am unable to deduce a statement in following proof :
Question: I am unable to deduce how the existence of $\delta$, such that $x+tu$ belongs to $S$ for all real $t$ in that interval, is proved in line 1.
I tried using the definition of $L(x, y)$ and the fact that $L(x, y)$ belongs to $S$, and I also tried putting $u = x-y$ into the definition of $L(x, y)$, but I couldn't prove it.
Any help please.
AI: Hint: for $0 \leq t \leq 1$ we have $x+tu=x+t(y-x)=ty+(1-t) x$. This lies in $L(x,y) \subseteq S$. Now, for $-\delta <t<0$ we have $|(x+tu) -x|\leq \delta |y-x|$. Since $x \in S$ and $S$ is open it follows that $x+tu \in S$ provided $\delta >0$ is sufficiently small. A similar argument works for $1 <t <1+\delta$.
You have to choose $\delta$ such that $B(x,\delta |y-x|) \subset S$ and $B(y,\delta |y-x|) \subset S$
|
H: How to find the length if a segment given this problem?
Let a line with an inclination angle of $60$ degrees be drawn through the focus $F$ of the parabola $y^2 = 8(x+2)$. If the two intersection points of the line and the parabola are $A$ and $B$, and the perpendicular bisector of the chord $AB$ intersects the $x$-axis at the point $P$, then what is the length of the segment $PF$?
My approach:
I tried to find the focus of the parabola and got $(0,0)$, but I am now confused about how to find the intersection points, and also about how to find the perpendicular bisector of the chord $AB$. Can somebody guide me on how to solve this?
AI: The inclination angle of a line $(y-y_0)=k(x-x_0)$ is always $\arctan(k)$, so we have $AB:\ y=\sqrt{3}x$. Then to find the points $A,B$ we would solve $3x^2=8(x+2)$ $\Leftrightarrow$ $3x^2-8x-16=0$, but in this case there's no need to solve it, because we only need the midpoint of $AB$:
$M=\left(\frac{x_A+x_B}{2},\frac{y_A+y_B}{2}\right)$. By Vieta's formulas $x_A+x_B=\frac{8}{3}$ and $y_A+y_B=\sqrt{3}(x_A+x_B)=\frac{8\sqrt{3}}{3}$, and the equation of the perpendicular bisector is then
$$-\frac{1}{\sqrt{3}}(x-\frac{4}{3})=(y-\frac{4\sqrt{3}}{3})$$
Setting $y=0$ we get
$$-\frac{1}{\sqrt{3}}(x-\frac{4}{3})=-\frac{4\sqrt{3}}{3}$$
$$x-\frac{4}{3}=4$$
$$FP=x=\frac{16}{3}$$
|
H: Evaluate $\limsup\sqrt[n]{|a_n|}$ for $a_n=\left[1-(-2)^n\right]$
I am trying to evaluate the limit $$\limsup\limits_{n\to \infty}\sqrt[n]{|a_n|}$$ where
$$a_n=\left[1-(-2)^n\right].$$
What I did here is notice that this sequence can be written as the following:$$ a_{2n+1}=3$$ $$a_{2n}=1$$
From here it is really clear that $\limsup a_n=3$, but this is a wrong answer; the correct answer is $2$.
I do not understand what is wrong in my assumptions
AI: Your computations are incorrect. I am guessing that you confused $(-2)^n$ with $2 \times(-1)^n$.
The answer is $2$ indeed :
You can check that $|a_n| = 2^n + e_n$, where $(e_n)$ is bounded. Thus, $|a_n|^{\frac1n} = \exp(\frac1n\ln (2^n + e_n)) = \exp\left(\frac{\ln(2^n)+\ln\left(1+\frac{e_n}{2^n}\right)}n\right)\to 2$.
|
H: One of my math problems
For modules, let $M = M_1 ⊕ M_2$ and let $f :M→N$ be an epimorphism with $K = \ker f$ and $N = f(M_1) + f (M_2)$.
(1) Prove that if $K= ( K \cap M_1)+ (K \cap M_2)$, then this sum is direct.
Could someone give me hints about this question? I am having trouble figuring out how to solve it. Thanks a lot.
AI: Let $x_1 , x_2$ be elements of $M_1$ and $M_2$ respectively. Assume that $f(x_1) + f(x_2) = 0$. We want to show that $f(x_1) = f(x_2) = 0$.
We know that $x_1 + x_2 \in K$. So, by the hypothesis, there exist $y_1, y_2$ in $(K\cap M_1)$ and $(K\cap M_2)$ respectively such that $x_1+x_2 = y_1+y_2$. Now, we notice that $x_1-y_1= y_2-x_2 \in M_1 \cap M_2 = \lbrace 0 \rbrace$.
Thus $x_1 = y_1$ and $x_2 = y_2$. Finally, since $y_1, y_2 \in K$, we get $f(x_1) = f(x_2) = 0$, as desired.
|
H: show that if $\lim_{r\to\infty} \min_{|z|=r}|f(z)|$ converges to a positive limit, and it's holomorphic then it's constant
Given $f$ holomorphic in $\mathbb{C}$ and $$\lim_{r \to \infty} \min_{|z|=r}|f(z)|>0$$
show that the function is constant.
It is probably somehow related to Liouville's theorem.
I am able to show that the function is bounded from below. And I thought about taking $-f$ and showing that it converges too, and is therefore bounded above. But I believe it isn't formal enough.
Does anyone have a formal explanation?
AI: Let
$$ g(z)=\frac{1}{f(1/z)}.$$
Then $g$ is holomorphic and bounded on a small punctured neighborhood of $0$. So $z=0$ is a removable singularity of $g$. This tells that $\lim_{z\to 0}g(z)$ exists, which then proves that $f(z)$ converges as $z\to\infty$. (The possibility of $|f(z)|\to\infty$ as $z\to\infty$ is excluded by the condition.) Therefore $f$ is bounded and hence constant.
|
H: Let $G$ be an abelian group with elements $x, y$ of orders $m$ and $n$ respectively. Moreover, $\gcd(m,n)=1$. Then the order of $xy$ is $mn$
I'm trying to prove this special case of a well-known result. I'm actually very happy because the entire proof is by myself. Could you please confirm if it is correct?
Let $G$ be an abelian group with elements $x, y$ of orders $m$ and $n$ respectively. Moreover, $\gcd(m,n)=1$. Then the order of $xy$ is $mn$.
My attempt:
WLOG, we assume $m<n$. Clearly, $(xy)^{mn}=x^{mn} y^{mn} = (x^m)^n (y^n)^m = 1$. Assume the contrary that $o(xy)=t$ for some $1<t <mn$. By Lagrange's theorem, $t | mn$. Because $\gcd(m,n)=1$, either $t | m$ or $t | n$.
If $tq = m$ for some $1<q<m$
Then $(xy)^t = 1=x^m = x^{tq}$. Then $y^t = x^{t(q-1)}$ and thus $o(\langle y^t \rangle) = o(\langle x^{t(q-1)}\rangle)$. Because $\gcd(t,n) = 1$, $o(\langle y^t \rangle) = n$. It follows that $n = o(\langle x^{t(q-1)}\rangle) \le o(\langle x\rangle) = m$, which is a contradiction.
If $tq = n$ for some $1<q<n$
Then $(xy)^t = 1=y^n = y^{tq}$. Then $x^t = y^{t(q-1)}$ and thus $o(\langle x^t \rangle) = o(\langle y^{t(q-1)}\rangle)$. Because $\gcd(t,m) = 1$, $o(\langle x^t \rangle) = m$. It follows that $m = o(\langle y^{t(q-1)}\rangle)$. On the other hand, $\langle y^{t(q-1)}\rangle$ is a subgroup of $\langle y \rangle$. By Lagrange's theorem, $m | n$, which is a contradiction.
Update: I've just found a simple fix
Let $t = o(xy)$. We have $(xy)^t = 1$ and thus $x^t = y^{-t}$ Then $k := o(\langle x^t\rangle) = o(\langle y^{-t} \rangle)$. This means $k| m$ and $k | n$. It follows that $k = 1$. Hence $\langle x^t\rangle = \langle y^{-t}\rangle = \{1\}$. Then $x^t = y^{-t}=1$ and thus $m|t$ and $n|t$. Because $\gcd(m,n)=1$, $t$ is a multiple of $mn$. Moreover, $(xy)^{mn}=1$. As such, $t=mn$.
AI: I'm afraid that $t \mid mn$ and $\gcd(m,n) = 1$ do not imply $t\mid m$ or $t\mid n$...
Hint: if $t=o(xy)$, what can you say about $x^t$?
|
H: Compact normed real - spaces.
Is there a compact normed real space which contains more than one element?
I'm trying to grab some intuition on this subject. I would love to get some intuition if possible.
AI: No, there is not. Such a space would have a vector $v\ne0$, and then the sequence $(nv)_{n\in\Bbb N}$ would have no convergent subsequence (since $\lim_{n\to\infty}\|nv\|=\infty$). So, the space is not compact.
|
H: How to find the curve whose tangent line is always perpendicular to that of all parabolas centered at x=0 at the point of intersection?
Basically, I’m trying to solve the following differential equation, but for some reason I’m getting the wrong result and I don’t fully understand why.
$$ \frac{\mathrm{d} y}{\mathrm{d} x} = \frac{-1}{\frac{\mathrm{d} ax^2}{\mathrm{d} x}} $$
By $ \frac{\mathrm{d} y}{\mathrm{d} x} $ I'm referring to the slope of the curve I'm trying to find at every point it has (x,y). By $ \frac{\mathrm{d} ax^2}{\mathrm{d} x} $ I'm referring to the slope of the parabola with the peak at x=0. Since the tangents must be perpendicular one slope must be -1 over the other.
To me it seems like it describes what I mean, however when I try to solve it, I get:
$$ \frac{\mathrm{d} y}{\mathrm{d} x} = \frac{-1}{2ax} $$
$$ \mathrm{d} y = \frac{-1}{2ax}\mathrm{d} x $$
$$ y = \int \frac{-1}{2ax}dx $$
$$ y = \frac{-1}{2a}\ln x + c $$
Which is not what I'm looking for:
(graph of $x^2$ and its corresponding curve)
Sure, they ARE perpendicular, but they aren't perpendicular regardless of my choice of parabola; that is, the curve isn't also perpendicular to, say, $2x^2$, at their point of intersection.
I'm sorry if I'm not being concise with what I want to end up with, but hopefully I can clear things up with this picture. As you can see, $ y^2 + \frac{1}{2}x^2 = 1 $ is perpendicular at all points of intersection with any parabola of the form $ ax^2 $
Can anybody please tell me what I did wrong? Considering that my "wrong" result is still perpendicular to a parabola, I don't think I made a mistake while solving the equation itself, but rather that I'm solving the wrong equation. If that is the case, I wish to know what the corresponding (or "correct") equation is, why, and what the meaning of the one I solved here is.
AI: You obtained the curve which is perpendicular to the parabola $y=ax^2$, but not to all parabolas. If you want to have the curve which is perpendicular to all parabolas, you have to make $a$ dependent of $x$ and $y$ - at every point, $y(x)$ is perpendicular to $ax^2$ for different $a$.
Indeed you have
$$ \frac{dy}{dx} = \frac{-1}{2ax}, $$
but at a point $x$, $y(x)$ crosses the parabola with $a=\frac{y}{x^2}$. So if we plug this $a$ into an equation, we have
$$ \frac{dy}{dx} = \frac{-x}{2y} $$
and after solving it we get an ellipse.
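A quick numeric check that the ellipse really is orthogonal to every parabola $y=ax^2$ at their intersection (a Python sketch; the sampled values of $a$ are arbitrary):

    import numpy as np

    for a in (0.5, 1.0, 3.0):
        # intersection of y = a x^2 with y^2 + x^2/2 = 1: a^2 (x^2)^2 + x^2/2 - 1 = 0
        x2 = (-0.5 + np.sqrt(0.25 + 4 * a**2)) / (2 * a**2)
        x = np.sqrt(x2); y = a * x2
        slope_parabola = 2 * a * x
        slope_ellipse = -x / (2 * y)   # implicit differentiation of y^2 + x^2/2 = 1
        print(a, slope_parabola * slope_ellipse)   # exactly -1 each time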
|
H: Spectral decomposition of a self-adjoint operator.
Let $\mathcal{H}$ be a Hilbert space, $A$ be a self-adjoint operator of $B(\mathcal{H})$. If $0\le A\le I$, how to show that there are mutually commutative projections $\{P_n\}_{n=1}^{\infty}$ such that $A=\sum_{n=1}^{\infty} \frac{1}{2^n} P_n$?
I know there is a decomposition for compact normal operator $N\in B(\mathcal{H})$(GTM 209 Theorem 2.8.2):
$$
N=\sum_{n=1}^{\infty} \lambda_n E_n,
$$
where $E_n$ are mutually orthogonal projections onto $H_{\lambda_n}=\{\xi\in \mathcal{H}: N\xi=\lambda_n \xi\}$.
My questions are
Can we prove $A$ is compact by $0\le A\le I$? And how to do that?
Can I solve this problem by writing
$$
A=\sum_{n=1}^{\infty} \frac{1}{2^n}\cdot \left(2^n\lambda_n E_n\right)\colon=\sum_{n=1}^{\infty} \frac{1}{2^n} P_n
$$
?
Could you please give me more details? I am not sure about my thoughts. Thank you very much!
AI: Let $\mu$ denote the projection-valued measure associated to $A$. Note that for any two sets $X,Y\subset\Bbb R$ you have that $\mu(X)$ and $\mu(Y)$ are commuting projections. By the spectral theorem you can assume $A$ is of the form $f\mapsto (x\mapsto x\cdot f(x))$ on $L^2([0,1],d\lambda)$ for some measure $\lambda$; this will help you visualise what's happening.
Note that $A-\frac12\mu([\frac12,1])$ then corresponds to multiplication with $x-\frac12 \chi_{[\frac12,1]}(x)$, i.e. it is $x$ until the value $1/2$ and then resets to $0$, after which it grows to $1/2$ again.
Do this visualisation again: what function grows to $\frac14$ and then resets to $0$, then grows to $\frac14$ and resets, and so on? The function would be
$$x-\left(\frac12\chi_{[\frac12,1]}(x) + \frac14\chi_{[\frac14,\frac12)}(x) + \frac14\chi_{[\frac34,1]}(x)\right)$$
now for the next step you want it to reset in intervals of $\frac18$, so you will need to add some more projections. Continuing in this way, and being more explicit, you get that
$$P_1=\mu([\frac12,1]), \quad P_2=\mu([\frac14,\frac12)\,\cup\,[\frac34,1]) \\
P_3=\mu([\frac18,\frac14)\,\cup\,[\frac38,\frac12)\,\cup\,[\frac58,\frac34)\,\cup\,[\frac78,1])$$
and so on, it can be viewed as an exercise to get the explicit forms of the sets you are interested in.
|
H: Series convergence test, $\sum_{n=1}^{\infty} \frac{(x-2)^n}{n3^n}$
I'm trying to find all $x$ for which $\sum_{n=1}^{\infty} \frac{(x-2)^n}{n3^n}$ converges. I know I need to check the ends ($-1$ and $5$) but I'm not sure what to happen after that. I'm pretty sure I'd substitute the values of $x$ into the sums and then I'd use convergence tests to see what works, but I always get stuck.
Apparently, I'm supposed to get the alternating harmonic series test for the $-1$ and the harmonic series test for $5$ but I'm unable to manipulate the series to get this. I've tried ratio tests but they don't simplify into what I want.
Actually, I figured it out... I was writing down $x+2$ rather than $x-2$ and now it all makes sense.
AI: By the ratio test, every $x$ value between $-1$ and $5$ makes the series converge.
We just need to find out whether $x=-1$ or $x=5$ makes it converge.
$x=-1$: The series becomes $$\sum_{n=1}^{\infty} (-1)^n/n.$$ The sequence without the $(-1)^n$ factor is: 1. always positive, 2. has limit zero, and 3. decreasing. Therefore, by the alternating series test, the series converges.
$x=5$: The series becomes $$\sum_{n=1}^{\infty} 1/n.$$ We know that this series diverges by the p-series test. So, the interval of convergence is $$-1\leq x< 5.$$
|
H: If $P$ is false, and $Q$ is true, then why does $P \implies Q$?
I have often seen truth tables similar to this one:
The first two rows, where $P$ is true, make sense to me. However, why does $P$ being false and $Q$ being true mean that $P \implies Q$?
To make the question clearer to myself, I came up with the following example: let $P$ be whether or not it is raining. Let $Q$ be whether or not I am wearing a coat. Then, there are $4$ possibilities:
1. It is raining; I am wearing a coat.
2. It is raining; I am not wearing a coat.
3. It is not raining; I am wearing a coat.
4. It is not raining; I am not wearing a coat.
$P \implies Q$ is the same as saying if it is raining, then I will wear a coat. This statement says nothing about what happens when it is not raining. Therefore, why do we say not raining $\implies$ coat?
AI: The reason is that, in mathematics, "$P \implies Q$" is equivalent to "$Q$ or not $P$". Thus if $Q$ is true, $Q$ or not $P$ is also true, whatever $P$ is.
|
H: Floor function parity problem
Prove that for every natural $k$ the number $A_k=\lfloor(5+\sqrt{19})^k\rfloor$ is always odd.
Progress that I' ve done is:
I noticed $9^k<A_k<(9.5)^k$.
I also tried an induction approach: I used the binomial theorem and rewrote the expression as $\lfloor(9+(\sqrt{19}-4))^k\rfloor$. But so far none of these approaches has led me to anything.
Could you give me some tips.
AI: Hint: For all integers $k \ge 0$,
$$f(k) = (5 + \sqrt{19})^k + (5 - \sqrt{19})^k \tag{1}\label{eq1A}$$
is an integer. You can fairly easily prove this using something like the binomial theorem, where the pairing of terms also gives useful parity information. In addition, note $0 \lt 5 - \sqrt{19} \lt 1$.
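A numeric sketch of where the hint leads (my addition: since $5\pm\sqrt{19}$ are the roots of $z^2-10z+6=0$, $f$ satisfies the integer recurrence $f(k)=10f(k-1)-6f(k-2)$ with $f(0)=2$, $f(1)=10$):

    # f(k) is always even, and since 0 < (5 - sqrt(19))^k < 1 for k >= 1,
    # floor((5 + sqrt(19))^k) = f(k) - 1, which is therefore odd.
    f0, f1 = 2, 10
    for k in range(2, 30):
        f0, f1 = f1, 10 * f1 - 6 * f0
        assert f1 % 2 == 0
    print("f(k) is even for all sampled k, so the floor f(k) - 1 is odd")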
|
H: I solved but don't know if it is correct, can you help me? Showing $P(X\cup Y)\approx P(X)\times P(Y)$
Question:
$X, Y$ are infinite sets that are not empty, and $X\cap Y=\emptyset$.
Show $P(X\cup Y)\approx P(X)\times P(Y)$
Hi! I tried to solve the question I wrote above, but I don't know if it is correct. Can you check if it is correct, and if not can you show me the correct one?
Thanks in advance.
$\approx$'s definition: for $n\in\mathbb N$, if $X\approx n$ for a set $X$, then $X$ is a finite set.
And P is Power Set.
Here is my solution:
$(\Rightarrow )$
Let $a=(X,Y)\in P(X\cup Y)$
$$\Rightarrow a\in (X\cup Y)$$
$$X\cap Y=\emptyset \Rightarrow (a\in X\wedge a\not\in Y)\vee (a\not\in X\wedge a\in Y)$$
$$[a\in P(X)\wedge a\not\in P(Y)]\vee [a\not\in P(X)\wedge a\in P(Y)]$$
$$[a\in P(X)\times P(Y)]\vee [a\in P(X)\times P(Y)]$$
$(\Leftarrow)$
Let $a\in [P(X)\times P(Y)]$
$$\Rightarrow [a\in P(X)\wedge a\not\in P(Y)]\vee [a\not\in P(X)\wedge a\in P(Y)]$$
$$\Rightarrow [a\in X\wedge a\not\in Y]\vee [a\not\in X\wedge a\in Y]$$
$$\Rightarrow a\in X\cup Y\Rightarrow a\in P(X\cup Y)$$
AI: Here is just a remark. I put it as an answer because it is too long for a comment. I take it that $P \approx Q$ means the sets $P$ and $Q$ are equicardinal.
Let $X$ and $Y$ be arbitrary sets, which are not necessarily infinite, nonempty, or disjoint. Then, there exists a bijection $f:\mathcal{P}(X\cap Y)\times \mathcal{P}(X\cup Y)\to \mathcal{P}(X)\times\mathcal{P}(Y)$. This bijection can be defined as follows: for $A\subseteq X\cap Y$ and $B\subseteq X\cup Y$, let
$$f(A,B):=\Big(A\cup (B\setminus Y),B\cap Y\Big)\,.$$ The inverse $f^{-1}: \mathcal{P}(X)\times\mathcal{P}(Y)\to\mathcal{P}(X\cap Y)\times \mathcal{P}(X\cup Y)$ of $f$ is given by
$$f^{-1}(M,N):=\Big(M\cap Y,(M\setminus Y)\cup N\Big)$$
for all $M\subseteq X$ and $N\subseteq Y$.
Let $\sqcup$ denote disjoint union, which is usually defined as
$$P\sqcup Q:=\big(P\times\{1\}\big)\cup \big(Q\times\{2\}\big)$$
for all sets $P$ and $Q$. Define the bijection $\phi:(X\cap Y)\sqcup (X\cup Y)\to (X\sqcup Y)$, which sends
$(t,i)$ with $t\in X\cap Y$ to $(t,i)$ for each $i\in\{1,2\}$,
$(t,2)$ with $t\in (X\setminus Y)$ to $(t,1)$, and
$(t,2)$ with $t\in (Y\setminus X)$ to $(t,2)$.
The inverse $\phi^{-1}: (X\sqcup Y) \to (X\cap Y)\sqcup (X\cup Y)$ sends
$(t,i)$ with $t\in X\cap Y$ to $(t,i)$ for each $i\in\{1,2\}$,
$(t,1)$ with $t\in (X\setminus Y)$ to $(t,2)$, and
$(t,2)$ with $t\in (Y\setminus X)$ to $(t,2)$.
We can see that $f$ lifts the bijection $\phi$ in the sense that, if $f(A,B)=(M,N)$, then
$$\phi(A\sqcup B)=M\sqcup N$$
for all $A\subseteq X\cap Y$ and $B\subseteq X\cup Y$.
|
H: Finding The Tangent Line Of $y = \sin x$
Hello everyone. I have a tangent line with slope $a$ to $y = \sin x$ at two points $(u_1,v_1), (u_2 ,v_2)$.
How can I prove that $\tan a = a$?
My direction was to express $a$ with the points by $a = \frac{v_1-v_2}{u_1-u_2}$,
and the tangent line equation is $y = ax -au_1 +v_1$
$f(x) = \sin x \iff f'(u) = \cos u$
so $\cos u_1 = a$ and $\cos u_2 = a$?
AI: You can't, as there are more solutions. Let $(u,\sin(u))$ and $(v,\sin(v))$ be two points. (We exclude the trivial case $v=u+2k\pi$.) The equations of their tangents are
$$
\begin{align}
y-\sin(u)&=\cos(u)(x-u)\\
y-\sin(v)&=\cos(v)(x-v).
\end{align}
$$
We have $\cos(u)=\cos(v)$, which gives $v=2k\pi-u$ for some $k\in\mathbb Z$. Furthermore, we must have equal $y$-intercepts, that is
$$\sin(u)-u\cos(u)=\sin(v)-v\cos(v).$$
Plugging in $v=2k\pi-u$ and solving for $\tan(u)=\sin(u)/\cos(u)$ finally gives
$$\tan(u)=u+k\pi.$$
Here's an example where $k=5$.
|
H: An ultraweakly continuous functional on the unit ball is ultraweakly continuous everywhere.
Given a von Neumann Algebra $\mathcal{M} \subset \mathcal{B}(\mathcal{H})$, one defines a normal linear functional on $\mathcal{M}$ to be an ultraweakly continuous linear functional. Usually it's proven that this property is equivalent to preserving suprema of bounded increasing nets/being completely additive. However, it is important to know that $\varphi:\mathcal{M} \to \mathbb{C}$ is ultraweakly continuous on $\mathcal{M}$ iff it is only ultraweakly continuous on the unit ball of $\mathcal{M}$ (e.g. in proving that the set of normal functionals is a Banach space). I couldn't figure out a proof of this fact, the literature somehow ignores it/considers it as trivial. Any reference/clear proof will be greatly appreciated.
This of course can't be true for the usual WOT topology (on the unit ball continuity in both is the same), but in all attempts of proving this fact I didn't see a difference between using WOT and the ultraweak topology, so I couldn't devise a full proof.
AI: Let $a_\alpha$ be a bounded positive increasing net with least upper bound $a$. Then $a_\alpha\to a$ in the ultra-strong topology, which is stronger than the ultra-weak topology, hence $a_\alpha\to a$ ultra-weakly. It sounds like you know this statement, but if you do not, you can consult Lemma 2.4.19 in the book by Bratteli and Robinson.
Note then that $a_\alpha/(2\|a\|)$ is contained entirely in the unit ball and it converges to $a/(2\|a\|)$ which is also contained in the unit ball. As such if $\varphi$ is ultra-weakly continuous on the unit ball you get that:
$$\varphi(a_\alpha) = 2\|a\|\,\varphi(a_\alpha/(2\|a\|) \,)\to 2\|a\| \,\varphi(a/(2\|a\|)\,) = \varphi(a),$$
so $\varphi$ preserves suprema of bounded positive nets, hence is normal. (You did not include "positivity" in your formulation, but if this bothers you then changing the net to $\sup_\alpha\|a_\alpha\|\Bbb1 - (a-a_\alpha)$ is a way to make a bounded increasing net into a bounded increasing positive net.)
|
H: Why $\sum_{n=1}^{\infty}\frac{x^{\alpha}}{1+n^2 x^2}$ doesn't converge uniformly on $[0, \infty)$ for $\alpha > 2$?
I'm trying to understand why $\sum_{n=1}^\infty\frac{x^\alpha}{1+n^2 x^2}$ doesn't converge uniformly on $[0, \infty)$ for $\alpha > 2$.
My book says that $\frac{x^\alpha}{1+n^2 x^2}$ is monotonic and unbounded for $\alpha > 2$, and that therefore the series doesn't converge uniformly. I don't get exactly why this means it can't converge; could someone explain?
AI: Because if a series of functions $\sum_{n=1}^\infty f_n$ converges uniformly, then the sequence $(f_n)_{n\in\Bbb N}$ converges uniformly to the null function. In particular, the functions $f_n$ must be bounded for all sufficiently large $n$, but here every $f_n$ is unbounded on $[0,\infty)$ when $\alpha>2$.
H: Find the image of the circle under the Transformation?
Find the image of the circle $$x^2 + y^2 = 4$$ under the transformation $$T ((x,y)) = (x,y + 2x)$$
AI: Let $$T ((x,y))=(X,Y)$$$$\therefore X=x$$$$Y=y+2x$$$$\Rightarrow Y=y+2X$$ $$\Rightarrow y=Y-2X$$Plugging in the original equation $$X^2+(Y-2X)^2=4$$$$\Rightarrow 5X^2-4XY+Y^2=4$$
|
H: Let $G$ be an abelian group with elements $x, y$ of orders $m$ and $n$ respectively. There exists $z \in G$ of order $\operatorname{lcm} (m,n)$
I'm trying to generalize my previous lemma to this well-known result. Could you please confirm if it is fine or contains logical mistakes?
Let $G$ be an abelian group with elements $x, y$ of orders $m$ and $n$ respectively. There exists $z \in G$ of order $\operatorname{lcm} (m,n)$.
My attempt:
My lemma is
Let $G$ be an abelian group with elements $x, y$ of orders $m$ and $n$ respectively. Moreover, $\gcd(m,n)=1$. Then the order of $xy$ is $mn$.
Let $d = \gcd(m,n)$ and $p = m/d$ and $q =n/d$. Then $\gcd(p,q) = 1$ and $o(x^d) =p$ and $o(y^d) =q$. Apply the lemma to $x^d$ and $y^d$, we get $o(x^d y^d) = pq$. We again have $\gcd(pq, d) = 1$ and $o \left (x^{p} \right ) = d$. Applying the lemma again to $x^d y^d$ and $x^p$, we get $o(x^d y^dx^p) = pqd = \operatorname{lcm} (m,n)$.
Update: I've just found a fix
We factorize $\operatorname{lcm} (m,n) = \prod p_i^{r_i}$. Then for each $i$, we have $p_i^{r_i}$ divides at least one of two numbers $m$ and $n$. We pick $h_i$ from the set $\{h \in \{m, n\} | o(h) \text{ is divisible by } p_i^{r_i} \}$ and $k_i = o(h_i) / p_i^{r_i}$.
We have $\gcd(p_1^{r_1} , p_2^{r_2} ) = 1$ and $o( h_1^{k_1}) = p_1^{r_1}$ and $o( h_2^{k_2}) = p_2^{r_2}$. By our lemma, $o(h_1^{k_1} h_2^{k_2}) = p_1^{r_1} p_2^{r_2}$. Continue this process, we get $o(\prod h_i^{k_i}) = \operatorname{lcm} (m,n)$.
AI: Suppose $m=16$ and $n=4$; then $d=4$, but $\gcd(pq,d)=4$ and not $1$. So you need to fix your proof.
|
H: Power series problems
How can I find the power series expansions of the following functions? I don't have any ideas; these seem intimidating. Please help me with how to proceed.
$(x+\sqrt{1+x^2})^a$
$\sqrt{\frac{1-\sqrt{1-x}}{x}}$
AI: The derivative of the first expression, call it $u(x)$, is
$$
u'(x)=\frac{a(x+\sqrt{1+x^2})^a}{\sqrt{1+x^2}}
$$
so that
$$
u'(x)=au(x)(1+x^2)^{-\frac12}
$$
The binomial power series $(1+x^2)^{-1/2}=\sum c_mx^{2m}$ is known so that this last equation becomes, after comparing coefficients of equal degree, a triangular linear system for the power series coefficients of $u(x)$,
$$
u_k=\frac{a}{k}\sum_{0\le m<k/2} u_{k-1-2m}c_m
$$
For the second equation note that by binomial identities
$$
$$\sqrt{\frac{1-\sqrt{1-x}}{x}}=(1+\sqrt{1-x})^{-1/2}$$
$$
which will possibly simplify the expansion; it should be possible to apply similar steps as above.
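A numeric sketch of the triangular recurrence for the first expression (my notation: $c_m=\binom{-1/2}{m}$ are the coefficients of $(1+x^2)^{-1/2}$, compared with mpmath's Taylor coefficients at a sample value of $a$):

    import mpmath as mp

    mp.mp.dps = 30
    a, K = mp.mpf('0.7'), 8
    c = [mp.binomial(-mp.mpf('0.5'), m) for m in range(K)]
    u = [mp.mpf(1)]                                   # u_0 = u(0) = 1
    for k in range(1, K):
        u.append(a / k * mp.fsum(u[k - 1 - 2*m] * c[m] for m in range((k - 1)//2 + 1)))
    exact = mp.taylor(lambda x: (x + mp.sqrt(1 + x**2))**a, 0, K - 1)
    print(all(mp.almosteq(ui, ei, rel_eps=mp.mpf('1e-10'), abs_eps=mp.mpf('1e-10'))
              for ui, ei in zip(u, exact)))           # True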
|
H: Volume enclosed by a paraboloid, cylinder and a plane by Spherical coordinates
I am stuck over this triple integral where i have to find following:
$\iiint_V z^2 \,dx\,dy\,dz$ over a volume bounded by the cylinder $x^2 + y^2 = a^2$, the paraboloid $x^2 + y^2 = z$, and the plane $z=0$.
I did it in cylindrical coordinates. However, I want to do it with spherical coordinates. Any help is much appreciated.
AI: The bounds convert to spherical like this:
$$\begin{cases}x^2+y^2 = a^2 \\ x^2+y^2 = z \\ z = 0 \\ \end{cases} \implies \begin{cases}\rho\sin\phi = a \\ \rho\sin^2\phi = \cos\phi \\ \phi = \frac{\pi}{2} \\ \end{cases}$$
We could set up the integral now with $\rho$ first
$$I = \int_0^{2\pi} \int_{\cot^{-1}(a)}^{\frac{\pi}{2}} \int_{\frac{\cos\phi}{\sin^2\phi}}^{\frac{a}{\sin\phi}} \rho^4 \cos^2\phi\sin\phi\:d\rho\:d\phi\:d\theta$$
but this is a mess of trig. The intuition here: when you see lots of trig in the bounds for spherical coordinates, try to integrate over $\phi$ first, even though that means splitting into two integrals
$$I = \int_0^{2\pi}\int_0^a \int_{\cos^{-1}\left(\frac{-1+\sqrt{1+4\rho^2}}{2\rho}\right)}^{\frac{\pi}{2}} \rho^4 \cos^2\phi\sin\phi\:d\phi\:d\rho\:d\theta + \int_0^{2\pi}\int_a^{a\sqrt{a^2+1}}\int_{\cos^{-1}\left(\frac{-1+\sqrt{1+4\rho^2}}{2\rho}\right)}^{\sin^{-1}\left(\frac{a}{\rho}\right)}\rho^4\cos^2\phi\sin\phi\:d\phi\:d\rho\:d\theta$$
$$= \frac{\pi}{12}\left[\int_0^a\rho\left(-1+\sqrt{1+4\rho^2}\right)^3\:d\rho + \int_a^{a\sqrt{a^2+1}}\rho\left(-1+\sqrt{1+4\rho^2}\right)^3-8\rho\left(\rho^2-a^2\right)^{\frac{3}{2}}\:d\rho\right]$$
I will let you take it from here.
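As a numerical cross-check (my addition; the reference value $\pi a^8/12$ is what the cylindrical-coordinates computation the OP mentions works out to, and $a=1.3$ is an arbitrary test radius):

```python
# Evaluate the two remaining rho-integrals with scipy and compare against
# the cylindrical-coordinates result pi * a^8 / 12.
import numpy as np
from scipy.integrate import quad

a = 1.3  # arbitrary test value

f = lambda r: r * (-1.0 + np.sqrt(1.0 + 4.0 * r**2))**3
g = lambda r: f(r) - 8.0 * r * (r**2 - a**2)**1.5

I1, _ = quad(f, 0.0, a)
I2, _ = quad(g, a, a * np.sqrt(a**2 + 1.0))
print(np.pi / 12.0 * (I1 + I2), np.pi * a**8 / 12.0)  # the two values agree
```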
|
H: Last 2 digits of $143^{101}$ in base 10.
Last 2 digits of $143^{101}$ in base 10.
I have to use Fermat's theorem or Euler's theorem but I don't know where to start.
AI: As we need the last two digits, we should reduce modulo $100$. Here is the full solution.
We want $a^{x} \bmod n$ with $a=143$, $x=101$, $n=100$. By Euler's theorem, $a^{\varphi(n)} \equiv 1 \pmod{100}$ whenever $a$ and $n$ are coprime, and $143$ and $100$ are coprime. Since $2$ and $5$ are prime,
$$\varphi(100)=\varphi(2^2\cdot 5^2)=2^1(2-1)\cdot 5^1(5-1)=40.$$
So $143^{40} \equiv 1 \pmod{100}$, hence
$$143^{101} = 143^{40\cdot 2+21}=(143^{40})^2\cdot 143^{21}\equiv 1^2\cdot 143^{21}\equiv 143^{21} \pmod{100}. \quad\text{...(A)}$$
From here, I will show two ways.
Option 1:
$$143^{21} = (13\cdot 11)^{21} \equiv 13^{21}\cdot 11^{21} \pmod{100}.$$
Since $13^3 = 2197$ ends in $97$, we have $13^3 \equiv -3 \pmod{100}$, so
$$13^{21} \equiv (13^3)^7 \equiv (-3)^7 = -2187 \equiv -87 \equiv 13 \pmod{100}.$$
We also know $11^{10} \equiv 1 \pmod{100}$ (indeed $11^k \equiv 10k+1 \pmod{100}$; or you can find it the same way I did for $13$), so
$$11^{21} \equiv (11^{10})^2\cdot 11 \equiv 11 \pmod{100}.$$
Going back:
$$143^{101} \equiv 143^{21} \equiv 13\cdot 11 = 143 \equiv 43 \pmod{100}.$$
Or, from point (A),
Option 2:
We know $(x+a)^n \equiv a^n \pmod{x}$ for positive integers $x$, $a$ and $n$, so
$$143^{21} = (100+43)^{21} \equiv 43^{21} \pmod{100}.$$
We know, or can easily check, that $43^4 \equiv 1 \pmod{100}$ (since $43^2 = 1849 \equiv 49$ and $49^2 = 2401 \equiv 1$), so
$$43^{21} \equiv (43^4)^5\cdot 43^1 \equiv 1^5\cdot 43 \equiv 43 \pmod{100}.$$
Either way, the last two digits are $43$.
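For a one-line machine check (my addition), Python's built-in `pow` performs modular exponentiation:

```python
print(pow(143, 101, 100))  # 43 -- the last two digits
```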
|
H: Is there an improper subset that isn't equal to its superset?
Can there be a set $A$ and a set $B$ such that $A\subseteq B$ and $A\ne B$ ?
While trying to find a solution to this question, I've found this answer which states:
An improper subset (usually denoted as $A\subseteq B$) is such that $A=B$ is allowed (but not mandated)
If that's true, can you give an example for such a case where $A\ne B$ ?
AI: Of course, $A= \{1\}$, $B =\{1,2\}$, will do. Or $A = \emptyset$ and $B$ any non-empty set, like $\{\emptyset\}$.
|
H: How do I show that these limits are equal?
I found this exercise in the Michael Spivak's calculus book. The author asked to "interpret precisely" and then prove that these are equal:
$$\lim_{x\to a}f(x)\space\text{and}\space\lim_{h\to0}f(a+h)$$
I interpret the first one as "the limit of f at a", but I don't really know how to interpret the second one, and how to prove that they are equal.
AI: I think that @zkutch is right, i.e. that you're expected to use this. The first limit is the value of $L$, if it exists (in which case the triangle inequality implies it's unique), for which$$\forall\varepsilon>0\exists\delta>0\forall x(0<|x-a|<\delta\implies|f(x)-L|<\varepsilon).$$The second limit is the value of $L$, if it exists (in which case it's unique), for which$$\forall\varepsilon>0\exists\delta>0\forall h(0<|h|<\delta\implies|f(a+h)-L|<\varepsilon).$$These definitions are equivalent under the substitution $x:=a+h$: the condition $0<|x-a|<\delta$ becomes exactly $0<|h|<\delta$ and $f(x)=f(a+h)$, so either statement holds for a given $L$ if and only if the other does.
|
H: Question on linear independence of quotient space
Let $V$($F$) be a vector space of dimension $n$. Let $W$ is a subspace of $V$.
Let $S$ = {$x_1, x_2,..., x_k$} be a subset of $V$, and consider
$S'$ = {$W+x_1, W+x_2,..., W+x_k$}, a subset of $\frac{V}{W}$.
Then I need to show the following:
Given that $S$ is linearly independent and that $S'$ is linearly independent, I need to show that $\dim W \leq n-k$.
I know by a result that $S'$ is linearly independent iff $S$ is linearly independent and $W$ $\cap$ $L$($S$) = {$0$}.
From this, how can I conclude the above statement?
AI: The key is to use the so-called "dimension formula":
$$
\dim(W) + \dim(L(S)) = \dim(W + L(S)) + \dim(W \cap L(S)).
$$
Since $W + L(S) \subset V$, we have $\dim(W + L(S)) \leq n$, so the formula gives
$$
\dim(W \cap L(S)) \geq \dim(W) + \dim(L(S)) - n.
$$
Now apply the result you quoted: $S'$ linearly independent gives $W \cap L(S) = \{0\}$, and $S$ linearly independent gives $\dim(L(S)) = k$, so $0 \geq \dim(W) + k - n$, i.e. $\dim(W) \leq n - k$.
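A toy numerical illustration of the dimension formula via matrix ranks (my addition, with made-up subspaces of $\mathbf{R}^5$):

```python
# dim(W + L(S)) is the rank of the stacked basis vectors; here W ∩ L(S) = {0},
# so dim W = 2 <= n - k = 5 - 3.
import numpy as np

W = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0]]).T          # basis of W, dim W = 2
S = np.array([[0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1]]).T          # S, with k = 3

print(np.linalg.matrix_rank(np.hstack([W, S])))  # 5 = dim W + k, so W ∩ L(S) = {0}
```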
|
H: How can I apply this easy derivative on this integral?
I'm trying to understand a step in a certain proof; it's actually only easy calculus, but somehow I can't wrap my head around it. The equation is given as
$$ \frac{\partial}{\partial t} \int_0^t f(x+(s-t)b,s) \, \text{d}s = f(x,t) + \int_0^t -b \cdot \nabla f(x+(s-t)b,s) \text{d}s.$$
I don't get that, because I'm thinking
\begin{align}
\frac{\partial}{\partial t} \int_0^t f(x+(s-t)b,s) \, \text{d}s &= \int_0^t \frac{\partial}{\partial t} f(x+(s-t)b,s) \, \text{d}s \newline
&= \int_0^t -b \cdot \nabla f(x+(s-t)b,s) \text{d}s.
\end{align}
Obviously I'm missing something easy here; does anyone know off the top of their head?
Cheers
AI: You can look at the integral as a multivariate function
$$ I(A,B,t) := \int_A^B F(s,t)\ ds$$
Then, setting $\psi(t) := (0,t,t)$, we have $\psi'\equiv (0,1,1)$, and the multivariate chain rule gives
$$ \frac{d}{dt} I\circ \psi(t) = \nabla I(\psi(t)) \cdot \psi'(t)= 0\times (\partial_A I)(0,t,t)+1\times(\partial_B I)(0,t,t) + 1\times (\partial_t I)(0,t,t) $$
By the Fundamental Theorem of Calculus, $\partial_B I = F(B,t)$, so the second term is $F(t,t)$; the last term is the term you thought should be there, $(\partial_t I)(0,t,t) = \int_0^t \partial_t F(s,t) \ ds$.
In your case $F(s,t) = f(x+(s-t)b,s)$, so $F(t,t) = f(x,t)$ and $\partial_t F(s,t) = -b\cdot \nabla f(x + (s-t)b,s)$, as needed.
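A symbolic spot-check (my addition; the test function $f(u,s)=u^2 s$ with scalar $x$, $b$ is an assumption made purely for illustration):

```python
# sympy confirms d/dt of the integral equals f(x,t) plus the integral of
# -b * grad f, for F(s,t) = f(x + (s-t)b, s) with f(u,s) = u^2 * s.
import sympy as sp

x, b, s, t = sp.symbols('x b s t')
F = (x + (s - t) * b)**2 * s                 # f(u, s) = u^2 * s

lhs = sp.diff(sp.integrate(F, (s, 0, t)), t)
rhs = x**2 * t + sp.integrate(-b * sp.diff(F, x), (s, 0, t))  # dF/dx = f_u here
print(sp.simplify(lhs - rhs))                # 0
```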
|
H: Limit of $\frac{1}{t} \int_0^t f(x) dx$
A continuous function $f:[0,\infty) \rightarrow \mathbb{R}$ has a limit $\alpha = \lim_{x \to \infty} f(x) \in \mathbb{R}$.
(1) Let $ g(t) = \frac{1}{t} \int_0^t f(x) dx$. Show $\lim_{t \to \infty} g(t) = \alpha$.
(2) Let $ h(t) = \frac{1}{t^2} \int_0^t xf(x) dx$. Show that the limit $\lim_{t \to \infty} h(t)$ exists and find its value.
I cannot answer either. My approach is to use the mean value theorem for definite integrals: there is a real number $c(t)\in [0,t]$ s.t. $f(c) = \frac{1}{t} \int_0^t f(x) dx $. However, I have no idea from here. Thank you.
Answer.
Since $f$ is continuous and has limit $\alpha$, for any $\epsilon > 0$ there exists $c \in \mathbb{R}$ such that $|f(x)- \alpha| \leq \epsilon$ whenever $c \leq x$.
(1) We get $\int_c^t (\alpha - \epsilon) dx \leq \int_c^t f(x) dx \leq \int_c^t (\alpha + \epsilon) dx$. Add $\int_0^c f(x) dx$, divide by $t$ and let $t \to \infty$; we get something like $(\alpha - \epsilon) \leq \lim_{t \to \infty} \frac{1}{t}\int_0^t f(x) dx \leq (\alpha + \epsilon)$.
(2) We get $\int_c^t x(\alpha - \epsilon) dx \leq \int_c^t xf(x) dx \leq \int_c^t x(\alpha + \epsilon) dx$. Add $\int_0^c xf(x) dx$, divide by $t^2$ and let $t \to \infty$; we get something like $\frac{1}{2}(\alpha - \epsilon) \leq \lim_{t \to \infty} \frac{1}{t^2}\int_0^t xf(x) dx \leq \frac{1}{2}(\alpha + \epsilon)$. This shows the limit exists with value $\frac{1}{2}\alpha$ (alternatively, this follows from applying l'Hôpital's rule to $h(t)$).
AI: Hint: $(\alpha -\epsilon) (t-s) < \int_s^{t} f(x)dx \leq (t-s) (\alpha +\epsilon)$ if $s$ is sufficiently large. Divide by $t$ and take the limit. Note that $\frac 1 t \int_0^{s} f(x)dx \to 0$ as $ t \to \infty$. Proof of the second part is similar.
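To see the claim numerically (a sketch of my own; the test function $f(x)=2+e^{-x}$ with $\alpha=2$ is an assumption for illustration), $g(t)\to 2$ and $h(t)\to \alpha/2=1$:

```python
# g(t) = (1/t) * integral of f, h(t) = (1/t^2) * integral of x*f(x).
import numpy as np
from scipy.integrate import quad

f = lambda x: 2.0 + np.exp(-x)
for t in [10.0, 100.0, 1000.0]:
    g = quad(f, 0, t)[0] / t
    h = quad(lambda x: x * f(x), 0, t)[0] / t**2
    print(t, g, h)   # g -> 2, h -> 1
```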
|
H: Find the geometric position of all points satisfying the two equations $x^2+y^2+z^2=4$ and $x^2+y^2=1$
Find the geometric position of all points satisfying the two equations $x^2+y^2+z^2=4$ and $x^2+y^2=1$
I think the points whose coordinates satisfy both equations at the same time all lie on circles $x^2+y^2=1$ in the planes $z=\pm \sqrt3$.
AI: $x^2+y^2+z^2=4$ represents a sphere centered at the origin $O$ with radius $r=2$.
$x^2+y^2=1$ represents a cylinder of revolution about the $z$-axis with radius $r=1$.
Since the center of the sphere lies on the axis of the cylinder, the intersection curve degenerates into two circles.
Substituting $x^2+y^2=1$ into the sphere's equation gives $z^2=3$, so the intersection (two circles) is:
$x^2+y^2=1 , \quad z=\pm \sqrt3$
|
H: Example of a symmetric matrix which doesn't have orthogonal eigenvectors
I'm looking for an example of a symmetric matrix $A$ which doesn't have orthogonal eigenvectors.
Here's what I tried: I was able to prove that the eigenvectors corresponding to each distinct eigenvalue of a symmetric matrix are orthogonal. So, I realise that the example I'm looking for is a symmetric matrix with at least one repeated eigenvalue for which there are no orthogonal eigenvectors. But I'm not sure how to construct such an example.
AI: So, I realise that the example I'm looking for is a symmetric matrix with at least one repeated eigenvalue for which there are no orthogonal eigenvectors.
If you ever get two or more linearly independent eigenvectors corresponding to the same eigenvalue, you can apply the Gram-Schmidt process and end up with orthogonal vectors... which will continue being eigenvectors!
Thus, it is not possible to get any symmetric matrix which doesn't have orthogonal eigenvectors. (Note that I'm not saying that all eigenvectors will be orthogonal but that you can always find orthogonal ones.)
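A quick numpy demo (my addition; the matrix is a made-up example with a repeated eigenvalue): `numpy.linalg.eigh` still returns an orthonormal set of eigenvectors for a symmetric matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])   # symmetric, eigenvalues 1, 1, 3 (1 repeated)

w, V = np.linalg.eigh(A)
print(w)                          # [1. 1. 3.]
print(np.round(V.T @ V, 10))      # identity matrix: columns are orthonormal
```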
|
H: Proving the equation has at least one real solution
If $g(x)=3^x+2x^{\frac{1}{2}}$, prove that the equation $g'(x)=21$ has at least one real root.
What I tried: given $g(x)=3^{x}+2x^{\frac{1}{2}}$, we have
$\displaystyle g'(x)=3^x\ln(3)+2\cdot \frac{1}{2}x^{-\frac{1}{2}}=3^x\ln(3)+\frac{1}{\sqrt{x}}$
Now I do not see how to show that $g'(x)=21$ has a solution.
Help me please, thanks
AI: $g'$ is continuous on $(0,+\infty)$.
$$g'(1)=3\ln(3)+1<21$$
$$\lim_{x\to +\infty}g'(x)=+\infty$$
thus there exist $ b>1$ such that $g'(b)>21$.
$21\in (g'(1),g'(b))$, so by the intermediate value theorem, $$\exists c\in (1,b) \; : g'(c)=21$$
Remark:
$g'$ is a derivative, so by Darboux's theorem it has the intermediate value property; we do not actually need its continuity.
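A numerical confirmation (my addition; the bracket $[1,10]$ is an arbitrary choice that straddles the root) using scipy's root finder:

```python
# g'(x) = 3^x ln 3 + 1/sqrt(x); g'(1) < 21 and g' grows without bound,
# so brentq can bracket a solution of g'(x) = 21.
import numpy as np
from scipy.optimize import brentq

gp = lambda x: 3**x * np.log(3) + 1 / np.sqrt(x)
c = brentq(lambda x: gp(x) - 21, 1, 10)
print(c, gp(c))   # gp(c) == 21 up to solver tolerance
```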
|
H: System of equations in $\mathbb{F}_p$
I have:$$\begin{matrix}X&+&Y&+&Z&=&3,\\& &Y&+&Z&=&2,\\&-&Y&+&Z&=&0.\end{matrix}$$
Let furthermore $p$ be prime. How can I solve this system with respect to $\mathbb{F}_p$?
AI: Continuing from the comments. If we are not in a field of characteristic $2$, we can solve the system of linear equations in the usual manner and get $X = Y = Z = 1$ as the unique solution.
Thus, that is the solution in any $\Bbb F_p$ if $p > 2$.
For the case $p = 2$, we see that the equations become
\begin{align}
X + Y + Z &= 1,\\
Y + Z &= 0,\\
Y + Z &= 0.
\end{align}
(Recall that $Y = -Y$ in char $2$.)
Now we see that we actually have a redundancy in the system. (If you're familiar with linear algebra, we see that the coefficients matrix is not full rank and thus, we won't get a unique solution.)
Substituting $Y+Z = 0$ in the first equation gives us that $X = 1$. Thus, we are only left with $Y + Z = 0$ which has two solutions. Thus, the complete set of solutions is
$$(X, Y, Z) \in \{(1, 1, 1), (1, 0, 0)\}.$$
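A brute-force check over $\Bbb F_2$ (my addition) confirms exactly these two solutions:

```python
# Enumerate all (X, Y, Z) in F_2^3 and test the three congruences mod p = 2.
p = 2
sols = [(X, Y, Z)
        for X in range(p) for Y in range(p) for Z in range(p)
        if (X + Y + Z) % p == 3 % p
        and (Y + Z) % p == 2 % p
        and (-Y + Z) % p == 0]
print(sols)   # [(1, 0, 0), (1, 1, 1)]
```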
|
H: Proving that this formula all over the positive integer gives us this sequence
Firstly, we have this sequence : $1,1,1,1,1,1,1,1,1,1,2,2,2,...$ which is the sequence of the number of digits in decimal expansion of $n$.
Secondly, we have this formula : $$a_n=\Bigl\lceil\log_{10}(n+1)\Bigr\rceil-\Bigl\lceil\frac{n}{n+1}\Bigr\rceil+1$$ where $n\ge0$
This formula seems to give us this sequence.
How to prove this ?
AI: Firstly, for $n \ne 0$, $n$ has $d$ digits if and only if $10^{d - 1} \le n < 10^d$. One way to find $d$ from here is to take logarithms, using $d - 1 \le \log_{10}(n) < d$, so $d = \lfloor \log_{10}(n) \rfloor$.
However, this expression doesn't work so well when $n = 0$, so we have to be a bit creative. Adding $1$, we get $10^{d - 1} + 1 \le n + 1 < 10^d + 1$. But a strict inequality of natural numbers of the form $a < b + 1$ can be equivalently written as $a \le b$, so in fact we can write $10^{d - 1} < n + 1 \le 10^d$. Then $d - 1 < \log_{10}(n + 1) \le d$, so $d = \lceil \log_{10}(n + 1) \rceil$. This expression doesn't freak out quite so much when $n = 0$.
The last bit is just a clever trick so that $a_0 = 1$. If $n = 0$, then $n/(n + 1) = 0$, so $\lceil n/(n + 1) \rceil = 0$. Otherwise, $0 < n/(n + 1) < 1$, so $\lceil n/(n + 1) \rceil = 1$.
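A quick empirical check of the formula (my addition; note it trusts floating-point `log10`, which is fine in this range):

```python
# Compare the closed form against the obvious digit count len(str(n)).
from math import ceil, log10

a = lambda n: ceil(log10(n + 1)) - ceil(n / (n + 1)) + 1
print(all(a(n) == len(str(n)) for n in range(100000)))   # True
```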
|
H: How to carry out arithmetics with Bachmann–Landau notations?
Here's something from my class.
As $n \to \infty$, $|\mathcal{F'}| = (1-o(1))\frac{n \choose t}{k \choose t} + o(1){n \choose t} = (1 + o(1))\frac{n \choose t}{k \choose t}$
I can sort of see why this makes sense, and I guess writing out the expression by the definitions would verify it. But is there an intuitive way to interpret this calculation and others similar to it?
AI: You can write the following estimate: $o(1){n \choose t} = o(1)\frac{{n \choose t}}{{k \choose t}}$, since $k$ and $t$ are fixed, so ${k \choose t}$ is a constant that can be absorbed into the $o(1)$. Note also that $-o(1)$ is again $o(1)$, so $1-o(1)$ may be written $1+o(1)$. Thus $|F'| = (1+o(1))\frac{{n \choose t}}{{k \choose t}} + o(1)\frac{{n \choose t}}{{k \choose t}} = \frac{{n \choose t}}{{k \choose t}} (1+o(1)+o(1)) = (1+o(1))\frac{{n \choose t}}{{k \choose t}}.$
You can interpret the notations $o$ and $O$ as : these are quantities whose names and exact values are not relevant, only their asymptotic behaviour.
|
H: Is it true that $|A+B|\ge \min(|A|+|B|-1,p)$ in $\mathbf{Z}_p$?
For two sets $A,B\subseteq \mathbf {Z}_p $, define
$$A+B:=\{a+b:a\in A,b\in B\}.$$
Note that "+" is performed under $\mathbf {Z}_p$.
For example, let $p=5$, $A=\{1,4\},B=\{2,4\}$. Then $A+B=\{0,1,3\}$.
Is it true that $|A+B|\ge \min(|A|+|B|-1,p)$ in $\mathbf{Z}_p$?
There is a much simpler conclusion that $|A+B|\ge |A|+|B|-1$ in $\mathbf Z$, which can be easily verified. But as for $\mathbf Z_p$, I have no idea...
Thanks in advance.
AI: This is called the Cauchy-Davenport Theorem:
https://en.wikipedia.org/wiki/Restricted_sumset#Cauchy%E2%80%93Davenport_theorem
Here is a proof that came up in Google:
https://sites.math.rutgers.edu/~sk1233/courses/additive-F16/lec1.pdf
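For small primes the theorem can also be verified by brute force (my addition; $p=5$ is an arbitrary small choice):

```python
# Exhaustively check |A + B| >= min(|A| + |B| - 1, p) for every pair of
# nonempty subsets of Z_p.
from itertools import combinations

p = 5
ok = True
for ka in range(1, p + 1):
    for kb in range(1, p + 1):
        for A in combinations(range(p), ka):
            for B in combinations(range(p), kb):
                S = {(a + b) % p for a in A for b in B}
                ok &= len(S) >= min(ka + kb - 1, p)
print(ok)   # True: no counterexample for p = 5
```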
|
H: Calculate three dimensional integral with a delta function
Can anyone help me calculate the following integral: $$\int_a^\infty \mathrm dy_1\int_a^\infty\mathrm dy_2 \int_a^\infty \mathrm dy_3 \delta\left(y_1+y_2+y_3-3\right)$$ with $a<0$?
According to maple the result is: $\frac{9}{2}\left(1-a\right)^2$ but I don't know how get this result analytically.
Thanks for helping!
AI: By translating, we can see that we are integrating on the projection of the plane
$$x_1+x_2+x_3=3(1-a)$$
onto the $x_1x_2$ plane (this is more easily visualized since we are now confined to the first octant). So the integral evaluates as
$$\int_0^\infty dx_1\int_0^\infty dx_2\int_0^\infty dx_3 \: \delta(x_1+x_2+x_3-3(1-a)) = \int_0^{3(1-a)} dx_1\int_0^{3(1-a)-x_1}dx_2$$
$$ = \int_0^{3(1-a)}\left(3(1-a)-x_1\right)dx_1 = \frac{9}{2}(1-a)^2$$
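The last step can be confirmed symbolically (my addition):

```python
# sympy evaluates the final 1D integral to 9(1-a)^2/2.
import sympy as sp

x1, a = sp.symbols('x1 a')
result = sp.integrate(3 * (1 - a) - x1, (x1, 0, 3 * (1 - a)))
print(sp.factor(result))   # 9*(a - 1)**2/2, i.e. 9(1-a)^2/2
```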
|
H: A subsequence of $\{|\sin n|\}$ that converges to $0$ "fast"
It is well-known that for each $a\in [0,1]$ there is a subsequence of $\{|\sin n|\}_{n=0}^\infty$ that converges to $a$. I am curious about the speed of convergence. In particular, for each $\alpha>0$, is there a subsequence $\{|\sin n_k|\}_{k=0}^\infty$ which converges to $0$ "fast" in the sense that
$$|\sin n_k|\le\frac{1}{n_k^\alpha},\,\forall\,k\in\mathbf N\,?$$
If not, what restriction should be imposed on $\alpha$?
Any hints or reference (to existing literature on it) will be highly appreciated.
AI: The exact irrationality measure of $\pi$ is still unknown; it is conjectured to be $2$, but at the moment we only know that
$$ \left|\pi-\frac{p}{q}\right|\leq \frac{1}{q^{7.11}} $$
holds for a finite number of fractions $\frac{p}{q}$, while
$$ \left|\pi-\frac{p}{q}\right|\leq \frac{1}{q^2} $$
certainly holds infinite times due to the irrationality of $\pi$. So if $\frac{p}{q}$ is a convergent of the continued fraction of $\pi$ we have
$$ \left|\pi q-p\right|\leq\frac{1}{q} $$
and since $\sin(x)$ is a $1$-Lipschitz function with $\sin(\pi q)=0$,
$$ \left|\sin p\right| = \left|\sin p - \sin(\pi q)\right| \leq \left|p - \pi q\right| \leq \frac{1}{q} \approx \frac{\pi}{p} $$
i.e. your inequality is achieved by $\alpha=1-\varepsilon$. $1$ is probably the supremum of the set of $\alpha$ allowing the inequality to hold, but the truth of this assertion is beyond our current technology. Anyway, for the unknown optimal $\alpha\in[1,6.11]$ we have that $\{n_k\}_{k\geq 1}$ is a subsequence of
$$ 3,22,333,355,103993,104348,208341,312689,833719,1146408,\ldots $$
given by the numerators of the convergents.
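One can watch this happening numerically (my addition; mpmath's high-precision arithmetic is used because $\sin$ at large integer arguments is sensitive to rounding):

```python
# For the convergents' numerators p, the products p * |sin p| stay bounded
# (roughly below pi), i.e. |sin p| <= C / p along this subsequence.
from mpmath import mp, sin

mp.dps = 60   # plenty of digits for sin at large integer arguments
ps = [3, 22, 333, 355, 103993, 104348, 208341, 312689, 833719, 1146408]
for p in ps:
    print(p, float(p * abs(sin(p))))
```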
|