H: Proof of $\mathbf{x}^{T}A\mathbf{x}=0\Rightarrow A=0$
As far as I know, for any $A$:
$$\mathbf{x}^{T}A\mathbf{y}=0;\forall\mathbf{x},\mathbf{y}\in R^n\Rightarrow A=0$$
Does it mean that
$$\mathbf{x}^{T}A\mathbf{x}=0;\forall\mathbf{x}\in R^n\Rightarrow A=0$$
The condition of the first claim, $\forall\mathbf{x},\mathbf{y}\in R^n$, implies that we could take $\mathbf{y}=\mathbf{x}$, and therefore the second claim should hold. Correct?
AI: As pointed out in the comments, your conclusion is false. The problem with your reasoning is that the first claim reads (in plain English):
If, for all $x$ and $y$, $x^TAy=0$, then $A=0$.
When you set $y=x$ then you are no longer considering all $x$ and $y$, only the pairs $x,y$ such that $y=x$.
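For a concrete counterexample to the second claim, take the skew-symmetric matrix $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$: then $\mathbf{x}^TA\mathbf{x}=x_1x_2-x_2x_1=0$ for every $\mathbf{x}\in\mathbb R^2$, yet $A\neq0$.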
|
H: Point addition on an elliptic curve
I have an elliptic curve $y^2 = x^3 + 2x + 2$ over $Z_{17}$. It has order $19$.
I've been given the equation $6\cdot(5, 1) + 6\cdot(0,6)$ and the answer as $(7, 11)$ and I'm unsure how to derive that answer.
I have $6\cdot(5, 1) = (16,13)$ and $6\cdot(0,6)=(0, 11)$ however when I use point addition to add them together I get $(16,13)+(0, 11)=(14,11)$ which isn't even a point on the curve...
Could someone help me identify where and why I've gone wrong?
For further information here's each of the points:
And here's the curve plotted out:
AI: Hints:
Your calculation of $6(5,1) = (16,13)$ is correct.
Your calculation of $6(0,6) = (0,11)$ is incorrect, you should get $6(0,6) = (3,1)$.
Maybe if you show how you did that calculation, I can spot the issue.
Once you fix that, I verified the author's result is correct, that is:
$$(16,13)+(3,1) = (7,11)$$
Update
Here are some additional hints to help you with intermediate calculations:
$P = (0,6)$
$2P = (9,1)$
$3P = (6,3)$
$4P = (7,6)$
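A quick way to check all of these values is to script the group law. Below is a minimal Python sketch (my own, not from the original answer; it assumes the standard short-Weierstrass chord-and-tangent formulas over a prime field, and Python 3.8+ for the modular inverse via pow) that reproduces the points above:

    # Group law on y^2 = x^3 + 2x + 2 over Z_17; None plays the identity.
    p, a = 17, 2

    def add(P, Q):
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                  # P + (-P) = identity
        if P == Q:
            lam = (3*x1*x1 + a) * pow(2*y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p    # chord slope
        x3 = (lam*lam - x1 - x2) % p
        return (x3, (lam*(x1 - x3) - y1) % p)

    def mul(k, P):                                       # k-fold sum of P
        R = None
        for _ in range(k):
            R = add(R, P)
        return R

    print(mul(6, (5, 1)), mul(6, (0, 6)))   # (16, 13) (3, 1)
    print(add((16, 13), (3, 1)))            # (7, 11)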
|
H: Let $M$ be the subspace of $\Bbb R^3$ spanned by $(1,0,-1)$.
I am stuck on the following problem:
Consider $\Bbb R^3$ with the standard inner product. Let $M$ be the subspace of $\Bbb R^3$ spanned by $(1,0,-1).$ Which of the following is a basis for the orthogonal complement of $M\,\,?$
$\{(2,1,2),(4,2,4)\}$
$\{(2,-1,2),(1,3,1),(-1,-1,-1)\}$
$\{(1,0,1),(0,1,0)\}$
$\{(1,2,1),(0,1,1)\}$
Can someone explain it?
AI: You know that the complement has dimension 2. You also know that each vector in the complement is orthogonal to $(1,0,-1)$, so the basis vectors are too. In other words, you need to check
a) Which set of vectors are linearly independent and span a two dimensional subspace.
b) Which set consists of vectors orthogonal to $(1,0,-1)$.
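For instance, the third set passes both tests: $(1,0,1)\cdot(1,0,-1)=0$, $(0,1,0)\cdot(1,0,-1)=0$, and the two vectors are linearly independent. The first set fails (a), since $(4,2,4)=2\,(2,1,2)$; the second fails (a) because three vectors in a two-dimensional subspace cannot be independent; the fourth fails (b), since $(0,1,1)\cdot(1,0,-1)=-1\neq0$.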
|
H: limit to infinity involving trig and root function
I was doing a ratio test for convergence and the final expression I got before applying limit to infinity was:
$\dfrac{2+\cos(x)}{\sqrt{x}}$. Now, I believe that this goes to zero: the $\dfrac{2}{\sqrt{x}}$ part trivially does, but I am having trouble showing it cleanly for $\dfrac{\cos(x)}{\sqrt{x}}$. I know that $\cos(x)$ is bounded and that the root function in the denominator is monotonically increasing, hence the limit should go to zero. Is there a better computational way to show this than what I have stated? Please let me know.
Sincerely,
Palu
AI: HINT:
As $-1\le \cos x\le 1$ for real $x$ $\implies 1\le (2+\cos x)\le3$
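Dividing by $\sqrt x>0$ then gives $$\frac1{\sqrt x}\le\frac{2+\cos x}{\sqrt x}\le\frac3{\sqrt x},$$ and since both bounds tend to $0$ as $x\to\infty$, the squeeze theorem gives the limit $0$.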
|
H: SOA Exam P Question: Exponential Distribution
Here is an Exam P problem as I have it. That is, it was passed down to me from someone else and I am unsure if the wording is exactly as it was originally posted. I've tried searching for this problem on various sites but cannot find a similar one.
Problem: A company insures two types of drivers: Basic and Preferred. Basic claims come in at a rate that is exponentially distributed with a mean of 3. Preferred claims come in at a rate that is exponentially distributed with a mean of 6.
What is the probability that the next preferred claim happens at least two days after the next basic claim given that a basic claim just happened?
Here's what I was thinking: Let $B$ be the number of days until the next basic claim happens and let $P$ be the number of days until the next preferred claim happens. Thus $B$ ~ Exponential$\left(\frac13\right)$ and $P$ ~ Exponential$\left(\frac16\right)$. We want to find Prob$\left(P\ge2+B|B=0\right)$. This is where I keep getting stuck. I'm sure this problem is relatively easy but I am not seeing it. Any hints are greatly appreciated.
AI: Independence of the random variables that you call $P$ and $B$ is not explicitly stated in the problem, but needs to be assumed.
We want $\Pr(P \ge 2+B)$. That is I think the natural interpretation of the phrase "the next preferred claim comes in at least two days after the next basic claim." By the memorylessness of the exponential, the fact there has just been a basic claim does not affect the calculation.
The joint density function of $B$ and $P$ is $\frac{1}{18}e^{-x/3}e^{-y/6}$ in the first quadrant, and $0$ elsewhere. Call this $f(x,y)$. Note we are using the variable $x$ to refer to $B$ and the variable $y$ to refer to $P$. (I would probably have called $B$ by the name $X$, and $P$ by the name $Y$.)
There are no difficulties in integrating.
Now draw a picture identifying the part of the first quadrant where the condition $P \ge 2+B$ holds. That is the part of the first quadrant that is above the line $y=2+x$. After we have done that, writing down the appropriate integral is immediate. We get
$$\Pr(P\ge 2+B)=\int_{x=0}^\infty \left(\int_{y=2+x}^\infty f(x,y)\,dy \right)\,dx.$$
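Carrying out the integration: the inner integral is $\int_{2+x}^\infty \frac1{18}e^{-x/3}e^{-y/6}\,dy=\frac13 e^{-x/3}e^{-(2+x)/6}$, so $$\Pr(P\ge 2+B)=\frac13 e^{-1/3}\int_0^\infty e^{-x/2}\,dx=\frac23 e^{-1/3}\approx 0.478.$$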
|
H: Understanding the support of a function
This is from exercise 5.5.A of Vakil's lecture notes.
Consider $f$, a function on $A: = \mathrm{Spec}(k[x,y]/(y^2, xy))$. Show that its support is either empty, the origin, or the whole space.
Now, I know that the support of any function $f$ must be closed. This comes from the fact that its complement consists of points on which the germ of $f$ is zero. So the $f$ is also zero in a neighborhood of those points $\Rightarrow$ the complement of the support is open.
But then if we take in our case $f = (x-1)(x-2)$, why will the support not be the open set $\mathbb A^2_k\setminus \{1,2\}$?
If we were doing differential geometry, we would take the support to be the closure of the non-vanishing set. But I didn't see such a definition anywhere in algebraic geometry.
AI: Will add to this later, but just to give a start:
Your $f$ is supported everywhere. To see that it's supported away from the origin, for example, note that if we throw out the origin then what we get is isomorphic to $\mathbb{A}^1 - \{(x)\}$. Now, $f$ isn't zero in the corresponding ring, which is a domain, so it has non-zero germ at each point.
There are two concepts at work here and it's natural to be confused. We think about functions as taking values and vanishing in the residue fields and this is what defines our topology. But the support is where the germ is non-zero. Maybe it's good to think about how, in plain old analysis, the $C^\infty$ functions $0, x, x^2$ have value zero at the origin but only $0$ has a trivial germ there and indeed $x, x^2$ are supported, in the old sense, on the whole line!
I'll try to think about whether there is some statement about vanishing [in the sense of values]. The trouble is that $y$ takes on the value $0$ at every point but it contains more information than the zero section — it ought to live somewhere. Probably you can say something in the reduced case.
|
H: Approximation of DE
This follows up on my previous question: Closed form solution of DE.
I don't want to deal with Airy functions. How can I approximate this DE on the continuous domain $[0,1]$?
$$y''(x)+(x+1)y(x)=0\quad\text{ with the initial conditions}\quad y(0)=0\quad y'(0)=1$$
What if the conditions change to
$$y''(x)+(x+1)y(x)=0\quad\text{ with the initial conditions}\quad y(0)=0\quad y'(1)=1$$
May I use the same methods?
AI: If you want to approximate on $L^2$ you can choose a polynomial
$$\hat y=a\,x^3+b\,x^2+c\,x+d$$
To satisfy the initial conditions
$$y(0)=0\Rightarrow d=0$$
$$y'(0)=1\Rightarrow c=1$$
Now you can construct the residual integral for your domain
$$R=\int_0^1\big(\hat y''(x)+(x+1)\hat y(x)\big)^2dx=\int_0^1\big(6\,a\,x+2\,b+(x+1)(a\,x^3+b\,x^2+x)\big)^2dx$$
$$=\frac{31}{30}+\frac{877 a}{105}+\frac{21299 a^2}{1260}+\frac{149 b}{30}+\frac{8549 a b}{420}+\frac{736 b^2}{105}$$
Now you can minimize the residual wrt $a$ and $b$
$$\frac {\partial R}{\partial a}=\frac{877}{105}+\frac{21299 a}{630}+\frac{8549 b}{420}=0$$
$$\frac {\partial R}{\partial b}=\frac{149}{30}+\frac{8549 a}{420}+\frac{1472 b}{105}=0$$
The solution of the system is $a=-0.268$ and $b=0.035$, and the residual is $0.0004$ over the domain.
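For what it's worth, this computation is easy to reproduce symbolically; the following sympy sketch (not part of the original answer) rebuilds $R(a,b)$ and minimizes it:

    # Rebuild the squared L^2 residual for yhat = a x^3 + b x^2 + x
    # (c = 1, d = 0 from the conditions) and minimize it over a, b.
    from sympy import symbols, diff, integrate, solve

    a, b, x = symbols('a b x')
    yhat = a*x**3 + b*x**2 + x
    resid = diff(yhat, x, 2) + (x + 1)*yhat        # plug into y'' + (x+1) y
    R = integrate(resid**2, (x, 0, 1))
    sol = solve([diff(R, a), diff(R, b)], [a, b])
    print({k: v.evalf(3) for k, v in sol.items()}) # a ≈ -0.268, b ≈ 0.0351
    print(R.subs(sol).evalf(1))                    # minimal residual ≈ 0.0004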
--------------EDIT-------------
To satisfy the initial conditions
$$y(0)=0\Rightarrow d=0$$
$$y'(1)=1\Rightarrow c=1-3a-2b$$
$$R=\int_0^1\big(\hat y''(x)+(x+1)\hat y(x)\big)^2dx=\int_0^1\big(6\,a\,x+2\,b+(x+1)(a\,x^3+b\,x^2+(1-3a-2b)x)\big)^2dx$$
$$=\frac{31}{30}+\frac{226 a}{105}+\frac{289 a^2}{252}+\frac{5 b}{6}+\frac{23 a b}{20}+\frac{127 b^2}{105}$$
By setting the partial derivatives to zero again, we can solve the system for $a=-1.005$ and $b=0.133$. It follows that $c=1-3a-2b=3.749$.
|
H: Complex integral on curve
I have to show that this integral is zero, but don't know how to evaluate it.
Consider a closed curve $c:[a,b]\rightarrow\mathbb{C}\backslash \{0\}$ of class $C^1$ and show that $$\int_a^b\frac{\langle c(t),c'(t) \rangle }{\lVert c(t)\rVert^2}dt=0$$
It is necessary to consider $c$ as a curve in $\mathbb{R}^2\backslash \{(0,0)\}$ and to write $c(t) = (c_1(t),c_2(t))$; then $\langle c(t),c'(t)\rangle = c_1(t)\cdot c_1'(t)+c_2(t)\cdot c_2'(t)$ and $\lVert c(t)\rVert^2=c_1(t)^2+c_2(t)^2$.
PS: I would know how to evaluate if the denominator was not squared, that is the big problem.
Thanks in advance.
AI: Note that for $f(t):=\log\|c(t)\|$,
$$f'(t)=\frac{\langle c(t),c'(t)\rangle}{\|c(t)\|^2}.$$
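Since the curve is closed, $c(a)=c(b)$, so by the fundamental theorem of calculus the integral equals $f(b)-f(a)=\log\lVert c(b)\rVert-\log\lVert c(a)\rVert=0$.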
|
H: Multiplication of nonsquare matrices
Could multiplication of non-square matrices result in square nonsingular matrix?
It's easy to show for square matrices via determinant. But what to do with non-square ones?
AI: Yes, indeed, this can happen: $A_{m\times n} \times B_{n\times m}$ may very well be non-singular (though not necessarily) provided $m \leq n$, as in the case Daniel Fischer posted in the comments:
$$\begin{pmatrix}1 &0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} = (1)$$
However, we run into problems when $m>n$. Why?
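(One way to see why: $\operatorname{rank}(AB)\le\min(\operatorname{rank}A,\operatorname{rank}B)\le n<m$, so the $m\times m$ product cannot have full rank and is necessarily singular.)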
|
H: is Lebesgue measure continuous?
Is Lebesgue measure continuous? Can someone prove it or attach a link to the proof?
I am trying to prove the existence of a plane in $\mathbb{R}^{3}$ that simultaneously divides 3 compact subsets of $\mathbb{R}^{3}$ into two pieces of the same measure.
AI: One way to mathematically formulate this expression is that if $X\subset \mathbb R^3$ has finite Lebesgue measure, define the function on $\mathbb R^3\times S^2$ as:
$$f(u,v) =\mu(\{x\in X: (x-u)\cdot v>0\})$$
Then $f$ is continuous.
It is true, but requires some care to prove. It's a little easier to do with $X$ bounded, since slight variations in $u,v$ then change the set only within a bounded region of finite measure, so the difference can easily be made small.
That would suffice for your question since your elements are compact, and hence bounded.
|
H: How many intersection points can two graphs have?
Let $F$ and $G$ be copies of the complete equipartite graph with each partition of size $v$. That is, $F,G:= K_{v,v,\dots ,v}$. Prove that if $F$ and $G$ intersect in more than $v$ vertices, then they MUST share a common edge.
I am having some trouble proving this. It seems quite intuitive that the answer is $v$ but hard to rigorize. Thanks!
AI: Let $F$ and $G$ be complete equipartite graphs. If $F$ and $G$ have two common vertices $x$ and $y$ that belong to different partite sets, then there is an edge between $x$ and $y$ since the graph is complete equipartite.
Suppose $F$ and $G$ have more than $v$ common vertices. Since the size of a partite set is $v$, the pigeonhole principle implies that $F$ and $G$ share two common vertices belonging to different partite sets. The preceding paragraph allows us to conclude that $F$ and $G$ share an edge.
|
H: Unrotate/UnTranslate a Unit Vector
I have a Unit Vector (UV1) which I am transforming using a Rotation + Translation Matrix (MT1). The result of that Rotation and Translation is again normalized to create a Unit Vector (UV2).
Given UV1 and MT1 I can calculate UV2. Given only UV2 and MT1 is it possible to calculate UV1?
Example Data:
UV1 = -0.6593109902290344, -0.7511779860795417, -0.03225912882726347
UV2 = 0.29567466528198705, -0.9546948074188073, -0.03367962288909071
MT1 =
0.523051105292120300, -0.852299146133909100, 0.0019252927314616130, 2.0789077522304078E-4,
0.852293593813883100, 0.523035410017454600, -0.0054396517986005175, 4.457556673629903E-4,
0.003629214310009352, 0.004486130546903434, 0.9999833515795198000, 0.004359159108209769
AI: When you try to invert your problem, you might end up with two possible answers (but not more). You can assume that the rotation matrix is identity as rotation matrices take unit vectors to unit vectors and once you have inverted the problem for the identity matrix, just use the inverse of the rotation matrix to get your desired initial vector.
The translation takes the unit sphere to a sphere with a different center. And you have projected that translated sphere unto the unit sphere. When you try to find the point on the off-set sphere from the projection on the unit sphere, you have two possible candidates.
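Concretely, here is a NumPy sketch of my own (not from the original answer), using the example data above: since UV2 = normalize(R·UV1 + T), we must have R·UV1 = t·UV2 − T for some scale t > 0 with |t·UV2 − T| = 1, because rotations preserve length. That is a quadratic in t, hence at most two candidates:

    import numpy as np

    R = np.array([[0.5230511052921203, -0.8522991461339091, 0.0019252927314616],
                  [0.8522935938138831, 0.5230354100174546, -0.0054396517986005],
                  [0.0036292143100094, 0.0044861305469034, 0.9999833515795198]])
    T = np.array([2.0789077522304078e-4, 4.457556673629903e-4, 4.359159108209769e-3])
    uv2 = np.array([0.29567466528198705, -0.9546948074188073, -0.03367962288909071])

    # |t*uv2 - T|^2 = 1  gives  t^2 - 2 t (uv2 . T) + |T|^2 - 1 = 0.
    m = uv2 @ T
    root = np.sqrt(m*m - (T @ T - 1.0))
    for t in (m + root, m - root):
        if t > 0:                        # keep the geometrically valid scale(s)
            print(R.T @ (t*uv2 - T))     # R^{-1} = R^T; recovers UV1 above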
|
H: How to find a global extremum when the constraint isn't a compact set
Sometimes, the constraint is not a compact set. As a result, the local minimum may not be global.
For example, $ f=x^2+y^3$ subject to constraint $ x+y=4/3$.
Using Lagrange multiplier method, I calculated local minimum at $(x,y)=(\frac23,\frac23)$. But I don't know what to do next.
I cheated a bit and looked at Wolfram Alpha's plot that shows that there is no global minimum subject to the constraint.
http://www.wolframalpha.com/input/?i=min+x^2+y^3+such+that+{x+y=4/3}
Is there any way to get this result (no global minima) without graphing?
AI: Sure. Find points where the values of $f$ are smaller than the values of $f$ at all the critical points. (For some functions, it's not so clear from a computer graph, unless you know what you're looking for.)
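In this example: on the constraint line $x=\frac43-y$, so $f=\left(\frac43-y\right)^2+y^3\to-\infty$ as $y\to-\infty$, since the cubic term dominates. In particular the critical value $f\!\left(\frac23,\frac23\right)=\frac{20}{27}$ is not a global minimum, and no global minimum exists.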
|
H: Hausdorff Distance between "Pure Black" and "Pure White" images
I am trying to use Hausdorff Distance to compare a pair of test images of equal dimensions. The images undergo some kind of threshold to obtain binary images. The Hausdorff Distance is calculated for the positions with non-zero pixels in those binary images. Here, I am using Taxi Cab distance instead of Euclidean distance. To provide a point of reference I want to calculate Hausdorff Distance for Control Images. What I mean by Control images is that the control images are binary, with same dimensions as that of the pair of test images, and the value of each pixel is 0 (a pure black image) or 255 (a pure white image).
Now, since I am calculating Hausdorff Distance using the positions of non-zero pixels, I have no problem in finding Hausdorff Distance between a pair of "pure white" images. But I am unable to resolve the situation when one of the images or both images in the pair are "pure black" images since, a search for positions with non-zero pixel value in a "pure black" image will return a NULL. How to resolve this situation? How to calculate Hausdorff Distance in the presence of a NULL set (I used NULL set for lack of better words)? Is there a work around that I can use here? Please help me.
I would like to apologize if any one thinks if this question doesn't belong here. I thought it is biased towards theory than implementation. If anyone thinks otherwise, please point me to a right forum.
AI: The Hausdorff distance is defined for arbitrary subsets of a given metric space, but has nice properties (such as defining a metric itself) only when considering it between nice sets, and nice here means non-empty compact sets.
I assume you compute the distance between the subsets of black pixels of the two images (or almost equivalently between the subsets of white pixels).
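For reference, the definition in question, for arbitrary subsets $A,B$ of a metric space $(X,d)$, is $$d_H(A,B)=\max\Big\{\,\sup_{a\in A}\inf_{b\in B}d(a,b),\ \sup_{b\in B}\inf_{a\in A}d(a,b)\Big\}.$$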
Looking at the definition for arbitrary sets (as seen on Wikipedia) and specifically when one set is empty, we are to take the infimum of an empty set of real numbers - that infimum is $+\infty$ - and then take the supremum over infinitely many such $+\infty$'s - that's again $+\infty$.
Viewed the other way around, we take the supremum over an empty set, which results in $-\infty$. Now the Hausdorff distance is the biggest of these two numbers, that is $+\infty$.
Interestingly, a similar reasoning gives $-\infty$ for the distance of the empty set to itself (whereas the self-distance is $0$ for any other set).
|
H: Identity as lower bound of sine
I'm struggling to rigorously prove
$$ \sin(2x) \geq x \qquad (0 \leq x \leq \pi/4). $$
Any ideas?
AI: Hint: Consider the function $f$ given by $f(x)=\sin (2x)-x$ for all $x$ in $[0,\pi/4]$. Differentiate it.
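Carrying the hint out: $f(0)=0$ and $f'(x)=2\cos(2x)-1$, which is positive for $0\le x<\pi/6$ and negative for $\pi/6<x\le\pi/4$. So $f$ increases and then decreases, and its minimum on $[0,\pi/4]$ is attained at an endpoint; since $f(0)=0$ and $f(\pi/4)=1-\pi/4>0$, we get $f(x)\ge0$, i.e. $\sin(2x)\ge x$, on the whole interval.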
|
H: What is $\frac{0}{0}$ and $\frac{\infty}{\infty}$? A question on indeterminate forms
I am wondering what is $\frac{0}{0}$ and $\frac{\infty}{\infty}$?
In my impression, both are undefined. But then I need to prove that
$$\lim_{n \rightarrow \infty} \frac{\int_{-n}^x g(t)dt}{\int_{-n}^n g(t)dt} = 1 \text{ when } x \leq 0,$$ where $g(x) = f(x)f(1-x)$, and
\begin{equation*}
f(x) = \left\{
\begin{array}{ll}
e^{-1/x^2} & x > 0\\
0 & x \leq 0
\end{array} \right.
\end{equation*}
My attempt:
I split the denominator and got $0/0$:
\begin{eqnarray}
h(x) & =& \lim_{n \rightarrow \infty} \frac{\int_{-n}^x g(t)dt}{\int_{-n}^n g(t)dt}\\
&= & \lim_{n \rightarrow \infty}\frac{\int_{-n}^x f(t)f(1-t)dt}{\int_{-n}^x f(t)f(1-t)dt+\int_x^n f(t)f(1-t)dt}\\
& =& \lim_{n \rightarrow \infty}\frac{0}{0+\int_x^n f(t)f(1-t)dt}
\end{eqnarray}
AI: I don't see how you get $1$ as that limit. Here's how I see it.
$g(t)$ is nonzero only when $t \in (0,1)$. Thus
$$\lim_{n \to \infty} \int_{-n}^n dt\, g(t) = \int_0^1 dt \, e^{-1/t^2} e^{-1/(1-t)^2}$$
Also, when $x < 0$:
$$\lim_{n \to \infty} \int_{-n}^x dt\, g(t) = 0$$
because $g(t) = 0$ when $t \in [-n,x]$. The limit is then zero.
|
H: Simple Double Summation
I understand how to sum a single sum, but I don't know how to solve a double sum without explicit limits. Please help guide me in the right direction to solve problems 3 through 5 in the included image. Thank you!!
$a_n = \sum\limits_{i=1}^n (2i-1)$
$a_n = \sum\limits_{i=1}^n (3i^2-3i+1)$
$a_n= \sum\limits_{i=1}^n \sum\limits_{j=1}^n 1$
$a_n= \sum\limits_{i=1}^n \sum\limits_{j=1}^n i$
$a_n= \sum\limits_{i=1}^n \sum\limits_{j=1}^n j$
AI: Note: The first (and original) part of this answer solves a harder problem than was actually asked, but if you’re taking a discrete math course, you’ll probably be doing similar things before too long.
I’ll do (d) and leave you with a couple of hints for (c) and (e).
In order to evaluate $\sum_{i=1}^n\sum_{j=1}^ii$, your first step should be to evaluate the inner sum, $\sum_{j=1}^ii$. Since $j$, the index of summation, runs from $1$ through $i$, this is a sum of $i$ terms. And since $i$, the general term in the summation, does not depend on $j$, this inner summation is just
$$\sum_{j=1}^ii=\underbrace{i+i+\ldots+i}_{i}=i^2\;.$$
The original summation can now be reduced to a single summation:
$$\sum_{i=1}^n\sum_{j=1}^ii=\sum_{i=1}^ni^2\;;$$
it’s the sum of the first $n$ squares. This is a formula that you will probably be expected to learn:
$$\sum_{i=1}^ni^2=\frac{n(n+1)(2n+1)}6\;,\tag{1}$$
which is the final answer for (d).
You can do (c) and (e) in the same way: start by evaluating the inner sum. For (c), how many copies of $1$ are you adding? For both (c) and (e) you will need to know that
$$\sum_{i=1}^ni=\frac{n(n+1)}2\;;$$
this is an extremely useful formula that you will certainly be expected to learn, if you don’t already know it. For (e) you’ll need formula $(1)$ as well. Finally, don’t forget that you can always pull a constant factor out of a sum:
$$\sum_{i=1}^nca_i=c\sum_{i=1}^na_i\;.$$
Added: I’ve actually done more here than was called for: I’ve shown you how to write a general formula for the sum, so that you can just plug in $n=1,2,3,4,5$. Of course you don’t need that just to get the first five terms. Here, for instance, is a brute force calculation of the third term of (e):
$$\begin{align*}
\sum_{i=1}^3\sum_{j=1}^ij&=1+(1+2)+(1+2+3)\\
&=1+3+6\\
&=10\;.
\end{align*}$$
The others are equally straightforward.
|
H: Differentiability proof for a product of functions
Let $E, F$ be normed spaces, $f:A\rightarrow\mathbb{R}$ and $g:A\rightarrow F$, with $A$ an open set in $E$, and define $h:A\rightarrow F$ by $h(x)=f(x)g(x)$. Suppose that $f$ is differentiable at $a\in A$ with $f(a)=0$, and that $g$ is continuous at $a$. Prove that $h$ is differentiable at $a$, with $Dh(a)=g(a)Df(a)$.
AI: Expanding on your last line, which seems okay to me.
I use the operator norm, denoted $|||\cdot |||$. $Df(a)$ has a norm since it is a continuous linear map. So $||Df(a)(x-a)||\,||g(x)-g(a)|| \leq |||Df(a)|||\,||x-a||\,||g(x)-g(a)||$.
$g$ being continuous, it is locally bounded around $a$, so (locally) $||g(x)||\,||f(x)-f(a)-Df(a)(x-a)|| \leq C\,||f(x)-f(a)-Df(a)(x-a)||$.
So we can divide by $||x-a||$ and get $$\begin{align}\frac{||h(x)-h(a)-Dh(a)(x-a)||}{||x-a||} \leq &|||Df(a)|||\,||g(x)-g(a)|| \\&+ C\frac{||f(x)-f(a)-Df(a)(x-a)||}{||x-a||}\end{align}$$
When $x\to a$, the first term goes to zero because $|||Df(a)|||$ is a constant and $g$ is continuous, and the second term goes to zero by the differentiability of $f$.
|
H: Question on polar decomposition of operators.
Suppose $\tau$ is an operator on a finite dimensional complex inner product space. I'm read the following,
If $\rho$ is the unique positive square root of the positive operator $\tau^*\tau$, then
$$
\|\rho v\|^2=\langle \rho v,\rho v\rangle=\langle \rho^2 v,v\rangle=\langle \tau^*\tau v,v\rangle=\|\tau v\|^2.
$$
Define $\nu$ on $im(\rho)$ by $\nu(\rho v)=\tau v$ for $v\in V$. The above equation shows $\rho x=\rho y$ implies $\tau x=\tau y$ so the above definition is well defined.
I don't follow how it shows $\nu$ is well defined. Can anyone clarify?
AI: Suppose $\rho v_1=\rho v_2$. Then you need $\nu$ to be defined on it in a unique way. But for $\rho v_1$ you are defining $\nu$ as $\tau v_1$ and for $\rho v_2$ as $\tau v_2$. You need then that $\tau v_1=\tau v_2$.
Notice that if $\rho(v_1-v_2)=0$ then, by the norm equalities above $\tau(v_1-v_2)=0$.
|
H: Open set as a countable union of open bounded intervals
Can every nonempty open set be written as a countable union of bounded open intervals of the form $(a_k,b_k)$, where $a_k$ and $b_k$ are real numbers (not $\pm\infty$)?
If yes, can someone point me toward a proof?
If not, counterexample?
Note that this is not the same question as the property "every nonempty open set is the disjoint union of a countable collection of open intervals."
AI: Hint: Let $A$ be open, and consider all intervals $(p,q)$ such that $p$ and $q$ are rational and $(p,q)\subset A$.
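To finish the hint: since $A$ is open, every $x\in A$ lies in some interval $(p,q)\subset A$ with $p,q$ rational, so the union of all such intervals is exactly $A$; and there are only countably many pairs of rationals, so the collection is countable (and each interval is bounded).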
|
H: Proving that $x\in E^{o} \iff B_{r}(x)∩ E^{c}\not= \varnothing$
I know it is so easy proof. But I am confused.
Remark:
$x\in E^{o} \iff B_{r}(x)∩ E^{c}\not= \varnothing$
Proof
(If) Suppose $x\in E^c$ and $B_{r}(x)∩ E^{c}=\varnothing$.
Then we have $B_{r}(x)⊆ E$, and then $x \not\in E^o$.
But I cannot do the other direction. Please show me, thank you.
AI: Let $X$ be a (topological) metric space. Let $E\subseteq X$. What you want to prove is that $$E^\circ=X\smallsetminus \overline{X\smallsetminus E}$$
That is, the interior of a set is the complement of the closure of the complement of $E$. If this looks unclear, draw a picture.
Proof:
($\Leftarrow$) Suppose $x\in E^\circ$. Then there exists a ball $B(x,\epsilon)$ such that $B(x,\epsilon)\subseteq E$ strictly. It follows $$B(x,\epsilon)\cap (X\smallsetminus E)=\varnothing$$ so $x\notin \overline{X\smallsetminus E}$.
($\Rightarrow$) Now suppose $x\notin \overline{X\smallsetminus E}$. This means that there exists a ball $B(x,\epsilon)$ such that $$B(x,\epsilon)\cap (X\smallsetminus E)=\varnothing$$
But this means $B(x,\epsilon)\subseteq E$; so $x\in E^{\circ}$.
ADD If we let $E$ be $X\smallsetminus F$ for some $F$; we get the shorter
$$(X\smallsetminus F)^\circ =X\setminus \overline F$$ which tells how to interchange the interior and closure operations when complementation is used.
|
H: Representing $m\times n$ matrix using ordered $n$-tuples and an $m$-tuple
Can a matrix, generally
\begin{bmatrix}
a_{1,1} &\cdots &a_{1,n} \\
\vdots &\ddots & \vdots \\
a_{m,1} &\cdots &a_{m,n}
\end{bmatrix}
be represented using ordered $n$-tuples inside an $m$-tuple, like this:
$((a_{1,1},...,a_{1,n}),...,(a_{m,1},...,a_{m,n}))$ ?
AI: Yes, you can express an $m\times n$ matrix using ordered $n$-tuples "inside" an (ordered) $m$-tuple, as you display. Sometimes this is referred to as expressing a matrix in row-major order. Similarly, one can express a matrix in column-major order.
Related to "column-major order" is the vectorization of a matrix.
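For a concrete example, the $2\times2$ matrix with rows $(1,2)$ and $(3,4)$ is written $((1,2),(3,4))$ in row-major order and $((1,3),(2,4))$ in column-major order; its vectorization (stacking columns) is $(1,3,2,4)$.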
|
H: Solution to $u_{n+1}=u_n/n+u_{n-1}/(n-1)$
What is the solution to the following recurrence relation $$u_{n+1}=\frac{u_n}{n}+\frac{u_{n-1}}{n-1}\ \forall n\geq 2$$ where $u_2=u_1=1$?
AI: Let $a_n=(n-1)!u_n$, and multiply the recurrence by $n!$:
$$\begin{align*}
a_{n+1}&=n!u_{n+1}\\
&=\frac{n!}nu_n+\frac{n!}{n-1}u_{n-1}\\
&=(n-1)!u_n+n(n-2)!u_{n-1}\\
&=a_n+na_{n-1}\;,
\end{align*}$$
with initial values $a_1=a_2=1$. If you shift the index by setting $b_n=a_{n+1}$, then $b_0=b_1=1$, and the recurrence $a_{n+1}=a_n+na_{n-1}$ becomes
$$b_n=b_{n-1}+nb_{n-2}\;.$$
This is OEIS A000932, which doesn’t seem to have a simple generating function or closed form, so I think it unlikely that there’s a nice closed form for your sequence. The OEIS entry does have a closed form for the modified sequence in terms of several hypergeometric functions; if that’s good enough for your purposes, you can simply divide it by $n!$, since $b_n=a_{n+1}=n!u_{n+1}$.
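As a quick check, the shifted recurrence gives $b_2=3$, $b_3=6$, $b_4=18$, hence $u_3=a_3/2!=3/2$ and $u_4=a_4/3!=1$, which matches the original recurrence directly: $u_3=\frac{u_2}2+\frac{u_1}1=\frac32$ and $u_4=\frac{u_3}3+\frac{u_2}2=1$.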
|
H: Base and Independence proof
Adapted from Axler,
Could someone explain the last part for me? How does the unique representation imply all the constants are suddenly $0$? We are trying to show it is linearly independent, he doesn't know this yet, so why is he doing that?
Please excuse the length of this question.
AI: Given a basis $\{v_1,\dots,v_n\}$, the zero vector can clearly be written as $$0=\sum_{i=1}^n 0\cdot v_i$$
Since we're assuming the representation is unique, that is $$\sum_{i=1}^n a_iv_i=\sum_{i=1}^n b_iv_i\implies a_i=b_i$$ the fact that $$0=\sum_{i=1}^n \lambda_iv_i$$ gives that $\lambda_i=0$, so the $v_i$ are linearly independent.
Note that linear independence is a rewording of the zero vector is uniquely represented by the span, since linearity gives $$(a_1,\ldots,a_n)=(b_1,\ldots,b_n)\iff(a_1-b_1,\ldots,a_n-b_n)=(0,\ldots,0)$$ which is precisely what we're saying above.
|
H: Non Identical Closure
I'm working on a counterexample here. Can we construct two bounded, non-empty open sets $A,B$ with $A \subset B$ such that $\lambda(A)=\lambda(B)$ but $\overline{A}\ne\overline{B}$? Here $\lambda$ is the Lebesgue measure, thank you.
AI: No. If $A\subseteq B$ but $\operatorname{cl}A\ne\operatorname{cl}B$, there must be a point $x\in(\operatorname{cl}B)\setminus\operatorname{cl}A$. Let $U=B\setminus\operatorname{cl}A$; since every neighbourhood of $x$ meets $B$ while some neighbourhood of $x$ misses the closed set $\operatorname{cl}A$, we get $x\in\operatorname{cl}U$. In particular $U$ is a non-empty open subset of $B$ disjoint from $A$, so $\lambda(B)\ge\lambda(A)+\lambda(U)>\lambda(A)$.
|
H: How to set up a double integral with $x,y$ and $z$?
Use a double integral to find the volume of the solid bounded by the graphs of the equations $z=xy^3$, $z>0$, $x>0$, $5x<y<5$.
How would you set up this integral? please help me.
AI: Here is how
$$ \int_{0}^{1}\int_{5x}^{5} xy^3 dydx.$$
You should plot the region in the $xy$-plane to see what's happening.
Note: More generally, if a region is bounded from below by the surface $z_1=f_1(x,y)$ and from above by $z_2=f_2(x,y)$, then its volume is given by
$$ \iint_{A} (z_2-z_1)\, dA. $$
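In the present problem $z_1=0$ and $z_2=xy^3$, and the integral evaluates to $$\int_0^1\int_{5x}^5 xy^3\,dy\,dx=\int_0^1\frac{x}{4}\left(625-625x^4\right)dx=\frac{625}{4}\left(\frac12-\frac16\right)=\frac{625}{12}.$$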
|
H: "uniquely written" definition
I'm having troubles with this definition:
My problem is with the uniquely part, for example the zero element:
$0=0+0$,
but $0=0+0+0$
or $0=0+0+0+0+0+0$.
Another example, if $m \in \sum_{i=1}^{10} G_i$ and $m=g_1+g_2$, with $g_1\in G_1$ and $g_2\in G_2$,
we have: $m=g_1+g_2$ or $m=g_1+g_2+0+0$.
It seems they can't be unique!
I really need help.
Thanks a lot.
AI: Well, notice what the definition says: for each $m \in M$, you need to be able to write $m= \sum\limits_{\lambda \in \Lambda} g_{\lambda}$, where the sum runs over all of $\Lambda$ at once, with $g_\lambda\in G_\lambda$. So writing $0=0+0$ and $0=0+0+0$ is not two different representations: in each case every component $g_\lambda$ is $0$, and that single choice of components is the unique representation of $0$.
|
H: Abelian subgroups of $GL_n(\mathbb{F}_p)$
Let $p$ be a prime number, and let $k=\mathbb{F}_p$ be the field of $p$ elements. Let $G=GL_n(k)$. We know that
$$|G|=\prod_{i=0}^{n-1}(p^n-p^i)=p^{\binom{n}{2}}\prod_{i=0}^{n-1}(p^{n-i}-1)$$
so that the Sylow $p$-subgroups of $G$ have order $p^{\binom{n}{2}}$. One such subgroup is $U$, the upper-triangular unipotent subgroup consisting of all upper-triangular matrices with $1$'s on the diagonal. Let $A_{ij}=I_n+E_{ij}$ for $j>i$, where $E_{ij}$ is the matrix with a $1$ in the $ij$th entry and $0$'s elsewhere. If we find $i_1,\ldots,i_r,j_1,\ldots,j_r$ such that the $A_{i_kj_k}$ pairwise commute, then we have:
$$(\mathbb{Z}/p\mathbb{Z})^{\oplus r}\cong \langle A_{i_1j_1},\ldots,A_{i_rj_r}\rangle\subset U$$
Two questions:
Is every copy of $(\mathbb{Z}/p\mathbb{Z})^{\oplus r}$ inside of $U$ conjugate under $G$ to a subgroup of the form $\langle A_{i_1j_1},\ldots,A_{i_rj_r}\rangle$?
Do there exist distinct, conjugate subgroups of the form $\langle A_{i_1j_1},\ldots,A_{i_rj_r}\rangle$ and $\langle A_{i_1'j_1'},\ldots,A_{i_r'j_r'}\rangle$?
The largest value of $r$ for which such subgroups exist is $r=\lfloor n^2/4\rfloor$. In this case, I believe the answers to these questions are yes and no, respectively. I'd like to know if this also holds for smaller values of $r$.
AI: For $r=1$: No and yes as long as $n \geq 4$.
For $r$ maximal and $n$ odd, the second question has the answer “no” as well.
Typically, I think the answer to the first question is very definitely “no”. Every subgroup has a “straightening” of the same order and roughly the same embedding that is of roughly the form you describe (they are called algebra subgroups, and may not be exactly products of root subgroups; see page 109 of the 3rd book of the GLS revised CFSG; it is called Malcev's straightening). However, for $r$ maximal it might be “yes” anyways.
$r=1$
For the first question: In $G=\operatorname{GL}(4,p)$, one has the following elementary abelian subgroup that is not conjugate to a root subgroup: $\langle A_{1,3} \cdot A_{1,4} \cdot A_{2,4}\rangle$. Every generator of every cyclic root subgroup has the property that $g-g^0$ has rank 1 which is invariant under GL conjugation, but this subgroup does not have that property.
For the second question: In $G=\operatorname{GL}(3,p)$, one has the following cyclic root subgroups are conjugate: $\langle A_{1,2} \rangle \cong_G \langle A_{2,3} \rangle$.
$r$ maximal
For $r=(n-1)(n+1)/4 = \lfloor n^2/4 \rfloor$, that is, for $n$ odd and $r$ maximal, the second question's answer is no: there are two elementary abelian root subgroups of this maximal rank that are conjugate under the inverse-transpose map, but not conjugate in GL. Each are formed of block matrices:
$$P_{i,j} = \left\{ \begin{bmatrix} I & A \\ 0 & I \end{bmatrix} : A \in M_{i \times j}(\mathbb{F}_p) \right\}$$
The subgroups are $P_{(n-1)/2,(n+1)/2}$ and $P_{(n+1)/2,(n-1)/2}$.
|
H: How to set up a triple integral with $x, y,$ and $z$
Use a triple integral to find the volume of the solid bounded by $z=16xy$, $z\ge 0$, $0 \le x \le 5$, $0 \le y \le 4$. I know how to set up the integral for $x$ and $y$ it would be $0$ to $5$ for $x$ and $0$ to $4$ for $y$. How would you set up the integral for $z$? Please show me how you would set it up and what the integral would look like if you can. I really appreciate your help.
AI: You know the ranges over which to integrate with respect to $x, y$: each will be treated as a fixed value in the inner integrals. Since $z$ is a function of $x, y$, we'll set up the integral over that variable to be innermost. The order of integration over the ranges of $x$ and $y$ does not matter, so we'll arbitrarily select the order for the outermost integrals.
Since we know $x, y \geq 0$, it follows that $z = 16xy \geq 0$ (and we are also given that $z \geq 0$), so we can use bounds for the variable $z$ in terms of $x, y$, from the lower bound $z = 0$ to the upper bound $z = 16xy$. That permits us to compute the volume over the range $0 \leq z \leq 16xy$:
$$ \int_{0}^{5}\int_0^4 \int_{0}^{16xy} dz\,dy\,dx = \int_0^5 \int_0^4 16xy \;dy\,dx$$
So evaluating the innermost of the triple integral gives you, essentially, the double integral in terms of $x, y$, since $$\int_0^{16xy} dz \;= \;\;z\;\Big|_0^{16xy} \;=\; 16xy - 0 = 16xy$$
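Finishing the computation: $$\int_0^5\int_0^4 16xy\,dy\,dx=16\left(\int_0^5 x\,dx\right)\left(\int_0^4 y\,dy\right)=16\cdot\frac{25}{2}\cdot 8=1600.$$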
|
H: existence of the directional derivative of a function
Let
$$f:\mathbb R^2\rightarrow\mathbb R,\ (x,y)\mapsto\begin{cases}1&(\exists z\in\mathbb R\setminus\{0\}:(x,y)=(z,z^2))\\0 &(\textrm{else})\end{cases}$$
$f$ is obviously not differentiable in $(0,0)$.
But what about the directional derivative in a direction $v$ at $(0,0)$? I have to consider $\lim_{h\rightarrow0}\frac{f(hv)}h$. But why does this limit always exist?
If $f(hv)=0$ for all $h$, the limit is obviously $0$, and the same holds if $f(hv)=1$ for only finitely many $h$.
In the other case, $f(hv)=f(h(v_1,v_2))=1$ when $hv_2=h^2v_1^2\Leftrightarrow v_2=hv_1^2$, and now I am stuck. Does this case exist? And what about the limit of $\frac{f(hv)}h$? Wouldn't I get "$\frac10$"?
AI: $\lim_{h\rightarrow 0} \frac{f(hv_1,hv_2)}{h}=0$. To prove it we show that for all $\epsilon>0$ there exists $\delta>0$ such that for all $h$: $0<|0-h|<\delta$ implies $|\frac{f(hv_1,hv_2)}{h}-0|<\epsilon$. Setting $\delta=|\frac{v_2}{2v_1^2}|$ should make it work when $v_1,v_2\neq0$; if $v_1=0$ or $v_2=0$, then $f(hv_1,hv_2)=0$ for every $h\neq0$, so any $\delta$ works.
(Thanks to Sheldoor for pointing out a mistake in my previous solution.)
|
H: Injective map between same dimension implies bijectivity?
For an injective map between two spaces with the same dimension, does the map need to be linear in order to be bijective?
In other words, is this statement universally true:
For any function, injectivity between same dimension implies bijectivity.
or it is true only for linear functions:
For linear functions, injectivity between same dimension implies bijectivity.
It seems I found some counterexamples for non-linear functions. So I am primarily interested in whether it is true for linear functions.
AI: We can send $\mathbb{R}$ into $\mathbb{R}$ injectively and not bijectively. (It is a bijection to $(-1,1)$).
$x\mapsto\frac{2}{\pi}\arctan(x)$
Now, for linear functions it can also be false. Imagine vector spaces of the same infinite dimension. $S:\ell^2\rightarrow\ell^2$ defined by $S(a_1,a_2,\ldots):=(0,a_1,a_2,\ldots)$ is linear, injective but not bijective.
For linear functions between vector spaces of the same finite dimension then, it is true.
To prove this use the fundamental theorem of linear algebra. See where the function sends a basis of the space.
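Sketch of that argument: if $T:V\to V$ is linear and injective with $\dim V=n<\infty$, the image of a basis is a linearly independent set of $n$ vectors, hence again a basis, so $T$ is surjective. Equivalently, by rank–nullity, $\operatorname{rank}T=n-\dim\ker T=n$.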
|
H: Example of non G-delta set
An open set is clearly a $G_{\delta}$ set. A closed interval $[a,b]$ is a $G_{\delta}$ set as an intersection of the open intervals $(a-\frac1n,b+\frac1n)$ for all positive integers $n$. What is an example of a set that's not $G_{\delta}$?
AI: The set $\Bbb Q$ of rational numbers is not a $G_\delta$ set; you can find several proofs here.
|
H: Proving the normed linear space, $V, ||a-b||$ is a metric space (Symmetry)
The following theorem is given in Metric Spaces by O'Searcoid
Theorem: Suppose $V$ is a normed linear space. Then the function $d$ defined on $V \times V$ by $(a,b) \to ||a-b||$ is a metric on $V$
Three conditions of a metric are fairly straight-forward.
By the definition of a norm, I know that $||x|| \ge 0$, with equality if and only if $x=0$. Thus $||a-b||$ is non-negative, and zero if and only if $a=b$.
The triangle inequality of a normed linear space requires: $||x+y|| \le ||x|| + ||y||$. Let $x = a - b$ and $y = b - c$. Then $||a - c|| \le || a - b || + || b - c||$ satisfying the triangle inequality for a metric space.
What I am having trouble figuring out is symmetry. The definition of a linear space does not impose any symmetry condition. I know from the definition of a linear space that addition of two members $u$ and $v$ of $V$ is commutative; however, I do not see how that could extend here.
Thus what I would like to request help with is demonstrating $||a - b|| = ||b - a||$.
AI: $$\|a-b\|=\|(-1)(b-a)\|=|-1|\cdot\|b-a\|=\|b-a\|$$
|
H: Set containing another set but having same measure
A set of real numbers is said to be a $G_{\delta}$ set provided it is the intersection of a countable collection of open sets. Show that for any bounded set $E$, there is a $G_{\delta}$ set $G$ for which $$E\subseteq G\text{ and }m^*(G)=m^*(E)$$
I want to define $G_n$ to be the the union of open balls of size $1/n$ around every point in $E$, i.e. $G_n=\bigcup_{x\in E}B_{1/n}(x)$. Then take $G=\bigcap_{n\in\mathbb{Z}^+}G_n$
Clearly $E\subseteq G$. Also, $G_n$ is an open set, since it's a union of open balls. Then $G$ is an intersection of a countable collection of open sets.
I need to prove $m^*(G)=m^*(E)$. Since $E\subseteq G$, I know $m^*(G)\geq m^*(E)$. How can I prove they're equal?
AI: As written, this is not the case. Consider the rationals in $[0,1]$. This is measure $0$ but each $G_n$ will have at least measure $1$.
Try directly using the definition of outer measure to get a countable collection of open sets that approximate the measure of $E$.
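Concretely: for each $n$, the definition of outer measure gives an open set $O_n\supseteq E$ (a countable union of open intervals) with $m^*(O_n)\le m^*(E)+\frac1n$. Then $G=\bigcap_n O_n$ is a $G_\delta$ set containing $E$, and $m^*(E)\le m^*(G)\le m^*(O_n)\le m^*(E)+\frac1n$ for every $n$, so $m^*(G)=m^*(E)$.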
|
H: First-grader problem in arithmetic
I found this problem in a text book on arithmetic for first graders (7 y.o.) of the former USSR* . The problem comes from the section that covers single-digit addition and subtraction. Here is the screenshot of the problem:
This is the entire problem: there is no textual description accompanying it, and of course no answer at the back of the book. Other problems in the section are of the 1 + 6 = and 8 - 7 = kind, so this should be an elementary problem as well. However, I cannot figure out what is being asked here: I do not remember this notation, because we used different text books.
Can anybody figure out what is being asked by this assignment?
* A.S.Pchyolko, G.B.Polyak "Arithmetic" Fifth edition. Text book for the first grade of elementary school. Moscow, Printing house of the Department of Education, 1959
AI: Possibly the student is to produce three numbers by adding the number in the centre of the circle to each of the three around it. (Of course by adding $-6$ I mean subtracting $6$, since I assume that the students had not yet been introduced to negative numbers.) I note that the indicated subtractions are all possible in the non-negative integers, a fact that can be viewed as some small evidence in favor of the interpretation.
|
H: Trigonometric Anti-derivative
What is $$\int \frac{\sin(x)^2}{\cos(x) + 1}dx\;?$$ I've tried everything I can think of, but I can't get it into a form that I can solve.
AI: Note that $$\frac{\sin^2x}{1+\cos x}=\frac{1+\cos x}{1+\cos x}(1-\cos x)$$ since $1-\cos^2x=\sin ^2x$ and $1-y^2=(1-y)(1+y)$
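Hence $$\int \frac{\sin^2 x}{1+\cos x}\,dx=\int (1-\cos x)\,dx=x-\sin x+C.$$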
|
H: Why $\int_{0}^{\pi}\arctan{\cos{x}}dx = 0$?
I saw this in Ron Gordon's answer to this question:
I need assistance in integrating $ \frac{x \sin x}{1+(\cos x)^2}$
Thank you!
AI: Odd around $\pi/2.$ That is, given $f(x) = \arctan \cos x,$ we have $f(\pi - x) = -f(x).$ Draw a graph.
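Explicitly, substituting $x\mapsto\pi-x$ gives $\int_0^\pi f(x)\,dx=\int_0^\pi f(\pi-x)\,dx=-\int_0^\pi f(x)\,dx$, which forces the integral to be $0$.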
|
H: How many revolutions per minute does a wheel make if its angular velocity is 20π radians per second?
Note: This is a homework question, however, I am not asking for anyone to do it for me. I just need some direction in how I should go about solving it.
The question reads:
How many revolutions per minute does a wheel make if its angular velocity is 20π radians per second?
I am not familiar with angular velocity which is why I am lost as to where I should start to solve this problem.
AI: Hint: Recall that $\pi$ radians is the same angle as $180$ degrees. So one revolution is $2\pi$ radians.
I expect no more help is needed. But you might want to leave your solution as a comment, so that I can tell you that you are right.
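(For reference, the hint leads to $\dfrac{20\pi\ \text{rad/s}}{2\pi\ \text{rad/rev}}=10$ revolutions per second, i.e. $600$ revolutions per minute.)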
|
H: If $x^2+ax+b=0$ has a rational root, show that it is in fact an integer
I have tried as follows. Please help to double check the proof! Thank you!
Since $x=p/q$ ($p$, $q$ are integers), $(p/q)^2+(p/q)a+b=0$
So, $(p/q)^2=-b-a(p/q)$
then, $p^2=-bq^2-a(p/q)q^2$
and, $p = \dfrac {q(-bq-a(p/q)q)}{p}$
now, it is clear that $q \mid p$, thus $p/q$ must be an integer.
AI: You’re doing fine up through $p^2=-bq^2-a(p/q)q^2$, but then you go a bit astray. At the end you have something of the form $p=\frac{qn}p$ for an integer $n$; this isn’t a multiple of $q$ unless $\frac{n}p$ turns out to be an integer.
Let’s simplify a bit to make things more readable: $p^2=-bq^2-apq=-q(bq+ap)$. You can certainly conclude from this that $q\mid p^2$. That’s not a contradiction in itself, but you can derive one from it if you further assume that $\frac{p}q$ is in lowest terms. You’ll need the fact that if $\gcd(a,b)=1$, and $a\mid bc$, then $a\mid c$. (Alternatively, you can use unique factorization here.)
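Spelled out: from $q\mid p^2$ and $\gcd(p,q)=1$, the cited fact (with $a=q$, $b=p$, $c=p$) gives $q\mid p$, and together with $\gcd(p,q)=1$ this forces $q=\pm1$, so $x=p/q$ is an integer.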
|
H: Is $\Bbb{Q}\cap [0,1]$ not connected?
Motivation: My textbook states that a connected subset of a normal space is mapped to all of $[0,1]$ if it has non-empty intersections with disjoint closed sets, one of which is mapped to $\{0\}$ and the other mapped to $\{1\}$. This is because the image of a connected set has to be a connected set, by Darboux's lemma.
But on reading the proof to Urysohn's lemma, on which this assertion is based, I was under the impression that points were mapped only to rational points on $[0,1]$.
So isn't it true that the connected subset should have been mapped to $\Bbb{Q}\cap [0,1]$ rather than all of $[0,1]$?
AI: The proof of Uryson’s lemma doesn’t actually tell you exactly which points of $[0,1]$ are in the range of the function, except that $0$ and $1$ definitely are. At one extreme the range may be $\{0,1\}$, at the other it may be $[0,1]$, and it may be somewhere in between. If you look closely, you’ll see that while open sets $U_r$ are defined only for rational $r$, the values of the function itself are not necessarily labels on those sets $U_r$.
$\Bbb Q\cap[0,1]$ is clearly not connected, since it’s the disjoint union of the relatively open sets $\Bbb Q\cap\left[0,\frac{\sqrt2}2\right)$ and $\Bbb Q\cap\left(\frac{\sqrt2}2,1\right]$.
|
H: deductions in a propositional calculus
Hope you're all doing well. I have a question about deductions in logical systems. Say we have a logic in the language of propositional logic. We can think of this as the set of tautologies of propositional logic, along with the inference rule modus ponens.
(See logic for a rigorous definition of a logic, under the section "generic description of a propositional calculus.")
If we know that $A$ and $B$ are well-formed-formulas that are both contained in this set, is it true that $A \wedge B$ is also in this set? Any hints as to how one would go about showing this?
Thank you!
Sincerely,
Vien
AI: It is technically feasible to devise a logic such that $A$ and $B$ are in the logic, but not $A \wedge B$. For instance, you could devise a logic which never makes use of $\wedge$ (it might only use $\rightarrow$, for instance). Such logics are interesting to study formally, but the lack of this "closure under conjunction" property is of course to be expected.
For the most part, logics with $\wedge$ will generally include enough machinery to prove $A \wedge B$ given that $A$ and $B$ are both provable (i.e. tautologies). In fact, one might question whether or not $\wedge$ could even be conjunction if it failed to have this closure property. Indeed, in many cases, the connective $\wedge$ has this property by definition. In Greg Restall's An Introduction to Substructural Logics, for instance, $\wedge$ is by definition a binary connective such that from $X \vdash A$ and $X \vdash B$, you can infer $X \vdash A \wedge B$ (so, in particular, if $A$ and $B$ are tautologies, then so is $A \wedge B$).
|
H: non-axiomatizable logics
Hope you're all doing well. My question is about non-axiomatizable logics. My understanding is that a "logic" (the mathematical structure) is just another word for a "propositional calculus" as in http://en.wikipedia.org/wiki/Propositional_calculus. (see the formal defn. under "general definition of a propositional calculus.") It seems like this definition inherently depends on axioms, and thus is intuitively "axiomatized."
Background:
Say we look at the logic S4, which we can think of (alternatively to the rigorous definition in the above wiki article) as a set of wffs in a specific language $L$ ($L$ is basically the propositional language augmented with some modal operators $\Box$ and $\Diamond$ and with a suitable definition of grammatically correct expressions, i.e. wffs in the language). This set of wffs (S4) contains the tautologies of propositional logic, but also some additional axioms and any wffs in $L$ that can be obtained by using the inference rule $\textit{modus ponens}$. (See http://plato.stanford.edu/entries/logic-modal/ for these additional axioms).
Please correct me if i'm wrong about any of this; I'd appreciate it. In my reading I've come across the idea of a non-axiomatizable logic. (under definition 17 page 139 of http://individual.utoronto.ca/philipkremer/onlinepapers/DTL.pdf) It looks like the concept is quite common, i.e. http://onlinelibrary.wiley.com/doi/10.1002/malq.19610070113/abstract. As I said, the wiki definition of a logic seems to be inherently axiomatizable.
My question:
Is there a (relatively simple) example of a non-axiomatizable logic? I've tried googling it, but nothing too useful has come up yet. Is there somewhere I can learn more about it? Thank you for any help/clarification!
Sincerely,
Vien
AI: In fact, there is. One such classic example is second-order logic, whose semantics can be given, but for which you can prove there is no axiomatization. I posted some resources here in case you're interested (but just FYI, you need to already be familiar with first-order semantics to understand the significance of second-order logic).
|
H: Is there a better alternative to the phrase, 'it holds that'?
The following phrases abound in my writing:
There exists [whatever] such that [whatever].
For all [whatever] it holds that [whatever].
Lately, I've been feeling that the phrase 'it holds that' is overly long-winded. The only substitute I can think of is 'we have that' which is just as bad. I've solved the problem in my personal writing by using the abbreviation 'iht = it holds that' (along with sth = such that), but this isn't appropriate for more formal pieces.
Is there a better phrase?
AI: I don't see anything wrong with such phrases. I think they're perfectly idiomatic, and I don't see them as long-winded. I'd much rather occupy another couple centimeters of space on the page than confuse the reader with an ambiguous statement.
In the second case, I personally prefer "we have that" or "we have". I don't care as much for "it holds that", because my brain briefly searches for the antecedent of "it" and there's a moment of grammatical dissonance.
|
H: Question about differentiability and sequences
Given $f\colon X\rightarrow \mathbb R$ differentiable in $a\in X \cap X'_+ \cap X'_-$.
If $x_n<a<y_n \;\forall n$, and $\lim x_n=\lim y_n=a$, prove that
$$\lim_{n\rightarrow +\infty}\frac{f(y_n)-f(x_n)}{y_n-x_n}=f'(a).$$
Does anyone know how to solve this problem?
Obs: $X$ is the domain of the function. $X'_-$ is the set of accumulation points of $X$ to the left of $a$, and $X'_+$ is the set of accumulation points to the right of $a$.
AI: The assumption that $f$ is differentiable at $a$ is equivalent to the existence of a function $\epsilon(x)$ defined in some neighborhood of $a$ such that
$$
f(x)=f(a)+f'(a)(x-a)+(x-a)\epsilon(x)\qquad \lim_{x\rightarrow a}\epsilon (x)=0.
$$
It follows that
$$
f(y_n)-f(x_n)-f'(a)(y_n-x_n)=(y_n-a)\epsilon(y_n)+(a-x_n)\epsilon(x_n).
$$
Now note that $x_n<a<y_n$ yields $0<y_n-a<y_n-x_n$ and $0<a-x_n<y_n-x_n$, whence
$$
\Big|\frac{f(y_n)-f(x_n)}{y_n-x_n}-f'(a)\Big|\leq|\epsilon(y_n)|+|\epsilon(x_n)|.
$$
The result follows.
|
H: How to show that for any real numbers $x,y,z$ $|x|+|y|+|z|\le|x+y-z|+|y+z-x|+|z+x-y|?$
How to show that for any real numbers $x,y,z$$$|x|+|y|+|z|\le|x+y-z|+|y+z-x|+|z+x-y|?$$
I don't know how to split the RHS.
AI: Write $x=a+b$, $y=b+c$, $z=c+a$. Then the following equivalent inequality is clear.
$$|a+b|+|b+c|+|c+a|\le 2|a|+2|b|+2|c|$$
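Nothing is lost in the substitution: given $x,y,z$, take $a=\frac{x-y+z}2$, $b=\frac{x+y-z}2$, $c=\frac{-x+y+z}2$; then $x+y-z=2b$, $y+z-x=2c$, $z+x-y=2a$, and the displayed inequality is just the triangle inequality applied three times.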
|
H: Bounded sets with finite minimum distance and sum of measures
Let $A$ and $B$ be bounded sets for which there is an $\alpha>0$ such that $|a-b|\ge\alpha$ for all $a\in A,b\in B$. Prove that $m^*(A\cup B)=m^*(A)+m^*(B)$.
We automatically have $m^*(A\cup B)\leq m^*(A)+m^*(B)$ for any sets $a,b$. Now we must prove the other direction $m^*(A\cup B)\geq m^*(A)+m^*(B)$. So given a countable collection of open intervals that cover $A\cup B$ and has total length $m^*(A\cup B)+\epsilon$, find another collection that cover $A$ and a collection that cover $B$, so that the combined length is $m^*(A\cup B)+\epsilon+\delta$ for some small $\delta$.
I get the idea of the $|a-b|\ge\alpha$ condition that the two sets are somewhat "separated" on the real line. But I don't know how to use it in the proof.
AI: Hint: Cover $A \cup B$ with intervals shorter than $\frac \alpha 2$. Then argue that each interval intersects at most one of $A$ and $B$, so the cover splits into one subfamily whose union covers $A$ and another that covers $B$.
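Indeed, an interval $I$ of length less than $\frac\alpha2$ cannot meet both sets, since any two points of $I$ are less than $\frac\alpha2<\alpha$ apart. Summing the lengths over the two subfamilies then gives $m^*(A)+m^*(B)\le\sum_k\ell(I_k)\le m^*(A\cup B)+\epsilon$, and letting $\epsilon\to0$ completes the proof.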
|
H: If $y\in\mathbb R^+$ then $\exists~m\in\mathbb N$ such that $0<\dfrac{1}{2^m}<y$
Using Archimedean Property how to show the following:
If $y\in\mathbb R^+$ then $\exists~m\in\mathbb N$ such that $0<\dfrac{1}{2^m}<y.$
AI: Hint: Start by showing that:
$$\forall n\in \mathbb{N}[n<2^n]$$
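Then, given $y>0$, the Archimedean property yields $m\in\mathbb N$ with $my>1$, i.e. $\frac1m<y$, and combining this with $m<2^m$ gives $0<\frac1{2^m}<\frac1m<y$.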
|
H: Inclusion and Exclusion how many n-strings with a,b,c and d
There are four different characters, namely a, b, c, d. Use them to compose an n-length string. Count the number of different n-length strings which contain at least one a, one b, and one c.
My previous idea was this:
Set A contains strings with at least one a. Similarly we have set B and C. So |A| = |B| = |C| = n * 4^(n-1). Firstly, choose one place of n (n possibilities) to set it to character a. Then n-1 places left with 4^(n-1) possibilities.
But someone says this is not right; he suggests that the right answer should be |A| = |B| = |C| = 4^n - 3^n. Could you explain why my approach does not work?
EDIT 1:
I think there is a rule out there that I should follow when counting this kind of permutation. I need to find it out; otherwise, I will make a similar mistake next time without noticing. It is essential to summarize when and how to avoid this kind of mistake.
Could someone figure the rule out?
EDIT 2
I think there is rule somehow:
When counting elements with property P, we cannot simply split the counting process into pieces, i.e., split property P into P = P1 + P2 + ... + Pk, because doing so usually counts some elements several times.
For the above problem, I split 'at least one a in an n-length string' into first placing one a and then allowing zero or more further a's. This counts strings with more than one a repeatedly: for n = 5, the string abcaa is counted 3 times since it has 3 a's.
AI: Suppose that $n=5$, and consider the string $aaabc$. You count this once as $\underline{a}aabc$, once as $a\underline{a}abc$, and yet a third time as $aa\underline{a}bc$, where in each case the underlined $a$ is the one that you chose first, and the other four characters were chosen as one of the $4^{5-1}$ possibilities for. Thus, you’ve counted the same string three times in your figure of $n\cdot4^{n-1}$. In general you’ll find that if a string has $k$ copies of the letter $a$, it gets counted $k$ times in your figure of $n\cdot4^{k-1}$. Thus, you are badly overcounting the number of strings containing at least one $a$.
There are $4^n$ strings of length $n$ altogether, and there are $3^n$ strings of length $n$ containing no $a$, so there are actually $4^n-3^n$ strings of length $n$ containing at least one $a$.
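For the full problem (at least one each of a, b, and c), inclusion-exclusion over the missing letters gives $4^n-3\cdot3^n+3\cdot2^n-1$ strings; e.g. for $n=3$ this is $64-81+24-1=6$, the six arrangements of $abc$.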
|
H: relation between arithmetic series and `square` arithmetic series
For example:
$$1+2+\text{...}+n=\frac{n(n+1)}{2}~~~(1)$$
$$1^2+2^2+\text{...}+n^2=\frac{n(n+1)(2n+1)}{6}~~~(2)$$
For the second identity, I sometimes misremember the right-hand side as
$\frac{n(2n+1)(2n+3)}{6}$ or something similar.
Why can I not memorize some formulas exactly after all these years?
Question 1
How to derive $\frac{n(n+1)(2n+1)}{6}$ from $1^2+2^2+\text{...}+n^2$?
Question 2
Now I happen to notice that the factor $n(n+1)$ also appears in the summation formula
$1+2+\text{...}+n=\frac{n(n+1)}{2}$
What is the relation between (1) and (2)?
I think there must be a large literature on the more general sums
$$\sum _{i=0}^n i^k,\qquad k=1,2,\ldots,n~~~(3)$$
What is the formal terminology for these?
AI: First, we look at your observation that
$$1+2+3+\cdots+n=\frac{n(n+1)}{2}.\tag{1}$$
If you are somewhat familiar with combinatorics, there is a nice way of seeing it via the binomial coefficient $\binom{n+1}{2}$, sometimes called $C(n+1,2)$ or $C_2^{n+1}$ (there are other names).
The right-hand side of (1) is the number of ways to choose $2$ numbers from the $n+1$ numbers $1$ to $n+1$. Let's count the number of ways to choose $2$ numbers in another way.
Maybe the second largest number chosen is $n$. There is then $1$ way of choosing the largest number.
Maybe the second largest number chosen is $n-1$. There are then $2$ ways to choose the largest number.
Maybe the second largest number chosen is $n-2$. There are then $3$ ways to choose the largest number.
Continue. Finally, if the second largest number chosen is $1$, there are $n$ ways to choose the largest number.
It follows that $\binom{n+1}{2}=1+2+3+\cdots +n$.
The nice thing about this viewpoint is that it generalizes. The same reasoning shows, for example, that
$$\binom{2}{2}+\binom{3}{2}+\binom{4}{2}+\cdots+\binom{n+1}{2} =\binom{n+2}{3}.\tag{2}$$
The binomial coefficient on the right of (2) is equal to $\frac{(n+2)(n+1)(n)}{3!}.$
The sum on the left of (2) is
$$\frac{(1)(2)}{2}+\frac{(2)(3)}{2}+\frac{(3)(4)}{2}+\cdots +\frac{n(n+1)}{2}.\tag{3}$$
The sum (3) is a close relative of the sum of the first $n$ squares. For it is equal to
$$\frac{1}{2}\left(1^2+1+2^2+2+3^2+3+\cdots+n^2+n\right).$$
Since we already know the sum $1+2+3+\cdots +n$, we can now find a formula for the sum of the first $n$ squares. Putting things together, it is
$$2\frac{(n+2)(n+1)(n)}{3!} -\frac{n(n+1)}{2}.$$
Algebraic simplification (bring to common denominator $6$) then yields the hard to remember formula for the sum of the first $n$ squares.
|
H: Is this formula logically valid
Is $\exists x (P(x) \land Q(x)) \rightarrow \exists x P(x) \land \exists x Q(x)$ logically valid?
I can't find an interpretation in which the formula is false.
AI: Yes, indeed it is valid. No counterexample to be found.
If there exists an $x$ for which both ($P(x)$ and $Q(x)$) hold, then there certainly exists an $x$ for which $P(x)$ holds, and there exists an $x$ for which $Q(x)$ holds.
The converse implication is not valid, however. If there exists an $x$ that's a pumpkin and there exists an $x$ that is green, it does not follow that there exists an $x$ that is a green pumpkin.
|
H: Why does $\lim_{\lambda \to \infty} \frac{\cos(\lambda x) - \cos(\lambda y)}{\lambda} = 0$
Why does $$\lim_{\lambda \to \infty} \frac{\cos(\lambda x) - \cos(\lambda y)}{\lambda} = 0$$
Or I should really rephrase my question. Is this limit obvious?
Motivation comes from $$\lim_{\lambda \to \infty} \int_{a}^{b} \sin(\lambda t) dt$$
The book I have simply states that limit is $0$. I worked it out and I found that
$$\left | \frac{\cos(\lambda x) - \cos(\lambda y)}{\lambda} \right | \leq |x - y|$$
I am unable to see how the RHS goes to $0$
AI: HINT:
As for real $z,-1\le\cos z\le 1\implies -2\le\cos \lambda x-\cos \lambda y\le 2$
So, $$-\frac2{\lambda}\le\frac{\cos \lambda x-\cos \lambda y}{\lambda}
\le\frac2{\lambda}$$
Use Squeeze Theorem
|
H: The same destination regardless of origin
When I was very little, I didn’t understand some basics of the space we live in. We always followed the same directions to get into town, to school, to the grocery store, and so on. So I figured that by following those directions, we would always arrive at the same destination no matter where we left from. It only happened that we always left from home.
In other words, I assumed space was more like a fully-connected graph than the continuous 3-space it seems to be. Vertices were places, and edges were the mystical “directions” by which you could travel between places. This hypothesis didn’t exactly stand up to experimentation. However I am curious to know two things:
Consider a directed graph in which outgoing edges are numbered; what does the set of graphs look like where any given path (say, as a list of edge numbers) has a constant destination?
Does a (nontrivial) continuous space or surface exist where any given translation $T$ has a constant destination?
Examples:
A directed graph with one vertex connected to itself.
A directed graph with $kn+1$ vertices: a central self-connected vertex and $k$ “spokes” of length $n$.
A continuous 0-space.
AI: Taking question 2. with a fairly liberal (and admittedly naive) interpretation, we could always consider a vector field $F$ with a single attractive fixed point - e.g. $F(\mathbf{x})=y\hat{\imath}-(x+y)\hat{\jmath}$, which has a spiral sink at $\mathbf{x}=(0,0)$. Then, the family of solutions to the differential equation
$$
\dot{\mathbf{x}}=F(\mathbf{x})
$$ for varying initial conditions $\mathbf{x}(0)=\mathbf{x}_0$, defines a continuous space where "all objects have a common destination regardless of origin," the destination being the fixed point $(0,0)$. Below is a PPlane output for a few trajectories:
This could probably, of course, be generalized a great deal by an expert in dynamical systems (which I don't really claim to be).
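To make this concrete, here is a rough numerical sketch (forward Euler; the step size and starting points are arbitrary choices) showing that trajectories of this field all end up at the origin:

```python
# Forward-Euler integration of x' = y, y' = -(x + y) from several
# starting points; every trajectory spirals into the fixed point (0, 0).
import numpy as np

def step(state, dt=1e-3):
    x, y = state
    return state + dt * np.array([y, -(x + y)])

for start in [(3.0, 0.0), (-2.0, 2.0), (0.5, -4.0)]:
    state = np.array(start)
    for _ in range(40_000):          # integrate up to t = 40
        state = step(state)
    print(start, "->", np.round(state, 6))   # all end near (0, 0)
```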
|
H: Boundaries of finite intersections and unions of sets
I apologize if this is a duplicate - I looked but didn't find one.
This question is sort of a sanity check.
Let $A$, $B$ be sets and define the boundaries $\partial A$ and $\partial B$ as usual.
Is it true that both $\partial (A \cup B) \subseteq \partial A \cup \partial B$ and $\partial (A \cap B) \subseteq \partial A \cup \partial B$?
It seems obvious, and the proofs seem really easy, but I haven't seen this fact written down anywhere.
To get started on the proofs I thought to look at a point $p$ which is in neither $\partial A$ nor $\partial B$ and then show it is not in either $\partial (A \cup B)$ or $\partial (A \cap B)$ by looking at different cases where $p$ is in the interior or exterior of $A$, $B$, taking intersections of neighborhoods, etc...
Thanks a bunch!
AI: $\newcommand{\bdry}{\operatorname{bdry}}\newcommand{\cl}{\operatorname{cl}}$You can do it with inline calculations if you use the definition that $\bdry A=\cl A\cap\cl(X\setminus A)$:
$$\begin{align*}
\bdry(A\cap B)&=\cl(A\cap B)\cap\cl\Big(X\setminus(A\cap B)\Big)\\
&=\cl(A\cap B)\cap\cl\Big((X\setminus A)\cup(X\setminus B)\Big)\\
&=\cl(A\cap B)\cap\Big(\cl(X\setminus A)\cup\cl(X\setminus B)\Big)\\
&=\Big(\cl(A\cap B)\cap\cl(X\setminus A)\Big)\cup\Big(\cl(A\cap B)\cap\cl(X\setminus B)\Big)\\\\
&\subseteq\Big(\cl A\cap\cl(X\setminus A)\Big)\cup\Big(\cl B\cap\cl(X\setminus B)\Big)\\\\
&=\bdry A\cup\bdry B\;,
\end{align*}$$
and
$$\begin{align*}
\bdry(A\cup B)&=\cl(A\cup B)\cap\cl\Big(X\setminus(A\cup B)\Big)\\
&=\cl(A\cup B)\cap\cl\Big((X\setminus A)\cap(X\setminus B)\Big)\\
&\subseteq\cl(A\cup B)\cap\Big(\cl(X\setminus A)\cap\cl(X\setminus B)\Big)\\
&=\Big(\cl A\cup\cl B\Big)\cap\cl(X\setminus A)\cap\cl(X\setminus B)\\
&=\left(\Big(\cl A\cap\cl(X\setminus A)\Big)\cup\Big(\cl B\cap\cl(X\setminus A)\Big)\right)\cap\cl(X\setminus B)\\
&=\Big(\bdry A\cap\cl(X\setminus B)\Big)\cup\Big(\cl B\cap\cl(X\setminus A)\cap\cl(X\setminus B)\Big)\\
&=\Big(\bdry A\cap\cl(X\setminus B)\Big)\cup\Big(\bdry B\cap\cl(X\setminus A)\Big)\\\\
&\subseteq\bdry A\cup\bdry B\;.
\end{align*}$$
|
H: Math Parlor Trick
A magician asks a person in the audience to think of a number $\overline {abc}$. He then asks them to sum up $\overline{acb}, \overline{bac}, \overline{bca}, \overline{cab}, \overline{cba}$ and reveal the result. Suppose it is $3194$. What was the original number?
The obvious approach was modular arithmetic.
$(100a + 10c + b) + (100b + 10a + c) + (100b + 10c + a) + (100c + 10a + b) + (100c + 10b + a) = 3194$
$122a + 212b + 221c = 3194$
Since $122 \equiv 212 \equiv 221 \equiv 5 \pmod 9$ and $3194 \equiv 8 \pmod 9$,
$5(a + b + c) \equiv 8 \pmod 9$
So, $a + b + c = 7$ or $16$ or $25$
Trial and error produces the result $358$. Any other, more elegant method?
AI: The sum of all six combinations is $222(a+b+c)$
So, $3194+100a+10b+c=222(a+b+c)$
As $3194/222>14$, we must have $a+b+c\ge 15$.
If $a+b+c=15, 100a+10b+c=222(15)-3194=136$
$\implies a+b+c=1+3+6=10\ne 15$
If $a+b+c=16, 100a+10b+c=222(16)-3194=358$
$\implies a+b+c=3+5+8=16$ as needed
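A brute-force check (an illustrative snippet, not part of the original argument) confirms that $358$ is the only three-digit solution:

```python
# Brute force: find all three-digit abc whose five permutations
# (acb, bac, bca, cab, cba) sum to 3194.
def digits(n):
    return n // 100, (n // 10) % 10, n % 10

for n in range(100, 1000):
    a, b, c = digits(n)
    total = 222 * (a + b + c) - n   # sum of all six minus abc itself
    if total == 3194:
        print(n)                    # prints 358 only
```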
|
H: Subspace of a finite dimensional space is finite dimensional
I was reading a proof that subspaces of finite-dimensional spaces are finite-dimensional. The key was to add and remove vectors and then make a counting conclusion.
An excerpt (adapted from Axler) is given below
This is going to sound very stupid, but it would seem the whole proof is based on the idea that $span(v_1, \dots, v_{j-1})$ exists. How do we know $U$ is not infinite dimensional? Isn't there something wrong by assuming $U = span(v_1, \dots, v_{j-1})$
AI: The span of $\{v_1,\dots,v_{j-1}\}$ is just the set of all linear combinations of the vectors $v_1,\dots,v_{j-1}$; this certainly exists, whether or not it is equal to the given subspace $U$. Note that Step $j$ does not say that $U=\operatorname{span}\{v_1,\dots,v_{j-1}\}$; it says that if $U=\operatorname{span}\{v_1,\dots,v_{j-1}\}$, then we’re done, because (as you’ll see if you read further) $\{v_1,\dots,v_{j-1}\}$ is then a finite linearly independent set of vectors spanning $U$, i.e., a basis for $U$. In fact, in that case we can say that $\dim U=j-1$. If, however, the vectors $v_1,\dots,v_{j-1}$ do not span $U$, then we can find a vector $v_j\in U\setminus\operatorname{span}\{v_1,\dots,v_{j-1}\}$. Because $v_j\notin\operatorname{span}\{v_1,\dots,v_{j-1}\}$, the new, bigger set $\{v_1,\dots,v_{j-1},v_j\}$ is linearly independent, and we go on to Step $j+1$:
if $U=\operatorname{span}\{v_1,\dots,v_j\}$, then $\{v_1,\dots,v_j\}$ is a basis for $U$, $\dim U=j$, and $U$ is therefore finite-dimensional;
if $U\ne\operatorname{span}\{v_1,\dots,v_j\}$, then we can find a $v_{j+1}\in U\setminus\operatorname{span}\{v_1,\dots,v_j\}$, and we construct the new, still bigger linearly independent set $\{v_1,\dots,v_j,v_{j+1}\}$ and go on to Step $j+2$.
Now recall that $V$ is finite-dimensional. Let $n=\dim V$. Then we know that if $A$ is a linearly independent set of vectors in $V$, then $|A|\le n$: you cannot find a set of more than $n$ linearly independent vectors in $V$. Thus, the process described in the proof must stop by Step $n$: any further steps would give us at least $n+1$ linearly independent vectors $\{v_1,\dots,v_{n+1}\}$, and that’s impossible.
|
H: Yet Another Monty Hall Question - Please advise if alternative scenario proves the same principle
Okay, I'm very embarrassed that there are already 71 questions (based on search of "monty hall") and I'm going to post another one. I read the first 5 before succumbing to choice-overload. I'll try to keep this short and sweet.
A host and contestant stand before 3 doors. The host advises the contestant that behind 1 of the 3 is a car while the other doors each have a goat.
The host advises the contestant to choose 2 of the 3 doors to reveal if either has the car.
The contestant chooses door 1 and door 3.
The host advises that behind one of the doors chosen is a goat and asks if the contestant wants to keep doors 1 and 3 or switch to only revealing door 2.
Big Question: Is the probability of revealing the car higher if the contestant switches or stays with the original choice?
As I understand the original problem, the above has the same result, so the contestant should switch, but I can't wrap my head around the math and don't want to hurt my brain trying if I'm incorrect about the above fundamentally being the same scenario.
Also, if the above is the same, how is it any different from the contestant saying "3, no wait 2", since no matter which 2 doors are chosen (either by the contestant alone or with the help of the host, as in the original problem), we know that at least 1 door has a goat?
Last bit: If this is the same mathematical scenario, is it even less intuitive than the original or does it help clarify (to someone other than me) why the original works?
Addendum
Original MH problem, simplified:
There are 3 marbles in a bag; 2 are boring and grey, 1 is green. The host asks you to reach in and pull 1 out but not look at it. After doing this, the host, who can look into the bag, pulls out 1 grey marble. He then asks if you want to keep the 1 in your clutched hand or take the 1 still in the bag.
My version, simplified:
There are 3 marbles in a bag; 2 are boring and grey, 1 is green. The host asks you to reach in and pull 2 out but not look at them. After doing this, the host asks if you want to keep the 2 in your clutched hand or take the 1 still in the bag.
In both scenarios, 2 marbles are removed from the bag and 1 of those 2 is definitely grey. If we accept (and we all should at this point!) that in the first scenario the probability of the remaining door or marble being the winning choice is 2/3, shouldn't that hold true in the second scenario? If not, please explain at what point it diverges? If we know 1 of the 2 "out of the bag" is grey or a goat in either scenario, it shouldn't matter if we see which of the 2 it is, right?
Addendum 2
Thanks to Eric T for helping me get my head around this. With either of my modified scenarios, where my logic diverged was I allow the contestant to choose 2 doors and then keep both choices or switch, whereas in the original MH problem, the contestant is given a second "choice" with the host-reveal but still only keeps the original (or switches). One of my goals in creating this alternative was to eliminate the host variable which is a clear source of confusion (and trickery) in the original, leading to such misassumptions as
The host's knowledge of what is behind all three doors creates a mathematical bias since he won't ever choose the car at random. If MH only presents the option when the car wasn't selected (to throw the contestant off), this would not change the math when testing for when the contestant chooses the car first. If the host always chooses a goat, it's because he always has at least one goat to choose from and is supposed to reveal a goat, not choose a door at random.
Showing the contestant that one of the other 2 doors has a goat gives the contestant new information that affects the outcome. it is not eliminating the goat (seeing the goat) that makes switching more likely to reveal the car, it is eliminating the door.
If my variation has the pick-2 parameter but has the "only one door allowed" rule reinstated, switching is still the better option. Here is the final version:
A host and contestant stand before 3 doors. The host advises the contestant that 1 of the 3 doors hides a car while the other doors each hide a goat.
The host advises the contestant to choose 2 of the 3 doors to check for the car.
The contestant chooses door 1 and door 3.
The host asks the contestant to choose between opening 1 and 3 or switch to opening door 2.
In this scenario, the host has done nothing to interfere and the contestant knows that at least one of the two selected doors has a goat, but must still risk choosing the wrong door of his selected 2 or switching. While the odds may still seem 1/2 at first, the possibility that the contestant chose both goats and thus may have no chance with his current subset makes the better odds in switching clearer.
Last question: What would be the actual probability of choosing the car if we don't know whether the car exists in the subset 1 or 0 times? Just a comment mentioning a concept or wiki page is fine. Just curious about the math and have no idea what to search for.
AI: If the host is under no obligations except not to lie, "behind one of the doors is a goat" reveals absolutely nothing. There is no conditional probability here. The chances of winning are 2/3 if the contestant stays, and 1/3 if he switches.
Also, if he wins, the goat should ride shotgun; they're notoriously bad drivers.
Edit: To answer your last comment, this version is more intuitive than the original. In the original, one has to interpret the information provided by the host's big reveal; in your variant, it is easy to see that the information is useless and that we can ignore the host.
Edit 2:
I want to express the solution in an absolutely clear way that is not unique to this problem, because I think this mode of thought will be helpful to people who are perennially confused by these kinds of puzzles.
First, let's take the original Monty Hall problem. There are 3 doors, 2 hiding goats and one hiding a car. You choose a door uniformly at random. Now, there is a 1/3 chance that you've chosen the car, and 2/3 chance that you've chosen a goat. The host reveals a goat behind a door other than the one you've chosen. Now what?
2/3 of the time, you will be in this situation:
you are standing in front of a door with a goat
the other unopened door has a car
if you switch, you will win
if you stay, you will lose
1/3 of the time, you will be in this situation:
you are standing in front of a door with a car
the other unopened door has a goat
if you switch, you will lose
if you stay, you will win
2/3 of the time, switching is correct. Having no other information, you should switch.
Now, let's look at your problem in exactly the same way. There are 3 doors, 2 hiding goats and one hiding a car. You choose two doors uniformly at random. Now, there is a 2/3 chance that you've chosen the car and a goat, and 1/3 chance that you've chosen two goats. Regardless of what the host does:
2/3 of the time, you will be in this situation:
you have chosen two doors, one with a car and one with a goat
the unchosen door has a goat
if you switch, you will lose
if you stay, you will win
1/3 of the time, you will be in this situation:
you have chosen two doors, both with goats
the unchosen door has a car
if you switch, you will win
if you stay, you will lose
2/3 of the time, staying is correct. Having no other information, you should stay.
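If a simulation helps, here is a minimal Monte Carlo sketch of the pick-two variant (door indices and the trial count are arbitrary choices):

```python
# Monte Carlo for the pick-two variant: the contestant keeps two
# doors; "stay" wins if the car is behind either of them.
import random

trials, stay_wins, switch_wins = 100_000, 0, 0
for _ in range(trials):
    car = random.randrange(3)
    picked = random.sample(range(3), 2)
    if car in picked:
        stay_wins += 1
    else:
        switch_wins += 1
print("stay  :", stay_wins / trials)    # ~ 2/3
print("switch:", switch_wins / trials)  # ~ 1/3
```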
|
H: $T=U\Sigma V^T$ is the SVD of T. Given $\Sigma$ find T, U and V.
$T=U\Sigma V^T$ is the SVD of T.
$$\Sigma=\pmatrix{11.83&0\\0&0\\0&0}$$
The last two columns of $U$ are $[0.949,0,0.316]^T$ and $[-0.894,0.447,0]^T$
The first column of $V$ is $[0.316,0.949]^T$
a) what are the remaining values in $U$ and $V$?
b) identify the four fundamental subspaces of T.
I know that $\lambda_1=11.83$ and $\lambda_2=0$ (most likely) from $\Sigma$.
The first $r$ columns of $U$ span $C(T)$ and the last $m-r$ columns of $U$ span $N(T^T)$.
The first $r$ columns of $V$ span $C(T^T)$ and the last $n-r$ columns of $V$ span $N(T)$.
Also, $$T=U\Sigma V^T$$
$$TV=U\Sigma$$
$$U=TV\Sigma^{-1}$$
How can I use these relationships to find $T$, and the remaining values of $U$ and $V$? Any help is greatly appreciated. :)
AI: Suppose columns of $U$ are $u_1$, $u_2$ and $u_3$. You know that $u_3$ has to be orthogonal to $u_1$ and $u_2$, and its norm has to be $1$, so there are two possibilities: $u_3 = \pm u_1 \times u_2$, where $\times$ is the cross product in $\mathbb R^3$.
For $V$, the problem is slightly easier. Suppose $v_1$ and $v_2$ are columns of $V$. There are also two possibilities: $v_2 = \pm\begin{pmatrix}v_{1y}\\-v_{1x}\end{pmatrix}$, where $v_{1x}$ and $v_{1y}$ are components of $v_1$.
Despite four choices for $U$ and $V$, we see that
\begin{align*}
U\Sigma V^T & =
\begin{pmatrix}
\vert & \vert & \vert\\
u_1 & u_2 & u_3 \\
\vert & \vert & \vert
\end{pmatrix}
\begin{pmatrix}
\lambda_1 & 0\\
0 & \lambda_2 \\
0 & 0
\end{pmatrix}
\begin{pmatrix}
- \ v_1^T -\\
- \ v_2^T -
\end{pmatrix} \\
& =
\begin{pmatrix}
\vert & \vert\\
\lambda_1u_1 & \lambda_2 u_2\\
\vert & \vert
\end{pmatrix}
\begin{pmatrix}
- \ v_1^T -\\
- \ v_2^T -
\end{pmatrix} \\
\end{align*}
We see here that the choice of $u_3$ is actually not important in the product. Also, because $\lambda_2 = 0$, we have
\begin{align*}
U\Sigma V^T & =
\begin{pmatrix}
\vert & \vert\\
\lambda_1u_1 & 0\\
\vert & \vert
\end{pmatrix}
\begin{pmatrix}
- \ v_1^T -\\
- \ v_2^T -
\end{pmatrix} \\
& = \lambda_1u_1v_1^T.
\end{align*}
This is your original matrix $T$. It is now easy to see that
$\text{image}(T) = \langle u_1 \rangle \subseteq \mathbb R^3$
$\ker(T^T) = \langle u_2, u_3 \rangle \subseteq \mathbb R^3$
$\text{image}(T^T) = \langle v_1 \rangle \subseteq \mathbb R^2$
$\ker(T) = \langle v_2 \rangle \subseteq \mathbb R^2$
These answers do not depend on the choice of $u_3$ and $v_2$.
By the way, as Amzoti pointed out, the given values for $u_1$ and $u_2$ are wrong. $u_1$ and $u_2$ are supposed to be orthogonal.
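For illustration only, here is a small numpy sketch of the mechanics described above. Since the stated columns are inconsistent, the $u$'s below are made-up orthonormal vectors, not the question's data:

```python
# Illustrative only: u2, u3 below are invented orthonormal vectors
# (the ones in the question are not orthogonal, as noted above).
import numpy as np

s1 = 11.83
u2 = np.array([0.0, 1.0, 0.0])
u3 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
u1 = np.cross(u2, u3)                  # completes an orthonormal basis
v1 = np.array([0.316, 0.949])          # first column of V, as given
v2 = np.array([v1[1], -v1[0]])         # its orthogonal complement in R^2

T = s1 * np.outer(u1, v1)              # T = sigma_1 * u1 * v1^T
U, S, Vt = np.linalg.svd(T)
print(np.round(S, 4))                  # ~ [11.83, 0] up to rounding
```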
|
H: Problem in Cardinality and Order
Recently, I have been reading a Schaum's book, General Topology, which introduced the concepts of cardinality and order, but it points out some things that I really don't understand.
If $A\preceq B$ and $B\preceq A$, then $A \sim B$.
If $X\supseteq Y\supseteq X_1$ and $X\sim X_1$, then $X\sim Y$
Prove that these statements are equivalent; both are forms of the Schroeder–Bernstein Theorem.
In fact, I know how to prove them separately, but is there an easy way to prove they are equivalent?
2. In the book, the 'axiom of choice' is mentioned a lot, so I searched for it on Wikipedia and found that it means:
$$\forall X[\emptyset\notin X\Rightarrow \exists f:X\rightarrow\cup X \forall A\in X(f(A)\in A)]$$
It is quite easy to understand. But the book said this is equivalent to Zorn's Lemma:
Let $X$ be a non-empty partially ordered set in which every totally ordered subset has an upper bound. Then X contains at least one maximal element.
Wikipedia also mentions that $\aleph_0$ is the smallest cardinality of an infinite set, and that this derives directly from the axiom of choice. I don't understand why either of these holds.
3. Prove the law of trichotomy: given any pair of sets, either $A\prec B$, $B\prec A$, or $A\sim B$. It is said that this can be proved by using Zorn's Lemma.
AI: These questions require fairly extensive answers; you should split them up and ask them separately. I’ll answer the first one here.
Assume that $A\preceq B$ and $B\preceq A$ imply $A\sim B$ for all sets $A$ and $B$, and suppose that $X\supseteq Y\supseteq X_1$, where $X\sim X_1$. Since $Y\subseteq X$, the identity map $\mathrm{id}_Y:Y\to X$ is an injection (one-to-one); to complete the proof, we must show that there is also an injection $g:X\to Y$. Since $X\sim X_1$, there is a bijection $h$ mapping $X$ onto $X_1$; in particular, $h:X\to X_1$ is then an injection (one-to-one). The identity map $\mathrm{id}_{X_1}:X_1\to Y$ is an injection, so the composition $\mathrm{id}_{X_1}\circ h:X\to Y$ is an injection, and we’re done: we set $g=\mathrm{id}_{X_1}\circ h$.
Now assume that if $X\supseteq Y\supseteq X_1$, and $X\sim X_1$, then $X\sim Y$, and suppose that $A\preceq B$ and $B\preceq A$. Then there are injections $f:A\to B$ and $g:B\to A$. Let $A_1=f[A]$; clearly $A_1\subseteq B$, and $A_1\sim A$. We can almost apply our assumption, but not quite, because $B$ isn’t (necessarily) a subset of $A$. But we can use $g$ to get the same effect: let $X_1=g[A_1]$ and $Y=g[B]$, and observe that $X_1\subseteq Y\subseteq A$ and $X_1\sim A_1\sim A$. Thus, $Y\sim A$, and since $Y\sim B$ (because $g$ is injective), we have $A\sim Y\sim B$, i.e., $A\sim B$.
|
H: Convergence w.p. 1 vs convergence in probability: a "physical" example
I understand (proved) that convergence with probability one implies convergence in probability, and that the latter notion is indeed weaker; I've completed an exercise showing that a sequence of indicator variables on $[0,1]$ converged in probability but not almost everywhere.
However, I still don't have intuition about these notions of convergence in the "real world". Is there an example using either coin tossing or some concrete physical process such that by using plain english, it is obvious that we have convergence in probability, but not convergence almost everywhere?
AI: Consider a lightbulb which is being replaced each time it goes broken and assume that the lifetime of the $n$th lightbulb used is exponential (as lightbulb lifetimes often are, at least in mathematics, aren't they?) with mean $\mu_n=n$. Let $D_t$ denote the age of the lightbulb in use at time $t$, that is, $t-D_t$ denotes the time at which the lightbulb in use at time $t$ began to be used.
Then $[D_t\to\infty]$ has probability zero since $D_t=0$ at each time $t$ in the unbounded sequence of replacement times. But $P(D_t\geqslant x)\to1$ for every $x$ because $\mu_n\to\infty$, hence $D_t\to\infty$ in probability.
If infinities are a problem, note that $X_t=1/D_t$ is such that $X_t\to0$ in probability but not almost surely.
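A small simulation sketch (time horizon, threshold, and run count chosen arbitrarily) illustrates both statements: $D_t$ is large with high probability at a fixed large $t$, even though it returns to $0$ infinitely often:

```python
# Simulate the age D_t of the bulb in use at a fixed large time t.
# Empirically P(D_t >= x) should be close to 1 for any fixed x.
import random

def age_at(t):
    clock, n = 0.0, 1
    while True:
        life = random.expovariate(1.0 / n)   # mean n for the n-th bulb
        if clock + life > t:
            return t - clock                 # age of the current bulb at t
        clock += life
        n += 1

t, x, runs = 1e6, 100.0, 2000
hits = sum(age_at(t) >= x for _ in range(runs))
print(hits / runs)   # close to 1, yet D_t = 0 at every replacement time
```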
|
H: Approximation of differential equations
Can someone provide me a good reference about approximation techniques in the continuous domain (not piecewise nor numerical methods) for differential equations?
AI: There are really different types of methods for this, depending on what your differential equation looks like and on what you want to achieve.
I have done some approximations by studying systems of differential equations (also non-linear) via corresponding stochastic methods such as master equations and Fokker–Planck equations (vs. Langevin equations). For references on both types of approach, Wikipedia is a good place to get primary information before going further.
This is only one way, however. What is often useful as well is to apply Fourier techniques. These are quite often used by electrical engineers (systems and signals) and in physics. The references there are massive; you need only google the terms. It is rather difficult to sift out the best technique without knowing your particular equations.
Hope this helps.
|
H: Why does $(\cos \theta + i \sin \theta)^n =(\cos n\theta + i \sin n \theta)$
Is it the Euler identity $$ e^{i \theta} =(\cos \theta + i \sin \theta)$$
$$ e^{i n \theta} =(\cos n \theta + i \sin n \theta)$$
AI: Hints (sketches)
First Proof: Trigonometric identities + induction: $\;\;\;\;$ For $\,n=2\;$ :
$$(\cos t+i\sin t)^2=\cos^2t-\sin^2t+2i\cos t\sin t=\cos 2t+i\sin 2t$$
Induction:
$$(\cos t+i\sin t)^{n+1}=(\cos t+i\sin t)^n(\cos t+i\sin t)\stackrel{\text{ind. Hyp.}}=(\cos nt+i\sin nt)(\cos t+i\sin t)=$$
$$=\cos nt\cos t-\sin nt\sin t+i(\sin nt\cos t+\sin t\cos nt)=\ldots\ldots$$
Second "Proof": Using polar representation
$$(\cos t+i\sin t)^n=\left(e^{it}\right)^n=e^{int}=\ldots\ldots$$
You can see that the second proof is much easier and more direct than the first one... yet it requires knowing some machinery that makes it so.
Choose yours...:)
|
H: Intersection of a countable collection of $F_\sigma$ sets
Let $\{f_n\}$ be a sequence of continuous functions defined on $\mathbb{R}$. Show that the set of points $x$ at which the sequence $\{f_n(x)\}$ converges to a real number is the intersection of a countable collection of $F_\sigma$ sets.
Continuity of $f_n$ means that for any $x\in\mathbb{R}$ and $\epsilon>0$, there exists $\delta$ such that $|y-x|<\delta$ implies $|f_n(y)-f_n(x)|<\epsilon$.
The sequence $\{f_n(x)\}$ converging to a real number $y$ means that for any $\epsilon>0$, there exists $N$ such that for all $n>N$, $|f_n(x)-y|<\epsilon$.
Intersection of a countable collection of $F_\sigma$ sets... and each $F_\sigma$ set is a countable union of closed sets... that seems complicated.
AI: The recipe for this kind of problem is to write down the formula for a point $x$ to be in this set, using countable sets (like $\frac{1}{n}, n \in \mathbb{N}$, instead
of arbitrary $\epsilon$ and $\delta$). Also use that $f_n(x)$ converges iff it is a Cauchy sequence in $\mathbb{R}$, as $\mathbb{R}$ is complete.
So $x$ is in this set iff
for all $n$ in $\mathbb{N}$ there exists $m$ in $\mathbb{N}$ such that $k,l \ge m$ implies that $|f_k(x) - f_l(x)| \le \frac{1}{n}$.
Now define $A_{k,l,n} = \left\{x: | f_k(x) - f_l(x) | \le \frac{1}{n} \right\}$, which is closed (e.g., as the inverse image of the closed set $[0,\frac{1}{n}]$ under the continuous function $|f_k - f_l|$).
So the set of convergence points of $(f_n)$ equals $\cap_n \cup_m \cap_{k,l \ge m} A_{k,l,n}$
where the last set is closed, as an intersection of closed sets, and so
this set is a countable intersection of a countable union of closed sets.
|
H: Shortest way to travel to each point in a set of points exactly once, and return to starting point?
Is there a polynomial time algorithm that finds this?
Just interested.
Thanks in advance
edit: In this case you are given a set of Cartesian co-ordinates, so the distances are the points' physical distances from one another.
AI: According to Wikipedia, the Euclidean traveling salesman problem is NP-hard, so no polynomial-time algorithm for finding the optimal solution is known (and none exists unless P $=$ NP). The Euclidean metric does simplify things, however, making it easier to find good approximate solutions, as described in the Wikipedia article on TSP.
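For intuition, here is an exact brute-force solver (made-up points; the factorial running time is exactly why the problem is hard in general):

```python
# Exact TSP by brute force: O(n!) -- fine for tiny n, hopeless beyond.
from itertools import permutations
from math import dist

points = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]

def tour_length(order):
    return sum(dist(points[a], points[b])
               for a, b in zip(order, order[1:] + order[:1]))

start, *rest = range(len(points))      # fix the start to avoid rotations
best = min((tuple([start] + list(p)) for p in permutations(rest)),
           key=tour_length)
print(best, round(tour_length(best), 3))
```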
|
H: Show that if the sequence $(x_n)$ is bounded, then $(x_n)$ converges iff $\limsup(x_n)=\liminf(x_n)$.
Show that if the sequence $(x_n)$ is bounded, then $(x_n)$ converges iff $\limsup_{n\to\infty}(x_n)=\liminf_{n\to\infty}(x_n)$.
The definitions that I’m using:
$$\begin{align*}
&\liminf_{n\to\infty}x_n=\lim_{n\to\infty}\inf_{m\ge n}x_m\\
&\limsup_{n\to\infty}x_n=\lim_{n\to\infty}\sup_{m\ge n}x_m
\end{align*}$$
This is the first time I've dealt with $\liminf$; can someone give me some help?
Thank you.
AI: I’ll get you started. For one direction, suppose that $\lim\limits_{n\to\infty}x_n=x$; we want to show that $$\limsup_{n\to\infty}x_n=\liminf_{n\to\infty}x_n\;.$$ The most natural guess is that this is true because both are equal to $x$, so let’s try to prove that.
In order to show that $\limsup\limits_{n\to\infty}x_n=x$, we must show that $\lim\limits_{n\to\infty}\sup_{k\ge n}x_k=x$. To do this, we must show that for each $\epsilon>0$ there is an $m_\epsilon\in\Bbb N$ such that
$$\left|x-\sup_{k\ge n}x_k\right|<\epsilon\quad\text{whenever}\quad n\ge m_\epsilon\;.$$
Since $\lim\limits_{n\to\infty}x_n=x$, what we actually know is that for each $\epsilon>0$ there is an $m_\epsilon'\in\Bbb N$ such that $|x-x_n|<\epsilon$ whenever $n\ge m_\epsilon'$.
Show that if $|x-x_n|<\epsilon$ for all $n\ge m_\epsilon'$, then $\left|x-\sup\limits_{k\ge n}x_k\right|\le\epsilon$. Conclude that if we set $m_\epsilon=m_{\epsilon/2}'$, say, then $$\left|x-\sup_{k\ge n}x_k\right|<\epsilon\quad\text{whenever}\quad n\ge m_\epsilon$$ and hence $\limsup\limits_{n\to\infty}x_n=x$.
Modify the argument to show that $\liminf\limits_{n\to\infty}x_n=x$.
For the other direction, suppose that $$\limsup_{n\to\infty}x_n=\liminf_{n\to\infty}x_n=x\;;$$ we want to show that $\langle x_n:n\in\Bbb N\rangle$ converges. The natural candidate for the limit of the sequence is $x$, so we should try to prove that $\lim\limits_{n\to\infty}x_n=x$, i.e., that for each $\epsilon>0$ there is an $m_\epsilon\in\Bbb N$ such that $|x-x_n|<\epsilon$ whenever $n\ge m_\epsilon$. What we know is that
$$\lim_{n\to\infty}\sup_{k\ge n}x_k=x=\lim_{n\to\infty}\inf_{k\ge n}x_k\;,$$
i.e., that for each $\epsilon>0$ there is an $m_\epsilon'\in\Bbb N$ such that
$$\left|x-\sup_{k\ge n}x_k\right|<\epsilon\quad\text{and}\quad\left|x-\inf_{k\ge n}x_k\right|<\epsilon\quad\text{whenever}\quad n\ge m_\epsilon'\;.$$
(Why can I use a single $m_\epsilon'$ instead of requiring separate ones for each of the two limits?)
Show that if $\ell\ge n$, then $$|x-x_\ell|\le\max\left\{\left|x-\sup_{k\ge n}x_k\right|,\left|x-\inf_{k\ge n}x_k\right|\right\}\;,$$ and conclude that setting $m_\epsilon=m_\epsilon'$ will ensure that $|x-x_n|<\epsilon$ whenever $n\ge m_\epsilon$ and hence that the sequence converges to $x$.
|
H: Are there any real life applications of the greatest common divisor of two or more integers?
I am looking for real-life applications of the gcd. I have found one with tiles, but there must be many more of this type.
AI: Suppose that you have two sets of people of cardinalities $m$ and $n$, and you want to divide them into teams of $k$ people with every team composed of people from only one of the original two sets. Then the maximum value of $k$ for which this is possible is $\gcd (m,n)$.
In fact, if it is possible to do this with a specific $k$, then $k \mid \gcd(m,n)$.
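A tiny sketch illustrating the claim (the values $m$ and $n$ are chosen for illustration):

```python
# k works iff k divides both m and n; the largest such k is gcd(m, n).
from math import gcd

def valid_team_sizes(m, n):
    return [k for k in range(1, min(m, n) + 1) if m % k == 0 and n % k == 0]

m, n = 24, 36
sizes = valid_team_sizes(m, n)
print(sizes)                         # [1, 2, 3, 4, 6, 12]
print(max(sizes) == gcd(m, n))       # True
```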
|
H: $\ker A \cap\mathrm{Im}\, A = \{0\} \Rightarrow \ker A + \mathrm{Im}\, A = R^n$.
Suppose we have a linear operator $A$ on $R^n$ with $n>1$. I'm trying to prove that if $\ker A \cap \mathrm{Im}(A) = \{0\}$, then $\ker A + \mathrm{Im}(A) = R^n$.
Generally, $$\dim (\ker A + \mathrm{Im}(A)) = \dim \ker A + \dim \mathrm{Im}(A) - \dim (\ker A \cap \mathrm{Im}(A))$$
Rank-nullity theorem simplifies the previous equality: $$\dim (\ker A + \mathrm{Im}(A)) = n - \dim (\ker A \cap \mathrm{Im}(A))$$
If $\ker A \cap \mathrm{Im}(A) = \{0\}$, then $\dim (\ker A \cap \mathrm{Im}(A)) = 0$. Thus, $\dim (\ker A + \mathrm{Im}(A)) = n = \dim R^n$
I'm not sure whether I am allowed to deduce that $\ker A + \mathrm{Im}(A) = R^n$? If not, then how to (dis)prove?
AI: Yes, you are allowed to, since
$$\left(U\le V\;\wedge\;\dim V<\infty\right)\implies\left(U=V\iff \dim U=\dim V\right)$$
For a very quick proof of the above just choose any basis of $\,U\,$ and complete it to a basis of $\,V\,$ ...
|
H: Continuous Functions - Topology
I'd like to prove the following.
A function $f:X \to Y$ is continuous if whenever $A$ is closed in $Y$,
$f^{-1}(A)$ is closed in $X$.
Proof. By definition, a function is continuous if the inverse image of every open set is open. Suppose that $A\subseteq Y$ is closed. Then, $Y-A$ is open, so $f^{-1}(Y-A)$ is open.
$f^{-1}(Y-A) = X - f^{-1}(A)$ is open. So $f^{-1}(A)$ is closed.
Is this correct?
AI: Hints:
You cannot assume what you want to prove: suppose that whenever $\,A\subset Y\,$ is closed then also $\,f^{-1}(A)\subset X\,$ is closed.
Let $\,U\subset Y\,$ be open $\;\implies Y\setminus U\;$ is closed, so by assumption $\,f^{-1}\left(Y\setminus U\right)\;$ is closed in $\;X\;$ and thus $\;X\setminus f^{-1}(Y\setminus U)\;$ is open.
But $\,X\setminus f^{-1}(Y\setminus U)\subset f^{-1}(U)\;$ since:
$$z\in X\setminus f^{-1}(Y\setminus U)\implies z\notin f^{-1}(Y\setminus U)\implies f(z)\notin Y\setminus U\implies$$
$$f(z)\in U$$
Deduce now that in fact $\,f^{-1}(U)\;$ is open and thus $\,f\,$ fulfills the usual definition of continuity, i.e. $\,f\,$ is continuous.
|
H: Evaluating $\int^1_0\sqrt{1 - x^2}\,dx$
How do I evaluate $\int^1_0\sqrt{1 - x^2}\,dx$?
I know that this is just $\frac14$ of the unit circle, that is $\frac\pi4$, but I want to solve it algebraically.
AI: Hint: Use the trig substitution $x=\sin\theta$ so that $dx=\cos\theta d\theta$. This leads to:
$$
\int_0^1 \sqrt{1-x^2}dx = \int_0^{\pi/2} \sqrt{1-\sin^2\theta} \cos\theta d\theta= \int_0^{\pi/2} \sqrt{\cos^2\theta} \cos\theta d\theta = \int_0^{\pi/2} \cos^2\theta d\theta
$$
Then make use of the trig identity:
$$
\cos^2\theta = \dfrac{1}{2} + \dfrac{1}{2}\cos{2\theta}
$$
|
H: Factor $x^6 +5x^3 +8$
I wanted to know how I can factor $x^6 +5x^3 +8$; I have no idea. Is there any method to tell whether a polynomial can be factored? Just some advice will do.
Help appreciated.
Thanks.
AI: Let $y = x^3$ to obtain $y^2 + 5y + 8 = 0$. The roots are $y = \frac{-5 \pm \sqrt{-7}}{2} = \frac{-5}{2} \pm \frac{\sqrt{7}}{2}i$. These have modulus $r = \sqrt{25/4 + 7/4} = 2\sqrt{2}$. Now solve $5/2 = 2\sqrt{2} \cos \theta$ to find the angle $\theta = \cos^{-1}\big(5/(4\sqrt{2})\big)$. Our two roots correspond to $re^{i(\pi-\theta)}$ and $re^{i(\pi+\theta)}$.
Now we have $x^3 = re^{i(\pi-\theta)}$ and $x^3 = re^{i(\pi+\theta)}$ to contend with. For the former, one root is $x_1 = \sqrt[3]{r}e^{i(\pi-\theta)/3}$, so the other two are $x_2 = \sqrt[3]{r}\exp\left(i\left(\frac{\pi-\theta}{3} + \frac{2\pi}{3}\right)\right)$ and $x_3 = \sqrt[3]{r}\exp\left(i\left(\frac{\pi-\theta}{3} + \frac{4\pi}{3}\right)\right)$.
Similarly, the roots of the other equation are $x_4 = \sqrt[3]{r}e^{i(\pi+\theta)/3}$, $x_5 = \sqrt[3]{r}\exp\left(i\left(\frac{\pi+\theta}{3} + \frac{2\pi}{3}\right)\right)$, and $x_6 = \sqrt[3]{r}\exp\left(i\left(\frac{\pi+\theta}{3} + \frac{4\pi}{3}\right)\right)$.
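As a numerical check (an illustrative snippet), the six roots can be computed directly and their cubes compared with $\frac{-5\pm\sqrt{-7}}{2}$:

```python
# The six roots of x^6 + 5x^3 + 8, computed numerically.
import numpy as np

roots = np.roots([1, 0, 0, 5, 0, 0, 8])
print(np.round(roots, 4))
print(np.round(roots ** 3, 4))   # each is -2.5 + 1.3229j or its conjugate
```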
|
H: Two questions on topology and continous functions
I have two questions:
1.) I have been thinking for a while about the fact that, in general, the union of closed sets will not be closed, but I could not find a counterexample; does anybody have one available?
2.) The other one is that I thought one could possibly say that a function $f$ is continuous iff we have $f(\overline{M})=\overline{f(M)}$ (in the second part this should mean the closure of $f(M)$). Is this true?
AI: (1) Take the closed sets
$$\left\{\;C_n:=\left[0\,,\,1-\frac1n\right]\;\right\}_{n\in\Bbb N}\implies \bigcup_{n\in\Bbb N}C_n=[0,1)$$
|
H: Analytical Solution for Elastic Bar under applied end velocity
Say, a thin long rod is occupying the space $[0,L]$. It's isotropic, linear elastic, homogeneous. The partial differential equations for stress $\sigma(x,t)$ and displacement $u(x,t)$ are as follows ($E$ denotes the Young's Modulus and $\rho$ the density):
$$\frac{\partial^2 u}{\partial t^2} = \frac{1}{\rho} \frac{\partial\sigma}{\partial x}$$
$$ \frac{\partial \sigma}{\partial t} = E \frac{\partial^2u}{\partial x\partial t}$$
The bar is initially in rest and free of any stresses. It is fixed to the "left" (at $x = 0$) and a constant velocity is applied to the right. Formally:
$$ \sigma(x,0) = 0$$
$$ u(x,0) = 0$$
$$ u(0,t) = 0$$
$$ \frac{\partial u\left(L,t\right)}{\partial t} = v_{bc}$$
Now my question is simply: Is there an analytical solution for this system? I already tried to find one in Maple, but to no avail.
AI: The equations can be combined for a solution, if initial conditions are known. Strain is $\epsilon = \frac{\partial u}{\partial x}$, stress $\sigma = E \epsilon$ and the balance of forces yields
$$ \frac{\partial \sigma}{\partial x} = E \frac{\partial \epsilon}{\partial x} = E \frac{\partial^2 u}{\partial x^2} = \rho \frac{\partial^2 u}{\partial t^2} $$
and with $c^2=\frac{E}{\rho}$ the wave equation is
$$ \frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2} \frac{\partial^2 u}{\partial t^2} $$
with initial conditions
$$ u(x,0) = 0 $$
$$ \dot{u}(x,0) = \frac{x}{L} v_{bc} $$ (for smooth velocity function)
The general solution has the form:
$$ u(x,t) = A_0 + B_0 t + \sum_{n=1}^{\infty} \cos\left(\frac{n \pi x}{L}\right) \left(A_n \sin \left( \frac{n \pi c}{L} t \right) + B_n \cos \left( \frac{n \pi c}{L} t \right) \right) $$
$$ \dot{u}(x,t) = \frac{\partial}{\partial t} u(x,t) $$
with the coefficients $A_n$ and $B_n$ derived from the initial conditions ($u(x,0)$ and $\dot{u}(x,0)$)
$$ A_n = \frac{2}{n \pi c} \int_0^L \cos \left( \frac{n \pi x}{L} \right) \dot{u}(x,0)\,{\rm d} x $$
$$ B_n = \frac{2}{L} \int_0^L \cos \left( \frac{n \pi x}{L} \right) u(x,0)\,{\rm d} x $$
For your case
$$ A_0 = 0 $$
$$ B_0 = \frac{1}{2} v_{bc} $$
$$ B_n = 0 $$
$$ A_n = \frac{2L \left(\cos(\pi n)+\pi n \sin(\pi n)-1\right)}{c \pi^3 n^3} v_{bc} $$
with example output:
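For completeness, here is a sketch that evaluates the truncated series with the coefficients above; it merely reproduces the stated formula (it does not re-derive or validate the model), and $L$, $c$, $v_{bc}$ are placeholder values:

```python
# Partial sums of the series solution above; note sin(pi*n) = 0 and
# cos(pi*n) = (-1)^n, so A_n = 2L((-1)^n - 1) v_bc / (c pi^3 n^3).
import numpy as np

L, c, v_bc, N = 1.0, 1.0, 1.0, 200

def u(x, t):
    total = 0.5 * v_bc * t                      # A_0 + B_0*t with A_0 = 0
    for n in range(1, N + 1):
        A_n = 2 * L * ((-1) ** n - 1) / (c * np.pi ** 3 * n ** 3) * v_bc
        total += A_n * np.cos(n * np.pi * x / L) * np.sin(n * np.pi * c * t / L)
    return total

for x in (0.0, 0.5, 1.0):
    print(x, round(float(u(x, 0.7)), 4))
```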
|
H: Express $\cos 6\theta $ in terms of $\cos \theta$
I think I'm supposed to use the Chebyshev polynomials, as in $$ \cos n \theta = T_n(x) = \cos(n \arccos x)$$
But I have no idea what to do now.
AI: Since $$\cos 2a =2\cos ^{2}a -1, \qquad\sin 2a
=2\sin a\cos a,$$ $$\cos (a+b)=\cos a\cos b-\sin a\sin b,$$
and
$$
\begin{eqnarray*}
\cos 3\theta &=&\cos (2\theta +\theta ) \\
&=&\cos 2\theta \cos \theta -\sin 2\theta \sin \theta \\
&=&( 2\cos ^{2}\theta -1) \cos \theta -2\sin ^{2}\theta \cos
\theta \\
&=&( 2\cos ^{2}\theta -1) \cos \theta -2( 1-\cos ^{2}\theta
) \cos \theta \\
&=&4\cos ^{3}\theta -3\cos \theta ,
\end{eqnarray*}
$$
we have
$$
\begin{eqnarray*}
\cos 6\theta &=&\cos \left( 2\times 3\theta \right) \\
&=&2\cos ^{2}3\theta -1 \\
&=&2( 4\cos ^{3}\theta -3\cos \theta ) ^{2}-1 \\
&=&32\cos ^{6}\theta -48\cos ^{4}\theta +18\cos ^{2}\theta -1.
\end{eqnarray*}
$$
ADDED: From the definition of the Chebyshev polynomials
$$
\begin{equation*}
T_{n}(x)=\cos (n\arccos x)\Leftrightarrow T_{n}(\cos \theta )=\cos n\theta
,\quad \theta =\arccos x,
\end{equation*}
$$
we get
$$
\begin{eqnarray*}
T_{1}(x) &=&\cos (\arccos x)=x \\
T_{2}(x) &=&\cos (2\arccos x)=2x^{2}-1.
\end{eqnarray*}
$$
Since they satisfy the recurrence
$$
\begin{equation*}
T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x),
\end{equation*}
$$
we have
$$
\begin{eqnarray*}
T_{3}(x) &=&2xT_{2}(x)-T_{1}(x) \\
&=&2x( 2x^{2}-1) -x \\
&=&4x^{3}-3x \\
&& \\
T_{4}(x) &=&2xT_{3}(x)-T_{2}(x) \\
&=&2x( 4x^{3}-3x) -( 2x^{2}-1) \\
&=&8x^{4}-8x^{2}+1 \\
&& \\
T_{5}(x) &=&2xT_{4}(x)-T_{3}(x) \\
&=&2x( 8x^{4}-8x^{2}+1) -( 4x^{3}-3x) \\
&=&16x^{5}-20x^{3}+5x \\
&& \\
T_{6}(x) &=&2xT_{5}(x)-T_{4}(x) \\
&=&2x( 16x^{5}-20x^{3}+5x) -( 8x^{4}-8x^{2}+1) \\
&=&32x^{6}-48x^{4}+18x^{2}-1.
\end{eqnarray*}
$$
Therefore
$$
\begin{eqnarray*}
\cos 6\theta &=&T_{6}(\cos \theta ) \\
&=&32\left( \cos \theta \right) ^{6}-48\left( \cos \theta \right)
^{4}+18\left( \cos \theta \right) ^{2}-1 \\
&=&32\cos ^{6}\theta -48\cos ^{4}\theta +18\cos ^{2}\theta -1,
\end{eqnarray*}
$$
as above.
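A quick symbolic verification of the final identity (an illustrative sympy snippet, assuming sympy is available):

```python
# Verify cos(6t) = 32 cos^6 t - 48 cos^4 t + 18 cos^2 t - 1 symbolically.
import sympy as sp

t = sp.symbols('t')
c = sp.cos(t)
poly = 32*c**6 - 48*c**4 + 18*c**2 - 1
print(sp.simplify(sp.expand_trig(sp.cos(6*t)) - poly))  # prints 0
```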
|
H: Boundedness and finite limit of function
Suppose $f(x)$ is continuous on $[1,+\infty)$, differentiable on $(1,+\infty)$. If $f(x)$ is bounded on $[1,+\infty)$ and has finite $\lim_{x\rightarrow \infty} f'(x)$, then it has finite $\lim_{x\rightarrow \infty} f(x)$
I know it's false, but don't see why: if the function is bounded, then it either has a finite limit or doesn't have a limit at all, i.e. it oscillates. Having a derivative whose limit is finite means that as we go to infinity our function turns into a monotonic one. A bounded monotone function should have a finite limit.
Correct?
AI: $f(x)=\sin(\sqrt{x})$ has $f'(x)=\cos(\sqrt{x})/(2\sqrt{x})\to 0$.
|
H: Question about projection
If $B^T AB$ is not a projection, then either $B$ isn't orthogonal, or $A$ isn't a projection.
I understand that orthogonal $B$ and projection $A$ allow the following transformation: $B^T ABB^T AB = B^T AAB = B^T AB$. But I need to prove the negation. How do I deal with it?
AI: Hint: Prove the contrapositive.
Edit: Your initial statement is: $$B^TAB\neq (B^TAB)^2\implies (BB^T\neq I\lor A^2\neq A.)$$ And you should know that $(BB^T\neq I\lor A^2\neq A)\iff \neg(BB^T=I\land A^2=A)$.
|
H: A different way to write $(B\cap C)\cup(D \cap E)$
Let $A=(B\cap C) \cup (D\cap E)$ be a given set. I am looking for a different way to write this. I guess it is somehow possible to "reorder" this by using De Morgan's relations. Unfortunately, I am not successful. Does somebody see how one can write this differently?
AI: As pointed out by Asaf Karagila, there is no option to rewrite your expression in a simpler or more elegant way.
This can be demonstrated graphically:
The following is a Venn diagram for your case:
And here is the Karnaugh-Veitch map:
The two intersecting minterm blocks cannot be replaced by fewer or simpler minterm blocks.
|
H: Calculating the probability of an intersection
I have the following Venn Diagram,
Let:
$\color{red}A$ be the red circle
$\color{blue}B$ be the blue circle
$\color{green}C$ be the green circle
I know that $\dfrac{1}{6} = P(\text{(all three)}=x|\text{at least two}) = \dfrac{P(\text{at least two} \cap \text{all three})}{P(\text{at least two})}$ but all I know is $P(\text{at least two})$. How do I find the intersection of all three and at least two in order to find $x$?
AI: By your diagram, at least 2 is $0.4+0.0+x+(0.2-x)=0.6$. Also note that the intersection of "at least 2" with "all three" is the same as simply "all three", denoted $x$ in your diagram. So you know that $1/6=x/0.6$ from which you get $x=1/10=0.1.$
NOTE: This all assumes the numbers in the venn diagram are probabilities directly. However they add to more than 1, and this all might need adjustment if the OP says the numbers are just "raw numbers" in the regions, not denoting probabilities. (I await confirmation by OP on this issue.)
As Did suggests in his comment, if that outer number were a typo and should have been $0.05$, then things add to $1$ properly and no adjustment is needed.
|
H: Inequality between two sequences preserved in the limit?
Let $(a_n)_{n\in \mathbb{N}}$ and $(b_n)_{n\in \mathbb{N}}$ be two real sequences that satisfy $a_n\geq b_n, \forall n \in \mathbb{N}$ and converge to some $a,b$, respectively.
Is it always true that $a \geq b$?
AI: Yes. If you assume that $a < b$, you can take $\varepsilon = (b-a)/2$ and use the definition of the limit of a sequence to come up with a contradiction, i.e. find an $n$ such that $a_n < b_n$.
|
H: Subsets of $\mathbb{R}^2$ of which no $2$ are homeomorphic.
I'm reviewing general topology and I'm having trouble with this problem:
Let $Z_0 := \{ \frac{1}{i} \mid i = 1,2, \ldots\}$, $Z_1 := Z_0 \cup \{0\}$, $I_0 := (0,1)$, $I_1 := [0,1]$. Prove that no two of the following subspaces of $\mathbb{R}^2$: $Z_0 \times I_0, \quad Z_1 \times I_0, \quad Z_0 \times I_1, \quad Z_1 \times I_1$ are homeomorphic.
I don't really know how to approach this.
I believe there must be a simple proof, but all of these subsets are connected, all are path-connected and all have the property that if you remove any one of their points they stay path-connected, which is pretty much all the approaches I remember from my classes.
Thank you for your suggestions.
AI: $Z_1\times I_1$ is the only compact space in your list.
$Z_0\times I_1$ and $Z_1\times I_1$ have compact connected components, but only the first one is locally connected.
The remaining two sets do not have compact connected components and are distinguished by local connectivity as well.
|
H: "such that" logical symbol
So, in the definition of what is a square root,
$\sqrt{x}$ are all numbers $y$ such that $y\times y=x$.
are there any logical mathematical symbols so that the above definition can be written using logical operators only, and no natural language?
Where can I get some introductory or reference material on all such logical symbols?
update:
I noticed, some time after asking the question, that the definition of square root I am giving is wrong. The square root of $x$ is defined to be the non-negative number $y$ that satisfies $y\times y=x$. But the question was about notation, not square roots, so I am leaving it as it stands, since some answers use the supplied (erroneous) definition.
AI: You could write this in a few different ways... I'm not sure what you're asking, so let me show you a couple.
For one, you could define the condition $y\in\text{Sqrt}(x)$, rather than the set itself:
$$
y\in\text{Sqrt}(x)\Leftrightarrow y^2=x
$$
The following two are commonly used in set definitions:
$$
\text{Sqrt}(x)=\{y\mid y^2=x\}\qquad \text{or}\qquad \text{Sqrt}(x)=\{y:\ y^2=x\}
$$
I also see people use (and have used myself) "s.t." as an abbreviation for such that in formulas.
|
H: If $A = \tan6^{\circ} \tan42^{\circ},~~B = \cot 66^{\circ} \cot72^{\circ}$ find the relation between $A$ and $B$
My trigonometric problem is:
If $A = \tan6^{\circ} \tan42^{\circ}$ and $B = \cot66^{\circ} \cot72^{\circ}$, find the relation between $A$ and $B$.
Working :
$$B = \cot 66^{\circ} \cot72^{\circ} = 1- \frac{\tan24^{\circ}+\tan18^{\circ}}{\tan42^{\circ}}$$
$$A= \tan6^{\circ} \tan42^{\circ} = 1- \frac{\tan6^{\circ} +\tan42^{\circ}}{\tan48^{\circ}}$$
but it seems this is the wrong way of doing this...please suggest. Thanks!
AI: Using
$2\cos A\cos B=\cos(A-B)+\cos(A+B)$ and $2\sin A\sin B=\cos(A-B)-\cos(A+B),$
$$A=\frac{\sin 6^\circ\cdot \sin 42^\circ}{\cos 6^\circ\cdot \cos 42^\circ}=\frac{\cos36^\circ-\cos48^\circ}{\cos36^\circ+\cos48^\circ}$$
Applying Componendo and dividendo,
$$\frac{1+A}{1-A}=\frac{\cos36^\circ}{\cos48^\circ}$$
Similarly, Using
$2\sin A\cos B=\sin(A+B)+\sin(A-B)$ and $2\cos A\sin B=\sin(A+B)-\sin(A-B),$
$$B=\frac{\cos66^\circ \cos72^\circ}{\sin66^\circ \sin72^\circ}=\frac{\cos66^\circ \sin18^\circ}{\sin66^\circ \cos18^\circ}=\frac{\sin84^\circ-\sin48^\circ}{\sin84^\circ+\sin48^\circ}$$
Applying Componendo and dividendo,
$$\frac{1+B}{1-B}=\frac{\sin84^\circ}{\sin48^\circ}$$
$$\implies \frac{1+A}{1-A}\cdot\frac{1+B}{1-B}=\frac{\sin84^\circ\cdot \cos36^\circ}{\sin48^\circ\cdot \cos48^\circ}=\frac{2\sin84^\circ\cdot \cos36^\circ}{\sin(2\cdot48)^\circ}=2\cos36^\circ$$
as $\sin96^\circ=\sin(180-96)^\circ=\sin84^\circ$
Now $\cos36^\circ$ can be found here
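A quick numerical check of the final relation (an illustrative snippet; angles in degrees):

```python
# Check (1+A)/(1-A) * (1+B)/(1-B) == 2 cos(36 deg) numerically,
# with A = tan6*tan42 and B = cot66*cot72.
from math import tan, cos, radians as r

A = tan(r(6)) * tan(r(42))
B = 1 / (tan(r(66)) * tan(r(72)))
print((1 + A) / (1 - A) * (1 + B) / (1 - B))   # ~ 1.618034
print(2 * cos(r(36)))                          # ~ 1.618034
```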
|
H: What is the splitting field of $x^3 - \pi$?
What is the splitting field of $x^3 - \pi$? Is it $\mathbb R(\sqrt[3] \pi, \xi_3)$
or $\mathbb Q(\sqrt[3] \pi, \xi_3)$? (where $\xi_3$ denotes the third root of unity)
It is a polynomial over $\mathbb R[x]$, so I guess it must be $\mathbb R(\sqrt[3] \pi, \xi_3)$, but I never saw such an extension.
AI: Since $\sqrt[3]\pi$ is already an element of $\mathbb R$ and $\xi_3=-\frac12\pm i\frac{\sqrt{3}}2$, the splitting field is simply $\mathbb C$. In fact, $\mathbb R$ and $\mathbb C$ are the only candidates for algebraic extensions of $\mathbb R$.
|
H: Intersection of non-independent events
Let $A_1,\dots, A_n$ be not necessarily independent events. What can be said about the relation between $\mathbb{P}(\cap A_i)$ and $\prod_i^n \mathbb{P}(A_i)$?
AI: Not very much, in general. Consider a coin tossed just once. Let $A_1$ be the event that it's heads, and $A_2$ be the event that it's tails. Then $\mathbb{P}(A_1 \cap A_2) = 0$. On the other hand, if both $A_1$ and $A_2$ are the event that the coin is heads, then $\mathbb{P}(A_1 \cap A_2) = \frac{1}{2}$. In both cases, $ \mathbb{P}(A_1) \mathbb{P}(A_2) = \frac{1}{4}$.
In the first case you had
$$\mathbb{P}(A_1 \cap A_2) = 0 \lt \mathbb{P}(A_1) \mathbb{P}(A_2) = \frac{1}{4}$$
and in the second you had
$$\mathbb{P}(A_1 \cap A_2) = \frac{1}{2} \gt \mathbb{P}(A_1) \mathbb{P}(A_2) = \frac{1}{4}$$
|
H: Is there a non-saturated measure?
Let $(X, \mathcal {M}, \mu) $be a measure space. A subset $E$ of $X$ is locally measurable, if for each $B \in \mathcal M$ with $\mu (B) < \infty$, we have $E \cap B \in \mathcal M$. The measure $\mu$ is saturated if every locally measurable set is measurable.
Is there a non-saturated measure, that is, one for which some locally measurable set isn't measurable?
All I know is that a non-saturated measure can't be a $\sigma$-finite measure.
AI: Yes, there are measures that are not saturated.
On $X=[0,1]$, let $\cal M$ consist of the subsets $A$ of $X$ such that $A$ is countable or $A^c$ is countable. For $A\in\cal M$, define the measure $\mu$ by setting $\mu(A)=0$ if $A$ is countable and $\mu(A)=\infty$ if $A^c$ is countable. Consider a set $E$ such that both $E$ and $E^c$ are uncountable: since every $B\in\cal M$ with $\mu(B)<\infty$ is countable, $E\cap B$ is countable and hence in $\cal M$, so $E$ is locally measurable; yet $E\notin\cal M$.
|
H: How to prove this simple inequality?
Please help me to prove this inequality.
Suppose $X$ and $Y$ are independent and $EX=EY=0$, then we must have $E(|X|) \leq E(|X+Y|)$.
Thanks.
AI: The condition $EX=0$ is not necessary.
Since the absolute value function is convex, by Jensen's inequality for conditional expectations,
$$E \big(|X+Y|\big| X\big)\ge |E \big(X+Y\big| X\big)|=|X+E \big(Y\big| X\big)|.\tag{1}$$
Since $X$ and $Y$ are independent,
$$E\big(Y\big| X\big)=EY=0.\tag{2}$$
Combining $(1)$ and $(2)$, it follows that
$$E(|X+Y|)=E\left(E \big(|X+Y|\big| X\big)\right)\ge E\left(|X+E \big(Y\big| X\big)|\right)=E(|X|).$$
|
H: Prove $\int_0^{\infty}\! \frac{\mathbb{d}x}{1+x^n}=\frac{\pi}{n \sin\frac{\pi}{n}}$ using real analysis techniques only
I have found a proof using complex analysis techniques (contour integral, residue theorem, etc.) that shows $$\int_0^{\infty}\! \frac{\mathbb{d}x}{1+x^n}=\frac{\pi}{n \sin\frac{\pi}{n}}$$ for $n\in \mathbb{N}^+\setminus\{1\}$
I wonder if it is possible by using only real analysis to demonstrate this "innocent" result?
Edit
A more general result showing that
$$\int\limits_{0}^{\infty} \frac{x^{a-1}}{1+x^{b}} \ \text{dx} = \frac{\pi}{b \sin(\pi{a}/b)}, \qquad 0 < a <b$$
can be found in another math.SE post
AI: $$ \int_{0}^{\infty}\frac{1}{1+x^n}\ dx =\int_{0}^{\infty}\int_{0}^{\infty}e^{-(1+x^{n})t}\ dt\ dx $$
$$ =\int_{0}^{\infty}\int_{0}^{\infty}e^{-t}e^{-tx^{n}}\ dx\ dt =\frac{1}{n}\int_{0}^{\infty}\int_{0}^{\infty}e^{-t}e^{-u}\Big(\frac{u}{t}\Big)^{\frac{1}{n}-1}\frac{1}{t}\ du\ dt $$
$$ =\frac{1}{n}\int_{0}^{\infty}t^{-\frac{1}{n}}e^{-t}\int_{0}^{\infty}u^{\frac{1}{n}-1}e^{-u}\ du\ dt =\frac{1}{n}\int_{0}^{\infty}t^{-\frac{1}{n}}e^{-t}\ \Gamma\Big(\frac{1}{n}\Big)\ dt $$
$$ =\frac{1}{n}\ \Gamma\Big( 1-\frac{1}{n}\Big)\Gamma\Big(\frac{1}{n}\Big) =\frac{\pi}{n}\csc\Big(\frac{\pi}{n}\Big) $$
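The identity can also be checked numerically (an illustrative scipy snippet, assuming scipy is available):

```python
# Compare numerical quadrature of 1/(1 + x^n) on (0, inf) with
# pi / (n sin(pi/n)) for a few n.
from math import pi, sin
from scipy.integrate import quad

for n in range(2, 7):
    value, _ = quad(lambda x, n=n: 1.0 / (1.0 + x**n), 0, float('inf'))
    print(n, round(value, 8), round(pi / (n * sin(pi / n)), 8))
```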
|
H: What is the expected number of ice creams that the saloon can still sell until the first customer who wants chocolate has to be disappointed?
An ice-cream saloon sells ice creams with one, two, or three scoops. Customers can choose from ten flavours, and may also choose more than one scoop of the same flavour. The order of the scoops on the cone is irrelevant, but the number of scoops of each flavour does matter.
The tray with chocolate ice is empty. Assume that each of the 285 different types of ice cream is sold with equal probability. What is the expected number of ice creams that the saloon can still sell until the first customer who wants chocolate has to be disappointed?
My attempt
The number of ice creams with chocolate is : $1 \text{ (the case with one scoop) } + 11 \text{ (the case with two scoops) } + (1+9\cdot 9 + 9) \text{ (the case with three scoops) } = 103$.
I think that $Y$, the number of ice creams that can be sold before the first customer wants chocolate, satisfies $Y \sim \mathrm{Geo}(\frac{103}{285})$. I highly doubt, however, whether I calculated the number of possible ice creams with chocolate correctly. Could anyone please check/comment on my approach and/or help me obtain the correct solution?
AI: You are right, the number of types with at least one scoop of chocolate is not counted correctly. There are not very many categories, so we can do a cases analysis.
One scoop: ($1$)
Two scoops: double chocolate ($1$) or chocolate with something else ($9$)
Three scoops: triple chocolate ($1$), double chocolate with something else ($9$), single chocolate and two different flavours ($36$), single chocolate and a doubled flavour ($9$)
Added: In the usual introductory probability courses, a geometric random variable measures the number of trials up to and including the first success. In that sense, the number $Y$ of ice-creams sold before the first request for chocolate is not geometric. But $Y+1$ is. So if you are going to use a canned formula for the expectation of a geometric, you will need to subtract $1$ to get the answer to the ice-cream problem.
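A direct enumeration (an illustrative snippet) confirms the count of chocolate-containing types among the $285$:

```python
# Enumerate all multisets of 1-3 scoops from 10 flavours and count
# how many contain flavour 0 ("chocolate").
from itertools import combinations_with_replacement

flavours = range(10)
types = [m for k in (1, 2, 3)
         for m in combinations_with_replacement(flavours, k)]
with_choc = [m for m in types if 0 in m]
print(len(types), len(with_choc))   # 285 66
```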
|
H: Describing a set using linear inequalities
I am having a hard time understanding the answer to the following exercise (which was taken from "Linear Optimization and Extensions: Problems and Solutions" by Padberg and Alevras).
My problem lies in the last sentence of the answer.
I know that:
$$\left| {x_j^ + - x_j^ - } \right| \le 1 \Leftrightarrow \left\{ {\begin{array}{*{20}{c}}{ - 1 \le x_j^ + - x_j^ - \le 1}&{,{\rm{if}}\;x_j^ + \ge x_j^ - }\\{ - 1 \le - x_j^ + + x_j^ - \le 1}&{,{\rm{if}}\;x_j^ + < x_j^ - }\end{array}} \right.$$
but I can't arrive at the last set and I can't even see why it's equivalent to the first one. Does anyone have a clue?
AI: They are projecting $Y:=(x_i^{+},x_i^{-})$ to $X:=x_i^{+}-x_i^{-}$. First you need to see that if $Y$ satisfies the given inequalities then $X$ is going to satisfy $|X|<1$. On the other hand, you need to see that any number $|a|<1$ can be written as $a=a^{+}-a^{-}$ such that $a^{+}+a^{-}<1$, $a^{+}\geq0$, and $a^{-}\geq0$.
Notice they are only claiming one is the projection of the other.
|
H: Complex Analysis Advice
Could anyone advise on this problem?
Let $g(z$) be an analytic function in punctured ball $B(z_1, R) - \{z_1\}$ and let $N$ be a fixed non-negative integer such that $\lim_{z\rightarrow\ z_1}(z- z_1) ^{m}g(z)=0$ $\forall m > N$, and $\lim_{z\rightarrow\ z_1}(z- z_1)^{n}g(z)= \infty$ $\forall $n < N. Determine the type of singularity of $g(z)$ at $z=z_1$.
Thank you.
AI: Here's another version. Define $G(z) = (z - z_1)^{N+1} g(z)$ when $z \ne z_1$ and $G(z) = 0$ when $z = z_1$. Then $G(z)$ is analytic on the punctured disk and continuous on the disk, so is analytic on the whole disk using Morera's Theorem. Since $G(z_1) = 0$, you see that $H(z) = \displaystyle \frac{G(z)}{z-z_1} = (z - z_1)^{N} g(z)$ is also analytic on the whole disk.
In addition, $H(z_1) \ne 0$, because otherwise $\lim \limits _{z \to z_1} (z-z_1)^{N-1}g(z) = \lim \limits _{z \to z_1} \displaystyle \frac{H(z) - 0}{z - z_1}$ would be finite.
Then you can write $g(z) = \displaystyle \frac{H(z)}{(z-z_1)^{N}}$, where $H(z)$ is analytic, $H(z_1) \ne 0$, which shows $z = z_1$ is a pole of order $N$.
|
H: Properties of amenable groups
Let $G$ be an amenable countable group. Why does every subgroup and homomorphic image of $G$ is amenable? Further more, if $N$ is a normal subgroup of $G$, and both $N$ and $G/N$ are amenable, why does $G$ must be amenable?
AI: V. Runde, Lectures on Amenability. Springer, 2002 (Lecture notes in mathematics ; 1774). Section 2.3 "Hereditary properties"
|
H: Bott periodicity and homotopy groups of spheres
I studied the Bott periodicity theorem for the unitary group $U(n)$ and the orthogonal group $O(n)$ using Milnor's book "Morse Theory". Is there a method, using this theorem, to calculate $\pi_{k}(S^{n})$? (For example $U(1) \simeq S^1$, so $\pi_1(S^1)\simeq \mathbb{Z}$.)
AI: In general, no. However there is a strong connection between Bott Periodicity and the stable homotopy groups of spheres. It turns out that
$\pi_{n+k}(S^{n})$ is independent of $n$ for all sufficiently large $n$ (specifically $n \geq k+2$). We call the groups
$\pi_{k}^{S} = \lim \pi_{n+k}(S^{n})$
the stable homotopy groups of spheres. There is a homomorphism, called the stable $J$-homomorphism
$J: \pi_{k}(SO) \rightarrow \pi_{k}^{S}$.
The Adams conjecture implies that the image of $J$ is a direct summand of $\pi_{k}^{S}$, with complement the kernel of another computable homomorphism. By Bott periodicity we know the homotopy groups $\pi_{k}(SO)$ and the definition of $J$, so the Bott Periodicity theorem is an important step in computations of stable homotopy groups of spheres (a task which is by no means complete).
|
H: T/F: $\vdash _{NDFOL}\forall x(B \to A) \to(\exists x B \to A)$
I need to decide whether the above holds or not.
I know that $\vdash _{HFOL}\forall x(B \to A) \to(\exists x B \to A)$, and if $T=\emptyset$, $T\vdash _{HFOL}\varphi$ then $T\vdash _{NDFOL} \varphi$.
Is it enough to conclude that $\vdash _{NDFOL}\forall x(B \to A) \to(\exists x B \to A)$?
Thanks!
AI: If $H$ indicates a classical Hilbert-style proof system and $ND$ indicates a natural deduction system, then since both systems are sound and complete for $FOL$ it is trivial that if $\vdash_H \varphi$ then $\vdash_{ND} \varphi$, assuming you can appeal to the known meta-results.
But a direct $ND$ proof is very easy anyway, assuming as we must that $x$ doesn't occur free in $A$. For a Fitch-style proof (easily re-arranged into Gentzen-style)
$\quad|\quad\forall x(B(x) \to A)$
$\quad|\quad|\quad \exists xB(x)$
$\quad|\quad|\quad | \quad B(a)$
$\quad|\quad|\quad | \quad (B(a) \to A)$
$\quad|\quad|\quad | \quad A$
$\quad|\quad|\quad A$
$\quad|\quad (\exists xB(x) \to A)$
$\quad \forall x(B(x) \to A) \to (\exists xB(x) \to A)$.
Exercise: annotate the proof, and explain at which step we rely on the fact that $x$ doesn't occur free in $A$.
|
H: Closedness of $L^\infty_+$ in $\sigma(L^\infty,L^1)$
Let $L^\infty_+$ be the set of all $f\in L^\infty$ which are non negative. Our measure can be assumed to be finite. My goal is to prove that $L^\infty_+$ is closed in the weak-star topology $\sigma(L^\infty,L^1)$. Hence I view $L^\infty$ as the dual of $L^1$. We know that every linear continuous function of $L^1$ can be written:
$$F(x)=\int xy d\mu$$
for $x\in L^1$ and a unique $y\in L^\infty$. So $L^\infty_+$ is the space of all $F$ with $y\ge 0\iff \int xy d\mu \ge 0 $ for all $0\le x\in L^1$. I've learned that the weak-star topology (on $L^\infty$) is generated by the sets
$$\Omega_{x,U}=\{y\in L^\infty: \int xy d\mu\subset U \}$$
for $U\subset \mathbb{R}$ closed. Somehow I have to choose $x,U$ such that $\Omega_{x,U}=L^\infty_+$. But maybe there is a simpler characterization of closed sets in $\sigma(L^\infty,L^1)$? Or how else could I show that $L^\infty_+$ is weak-star closed?
AI: One can write
$$L^\infty_+=\bigcap_{A\in\mathcal F}\left\{g\in L^{\infty},\int_Ag\geqslant 0\right\},$$
where $\mathcal F$ is the collection of sets of finite measure. Inclusion $\subset$ is obvious, and $\supset$ follows by the following argument. Assume the measure space is $\sigma$-finite (hence $X=\bigcup F_n$, where each $F_n$ is measurable and has finite measure, and the $F_n$ are nested); if $g<0$ on a set of positive measure, take the intersection of this set with an $F_n$ for $n$ large enough, and test against the characteristic function of that intersection.
And for each $A\in\mathcal F$, $\left\{g\in L^{\infty},\int_Ag\geqslant 0\right\}$ is weak-* closed, and so is an intersection of such sets.
|
H: How many students were there if a total of 870 photographs were exchanged
After the graduation exercises at school, the students exchanged photographs with each others. How many students were there if a total of $870$ photographs were exchanged ?
My attempt:
I used the combination formula $^nC_r$ with $n = 870$ and $r=2$.
But my answer seems to be wrong.
Where am I wrong?
AI: Let's start from the beginning.
We know $870$ photographs were shared. If $n$ is the number of students (which is what we want to solve for), then we know each student gives a photograph to every other student: each of the $n$ students gives a photo to $n - 1$ others. So the total number of photographs is $$\begin{align} n(n - 1) & = 870\tag{$\dagger$}\\ \\ n^2 - n - 870 & = 0 \\ \\ (n-30)(n+29) & = 0 \\ \\ \iff n = 30 \;& \text{or} \; n = -29\end{align}$$ We need a positive number of students here, so we take $n = 30$.
Therefore, there are $\;30\;$ graduating students here, each of whom gives a photograph to $n - 1 = 29$ other students: $30\times 29 = 870 $ photographs exchanged.
We get the same result by counting $\bf 2$ photographs exchanged for every pair of students. Again, $n$ is our unknown number of students and $\binom{n}{2}$ is the number of pairs of students: $$\begin{align} 2 \times \binom{n}{2} & = 870 \\ \\ 2\times \frac{n!}{2! \,(n - 2)!} & = 870 \\ \\ 2\times \frac{n\cdot (n-1)}{2} & = 870 \\ \\ n(n-1) & = 870...\tag{see $\dagger$}\end{align}$$
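A quick sanity check (a sketch; it assumes SymPy is available) that solves the quadratic and confirms the count:

```python
# Sanity check (sketch): solve n(n - 1) = 870 and confirm the photograph count.
from sympy import symbols, solve

n = symbols('n', integer=True)
print(solve(n*(n - 1) - 870, n))  # [-29, 30] -> take the positive root, n = 30
print(30 * 29)                    # 870, matching the stated total
```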
|
H: Integrate $\frac{x-1}{\sqrt{x}+1}$ using the power rule
How can I evaluate the following integral using the power rule?
$$\int\dfrac{x-1}{\sqrt{x}+1}\,\mathrm dx$$
Here is my solution which is apparently wrong:
$$\begin{align}
\int\frac{x-1}{\sqrt x+1}\,\mathrm dx &= \int (x-1)(x^{1/2}+1)^{-1}\,\mathrm dx \\
&= \frac23 x\sqrt x+\frac12x^2 - 2x^{1/2}-x+c
\end{align}$$
There should be a fairly easy way to integrate this.
AI: Note that
$$\frac{x-1}{\sqrt{x}+1}=\sqrt{x}-1.$$
This will be clearer if you recall that $\frac{y^2-1}{y+1}=\frac{(y-1)(y+1)}{y+1}=y-1$. Put $y=\sqrt{x}$.
Remark: Your calculation seems to have used the incorrect "rule" $(a+b)^{-1}=a^{-1}+b^{-1}$.
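As a quick check (a sketch, assuming SymPy is available), one can confirm both the simplification and the antiderivative it leads to:

```python
# Check (sketch): the integrand simplifies to sqrt(x) - 1, and the power rule
# then gives the antiderivative directly.
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.simplify((x - 1)/(sp.sqrt(x) + 1) - (sp.sqrt(x) - 1)))  # 0
print(sp.integrate(sp.sqrt(x) - 1, x))  # 2*x**(3/2)/3 - x  (plus a constant)
```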
|
H: Hyperplane - explanation of theorem needed
For any subset $W$ of $\mathbb{R^n}$ the following are equivalent:
W is a hyperplane in $\mathbb{R^n}$;
There is a non-zero $a$ in $\mathbb{R^n}$ such that $W = \{x ∈ \mathbb{R^n} : x\cdot a = 0\}$;
There are scalars $a_1, . . . , a_n$, not all zero, such that
$W = \{(x_1, . . . , x_n) ∈ \mathbb{R^n} : x_1a_1 + ··· + x_na_n = 0\}$
I understood the proof of the theorem, but I don't really understand the theorem itself. Why do we need the second and third points separately? The proof itself states that they are equivalent, so why do we bother?
Also, could somebody explain the hyperplane itself according to this theorem? I mean, the hyperplane is a subspace of dimension 1 lower than the space it is in. So why do we show that $W = \{x ∈ \mathbb{R^n} : x\cdot a = 0\}$?
I need just a small 'click' because I'm almost there! Thanks :)
AI: As with any logical deduction in maths, it is necessary to prove that two (maybe seemingly obvious to you) statements are equivalent if you want to be able to swap between them. Having several characterisations of the same property of an object can be useful, especially when one is trying to make future proofs, which rely on the equivalence, shorter and more readable.
For example, suppose I asked you to prove that if $n\geq 2$, then there are an infinite number of hyperplanes in $\mathbb{R}^n$. It might not be totally clear from the definition, because this just says that a hyperplane has codimension $1$ in $\mathbb{R}^n$. But now that you have an equivalent characterisation of a hyperplane (it is a subset $W$ of $\mathbb{R}^n$ such that $W=\{x\in\mathbb{R}^n:x\cdot a=0\}$ for some non-zero vector $a$) it should be immediately clear to you how to tackle this problem. As hyperplanes can be classified by vectors which are orthogonal to them, we can show there are an infinite number of them by instead showing that there are an infinite number of unit vectors which each correspond to a different hyperplane (a little more work is needed to show that two linearly independent vectors define different hyperplanes).
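To make the correspondence concrete, here is a small NumPy illustration (a sketch; the vector $a$ is an arbitrary choice of ours) recovering a basis of the hyperplane $\{x\in\mathbb{R}^3 : x\cdot a = 0\}$ as the null space of the row vector $a$:

```python
# Sketch: a basis of the hyperplane {x in R^3 : x·a = 0} for a = (1, 2, 3),
# computed as the null space of the 1x3 matrix [a] via the SVD.
import numpy as np

a = np.array([[1.0, 2.0, 3.0]])
_, _, Vt = np.linalg.svd(a)
basis = Vt[1:]      # the last two right-singular vectors span the null space
print(basis @ a.T)  # ~0: both basis vectors are orthogonal to a
```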
|
H: Fibonacci sequence - how to prove $a_n=\frac{1}{\sqrt{5}} ((\frac{1+\sqrt{5}}{2})^n-(\frac{1-\sqrt{5}}{2})^n)$ without induction
How to prove that
$$a_n=\frac{1}{\sqrt{5}} ((\frac{1+\sqrt{5}}{2})^n-(\frac{1-\sqrt{5}}{2})^n)$$
without using induction?
AI: You can use generating functions. Let $F_n$ be the Fibonacci sequence defined as $$F_0=1\\F_1=1\\F_{n}=F_{n-1}+F_{n-2}\;\; ;\; n\geq 2$$
Let $$F(x)=\sum_{n\geq 0}F_nx^n$$ be its generating function. Then $$(1 - x - {x^2})F(x) = \sum\limits_{n \geqslant 0} {{F_n}} {x^n} - \sum\limits_{n \geqslant 0} {{F_n}} {x^{n + 1}} - \sum\limits_{n \geqslant 0} {{F_n}} {x^{n + 2}}$$
$$(1-x-x^2)F(x)=\sum_{n\geq 0}F_nx^n-\sum_{n\geq 1}F_{n-1}x^{n}-\sum_{n\geq 2}F_{n-2}x^{n}$$ $$(1 - x - {x^2})F(x) = {F_0} + {F_1}x - {F_0}x + \sum\limits_{n \geq 2} {\left( {{F_n} - {F_{n - 1}} - {F_{n - 2}}} \right)} {x^n}$$
And by the recursion the right-hand side is $F_0+(F_1-F_0)x+0=1$, so $$(1 - x - {x^2})F(x) = 1$$ $$F(x) = \frac{1}{{1 - x - {x^2}}}$$
Now use that $${x^2} + x - 1 = \left( {x+ \varphi } \right)\left( {x-{\varphi ^{ - 1}}} \right),\qquad \varphi=\frac{1+\sqrt5}{2},$$ and the geometric expansion $$\frac{1}{{a - x}} = \sum\limits_{k = 0}^\infty {{a^{ - (k+1)}}{x^k}} $$ plus partial fractions to get what you want.
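One can check the resulting closed form numerically (a sketch; note that the convention $F_0=F_1=1$ used above shifts the index by one relative to the question's formula, which corresponds to $a_0=0$, $a_1=1$):

```python
# Numerical check (sketch) of Binet's formula against the recursion,
# using the convention a_0 = 0, a_1 = 1 matching the question's formula.
from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

prev, curr = 0, 1  # a_0, a_1
for n in range(1, 20):
    binet = (phi**n - psi**n) / sqrt(5)
    assert round(binet) == curr, (n, binet, curr)
    prev, curr = curr, prev + curr
print("Binet's formula matches the recursion for n = 1..19")
```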
|
H: Finding words in which the vowels occupy the even places
From the different words formed out of the letters of the word ALLAHABAD, the number of words in which the vowels occupy the even places is?
My try:
$\frac{9!}{4!\times2!}=7560$
1 2 3 4 5 6 7 8 9
a l l a h a b a d
I found only one vowel, A, which occurs at positions 1, 4, 6, 8.
How can I apply the condition stated in the problem?
AI: Your words are of the form:
_ a _ a _ a _ a _
so you only have to arrange the remaining 5 letters (L, L, H, B, D) in the blanks, and the number of possibilities is $\frac{5!}{2!}=60$ (dividing by $2!$ for the repeated L).
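A brute-force check (a sketch; enumerating all $9!$ permutations is fast enough here) confirms the count:

```python
# Brute force (sketch): count distinct arrangements of ALLAHABAD whose vowels
# (the four A's) fill the even positions 2, 4, 6, 8 (0-based indices 1, 3, 5, 7).
from itertools import permutations

words = set(permutations("ALLAHABAD"))
print(sum(1 for w in words if all(w[i] == "A" for i in (1, 3, 5, 7))))  # 60
```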
|
H: Integral $\int_0^\frac{\pi}{2} \sin^7x \cos^5x\, dx $
I'm asked to find the definite integral here, but unfortunately I'm floundering. Can someone please point me in the right direction?
$$\int_0^\frac{\pi}{2} \sin^7x \cos^5x\, dx $$
step 1: break up sin and cos so that I can use substitution
$$\int_0^\frac{\pi}{2} \sin^7(x) \cos^4(x) \cos(x) \, dx$$
step 2: apply a trig identity
$$\int_0^\frac{\pi}{2} \sin^7x\ (1-\sin^2 x)^2 \, dx$$
step 3: use $u$-substitution
$$ \text{let}\;\, u= \sin(x),\quad du=\cos(x)\,dx $$
step 4: apply the substitution
$$\int_0^\frac{\pi}{2} u^7 (1-u^2)^2 du $$
step 5: expand, distribute, and change the limits of integration
$$\int_0^1 u^7-2u^9+u^{11}\ du $$
step 6: integrate
$$(1^7-2(1)^9+1^{11})-0$$
I would just end up with $1$; however, the book answer is $$\frac {1}{120}$$
How can I be so far off?
AI: $$\int_0^\frac\pi2\sin^7x\cos^5xdx=\int_0^\frac\pi2\sin^7x\cos^4x\cos xdx=\int_0^\frac\pi2\sin^7x(1-\sin^2x)^2\cos xdx$$
$$=\int_0^1 u^7(1-u^2)^2 du (\text{ Putting }\sin x=u)$$
$$=\int_0^1 (u^7-2u^9+u^{11}) du$$
$$=\left(\frac{u^8}8-2\frac{u^{10}}{10}+\frac{u^{12}}{12}\right)_0^1$$
$$=\frac18-\frac15+\frac1{12}$$
$$=\frac{15-24+10}{120}=\frac1{120} $$
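A quick numerical confirmation (a sketch; it assumes SciPy is available):

```python
# Numerical check (sketch) that the integral equals 1/120.
from math import sin, cos, pi
from scipy.integrate import quad

val, err = quad(lambda x: sin(x)**7 * cos(x)**5, 0, pi/2)
print(val, 1/120)  # both are about 0.0083333
```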
|
H: How can one solve this differential equation
$y\ dy = 4x(y^2+1)^2\ dx \text{ and } y(0) = 1$
I'm trying:
$\dfrac{y\ dy}{ (y^2+1)^2 }= 4x\ dx$
but I can't figure out what to do now.
AI: Good start, you separated the variables. Now integrate.
For the integral of the left-hand side, make the substitution $u=y^2+1$. You should arrive at
$$\int \frac{1}{2} \cdot\frac{1}{u^2} \,du,$$
which is easy.
The integral of the right-hand side is $2x^2+C$. Use the initial condition to find $C$.
Added detail: Let $u=y^2+1$. Then $du =2y\,dy$ and therefore $y\,dy=\frac{1}{2} \,du$. Thus
$$\int \frac{y\,dy}{(1+y^2)^2}=\int \frac{1}{2}\cdot \frac{du}{u^2}=-\frac{1}{2}\cdot\frac{1}{u}.$$
(We deliberately and not quite correctly left out the constant of integration, since we will have one on the right, and one is enough.)
Thus an antiderivative of the expression on the left is $-\dfrac{1}{2(1+y^2)}$.
Integrating on the right, we end up with
$$ -\frac{1}{2(1+y^2)}=2x^2 +C.$$
Put $x=0$. Since $y(0)=1$, we get $C=-\frac{1}{4}$.
We have arrived at
$$-\frac{1}{2(1+y^2)}=2x^2-\frac{1}{4}=\frac{8x^2-1}{4}.$$
Change sign and flip over. We get
$$2(1+y^2)=\frac{4}{1-8x^2}.$$
Now a small amount of manipulation gets us $y$ explicitly in terms of $x$. At the end we have to take a square root. Take the positive one, since $y(0)=1\gt 0$.
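That manipulation yields $y=\sqrt{\dfrac{1+8x^2}{1-8x^2}}$, which can be verified symbolically (a sketch, assuming SymPy is available):

```python
# Verification (sketch): y = sqrt((1 + 8x^2)/(1 - 8x^2)) satisfies the ODE
# y*y' = 4x(y^2 + 1)^2 together with the initial condition y(0) = 1.
import sympy as sp

x = sp.symbols('x')
y = sp.sqrt((1 + 8*x**2) / (1 - 8*x**2))
print(sp.simplify(y*sp.diff(y, x) - 4*x*(y**2 + 1)**2))  # 0: the ODE holds
print(y.subs(x, 0))                                      # 1: y(0) = 1
```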
|
H: Disjoint union of random graphs again a random graph?
Let $G_{n,p}$, $n\in \mathbb{N}$, $p\in(0,1)$, be the binomial random graph, i.e. a graph on $n$ vertices in which each possible edge is included independently with probability $p$. Also, let $q\in (0,1)$.
Can one regard $G_{n,q}$ as a disjoint union of many $G_{n,p}$?
AI: If one is willing to drop the disjointness requirement, then the answer is yes, for appropriate values of $q$. For example, if $q = 2p - p^2$, then $G_{n,q}$ will have the distribution of the union of two independent random graphs with distribution $G_{n,p}$. (Since each edge $e$ has probability $p$ of being in each of the two graphs with distribution $G_{n,p}$, the probability that $e$ is in at least one of them is $2p - p^2$.)
In general, if $G_1$ has the distribution of $G_{n,p_1}$, $G_2$ has the distribution of $G_{n,p_2}$, and $q = p_1 + p_2 - p_1 p_2$, then a random graph with distribution $G_{n,q}$ will have the distribution of the union of $G_1$ and $G_2$.
The fact that it is possible to couple $G_{n,p}$ and $G_{n,q}$ in this way is often useful. For example, this type of argument can be used to show that if $\mathcal{P}$ is an increasing property of graphs, then the probability that a random graph with distribution $G_{n,p}$ is in $\mathcal{P}$ is increasing in $p$.
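Here is a quick Monte Carlo illustration of the edge-level coupling (a sketch; the parameter values are arbitrary choices of ours):

```python
# Monte Carlo sketch: an edge of the union of two independent G(n, p) graphs
# is present with probability q = 2p - p^2, matching a single G(n, q).
import random

p, trials = 0.3, 100_000
q = 2*p - p*p  # 0.51 for p = 0.3
# Short-circuit `or` is fine here: P(hit) = p + (1 - p)*p = 2p - p^2.
hits = sum(1 for _ in range(trials)
           if random.random() < p or random.random() < p)
print(hits / trials, q)  # empirical frequency vs. exact value
```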
|