H: $a_{m+n}+a_{m-n}=\frac{1}{2}(a_{2m}+a_{2n}), a_1=1$, find $ a_{1995}$
The sequence $a_0,a_1,a_2,...$ satisfies
$a_{m+n}+a_{m-n}=\frac{1}{2}(a_{2m}+a_{2n}), a_1=1$ for all non-negative integers $m,n$ with $m\ge n$. Find $a_{1995}$
This is a quick sketch of my solution.
$n=0$ gives $a_{2m}=4a_{m}$
$m=n=0$ gives $a_{0}=0$
Plugging $a_{2m}=4a_{m},a_{2n}=4a_{n}$ (with $n=1$) back into the original equation gives
$a_{m+1}=2a_{m}-a_{m-1}+2$
Trying small values led me to conjecture that $a_n=n^2$
My question is a lot simpler: my book's solution also conjectures that $a_n=n^2$, and proves it by strong induction. Can't I just check to see if it satisfies $a_1=1$ and $a_{m+n}+a_{m-n}=\frac{1}{2}(a_{2m}+a_{2n})$?
AI: You can readily check that $a_n=n^2$ satisfies the given condition: $1^2=1$ and $(m+n)^2+(m-n)^2=\frac12((2m)^2+(2n)^2)$. So far, so good. Now you know that this is one of possibly many solutions. In order to conclude that this is the only solution, you have to do a bit more.
Fortunately, you already obtained a recursion in your third point, where you express $a_{m+1}$ in terms of $a_m$ and $a_{m-1}$. As a consequence, the sequence is uniquely determined by $a_0$ and $a_1$. Of these, we are already given that $a_1=1$. This would still allow for many different solutions if you hadn't already established that necessarily $a_0=0$.
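If you want an empirical sanity check of all this, here is a minimal Python sketch (illustrative, not part of the proof) that iterates the recursion from $a_0=0$, $a_1=1$ and confirms $a_{1995}=1995^2$:

```python
# Iterate a_{m+1} = 2*a_m - a_{m-1} + 2 starting from a_0 = 0, a_1 = 1.
a_prev, a_curr = 0, 1              # a_0, a_1
for n in range(1, 1995):           # after iteration n, a_curr holds a_{n+1}
    a_prev, a_curr = a_curr, 2 * a_curr - a_prev + 2
assert a_curr == 1995 ** 2
print(a_curr)                      # 3980025
```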
|
H: Converting area of a rectangle
This is probably a very simple question and I'm being stupid.
Let's suppose I have a rectangle that is 0.6m by 0.4m. To calculate the area of this rectangle, you do 0.6 multiplied by 0.4, which is 0.24m^2.
However, if I convert the units to cm first, to calculate the area you do 60 multiplied by 40, which is 2400cm^2. When I convert this value back into meters the answer is 24m^2. What am I doing wrong?
AI: You seem to be labouring under the delusion that $1\,\mathrm m^2=10^2\,\mathrm{cm}^2$, when actually $1\,\mathrm m^2=10^4\,\mathrm{cm}^2$ and $10^2\,\mathrm{cm}^2=1\,\mathrm{dm}^2$.
So to say, $\mathrm{cm}^2$ stands for the squared centimetre "$(\mathrm{cm})^2$", and not a centi-squaremetre "$\mathrm c(\mathrm m^2)$". As far as I know, prefixes indicating the scaling are always treated as part of the symbol indicating the measure unit, and not as a "separate term" of a monomial. For instance, $\mathrm A\cdot\mu\mathrm{Pa}^3$ stands for ampere times cubed micropascal.
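As a tiny worked illustration of the conversion factor (a hedged Python sketch):

```python
length_cm, width_cm = 60, 40       # 0.6 m and 0.4 m expressed in centimetres
area_cm2 = length_cm * width_cm    # 2400 cm^2
area_m2 = area_cm2 / 10**4         # 1 m^2 = (100 cm)^2 = 10^4 cm^2
print(area_m2)                     # 0.24, matching 0.6 * 0.4 directly in metres
```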
|
H: How does the Fundamental Theorem of Calculus (FTC) tell us that $\frac{d}{dx}\left(\ln (x)\right)= \frac{1}{x}$?
According to Wikipedia, one common definition of the natural logarithm is that:
$$
\ln (x) = \int_{1}^{x} \frac{1}{t} dt
$$
The article then goes on to say that because of the first FTC, we can deduce that:
$$
\frac{d}{dx}\left(\ln (x)\right)= \frac{1}{x}
$$
This doesn't make sense to me. Although I agree that the derivative is indeed equal to $\frac{1}{x}$, I don't understand how that follows from the first FTC.
To my knowledge, the first FTC tells us that definite integrals can be computed using indefinite integrals. If we have $f(x)$, and $F(x)=\int f(x) dx$, then
$$
\int_{a}^{b} f(x)\,dx = F(b)-F(a)
$$
I understand that this is a significant result because definite integrals are defined as the area under the graph between $a$ and $b$, not by some formula that involves indefinite integrals.
If this is the case, then applying the first FTC to the problem at hand seems to only get us so far:
$$
\ln (x) = \int_{1}^{x} \frac{1}{t} dt
$$
And then we are stuck, because though we know that $f(t)= \frac{1}{t}$, we haven't shown that $F(x) = \ln(x)$. The only thing that we have shown is that $\ln (x) = \int_{1}^{x} \frac{1}{t} dt$. What am I missing?
AI: I guess that you are missing that
$$F(x)=\ln(x)$$ by definition, and $$f(x)=\frac1x.$$
So by the FTC,
$$\int_1^x\dfrac{dt}t=F(x)-F(1)=\ln(x)$$
and as $F$ is an antiderivative of $f$,
$$\frac{d\ln x}{dx}=F'(x)=f(x)=\frac1x.$$
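For a numerical illustration (not a proof), one can approximate $F(x)=\int_1^x \frac{dt}{t}$ by a midpoint rule and check that its difference quotient matches $\frac1x$; a hedged Python sketch:

```python
import math

def F(x: float, steps: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of 1/t from 1 to x."""
    h = (x - 1) / steps
    return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

x, eps = 2.0, 1e-5
print((F(x + eps) - F(x - eps)) / (2 * eps))   # ~0.5, i.e. 1/x
print(1 / x, math.log(x))                      # 0.5 and ln(2) ~ 0.6931
```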
|
H: If $\ U_{r} = \frac{1+\ U_{r-1}}{2}$ and $\ U_{0}=0$, Find $\lim_{n\to\infty} \sum_{r=1}^n \ U_{r}$
I am trying to understand fully how drug half-life works. So I derived this relationship:
$$\ U_{r} = \frac{1+\ U_{r-1}}{2}$$
Where $U_{0}=0$ and $r$ ranges over the natural numbers.
My issue is how to deduce a relationship for the sum to infinity:
$$\ S_{\infty}=\lim_{n\to\infty} \sum_{r=1}^n \ U_{r}$$
Consequently I need to get the relationship for $\ S_{\infty}$ if $\ U_{r} = \frac{A+\ U_{r-1}}{2}$ and $\ U_{0}=0$
AI: $U_r=\frac{1}{2}+\frac{U_{r-1}}{2}=\frac{1}{2}+\frac{1}{4}+\frac{U_{r-2}}{4}=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{U_{r-3}}{8}=...=\sum_{i=1}^{r}\frac{1}{2^i}+\frac{U_0}{2^r}=1-\frac{1}{2^r}$
and hence $\sum_{1}^{n}U_r=n-\sum_1^n\frac{1}{2^r}=n-1+\frac{1}{2^n}$; since the terms $U_r$ tend to $1$, the partial sums grow without bound and the sum to infinity diverges (the same happens for any $A>0$ in the general version).
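A small Python sketch (illustrative only) confirming this closed form:

```python
# Iterate U_r = (1 + U_{r-1})/2 from U_0 = 0 and compare the partial sums
# with the closed form n - 1 + 2**(-n).
U, S = 0.0, 0.0
for n in range(1, 21):
    U = (1 + U) / 2                    # U_n = 1 - 2**(-n)
    S += U
    assert abs(S - (n - 1 + 2.0**-n)) < 1e-12
print(S)                               # the partial sums keep growing with n
```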
|
H: Show that the sequence converges; what is its limit? $x_{n+1}=\frac{1}{2}(x_{n}+\frac{a}{x_n})$
The sequence is: $x_{n+1}=\frac{1}{2}(x_{n}+\frac{a}{x_n})$ for $n \in \mathbb{N}_{0}$, $a>0$ and $x_{0}=a$.
Hint: Show first that $x^{2}_{n+1} - a \ge 0$ and then consider $x_{n+1}-x_{n}$.
I tried this way: $\frac{a}{x_0}$ is not negative, so the sum $x_{0}+\frac{a}{x_0}$, and hence $\frac{1}{2}(x_{0}+\frac{a}{x_0})$, is also not negative.
I guess that I should prove by induction that $x_{n}$ is not negative.
AI: Well,
$$x_{n+1}=\frac{1}{2}\underbrace{\left(x_{n}+\frac{a}{x_n}\right)}_\text{$AM \ge GM$} \ge \sqrt{a} \implies x_{n+1}^2 \ge a$$
$$\iff x_{n+1}^2-a \ge0 \iff a-x_{k}^2 \le 0$$
for all $k \in \mathbb{N}$
I used here the $AM \ge GM$ inequality: $\frac{x+y}{2} \ge\sqrt{xy}$ for positive $x$ and $y$, and with our sequence being positive, we have nothing to worry about using it or dividing by one of its terms.
$$x_{n+1}-x_n=\frac{1}{2}\left(\frac{a-x_n^2}{x_n}\right) \le0$$
$$\iff x_{n+1} \le x_n$$
for all $n \in \mathbb{N}$ which directly means the sequence is decreasing and thus converging.
But $x_{n+1} \ge \sqrt{a}$ and thus $$x_1 \ge \dots \ge x_n \ge \dots \ge \sqrt{a}$$
Passing to the limit in the recurrence gives $L=\frac{1}{2}\left(L+\frac{a}{L}\right)$, i.e. $L^2=a$; since all terms are at least $\sqrt a>0$, the limit is $\sqrt{a}$
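As an aside, the recurrence is exactly Newton's method for $x^2=a$ (the Babylonian square-root method); a hedged Python sketch of the convergence:

```python
import math

a = 7.0
x = a                                   # x_0 = a
for _ in range(10):
    x_next = 0.5 * (x + a / x)
    assert x_next * x_next >= a - 1e-9  # x_{n+1}^2 >= a, as proved above
    x = x_next
print(x, math.sqrt(a))                  # both ~2.6457513110645907
```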
|
H: Class equation of normal subgroups
I am puzzled by a thought, or a question, regarding the class equation and normality of subgroups. Consider the following situation.
Let $G$ be a finite group and $N\trianglelefteq G$.Let $G$ act on $N$ by conjugation which is allowed since $N$ is normal and let the representatives of the different equivalence classes be $n_1,n_2,...,n_k$ (excluding the equivalence class of identity).
Then, the equivalence class of $n_i$ is
$[n_i]=\{gn_ig^{-1} : g\in G\},i=1,2,..,k$
Then $N=\{e\}\cup\bigcup_{i=1}^k[n_i]$
$\Rightarrow |N|=1+\sum_{i=1}^k |[n_i]| \quad (1)$
Now, let $N$ act on itself by conjugation. If
$cl(n_i)=\{nn_in^{-1} : n\in N\}\ (i=1,2,\ldots,k)$ denotes the equivalence class of $n_i$ under this action, then I can see that
$cl(n_i)\subseteq [n_i]$ but can it happen that
$cl(n_i)\subsetneq [n_i]$ for some $i$(i.e a proper subset) or if the question is put in other way
Will the class equation of $N$ be the same as $(1)$?
This may be a silly question but I am trying to keep my understandings clear.
I have seen a question with the same title here(Class equation of normal subgroup) but I don't think that answers my question.
Any suggestions or advice?
Thanks for your time.
AI: Let's see an example. We'll take $$G=S_3=\{e,(12),(13),(23),(123),(132)\}$$
and
$$N=A_3=\{e,(123),(132)\}$$
We look at the action of $G$ on $N$ by conjugation and we get the equivalence classes
$$\{\{e\},\{(123),(132)\} \}$$
So here the class equation is :
$$3=|N|=1+|[(123)]|=1+2$$
Looking at the action of $N$ on itself, we get:
$$\{\{e\},\{(123)\},\{(132)\}\}$$
So here the class equation is:
$$3=|N|=1+|cl((123))|+|cl((132))|=1+1+1$$
So, the $n_i$ are the representatives of the equivalence classes for the conjugation action of $G$ on $N$.
But there could be more equivalence classes when you take the action of $N$ on itself.
So in the example here, we have that $[(123)]=\{(123),(132)\}$ but $cl((123))=\{(123)\}$.
So when you write down the class equation for the action of $N$ on itself, you cannot just take the $cl(n_i)$, where the $n_i$ are the class representatives of the action of $G$ on $N$, because you may be missing equivalence classes in the action of $N$ on itself.
|
H: Designing an XNOR function for real numbers
I would like to design a function of $(x,y)$ which gives a large output for large values of $x$ and $y$ and for small values of $x$ and $y$. When one of $x$ and $y$ is large and the other one small, it should yield a small output value. What would be a suitable function? This is similar to the concept of bitwise XNOR, but operating on real values of $x$ and $y$ instead of bits.
AI: I'll speak to the case where $x,\,y\in[0,\,1]$. This can easily be adapted to other scales for which we have a definition of "small" and "large", by replacing the original $f(x,\,y)$ with a function of the form $g^{-1}(f(g(x),\,g(y)))$.
Usually we define $x\land y$ as $xy$, $x\lor y$ as$$1-(1-x)(1-y)=x+y-xy,$$and $\neg x$ as $1-x$. So xnor is then$$\begin{align}(x\land y)\lor(\neg x\land\neg y)&=(xy)\lor((1-x)(1-y))\\&=xy+(1-x)(1-y)-xy(1-x)(1-y)\\&=1-x-y+xy+x^2y+xy^2-x^2y^2,\end{align}$$which under the convention $z^2=z$ (viz. booleans $0^2=0,\,1^2=1$) becomes$$1-x-y+xy+xy+xy-xy=1-x-y+2xy.$$This looks less surprising once you note that its complement $x+y-2xy=x(1-y)+(1-x)y$ is exactly the xor, before staring at the logic table.
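A minimal Python sketch of the resulting soft xnor, checking the boolean corners (illustrative):

```python
def soft_xnor(x: float, y: float) -> float:
    return 1 - x - y + 2 * x * y

# Reproduces the boolean truth table at the corners ...
for x in (0, 1):
    for y in (0, 1):
        print(x, y, soft_xnor(x, y))    # 1 when x == y, 0 otherwise
# ... and interpolates in between, e.g. soft_xnor(0.9, 0.8) ~ 0.74.
```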
|
H: Show the correctness of this derivation of Taylor's Theorem
As to this beautiful derivation of Taylor's Theorem, wouldn't it break the equality when we add the term $f(0)$ to the right side of $f(x) = \int_{0}^{x} f'(t)dt$?
AI: No. The reason is as follows: we have
$$ \int_{0}^{x} f'(t) dt = \left [ f(t) \right ] ^ x _ 0 = f(x) - f(0)$$
(not just $f(x)$). Thus:
$$f(x)=f(0)+ \int_{0}^{x} f'(t) dt$$
|
H: Probability of Exiting A Roundabout
I am piggybacking on this question: probability of leaving
The question was closed, but I found it interesting and I would like feedback on what I have done with it, and I have more questions about it. I'm not sure what proper etiquette is for piggybacking on closed questions.
I will rephrase the question as I understand it:
In the diagram below, you start at the node marked with lowercase $a$.
From $a$, you move to one of the adjacent nodes, chosen uniformly at random. That is, you move to $b$, $d$, or $A$.
If you are at a lowercase node, you continue in the same fashion, moving to an adjacent node uniformly at random. Once you reach an uppercase node, you have left the roundabout, and the journey stops. The question is, what are the probabilities of the journey ending at $A$, $B$, $C$, and $D$?
In the original question, there was an answer that used Markov chains, but I am wondering if there is a way to do it without that technique.
I modeled this experiment in Excel, and after $100,000$ trials, the probabilities appear to be roughly:
$P(A)=46.5\%$
$P(B)=P(D)=20\%$
$P(C)=13.5\%$
I have no way of knowing if these are the exact answers, or even if the exact answers are rational numbers, but they make intuitive sense to me because of the structure and symmetry in the diagram.
I wonder if there is a way to "juggle" conditional probabilities to find exact answers to this question, without having to use Markov chains.
I would start by calculating $P(A|a)$, the probability of leaving at $A$ given you started at $a$. By symmetry, $P(B|b)$, $P(C|c)$, and $P(D|d)$ would be the same as $P(A|a)$.
To get $P(A|a)$, notice that the total length of the walk must be odd. Either you leave immediately ($1$ step), or you take an even number of steps in the roundabout to return to $a$, and then leave at $A$ ($2m+1$, for some integer $m$, steps).
For a given walk of $k$ steps, the probability of taking that walk is simply $(\frac13)^k$.
Let $N_k$ be the number of walks of $k$ steps that leave the roundabout at $A$.
Then $P(A|a)=\frac13+N_3\cdot(\frac13)^3+N_5\cdot(\frac13)^5+...$
I'm unable to find a systematic way of calculating $N_k$. It is easy enough to do for $k=3$ or $5$, but I can't be sure of what pattern is emerging.
Beyond that, assuming I did have an exact answer for $P(A|a)$, I would still need to figure out $P(B|a)$. Would that just be $\frac13\cdot P(A|a)$ since there is a $\frac13$ probability of going to $b$ on the first step?
By symmetry, $P(D|a)=P(B|a)$, so if I had $P(A|a)$ and $P(B|a)$, I could easily figure out all the probabilities.
I would appreciate any input on this!
AI: One way to short circuit the complications of the Markov chains is to work recursively. The situation is so very symmetric that this ends up simplifying the vast majority of the calculations.
Let the desired probabilities be $P_A,P_B, P_C, P_D$. Of course $P_B=P_D$ and the four variables sum to $1$. Thus there are really only two unknowns here. Let $x=P_A, y=P_B$. Of course, $P_D=y$ and $P_C=1-x-2y$.
Consider what happens once one makes that first choice. With probability $\frac 13$ the game is over (and you exit at $A$). Otherwise you move to either $b$ or $d$. Note that whichever of those two nodes you move to, exiting at $A$ is now like exiting at $B$ (resp. $D$) from the start, so by symmetry it happens with probability $y$. It follows that $$x=\frac 13\times 1 + \frac 23\times y$$
Similarly, still considering the start, let's analyze what happens to $P_B$. If you exit at $A$, you can't eventually exit at $B$. If you move to $b$, then the probability that you eventually exit at $B$ is $x$. And if you move to $d$, then $B$ is the opposite exit, so the probability that you eventually exit at $B$ is now $1-x-2y$. Thus $$y=\frac 13\times 0 +\frac 13\times x+\frac 13\times (1-x-2y)$$
This is easily solved and we get $$P_A=\frac 7{15}\quad P_B=P_D=\frac 15 \quad P_C=\frac 2{15}$$
We remark that this aligns well with your simulated results.
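If you would like to reproduce the simulation without Excel, here is a minimal Python sketch of the walk (illustrative; node indices $0,\dots,3$ stand for $a,b,c,d$ on the ring, each with its own exit):

```python
import random

def run_once() -> int:
    pos = 0                                  # start at a
    while True:
        move = random.choice(("left", "right", "exit"))
        if move == "exit":
            return pos                       # leave at the adjacent exit
        pos = (pos + (1 if move == "right" else -1)) % 4

trials = 100_000
counts = [0, 0, 0, 0]
for _ in range(trials):
    counts[run_once()] += 1
for name, c in zip("ABCD", counts):
    print(name, c / trials)   # ~0.467, 0.200, 0.133, 0.200
```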
|
H: convergence or divergence of $\sum^{\infty}_{k=1}\frac{k}{k^2-\sin^2(k)}$
Determine whether the series $$\sum^{\infty}_{k=1}\frac{k}{k^2-\sin^2(k)}$$ converges or diverges.
What I tried:
We know that $\sin^2(x)\leq 1$ for all real numbers, so $k^2-\sin^2(k)\ge k^2-1$ and hence, for $k\ge 2$,
$$\sum^{\infty}_{k=2}\frac{k}{k^2-\sin^2(k)}\leq \sum^{\infty}_{k=2}\frac{k}{k^2-1}$$
How do I solve it from here? Help me please.
AI: You can write:
$\sum\limits_{k=2}^\infty \frac{k}{k^2-1}=\sum\limits_{k=2}^\infty \left(\frac{1}{2(k-1)}+\frac{1}{2(k+1)}\right)=\frac{1}{2}\sum\limits_{k=2}^\infty \left(\frac{1}{k-1}+\frac{1}{k+1}\right)$
Can you see where this is going?
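If you'd like a numerical illustration of where this is going (not a proof), in Python:

```python
import math

# Partial sums of k/(k^2 - sin(k)^2) grow like log(n), matching the
# harmonic-type behaviour of the comparison series.
for n in (10**2, 10**3, 10**4, 10**5):
    s = sum(k / (k**2 - math.sin(k)**2) for k in range(1, n + 1))
    print(n, s, s - math.log(n))   # the difference settles to a constant
```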
|
H: Estimate of the measure of the intersection
Let $(\Omega, \mathscr{F}, \mathbb{P})$ be a probability space. Let $\Sigma_n\in \mathscr{F}$ be such that $\mathbb{P}(\Sigma_n^c)<\epsilon_n$, where $\sum \epsilon_n<\delta$ and $\delta>0$ is as small as we want (much smaller than $1$).
Can I estimate $\mathbb{P}(\bigcap_{n=1}^\infty \Sigma_n)$ in terms of the sequence $(\epsilon_n)$ (only)?
AI: We can at least get a lower bound on the probability:
\begin{align*}
\mathbb{P}\left( \bigcap_{n=1}^\infty \Sigma_n\right) &= \mathbb{P}\left(\left( \bigcup_{n=1}^\infty \Sigma_n^c\right)^c\right) \\
&= 1-\mathbb{P}\left(\bigcup_{n=1}^\infty \Sigma_n^c\right) \\
&\ge 1 - \sum_{n=1}^\infty \mathbb{P}(\Sigma_n^c) \\
&\ge 1 - \sum_{n=1}^\infty \varepsilon_n \\
&\ge 1 - \delta
\end{align*}
|
H: How to apply ratio test to prove convergence of $\sum_{n=1}^{\infty} \frac{(2^n n!)^2}{(2n)^{2n}}$
Can someone please help me with this?
I know that we should use the ratio test, but can you show me the steps in detail?
$$\sum_{n=1}^{\infty} \frac{(2^n n!)^2}{(2n)^{2n}}$$
And I'm stuck here:
$$\lim_{n \to \infty}\left|\frac{(2^{n+1}(n+1)!)^2}{(2n+2)^{2n+2}}\cdot\frac{(2n)^{2n}}{(2^n n!)^2}\right|$$
$$=\lim_{n \to \infty} \left|\frac{(2n)^{2n}}{(2n+2)^{2n}}\right|$$
how should I proceed from here?
AI: In order to use the ratio test, we have to evaluate the limit:
$$ L := \lim_{n \rightarrow \infty} \left | \frac{a_{n+1}}{a_n} \right | $$
If $L <1$, the series converges; if $L >1$, it diverges; if $L=1$ or the limit does not exist, we cannot conclude anything with the ratio test.
Now consider our series; we have:
$$ \lim_{n \rightarrow \infty} \left | \frac{a_{n+1}}{a_n} \right | = \lim_{n \rightarrow \infty} \left | \frac{(2^{n+1} (n+1)!)^2 (2n)^{2n} }{(2^n n!)^2 (2n+2)^{2n+2}} \right | = \frac{1}{e^2} < 1$$
Thus, the series converges.
EDIT: to evaluate the limit, rewrite it as follows:
$$ \lim_{n \rightarrow \infty} \frac{2^{2n+2}(n+1)! (n+1)! 2^{2n} n^{2n}}{2^{2n} n! n! 2^{2n+2} (n+1)^{2n+2}} = \lim_{n \rightarrow \infty} \left ( \frac{n}{n+1} \right )^{2n} =\lim_{n \rightarrow \infty} \left (1- \frac{1}{n+1} \right)^{2n} = \frac{1}{e^2} $$
where in the first step we simplified, and in the last one we used a well-known limit for the exponential.
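As a numerical sanity check (illustrative), the ratio indeed approaches $1/e^2\approx 0.1353$:

```python
from math import factorial, e

def a(n: int) -> float:
    return (2.0**n * factorial(n))**2 / (2 * n)**(2 * n)

for n in (5, 10, 20, 40):
    print(n, a(n + 1) / a(n))     # tends to 1/e^2
print(1 / e**2)                   # 0.1353...
```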
|
H: Q: When can we say that the starting value is near the root in Newton Raphson method?
I would like to know when I can use the Newton–Raphson method to find an approximation of a root. We know that it can be expected to work when the starting value is near the root; but what is "near"? I mean, if for example I've got $f(x)=(x-3)^{1/2}$, I know that the root would be $3$, so can I start from the point $p_0=4$? Is $4$ near $3$?
AI: In principle it may be possible to determine the immediate basin of attraction of a root $r$ of your function $f$, i.e. the largest interval $(a,b)$ containing $r$ such that Newton's method starting at any point in $(a,b)$ converges to $r$. Namely, $a$ and $b$ (if finite) are the closest points to left and right of $r$ which are either roots of $f'$, or points where $f$ is not differentiable, or they form a $2$-cycle for Newton's method.
However, in practice the simplest thing to do, to tell whether a point is "sufficiently close", is usually just to try it and see what happens to the iterations.
Your example $f(x) = (x-3)^{1/2}$ is not a good candidate for Newton's method, because (if you stick to real numbers) the function is not defined for $x < 3$ and (even if you allow complex numbers) is not differentiable at $x=3$. If you start at any $x_0 > 3$, the very first iteration will take you to numbers $< 3$. Perhaps you mean $(x-3)^2$.
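A minimal Python sketch contrasting the two situations (hedged: the `newton` helper is a hypothetical name, and we use $f(x)=(x-3)^2$ as suggested above):

```python
def newton(f, df, x0, steps=30):
    x = x0
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

# For f(x) = (x-3)**2 the iteration converges (linearly, double root):
print(newton(lambda x: (x - 3)**2, lambda x: 2 * (x - 3), x0=4.0))  # ~3

# For f(x) = (x-3)**0.5 the update is x - 2*(x-3) = 6 - x, so the very
# first step from any x0 > 3 lands at 6 - x0 < 3, outside the domain.
```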
|
H: What can we say if the inner products of two vectors ($u$ and $v$) with another vector ($x$) are equal? ($x$ is an eigenvector)
We know that $x$ is an eigenvector and the inner products of $x$ with $u$ and $v$ are equal, I mean:
$\langle u,x\rangle \;= \; \langle v,x \rangle$
On the other hand, we know that $v = [1,1,...,1]$.
$u$, $v$ and $x$ are defined on $\mathbb{R}^d$. And by the inner product, I mean the sum of the entries of the element-wise product of the two vectors. What can we say about the relation between $u$ and $v$?
AI: $\langle w, x \rangle$ is a scalar multiple of the length of the projection of $w$ onto $x$. So your two inner products are equal if and only if $u$ and $v$ have the same projection onto $x$, or equivalently, $u-v$ is orthogonal to $x$ (which can be easily seen from $\langle u-v, x\rangle = \langle u, x \rangle - \langle v, x \rangle = 0$).
|
H: Distinct ways to put $N$ balls in $M$ boxes such that there is no more than $K$ balls in each box?
The question is: in how many different ways can I put $N$ indistinguishable balls into $M$ distinguishable boxes such that each box contains no more than $K$ balls in it?
A more general problem: if $K$ is different for different boxes, $K_i$ for the box $i$, $i=1,...,M$.
I tried to find a way to use the "stars and bars" method here, but didn't succeed. I would be grateful if anyone could explain how to solve such a task or provide a reference.
AI: Let $a_j$ denote the number of elements in the $j$-th box; then you want to count the tuples $(a_1,\ldots ,a_M)\in (\mathbb N \cup \{0\}) ^M$ such that $a_j\leqslant K$ for each $j\in\{1,\ldots,M\}$ and $\sum_{j=1}^Ma_j=N$. Then note that this number is just the coefficient of $x^N$ in the expansion of the polynomial $(1+x+x^2+\ldots +x^K)^M$; this is denoted by $[x^N](1+x+x^2+\ldots +x^K)^M$.
For the general case it is just
$$
[x^N]\prod_{j=1}^M(1+x+x^2+\ldots +x^{K_j})
$$
For a way to handle this kind of problems, at least for simpler cases, take a look here. A good book about counting methods using generating functions is this.
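A short Python sketch of this coefficient extraction for the general case (illustrative; the list `caps` holds the $K_j$):

```python
def count_distributions(N: int, caps: list[int]) -> int:
    """Coefficient of x^N in prod_j (1 + x + ... + x^{K_j})."""
    poly = [1]                              # the constant polynomial 1
    for K in caps:
        new = [0] * (len(poly) + K)
        for i, c in enumerate(poly):        # multiply by 1 + x + ... + x^K
            for k in range(K + 1):
                new[i + k] += c
        poly = new
    return poly[N] if N < len(poly) else 0

print(count_distributions(5, [3, 3, 3]))    # 12 ways: N=5 balls, K_j=3 each
```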
|
H: Basic notion about schemes: what is the difference between $s(x)$ and $s_x$ for $s \in \Gamma(U, O_X)$?
Let $X$ be a scheme and $U$ an open subset of $X$. Let $s \in \Gamma(U, O_X)$ and $x \in U$.
I am getting confused with what the difference is between
$s_x$ and $s(x)$... Are they the same thing?
AI: It is standard to write $s_x$ for the image of $s$ in the stalk $\mathscr{O}_{X,x}$ of $X$ at $x$, and I think it is more or less standard to write $s(x)$ for the image of $s$ in the residue field $k(x)=\mathscr{O}_{X,x}/\mathfrak{m}_x$ of $\mathscr{O}_{X,x}$.
|
H: What is the definition of the Entscheidungsproblem (Decision Problem)?
I have been trying to find the most “formal” definition of the Entscheidungsproblem for the past couple of days now.
On Wikipedia it states this:
The problem asks for an algorithm that considers, as input, a statement and answers "Yes" or "No" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms.
On Simple English Wikipedia, it states this:
Is there an algorithm that will take a formal language, and a logical statement in that language, and that will output "True" or "False", depending on the truth value of the statement?
In these instances, are the two definitions equivalent? Is taking the formal language equivalent to considering whether the statement is universally valid?
Another definition, this one found on Quora, states this:
Is there an effective procedure (an algorithm) which, given a set of axioms and a mathematical proposition, decides whether it is or is not provable from the axioms?
Out of these definitions, which one is the most formal definition of the Entscheidungsproblem? Why does that definition adhere to the Entscheidungsproblem more closely than the other ones?
Just to add, I also tried to find the definition or an outline of the problem in the Principles of Mathematical Logic book – but I could not get through the dense layer of prerequisite knowledge. Perhaps there's a section in the book that describes it in layman's terms. If there is, I would greatly appreciate it if someone could tell me where I can find that section.
Sorry if this sort of question is a little vague.
AI: None of those definitions is entirely accurate, since the following terms are left undefined:
"The axioms:" what axioms exactly? (Note that the other terms, e.g. "structure," are actually unambiguous technical terms - and on that note see the end of this answer.)
"True"/"False:" according to what notion of truth? (Unsurprisingly given that this is the Simple English Wikipedia article, this is the most dangerous conflation.)
"Set of axioms:" what sort of set of axioms is allowed here, and how should a set of axioms be "given?"
Here is a precise definition of the Entscheidungsproblem. We start small. Suppose $T$ is a finite first-order theory in a finite language.$^*$ Then there is an "Entscheidungsproblem for $T$,"$^\dagger$ which I'll call $E(T)$:
Is there an algorithm for determining whether a sentence $\sigma$ in the language of $T$ is provable from $T$?
Note that for some $T$s the answer to $E(T)$ is yes, e.g. Presburger arithmetic.
The full Entscheidungsproblem can then be phrased as follows (and my understanding is that this is the most faithful to its original intent):
Is the answer to $E(T)$ always yes?
It turns out that we can boil this down to a single candidate: there is such a theory $T$ (actually, there are many such theories) such that $E(T)$ is maximally complicated. For example, we can take Robinson arithmetic $\mathsf{Q}$. (Note that strictly speaking, according to the phrasing above something like first-order Peano arithmetic or $\mathsf{ZFC}$ does not count, being non-finitely-axiomatizable.) So we could equivalently phrase the Entscheidungsproblem as:
Is there an algorithm for determining whether a sentence $\sigma$ in the language of arithmetic is provable from $\mathsf{Q}$?
There are yet more rephrasings of the Entscheidungsproblem, the most nontrivial one being in terms of structures:
Is there an algorithm for determining whether a sentence $\sigma$ in the language of arithmetic is true in every model of $\mathsf{Q}$?
The equivalence between these notions is highly nontrivial - it's a consequence of Godel's completeness theorem (that's not a typo!).
And of course the modern perspective on the Entscheidungsproblem is that it's really just a rephrasing of the halting problem, so we tend to talk about that instead. One direction of this conflation is that the above $E(T)$s can be directly encoded in the halting problem, and the other direction is that we show the undecidability of $E(\mathsf{Q})$ by coding the halting problem into it - $E(\mathsf{Q})$ and the halting problem are equivalent in a precise sense.
$^*$Why finite language? Well, I want to avoid issues with the complexity of the language itself - the focus should be just on what the theory can do, not how hard it is to describe the theory or understand the language in the first place. It's really more natural to allow arbitrary computable theories in computable languages, but I think restricting attention to the finite case makes things simpler at first.
$^\dagger$See e.g. Church page $363$: "As a corollary of Theorem XIX, it follows that the Entscheidungs-problem is unsolvable in the case of any system of symbolic logic which is $\omega$-consistent [...] and is strong enough to allow certain comparatively simple methods of definition and proof. "
|
H: Prove that the sequence $ \{g \circ f_{n} \} $ converges uniformly on compact subsets of $ \Omega $ to the function $ g \circ f $
Let $ \Omega $ be an open set in $\mathbb{C}$ and $\{f_{n} \}$ a sequence of continuous functions from $\Omega$ to $ \mathbb{C}$ that converge to a function $ f $ uniformly on compact subsets of $ \Omega $. Let $ g $ be a continuous function on $ \mathbb{C} $. Prove that the sequence $ \{g \circ f_{n} \} $ converges uniformly on compact subsets of $ \Omega $ to the function $ g \circ f $.
I tried to prove this by contradiction, but the proof turned out to be a little difficult. I would like to know if there is a simpler way to do this.
AI: Fix $K$ compact. There is $N\in\mathbb{N}$ such that $\|f-f_n\|_K<1$ for all $n\geq N$. Thus, for all $n\geq N$, the range of $f_n$ over $K$ is contained in the closed ball $B$ around $0$ of radius $\|f\|_K +1$ which is compact in $\mathbb{C}$.
$g$, being continuous in $\mathbb{C}$, is uniformly continuous on $B$. Thus, given $\varepsilon>0$, there is $\delta>0$ such that $|g(y)-g(x)|<\varepsilon$ whenever $|x-y|<\delta $ and $x,y\in B$.
On the other hand, there is $N^*\in\mathbb{N}$ such that $n\geq \max(N,N^*)$ implies that $\|f_n-f\|_K<\delta$
Consequently
$$
|g(f_n(u))-g(f(u))|<\varepsilon
$$
for all $u\in K$ and all $n\geq\max(N,N^*)$.
|
H: (Proof-check) Alternative formula for the total variation
Good morning!
I'm reposting a message I posted a bit earlier because it was quite messy and I wanted to make it clearer.
I have a continuously differentiable function $f$ on $[a,b]$, and I am trying to prove the following equality:
$\sup\limits_{\mathcal{P}} \displaystyle\sum_i |f(t_{i+1}) - f(t_i)| = \lim\limits_{||\mathcal{P}|| \to 0} \displaystyle\sum_i |f(t_{i+1}) - f(t_i)|$, where $\mathcal{P}$ ranges over the set of partitions, and the $t_i$'s are "tag-points" (ie, each $t_i$ belongs to the $i+1$-th interval of the partition).
I've already shown that $\sup\limits_{\mathcal{P}} \displaystyle\sum_i |f(t_{i+1}) - f(t_i)| = \displaystyle\int_a^b |f'(x)| \mathrm{d}x$, and it is obvious that $\lim\limits_{||\mathcal{P}|| \to 0} \displaystyle\sum_i |f(t_{i+1}) - f(t_i)| \leq \sup\limits_{\mathcal{P}} \displaystyle\sum_i |f(t_{i+1}) - f(t_i)|$. So I tried to show the second side of the inequality, but I think there might be a mistake in my reasoning, so I wanted to ask for some proof-checking. Here is how I went about it:
$|f'|$ is continuous, so its integral can be written as a limit of Riemann sums:
\begin{align*}
\displaystyle\int_a^b |f'(x)| \mathrm{d}x &= \lim\limits_{n \to +\infty} \displaystyle\sum_{i = 0}^{n-1} \dfrac{b-a}{n}|f'(a_i)| \, \, \left(a_i := a + \dfrac{i}{n}(b-a)\right) \\
&= \lim\limits_{n \to +\infty} \displaystyle\sum_i |f(a_i + \dfrac{b-a}{n}) - f(a_i) + o(\dfrac{1}{n})| \\
&\leq \lim\limits_{n \to +\infty} \displaystyle\sum_i |f(a_i + \dfrac{b-a}{n}) - f(a_i)| = \lim\limits_{||\mathcal{P}|| \to 0} \displaystyle\sum_{t_i \in \mathcal{P}} |f(t_{i+1}) - f(t_i)|
\end{align*}
Now, the step I think is dodgy is the last one. I think it makes sense, because since $f$ is Riemann-integrable, two of its Riemann sums with a mesh going to $0$ should be equal, but I'm not $100\%$ sure, so I'm asking just in case :)
AI: Set $S = \sup_\mathcal{P} \sum_\mathcal{P} |f(t_i)-f(t_{i-1})|$ and let $\epsilon > 0$. By definition of the supremum you may find a partition $\mathcal{P}$ so that $$S - \sum_\mathcal{P} |f(t_i)-f(t_{i-1})| < \epsilon$$ Now if you take any finer partition $\mathcal{P}' \supset \mathcal{P}$, then $$S - \sum_{\mathcal{P}'} |f(t_i')-f(t_{i-1}')| < \epsilon \qquad (\star)$$
Now note that $I = \lim\limits_{\|P\| \to 0} \sum |f(t_i)-f(t_{i-1})|$ exists and is unique (see https://math.stackexchange.com/a/2047959/72031), and by $(\star)$ we obtain $S < I + \epsilon$ for all $\epsilon >0$, and thus $S \leq I$ as wanted.
|
H: Is there another function to calculate $T_n$?
I was searching about Fibonacci numbers, and I found that Tribonacci numbers also exist, given by the following recurrence: $T_{n+3}= T_{n+2}+T_{n+1}+T_{n}$ with $T_0=T_1=0,T_2=1$. Then I thought about what function would calculate the value $T_n$, and it would obviously be: $ F (n) = F(n-1) + F(n-2) + F(n-3) $; please correct me if it is not so. But is there another way (function) to calculate this value without having to use recurrences?
AI: Yes, you can solve the recurrence relation $F(n)=F(n-1)+F(n-2)+F(n-3)$ by solving its characteristic polynomial $x^3 = x^2 + x + 1$. This has a real solution $\alpha$ and two complex solutions $\beta, \gamma$, so the function will be given by $$F(n) = c_1 \alpha^n + c_2 \beta^n + c_3 \gamma^n$$ and we need to solve for $c_1, c_2, c_3$. We do this by substituting in the initial conditions:
\begin{align*}
0 = F(1) &= c_1 \alpha + c_2 \beta + c_3 \gamma \\
0 = F(2) &= c_1 \alpha^2 + c_2 \beta^2 + c_3 \gamma^2 \\
1 = F(3) &= c_1 \alpha^3 + c_2 \beta^3 + c_3 \gamma^3,
\end{align*}
so we have to solve a system of $3$ equations and $3$ unknowns.
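Here is a hedged numerical sketch of exactly this computation in Python (using numpy; illustrative only):

```python
import numpy as np

r = np.roots([1, -1, -1, -1])              # roots of x^3 - x^2 - x - 1
A = np.array([r, r**2, r**3])              # rows encode F(1), F(2), F(3)
c = np.linalg.solve(A, np.array([0, 0, 1]))

def F(n: int) -> float:
    return (c * r**n).sum().real           # imaginary parts cancel

# Compare with the recurrence T_{n+3} = T_{n+2} + T_{n+1} + T_n:
T = [0, 0, 1]
for _ in range(20):
    T.append(T[-1] + T[-2] + T[-3])
print(round(F(10)), T[9])                  # both give 44
```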
|
H: Linear algebra: Null space Basis
Suppose I have a matrix $A_{3\times 3}$; then if the rank of the column space is $1$, the dimension of the null space is $2$. Is there any way I can get the basis vectors of the null space from this matrix $A$ itself?
AI: There is no such thing as the basis vectors of the null space, since there are infinitely many such bases. Simply solve the system$$A.\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$and find a basis of the space of solutions.
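If you want such a basis numerically, here is a minimal numpy sketch using the SVD, with a hypothetical rank-$1$ example matrix:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [3., 6., 9.]])               # rank 1, so nullity 2
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10 * s.max()]       # right singular vectors with ~0 singular value
print(null_basis)                          # two orthonormal rows spanning the null space
print(A @ null_basis.T)                    # approximately the zero matrix
```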
|
H: Pointwise convergence of ${\tau \wedge n}(ω)$ to the rv ${\tau}(ω)$, given that the stopping time is finite
Does ${\tau \wedge n}(ω)$ converge uniformly to the random variable ${\tau}(ω)$ too?
My intuition is that it does not, since the convergence of ${\tau \wedge n}(ω)$ to ${\tau}(ω)$ depends on $ω$, i.e. the rate of convergence to the stopping time is not "uniform" for all $ω \in\ Ω $.
As a follow-up question: does the fact that ${\tau \wedge n}(ω)$ converges pointwise to ${\tau}(ω)$ trivially imply that $X_{\tau \wedge n}\to X_{\tau}$ a.s., or are there subtleties involved?
AI: You are correct, the convergence is not uniform (unless $\tau$ is bounded, then $\tau \wedge n = \tau$ for all $n$ larger than the bound).
The main subtleties in $X_{\tau \wedge n} \rightarrow X_\tau$ are questions of whether $X_\tau$ and $X_{\tau \wedge n}$ are measurable and if $\tau$ can be infinite. If $X_t$ is progressively measurable then $X_\tau$ and $X_{\tau \wedge n}$ are measurable. If $\mathbb{P}(\tau = \infty) > 0$, then we need that there exists a random variable $X_\infty$ such that $\lim_{t \rightarrow \infty} X_t = X_\infty$ a.s. to have $X_{\tau \wedge n} \rightarrow X_\tau$.
|
H: Defining discrete set
$\mathcal{S}$ is a set with discrete elements. Is it a good practice to write $|\mathcal{S}| < \infty$ in order to say $\mathcal{S}$ is discrete? This is not a common notation, but is short and effective in my view.
Note: it also does not have infinitely many elements.
AI: This doesn't actually make sense. See here for more information about why a set with discrete elements isn't an actual thing.
$|\mathcal S|<\infty$ means that $\mathcal S$ is finite, and tells you nothing about the topology on the set. It's perfectly fine to use this notation to denote finiteness.
|
H: Multiplying $P(x) = (x-1)(x-2) \dots (x-50)$ and $Q(x)=(x+1)(x+2) \cdots(x+50)$
Let $P(x) = (x-1)(x-2) \dots (x-50)$ and $Q(x)=(x+1)(x+2) \cdots(x+50).$
If $P(x)Q(x) = a_{100}x^{100} + a_{99}x^{99} + \dots + a_{1}x^{1} + a_0$, compute $a_{100} - a_{99} - a_{98} - a_{97}.$
I've been quite stuck with this one. If I multiply and group the polynomials with the similar terms e.g $(x-1)$ and $(x+1) ...$ I would get
$P(x)Q(x)= (x^2-1)(x^2-2^2) \dots(x^2-50^2)$, right? From here on, if I kept multiplying the terms, wouldn't I end up with the product of $50$ copies of $x^2$ as the leading term, hence $a_{100} = 1$, since it wouldn't have any other coefficient? For the other terms I don't have a clue how to find them, so any hints would be appreciated.
AI: Clearly $a_{99}=a_{97}=0$, since only even degree terms appear. As you've noticed, $a_{100}=1$; so really we just need to find $a_{98}$. But this is just negative $1$ times the sum of squares up to $50$, i.e. $-\sum_{k=1}^{50} k^2 =- \frac{50\cdot 51\cdot 101}{6}=-42925$. So the answer is $42926$.
Edit: it seems I was a bit sparse with my explanation. Claim: the coefficient of $x^{2n-2}$ in $\prod_{i=1}^{n}(x^2-i^2)$ is $-\sum_{i=1}^{n}i^2 = -\frac{n(n+1)(2n+1)}{6}$. The claim is immediate when $n=1$ or $n=2$:
$$
(n=1)\qquad \prod_{i=1}^{1}(x^2-i^2)= x^2-1;\, 1 = \frac{1\cdot 2 \cdot 3}{6}
$$
$$
(n=2)\qquad \prod_{i=1}^{2}(x^2-i^2)= x^4-5x^2+4;\, 5 = \frac{2\cdot 3 \cdot 5}{6}
$$Suppose the result is true for some $k$. Then
$$
\prod_{i=1}^{k+1 }(x^2-i^2) = \left(\prod_{i=1}^{k }(x^2-i^2)\right)\cdot(x^2-(k+1)^2)
$$
$$
= \left(x^{2k}-\sum_{i=1}^{k}i^2 \cdot x^{2k-2} +\cdots\right)\cdot(x^2-(k+1)^2)
$$
$$
= \left(x^{2k+2}-((k+1)^2+\sum_{i=1}^{k}i^2) \cdot x^{2k} +\cdots\right),
$$as was to be shown.
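For a quick machine verification of the coefficient (exact integer arithmetic; illustrative only):

```python
# Expand prod_{i=1}^{50} (x^2 - i^2); coeffs[d] is the coefficient of x^d.
coeffs = [1]
for i in range(1, 51):
    sq = i * i
    new = [0] * (len(coeffs) + 2)
    for d, c in enumerate(coeffs):
        new[d + 2] += c                    # contribution of x^2 * (c x^d)
        new[d] -= sq * c                   # contribution of -i^2 * (c x^d)
    coeffs = new
print(coeffs[100], coeffs[99], coeffs[98], coeffs[97])        # 1 0 -42925 0
print(coeffs[100] - coeffs[99] - coeffs[98] - coeffs[97])     # 42926
```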
|
H: Prove that every measure space has a completion.
I am following the book An Introduction to Measure and Integration by Prof. Inder K. Rana for my measure theory course. While going through this book I came across the aforesaid theorem, which roughly states that every measure space has a completion. The proof he presents is rigorous and easy to read. But in the entire proof he didn't show anywhere that the measure space $(X, \mathcal S \cup \mathcal N, \overline {\mu})$ is complete. How can this be proved?
Well, we only need to show that if $E \subseteq A \in \mathcal S \cup \mathcal N$ with $\overline {\mu} (A) = 0$ then $E \in \mathcal S \cup \mathcal N.$ Now the author has also shown that for any $S \cup N \in \mathcal S \cup \mathcal N$ with $S \in \mathcal S, N \in \mathcal N$, $\overline {\mu} (S \cup N) = \mu (S).$ So $\overline{\mu} (A) = 0 \implies \mu (S) = 0,$ where $A = S \cup N$ for some $S \in \mathcal S, N \in \mathcal N.$ This shows that $S \in \mathcal N.$ But then $A \in \mathcal N,$ since $\mathcal N$ is closed under countable unions. How does it imply that $E \in \mathcal S \cup \mathcal N$? Any help in this regard will be highly appreciated.
Thank you very much for your valuable time for reading.
AI: Since $\mu(S) = 0$ we know that $S \cap E \in \mathcal N$. Since $N \in \mathcal N$ there exists $B \in \mathcal S$ with $\mu(B) = 0$ and $N \subset B$, so $(N \cap E) \subset N \subset B$ and hence $N \cap E \in \mathcal N$. Since $E \subset S \cup N$, $E = (S \cap E) \cup (N \cap E) \in \mathcal N$ so in particular $E \in \mathcal S \cup \mathcal N$.
|
H: Proving integral transform using banach fixed-point theorem
I'm currently working on the following problem:
Let $K: [0,1]^2 \to \mathbb{R}$ be continuous with $|K(x,y)| < 1$ for all $(x,y) \in [0,1]^2$. Prove the existence of a function $f \in C([0,1])$ s.t.
$$f(x) + \int_0^1 f(y) K(x,y) dy = e^{(x^2)}$$
for all $x \in [0,1]$. Is $f$ also unique?
I'm given the hint that I should use the banach fixed point theorem; and additionally that I should use the compactness of $[0,1] \times [0,1]$ to show that $\max|K(x,y)| < 1$. Honestly, this just confuses me more, as the task already states that $|K(x,y)| < 1$, so I'm not sure how it'll help.
All in all I'm admittedly at a loss with this problem, and I'd appreciate it if somebody could give some input on this.
AI: What you want to do is start with an $f$ and iteratively construct functions that fit the equation better.
Specifically, you let $f_1(x)\equiv e^{x^2}$ and then define $$f_{n+1}(x)=e^{x^2}-\int_0^1f_n(y)K(x,y)\,dy.$$
Finally, we want to show that $f_n$ converges uniformly towards a function $f$ that satisfies the properties in the question.
Note that $$|f_{n+1}(x)-f_n(x)|=\left|\int_0^1(f_n(y)-f_{n-1}(y))K(x,y)\,dy\right|\le\int_0^1|f_n-f_{n-1}|\,|K|\,dy\le q\,\|f_n-f_{n-1}\|_\infty$$ with $q=\max_{[0,1]^2}|K|<1$: the maximum exists and is attained because $K$ is continuous on the compact set $[0,1]^2$, which is exactly where the hint about compactness comes in; the pointwise bound $|K(x,y)|<1$ alone would not give a uniform contraction constant.
Thus, by the Banach fixed-point theorem, the functions converge uniformly towards a solution, $f$, of the equation. A straightforward application of the uniform limit theorem tells us that $f\in C([0,1])$ (after verifying that every $f_n\in C([0,1])$ as well).
As for uniqueness, suppose $f,g$ satisfy the equation. Subtracting the two, we get that $$f(x)-g(x)+\int_0^1(f(y)-g(y))K(x,y)\,dy=0.$$ Let $h=f-g$, and $t\in[0,1]$ such that $|h(t)|$ is maximal (such a $t$ exists by compactness). If $|h(t)|>0$, then $$|h(t)|=\left|\int_0^1h(y)K(t,y)\,dy\right|\le\int_0^1|h(y)||K|\,dy<\int_0^1|h|\,dy\le|h(t)|,$$ a contradiction (the strict inequality uses $|K|<1$ together with $\int_0^1|h|\,dy>0$). Therefore $h=0$ and thus $f=g$, so the equation does indeed have a unique solution.
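To make the iteration concrete, here is a hedged numerical sketch; the kernel $K(x,y)=\frac12\sin(xy)$ is a hypothetical choice satisfying $|K|<1$, and the integral is approximated by a plain Riemann sum:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
dx = xs[1] - xs[0]
Kmat = 0.5 * np.sin(np.outer(xs, xs))       # K(x_i, y_j) sampled on a grid
rhs = np.exp(xs**2)

f = rhs.copy()                              # f_1(x) = e^{x^2}
for _ in range(200):
    integral = (Kmat * f).sum(axis=1) * dx  # Riemann sum for the integral term
    f_new = rhs - integral
    if np.max(np.abs(f_new - f)) < 1e-13:
        break
    f = f_new

residual = f + (Kmat * f).sum(axis=1) * dx - rhs
print(np.max(np.abs(residual)))             # essentially zero: f solves the equation
```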
|
H: How can we find the shortest paths in the boundaries of a matrix?
This is another version of a previous question.
In this image,
we have all the points within the shape (yellow area), which can be represented as a matrix.
We aim to connect the boundary points (black dots) within the shape.
How can we find, for each boundary point, the closest boundary point such that the circle having the resulting segment as its diameter lies inside the shape (yellow area)?
The basic solution is to check all points for any given point. The key is to reduce the number of calculations, as I wish to scale up this approach for large and complicated systems.
AI: If the circle with diameter $BA$ is supposed to be within the region bounded by your curve, and that curve has tangent lines at $B$ and $A$, then those tangent lines must be perpendicular to $BA$. So given $B$,
take a line perpendicular to the tangent line to the curve at $B$, and see where it first intersects the curve. That point must be $A$. But if $BA$ is not perpendicular to the tangent line to the curve at $A$, then there is no solution.
Of course it is still not guaranteed that the circle will be within the region, but if there is a solution it must be this one.
|
H: Rolle's theorem, such that $f'(c)=2c$
We have that the function $f: \mathbb R \xrightarrow{}\mathbb R$ is differentiable and satisfies $f(0)=0$ and $f(1)=1$.
I need to use Rolle's Theorem to show that there exists $c\in(0,1)$ such that $f'(c)=2c$.
I am unsure how to proceed considering we do not have $f(a)=f(b)$.
AI: Consider $g(x)=f(x)-x^2$. We know that $g(0)=g(1)=0$ and the question is now equivalent to finding $c\in(0,1)$ such that $$f'(c)=2c\iff g'(c)=0,$$ which is of course just Rolle's.
|
H: Simplification of ${0 \binom{n}{0} + 2 \binom{n}{2} + 4 \binom{n}{4} + 6 \binom{n}{6} + \cdots}$
Simplify
$$0 \binom{n}{0} + 2 \binom{n}{2} + 4 \binom{n}{4} + 6 \binom{n}{6} + \cdots,$$ where $n \ge 2.$
I think we can write this as the summation $\displaystyle\sum_{i=0}^{n} 2i\binom{n}{2i},$ which simplifies to $\boxed{n\cdot2^{n-2}}.$ Am I on the right track?
AI: Sure. Notice that
$$\sum _{i = 0}^{n}2i\binom{n}{2i}=\sum _{i = 0}^{n}\binom{2i}{1}\binom{n}{2i}=\sum _{i = 1}^{n}\binom{n}{1}\binom{n-1}{2i-1}=n\sum _{i=1}^n\binom{n-1}{2i-1}=n\cdot 2^{n-2}.$$
Notice that the last step is because you are adding half of the binomials, and the odd half equals the even half by the binomial theorem.
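A quick computational check of the identity for small $n$ (illustrative):

```python
from math import comb

for n in range(2, 12):
    lhs = sum(2 * i * comb(n, 2 * i) for i in range(0, n // 2 + 1))
    assert lhs == n * 2**(n - 2)
print("identity verified for n = 2..11")
```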
|
H: If $M$ is a martingale, show that $X_t = M_t − kt^2$ is a supermartingale iff $k$ is non-negative.
If $M$ is a martingale, show that $X_t = M_t − kt^2$ is a supermartingale iff $k$ is non-negative.
A supermartingale is defined by the property $X_s \ge \mathbb E[X_t|\mathcal F_s]$ when $s\le t$. Clearly $X$ is integrable and adapted if $M$ is, since $kt^2$ is deterministic. The main thing then is that we know
$$ M_s = \mathbb E(M_t|\mathcal F_s)$$
I just don't know how to apply this to get a supermartingale for $X$?
AI: Assume $k\geq0$. For $s\leq t$ we (almost surely) have
$$\mathbb E[X_t|\mathcal F_s] = \mathbb E[M_t-kt^2|\mathcal F_s] = \mathbb E[M_t|\mathcal F_s] -kt^2 = M_s - kt^2 \leq M_s -ks^2 = X_s,$$ where we have used linearity of conditional expectation, the fact that $(M_t)_t$ is a Martingale and $k\geq0$. Hence, $(X_t)_t$ is a supermartingale.
For the other direction, assume $(X_t)_t$ is a supermartingale. Then, for $0<s< t$
$$X_s = M_s -ks^2 \geq\mathbb E[X_t|\mathcal F_s] = \mathbb E[M_t-kt^2|\mathcal F_s] = \mathbb E[M_t|\mathcal F_s] -kt^2 = M_s -kt^2, $$ which implies that $k\geq0$.
|
H: Intuition behind area of ellipse
This is not meant to be a formal proof, but I just wanted to know if this is a valid way of thinking about the area of an ellipse. It does assume knowledge of the area of a circle, but this can be proven without knowledge of the area of an ellipse. I also don't know how to include pictures, so please excuse that.
Draw an ellipse with semi-major axis $a$ and semi-minor axis $b$. Because $a$ and $b$ are both linear quantities (i.e. they have units of distance), $k=\frac{b}{a}$ is dimensionless. Hence, we can draw a new ellipse with a semi-major axis of $ka$ and a semi-minor axis of $b$. Because $b=ka$, this new ellipse is a circle of radius $b$, so it has an area of $$A_{circ}=\pi b^2.$$ Because $a$ and $b$ are both linear, the area of the first ellipse, $A$, can be expressed as the product of these two quantities and some constant. Also, because $A$ was only scaled by a factor of $k$ in one dimension to get $A_{circ}$, $$kA=A_{circ}.$$ Substituting, $$\left(\frac{b}{a}\right)A=\pi b^2$$ $$\therefore \boxed{A=\pi ab}$$ as desired. $\blacksquare$
AI: This is completely non rigorous, but $\pi ab$ is kind of the only thing it can be. If we find a formula for the area of an ellipse, it needs to meet a few conditions:
1: It only depends on $a$ and $b$
2: When either $a=0$, $b=0$, or both, $A=0$.
3: When $a=b$, the formula must reduce to $\pi a^2$
4: It must follow the typical properties of area: scaling one of the dimensions by a factor $k$ should produce an area scaled by a factor of $k$ and scaling both by a factor of $k$ should produce an area scaled by a factor of $k^2$.
5: It must be "symmetric" to $a$ and $b$, that is, interchanging them doesn't change the formula.
You can think long and hard about this, but $A=\pi ab$ is the only thing that works.
|
H: Showing that disjoint intervals implies independent number of arrivals for random point set
If we have a number of i.i.d. random variables, $X_1, X_2, ..., X_z$ (where $z$ is the realisation of a random variable $Z\sim Po(\lambda)$ independent of each $X_i$) with pdf $f$ that form a random point set, then I want to show that for two disjoint sets (in this case intervals), we have that $N(A_1)$ and $N(A_2)$ are independent. These are the number of "arrivals" (as in a Poisson Process - but I haven't shown it to be one yet) in said intervals. I will denote $p_i:=\int_{A_i} f\,dx$ to be the probability of being in the interval $A_i$.
I want to show $P(N(A_1)=n\space\cap\space N(A_2)=m)=P(N(A_1)=n)P(N(A_2)=m)$. If we only consider the cases where $n+m\leq z$, then
$$P(N(A_1)=n\space\cap\space N(A_2)=m)=P(N(A_1)=n)P(N(A_2)=m|N(A_1)=n)$$
So I want that $P(N(A_2)=m|N(A_1)=n)=P(N(A_2)=m)$. However,
$$P(N(A_2)=m|N(A_1)=n)=\binom{z-n}{m}p_2^m(1-p_2)^{z-n-m}$$
is definitely not the same expression as the desired one, i.e. $\binom{z}{m}p_2^m(1-p_2)^{z-m}$. Where did I go wrong? I know for a fact that the set must describe a Poisson Process, so I must have made an error somewhere.
AI: The point process you are considering is not a Poisson point process (PPP) but rather a binomial point process (BPP). Indeed, for a PPP, $N(A)$ is Poisson distributed while in your case $N(A)$ is a binomial random variable. In particular, even if $A_1$ and $A_2$ are disjoint, $N(A_1)$ and $N(A_2)$ are dependent since
$$N(A_1) + N(A_2) = N(A_1 \cup A_2) \leq z.$$
I should note however that there is an error in your calculation. We have
$$P(N(A_1) = n, \, N(A_2) = m) = \binom{z}{n} \binom{z-n}{m}p_1^np_2^m (1-p_1-p_2)^{z-n-m}\tag{1},$$
which is saying that we want $n$ points in $A_1$, $m$ points in $A_2$ AND $z-n-m$ points in $\mathbb{R}\setminus (A_1\cup A_2)$.
Final note: if you want a PPP then you should take $z$ Poisson distributed (instead of deterministic) and independent of the $X_i$.
EDIT:
Let $Z$ be Poisson distributed with mean $\lambda$ and set $N = \sum_{i=1}^Z\delta_{X_i}$. Then
$$\begin{align}
P(N(A_1) = n, \, N(A_2) = m) &= \sum_{z=0}^\infty P(N(A_1) = n,\, N(A_2) = m, \, Z = z) \\
&= \sum_{z=n+m}^\infty P(N(A_1) = n,\, N(A_2) = m, \, Z = z)\\
&=\sum_{z=n+m}^\infty P(N(A_1) = n,\, N(A_2) = m| Z = z)e^{-\lambda}\frac{\lambda^z}{z!},
\end{align}$$
where I used that the summand is zero if $z < n+m$ for the second equality and that $Z \sim Po(\lambda)$ for the last. Now, conditionally on $Z=z$, the computation above is valid and gives that
$$\begin{align}P(N(A_1) = n, \, N(A_2) = m|Z=z) &= \binom{z}{n} \binom{z-n}{m}p_1^np_2^m (1-p_1-p_2)^{z-n-m} \\
&= \frac{z!}{n! m!(z-n-m)!}p_1^n p_2^m (1-p_1-p_2)^{z-n-m}.
\end{align}$$
We deduce that
$$\begin{align}
P(N(A_1) = n, \, N(A_2) = m) &= \frac{p_1^n p_2^m e^{-\lambda}}{n!m!}\sum_{z=n+m}^\infty \frac{\lambda^z (1-p_1-p_2)^{z-n-m}}{(z-n-m)!} \\
&= \frac{p_1^n p_2^m \lambda^{n+m}e^{-\lambda}}{n!m!}\sum_{z=0}^\infty \frac{\lambda^z (1-p_1-p_2)^{z}}{z!} \\
&= \frac{p_1^n p_2^m \lambda^{n+m}}{n! m!} e^{-\lambda(p_1+p_2)} \tag{2}.
\end{align}$$
On the other hand, summing over all $m \in \mathbb{N}$ in $(2)$, we get that
$$P(N(A_1) = n) = \frac{(\lambda p_1)^n}{n!}e^{-\lambda p_1},$$
i.e. $N(A_1) \sim Po(\lambda p_1)$. Similarly $N(A_2) \sim Po(\lambda p_2)$. It follows that $$P(N(A_1) = n, N(A_2) = m) = P(N(A_1) =n)P(N(A_2) =m).$$
|
H: I do not understand an inequality used to prove that e is bigger/equal to exp(1)
I have seen the following inequalities:
$$e \geq 1+1+\sum \limits_{k=2}^{n}\frac{1}{k!}=s_{n}
\implies e\geq \lim_{n\to\infty}s_{n}=\exp(1)$$
Based on this information, how can one explain the implication? Why would $$e\geq \lim_{n\to\infty}s_{n}$$ be true? I only know that $e \geq 1+1+\sum \limits_{k=2}^{n}\frac{1}{k!}=s_{n}$.
Wouldn't the implication also imply $\sum \limits_{k=0}^{n}\frac{1}{k!} = \sum \limits_{k=0}^{\infty}\frac{1}{k!} = \lim_{n\to\infty}s_{n}$?
AI: Whenever you have a convergent sequence $(x_n)_{n\in\Bbb N}$ and a number $M$ such that $(\forall n\in\Bbb N):x_n\leqslant M$, you also have $\lim_{n\to\infty}x_n\leqslant M$. If you apply this to $(s_n)_{n\in\Bbb N}$ and to the number $e$, you get what you want.
|
H: How to find residues of $\frac{1+z}{1-\sin z}$
I have to compute an integral:
$$\int_C \frac{1+z}{1-\sin z}\,dz$$
where $C$ is the circle of radius $8$ centered at the origin.
I would do this way:
the function is holomorphic in $\mathbb{C}\setminus\{\frac{\pi}{2}+2k\pi : k\in\mathbb{Z}\}$ (the points where $\sin z=1$),
$C$ is homologous to $0$ in $\mathbb{C}$ and the singularities don't intersect the curve, so I can use the residue theorem: $$\int_C \frac{1+z}{1-\sin z}\,dz=2\pi i( Res_{\frac{\pi}{2}}(f)+Res_{\frac{\pi}{2}+2\pi}(f)+Res_{\frac{\pi}{2}-2\pi}(f))$$ where $$\frac{\pi}{2},\frac{\pi}{2}+2\pi,\frac{\pi}{2}-2\pi$$ are the only singularities inside the circle of radius $8$.
Now my problem is how to compute the residues: I did this:
$$\frac{1+z}{1-\sin z}=\frac{1+z}{1-\cos(z-\frac{\pi}{2})}\frac{(z-\frac{\pi}{2})^2}{(z-\frac{\pi}{2})^2}=[\frac{1+z}{1-\cos(z-\frac{\pi}{2})}(z-\frac{\pi}{2})^2]\frac{1}{(z-\frac{\pi}{2})^2}$$
now, say $g$ is the function in the square brackets. $g$ has a removable singularity at $\frac{\pi}{2}$, and hence the residue of $f$ at $\frac{\pi}{2}$ is $g'(\frac{\pi}{2})$. The problem is that $g'$ is not defined at $\frac{\pi}{2}$.
Is there a more elegant way to proceed?
AI: Let's try the residue at $z = k\pi/2$ where $k \equiv 1 \mod 4$. If $w = z - k \pi/2$, near $w=0$ we have
$$\eqalign{\sin(z) &= \cos(w) = 1 - \frac{1}{2} w^2 + O(w^4)\cr
\dfrac{1}{1 - \sin(z)} &= \frac{2}{w^2} + O(w^0)\cr
\dfrac{1+z}{1-\sin(z)} &= (1 + k\pi/2 + w)\left(\frac{2}{w^2}+O(w^0)\right)\cr
&= \frac{(2 + k \pi)}{w^2} + \frac{2}{w} + O(w^0) } $$
So the residue is the coefficient of $w^{-1}$, namely $2$.
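If you want to double-check the residues symbolically, here is a short sketch using sympy (illustrative; it relies on sympy's series-based residue computation):

```python
import sympy as sp

z = sp.symbols('z')
f = (1 + z) / (1 - sp.sin(z))

for pole in (sp.pi/2, sp.pi/2 + 2*sp.pi, sp.pi/2 - 2*sp.pi):
    print(sp.residue(f, z, pole))       # each prints 2
# Hence the integral equals 2*pi*I*(2 + 2 + 2) = 12*pi*I.
```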
|
H: Equilateral triangle in a regular pentagon
I don't understand how the line segment $|BF|$ equals $|BC| = |FC| = |AB| = |AE|$. How can the line segment maintain the equilateral triangle? The instructor says while drawing that it just maintains an equilateral triangle, but not how.
AI: $|BC|=|AE|$ because it's a regular pentagon; then $|BC|=|FC|$ by transitivity of equality. $\angle FCB$ is $60^{\circ}$ by subtracting $48^\circ$ from $108^\circ$; now you have SAS on the triangle $\triangle BCF$, so the rest of the information is uniquely determined. Since an equilateral triangle obviously has two equal sides and a $60^{\circ}$ angle, it must be the unique triangle fitting this SAS relation. But note that $\triangle BAF$ isn't equilateral; despite how the diagram is drawn, $ABCF$ isn't an equilateral rhombus.
|
H: Proof that $\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n}{2}}=1$ without L'Hospital
I proved that $$\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n}{2}}=1$$
using L'Hospital's rule. But is there a way to prove it without L'Hospital's rule? I tried splitting it as
$$\lim_{n\to\infty}n^{-n}(n^2+x^2)^{\frac{n}{2}},$$
but that didn't work because $\lim_{n\to\infty}(n^2+x^2)^{\frac{n}{2}}$ diverges.
AI: METHODOLOGY $1$: Direct Application of Bernoulli's Inequality
Note that for $n>|x|$
$$1\le \left(1+\frac{x^2}{n^2}\right)^{n/2}\le \frac1{\left(1-\frac{x^2}{n^2}\right)^{n/2}}\le \frac1{1-\frac{x^2}{2n}}$$
where we used Bernoulli's inequality to arrive at the last inequality.
Now apply the squeeze theorem to find
$$\lim_{n\to \infty}\left(1+\frac{x^2}{n^2}\right)^{n/2}=1$$
METHODOLOGY $2$: Using Estimates of the Logarithm Function
Note that we may write
$$\left(1+\frac{x^2}{n^2}\right)^{n/2}=e^{(n/2)\log\left(1+\frac{x^2}{n^2}\right)}\tag 1$$
In This Answer, I used elementary, pre-calculus tools to obtain the inequalities
$$\frac{x}{1+x}\le \log(1+x)\le x \tag2$$
Using $(2)$ in $(1)$ reveals
$$e^{nx^2/(2n^2+2x^2)}\le e^{(n/2)\log\left(1+\frac{x^2}{n^2}\right)}\le e^{x^2/2n}$$
whence application of the squeeze theorem yields the coveted result
$$\lim_{n\to \infty}\left(1+\frac{x^2}{n^2}\right)^{n/2}=1$$
as expected!
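A quick numerical illustration of the limit, with a hypothetical value $x=3$:

```python
x = 3.0
for n in (10, 10**2, 10**4, 10**6):
    print(n, (1 + x**2 / n**2) ** (n / 2))
# ~1.54, ~1.046, ~1.00045, ~1.0000045: approaching 1, as proved above
```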
|
H: Continuity is a local property: topology, proof
Consider topological spaces $X, Y$ and $f: X\to Y$, $x\in X$.
Show: if $f$ is continuous in $x\in V\subseteq X$, then $f_{\mid V}: V\to Y$ is continuous in $x$.
Show: if $f_{\mid V}: V\to Y$ is continuous in $x$ and $V\subseteq X$ is a neighborhood of $x$, then $f$ is continuous in $x$.
My attempt:
Let $U\in\tau_{Y}, f_{\mid V}(x)\in U$. Then $f_{\mid V}^{-1}(U) = f^{-1}(U)\cap V$. I believe that this is an open neighborhood of $x$ in $V$, as $f_{\mid V}(x)=f(x)$, implying $x\in f^{-1}(U)$ and the continuity of $f$ in $x$ implies $f^{-1}(U)\in\tau_X$.
Let $U\in\tau_Y, f(x)\in U$. The function $f_{\mid V}$ is continuous in $x$ and $f(x)=f_{\mid V}(x)$ (?), so $f_{\mid V}^{-1}(U)$ is an open neighborhood of $x$. Now, I'm stuck and I don't know how to introduce the fact that $V$ is a neighborhood of $x$ and how to go from $f_{\mid V}$ to $f$.
Is my reasoning for the first bullet point correct? Any hints for the second one?
Thanks!
AI: Let $f:X \to Y$ be a function between topological spaces, and $x\in X$ a point of $X$. Then you say that $f$ is continuous at $x$ if for every open subset $B$ of $Y$ such that $f(x)\in B$ there exists an open subset $A$ of $X$ containing $x$ such that $f(A)\subseteq B$.
Now suppose that $f$ as above is continuous at $x$ and take an open subset $B$ of $Y$ which contains $f(x)$. You are looking for an open subset $A$ of $V$, containing $x$, such that $f_{|V}(A)\subseteq B$. Since $f$ is continuous at $x$, you know there exists an open subset $A'$ of $X$, containing $x$, such that $f(A')\subseteq B$. Then consider the intersection $A=V\cap A'$. It is an open subset of $V$, because it is the intersection of the subspace $V$ of $X$ with an open subset of $X$; it contains $x$; and for every $y\in V\cap A'$, $f_{|V}(y)=f(y)\in B$. This shows that the restriction of $f$ to $V$ is indeed continuous at $x$.
Conversely, suppose that $f$ restricted to a neighborhood $V$ of $x$ is continuous at $x$. Take an open subset $B$ of $Y$ containing $f(x)$ and seek an open subset $A$ of $X$ containing $x$ such that $f(A)\subseteq B$. Since $f_{|V}$ is continuous at $x$, you know there exists an open subset $A'$ of $V$, containing $x$, such that $f_{|V}(A')\subseteq B$. But open subsets of $V$ have the form $A'=V\cap G$, for some open subset $G$ of $X$. If $V$ is open, you are done, with $A=A'$; otherwise you may choose a subset $U$ of $V$ that is open in $X$ and contains $x$, and take $A=U\cap G$.
|
H: Calculate marginal distributions and correlation coefficient
Task
We have the density of a random vector:
$$f(x,y) =\begin{cases}\frac{1}{5} & \text{for } 0<x<1\;,2<y<7\\0 & \text{for other } x,y\end{cases}$$
a) Calculate the marginal distributions
b) Correlation coefficient for X,Y
================================================================
a)
$$f_X(x) =\begin{cases} \int_0^1 \frac{1}{5} \, dy= \frac{1}{5} & \text{for } 0<x<1 \\0 & \text{for other} \end{cases}$$
$f_Y(y)$ will also be $\frac{1}{5}$
b) Correlation coefficient is:
$$p(X,Y)= \frac{\operatorname{cov}(X,Y)}{ \sqrt{D^2 X}\sqrt{D^2 Y} } $$
$E(X,Y)=(EX,EY) = (\frac{1}{5},\frac{1}{5})$
what I don't know how to calculate in this task:
$EXY=\text{?} \; (\frac{1}{25}?)$
$D^2 X=\text{?}$
$D^2 Y=\text{?}$
Any help would be appreciated.
AI: Your calculations in (a) are incorrect. Correct is
$$f_X(x) =\begin{cases} \int_2^7 \frac{1}{5}\mathop{dy}= 1 & \text{if } 0<x<1 \\0 & \text{else} \end{cases}.$$
For $Y$ we find that
$$f_Y(y) =\begin{cases} \int_0^1 \frac{1}{5}\mathop{dx}= \frac{1}{5} & \text{if } 2<y<7 \\0 & \text{else} \end{cases}.$$
Your expected values are incorrect as well. For example,
$$\mathbb E X = \int_0^1 x f_X(x) \mathop{dx} = \int_0^1 x \mathop{dx} = \frac{x^2}{2}\Bigr|_{x=0}^1 = \frac{1}{2}.$$
The variance is
$$\operatorname{Var}X = \mathbb E (X^2) - (\mathbb EX)^2 = \int_0^1 x^2 f_X(x)\mathop{dx} - \frac{1}{4} = \int_0^1 x^2 \mathop{dx} - \frac{1}{4}= \frac{x^3}{3}\Bigr|_{x=0}^1 -\frac{1}{4}= \frac{1}{3}-\frac{1}{4} = \frac{1}{12}.$$
I leave the analogous calculations for $Y$ up to you.
Moreover,
$$\mathbb E (XY) = \int_0^1\int_2^7 xyf(x,y) \mathop{dy} \mathop{dx}=\int_0^1\int_2^7 \frac{xy}{5} \mathop{dy} \mathop{dx} = \int_0^1\frac{xy^2}{10}\Bigr|_{y=2}^7 \mathop{dx} =\int_0^1\frac{9x}{2} \mathop{dx} = \frac{9}{4}.$$
If you succeeded in finding $\mathbb E Y$, you can calculate the covariance using $\operatorname{Cov}(X,Y) = \mathbb E(XY) - \mathbb E X\mathbb E Y$.
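If you want to cross-check these values numerically, a hedged scipy sketch (illustrative):

```python
from scipy import integrate

density = 1 / 5   # the joint density on (0,1) x (2,7)

# dblquad expects func(y, x); x runs over (0,1) and y over (2,7).
EX, _  = integrate.dblquad(lambda y, x: x * density, 0, 1, lambda x: 2, lambda x: 7)
EX2, _ = integrate.dblquad(lambda y, x: x**2 * density, 0, 1, lambda x: 2, lambda x: 7)
EXY, _ = integrate.dblquad(lambda y, x: x * y * density, 0, 1, lambda x: 2, lambda x: 7)
print(EX, EX2 - EX**2, EXY)   # 0.5, 0.0833..., 2.25
```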
|
H: The intersection of maximal subgroups of a group lies in a maximal subgroup of that group
I am trying to prove that the intersection of maximal subgroups of a finite group lies in a maximal subgroup of that group.
My question: Can someone please verify my proof below? I am afraid that the two statements in blue are contradictory. Is it really the case?
Proof: Let $G$ be finite. Suppose $K \leq G$ and $[G:K]$ is prime. Then,
\begin{equation}\label{amend}
[G:K] = \frac{\left|G\right|}{\left|K\right|} = p
\end{equation}
where $p$ is a prime. Then, $\left|G\right| > \left|K\right|$, implying that $K$ is a proper subgroup of $G$. $\color{blue}{\textrm{Then, $K$ must be contained in some maximal subgroup of $G$}}$ by the hint; denote such a maximal subgroup of $G$ containing $K$ by $V$. Clearly, $K \leq V$.
This implies $K \leq V \leq G$ and
\begin{equation*}
[G:K] = [G:V] [V:K] = p
\end{equation*}
Since $p$ is prime, either $[G:V] = 1 \implies \left|G\right| = \left|V\right| \left( \textrm{and } [V:K] = p\right)$ or $\left([G:V] = p \textrm{ and}\right) [V:K] = 1 \implies \left|V\right| = \left|K\right|$.
Thus, either $\left|G\right| = \left|V\right|$ or $\left|V\right| = \left|K\right|$ which shows that $\color{blue}{\textrm{$K$ is a maximal subgroup of $G$}}$. Then, clearly, $M(G) \subseteq K$.
AI: This is correct. However, you're given that $G$ is finite, so you don't need to check that every subgroup and index is finite along the way. Since $K$ is a subgroup of $G$, you do not need to check that it is a subgroup of $V$. Given that $K$ is a subgroup of $V$ and $V$ is a subgroup of $G$, and $K$ is of prime index, you correctly deduced that $V=K$, i.e. $K$ is maximal. This does not contradict the fact that $K$ is contained in a maximal subgroup; it is contained in itself, though not properly, which is fine.
|
H: Unknown index in model-theoretic considerations
What is $k$ in $${}^kM$$ here, on page $11$, in Definition $2.10$?
AI: It appears to be an instance of the standard notation ${{^A}B}$ for the set of maps from $A$ to $B$. Here $k\in\omega$, so it’s maps from $k$ to $M$, i.e., effectively $k$-tuples from $M$.
|
H: problem about Cantor-Schroeder-Bernstein theorem
Suppose we need to prove $|(0,1)|=|(0,1]|$.
First, for $x\in(0,1)$:
$f:(0,1)\to(0,1]$, $f(x)=x$, injective.
Then, for $x\in(0,1]$:
$g:(0,1]\to(0,1)$, $g(x)=\frac{x}{2}$, injective.
So according to the Schröder–Bernstein theorem there is a bijection $(0,1)\leftrightarrow(0,1]$; but $0.6$ is not covered by $g(x)$, so $g$ is not surjective and hence not bijective?
AI: The theorem says that if there are injective maps $f:A\to B$ and $g:B\to A$ then there exists some bijection between $A$ and $B$. It doesn't say that the specific maps $f$ and $g$ are bijections. (and in your example $f$ and $g$ are indeed not bijections)
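As an aside, one well-known explicit bijection between these intervals shifts the chain $1,\tfrac12,\tfrac13,\dots$ one step along. Here is a minimal sketch of mine (restricted to rational inputs so the test "$x=1/n$" is exact; on all other points the map is the identity):

```python
# Sketch of an explicit bijection h : (0,1] -> (0,1) (my own illustration):
# 1/n -> 1/(n+1) for every positive integer n; every other point is fixed.
from fractions import Fraction

def h(x: Fraction) -> Fraction:
    if x.numerator == 1:                      # x is 1/n for some n >= 1
        return Fraction(1, x.denominator + 1)
    return x

print(h(Fraction(1)), h(Fraction(1, 2)), h(Fraction(3, 4)))
# 1/2 1/3 3/4  -- injective, and every point of (0,1) is hit exactly once
```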
|
H: Complex inner product space.
Problem from Schaum’s Outlines of Linear Algebra 6th Ed (2017, McGraw-Hill)
I proved that $a$ and $d$ must be real and positive, and that $b$ is the conjugate of $c$.
The solution indicates that $ad-bc$ must also be positive, but I can't figure that out.
Thanks for your help.
AI: Hint 1: $f(u,v) = u \begin{pmatrix}a & b \\ c & d \end{pmatrix} \overline{v}^T$.
Hint 2:
What happens to $f(u,u)$ if the determinant of that matrix is negative?
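To make the hint concrete, here is a numeric example of my own (not from the book): with $a=d=1$ and $b=c=2$ the matrix is Hermitian but $ad-bc=-3<0$, and positivity of $f(u,u)$ fails:

```python
# My own counterexample: negative determinant breaks positive-definiteness.
import numpy as np

A = np.array([[1, 2], [2, 1]])   # a = d = 1, b = conj(c) = 2, det = -3
u = np.array([1, -1])
print(u @ A @ u.conj())          # -2: f(u,u) < 0, so f is not an inner product
```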
|
H: Is it correct to express the sum set using a "for all" quantifier in place of the existential quantifier?
From ZFC Axiom of union we have: $$(\forall x)(\exists y)(\forall u)(u \in y \Leftrightarrow (\exists v)(u \in v \land v \in x))$$
My interpretation of this is that for any set $x$, it guarantees that a set $y$ exists containing every member of a member of $x$. I am trying to solve an exercise: prove that a set $z$ is transitive if, and only if, the sum set of $z$ is a subset of $z$. I started by assuming that the sum set of $z$ is a subset of $z$:
$$\bigcup z \subseteq z$$
From this I rewrote the expression as:
$$\bigcup z \subseteq z \Leftrightarrow (\forall u)(u \in \bigcup z \Rightarrow u \in z)$$
Now comes my doubt: can the statement "$u$ belongs to the sum set of $z$" be rewritten, so that the whole thing becomes something like
$$(\forall u)(\forall v)((u \in v \land v \in z) \Rightarrow u \in z)$$
I believe something is wrong here, because the $(\Rightarrow)$ side of the proof would terminate at this point if this were correct. Did I do it right? If this is not correct, let me know how a proof should be done.
AI: What you’re trying to prove is what I normally take as the definition of $z$ being transitive, so I’m assuming that your definition is that $z$ is transitive if $x\in y\in z$ implies that $x\in z$. I would use fewer symbols and more words:
Assume that $\bigcup z\subseteq z$, and suppose that $u\in v\in z$. Then by definition $u\in\bigcup z$, so $u\in z$, and $z$ is therefore transitive.
That really is all that needs to be said unless you’re specifically being asked for a much more formal argument.
Added: For the opposite implication suppose that $z$ is transitive, and let $x\in\bigcup z$. Then by definition there is some $u\in z$ such that $x\in u$, and transitivity of $z$ then implies that $x\in z$. Since $x$ was an arbitrary element of $\bigcup z$, it follows that $\bigcup z\subseteq z$.
Here again I think that it’s better to avoid getting bogged down in a lot of symbols and to use words to make the flow of logic clear.
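For intuition, the equivalence is easy to test mechanically on hereditarily finite sets. A small sketch of mine (an illustration, of course, not a proof):

```python
# Checking "z transitive <=> union(z) subset of z" on finite examples (my own).
def union(z):
    return frozenset(x for y in z for x in y)

def is_transitive(z):
    return all(x in z for y in z for x in y)

e = frozenset()                 # 0
one = frozenset({e})            # 1 = {0}
two = frozenset({e, one})       # 2 = {0, 1}, transitive
bad = frozenset({one})          # {1} is not transitive: 0 in 1 but 0 not in {1}

for z in (two, bad):
    assert is_transitive(z) == (union(z) <= z)   # <= is subset for frozensets
print("equivalence holds on the examples")
```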
|
H: Integrate $\frac{e^{itx}}{1-it}$
I need this integral to find the probability density function from the characteristic function, but I don't know how to compute it; every method that I try fails. I tried integration by parts but I didn't get the result that I need.
The integral is
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}{\frac{e^{itx}}{1-it}dt}.$$
The result I need to get from my computations is $e^{-x}$, but I don't know how.
AI: First we substitute $t \mapsto -t$ and interpret the result as a Fourier transform:
$$
\int_{-\infty}^{\infty} \frac{e^{itx}}{1-it} \, dt
= \int_{-\infty}^{\infty} \frac{1-it}{1+t^2} \, e^{-ixt} \, dt
= \mathcal{F}\left\{ \frac{1}{1+t^2} \right\} + \mathcal{F}\left\{\frac{-it}{1+t^2} \right\},
$$
where
$$
\mathcal{F}\{f(t)\} = \int_{-\infty}^{\infty} f(t) \, e^{-ixt} \, dt
$$
Then, according to rule 208,
$$
\mathcal{F}\{ e^{-|t|} \} = \frac{2}{1+x^2}
$$
so by the inversion theorem, rule 105,
$$
\mathcal{F}\left\{ \frac{2}{1+t^2} \right\} = 2\pi \, e^{-|x|}.
$$
Thus,
$$
\mathcal{F}\left\{ \frac{1}{1+t^2} \right\} = \pi \, e^{-|x|}.
$$
For the other term we use rule 107 giving
$$
\mathcal{F}\left\{ \frac{-it}{1+t^2} \right\}
= -i \mathcal{F}\left\{ t \frac{1}{1+t^2} \right\}
= -i \cdot i \left( \pi \, e^{-|x|} \right)' = -\pi \, \operatorname{sign}(x) e^{-|x|}.
$$
Thus,
$$
\int_{-\infty}^{\infty} \frac{e^{itx}}{1-it} \, dt
= \pi \, e^{-|x|} - \pi \, \operatorname{sign}(x) e^{-|x|}
= \pi \left( 1 - \operatorname{sign}(x) \right) e^{-|x|}
= \pi H(-x) e^{-|x|},
$$
where $H$ is the Heaviside function:
$$
H(x) = \begin{cases}
0, & (x<0) \\
1, & (x\geq 0)
\end{cases}
$$
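Finally, a note on the sign convention: the inversion formula for densities uses $e^{-itx}$, so replacing $x$ by $-x$ above gives $\int_{-\infty}^{\infty} \frac{e^{-itx}}{1-it}\,dt = 2\pi H(x)\,e^{-x}$, and hence $\frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{-itx}}{1-it}\,dt = e^{-x}$ for $x>0$, which is exactly the exponential density you expected. A quick numerical cross-check of the formula above (my own script; the integrand decays slowly, so the truncation at $T$ introduces a small error):

```python
# Numerical cross-check of pi*(1 - sign(x))*exp(-|x|) (my own sketch).
import numpy as np
from scipy.integrate import quad

def lhs(x, T=200.0):
    # Real part of e^{itx}/(1-it); the imaginary part is odd and integrates to 0.
    val, _ = quad(lambda t: (np.cos(t*x) - t*np.sin(t*x)) / (1 + t*t),
                  -T, T, limit=1000)
    return val

for x in (-2.0, -0.5, 1.0):
    print(x, lhs(x), np.pi * (1 - np.sign(x)) * np.exp(-abs(x)))
```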
|
H: Is Differentiation Operator bounded for polynomials
$P$ is the vector space of all polynomial functions in $C[0,1]$,
and $p'$ is the derivative of $p$.
Check whether $T : P \to P$, $T(p)= p'$, is a bounded linear operator or not. Use $\| \; \|_{\sup}$ (the supremum norm) for polynomials.
$T$ is linear, so there is no problem there. I am struggling to show whether it is bounded or not.
I think it is not bounded, by the following counterexample:
we would have to show that $\|Tv\| \leq M\|v\|$ for some constant $M\geq0$ and all $v$.
Now let's take $v=x^n$; this gives $\|v\|=1$ and $\|Tv\|=n$.
So there is no single constant $M$ that works.
Is it correct?
Can someone please give any other counter example or fill the missing pieces from mine.
Thank you.
AI: Yours is correct. Assume there were such an $M > 0$, and choose a natural number $n$ such that $n > M$. Then, $$n = \|Tx^n\| \leq M \|x^n\| = M < n, $$ which is a contradiction.
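A quick numeric illustration of the same computation (my own sketch):

```python
# The ratio ||Tp|| / ||p|| for p(x) = x^n grows without bound (my own check).
import numpy as np

xs = np.linspace(0, 1, 10001)
for n in (1, 5, 25, 125):
    p = xs**n                      # sup norm 1 on [0, 1]
    dp = n * xs**(n - 1)           # derivative, sup norm n
    print(n, dp.max() / p.max())   # prints n: no uniform bound M exists
```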
|
H: Looking for a formula to compute $\left\lceil \frac{x+1}{2} \right\rceil$
I'm looking for a formula to easily compute:
$$ \left\lceil \frac{x+1}{2} \right\rceil $$
The formula shouldn't use any floor, ceil or round function. I'm looking for something "simple".
AI: No closed-form expression with $+,-,\times,\div$ can "emulate" the ceiling function (in particular because these operations are continuous; all they can produce are rational functions). With these basic operators, you would need an expression of infinite size.
Periodic functions and their inverses, like
$$\frac1\pi\arctan(\tan(\pi x))$$ give you access to the fractional part, from which you can build the floor/ceiling. But this is by no means "simple".
So the answer is essentially: no, there is no simple way.
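One caveat worth adding: if $x$ is known to be an integer, integer division alone suffices, since $\left\lceil \frac{x+1}{2} \right\rceil = \left\lfloor \frac{x+2}{2} \right\rfloor$ for integers. A sketch, assuming integer input:

```python
# For integer x only (my own sketch): ceil((x+1)/2) via floor division.
def f(x: int) -> int:
    return (x + 2) // 2

# Cross-check against ceil((x+1)/2), written with floor division as well:
assert all(f(x) == -((-(x + 1)) // 2) for x in range(-10, 11))
```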
|
H: Finding $g(N)$ for $T(N)= \frac{\exp(N^3)}{\lg N}$ such that $T(N) = \Theta(g(N))$
Could a correct answer be $$g(N)=\frac{N\exp(N^3)}{N\lg N}$$ for $T(N)=\Theta(g(N))$ if $T(N)= \frac{\exp(N^3)}{\lg N}$?
AI: Sure, this will work; it's the same function after cancelling the factor of $N$. Another option is a different way to write it: note that $\log N = e^{\log \log N}$, so
$$
\frac{\exp\left(N^3\right)}{\exp(\log \log N)}
= \exp\left(N^3 - \log \log N\right)
$$
You can scale by any constant and add any lower order terms, for example
$$
25 \pi\exp\left(N^3 - \log \log N\right) + N^{100} e^N
$$
will do fine as well.
|
H: Two Options in Lottery - Probability
I was thinking about the lottery and came to this question, which is way above my understanding of probability.
Let's say there are 100,000 tickets and 10 iPhone prizes. You can buy as many tickets as you want but can only win one iPhone. (One prize per person.) Let's assume that you bought 10 tickets.
What would be more advantageous: to submit all 10 of the tickets yourself, or to have 10 of your relatives each submit one ticket on their own? (Of course we assume that the relatives give the prize to you and don't run away with it!)
Is there a scenario where one option becomes more advantageous than the other?
Thanks in advance,
Best regards.
AI: Suppose $n$ tickets are winning tickets. If you were to submit all of the tickets yourself, then you would win:
$1$ iPhone, if $n>0$; or
Nothing, if $n=0$
Now suppose your family members each submit the tickets instead, and if they win, give their prize to you. In that case, you would always win $n$ iPhones. Specifically:
$n$ iPhones, if $n>0$; or
Nothing, if $n=0$
Now, it's important to compare cases: if no ticket wins, you are left in the same position regardless of which option you picked, and likewise if exactly one ticket wins. In all other cases, having your relatives submit the tickets is more advantageous. Thus, in every case, option 2 is at least as good, and sometimes strictly better.
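A quick simulation of mine illustrates this; the parameters are deliberately shrunk (100 tickets, 10 prizes, you hold 10 of them) so the difference is visible:

```python
# Simulating both strategies on a small hypothetical lottery (my own sketch).
import random

def trial(rng, total=100, prizes=10, tickets=10):
    winners = set(rng.sample(range(total), prizes))
    hits = len(winners & set(range(tickets)))   # wlog you hold tickets 0..9
    return min(hits, 1), hits                   # (submit yourself, via relatives)

rng = random.Random(0)
results = [trial(rng) for _ in range(100_000)]
alone = sum(a for a, _ in results) / len(results)
shared = sum(b for _, b in results) / len(results)
print(alone, shared)   # ~0.67 vs ~1.0 prizes on average
```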
|
H: Is Differentiation Operator continuous?
$P$ is a vector space of all polynomial functions on $C[0,1]$
$p'$ is derivative of $p$.
Show that $T : P \to P \quad T(p)= p'$ is closed operator (the graph of T is closed) but not continuous.
Use $|| \; ||_{sup}$ (supremum norm) for polynomials.
On Wikipedia it says that the differentiation operator is not continuous; how can I show it for this problem?
AI: Take any polynomial $x^{n}$. The sup norm of this is $1$. Its derivative is $nx^{n-1}$, the norm of which is $n$. Hence, the norm of the derivative can be made arbitrarily large. So the map is not bounded.
|
H: Why does $\frac{|\sin\theta|}{2}<\frac{|\theta|}{2}<\frac{|\tan\theta|}{2}$ not imply that $1>\lim_{\theta\to 0}\frac{\sin\theta}{\theta}>1$?
I was watching this proof of the equality $$\lim_{\theta\to 0} \frac{\sin \theta}{\theta} = 1$$
The author says about the areas in his figure that
red area <= yellow area <= blue area, which leads to the following inequality:
$$\frac{|\sin\theta|}{2} \le \frac{|\theta|}{2} \le \frac{|\tan\theta|}{2}$$
and in the end proves the theorem:
$$1 \ge \lim_{\theta\to 0} \frac{\sin \theta}{\theta} \ge 1 $$
I noticed that the statement red area < yellow area < blue area about the areas is also true and in fact more accurate. But this would lead to the following:
$$\frac{|\sin\theta|}{2} \lt \frac{|\theta|}{2} \lt \frac{|\tan\theta|}{2}$$
...
$$1 \gt \lim_{\theta\to 0} \frac{\sin \theta}{\theta} \gt 1 $$
Obviously that cannot be true.
Have I just broken the proof?
AI: Good first question! Even if $f(x)<M$ for some value $M$ and for every $x\neq x_0$ in the domain of $f$, you still can't conclude that $\lim\limits_{x\to x_0}f(x)<M$. The most you can say is that $\lim\limits_{x\to x_0}f(x)\leq M$. For example, if $f(x)=1-x^2$, then $f(x)<1$ for all $x\neq0$, but $\lim\limits_{x\to0}1-x^2=1$.
In your case, you have:
$$\cos\theta<\frac{\sin\theta}{\theta}<1$$
for every $\theta\neq0$ (at least in a neighborhood of $\theta=0$), so in the limit you have:
$$1\leq\lim_{\theta\to0}\frac{\sin\theta}{\theta}\leq 1$$
which is true! So the proof is not broken after all.
|
H: Group cohomology of finite cyclic group
I am currently reading Cohomology of Groups by Brown and I am stuck on page 58. They do an example of the computation of the cohomology group of a finite cyclic group $G=\langle t\rangle$ of order $n$.
My first question is about the free resolution (I.6.3):
I see the maps $t-1$ and $N=1+t+\ldots+t^{n-1}$ as endomorphisms of $M$. How do these precisely induce maps $\mathbf{Z}[G]\longrightarrow\mathbf{Z}[G]$? The natural way would be to define something like $\sum_{g\in G} a_g g\longmapsto \sum_{g\in G}a_g g(t-1)$. However, this does not seem correct, (or maybe it is an enormous abuse of notation?), because $t-1$ is not an element of $G$.. I thought of the identification $\mathbf{Z}[G]\cong \mathbf{Z}[X]/(X^n-1)$, but this boggles my mind, like the fact that this sequence is exact. We have to prove that $\operatorname{Ker}(t-1)=\operatorname{Im}N$? But how is this defined? And why aren't all the cohomology groups trivial, as the $t-1$ and $N$ stay the same in the second sequence?
The map $\mathbf{Z}[G]\longrightarrow \mathbf{Z}$ he writes down is the augmentation map $\pi:\sum_{g\in G} a_g g\longmapsto \sum_{g\in G}a_g$. The kernel $I_G$ of $\pi$ is the subgroup of $\mathbf{Z}[G]$ generated by all elements of the form $i_g=g-1$, where $g$ runs over $G$.
Why is the image of $t-1$ equal to $I_G$? The inclusion $\subset$ is clear, but I don't see why the other inclusion is true.
Any help is appreciated.
AI: The maps $\mathbb{Z}[G] \to \mathbb{Z}[G]$ are given by multiplication, so
$$
\sum a_i g_i \mapsto (t-1) \sum a_i g_i.
$$
and similarly for the norm. You are unhappy that $t - 1$ isn't in $G$, but it's unquestionably in the ring $\mathbb{Z}[G]$, so there's no problem.
The fact that the image of $(t-1)$ is $I_G$ is an old trick. As you say, the one containment is clear.
For the other, suppose $\sum a_i t^i$ is in $I_G$. Then $\sum a_i = 0$. Thus $\sum a_i t^i = \sum a_i (t^i - 1)$, and you can factor a $t-1$ out of the right hand side.
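If it helps, the exactness claims can be probed with matrices: multiplication by $t$ on $\mathbf{Z}[G]$ is the cyclic-shift matrix, so $t-1$ and $N$ become explicit $n\times n$ integer matrices. A small numeric sketch of mine:

```python
# Matrix sanity check in Z[G], G cyclic of order n (my own illustration).
import numpy as np

n = 5
P = np.roll(np.eye(n, dtype=int), 1, axis=0)   # multiplication by t (cyclic shift)
T = P - np.eye(n, dtype=int)                    # multiplication by t - 1
N = sum(np.linalg.matrix_power(P, k) for k in range(n))  # the norm 1 + t + ... + t^(n-1)

assert not (T @ N).any() and not (N @ T).any()  # composites vanish: a complex
# Ranks 4 and 1: over Q, dim ker(t-1) = 1 = rank(N), so the sequence is exact there.
print(np.linalg.matrix_rank(T), np.linalg.matrix_rank(N))
```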
|
H: If there were a function mapping a set onto its powerset, would the unrestricted comprehension schema be true?
I have the following exercise in a set:
Cantor proved that there can be no function $\phi$ mapping a set onto the set of all of its subsets. Show directly that if there were such a mapping, then we would have an interpretation of “$\in$” which makes the unrestricted comprehension schema true, including the version with parameters.
I am not sure how one would go about proving this at all. For the schema to be true, it would need to have a model in which all of its instances are true, but there can be no such model, as far as I know.
AI: That's the point. You are trying to show that under a certain assumption, you can derive something false.
For a hint on how to get started, if you have a bijection $\psi:X\to\mathcal{P}(X)$, you can define a new membership relation $\varepsilon$ on $X$ by $$x\;\varepsilon\;y:\iff x\in\psi(y)$$ You just need to show that there's such a bijection given the assumptions, and that the comprehension scheme would hold.
|
H: Find the domain and range of $f(x) = \frac{x+2}{x^2+2x+1}$:
The domain is $\mathbb{R}\smallsetminus\{-1\}$.
The range is: first we find the inverse of $f$:
$$x=\frac{y+2}{y^2+2y+1} $$
$$x\cdot(y+1)^2=y+2$$
$$x\cdot(y+1)^2-y=2 $$
$$xy^2+(2x-1)y+(x-2)=0 $$
I can't find the inverse... my idea is to find the domain of the inverse, which would then be the range of the function. How else can I determine the range here?
AI: Alternate way to find the range :
$$f(x) = y =\frac{x+2}{x^2+2x+1}$$
$$yx^2+(2y-1)x+(y-2)=0 $$
Now this quadratic in $x$ has real roots, since for every $y$ attained by the function there is a real $x$ with $f(x)=y$ (we can deal with the excluded point $x=-1$ later). So apply the condition for real roots ($b^2-4ac \geq 0$):
$$(2y-1)^2-4(y)(y-2)\geq0$$
$$4y^2-4y+1-4y^2+8y\geq0 \implies 4y+1\geq0$$
$$y\geq\frac{-1}{4}$$
$$y \in \left[ -\frac{1}{4},\infty \right)$$
Now, as $x\to-1$ the value of $y\to\infty$, so no finite value needs to be removed from the range.
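A numeric sanity check of mine (the minimum $-\frac14$ is attained at $x=-3$, where the discriminant vanishes):

```python
# Sampling f over a wide grid to confirm the range bound (my own check).
import numpy as np

x = np.linspace(-50, 50, 2_000_001)
x = x[np.abs(x + 1) > 1e-6]          # avoid the pole at x = -1
y = (x + 2) / (x + 1)**2
print(y.min(), x[np.argmin(y)])      # ~ -0.25 at x = -3
```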
|
H: Determining $(A \times B) \cup (C \times D) \stackrel{?}{=} (A \cup C) \times (B \cup D)$
I am trying to determine the relationship between $(A \times B) \cup (C \times D)$ and $(A \cup C) \times (B \cup D)$. Usually, I begin these proofs with a venn diagram and then formally prove the result thereafter, but there really isn't a good way that I know of to sketch a venn diagram for a Cartesian product. Some help on the intuition would be appreciated.
The first inclusion does make sense intuitively. An element of the left-hand side is of the form $(x,y)$ where $x$ is in either $A$ or $C$ and $y$ is in either $B$ or $D$. That inclusion seems rather natural, and the proof isn't too difficult.
Let $x \in (A \times B) \cup (C \times D)$. Then $x \in A \times B$ or $x \in C \times D$. If $x \in A \times B$, $x = (a,b)$ for some $a \in A$ and $b \in B$. Since $a \in A$, $a \in A \cup C$. Since $b \in B$, $b \in B \cup D$, so $x \in (A \cup C) \times (B \cup D)$. Alternatively, if $x \in C \times D$, then $x = (c,d)$ for some $c \in C$ and $d \in D$. Since $c \in C$, $c \in A \cup C$. Since $d \in D$, $d \in B \cup D$. Hence, $x \in (A \cup C) \times (B \cup D)$.
I have no intuition for the other side. My attempt at a proof of it (below) failed, but I have no intuition to construct a counterexample.
Let $x \in (A \cup C) \times (B \cup D)$. Then $x = (\alpha, \beta)$ where $\alpha \in A \cup C$ and $\beta \in B \cup D$. Then $\alpha \in A$ or $\alpha \in C$ and $\beta \in B$ or $\beta \in D$.
It seems that there is no guarantee that $x$ lives in the right-hand side, depending on how we "mix and match" $\alpha$ and $\beta$. With that said, I am not completely satisfied with this as an intuitive explanation. An alternative to a venn diagram (or, as someone on here recently suggested, a truth table of sorts) would be helpful.
AI: Taking your approach for the second part $(p,q) \in (A \cup C) \times (B \cup D)$, we get $p \in A \cup C$ and $q \in B \cup D$. Thus $p \in A$ or $p \in C$ or both and a similar statement is true for $q$.
Now we want to check whether $(p,q)$ necessarily lies in $(A \times B) \cup (C \times D)$. For this to happen, $(p,q)$ must be in at least one of $A \times B$ or $C \times D$. That means we should have either ($p \in A$ AND $q \in B$) or ($p \in C$ AND $q \in D$), or both.
Ask yourself:
What if $p \in A \setminus C$ and $q \in D \setminus B$? Then our initial condition (see first paragraph) is met, but such $(p,q)$ will not be in $(A \times B) \cup (C \times D)$. This gives us a clue to find a possible counterexample for this containment.
Counterexample
Let $A=\{1,2\}$, $B=\{1,4\}$, $C=\{x\}$ and $D=\{y\}$. Then $1 \in A \setminus C$ and $y \in D \setminus B$ so $(1,y) \in (A \cup C) \times (B \cup D)$ but $(1,y) \not\in (A \times B) \cup (C \times D)$
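The counterexample can also be verified mechanically; a tiny script of mine:

```python
# Mechanical check of the counterexample above (my own sketch).
from itertools import product

A, B, C, D = {1, 2}, {1, 4}, {"x"}, {"y"}
lhs = set(product(A, B)) | set(product(C, D))       # (A x B) u (C x D)
rhs = set(product(A | C, B | D))                    # (A u C) x (B u D)

print(lhs <= rhs)                          # True: the first inclusion always holds
print((1, "y") in rhs, (1, "y") in lhs)    # True False: rhs is strictly bigger
```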
|
H: Proving intersection of empty set is the set of all sets
In an exercise, it asks to prove $\bigcap \varnothing$ is equal to the set of all sets. I understand that there are many proofs online but I have a specific question (mostly regarding mathematical logic) regarding my proof.
From my understanding, given a set $S$ (where all of its elements are also sets), $x \in \bigcap S \iff \forall \, Y\in S, x \in Y$.
Now I will try to show that the set of all sets is contained in $\bigcap \varnothing$. Let $x$ be an arbitrary set; now $x \in \bigcap \varnothing \iff \forall \, Y\in \varnothing, x \in Y$. My question is: why is the statement $\forall \, Y\in \varnothing, x \in Y$ vacuously true? From my understanding, a statement is vacuously true in the case of an implication (i.e. $p \implies q$ where $p$ is false), but I fail to see how $\forall \, Y\in \varnothing, x \in Y$ can be written as an implication (I tried writing it in the form $\forall Y: Y \in \varnothing \implies x \in Y$, but that did not seem to me to be the same statement).
AI: $\forall Y \in \varnothing \; P(Y)$ means
$\forall Y \; (Y \in \varnothing \implies P(Y))$.
But $Y \in \varnothing$ is always false, making the implication $Y \in \varnothing \implies P(Y)$ true.
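The same convention appears in programming languages: a universally quantified statement over an empty collection evaluates to true. A one-line illustration of mine:

```python
# Vacuous truth in Python (my own aside): the body never runs, so "all" is True.
x = "anything"
print(all(x in Y for Y in []))   # True: no Y in the empty set can fail the test
```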
|
H: Sequence convergence without metric equivalence
I only have a problem with the metric-equivalence part. I think there is an equivalence. $R^\infty$ is a subspace of $R^\omega$. Let's use the uniform topology for $R^\omega$ and the relative topology for $R^\infty$. Let the metric inducing the uniform topology be $d$. Obviously the defined convergence implies convergence in the relative metric $d$. The other direction is also obvious. The requirement $y(m)_j=y(0)_j=0$ is automatically satisfied since the space is $R^\infty$.
I think the requirement $y(m)_j=y(0)_j=0$ is redundant.
AI: The other direction is also obvious.
I don't believe so. Show it then? In detail. How do you get one $k$ that works for all $m$, as you need? Convergence in $d$ doesn't give you that, even in $R^\infty$. Every point in the sequence can have its own cut-off index, going further and further out, while the sequence still converges uniformly.
The convergence induces a topology (in the usual way), and as we have a linear space in that topology (I think $+$ and scalar multiplication are continuous), it will be induced by a metric iff it is first countable. So to settle your problem, try to refute that.
|
H: Let $G$ be a multiplicative group and $\varnothing\neq H\subseteq G$. Show that $H\le G$ iff both $H\circ H\subseteq H$ and $H^{-1}\subseteq H$.
Question: Let $G$ be a multiplicative group and $H$ be a nonempty subset of $G$. Show that $H$ is subgroup if, and only if, $H\circ H \subseteq H$ and $H^{-1} \subseteq H$.
At the same time it seems to be very obvious, because when I verify the condition on $a\circ b^{-1}$, I am operating on two elements of $H$ and the result stays inside $H$; beyond that, one of these elements is an inverse of an element of $H$.
But I don't know how to write it down... so I need some help. Please, and thank you all!
AI: A subgroup $H$ of a group $G$ is a non-empty subset which is closed under composition and inversion and contains the identity element of $G$, that is
$h_1,h_2\in H\implies h_1\circ h_2\in H$
$e\in H$, where $e$ is the identity element of $G$
$h\in H\implies h^{-1}\in H$
As you correctly noted, these conditions are equivalent to $g,h\in H\implies g\circ h^{-1}\in H$ (which is sometimes called the One-Step subgroup test) for non-empty subsets $H\subseteq G$.
Now, regarding your question: it is clear that $H$ being a subgroup implies the two inclusions. On the other hand, these inclusions can be re-expressed as
$H\circ H\subseteq H\iff h_1,h_2\in H\implies h_1\circ h_2\in H$
$H^{-1}\subseteq H\iff h\in H\implies h^{-1}\in H$
As you can see, these are basically the defining property of being a subgroup rewritten in terms of set inclusions. What is not included here is that the identity of $G$ resides in $H$. But as we have at least one $h\in H$, $H$ being non-empty, we also have $h^{-1}\in H$ and then $h,h^{-1}\in H\implies h\circ h^{-1}=e\in H$ which concludes the proof.
Of course, the One-Step subgroup test applies directly as you correctly noted. Anyway, it might be worth adding the perspective of reformulating subgroup properties in terms of set inclusion.
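For a concrete instance, here is a quick check of mine in the multiplicative group $G=(\mathbb{Z}/7\mathbb{Z})^*$ with $H=\{1,2,4\}$ (the squares mod 7):

```python
# Checking the two set inclusions for a concrete H (my own example).
G = {1, 2, 3, 4, 5, 6}
H = {1, 2, 4}                                    # the squares mod 7

closed   = {(a * b) % 7 for a in H for b in H} <= H
inverses = {pow(h, -1, 7) for h in H} <= H       # pow(h, -1, p): inverse mod p
print(closed and inverses)                       # True: H is a subgroup
```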
|
H: Gaussian quadrature method when $n=2$. Function to be approximate is $\arcsin$, how to do it properly when weights are not given?
So, there is a an exercise of numerical integration using Gaussian quadrature method. The givens are:
f(x) = arcsin x, a = −1, b = 1, i = 2
I was just wondering: do I need to use the known formula for $n=2$ to integrate the function, which is given as $f(-\frac{1}{\sqrt{3}}) + f(\frac{1}{\sqrt{3}})$? In that case the exercise seems to be trivial, as $\arcsin$ is an odd function and the integral would equal $0$. What am I missing here?
$$\int^1_{-1} f(x)\,dx \approx f\left(\frac{-1}{\sqrt{3}}\right) + f\left(\frac{1}{\sqrt{3}}\right) = 0$$
For this result the weights are chosen as $c_1=c_2=1$, but in the exercise that I need to solve the weights are not given; is it the norm to just let $c_i = 1$?
AI: Gaussian quadrature with weight function $w(x)=1$ on $[-1,1]$ is a well-defined thing, you don't need to be told what the weights and nodes are in order to do it. Specifically, the nodes are $\pm 1/\sqrt{3}$ and the weights are both $1$. Therefore, the answer to the exercise is indeed $\arcsin(-1/\sqrt{3})+\arcsin(1/\sqrt{3})=0$.
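You can cross-check the nodes and weights with numpy, which tabulates exactly this rule; a short sketch of mine:

```python
# Gauss-Legendre rule with 2 points (my own cross-check).
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(2)
print(nodes)                        # [-0.577..., 0.577...] = +/- 1/sqrt(3)
print(weights)                      # [1., 1.]
print(weights @ np.arcsin(nodes))   # ~0.0: the integrand is odd
```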
|
H: Converse to a proposition on subgroups
Let $G$ be a group, and let $H$ and $K$ be subgroups of $G$. I read somewhere that if either $H$ or $K$ is a normal subgroup, then $HK$, which is the set of products from $H$ and $K$, is itself a subgroup. Is the converse true? That is, given that $HK$ is a subgroup of $G$, must at least one of $H$ and $K$ be a normal subgroup of $G$?
AI: No, the converse is not true.
In general, if $H$ and $K$ are subgroups, then $HK$ is a subgroup if and only if $HK=KH$ as sets; that is, for each $h\in H$ and $k\in K$, there exist $h’,h’’\in H$ and $k’,k’’\in K$ such that $hk=k’h’$ and $kh=h’’k’’$. (I’ll leave it to you to prove this; it’s a good exercise). If, say, $H$ is normal, then $kH=Hk$ for each $k\in K$ (in fact, for each $g\in G$), and so the condition will be met. However, it is possible for neither $H$ nor $K$ to be normal, and yet for the product to be a subgroup anyway.
For an example of this, take the dihedral group of order $8$, with $r$ a rotation and $s$ a reflection. Let $H=\{e,rs\}$ and $K=\{e,r^3s\}$. Neither is normal; e.g., $s(rs)s = sr = r^3s\notin H$, and $s(r^3s)s = sr^3 = rs\notin K$.
However, $HK= \{e, rs, r^3s, r^2\}$ and $KH=\{e,r^3s,rs, r^2\}=HK$, a subgroup.
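The example can also be verified by brute force; here is a sketch of mine encoding $D_4$ as permutations of the square's vertices (where `compose(p, q)` applies `q` first):

```python
# Brute-force verification of the D4 example (my own encoding).
from itertools import product

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)            # rotation: i -> i+1 mod 4
s = (0, 3, 2, 1)            # reflection: i -> -i mod 4
r3 = compose(r, compose(r, r))

H = {e, compose(r, s)}
K = {e, compose(r3, s)}
HK = {compose(h, k) for h, k in product(H, K)}
closed = all(compose(a, b) in HK for a, b in product(HK, HK))
print(len(HK), closed)      # 4 True: HK = {e, rs, r^3 s, r^2} is a subgroup
```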
|
H: Eigenvalues of offset multiplication tables
Consider the $n$ x $n$ 'multiplication table' constructed as (using Mathematica language)
$$
M_n^s = \text{M[n_,s_]:=Table[ k*m , {k,1+s, n+s}, {m, 1+s, n+s} ] }
$$
For example,
$$M_4^0 = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 6 & 8 \\3 & 6 & 9 & 12 \\4 & 8 & 12 & 16 \end{pmatrix} $$
and
$$M_4^1 = \begin{pmatrix} 4 & 6 & 8 & 10 \\ 6 & 9 & 12 & 15 \\8 & 12 & 16 & 20 \\10 & 15 & 20 & 25 \end{pmatrix} $$
Amazingly, all of these matrices' eigenvalues are zero except for a single one. The non-zero eigenvalue exhibits an interesting pattern as a function of $n$ and $s.$ Starting with $n=2,$ the sequences read as follows
$$ \text{eigv }M_n^0=\{1,5,14,30,55,91,140...\} = (n-1)(2n^2-n)/6 $$
$$ \text{eigv }M_n^1=\{4,13,29,54,90,139,203...\} = (n-1)(2n^2 + 5n + 6)/6 $$
$$ \text{eigv }M_n^2=\{9,25, 50, 86, 135, 199,280...\} = (n-1)(2n^2 + 11n + 24)/6 $$
$$ \text{eigv }M_n^3=\{25,61,110,174,255,355...\} = (n-1)(2n^2 + 17n + 54)/6 $$
I've worked out many of these, and it appears that, in considering the quadratic polynomial, and using $[n^1]$ to mean the coefficient of $n,$
$$ [n^2] = 2, \, [n^1]=6s-1, \, [n^0] = 6s^2 , \ s=1,2,3... $$
The question is: can these observations be proved?
AI: All rows are linearly dependent (each entry has the product form $k\cdot m$, so the matrix has rank one); hence there is only one non-zero eigenvalue.
This eigenvalue is the trace of the matrix (by the theorem that the sum of the eigenvalues equals the trace), so $\lambda=\sum_{j=1}^n (s+j)^2$
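A quick numerical confirmation of mine for a few values of $(n,s)$:

```python
# Rank-1 structure: the only nonzero eigenvalue is the trace (my own check).
import numpy as np

for n, s in [(4, 0), (4, 1), (6, 3)]:
    v = np.arange(1 + s, n + s + 1)
    M = np.outer(v, v)                      # M[i, j] = (i+s+1)(j+s+1)
    eig = np.sort(np.linalg.eigvals(M).real)
    print(n, s, eig[-1], (v**2).sum())      # largest eigenvalue == sum of squares
```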
|
H: $f$ is a distribution; does the right derivative, written here $f'$, always exist on the domain?
I am reading the paper from Tehranchi: https://arxiv.org/abs/1701.03897
On page 20 he mentions that, for a density $f$, the right derivative always exists on its domain (the definition of $f$ is given on page 18 for anyone who would like to check). Put briefly, we only assume that $f$ is a continuous density.
I am baffled by the claim that the right derivative would exist. I know that distribution functions are càdlàg, but I didn't know that the right derivative always exists.
Can anyone confirm this and prove it? If I am missing some hypothesis (which I highly doubt, but maybe?), please mention it; that would be great.
AI: I think the paper is assuming that $f$ is log-concave throughout this section. (See on page 19 where Proposition 3.2.3 is invoked.) So, by definition $\log\circ f$ is concave and so right differentiable which implies $f$ is also right differentiable (by the chain rule).
|
H: The convergence of $(n\sin n)$
I think $\lim_{n\to \infty} n\sin n$ does not exist, since $\sin x$ oscillates while the factor $n$ grows without bound as $n\to \infty$. But I am not sure how to give a rigorous proof of the divergence.
I would appreciate it if someone could give some suggestions and comments.
AI: Every convergent sequence is bounded.
Edit: Let $m \in \mathbb{Z}.$ Then $$\sin x\geq \sin\left(\frac12\right)>0\,\, \forall x \in \left[\frac12+2m\pi,\frac32+2m\pi\right].$$ Moreover there exists $n_m \in \left[\frac12+2m\pi,\frac32+2m\pi\right] \cap \mathbb{N}.$ Therefore $$n_m\sin(n_m)\geq \left(\frac12+2m\pi\right)\sin\left(\frac12\right).$$It follows that $(n \sin n)_n$ is unbounded.
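Picking out the integers $n_m$ described above makes the unboundedness visible; a quick illustration of mine:

```python
# Integers n_m in [1/2 + 2m*pi, 3/2 + 2m*pi] give n*sin(n) >= c*n (my own sketch).
import math

for m in (1, 10, 100, 1000):
    lo = 0.5 + 2 * m * math.pi
    n = math.ceil(lo)              # the interval has length 1, so this lands inside
    print(n, n * math.sin(n))      # grows at least like (1/2 + 2m*pi) * sin(1/2)
```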
|
H: Explanation of Geometric Distribution Graph
I understand what the geometric distribution is and that it is calculating the probability of the number of trials needed up to and including the first success. Expressed as the following:
$$P(X = k) = (1-p)^{k-1}p$$
I asked a similar question years ago and for the life of me I still cannot wrap my head around the interpretation of the graph from the example:
What I mean by not wrapping my head around the interpretation is, for example, let's choose $x = 10$. What I'm interpreting in my mind is "the probability of winning the lottery after purchasing 10 tickets is roughly 0.19". But this cannot be right. Interpreting it in this fashion suggests that purchasing more tickets should mean a higher probability of winning the lottery. That would be the case if I were thinking of the CDF, but I'm not. So what is the way to interpret the PMF of the geometric distribution in this example?
Question that I asked previously: Understanding the geometric distribution
AI: The probability $P(X=10)$ is the probability of needing to buy exactly 10 tickets in order for the 10th ticket to be your first winner. Put another way, it's the probability of buying 9 losing tickets in a row before buying your first winning ticket.
If it helps, consider the similar experiment of flipping a coin until the first head. The probability that the very first toss gives a head is $P(Y=1)=1/2$. The probability $P(Y=2)$ represents the probability of failing to get heads on your first toss (probability 1/2) and then furthermore succeeding in getting heads on your second toss. That's probability 1/2 to first fail, followed by probability 1/2 to then succeed, for an overall probability of $P(Y=2)=1/2 * 1/2 = 1/4$. Similarly, to get $P(Y=3)$, you have to fail twice (probability $1/2 * 1/2 = 1/4$) and then furthermore succeed on your third toss (probability $1/2$), for a total probability of $P(Y=3)=1/2 * 1/2 * 1/2 = 1/8$.
The decreasing probabilities in both the lottery and coin examples result from the fact that, for example, $P(Y=10)$ needs 9 failures (and it's tough to lose so many times!) before you get your first success on the tenth try.
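To connect the PMF with observed frequencies, here is a short script of mine (the success probability $p=0.2$ is an assumption of the sketch; substitute whatever your example uses):

```python
# PMF value P(X = 10) and a matching simulation (my own sketch, p assumed).
import random

p = 0.2
pmf_10 = (1 - p)**9 * p               # 9 losses, then the first win
print(pmf_10)

def first_success(rng):
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(0)
draws = [first_success(rng) for _ in range(200_000)]
print(sum(d == 10 for d in draws) / len(draws))   # close to pmf_10
```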
|
H: Proving that ${\aleph_1}^{\aleph_0}\leq |[\omega_1]^{\omega}|$
Today I was working on some exercises about topology, and at some point I needed to prove the following inequality: $${\aleph_1}^{\aleph_0}\leq |[\omega_1]^{\omega}|$$Here $[\omega_1]^{\omega}:=\left\{A\subseteq\omega_1 : |A|=\aleph_0 \right\}$. I don't know how to prove it.
My attempt starts with a function $f:\omega\to\omega_1$. We know that ${\aleph_1}^{\aleph_0}=|\left\{f:\omega\to\omega_1\mid f \ \text{is a function} \right\} |$, so it is fine to take $f:\omega\to\omega_1$. Then $f[\omega]$ is a subset of $\omega_1$, and $f:\omega\to f[\omega]$ is a surjective function. Then $\aleph_0\geq |f[\omega]|>0$, and therefore every function $\omega\to\omega_1$ naturally defines a subset of $\omega_1$. The problem is that $f[\omega]$ can be a finite set, so the natural assignment $f\mapsto f[\omega]$ doesn't work to prove the inequality. Moreover, I think this assignment isn't injective, because we can have two different functions $f$ and $g$ such that $f[\omega]=g[\omega]$. For example $f(n)=n$ and $g(0)=1$, $g(1)=0$ and $g(n)=n$ for $n>1$. I don't know how to proceed or how to conclude the exercise. Can anyone help me?
AI: You know that $2< \aleph_{1}\leq 2^{\omega}$. Then you have $2^{\omega} \leq \aleph_{1}^{\omega}\leq (2^{\omega})^{\omega}= 2^{\omega \times \omega}= 2^{\omega}$. It is clear that $ 2^{\omega}\leq|[\omega_{1}]^{\omega}|$ because you know that $|P_{\infty}(\omega)|= 2^{\omega}$ and $P_{\infty}(\omega)\subset [\omega_{1}]^{\omega}$, where $P_{\infty}(\omega)=\{A\subset \omega:A$ is infinite $\}$. Therefore you have your inequality; moreover, you actually have equality.
|
H: Consider a set $G\subseteq \Bbb R$ and a binary operation * defined on $\Bbb R$ as $a*b=a+b+ab$, such that $(G,*)$ is an Abelian Group. Determine $G$.
My question differs from this one(also, the binary operation differs slightly). Here, I'm supposed to determine $G$, as opposed to proving a given $G$ to be an Abelian group. I know that for $(G,*)$ to be a group, it has to satisfy the following postulates:
1) Closure, i.e., $a*b\in G$, $\forall a,b\in G$
2) Associativity, i.e., $(a*b)*c=a*(b*c)$, $\forall a,b,c\in G$
3) Existence of a unique identity element $e\in G$ such that $a*e=e*a=a$, $\forall a\in G$
4) Existence of inverse elements, i.e., $\forall a\in G$, $\exists a^{-1}\in G$ such that $a*a^{-1}=a^{-1}*a=e$
I begin by assuming $G$ to be identical to $\Bbb R$, because I have no idea how to proceed with an arbitrary subset of $\Bbb R$ whose elements and properties are unknown. Since addition and multiplication are both closed on $\Bbb R$, the closure property is satisfied by $(G,*)$. $(G,*)$ is also associative(as can be shown by trivial calculations). Calculations prove that $0$ is the identity element. So far so good. Now to evaluate the existence of inverse elements, I use postulate 4 $$a*a^{-1}=e$$ This means $$a+a^{-1}+aa^{-1}=0$$ This equation proves that $a^{-1}$ does not exist when $a=-1$. Intuitively, it seems reasonable that $G=\Bbb R\backslash \{-1\}$. However, this requires to check the consistency of the first 3 postulates again. I'm not sure if I can rely on addition and multiplication being closed operations on $\Bbb R$ anymore, now that one of the elements has been excluded. I'm confused as to how to proceed forward and seek guidance for the same.
Edit: Apparently I overlooked the postulate of commutativity for Abelian Groups, but as it stands I think proving $(G,*)$ to be a group is the primary challenge for me.
AI: You first need to determine what $e$ is: $e*e=e$, so $e+e+e^2=e$, so $e^2=-e$. Hence $e=0$ or $e=-1$. Then $e*x=x$, so $e+x+ex=x$, i.e. $e+ex=0$ for every $x\in G$. If $e=-1$, then the only possible $x$ is $x=-1$, and so $G=\{-1\}$. So we can assume $e=0$. Then for every $x\in G$ there should be a $y$ with $x*y=0$, that is, $x+y+xy=0$. So $y=\frac{1}{x+1}-1=\frac{-x}{x+1}$. So with every $x\in G$, this $y$ should be in $G$ too. Finally, $*$ must be associative, which might give more restrictions on $G$: $$(a*b)*c=
(a+b+ab)*c=a+b+ab+c+ac+bc+abc=a*(b*c)=a*(b+c+bc)=a+b+c+bc+ab+ac+abc,$$ so for every $a,b,c\in G$: $0=0$, i.e. no new restrictions. Hence the only restrictions on $G$ are
Either $G=\{-1\}$ or $0\in G, -1\not\in G$;
For every $a,b\in G$, $ab+a+b\in G$;
If $0\in G$, then for every $x\in G$, $\frac{-x}{x+1}\in G$.
For example $G$ can be one of the following sets
a) $\mathbb{Q}\setminus \{-1\}$;
b) $\mathbb{R}\setminus\{-1\}$;
c) $\{x\in\mathbb{Q} : x>-1\}$ (note that $\mathbb{Q}_{\ge 0}$ itself would fail condition 3, since $\frac{-x}{x+1}<0$ for $x>0$)
but also many other subsets of $\mathbb{R}$ (continuum many).
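To see these conditions in action, here is a small exact-arithmetic check of mine (not from the original answer), using the fact that $\varphi(x)=x+1$ converts $*$ into ordinary multiplication, since $(a*b)+1=(a+1)(b+1)$:

```python
# Checking the multiplication identity and the inverse formula (my own sketch).
from fractions import Fraction as F

def star(a, b):
    return a + b + a * b

for a, b in [(F(1, 2), F(3)), (F(-2, 5), F(7, 3))]:
    assert star(a, b) + 1 == (a + 1) * (b + 1)   # phi is a homomorphism
    assert star(a, -a / (a + 1)) == 0            # 0 is the identity
print("checks pass")
```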
|
H: Expressing polynomial equations as a set of linear equations
Suppose the $4$-vector $c$ gives the coefficients of a cubic polynomial $p(x) = c_1 + c_2x+c_3x^2+c_4x^3$.
Express the conditions
$p(1) = p(2), p'(1) = p'(2)$
as a set of linear equations of the form $Ac=b$. Give the sizes of A and b, as well as their entries.
So far, what I've done was to sub in the values, differentiate the polynomial, and set the resulting expressions equal to each other.
I get
$c_1 + c_2 + c_3 + c_4 = c_1 + 2c_2 + 4c_3 + 8c_4$
then,
$0 = c_2 + 3c_3 + 7c_4$
as well as
$c_2+2c_3+3c_4 = c_2 + 4c_3+12c_4$
then,
$0 = 2c_3 + 9c_4$
I am a bit lost on what to do now.
AI: So you have
\begin{align}
0c_1 + 1c_2 + 3c_3 + 7c_4 &= 0\\
0c_1 + 0c_2 + 2c_3 + 9c_4 &= 0
\end{align}
Now write these two equations of four variables in matrix-vector form. You already know that $c$ is $4\times 1$, that is, $4$ rows and $1$ column. What should the sizes of $A$ and $b$ be?
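(Spoiler warning: the following sketch of mine writes out one valid $A$, for readers who want to check their final answer numerically.)

```python
# Sanity-checking A c = b with a vector c from the null space (my own sketch).
import numpy as np

A = np.array([[0, 1, 3, 7],
              [0, 0, 2, 9]])
b = np.zeros(2)

# c = (1, 13, -9, 2) satisfies both c2 + 3c3 + 7c4 = 0 and 2c3 + 9c4 = 0,
# i.e. the cubic 1 + 13x - 9x^2 + 2x^3 has p(1) = p(2) and p'(1) = p'(2).
c = np.array([1.0, 13.0, -9.0, 2.0])
print(np.allclose(A @ c, b))   # True
```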
|
H: Proving that $h\left(x\right)=x^2-5x+\ln \left(\left|-x+4\right|\right)$ has only one real root in $\left[0,2\right]$
As I need to prove that the function $h\left(x\right)=x^2-5x+\ln \left(\left|-x+4\right|\right)$ has one real root in $\left[0,2\right]$, I have tried the following, but I would like to know whether it is correct or what I should improve.
Domain: $\mathbb{R}-\left\{4\right\}$, so the function is continuous on $\left[0,2\right]$
$h\left(0\right)=\ln 4\approx 1.39>0$ $\space$ and $\space$ $h\left(2\right)=-6+\ln 2\approx -5.31<0$
By Bolzano's theorem, $h\left(x\right)$ being continuous and
$h\left(0\right)\cdot h\left(2\right)<0$ $\space$, the function admits at least one root.
Now, to be sure that $h\left(x\right)$ has only one root in that interval, we have to study the sign of $h'\left(x\right)$:
$h'\left(x\right)=\frac{-2x^2+13x-21}{-x+4}$
Sign of $h'\left(x\right)$: see the image at https://i.stack.imgur.com/nJ2IH.jpg
As $h'\left(x\right)<0$ throughout the interval, $h$ is strictly decreasing there, so we can affirm that the function has only one real root on $\left[0,2\right]$
AI: I would just use the Intermediate Value Theorem to show that $h(x)=0$ for some $x\in[0,2]$, because $h(0)=\ln(4) > 0$ and, since $4-e\in[0,2]$, $$h(4-e) = (4-e)^2 - 5(4-e)+\ln(|4-4+e| )= (4-e)^2-5(4-e)+1$$
To see that $h(4-e) < 0$, notice that $(4-e)^2-5(4-e) = (4-e)(4-e-5) = -(4-e)(e+1)$ and check that $(4-e)(e+1)>1$. Since there exist points $x_1, x_2\in[0,2]$ such that $h(x_1)<0<h(x_2)$, the Intermediate Value Theorem tells us a root exists.
To differentiate, we notice that on our domain $h(x) = x^2-5x + \ln(4-x)$, and so $h'(x) = 2x - 5 - \frac{1}{4-x}$. We have $2x-5 \in[-5,-1]$ for all $x\in[0,2]$ and $-\frac{1}{4-x}\in[-\frac{1}{2}, -\frac{1}{4}]$. Thus $h'(x)<0$ for $x\in[0,2]$, so $h$ is strictly decreasing and therefore has only one root.
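A numeric cross-check of mine via bisection (on $[0,2]$ we have $|4-x|=4-x$, so $h$ can be coded directly):

```python
# Locating the unique root of h on [0, 2] by bisection (my own sketch).
import math

def h(x):
    return x*x - 5*x + math.log(4 - x)

a, b = 0.0, 2.0            # h(a) > 0 > h(b)
for _ in range(60):
    m = (a + b) / 2
    if h(a) * h(m) <= 0:
        b = m
    else:
        a = m
print((a + b) / 2, h((a + b) / 2))   # the unique root, with h ~ 0 there
```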
|
H: Another social network matrix expression assistance please.
We consider a collection of $n$ people who participate in a social network in which pairs of people can be connected, by 'friending' each other. The $n \times n$ matrix $F$ is the friend matrix, defined by $F_{ij} = 1$ if persons $i$ and $j$ are friends, and $F_{ij} = 0$ if not. We assume that the friend relationship is symmetric, i.e., persons $i$ and $j$ being friends means persons $j$ and $i$ are friends. We will also assume that $F_{ii} =0$.
C is the nxn matrix with $C_{ij}$ equal to the number of friends persons i and j have in common. (Person k is a friend in common of persons i and j if she is a friend of both person i and person j. The diagonal entry $C_{ii}$, which is the total number of friends person i has in common with herself, is the total number of friends of person i.) Give an expression for C in terms of the matrix F. Briefly justify your expression
AI: Note that $k$ is a common friend of $i$ and $j$ iff $F_{i,k}=F_{j,k}=1$, equivalently $F_{i,k}F_{j,k}=1$. So the number of common friends of $i$ and $j$ is
$$C_{i,j}=\sum_{k=1}^n F_{i,k}F_{j,k}=\sum_{k=1}^n F_{i,k}F_{k,j}.$$
Now what matrix operation does that look like?
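(Spoiler for the question above.) Since $F$ is symmetric with $0/1$ entries, that sum is exactly $(FF^T)_{ij}=(F^2)_{ij}$. A small numpy check of mine:

```python
# C = F^2 counts common friends; diagonal entries count friends (my own sketch).
import numpy as np

F = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
C = F @ F
print(C[0, 1])       # common friends of persons 0 and 1 (here: just person 2)
print(np.diag(C))    # each person's total number of friends
```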
|
H: A wrong understanding of Permutation and Combination
Suppose we have 10 different things that we want to distribute to P and Q, where P gets 3 and Q gets 7. In how many ways can we do this? (Order doesn't matter within what P or Q gets; i.e., if P gets {a,b,c} then it is the same as {b,c,a}.)
Now we know the answer will be $${10 \choose 3}\text{ Or } {10 \choose 7}$$
But my basic understanding is this :
The picture that alsways plays out in my mind is this
Row 1 : $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$ $\oplus$
Row 2: |_______| |_______________|
P Q
My understanding:
First we arrange the different things in row 1, i.e., in $10!$ ways.
Then we just bring row 1 down into row 2: whichever balls are above P's box go into it, and the balls above Q's box go into that box.
But even within these small individual boxes there can be many permutations (which are supposed to be treated as the same, but currently are not),
so we find the number of permutations possible within these small boxes. For P it is $3!$ and for Q it is $7!$.
Now WE DIVIDE our initial arrangements by $3!\cdot 7!$ (why not subtract?) and get $$\frac{10!}{3! \cdot 7!}$$
So we got our Answer .
Now suppose we have 10 things of which 3 are identical and 7 are different (non-identical), i.e., a total of 8 "species" of balls. Find the number of ways you can distribute these to P and Q such that P gets 3 and Q gets 7.
(I don't know the actual answer, but my friend told me that the answer we were getting from this logic is wrong.)
By the same logic:
Imagine the same picture (the two rows, one with balls and the other with empty boxes).
First we arrange: $\frac{10!}{3!}$ ways (divided by $3!$ because three balls are identical).
Then we try to find the arrangements in those little boxes. But this time it is not straightforward, because the identical balls can be split between the boxes in different ways:
Case A: P has 3 identical and Q doesn't have any identical, so we get $\frac{3!}{3!} \cdot 7! $
Case B: P has 2 identical, Q has 1, so $ \frac{3!}{2!} \cdot 7!$
Case C: P has 1 identical, Q has 2, so $ 3! \cdot \frac{7!}{2!}$
Case D: P has 0 identical, Q has all 3, so $ 3! \cdot \frac{7!}{3!} $
We add them to get $$\frac{7!3!}{1}+\frac{7!3!}{3}=\frac{7!3!(4)}{3}=8!$$
Now we divide the arrangements by the number of repetitions (or whatever) and we get
$$ \frac{10!/3!}{8!}=\frac{10!}{3!\,8!}=15$$
Is this logic correct? Is there a shortcut for this? Also, why do we divide and not subtract the repetitions (in step 5 of question 1 and step 4 of question 2)?
AI: The only reason that you can divide in the first problem is that each single case among the $10!$ original arrangements is
counted with equal weight,
with a precisely equal distribution into equivalent cases,
where you actually want the count of the equivalent case-classes.
The problem with step 4 in the second problem is that not all of these cases have the same weight. Far fewer of the arrangements put all of the 3 identical balls into P, for example.
There is no need to go about dividing the total number of arrangements. Step 2, choosing the objects for each person directly, does more than well enough in actually counting the different ways the objects can be distributed.
To do so, simply note that as a partition, choosing the objects for P completely determine the objects for Q (everything else). So:
If P has 3 identical, there is precisely $\binom{7}{0}=1$ case
If P has 2 identical, then you are choosing 1 of the other 7 non-identical objects, so precisely $\binom{7}{1}$ cases.
If P has 1 identical, choose 2 of the other 7, for $\binom{7}{2}$ cases.
If P has 0 identical, you choose $\binom{7}{3}$ ways.
The sum of these is the answer, without reference to the 10 total objects.
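Adding the cases up (a one-liner of mine):

```python
# Summing the four cases listed above (my own check).
from math import comb
print(sum(comb(7, k) for k in range(4)))   # 1 + 7 + 21 + 35 = 64
```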
|
H: Questions regarding inverse trigonometric integration
Intregrate $\displaystyle \int \frac{x+5}{\sqrt{9-(x-3)^2}} \, dx $
I split the integral
$$\int \frac{x+5}{\sqrt{9-(x-3)^2}} \, dx=\int \frac5{\sqrt{9-(x-3)^2}}\, dx+\int \frac{x}{\sqrt{9-(x-3)^2}} \, dx$$
For the first integral I use the integral table which states
$$ \int \frac{1}{\sqrt{a^2-u^2}} \, du = \sin ^{-1}\left(\frac{u}{a}\right) +C$$
where in this case:
$u=(x-3)$,
$a=3$
Taking out the constant of $5$ and integrating, the problem reduces to
$$ \int \frac{x}{\sqrt{9-(x-3)^2}} \, dx + 5 \sin ^{-1}\left(\frac{x-3}{3}\right) + C $$
I'm not sure how to integrate the remaining integral, however...
AI: Let $u=x-3$:
$$\int \frac{u+3}{\sqrt{9-u^2}} \; du$$
$$=\int \frac{u}{\sqrt{9-u^2}} \; du + \int \frac{3}{\sqrt{9-u^2}} \; du$$
Where the second integral is similar to the one you previously calculated and the first one is another substitution with $t=u^2$. Putting it all together with your first integral yields:
$$-\sqrt{9-(x-3)^2}+8\arcsin{\left(\frac{x-3}{3}\right)}+C$$
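You can confirm the antiderivative by differentiating it back; a sympy check of mine:

```python
# Differentiating the result should recover the integrand (my own sketch).
import sympy as sp

x = sp.symbols('x')
F = -sp.sqrt(9 - (x - 3)**2) + 8*sp.asin((x - 3)/3)
print(sp.simplify(sp.diff(F, x) - (x + 5)/sp.sqrt(9 - (x - 3)**2)))  # 0
```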
|
H: A problem with the definition of $e^x$
I am very experienced in the world of calculus, but there is a certain problem I need to solve that I can't quite get my head around. The canonical definition of $e$ is $$e=\lim_{n\to\infty}{\left(1+\frac{1}{n}\right)^n}$$
Therefore, $$e^x=\lim_{n\to\infty}{\left(1+\frac{1}{n}\right)^{nx}}$$
However, I often see this written as
$$e^x=\lim_{n\to\infty}{\left(1+\frac{x}{n}\right)^{n}}$$
Why do these limits have the same value?
AI: Perhaps it is better to think of this the other way around. You first treat $e^x$ as purely a notation, and then set
$$e^x := \lim_{n \rightarrow \infty} \left(1 + \frac{x}{n}\right)^n$$
which you can then prove satisfies the two properties expected of an exponential function, namely that
$$e^0 = 1$$
and
$$e^{x + y} = e^x e^y$$
. Hence, to find its "base", simply set $x = 1$. And thus you derive the expression for $e$, and use this to define it, so that the above is now a literal exponentiation.
ADD: Now I see you want this the "first" way around, which wasn't quite clear from how the question was written (it said "a problem with the definition of $e^x$", which is why I answered it this way).
In that case, we need another definition for exponentiation of a real base to a real exponent. In fact, such a definition - and arguably a more basic and intuitive one to start with - exists: it is this.
The exponential $b^x$ with a positive base $b$ and arbitrary real number $x$, is the unique function which satisfies the following axioms.
$b^0 = 1$.
For any two real numbers $x$ and $y$, $b^{x + y} = b^x b^y$.
$b^x$ is continuous in $x$.
In particular, the first two properties effectively fix the values at rational numbers $x$; the third then fixes them at irrational numbers (the proof would be for a different answer, but basically it boils down to the fact that the function the first two properties determine on the rationals can be thought of as having "an uncountable number of removable singularities", and we then "remove" them all).
(From a more sophisticated perspective, the exponential maps $x \mapsto b^x$ are the homeomorphism-isomorphisms between the additive topological group of all reals and the multiplicative topological group of positive reals.)
Once we know that a function like this exists, and then define
$$e := \lim_{n \rightarrow \infty} \left(1 + \frac{1}{n}\right)^n$$
we then first have a consequence
$$e^x = \left[\lim_{n \rightarrow \infty} \left(1 + \frac{1}{n}\right)^n\right]^x$$
. We now have to move the real power into the limit. This requires a lemma:
Lemma 1. (Continuity in the base) The function $b \mapsto b^x$, with a fixed $x$, is continuous.
(Can try adding a proof but wanna keep this mostly to exposing the theory and not filling out all the technical details.). With that we can exchange the limit out from underneath the power:
$$e^x = \lim_{n \rightarrow \infty} \left[\left(1 + \frac{1}{n}\right)^n\right]^x$$
The next step is the following rule (again, proof omitted).
Lemma 2. For any reals $x$ and $y$ and a given base $b$, $(b^x)^y = b^{xy}$.
From that, we can go to
$$e^x = \lim_{n \rightarrow \infty} \left(1 + \frac{1}{n}\right)^{nx}$$
Note that this works because the arbitrary real power is defined and continuous in both arguments. We can thus then take the substitution $m := nx$ to get (this follows from the limit rule for a composite of continuous functions):
$$e^x = \lim_{m \rightarrow \infty} \left(1 + \frac{1}{\frac{m}{x}}\right)^m = \lim_{m \rightarrow \infty} \left(1 + \frac{x}{m}\right)^m$$
.
|
H: Bijections Question (from TAACOPS) About Integers to Evens
I was working on a problem about bijections (just learned about this today, so please be harsh/uber-specific when answering/correcting me so that I get a handle on the terminology). So far, my very limited understanding of a bijection is that it is both onto (meaning that the function's range is the whole codomain) and a 1-to-1 relationship (correct me if I'm wrong).
The problem that I'm working on is this:
Let A denote the even integers (remember, 0 is even). Is there a bijection from $\mathbb Z$ to A?
Now the answer that I got was yes, but I struggle to understand how. Since a bijection is 1-to-1, that means that for every one input there is exactly one output for the function. So what would the bijection describing the transformation from $\mathbb Z$ to A look like?
To give you an example of what I mean, a previous problem/exercise had asked whether there was a bijection from $\mathbb N$ to $\mathbb Z$. The answer (which I managed to solve) was
$$f(x) =
\begin{cases}
x/2, \text{if x is even} \\
-\frac{x+1}{2}, \text{if x is odd}
\end{cases}$$
This ensures that every single natural number gets mapped to an integer successfully, and accounts for all cases. I am trying to find a similar map from $\Bbb Z$ to A.
Thanks!
AI: You can use the map $f:\mathbb{Z}\to A$ defined by $n\mapsto 2n.$ In many, if not most cases, it is a good idea to prove that a map is a bijection by proving that it is an injection and a surjection. The function $f$ is an injection because $$2n=2m\implies n=m$$ and $f$ is a surjection because for all $t\in A,$ there exists an element $r\in \mathbb{Z}$ such that $f(r)=t,$ namely $r=\frac{t}{2}.$ Also I recommend not using the term "one-to-one" for a bijection because, although a "one-to-one correspondence" can refer to a bijection, it is easy to confuse with a "one-to-one function" which is another term for an injection. Confusing, I know.
Of course, there are other ways of proving that a bijection exists, like taking the composition of bijections, taking the inverse of a bijection, or using the Cantor-Schröder–Bernstein theorem.
Let me know if you have any questions. By the way, that's a great book by Zeitz. I read an older edition years ago when I was in high school.
|
H: Finding $\iint_D \sqrt{\left | x-y \right |} \,dx\, dy$ where $D$ is a rectangular region
I was tasked to compute the following double integral:
$$\iint\limits_D \sqrt{\left | x-y \right |}\, dx\, dy\,,$$
where rectangular region $D$ is bounded by $0 \leq x \leq 1$ and $0 \leq y \leq 2$.
Direct integration is futile. 3D visualization of the graph reveals a trough, $z = 0$, along $y = x$. I suppose we could rotate the whole graph, including region D 45 degrees clockwise such that we can rewrite the function to integrate as $\sqrt{x}$ which will be easier to integrate.
AI: $$\int\limits_{x=0}^1 \int\limits_{y=0}^x \sqrt{x-y}\ dy\ dx + \int\limits_{x=0}^1 \int\limits_{y=x}^2 \sqrt{y - x}\ dy\ dx = \frac{4}{15}+\frac{4}{15} \left(4
\sqrt{2}-1\right) = \frac{16 \sqrt{2}}{15}$$
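A numerical check of mine with scipy agrees:

```python
# Double integral of sqrt(|x - y|) over [0,1] x [0,2] (my own verification).
from scipy import integrate
import numpy as np

val, err = integrate.dblquad(lambda y, x: np.sqrt(abs(x - y)),
                             0, 1, lambda x: 0, lambda x: 2)
print(val, 16 * np.sqrt(2) / 15)   # both ~1.50849
```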
|
H: If Ricci tensor has a null eigenvector in dimension $3$, then it has at most two nonzero eigenvalues and $|\text{Ric}|^2 \geq \dfrac{R^2}{2}$
Here's the necessary context:
I assume a null eigenvector would be a vector $v$ such that $\operatorname{Ric}(v, \cdot) = 0$ (i.e $\text{Ric}_{ij}v^{j} = 0$). But I don't understand why the claim "where the Ricci tensor has a null eigenvector, it has at most two nonzero eigenvalues" is so obvious - and why that implies the inequality in the title. I would really appreciate it if someone gave a detailed explanation of why this is true. Thanks in advance.
Also, it is relevant to point out that $R$ stands for the scalar curvature.
AI: The Ricci tensor is a $3 \times 3$ matrix, so it has at most 3 eigenvalues. If it has a null eigenvector, then $0$ is an eigenvalue; so it has at most 2 non-zero eigenvalues. If we call these eigenvalues $a,b,$ then we have $$|\mathrm{Rc}|^2 = a^2 + b^2, \; R = \mathrm{tr}\,\mathrm{Rc}=a+b;$$ so applying the inequality $2ab \le a^2 + b^2$ we see $$R^2 = a^2 + 2ab + b^2 \le 2(a^2 + b^2) = 2|\mathrm{Rc}|^2.$$
|
H: Will the largest possible value of the dot product between a vector A and a mystery vector (x, y) of length 1 always be the magnitude of A?
The question is pretty much all in the title, really.
I have the vector (2, 1) dotted with (x, y), where the length of this second vector is 1, and the largest possible value of the dot product is $\sqrt{5}$. Is this always the case?
AI: Yes, because this is just the size of the projection of $A$ in the direction of $(x,y)$ (using the notation in the title). It is realized precisely when $A$ points in the same direction as $(x,y)$, up to sign.
|
H: If $ \sum_{n\geq 1}\mu( \{|f_n|\geq n\})<\infty $ then $ f_n-f_n 1_{|f_n|\leq n}\underset {n}{\to} 0 $
Let $(E,\mathcal {A},\mu) $ be a finite measure space and $\{f_n\} $ be a sequence of bounded functions in $L^1$ such that
$$
\sum_{n\geq 1}\mu( \{|f_n|\geq n\})<\infty
$$
Can we say that
$$
f_n-f_n 1_{|f_n|\leq n}\underset {n}{\to} 0
$$
I thought of the Borel–Cantelli lemma but I am unable to use it.
AI: This is a consequence of the direct part of the Borel–Cantelli lemma. In terms of our problem, the function $g=\sum_n\mathbb{1}_{\{|f_n|\geq n\}}$ is integrable (its integral is the convergent sum in the hypothesis), and so it is finite $\mu$-a.e.
That means that for $\mu$-a.e. $x\in E$, there is an $N(x)$ such that $n\geq N(x)$ implies $\mathbb{1}_{\{|f_n|\geq n\}}(x)=0$. That is, for $n\geq N(x)$,
$$
f_n(x)-f_n\mathbb{1}_{\{|f_n|\leq n\}}(x)=f_n(x)\mathbb{1}_{\{|f_n|> n\}}(x)=0
$$
|
H: What is $\nabla\wedge\vec{\mathbf F}(x,y,z)$
What is $\nabla\wedge\vec{\mathbf F}(x,y,z)$, where $\vec{\mathbf F}$ is a vector field?
I assume it's the dual of the curl, but I need an explicit formula. And what is it called? Google is getting me nowhere.
AI: You can start with the formula for the curl and work your way backwards. We have that
$$\operatorname{curl} \vec{F} = (\partial_yF_z - \partial_z F_y)dx + (\partial_z F_x - \partial_x F_z)dy + (\partial_xF_y - \partial_yF_x)dz$$
We have that in $3$ dimensions the formula for the Hodge dual of a $1$-form, obtained by applying the Hodge star operator (those are the buzzwords to Google), is given by
$$\vec{v} = a dx + b dy + c dz \implies \star \vec{v} = \begin{pmatrix} 0 & c & -b \\ -c & 0 & a \\ b & -a & 0 \\ \end{pmatrix}$$
This means that the proper value to assign would be
$$\nabla \wedge \vec{F} = \begin{pmatrix} 0 & \partial_xF_y - \partial_yF_x & \partial_x F_z - \partial_z F_x \\ \partial_yF_x - \partial_xF_y & 0 & \partial_yF_z - \partial_z F_y \\ \partial_z F_x - \partial_x F_z & \partial_z F_y - \partial_yF_z & 0 \\ \end{pmatrix}$$
$$ = \begin{pmatrix} 0 & \partial_xF_y & - \partial_z F_x \\ - \partial_xF_y & 0 & \partial_yF_z \\ \partial_z F_x & - \partial_yF_z & 0 \\ \end{pmatrix} - \begin{pmatrix} 0 & \partial_yF_x & - \partial_x F_z \\ - \partial_yF_x & 0 & \partial_z F_y \\ \partial_x F_z & - \partial_zF_y & 0 \\ \end{pmatrix}$$
|
H: Is it good to use for all $\delta$ there exists $\epsilon$?
Logically, the following two definitions are exactly the same:
For all $\epsilon >0$, there exist $\delta >0$ such that if $0<\vert x-a\vert<\delta$, then $\vert f(x)-L\vert<\epsilon$.
For all $\delta >0$, there exist $\epsilon >0$ such that if $0<\vert x-a\vert<\epsilon$, then $\vert f(x)-L\vert<\delta$.
But would people say that the second one follows a "bad notation", a "hard-to-read notation", "less-elegant notation", or an "unconventional notation"? I am trying to make the readers happy.
AI: Yes, people would object. Although the choice of variable name doesn't matter mathematically, it can still be helpful or misleading. There is a general convention around the use of that particular pair of variables, and going against it will only confuse readers.
(Granted, every so often there is a good reason to go against such a convention ... but that tends to be pretty rare.)
|
H: Finding the optimal $\frac pq$ approximation for a real number given upper limits on $p$ and $q$
When answering a question on Stackoverflow I got curious about how to find the optimal $\frac pq$ approximation for a real number, $r$, where $p$ and $q$ are integers that are limited by the integer type's number of bits - or a lower limit, like $\sqrt{2^{bits-1}-1}$ which would allow multiplication of two of these fractions without risking overflow. In the original question OP chose this method:
$$p = 100000r$$
$$q = 100000$$
This results in larger errors than necessary when converted back and compared to the original $r$. I know the errors are unnecessarily large because I've found better approximations when testing alternative methods myself.
My question is twofold:
Is there a way to find the optimal $\frac pq$ pair without using a numerical convergence algorithm (which is what I'm working on right now)? I'm hoping that there's a method that makes a calculation that results in $\le 4$ combinations of $\frac pq$ to try out to find the best approximation.
If there isn't, how should I design the algorithm to make sure I always find the optimal fractional approximation while keeping the complexity (number of iterations) reasonably low?
My current algorithm starts with $p$ or $q$ at the maximum allowed for the $int$ type used and I'm testing against known approximations for $\pi$, such as $\frac{1068966896}{340262731}$ which is the best approximation when the integer type is an $int_{32}$, given that the numerator can be negative and $p$ must therefore be in the range $±2^{31}-1$, i.e. $[-2147483647, +2147483647]$.
If we take $r = \pi$ as an example and the $int$ type is an $int_{16}$, the algorithm will start with these values:
$$p = 2^{15}-1 = 32767$$
$$q = \left\lceil\frac{2^{15}-1}{\pi}\right\rceil = 10431$$
It will then repeatedly decrease either $p$ or $q$, depending on which gives the lowest error when converted back and compared to $r$. It also saves the best result so far:
if $\left\lvert \frac pq - r \right\rvert \lt e_{low}$ save the $\frac pq$ pair and the new $e_{low}$.
This goes on until either $p$ or $q$ reaches $0$. The $\frac pq$ combination that gave the smallest error, $e_{low}$, becomes the result after applying the $\gcd$.
This seems to work, but I don't know enough math to know whether it actually does for any upper integer limit that I impose. It has worked with the limits I have tested, though. It's also extremely slow. When trying $int_{64}$ approximations it became clear that I needed shortcuts; it just takes too many iterations to be of any practical use. I added a $\gcd$ shortcut and changed the $e_{low}$ comparison to include equality with $e_{low}$:
$$\left\lvert\frac pq - r \right\rvert \le e_{low}$$
When this condition was met I used $\gcd(p,q)$ to skip ahead. This skipping made it a lot faster, but it also missed some optimal solutions, so I added something to get out of a possible local minimum: if $\gcd \gt 2$ I simply multiplied both $p$ and $q$ by $2$. This improved things a lot - but it was still a bit slow and it still missed optimal solutions. I then tried multiplying by $3$ for even $\gcd$s and by $2$ for odd $\gcd$s, but there was no improvement in the result that I could see. It was obviously a bit slower, though.
I realize that I'm just guessing and need some pointers in the right direction.
I'm limited by having studied high school math 30+ years ago - and I haven't used it much since then - so don't worry about overexplaining things. This limitation has likely made me incapable of recognizing possible solutions to this very problem when I searched for it...
AI: A best approximation of a real number $r$ is a rational fraction $a/b$ with $b>0$ such that for every rational fraction $c/d$ with $d \le b$ and $c/d \ne a/b$,
$$\left| r - \frac{a}{b}\right| < \left|r - \frac{c}{d}\right|$$
Theorem: Every best approximation of a number $r$ is either a convergent or an intermediate fraction of the continued fraction representing $r$ (if you include the "$(-1)$th order" convergent $1/0$).
For example: if $r = \pi$, the continued fraction representation starts
$3 + 1/(7 + 1/(15 + 1/(1 + ...)))$. The first few convergents are
$$\frac{1}{0}, \frac{3}{1}, \frac{22}{7}, \frac{333}{106}, \frac{355}{113}$$
The intermediate fractions between $1/0$ and $22/7$ are $$\frac{4}{1}, \frac{7}{2}, \frac{10}{3}, \frac{13}{4}, \frac{16}{5}, \frac{19}{6}$$ The intermediate
fractions between $3/1$ and $333/106$ are
$$ \frac{25}{8}, \frac{47}{15}, \frac{69}{22}, \frac{91}{29}, \frac{113}{36}, \ldots,
\frac{311}{99}$$
The first few best approximations of $\pi$ are
$$\frac{3}{1}, \frac{13}{4}, \frac{16}{5}, \frac{19}{6}, \frac{22}{7}, \frac{179}{57},
\frac{201}{64}, \frac{223}{71}, \frac{245}{78}, \frac{267}{85}, \frac{289}{92}, \frac{311}{99}, \frac{333}{106}, \frac{355}{113}$$
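For experimenting, here is a minimal Python sketch of this search. It leans on `fractions.Fraction.limit_denominator`, which walks exactly these convergents/intermediate fractions to find the closest fraction under a denominator cap; the extra loop enforcing the numerator bound is a heuristic of mine, not a proven-optimal step, so treat the whole thing as a starting point:

    from fractions import Fraction
    import math

    def best_approx(r, max_num, max_den):
        """Best p/q for real r with |p| <= max_num and q <= max_den.

        limit_denominator returns the optimal fraction for a given
        denominator cap; we shrink the cap until the numerator fits
        (a heuristic -- it may miss corner cases for very skewed bounds).
        """
        q_cap = max_den
        while q_cap > 0:
            f = Fraction(r).limit_denominator(q_cap)
            if abs(f.numerator) <= max_num:
                return f
            q_cap = f.denominator - 1  # numerator too big: force a smaller q
        raise ValueError("no admissible fraction")

    # int16-style bounds from the question:
    print(best_approx(math.pi, 2**15 - 1, 2**15 - 1))  # 355/113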
|
H: Integral of a floor function.
Well, I was trying to solve this problem, which told me to find the integral
\begin{equation}
\int_{0}^{2}f(x)dx
\end{equation}
with $$x \in [0, \infty)$$
of this function:
\begin{equation}
f(x) =\left \{ \begin{matrix} \frac{1}{\lfloor{\frac{1}{x}}\rfloor} & \mbox{if } 0 < x < 1
\\ 0 & \mbox{if } x = 0 \mbox{ or } x > 1\end{matrix}\right.
\\
\end{equation}
As far as I can see, I just have to integrate over the interval $[0,1]$.
I tried: first I thought of making a change of variable, so I had
\begin{equation}
\lfloor{\frac{1}{x}}\rfloor = y ,
\end{equation}
But I knew that the derivative of the floor function is zero wherever it is constant, so I thought this wouldn't get me anywhere.
After that I tried to study whether this function is even integrable, but I really can't get it. So I am blanking.
AI: Observe that after the substitution $\frac{1}{x} = y$ (so $dx=-\frac{dy}{y^2}$),
$$I = \int_1^\infty \frac{dy}{\lfloor y\rfloor y^2} = \sum_{n=1}^\infty \frac{1}{n}\int_n^{n+1}\frac{dy}{y^2} = \sum_{n=1}^\infty \frac{1}{n}\left(\frac{1}{n}-\frac{1}{n+1}\right)$$
$$ = \sum_{n=1}^\infty \frac{1}{n^2} - \sum_{n=1}^\infty \left(\frac{1}{n} - \frac{1}{n+1}\right) = \frac{\pi^2}{6} - 1$$
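As a quick numerical sanity check (assuming `scipy` is available; `quad` may warn about the jump discontinuities but still converges here):

    import math
    from scipy.integrate import quad

    # f has jump discontinuities at x = 1/n, so give quad extra subdivisions
    f = lambda x: 1.0 / math.floor(1.0 / x) if 0 < x < 1 else 0.0
    val, err = quad(f, 0, 1, limit=500)
    print(val, math.pi**2 / 6 - 1)  # both ~0.644934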
|
H: If $p$ is a prime and $p\equiv1\pmod 4$, then $x^2+4=py^2$ has solutions
Prove that if $p$ is a prime and $p\equiv1\pmod 4$, then $x^2+4=py^2$ has integer solutions.
This is related to units in the quadratic number field $\mathbb Q(\sqrt p)$. As far as I know, $x^2+4\equiv0\pmod p$ has a solution because $-4$ is a quadratic residue modulo $p$. However, this is not sufficient to prove the claim. Any help will be appreciated.
AI: As Jyrki comments, one needs to prove that $u^2-pv^2=-1$ is soluble in integers.
This is a well-known corollary of the theory of Pell's equation.
There is a minimal solution of Pell's equation
$$a^2-pb^2=1$$
in positive integers with $a$ (or $b$) as small as possible.
Note that $b$ must be even: if $b$ were odd, then $pb^2+1\equiv2\pmod 4$
and so could not be a square. Therefore $a$ is odd. Then
$$pb^2=a^2-1=(a-1)(a+1)$$
and so
$$p\left(\frac{b}2\right)^2=\left(\frac{a-1}2\right)\left(\frac{a+1}2\right)$$
and as $(a-1)/2$ and $(a+1)/2$ are coprime positive integers, one is a square
and the other $p$ times a square.
In any case,
$$1=\frac{a+1}2-\frac{a-1}2$$
and so $1$ equals either $u^2-pv^2$ or $pv^2 - u^2$ for some positive integers $u$
and $v$. The former would contradict the minimality of $(a,b)$ as a solution
to Pell's equation; the latter is what we want. Finally, multiplying $u^2-pv^2=-1$ by $4$ gives $(2u)^2+4=p(2v)^2$, so $(x,y)=(2u,2v)$ solves the original equation.
|
H: How to show that (-√2,√2)$\cap$Q is closed and bounded subset of Q ; but not compact?
How do I show that $(-\sqrt{2},\sqrt{2})\cap\mathbb{Q}$ is a closed and bounded subset of $\mathbb{Q}$, but not compact?
The part that the given set is bounded is clear. How do I show that it is closed in $\mathbb{Q}$? Let $G=(-\sqrt{2},\sqrt{2})\cap\mathbb{Q}$. Then $G$ is closed in $\mathbb{Q}$ if and only if $G=\mathbb{Q}\cap W$ where $W$ is a closed set in $\mathbb{R}$, but I can't think of any such $W$. And how do I show that $G$ is not compact in $\mathbb{Q}$?
AI: Hint 1: $(-\sqrt{2},\sqrt{2}) \cap \mathbb Q = [-\sqrt{2},\sqrt{2}] \cap \mathbb Q$.
Hint 2: Show that if $a_n \in (-\sqrt{2},\sqrt{2}) \cap \mathbb Q$ is a sequence with $a_n \to \sqrt{2}$, then $\{(-a_n,a_n)\cap\mathbb Q\}_{n}$ is an open cover of this set with no finite subcover.
|
H: Math package to help verify subsitutions in finite extended field
This question originates from Pinter's Abstract Algebra, Chapter 31, Exercise B5:
Find the root field of $x^3+x^2+x+2$ over $\mathbb{Z}_3$.
Let $a(x)=x^3+x^2+x+2$. We can show $a(x)$ is irreducible over $\mathbb{Z}_3$ by direct substitution of 0,1 and 2. Let $u$ be a root of $a(x)$ over $\mathbb{Z}_3$. So $\mathbb{Z}_3(u)$ is a field extension of $\mathbb{Z}_3$ with a basis of $\{1, u, u^2\}$, and has exactly 27 elements. Dividing $a(x)$ by $(x-u)$, we get $a(x) = (x-u)(x^2 + (u+1)x + (u^2+u+1))$ over $\mathbb{Z}_3(u)$. Let $b(x) = x^2 + (u+1)x + (u^2+u+1)$. As $u^3+u^2+u+2=0$, we can show $b(x)$ is irreducible in $\mathbb{Z}_3(u)$ by direct substitution of each of the 27 elements of $\mathbb{Z}_3(u)$; and so on.
However, the 27 substitutions are quite a tedious and time-consuming process. So I wonder if there is any symbolic math package that we can use to write up a polynomial function (over a finite field) with a simple loop to verify such substitutions. SageMath looks promising, but so far I haven't been able to figure out the exact way to do so.
Any idea?
AI: If you define
$$b(x)=\frac{a(x)}{x-u}$$
then $b(x)$ will not be irreducible over $K=\Bbb Z_3(u)$ (the finite field of $3^3$ elements). To see this, note that since the coefficients of $a$ lie in $\Bbb Z_3$, cubing fixes them (the Frobenius map), so $a(u)=0$ gives $a(u^3)=a(u)^3=0$, and hence $b(u^3)=0$.
As $u^3=-u^2-u-2=2u^2+2u+1$ you might try substituting that into $b(x)$ to double-check that in fact $b(u^3)=0$, and so $b(x)$ is reducible over $K$.
In fact, if $f(x)$ is irreducible of degree $d$ over a finite field $F$
of order $q$, and $f(u)=0$ in
some extension field, then $f$ splits into linear factors over $F(u)$: indeed
$$f(x)=(x-u)(x-u^q)(x-u^{q^2})\cdots(x-u^{q^{d-1}}).$$
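As for the software question itself, here is a rough SageMath sketch of the brute-force check (the exact syntax is from memory, so treat it as a starting point rather than gospel):

    R.<x> = GF(3)[]                     # polynomials over Z_3
    a = x^3 + x^2 + x + 2
    print(a.is_irreducible())           # True
    K.<u> = GF(3^3, modulus=a)          # Z_3(u), 27 elements, a(u) = 0
    S.<y> = K[]
    b = (y^3 + y^2 + y + 2) // (y - u)  # y^2 + (u+1)y + (u^2+u+1)
    print([c for c in K if b(c) == 0])  # the two remaining roots: u^3, u^9
    print(b.factor())                   # b splits into linear factors over K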
|
H: Is the following Leibniz's notation in the chain rule written correct?
I have a doubt about Leibniz's notation in the chain rule.
Suppose that $f(x) = \tan^n(x)$.
I want to use the Leibniz's notation, so I think that I will have:
Let ${u(x)=\tan(x)}$
$${\frac{d}{dx}f(x)}=\frac{d}{du}u(x)^n\cdot \frac{d}{dx}u(x).$$
Another example (that I think is good): let ${m(x)=2x}$
$${\frac{d}{dx}\sin(2x)= \frac{d}{dm}\sin(m(x))\cdot \frac{d}{dx}m(x)}.$$
Is my use of Leibniz's notation correct?
AI: Some classic books write the following: if $z=F(y)$ and $y=f(x)$, then under appropriate conditions $$\frac{dz}{dx} = \frac{dz}{dy}\frac{dy}{dx}.$$ Others think it is better to write $$\frac{d(F \circ f)}{dx} = \frac{dF}{dy}\frac{df}{dx}.$$
Note that neither places a function application such as $u(x)$ under the $d$ in the denominator. So for your first example it would be more conventional to write $z=F(y)=y^n$ and $y=f(x)=\tan x$, so that $$\frac{dz}{dx} = \frac{d \tan^nx}{dx} = \frac{dz}{dy}\frac{dy}{dx}=\frac{dy^n}{dy}\frac{d \tan x}{dx}.$$ The same applies to your second example.
|
H: Finding the domain of convergence
Find the domain of convergence
$ \sum_{n=0}^{\infty} n (z-1)^n$
My attempt : $-1 < (z-1) <1 \tag 1$
adding $1$ on $(1)$, we have
$0 \le z <2$
so the domain convergence will be $z \in [0,2)$
Is it true?
AI: If this is really a question about Complex Analysis, then your answer doesn't make sense. Since $\limsup_n\sqrt[n]n=1$, the radius of convergence is $1$. And if $|z-1|=1$, then you don't have $\lim_nn|z-1|^n=0$, and therefore the series $\sum_{n=1}^\infty n(z-1)^n$ diverges. So, your series converges if and only if $|z-1|<1$.
|
H: Euclidean Space is a Lie Group under Addition
I am trying to understand the definition and examples of a Lie group supplied by An Introduction to Manifolds by Loring Tu (Second Edition, page 66).
The first example is as follows.
The Euclidean space $\mathbb{R}^n$ is a Lie group under addition.
For $\mathbb{R}^n$ to be a Lie group, the multiplication map $\mu: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$, $\mu(x, y) = x + y$, and the inverse map $i: \mathbb{R}^n \to \mathbb{R}^n$, $i(x) = \frac{1}{x}$, need to be $C^{\infty}$. I am convinced that $\mu$ is a $C^{\infty}$ map on $\mathbb{R}^n \times \mathbb{R}^n$, but I don't see how the inverse map $i$ is $C^{\infty}$ at $x = 0 \in \mathbb{R}^n$. To understand it better, I considered the special case $n = 1$. In that case the inverse map $i$ is not defined at $x = 0 \in \mathbb{R}$.
What am I missing here? How can then $\mathbb{R}^n$ be a Lie group under addition?
AI: It doesn't make sense to talk about $\frac1x$ when $x\in\Bbb R^n$ and $n>1$. The inversion, in this context, is the map $x\mapsto-x$.
|
H: How to check the following two connected open sets coincide?
As subsets of a Hilbert space (we do not specify the concrete space here), suppose $A_1,B_1,A_2,B_2$ are open sets satisfying
\begin{equation}
A_1 \cap B_1=A_2\cap B_2 =\emptyset, \quad A_1 \cup B_1=A_2 \cup B_2.
\end{equation}
Moreover, $A_1$ and $A_2$ both contain the point $0$, and they are both connected sets. Can we then conclude that $A_1=A_2$?
AI: $$A_1=A_1\cap(A_1 \cup B_1)=A_1\cap(A_2\cup B_2)=(A_1\cap A_2)\cup(A_1\cap B_2)$$
Since $A_2\cap B_2=\emptyset$, this exhibits $A_1$ as a disjoint union of two open sets; thus $A_1\cap A_2 =A_1$ or $A_1 \cap A_2=\emptyset$, as $A_1$ is connected. The latter cannot happen as $0\in A_1 \cap A_2$. We conclude $A_1\subseteq A_2$.
By the same argument $A_2\subseteq A_1$ and we have $A_1=A_2$.
|
H: Find $\int \frac{x}{\sin^2x-3}dx$
I started like this
\begin{align*}
\int \frac{x}{\sin^2x-3}dx & =\int \frac{x\sec^2x}{\tan^2x-3\sec^2x}dx\\
& =-\int \frac{x\sec^2x}{2\tan^2x+3}dx\\
& = -\left [ \frac {x\tan^{-1}\left (\frac {\sqrt {2}\tan x}{\sqrt {3}}\right)}{\sqrt {6}}-\int \frac {\tan^{-1}\left (\frac {\sqrt {2}\tan x}{\sqrt {3}}\right)}{\sqrt {6}}dx\right]
\end{align*}
Now I am unable to proceed further.
AI: As said in comments, after integration by parts, you are left with
$$I=\int \tan ^{-1}\left(\sqrt{\frac{2}{3}} \tan (x)\right)\,dx$$ which is a monster that I should try to avoid.
In order to compute it, I should rather consider series expansions built around $x=k \pi$ and use for
$$J_k=\int_{k\pi}^{(2k+1)\frac \pi 2} \tan ^{-1}\left(\sqrt{\frac{2}{3}} \tan (x)\right)\,dx$$ the series expansion of the integrand. This is
$$\tan ^{-1}\left(\sqrt{\frac{2}{3}} \tan (x)\right)=t+\frac{t^3}{6}-\frac{3 t^7}{280}-\frac{t^9}{504}+O\left(t^{11}\right)$$ where $t=\sqrt{\frac{2}{3}} (x-\pi k)$.
For illustration, this would give
$$J_0=\frac{\pi ^2}{4 \sqrt{6}}\left(1+\frac{\pi ^2}{72}-\frac{\pi ^6}{80640}-\frac{\pi ^8}{3265920} \right)\approx 1.1305$$ while the monster would give as an exact solution
$$\frac{1}{2 \sqrt{6}}\Phi \left(\frac{2}{3},2,\frac{1}{2}\right)+\frac{1}{2} \log
\left(\frac{3}{2}\right) \tanh ^{-1}\left(\sqrt{\frac{2}{3}}\right)\approx 1.1326$$ For sure, we could have better results pushing the expansion to higher orders.
|
H: Bernoulli Series Are Singular Distribution
I have a question of the following:
Assume $Y_{1}, Y_{2} \ldots$ is a sequence of iid rvs so that $\mathcal{L}\left(Y_{j}\right)=\frac{1}{4} \delta_{0}+\frac{3}{4} \delta_{1},$ for $j \in \mathbb{N}$
Show that the distribution of $W$ is singular, where
$
W:=2 \sum_{j=1}^{\infty} Y_{j} 3^{-j}
$
What I have tried:
Suppose $Y_{1}, Y_{2}, \ldots$ are i.i.d. taking the value $1$ with probability $\frac{3}{4}$, and the value $0$ with probability $\frac{1}{4}$. Set $W=\sum_{n=1}^{\infty} Y_{n} 3^{-n}$
(i.e. the base-$3$ expansion of $W$ is $0.Y_{1} Y_{2} \ldots$).
Further, define $S \subseteq \mathbf{R}$ by
$
S=\left\{x \in[0,1] ; \lim _{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^{n} d_{i}(x)=\frac{3}{4}\right\}
$
where $d_{i}(x)$ is the $i^{\text{th}}$ digit in the (non-terminating) base-$2$ expansion of $x$. Then by the strong law of large numbers, we have $\mathbf{P}(W \in S)=1$ while $\lambda(S)=0$. Hence, the law of $W$ cannot be absolutely continuous. But clearly $\mathcal{L}(W)$ has no discrete component. We conclude that $\mathcal{L}(W)$ cannot be written as a mixture of a discrete and an absolutely continuous distribution. In fact, $\mathcal{L}(W)$ is singular with respect to Lebesgue measure $\lambda$ (written $\mathcal{L}(W) \perp \lambda$).
However, I realise that $W$ is not really given by this base-$3$ expansion: the digits of $W=2\sum_j Y_j 3^{-j}$ are $2Y_j\in\{0,2\}$, not $Y_j$.
Is there another way to prove it?
Thanks
AI: $W$ takes values in the Cantor set. Since $Y_j \in \{0,1\}$ almost surely, almost surely $W$ is of the form $\sum \frac {a_n} {3^{n}}$ where $a_n = 2Y_n \in \{0,2\}$. These sums all belong to the Cantor set $C$. Since $C$ has Lebesgue measure $0$, it follows that $W$ has singular distribution. Note that independence of the $Y_j$ is not required for this.
|
H: How to solve the following system of differential equations?
If $x(t)$ and $y(t)$ are the general solution of the system of the differential equations: $$\frac{dx}{dt}+\frac{dy}{dt}+2y=\sin t $$ $$\frac{dx}{dt}+\frac{dy}{dt}-x-y=0$$ Then which of the following conditions holds for $x(t)$ and $y(t)$? ($a$ is any arbitrary constant)
(a) $x(t)+y(t)=ae^t$
(b) $x(t)+y(t)=a\sin t$
(c) $x(t)-y(t)=ae^{-t}$
(d) $x(t)-y(t)=ae^t+\sin t$
We can write the above system of equations as: $$Dx+(D+2)y=\sin t$$ and $$(D-1)x+(D-1)y=0$$
Now, how to solve for $x$ and $y$ ?
The solution says solving above equation for $x$ and $y$ to get:
$(D-1)x=-\frac{\cos t - \sin t}{2}$ and $(D-1)y=\frac{\cos t - \sin t}{2}$
How to solve please let me know. Thanks.
AI: Multiplying the $1$st equation by $(D-1)$ and the $2$nd one by $D$ and then subtracting, we get
\begin{align}
\{(D-1)(D+2)-D(D-1)\}y&=(D-1)\sin t\\
\implies 2(D-1)y&=\cos t-\sin t\\
\implies (D-1)y&=\dfrac12(\cos t-\sin t)\\
\end{align}
So the auxiliary equation is
$m-1=0\implies m=1$
So C.F is $y_c=ce^t$
For P.I, \begin{align}
y_p&=\dfrac{1}{D-1}\cdot\dfrac12(\cos t-\sin t)\\
&=\dfrac12 \sin t\\
\end{align}
(check: $(D-1)\dfrac12\sin t=\dfrac12\cos t-\dfrac12\sin t$).
So the general solution is $y=ce^t+\dfrac12 \sin t$
Now from the $2$nd equation, we get
\begin{align}
(D-1)x&=-(D-1)y\\
&=-\dfrac12(\cos t-\sin t)\\
\end{align}
and by the same method as above, $x=de^t-\dfrac12 \sin t$.
Substituting $x$ and $y$ back into the $1$st equation gives
$$(d+3c)e^t+\sin t=\sin t\implies d=-3c,$$
so only one arbitrary constant survives (as it should: the operator determinant $D(D-1)-(D+2)(D-1)=-2(D-1)$ has degree $1$). Hence
$$x(t)+y(t)=(c+d)e^t=-2ce^t=ae^t,$$
which is option (a).
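A quick SymPy check of this pair (a sketch, with arbitrary symbol names) confirms both equations and the form of $x+y$:

    import sympy as sp

    t, c = sp.symbols('t c')
    y = c * sp.exp(t) + sp.sin(t) / 2
    x = -3 * c * sp.exp(t) - sp.sin(t) / 2
    print(sp.simplify(sp.diff(x, t) + sp.diff(y, t) + 2 * y - sp.sin(t)))  # 0
    print(sp.simplify(sp.diff(x, t) + sp.diff(y, t) - x - y))              # 0
    print(sp.simplify(x + y))                                              # -2*c*exp(t)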
|
H: Graph theory: equivalence relation properties and edge
I'm new to graph theory, and I am studying a question about graphs on $4$ vertices: let $V = \{a, b, c, d\}$ and let $E$ be a symmetric relation on $V$.
So at this moment, I have successfully managed to draw all 11 non-isomorphic graphs on the given vertices, and I have a rough idea about the symmetric relation: e.g. $E = \{(a, b), (b, a)\}$ would be a valid example of a symmetric relation (P.S. correct me if I am wrong; I am actually not sure about my understanding here).
The question asks further about selecting all graphs whose relation E is transitive after adding in all the loops.
I don't understand two things here:
How does requiring the relation $E$ to be transitive influence the look of a graph? In other words, what is the visualization of a graph that has this transitive property on its edges?
What happens to the initial 11 graphs if we are allowing loops in the graphs now?
Thanks for helping.
AI: For a relation $\sim$ to be transitive means $x \sim y, y\sim z \Rightarrow x \sim z$. For a symmetric relation this implies that whenever $x\sim y$, also $x\sim x$ and $y\sim y$ hold. In the drawn graph these appear as loops, so if we want to speak about a graph being a transitive symmetric relation, we need to impose loops on all vertices which are incident to an edge.
That being said, note that a connected graph defines a transitive relation if and only if it is a complete graph (edges between any two distinct vertices). This is because transitivity implies that whenever there are two consecutive edges the endpoints are connected by an edge. So any path can be iteratively made shorter until it becomes an edge from start to endpoint.
Hence in general every graph, which defines a transitive relation, is given by a disjoint union of complete graphs.
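If you want to see this computationally, the following rough Python sketch enumerates all $2^6$ edge sets on four labeled vertices, adds the forced loops, and counts the transitive ones; if the characterization above is right, it should print the Bell number $B_4 = 15$ (one transitive graph per set partition of the vertices):

    from itertools import combinations

    V = range(4)
    pairs = list(combinations(V, 2))

    def transitive(R):
        return all((x, z) in R for (x, y1) in R for (y2, z) in R if y1 == y2)

    count = 0
    for mask in range(1 << len(pairs)):
        R = set()
        for i, (u, v) in enumerate(pairs):
            if mask >> i & 1:
                # the edge, its symmetric copy, and the forced loops
                R |= {(u, v), (v, u), (u, u), (v, v)}
        count += transitive(R)
    print(count)  # 15: disjoint unions of complete graphs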
|
H: Radius of circle that touches 3 circles, which in turn touch each other
I had $3$ circles of radii $1$, $2$, $3$, all touching each other. A smaller circle was constructed such that it touched all the $3$ circles.
What is the radius of the smaller circle?
This is what I did:
I conveniently positioned the $3$ circles on the coordinate axes and found the coordinates of the centers of each circle.
Then I wrote a general equation of a circle (for the smaller one), and using the fact that the distance between the centers of two touching circles equals the sum of their radii, I found $3$ equations, which I could use to solve for the variables in the general equation. Hence I found the equation of the smaller circle, and thus its radius.
However, I believe that this method is very inefficient as I ended up with so many steps and sub steps to solve the equations.
Is there a better way to approach this?
I would prefer a geometrical solution instead of a coordinate solution.
Thanks for the help!!
Note :
I found that this question already has an answer here: https://math.stackexchange.com/questions/1867315/problem-including-three-circles-which-touch-each-other-externally
However, is there a more efficient solution for this than what was mentioned in those answers?
AI: Let $A$, $B$ and $C$ be centers of circles with radius $3$, $2$ and $1$ respectively and let $x$ be a radius of the needed circle with a center $D$.
Thus, $$\measuredangle ACB=90^{\circ},$$
$$\cos\measuredangle ACD=\frac{4^2+(1+x)^2-(3+x)^2}{2\cdot4(1+x)}=\frac{2-x}{2(1+x)},$$
$$\cos\measuredangle BCD=\frac{3^2+(1+x)^2-(2+x)^2}{2\cdot3(1+x)}=\frac{3-x}{3(1+x)}.$$
Since $\measuredangle ACD+\measuredangle BCD=\measuredangle ACB=90^{\circ}$, we have $\cos\measuredangle BCD=\sin\measuredangle ACD$, which gives
$$\left(\frac{2-x}{2(1+x)}\right)^2+\left(\frac{3-x}{3(1+x)}\right)^2=1$$ or
$$23x^2+132x-36=0,$$ which gives $$x=\frac{6}{23}.$$
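For a coordinate double-check, here is a sketch using SymPy: place $C=(0,0)$, $A=(4,0)$, $B=(0,3)$ (consistent with the right angle at $C$) and solve the three tangency distance equations directly:

    import sympy as sp

    px, py, x = sp.symbols('px py x', positive=True)
    eqs = [sp.Eq(px**2 + py**2, (1 + x)**2),        # distance to C (radius 1)
           sp.Eq((px - 4)**2 + py**2, (3 + x)**2),  # distance to A (radius 3)
           sp.Eq(px**2 + (py - 3)**2, (2 + x)**2)]  # distance to B (radius 2)
    print(sp.solve(eqs, [px, py, x]))  # x = 6/23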
|
H: is a diffeomorphism regular?
I have learned the inverse function theorem which ensures that a regular mapping (which has its inverse) is a (local) diffeomorphism.
But I wonder whether a diffeomorphism is regular.
I guess the answer would be 'yes', but I have no idea.
AI: If $M$ and $N$ are smooth manifolds and if $f: M \to N$ is a diffeomorphism, then by definition there is a smooth map $g: N \to M$ such that $f \circ g = I_N$ and $g \circ f = I_M$. Now via the chain rule, we get that for every $p \in M$
\begin{equation}
I_{N_{*}} = f_{*} \circ g_{*}: T_{f(p)}(N) \to T_{f(p)}(N)\\
I_{M_{*}} = g_{*} \circ f_{*}: T_p(M) \to T_p(M)
\end{equation}
where the '$*$' denotes the derivative. So $f_*$ is an isomorphism from $T_p(M)$ to $T_{f(p)}(N)$. Since this is true for every $p$, $f$ is regular.
|
H: Why does $x=e^t+2e^{-t},y=e^t-2e^{-t}$ plot to a straight line?
This parametrization satisfies $x^2-y^2=8$, so I was expecting a hyperbola. But what I got was a straight line. Why though?
https://www.wolframalpha.com/input/?i=parametric+plot+%28e%5Et%2B2e%5E%28-t%29%2Ce%5Et-2e%5E%28-t%29%29
EDIT- I tried a different range for $t$ and the point (8,0) isn't even on the line that gets plotted
AI: Try to make the plot for $t$ between, for example, $-3$ and $3$; then you should see it. Also, the point $(\sqrt 8,0)$ should be on there, not $(8,0)$.
Regarding the comments: it is not a straight line, since $x-y=4e^{-t}$ is not constant but varies with $t$. However, the reason why it looks like a straight line is that for 'large' $t > 0$ we have $x \approx e^t$ and $y \approx e^t$, so that $y \approx x$. Similarly, for large negative $t$ we have $x \approx 2e^{-t}$ and $y \approx -2e^{-t}$, so $y \approx -x$ along the other asymptote.
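For instance, a sketch with matplotlib (note the equal axis scales, without which the branch can still look line-like):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(-3, 3, 400)
    x = np.exp(t) + 2 * np.exp(-t)
    y = np.exp(t) - 2 * np.exp(-t)
    plt.plot(x, y)
    plt.gca().set_aspect('equal')  # unequal scales flatten the hyperbola
    plt.show()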
|
H: Find the largest diagonal of a parallelogram if the area is known
The original question:
In a parallelogram, the length of one diagonal is twice of the other
diagonal. If its area is $50\text{ sq. metres}$, then the length of
its bigger diagonal is...
A) $5\sqrt 2$ metres
B) $15\sqrt 2$ metres
C) $10\sqrt 2$ metres
D) $10$ metres
E) None of these
My attempt:
I will construct the parallelogram so that one of the diagonals (the shortest one) is the same as its height. (It's as if we take a rectangle, cut it into two pieces along the diagonal, rearrange them, and glue the vertical sides, giving a parallelogram.)
From parallelogram formula we have:
$$\begin{align}
b&=\text{base}\\
h&=\text{height}\\
bh&=50\\
bx&=50\tag{$x$ is the shortest diagonal}\\
b&=\frac {50}{x}\tag{1}\\
\end{align}$$
Suppose my parallelogram is $ABCD$ (in clockwise direction), and call the intersection point of the two diagonals $O$. Now, if we focus on the right triangle $OCD$, the hypotenuse is half of the largest diagonal, which is $x$. The vertical side is $\frac x 2$, and the base, from $(1)$, is $\frac{50}{x}$. From the Pythagorean theorem we have:
$$\begin{align}\left(\frac{x}{2}\right)^2 + \left(\frac{50}{x}\right)^2 &=x^2\\
\frac{2500}{x^2}&=\frac{3x^2}{4}\\
x^4&=\frac{10000}{3}\\
x&=\frac{10}{\sqrt[4]{3}}
\end{align}$$
Since $x$ is the shortest diagonal, the largest diagonal should be $2x$ which is
$$\frac{20}{\sqrt[4]{3}}$$
That's my answer, so I would choose E).
But some of my friends answered C) because they considered it as a rhombus, which makes sense if the answer is C. By the way, I know that "a rhombus" is "a parallelogram", but a rhombus has all four sides equal, while I considered a parallelogram whose sides are equal only in two pairs. Also, if the answer is C, I tried to verify it on my first parallelogram (the one whose height equals one of the diagonals) and it turns out that the area is not $50$.
So, which one is true? Please help me.
Thanks in advance!
AI: Given the diagonals $p,q$ of a parallelogram, its area is
$$[ABCD]=\frac12pq\sin\theta, $$
where $\theta$ is the angle between diagonals, i.e. the area takes the largest possible value in the case of rhombus ($\theta=\frac\pi2$).
In this case $q=\dfrac p2$, so
$$
50=\frac14 p^2\sin\theta\implies p^2=\frac{200}{\sin\theta}\,,
$$
which is smallest in the rhombus case $\theta=\frac\pi2$, giving $p=10\sqrt2$.
However this is the smallest possible value of $p$, i.e. varying $\theta$ you can construct a parallelogram of area 50 for any value of $p\ge 10\sqrt2$.
|
H: Given $\kappa = \sup_{\alpha< \lambda} \kappa_{\alpha}$, can we assume that $\{\kappa_\alpha: \alpha < \lambda\}$ is strictly increasing?
Suppose $$\kappa= \sup_{\alpha < \lambda} \kappa_\alpha$$
where $\kappa$ is an infinite cardinal and $\kappa_\alpha$ are cardinals, $\lambda$ is a non-zero limit ordinal, $\lambda < \kappa$ and $\kappa_\alpha < \kappa$ for every $\alpha < \lambda$.
Does there exist a sequence $\{\kappa_\alpha': \alpha < \theta\}$ with $\kappa_\alpha' < \kappa, \theta < \kappa$, $\theta$ is a limit ordinal and $$\kappa = \sup_{\alpha < \theta} \kappa'_\alpha; \quad (\alpha < \beta \implies\kappa'_{\alpha}< \kappa'_\beta)$$
I saw the answers here: Question on singular cardinals but I was not able to understand them. If someone could give an explicit construction, I'd be glad.
My attempt: Using unions, one can make the sequence increasing, and then one wants to pass to a subsequence that is strictly increasing, but I'm not sure how to make this completely formal.
AI: Yes.
Simply consider $\{\kappa_\alpha\mid\alpha<\lambda\}$ as a set of ordinals; it is naturally well-ordered, so we may take its order type as $\theta$. Since every $\kappa_\alpha$ is strictly below the supremum $\kappa$, the set has no maximum element, so $\theta$ is a limit ordinal.
But we can do better. The set is isomorphic to an ordinal (by considering the order type of the set), so it has a cofinal sequence which is indexed by a cardinal which in turn has to be the cofinality of $\kappa$.
|
H: Prove $\bigcup_{t \in I}P(A_t) \subseteq P(\bigcup_{t \in I}A_t) $
Obviously $\{A_t\}_{t\in I}$ is a family of sets indexed by $I$.
I believe the proof must have something to do with the definitions of unions and power sets.
I had previously proved a similar statement with almost the same structure, where intersections were used instead of unions. However, nothing came to my mind for this one.
$$\bigcup_{t \in I}P(A_t) \subseteq P(\bigcup_{t \in I}A_t) $$
AI: You prove $\;A\subset B\;$ by definition: $\;\forall\,a\in A\,\;\text{also}\;a\in B\;$ , and here:
$$X\in\bigcup_{t\in I}P(A_t)\implies\exists\,t_0\in I\,\,s.t.\,\,X\in P(A_{t_0})\implies X\subset A_{t_0}\implies $$
$$\implies X\subset\bigcup_{t\in I}A_t\implies X\in P\left(\bigcup_{t\in I}A_t\right)$$
|
H: What is the purpose/meaning of parametrizing a set by an index set
My lecture notes define $B=\{x_i|i\in I\}$ as a basis for a vector space, parametrized by the index set $I$, and uses this notation for the rest of the chapter. But what is the purpose of this notation when we can label the ${x_i}$ by anything we want? Does it not mean the same as simply denoting the vectors by $\{x_i\},\ i=1,...,n$?
AI: If the vector spaces are finite dimensional, there is not much conceptual difference between using $i\in I$ and using $1\leq i\leq n$ to index your bases. However, $i\in I$ allows for infinite dimensions, without changing your notation.
|
H: How many tries to get higher number than uniform random variable?
I came across the following problem, and don't know how to solve it:
Player 1 samples from the Uniform(0,1) distribution. Then Player 2 repeatedly samples from
the same distribution until he obtains a number higher than Player 1’s. How many samples
is he expected to make?
AI: Here is a solution using conditioning. Conditioned on player $1$'s pick ($U_0$), the number of samples $N$ follows a Geometric distribution with parameter $1-U_0$. Hence, $\mathbb{E}(N\mid U_0) = \frac{1}{1-U_0}$. So,
$$\mathbb{E}(N) = \mathbb{E}\left(\frac{1}{1-U_0}\right) = \int_0^1 \frac{1}{1-x} dx = \infty~.$$
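A rough simulation illustrates the infinite expectation: the empirical mean never stabilizes but keeps creeping upward as the number of games grows:

    import random

    def one_game():
        u0 = random.random()
        n = 1
        while random.random() <= u0:  # keep sampling until we beat u0
            n += 1
        return n

    for games in (10**3, 10**4, 10**5, 10**6):
        mean = sum(one_game() for _ in range(games)) / games
        print(games, mean)  # the mean keeps growing, roughly like log(games)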
|
H: How is 0 = infinite from infinite series?
The infinite series $1/2 + 1/4 + 1/8 + ... = 1$
made me get $$(1-a)+a(1-a)+a^2(1-a)+...=1$$
Dividing both sides by $(1-a)$,
I get $1+a+a^2+a^3+...=\frac{1}{(1-a)}$
where $a\neq1$
However, when I take $$\lim_{a\to 1}\left(1+a+a^2+a^3+\cdots\right)=\lim_{a\to 1}\frac{1}{1-a},$$ the L.H.S. of the equation will be infinite and the R.H.S. will be $0$.
Why is this possible? Did I make a calculation mistake somewhere?
AI: The R.H.S. won't be zero, since $\lim_{a\to 1}\frac{1}{1-a}$ doesn't exist (if you compute the limits for $a\to 1^{+}$ and $a\to 1^{-}$ you get $-\infty$ and $+\infty$ respectively). Moreover $\sum_{n=0}^{\infty}r^{n}=\frac{1}{1-r}$ converges only when $|r|<1$.
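A two-line numerical check of that last statement:

    a = 0.9
    print(sum(a**n for n in range(200)), 1 / (1 - a))  # both ~10.0
    a = 1.0
    print(sum(a**n for n in range(200)))  # 200.0: the partial sums diverge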
|