| Q | A | meta |
|---|---|---|
Information theory applied to a die (intuition about information theory) Let's say I have a die with the values 1 to 6 written on it, and I don't know the probability of getting each value; I only know that when I throw the die many times I get an average value of 3.5, just like a fair die.
According to information theory, I can guess the most unbiased probability of getting each value by maximizing the uncertainty under the constraints, where the uncertainty is:
$$H=−K\sum_{i} p_i\ln(p_i)$$
and the constraints are:
$$\sum_{i}p_i=1$$
and
$$\sum_{i}p_i⋅v_i=3.5$$
where $v_i$ are the values of the die, from 1 to 6. If I maximize $H$ under the constraints I get:
$$p_i∝\exp(−v_iμ/K)$$
where $\mu$ is the Lagrange multiplier corresponding to the second constraint.
My question is: Is this really the most unbiased probability we can find under the constraints?
The possibility of equal probabilities ($p_i=1/6$) fulfills the constraints, and according to Occam's razor principle, it should be more likely than exponential probability. What am I missing?
|
In general, for a given mean $m$, we have the restrictions $\sum_{i=1}^6 p_i=1$ and $\sum_{i=1}^6 p_i i = m$. Applying Lagrange multipliers we get for the critical point:
$ -1 - \log p_i + \lambda i +\beta=0$
Hence the extremum is given by a (truncated) exponential family $$p_i = a \exp({-b i}) $$ where the constants are determined by the restrictions.
Now, in the particular case where $m=(1+6)/2$, you'd get $b=0$ and $a=1/6$, which amounts to a uniform distribution. (You can deduce this without doing the calculation, via Stelios' comment: the uniform distribution gives the maximum entropy without the mean restriction, and your particular restriction is fulfilled by that distribution.)
Hence, there is no contradiction here, because the uniform distribution indeed belongs to the exponential family.
BTW: I used the expression "maximum entropy distribution" because "most unbiased probability" is rather confusing ("unbiased" has another meaning in statistics), and "more likely (according to Occam's razor principle)" can also be misleading ("likely"/"likelihood" also has a definite meaning; perhaps one should rather say "more preferable").
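None of this is in the original answer, but the claim is easy to check numerically: solving for the exponential family matching mean $3.5$ returns the uniform weights, while a biased mean (here $4.5$, my own choice) forces a genuinely exponential profile. The helper names below are mine.

```python
import math

def maxent_die(m, lo=-10.0, hi=10.0, tol=1e-12):
    """Max-entropy distribution p_1..p_6 on a die with prescribed mean m.

    The Lagrange conditions force p_i = a*exp(-b*i); b is found by
    bisection (the mean is strictly decreasing in b), a by normalisation.
    """
    def mean(b):
        w = [math.exp(-b * i) for i in range(1, 7)]
        return sum(i * wi for i, wi in enumerate(w, start=1)) / sum(w)

    while hi - lo > tol:              # invariant: mean(lo) > m > mean(hi)
        mid = 0.5 * (lo + hi)
        if mean(mid) > m:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    w = [math.exp(-b * i) for i in range(1, 7)]
    return [wi / sum(w) for wi in w]

p = maxent_die(3.5)   # mean constraint coincides with the fair-die mean
q = maxent_die(4.5)   # a biased mean gives a genuinely exponential shape
```

For $m=3.5$ the result is numerically the uniform vector $(1/6,\dots,1/6)$, matching $b=0$; for $m=4.5$ the weights strictly increase with the face value.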
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
relation between union and intersection $A$ and $B$ are any two sets.
If $A \cup B \ne B$,
is it also true that $A \cap B \ne A$?
And if so, why?
This is just a step I am using for a bigger proof, if the above is not true then I'll have to search for a different direction.
Thanks in advance.
|
It's easier to prove this the other way round, i.e. you can show that
if $A\cap B=A$, then $A\cup B=B$.
This is easy to show. The fact that $A\cap B = A$ implies that $A\subseteq B$, which also directly means that $A\cup B = B$.
Or, if you want the traditional long way round:
* Let $b\in A\cup B$.
* Then, if $b\in B$, we are done.
* If $b\in A$, then $b\in A\cap B$ (because $A=A\cap B$), which means that $b\in B$.
* Therefore, if $b\in A\cup B$, then $b\in B$, which means that $A\cup B\subseteq B$.
* We also know that (always) $B\subseteq A\cup B$.
* We conclude that $B=A\cup B$.
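Not part of the original answer, but the equivalence can be confirmed exhaustively for small sets (a throwaway Python sketch; the helper `subsets` is mine):

```python
from itertools import combinations

def subsets(universe):
    """All subsets of `universe`, as frozensets."""
    items = list(universe)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

P = subsets(range(3))
for A in P:
    for B in P:
        # contrapositive: A ∩ B = A  implies  A ∪ B = B
        if A & B == A:
            assert A | B == B
        # original statement: A ∪ B ≠ B  implies  A ∩ B ≠ A
        if A | B != B:
            assert A & B != A
```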
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How should I write down the alternating group $A_3$? I didn't understand how to write down the alternating group $A_3$.
Is this the group consisting of only the even permutations? Also, what familiar group is this isomorphic to?
|
Yes, $A_3$ is the set of all even permutations in $S_3 = \{id, (12), (13), (23), (123), (132)\}$.
Remember that an even permutation can be written as a product of an even number of transpositions. The identity of any symmetric group is even, because $id$ can be written as the product of two transpositions; in this case, e.g. $id = (1,2)(2,1)$.
Note also that $(123) = (12)(23)$, and $(132) = (13)(32).$ So, $$A_3 = \{id, (123), (132)\}.$$
Since $|A_3| = 3$, and the fact that there is only one group, up to isomorphism, of order $3$, $$A_3 \cong \mathbb Z_3,$$ where $\mathbb Z_3$ is the cyclic group under addition modulo $3$.
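As a quick cross-check (not part of the original answer), one can list $A_3$ by computing permutation parities; the inversion-counting helper below is mine.

```python
from itertools import permutations

def parity(p):
    """+1 for an even permutation (given as a tuple), -1 for an odd one."""
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

S3 = list(permutations((1, 2, 3)))
A3 = [p for p in S3 if parity(p) == 1]
# (1,2,3) is the identity, (2,3,1) is the cycle (123), (3,1,2) is (132)
```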
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Möbius image of a line I've been looking at this question:
Let $\gamma$ be the Möbius transformation defined by $$\gamma(z) = \frac{2iz-2}{z+1}, \quad z\ne -1.$$
Show that $\gamma$ maps the line $\{z: Im z = Re z - 1\}$ into the circle $C(i,1)$.
I've gone about this in a similar way to an example in my notes; however, I'm not sure whether I have the correct answer, and even if I do, I can't really see what I am doing, if you understand what I mean.
So I have;
$\gamma(z) \in C(i,1) \Leftrightarrow |\gamma(z)-i|=1$
$\\\Leftrightarrow|\frac{2iz-2}{z+1} -i|=1 \\\Leftrightarrow |iz-2-i|^2=|z+1|^2 \\\Leftrightarrow x^2-2x+y^2+4y+5=x^2+2x+y^2+1 \\\Leftrightarrow x=1$
Writing this out I just have no idea what this is supposed to be... Thanks for any help!
|
Let $$w=\frac{2iz-2}{z+1}\implies z=-\frac{w+2}{w-2i}$$
Now write $z=x+iy$ and $w=u+iv$
Then after a couple of lines of algebra, we get $$x=-\frac{u^2+v^2-2v+2u}{u^2+(v-2)^2}$$ and $$y=\frac{-2u+2v-4}{u^2+(v-2)^2}$$
Now substitute these into the given line equation $y=x-1$ and after some simplification we get $$u^2+(v-1)^2=1$$ which is the circle you are given.
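The computation can also be sanity-checked numerically (my own quick sketch, not part of the original answer): sample points on the line $\operatorname{Im} z = \operatorname{Re} z - 1$ and verify that their images lie on $C(i,1)$.

```python
def gamma(z):
    return (2j * z - 2) / (z + 1)

for k in range(-50, 51):
    x = k / 5.0
    z = complex(x, x - 1)        # a point with Im z = Re z - 1
    # z = -1 would require x = -1 and x - 1 = 0 simultaneously,
    # so the pole is never hit on this line.
    assert abs(abs(gamma(z) - 1j) - 1.0) < 1e-9
```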
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Surds question (grade 10) I am a student and need help answering this question; I need a step-by-step solution.
Simplify:
$ \sqrt {18}$ - $ \sqrt {9}$
What I tried:
($ \sqrt {9}$ × $ \sqrt {2}$) - 9
= (3$ \sqrt {2}$ ) - 9
I don't know what to do next.
Thank you and help is appreciated.
|
This is $$\sqrt{2\cdot 9}-\sqrt{9}=3\sqrt{2}-3.$$ Note that $\sqrt{9}=3$, not $9$ (that is the slip in your attempt), and $3\sqrt{2}-3$ is already fully simplified, so there is nothing more to do.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Epsilon-delta proof that $ \lim_{x\to 0} {1\over x^2}$ does not exist I'd like to see an epsilon-delta proof that $\lim_{x\to 0} {1\over x^2}$ does not exist, and an explanation of the exact reason it does not exist, because I am not so sure I believe that the limit fails to exist, so I need to be proved wrong.
What is the relationship between a limit existing, and the function in question having a least upper bound? Because it seems to me that the only explanation I can find as to why the limit does not exist is that the function is unbounded.
I'm not sure why this is relevant, because it seems to me that when $x$ approaches $0$, the graph of ${1\over x^2}$ gets infinitely close to the $y$-axis, which suggests to me that there does exist an $\epsilon$ infinitely close to zero such that if $|x - a| < \delta$ then $|f(x)-L| < \epsilon$, where both $\delta$ and $\epsilon$ are infinitely close to zero.
Obviously, my understanding of calculus hinges on this question, so I really need to be convinced with a bulletproof explanation, otherwise I'll continue to doubt the truth (I don't believe anything unless I fully understand it myself, for better or worse, I ignore other's authority and rely only on proof and logical understanding -- I'm sorry if this attitude offends anyone)! Thanks in advance!
|
An epsilon-delta proof is used to show that a limit exists and equals $L$, not usually to show that no limit exists. We can see where it fails. Suppose we claim that $\lim_{x \to 0}\frac 1{x^2}=L$. If somebody gives us an $\epsilon \gt 0$, we have to find a $\delta \gt 0$ such that $|x| \lt \delta \implies |f(x)-L|=|\frac 1{x^2}-L| \lt \epsilon$. The problem is that if $x$ is very small, $\frac 1{x^2}$ is very large and can certainly be larger than $L+\epsilon$.
You are confusing the fact that the graph of $y=\frac 1{x^2}$ gets close to the $y$-axis, which really means $\lim_{x \to 0} x=0$, with the value of $\frac 1{x^2}$ getting close to a value, which it does not.
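To make the failure concrete (my own sketch, not from the original answer): for any proposed $L$, one can explicitly produce $x$ near $0$ with $1/x^2 > L + \epsilon$.

```python
# For any proposed limit L and tolerance eps, there are x arbitrarily
# close to 0 with 1/x^2 > L + eps, so no L satisfies the definition.
def f(x):
    return 1 / x**2

for L in (0.0, 1.0, 1e6):
    eps = 1.0
    # any x below this threshold makes f(x) at least 2*(|L| + eps):
    x = min(0.1, 1.0 / (2.0 * (abs(L) + eps)) ** 0.5)
    assert f(x) > L + eps
```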
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2189990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Calculating PI from the sin() function in Java Ok... So I'm trying to finish my school project with Processing, and I'm wondering if there is a way to calculate PI from sin() like this. But I don't know how to use the sin() function with degrees in Java, or how to write my own. The problem with radians is that I need to convert radians into degrees using PI, which is exactly what I'm trying to calculate.
Thank you in advance.
|
Sorry, but this is not a good idea. The formula that you saw essentially expresses that $$\sin x\approx x$$ when $x$ is small, and the smaller $x$ the more exact the approximation. It is valid for angles in radians.
When the angles are in degrees, this relation becomes
$$\sin°x\approx \frac{\pi x}{180}$$ where $\sin°$ denotes the sine of an angle given in degrees. So you hope to evaluate
$$\pi\approx180\frac{\sin°x}x.$$
If the function $\sin°$ is not available, you will have to emulate it with an explicit conversion, using
$$\sin°x=\sin\frac{\pi x}{180},$$ so that
$$\pi\approx180\frac{\sin\dfrac{\pi x}{180}}x.$$
So, not only does this not allow you to compute $\pi$, since it requires prior knowledge of $\pi$, but it would do so in a very inefficient and inaccurate way, effectively evaluating $\sin(cx)/x$ where $cx/x$ would already give the answer. You will spend much energy going round in circles.
Even when a $\sin°$ function is available, this approach is wrong, because $\sin°$ will do the conversion from degrees to radians anyway (using a hard-coded value of $\pi$), and you would have to use an angle so small that $\sin x=x$ numerically, at which point there is no point computing the sine at all.
A less "schizophrenic" approach is using
$$\pi=4\arctan1$$ (in radians).
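A small Python illustration of both points (mine, not from the original answer): the degree-based estimate only returns the $\pi$ that was fed into the conversion, while $4\arctan 1$ needs no prior $\pi$.

```python
import math

# The circular route: a "sine in degrees" already contains pi, because
# the degree-to-radian conversion uses it.
def sin_deg(x_deg):
    return math.sin(math.pi * x_deg / 180.0)

x = 1e-6
estimate = 180.0 * sin_deg(x) / x   # ~pi, but only because pi went in above

# A self-contained route that needs no prior knowledge of pi:
pi_from_arctan = 4.0 * math.atan(1.0)
```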
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving Differential Equations For Equilibrium: Is a Different Answer Format Required? I'm going to include the exact wording of the question here, because I think it's relevant:
"First solve the equation $f(x)=0$ to find the critical points of the given autonomous differential equation $\frac{dx}{dt} = f(x)$. Then analyze the sign of $f(x)$ to determine whether each critical point is stable or unstable, and construct the corresponding phase diagram for the differential equation. Next, solve the differential equation explicitly for $x(t)$ in terms of $t$. Finally, use either the exact solution or a computer-generated slope field to sketch typical solution curves for the given differential equation, and verify visually the stability of each critical point."
$\frac{dx}{dt} = 3-x$
My Work
Now, this is a fairly straightforward problem, and I've had no issues apart from the format of my explicit solution. When I solve it, I perform:
$\frac{dx}{dt} = 3-x$
$\frac{1}{3-x}dx = dt$
$-\ln(3-x) = t+C$
$x = 3-e^{C-t}$
$x = 3-e^Ce^{-t}$
$x = 3-Ce^{-t}$
Pretty routine stuff. However, if I look at the book's answer to this problem, I see:
$x = 3+(x_0 - 3)e^{-t}$
And I don't understand what I'm looking at. I haven't encountered any examples in the prep work that include $x_0$, and don't understand what $x_0$ represents. The given solution implies that $C = 3-x_0$, and I don't see how this happens. Would anyone be able to shed a bit of light on this?
|
Both are fine, since $C$ is just an arbitrary constant. Here $x_0$ denotes the initial value $x(0)$: setting $t=0$ in the book's formula gives $x(0)=3+(x_0-3)e^0=x_0$. Comparing with your form $x=3-Ce^{-t}$ at $t=0$ shows $C=3-x_0$, which is exactly the relation you noticed. Try experimenting with different values of $x_0$ and you'll see. When given an initial condition you will still be able to calculate $C$, so it doesn't matter which form you use.
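One can let a CAS confirm both claims (a quick SymPy sketch of mine, not part of the original answer):

```python
import sympy as sp

t, x0 = sp.symbols('t x0')
x = 3 + (x0 - 3) * sp.exp(-t)          # the book's solution

# It satisfies dx/dt = 3 - x ...
assert sp.simplify(sp.diff(x, t) - (3 - x)) == 0
# ... and at t = 0 it equals x0, which is precisely what x0 means:
assert x.subs(t, 0) == x0
```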
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving a differential equation with wronskians So I am asked to find a solution to this ODE here and I feel like I am missing something very obvious.
I am asked to find the general solution of:
$x^2y''-3xy'+4y=\frac{x^2}{\ln(x)}, \qquad x>1$
So I first tried to find the homogeneous solution; this is just a Cauchy–Euler equation:
$x^2y''-3xy'+4y=0$
If I solve that, I get $y_{h}(x)=c_1 x^2 + c_2 x^2 \ln(x)$
Then I tried to use variation of parameters to solve the particular solution. I obtain:
$W=\begin{bmatrix}
x^2 & x^2\ln(x) \\
2x & x+2x\ln(x) \\
\end{bmatrix}$
The Wronskian ends up being $W=x^3$, so things worked out really nicely.
If I try to find the particular solution though, I can't integrate one of the integrals.
$Y_p(x)=-y_1\int \frac{y_2\,g(x)}{W}\,dx+y_2\int \frac{y_1\,g(x)}{W}\,dx$
$Y_p(x)=-x^2\int x\, dx +x^2\ln(x)\int \frac{x}{\ln(x)}\, dx$
but the second integral can't be done so either I made a mistake or there is another way to solve this.
I can't even use Laplace transforms since I don't have initial conditions so I am a little lost here...
Thanks!
|
Yes, you made a mistake.
Note that the coefficient of $y''$ in your differential equation is $x^2$, but you're using a formula intended for an equation in standard form, where the coefficient of $y''$ is $1$. Divide the equation through by $x^2$ first: then $g(x)=\frac{1}{\ln(x)}$, and both integrals in the variation-of-parameters formula become elementary.
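A sketch of how the corrected computation plays out (mine, not part of the original answer; it assumes the standard variation-of-parameters formula with the equation divided by $x^2$, so that $g(x)=1/\ln x$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y1, y2 = x**2, x**2 * sp.log(x)
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))  # Wronskian = x**3
g = 1 / sp.log(x)            # right-hand side AFTER dividing by x**2

u1 = sp.integrate(-y2 * g / W, x)    # integrand -1/x
u2 = sp.integrate(y1 * g / W, x)     # integrand 1/(x*log(x))
yp = y1 * u1 + y2 * u2               # particular solution

residual = sp.simplify(x**2 * sp.diff(yp, x, 2) - 3*x*sp.diff(yp, x)
                       + 4*yp - x**2 / sp.log(x))
```

The residual simplifies to $0$; the term $u_1 y_1 = -x^2\ln x$ merely re-adds a homogeneous solution, and the essential part of $y_p$ is $x^2\ln x\cdot\ln(\ln x)$.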
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
To what extent is pi non-repeating? I've been told that pi is an irrational (infinite and non-repeating) number.
But to what extent is it non-repeating?
It obviously repeats individual numbers, and I find it hard to believe that it doesn't repeat 2-3 digit sections eventually.
|
$\pi$ certainly does repeat 2-3 digit sections eventually. There are only 1,000 different sequences of 3 digits, so there's no way that $\pi$ (or any other number) can avoid repeating some of them.
In fact, if $\pi$ is a so-called normal number, then every possible 3-digit sequence appears infinitely many times (as does every 10-digit sequence, every 1,000,000-digit sequence, and so on).
When we say that the decimal expansion of $\pi$ is "non-repeating", what we mean is that $\pi$ never begins to repeat just one sequence of digits over and over forever. In other words, the decimal expansion of $\pi$ can repeat itself; it just can't ever fall into a cycle where it repeats the same thing forever.
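The pigeonhole point from the answer can be demonstrated on actual digits (my own sketch, not part of the original answer; it uses the `mpmath` library for high-precision digits of $\pi$):

```python
from collections import Counter
from mpmath import mp

mp.dps = 2010                          # working precision in digits
digits = mp.nstr(mp.pi, 2001)[2:]      # the digits after "3."

windows = [digits[i:i + 3] for i in range(len(digits) - 2)]
repeated = {w for w, c in Counter(windows).items() if c > 1}
# Nearly 2000 three-digit windows but only 1000 possible values, so the
# pigeonhole principle guarantees `repeated` is non-empty.
```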
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $ x^4 - 2 $ is irreducible over $ \mathbb{Z}[i] $
How do I prove that $ p(x) = x^4 - 2 $ is irreducible over $ \mathbb{Z}[i] $?
This seems very elementary yet I'm not sure how to do it.
Someone suggested using Eisenstein and $ p = 1+i $, but this doesn't seem right because $ (1+i)^2 = 2i $ is an associate of $ -2 $.
I have seen somewhere that one can use a generalized version of the Rational Root Theorem and simply check that $ 1+i $ and $ 1-i $ are not roots of $ p(x) $, is this correct?
Thank you for your help.
|
You can also note that $x^4-2$ is irreducible over $\mathbb F_5$ and $\mathbb F_5 = \mathbb Z[i]/(2+i)$.
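The irreducibility over $\mathbb F_5$ can be verified by brute force (my own sketch, not part of the original answer): a quartic is reducible iff it has a monic linear or monic quadratic factor, so trial division over all of them suffices.

```python
# Check that x^4 - 2 = x^4 + 3 is irreducible over F_5.
p = 5
f = [3, 0, 0, 0, 1]                  # x^4 + 3, coefficients low-to-high

def poly_rem(num, den, p):
    """Remainder of num / den over F_p; den must be monic."""
    num = [c % p for c in num]
    d = len(den) - 1
    for i in range(len(num) - 1, d - 1, -1):
        c = num[i]
        if c:
            for j, dc in enumerate(den):
                num[i - d + j] = (num[i - d + j] - c * dc) % p
    return num[:d]

monic_divisors = ([[a, 1] for a in range(p)] +
                  [[a, b, 1] for a in range(p) for b in range(p)])
has_factor = any(not any(poly_rem(f, g, p)) for g in monic_divisors)
```

`has_factor` comes out `False`, confirming irreducibility over $\mathbb F_5$.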
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Definite Integral Problem using substitution $t=\tan(x/2)$ Solve $$ \int_0^\pi \frac {x} {1+\sin x} dx $$ using $\sin x = \frac {2\tan(x/2)} {1+\tan^2(x/2)}$ and the substitution $t = \tan(x/2)$.
I tried doing this, but I got to a point where my integral limits were $0$ to $\infty$. This happened when I substituted for $\tan(x/2)$.
Is there a way of doing this using this substitution only?
And also why does this happen?
UPDATE: What I did -
$$ I = \int_0^\pi \frac {\pi-x} {1+\sin x} dx = \int_0^\pi \frac {\pi} {1+ \sin x} dx - I$$
Using $\sin x = \frac {2\tan(x/2)} {1+\tan^2(x/2)}$
$$ 2I = \int_0^\pi \frac {\pi} {1+\sin x} dx = \pi\int_0^\pi \frac {\sec^2(x/2)} {1+ \tan^2(x/2)+2 \tan(x/2)} dx$$
Now if there were no limits, this could've been solved easily by $t = \tan(x/2)$.
But I can't do that, because if I did, the limits would become $0$ to $\infty$.
A way to solve this would be multiplying and dividing by $1-\sin x$, but I don't want to do that; I want to use $t=\tan(x/2)$.
|
After the main symmetry trick, another symmetry trick and a rationalization:
$$ \int_{0}^{\pi}\frac{du}{1+\sin u}=2\int_{0}^{\pi/2}\frac{1-\sin u}{\cos^2 u}\,du =2\left[\tan u-\frac{1}{\cos u}\right]_{0}^{\pi/2}=\color{red}{\large2}.$$
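Both values can be confirmed numerically (a throwaway Python check of mine, not part of the original answer): the inner integral is $2$, so $2I = 2\pi$ and the original integral equals $\pi$.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    odd = sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2*k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

inner = simpson(lambda u: 1 / (1 + math.sin(u)), 0, math.pi)       # ≈ 2
original = simpson(lambda x: x / (1 + math.sin(x)), 0, math.pi)    # ≈ pi
```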
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
solution to an equation involving natural numbers Suppose $a,b$ are real numbers and we have
$$ 1 = b + an \; \; \; \; \forall n \in \mathbb{N} $$
My book says that the only solution is $b=0$, $a=1$. But this does not make sense to me, since if we put $n = 1$, we have
$$ 1 = b + a $$
And $a=b=1/2$ is a solution. What is wrong?
|
"For all $n$" means that, in particular, we have:
$$1=b+a$$
$$1=b+2a$$
Subtracting the first equation from the second gives $a=0$, and then $b=1$. So $a=0$, $b=1$ is the only solution; your candidate $a=b=1/2$ satisfies the equation for $n=1$ but already fails for $n=2$.
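The two-equation argument can be handed to SymPy directly (my sketch, not part of the original answer):

```python
import sympy as sp

a, b, n = sp.symbols('a b n')
# Two instances (n = 1 and n = 2) already pin down a and b:
sol = sp.solve([sp.Eq(1, b + a * 1), sp.Eq(1, b + a * 2)],
               [a, b], dict=True)[0]

# ... and this choice then works for every n:
assert sp.simplify((b + a * n).subs(sol) - 1) == 0
```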
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $\frac{n}{2^n}$ is a null sequence from $\epsilon$ definition of limit I am trying to prove that $\frac{n}{2^n}$ is a null sequence using the $\epsilon$ definition of a limit. Now I chose to use the fact that $2^n > n^2$ for $n > 4$. I said let $\epsilon > 0$ be given. Then for $n > 4$, $\vert\frac{n}{2^n}\vert < \vert\frac{n}{n^2}\vert = \vert\frac{1}{n}\vert.$ The value of $N$ that I chose in the definition of convergence was $[\frac{1}{\epsilon}]+4$. Would this value of $N$ work in this case? I obtained $[\frac{1}{\epsilon}]+1$ from rearranging $\vert\frac{1}{n}\vert < \epsilon$, but of course this would not necessarily guarantee that $n > 4$, which is what we need to produce the first inequality, hence why I added $3$ to this value.
|
A null sequence eventually gets as small as you want; the initial terms do not matter. So you can start at $n=4$, or $n=10000$, or $n=10^{10^{10}}$.
All that matters is that, for any $c > 0$, there is an $N(c)$ such that $|a_n| < c$ for $n > N(c)$.
Once you have shown that there is an $m$ such that $2^n > n^2$ for all $n > m$, then, since $\dfrac{n}{2^n} \lt \dfrac1{n}$ for $n > m$, the rest is easy.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
path connectedness of a preimage Suppose $p:X \rightarrow Y$ is a fibration such that $Y$ is path connected and $p^{-1}\{y\}$ is path connected for some $y \in Y$. Could anyone please show me that, under all these conditions, $X$ is also path-connected? Thank you all for helping.
|
I will use more common notation. So let $\pi:E\to B$ be a fibration with $B$ path connected and $b\in B$ be such that $\pi^{-1}(b)$ is path connected.
Pick $x, y\in E$ and consider a path $\lambda:I\to B$ such that $\lambda(0)=\pi(x)$ and $\lambda(1)=\pi(y)$. Such a path exists because $B$ is path connected. Let $\{*\}$ be a space with exactly one point $*$ and put
$$f:\{*\}\times I\to B$$
$$f(*, t)=\lambda(t)$$
We can apply the homotopy lifting property to this map, because we have the constant map $g:\{*\}\to E$, $g(*)=x$, which satisfies $\pi(g(*))=\pi(x)=\lambda(0)=f(*,0)$, and so $\pi\circ g = f\circ i$, where $i$ is the embedding of $\{*\}$ into $\{*\}\times\{0\}$. In other words we have a commuting diagram
$$\require{AMScd}
\begin{CD}
\{*\} @>g>> E\\
@VViV @VV\pi V \\
\{*\}\times I @>f>> B
\end{CD}
$$
Therefore there exists
$$F:\{*\}\times I\to E$$
such that $\pi\circ F=f$ and $F(*,0)=g(*)=x$. Note that $F(*, 1)\in\pi^{-1}(\pi(y))$. Therefore $t\mapsto F(*, t)$ is a path from $x$ to some point of the fiber $\pi^{-1}(\pi(y))$. So if we knew that every fiber is path connected, we would be done, because we already know how to get from one fiber to another.
But if $B$ is path connected, then every two fibers are homotopy equivalent. And since homotopy equivalence preserves (path) connectedness, every fiber is path connected, because $\pi^{-1}(b)$ is. $\Box$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2190908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A proof that a countable product of countable sets is non-empty that does not use the axiom of choice. Is the proof correct? Let $I$ be some non-empty set of indexes and for every $i \in I$ let $A_i$ be a set with cardinality $\aleph_0$.
Is the following proof that the cartesian product $\prod_{i \in I}A_i$ is non-empty valid without the axiom of choice?
By definition, since $|A_i| = \aleph_0$ for every $i \in I$, there are bijections $f_i: \mathbb{N} \rightarrow A_i$ for every $i \in I$. Let's explicitly define the choice function $g: I \rightarrow \bigcup_{i \in I}A_i$ by $g(i) = f_i(0)$. It's clearly a choice function, hence $\prod_{i \in I} A_i \ne \emptyset$. QED.
If it is valid, can it be generalized to any cartesian product of sets with equal cardinality? If it is not valid, why?
|
Your argument does invoke choice, albeit in a subtle way: when you choose a family of bijections $\{f_i: i\in I\}$. Just because, for each $i$, the set $F_i$ of bijections from $\mathbb{N}$ to $A_i$ is nonempty doesn't mean that you can pick one for each $i$; this is exactly the axiom of choice applied to the family $\{F_i: i\in I\}$. In order to define $g(i)$, you need to refer to a specific $f_i$, so this use of choice is not easy to remove from your argument; and in fact it can be proved that the statement you are seeking to prove is not provable in ZF (= set theory without choice) alone.
In fact, this holds in the most powerful way possible: even choice for families of two-element sets is not provable in ZF!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
How do you find the elements of $\mathbb{F} _5 [x] / (x^2 + 2)$? I am new to the field $\mathbb{F} _5 [x] / (x^2 + 2)$. How would I find all the elements present in this field?
Additionally, I know that the order of the element $x$ is 8 and the order of element $(1+x)$ is 1 (it is the generator), but how would I prove this?
Edit: the order of $(1+x)$ is the same as the order of the group as it is the generator. How would I prove this?
|
To add to @Ethan Bolker's answer, I want to address your question about the order of $x$ and $(x+1)$. You are correct about the order of $x$, since
$$x^8=(x^2)^4=(-2)^4=16=1.$$
However, the order of $x+1$ is not one: the only element of order $1$ in a group is the identity, and $x+1 \ne 1$. (In particular, the order of a generator of a nontrivial group is never $1$.) Since $x+1$ generates the multiplicative group $\mathbb{F}_{25}^\times$, its order is $|\mathbb{F}_{25}^\times| = 25-1 = 24$. You should check this: for instance, $(x+1)^3 = x$, which has order $8$, so the order $d$ of $x+1$ satisfies $d/\gcd(d,3)=8$, forcing $d=8$ or $d=24$; and $(x+1)^8 = x+2 \ne 1$ rules out $8$.
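A direct machine check of both orders (my own sketch, not part of the original answer), doing the arithmetic in $\mathbb F_5[x]/(x^2+2)$ by hand:

```python
# Elements of F_25 = F_5[x]/(x^2 + 2) are pairs (c0, c1) meaning
# c0 + c1*x, where x^2 reduces to -2 = 3 (mod 5).
def mul(u, v, p=5):
    a0, a1 = u
    b0, b1 = v
    # (a0 + a1 x)(b0 + b1 x) with x^2 -> 3
    return ((a0*b0 + 3*a1*b1) % p, (a0*b1 + a1*b0) % p)

def order(u):
    acc, n = u, 1
    while acc != (1, 0):                # (1, 0) is the identity 1
        acc = mul(acc, u)
        n += 1
    return n

ord_x = order((0, 1))     # the element x      -> 8
ord_x1 = order((1, 1))    # the element 1 + x  -> 24
```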
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Integration of complex numbers I was solving some integration questions, and a question arose in my mind: is integration of complex numbers possible?
If yes, then what is $\int i \, dx$?
A definite integral is the area under the curve of a graph, but the graph above cannot be plotted in the real plane.
I would like someone to clear up my doubts regarding integration. Thanks!
|
The simple answer is that $i$ is a constant, so $\int i \,dx = i x + C$
The more complete answer is that "area under the curve of the graph" doesn't really make sense for what you are doing when you write $\int i \,dx$. More generally, you can integrate a function of a complex variable $z=x +iy$ along a contour; I would look up contour integration, which might explain things a little better.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Question About the Logic of my Proof Okay, I am working on the following relatively simple problem:
Let $f(x) = |x-3| + |x-1|$ for all $x \in \mathbb{R}$. Find all $t$ for which $f(t+2) = f(t)$.
So, if $f(t+2)=f(t)$, this is equivalent to $|t+1| = |t-3|$. Thus, if this holds, one can square both sides and arrive at $t=1$. So this value of $t$ is a necessary condition, but prima facie it isn't obviously the only value. To show sufficiency, could I let $t = 1 + \epsilon$, plug it into the above equation, deduce that $\epsilon = 0$, and conclude that $t=1$ is the only value? Would that suffice to show this is the only value?
|
When squaring both sides of an equation you can't lose solutions; you can only gain extraneous (false) ones. Since $t=1$ satisfies $|t+1|=|t-3|$, it is not a false solution, and since squaring produced no other candidates, it is the only solution.
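A brute-force scan backs this up (my quick Python check, not part of the original answer):

```python
# Scan a fine grid and record every t with f(t + 2) == f(t),
# where f(x) = |x - 3| + |x - 1|:
def f(x):
    return abs(x - 3) + abs(x - 1)

grid = [k / 100 for k in range(-1000, 1001)]      # -10.00 .. 10.00
solutions = [t for t in grid if abs(f(t + 2) - f(t)) < 1e-12]
```

Only $t = 1$ survives the scan.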
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
searching for $f(x)$ when knowing $f(2^{2x})$ I have a function whose values I know at powers of $2$.
This is only for integers.
Is there a way to calculate it for any integer $x$?
$$f(2^{2x})=\frac{4^x+2}3.$$
$f(x)$=?
here are the first 100 values of f(x)
1, 2, 2, 2, 2, 4, 4, 6, 6, 8, 8, 6, 6, 8, 8, 6, 6, 8, 8, 6, 6, 8, 8, 14, 14, 16, 16, 14, 14, 16, 16, 22, 22, 24, 24, 22, 22, 24, 24, 30, 30, 32, 32, 30, 30, 32, 32, 22, 22, 24, 24, 22, 22, 24, 24, 30, 30, 32, 32, 30, 30, 32, 32, 22, 22, 24, 24, 22, 22, 24, 24, 30, 30, 32, 32, 30, 30, 32, 32, 22, 22, 24, 24, 22, 22, 24, 24, 30, 30, 32, 32, 30, 30, 32, 32, 54, 54, 56, 56, 54

Is there a pattern to calculate any $f(x)$?
Thanks!
|
Note that $4^x=2^{2x}$, so you can simply swap them both for $x$ and get
$$
f(x)=\frac{x+2}{3}
$$
This new description is only valid for positive real $x$, and even then only if the original description was valid for any real $x$. If the original expression was only valid for integer $x$, for instance, then it would probably be better to leave it unchanged.
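As a quick sanity check (mine, not part of the original answer), the substitution reproduces the given formula at every power $2^{2x}$:

```python
# With f(x) = (x + 2)/3, evaluating at x = 2^(2k) gives (4^k + 2)/3.
def f(x):
    return (x + 2) / 3

checks = [f(2 ** (2 * k)) - (4 ** k + 2) / 3 for k in range(10)]
```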
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Volume preserving mean curvature flow preserving uniform convexity Let $(M_t,g_t)$ be a Riemannian manifold evolving by volume preserving mean curvature flow. Then, for the second fundamental form, we have
$$
\partial_t h_{ij}=\Delta h_{ij}-2H h_{im}h^m_j+hh_{im}h^m_j + |A|^2 h_{ij}
$$
$H=g^{ij}h_{ij}$ is the mean curvature, $|A|^2=g^{ij}g^{kl}h_{ik}h_{jl}$ is the squared norm of the second fundamental form, and $h$ (without indices) is the averaged mean curvature appearing in the volume preserving flow. If the initial manifold is uniformly convex — the eigenvalues of its second fundamental form are strictly positive everywhere — then how does one show that $M_t$ remains uniformly convex for all $t\ge 0$ for which the solution exists?
|
Use Theorem 9.1 of Hamilton's THREE-MANIFOLDS WITH POSITIVE RICCI CURVATURE, which tells you that if a time-dependent symmetric tensor field $h$ satisfies $$\partial_t h_{ij} = \Delta h_{ij} + N_{ij}$$
with the reaction term $N$ satisfying the null-eigenvector condition $$h_{ij} v^i = 0 \implies N_{ij}v^iv^j \ge 0,$$
then positive-definiteness $h \ge 0$ is preserved in time. You can derive this from the scalar maximum principle by studying the scalar function $v \mapsto h(v,v)$ on the unit tangent bundle.
In this case we have $N_{ij} = -2H h_{im}h^m_j+hh_{im}h^m_j + |A|^2 h_{ij}$. Assuming $h_{ij}v^i = 0$, every term of $N_{ij}$ contracted with $v^i$ contains a factor $h_{im}v^i=0$, so $N_{ij} v^iv^j = 0 \ge 0$ and the null-eigenvector condition holds. Thus positivity is preserved.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Reducing $x^3-\sqrt{2}$ over $\mathbb{Q}(\sqrt{2})[x]$ I think it can be shown that $x^3-\sqrt{2}$ is irreducible by arguing that $x=\sqrt[6]{2}$, which is the only real solution, is not in $\mathbb{Q}(\sqrt{2})$, so the polynomial has no solutions over this field, which implies that the polynomial cannot be expressed as a product of polynomials of degree 1 or 2.
However, I'm asked to prove the irreducibility by showing that the polynomial cannot be expressed as a product of polynomials of degree $1$ and degree $2$. I wonder whether that is more complicated than my version of the solution. In any case, I don't know how to show it the way I'm asked to; maybe I'm having a lapse in my knowledge of the theory. I would appreciate some insight.
|
Let $K=\mathbb{Q}(\sqrt{2})$, and let $f = x^3 - \sqrt{2}$.
Since $f \in K[x]$ is cubic, to show $f$ is irreducible in $K[x]$, it suffices to show $f$ doesn't have a root in $K$.
Suppose otherwise. Thus, suppose $r^3=\sqrt{2}$, for some $r \in K$.
Since $r \in K$, we can write $r = a + b\sqrt{2}$, for some $a,b \in \mathbb{Q}$.
Then $r^3 = \sqrt{2}$ implies $\,r$ is not rational, hence $b \ne 0$. Then
\begin{align*}
&r^3 = \sqrt{2}\\[4pt]
\implies\; &(a + b\sqrt{2})^3 = \sqrt{2}\\[4pt]
\implies\; &\left(a(a^2+6b^2)\right) + \left(b(3a^2+2b^2)\right)\sqrt{2} = \sqrt{2}\\[4pt]
\implies\; &\left(a(a^2+6b^2)\right) + \left(b(3a^2+2b^2)-1\right)\sqrt{2} = 0\\[4pt]
\implies\; &a(a^2+6b^2)=0\;\;\text{and}\;\;b(3a^2+2b^2)-1=0\\[4pt]
\end{align*}
Then $\,a(a^2+6b^2)=0 \implies a = 0\;\;$(since $b \ne 0$).
But then, $\,b(3a^2+2b^2)-1=0 \implies 2b^3 - 1 = 0$,
$\qquad$contradiction, since by the rational root test, the polynomial $2x^3 - 1$ has no rational roots.
It follows that $f$ is irreducible in $K[x]$, as was to be shown.
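A one-line degree check with SymPy supports this (my sketch, not part of the original answer): the minimal polynomial of $r=2^{1/6}$ over $\mathbb Q$ has degree $6$, and the tower law then forces $[\mathbb Q(r):\mathbb Q(\sqrt 2)]=3$, so the cubic $x^3-\sqrt 2$ with root $r$ is irreducible over $\mathbb Q(\sqrt 2)$.

```python
import sympy as sp

x = sp.symbols('x')
r = sp.Integer(2) ** sp.Rational(1, 6)       # the real number 2^(1/6)
m = sp.minimal_polynomial(r, x)              # minimal polynomial over Q
# m == x**6 - 2, so [Q(r):Q] = 6.  Since [Q(sqrt(2)):Q] = 2, the tower
# law gives [Q(r):Q(sqrt(2))] = 3, and the degree-3 polynomial
# x^3 - sqrt(2), having r as a root, must be r's minimal polynomial
# over Q(sqrt(2)), hence irreducible there.
```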
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why is Completeness and Compactness not equivalent in Normed Spaces? Given a complete normed space $X=(X,\|\cdot\|)$. Every Cauchy sequence converges in it. I am not able to understand why we can't show that every bounded sequence in $X$ will have a convergent subsequence.
Please give an example to clarify why completeness does not imply compactness, and explain where the problem lies.
|
Your question seems to be about the local compactness of Banach spaces.
Look at the sequence in $\ell^p$ whose $n$-th term is the "basis vector" $A_n=(\delta_{i,n})_{i>0}$. Every term has norm $1$, so the sequence is bounded, but $\|A_n - A_m\|_p = 2^{1/p}$ for $n \ne m$, so no subsequence is Cauchy, and hence none converges: completeness gives you nothing to converge to. By the way, there is compactness in a weaker sense (for the weak-$*$ topology); see the Banach–Alaoglu theorem.
It is also worth mentioning a general result: the closed unit ball of a (real or complex) normed vector space is compact for the norm topology if and only if the space is finite-dimensional (a consequence of Riesz's lemma).
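The basis-vector example can be made concrete (a small pure-Python sketch of mine, not part of the original answer; finitely supported sequences are represented as dicts):

```python
import math

# Finitely supported sequences as dicts {index: value}; this is enough
# to represent the basis vectors e_n of l^2.
def dist(u, v):
    keys = set(u) | set(v)
    return math.sqrt(sum((u.get(k, 0.0) - v.get(k, 0.0)) ** 2
                         for k in keys))

basis = [{n: 1.0} for n in range(10)]
norms = [dist(e, {}) for e in basis]        # every ||e_n|| = 1: bounded
gaps = [dist(basis[i], basis[j])
        for i in range(10) for j in range(i + 1, 10)]
# every pairwise gap is sqrt(2), so no subsequence is Cauchy
```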
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2191968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Prove that a finite nonempty set of real numbers is bounded. I'm trying to solve an exercise that is as follows:
let $S = \{a_1, a_2, a_3, ......, a_n\}$ be a finite nonempty set of real numbers.
show that S is bounded.
I know that to prove a set is bounded you need to prove that it is bounded from above and below, but I do not know how to express that in a mathematical way. How am I supposed to prove it?
|
You might let $M = |a_1|+|a_2|+\cdots+|a_n|$. Then for any $i$, you have
$$a_i \leq |a_i| \leq M,$$
so $M$ is an upper bound of $S$. Similarly $-M$ is a lower bound.
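A quick numerical illustration of this bound (the sample set below is arbitrary):

```python
# M = |a_1| + ... + |a_n| bounds every element of S from both sides.
S = [3.5, -7.2, 0.0, 12.0, -1.0]
M = sum(abs(a) for a in S)
assert all(-M <= a <= M for a in S)
print("M =", M, "so every element of S lies in [-M, M]")
```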
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Faster way to solve an equation. Solve the equation:
$\sqrt[3]{x-2} + \sqrt[3]{x} + \sqrt[3]{x+2} = 0$
$f(x) = \sqrt[3]{x-2} + \sqrt[3]{x} + \sqrt[3]{x+2}$
Firstly I check the amount of solutions.
*
*Graph of the function starts at the bottom and ends at the top.
*The derivative is always greater than 0, so the function is always growing.
*After all of that we know that there is just 1 solution.
Now I try to get the solution:
*
*$\sqrt[3]{x-2} + \sqrt[3]{x+2} = -\sqrt[3]{x} $
*$(\sqrt[3]{x-2} + \sqrt[3]{x+2})^3 = -x $
*$x-2 + x + 2 + 3\sqrt[3]{(x-2)^2(x+2)} + 3\sqrt[3]{(x-2)(x+2)^2} = -x$
*$3x + 3\sqrt[3]{(x-2)(x+2)}(\sqrt[3]{x-2} + \sqrt[3]{x+2})= 0$
*$3x + 3\sqrt[3]{(x-2)(x+2)}(-\sqrt[3]{x})= 0$
*$3(x - \sqrt[3]{x^2-4} \cdot \sqrt[3]{x}) = 0$
*$\sqrt[3]{x^2} \cdot \sqrt[3]{x} - \sqrt[3]{x^2-4} \cdot \sqrt[3]{x} = 0$
*$\sqrt[3]{x}(\sqrt[3]{x^2} - \sqrt[3]{x^2-4}) = 0 $
So the solution is $x = 0$.
I am wondering if there is a faster way to do this, without checking the amount of solutions.
|
Observe that for the given function, $f(a)=-f(-a)$ for any value of $a$.
Now put $a=0$.
Thus $f(0)=-f(0)$.
=> $2f(0)=0$
=> $f(0)=0$
=> $x=0$ is a solution.
Also, $f'(x)>0$ for all real numbers $x$.
So, the function is always increasing and thus $x=0$ is the only solution.
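Both facts used here, that $f(0)=0$ and that $f$ is increasing, are easy to confirm numerically (a sketch, not part of the original answer):

```python
import math

# f(0) = 0 and f is strictly increasing on a sample grid, so x = 0 is the
# unique real solution. The real cube root is written by hand because
# t ** (1/3) is not real-valued for negative floats.
def cbrt(t):
    return math.copysign(abs(t) ** (1 / 3), t)

def f(x):
    return cbrt(x - 2) + cbrt(x) + cbrt(x + 2)

assert abs(f(0)) < 1e-12
xs = [i / 10 for i in range(-50, 51)]
values = [f(x) for x in xs]
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing
print("f(0) =", f(0))
```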
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can a $C^r$ differentiable function on an arbitrary set be extended to a $G_\delta$ set? Let $S\subseteq\mathbb R^n$ be an arbitrary set, $0\le r\le\infty$.
For $r<\infty$, call a function $f:S\to\mathbb R$ is $C^r$, if there are continuous functions $\{f_\alpha:S\to\mathbb R\}_{0\le|\alpha|\le r}$ such that $f(x+h)=\sum_{0\le|\alpha|\le r}f_\alpha(x)h^\alpha+o(|h|^r)$ holds for all $x\in S$.
Say $f$ is $C^\infty$, if $f$ is $C^r$ for all finite $r$.
If $f:S\to\mathbb R$ is such a $C^r$ function, can we find a $G_\delta$ set containing $S$ such that $f$ has a $C^r$ extension to that set?
I edited the question from "open set" to "$G_\delta$ set"; I hope it might work...
|
Suppose that the closure of $S$ has non-empty interior $V$. If $f$ is continuous on the set $S\cap V$ then it has a continuous extension to a $G_\delta$ set in the closure of $S\cap V$ (a countable intersection of open dense sets). Conversely, given a dense $G_\delta$ set $\Omega$ in Euclidean space you may construct a continuous function on $\Omega$ that admits no continuous extension; for example, a function defined and continuous on the irrationals admitting no continuous extension to any rational point.
For higher order derivatives with your (slightly unusual) definition, I suspect the conclusion is the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Are these sets representing the subspace topology equal? I have a question regarding the representation of the subspace topology.
Let $(\Omega, \mathcal T)$ be a topological space, $\Omega' \subseteq \Omega$ and $\mathcal T' \subseteq \mathcal T$ the subspace topology with respect to $\mathcal T$. Then, by definition $$\mathcal T' = \{U\cap\Omega' \,|\, U\in \mathcal T\}.$$
My question: Is the following representation equal to the first one? $$\mathcal T'= \{U\in \mathcal T \,|\,U\subseteq \Omega'\}$$
|
The second definition is not correct in general.
The subspace topology on $\Omega '$ consists of the intersections of the open sets of $\Omega$ with $\Omega'$; hence it is possible that there are $U\in\mathcal T$ that are not subsets of $\Omega'$ and that, at the same time, we need in order to define the subtopology (the open sets) of $\Omega'$.
For example, take the topological subspace of $\Bbb R$ defined by $A:=[0,1)\cup(2,3]$. Then from the open sets $(-1,1)$ and $(2,4)$ of (the standard topology of) $\Bbb R$ we have that
$$A\cap (-1,1)=[0,1),\quad A\cap (2,4)=(2,3]\tag{1}$$
Hence $[0,1)$ and $(2,3]$ are open sets in the topological subspace $A$ (by your first definition of topological subspace).
But if we follow your second "definition" of topological subspace we cannot obtain the open sets in $(1)$, because it is clear that $[0,1)$ and $(2,3]$ are not open sets in (the standard topology of) $\Bbb R$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
show that for $n=1,2,...,$ the number $1+1/2+1/3+...+1/n-\ln(n)$ is positive show that for $n=1,2,...,$ the number $1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}-\ln(n)$ is positive, that it decreases as $n$ increases, and hence that the sequence of
these numbers converges to a limit between $0$ and $1$ (Euler's constant).
I'm trying to prove this by induction on $n$ and I made the base step, I could not with the inductive step because to do so suppose that for $n=1,2,\dots,$ it is true that $1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}-\ln(n)$ is positive and let's see that $1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}+\frac{1}{n+1}-\ln(n+1)$ is positive,
We see that
\begin{align}
&1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}+\frac{1}{n+1})-\ln(n+1)\\
=&1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}-\ln(n)+\frac{1}{n+1}-\ln(n+1)+\ln(n)\\
>&\frac{1}{n+1}-\ln(n+1)+\ln(n)
\end{align}
But I do not know how to prove that $\frac{1}{n+1}-\ln(n+1)+\ln(n)>0$ what do you say? Can you do what I did?
|
Note that the sequence in question, let's call it $\alpha_n$, can be written as
$$\alpha_n = \Big(\sum^n_{k=1} 1/k\Big) - \ln(n).$$
Note that
\begin{align}
\ln(n) &:= \int^n_1 \frac {1} {t}\, dt \\
&= \int_1^2 \frac {1}{t}\, dt + \int_2^3 \frac {1}{t}\, dt + \dots + \int^n_{n-1} \frac {1}{t}\, dt \\
& \le (2-1) \cdot \frac {1}{1} + (3-2) \cdot \frac {1}{2} + \dots + (n-(n-1)) \cdot \frac {1}{n-1} \\
&= 1 + \frac {1}{2} + \dots + \frac {1}{n-1},
\end{align}
where each integral is bounded by the length of the interval times the maximum of $1/t$ on it.
Therefore, for a fixed $n$, $\alpha_n \ge \frac {1} {n} > 0$, so the sequence is positive.
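Both claims, positivity and monotonic decrease, are easy to check numerically; the values approach the Euler constant $\gamma \approx 0.5772$ (a verification sketch, not part of the proof):

```python
import math

# alpha_n = H_n - ln(n): check positivity and strict decrease up to n = 10000.
H = 0.0
alphas = []
for n in range(1, 10001):
    H += 1 / n
    alphas.append(H - math.log(n))
assert all(a > 0 for a in alphas)
assert all(a > b for a, b in zip(alphas, alphas[1:]))  # strictly decreasing
print("alpha_10000 =", alphas[-1])  # close to Euler's constant 0.5772...
```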
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
CDF of a random variable Consider a random variable $Y_{n}$ such that,
$$P\Big(Y_{n} = \frac{i}{n}\Big) = \frac{1}{n}, \qquad i = 1, 2, \ldots, n.$$
Is the cdf of $Y_n$ at $\frac{i}{n}$, for every integer $n \ge 1$, simply $\sum_{k=1}^i \frac{1}{n} = \frac{i}{n}$?
Also, how do you show that for any $u \in \mathbb R$, $\lim_{n\to\infty} P(Y_{n} \le u) = P(U \le u)$,
where $U$ is uniform on $[0,1]$?
My thought was that if $P(Y_{n} \le \frac{i}{n}) = \frac{i}{n}$, then $P(Y_{n} \le u) = u$, which is equal to
$\int_0^u 1 \, du$.
Any suggestions?
|
The cdf of $Y_n$ is given by
$$
P(Y_n\le y)=\frac{\lfloor ny\rfloor}n
$$
for $0\le y\le 1$, where $\lfloor\cdot\rfloor$ is the floor function. We have that $\lfloor ny\rfloor/ny\to1$ as $n\to\infty$. Hence, $P(Y_n\le y)\to y$ as $n\to\infty$ which is the cdf of the continuous uniform distribution on $[0,1]$.
Alternatively, we can use the moment generating functions. The moment generating function of $Y_n$ is given by
$$
\frac{e^{t/n}[e^t-1]}{n(e^{t/n}-1)}.
$$
for $t\ne0$. We have that $e^{t/n}\to1$ as $n\to\infty$ and $n(e^{t/n}-1)\to t$ as $n\to\infty$. We obtain
$$
\frac{e^{t/n}[e^t-1]}{n(e^{t/n}-1)}\to\frac{e^t-1}t
$$
as $n\to\infty$ which is the moment generating function of the continuous uniform distribution on $[0,1]$.
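A quick numerical illustration of the first part, $\lfloor ny\rfloor/n \to y$ (the value of $y$ below is an arbitrary choice):

```python
import math

def cdf(n, y):
    # cdf of Y_n at y, for 0 <= y <= 1
    return math.floor(n * y) / n

y = 0.3
ns = (10, 100, 1000, 10000)
errs = [abs(cdf(n, y) - y) for n in ns]
# the error is at most 1/n (tiny tolerance added for floating point)
assert all(e <= 1 / n + 1e-9 for e, n in zip(errs, ns))
assert errs[-1] < 1e-3
print(errs)
```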
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Junior olympiad question: Minimum value of 3 digit number divided by sum of its digits I recently had a maths competition where we were given this problem. I solved the question, but I narrowed down the possibilities then did more of a guess and check method. I was hoping someone else could help me get the answer to this question using a more efficient, less time consuming way. The answer that I got was 189/18 which was 10.5. I know it seems significantly easy but I just can't find the correct method for it.
Let A be a number consisting of three different nonzero digits and let B be the sum of all the three digits. Find the minimum value of A/B.
|
Let the number be $\overline{abc}=100a+10b+c\,$ with digits $1 \le a,b,c \le 9\,$. Then:
$$
\begin{align}
\frac{100a+10b+c}{a+b+c} & = 1 + 9\cdot\frac{11a+b}{a+b+c} \\[3px]
& \ge 1 + 9\cdot\frac{11a+b}{a+b+\color{red}{9}} \quad\quad\quad\quad\text{(*)}\\[3px]
& = 1 + 9 + 9 \cdot \frac{10a-9}{a+b+9} \\[3px]
& \ge 1 + 9 + 9 \cdot \frac{10a-9}{a+\color{red}{8}+9} \\[3px]
& = 10 + 9 \cdot \frac{10a+170-179}{a+17} \\[3px]
& = 10 + 9 \cdot 10 - 9 \cdot \frac{179}{a+17} \\[3px]
& \ge 100 - 9 \cdot \frac{179}{\color{red}{1}+17} \\[3px]
& = \frac{189}{18}
\end{align}
$$
The minimum is attained when all the inequalities above are equalities i.e. $\overline{abc}=189\,$.
$(*)\,$ Given the condition that the digits must be different, and given that the next digit $b$ will also compete for the highest value, the alternative is to assign $c=8$ at this step, and save the digit $9$ to be assigned to $b=9$ at the next step. However, that leads to the solution $\overline{abc}=198\,$, which gives a higher ratio $198 / 18 \gt 189 / 18\,$, therefore the unique minimum is indeed attained for $\overline{abc}=189\,$.
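The answer can also be confirmed by brute force over all admissible digit triples (a verification sketch):

```python
from fractions import Fraction

# Minimise A/B over all three-digit numbers with three different nonzero digits.
best = min(
    (Fraction(100 * a + 10 * b + c, a + b + c), 100 * a + 10 * b + c)
    for a in range(1, 10)
    for b in range(1, 10)
    for c in range(1, 10)
    if len({a, b, c}) == 3
)
assert best[0] == Fraction(189, 18) and best[1] == 189
print(best)  # (Fraction(21, 2), 189), i.e. 189/18 = 10.5
```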
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Some curious binomial coefficient identities I was playing around with some polynomials involving binomial coefficients, and inadvertently proved the following two identities:
(1) For all $p, q \in \mathbb{N}$ and $i \in \left[ 0, \min\{p, q\} \right]$:
$$
\begin{pmatrix}p \\ i\end{pmatrix} = \sum_{j=0}^i (-1)^j \begin{pmatrix}q \\ j\end{pmatrix} \begin{pmatrix}p + q - j \\ i - j\end{pmatrix} \text{.}
$$
(2) For all $q \in \mathbb{N}_{\ge 1}$ and $i \in [0, q]$:
$$
\frac{q - i}{q} = \sum_{j=0}^i (-1)^j \begin{pmatrix}i \\ j\end{pmatrix} \begin{pmatrix}2q - 1 - j \\ i - j\end{pmatrix} \begin{pmatrix}q - j \\ i - j\end{pmatrix}^{-1} \text{.}
$$
Can either of these identities be proven in any trivial way (e.g., by reduction to known identities)?
|
The first one is just inclusion-exclusion in the following way:
Take the set $[p+q]=\{1,\cdots ,p+q\},$ so you want to take $i$ elements from those such that they all belong to $[p].$ By definition you just restrict yourself to the set $[p]$, hence there are $\binom{p}{i}$ ways; but on the other hand it is the same as $$|T\setminus \bigcup _{j=1}^q A_{p+j}|,$$
where $T$ is take all possible subsets of size $i$ from $[p+q]$ which can be done in $\binom{p+q}{i}$ and $A_r = \{S\subset [p+q]:|S|=i \wedge r\in S\}$ (so we are taking out all sets that contain elements on $[p+q]\setminus [p]$.)
By inclusion-exclusion then $$|T\setminus \bigcup _{j=1}^q A_{p+j}|=|T|-\sum _{j=1}^q(-1)^{j-1}\sum _{X\in \binom{[p+q]\setminus [p]}{j}}|\bigcap _{y\in X} A_{y}|,$$
As seen before, $|T|=\binom{p+q}{i},$ and $|A_r|=\binom{p+q-1}{i-1}$ and if you take $r_1,r_2\in [p+q]\setminus [p],$ $|A_{r_1}\cap A_{r_2}|=\binom{p+q-2}{i-2}$ because you have already chosen $2$, hence the intersections are homogeneous and then $$|T|-\sum _{j=1}^q(-1)^{j-1}\sum _{X\in \binom{[p+q]\setminus [p]}{j}}|\bigcap _{y\in X} A_{y}|=\binom{p+q}{i}-\sum _{j=1}^q(-1)^{j-1}\sum _{X\in \binom{[p+q]\setminus [p]}{j}}\binom{p+q-j}{i-j}=\binom{p+q}{i}-\sum _{j=1}^q(-1)^{j-1}\binom{q}{j}\binom{p+q-j}{i-j},$$
which is your identity.
The second one seems more challenging (but it suggests a probabilistic approach).
Added: The second is a particular case of the first one.
Notice that $\frac{\binom{i}{j}}{\binom{q-j}{i-j}}=\frac{(q-i)!i!}{j!(q-j)!}=\frac{\binom{q}{j}}{\binom{q}{i}},$ so on the LHS you get $$\binom{q-1}{i},$$
and then take $p=q-1$ in your first identity and the result follows.
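Identity (1) is also easy to sanity-check numerically for small parameters (a quick sketch, independent of the proof above):

```python
from math import comb

# Check identity (1): C(p, i) = sum_j (-1)^j C(q, j) C(p + q - j, i - j)
# for all p, q < 8 and 0 <= i <= min(p, q).
for p in range(8):
    for q in range(8):
        for i in range(min(p, q) + 1):
            rhs = sum((-1) ** j * comb(q, j) * comb(p + q - j, i - j)
                      for j in range(i + 1))
            assert comb(p, i) == rhs
print("identity (1) verified for all p, q < 8")
```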
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Partial Fraction decomposition (degrees) Let us say that I want to decompose the fraction
$$\frac{x-3}{x^2 +6x+5}$$ into partial fractions.
I know that we have to factor the denominator and write it as $$\frac{A}{x+1} + \frac{B}{x+5}.$$ Then we get
$$x - 3 = A(x+5) + B(x+1).$$
Select convenient values for $x$ and solve for $A$ and $B$.
My question, why if the denominator is linear, the numerator has to be 1 degree less than the denominator? Not just in this specific example, but in all questions - THE top has to be one degree less than the bottom. Why?
|
If $f(x)=\frac{P(x)}{Q(x)}$ is a rational function and $\deg(P(x))\ge\deg(Q(x))$, we call $f$ an Improper rational function; it is said to be "top-heavy".
Conversely, if $\deg(P(x))\lt\deg(Q(x))$ then $f$ is a Proper rational function.
When $\deg(P(x))\ge\deg(Q(x))$, we use polynomial long division to turn a top-heavy rational function $f(x)=\frac{P(x)}{Q(x)}$ into a proper rational function:
$$f(x)=S(x) + \frac {R(x)}{Q(x)}$$
where $S(x)$ and $R(x)$ are polynomials and $\deg(R(x))\lt\deg(Q(x))$.
Select convenient values for $x$ and solve for $A$ and $B$.
Finding $A$ and $B$ (and often $C$, $D$, $E$ and so on) is extremely important when you get to integral calculus if you need to evaluate integrals involving rational functions such as this one using the function from your example:
$$\int \frac{x-3}{x^2 +6x+5} \, dx.$$
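The "select convenient values" step can be sketched with exact rational arithmetic; for this example the convenient values are the roots $x=-1$ and $x=-5$ of the denominator:

```python
from fractions import Fraction

# x - 3 = A(x + 5) + B(x + 1): substituting each root kills one unknown.
A = Fraction(-1 - 3, -1 + 5)  # x = -1: A = (x - 3)/(x + 5) = -1
B = Fraction(-5 - 3, -5 + 1)  # x = -5: B = (x - 3)/(x + 1) = 2
assert (A, B) == (Fraction(-1), Fraction(2))

# Verify the decomposition exactly at a few sample points.
for x in (0, 1, 7, -2):
    lhs = Fraction(x - 3, x * x + 6 * x + 5)
    rhs = A / (x + 1) + B / (x + 5)
    assert lhs == rhs
print("A =", A, " B =", B)  # A = -1  B = 2
```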
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2192932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How to compose two functions? I have three functions:
$$f(x) = x+1 ,\; g(x) = x - 1 ,\; h(x) = 2x$$
I want to find $g\circ f$, that is $g(f(x))$, which equals $(x+1)-1=x$, but how?
I don't understand the steps.
Also, I want to find $h\circ f$, which equals $h(f(x)) = 2x-1$, but I don't know how to get that answer either.
Thanks
|
The function $f$ will take an input and return an output equal to one more than the input.
$f(\underbrace{\color{red}{x}}) = \underbrace{\color{red}{x}}+1$
Similarly $f(\underbrace{\color{red}{55}})=\underbrace{\color{red}{55}}+1$ and $f(\underbrace{\color{red}{8x^2-3}})=\underbrace{\color{red}{8x^2-3}}+1$
The function $g$ will take an input and return an output equal to one less than the input.
$g(\underbrace{\color{blue}{x}})=\underbrace{\color{blue}{x}}-1$
So, we have $(g\circ f)(x)=g(\underbrace{\color{blue}{f(x)}}) = g(\underbrace{\color{blue}{x+1}})=(\underbrace{\color{blue}{x+1}})-1 = x+1-1=x$
Similar manipulation can be done for $h$
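A tiny sketch of the substitution idea in Python. (Note that with $h(x)=2x$ as defined, $h(f(x)) = 2(x+1) = 2x+2$; the expression $2x-1$ mentioned in the question is what $g(h(x))$ gives.)

```python
f = lambda x: x + 1
g = lambda x: x - 1
h = lambda x: 2 * x

for x in range(-5, 6):
    assert g(f(x)) == x          # (x + 1) - 1 = x
    assert h(f(x)) == 2 * x + 2  # 2 * (x + 1)
    assert g(h(x)) == 2 * x - 1  # (2x) - 1
print("compositions verified on sample inputs")
```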
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Determining whether the series $\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}$ is convergent or divergent by comparison test I am given the series:
$$\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}$$
and I am asked to determine whether it is convergent or not. I know I need to use the comparison test to determine this. I can make a comparison with a harmonic p series ($a_n=\frac{1}{n^p}$ where p > 1, series converges). I argue that as the denominator grows more rapidly than the numerator, I need only look at the denominators:
$$\frac{1}{n^2+5}\le\frac{1}{n^2}$$
$\frac{1}{n^2}$ is a harmonic p series where $p>1$ which converges. As $\frac{\sqrt{n}+\sin(n)}{n^2+5}$ is less than that, by the comparison test, $\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}$ is convergent.
Is this a valid argument for this question?
|
Since $\sin(n) \leq 1$, we have $\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}\leq \sum_{n=1}^{\infty} \frac{\sqrt{n}+1}{n^2+5}$ (and all terms are nonnegative, since $\sqrt{n} \ge 1 \ge |\sin(n)|$).
Now $\frac{\sqrt{n}+1}{n^2+5} \le \frac{\sqrt{n}+1}{n^2}$,
so it suffices that $\sum_{n=1}^{\infty} \frac{\sqrt{n}+1}{n^2} = \sum_{n=1}^{\infty} \frac{1}{n^{1.5}}+\sum_{n=1}^{\infty} \frac{1}{n^2}$ converges; both are convergent $p$-series, so $\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}$ converges.
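Numerically, the partial sums level off quickly, as one expects from the comparison (an illustration, with checkpoints chosen arbitrarily):

```python
import math

# Partial sums of the positive-term series at a few checkpoints.
partial = 0.0
checkpoints = []
for n in range(1, 100001):
    partial += (math.sqrt(n) + math.sin(n)) / (n * n + 5)
    if n % 20000 == 0:
        checkpoints.append(partial)
assert all(a < b for a, b in zip(checkpoints, checkpoints[1:]))  # increasing
assert checkpoints[-1] - checkpoints[-2] < 0.01  # tail contributions shrink
print(checkpoints)
```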
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find the generating function or closed form for the recurrence relation $a_n = a_{n-1} + 4a_{n-2} + 2a_{n-3}$ I was trying to solve this recurrence relation using generating function
$a_n = a_{n-1} + 4a_{n-2} + 2a_{n-3} \qquad : \quad a_0 =1,a_1 =1,a_2 =5, $
I did in the following way
$
\begin{align*}
&G(x) = \sum_{n=0}^{\infty}a_n.x^n \\
&G(x) = a_0x^0 +a_1x^1+a_2x^2+\sum_{n=3}^{\infty}a_n.x^n \\
&G(x) = 1.x^0 +1.x^1+5.x^2+\sum_{n=3}^{\infty}\left ( a_{n-1} +4.a_{n-2} + 2.a_{n-3} \right ).x^n \\
&G(x) = 1.x^0 +1.x^1+5.x^2+ x.\sum_{n=2}^{\infty}a_nx^n + 4x^2.\sum_{n=1}^{\infty}a_nx^n + 2x^3.\sum_{n=0}^{\infty}a_nx^n\\
&G(x) = 1.x^0 +1.x^1+5.x^2+ x.\left [ G(x) - 1 - x \right ] + 4x^2.\left [ G(x) - 1 \right ] + 2x^3.G(x)\\
&G(x) = \frac{1}{1-x-4x^2-2x^3} \\
\end{align*}$
Now how to get the closed form in terms of $n$ after this?
If any other methods available to find the closed form please mention.
Thanks !
|
Here are two variants to derive $a_n$. The first one gives a closed form, the other one an explicit expression, which results in a nice binomial identity.
First variant: Partial fractions
In case it's easy to derive the zeros of the denominator of
\begin{align*}
G(x) = \frac{1}{1-x-4x^2-2x^3}
\end{align*}
the partial fraction decomposition is a convenient method. As @J.G. indicated, $x=-1$ is a zero.
Omitting some intermediary calculations we obtain
\begin{align*}
G(x)&=\frac{1}{1-x-4x^2-2x^3}\\
&=\frac{1}{1+x}-\frac{2x}{2x^2+2x-1}\\
&=\frac{1}{1+x}-\frac{x}{(x+\frac{1}{2}(1+\sqrt{3}))(x+\frac{1}{2}(1-\sqrt{3}))}\\
&=\frac{1}{1+x}-\frac{1}{2\sqrt{3}}\cdot\frac{1+\sqrt{3}}{\left(x+\frac{1}{2}+\frac{\sqrt{3}}{2}\right)}
+\frac{1}{2\sqrt{3}}\cdot\frac{1-\sqrt{3}}{\left(x+\frac{1}{2}-\frac{\sqrt{3}}{2}\right)}\\
&=\frac{1}{1+x}-\frac{1}{\sqrt{3}}\cdot\frac{1}{1-(1-\sqrt{3})x}+\frac{1}{\sqrt{3}}\cdot\frac{1}{1-(1+\sqrt{3})x}\\
&=\sum_{n=0}^\infty\left[(-1)^n-\frac{1}{\sqrt{3}}(1-\sqrt{3})^n+\frac{1}{\sqrt{3}}(1+\sqrt{3})^n\right]x^n\tag{1}
\end{align*}
Second variant: Geometric series
We can also directly apply a geometric series expansion. It is convenient to use the coefficient of operator $[x^n]$ to denote the coefficient of $x^n$ in a series and obtain
\begin{align*}
[x^n]G(x)&=[x^n]\frac{1}{1-x-4x^2-2x^3}\\
&=[x^n]\sum_{j=0}^\infty x^j(1+4x+2x^2)^j\\
&=\sum_{j=0}^n[x^{n-j}]\sum_{k=0}^j\binom{j}{k}(2x)^k(2+x)^k\tag{2}\\
&=\sum_{j=0}^n\sum_{k=0}^{\min\{j,n-j\}}\binom{j}{k}2^k[x^{n-j-k}](2+x)^k\\
&=\sum_{j=0}^n\sum_{k=0}^{\min\{j,n-j\}}\binom{j}{k}\binom{k}{n-j-k}2^{3k-n+j}\tag{3}
\end{align*}
Comment:
*
*In (2) we use the linearity of the coefficient of operator and apply the rule
\begin{align*}
[x^{p-q}]A(x)=[x^p]x^qA(x)
\end{align*}
We also set the upper limit of the outer sum to $n$ since the exponent of $x^{n-j}$ is non-negative.
*In (3) we select the coefficient of $x^{n-j-k}$.
Binomial identity: We derive from (1) and (3) the following binomial identity by changing the order of summation in the outer sum of (3), i.e. $j\rightarrow n-j$.
\begin{align*}
\sum_{j=0}^n&\sum_{k=0}^{\min\{j,n-j\}}\binom{n-j}{k}\binom{k}{j-k}2^{3k-j}\\
&=(-1)^n-\frac{1}{\sqrt{3}}(1-\sqrt{3})^n+\frac{1}{\sqrt{3}}(1+\sqrt{3})^n\qquad\qquad n\geq 0
\end{align*}
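The closed form in (1) can be checked against the recurrence numerically (a quick sketch):

```python
from math import sqrt

# a_n from the recurrence with a_0 = a_1 = 1, a_2 = 5.
def a_rec(n, memo={0: 1, 1: 1, 2: 5}):
    if n not in memo:
        memo[n] = a_rec(n - 1) + 4 * a_rec(n - 2) + 2 * a_rec(n - 3)
    return memo[n]

# a_n from the closed form (1).
s = sqrt(3)
def a_closed(n):
    return (-1) ** n - (1 - s) ** n / s + (1 + s) ** n / s

for n in range(15):
    assert abs(a_closed(n) - a_rec(n)) < 1e-6 * max(1, a_rec(n))
print([a_rec(n) for n in range(8)])  # [1, 1, 5, 11, 33, 87, 241, 655]
```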
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Showing a linear map is injective if and only if kernel is {$ {0} $} So my prof gave me this proof:
$f(x) = f(y) ⇐⇒ f(y − x) = 0 ⇐⇒ y − x ∈ Ker f.$
I don't see why this proof is enough; it only says $y-x \in Ker f$.
|
First suppose $f$ is injective.
Since $f$ is linear, $f(0) = 0$, hence $0 \in \text{ker}(f)$.
But if $x$ is any element of $\text{ker}(f)$, then
\begin{align*}
&x \in \text{ker}(f)&&\\[4pt]
\implies\; &f(x) = 0&&\\[4pt]
\implies\; &f(x) = f(0)&&\text{[since $f(0) = 0$]}\\[4pt]
\implies\; &x = 0&&\text{[since $f$ is injective]}\\[4pt]
\end{align*}
It follows that $\text{ker}(f) = \{0\}$.
Thus, $f$ injective implies $\text{ker}(f) = \{0\}$.
Next, suppose $\text{ker}(f) = \{0\}$. Then
\begin{align*}
&f(x)=f(y)&&\\[4pt]
\implies\; &f(x)-f(y) = 0&&\\[4pt]
\implies\; &f(x-y) = 0&&\text{[since $f$ is linear]}\\[4pt]
\implies\; &x-y \in \text{ker}(f)&&\\[4pt]
\implies\; &x-y = 0&&\text{[since $\text{ker}(f) = \{0\}$]}\\[4pt]
\implies\; &x=y&&\\[4pt]
\end{align*}
hence $f$ is injective.
Thus, $\text{ker}(f) = \{0\}$ implies $f$ is injective.
Hence, $f$ is injective $\iff \text{ker}(f) = \{0\}$, as was to be shown.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
Partial sum of order statistics of exponential r.v.'s and $\chi^2$ Suppose $X_i \sim Exp(\frac{1}{\lambda}), i = 1,\cdots,n$, where
$f(x) = I_{(0,\infty)}\frac{1}{\lambda}e^{-\frac{x}{\lambda}}$
is the p.d.f. of $X_i$'s.
And we have a positive integer $r$, and the order statistics
$$X_{(1)}\leq X_{(2)} \leq \cdots \leq X_{(r)}$$ where $1<r \leq n$.
Then, denote $$T = \sum_{i=1}^{r} X_{(i)} + (n-r)X_{(r)}$$
The problem is to prove that $\frac{2T}{\lambda} \sim \chi^2_{2r}$.
I'm quite at a loss here. It seems to have a lot to do with the Gamma distribution, but it doesn't seem that the sum of the first $r$ order statistics follows it.
Even if the sum of the first r items does follow Gamma, I don't know how to handle the following $(n-r)X_{(r)}$.
It doesn't seem right to directly compute the p.d.f. of $T$, which I've tried and failed.
I'd appreciate it enormously if anyone can give me any hint or solution !
|
Your question was actually answered eight decades ago by P. V. Sukhatme. I'll explain how to prove the result, assuming that you are familiar with the fundamentals of probability theory and the relationships between the exponential, gamma and $\chi^2$ distributions. First, if you review any textbook that addresses order statistics, you will observe that the joint probability density function of the order statistics $X_{(1)}, \ldots, X_{(n)}$ is given by
$$
f(x_{(1)}, \ldots, x_{(n)}) = n! \prod^n_{i = 1} f(x_{(i)}).
$$
By integrating the above joint probability density function with respect to $x_{(n)}, \ldots, x_{(r + 1)}$, respectively, one will obtain
$$
g(x_{(1)}, \ldots, x_{(r)}) = \frac{n!}{(n - r)!} \prod^r_{i = 1} f(x_{(i)}) \left[1 - F(x_{(r)})\right]^{n - r},
$$
where $F(\cdot)$ is the corresponding cumulative distribution function. In the case of the exponential distribution with scale parameter $\lambda$, the above density is given by
$$
g(x_{(1)}, \ldots, x_{(r)}) = \dfrac{n!}{(n - r)!\,\lambda^r} \exp\left(-\lambda^{-1} T\right),
$$
where $T = \sum^{r}_{i = 1} X_{(i)} + (n - r) X_{(r)}$. Second, define the following transformations:
$$
\begin{array}{l}
S_1 = n X_{(1)} \\
S_2 = (n - 1) \left[X_{(2)} - X_{(1)}\right] \\
\vdots \\
S_{r - 1} = [n - (r - 1) + 1] \left[X_{(r - 1)} - X_{(r - 2)}\right] \\
S_{r} = [n - r + 1] \left[X_{(r)} - X_{(r - 1)}\right] \\
\end{array}
$$
Clearly,
$$
\sum^r_{i = 1} S_i = \sum^{r - 1}_{i = 1} X_{(i)} + (n - r + 1) X_{(r)} = \sum^{r}_{i = 1} X_{(i)} + (n - r) X_{(r)} = T,
$$
and
$$
\begin{array}{l}
X_{(1)} = \frac{S_1}{n} \\
X_{(2)} = \frac{S_2}{n - 1} + \frac{S_1}{n} \\
\vdots \\
X_{(r - 1)} = \frac{S_{r - 1}}{n - r + 2} + \cdots + \frac{S_2}{n - 1} + \frac{S_1}{n} \\
X_{(r)} = \frac{S_{r}}{n - r + 1} + \frac{S_{r - 1}}{n - r + 2} + \cdots + \frac{S_2}{n - 1} + \frac{S_1}{n} \\
\end{array}
$$
Note that $S_1, \ldots, S_r$ are called spacings in statistical literature. Third, acquire the joint probability density function of the transformations $S_1, \ldots, S_r$, which is simply
$$
h(s_1, \ldots, s_r) = g\left(\frac{S_1}{n}, \frac{S_2}{n - 1} + \frac{S_1}{n}, \ldots, \frac{S_{r}}{n - r + 1} + \cdots + \frac{S_2}{n - 1} + \frac{S_1}{n}\right) \mathbf{J}
$$
where $\mathbf{J}$ is the Jacobian of transformation. Clearly, since
$$
\mathbf{J} = \left|
\begin{matrix}
\frac{1}{n} & 0 & 0 & \cdots & 0 \\
\frac{1}{n} & \frac{1}{n - 1} & 0 & \cdots & 0 \\
\frac{1}{n} & \frac{1}{n - 1} & \frac{1}{n - 2} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{1}{n} & \frac{1}{n - 1} & \frac{1}{n - 2} & \cdots & \frac{1}{n - r + 1} \\
\end{matrix}
\right| = \frac{(n - r)!}{n!},
$$
then the above joint probability density function of the transformations $S_1, \ldots, S_r$ is reduced to
$$
h(s_1, \ldots, s_r) = \frac{1}{\lambda^r} e^{-\lambda^{-1} T} = \left(\dfrac{1}{\lambda} e^{-\lambda^{-1} s_1}\right) \cdots \left(\dfrac{1}{\lambda} e^{-\lambda^{-1} s_r}\right),
$$
i.e. $S_1, \ldots, S_r$ are independent and identically-distributed random variables that follow $\mathrm{Exp}(\lambda^{-1})$. Which means $T = \sum^r_{i = 1} S_i$ follows $\mathrm{Gamma}(r, \lambda)$. Hence, $\frac{2T}{\lambda} \sim \chi^2_{2r}$.
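A Monte Carlo sanity check of the conclusion (the parameters below are arbitrary illustration choices): samples of $2T/\lambda$ should have mean $2r$, the mean of $\chi^2_{2r}$.

```python
import random

random.seed(0)
lam, n, r, reps = 2.0, 6, 3, 40000
samples = []
for _ in range(reps):
    # n i.i.d. exponentials with mean lam (expovariate takes the rate 1/mean)
    xs = sorted(random.expovariate(1 / lam) for _ in range(n))
    T = sum(xs[:r]) + (n - r) * xs[r - 1]  # sum of first r order stats + (n-r) X_(r)
    samples.append(2 * T / lam)
mean = sum(samples) / reps
assert abs(mean - 2 * r) < 0.15  # chi^2_{2r} has mean 2r = 6
print("sample mean of 2T/lambda:", mean)
```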
References: P. V. Sukhatme (1937), Tests of significance for samples of the $\chi^2$ population with two degrees of freedom, Ann. Eugenics 8, 52-56.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Modal logic: justification of the rule of necessitation I'm studying some lecture notes about modal logic and I'm now reading a paragraph which goes as follows:
The rules of inference of system K are modus ponens and the rule of necessitation:
NEC: if A is a theorem, then ◻A is a theorem.
This rule is legitimate in that it preserves truth in a world in any model: if A is true in a world w, it must be true in every world accessible from w, so ◻A is true in w.
I don't quite understand the last line ("This rule is legitimate..."). Do you have any guesses? Maybe the correct phrasing should have been: if A is true in a world w and it is a theorem...
|
You write "This rule is legitimate in that it preserves truth in a world in any model".
This is not true. Necessitation preserves truth in a model but not truth in a world in a model. It is easy to see that what is true in a world need not be necessarily so in that world, hence adding the box operator may not preserve truth.
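A minimal computational illustration (a hypothetical two-world Kripke model, not from the source): $p$ holds at $w$ but fails at a world $v$ accessible from $w$, so $\Box p$ fails at $w$ even though $p$ is true there.

```python
access = {("w", "v")}   # v is accessible from w
val = {"p": {"w"}}      # atom p holds only at world w

def holds(formula, world):
    # formula is an atom like "p", or ("box", subformula)
    if isinstance(formula, str):
        return world in val[formula]
    _, sub = formula
    return all(holds(sub, u) for (x, u) in access if x == world)

assert holds("p", "w")               # p is true at w
assert not holds(("box", "p"), "w")  # but box-p is false at w, since p fails at v
print("truth at a world is not preserved by the box")
```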
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Orthonormal columns and rows The assignment:
a) Prove that square-matrix A is orthogonal if and only if A has orthonormal columns.
b) Prove that square-matrix A is orthogonal if and only if A has orthonormal rows.
So I know that a matrix $A$ has orthonormal columns if and only if $A^TA=I$.
But how about orthonormal rows? Should I use $AA^T=I$ ?
b) For example, can I prove like this (?) :
Let be $A=\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}$
$AA^T=I$
$AA^T=\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} \times \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}^T = \begin{bmatrix} a_1a_1^T & a_2a_2^T & a_3a_3^T \end{bmatrix}$
$a_1a_1^T=1 \quad\quad a_2a_2^T=1 \quad\quad a_3a_3^T=1$
So $A$ is orthogonal, because rows of matrix A are orthonormal. $\Box$
a) I did it like this which I think is correct:
$A^TA=I$
$A^TA=\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}^T \times \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} =
\begin{bmatrix}
a_1^Ta_1 & a_1^Ta_2 & a_1^Ta_3
\\
a_2^Ta_1 & a_2^Ta_2 & a_2^Ta_3
\\
a_3^Ta_1 & a_3^Ta_2 & a_3^Ta_3
\end{bmatrix}$
$a_1^Ta_2=0 \quad a_1^Ta_3=0$
$a_2^Ta_1=0 \quad a_2^Ta_3=0$
$a_3^Ta_1=0 \quad a_3^Ta_2=0$
$a_1^Ta_1=1 \quad\quad a_2^Ta_2=1 \quad\quad a_3^Ta_3=1$
So $A$ is orthogonal, because columns of matrix A are orthonormal. $\Box$
|
For (b): Let me denote the matrix $A$ as follows:
$$\begin{pmatrix}
- & a_1 & -\\
- & a_2 & -\\
& \vdots & \\
- & a_n & -
\end{pmatrix}$$
where the $a_i$ are row vectors and I emphasised this by adding '-'. We know that a matrix $A$ is orthogonal if $AA^T = I$. We want to show that the rows of $A$ form an orthonormal set, so let us take two arbitrary rows, $a_j$ and $a_k$, with $1 \leq j,k \leq n$. Note that we have that
$$AA^T = \begin{pmatrix}
- & a_1 & -\\
- & a_2 & -\\
& \vdots & \\
- & a_n & -
\end{pmatrix}\begin{pmatrix}
| & | & & | \\
a_1^T & a_2^T & \ldots & a_n^T\\
| & | & & |
\end{pmatrix} = I$$
so if we compute $a_ja_k^T$, this corresponds to the entry in row $j$, column $k$ of the identity matrix. This entry is equal to $0$ if $j \neq k$ and equal to $1$ if $j = k$. This shows that the rows of $A$ form an orthonormal set. The other implication (orthonormal rows implies $A$ orthogonal) follows in the same way.
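As a concrete numerical check (the rotation matrix below is an illustrative choice, not from the source), one can verify that the rows and the columns of an orthogonal matrix are simultaneously orthonormal:

```python
import math

t = 0.7  # arbitrary angle; any rotation matrix is orthogonal
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

for j in range(2):
    for k in range(2):
        expected = 1.0 if j == k else 0.0  # entries of the identity matrix
        assert abs(dot(A[j], A[k]) - expected) < 1e-12          # rows
        col_j = [A[i][j] for i in range(2)]
        col_k = [A[i][k] for i in range(2)]
        assert abs(dot(col_j, col_k) - expected) < 1e-12        # columns
print("rows and columns are both orthonormal")
```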
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Transitivity of a relation on a set.
My book says the answer is B. But how? I understand that the relation is reflexive, as (a,a) belongs to R for all a belonging to the given set. Furthermore I understand that the relation is not symmetric, as (a,b) belongs to R does not imply that (b,a) belongs to R for all a,b belonging to the given set. What I don't understand is how the relation is transitive. I mean, they have given (1,3) and (3,2) and (1,2) in the relation, which makes it transitive for one case. But doesn't transitivity mean that (a,b) belongs to R and (b,c) belongs to R implies that (a,c) belongs to R for all a,b,c belonging to the given set? Isn't the "for all" violated here? So shouldn't it be not transitive? Or am I missing something?
EDIT: my book's statement was given as an image, which is not reproduced here.
|
For it not to be transitive, you need an a, b, c such that (a,b) and (b,c) are in R, but (a,c) is not (a, b, c need not be different). Do you see any instance of this? Let's see. Here are all pairs of pairs (a,b) and (b,c) in R:
(1,1) and (1,1)
(1,1) and (1,2)
(1,1) and (1,3)
(1,2) and (2,2)
(2,2) and (2,2)
(3,2) and (2,2)
(1,3) and (3,2)
(3,3) and (3,2)
(1,3) and (3,3)
(3,3) and (3,3)
Now, let's see if the (a,c) pair is in R in all those cases as well:
(1,1) and (1,1) => (1,1) Yes!
(1,1) and (1,2) => (1,2) Yes!
(1,1) and (1,3) => (1,3) Yes!
(1,2) and (2,2) => (1,2) Yes!
(2,2) and (2,2) => (2,2) Yes!
(3,2) and (2,2) => (3,2) Yes!
(1,3) and (3,2) => (1,2) Yes!
(3,3) and (3,2) => (3,2) Yes!
(1,3) and (3,3) => (1,3) Yes!
(3,3) and (3,3) => (3,3) Yes!
Yes, they all are ... so it is transitive.
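The exhaustive check above can be automated. The relation R below is reconstructed from the pairs enumerated in the answer (an assumption, since the original problem statement is not shown):

```python
# Relation on {1, 2, 3}, as implied by the answer's enumeration.
R = {(1, 1), (1, 2), (1, 3), (2, 2), (3, 2), (3, 3)}

def is_transitive(R):
    # For every (a, b) and (b, c) in R, (a, c) must also be in R.
    return all((a, c) in R
               for (a, b) in R
               for (b2, c) in R
               if b == b2)

print(is_transitive(R))                 # True
print(is_transitive({(1, 2), (2, 3)}))  # False: (1, 3) is missing
```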
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Norm of a Positive definite matrix is the largest eigenvalue Let $A$ be a positive definite symmetric matrix. I need to show that
$$\lambda_n=\max \{\frac{\|Ax\|}{\|x\|}: x\ne 0\}$$ is the largest eigenvalue of $A$. My try: $\frac{\|Ax\|}{\|x\|}\le \lambda$ for every $\lambda$ that is not an eigenvalue of $A$, and equality occurs when $\lambda$ is an eigenvalue. So $\lambda$ is the maximum? Maybe I am not even understanding the question.
|
By the spectral theorem, there is an orthonormal basis of eigenvectors of $A$, say $x_1,\dots,x_n$. WLOG, assume $0 < \lambda_1 \leq \dots \leq \lambda_n$, where $Ax_i = \lambda_i x_i$. For any non-zero $x$, write $x = c_1x_1 + \cdots + c_nx_n$. Then
$$ \frac{\lVert{Ax\rVert}^2}{\lVert{x\rVert}^2} = \frac{\lVert{c_1\lambda_1x_1 + \cdots + c_n\lambda_nx_n\rVert}^2}{c_1^2 + \cdots + c_n^2} = \frac{c_1^2\lambda_1^2 + \cdots + c_n^2\lambda_n^2}{c_1^2 + \cdots + c_n^2} \leq \lambda_n^2 \frac{c_1^2 + \cdots + c_n^2}{c_1^2 + \cdots + c_n^2} = \lambda_n^2,$$
with equality achieved when $x = x_n$ (whence $\lVert{Ax\rVert}=\lVert{\lambda_n x_n\rVert}=\lambda_n$).
You may wish to read about Rayleigh quotients or see Problem 37 in Brezis.
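A numerical sanity check of the result (my own sketch, not part of the proof): for a symmetric positive definite matrix, the ratio $\lVert Ax\rVert/\lVert x\rVert$ never exceeds the largest eigenvalue, and it attains that value at the corresponding eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)        # symmetric positive definite

w, V = np.linalg.eigh(A)           # eigenvalues in ascending order
lam_max = w[-1]

# Random directions never beat the top eigenvalue...
ratios = [np.linalg.norm(A @ x) / np.linalg.norm(x)
          for x in rng.standard_normal((5000, 4))]
assert max(ratios) <= lam_max + 1e-9

# ...and the top eigenvector attains it exactly.
x_n = V[:, -1]
assert np.isclose(np.linalg.norm(A @ x_n), lam_max)
```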
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
How to solve $\lim\limits_{x \to \infty}\left(\frac{x}{x+1}\right)^x$ How to solve $$\lim_{x \to \infty}(\dfrac{x}{x+1})^x$$
The answer is $\dfrac{1}{e}$
I can factor the $x$ out to get:
$$\lim_{x \to \infty}\left(\dfrac{x(1)}{x(1+1/x)}\right)^x = \lim_{x \to \infty}\left(\dfrac{1}{1+1/x}\right)^x$$
How do I further simplify this to get to my limit?
|
You almost got it:
$$\left(\frac1{1+\frac1x}\right)^x=\frac1{\left(1+\frac1x\right)^x}\xrightarrow[x\to\infty]{}\frac1e$$
where the limit is gotten using arithmetic of limits...
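A quick numerical check (my own addition) that $(x/(x+1))^x$ indeed approaches $1/e \approx 0.3679$:

```python
import math

# The expression should converge to 1/e as x grows.
for x in (10, 100, 10000):
    print(x, (x / (x + 1)) ** x)

val = (1e6 / (1e6 + 1)) ** 1e6
assert abs(val - 1 / math.e) < 1e-6
```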
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2193982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Bilinear operator norm question. So I wrote this down a week ago and cannot figure out what I was thinking. Not sure if this is correct.
Context we have a bilinear operator $B:X\times Y\to \mathbb{K}$.
Is it true that
$$\sup_{x\in X, y\in Y} \|B(x,y)\|< \infty \implies |B(x,y)|\leq K \|x\|\|y\|$$
So the absolute value is a norm on the reals, complex in one dimension so that's not a problem because the norms are equivalent.
So the implication works for the $x,y$ supremum case. But does it immediately follow for all other $(x,y)$? I cannot remember what I was thinking. Can anyone help?
|
$$\sup_{x\in X, y\in Y} \|B(x,y)\|< \infty \implies B=0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding the closed form for a recurrence relation I'm having trouble finding a closed form for a geometric recurrence relation where the term being recursively multiplied is of the form (x+a) instead of just (x).
Here's the recursive sequence:
$a_{n} = 4a_{n-1} + 5$ for $n \geq 1$ with the initial condition $a_{0} = 2$.
I know that in general the way to solve these problems is to start by writing out all of the arithmetic for the first few values of $a_{n}$, starting with the initial condition. Here's what I have:
$a_{0} = 2$
$a_{1} = 4 (2) + 5 \equiv ((2)(2)(2) + 5)$
$a_{2} = 4(4(2)+5)+5 \equiv (2)(2)((2)(2)(2)+5)+5$
$a_{3} = 4(4(4(2)+5)+5)+5 \equiv (2)(2)((2)(2)((2)(2)(2)+5)+5)+5$
$\ldots$ etc.
So at this point it's pretty clear to me that
$a_{n} = 2^{2n + 1} + \text{something}$
My problem is figuring out how to account for all of those 5's. Especially since the first 5 is being multiplied by $2^{3}$ and all of the other 5's are being multiplied by $2^{2}$.
I guessed something like this:
$a_{n} = 2^{2n+1} + 5(4^{n-1}) + 5^{n-1}$
$\ldots$ and the results were close, but not exact.
Can anyone help me out with the correct method for solving these types of problems? Thanks very much for your time.
|
This sequence is an affine recursion of the form $a_{n+1} = \lambda a_n + \mu$, with $\lambda=4\neq 1$ and $\mu=5$. Its $n$th term is given by
$$a_n = \lambda^n (a_0 - \rho) +\rho \, ,$$
where $\rho= \frac{\mu}{1-\lambda}$. The formula for the $n$th term can be obtained by setting $b_n = a_{n+1}-a_n$, which is a geometric sequence with common ratio $\lambda$,
$$
b_{n+1} = \underbrace{\lambda a_{n+1} + \mu}_{a_{n+2}} - \underbrace{(\lambda a_{n} + \mu)}_{a_{n+1}} = \lambda b_{n} \, ,
$$
and initial condition $$b_0 = a_1 - a_0 = \left(\lambda - 1\right) a_{0} + \mu \, .$$
The $n$th term $a_n$ is deduced from the telescoping series
\begin{aligned}
\sum_{k=0}^{n-1} b_k = a_n - a_0 = b_0 \frac{1-\lambda^n}{1-\lambda} \, .
\end{aligned}
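For the concrete numbers in the question ($\lambda=4$, $\mu=5$, $a_0=2$) we get $\rho = 5/(1-4) = -5/3$, so the closed form is $a_n = 4^n(2+\tfrac53) - \tfrac53 = \tfrac{11\cdot 4^n - 5}{3}$. A short check against the recursion, using exact rational arithmetic:

```python
from fractions import Fraction

lam, mu, a0 = 4, 5, 2
rho = Fraction(mu, 1 - lam)        # -5/3

a = Fraction(a0)
for n in range(10):
    closed = lam**n * (a0 - rho) + rho   # a_n = lambda^n (a0 - rho) + rho
    assert a == closed
    a = lam * a + mu                     # advance the recursion

print(a)  # a_10 = (11 * 4**10 - 5) / 3 = 3844777
```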
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Subset of normed Linear Space is closed The problem in question is as follows:
"Show that the set $P$ of all polynomials on the segment $[a,b]$ is a linear space. For $P$ considered as a subset of the normed linear space $C[a,b]$ with the norm
$$\|f\| = \max_{a\leq x\leq b} |f(x)|,$$
show that $P$ fails to be closed."
The first part of this question is trivial as we can just show linearity through addition, scalar multiplication and distributivity.
However, I'm having a hard time understanding how the norm gives me information about the closure of the set. Perhaps I'm thinking about closure wrong as it relates to normed spaces, but any help would be appreciated.
|
Are you familiar with the Weierstrass Approximation theorem?
http://www.mast.queensu.ca/~speicher/Section14.pdf
This pdf does a good job explaining it. But the statement of the theorem should be enough alone, for your purposes: there are continuous functions which are not polynomials, but by the theorem, are the uniform limit of polynomials.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Intuition difference between derivatives of $\exp (x) $ and $\log (x)$ It is well known that the exponential function $\exp (x)$ has the derivative $$\frac{d}{dx} \exp (x) = \exp (x).$$ However, its inverse, the (natural) logarithm, $\log (x)$, in fact changes after derivation: $$\frac{d}{dx} \log (x) = \frac{1}{x}. $$
I understand why this is correct, but this is not intuitive to me. Why does $\exp (x) $ not change after derivation, while $\log (x) $ does and they are just mirrored at $y = x $?
EDIT: Thank you all for your answers, they really helped me understanding this!
|
We can look at it geometrically.
Let $f(x)=e^x$, so $f^{-1}(x)=g(x)=\log(x)$.
Any function's inverse should look like its reflection over $y=x$. Let's consider a point $(x,f^{-1}(x))=(x,y)$ on the graph of log.
By definition, $f(y)=x$. As you know, the tangent line to $f$ at a point $(t, f(t))$ has slope $e^t$, so the tangent line to $f$ at $(y, x)$ has slope $s_f=e^y$. We want the slope of $g$ at $(x, y)$.
Well, since the graphs are reflections over $y=x$, so too should be the tangent lines. So, its slope should be the reciprocal of the slope of the other line:
$$
s_g = \frac{1}{s_f} = \frac{1}{e^y} = e^{-g(x)} = e^{-\log(x)} = \frac{1}{e^{\log(x)}} = \frac{1}{x}
$$
Thus, based on this geometric argument, the derivative of $g$ should be $1/x$ (since that's what the slope of the tangent line should be).
See here for some pictures.
Indeed, in general:
$$
\frac{d}{dx}f^{-1}(x) = [f'(f^{-1}(x))]^{-1}
$$
Also, I like to look at the Taylor (or MacLaurin) series:
$$
e^x=\sum_n \frac{x^n}{n!}
$$
$$
\log(1-x) = -\sum_n \frac{x^n}{n}
$$
$$
\frac{1}{1-x} = \sum_n x^n
$$
Notice that term-by-term differentiation causes $e^x$ to become "itself", and that the same action on $\log(1-x)$ produces $-(1-x)^{-1}$.
Not sure if it helps with intuition though.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
}
|
Show basis for a topology A subbasis $\mathcal R$ of a set $X$ is defined to be a collection of subsets of $X$ whose union equals $X$. Show that the collection $\mathcal B$ of all finite intersections of elements of $\mathcal R$ is a topological basis of $X$.
Attempt :
Let $X$ be a set and let $\mathcal B = \{B_\alpha\}_{\alpha \in I}$ be a collection of subsets of $X$. $\mathcal B$ is called a topological basis of $X$ if it satisfies:
*
*For any $x \in X$, there is an element $B_\alpha \in \mathcal B$ such that $x \in B_\alpha$.
*For any $x \in X$, if there exists $B_1, B_2 \in \mathcal B$ such that $x\in (B_1\cap B_2)$ then there is $B_3 \in \mathcal B$ with $x \in B_3 \subset (B_1 \cap B_2)$
How do I show that the collection $\mathcal B$ of all finite intersections of elements of $\mathcal R$ satisfies the two conditions for a basis for a topology?
|
$\mathcal R$ satisfies condition $1$ because its union is $X$ (and $\mathcal R \subseteq \mathcal B$, since an intersection of a single element counts as a finite intersection). Closure under finite intersections gives condition $2$: choose $B_3 = B_1 \cap B_2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why do all elementary functions have an elementary derivative? Considering many elementary functions have an antiderivative which is not elementary, why does this type of thing not also happen in differential calculus?
|
The short answer is that we have differentiation rules for all the elementary functions, and we have differentiation rules for every way we can combine elementary functions (addition, multiplication, composition), where the derivative of a combination of two functions may be expressed using the functions, their derivatives and the different forms of combination.
Integration, on the other hand, neither has a direct rule for multiplication of two functions nor for composition of two functions. We can integrate the corresponding rules for differentiation and get something that looks like it (integration by parts and substitution), but it only works if you're lucky with what elementary functions are combined in what way.
You might say that there is a hope that there are rules out there, that we just haven't found them yet. This is not true; it's been proven that there are always integrals of elementary functions that are not elementary themselves (under most reasonable definitions of "elementary functions"). It's a deep result known as Liouville's theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52",
"answer_count": 6,
"answer_id": 3
}
|
Notation for juxtaposition operation on matrices Does there exist a fairly standard notation for horizontal and vertical juxtaposition operations on matrices? (Vertical juxatposition can be called "stacking.")
For example, juxtaposing horizontally a matrix of the size $m\times n_1$ with a matrix of the size $m\times n_2$, one obtains a matrix of the size $m\times(n_1 + n_2)$. Stacking vertically a matrix $m_1\times n$ with a matrix $m_2\times n$, one obtains a matrix $(m_1 + m_2)\times n$.
I've seen the notation
$$
A = [a_1|a_2|\dotsb|a_n]
$$
for a matrix $A$ with columns $a_1,a_2,\dotsc,a_n$. It looks a bit ad hoc, unless we define $|$ as the horizontal juxtaposition operation that can be applied to any pair of matrices with the same number of rows. A notation for vertical juxtaposition is also needed.
Such notation would be quite useful for writing matrices defined by blocks or decomposing matrices into blocks (into rows or columns in particular).
|
Just draw a matrix of matrices:
$$M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2194893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Possibilities for the first four positions at the World Cup $2014$
$32$ nations participated in the World Cup $2014$.
How many possibilities were there for the order of the first four positions?
In general,
$$32 \choose 4$$
gives us the possibilities to choose $4$ nations out of $32$ nations. This doesn't include the possible positions of the $4$ nations though. For $4$ nations, there are $4!$ possibilities to rank them, so overall, we have
$${32 \choose 4} 4!$$
possibilites for the order of the first four positions.
What confuses me is that I get the same value by applying the formula
$${n! \over (n - k)!}$$
for $k \le n$.
But in our lecture notes, an example for this formula was the question: "How many possibilities are there to distribute $k$ students on $n$ places?". But in this case, wouldn't $k$ and $n$ be switched like "How many possibilites are there to distribute $32$ nations on $4$ positions?", which means that we would have $k > n$. What am I missing here?
|
I believe your reasoning is correct for the number of possibilities for the first four positions.
The distinction you make between switching $k$ and $n$ seems arbitrary. We could equivalently see the problem as distributing 4 positions among 32 nations.
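Both counting formulas agree, which can be confirmed directly (my own check, using the standard library):

```python
import math

n, k = 32, 4

# "Choose 4 of 32, then order them" equals n!/(n-k)!.
assert math.comb(n, k) * math.factorial(k) == math.factorial(n) // math.factorial(n - k)

# Python even has this permutation count built in.
print(math.perm(n, k))   # 32 * 31 * 30 * 29 = 863040
```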
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Connected categories and connected limits Let $C$ be a category together with an equivalence relation $∼$ on the objects by $x ∼ y$ whenever there is a morphism $f: x \to y$.
*
*What does $(Ob(C)/∼) \cong 1$ mean? What kind of isomorphism is this?
*What does it mean for a category to have all small connected limits?
|
First of all, note that this relation is in general not symmetric, so we have to take the equivalence relation generated by this relation. That is, $x\sim y$ if and only if there is a zigzag of morphisms $x\to x_1\leftarrow x_2\to \dots \leftarrow x_n \to y$ in $C$. Therefore, we can think of an equivalence class as a connected component of $C$; it's a bunch of objects which can be joined by a zigzag of morphisms. Consequently, if $\mathrm{Ob}(C)/{\sim} \cong 1$, meaning that there is precisely one equivalence class, it makes sense to call $C$ connected: There exists an object of $C$ and each pair of objects can be connected through a zigzag of morphisms.
Now, a connected limit is one where the indexing category is connected and a category has all connected limits if all such connected limits exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=1$, $\lim_{n\to\infty}\frac{a_{2n}}{a_n}=\frac{1}{2}$ then $\lim_{n\to\infty}\frac{a_{3n}}{a_n}=\frac{1}{3}$ Let $\{a_n\}$ be a decreasing sequence and $a_n>0$ for all $n$.
If $\displaystyle\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=1$ and $\displaystyle\lim_{n\to\infty}\frac{a_{2n}}{a_n}=\frac{1}{2}$,
how to prove or disprove that
$\displaystyle\lim_{n\to\infty}\frac{a_{3n}}{a_n}=\frac{1}{3}$ ?
Thank you.
|
This is not an answer, but a longer train of thoughts / conjectures that might lead to a full answer.
It might be an idea to use $\displaystyle\lim_{n\to\infty}\frac{a_{2n}}{a_n}=\frac{1}{2}$ and apply it $r$ times. This gives
$$
\displaystyle\lim_{n\to\infty}\frac{a_{2^rn}}{a_n}=\frac{1}{2^r}
$$
which holds for all $r$. Now take some $m$ and identify $r$ such that $2^r \leq m < 2^{r+1}$. Since $(a_n)$ is decreasing, $a_{2^{r+1}n} \leq a_{mn} \leq a_{2^r n}$, so (assuming the limit exists) one has
$$
\frac{1}{2^{r+1}} \leq \displaystyle\lim_{n\to\infty}\frac{a_{m n}}{a_n} \leq \frac{1}{2^r}
$$
In particular, this holds when $m$ is chosen to be any power $3^p$. This gives a countable infinite set of inequalities of the type above, all of which must hold. They do hold if
$$
\displaystyle\lim_{n\to\infty}\frac{a_{3 n}}{a_n}=\frac{1}{3}
$$
implying
$$
\displaystyle\lim_{n\to\infty}\frac{a_{3^p n}}{a_n}=\frac{1}{3^p}
$$
The conjecture is that there is no other way that they can hold.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 2,
"answer_id": 0
}
|
Reverse mode differentiation vs. forward mode differentiation - where are the benefits? According to Wikipedia forward mode differentiation is preferred when $f: \mathbb{R}^n \mapsto \mathbb{R}^m$, m >> n. I cannot see any computational benefits. Let us take simple example: $f(x,y) = sin(xy)$. We can visualize it
as a graph with four nodes and 3 edges. The top node is $\sin(xy)$, the node one level below is $xy$, and the two initial nodes are $x$ and $y$. The local derivatives along the edges are $\cos(xy)$, $x$, and $y$. For both reverse and forward mode differentiation we have to compute these derivatives. How is reverse mode differentiation computationally superior here?
|
An analogy might help. Let $\bf A$, $\bf B$, and $\bf C$ be matrices with dimensions such that $\bf ABC$ is well defined. There are two obvious ways to compute this product, represented by $(\bf AB)\bf C$ and $\bf A(\bf BC)$. Which of those will require fewer multiplications and additions depends on the dimensions of the matrices. For example, if $\bf C$ has width 1 then the second form will be faster, or at least no slower. It's efficient to multiply by thin or short matrices early.
If $\bf A$, $\bf B$, and $\bf C$ correspond to Jacobians of various steps in the computation graph, then I believe $\bf C$ having width $1$ corresponds to the case when $m=1$.
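The analogy can be made concrete with the usual cost model for dense matrix multiplication: multiplying an $m\times n$ matrix by an $n\times p$ matrix costs about $mnp$ scalar multiplications. The sketch below (my own illustration, not from the answer) shows how dramatically the association order matters when one factor is thin:

```python
# A: m x n, B: n x p, C: p x q
def cost_left(m, n, p, q):   # (A B) C
    return m * n * p + m * p * q

def cost_right(m, n, p, q):  # A (B C)
    return n * p * q + m * n * q

# Thin C (q = 1) corresponds to the scalar-output case:
# multiplying by the thin factor first is far cheaper.
print(cost_left(100, 100, 100, 1))   # 1010000
print(cost_right(100, 100, 100, 1))  # 20000
```

Reverse and forward mode correspond to the two association orders of the Jacobian chain product, which is why the better choice depends on the input and output dimensions.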
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 1
}
|
Calculating a Point's X position on an Ellipse, given pos Y How can I calculate the X coordinate given the Y value of the position? The Y position is 10 units.
We know the X diameter is 200 and the Y diameter is 150.
|
If you have an ellipse, you can use the standard equation for an ellipse:
$$\left( \frac{x}{a} \right)^2 + \left( \frac{y}{b} \right)^2 = 1,$$
where $a = \frac{200}{2}$ and $b = \frac{150}{2}$. Then you can find the $x$ value simply by substituting your $y = 10$ value and solving the above equation algebraically for $x$.
Note that when you solve for $x$, you will have to take a square root, which means you should have a $\pm$ in your answer (since there are two x-values for which $y = 10$).
You can read more information about ellipses here:
https://en.wikipedia.org/wiki/Ellipse#Equations
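Carrying out the substitution numerically, with $a=100$ and $b=75$ (half of the 200 and 150 "diameters" in the question):

```python
import math

a, b, y = 100.0, 75.0, 10.0

# Solve (x/a)^2 + (y/b)^2 = 1 for x; the negative root is the other solution.
x = a * math.sqrt(1 - (y / b) ** 2)
print(x)   # roughly 99.11

# The point (x, y) should lie on the ellipse.
assert abs((x / a) ** 2 + (y / b) ** 2 - 1) < 1e-12
```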
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Evaluating the limit: $\lim _{x\to \infty }\left(2^x\sin\left(\frac{b}{2^x}\right)\right)$ I need to find the following limit :
$$\lim_{x\to \infty}\left(2^x\cdot \sin\left(\frac{b}{2^x}\right)\right)$$
I have tried it but I keep getting stuck, so any help would be helpful!
Thank you!
|
By substituting $u = 2^x$, this is $$\lim_{u \to \infty} u \sin \left(\frac{b}{u} \right)$$
You can do this by L'Hôpital: $$\lim_{u \to \infty} \frac{\sin \left(\frac{b}{u}\right)}{1/u}$$
which takes us to $$\lim_{u \to \infty} b \cos \left( \frac{b}{u} \right)$$
which I'm sure you can finish.
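The finished limit is $b\cos(0) = b$, which a quick numerical check confirms (my own addition, with an arbitrary $b$):

```python
import math

b = 3.7

# 2**x * sin(b / 2**x) should approach b as x grows.
for x in (5, 15, 30):
    print(x, 2**x * math.sin(b / 2**x))

assert abs(2**40 * math.sin(b / 2**40) - b) < 1e-9
```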
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is a fiber in algebra the same as a fiber in topology I am reading Dummit & Foote, and they describe a fiber in algebra as property of a homomorphism. So if $\phi$ is a homomorphism from a group $G$ to a group $H$, then the fibers of $\phi$ are the sets of elements in $G$ that map to a single element in $H$.
Now I know that there is a notion of fibers in topology as well. I was just wondering if the definition of a fiber in topology is related to the definition of a fiber in algebra? Or is this just a case of inconvenient use of the same name?
|
Fibre is one of those catch-all words that abounds throughout mathematics.
The general setup is you have some collection of 'objects' such as groups/rings/topological spaces/manifolds and a class of 'nice' maps between them $-$ group-homomorphisms/ring-homomorphisms/continuous maps/differentiable maps.
In all cases the fibres of the nice map $f \colon X \to Y$ are the subsets $f^{-1}(y)$ for $y \in Y$. We call $f^{-1}(y)$ the fibre over $y$. But this assumes we are talking about a specific map. If we change the map we change what the fibres are. Observe the fibres form a partition of $X$.
You may have heard of something called a fibre bundle. Loosely that means a pair of manifolds $X$ and $Y$ and a map $f \colon X \to Y$ such that all fibres are homeomorphic subsets of $X$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Is this limit valid/defined and if so, what is the value of it? I was wondering if the following limit is even defined/valid; if it makes any sense. If so, what is the value of it? If not, why is it not defined/valid?
Define $n$:
$$ab=n$$ for $ a\rightarrow \infty$ and $ b\rightarrow 0$
Sure, this might be a weird question due to its perhaps philosophical nature. Please keep in mind that I am not yet that good at maths, so I would greatly appreciate an "understandable" answer.
Edit: I have read some of your answers. Does the following clarification change anything?
Let $a,b\in$ R
There is no "relation" between the two variables, one cannot be expressed using the other (such as $a=1/b$)
To rephrase the question: If two variables approach infinity and zero respectively, what would their product be (if it can be determined)?
|
There are many limits we run across that are of the form $0 \cdot \infty$. This is known as an indeterminate form and needs closer study, which requires a clearer definition than you have supplied. Each of $a$ and $b$ normally comes with a formula. Often those depend on a parameter that is common to them. For example, we might have $\lim_{c \to \infty} c^2(\frac 1c)$. In that case $a=c^2 \to \infty$ and $b=\frac 1c \to 0$ The product goes to infinity. Alternately, you could have $\lim_{c \to \infty} c(\frac 1{c^2})$ with the opposite behavior.
With the update, you cannot sensibly define a limit of the product. The limit would depend on how fast $a$ goes to $\infty$ compared to how fast $b$ goes to zero. As you have not specified how the limits are taken, the product is only defined if $a$ and $b$ have separate limits that do not result in an inderminate form.
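The two examples in the answer can be tabulated to make the point visible: both products have the $\infty \cdot 0$ shape, yet one grows without bound and the other vanishes.

```python
# a -> infinity, b -> 0, but the product depends on the rates.
for c in (10.0, 1e3, 1e6):
    grows   = (c ** 2) * (1 / c)      # = c, tends to infinity
    shrinks = c * (1 / c ** 2)        # = 1/c, tends to 0
    print(c, grows, shrinks)
```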
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
If $A$ is an empty set, how should I understand $\forall x\in A$? It might look quite stupid, but I had become little confused when understanding empty functions. Anyway, my question is,
If there is a statement $P(x)$ starting with "for all $x\in A$, ..." and $A$ is an empty set, should I understand this as: because the assumption is false, the conclusion is vacuously true? If not, how should I understand it?
Well, the place where I got stuck was this:
For every set X, there exists a unique empty function $f : \emptyset \rightarrow X$. To prove this I should set two empty functions $f_1, f_2$, and show that $\forall x\in \emptyset$, $f_1(x)=f_2(x)$. When thinking as I stated above, since the assumption is false, the conclusion is true. But instead if we think about a statement $\forall x\in \emptyset$, $f_1(x)\neq f_2(x)$, this may be also true....(?)
|
Imagine this:
Every time I have played the lottery, I have won the jackpot!
Why is this true? Am I the luckiest person on the planet? No. I just have never played the lottery.
That is: the set of all times T that I played the lottery is empty ... which is exactly why the claim $\forall t \in T: Jackpot!(t)$ is true.
And yes, unfortunately it is also true that $\forall t \in T: \neg Jackpot!(t)$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2195916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $f'(0)$ does not exist for the given function $f(x)$ If we want to show this we must show that $f$ is not differentiable at $x=0$.
The function is defined as follows:
$$f(x)=
\begin{cases}
x\sin{\frac{1}{x}}, & \text{if $x\ne0$} \\
0, &\text{if $x=0$}
\end{cases}
$$ which is a piecewise function.
I say $f'(0)$ is not defined because $\lim_{x \to 0} f(x) = 1$ but $f(0) = 0$ is not the same value, so $f'(0)$ DNE and $f$ is not differentiable at $0$, but my professor says this is completely bad!
I say $\lim_{x \to 0} x\sin(\frac{1}{x}) = \lim_{x \to 0} \dfrac{\sin(\frac{1}{x})}{\frac{1}{x}} = 1$. But I think this is not right and I don't know how to show this does not exist!
|
Short answer:
$$\lim_{h\to0}\frac{h\sin\dfrac1h-0}h=\lim_{h\to0}\sin\dfrac1h=\lim_{t\to\infty}\sin t$$
doesn't exist (because for any $L$, $\max(|\sin t-L|)\ge 1$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Writing in Cartesian form and converting to polar form Question: Write $\frac{u+v}{w}$ in the form $re^{i\theta}$ when $u$=$1$, $v$=$\sqrt3i$, and $w$=$1+i$I added $u$ and $v$ for the Cartesian form because it is easier to do.After adding $u$ and $v$, I get $\frac{1+\sqrt3i}{1+i}$. Need help converting to polar form please
|
Hint: Write $u+v$ and $w$ in polar form and then divide...
you will get something like
$\frac{r_1 e^{i \theta_1}}{r_2 e^{i \theta_2}} \\\ $
Write this as $\frac{r_1}{r_2} e^{i(\theta_1-\theta_2)}$
$\frac{2 (\frac{1}{2} +\frac{\sqrt{3}}{2}i)}{\frac{2}{\sqrt{2}}(\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}i)}=\frac{2 e^{i \frac{\pi}{3}}}{\frac{2}{\sqrt{2}}e^{i \frac{\pi}{4}}}=\sqrt{2} e^{i(\frac{\pi}{3}-\frac{\pi}{4})}$
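The arithmetic can be verified with Python's complex-number support (my own check): the quotient should have modulus $\sqrt2$ and argument $\pi/3 - \pi/4 = \pi/12$.

```python
import cmath

u, v, w = 1, 3 ** 0.5 * 1j, 1 + 1j
z = (u + v) / w

r, theta = abs(z), cmath.phase(z)
print(r, theta)   # ~1.41421, ~0.26180 (= pi/12)

assert abs(r - 2 ** 0.5) < 1e-12
assert abs(theta - cmath.pi / 12) < 1e-12
```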
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
If $a$ and $b$ are relatively prime integers then prove that $(a, b^2) = 1$ From the title: I've been stuck on this question for half an hour. Could anyone help me?
|
Consider the prime factors of $a$ and those of $b$. The fact that $a$ and $b$ are relatively prime means that $a$ and $b$ have no common prime factors. But then $a$ and $b^2$ have no common prime factors either; that is, $a$ and $b^2$ are relatively prime. QED.
(We have used the observation that $b^2$ has the same prime factors as $b$, with each prime factor in $b^2$ repeated twice as many times as in $b$.)
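A brute-force check of the statement over a small range (my own addition):

```python
from math import gcd

# If gcd(a, b) = 1 then gcd(a, b**2) = 1,
# since b**2 has the same prime factors as b.
for a in range(1, 50):
    for b in range(1, 50):
        if gcd(a, b) == 1:
            assert gcd(a, b * b) == 1
print("verified for all a, b < 50")
```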
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
When would a number $p$ be divisible by $p-k$, where $p$ and $k$ are positive integers? When would a number $p$ be divisible by $p-k$, where $p$ and $k$ are positive integers?
Suppose we then set a constant value for $k$, then what would be condition satisfying which, $p-k$ would be a factor of $p$.
|
For the first question : when $k= p - d$ such that $d | p$ , for example : $p=15$ and $d= \{1,3,5,15\}$ so $k = \{14,12,10,0\}$ and we can exclude $0$ to make $k$ positive.
For the second question : let $d|k$ then $p=\frac{k(d+1)}{d}$ for example : $k=18$ then $d = \{1,2,3,6,9,18\}$ so $p = \{36,27,24,21,20,19\}$.
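Both parametrizations can be reproduced by direct enumeration, matching the examples in the answer:

```python
# First question: p is divisible by p - k exactly when k = p - d
# for some positive divisor d of p (excluding k = 0).
p = 15
ks = sorted(p - d for d in range(1, p + 1) if p % d == 0 and p - d > 0)
print(ks)   # [10, 12, 14]
assert all(p % (p - k) == 0 for k in ks)

# Second question: for fixed k, the valid p are k + k/d for each divisor d of k.
k = 18
ps = sorted(k + k // d for d in range(1, k + 1) if k % d == 0)
print(ps)   # [19, 20, 21, 24, 27, 36]
assert all(p % (p - k) == 0 for p in ps)
```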
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Showing compactness of operator We consider $X=l^p(\mathbb{Z},\mathbb{C})$ for $1\leq p\leq\infty$ and define $T((x_n)_{n\in\mathbb{Z}})= (a_n x_n)_{n\in\mathbb{Z}} :=(\frac{1}{n^2+1}x_n)_{n\in\mathbb{Z}}$. Let $a_n^N = a_n$ if $n\leq N$ and $a_n^N = 0$ else and $T_N(x_n) = (a_n^N x_n)$. Then $rk (T_N)\leq N$, thus $T_N$ is compact and $T_N \rightarrow T$ in $L(X)$. So: T is compact by a lemma which exactly states the limit situation as above.
Does everything goes through in my proof?
|
Your idea is good ! But you should explain in detail why $T_N \to T$ in $L(X)$.
Hence give a proof for $||T_N -T|| \to 0$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Matrix Equation - Make X the Subject I'm having a complete mind blank here even though I'm pretty sure the solution is relatively easy.
I need to make X the subject of the following equation:
$$AB - AX = X $$
All I've done so far is:
$$A(B-X) = X$$
$$B-X = A^{-1} X$$
Not sure if thats right?
Thanks in advance.
|
What you have written is correct, provided the inverse of $A$ exists.
Hint for another way of writing an expression for $X$: $AB = (I+A)X.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Writing a probability in terms of the edges of a simplex I describe the problem dimension $2$, but it could be generalized to $n$ dimensions.
So we have $X_{1}, X_{2}, X_{3}$ three $iid$ random variables of continuous law $F(x,y)$ on $\Bbb R^2$. Let's denote by $S[X_{1}, X_{2}, X_{3}]$ the simplex generated by those random variables. Given a point $(x,y)$, is it possible to write $P((x,y)\in S[X_{1}, X_{2}, X_{3}])$ as a function $g$ of $F(x,y)$ ? For example, in dimension $1$, $P(x\in [X_{1}, X_{2}]) = 2F(x)(1-F(x))$.
Thank you
|
This is based on the barycentric coordinates. Applying Cramer's rule or using suitable software, one obtains
\begin{aligned}
a_1 &= \frac{x_2 y_3 - y_2 x_3 + x_3 y - x y_3 - x_2 y + y_2 x}{-x_2 y_1 + x_2 y_3 - x_1 y_3 + y_1 x_3 + y_2 x_1 - y_2 x_3} \, , \\
\\
a_2 &= \frac{x_1 y - x_1 y_3 - x_3 y + y_1 x_3 + x y_3 - y_1 x}{-x_2 y_1 + x_2 y_3 - x_1 y_3 + y_1 x_3 + y_2 x_1 - y_2 x_3} \, , \\
\\
a_3 &= \frac{-x_2 y_1 - x_1 y + y_1 x + x_2 y + y_2 x_1 - y_2 x}{-x_2 y_1 + x_2 y_3 -x_1 y_3 + y_1 x_3 + y_2 x_1 - y_2 x_3} \, ,
\end{aligned}
where $X_i=(x_i,y_i)$ and $X=(x,y)$. It does not look like there is a simple rule to find the law of $a_i$, but the denominators in $\lbrace a_1, a_2, a_3\rbrace$ are all the same, and the numerators are very similar. One may view the problem with a different point of view. Indeed, one can compute the edge lines of the triangle:
\begin{aligned}
E_{12}: & \;\left(y - y_1\right) \left(x_2 - x_1\right) = \left(y_2 - y_1\right) \left(x - x_1\right) \, , \\
E_{13}: & \;\left(y - y_1\right) \left(x_3 - x_1\right) = \left(y_3 - y_1\right) \left(x - x_1\right) \, , \\
E_{23}: & \;\left(y- y_2\right) \left(x_3 - x_2\right) = \left(y_3 - y_2\right) \left(x - x_2\right) \, . \\
\end{aligned}
The point $X=(x,y)$ belongs to the triangle if $X$ belongs to the same half-plane as $X_3$ with respect to $E_{12}$:
\begin{aligned}
\text{sign}\left(\left(y - y_1\right) \left(x_2 - x_1\right) - \left(y_2 - y_1\right) \left(x - x_1\right)\right) = \text{sign}\left(\left(y_3 - y_1\right) \left(x_2 - x_1\right) - \left(y_2 - y_1\right) \left(x_3 - x_1\right)\right) ,
\end{aligned}
etc. The latter equation is not very different from $a_3\geq 0$.
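The half-plane test in the last displayed equation is easy to turn into code. Below is a minimal sketch (the function names `same_side` and `in_triangle` are my own, not from any library), implementing exactly that sign comparison:

```python
def same_side(p, a, b, c):
    # sign of (y - y_a)(x_b - x_a) - (y_b - y_a)(x - x_a) for p and for c;
    # equal signs mean p and c lie in the same half-plane w.r.t. the line ab
    s_p = (p[1] - a[1]) * (b[0] - a[0]) - (b[1] - a[1]) * (p[0] - a[0])
    s_c = (c[1] - a[1]) * (b[0] - a[0]) - (b[1] - a[1]) * (c[0] - a[0])
    return s_p * s_c >= 0

def in_triangle(p, x1, x2, x3):
    # p lies in the (closed) triangle iff, for each edge, p is on the
    # same side as the opposite vertex
    return (same_side(p, x1, x2, x3) and
            same_side(p, x1, x3, x2) and
            same_side(p, x2, x3, x1))
```

With this test one can at least estimate $P((x,y)\in S[X_1,X_2,X_3])$ by Monte Carlo for any concrete law $F$, even when no closed form in terms of $F$ is available.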
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
are the partial derivatives must be continuous in the chain rule? Let $g(t)=(x(t),y(t))$ and suppose that $g'(t)$ exist in $t=t_0$. By the chain rule, if the partial derivatives of a function $f(x,y)$ are continuous in an open neighborhood of $g(t_0)=P$, then
$$
(f\circ g)'(t_0)=f_x(P)x'(t_0)+f_y(P)y'(t_0).
$$
Are the partial derivatives $f_x$ and $f_y$ must be continuous in $P$ or there exist a version of the chain rule with weaker conditions?
|
Let $A\subset\mathbb{R}^{n},\ B\subset\mathbb{R}^{m}$ open sets and let $f:A\to\mathbb{R}^{m},\ g:B\to\mathbb{R^{l}}$ be functions such that:
*
*$f$ is differentiable at $a\in A$
*$g$ is differentiable at $f(a)$
*$f(A)\subseteq B$
Then, $g\circ f$ is differentiable at $a$ and $(g\circ f)'(a)=g'(f(a))f'(a)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the last two digits of $47^{89}$ Find the last two digits of the number $47^{89}$
I applied the concept of cyclicity
$47\cdot 47^{88}$
I basically divided the power by 4 and then calculated $7^4=2401$ and multiplied it by 47, which gave me the answer $47$, but the actual answer is $67$. How?
|
Your argument would work fine for finding the last two digits of $7^{89}$. But just because $7^4 = 2\,401$ ends in $01$ doesn't mean that $47^4$ will. (In fact, $47^4 = 4\,879\,681$.)
It takes a lot longer for the last two digits of $47^n$ to cycle. You'll at least be able to do calculations with smaller numbers if you think about powers of $47$ separately mod $25$ and mod $4$, then apply the Chinese remainder theorem.
Another approach is to try to compute the last two digits of $47^{89}$ directly by using as few multiplications as possible. For example, if you get the last two digits of $47^{11}$, you can square them three times to get the last two digits of $47^{88}$, and multiply by $47$ again to get to $47^{89}$. (There might be faster ways, too; this is just the first I thought of.)
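The repeated-squaring idea can be sketched as follows (in Python, the built-in `pow(47, 89, 100)` does the same thing in one call; the function name here is my own):

```python
def last_two_digits(base, exp):
    # square-and-multiply modulo 100: process the exponent bit by bit,
    # reducing mod 100 after every multiplication
    result, b = 1, base % 100
    while exp:
        if exp & 1:
            result = result * b % 100
        b = b * b % 100
        exp >>= 1
    return result
```

This uses only about $\log_2 89 \approx 7$ squarings instead of 88 multiplications.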
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2196910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Prove $f(c) = c$ under a condition.
Suppose $f(x)$ is continuous on $[0,1]$ and $f(0) = 1, f(1) = 0$. Prove that there is a point $c$ in $(0, 1)$ such that $f(c) = c$.
let $l(x) = x$ and $d(x) = f(x) - l(x)$.
We have $d(1) = f(1) - l(1) = -1$ and
$d(0) = 1$.
We divide $[-1, 1]$ into $H$ equal parts, where $H$ is an infinite hyperinteger.
$$-1, -1 + \delta, -1 + 2\delta, \ ... \ , -1 + H\delta = 1$$
Let $-1 + K\delta$ be the last partition point such that $d^*(-1 + K\delta) < 0$
$$\therefore d^*(-1 + K\delta) < 0 < d^*(-1 + (K+1)\delta)$$
$$\therefore f^*(-1 + K\delta) < -1 + K\delta < f^*(-1 + (K+1)\delta) - \delta$$
Since $f^*(-1 + K\delta) \approx f^*(-1 + (K+1)\delta) - \delta$, therefore $f^*(-1 + K\delta) \approx -1 + K\delta$
Let $c = st(-1 + K\delta)$
By taking standard part of the $f^*(-1 + K\delta) \approx -1 + K\delta$ , we get $f(c) = c$.
*
*I think this is probably correct but to be on the safe side please check my proof.
|
$\DeclareMathOperator{\st}{st}$This is indeed the correct and usual approach to proving the intermediate value theorem (and hence this question) under nonstandard analysis.
The fundamental idea is if $f\colon[a,b]\to\mathbb{R}$ is a continuous function and $u\in\mathbb{R}$ satisfies $f(a)<u<f(b)$ (or $f(b)<u<f(a)$) then there is a $c\in (a,b)$ such that $f(c)=u$. Why? $[a,b]$ has a finite subset $F$ containing its standard elements and therefore defining $x=\min\{x'\in F:u\leq f(x')\}$ gives $c=\st(x)$ provided $f$ was continuous. This is slightly more general than the idea you used where you provided a particular set $F$ (which in fact gives a simpler proof), but the idea is the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Show that PTIME and PSPACE are closed under Kleene star How can one show that PSPACE and PTIME are closed under Kleene star? I can only show that NP is closed, but that is easy because we can use non-determinism to guess a partition of the word. In these two cases I don't have an idea how to attack it.
Edit
Using @sdcvvc's hint.
It is obvious that once we build this graph the task is solved, because in linear time we can check whether there is a path from position $1$ to position $|w|+1$, where $w$ is the input word.
So, how do we build this graph in polynomial time and space?
The construction is polynomial: consider position $i+1$, assuming the graph has been built for positions $1,2,\ldots,i$. To extend it to $i+1$ we must (in the worst case) run the Turing machine for $L$ on the words $w[1,i+1], w[2,i+1],\ldots,w[i,i+1]$. This takes at most $(i+1)\cdot f(n)$ steps, where $f(n)$ is a polynomial, so the whole construction uses polynomial time and space.
|
Hint: Consider a graph $G$ where vertices are positions in the word, and there is an edge $i \to j$ if the subword $w[i..j-1]$ is in $L$. Show that this graph can be computed in $PTIME$ (or $PSPACE$). Can you make a connection between this graph and Kleene star $L^{\ast}$?
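The resulting algorithm can be sketched as a simple dynamic program, assuming membership in $L$ is given as an oracle `in_L` (a hypothetical callback, standing in for the polynomial-time decider for $L$; the names here are my own):

```python
def in_kleene_star(w, in_L):
    # reach[j] is True when the prefix w[:j] factors into words of L;
    # this is exactly reachability from vertex 0 to vertex len(w)
    # in the graph described in the hint
    n = len(w)
    reach = [False] * (n + 1)
    reach[0] = True  # the empty word is always in L*
    for j in range(1, n + 1):
        reach[j] = any(reach[i] and in_L(w[i:j]) for i in range(j))
    return reach[n]
```

The loop makes $O(n^2)$ oracle calls, so if the oracle runs in polynomial time (or space), so does the whole membership test for $L^{\ast}$.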
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A monoid where square of all elements are 1 is abelian The following problem gives me a very hard time:
Let $M$ be a monoid with $a^2 = 1$ for $a \in M$. Show that $M$ is abelian.
It looks so simple as a monoid only needs to be associative and must have a neutral element (here $1$). So there are not much things to try. However, after some hours of trying I have to admit that I don't know what to try next.
Maybe somebody of you can give me a hint.
Kind regards!
|
$1=abab$ (since $(ab)^2=1$), so $b = abab\cdot b = aba$ (using $b^2=1$), hence $ab = a\cdot aba = ba$ (using $a^2=1$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving an equation where the unknown appears in under the exponent and as a constant I encountered a problem trying to find a solution to the following equation (the problem 9.45 from the wonderful textbook on Applied Calculus by Hoffman et.al (2013, p. 719)):
$$ A(t) = \frac{3}{k} (1 - e^{-kt}) $$
According to the exercise, we know that $ A(1) = 2.3 $ implying that
$$ 2.3 = \frac{3}{k} (1 - e^{-k}) $$
Here I don't know what to do. I played with the equation back and forth trying to multiply and divide both sides of it with all sorts of things but it did not bring me much. Of course, I can multiply both sides by $ k $
$$ 2.3 k = 3 (1 - e^{-k}) $$
And it looks like $k = 0$, which cannot be correct, as it makes no sense in the context of the problem (it is about the concentration of a drug in a patient's bloodstream, so there has to be a positive real solution).
Wolfram Alpha suggests that the answer is $ k = 0.557214 $
Could you suggest an analytical solution to the equation?
|
The "analytical" answer is $$k = \frac{30}{23} + W\left(-\frac{30}{23} e^{-30/23}\right) $$
where $W$ is the Lambert W function.
But I doubt that Hoffman et al intended you to solve it this way.
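If an explicit Lambert-W implementation is not at hand, the root can be found numerically; here is a minimal Newton-iteration sketch (function and variable names are my own) for $g(k)=\frac{3}{k}(1-e^{-k})-2.3$:

```python
import math

def solve_k(target=2.3, dose=3.0, k0=1.0, tol=1e-12):
    # Newton iteration on g(k) = (dose/k)(1 - e^{-k}) - target
    k = k0
    for _ in range(100):
        g = dose / k * (1 - math.exp(-k)) - target
        # derivative of (dose/k)(1 - e^{-k}) with respect to k
        dg = -dose / k**2 * (1 - math.exp(-k)) + dose / k * math.exp(-k)
        step = g / dg
        k -= step
        if abs(step) < tol:
            break
    return k
```

Starting from $k_0=1$ this converges to $k\approx 0.557214$, matching Wolfram Alpha's value.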
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why don't we take clopen maps as morphisms of the category of topological spaces? Here, by clopen maps I mean a function mapping an open set into an open set and a closed set into a closed set. We say continuous maps are the morphisms of the category of the topological spaces. But isn't it more natural to consider clopen maps instead of continuous maps? For example, we say group homomorphisms are the morphisms, and a group homomorphism $h:G\rightarrow H$ preserves the group structure of "$G$" in "$H$". On the other hand, in some sense, a continuous map $f:X\rightarrow Y$ preserve the topological structure of "$Y$" in "$X$". The direction is reversed. I am wondering if there is any good explanation for this. Thanks!
|
Because then $f : \mathbb{R} \to \mathbb{R}$, $x \mapsto e^x$ would not be a morphism (its image is not closed). Recall that almost every notion in mathematics is motivated by examples, and this includes the definition of a category and explicit examples of categories. You don't want to just play around with the axioms, you want to study the examples you are actually interested in. And a definition of a morphism of spaces which does not include the exponential function is certainly not natural.
Another approach to topological spaces is provided by Kuratowski spaces. Here, one has a relation $x \prec A$ between points $x$ and subsets $A$ of the underlying set, which is supposed to mean that $x$ lies in the closure of $A$. This relation $\prec$ has to satisfy some axioms. Then, a map $f$ is continuous if and only if it preserves this relation $\prec$:
$$x \prec A \Rightarrow f(x) \prec f(A)$$
You might also want to have a look at frames; here the morphisms are really algebraic homomorphisms as you suggest. Frames resp. locales are a generalization of topological spaces. This is also called "pointless topology".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 0
}
|
Describe and draw the half-strip $R$ under the mapping $f(z) = z^2$ Describe mathematically and draw what happens to the half-strip $R = \{z=x+iy: 0 \leq x \leq 1, y \geq 0 \}$ under the mapping $f(z) = z^2$
I need help with describing and drawing the mapping.
solution:
For, $z = x+iy$ and $w=f(z)=z^2$
$\Rightarrow w=(x+iy)^2 = (x^2-y^2) + 2xyi$.
So $w=u+iv \Rightarrow u=x^2-y^2$ and $v = 2xy$
case i: $u=x^2-y^2 = c_1, c_1 >0$
The graph in the $xy$-plane is a hyperbola cutting the $x$-axis. In the $uv$-plane, $u=c_1$ represents a vertical line. That is, hyperbolas in the $xy$-plane are mapped to vertical lines in the $uv$-plane.
case ii: $v = 2xy = c_2, c_2>0$
$v=c_2$ represents a horizontal line. The hyperbolas in the $xy$-plane are mapped to horizontal lines in the $uv$-plane.
drawing:
|
The strip is the thing that should be drawn on the $x-y$ plane and what it goes to on the $u-v$ plane. It's probably best to consider the boundary first. The first part is the positive $y$ axis with $x=0.$ This goes $iy\to (iy)^2 = -y^2$ so its image is the negative real axis. Next do the segment on the $x$ axis between $0$ and $1.$ This goes from $x\to x^2$ so its image is the same segment.
Last, the segment from $1$ to $1+i\infty.$ This goes $(1+iy)\to (1+iy)^2 = 1-y^2+2iy.$ This is a curve and we can let $t=2y$ so $t\in(0,\infty)$ and we have $v=t$ and $u=1-(t/2)^2.$ This is just the graph $u = 1-(v/2)^2$ for $v>0,$ so looks like a sideways half parabola, with vertex at $(1,0)$ and opening toward the negative real axis.
So the map takes the boundary of the strip to this wedge shape with a straight-line bottom and parabolic top. I'll leave you to decide where the interior goes.
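A quick numerical sanity check of the boundary images, mapping sample points of the two vertical edges under $z\mapsto z^2$:

```python
# image of the right edge {1 + iy : y > 0} under z -> z^2
for y in [0.1 * n for n in range(1, 50)]:
    w = (1 + 1j * y) ** 2
    u, v = w.real, w.imag
    # should trace the half-parabola u = 1 - (v/2)^2 with v > 0
    assert abs(u - (1 - (v / 2) ** 2)) < 1e-9 and v > 0

# the left edge {iy : y >= 0} lands on the negative real axis
for y in [0.5, 1.0, 2.0]:
    w = (1j * y) ** 2
    assert abs(w.imag) < 1e-12 and w.real <= 0
```
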
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that the Gram Matrix G(B) is Positive Definite
Suppose we have $\boldsymbol P_{\leqslant 1}=\operatorname{span}\{1,x\}$ that is an inner product space with respect to $\int^1_0p(x)q(x)dx$.
Consider the basis $B=\left\{b_1 = 1, b_2 = x\right\}$.
Finding the Gram matrix $G(B)$ I would have
$G(B)=\begin{bmatrix}\int^1_01dx & \int^1_0xdx\\ \int^1_0xdx & \int^1_0x^2dx \end{bmatrix} = \begin{bmatrix}1&\frac12\\\frac12&\frac13\end{bmatrix}$
I want to show that $G(B)$ is positive definite.
It is obvious that $G(B)$ is symmetric since $G_{12} = G_{21}$.
Now, to calculate the eigenvalues, I have found that
$\lambda_1 = \frac{4+\sqrt{13}}{6}$
$\lambda_2 = \frac{4-\sqrt{13}}{6}$
Since $4 > \sqrt{13}$, then $\lambda_2 > 0$.
Is this enough to show that this is positive definite?
The reason that I am skeptical is because I had a homework question similar to the one above, but with a longer basis and with $\textbf{P}_{\leq 2}=\operatorname{span}\{1,x,x^2\}$ :
consider the basis $B = \{b_1 = 1, b_2 = x, b_3 = x^2\}$
The Gram matrix is then
$G(B)=\begin{bmatrix}1&\frac12&\frac13\\\frac12&\frac13&\frac14\\\frac13&\frac14&\frac15\end{bmatrix}$
I attempted to show that $G(B)$ is positive definite by first showing that the matrix is symmetric, and that its eigenvalues $\lambda$ are positive. However, it seems to be extremely unfeasible to find the eigenvalues of $G(B)$. Is there another way I could go about and prove this?
|
There are a lot of ways to prove that a matrix is positive definite, but sometimes working from the definition $x^TAx > 0$ if $x$ nonzero is easiest. In this case you'll see that the Gramian being positive-definite is very general, much more so than looking at monomials.
Let $\langle \cdot, \cdot\rangle $ be your inner product $\langle p, q\rangle = \int_0^1p(x)q(x)dx$.
Let $B$ be a basis for your inner product space and $G$ be the Gramian matrix with $G_{ij} = \langle B_i, B_j\rangle$.
Assume $x$ nonzero.
Then
$$x^TGx =\sum_{i,j}x_iG_{ij}x_j =\sum_{i,j}x_i \langle B_i, B_j\rangle x_j$$
Now you can use properties of the inner product. Linearity in the first term allows simplification to $\sum_j \langle \sum_i x_i B_i, B_j \rangle x_j$, and then linearity in the second term allows simplification to $\langle \sum_i x_i B_i, \sum_j x_j B_j \rangle = \langle y, y \rangle$ for some $y$, and $\langle y, y \rangle$ realizes zero iff $y$ is $0$. But if $x$ is nonzero, then $B$ being a basis implies that $y$ is nonzero, in which case $\langle y, y \rangle$ is strictly greater than $0$.
Putting it together you get that $$x \not= 0 \implies x^TGx > 0$$
We didn't need to reference the particular inner product as definite integral anywhere in this proof. So although it's probably good for intuition to see how the Gram matrix is positive definite for this particular case, the most important part is that the Gram matrix inherits its properties straight from the inner product, and in particular if you're dealing with real numbers/functions: the Gram matrix is symmetric because the inner product is symmetric, and the Gram matrix is positive definite because the inner product is bilinear and positive definite.
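For the concrete $3\times 3$ matrix in the question, positive definiteness can also be checked via Sylvester's criterion (all leading principal minors positive), using exact rational arithmetic. A small sketch (helper names are my own; cofactor expansion is fine at this size):

```python
from fractions import Fraction

def leading_minors(M):
    # determinants of the top-left k-by-k blocks, by cofactor expansion
    def det(A):
        if len(A) == 1:
            return A[0][0]
        return sum((-1) ** j * A[0][j] * det([row[:j] + row[j+1:] for row in A[1:]])
                   for j in range(len(A)))
    return [det([row[:k] for row in M[:k]]) for k in range(1, len(M) + 1)]

# the Gram matrix of {1, x, x^2}: entries 1/(i + j + 1)
G = [[Fraction(1, i + j + 1) for j in range(3)] for i in range(3)]
minors = leading_minors(G)
```

The minors come out as $1,\ \tfrac1{12},\ \tfrac1{2160}$, all positive, confirming the abstract argument for this case.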
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2197821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Extend triple of coprime numbers to summation of coprime numbers I apologize if this has already been answered or is common knowledge. If either of these is the case a reference will suffice.
Given $a, b, c$ positive integers which are pairwise relatively prime, do there exist positive integers $x,y,z$ such that
$$a x + b y = cz$$ and $ax, by, cz$ are pairwise relatively prime?
|
Yes. In fact, we can always do this with $x = 1$.
We start by just finding an arbitrary solution to the identity. Choose $y_0$ such that $by_0 \equiv -a \pmod c$; this is possible because $b$ has an inverse modulo $c$. Then we have $a + by_0 = cz_0$, and all is right with the world, except that these might not be relatively prime.
We can generate an infinite family of these solutions of the form $$a + b(y_0 + ck) = c(z_0 + bk)$$ where $k$ can be any natural number. It suffices to choose $k$ to avoid any common factors between the three terms.
(Note that if two of $ax, by, cz$ have a common divisor, the third is divisible by it as well. So it suffices to only check that $\gcd(ax, by) = 1$, which is what we do.)
We know that $\gcd(a, y) = 1$ if $y \equiv 1 \pmod a$. So choose $k$ to solve the equation $ck \equiv 1-y_0 \pmod a$; this is possible because $c$ has an inverse modulo $a$. Then $\gcd(a, y_0 + ck) = 1$, so $\gcd(a, b(y_0+ck)) = 1$: we already knew that $\gcd(a,b)=1$.
So we've found a solution where the first two terms are relatively prime, and therefore all three terms are pairwise relatively prime.
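The construction above is fully effective; here is a minimal sketch (the function name is my own; it uses Python 3.8+'s `pow(b, -1, c)` for the modular inverse):

```python
from math import gcd

def coprime_combination(a, b, c):
    # construction from the answer, with x = 1 throughout
    y0 = (-a) * pow(b, -1, c) % c      # solves b*y0 = -a (mod c)
    k = (1 - y0) * pow(c, -1, a) % a   # solves c*k = 1 - y0 (mod a)
    y = y0 + c * k
    z = (a + b * y) // c
    return 1, y, z
```

For example, $(a,b,c)=(3,5,7)$ yields $(x,y,z)=(1,19,14)$, i.e. $3 + 95 = 98$ with $3, 95, 98$ pairwise relatively prime.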
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that for all $n\ge9$, there exist natural numbers $x,y$ such that $n=2x+5y$. How would you use induction to prove this?
|
If $n=9$ then $n = 2\cdot 2 + 5\cdot 1$; taking $x:= 2$ and $y:=1$ suffices.
If $n \geq 9$ is an integer such that
$n-1= 2x + 5y$ for some integers $x,y > 0$, then
$n = n-1 + 1 = 2x+5y + 1 = 2x' + 5y'$.
Note that $2x+5y+1 = 2x+5y+(5-4) = 2(x-2) + 5(y+1)$.
So the preceding equalities are equivalent to
$$
n = 2x' + 5y' = 2(x-2) + 5(y+1).
$$
Taking $x' := x-2$ and $y' := y+1$ suffices,
which are still integers.
Note that for $n=10$ we have $x' = 2-2 = 0$, which is not a natural number if you don't count $0$ as one such.
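A brute-force check of the statement (with $0$ allowed as a natural number, as the last remark requires):

```python
def representable(n):
    # can n be written as 2x + 5y with integers x, y >= 0?
    return any((n - 5 * y) >= 0 and (n - 5 * y) % 2 == 0
               for y in range(n // 5 + 1))
```

In fact every $n\ge 4$ is representable this way; $1$ and $3$ are the only odd exceptions.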
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
find equation of a curve that represents the sum of f(x) Given a straight line with equation f(x)=2x where x belongs to {0,1,2,3,4,5},
how do I find $g(x)$ (a curve?) which represents the sum over $f(x)$?
In plain english if the price of an object doubles every time I purchase it, what will be the total cost if I purchased it 10 times?
|
Your plain English question makes much more sense than your attempt to turn it into algebra.
You should be able to guess the pattern here:
number purchased total price
1 P
2 P + 2P = 3P
3 3P + 4P = 7P
4 7P + 8P = 15P
...
Hint. Think about nearby powers of $2$ in the last column.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Suppose $p>q$, $X\sim \text{Bernoulli}(p)$, $Y\sim \text{Bernoulli}(q)$. Couple $X$ and $Y$ to maximise $P(X=Y)$. My professor's solution to this is as follows: "Create a 2 x 2 matrix with the first row (corresponding to $X=0$) summing to $P(X=0)=1-p$, the second row summing to $P(X=1)=p$, the first column ($Y=0$) summing to $P(Y=0)=1-q$ and the second column summing to $P(Y=1)=q$. We want to maximize the sum of the diagonal which is $P(X=Y)$. Since $p > q$, the first diagonal entry can be at most $1 − p$; the second can be at
most $q$. If we write these in we can fill out the rest of the table (below) to get the desired coupling."
The bit I'm confused about is why $p>q$ implies the first diagonal entry can be at most $1-p$? Can someone explain this please? If we want to maximise the sum of the main diagonal then isn't $1-q+p$ better than $1-p+q$ because $p>q \implies 1-p+q<1$ but $1-q+p>1$?
|
The first diagonal entry is the probability that $X=Y=0$, and we know that both of the following statements are true:
*
*Since $\Pr[X=Y=0] \le \Pr[X=0]$, it is at most $1-p$.
*Since $\Pr[X=Y=0] \le \Pr[Y=0]$, it is at most $1-q$.
However, $p>q$, so the first constraint is stronger, and we can forget about the second constraint.
Similarly, the second diagonal entry is the probability that $X=Y=1$, and we know that both of the following statements are true:
*
*Since $\Pr[X=Y=1] \le \Pr[X=1]$, it is at most $p$.
*Since $\Pr[X=Y=1] \le \Pr[Y=1]$, it is at most $q$.
However, $p>q$, so the second constraint is stronger, and we can forget about the first constraint.
This shows that $\Pr[X=Y]$ is at most $1-p+q$. Algebraically, $$\Pr[X=Y] \le \Pr[X=Y=0] + \Pr[X=Y=1] \le (1-p) + q.$$ We need to show that this upper bound is also a lower bound, and that's what the table does.
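The table from the professor's solution can be written down and checked directly; a small sketch with exact arithmetic (the dictionary layout is my own):

```python
from fractions import Fraction

def optimal_coupling(p, q):
    # joint table maximizing P(X = Y) when p > q; entries indexed (x, y)
    assert p > q
    return {(0, 0): 1 - p, (0, 1): Fraction(0),
            (1, 0): p - q, (1, 1): q}

p, q = Fraction(3, 4), Fraction(1, 3)
joint = optimal_coupling(p, q)
```

One checks that the rows sum to the marginals of $X$, the columns to the marginals of $Y$, and the diagonal attains the upper bound $1-p+q$.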
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Order of a permutation, how to calculate I know this is a basic question however I am slightly confused what to do if the permutation contains the same element twice in cycle notation.
For example: the permutation $(1 2 3)(2 4 1)$, how would I calculate the order when $2$ maps to $3$ and $4$? Is it just the same? Does $p=3$
|
First you'll need to express $(123)(241)$ in terms of the product of disjoint cycles.
$(123)$ and $(241)$ are not disjoint cycles, as you note, since both share the elements $1, 2$.
To do so, you start from the right cycle, and compose with the left cycle. So, in the right-hand cycle, we have $1\mapsto 2$ and in the left-hand cycle, $2\mapsto 3$. Now since $3\mapsto 3$ (right-hand) and $3\mapsto 1$ (left-hand), we have the cycle $(13)$.
In the end, you'll find $$\phi = (123)(241)=(13)(24)$$
Second, the order of a single cycle is its length; to find the order of a product of more than one disjoint cycle, as is the case here, take the $\operatorname{lcm}$ of the lengths of the cycles.
So we have the product of two $2$-cycles, and hence the order of $\phi$ is equal to the $\operatorname {lcm}(2, 2) = 2$.
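The same computation can be automated; a minimal sketch (helper names are my own; `sympy.combinatorics.Permutation` would do this too):

```python
from math import gcd

def compose(left, right, n):
    # permutations on {1,...,n}; `right` is applied first, then `left`
    def as_map(cycle):
        m = {i: i for i in range(1, n + 1)}
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            m[a] = b
        return m
    l, r = as_map(left), as_map(right)
    return {i: l[r[i]] for i in range(1, n + 1)}

def order(p):
    # lcm of the lengths of the disjoint cycles of p
    seen, result = set(), 1
    for start in p:
        length, cur = 0, start
        while cur not in seen:
            seen.add(cur)
            cur = p[cur]
            length += 1
        if length:
            result = result * length // gcd(result, length)
    return result

phi = compose((1, 2, 3), (2, 4, 1), 4)
```

Running this confirms $\phi = (13)(24)$ and that it has order $2$.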
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Compact operators satisfying a certain relation must be finite-rank Let $H$ be an infinite-dimensional Hilbert space, equipped with a given Hilbert basis $(e_i)_{i \in \mathbb{N}}$.
Consider the following introductory problem : can we find a compact operator $A$ in $H$ that satisfies the relation
$$ \sum \limits_{k=0}^n c_kA^k=0$$
for some given coefficients $(c_0,\dots,c_n)\in\mathbb{R}^{n+1}$?
Two cases arise :
*
*$c_0 \neq 0$ : Suppose that $A$ is a compact operator satisfying the given relation. We can rewrite it as
$$c_nA^n+...+c_1A=-c_0Id$$
Factoring by $A$ and using the fact that $c_0 \neq 0$, we get that
$$ \underbrace{A}_\text{compact} \circ\Big(\underbrace{-\frac{c_n}{c_0}A^{n-1}-\dots-\frac{c_1}{c_0}Id}_\text{bounded}\Big)=Id$$
Set $B=-\frac{c_n}{c_0}A^{n-1}-\dots-\frac{c_1}{c_0}Id$. The composition $A \circ B$ will be compact, because $A$ is compact and $B$ is a bounded operator. Therefore the $Id$ operator is compact, which is absurd because $H$ is infinite-dimensional. Therefore such an $A$ cannot exist.
*
*$c_0=0$ : In this case we can find a compact operator with relative ease. Consider
$$ V= vect\left\{e_n : n \in \left\{0,...,N\right\}\right\}$$ where the $(e_n)$ are elements of the basis of $H$. It is finite-dimensional and thus closed, so let $A$ be the well-defined projection operator on $V$. We then have that
$$ A \circ A = A \iff A^2-A=0$$
and can thus create a relation of the given form.
Since $dim(V) < +\infty$, $A$ is finite-rank, and thus compact.
Now consider again the case where $c_0=0$. My question is the following :
If $A$ is a compact operator satisfying this type of relation for given $(c_k)_k$, then must $A$ be finite-rank ?
I suspect that the answer is yes (maybe an analogy can be made to the case of matrices that have $0$ as an eigenvalue), but don't really have a good idea of where to start. Decomposing a finite-rank operator in $(e_i)$ might shed some light on this, but apart from that I am not sure how to proceed.
|
Denote $i$ the smallest index such that $c_i\ne0$. Then we can factorize the polynomial
$$
\sum_{k=i}^n c_k t^k = c_i t^i \prod_{k=i+1}^n (t-\lambda_k)
$$
This implies
$$
c_i \left( \prod_{k=i+1}^n (A-\lambda_k I) \right) A^i=0.
$$
Since $A$ is compact, the null spaces of all operators $A-\lambda_k I$
are finite-dimensional, hence the range of $A^i$ is finite-dimensional.
In case $i=1$, this shows that $A$ is finite-rank. If $i>1$ then $A^i$ is finite-rank, which does not imply that $A$ is finite-rank:
Define $A:l^2\to l^2$ by
$$
(Ax)_n =\begin{cases} 0 & \text{if $n$ odd}\\
2^{-n}x_{n-1} & \text{if $n$ even}\\
\end{cases}.
$$
So $A$ is a compact multiplication operator composed with a right shift, hence compact. By construction, $A^2=0$, but $A$ is not finite-rank.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Spivak Chapter 2 Problem 7 Question: Use the method of Problem 6 to show that $\sum_{k=1}^n k^p$ can always be written in the form $\frac{n^{p+1}}{p+1} +An^p + Bn^{p-1} + Cn^{p-2} + Dn^{p-3} + \cdots $
The method in 6 they are talking about is the telescoping method.
I have tried to derive the solution for some while and I somewhat came to a proof (there's so much constant terms I am kind of confused).
The solution from the manual is:
I can get a sense of the proof, but I am not quite sure how exactly the part after --
"Adding for $k=1,...,n$, we obtain" to the end is exactly formed. An explanation would be very helpful.
|
Hint:
If you aren't familiar with the notation, when I write $\mathcal{O}(k^r)$ here, I mean "some polynomial in $k$ of degree at most $r$". Then it should be clear that $\frac{1}{p+1}\mathcal{O}(k^r)$ is again of the form $\mathcal{O}(k^r)$.
In this case, we have $r < p$, so we have
$$(k+1)^{p+1} - k^{p+1} = (p+1)k^p + \mathcal{O}(k^r),$$
so
$$\frac{(k+1)^{p+1} - k^{p+1}}{p+1} = k^p + \frac{\mathcal{O}(k^r)}{p+1} = k^p + \mathcal{O}(k^r),$$ then you can write
$$
\sum_{k=1}^n k^p + \sum_{k=1}^n \mathcal{O}(k^r)= \sum_{k=1}^n \frac{(k+1)^{p+1} - k^{p+1}}{p+1} = \frac{(n+1)^{p+1}}{p+1} - \frac{1}{p+1},
$$
then you can add $\frac{1}{p+1}$ over to the left side and absorb it into $\mathcal{O}(k^r)$ (since $1 = k^0$).
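One can sanity-check the claimed polynomial form for small $p$ with exact arithmetic; for instance $\sum_{k=1}^n k^2 = \frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$ and $\sum_{k=1}^n k^3 = \frac{n^4}{4}+\frac{n^3}{2}+\frac{n^2}{4}$:

```python
from fractions import Fraction

def power_sum(n, p):
    # exact value of 1^p + 2^p + ... + n^p
    return sum(Fraction(k) ** p for k in range(1, n + 1))
```

Checking the identity for many values of $n$ is of course no substitute for the telescoping proof, but it catches sign and coefficient mistakes quickly.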
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the point where the angle of incidence is equal to the angle of reflection Say I have two points, $$A=(-1,1)\\B=(2,1)$$ and a line at $$y=0$$ How do I find the point on the line that makes A and B have the same angle?
|
Since $A$ and $B$ lie at the same height above the line, the two angles at $P=(x,0)$ are equal exactly when $P$ is equidistant from the feet $C=(-1,0)$ and $D=(2,0)$ of the perpendiculars from $A$ and $B$. So it is enough to have
$$PD=PC=\frac{3}{2}\to 2-x=\frac{3}{2}\to x=\frac{1}{2}$$
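A quick numerical check that the two angles with the line $y=0$ agree at $x=\tfrac12$ and at no nearby point (using that both given points have height $1$ above the line):

```python
import math

A, B = (-1.0, 1.0), (2.0, 1.0)

def angles_at(x):
    # angles that PA and PB make with the line y = 0 at P = (x, 0)
    P = (x, 0.0)
    return (math.atan2(A[1] - P[1], abs(A[0] - P[0])),
            math.atan2(B[1] - P[1], abs(B[0] - P[0])))

aA, aB = angles_at(0.5)
```
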
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2198925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help to Understand the Proof of Extreme Value Theorem Below are sceen shots taken from Pugh's Book Real Mathematical Analysis.
My question mainly is from the proof below. How does it follow that $M<M$ from $b=c$? Thanks for your help.
|
*
*Notice that during the first part of Case 2, we prove that the least upper bound of $V_c < M$, which means that the least upper bound of $f$ on $[a,c]$ is strictly less than $M$.
*But recall that at the beginning, we defined $M$ as the least upper bound of $f$ on the entire interval $[a,b]$. Using the same notation, this means $M$ is the least upper bound of $V_b$.
*So, if $b = c$, then $V_b = V_c$, so the least upper bound of $V_b < M$ but also the least upper bound of $V_b = M$ by definition of $M$, so $M < M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
does a rectangular matrix have an inverse? I know all square matrices have easily to identify inverses, but does that continue on with rectangular matrices?
|
Actually, not all square matrices have inverses. Only the invertible ones do. For example, $\begin{bmatrix} 1 & 2 \\ 3 & 6 \end{bmatrix}$ does not have an inverse.
And no, non-square matrices do not have inverses in the traditional sense.
There is the concept of a generalized inverse. To very briefly summarize the link, an $n \times m$ matrix $A$ has an $m \times n$ generalized inverse, denoted $A^g$, if $A^g$ satisfies $A A^g A = A$.
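For a matrix with full column rank, one concrete generalized inverse is the left pseudoinverse $A^g=(A^TA)^{-1}A^T$. Here is a small exact-arithmetic sketch (helper names and the example matrix are my own):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# a 3x2 matrix with full column rank
A = [[Fraction(x) for x in row] for row in [[1, 0], [0, 1], [1, 1]]]
# left pseudoinverse: Ag = (A^T A)^{-1} A^T
Ag = matmul(inv2(matmul(transpose(A), A)), transpose(A))
```

One can verify both the defining property $AA^gA=A$ and, since $A$ has full column rank, $A^gA=I$.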
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
Find all the positive integers a, b, and c such for which $\binom{a}{b} \binom{b}{c} = 2\binom{a}{c}$ I tried an the following equivalence from a different users post from a while back
that stated the following.
$\binom{a}{b} \binom{b}{c} = \binom{a}{c} \binom{a-c}{b-c}$
where does this equivalence come from?
After applying this equivalence and trying to hammer it out algebraically
I end up with...
$\frac{(a-c)!}{(b-c)!(a-b)!}=2$
Not any closer than when I didn't use the equivalence.
How can I solve this?
|
You've reduced the equation to finding solutions to
$$\binom{a-c}{b-c} = 2. $$
Since $a,b,c$ are positive integers, and the only binomial coefficient equal to $2$ is $\binom{2}{1}$, the equalities
$$ a - c = 2$$ and $$b-c = 1$$ are forced.
Thus the solution set consists of consecutive triples of positive integers
$$(a,b,c) = (n+2,n+1,n) \qquad (\text{for }n\ge 0).$$
You can substitute this back into your original equation and verify that this solution set is valid.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Algebra Precalculus
$a>0>b$, for all integer values of $a$ and $b$.
$x = (a^2 + ab)-(ab^2-b)/(2a^2+b^2 -ab)$
Quantity I: $x $
Quantity II: $1.5$
(a) Quantity I $\lt$ Quantity II
(b) Quantity I $\gt$ Quantity II
(c) Quantity I $\ge$ Quantity II
(d) Quantity I $=$ Quantity II
(e) No relation
$(x^a)^c = x^c$
$x^{2b}/x^a = (x^{5a}) * (x^d)*(x^b)$
Quantity I = $b$
Quantity II = $d$
(a) Quantity I $\gt$ Quantity II
(b) Quantity I $\lt$ Quantity II
(c) Quantity I $\ge$ Quantity II
(d) Quantity I $=$ Quantity II
(e) No relation
Generally How to solve this sum and how to approach this Question,please guide me the steps with the answer
|
From $(x^a)^c = x^c$ (holding for all $c$), I assume $a = 1$.
Then,
$$x^{2b}/x^a = (x^{5a}) \cdot (x^d)\cdot(x^b) \implies x^{2b}/x = x^{5+d+b} \implies 2b - 1 = 5+d+b \implies b = d+6$$
Therefore $b > d$, so option (a) is correct.
$x= (a^2 + ab)-(ab^2-b)/(2a^2+b^2 -ab)$
Since $b <0$ let $-c = b, c \in \mathbb{R^+}$
$$x= (a^2 - ac)-{(ac^2+c)\over(2a^2+c^2 +ac)}$$
$$x= {(a^2 - ac)(2a^2+c^2 +ac)-(ac^2+c)\over(2a^2+c^2 +ac)} = {2a^4 - a^3c-c^3a-ac^2-c\over(2a^2+c^2 +ac)}$$
The value of $x$ depends on $a$ and $c$. For $a=1,\ c=1$ the numerator is $2-1-1-1-1=-2$ and the denominator is $4$, so $x = -\frac12 < 1.5$. For $a=10,\ c=1$ the numerator is $20000-1000-10-10-1=18979$ and the denominator is $211$, so $x = \frac{18979}{211} \approx 89.9 > 1.5$.
Since $x$ can land on either side of $1.5$, option (e) is correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Boundedness of $a+\frac 1a$ when iterated Here's something I was wondering...
Is $$a + \frac 1a$$ for any positive real number $a$ bounded when iterated?
For example, if we start at $a=1$, continuing gives us $a= 1+ \frac 11=2$, then $a=2+\frac 12=2.5$, and so on. A quick program shows that it seems to grow without bound, but how would one prove this mathematically? If it is possible, that is... Any hints would be appreciated.
|
The function
$$
f(x)=x+\frac1x
$$
is strictly increasing for $x\ge1$ (i.e. if $x>y$, then $f(x)>f(y)$). Also, we have that
$$
f(x)=x+\frac1x>x
$$
for $x\ge1$. Hence the iterates $a_{n+1}=f(a_n)$ form a strictly increasing sequence. If they were bounded above, they would converge to some limit $M\ge1$, and by continuity $M=f(M)=M+\frac1M>M$, a contradiction. So the iterates grow to infinity.
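The growth can also be seen numerically. Squaring the recurrence gives $a_{n+1}^2 = a_n^2 + 2 + 1/a_n^2$, so $a_n \approx \sqrt{2n}$; a minimal Python sketch:

```python
def iterate(a, steps):
    """Apply f(x) = x + 1/x `steps` times, starting from a."""
    for _ in range(steps):
        a = a + 1.0 / a
    return a

# a_1 = 1; after 999 steps we reach a_1000, roughly sqrt(2 * 1000) ~ 44.7.
print(iterate(1.0, 999))
```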
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
finding moment generating function I am having a bit of trouble finding the moment generating function for
$f(x)=(\frac{1}{2})^{x+1}$ for $x=0,1,2,3,\ldots$
I know that $M(x)=\sum e^{tx}(\frac{1}{2})^{x+1}$ which I have rearranged to make $\frac{1}{2} \sum (\frac{1}{2}e^t)^x$ but I am not sure how to simplify this further.
|
Indeed, the moment generating function is defined as
$$E\left[e^{tX}\right]=\sum_{i=0}^{\infty}e^{ti}P(X=i)=\sum_{i=0}^{\infty}e^{ti}\frac1{2^{i+1}}=\frac 12+\frac12\sum_{i=1}^{\infty}\left(\frac {e^t}2\right)^i.$$
This is a geometric series with $q=\frac {e^t}2.$ The series converges when $0< q<1$, i.e. for $t<\ln 2.$ The sum of the series is then
$$E\left[e^{tX}\right]=\frac12\left[1+\frac{e^t}{2-e^t}\right]=\frac{1}{2-e^t}.$$
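The closed form $\frac{1}{2-e^t}$ can be sanity-checked against a truncated version of the defining sum (the cutoff of 200 terms is an arbitrary choice for illustration):

```python
import math

def mgf_closed(t):
    # E[e^{tX}] = 1/(2 - e^t), valid for t < ln 2
    return 1.0 / (2.0 - math.exp(t))

def mgf_truncated(t, terms=200):
    # direct evaluation of sum over x of e^{tx} * (1/2)^(x+1)
    return sum(math.exp(t * x) * 0.5 ** (x + 1) for x in range(terms))

print(mgf_closed(0.5), mgf_truncated(0.5))  # the two agree closely
```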
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Boxplot Skewness I do know there are some rules about boxes and whiskers to determine the skewness in a boxplot, but I am confused with some rules in this particular case:
Keeping in mind the rules: in this boxplot the median falls to the right of the center of the box, so by that rule its distribution is negatively skewed. But I can also see that the right whisker is longer than the left one, so "according to the rules" the distribution is positively skewed. How do I determine the real skewness? Thanks in advance.
|
You are correct that 'indications' of right-skewness of a sample from a boxplot
may be that (a) the median is left of center inside the box and (b) a longer
whisker to the right than to the left. However, boxplots are best used for
samples of moderate or large size.
Of course, I don't know for sure, but
I would guess that the contradictory indications in the boxplot you show
are likely because the sample size is small. (You might use some mathematical
measure of skewness, as the Comment by @scitamehtam (+1) suggests, but as mentioned
there, various measures of skewness can give different results, and this is
also especially likely to happen with small samples.)
Below are boxplots of 20 samples of size $n = 15$ from a normal population.
The normal distribution is symmetrical, so you might suppose the boxplots
would not show skewness. But there are all sorts of indications of skewness:
medians not in the centers of boxes, and whiskers of noticeably different lengths.
By contrast, here are boxplots of 20 samples of size $n = 1000$ from the same normal population. These boxplots do not show such conflicting results about
skewness; most of them are consistent with data from a symmetrical distribution.
(Don't worry about the 'outliers': They are to be expected in boxplots of large normal samples
because the 'tails' of the normal distribution extend to $\pm \infty.$)
Finally, here are boxplots of 20 samples of size $n = 1000$ from a (severely right-skewed) exponential distribution. All of them have the indications
of skewness you mention. (They also have lots of outliers on the high side,
and none on the low side; another indication of data from a skewed population.)
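The conflict the question describes is easy to reproduce on a tiny invented sample (the numbers below are made up purely for illustration): the median sits to the right of the box center, suggesting left skew, while the right whisker is far longer, suggesting right skew.

```python
import statistics

data = [1, 5, 6.5, 7, 30]  # invented 5-point sample
q1, med, q3 = statistics.quantiles(data, n=4, method='inclusive')

box_center = (q1 + q3) / 2
left_whisker = q1 - min(data)
right_whisker = max(data) - q3

print(med > box_center)              # median right of center: left-skew signal
print(right_whisker > left_whisker)  # longer right whisker: right-skew signal
```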
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that $f(x,y)=\frac{xy^2}{x^2+y^4}$ is bounded
Let $f\colon\mathbb R^2\to\mathbb R$ be a function given by:
$$
f(x,y)=\begin{cases}\frac{xy^2}{x^2+y^4}&\text{if }(x,y)\neq(0,0),\\
0&\text{if }(x,y)=(0,0).
\end{cases}
$$
I need to show that $f$ is bounded on $\mathbb R^2$, so I need to show that there exists $M>0:\vert f(x,y)\vert\leq M$ for all $(x,y)\in\mathbb R^2$.
We can rewrite $\begin{align}f(x,y)=\frac{x/y^2}{1+(x/y^2)^2}=\frac{z}{1+z^2}\end{align}$, where $z=x/y^2$ (and $y\neq0$).
So for $z$ large enough, our expression will go to 0. Now we only need to worry about the case where $z$ approaches 0. We rewrite again: $\begin{align}f(x,y)=\frac{1}{1/z+z}\end{align}$, so $\lim_{z\to0}\frac{1}{1/z+z}=0$. Whatever $(x,y)$ do, if their values get small enough, $f(x,y)$ gets arbitrarily close to zero, and if their values get big enough, $f(x,y)$ also comes arbitrarily close to zero.
However, this can't be the whole story, because $f(cy^2,y)=\frac{c}{1+c^2}$, so the function isn't even continuous at the origin to begin with. I'm stuck; can someone help me with this?
|
*From $(z+1)^2 \geq 0$ we get $\frac{z}{z^2+1} \geq -\frac{1}{2}$.
*From $(z-1)^2 \geq 0$ we get $\frac{z}{z^2+1} \leq \frac{1}{2}$.
Therefore $\left|f\right| \leq \frac{1}{2}$, so $f$ is bounded.
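A quick grid scan corroborates the bound $|f|\le\frac12$, with equality on the curve $x=y^2$ (e.g. at $(1,1)$); a small sketch:

```python
def f(x, y):
    # f(0,0) is defined to be 0; elsewhere use the formula.
    if x == 0 and y == 0:
        return 0.0
    return x * y * y / (x * x + y ** 4)

# Scan a grid of points; |f| should never exceed 1/2.
vals = [abs(f(i / 10.0, j / 10.0)) for i in range(-50, 51) for j in range(-50, 51)]
print(max(vals))  # 0.5, attained e.g. at (x, y) = (1, 1)
```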
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2199970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Calculate area of Ellipse without calculus? I like the way integration works, but the final formula $\pi ab$ is too simple.
I know there is a deeper way to derive it; I just don't want to use calculus here, with all its equations.
I'd like to use simple math that offers deeper insight into it.
|
Consider the unit disk (bounded by the circle of radius $1$ centered at the origin). To construct an ellipse whose semi-axes are $a$ along the $x$-axis and $b$ along the $y$-axis, apply the linear transformation
$$
\begin{bmatrix}a&0\\0&b\end{bmatrix}.
$$
We can confirm that this is an ellipse because if your original coordinates are $x_1$ and $x_2$ while your new coordinates are $y_1$ and $y_2$, we have $y_1=ax_1$ and $y_2=bx_2$. Therefore, $y_1$ and $y_2$ satisfy:
$$
\frac{y_1^2}{a^2}+\frac{y_2^2}{b^2}=1.
$$
Since linear transformations scale areas by the determinant (and the original disk has area $\pi$), the resulting area is $ab\pi$.
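The area-scaling claim can be illustrated with a quick Monte Carlo estimate (the values $a=3$, $b=2$ and the sample size are arbitrary choices for this sketch):

```python
import math
import random

def ellipse_area_mc(a, b, n=200_000, seed=0):
    # Sample the bounding box [-a, a] x [-b, b]; the hit fraction of
    # (x/a)^2 + (y/b)^2 <= 1 times the box area 4ab estimates the ellipse area.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.uniform(-a, a), rng.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1:
            hits += 1
    return 4 * a * b * hits / n

print(ellipse_area_mc(3, 2), math.pi * 3 * 2)  # estimate vs pi*a*b
```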
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Chern class of $E \times F \to M \times N$ Let $M$ and $N$ be two complex manifolds and $E \to M$, $F \to N$ be two holomorphic vector bundles.
1) Is there a way to define vector bundles $G \to M \times N$ such that a fiber over $(x,y) \in M \times N$ is, eg, $E_x \oplus F_y$ or $E_x \otimes F_y$ ?
2) How are the Chern classes of such bundles related to the Chern classes of $E$ and $F$?
|
If $p_1:M\times N\to M$ and $p_2:M\times N\to N$ are the projections, then you can pull back to get bundles $p_1^*E\to M\times N$ and $p_2^*F\to M\times N$. Then we can define the "box product" $E\boxtimes F\to M\times N$ by
$$E\boxtimes F=p_1^*E\otimes p_2^*F.$$
Now, if $(x,y)\in M\times N$ then since $(p_1^*E)_{(x,y)}=E_x$ and similarly $(p_2^*F)_{(x,y)}=F_y$, we see that the fiber of $E\boxtimes F$ over $(x,y)$ is equal to $E_x\otimes F_y$.
You can do the same thing for the direct sum, as someone else has already suggested; I don't know if $E\boxplus F$ is standard notation for this, but it makes sense to me.
How does this relate to Chern classes? Well, by definition $p_1^*(c(E))=c(p_1^*E)$, similarly for $F$, and for the sum you can calculate
$$c(E\boxplus F)=c(p_1^*E\oplus p_2^*F)=c(p_1^*E)\smile c(p_2^*F)=p_1^*c(E)\smile p_2^*c(F)=c(E)\times c(F)$$
where the latter is the "cross product" or the "external cup product". Finding a similar formula for $E\boxtimes F$ will be more difficult, as I don't think there's a nice formula for $c(E\otimes F)$ in general unless the two are both line bundles.
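For completeness: when both factors are line bundles, the first Chern class is additive under tensor product, so there is a clean formula in that special case (a standard fact, sketched here):

```latex
% For line bundles L -> M and L' -> N, using the additivity
% c_1(A \otimes B) = c_1(A) + c_1(B) for line bundles A, B:
c_1(L \boxtimes L') = c_1(p_1^* L \otimes p_2^* L')
                    = p_1^* c_1(L) + p_2^* c_1(L').
```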
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that $S(x) = \int_{h(x)}^{g(x)}f(t)dt \implies S'(x) = f(g(x))g'(x)-f(h(x))h'(x)$ let $g(x), h(x)$ be differentiable functions on $\mathbb{R}$ and let $f(x)$ be a continuous function on $\mathbb{R}$.
$S(x) = \int_{h(x)}^{g(x)}f(t)dt$.
How can I prove:
$S'(x) = f(g(x))g'(x)-f(h(x))h'(x)$.
|
Not very rigorous, but useful to help to remember the rule:
Let $F$ be a primitive (antiderivative) of $f$, so that $F'(x)=f(x)$.
$S(x)=\int_{h(x)}^{g(x)}f(t)\,dt=[F(t)]_{h(x)}^{g(x)}=F(g(x))-F(h(x))$
$S'(x)=F'(g(x))g'(x)-F'(h(x))h'(x)=f(g(x))g'(x)-f(h(x))h'(x)$
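A numerical spot-check of the rule, choosing $f=\cos$, $g(x)=x^2$, $h(x)=\sin x$ (so $S(x)=\sin(x^2)-\sin(\sin x)$ in closed form):

```python
import math

def S(x):
    # an antiderivative of cos is sin, so S(x) = sin(g(x)) - sin(h(x))
    return math.sin(x * x) - math.sin(math.sin(x))

def S_prime(x):
    # f(g(x)) g'(x) - f(h(x)) h'(x), with g'(x) = 2x and h'(x) = cos x
    return math.cos(x * x) * 2 * x - math.cos(math.sin(x)) * math.cos(x)

x, eps = 1.3, 1e-6
numeric = (S(x + eps) - S(x - eps)) / (2 * eps)  # central difference
print(numeric, S_prime(x))  # should agree to high accuracy
```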
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
An open covering of $\mathbb{Q} \cap [0,1]$ that does not contain any finite subcovering Consider the topological subspace $\mathbb{Q} \cap [0,1]$ endowed with the usual topology of $[0,1]$. Since $[0,1]$ is Hausdorff and $\mathbb{Q} \cap [0,1]$ is not closed in it, we conclude that $\mathbb{Q} \cap [0,1]$ is not compact, i.e., there exists an open covering $\{O_i\}_{i \in I}$ of $\mathbb{Q} \cap [0,1]$ such that for any finite index set $J \subseteq I$, $\displaystyle \bigcup_{j \in J} O_j \subset \mathbb{Q} \cap [0,1]$, where the inclusion is strict.
My question is whether there is an explicit example of such an open covering.
|
Pick your favorite irrational number $\xi\in (0,1)$, and consider the open cover
$$\Big\{\Big[0,\xi-\frac{1}{n}\Big)\cup\Big(\xi+\frac{1}{n},1\Big]\Big\}_{n=n_0}^{\infty}$$
where $n_0$ is chosen large enough that $\xi-\frac{1}{n_0}>0$ and $\xi+\frac{1}{n_0}<1$. Since these sets increase with $n$, any finite subcollection is contained in its largest member, which misses the rationals within some distance $\frac1n$ of $\xi$; hence no finite subcover exists.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
If the limit of $f(z_n)$ exists, does the limit of $z_n$ exist? Let $D \subset \mathbb{C}$ be a compact (hence closed) subset of the complex plane.
Let $f:D \to \mathbb{C}$ be a continuous function.
Let $\{z_n\}_{n \in \mathbb{N}} \subset D$ be a sequence in $D$.
Is it true that
$$
\lim_{n \to \infty} f(z_n)=L \Longrightarrow \lim_{n \to \infty} z_n = z_0 \in D
$$
Thanks
|
No. Take $f$ a constant function and $\{z_n\}$ any non-convergent sequence in $D$, for instance one alternating between two distinct points.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Differential equation in $\mathcal{S}'$, Fourier method I have to solve this equation with the Fourier method: $y'-iy=1+\delta'(x)$
Fourier transform is defined like this:
$F[\varphi](k)=\int\limits_{-\infty}^{\infty}\varphi(x)e^{ikx}dx$
$F^{-1}[\varphi](x)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}\varphi(k)e^{-ikx}dk$, where $\varphi \in \mathcal{S}$.
Applying Fourier method:
$F[y]\cdot(k-1) = k - 2\pi i\delta(k)$
The solution is:
$F[y]=A\delta(k-1)+\frac{k}{k-1+i0}+2\pi i\delta(k)$
Now I want to get $y(x)$ applying inverse transform:
$y(x)=\frac{A}{2\pi}e^{-ix}+i+\delta(x)-ie^{-ix}\theta(x)$
So, I know the correct answer
$y(x)=\frac{A}{2\pi}e^{ix}+i+\delta(x)+ie^{ix}\theta(x)$
but I do not see where I've made a mistake.
|
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Notation: $\ds{\mrm{f}\pars{x} =
\int_{-\infty}^{\infty}\hat{\mrm{f}}\pars{k}\expo{\ic kx}
\,{\dd k \over 2\pi}\iff
\,\hat{\mrm{f}}\pars{k} =
\int_{-\infty}^{\infty}\mrm{f}\pars{x}\expo{-\ic kx}\,\dd x}$.
\begin{align}
&\mrm{y}'\pars{x} - \ic\,\mrm{y}\pars{x} = 1 +\delta\,'\pars{x}
\implies
\ic k\,\hat{\mrm{y}}\pars{k} - \ic\,\hat{\mrm{y}}\pars{k} =
2\pi\,\delta\pars{k} + \ic k
\\[5mm]
\implies &
\hat{\mrm{y}}\pars{k} = {k - 2\pi\,\delta\pars{k}\ic \over k - 1} =
1 + {1 \over k - 1} + 2\pi\,\delta\pars{k}\ic
\end{align}
\begin{align}
\mrm{y}_{\pm}\pars{x} & =
\int_{-\infty}^{\infty}\bracks{1 + {1 \over k - 1 \pm \ic 0^{+}} + 2\pi\,\delta\pars{k}\ic}\expo{\ic kx}\,{\dd k \over 2\pi} =
\delta\pars{x} + \ic + \expo{\ic x}\int_{-\infty}^{\infty}
{\expo{\ic kx} \over k \pm \ic 0^{+}}\,{\dd k \over 2\pi}
\\[5mm] & =
\delta\pars{x} + \ic + \expo{\ic x}\bracks{%
\mrm{P.V.}\int_{-\infty}^{\infty}{\expo{\ic kx} \over k}\,{\dd k \over 2\pi} +
\int_{-\infty}^{\infty}\expo{\ic kx}\bracks{\mp\pi\ic\,\delta\pars{k}}
\,{\dd k \over 2\pi}}
\\[5mm] & =
\delta\pars{x} + \ic + \expo{\ic x}\bracks{%
\int_{0}^{\infty}{2\ic\sin\pars{kx} \over k}\,{\dd k \over 2\pi} \mp
{1 \over 2}\,\ic} =
\delta\pars{x} + \ic + \expo{\ic x}\bracks{%
{\ic \over \pi}\,\mrm{sgn}\pars{x}\,{\pi \over 2} \mp {1 \over 2}\,\ic}
\\[5mm] & =
\delta\pars{x} + \ic + \expo{\ic x}
\bracks{{2\Theta\pars{x} - 1 \mp 1}}{\ic \over 2}
\end{align}
$$\bbox[15px,#ffe,border:1px dotted navy]{\ds{%
\left\{\begin{array}{rcl}
\ds{\quad\mrm{y}_{-}\pars{x}} & \ds{=} &
\ds{\delta\pars{x} + \ic + \Theta\pars{x}\expo{\ic x}\ic}
\\[3mm]
\ds{\quad\mrm{y}_{+}\pars{x}} & \ds{=} &
\ds{\delta\pars{x} + \ic - \Theta\pars{-x}\expo{\ic x}\ic}
\end{array}\right.}}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove this sequence takes every rational number Given the sequence $a_1 = 0$ and $a_{n+1} = \dfrac{1}{2 \cdot\lfloor{a_n}\rfloor-a_n+1}$ and $p,q\in \mathbb N$ coprime, find $x$ so that $a_x = \dfrac{p}{q}$. I do not even know where to start with a problem like this.
|
Observation: $a_k<1$ iff $k$ is odd.
Lemma: $a_{2n} = a_n+1$.
Proof: By induction. $a_2 = 1 = 1+a_1$. Now suppose $a_{2(n-1)}=a_{n-1}+1$. Denote $x=2\lfloor a_{n-1}\rfloor-a_{n-1}+1$. Then
$$a_n=\frac 1x,$$
and, since $\lfloor a_{2n-2}\rfloor=\lfloor a_{n-1}\rfloor+1$, the recurrence gives
$$a_{2n-1} = \frac 1{x+1},$$
$$a_{2n} = \frac 1{2\cdot0-\frac1{x+1}+1} = \frac1{\frac{x}{x+1}}=\frac{x+1}{x}=1+\frac 1x = a_n+1.$$
Lemma proved.
Now consider a rational number and the following process applied to it. While the number is greater than or equal to one, subtract one from it; otherwise apply the map $x\to\frac1{1-x}$. Every application of the map decreases the denominator, so we eventually reach the number 0.
We can follow the process backwards and assign elements $a_k$ to it. We start with $0=a_1$. When we add one to the value, we just jump from $a_k$ to $a_{2k}$. In the other case (after an application of $\frac1{1-x}$), we are on an even index $k$. Then $k-1$ is odd, so $a_{k-1}<1$ and the recurrence gives $a_k = \frac1{1-a_{k-1}}$. Since the function $\frac1{1-x}$ is injective, $a_{k-1}$ is the next value in the reversed sequence.
At the end, we reach the original rational number together with its position in the sequence.
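The backward construction can be checked by brute force with exact rational arithmetic; here a few sample rationals are located in the sequence (the bound of 2000 terms is a generous arbitrary cutoff):

```python
from fractions import Fraction
from math import floor

def sequence(n):
    # a_1 = 0, a_{k+1} = 1 / (2*floor(a_k) - a_k + 1), in exact arithmetic
    a = Fraction(0)
    seq = [a]
    for _ in range(n - 1):
        a = Fraction(1) / (2 * floor(a) - a + 1)
        seq.append(a)
    return seq

seq = sequence(2000)
for target in (Fraction(3, 5), Fraction(7, 2), Fraction(1, 6)):
    print(target, seq.index(target) + 1)  # 1-based index x with a_x = p/q
```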
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Notation for partial derivatives I thought that the meaning of
$$
\frac{\partial f(x, y, z)}{\partial x}
$$
is differentiation on $x$ with fixed $y$ and $z$. So $(x, y, z)$ in the numerator is just saying which variables are fixed. If I need to indicate where the derivative is evaluated, I write it in the right of a vertical bar as a subscript. But today my teacher used $(x, y, z)$ in the numerator to denote where the derivative is evaluated. So, for example,
$$
\frac{\partial f(0, 0, 0)}{\partial x}
$$
means
$$
\frac{\partial f(x, y, z)}{\partial x} \bigg\rvert_{x=0,y=0,z=0}
$$
Is that a standard convention? If so, what is the meaning of
this?
$$
\frac{\partial f(x, y, g(x, y))}{\partial x}
$$
I have two candidates. One is a partial derivative of the composition of $f$ and $g$ where $g$ has some fixed value, and the other is the partial derivative of $f$ on $x$ evaluated at $(x, y, g(x, y))$. I think the two are not the same.
|
*Yes, $\frac{\partial f(x,y,z)}{\partial x}$ is the derivative w.r.t. $x$ at fixed $y,z$.
*$\frac{\partial f(0,0,0)}{\partial x}$ is not standard notation. Strictly speaking, it should be zero, because $f(0,0,0)$ is a constant which does not depend on $x$. Sometimes, yes, it is used as shorthand for $\frac{\partial f(x,y,z)}{\partial x}|_{x=y=z=0}$. But you should only do that if it is extremely clear from the context what you mean. Generally, avoid this notation.
*The third expression is a derivative w.r.t. $x$, but $x$ appears twice in the numerator. Using the standard chain rule you can compute
$$\frac{\partial f(x,y,g(x,y))}{\partial x} = \frac{\partial f(x,y,z)}{\partial x}\bigg\rvert_{z=g(x,y)} + \frac{\partial f(x,y,z)}{\partial z}\bigg\rvert_{z=g(x,y)} \cdot \frac{\partial g(x,y)}{\partial x}$$
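The formula in the third item can be verified numerically for a concrete (illustrative) choice, say $f(x,y,z)=xz+\sin y$ and $g(x,y)=x^2$:

```python
import math

def f(x, y, z):
    return x * z + math.sin(y)

def g(x, y):
    return x * x

def total_dx(x, y, eps=1e-6):
    # central difference of x -> f(x, y, g(x, y))
    return (f(x + eps, y, g(x + eps, y)) - f(x - eps, y, g(x - eps, y))) / (2 * eps)

def chain_rule(x, y):
    # df/dx = z and df/dz = x (both evaluated at z = g(x,y)), dg/dx = 2x
    z = g(x, y)
    return z + x * (2 * x)

print(total_dx(1.7, 0.4), chain_rule(1.7, 0.4))  # both equal 3x^2 here
```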
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2200982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 6,
"answer_id": 2
}
|
How to find the maximum of the value $\sum_{i=1}^{6}x_{i}x_{i+1}x_{i+2}x_{i+3}$? Let
$$x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\ge 0$$ such that
$$x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}=1$$
Find the maximum value of
$$\sum_{i=1}^{6}x_{i}\;x_{i+1}\;x_{i+2}\;x_{i+3}$$
where
$$x_{7}=x_{1},\quad x_{8}=x_{2},\quad x_{9}=x_{3}\,.$$
|
For $x_i=\frac{1}{6}$ we get $\frac{1}{216}$.
We'll prove that it's a maximal value.
Indeed, let $x_1=\min\{x_i\}$, $x_2=x_1+a$, $x_3=x_1+b$, $x_4=x_1+c$, $x_5=x_1+d$ and $x_6=x_1+e$.
Hence, $a$, $b$, $c$, $d$ and $e$ are non-negatives and we need to prove that:
$$216\sum_{i=1}^6x_ix_{i+1}x_{i+2}x_{i+3}\leq\left(\sum_{i=1}^6x_i\right)^4$$ or
$$216(a^2+b^2+c^2+d^2+e^2-ab-bc-cd-de)x_1^2+$$
$$24((a+b+c+d+e)^3-9(2abc+abd+abe+acd+ade+2bcd+bce+bde+2cde))x_1+$$
$$+(a+b+c+d+e)^4-216(abcd+bcde)\geq0,$$
which is true because
$$a^2+b^2+c^2+d^2+e^2-ab-bc-cd-de\geq$$
$$\geq a^2+b^2+c^2+d^2+e^2-ab-bc-cd-de-ea=\frac{1}{2}\sum_{cyc}(a-b)^2\geq0,$$
$$216(abcd+bcde)=216bcd(a+e)\leq216\left(\frac{a+b+c+d+e}{4}\right)^4=$$
$$=\frac{216}{256}(a+b+c+d+e)^4\leq(a+b+c+d+e)^4$$ and
$$(a+b+c+d+e)^3\geq9(2abc+abd+abe+acd+ade+2bcd+bce+bde+2cde),$$
but my proof of this statement is very ugly.
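A random search over the simplex is consistent with the claimed maximum (a sanity check only, not a proof; the sample count is arbitrary):

```python
import random

def cyclic_sum(x):
    # sum_{i=1}^{6} x_i x_{i+1} x_{i+2} x_{i+3}, indices cyclic mod 6
    return sum(x[i] * x[(i + 1) % 6] * x[(i + 2) % 6] * x[(i + 3) % 6]
               for i in range(6))

rng = random.Random(1)
best = 0.0
for _ in range(100_000):
    raw = [rng.random() for _ in range(6)]
    s = sum(raw)
    best = max(best, cyclic_sum([v / s for v in raw]))  # normalize to the simplex

uniform = cyclic_sum([1 / 6] * 6)
print(uniform, best)  # uniform gives 1/216; random points never beat it
```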
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2201085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
What's the fastest way to determine all the subgroups of the additive group $\mathbb{Z}_{24}$ Question is as in title.
I know that all of the subgroups of $\mathbb{Z}_{24}$ (under addition) must be cyclic, and I could find them by computing the cyclic subgroup generated by each element of $\mathbb{Z}_{24}$ - but surely there is a quicker way?
Would appreciate any help,
Jack
|
What is important has already been said in comments and other answers: for each $d \mid n$, there is a single subgroup of order $d$ in $\Bbb Z_n$, and it is isomorphic to $\Bbb Z_d$. It can be explicitly described as
$$\left\{ \widehat {\frac {kn} d} \Bigg| 0 \le k \le d-1 \right\} .$$
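The description turns directly into a short enumeration (sketch for $n=24$): for each divisor $d$ of $n$, the unique order-$d$ subgroup is generated by $n/d$.

```python
def subgroups(n):
    # one subgroup per divisor d of n, generated by n // d
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return {d: sorted((n // d) * k % n for k in range(d)) for d in divisors}

subs = subgroups(24)
for d, elems in sorted(subs.items()):
    print(d, elems)
```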
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2201185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|