| Q (string, 18–13.7k chars) | A (string, 1–16.1k chars) | meta (dict) |
|---|---|---|
Ask for different ways to solve $(x+y)dy+(y+1)dx=0$
Rewrite the equation to
$\frac{dx}{dy}+\frac{x}{y+1}=\frac{-y}{y+1}$
I used the integrating factor $\mu(y)=e^{\int\frac{1}{y+1}dy}=y+1$ to solve it.
The answer is $x=\frac{-y^2}{2(y+1)}+\frac{C}{y+1}$
I'm curious whether there are any other methods to solve this ODE. I am asking for a different method because sometimes I am not able (or don't have enough time; this one took me about half an hour) to rewrite the equation into a proper form. So, if I knew other methods for this kind of ODE, I would have a better chance of solving it in an exam.
|
See that we can write the equation as
$$(x+y)dy+(y+1)dx=0$$
$$xdy+ydx+ydy+dx=0$$
$$d(xy)+ydy+dx=0$$ now just integrate.
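Integrating term by term gives $xy+\frac{y^2}{2}+x=C$, i.e. $x(y+1)=C-\frac{y^2}{2}$, which agrees with the integrating-factor answer. A quick finite-difference check in Python (my addition, not part of the original answer) that this $x(y)$ solves the rewritten ODE:

```python
# Check that x(y) = (C - y^2/2)/(y + 1) solves dx/dy + x/(y+1) = -y/(y+1),
# approximating dx/dy with a central finite difference.
def x_of_y(y, C=3.0):
    return (C - y * y / 2.0) / (y + 1.0)

def residual(y, C=3.0, h=1e-6):
    dxdy = (x_of_y(y + h, C) - x_of_y(y - h, C)) / (2.0 * h)
    return dxdy + x_of_y(y, C) / (y + 1.0) + y / (y + 1.0)

max_residual = max(abs(residual(y)) for y in (0.5, 1.0, 2.0, 5.0))
```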
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to show $1 + x + \frac{x^2}{2!} + \dots+ \frac{x^{2n}}{(2n)!}$ is positive for $x\in\Bbb{R}$?
I realize that it's a partial sum of the Taylor series expansion of $e^x$, but I can't proceed with this knowledge. Also, I can't figure out the significance of $2n$ being the highest power.
|
If we use Taylor's theorem with an integral remainder we get:
$$ 1+x+\frac{x^2}{2!}+\ldots+\frac{x^{2n}}{(2n)!} = e^x-\frac{1}{(2n)!}\int_{0}^{x}t^{2n}e^{x-t}\,dt \tag{1}$$
hence in order to prove the non-negativity of the LHS it is enough to show that:
$$ \int_{0}^{x} t^{2n}e^{-t}\,dt \leq (2n)! \tag{2}$$
holds for any $x\in\mathbb{R}$. $(2)$ is trivial if $x\leq 0$, and if we assume $x>0$, since the integrand function is non-negative, we get:
$$ \int_{0}^{x}t^{2n}e^{-t}\,dt \leq \int_{0}^{+\infty}t^{2n}e^{-t}\,dt = \Gamma(2n+1)=(2n)!,\tag{3}$$
proving our claim.
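As an added numerical spot check (not part of the argument), the even-degree partial sums stay positive on a grid of sample points:

```python
import math

# p_{2n}(x) = sum_{k=0}^{2n} x^k / k!, the even-degree Taylor partial sum of e^x.
def partial_sum(x, n):
    return sum(x ** k / math.factorial(k) for k in range(2 * n + 1))

# Minimum over a grid of x in [-30, 30] and n in {1, ..., 5}.
min_value = min(partial_sum(x / 10.0, n)
                for x in range(-300, 301) for n in range(1, 6))
```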
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Proving there is no negative real number greater than all negative rational numbers How would I use the Archimedean property to prove that there is no negative real number greater than all negative rational numbers?
|
Let $x$ be any negative real number. Then $-x$ is positive, and by the Archimedean property there is a natural number $n$ such that $0<\frac 1n<-x$.
Then $-\frac 1n$ is a negative rational number, and
$$x<-\frac 1n<0$$
Thus $x$ is not greater than all negative rational numbers. Since $x$ was arbitrary, there is no negative real number greater than all negative rational numbers.
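The proof is effective: given a negative $x$, the choice $n=\lfloor -1/x\rfloor+1$ works. A small illustration (my addition):

```python
import math

# Given a negative real x, exhibit a natural number n with x < -1/n < 0.
def witness(x):
    assert x < 0
    return math.floor(-1.0 / x) + 1

samples = (-0.5, -0.001, -7.3)
all_between = all(x < -1.0 / witness(x) < 0 for x in samples)
```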
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Odds of rolling a $6$ in three chances I'm working on a dice game to play with the kids I work with. I'm trying to recall how to calculate probability, but it's been a while. It's for a tabletop DND-type game, and they are rolling for the properties of their character.
What are the odds of rolling a $6$ with a D6 given $3$ chances? They are trying to get a $6$ and they are given $3$ rolls to do so.
I want to say its $50\%$ ($\frac16 + \frac16 + \frac16 = \frac36$) but it's been a while since I studied this. Am I on the right track?
Thanks for your time.
|
You can most easily calculate this with the complementary probability: what is the probability of rolling no $6$?
$$P(\text{no } 6) = \frac 56 \frac 56 \frac 56=\frac{125}{216}$$
Then the probability of rolling at least one $6$ (which I think is the outcome you're interested in) is
$$P(\text{ at least one 6}) = 1 -\left(\frac 56\right)^{3} = 1- \frac{125}{216} = \frac{91}{216} \approx 42\%. $$
Note that your calculation is incorrect: it implies you would be certain to roll a $6$ in six rolls, which you know from playing your favorite board game is not the case.
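As a sanity check (my addition), enumerating all $6^3=216$ equally likely outcomes reproduces $\frac{91}{216}$:

```python
from itertools import product
from fractions import Fraction

# All 6^3 = 216 equally likely ordered outcomes of three rolls.
outcomes = list(product(range(1, 7), repeat=3))
hits = sum(1 for rolls in outcomes if 6 in rolls)
p_at_least_one_six = Fraction(hits, len(outcomes))
```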
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Does the Dirac belt trick work in higher dimensions? If the Dirac belt is in 4-space, is it still true that when the belt is initially given a 360 degree twist then it cannot be untwisted?
I assume this is so because SO(n) is not simply connected, but I am still a bit fuzzy on how this all works. (For instance, even in 3-space, if the path in SO(3) is contractible, how do we know that contraction is physically realizable without self-intersection in a belt of finite width?)
|
One can show the following: Given a noncompact connected surface $S$, any two smooth embeddings $f_0, f_1: S\to R^4$ are isotopic. (In particular, there are no twisted bands in $R^4$.) Proofs are a bit involved, but the key fact is that for a generic smooth homotopy $F: S\times [0,1]\to R^4$ between $f_0, f_1$, for each $t$ the map $F(\cdot, t): S\to R^4$ has discrete self-intersections, i.e., each point has discrete preimage (for generic $t$ the self-intersections are transversal). Since $S$ is noncompact, these self-intersections can be "pushed away to infinity" in $S$ by modifying $F$.
For a "Dirac belt" one can give a more direct argument. (Mathematically speaking, a Dirac belt in $R^n$ can be interpreted as a smooth embedding $h: S^1\to R^n$ together with a vector field $X$ along $h$, normal to $h$, i.e., for each $s\in S^1$, the vectors $X(s), h'(s)$ are linearly independent. The corresponding map of the annulus $S^1\times [0,1]\to R^n$ is obtained from $(h,X)$ by the formula $f(s, t)= h(s) + ctX(s)$, where $c$ is a sufficiently small positive number.) The proof in this setting ($n=4$) goes roughly as follows. First isotope $h$ (and regularly homotope $X$) to make it the identity embedding of a round circle $C$ into $R^4$. The vector field $X$ is deformed to a smooth vector field orthogonal to $C$ and, hence, can be identified with a smooth nowhere-zero function $C\to R^3$. But any two smooth nowhere-zero maps $C\to R^3$ are smoothly homotopic to each other through maps of the same nature, since $R^3 - 0$ is simply connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\inf{U(f,P)}\geq 1/2$
Define $f(x)=\begin{cases} x\;\;\;\text{if the point $x\in[0,1]$ is rational}\\ 0\;\;\;\text{if the point $x\in[0,1]$ is irrational} \end{cases}$
Prove that $\inf{U(f,P)}\geq 1/2$.
Let $P=\{x_0,x_1,\cdots,x_n\}$ be the partition of $[0,1]$ with $x_i=\frac{i}{n}$ for $0\leq i\leq n$, where $n\in\mathbb{N}$. Since the rationals are dense, $\sup_{[x_{i-1},x_i]}f = x_i$, and each sub-interval $[x_{i-1},x_i]$ of $[0,1]$ has length $\Delta x_i=\frac{1}{n}$.
$$U(f,P)=\sum_{i=1}^{n}x_i(x_{i}-x_{i-1})=\sum_{i=1}^{n}\frac{i}{n}\cdot\frac{1}{n} =\frac{1}{n^2}\sum_{i=1}^{n}i=\frac{1}{n^2}\cdot\frac{n(n+1)}{2}=\frac{1}{2}+\frac{1}{2n}$$ Thus, $U(f,P)\to 1/2$ as $n$ approaches infinity.
For this question, I chose a particular partition for which I can evaluate the upper sum, and it tends to $1/2$. I don't think that is enough, since the question asks about all partitions. Can someone give me a hint or suggestion for an argument showing that $U(f,P)\geq 1/2$ holds for every partition $P$? Thanks
|
Let $\mathcal P_0$ be a partition of $[0,1]$. Refine this partition if necessary to a partition $\mathcal P$ containing $n+1$ points so that $x_{i}-x_{i-1}=1/n.\ $Note that $x_0=0$, $x_{n}=1$ and in general $x_i=i/n$.
We have $U(\mathcal P_0)\geq U(\mathcal P)\geq\frac{1}{n}\sum_{i=1}^{n}f(x_i^{*})$ for any choice of sample points $x_i^{*}$.
Now let $\epsilon >0$ and choose, using density of the rationals, $x_i^{*}\in [x_{i-1},x_{i}]$ so that $f(x_i^{*})\geq x_i-\epsilon$.
Then
$$U(\mathcal P_0)\geq U(\mathcal P)\geq \frac{1}{n}\sum_{i=1}^{n}(x_i-\epsilon)=\frac{1}{n}\sum_{i=1}^{n}x_i-\frac{1}{n}\sum_{i=1}^{n}\epsilon\geq \frac{1}{n}\sum_{i=1}^{n}\frac{i}{n}-\epsilon =\frac{n(n+1)}{2}\frac{1}{n^{2}}-\epsilon > \frac{1}{2}-\epsilon $$
and the result follows.
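For the uniform partition in the question, the upper sum can be computed exactly (my addition); it equals $\frac12+\frac1{2n}$ and so never drops below $\frac12$:

```python
from fractions import Fraction

# Upper sum of f over the uniform partition {0, 1/n, ..., 1}:
# on [x_{i-1}, x_i] the supremum of f is x_i = i/n, since the rationals are dense.
def upper_sum(n):
    return sum(Fraction(i, n) * Fraction(1, n) for i in range(1, n + 1))

sums = {n: upper_sum(n) for n in (1, 2, 10, 100)}
```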
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Show that the curve has two tangents I'm a little stuck on a math problem that reads as follows:
Show that the curve $x = 5\cos(t), y = 3\sin(t)\, \cos(t)$ has two tangents at $(0, 0)$ and find their equations
What I've Tried
*
*$ \frac{dx}{dt} = -5\sin(t) $
*$ \frac{dy}{dt} = 3\cos^2(t) - 3\sin^2(t) $ because of the product rule. We can simplify this to $ 3(\cos^2(t) - \sin^2(t)) \rightarrow 3\cos(2t) $
*In order to get the slope $ m $, we can write $$ \frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}} $$
*Solving for $ \frac{dy}{dx} $ as follows:
*
*$ \frac{3\cos(2t)}{-5\sin(t)} $ can be rewritten as
*$ \frac{-3}{5}(\cos(2t)\csc(t)) = m $
*Plugging $ (0,0) $ back into the equations of $ x $ and $ y $ we have as follows:
*
*$ 5\cos(t) = 0 \rightarrow t = \frac{\pi}{2} $
*
*Note: I'm unsure what happens to the $ 5 $
*$ \frac{dx}{dt} $ evaluated at $ t = \frac{\pi}{2} $ gives us $ -5\sin(\frac{\pi}{2}) = -5 $
*$ \frac{dy}{dt} $ evaluated at $ t = \frac{\pi}{2} $ gives us $ 3\cos(\frac{2\pi}{2}) \rightarrow 3\cos(\pi) = -3 $
*$ \frac{dy}{dx} = \frac{3}{5} $
*Continuing on, if we add $ \pi $ to the value of $ t $ we get $ t = \frac{3\pi}{2} $. Plug the new value of $ t $ into the equations of $ x $ and $ y $
*
*$ \frac{dx}{dt} $ evaluated at $ t = \frac{3\pi}{2} $ gives us $ -5\sin(\frac{3\pi}{2}) = 5 $
*$ \frac{dy}{dt} $ evaluated at $ t = \frac{3\pi}{2} $ gives us $ 3\cos(\frac{6\pi}{2}) \rightarrow 3\cos(3\pi) = -3 $
*$ \frac{dy}{dx} = -\frac{3}{5} $
*We now have our two slopes of the tangent lines:
*
*$ y = -\frac{5}{3}x $
*$ y = \frac{5}{3}x $
The issue is that WebAssign is claiming that the slopes are wrong (screenshot omitted). A graph of the correct solution was also attached (image omitted).
P.S. My apologies if this is a repost. I've seen the response "Show that the curve x = 5 cos t, y = 4 sin t cos t has two tangents at (0, 0) and find their equations." and followed it already, to no avail.
|
You computed the slopes as $\frac{dy}{dx}=\frac{3}{5}$ and $\frac{dy}{dx}=-\frac{3}{5}$, but for some reason when you wrote the equations of the tangent lines you took the reciprocal of these slopes and wrote $y=\frac{5}{3}x$ and $y=-\frac{5}{3}x$. They should be $y=\frac{3}{5}x$ and $y=-\frac{3}{5}x$. Other than that, your method looks correct.
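As an added numerical cross-check (not part of the original answer), evaluating $\frac{dy/dt}{dx/dt}$ at the two parameter values sending the curve through the origin indeed gives slopes $\pm\frac35$:

```python
import math

# x = 5 cos t, y = 3 sin t cos t; slope is (dy/dt)/(dx/dt).
def slope(t):
    dxdt = -5.0 * math.sin(t)
    dydt = 3.0 * math.cos(2.0 * t)  # product rule, simplified via cos(2t)
    return dydt / dxdt

# The curve passes through the origin at t = pi/2 and t = 3*pi/2.
slopes = sorted(round(slope(t), 9) for t in (math.pi / 2, 3 * math.pi / 2))
```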
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How many 4 digit numbers are divisible by 29 such that their digit sum is also 29? How many $4$ digit numbers are divisible by $29$ such that their digit sum is also $29$?
Well, the answer is $5$, but what is the working, and how did they get it?
|
Just to give a different take on things, in order for a number to be a multiple of $29$ and have digit sum congruent to $2$ mod $9$, the number must be of the form
$$29(9k+1)=261k+29$$
For the number to have four digits, we need $1000\le261k+29\le9999$, which requires $4\le k\le38$.
Now for the digit sum to be not just congruent to $2$ mod $9$ but actually equal to $29$, we cannot have any $0$'s or $1$'s among the four digits, and if there's a $2$ then the other three digits must be $9$'s. Therefore, we cannot have $k\equiv1$ or $2$ mod $10$, nor, since $261$ does not divide $9992-29$, can we have $k\equiv3$ mod $10$; also, we need $2999\le261k+29$, which by itself requires $12\le k$. At this point we're left with $19$ possible values of $k$:
$$14,15,16,17,18,19,20\\24,25,26,27,28,29,30\\34,35,36,37,38$$
The values that work turn out to be $k=19$, $29$, $30$, $37$, and $38$. These correspond to the numbers $4988$, $7598$, $7859$, $9686$, and $9947$ found by Christian Blatter, whose approach boiled things down to checking just $11$ possibilities instead of the $19$ listed here. I wonder if some hybrid of the two approaches might improve at least on mine if not both.
Added later: It occurs to me it's possible to winnow the list of $k$'s by about a third without too much effort. If $k$ ends in $6$ or less, then $261k+29$ ends in $5$ or less, which means the first three digits must sum to at least $24$. The smallest such number is $6990$, from which it follows that $k\ge27$. This eliminates $k=14$, $15$, $16$, $24$, $25$, and $26$. This leaves only $13$ values of $k$ to check, which is closer to Christian's tally.
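A brute-force search (my addition) confirms both the count and the five numbers found above:

```python
# Four-digit multiples of 29 (smallest is 29 * 35 = 1015) with digit sum 29.
matches = [m for m in range(1015, 10000, 29)
           if sum(int(d) for d in str(m)) == 29]
```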
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
does uncorrelated imply independence for 2 valued random variables? Let $(X_n)_{n \geq 1}$ be identically distributed random variables such that $\mathbb{P}[X_1 = 1] = \mathbb{P}[X_1 = -1] = \frac{1}{2}$ and for any $i_1< \cdots < i_p, \mathbb{E}[X_{i_1} \cdots X_{i_p}] = 0$ for any $p \in \mathbb{N}.$ Can I conclude that $(X_n)_{n \geq 1}$ are i.i.d.? In other words, does uncorrelated imply independence in this case? I know that in this case they are pairwise independent.
|
I think I solved it. First of all we shift and rescale the random variables, so we denote by $Y_i = \frac{X_i + 1}{2}.$ It is not difficult to check that $\mathbb{E}[Y_{i_1} \cdots Y_{i_p}] = \mathbb{E}[Y_{i_1}] \cdots \mathbb{E}[Y_{i_p}] $ and $\mathbb{P}[Y_i = 0] = \mathbb{P}[Y_i = 1] = \frac{1}{2}.$
But on the other hand, $\mathbb{P} [Y_{i_1} = 1, \dots, Y_{i_p} = 1] = \mathbb{E}[Y_{i_1} \cdots Y_{i_p}] = \mathbb{E}[Y_{i_1}] \cdots \mathbb{E}[Y_{i_p}] = \mathbb{P} [Y_{i_1} = 1] \cdots \mathbb{P} [Y_{i_p} = 1]. $
Now, to show independence in the general case, we use induction. For $p=2$, the result follows immediately from the last equation. Next, we suppose that any $p=n$ of the random variables are independent, and we show it for $p=n+1.$ For example, we can show
$$\mathbb{P} [Y_1 = 0, Y_2 = 0, \dots, Y_{n+1} = 0] = \\ \mathbb{P} [Y_1 = 0, Y_2 = 0, \dots, Y_{n} = 0] - \mathbb{P} [Y_1 = 0, Y_2 = 0, \dots, Y_n = 0, Y_{n+1} = 1] \\= \mathbb{P} [Y_1 = 0] \mathbb{P} [ Y_2 = 0] \cdots \mathbb{P} [ Y_{n} = 0] - \mathbb{P} [Y_1 = 0] \mathbb{P}[Y_2 = 0] \cdots \mathbb{P}[Y_{n-1} = 0] \mathbb{P}[ Y_{n+1} = 1] + \mathbb{P} [Y_1 = 0, Y_2 = 0, \dots, Y_{n-1} = 0, Y_n = 1, Y_{n+1} = 1].$$
And we continue this process.
I believe this result also holds when $\mathbb{P}[X_1 = -1] = q = 1 - \mathbb{P}[X_1 = 1] = 1-p.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Give me hints for evaluating this limit Evaluate this limit:
$$\lim_{n\to\infty}{\left\{\left(1+\frac{1}{n}\right)^n-\left(1+\frac{1}{n}\right)\right\}}^{-n}$$
Please give me some hints. If you provide a complete answer instead, please include a spoiler tag.
|
Hint.
$$\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n=e^x=\exp(x)$$
From here on you can easily conclude the result as you mentioned in the comments of your post.
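To see numerically where the hint leads (my addition, and something of a spoiler): the braced expression tends to $e-1\approx1.718>1$, and a number larger than $1$ raised to the power $-n$ tends to $0$:

```python
# Evaluate ((1 + 1/n)^n - (1 + 1/n))^(-n) for growing n.
def term(n):
    inner = (1.0 + 1.0 / n) ** n - (1.0 + 1.0 / n)
    return inner ** (-n)

values = [term(n) for n in (10, 100, 1000)]
```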
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Prove the following limits without using l'Hospital and Sandwich theorem Prove the following limits:
$$\lim_{x \rightarrow 0^+} x^x = 1$$
$$\lim_{x \rightarrow 0^+} x^{\frac{1}{x}}=0$$
$$\lim_{x \rightarrow \infty} x^{\frac{1}{x}}=1$$
They are not that hard using l'Hospital or the Sandwich theorem. But I am curious whether they can be solved with only basic knowledge of limits. I have been trying to use some famous limits, like the definition of $e$, but without luck.
Thank you for your help.
|
Using $a^b=\exp(b\ln a)$ as definition of exponentiation with irrational exponents, it is natural to take logarithms; then the claims are equivalent to
$$\lim_{x\to 0^+}x\ln x=0,\qquad \lim_{x\to 0^+}\frac 1x\ln x=-\infty, \qquad \lim_{x\to \infty}\frac 1x\ln x=0.$$
Substituting $x=e^{-y}$ for the first two and $x=e^y$ for the last (so that $y\to+\infty$ in all cases), they are equivalent to
$$\lim_{y\to+\infty}\frac{-y}{e^y} =0,\qquad \lim_{y\to+\infty }(-ye^y)=-\infty,\qquad \lim_{y\to+\infty}\frac{y}{e^y}=0.$$
This makes the middle one clear and the other two equivalent to the fact that the exponential has superpolynomial (or at least superlinear) growth.
If not already known, this follows from the general inequality $e^t\ge 1+t$, from which we find for $t\ge -1$ that $e^t=(e^{t/2})^2\ge (1+t/2)^2=1+t+\frac14t^2\ge\frac14t^2$.
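A numerical illustration of the three limits (my addition, not needed for the proof):

```python
# x^x -> 1 and x^(1/x) -> 0 as x -> 0+, and x^(1/x) -> 1 as x -> infinity.
small = 1e-9
x_to_x = small ** small
x_to_inv_x = small ** (1.0 / small)  # underflows to 0.0
large = 1e9
large_root = large ** (1.0 / large)
```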
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Compute $\lim_{n\to\infty}n^{-(n+\nu)}\sum_{k=1}^nk^k$ for $\nu\in\Bbb R$
Let $\nu\in\Bbb R$ and
$$S_{\nu}:=\lim_{n\to\infty}\frac{1}{n^{n+\nu}}\sum_{k=1}^nk^k$$
What is the value of $S_{\nu}$?
Disclaimer: this is not a homework question. Indeed, I thought of this problem when reading this question.
We have
$$ n^{-\nu}\ =\ n^{n-(n+\nu)}\ \leq \ \underbrace{n^{-(n+\nu)}\sum_{k=1}^nk^k}_{=: \ S_{\nu}^n}\ \leq \ n^{n+1-(n+\nu)} \ = \ n^{1-\nu} \qquad \forall n \in \Bbb N$$
and thus
$ S_{\nu}=\infty$ if $\nu<0$ and $S_{\nu}=0$ if $\nu >1$.
Now, for $\nu\in [0,1]$, I have to admit that I don't see how to compute $S_{\nu}$. It seems that $S_{\nu}\in [0,1]$ for $\nu \in [0,1]$, as suggested by a plot of the first few values of $S_{\nu}^{n}$ (plot omitted).
|
As pointed out in a comment, we have
\begin{align*}n^n\ \leq\ \sum_{k=1}^nk^k \ &\ \leq n^n+(n-1)^{n-1}+\sum_{k=1}^{n-2}k^k\leq n^n+(n-1)^{n-1}+\overbrace{(n-2)(n-2)^{n-2}}^{=(n-2)^{n-1}} \\ &\leq n^n+2(n-1)^{n-1}\leq n^n+2n^{n-1}=n^n\Big(1+\frac{2}{n}\Big) \qquad \forall n\in\Bbb N.
\end{align*}
So that
$$n^{-\nu}\leq S_{\nu}^n \leq n^{-\nu}\Big(1+\frac{2}{n}\Big)\qquad \forall n\in\Bbb N.$$
Letting $n\to \infty$, we get
$$S_{\nu}=\begin{cases}\infty & \text{if } \nu<0,\\ 1 & \text{if }\nu=0, \\ 0 & \text{if } \nu >0.\end{cases}$$
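The squeeze $n^{-\nu}\le S_{\nu}^n\le n^{-\nu}\big(1+\frac2n\big)$ is easy to confirm with exact integer arithmetic (my addition); for $\nu=0$:

```python
from fractions import Fraction

# Exact value of S_0^n = (sum_{k=1}^n k^k) / n^n; Python ints are exact.
def ratio(n):
    return Fraction(sum(k ** k for k in range(1, n + 1)), n ** n)

ratios = {n: ratio(n) for n in (5, 20, 60)}
bounds_hold = all(1 <= r <= 1 + Fraction(2, n) for n, r in ratios.items())
```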
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1549944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
5 geometric shapes, all touching each other I was playing around with shapes that all touch each other. I managed to get 3 and 4 shapes all touching each other, but I can't get 5 to work in 2D.
Does anyone have an idea what these shapes are called, and also how to get 5 shapes all touching? It would be best if all the shapes were congruent.
|
It is impossible to make five 2D shapes (on a flat plane) which all touch each other. This is a consequence of the four color map theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Every field has an algebraic closure proof Below is the proof in Fraleigh.
The part that I don't understand is the construction of the set $A$. What does "every possible zero of any $f(x)$" mean? When the textbook talked about "zeros", they were always defined in a larger field; now there is no such single extension field (at least not proved yet), so how should I make sense of such a set?
|
The set $A$ is not a field; it is just a sufficiently large set. For example, one could define it slightly more formally as $A = \bigcup \{(f,i)\mid f \in F[x];\ i \in \{0,...,\text{degree}(f)-1\}\}$. Again for example, one could define $\Omega = P(A) \cup F$ (where $P(A)$ denotes the power set of $A$). The idea is that we want a set sufficiently large that for each algebraic extension we can construct an isomorphic algebraic extension whose underlying set is a subset of this large set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
proving the validity I need to prove the validity of the formula:
$Q= \forall x \forall y \forall v \ F(x,y,f(x,y),v, g(x,y,v)) \rightarrow \forall x \forall y \exists z \forall v \exists u \ F(x,y,z,v,u)$
I thought the best way to do this was by proving that $\neg Q$ is unsatisfiable, and I tried the tableau method, but I did not know what to do with $f$ and $g$, so I couldn't reach a closed tableau.
I would appreciate some pointers or suggestions on how to prove the validity of $Q$ or maybe what I'm doing wrong in the tableau
|
If I remember my tableaux, you get $\forall x\forall y\forall v\, F(\ldots)$ (the antecedent) and $\neg\forall x\forall y\exists z\ldots$ (the negation of the consequent).
You should have a rule that allows you to conclude
$$¬∃z∀v∃uF(a,b,z,v,u)$$
for some $a,b$. To reduce $¬∃z$ we need something to apply it to. The terms we have lying around are those involving $a,b,f,g$. To line up with the antecedent (which we are trying to contradict) instantiate $f(a,b)$ for $z$. This should get you going.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Integrate $\int \frac{\arctan\sqrt{\frac{x}{2}}dx}{\sqrt{x+2}}$ $$\int \frac{\arctan\sqrt{\frac{x}{2}} \, dx}{\sqrt{x+2}}$$
I've tried substituting $x=2\tan^2y$, and I've got:
$$\frac{1}{\sqrt2}\int\frac{y\sin y}{\cos^4 y} \, dy$$
But I'm not entirely sure this is a good thing as I've been unable to proceed any further from there.
|
We have the Taylor series
$$\frac{\sin y}{\cos^4y}=\sum_{n=1}^{\infty }\frac{(2(2n-1)!-1)y^{2n-1}}{(2n-1)!},$$
so
$$\frac{1}{\sqrt2}\int\frac{y\sin y}{\cos^4 y}\,dy=\frac{1}{\sqrt2}\int\sum_{n=1}^{\infty }\frac{(2(2n-1)!-1)y^{2n}}{(2n-1)!}\,dy$$
$$=\frac{1}{\sqrt2}\sum_{n=1}^{\infty }\frac{(2(2n-1)!-1)y^{2n+1}}{(2n+1)(2n-1)!}+C$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Is Peirce's law valid? Could anyone help me see how Peirce's law ((P→Q)→P)→P is valid?
It seems to me that from (P→Q), P need not follow.
e.g.: assume $P$ = "pigs can fly", $Q$ = "$1+1=2$".
Then: if (if pigs can fly → $1+1=2$) then pigs can fly.
Since the first part is a valid implication ($1+1=2$ is always true), how can $P$ follow?
|
$$((P \implies Q) \implies P) \implies P$$
$$((\text{Pigs can fly} \implies 1+1=2) \implies \text{Pigs can fly}) \implies \text{Pigs can fly}$$
$$((\text{False} \implies \text{True}) \implies \text{False}) \implies \text{False}$$
$$(\text{True} \implies \text{False}) \implies \text{False}$$
$$\text{False} \implies \text{False}$$
$$\text{True}$$
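Since Peirce's law involves only two propositional variables, its validity can also be confirmed by brute force over all four truth assignments (my addition):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Peirce's law ((P -> Q) -> P) -> P holds under every truth assignment.
peirce_is_tautology = all(
    implies(implies(implies(p, q), p), p)
    for p, q in product((True, False), repeat=2)
)
```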
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Map from $\mathbb{A}^1 \rightarrow \mathbb{A}^2$ Let the map $\varphi_n:\mathbb{A}^1 \rightarrow \mathbb{A}^2$ be defined by $t\rightarrow(t^2,t^n)$.
-Show that if n is even, the image of $\varphi_n$ is isomorphic to $\mathbb{A}^1$ and $\varphi_n$ is 2:1 away from 0.
-Show that if n is odd, $\varphi_n$ is bijective, and give a rational inverse of it.
For the even case: I believe that I've shown that $\varphi_n$ is exactly 2:1 away from $0$, and I believe that I've shown that the curve given by $y=x^{\frac n2}$ is the image of $\varphi_n$. How do I go about showing that this is isomorphic to $\mathbb{A}^1$?
For the odd case: I'm not really sure where to start. I used the same process that I did for the even case, and I think the inverse is $t=\frac{y}{x^m}$, where $(x,y)=(t^2,t^n)=(t^2,t^{2m+1})$ for some $m$. How do I show that this is bijective?
Thanks for any help you guys can offer!
|
For the even case, you are showing that the image is isomorphic to $\mathbb{A}^1$. It is not necessary (and not true) that $\varphi_n$ is the isomorphism. In fact, your answer almost contains the map from $\mathbb{A}^1$ to $\varphi_n(\mathbb{A}^1)$ and the map in the other direction. (You will need to check that they are mutual inverses of course).
In the odd case, I think you want to show that it is bijective onto its image ($\varphi_n$ is not surjective onto $\mathbb A^2$ because $|y|$ is determined by $|x|$). Pick a point $(x,y)$ in the image of $\varphi_n$. Using what you said in your question, you need to show that $\frac{y}{x^m} = \frac{t^{2m+1}}{(t^2)^m}$ gives you an inverse to the map $\varphi_n$.
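The claimed inverse for odd $n=2m+1$ is easy to sanity-check numerically (my addition): composing $t\mapsto(t^2,t^n)$ with $(x,y)\mapsto y/x^m$ recovers $t$ for $t\neq0$:

```python
# phi(t) = (t^2, t^(2m+1)); candidate inverse is t = y / x^m.
def phi(t, m):
    return (t ** 2, t ** (2 * m + 1))

def inverse(x, y, m):
    return y / x ** m

pairs = [(t, m) for m in (1, 2, 3) for t in (-2.0, -0.5, 1.5)]
max_err = max(abs(inverse(*phi(t, m), m) - t) for t, m in pairs)
```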
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
$\sum_{n=1} ^\infty \frac{8}{n15^n}$ without calculator I just can't figure out which function to use to get the sum. I tried with ln, but that gives me an alternating series.
|
The exchange of integral and series is justified because the convergence of the geometric series is uniform on $[0,1/15]$, which stays away from $1$.
$$
\sum_{n=1} ^\infty \frac{8}{n15^n}
=8\,\sum_{n=1}^\infty \int_0^{1/15}t^{n-1}\,dt
=8\,\int_0^{1/15}\left(\sum_{n=1}^\infty t^{n-1}\right)\,dt\\
=8\,\int_0^{1/15}\frac{dt}{1-t}=-8\,\log(1-t)\left.\vphantom{\int}\right|_0^{1/15}\\
=-8\,\log\left(\frac{14}{15}\right)
$$
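A quick numerical check (my addition) that the partial sums indeed approach $-8\log(14/15)=8\log(15/14)$:

```python
import math

# Partial sum of sum_{n>=1} 8/(n 15^n) against the closed form 8 log(15/14).
partial = sum(8.0 / (n * 15.0 ** n) for n in range(1, 30))
closed_form = 8.0 * math.log(15.0 / 14.0)
error = abs(partial - closed_form)
```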
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate the Integral: $\int e^{2\theta}\ \sin 3\theta\ d\theta$
After integrating by parts a second time, it seems that the problem will repeat forever. Am I doing something wrong? I would love for someone to show me, using the method I am using, in a clean and clear fashion. Thanks.
|
When you do it with integration by parts, you have to go in the "same direction" both times. For instance, if you initially differentiate $e^{2 \theta}$, then you need to differentiate $e^{2 \theta}$ again; if you integrate it instead, you will wind up back where you started. If you do this, you should find something of the form
$$\int e^{2 \theta} \sin(3 \theta)\, d \theta = f(\theta) + C \int e^{2 \theta} \sin(3 \theta)\, d \theta$$
where $C$ is not $1$. Therefore you can solve the equation for the desired quantity:
$$\int e^{2 \theta} \sin(3 \theta)\, d \theta = \frac{f(\theta)}{1-C}.$$
There is also a nice approach with complex numbers: $\sin(3 \theta)=\frac{e^{3 i \theta}-e^{-3 i \theta}}{2i}$, so your integral is
$$\frac{1}{2i} \int e^{(2+3i) \theta} - e^{(2-3i) \theta} \, d \theta,$$
which are pretty easy integrals. You do some complex-number arithmetic and it works out.
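Either way one arrives at $\int e^{2\theta}\sin(3\theta)\,d\theta=\frac{e^{2\theta}(2\sin 3\theta-3\cos 3\theta)}{13}+C$, which can be verified with a finite-difference check (my addition) that the candidate antiderivative differentiates back to the integrand:

```python
import math

# Candidate antiderivative F and integrand f; check F' = f by central differences.
def F(t):
    return math.exp(2 * t) * (2 * math.sin(3 * t) - 3 * math.cos(3 * t)) / 13.0

def f(t):
    return math.exp(2 * t) * math.sin(3 * t)

h = 1e-6
max_err = max(abs((F(t + h) - F(t - h)) / (2 * h) - f(t))
              for t in (0.0, 0.7, 1.3, 2.0))
```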
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Evaluating $\int_{0}^{\infty}\frac{\sin(ax)}{\sinh(x)}dx$ with a rectangular contour I need to try to evaluate $\int_{0}^{\infty}\frac{\sin(ax)}{\sinh(x)}dx$ and it seems like this is supposed to be done using some sort of rectangular contour based on looking at other questions.
My main issue is that I am unsure how this kind of contour works in general. For example:
*
*How high or low should I have the rectangle contour?
*Should the contour be centered on the real axis or should it actually be shifted up or down or does it depend?
*When you make bumps on the contour about the singularities, do you make the bumps as to include the singularities inside of the domain or should they be outside of the domain?
I ask the last question based on this question "tough integral involving $\sin(x^2)$ and $\sinh^2 (x)$" here in which robjohn had his contour surround the singularity at $i$ yet did not surround the singularity at $-i$, so I was wondering if this choice matters and why not include or exclude both singularities using the contour.
For this, though, I believe I need to use the singularities at $0$ and $i\pi$, but I suppose I could have the rectangle between any two consecutive singularities on the imaginary axis, correct?
Either way, any insight on this would be appreciated.
Edit: There is no information given on $a$. I assume it is an arbitrary complex parameter.
|
To answer your third question: you can do whatever you want, as we will see later. Because we want to evaluate the integral over the real axis, we will include $\mathbb{R}$ (except $z=0$) in the contour. For this integral this is possible; for other integrals it may not be. Because $\sinh(z)=0$ at all integer multiples of $i\pi$, there are infinitely many singularities. We don't want to enclose them all, so the idea is to include only $z=0$ and $z=i\pi$ in the contour.
Note that $$\sin(a(z+i\pi))=\sin(az+ai\pi)=\sin(az)\cos(ai\pi)+\cos(az)\sin(ai\pi)$$ and $$\sinh(z+i\pi)=-\sinh(z).$$ These facts suggest including $i\pi+\mathbb{R}$ in the integral. Note that $\frac{\sin(az)}{\sinh(z)}$ is an even function, so we can integrate over $\mathbb{R}$ instead of from $0$ to $\infty$. From this follows the rectangular contour ($R\rightarrow\infty$ and $\varepsilon\rightarrow0$):
*
*A line from $-R$ to $-\varepsilon$
*A semicircle of radius $\varepsilon$ around $z=0$, choose it to include $z=0$
*A line from $\varepsilon$ to $R$
*A line from $R$ to $R+i\pi$
*A line from $R+i\pi$ to $\varepsilon+i\pi$
*A semicircle of radius $\varepsilon$ around $z=i\pi$, choose it to include $z=i\pi$
*A line from $-\varepsilon+i\pi$ to $-R+i\pi$
*A line from $-R+i\pi$ to $-R$.
We know that $\int_1=\int_3$, because $\frac{\sin(az)}{\sinh(z)}$ is an even function. We also know that $\int_5=\int_7=\cos(ai\pi)\int_1$, because $$\int_{-\infty}^\infty\frac{\sin(az)\cos(ai\pi)+\cos(az)\sin(ai\pi)}{\sinh(z)}dz=\int_{-\infty}^\infty\frac{\sin(az)\cos(ai\pi)}{\sinh(z)}dz$$ and $\frac{\cos(az)}{\sinh(z)}$ is an odd function. Here the minus sign from $\sinh(z+i\pi)=-\sinh(z)$ and the one from the reversed direction of traversal cancel each other. Note that we have to say something about convergence. We also have $\int_4=\int_8=0$, where we again have to say something about convergence.
Now we calculate the residues: at $z=0$ the singularity is removable (the function tends to $a$), so the residue is $0$. Around $z=i\pi$, writing $w=z-i\pi$, we have $$\frac{\sin(az)}{\sinh(z)}=\frac{\sin(aw)\cos(ai\pi)+\cos(aw)\sin(ai\pi)}{-\sinh(w)}\sim-\frac{\sin(ai\pi)}{w}\qquad(w\to0),$$ which gives $-\sin(ai\pi)$ as the residue.
Note that $\int_6=-2\pi i\cdot\frac{1}{2}\sin(ai\pi)$ (there is a theorem about simple poles and half circles around them) and $\int_2=0$. From this it follows that $$2(1+\cos(ai\pi))\int_0^\infty\frac{\sin(az)}{\sinh(z)}dz-\pi i\sin(ai\pi)=-2\pi i\sin(ai\pi),$$ which also gives the answer to your third question. If we don't include $z=i\pi$, we instead get $$2(1+\cos(ai\pi))\int_0^\infty\frac{\sin(az)}{\sinh(z)}dz+\pi i \sin(ai\pi)=0,$$ because the half circle around $z=i\pi$ is then traversed clockwise instead of counterclockwise.
Finally, the answer is $$\int_0^\infty\frac{\sin(az)}{\sinh(z)}dz=-\pi i\frac{\sin(ai\pi)}{2+2\cos(ai\pi)}=\frac{\pi}{2}\tanh(\frac{\pi a}{2}).$$
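Since $\sin(ai\pi)=i\sinh(\pi a)$ and $\cos(ai\pi)=\cosh(\pi a)$ for real $a$, the closed form can be sanity-checked numerically (my addition, using a plain trapezoidal rule):

```python
import math

# Trapezoidal approximation of the integral over (0, 40]; the integrand
# extends continuously to x = 0 with value a, and decays like 2 e^{-x}.
def integral(a, upper=40.0, steps=200000):
    h = upper / steps
    def g(x):
        return a if x == 0.0 else math.sin(a * x) / math.sinh(x)
    total = 0.5 * (g(0.0) + g(upper)) + sum(g(i * h) for i in range(1, steps))
    return total * h

errors = [abs(integral(a) - math.pi / 2.0 * math.tanh(math.pi * a / 2.0))
          for a in (0.5, 1.0, 2.0)]
```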
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1550932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Higher homotopy groups meaning I am developing intuition for higher homotopy groups but it's very hard for me to visualize what $\pi_2$ represents (and $\pi_n$ for that matter). I know that $\pi_2(S^2) \cong \mathbb{Z}$ and can kind of see "wrapping" a sphere around itself an integer number of times. But, something like $\pi_3(S^2)$ doesn't make very much intuitive sense to me at all. How am I supposed to think about these groups? When is it supposedly obvious that $\pi_i(X)$ is trivial and when is it nontrivial?
|
The question in its current form is too general. However, the case $\pi_n(S^{n-1})$ is relatively easy and was discovered first.
In Milnor's Topology from the Differentiable Viewpoint, you can find a quite intuitive explanation of the fact that elements of $\pi_3(S^2)$ correspond to framed cobordism classes of $1$-dimensional smooth submanifolds of $S^3$, that is, certain natural equivalence classes of circles in $S^3$ with a framing. (Essentially, this duality is induced by taking preimage of the "north pole" $f^{-1}(n)$ for any $f: S^3\to S^2$ for $[f]\in \pi_3(S^2)$.)
A framing on a circle in $S^3$ is a choice of two normal vectors at each point of the circle. If you have one such framing, any other framing can be obtained from it by a loop $S^1\to SO(2)$ acting pointwise on the two framing vectors. Not surprisingly, homotopic maps give rise to cobordant framings. Clearly, $\pi_1(SO(2))\simeq \mathbb{Z}$.
Using the same approach, the case $\pi_n(S^{n-1})\simeq \mathbb{Z}_2$ for $n>3$ can be reduced to $\pi_1(SO(n-1))\simeq \mathbb{Z}_2$ for $n>3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Understanding Vacuously True (Truth Table) I don't know very much about formal logic, and I'm trying to understand the concept of vacuously true statements. Consider the truth table below:
$$\begin{array} {|c|}
\hline
P & Q & P\implies Q & Q\implies P & P\iff Q \\ \hline
T & T & T & T & T \\ \hline
T & F & F & \color{blue} T & F\\ \hline
F & T & \color{blue}T & F & F\\ \hline
F & F & \color{blue} T & \color{blue}T & T
\end{array}$$
The blue letters are definitions. To see why these definitions are the correct choices (as opposed to $F$s), suppose we changed the lower left $\color{blue} T$ to an $F$ (this would then force the lower right $\color{blue}T$ to an $F$), so $P\iff Q$ would be $F$ for $P$ and $Q$ both $F$, which isn't what we want. So I see why it makes sense to choose the lower entries as $\color{blue}T$.
However, it isn't clear to me why $P\implies Q$ true for $P$ false and $Q$ true is the sensible choice (same for $Q\implies P$ true for $Q$ false and $P$ true). For if the $\color{blue}T$s in the middle rows were switched to $F$s, then $P\iff Q$ would still be $F$. So I don't see what the problem would be. Can someone please explain?
|
Here is why we say $P \implies Q$ is true if $P$ is false and $Q$ is true:
Let $P$ be the statement "it's raining outside" and $Q$ be the statement "the car is wet".
In order for $P \implies Q$ to be true, what needs to happen is: every time it rains outside, it better follow that the car is wet. That's all you need to check.
So the only time $P \implies Q$ is false is when we get that it's raining outside and the car isn't wet.
When it's not raining, we don't have that it's raining outside and the car is not wet. Since we don't have this, that means $P \implies Q$ isn't false, since the only time it is false is when we get that it's raining outside and the car isn't wet. If it's not raining outside, we don't get this, so the statement is not false, so it must be true.
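The convention can also be seen mechanically: in most programming languages the material conditional is expressed as `(not p) or q`, and enumerating all four cases reproduces the $P \implies Q$ column of the table above. A minimal sketch:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

table = [(p, q, implies(p, q)) for p, q in product([True, False], repeat=2)]
for p, q, r in table:
    print(p, q, r)
```

The only row printing `False` in the last column is `p = True, q = False`, exactly as in the rain example.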
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 0
}
|
Simplifying $2\cos(t)\cos(2t)-\sin(t)\sin(2t)$ How do I simplify $2\cos(t)\cos(2t)-\sin(t)\sin(2t)$? I know this should be possible, but I don't know how.
I have tried the $\cos(t)\cos(u)-\sin(t)\sin(u)=\cos(t+u)$, but I don't know what to do with the $2$ in front of $\cos(t)$.
|
$$2\cos(t)\cos(2t)-\sin(t)\sin(2t)=\frac{\cos(t)+3\cos(3t)}{2}$$
Proof:
$$2\cos(t)\cos(2t)-\sin(t)\sin(2t)=\frac{\cos(t)+3\cos(3t)}{2}\Longleftrightarrow$$
$$2\left(2\cos(t)\cos(2t)-\sin(t)\sin(2t)\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$2\left(\cos(-t)+\cos(3t)-\sin(t)\sin(2t)\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$2\left(\cos(t)+\cos(3t)-\sin(t)\sin(2t)\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$2\left(\cos(t)+\cos(3t)+\frac{\cos(3t)-\cos(-t)}{2}\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$2\left(\cos(t)+\cos(3t)+\frac{\cos(3t)-\cos(t)}{2}\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$2\left(\cos(t)+\cos(3t)+\frac{\cos(3t)}{2}-\frac{\cos(t)}{2}\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$2\left(\frac{\cos(t)}{2}+\frac{3}{2}\cos(3t)\right)=\cos(t)+3\cos(3t)\Longleftrightarrow$$
$$\cos(t)+3\cos(3t)=\cos(t)+3\cos(3t)$$
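A quick numerical spot check of the identity, independent of the algebraic proof above:

```python
import math

def lhs(t):
    return 2 * math.cos(t) * math.cos(2 * t) - math.sin(t) * math.sin(2 * t)

def rhs(t):
    return (math.cos(t) + 3 * math.cos(3 * t)) / 2

# Largest discrepancy over a grid of sample points in [-6, 6].
max_err = max(abs(lhs(t) - rhs(t)) for t in [k * 0.1 for k in range(-60, 61)])
print(max_err)
```

The discrepancy is at the level of floating-point rounding, as expected for a true identity.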
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Complex equation: $z^8 = (1+z^2)^4$ What's up with this complex equation?
$ z^8 = (1+z^2)^4 $
To start with, there seems to be a problem when we try to apply root of four to both sides of the equation:
$ z^8 = (1+z^2)^4 $
$ z^2 = 1 + z^2 $
which very clearly doesn't have any solutions, but we know there are solutions: the problem is from an exam, and, besides, Wolfram Alpha gladly gives them to us.
We've tried to solve it using the trigonometric form, but the sum inside the parentheses is killing all of our attempts.
Any help? Ideas?
|
The solutions of $z^8 = (1+z^2)^4$, and the problem you see, are clearly seen from a factorization process:
$$\begin{align}
z^8 - (1+z^2)^4 &= 0 \,, \\
[z^4 + (1+z^2)^2] [z^4 - (1+z^2)^2] &= 0 \,, \\
[z^2 + i(1+z^2)] [z^2 - i (1+z^2)] [z^2 - (1+z^2)] [z^2 + (1+z^2)] &= 0 \,.
\end{align}$$
From the last line, it is clear that you have only considered one case (the third term) over four other cases. This case, in particular, has no solutions,
$$\begin{align}
[z^2 + i(1+z^2)] [z^2 - i (1+z^2)] [z^2 - (1+z^2)] [z^2 + (1+z^2)] &= 0 \,, \\
[(1+i)z^2 + i] [(1-i)z^2 - i ] [ - 1 ] [2z^2 + 1] &= 0 \,, \\
\end{align}$$
I believe this clarifies a bit the answers posted by Millikan and Levap.
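The three solvable quadratic factors can be solved and verified numerically; a minimal sketch with `cmath` (each factor has the form $Az^2+B=0$):

```python
import cmath

factors = [
    (1 + 1j, 1j),   # (1+i) z^2 + i = 0
    (1 - 1j, -1j),  # (1-i) z^2 - i = 0
    (2, 1),         # 2 z^2 + 1 = 0
]

roots = []
for A, B in factors:
    r = cmath.sqrt(-B / A)
    roots.extend([r, -r])

# Each root should satisfy the original equation z^8 = (1 + z^2)^4.
residuals = [abs(z**8 - (1 + z**2)**4) for z in roots]
print(roots, max(residuals))
```

All six roots check out, while the third factor $-1=0$ contributes nothing, matching the count of solutions.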
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 3
}
|
Prove that functional equation doesn't have range $\Bbb R.$ Prove that any solution $f: \mathbb{R} \to \mathbb{R}$ of the functional equation
$$ f(x + 1)f(x) + f(x + 1) + 1 = 0 $$
cannot have range $\mathbb{R}$.
I transformed it into
$$ f(x) = \frac {-1} {f(x + 1)} - 1 = \frac {-1 - f(x + 1)} {f(x + 1)} $$
I tried to evaluate $f(x + 1)$ and $f(x + 2)$ and put them into the transformed equation:
1) after $f(x + 1)$
$$ f(x) = \frac {1} {-1 - f(x + 2)}$$
2) after $f(x + 2)$
$$ f(x) = \frac {1} {\frac {-1}{f(x + 3)}} = -f(x + 3) $$
What am I supposed to do next?
|
Assume that the range of $f$ is all of $\Bbb R$. Then there are $a,b\in\Bbb R$ with $f(a)=0$ and $f(b)=-1$. But then
$$ 1=f(a)f(a-1)+f(a)+1=0$$
and
$$ 1=f(b+1)f(b)+f(b+1)+1=0,$$
both of which are absurd.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Is This Matrix Diagonalizable? Consider the matrix A below:
$$A = \begin{pmatrix} 3 & 2 & 2 \\ -1 & 0 & -2 \\ 1 & 2 & 4\end{pmatrix}$$
Is the matrix A diagonalizable? If so, then find a diagonal matrix D and an invertible matrix P such that $P^{-1}AP=D$.
I know it is supposed to be diagonalizable but I have tried to solve it and haven't succeeded. The format I'm used to is $A=PDP^{-1}$.
Thanks.
|
Assuming $A$ is an $n\times n$ matrix, $A$ is diagonalizable if $P_A(\lambda)$ has $n$ distinct real roots or if, for each eigenvalue, the algebraic multiplicity equals the geometric multiplicity. So let's find out:
$$P_A(\lambda) = det(A - \lambda I) = det \begin{pmatrix} 3-\lambda & 2 & 2 \\ -1 & -\lambda & -2 \\ 1 & 2 & 4-\lambda\end{pmatrix}$$
$$ = (3-\lambda)((-\lambda)(4-\lambda) + 4) - 2(\lambda - 4 + 2) + 2(-2+\lambda) = 0$$
$$\Rightarrow -\lambda^3+7 \lambda^2-16 \lambda+12 = 0$$
$$\Rightarrow -(\lambda-3) (\lambda-2)^2 = 0$$
$$\lambda = 3 \text{ (multiplicity $1$)}$$
$$\lambda = 2 \text{ (multiplicity $2$)}$$
We see that $\lambda=2$ has algebraic multiplicity 2, so we need to check for its geometric multiplicity.
Geometric multiplicity is equal to $n - Rank(A - \lambda I)$
$$Rank(A - 2I) = dim\left(col\begin{pmatrix} 1 & 2 & 2 \\ -1 & -2 & -2 \\ 1 & 2 & 2\end{pmatrix}\right) $$
$$ = dim\left(col\begin{pmatrix} 1 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{pmatrix}\right) = 1$$
So $$\text{geometric multiplicity} = 3 - Rank(A - 2I) = 3 - 1 = 2$$
$$ = \text{algebraic multiplicity when $\lambda=2$}$$
So the matrix is diagonalizable.
Now we can diagonalize it:
Let's find our eigenvectors.
For $\lambda=2$:
$$\operatorname{span}\{\vec{V_1}, \vec{V_2}\} = null(A - \lambda I) = null(A - 2I) = null\begin{pmatrix} 1 & 2 & 2 \\ -1 & -2 & -2 \\ 1 & 2 & 2\end{pmatrix}$$
$$ = null\begin{pmatrix} 1 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{pmatrix}$$
$$ = \begin{pmatrix} -2 \\ 1 \\ 0\end{pmatrix}x_1 + \begin{pmatrix} -2 \\ 0 \\ 1\end{pmatrix}x_2$$
So our eigenpairs so far are $\left(2, \begin{pmatrix} -2 \\ 1 \\ 0\end{pmatrix}\right)$ and $\left(2, \begin{pmatrix} -2 \\ 0 \\ 1\end{pmatrix}\right)$
Now, to find it for $\lambda=3$:
$$\vec{V} = null(A-\lambda I) = null(A - 3I) = null\begin{pmatrix} 0 & 2 & 2 \\ -1 & -3 & -2 \\ 1 & 2 & 1\end{pmatrix}$$
$$ = null\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0\end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}x_3$$
So our eigenpair is $\left(3, \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}\right)$
We now have everything we need to make our diagonalized matrix:
$$PDP^{-1} = \begin{pmatrix}-2 & -2 & 1 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{pmatrix}\begin{pmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{pmatrix}\begin{pmatrix}-2 & -2 & 1 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{pmatrix}^{-1}$$
$$ = \begin{pmatrix}-2 & -2 & 1 \\ 1 & 0 & -1 \\ 0 & 1 & 1\end{pmatrix}\begin{pmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3\end{pmatrix}\begin{pmatrix}1 & 3 & 2 \\ -1 & -2 & -1 \\ 1 & 2 & 2\end{pmatrix}$$
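The decomposition can be verified with plain integer matrix arithmetic (a sketch; `matmul` is a small helper written for this check, not part of the answer):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[3, 2, 2], [-1, 0, -2], [1, 2, 4]]
P = [[-2, -2, 1], [1, 0, -1], [0, 1, 1]]
D = [[2, 0, 0], [0, 2, 0], [0, 0, 3]]
P_inv = [[1, 3, 2], [-1, -2, -1], [1, 2, 2]]  # the inverse computed above

reconstructed = matmul(matmul(P, D), P_inv)
print(reconstructed)  # recovers A exactly
```

Since all entries are integers, the check $PDP^{-1}=A$ is exact, with no rounding involved.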
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Regarding ranks of row echelon form matrices How do I prove that the rank of a matrix in reduced row echelon form is equal to the number of non-zero rows it has?
|
By definition of row echelon form, all the non-zero rows are linearly independent. (Think of the non-zero leading coefficients, which form a staircase pattern: no row's leading entry can be produced by a combination of the rows below it.)
So the non-zero rows form a basis for the row space (i.e the subspace spanned by all the rows). Therefore their number equals to the dimension of this space, as required.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Negative binomial distribution.Calculate the probability that exactly 12 packages are inspected. In a shipment of 20 packages, 7 packages are damaged. The packages are randomly inspected, one at a time, without replacement, until the fourth damaged package is discovered.
Calculate the probability that exactly 12 packages are inspected.
I used the negative binomial distribution, with $p=\frac{7}{20}$, $r=4$ and $x=8$, where $r$ is the number of successes, $x$ is the number of failures before the $r^{th}$ success and $p$ is the probability of success of a particular trial.
$p(x) =
\binom {r+x-1}{x}p^r(1-p)^x
$
$p(x=8) =
\binom {11}{8}(\frac{7}{20})^4(1-\frac{7}{20})^8=0.079
$
However the answer is 0.119. What have I done wrong?
|
The distribution is not negative binomial, for in the negative binomial the trials are independent. Here we are not replacing after inspecting. The resulting distribution is sometimes called negative hypergeometric, but the term is little used.
We can use an analysis close in spirit to the one that leads to the negative binomial "formula." We get the $4$-th "success" on the $12$-th trial if we have exactly $3$ successes in the first $11$, and then a success.
There are $\binom{20}{11}$ equally likely ways to choose the first $11$ items. There are $\binom{7}{3}\binom{13}{8}$ ways to choose $3$ defective and $8$ good.
So the probability exactly $3$ of the first $11$ items are defective is $\dfrac{\binom{7}{3}\binom{13}{8}}{\binom{20}{11}}$.
Suppose we had $3$ defectives in the first $11$ trials. Then there are $9$ items left, of which $4$ are defective. Thus the conditional probability that the $12$-th item is defective is $\frac{4}{9}$. Multiply.
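Putting the numbers together with `math.comb` (exact integer binomials) reproduces the stated answer:

```python
from math import comb

# P(exactly 3 damaged among the first 11) * P(12th is damaged | that event)
p = comb(7, 3) * comb(13, 8) / comb(20, 11) * 4 / 9
print(round(p, 3))  # 0.119
```

This confirms that the non-replacement (negative hypergeometric) analysis, not the negative binomial formula, matches the book's value.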
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1551940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Divisivility and Sylow p-subgroups Hello I have been working in this problem for two days but i can't get the answer, I would appreciate any help or hint.
Let $p$ be prime, $m\ge 1$, $r \ge 2$ and $(p,r) = 1$.
If there is a simple group $G$ such that $ |G| = p^ {m}r $, then $ p^ {m}|(r-1)! $
|
Pick a Sylow $p$-subgroup $P$ and let $G$ act on $G/P$ by left multiplication: $h\cdot gP = hgP$. This gives a homomorphism $G\to S(G/P)=S_r$, which is clearly not identically equal to $1$. Since $G$ is simple, the kernel is thus trivial and hence $S_r$ contains an isomorphic copy of $G$ as a subgroup. So $p^m r\mid r!$, i.e. $(r-1)!=r!/r$ is divisible by $p^m$, which is the claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove $\sum_p p^{-2} < \frac{1}{2}$? I am trying to prove $\sum_p p^{-2} < \frac{1}{2}$, where $p$ ranges over all primes. I think this should be doable by elementary methods but a proof evades me.
Questions already asked here (eg. What is the value of $\sum_{p\le x} 1/p^2$? and Rate of convergence of series of squared prime reciprocals) deal with the exact value of the above sum, and so require some non-elementary math.
|
Here's a solution that exploits a comment of Oscar Lanzi under my other answer (using an observation that I learned from a note of Noam Elkies [pdf]). In particular, it avoids both the identity $\sum_{n \in \Bbb N} \frac{1}{n^2} = \frac{\pi^2}{6}$ and using integration.
Let $\Bbb P$ denote the set of prime numbers and $X$ the union of $\{2\}$ and the set of odd integers $> 1$; in particular $\Bbb P \subset X$, so where $E$ denotes the set of positive, even integers:
$$\sum_{p \in \Bbb P} \frac{1}{p^2} \leq \sum_{n \in X} \frac{1}{n^2} = \color{#00af00}{\sum_{n \in \Bbb N \setminus E} \frac{1}{n^2}} - \frac{1}{1^2} + \frac{1}{2^2}.$$
Now, $$\sum_{n \in \Bbb N} \frac{1}{n^2} < 1 + \sum_{n \in \Bbb N \setminus \{1\}} \frac{1}{n^2 - \frac{1}{4}} = 1 + \sum_{n \in \Bbb N \setminus \{1\}} \left(\frac{1}{n - \frac{1}{2}} - \frac{1}{n + \frac{1}{2}} \right) = 1 + \frac{2}{3} = \frac{5}{3};$$ the second-to-last equality follows from the telescoping of the sum in the third expression.
The sum over just the even terms satisfies
$$\sum_{m \in E} \frac{1}{m^2} = \sum_{n \in \Bbb N} \frac{1}{(2 n)^2} = \frac{1}{4} \sum_{n \in \Bbb N} \frac{1}{n^2} ,$$ and thus
$$\color{#00af00}{\sum_{n \in \Bbb N \setminus E} \frac{1}{n^2} = \left(1 - \frac{1}{4}\right) \sum_{n \in \Bbb N} \frac{1}{n^2} < \frac{3}{4} \cdot \frac{5}{3} = \frac{5}{4}}.$$
Substituting in the first display equation above yields $$\sum_{p \in \Bbb P} \frac{1}{p^2} \leq \sum_{n \in X} \frac {1}{n^2} < \color{#00af00}{\frac{5}{4}} - 1 + \frac{1}{4} = \frac{1}{2} .$$
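The bound is comfortably consistent with a direct numerical computation (the true value of the prime zeta $P(2)$ is about $0.4522$); a small sieve sketch:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

partial = sum(1 / p**2 for p in primes_up_to(10_000))
print(partial)  # the tail past 10^4 contributes less than 1e-4
```

So the elementary estimate $\frac12$ leaves a margin of roughly $0.048$ over the actual sum.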
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 5
}
|
Primes on the form $p^2-1$ Prove that there exists a unique prime number of the form $p^2 − 1$ where $p\geq 2$ is an integer.
I have no idea how to approach the question. any hints will be greatly appreciated
|
Let $f(n) = n^2-1$. Then
$f(n) = n^2-1^2$
$f(n) = (n+1)(n-1)$
For this product to be prime, the smaller factor must equal $1$, so $n-1=1$, i.e. $n=2$, which gives $f(2)=3$, a prime. For every $n\ge 3$ both factors are greater than $1$, so $f(n)$ is composite. Hence $3$ is the unique prime of the form $n^2-1$.
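A brute-force scan confirms that $n=2$ (giving the prime $3$) is the only case in a large range; `is_prime` is a simple trial-division helper written for this check:

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Values of n >= 2 for which n^2 - 1 is prime.
hits = [n for n in range(2, 1000) if is_prime(n * n - 1)]
print(hits)  # [2]
```

This matches the factorization argument: for $n\ge3$ both factors $n-1$ and $n+1$ exceed $1$.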
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 3
}
|
Prove $x_n$ converges IFF $x_n$ is bounded and has at most one limit point I'm not entirely sure how to go about proving this so hopefully someone can point me in the right direction. The definition I have for a limit point is "$a$ will be a limit point if for a sequence $x_n$ there exists a subsequence $(x_{n_k})$ such that $\lim_{k\to\infty} (x_{n_k})=a$".
In the forward direction if $x_n$ converges to a value $a$ then every convergent subsequence of $x_n$ must also converge to $a$. I'm not entirely sure where else to go from here. Intuitively I know that what I'm trying to prove must be true since if $x_n$ converges to $x$ then there exists an $N\in\mathbb{N}$ such that $\forall n\geq N$ we have infinitely many terms satisfying $\forall \varepsilon >0\Rightarrow x-\varepsilon<x_n$ which implies that $x$ is a limit point. I'm not entirely sure how to rule out a second limit point. Would it be sufficient to show that if $a$ and $b$ are limits of a convergent sequence $x_n$ then $a$ must equal $b$?
In the backwards direction if $x_n$ is bounded and has only one limit point then we know that there exists only one point $x$ such that we can find infinitely many terms satisfying $\forall \varepsilon >0$ $\left | x_n-x\right |<\varepsilon$ which implies that $x_n$ converges to $x$.
I'm quite concerned I am proving this incorrectly since I don't think I am using any facts about subsequences although they are mentioned directly in the definition for a limit point.
|
First show: a point $x_0$ is a limit point of a sequence $\{x_n\}$ if and only if for every $\varepsilon>0$, the open ball $B(x_0,\varepsilon)$ centered at $x_0$ with radius $\varepsilon$ contains infinitely many of the $x_n$.
In the leftward direction of the original problem, let $x_0$ be the unique limit point. For every $\varepsilon>0$, the open ball $B(x_0,\varepsilon)$ contains infinitely many of the $x_n$. Moreover, only finitely many $x_n$ lie outside this ball: otherwise infinitely many terms would lie outside it, and since the sequence is bounded, Bolzano–Weierstrass would give them another limit point, which is not the case since $x_0$ is the unique limit point. Let $N$ be the largest index of these exceptional points. Therefore, for all $n>N$, we have $x_n\in B(x_0,\varepsilon)$, which proves $x_n\rightarrow x_0$ as $n\rightarrow \infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
The probability of two spies on the same team, and the probability after switching? I was watching a TV show, and the setting is pretty simple:
There are 10 players in total, within them there are 2 "spies", players initially form 2 teams of 5 people, and there are 3 rounds of game, where at the end of each round, we can switch a player from team A with a player from team B (or no switches, team members remain unchanged), 2 "spies" win if they are in the same team when the game finishes, otherwise they lose and the rest of the players win
of course this is a strategy game... spies know their identity and will try to manipulate others
TL;DR
here is my question:
one of the players said it is better to maintain the original teams since it is unlikely that the 2 spies are in the same team by the initial (random) arrangement
well, I got that the initial probability of 2 spies in the same team is $4\over9$ which is indeed less than 50%, but randomly switching a player from team A with another player in team B after round 1 gives me the same probability... so there is no dominant strategy?
here is my calculation: (case 1) 2 spies start being in the same team, after round 1, we randomly switch two players between 2 teams, the probability of 2 spies still in the same team is:$${4\over9}*{3\over5}$$
3/5 is the probability of non-spy player being selected from the spy team
(case 2) 2 spies start not being in the same team, after round 1, we randomly switch two players between 2 teams, the probability of 2 spies accidentally being in the same team is:$${5\over9}*({1\over5}*{4\over5}*2) $$
1/5*4/5 is the probability of a spy switched with a non-spy player, since 2 spies can end up in either team, (times 2 is required)
sum of these two probabilities gives me again $4\over9$.
Is my calculation or the understanding of this game correct?
|
If a player is switched at random (i.e., disregarding the element of politics and trickery), then the probability should certainly remain the same.
Consider shuffling a deck of cards, and suppose you "win" if the ace of spades is on top. Before looking at the top card, you're allowed to make any two cards switch places. Would doing so give you a better chance of winning?
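The questioner's two-case arithmetic can be confirmed exactly with rational arithmetic (a sketch; the variable names are mine):

```python
from fractions import Fraction as F

p_same_start = F(4, 9)           # second spy lands in one of the 4 remaining same-team slots of 9
stay_together = F(3, 5)          # a non-spy is picked from the spy team in the swap
p_diff_start = F(5, 9)
reunite = 2 * F(1, 5) * F(4, 5)  # exactly one of the two spies is swapped

p_same_after = p_same_start * stay_together + p_diff_start * reunite
print(p_same_after)  # 4/9
```

The probability after one random swap is again exactly $\frac49$, in line with the card-shuffling analogy.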
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding extrema of $\frac{\sin (x) \sin (y)}{x y}$ I need to find the extrema of the following function in the range $-2\pi$ and $2\pi$ for both $x$ and $y$, but I don't know how to go about doing it since it's a bit weird and not similar to other functions I've seen:
$$f(x,y)=\frac{\sin (x) \sin (y)}{xy}$$
I've evaluated the gradient function as below:
$\nabla f = <\frac{\sin (y) (x \cos (x)-\sin (x))}{x^2 y}, \frac{\sin (x) (y \cos (y)-\sin (y))}{y^2 x}>$
but setting it to zero gives a few answers for $x$ and $y$, none of which seem to be the right answer.
The following is a sketch of the graph in the aforementioned range. According to WolframAlpha, it should have its local extrema at $\{0, 4.49\}$, $\{0, -4.49\}$, $\{4.49, 0\}$, $\{-4.49, 0\}$.
|
As said in comments, if $$F=\frac{\sin (x) \sin (y)}{xy}$$ $$F'_x=\frac{\cos (x) \sin (y)}{x y}-\frac{\sin (x) \sin (y)}{x^2 y}=\frac{\sin (y) (x \cos (x)-\sin (x))}{x^2 y}$$ $$F'_y=\frac{\sin (x) \cos (y)}{x y}-\frac{\sin (x) \sin (y)}{x y^2}=\frac{\sin (x) (y \cos (y)-\sin (y))}{x y^2}$$ where we see the solutions of the equation $z=\tan(z)$ appear; besides the trivial solution $z=0$, the solution relevant here does not have any closed form. It is close to $\frac {3\pi}2$.
Developing $z-\tan(z)$ around $z=\frac {3\pi}2$ as a series, we have $$z-\tan(z)=\frac{1}{z-\frac{3 \pi }{2}}+\frac{3 \pi }{2}+\frac{2}{3} \left(z-\frac{3 \pi
}{2}\right)+O\left(\left(z-\frac{3 \pi }{2}\right)^2\right)$$ and the positive root is given by $$z=\frac{1}{8} \left(3 \pi +\sqrt{81 \pi ^2-96}\right)\approx 4.49340$$ which is extremely close to the solution $(\approx 4.49341)$.
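The root of $z=\tan(z)$ near $3\pi/2$ can be pinned down with a simple bisection, and it agrees with the series approximation above to about $10^{-5}$ (a sketch):

```python
import math

def g(z):
    return math.tan(z) - z

# g changes sign on [3.2, 4.6]; the pole of tan at 3*pi/2 ~ 4.712 lies outside.
lo, hi = 3.2, 4.6
for _ in range(80):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2

approx = (3 * math.pi + math.sqrt(81 * math.pi**2 - 96)) / 8
print(root, approx)
```

Both values sit at about $4.4934$, matching the extrema reported by Wolfram Alpha.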
These are precisely the solutions shown by Wolfram Alpha.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Winning Strategy with Addition to X=0 Problem:
Two players play the following game. Initially, X=0. The players take turns adding any number between 1 and 10 (inclusive) to X. The game ends when X reaches 100. The player who reaches 100 wins. Find a winning strategy for one of the players.
This is my solution, which hopefully you can comment on and verify:
If I have 100 and win, then I must have had a number between 90 and 99 on my last turn. On the turn before that, my opponent must have 89 because then we will have a number between 90 and 99 on our last turn. On the turn before that, I want a number between 79 and 88 so that I could force my opponent to have 89 on their turn. On the turn before that, my opponent should have a 78 so that I can get to a number between 79 and 88. On the turn before that, I want a number between 68 and 77 so that I could force my opponent to have a 78 sum on his/her turn. Continuing in this manner,
we see that our opponent must have the sums on his/her turn: 89,78,67,56,45,34,23,12, and 1.
As the winner, I want to be in the following intervals of sums at each of my turns: 90-99,79-88,68-77,57-66,46-55,35-44,24-33,14-22,2-11.
Thus, the winning strategy is to go first and add 1 to X=0. Then, no matter how our opponent plays, we can always choose a number between 1 and 10 to force our opponent to have one of the losing positions above and so I will win...
|
Community wiki answer so the question can be marked as answered:
Yes, you are correct. Also note the links in the comments for further reading about similar games and a systematic way of solving them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Unique solution of quadratic expression I have a quadratic expression: $x^HAx - B$ in which I have to solve for $x$. $A$ is a 4 x 4 positive definite hermitian matrix and $x$ is a 4 x 1 vector. I have solved it by considering Singular Value Decomposition of A as well as Cholesky decomposition of A but each time I have obtained a different solution for this expression. Is it possible to obtain $x$ such that $x^HAx = B$ (or close) for this problem? I would appreciate suggestions.
|
If I understand correctly, you want to find $x$ such that $x^HAx = B$, where $B \geq 0$ is a real number. Let $v_1, \ldots, v_4$ be an orthonormal basis of eigenvectors of $A$ with $Av_i = \lambda_i v_i$. Write $x = a_1 v_1 + \ldots + a_4 v_4$. Then you have
$$ x^HAx = \left< Ax, x \right> = \sum_{i=1}^4 \lambda_i |a_i|^2 = B. $$
This gives you all possible solutions. If you work over the real numbers, you can see that the set of all solutions is an ellipsoid in $\mathbb{R}^n$ and in particular, the solutions are definitely not unique.
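To make the parametrization concrete, here is a sketch with a hand-picked $2\times2$ Hermitian matrix whose eigenpairs are $\lambda_1=1$, $v_1=(1,i)/\sqrt2$ and $\lambda_2=3$, $v_2=(1,-i)/\sqrt2$; the target value $B=5$ and the coefficient choice are arbitrary:

```python
import math

A = [[2, 1j], [-1j, 2]]                      # Hermitian, positive definite
lam = [1.0, 3.0]
v = [[1 / math.sqrt(2), 1j / math.sqrt(2)],  # orthonormal eigenvectors
     [1 / math.sqrt(2), -1j / math.sqrt(2)]]

B = 5.0
# Fix a_1 = 1 and solve lam_1*|a_1|^2 + lam_2*|a_2|^2 = B for |a_2|.
a = [1.0, math.sqrt((B - lam[0] * 1.0**2) / lam[1])]

x = [a[0] * v[0][k] + a[1] * v[1][k] for k in range(2)]

# Evaluate the quadratic form x^H A x directly.
Ax = [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]
quad = sum(x[i].conjugate() * Ax[i] for i in range(2))
print(quad)  # B, up to rounding, with zero imaginary part
```

Scaling either coefficient by a phase gives further solutions, illustrating the non-uniqueness.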
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1552904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability Deck of Cards In a hand of 13 playing cards from a deck of 52 whats the probability of drawing exactly one king.
My approach would be $${4 \choose 1}*{48 \choose 12}/{52 \choose 13}*{2}$$
I divided by 2 because I felt I had ordered the king and the other 12 cards chosen but this is wrong. Can someone please explain in depth why this is wrong so that I don't make the same mistake again.
|
You have two disjoint subsets: 'kings' and 'everything else'. $\binom{4}{1}\binom{48}{12}$ simply means 'any 1 out of 4' AND 'any 12 out of 48'. There's no order involved: the binomial coefficients already count unordered selections. Order would only enter if you counted ordered picks; for example, to get 3 kings you could count $4\cdot3\cdot2$ ordered choices and then divide by $3!$ because the order doesn't matter, which is exactly what $\binom{4}{3}$ does for you.
Does this answer your question?
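For reference, the unordered count gives the standard hypergeometric probability, computable directly:

```python
from math import comb

# Exactly one king in a 13-card hand: choose 1 of 4 kings and 12 of the 48 others.
p = comb(4, 1) * comb(48, 12) / comb(52, 13)
print(p)  # about 0.4388, with no extra factor of 2 needed
```

Dividing by $2$ would halve this to about $0.219$, which is why the original attempt came out wrong.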
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Are Morrey spaces reflexive? Since $L^{p,0}=L^p$ and $L^1$ is not reflexive, thus in general Morrey space is not reflexive, but how about for $L^{p,\lambda}$ with $1<p<+\infty$ and $0<\lambda<n$, where $n$ is the dimension of domain.
What's more, it seems that the dual spaces of Morrey spaces are not explicitly known so far?
|
For any function space defined as "some supremum is finite" without further conditions like continuity or vanishing, you should expect that:
(a) the space is nonreflexive and nonseparable;
(b) its dual is too large to be described in concrete terms.
The reason for both things is that $\ell_\infty$ embeds into the space (recall that every subspace of a nonreflexive space must be nonreflexive). In case of Morrey spaces, take a well-separated sequence of balls $B_n$ with quickly decaying radii $r_n$, and define, for any sequence $c\in\ell_\infty$,
$$f_c = \sum_n c_n r_n^{(\lambda-n)/p}\chi_{B_n}$$
This is constructed so that
$$\|c\|_\infty\le \|f_c\|_{p,\lambda}\le K\|c\|_\infty$$
for some constant $K$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
limit of $n \left(e-\sum_{k=0}^{n-1} \frac{1}{k!}\right) = ?$ As in the title:
$$
\lim_{n\to\infty} n \left(e-\sum_{k=0}^{n-1} \frac{1}{k!}\right) = ?
$$
Numerically, it seems 0, but how to prove/disprove it?
I tried to show that the speed of convergence of the sum to e is faster than $1/n$ but with no success.
|
I tried to show that the speed of convergence of the sum to e is faster than $1/n,$ but with no success.
Have you tried Stirling's approximation?
In my opinion, a far more interesting question would have been trying to prove that
$$\lim_{n\to\infty}(n+a)\bigg[~e^b-\bigg(1+\dfrac b{n+c}\bigg)^{n+d}~\bigg]~=~b~e^b~\bigg(\dfrac b2+c-d\bigg),$$
since, in this case, $\big(1+\frac b{n+c}\big)^{n+d}$ approaches $e^b$ at a rate comparable to $1/n.$ In your example, however, in order for the product to converge to a “meaningful” non-zero quantity, the order of the multiplication factor should have been somewhere in the range of $n!$, as has already been pointed out by Daniel Fischer in the comments.
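A numerical sanity check of the proposed limit in the simplest case $a=c=d=0$, $b=1$, where the formula predicts $b\,e^b(\frac b2+c-d)=\frac e2$:

```python
import math

n = 10**6
value = n * (math.e - (1 + 1 / n) ** n)
predicted = 1 * math.e**1 * (1 / 2 + 0 - 0)  # b * e^b * (b/2 + c - d)
print(value, predicted)
```

At $n=10^6$ the two agree to about three decimal places, consistent with an $O(1/n)$ correction term.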
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 4
}
|
Is there a sequence such that $a_n\to0$, $na_n\to\infty$ and $\left(na_{n+1}-\sum_{k=1}^n a_k\right)$ is convergent? Is there a positive decreasing sequence $(a_n)_{n\ge 0}$ such that
${\it i.}$ $\lim\limits_{n\to\infty} a_n=0$.
${\it ii.}$ $\lim\limits_{n\to\infty} na_n=\infty$.
${\it iii.}$ there is a real $\ell$ such that $\lim\limits_{n\to\infty} \left(na_{n+1}-\sum_{k=1}^n a_k\right)=\ell.$
Of course without condition ${\it i.}$ constant sequences are non-increasing examples that satisfy the other two conditions, but the requirement that the sequence must be decreasing to zero makes it hard to find!.
|
Assume $\{a_n\}$ exists.
Let $a_n = f (n)$. By i and ii, we have $f (n) = o (1)$ and $f (n) = \Omega (1/n)$. By the Euler–Maclaurin summation formula, we have $$\sum_{k = 1}^{n} f (k) = c_1 F (n) + o (F (n))$$ for a constant $c_1$, where $F$ is an anti-derivative of $f$, and $F (n) = o (n)$ and $F (n) = \Omega (1)$. Then, by iii, we have
$\begin {eqnarray}
n a_{n + 1} - \sum_{k = 1}^{n} a_k & = & n f (n + 1) - \left(c_1 F (n) + o (F (n)\right) \nonumber \\ & = & c_2 F (n) + o (F (n)) - \left(c_1 F (n) + o (F (n))\right) \nonumber \\ & = & (c_1 - c_2) F (n) + o (F (n)) \nonumber \\ & = & O (F (n)) \nonumber \\ & = & \Omega (1),
\end {eqnarray}$
but $\Omega (1)_{n \to \infty} = \infty$, a contradiction. Hence, $\{a_n\}$ does not exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
}
|
Show that linear transformation is not surjective? Given the matrix, $$M =
\begin{bmatrix}1&7&9&3\\2&15&19&8\\7&52&66&27\\3&4&10&-24\end{bmatrix}$$
Show that the linear transformation $T_m: \mathbb R^4 \to \mathbb R^4$ defined by the multiplication of column vectors of the left by $M$ is not surjective by exhibiting a column vector
$$\begin{bmatrix}a\\b\\c\\d\end{bmatrix}$$
not in the image of $T_m$, i.e. such that the system
$$M\begin{bmatrix}x\\y\\z\\t\end{bmatrix}=
\begin{bmatrix}a\\b\\c\\d\end{bmatrix}$$
has no solution.
Info needed:
row reduced $M$: $$\begin{bmatrix}1&0&2&0\\0&1&1&0\\0&0&0&1\\0&0&0&0\end{bmatrix}$$
row reduced $M^T$: $$\begin{bmatrix}1&0&1&0\\0&1&3&0\\0&0&0&1\\0&0&0&0\end{bmatrix}$$
|
A very important identity that you may have seen in your course so far is $$\text{Im}(M)^{\perp}=\text{Ker}\left(M^{T}\right).$$
This is true because if $\vec{y}\in\text{Im}(M)^\perp$ then by definition it
satisfies $\langle\vec{y}, M\vec{x}\rangle=0$
for all $\vec{x}$ and so $$\langle\vec{y}, M\vec{x}\rangle=\vec{y}^{T}M\vec{x}=\left(M^{T}\vec{y}\right)^{T}\vec{x}=\langle M^{T}\vec{y},\vec{x}\rangle=0$$
for all $\vec{x}$
, and so $M^{T}\vec{y}$ must be the all zero vector, and hence $\vec{y}\in\text{Ker}(M^{T})$. Now, since $\text{Im}(M)^{\perp}=\text{Ker}\left(M^{T}\right)$, any nontrivial vector in $\text{Ker}(M^{T})$
cannot be in the image of $M$
since it lies in the orthogonal complement of the image. So you need only find a non-zero vector in $\text{Ker}(M^{T})$.
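As a concrete check (a numpy sketch, not required for the proof): the row-reduced $M^T$ gives the relations $y_1+y_3=0$, $y_2+3y_3=0$, $y_4=0$, so $(-1,-3,1,0)$ spans $\text{Ker}(M^T)$, and appending it to $M$ as an extra column raises the rank, confirming it is not in the image.

```python
import numpy as np

M = np.array([[1, 7, 9, 3],
              [2, 15, 19, 8],
              [7, 52, 66, 27],
              [3, 4, 10, -24]], dtype=float)

# From the row-reduced M^T: y1 + y3 = 0, y2 + 3*y3 = 0, y4 = 0
v = np.array([-1.0, -3.0, 1.0, 0.0])

print(M.T @ v)                 # the zero vector, so v is in Ker(M^T)
rank_M = np.linalg.matrix_rank(M)
rank_aug = np.linalg.matrix_rank(np.column_stack([M, v]))
print(rank_M, rank_aug)        # the rank jumps, so v is not in Im(M)
```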
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How can we choose $z$ to make this series absolute convergent? I have the following series:
$\sum_{n=0}^\infty \frac{n^{\ln (n)}}{3^n} (z+2)^n$
The question is, how can we choose $z \in \mathbb{C}$ to make this absolute convergent?
At first sight it seemed impossible to have such a $z$, since $n^{\ln n}$ should be way, way bigger than $3^n$ as $n \rightarrow \infty$. Am I wrong? Any hints, ideas?
Thanks!
|
The reciprocal of the radius of convergence, as given by the Cauchy-Hadamard Theorem is
$$
\begin{align}
\limsup_{n\to\infty}\left(\frac{n^{\log(n)}}{3^n}\right)^{1/n}
&=\frac13\limsup_{n\to\infty}n^{\log(n)/n}\\
&=\frac13\limsup_{n\to\infty}e^{\log(n)^2/n}\\
&=\frac13e^{\limsup\limits_{n\to\infty}\log(n)^2/n}\\[3pt]
&=\frac13
\end{align}
$$
Therefore, for $\left|z+2\right|\lt3$, the series is absolutely convergent.
For $\left|z+2\right|=3$, the absolute value of the terms of the series is $n^{\log(n)}$, which does not tend to $0$.
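The Cauchy-Hadamard computation can be sanity-checked numerically (a standard-library sketch): the $n$-th root of the coefficient equals $\exp(\log(n)^2/n)/3$, which tends to $1/3$.

```python
import math

def nth_root(n):
    # (n^{log n} / 3^n)^{1/n} = exp(log(n)^2 / n) / 3, computed in log-space
    return math.exp(math.log(n) ** 2 / n) / 3

for n in (10, 10**3, 10**6):
    print(n, nth_root(n))   # approaches 1/3 = 0.333...
```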
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Does this type of "cyclic" matrix have a name? Let $\{a_1, a_2, ..., a_n\} \subset \mathbb{C}$ and consider the matrix of the form
$$
\begin{bmatrix}
a_1 & a_2 & ... & a_{n-1} & a_n\\
a_2 & a_3 & ... & a_n & a_1\\
.\\
.\\
.\\
a_n & a_1 & ... & a_{n-2} & a_{n-1}
\end{bmatrix}
$$
Does this type of matrix have a specific name?
|
Yes, it is a type of Circulant Matrix.
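To see the pattern concretely, here is a small sketch (the helper name `left_circulant` is mine) that builds such a matrix by cyclically shifting each row one step to the left, as in the question:

```python
import numpy as np

def left_circulant(a):
    # Row i is the first row cyclically shifted left by i places
    n = len(a)
    return np.array([np.roll(a, -i) for i in range(n)])

print(left_circulant([1, 2, 3, 4]))
```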
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Determine the sign of the integral $\int_0^\pi x\cos x\,dx$ without calculating it
Without explicitly evaluating, determine the sign of the integral $$\int_0^{\pi} x\cos(x)dx$$
I know $x\cos(x) > 0$ when $0 < x < {\pi}/2$ and $x\cos(x) < 0$ when $\pi/2 < x < \pi$, and, in fact, the end result is negative, but I'm unsure of where to go to show this. Do I now need to split the integral up into two regions and manipulate the function?
Thanks!
|
$$\int_0^{\pi} x\cos(x)dx=\int_0^{\frac \pi 2} x\cos(x)dx+\int_{\frac \pi 2} ^{\pi} x\cos(x)dx$$
let $u=\pi-x$ in the second integral
$$\int_0^{\pi} x\cos(x)dx=\int_0^{\frac \pi 2} x\cos(x)dx - \int_0^{\frac \pi 2} (\pi-u)\cos(u)du$$
$$=\int_0^{\frac \pi 2} (2u-\pi)\cos(u)du$$
which is negative because $\cos(u)\ge 0$ and $(2u-\pi )\le 0 $ whenever $0 \le u \le \frac \pi 2$
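A numerical sanity check agrees (a midpoint-rule sketch using only the standard library): the integral is negative, and in fact close to $-2$, its exact value.

```python
import math

# Midpoint-rule estimate of the integral of x*cos(x) over [0, pi]
N = 100_000
h = math.pi / N
total = h * sum((i + 0.5) * h * math.cos((i + 0.5) * h) for i in range(N))
print(total)   # close to -2, the exact value of the integral
```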
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
Recurrence relation for the determinant of a tridiagonal matrix Let
$$f_n := \begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{vmatrix}$$
Apparently, the determinant of the tridiagional matrix above is given by the recurrence relation
$$f_n = a_n f_{n-1} - c_{n-1} b_{n-1}f_{n-2}$$
with initial values $f_0 = 1$ and $f_{-1} = 0$ (according to Wikipedia). Can anyone please explain to me how they came to this recurrence relation?
I don't really understand how to derive it.
|
For $n \ge 2$ using Laplace expansion on the last row gives
\begin{align}
f_n &=
\begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-3} \\
& & & c_{n-3} & a_{n-2} & b_{n-2} \\
& & & & c_{n-2} & a_{n-1} & b_{n-1} \\
& & & & & c_{n-1} & a_n
\end{vmatrix}
\\
&=
(-1)^{2n-1}
c_{n-1}
\begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-3} \\
& & & c_{n-3} & a_{n-2} \\
& & & & c_{n-2} & b_{n-1}
\end{vmatrix}
+ (-1)^{2n}
a_n
\begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-2} \\
& & & c_{n-2} & a_{n-1}
\end{vmatrix}
\\
&=
- c_{n-1}
(-1)^{2(n-1)}
b_{n-1}
\begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-3} \\
& & & c_{n-3} & a_{n-2}
\end{vmatrix}
+ a_n f_{n-1} \\
&= a_n f_{n-1} - c_{n-1} b_{n-1} f_{n-2}
\end{align}
as recurrence relation.
For the initial conditions:
From comparing the above formula with matrices for $n=1$ and $n=2$ we get:
$$
f_1 = a_1 f_0 - c_0 b_0 f_{-1} \overset{!}{=} a_1 \\
f_2 = a_2 f_1 - c_1 b_1 f_0 \overset{!}{=} a_1 a_2 - c_1 b_1
$$
The latter implies $f_0 = 1$ and the former $f_{-1} = 0$.
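The recurrence is easy to implement and check against a general determinant routine (a sketch; the function name `tridiag_det` and the test matrix are mine):

```python
import numpy as np

def tridiag_det(a, b, c):
    # f_{-1} = 0, f_0 = 1, f_n = a_n f_{n-1} - c_{n-1} b_{n-1} f_{n-2}
    f_prev, f = 0.0, 1.0
    for k in range(len(a)):
        f_prev, f = f, a[k] * f - (c[k - 1] * b[k - 1] * f_prev if k > 0 else 0.0)
    return f

a = [2.0, 5.0, 3.0, 7.0]   # diagonal
b = [1.0, 4.0, 2.0]        # superdiagonal
c = [3.0, 1.0, 6.0]        # subdiagonal
M = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
print(tridiag_det(a, b, c), np.linalg.det(M))   # both equal 7
```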
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Evaluate $\int \frac{x+1}{(x^2-x+8)^3}\, dx$ Could you give me a hint on how to find $$\int \frac{x+1}{(x^2-x+8)^3}\, dx$$
It doesn't seem like partial fractions are the way to go here, and using the integration by parts method seems tedious.
I have also tried substituting $(x^2-x+8)$ but it gets even more complicated then.
Is there a way to solve this using only basic formulas?
|
We don't need to use any integration by parts.
We start by completing the square
$$\int \frac{x+1}{(x^2-x+8)^3} dx = \int \frac{x-\frac{1}{2}+\frac{3}{2}}{((x-\frac{1}{2})^2+\frac{31}{4})^3} dx =$$
$$\underbrace{ \int \frac{x-\frac{1}{2}}{((x-\frac{1}{2})^2+\frac{31}{4})^3} dx}_{=:I_1}+ \frac{3}{2}\underbrace{\int\frac{1}{((x-\frac{1}{2})^2+\frac{31}{4})^3}dx}_{=:I_2}$$
For $I_1$ use the substitution $t=(x-\frac{1}{2})^2+\frac{31}{4}$; then $\frac{dt}{dx}= 2(x-\frac{1}{2})$, so
$$I_1 = \frac{1}{2}\int \frac{1}{t^3}dt = -\frac{1}{4t^2}+C$$
For $I_2$ first substitution $x-\frac{1}{2}=t$
$$I_2 = \int \frac{1}{(t^2+\frac{31}{4})^3}dt$$
then rearrange
$$ \int \frac{1}{(t^2+\frac{31}{4})^3}dt = \frac{4^3}{31^3}\int \frac{1}{{\left(1+\left(\frac{2t}{\sqrt{31}}\right)^2\right)}^3}dt.$$
Now another substitution $\frac{2t}{\sqrt{31}}=z$ gives us the following
$$\frac{4^3}{31^3}\int \frac{1}{{\left(1+\left(\frac{2t}{\sqrt{31}}\right)^2\right)}^3}dt = \left(\frac{4}{31}\right)^\frac{5}{2}\int \frac{1}{(1+z^2)^3}dz.$$
The last substitution $z=\tan{y}$ and $dz= \frac{1}{\cos^2y}dy$, and we get something much simpler
$$\left(\frac{4}{31}\right)^\frac{5}{2}\int \frac{1}{(1+z^2)^3}dz = \left(\frac{4}{31}\right)^\frac{5}{2}\int \frac{1}{\cos^2y}\cos^6(y)dy= \left(\frac{4}{31}\right)^\frac{5}{2}\int \cos^4(y)dy$$
Now we use that $\cos^2 y = {1 + \cos(2y) \over 2}$ to show that $\cos^4 y = \frac{\cos(4y) + 4\cos(2y)+3}{8}$ and thus
$$\left(\frac{4}{31}\right)^\frac{5}{2}\int \cos^4(y)dy = \left(\frac{4}{31^\frac{5}{2}}\right)\int (\cos(4y) + 4\cos(2y)+3)dy$$
which is easy to find. Now you have to go backwards with all the substitutions to get the result with variable $x$ only.
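If you want to confirm the end result after undoing all the substitutions, a computer algebra system can differentiate an antiderivative back to the integrand (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (x + 1) / (x**2 - x + 8)**3
F = sp.integrate(integrand, x)
# The derivative of the antiderivative minus the integrand simplifies to 0
print(sp.simplify(sp.diff(F, x) - integrand))
```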
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Easy way of memorizing values of sine, cosine, and tangent My math professor recently told us that she wanted us to be able to answer $\sin\left(\frac{\pi }{2}\right)$ in our head on the snap. I know I can simply memorize the table for the test by this Friday, but I may likely forget them after the test. So is there a trick or pattern you guys usually use to remember it? For example, SOHCAHTOA tells us what sine, cosine, and tangent really mean.
If not, I will just memorize the table. But I just wanted to know what memorization techniques you use. I feel this is the perfect place to ask, because I bet a bunch of people on math.stackexchange also had to go through the same thing freshman year of college.
Oh here is a picture of the unit circle:
|
First of all, you know the square's diagonal is $\sqrt 2$ times its side, so the sine and cosine of $45^\circ$ are $1/\sqrt 2 = \sqrt 2/2$ and the tangent is $1$.
Recall also the isosceles right triangle and find the described ratios in it
(see https://commons.wikimedia.org/wiki/File:Reglas.svg)
For the rest of the first quarter just memorize that the common 30-60-90 degree triangle has its shortest to longest edge ratio of $1:2$ (see the image linked above) so sine reaches $1/2$ at one-third of the right angle. And, from the other corner of the triangle, cosine of two-thirds of a right angle is $1/2$, too (since it is a ratio of the same two lengths).
Now, from the Pythagorean theorem, the third side is $\sqrt{1^2-(1/2)^2} = \sqrt 3/2$ of the hypotenuse, hence $\sin 60^\circ$ and $\cos 30^\circ$.
Finally tangent of $60^\circ$ is $\frac{\sqrt 3}2 : \frac 12 = \sqrt 3$ (tangent is an increasing function in the first quarter and $60^\circ > 45^\circ$, so $\tan 60^\circ > 1$) and tangent of $30^\circ$ is a reciprocal of it.
The last three rows of your table follow directly from the first row and from trigonometric functions' main properties — being even, odd and periodic. Just familiarize with their graphs and find specific points on them. Pay attention to different symmetries and translations between them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1553990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62",
"answer_count": 12,
"answer_id": 8
}
|
Showing the integral of a function is finite almost everywhere Suppose$\ E \subset \mathbb R$ is closed. Let$\ d(y) = \inf \{|x-y| : x \in E \} $ and let $\ M(x) = \int_0^1\frac{d^a(y)}{|x-y|^{(1+a)}} dy $ , for some arbitrary constant $a$.
Show that $\ M(x)$ is finite everywhere in $E$ except on a set of measure zero. We are given the hint to integrate$\ M(x)$ over $E$.
I had the idea to use Fubini's Theorem to interchange the order of the integrals, obtaining
$$\int_E \int_0^1\frac{d^a(y)}{|x-y|^{(1+a)}} \,dydx = \int_0^1\int_E\frac{d^a(y)}{|x-y|^{(1+a)}} \,dxdy $$
hoping this would allow us to simplify the expression or obtain some new information, but I'm unsure how to continue or if this is even the right way to approach the problem.
|
First of all, observe that it is enough to extend the $y$ integration over $(0,1)\cap E^c$ only because $d(y)=0$ on $E$. Also, since this set is open, I can write it as a disjoint union of open intervals $I_n$, and $M(x)=\sum \int_{I_n} d^a(y)/|x-y|^{1+a}\, dy$.
Now let's look at $\int_E M(x)\, dx$ (as you already did above), and consider the contribution coming from a fixed interval $y\in I_n=I=(c,d)$ to this. By Fubini, this is bounded by
$$
\int_c^d dy\, d^a(y) \int_{(0,1)\setminus (c,d)} \frac{dx}{|x-y|^{1+a}} .\quad\quad\quad\quad (1)
$$
We can evaluate the $x$ integral. This will be $\lesssim \max\{ (y-c)^{-a}, (d-y)^{-a}\}= d^{-a}(y)$; the equality holds because $(c,d)$ is a component of $E^c$, so $c,d\in E$ (unless $c=0$ or $d=1$). Thus (1) is $\lesssim d-c$ and since our intervals are disjoint and contained in $(0,1)$, we conclude that $\int_E M(x)\, dx <\infty$, as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let S be a cubic spline that has knots t0 < t1 < · · · < tn. Let $S$ be a cubic spline that has knots $t_0 < t_1 < \cdots < t_n$. Suppose that
on the two intervals $[t_0, t_1]$ and $[t_2, t_3]$, S reduces to linear polynomials. What does
the polynomial S look like on the interval $[t_1, t_2]$ (linear, quadratic, or cubic)?
I feel like this may be asking me something trivial, since "cubic spline" might imply something cubic, i.e. $x^3$. I'm not sure though. I know that the choice of degree most frequently made for a spline function is 3. I think I also read that by Taylor's Theorem the two interval polynomials are forced to be the same, though I could have misread. Any advice or links would be great; this is sort of a vague topic I'm guessing.
|
The shape of S within the interval $[t_1,t_2]$ could be linear, quadratic or cubic. Each "segment" of this cubic spline in between two knots is a cubic polynomial on its own. Therefore, the shape could look linear, quadratic or cubic depending on the control point configuration. If your cubic spline is required to be $C^1$ continuous, then the shape of S within $[t_1,t_2]$ will not be linear unless its neighboring segments are collinear.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Conditional expectation of Y given X equals expectation of X? Let $Y\vert X \sim N(X,X^2)$ We can write
$$E[Y] = E[E[Y\vert X]] =E[X]$$
Where the first equality follows from the law of iterated expectations, and the second equality confuses me. Where does the second equality come from? $X$ is distribution uniformly over $[0,1]$ if that helps.
I thought maybe if I wrote out the integrals (using the definition of expected value) the result would follow, but I did not see an obvious way to get from the integrals to $E[X]$, so I don't think that is the correct approach.
Thanks.
|
If $$Y \mid X \sim \operatorname{Normal}(\mu = X, \sigma^2 = X^2),$$ then what is $$\operatorname{E}[Y \mid X]?$$ This is obviously simply $X$: Given the value of $X$, $Y$ is normal with mean $\mu = X$, thus the expected value of $Y$ given $X$ is $X$. So $$\operatorname{E}[Y \mid X] = X.$$
Next, just take the expectation with respect to $X$: we have $$\operatorname{E}[\operatorname{E}[Y \mid X]] = \operatorname{E}[X].$$ Note that the two expectations are not with respect to the same variable: the inner expectation is with respect to $Y$ for a given $X$; the outer expectation is with respect to $X$ alone.
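A quick simulation illustrates the identity (a standard-library sketch; the uniform choice of $X$ is the one mentioned in the question, and the seed is arbitrary):

```python
import random

random.seed(0)
N = 200_000
# X ~ Uniform(0, 1); given X, Y ~ Normal(mean = X, sd = X), i.e. variance X^2
ys = []
for _ in range(N):
    x = random.random()
    ys.append(random.gauss(x, x))
print(sum(ys) / N)   # close to E[X] = 1/2
```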
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Maximum likelihood estimator for family of uniform distributions Question:
Based on the random sample $Y_1 = 6.3$ , $Y_2 = 1.8$, $Y_3 = 14.2$, and $Y_4 = 7.6$, use the method of maximum likelihood to estimate the parameter $\theta$ in the uniform pdf
$f_Y(y;\theta) = \frac{1}{\theta}$ , $0 \leq y \leq \theta$
My attempt:
L($\theta$) = $\theta^{-n} $
So, to maximise L($\theta$), $\theta$ must be minimum, and so $\theta$ = min($Y_i$)
But the answer is $\theta$ = max($Y_i$)
Where am I going wrong?
|
If you take the minimum $Y$ (or anything smaller than the maximum) as your estimate for $\theta$, the density of any observation with $Y>\theta$ is zero, so the likelihood of the observed sample is zero, not $\theta^{-n}$. The formula $L(\theta)=\theta^{-n}$ is only valid for $\theta \ge \max_i Y_i$; on that range it is decreasing in $\theta$, so the MLE is $\hat\theta = \max_i Y_i$.
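The point is easiest to see by writing the likelihood with its support made explicit (a small sketch; the function name `likelihood` is mine):

```python
def likelihood(theta, data):
    # L(theta) = theta^{-n} when theta >= max(data); otherwise the
    # observed sample has density zero, so L(theta) = 0
    n = len(data)
    return theta ** (-n) if theta >= max(data) else 0.0

data = [6.3, 1.8, 14.2, 7.6]
for theta in (10.0, 14.2, 20.0):
    print(theta, likelihood(theta, data))   # the maximum is at theta = max(data)
```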
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do I prove that among any $5$ integers, you are able to find $3$ such that their sum is divisible by $3$? How do I prove that among any $5$ integers, you are able to find $3$ such that their sum is divisible by $3?$
I realize that this is a number theory question and we use modular arithmetic, but I'm unsure of where to begin with this specific situation.
|
Consider the following seven numbers:
$a_1\\a_2\\a_3\\a_4\\a_5\\a_1+a_2+a_3+a_4\\a_2+a_3+a_4+a_5$
Since there are seven numbers and only three residue classes modulo $3$, three of them must have the same remainder $r$. If all three are among the first five, their sum is $\equiv 3r \equiv 0 \pmod 3$. Otherwise the three contain at most two four-element sums, hence at least one single $a_k$, and $a_k$ is a summand of a four-element sum that is also among the three (two distinct indices cannot both lie outside $\{1,2,3,4\}$, nor both outside $\{2,3,4,5\}$, so such a pairing always exists). The difference of that four-element sum and $a_k$ is a sum of three of the original numbers, and it is divisible by $3$ because the two have the same remainder.
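Since only remainders modulo $3$ matter, the claim can also be verified exhaustively over all $3^5$ residue patterns (a standard-library sketch):

```python
from itertools import combinations, product

def has_triple(nums):
    # True if some 3 of the numbers have a sum divisible by 3
    return any(sum(t) % 3 == 0 for t in combinations(nums, 3))

# Every choice of 5 residues modulo 3 contains such a triple
print(all(has_triple(p) for p in product(range(3), repeat=5)))
```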
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
}
|
Volume bounded by elliptic paraboloids Find the volume bounded by the elliptic paraboloids given by $z=x^2 + 9 y^2$ and $z= 18- x^2 - 9 y^2$.
First I found the intersection region, and I got $x^2+ 9 y^2 =1$. I think this will be the region of integration; now, what will be the integrand? Please help me.
|
You are wrong about the intersection region. Equating the two expressions for $z$ gives
$$x^2+9y^2=18-(x^2+9y^2)$$
$$2(x^2+9y^2)=18$$
$$x^2+9y^2=9$$
and thus here $z=9$.
The volume of your intersection can be divided into two parts: $0\le z\le 9$, where the restrictions on $x$ and $y$ are $x^2+9y^2\le z$, and $9\le z\le 18$, where the restrictions on $x$ and $y$ are $18-(x^2+9y^2)\ge z$. You can see that those two parts have equal shapes and sizes and thus equal volumes, but that is not necessary to use.
So for each region, for each $z_0$ find the area of the cross-section of your region with the plane $z=z_0$, which is an ellipse so the area is easy to find and is an expression in $z_0$. Then integrate that area over $z$ between the limits I gave. Or if you like, use a triple integral for each region. It is also possible to do a double integral over the area $x^2+9y^2=9$.
Since you ask, I'll give more details. I prefer the single-integral approach, so I'll show that here.
For the lower region $x^2+9y^2\le z$ for $0\le z\le 9$, we can use our knowledge of conic sections to see that for a given $z$ that is an ellipse with major axis $a=\sqrt z$ over the $x$-coordinate and minor axis $b=\frac{\sqrt z}3$ over the $y$-coordinate. We can use the formula for the area of an ellipse
$$A=\pi ab=\pi(\sqrt z)\left(\frac{\sqrt z}3\right)=\frac{\pi z}3$$
We now find the volume of that region with
$$V_1=\int_0^9 \frac{\pi z}3\,dz$$
For the upper region $x^2+9y^2\le 18-z$ for $9\le z\le 18$, we can use our knowledge of conic sections to see that for a given $z$ that is an ellipse with major axis $a=\sqrt{18-z}$ over the $x$-coordinate and minor axis $b=\frac{\sqrt{18-z}}3$ over the $y$-coordinate. We can use the formula for the area of an ellipse
$$A=\pi ab=\pi(\sqrt{18-z})\left(\frac{\sqrt{18-z}}3\right)=\frac {\pi(18-z)}3$$
We now find the volume of that region with
$$V_2=\int_9^{18} \frac{\pi(18-z)}3\,dz$$
Your total volume is then $V_1+V_2$.
I like this approach since it is just a pair of single integrals, each of which is very easy. Your question seems to assume the double-integral approach. Let me know if those are the bounds you really want.
Here is the double-integral, if you really want it.
We saw that the largest possible area for a given $z$ is $x^2+9y^2\le 9$. We get from that
$$-1\le y\le 1, \qquad -3\sqrt{1-y^2}\le x\le 3\sqrt{1-y^2}$$
and the bounds on $z$ from your two original conditions are
$$x^2+9y^2\le z\le 18-x^2-9y^2$$
So the appropriate double integral is
$$\int_{-1}^1 \int_{-3\sqrt{1-y^2}}^{3\sqrt{1-y^2}} [(18-x^2-9y^2)-(x^2+9y^2)]\,dx\,dy$$
Good luck with that!
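As a check on both approaches, a crude midpoint-rule evaluation of the double integral (standard library only; the grid size is arbitrary) lands on the exact value $V_1+V_2 = 27\pi$:

```python
import math

def volume(n=400):
    # Midpoint rule over the bounding box of the ellipse x^2 + 9y^2 <= 9,
    # integrating (18 - x^2 - 9y^2) - (x^2 + 9y^2) = 18 - 2x^2 - 18y^2
    hx, hy = 6.0 / n, 2.0 / n
    total = 0.0
    for i in range(n):
        x = -3.0 + (i + 0.5) * hx
        for j in range(n):
            y = -1.0 + (j + 0.5) * hy
            if x * x + 9 * y * y <= 9:
                total += 18 - 2 * x * x - 18 * y * y
    return total * hx * hy

print(volume(), 27 * math.pi)   # both about 84.8
```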
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Differentiability of $f(x+y) = f(x)f(y)$ Let $f$: $\mathbb R$ $\to$ $\mathbb R$ be a function such that $f(x+y)$ = $f(x)f(y)$ for all $x,y$ $\in$ $\mathbb R$. Suppose that $f'(0)$ exists. Prove that $f$ is a differentiable function.
This is what I've tried:
Using the definition of differentiability and taking arbitrary $x_0$ $\in$ $\mathbb R$.
$\lim_{h\to 0}$ ${f(x_0 + h)-f(x_0)\over h}$ $=$ $\cdots$ $=$ $f(x_0)$$\lim_{h\to 0}$ ${f(h) - 1\over h}$.
Then since $x_0$ arbitrary, using $f(x_0+0) = f(x_0) = f(x_0)f(0)$ for $y = 0$, can I finish the proof?
|
By assumption
$$
\lim_{h \to 0}\frac{f(h) - f(0)}{h} = f(0)\lim_{h \to 0}\frac{f(h) - 1}{h}
$$
exists,
implying
$$
l := \lim_{h \to 0}\frac{f(h)-1}{h}
$$
exists;
hence,
if $x \in \Bbb{R}$, then
$$
\frac{f(x+h) - f(x)}{h} = \frac{f(x)[f(h) - 1]}{h} \to f(x)l
$$
as $h \to 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Interior and exterior of a subset in the relative topology of the ambient space Let $X = (0, 4] \cup \{ 6 \} \cup [10, 11] \subset \Bbb R$. How do I find the interior and exterior of $A = (0, 2] \cup \{ 6 \} \cup (10, 11]$ in $X$?
Any help would be very welcome! Thanks in advance.
|
$A^o=(0,2)\cup\{6\}\cup(10, 11]$. ($B_1(6)\cap X=\{6\}\subset A$, and for each $x\in(10,11]$ a small relative ball stays inside $A$; $2$ is not interior, since every relative ball around it meets $(2,4]\setminus A$.)
$\operatorname{ext} A=(2,4]$. (Note that $10\notin \operatorname{ext} A$: every relative neighbourhood $[10,10+\varepsilon)\cap X$ meets $(10,11]\subset A$, so $10$ lies in the closure of $A$, not in the interior of $X\setminus A$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Zero divided by zero must be equal to zero What is wrong with the following argument (if you don't involve ring theory)?
Proposition 1: $\frac{0}{0} = 0$
Proof: Suppose that $\frac{0}{0}$ is not equal to $0$
$\frac{0}{0}$ is not equal to $0 \Rightarrow \frac{0}{0} = x$ , some $x$ not equal to $0$ $\Rightarrow$ $2(\frac{0}{0}) = 2x$ $\Rightarrow$ $\frac{2\cdot 0}{0} = 2x$ $\Rightarrow$ $\frac{0}{0} = 2x$ $\Rightarrow$ $x = 2x$ $\Rightarrow$ $ x = 0$ $\Rightarrow$[because $x$ is not equal to $0$]$\Rightarrow$ contradiction
Therefore, it is not the case that $\frac{0}{0}$ is not equal to $0$
Therefore, $\frac{0}{0} = 0$.
Q.E.D.
Update (2015-12-01) after your answers:
Proposition 2: $\frac{0}{0}$ is not a real number
Proof [Update (2015-12-07): Part 1 of this argument is not valid, as pointed out in the comments below]:
Suppose that $\frac{0}{0}= x$, where $x$ is a real number.
Then, either $x = 0$ or $x$ is not equal to $0$.
1) Suppose $x = 0$, that is $\frac{0}{0} = 0$
Then, $1 = 0 + 1 = \frac{0}{0} + \frac{1}{1} = \frac{0 \cdot 1}{0 \cdot 1} + \frac{1 \cdot 0}{1 \cdot 0} = \frac{0 \cdot 1 + 1 \cdot 0}{0 \cdot 1} = \frac{0 + 0}{0} = \frac{0}{0} = 0 $
Contradiction
Therefore, it is not the case that $x = 0$.
2) Suppose that $x$ is not equal to $0$.
$x = \frac{0}{0} \Rightarrow 2x = 2 \cdot \frac{0}{0} = \frac{2 \cdot 0}{0} = \frac{0}{0} = x \Rightarrow x = 0 \Rightarrow$ contradiction
Therefore, it is not the case that $x$ is a real number that is not equal to $0$.
Therefore, $\frac{0}{0}$ is not a real number.
Q.E.D.
Update (2015-12-02)
If you accept the (almost) usual definition, that for all real numbers $a$, $b$ and $c$, we have $\frac{a}{b}=c$ iff $ a=cb $, then I think the following should be enough to exclude $\frac{0}{0}$ from the real numbers.
Proposition 3: $\frac{0}{0}$ is not a real number
Proof: Suppose that $\frac{0}{0} = x$, where $x$ is a real number.
$\frac{0}{0}=x \Leftrightarrow x \cdot 0 = 0 = (x + 1) \cdot 0 \Leftrightarrow \frac{0}{0}=x+1$
$ \therefore x = x + 1 \Leftrightarrow 0 = 1 \Leftrightarrow \bot$
Q.E.D.
Update (2015-12-07):
How about the following improvement of Proposition 1 (it should be combined with a new definition of division and fraction, accounting for the $\frac{0}{0}$-case)?
Proposition 4: Suppose $\frac{0}{0}$ is defined, so that $\frac{0}{0} \in \mathbb{R}$, and that the rule $a \cdot \frac{b}{c} = \frac{a \cdot b}{c}$ holds for all real numbers $a$, $b$ and $c$.
Then, $\frac{0}{0} = 0$
Proof: Suppose that $\frac{0}{0}=x$, where $x \ne 0$.
$x = \frac{0}{0} \Rightarrow 2x = 2 \cdot \frac{0}{0} = \frac{2 \cdot 0}{0} = \frac{0}{0} = x \Rightarrow x = 0 \Rightarrow \bot$
$\therefore \frac{0}{0}=0$
Q.E.D.
Suggested definition of division of real numbers:
If $b \ne 0$, then
$\frac{a}{b}=c$ iff $a=bc$
If $a=0$ and $b=0$, then
$\frac{a}{b}=0$
If $a \ne 0$ and $b=0$, then $\frac{a}{b}$ is undefined.
A somewhat more minimalistic version:
Proposition 5. If $\frac{0}{0}$ is defined, so that $\frac{0}{0} \in \mathbb{R}$, then $\frac{0}{0}=0$.
Proof: Suppose $\frac{0}{0} \in \mathbb{R}$ and that $\frac{0}{0}=a \ne 0$.
$a = \frac{0}{0} = \frac{2 \cdot 0}{0} = 2a \Rightarrow a = 0 \Rightarrow \bot$
$\therefore \frac{0}{0}=0$
Q.E.D.
|
$0/0$ is an indeterminate form, which means it cannot consistently be assigned a single value:
$(0)(1) = 0 \Rightarrow 0/0 = 1$
$(0)(2) = 0 \Rightarrow 0/0 = 2$
$(0)(3) = 0 \Rightarrow 0/0 = 3$, $\ldots$
From the above, $0/0$ could equally well be any number at all, which makes it meaningless. Hence the term.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1554929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52",
"answer_count": 16,
"answer_id": 15
}
|
Show that the rank of a skew-symmetric matrix is an even number $$A = -A^T$$
I assume that $A$ is not singular.
So $$\det{A} \neq 0$$ Then $$ \det(A) = \det(-A^T) = \det(-I_{n} A^T) = (-1)^n\det(A^T) = (-1)^n\det(A)$$
So I get that $n$ must be even.
But what about odd $n$? I know it has to be singular matrix. Hints?
|
Note: This answer is essentially based on this one by Jason DeVito. I have merely added some details.
I assume $A$ is a real matrix. Note that its rank as a real matrix equals its rank when considered as a complex matrix.
So from now on we consider $A$ as a complex matrix.
It is proved here that all the eigenvalues of $A$ are purely imaginary. Also, we know that for a real matrix, complex eigenvalues come in conjugate pairs. (Since the coefficients of the characteristic polynomial are real).
Since skew-symmetric matrices are diagonalizable over $\mathbb{C}$, we get that there is an even number $2k$ of non-zero eigenvalues $\pm y_1 i,\pm y_2 i,...,\pm y_k i$. Since the rank of a matrix is invariant under similarity, we get that $rank(A)$ equals the rank of its diagonal form, which is trivially $2k$.
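The statement is easy to test numerically for random real skew-symmetric matrices (a numpy sketch; the seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
ranks = []
for n in (3, 4, 5, 6, 7):
    B = rng.standard_normal((n, n))
    A = B - B.T                          # skew-symmetric: A^T = -A
    ranks.append(int(np.linalg.matrix_rank(A)))
print(ranks)                             # every rank is even
```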
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Show that $\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a} \, dz=f(a)$
Let $a\in\Bbb C$ and $r>0$ and denote by $B(a,r)\subseteq \Bbb C$ the open ball of center $a$ and radius $r$. Assume that $f:B(a,r)\to\Bbb C$ is a continuous function and for each $\epsilon>0$ let $\gamma_\epsilon:[0,2\pi]\to \Bbb C$ be given by $\gamma_\epsilon(t)=a+\epsilon e^{it}$. Show that
$$\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a} \, dz = f(a).$$
I tried the following
$$\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a}dz=\lim_{\epsilon\to0^{+}}\frac{1}{2\pi i}\int_0^{2\pi}\frac{f(a+\epsilon e^{it})}{a+\epsilon e^{it}-a}i\epsilon e^{it} \, dt = \lim_{\epsilon\to0^{+}} \frac{1}{2\pi} \int_0^{2\pi}f(a+\epsilon e^{it}) \, dt$$
If I can change the limit and the integral, then it is obvious. I tried to use the Lebesgue bounded convergence theorem to argue that, since $f:B(a,r)\to\Bbb C$ thus on $B(a,r)$ we have $|f(a+\epsilon e^{it})|\le M$, where M is the maximum of $|f(x)|$ on $B(a,r)$.
Is that valid?
|
$f$ is continuous in $a$, therefore for all $\eta > 0$ there exists
a $\delta > 0$ such that
$$
|f(z) - f(a)| < \eta \text{ for all } z \in B(a, \delta) \, .
$$
Then for $0 < \epsilon < \delta$
$$
\left| \frac{1}{2\pi i}\int_{\gamma_\epsilon}\frac{f(z)}{z-a} \, dz - f(a) \right| = \left | \frac{1}{2\pi}\int_0^{2\pi}(f(a+\epsilon e^{it}) - f(a)) \, dt \right|
\le \frac{1}{2\pi} \int_0^{2\pi} \bigl|f(a+\epsilon e^{it}) - f(a) \bigr| \, dt \\
\le \frac{1}{2\pi} \int_0^{2\pi} \eta \, dt = \eta
$$
and the conclusion follows.
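Numerically, the parametrized integral is just the average of $f$ over the small circle, which is why it converges to $f(a)$ as $\epsilon\to0^+$ (a standard-library sketch; the choice of $f$ and $a$ is mine):

```python
import cmath

def circle_average(f, a, eps, m=2000):
    # Parametrizing gamma_eps turns (1/(2*pi*i)) * integral of f(z)/(z-a) dz
    # into the average of f over the circle |z - a| = eps
    return sum(f(a + eps * cmath.exp(2j * cmath.pi * k / m)) for k in range(m)) / m

f = lambda z: z * cmath.exp(z)   # any continuous example
a = 0.3 + 0.2j
for eps in (0.5, 0.1, 0.01):
    print(eps, circle_average(f, a, eps))
print(f(a))                      # the averages approach this value
```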
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
What is the integral of $\frac{x-1}{(x+3)(x^2+1)}$? I've worked with partial fractions to get the integral in the form $$\int\frac{A}{x+3} + \frac{Bx + C}{x^2+1}\,dx$$ Is there a quicker way?
|
Notice that for the $(Bx+C)$ part you should use the following separation: $$\int \frac{x-1}{(x+3)(x^2+1)}\ dx=\int \frac{-2}{5(x+3)}+\frac{2x-1}{5(x^2+1)}\ dx$$
$$=\frac{1}{5}\int \left(-\frac{2}{x+3}+\frac{2x}{x^2+1}-\frac{1}{x^2+1}\right)\ dx$$
$$=\frac{1}{5} \left(-2\int \frac{1}{x+3}\ dx+\int \frac{d(x^2)}{x^2+1}-\int\frac{1}{x^2+1}\ dx\right)$$
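The coefficients in this separation can be double-checked symbolically (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = (x - 1) / ((x + 3) * (x**2 + 1))
decomp = -sp.Rational(2, 5) / (x + 3) + (2*x - 1) / (5 * (x**2 + 1))
print(sp.simplify(f - decomp))   # 0, so the decomposition above is correct
print(sp.integrate(f, x))
```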
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Finding the sum of the infinite series Hi I am trying to solve the sum of the series of this problem:
$$
11 + 2 + \frac 4 {11} + \frac 8 {121} + \cdots
$$
I know its a geometric series, but I cannot find the pattern around this.
|
Your series is a geometric series with common ratio $q =\dfrac{2}{11}$ and first term $a_0 = 11$.
$-1 < \dfrac{2}{11}< 1$ so the series is convergent.
Its sum is equal to
$$a_0 \cdot \dfrac{1}{1-q} = 11 \cdot \dfrac{1}{1-\dfrac{2}{11}} = \dfrac{121}{9}$$
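Summing a few terms confirms the value (a standard-library sketch):

```python
s, term, r = 0.0, 11.0, 2.0 / 11.0
for _ in range(50):
    s += term
    term *= r           # generates 11, 2, 4/11, 8/121, ...
print(s, 121 / 9)       # the partial sums converge to 121/9 = 13.444...
```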
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
Subsequences and convergence. Do all subsequences have to converge to the same limit for the sequence to be convergent? I was asked this question:
Prove that $a_n$ converges if and only if:
$a_{2n},a_{2n+1},a_{3n}$ all converge
I thought this was an easy generic question until I read the hint which said:
Note: It is not required that the three sub-sequences have the same limit. This needs to be shown
This is what is confusing me because I have found two sources stating something different:
Proposition 4.2. A sequence an converges to L ∈ R if and only if every subsequence converges to L.
and
Let $a_n$ be a real sequence. If the subsequence $a_{2n}$ converges to a real number L and the subsequence $a_{2n+1}$ converges to the same number L, then $a_n$ converges to L as well.
So my question is: for a sequence $a_n$ to converge, do its subsequences have to converge to the same limit? (I suspect not.) And if the answer is no, can you help me prove why?
Thanks in advance
|
I think the word "required" in the hint is confusing. Focus instead on the last sentence:
Note: It is not required that the three sub-sequences have the same limit. This needs to be shown
In other words, you need to show that as a result of the given hypotheses, the three sub-sequences do have the same limit. You need to do that precisely because it is a necessary condition for $a_n$ to converge. (For instance, $a_{6n}$ is a subsequence of both $a_{2n}$ and $a_{3n}$, and $a_{6n+3}$ is a subsequence of both $a_{2n+1}$ and $a_{3n}$, which forces all three limits to agree.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
When is a quasiprojective variety Kobayashi hyperbolic? I am looking for some (simple and well-known) sufficient conditions on a quasiprojective variety to be Kobayashi hyperbolic.
I realize that in this generality it may be a complicated (maybe even hopelessly naive) question, so less generality is perfectly OK.
A bit more specifically, I have read in some paper a comment like "this is the complement in a (complex) projective space of a certain number of hyperplanes, so it is Kobayashi hyperbolic". Why is that? What is the statement that seems to be so classical that the author did not include it?
|
I'm certainly far from knowing the subject well. However, what follows quite easily is the following:
Suppose $X$ is Kobayashi hyperbolic. Then $X$ cannot contain any rational curves, and any rational map $f:A\to X$ from an abelian variety (or more generally any complex torus) must be constant.
The reason is simply that the Kobayashi pseudodistance is zero on the complex plane.
The converse statement is an open problem linked to the Lang conjecture, which suggests that a variety $X$ is of general type if and only if $X\neq U$, where $U$ is the closure of the union of the images of all rational maps from an abelian variety to $X$.
Further, it is known that if $X$ is compact then it is Kobayashi hyperbolic if and only if there are no non-constant maps $\mathbb{C}\to X$. (Brody's theorem)
The criterion you read about is probably the following:
Suppose $X=\mathbb{P}^{n} \backslash D$ where $D$ is the union of $2n+1$ hyperplanes in general position. Then $X$ is Kobayashi hyperbolic.
The question whether the complement of a single generic hypersurface of some large enough degree is hyperbolic, is also an open problem I think.
All this (and much more) can be found in the book by Kobayashi.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How do you solve 5th degree polynomials? I looked on Wikipedia for a formula for roots of a 5th degree polynomial, but it said that by Abel's theorem it isn't possible. The Abel's theorem states that you can't solve specific polynomials of the 5th degree using basic operations and root extractions.
Can you find the roots of a specific quintic with only real irrational roots (e.g. $f(x)=x^5+x+2$) using other methods (such as logarithms, trigonometry, or convergent sums of infinite series, etc.)?
Basically, how can the exact values of the roots of such functions be expressed other than a radical (since we know that for some functions it is not a radical)?
If no, is numerical solving/graphing the only way to solve such polynomials?
Edit: I found a link here that explains all the ways that the above mentioned functions could be solved.
|
As mentioned above, no general formula to find all the roots of any 5th degree equation exists, but various special solution techniques do exist. My own favourite:
- By inspection, see if the polynomial has any simple real solutions such as x = 0 or x = 1 or -1 or 2 or -2. If so, divide the poly by (x-a), where a is the found root, and then solve the resultant 4th degree equation by Ferrari's rule.
- If no obvious real root exists, one will have to be found. This can be done by noting that if f(p) and f(-p) have different signs, then a root must lie between x=p and x= -p. We now try the halfway point between p and -p, say q. We then repeat the above procedure, continually decreasing the interval in which the root can be found. When the interval is small enough, we have found a root.
- This is the bisection method; when such a root has been isolated we divide the polynomial by the corresponding linear factor, producing a 4th degree equation which can again be solved by Ferrari's rule or any other method.
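The bisection step can be made concrete with a few lines of code. A Python sketch (the bracket and iteration count are illustrative choices of mine), applied to the quintic $x^5+x+2$ from the question:

```python
def bisect_root(f, a, b, iters=100):
    """Shrink [a, b] by repeated halving, assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(iters):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:       # sign change in [a, m]: keep the left half
            b = m
        else:                 # sign change in [m, b]: keep the right half
            a, fa = m, fm
    return (a + b) / 2.0

f = lambda x: x**5 + x + 2          # the quintic from the question
root = bisect_root(f, -2.0, 1.0)    # f(-2) < 0 < f(1), so a root lies between
```

Incidentally, $x^5+x+2=(x+1)(x^4-x^3+x^2-x+2)$, so the inspection step above would already catch the root $x=-1$ here; bisection is for the cases where inspection fails.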
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 0
}
|
If $f$ is Riemann integrable and $g$ is continuous, what is a condition on $g$ such that $g \circ f$ has the same discontinuity set as $f$? I know that if $f$ is Riemann integrable and $g$ is continuous, then the discontinuity set of $g \circ f$ is contained in the discontinuity set of $f$. How would I go about finding a sufficient condition for $g$ so that the discontinuity sets are equal? I've been playing around with the idea that $g$ has to be strictly continuous, but I haven't been able to design a proof that shows that this is in fact the case.
|
I don't know if Riemann integrability has much influence here. What you are aiming at is that where $f$ is discontinuous you would have to make $g$ map values taken by $f$ near the discontinuity to different values.
For example, if we take $f$ to be the Heaviside step function, we see that $f$ takes the values $0$ and $1$ near its step; but if, for example, $g(x) = x(x-1)$ (which takes the value $0$ at both $0$ and $1$), we would have $g\circ f=0$.
One way to achieve this requirement is for $g$ to be strictly monotone, since that would force $g$ to map the various values taken by $f$ near its discontinuity points to different values.
To get this reasoning more strict it would be useful to use an alternate definition of continuity and limit. I think the following definition would be equivalent to the standard (in metric spaces at least):
Given a function $f$, consider $\overline f(x)=\bigcap f(\Omega_x)$ where $\Omega_x$ are open neighbourhoods of $x$. $f$ is continuous if $\overline f(x) = \{f(x)\}$, and similarly the limit $\lim_{x\to a}f$ is defined as the element of $\overline f(x)$ if it's unique.
The equivalent requirement on $g$ (given that it's continuous) is then that $g(\overline f(x))$ should contain at least two elements whenever $\overline f(x)$ contains at least two elements.
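A small numeric illustration of the Heaviside example (a Python sketch; the sample points and the particular monotone $g$ are my own choices):

```python
# Heaviside step: discontinuous at 0, taking the two values 0 and 1 nearby
f = lambda x: 1.0 if x >= 0 else 0.0

g_bad  = lambda y: y * (y - 1.0)   # continuous, but g(0) = g(1) = 0: the jump is erased
g_mono = lambda y: 2.0 * y + 1.0   # continuous and strictly monotone: the jump survives

xs = [-1.0, -1e-9, 0.0, 1e-9, 1.0]
erased = {g_bad(f(x)) for x in xs}    # a single value -> g∘f is constant, hence continuous
kept   = {g_mono(f(x)) for x in xs}   # two values -> the discontinuity is preserved
```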
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Let $f(x)=\sin^{23}x-\cos^{22}x$ and $g(x)=1+\frac{1}{2}\arctan|x|$,then the number of values of $x$ in the interval $[-10\pi,20\pi]$ Let $f(x)=\sin^{23}x-\cos^{22}x$ and $g(x)=1+\frac{1}{2}\arctan|x|$,then the number of values of $x$ in the interval $[-10\pi,20\pi]$ satisfying the equation $f(x)=\text{sgn}(g(x)),$ is
$(A)6\hspace{1cm}(B)10\hspace{1cm}(C)15\hspace{1cm}(D)20$
My Attempt:
$g(x)=1+\frac{1}{2}\arctan|x|$
$\text{sgn}(g(x))=1$
Because $g(x)$ is positive throughout the real number line.
$f(x)=\text{sgn}(g(x))\Rightarrow \sin^{23}x-\cos^{22}x=1$
We need to find the number of roots of $\sin^{23}x-\cos^{22}x-1=0$ in the interval $[-10\pi,20\pi]$
Periodicity of $\sin^{23}x-\cos^{22}x-1$ is $2\pi$.
But I am not sure whether the function $\sin^{23}x-\cos^{22}x-1$ has one root in the interval $[0,2\pi]$ or not.
If it has one root in the interval $[0,2\pi]$, then the answer must be $15$. Am I correct, or is some other approach applicable here?
Please help me.
|
$$f(x)=\sin^{23}x-\cos^{22}x$$ then
$$f'(x)=\sin x\cos x\left(23\sin^{21}x+22\cos^{20}x\right)=\sin x\cos x \cdot g(x)$$
In the first quadrant, obviously $f'(x) >0$, so $f$ is increasing on $\left(0, \frac{\pi}{2}\right)$.
In the second quadrant, $\sin x \gt 0$, $\cos x \lt 0$ and $g(x) \gt 0$, so $f'(x) \lt 0$ on $\left(\frac{\pi}{2}, \pi\right)$,
and in the third and fourth quadrants, undoubtedly $f(x) \lt 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1555998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A child sits at a computer keyboard and types each of the 26 letters of the alphabet exactly once, in a random order. A child sits at a computer keyboard and types each of the 26 letters of the alphabet
exactly once, in a random order.
How many independent children typists would you need such that the probability
that the word ‘exam’ appears is at least 0.9?
Probability of getting EXAM is 23!/26!. So I decided to do (23!/26!)^n ≥ 0.9. Is that correct ?
|
You're right. The probability of getting "EXAM" is $p=\frac{23!}{26!}$. Now suppose we use $n$ children. If the total number of occurrences we get is $X$, then $P(X=r)={n \choose r} \cdot p^r \cdot (1-p)^{(n-r)}$, according to the theory of Bernoulli trials. Now, $P(X \geq 1) \geq 0.9 \implies P(X=0) \leq 0.1 \implies {n \choose 0} \cdot p^0 \cdot (1-p)^{(n-0)} \leq 0.1 \implies n \log { \left(1-\frac{23!}{26!}\right)} \leq \log 0.1 \implies n \geq 35920$
So, you'll need at least $35920$ children.
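As a sanity check on the arithmetic, the bound can be computed directly (a short Python sketch):

```python
import math

p = math.factorial(23) / math.factorial(26)      # P("exam" appears) = 1/15600
# we need P(at least one occurrence) = 1 - (1-p)^n >= 0.9, i.e. (1-p)^n <= 0.1
n = math.ceil(math.log(0.1) / math.log(1.0 - p))
```

This gives $n = 35920$, matching the bound above.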
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
What is the chance of rolling a die and getting the number six three times at exactly 10 rolls?
What is the chance of rolling a die and getting the number six three times at exactly 10 rolls?
I was asked this question in my statistic class. I thought the way to do this was $(1/6)^3 \times (5/6)^7$, because that is getting six 3 times and not getting it 7 times. However, that's wrong, I figured that it's because that $(1/6)^3$ would be getting 6 three times in a row.
Could you explain how to do this?
|
You’ve computed the probability of rolling six three times and some other number seven times in that order. You also need to account for all of the other ways in which three sixes can come up in ten rolls. The binomial distribution is called for here. If we let $X$ be the number of times a six is rolled, then $$
P(X=3)={10\choose 3}\left(\frac16\right)^3\left(\frac56\right)^7.
$$ As you can see, there’s an extra factor of ${10\choose3}$ that you missed.
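For reference, the numeric value (a Python sketch):

```python
from math import comb

# three sixes in ten rolls: choose which rolls carry the sixes, then multiply
# the per-roll probabilities for that arrangement
p = comb(10, 3) * (1 / 6) ** 3 * (5 / 6) ** 7
```

This comes out to about $0.155$, i.e. $120$ times the value obtained without the binomial coefficient.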
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
When is implication true? If we have $p\implies q$, then the only case the logical value of this implication is false is when $p$ is true, but $q$ false.
So suppose I have a broken soda machine - it will never give me any can of coke, no matter if I insert some coins in it or not.
Let $p$ be 'I insert a coin', and $q$ - 'I get a can of coke'.
So even though the logical value of $p \implies q$ is true (when $p$ and $q$ are false), it doesn't mean the implication itself is true, right? As I said, $p \implies q$ has a logical value $1$, but implication is true when it matches the truth table of implication. And in this case, it won't, because $p \implies q$ is false for true $p$ (the machine doesn't work).
That's why I think it's not right to say the implication is true based on only one row of the truth table. Does it make sense?
|
The fact that some specific values satisfy the formula doesn't mean the formula is true in general. "It is noon now and it rains" is true right now and right here, but at another place or in another time it will turn out false.
Your implication will turn out to be true only if you check that it remains satisfied for every possible combination of its components' values.
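A quick way to convince yourself is to enumerate the whole truth table (a Python sketch, my own illustration):

```python
# material implication: p -> q is false only in the row (p = True, q = False)
implies = lambda p, q: (not p) or q

table = {(p, q): implies(p, q) for p in (True, False) for q in (True, False)}
# a formula holds as a law only if every row of its truth table is satisfied,
# not just the one row that happens to describe the current situation
```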
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
}
|
Prove that $m-n$ divides $p$ Let $a_1,a_2,a_3$ be a non constant arithmetic Progression of integers with common difference $p$ and $b_1,b_2,b_3$ be a geometric Progression with common ratio $r$. Consider $3$ polynomials $P_1(x)=x^2+a_1x+b_1, P_2(x)=x^2+a_2x+b_2,P_3(x)=x^2+a_3x+b_3$. Suppose there exist integers $m$ and $n$ such that $gcd(m,n)=1$ and $P_1(m)=P_2(n),P_2(m)=P_3(n)$ and $P_3(m)=P_1(n)$. Prove that $m-n$ divides $p$.
I made the $3$ equations but am unable to get the result from them. How should I manipulate them. Thanks.
|
(Too long for a comment)
The claim is not true. A counterexample is
$$m=5,\quad n=3,\quad p=3,\quad r=\frac 35,\quad a_1=-11,\quad b_1=\frac{75}{2}.$$
In the following, I'm going to write about how I figured out these values and about that the claim is true if $b_1$ is an integer.
We have
$$P_1(m)=P_2(n)\iff m^2+a_1m+b_1=n^2+a_2n+b_2$$
$$\iff m^2+a_1m+b_1=n^2+(a_1+p)n+b_1r\tag 1$$
$$P_2(m)=P_3(n)\iff m^2+a_2m+b_2=n^2+a_3n+b_3$$
$$\iff m^2+(a_1+p)m+b_1r=n^2+(a_1+2p)n+b_1r^2\tag 2$$
$$P_3(m)=P_1(n)\iff m^2+a_3m+b_3=n^2+a_1n+b_1$$
$$\iff m^2+(a_1+2p)m+b_1r^2=n^2+a_1n+b_1\tag 3$$
From $(2)-(1)$, we have
$$pm+b_1(r-1)=pn+b_1r(r-1)\tag4$$
From $(3)-(2)$, we have
$$pm+b_1r(r-1)=-2pn+b_1(1-r^2)\tag5$$
Now $(4)-(5)$ gives
$$b_1(r-1-r^2+r)=3pn+b_1(r^2-r-1+r^2)\iff pn=b_1(-r^2+r)\tag6$$
Also $-2\times (4)-(5)$ gives
$$pm=b_1(-r+1)\tag 7$$
Hence, from $(6)(7)$, we have
$$pmr=pn,$$
i.e.
$$r=\frac nm.$$
Now from $(7)$, we have
$$pm=b_1\left(-\frac nm+1\right)\Rightarrow pm^2=b_1(m-n)\tag8$$
Note here that $\gcd(m^2,m-n)=1$.
Hence, if $b_1$ is an integer, then we have that $m-n$ divides $p$.
By the way, from $(3)$, we can have $a_1=-m-n-p$.
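The counterexample at the top can be verified in exact arithmetic (a Python sketch using `fractions`):

```python
from fractions import Fraction as F

m, n, p = 5, 3, 3
r, a1, b1 = F(3, 5), F(-11), F(75, 2)
a = [a1, a1 + p, a1 + 2 * p]        # arithmetic progression with common difference p
b = [b1, b1 * r, b1 * r ** 2]       # geometric progression with common ratio r

P = lambda i, x: x * x + a[i] * x + b[i]

# the three cyclic conditions P1(m)=P2(n), P2(m)=P3(n), P3(m)=P1(n) all hold...
cyclic = (P(0, m) == P(1, n)) and (P(1, m) == P(2, n)) and (P(2, m) == P(0, n))
# ...yet m - n = 2 does not divide p = 3
divides = (p % (m - n) == 0)
```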
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Delta Method corollary Consider the Delta Method as stated in van der Vaart Theorem 3.1 at page 26 (you can find the page here https://books.google.co.uk/books?id=UEuQEM5RjWgC&pg=PA32&lpg=PA32&dq=van+der+vaart+theorem+3.1+because+the+sequence+converges+in+distribution&source=bl&ots=mnRJLD8XLC&sig=inIMmSPvWDfrPc6r4U7dnuQ_3OM&hl=it&sa=X&ved=0ahUKEwj2sYfJob3JAhUKox4KHZNcBTUQ6AEILTAC#v=onepage&q=van%20der%20vaart%20theorem%203.1%20because%20the%20sequence%20converges%20in%20distribution&f=false).
Notice that among the sufficient conditions it just requires the function $\phi()$ to be differentiable at $\theta$.
I have understood the proof but I don't know how to show that when $T$ is $N(0,\Sigma)$ then $\phi_{\theta}'(T)$ is $N(0,(\varphi'_{\theta})^T\Sigma (\varphi'_{\theta}))$.
I have found this proof
https://en.wikipedia.org/wiki/Delta_method
but it assumes $\phi()$ differentiable in the entire domain to apply the mean value theorem.
Do you have suggestions on how to proceed?
|
If $T$ follows a Gaussian distribution on $\mathbb{R}^{n}$ with mean $0$ and covariance matrix $\Sigma$, denoted by $T \, \sim \, \mathcal{N}(0,\Sigma)$, and if $A$ is a $n \times n$ real matrix, we have :
$$ AT \, \sim \, \mathcal{N}(0,A \Sigma A^{\top}) $$
If $f \, : \, \mathbb{R}^{n} \, \rightarrow \, \mathbb{R}^{n}$ is differentiable at $\theta$, the differential of $f$ at $\theta$, denoted by $\mathrm{D}_{\theta}f$, is a linear map from $\mathbb{R}^{n}$ to itself. Let $\mathrm{J}(\theta)$ the jacobian matrix of $f$ at $\theta$. Therefore, for all $x$, $\mathrm{D}_{\theta}f \cdot x = \mathrm{J}(\theta)x$.
$$ \mathrm{D}_{\theta}f \cdot T \, \sim \, \mathcal{N}( 0 , \mathrm{J}(\theta) \, \Sigma \, \mathrm{J}(\theta)^{\top} ) $$
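The covariance rule $A T \sim \mathcal{N}(0, A\Sigma A^{\top})$ can be checked empirically. Below is a Monte-Carlo sketch in pure Python (the matrices, sample size, and seed are arbitrary illustrative choices): we sample $T = Lz$ with $\Sigma = LL^{\top}$ and compare the empirical covariance of $AT$ against $A\Sigma A^{\top}$.

```python
import random

random.seed(0)

L = [[2.0, 0.0], [1.0, 1.5]]          # Cholesky factor, so Sigma = L·Lᵀ
A = [[1.0, 2.0], [0.0, 3.0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

sigma = mat_mul(L, transpose(L))
target = mat_mul(mat_mul(A, sigma), transpose(A))     # A Sigma Aᵀ, the claimed covariance

N = 100_000
acc = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(N):
    z0, z1 = random.gauss(0, 1), random.gauss(0, 1)
    t = (L[0][0] * z0, L[1][0] * z0 + L[1][1] * z1)   # T = L·z ~ N(0, Sigma)
    at = (A[0][0] * t[0] + A[0][1] * t[1], A[1][0] * t[0] + A[1][1] * t[1])
    for i in range(2):
        for j in range(2):
            acc[i][j] += at[i] * at[j]                # accumulate outer products
emp = [[acc[i][j] / N for j in range(2)] for i in range(2)]
```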
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Expectation of inverse of sum of iid random variables - approximation I am working with the set of i.i.d discrete random variables $\{\zeta_1, \ldots, \zeta_n\}$. Each one of them can take either of the $m$ values $\{z_1, \ldots, z_m\}$ with corresponding probabilities $\{p_1, \ldots , p_m\}$.
I am trying to understand when I can apply the following approximation for the expectation (which I believe to be the first-order one):
\begin{equation}
\mathbb{E}\left[\min\left(A, \frac{1}{\sum_{k=1}^n \mathrm{I}[\zeta_k\geq \bar{z}]}\right)\right] \approx \min\left(A, \frac{1}{\mathbb{E}\left[\sum_{k=1}^n \mathrm{I}[\zeta_k\geq \bar{z}]\right]}\right)
\end{equation}
Here $A$ and $\bar{z}$-- are some constants and an indicator function $\mathrm{I}[\zeta_k\geq \bar{z}]=1$ if $\zeta_k\geq \bar{z}$ and $0$ otherwise.
I am working in the regime $n\to\infty$ and the question is whether the approximation above is good enough under this condition? (if it is possible to judge, though)
I checked the following link, however I am not sure how to make use of $\mathcal{L}_{X(t)}^n$ under large $n$ in the case of discrete distribution.
Thank you in advance for any help.
|
Let $p=P(\zeta\ge \bar z)$. Then as $n\to\infty$ $$LHS\approx E(\frac 1 {B(n,p)};B(n,p)\ge 1)\approx \frac 1 {np}+\frac {np(1-p)}{(np)^3}=\frac 1 {np}+\frac {1-p}{(np)^2}=\frac 1{np}(1+\frac{1-p}{np})$$
while $RHS=\frac 1 {np}$. Hence $LHS\sim RHS$.
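The two sides can be compared numerically for moderate $n$. A Python sketch (the values $n=500$, $p=0.1$ are arbitrary choices of mine, and the $\min(A,\cdot)$ truncation is ignored for simplicity, as in the asymptotics above):

```python
from math import comb

n, p = 500, 0.1      # illustrative values; np = 50
# exact E[1/X ; X >= 1] for X ~ Binomial(n, p)
exact = sum(comb(n, k) * p**k * (1 - p) ** (n - k) / k for k in range(1, n + 1))
mu = n * p
approx = (1 / mu) * (1 + (1 - p) / mu)   # the refined second-order estimate
crude = 1 / mu                            # the RHS 1/(np)
```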
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Showing that a cubic extension of an imaginary quadratic number field is unramified. Let $\alpha^3-\alpha-1=0$, $K=\mathbb Q(\sqrt{-23})$, $K'=\mathbb Q(\alpha)$, and $L=\mathbb Q(\sqrt{-23},\alpha)$.
Then I am asked to show that the field extension $L/K$ is unramified.
I know that if $\mathfrak p\in\operatorname{Max}(\mathcal O_K)$ ramifies in $L$ then $\mathfrak p\mid\mathfrak d$ where $\mathfrak d$ is the discriminant ideal, i.e. the ideal of $\mathcal O_L$ generated by all $\operatorname{disc}(x_1,x_2,x_3)$ such that $(x_1,x_2,x_3)$ is a $K$-basis of $L$ and $x_1,x_2,x_3\in\mathcal O_L$.
I know that $(1,\alpha,\alpha^2)$ is one such basis with discriminant $-23$, so if $\mathfrak p$ ramifies in $L$ then $23\in\mathfrak p$, and in $\mathcal O_K$ we have $(23)=(\sqrt{-23})^2$. Therefore the only candidate for $\mathfrak p$ is $(\sqrt{-23})$. If I could find a basis with discriminant not divisible by $23$ then I would be done, but that quickly turns messy.
Now factoring $23$ in $\mathcal O_{K'}$ gives $(23)=(23,\alpha-3)(23,\alpha-10)^2$, so I need would like to have two different prime ideals of $\mathcal O_L$ containing $(23,\alpha-10)$, then I will have three different prime ideals of $\mathcal O_L$ containing $\sqrt{-23}$ and I will be done.
Alternatively I need to show that no prime ideal $\mathfrak q$ of $\mathcal O_L$ has $\sqrt{-23}\in\mathfrak q^2$.
Any ideas?
|
Here’s an approach quite different from what you had in mind, purely local and not ideal-theoretic.
The field $K$ is ramified only at $23$, and since the $\Bbb Q$-discriminant of $k=\Bbb Q(\alpha)$ is of absolute value $23$ as well, the only possibility for ramification of $K'=Kk$ over $K$ is above the prime $23$, in other words at the unique prime of $K$ lying over the $\Bbb Z$-prime $23$. So we may think of localizing and completing.
Calling $\,f(X)=X^3-X-1$, we have $f(3)=23$ and $f(10)=989=23\cdot43$. Indeed, over $\Bbb F_{23}$, we have $f(X)\equiv(X-3)(X-10)^2$. Let’s examine $g(X)=f(X+10)=X^3+30X^2+299X+989=X^3+30X+13\cdot23X+43\cdot23$, which shows that $g$ has two roots of $23$-adic valuation $1/2$ (additive valuation, that is). Now let’s go farther, and, calling $\sqrt{-23}=\beta$, look at
\begin{align}
h(X)=g(X+4\beta)&=X^3+(30+12\beta)X^2+(-805 + 240\beta)X+(-10051 - 276\beta)\\
&=X^3+(30+12\beta)X^2+(-35\cdot23+240\beta)X+(-19\cdot23^2-12\cdot23\beta)\,.
\end{align}
Look at the $23$-adic valuations of the coefficients: $0$ for the degree-$3$ and degree-$2$ terms, $1/2$ for the linear term, and $3/2$ for the constant term. So the Newton polygon of $h$ has three segments of width one, slopes $0$, $1/2$, and $1$. Thus $h(X)=f(X+10+4\beta)$ has three roots all in $\Bbb Q_{23}(\sqrt{-23}\,)$, and therefore the unique prime of $K$ above $23$ splits completely in $K'$, and the extension is unramified.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Complex numbers problem I'm trying to use the relevant rules and definitions but making little progress.
|
Write $z=x+iy$ and simplify; separating real and imaginary parts gives two equations, and it's done. From $(x+iy)(1+x-iy)+\frac{5.(x).(1+2i)}{5}-2x-4=0+0i$, the real part gives $(x+x^2+y^2)+x-2x-4=0$ and the imaginary part gives $y+xy+2x=0$: two equations in two unknowns. Solve for $x,y$ and you are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Find Least Squares Regression Line I have a problem where I need to find the least squares regression line. I have found $\beta_0$ and $\beta_1$ in the following equation
$$y = \beta_0 + \beta_1 \cdot x + \epsilon$$
So I have both the vectors $y$ and $x$.
I know that $\hat{y}$ the vector predictor of $y$ is $x \cdot \beta$ and that the residual vector is $\epsilon = y - \hat{y}$.
I know also that the least squares regression line looks something like this $$\hat{y} = a + b \cdot x$$
and that what I need to find is $a$ and $b$, but I don't know exactly how to do it. Currently I am using Matlab, and I need to do it in Matlab. Any idea how should I proceed, based on the fact that I am using Matlab?
Correct me if I did/said something wrong anyway.
|
First define
X = [ones(size(x)) x];
then type
regress(y,X)
Observations:
* The first step is to include a constant in the regression (otherwise you would be imposing $a=0$).
* The output will be a vector with the OLS estimates $(a,b)$.
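For readers without Matlab: here is the closed-form computation that `regress` performs in the one-predictor-plus-constant case, as a Python sketch (the sample data are an arbitrary illustration of mine):

```python
# simple OLS in closed form:
#   b = sum((xi - x̄)(yi - ȳ)) / sum((xi - x̄)^2),   a = ȳ - b·x̄

def ols_line(x, y):
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)
    return ym - b * xm, b          # (intercept a, slope b)

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]           # data lying exactly on y = 1 + 2x
a, b = ols_line(x, y)
```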
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why is $e$ the number that it is? Why is $e$ the number that it is? Most of the irrational numbers that we learn about in school have something to do with geometry; for example, $\pi$ is the ratio of a circle's circumference to its diameter. Those numbers can also be derived using maths ($\pi = \lim_{n\to\infty}4\sum_{k = 1}^{n}\frac{(-1)^{k+1}}{2k - 1}$). On the other hand, $e$ is derived only from math, and finds no other immediate geometric basis. Why then, is $e$ the number that it is ($2.7182\ldots$) and not some other number?
EDIT: My question is more along the lines of why $e$ is $2.7182\ldots$ On an alien world where there is a different system of numbers, could there be a similar constant that has all the properties of $e$, but is not $2.7182\ldots$?
|
The identity (which can be seen as a definition) $e^{i\theta} = \cos\theta + i\sin \theta$ is geometrically significant. If you delve into the subject of complex analysis, you'll understand that $e^{i\theta}$ can be seen as a "rotation".
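A quick numeric check of the identity and of the rotation picture (a Python sketch; the sample angle is arbitrary):

```python
import cmath, math

theta = 0.75                                    # an arbitrary sample angle
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i·sin(theta)

# rotation interpretation: multiplying by e^{i·theta} rotates a point about the
# origin by the angle theta
rotated = (1 + 0j) * cmath.exp(1j * math.pi / 2)  # a quarter turn sends 1 to i
```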
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1556943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 11,
"answer_id": 7
}
|
A more "complete picture" of the relationship between various modes of convergence I am trying to generate/looking for a more comprehensive/complete list/diagram of how the 4 major modes (as listed on wikipedia) of convergence of random variables relate to each other:
* Distribution (law)
* Probability
* Almost sure
* $\mathcal{L}^p$ (in mean)
Wikipedia also has this handy little chart (image versions of this exist online as well)
$$\begin{matrix}
\xrightarrow{L^s} & \underset{s>r\geq1}{\Rightarrow} & \xrightarrow{L^r} & & \\
& & \Downarrow & & \\
\xrightarrow{a.s.} & \Rightarrow & \xrightarrow{\ p\ } & \Rightarrow & \xrightarrow{\ d\ }
\end{matrix}$$
What I am trying to generate/looking for is something like this:
$$
\begin{matrix}
\xrightarrow{L^s} & \underset{s>r\geq1}{\Rightarrow} & \xrightarrow{L^r} & & \\
& & \Downarrow \overset{(a)}{\uparrow} & & \\
\xrightarrow{a.s.} & \underset{\overset{(b)}{\leftarrow}}{\Rightarrow} & \xrightarrow{\ p\ } & \Rightarrow & \xrightarrow{\ d\ }
\end{matrix}$$
Where: $(a) = $ "with uniform integrability" and $(b) = \text{ if } \forall \epsilon> 0, \sum_n \mathbb{P} \left(|X_n - X| > \varepsilon\right) < \infty$
and so on and so forth. Does anyone know if anything like this exists? Does anyone want to help me fill this out? Can anyone suggest a better format for organizing this?
Thanks!
|
Gearoid de Barra's 2003 book Measure Theory and Integration contains an entire section called "Convergence Diagrams" (7.3, pp.128-131) concerning six modes of convergence in three different settings: almost everywhere, in mean, uniform, in $L^p$, almost uniform, and in measure. The settings are: the general case, the case where $\mu(X) < \infty$, and the case where the sequence of converging functions is dominated by an $L^1$ function. In each setting, the diagram features arrows between modes of convergence: an arrow from mode 1 to mode 2 means, in that setting, convergence in mode 1 implies convergence in mode 2. For example, in the general case, almost uniform convergence implies almost everywhere convergence; if the space has finite measure, the converse is true, and the arrow is there to show this.
The subsequent section is devoted to counterexamples illustrating the non-implications between the modes, which explain why the diagrams lack arrows where they do. Some of the relevant pages are visible on Google Books.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to solve $2^x=2x$ analytically. $$2^x=2x$$
I am able to find the solutions for this equation by looking at a graph and guessing. I found them to be $x_1=1$ and $x_2=2$.
I can also find them by guess and check, but is there any way to solve this problem algebraically?
|
Not without using the Lambert $W$ function, which is really just a fancy reformulation of the problem, and not actually a solution.
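For what it's worth, here is what "solving via Lambert $W$" looks like numerically. Rewriting $2^x=2x$ as $(-x\ln 2)\,e^{-x\ln 2}=-\tfrac{\ln 2}{2}$ gives $x=-W(-\ln 2/2)/\ln 2$, with the two real branches of $W$ producing the two roots. A Python sketch (the Newton-iteration implementation of $W$ and its starting guesses are my own choices):

```python
import math

def lambert_w(z, branch=0):
    """Newton iteration for w·e^w = z on the two real branches (needs -1/e < z < 0)."""
    w = -0.5 if branch == 0 else -2.0     # starting guesses on either side of w = -1
    for _ in range(60):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (w + 1.0))
    return w

ln2 = math.log(2.0)
x0 = -lambert_w(-ln2 / 2, branch=0) / ln2    # principal branch -> x = 1
x1 = -lambert_w(-ln2 / 2, branch=-1) / ln2   # lower branch     -> x = 2
```

As the answer says, this is a reformulation rather than a closed form: the values still have to be extracted numerically.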
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Why radians as opposed to degrees? Degrees seem to be so much easier to work with and more useful than something with a $\pi$ in it. If I say $33^\circ$, everyone will be able to immediately approximate the angle, because it's easy to visualize 30 degrees from a right angle (split a right angle into three equal parts), but if I tell someone $\frac{\pi}{6}$ radians or $0.5236$ radians... I am pretty sure only math majors will tell you approximately how big the angle is.
Note: When I say approximately be, I mean draw two lines connected by that angle without using a protractor.
Speaking of protractor. If someone were to measure an angle with a protractor, they would use degrees; I haven't seen a protractor with radians because it doesn't seem intuitive.
So my question is: what are the advantages of using radians? They seem highly counterproductive. I am sure there are advantages, so I would love to hear some.
PS: I am taking College Calc 1 and its the first time I have been introduced to radians. All of high school I simply used degrees.
|
The radian is the standard unit of angular measure, and is often used in many areas of mathematics. Recall that $C = 2\pi r$. If we let $r =1$, then we get $C = 2\pi$. We are essentially expressing the angle in terms of the length of a corresponding arc of a unit circle, instead of arbitrarily dividing it into $360$ degrees.
The reason that you believe degrees to be a more intuitive way of expressing the measure of an angle, is simply because this has been the way you have been exposed to them up until this point in your education. In calculus and other branches of mathematics aside from geometry, angles are universally measured in radians. This is because radians have a mathematical "naturalness" that leads to a more elegant formulation of a number of important results. Trigonometric functions for instance, are simple and elegant when expressed in radians.
For more information: Advantages of Measuring in Radians, Degrees vs. Radians
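One concrete payoff of that naturalness: in radians, $\sin h/h\to 1$ as $h\to 0$, which is exactly why $(\sin x)'=\cos x$ carries no extra constant; if the same angle is measured in degrees, the limit becomes $\pi/180$. A small numeric sketch (the step size is an arbitrary choice):

```python
import math

h = 1e-6
# measuring in radians: sin(h)/h -> 1, so the derivative of sin is exactly cos
ratio_rad = math.sin(h) / h
# measuring the same small angle in degrees instead: the limit is pi/180 ~ 0.017453,
# a constant that would then infect every derivative and series formula
ratio_deg = math.sin(math.radians(h)) / h
```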
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Of what is the Hopf map the boundary? Consider a generator $x$ of the singular homology group $H_3(S^3)$. I think of this (perhaps wrongly?) as something like the identity on $S^3$, cut up into simplices. Now we have the Hopf fibration $\eta: S^3 \to S^2$, which gives us $\eta_*x \in H_3(S^2) = 0$. Thus $\eta_*x$ is a boundary. Of what is it the boundary?
If this is silly or uninteresting, why is it so?
|
$\newcommand{\Cpx}{\mathbf{C}}\newcommand{\Proj}{\mathbf{P}}$Let $M$ be the complex surface obtained by blowing up the unit ball in $\Cpx^{2}$ at the origin. The exceptional curve at the origin is a complex projective line, i.e., an $S^{2}$. The boundary of $M$ is $S^{3}$, and radial projection from the unit ball to the origin induces a holomorphic (hence continuous) map $M \to S^{2}$ whose restriction to the boundary is the Hopf map $S^{3} \to S^{2}$. Thus $\eta_{*}x$ may be viewed as the boundary of $M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
With a 45-gallon and a 124-gallon jug with no markings (but known capacities), how do you measure exactly 1 gallon of water? I could imagine filling up the $124$, then filling the $45$ from it, so there is $124-45=79$ left in the $124$, then again so there is $34$. Then I'm lost!
|
You could use the 124-gallon jug four times to get 496 gallons, and subtract the 45-gallons 11 times to get 1 gallon.
How to solve:
Let $x$ be the number of 124-gallons and $y$ the number of 45-gallons. We want solutions to
$$ 124x + 45y = 1$$
Obviously, one will be negative. Taking modulo 45, we get
$$ 34x = 1 \mod 45 $$
Multiplying by $34^{-1} = 4$, we see $x = 4 \mod 45$ and when $x = 4$, $y = -11$.
If you don't have free extra cups, then you could do this but just make sure you don't go over the 124 limit. So fill the 124-gallon jug, take out 45 as many times as you can until you get 34 gallons. Then fill the 45-jug with all 34 gallons, refill the 124-jug and repeat. You will end up refilling 4 times and throwing away water 11 times.
If you don't have spare cups, the exact path would look like this, where (r) represents refilling the 124-gallons jug and --> indicates that you emptied the 45-gallon jug.
124 0 (r)
79 45-->0
34 45-->0
0 34
124 34 (r)
113 45-->0
68 45-->0
23 45-->0
0 23
124 23 (r)
102 45-->0
57 45-->0
12 45-->0
0 12
124 12 (r)
91 45-->0
46 45-->0
1 45-->0 (Solution)
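The coefficients $x=4$, $y=-11$ can also be produced mechanically by the extended Euclidean algorithm (a short Python sketch):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y   # back-substitute through the division steps

g, x, y = ext_gcd(124, 45)          # expect 124*4 + 45*(-11) = 1
```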
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof of trignometric identity Could you help me prove this? I've gotten stuck, need some help..
$$\sin^2\Theta + \tan^2\Theta = \sec^2\Theta - \cos^2\Theta$$
Here's what I've done so far:
Left Side:
$$\sin^2\Theta + \frac{\sin^2\Theta}{\cos^2\Theta}=\frac{\sin^2\Theta\cos^2\Theta+\sin^2\Theta}{\cos^2\Theta}$$
Right Side:
$$\frac{1}{\cos^2\Theta} - \cos^2\Theta = \frac{1-\cos^2\Theta\cos^2\Theta}{\cos^2\Theta}$$
Thanks in advance.
Note: I have tried simplifying it even further but I'm not getting the results, so I've left it at the points that I'm sure of.
|
Hint: $\sin^2\theta+\cos^2\theta=1.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
If $T: V \to V $ is an operator with $\mathrm{rank}(T)=n-1$, then there exists a vector $v \in V$ such that $T^{n-1}(v) \ne 0$
Let $V$ be an $n$-dimensional vector space over a field $F$ and $T: V \to V$ an operator. If $\mathrm{rank}(T)=n-1,$ prove that $T^{n-1}$ is not equal to the $0$ operator on $V$.
We need to show that there exists a vector $v \in V$ such that $T^{n-1}(v) \ne 0$. Since $\mathrm{rank}(T) = n-1$ and $\dim V = n$, we have $\dim \ker(T) = n -(n-1)=1$ by the rank-nullity theorem. Suppose $(w)$ is a basis of $\ker(T)$ and extend it to a basis $B_V = (v_1,\ldots,v_n)$ where $w = v_j$ for some $j \in \{1,\ldots,n\}$. Apply $T$ to a vector $v$ in the basis of $V$ but not contained in $\mathrm{span}(w)$. Then $T(v) \ne T(\alpha w) = \alpha T(w)$ for every $\alpha \in F$. Thus $T^{n-1}(v) \ne \alpha T^{n-1}(w) = \alpha T^{n-2}(T(w)) = \alpha T^{n-2}(0)=0 \Rightarrow T^{n-1}(v) \ne 0$.
Question 1: Is this correct? I am really struggling to prove this.
Question 2: In the case where $V$ is a $1$-dimensional vector space, wouldn't $T^{n-1}(v)$ be the $0$ operator on $V$? We have $\ker(T) \subset V$ and if $\mathrm{rank} = n -1 = 1 - 1 = 0$ then $\dim V = 0 + \mathrm{nullity}(T) = \dim \ker(T)$ implying that $V= \ker(T)$ and so $T(v)=0$ for every $v \in V$. Surely this implies that $T^{n-1}(v)$ is the $0$ operator?
|
Question 1: No, it's not correct. You can't conclude that $T^{n-1}(v)\neq \alpha T^{n-1}(w)$ just because $T(v)\neq \alpha T(w)$. If you're still stuck on the problem, I suggest trying to prove that $\dim\ker(T^k)\leq k$ for all $k$ by induction on $k$.
Question 2: When $n=1$, the hypotheses do imply $T=0$. However, $T^{n-1}$ is not $0$, because $n-1=0$, and any operator (even $0$) raised to the $0$th power is defined to be the identity operator.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Existence of integer sequence for any positive real number I am stuck on the following problem:
Given any $\lambda > 0$ there exists strictly increasing sequence of positive integers $$ 1 \leq n_1 < n_2 < ... $$ such that
$$\sum _{i=1}^{\infty} \frac{1}{n_i} = \lambda$$
Any help will be appreciated.
Thank you.
|
You can use the following lemma: for any two sequences $u_{n}$ and $v_{n}$ of real numbers that tend to $+\infty$, with $u_{n+1}-u_{n} \to 0$,
the set $\{u_{n}-v_{m}\mid(n,m)\in \mathbb{N}^{2}\}$ is dense in $\mathbb{R}$.
Proof :
Let $e>0$. There is $k$ such that $\forall n \geq k$ we have $|u_{n+1}-u_{n}| < e$. Now let $s\geq u_{k}$: the set
$H=\{ n\geq k \mid s\geq u_{n}\}$ is a nonempty ($k$ is in it) finite subset of $\mathbb{N}$, since $u_{n} \to +\infty$. Let
$m=\max(H)$; hence $u_{m}\leq s < u_{m+1}$.
Now let $x \in \mathbb{R}$. Since $v_{n} \to +\infty$, there is $p$ such that $x+v_{p} \geq u_{k}$,
and from the above we know that there is $n$ such that $|u_{n}-(x+v_{p})|< e$ (just take $s=x+v_{p}$),
which is $|u_{n}-v_{p}-x| < e$. Done!
Now take $u_{n}=v_{n}=H_{n}$, where $H_{n}=\sum_{i=1}^{n}\frac{1}{i}$ is the $n$-th harmonic number.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Condition that all 3 roots of $az^3+bz^2+cz+d=0$ have negative real part The problem is - 'find the condition that all 3 roots of $f(z)=az^3+bz^2+cz+d=0$ have negative real part, where $z$ is a complex number'.
The answer - '$a,b$, and $d$ have the same sign.'
Honestly, I have no clue about how to proceed. Here is what I tried- $ f'(z)=3az^2+2bz+c$, which at extrema gives the roots as $z=\frac{-2b+/-\sqrt{(4b^2-12ac)}}{6a}$. If the real part is negative, then $\frac{-2b}{6a}<0$, which implies that $a,b$ have the same sign. I am not sure if what I have done is right, and have no idea about proving the rest of it. Please help.
Thanks in advance!!
|
Ok, I think I have it.
If $f$ has non-real roots then, since the coefficients are real, they occur in conjugate pairs, so two of the three roots are non-real conjugates and the remaining one is real. All three have negative real parts, so let's name the roots $\alpha=-x+iy,\ \beta=-x-iy,\ \gamma=-k$, where $x,k>0$. (If instead all three roots are real and negative, write them as $-x_1,-x_2,-x_3$ with $x_i>0$; the same Vieta computations below give the same conclusions.) Now, since $\alpha+\beta+\gamma=-b/a$, we get (after substituting and a trivial simplification)
$-(2x+k)=-b/a$. As both $x$ and $k$ are positive, we get $b/a>0$, i.e. $b$ and $a$ are of the same sign. (Which can also be arrived at with differentiation, as in my question.)
Now, $\alpha\beta\gamma=-d/a$, which after substitution and simplification gives-
$(-k)(x^2+y^2)=-d/a$.
As $x^2+y^2$ is always positive, multiplying with negative number $(-k)$ gives the LHS as negative. Thus, $-d/a<0$, implying $d/a>0$, and hence, $a,d$ have same sign.
Combining above two results, $a,b,d$ have the same sign.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1557977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Is $xRy \iff x+y = 0$ an equivalence relation? $R$ is a relation on real numbers. $xRy \iff x+y = 0 $.
Is it an equivalence relation?
My answer is no
proof:
(Reflexive): let $x = a$; then $aRa \iff 2a=0$.
Since $2a = 0$ doesn't hold for every real number $a$, $R$ is not reflexive.
Since $R$ isn't reflexive, $R$ is not an equivalence relation.
Is my reasoning correct ?
|
It is not only correct, it is almost the most straightforward approach you could take. Nicely done!
The only improvement I would suggest is that you demonstrate a particular real number $a$ for which $a\:R\:a$ fails to hold. For example, you could just say: "Since $1+1=2\ne 0,$ then $1\:R\:1$ fails to hold, and so $R$ is not an equivalence relation on the real numbers."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
"Exploding Dice" vs. "Non-Exploding Dice" For a roleplaying game I'm designing, I'm using a core resolution mechanic that comes down to counting "successes" on rolled dice. Characters roll a handful of dice (six-sided) and count each die that generated a 4, a 5, or a 6. Dice that generated a 1, a 2, or a 3 are discarded.
Further, for dice that generated a 6, you roll one additional die and count that die as if it was part of the original roll - this is called "exploding."
Example:
A character rolls 6 dice. He rolls the following: 1, 1, 3, 4, 6, 6. So far, three successes (4, 6, 6). The two 6s explode, giving him the following rolls: 2, 5. In total, the character scores four successes (4, 6, 6, 5).
Question:
What are the consequences of removing a character's ability to "explode" 6s compared to the consequences of allowing a character's 6s to "explode" twice (where each 6 adds two additional dice to the roll). I'm hoping that the probabilistic outcome of these two processes are similar, but I can't figure out if they actually are.
|
Let's call $x$ the average of successes per dice. First, let's examine the case in which they can not explode 6s. 3 possible successes, 3 possible fails, so $x=1/2$
Now the case where they explode twice. There are 3 possible successes and 3 possible fails, but also 1 case where 2 additional dice are rolled (so an additional $2x$ expected successes in that case). Therefore, the successes satisfy $x={1\over2} +{2 \over 6} x$
You get then that $x={3 \over 4}$
Edit: In the standard explode system (1 for each 6) you would get $x={1 \over 2}+{1 \over 6} x $, which implies $x={3 \over 5} $
For every 100 successes they would normally have (with 167 dice), they will have 83 with the penalisation and 125 with the bonus.
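These recurrences are easy to sanity-check by simulation. Below is a short Python sketch (function and variable names are my own, and the trial count is arbitrary) comparing a seeded Monte Carlo estimate against the analytic values $1/2$, $3/5$ and $3/4$:

```python
import random

def successes_per_die(extra_dice, rolls=200_000, seed=1):
    """Average successes per initial die: faces 4-6 succeed, and each
    rolled 6 adds `extra_dice` further dice to the pool."""
    rng = random.Random(seed)
    total = 0
    for _ in range(rolls):
        pending = 1                      # dice still owed to this initial die
        while pending:
            pending -= 1
            face = rng.randint(1, 6)
            if face >= 4:
                total += 1
            if face == 6:
                pending += extra_dice
    return total / rolls

# analytic values from x = 1/2 + (k/6) x, for k = 0, 1, 2 extra dice per 6
analytic = {0: 1 / 2, 1: 3 / 5, 2: 3 / 4}
```

With $200{,}000$ trials the estimates should land well within $0.01$ of the analytic values.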
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Inverse image of a vector subspace I've a question on the inverse image in linear functions.
Let $f:V\rightarrow W$ be a linear function between two vector spaces.
I know that if I have to find the inverse image of a particular vector $\vec{w} \in W$ I can find it as
$f^{-1}(\vec{w})= \Big\{ { \vec{x_0}+\vec{y}} \Big\}$
where $ \vec{x_0}$ is a particular vector contained in $f^{-1}(\vec{w})$ and $\vec{y}$ is a generic vector of $\ker(f)$; therefore $f^{-1}(\vec{w})$ is not a vector space (unless $\vec{w}=\vec{0}$, in which case it equals $\ker(f)$).
But if I have to determine the inverse image of a vector subspace $\mathscr{K} \subset W$, where $\mathscr{K} =\mathscr{L}(\vec{v_1},\vec{v_2},...,\vec{v_k})$, can I find it as
$f^{-1}(\mathscr{K})=\mathscr{L} (\vec{y},\vec{x_1},\vec{x_2},...,\vec{x_k})$
where $\vec{y}$ is a generic vector of $Ker(f)$ and
$\vec{ x_1}$ is a vector such that $f(\vec{ x_1})=\lambda_1 \vec{v_1}$ ,
$\vec{ x_2}$ is a vector such that $f(\vec{ x_2})=\lambda_2\vec{v_2}$ and
$\vec{ x_k}$ is a vector such that $f(\vec{ x_k})=\lambda_k \vec{v_k}$ ?
In this way $f^{-1}(\mathscr{K})$ is a vector space as it should be.
Thanks a lot for your help
|
your guess about the inverse image of the subspace is right.
Take $\vec{x} \in f^{-1}(\mathscr{K})$, so that $f(\vec{x}) \in \mathscr{K}$.
Hence we will have,
\begin{align}
f(\vec{x})&= \sum \limits_{i=1}^{k}c_i \vec{v_i}\qquad\text{for some scalars } c_i\\
&= \sum \limits_{i=1}^{k}\frac{c_i}{\lambda_i}f(\vec{x_i})\qquad\text{since } f(\vec{x_i})=\lambda_i\vec{v_i}\text{ with }\lambda_i\neq 0\\
&= f\left(\sum \limits_{i=1}^{k}\frac{c_i}{\lambda_i} \vec{x_i}\right)
\end{align}
This gives $\vec{x}-\sum \limits_{i=1}^{k}\frac{c_i}{\lambda_i} \vec{x_i} \in \ker f$. And you get that $\vec{x}\in \mathscr{L}\{\vec{y},\vec{x_1},\vec{x_2},...,\vec{x_k}\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Finding XOR of all even numbers from n to m Problem : Given a range $[n,m]$, find the xor of all even numbers in this range. Constraints : $1 \le n \le m \le 10^{18}$
How do we solve this problem?
P.S. I asked this question at stackoverflow but that was not the appropriate place to ask this question. So I re-asked it here.
|
Something like this: for bit $k$, where $k=0$ is the units bit, any block of $2^{k+1}$ consecutive integers xors to zero in that bit, since it contains $2^k$ integers with a $0$ there and $2^k$ with a $1$. Therefore, to obtain bit $k$ of the answer we can discard whole blocks from the front of the range and xor that bit only from
$i = n + 2^{k+1}\left\lfloor \frac{m-n+1}{2^{k+1}} \right\rfloor$
to $m$. Do this for each bit until $2^{k} > m$.
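A different, well-known route to the same answer uses the prefix-XOR identity ($1\oplus 2\oplus\cdots\oplus n$ follows the period-4 pattern $n,1,n+1,0$) together with the fact that XOR commutes with a one-bit shift. The sketch below (Python, my own function names, not the per-bit method above) runs in $O(1)$ and can be checked against brute force on small ranges:

```python
def xor_1_to_n(n):
    """XOR of the integers 1..n (0 for n == 0): period-4 pattern n, 1, n+1, 0."""
    return [n, 1, n + 1, 0][n % 4]

def xor_evens(n, m):
    """XOR of all even numbers in [n, m]."""
    lo = (n + 1) // 2            # smallest k with 2k >= n
    hi = m // 2                  # largest  k with 2k <= m
    if lo > hi:
        return 0                 # no even numbers in the range
    # the evens are 2*lo, ..., 2*hi; XOR commutes with the one-bit shift
    return 2 * (xor_1_to_n(hi) ^ xor_1_to_n(lo - 1))
```

Since everything is integer arithmetic, this works unchanged up to the $10^{18}$ bound.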
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Show that $f$ is uniformly continuous given that $f'$ is bounded. Let $D\subset \mathbb{R}$ be an interval and $f:D\rightarrow \mathbb{R}$ a continuously differentiable function such that its derivative $f':D\rightarrow \mathbb{R}$ is bounded. Show that $f$ is uniformly continuous.
I will admit before I begin; I am stabbing in the dark.
If we restrict the domain of $f$ to some interval $[-X,X] \subset D$, then $f:[-X,X]\rightarrow \mathbb{R}$ is uniformly continuous.
Now since we know that $f'$ is bounded we can say that for some arbitrary $y \in D$, $|f'(x)| = \lim_{x \rightarrow y}\frac{|f(x) - f(y)|}{|x-y|} \leq K ( \forall x \in D:x \neq y) $ for some $K >0 $
This is not going to sounds mathematical, but I need to somehow get rid of the limit so I can proceed with something like $\frac{|f(x) - f(y)|}{|x-y|} \leq K$
|
Let $f:[-X,X]\rightarrow \mathbb{R}$ satisfy the conditions above. Let $\epsilon>0$; then we want to find $\delta>0$ such that:
$$\forall x,y\in [-X,X](|x-y|<\delta \rightarrow |f(x)-f(y)|<\epsilon)$$
By Mean value theorem for every $x,y\in [-X,X],x<y$, there exists $c\in (-X,X)$ such that
$$\frac{f(y)-f(x)}{y-x}=f'(c)$$
Because for every $x\in (-X,X)$, we have $|f'(x)|<K$, therefore for every $x,y\in [-X,X],x<y$
$$\frac{|f(y)-f(x)|}{|y-x|}<K$$
so we choose $\delta = \frac{\epsilon}{K}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Can you always decrease the variance by removing outliers? Consider a finite set of real numbers. Let its variance be $V$. Let the highest number be $h$ and the lowest number be $l$, and let's assume $l < h$.
Let $x$ be an arbitrary number with $l < x < h$.
Now create a new set by removing one element equal to $h$ and replacing it with $x$. Call the variance of this new set $V_h$.
Also create a new set by removing one element equal to $l$ and replacing it with $x$. Call the variance of this new set $V_l$.
Is it true (and is there a proof!) of the statement: "either $V_l < V$ or $V_h < V$"?
To show that both are not always true, consider the set ${0,0,0,1}$ and replace one of the $0$'s with $0.9$. The variance goes up.
|
I think I may have found the proof.
Number the points $p_1,p_2,...,p_N$ from lowest to highest.
Replace one of the extreme points, $p_E$ where $E \in \{1,N\}$, by some $x$ with $p_1<x<p_N$. Call the new set ${q_1,...,q_N}$.
Now
$$
Var(q) = \frac{1}{N} \sum (q_i - \bar{q})^2
\le \frac{1}{N} \sum (q_i - \bar{p})^2
$$
since the variance is minimized around the mean.
So, it suffices to prove that
$$
\frac{1}{N}\sum (q_i - \bar{p})^2 < Var(p)
$$
i.e. that
$$
\sum (q_i - \bar{p})^2 < \sum (p_i - \bar{p})^2.
$$
This reduces to
$$
(x-\bar{p})^2 < (p_E - \bar{p})^2
$$
since the sets only differ in one element.
Now choose $E$ so that $x-\bar{p}$ has the same sign as $p_E - \bar{p}$ (if $x=\bar{p}$, either choice works). Since $p_E$ is an extreme point and $l < x < h$, we have $0 \le |x-\bar{p}| < |p_E - \bar{p}|$, so the above inequality holds.
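As a numeric spot-check of the claim (a sketch with my own names; population variance, random sets drawn uniformly, fixed seed):

```python
import random

def pvar(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

def claim_holds(rng):
    xs = sorted(rng.uniform(0, 10) for _ in range(rng.randint(3, 8)))
    x = rng.uniform(xs[0], xs[-1])       # l < x < h (almost surely strict)
    V = pvar(xs)
    V_h = pvar(xs[:-1] + [x])            # replace the highest element by x
    V_l = pvar(xs[1:] + [x])             # replace the lowest  element by x
    return V_h < V or V_l < V

rng = random.Random(0)
all_hold = all(claim_holds(rng) for _ in range(1000))
```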
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Limit of $n\ln\left(\frac{n-1}{n+1}\right)$ I'm having trouble with this limit:
$$\lim_{n\to ∞}n\ln\left(\frac{n-1}{n+1}\right)$$
It's supposed to be solvable without using l'Hospital's rule. I'm guessing it's a case for the squeeze theorem, but I'm not exactly sure. Any advice?
|
Notice, let $\frac{1}{n}=t$, hence $$\lim_{n\to \infty}n\ln\left(\frac{n-1}{n+1}\right)$$ $$=\lim_{t\to 0}\frac{1}{t}\ln\left(\frac{\frac{1}{t}-1}{\frac{1}{t}+1}\right)$$ $$=\lim_{t\to 0}\frac{\ln\left(\frac{1-t}{1+t}\right)}{t}$$
$$=\lim_{t\to 0}\frac{\ln\left(1-t\right)-\ln\left(1+t\right)}{t}$$
Now, using Taylor's series,
$$=\lim_{t\to 0}\frac{-\left(t+\frac{t^2}{2}+O(t^3)\right)-\left(t-\frac{t^2}{2}+O(t^3)\right)}{t}$$
$$=\lim_{t\to 0}\left(-\left(1+\frac{t}{2}+O(t^2)\right)-\left(1-\frac{t}{2}+O(t^2)\right)\right)$$
$$=-\left(1+0\right)-\left(1-0\right)=\color{red}{-2}$$
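A quick numeric confirmation (plain Python, my own names): the series above predicts $n\ln\left(\frac{n-1}{n+1}\right) = -2 - \frac{2}{3n^2} + O(n^{-4})$, which is easy to check at a couple of values of $n$.

```python
import math

def f(n):
    """The sequence n * ln((n-1)/(n+1))."""
    return n * math.log((n - 1) / (n + 1))
```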
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Condition for Sums in a Set Let $$ S = \{ a_1,a_2, \cdots , a_k \}$$ be comprised of divisors of $n \in \mathbb{N}, n>1$ and $n$ not prime. Suppose we select $p$ elements from the set $S$ (possibly more than once for each divisor), and the $p$ chosen elements have a total sum of $q$. Prove that it is always possible to select $q$ elements with a sum of $np$.
I suppose that I should add what I have done so far. I tried to roughly calculate the maximum number of times the largest divisor less than $n$ will go into the number, and then repeated the process until I had constructed a number less than $n$ such that the difference between the number I had formed was less than $a_2$ times the number of elements I have left to form the number. This approach seems to lack rigour though, and I have a hunch that a constructive method is not the way to go about this problem. Also, a natural extension would be to do the problem, but with the constraint that we are not allowed to select some of the divisors of $n$ as part of the $p$ elements at all.
Thanks
|
The restriction on $n$ isn't necessary: it's true for $n$ prime as well as $n$ composite (and even, trivially, for $n=1$). Indeed, I came up with the following solution by considering the case where $n$ is prime.
Suppose that $a_1,\dots,a_p$ are (not necessarily distinct) divisors of $n$, and set $q=a_1+\cdots+a_p$. Define a sequence of $q$ divisors of $n$ as follows: first take $a_1$ copies of $\frac n{a_1}$, then $a_2$ copies of $\frac n{a_2}$, and so on through $a_p$ copies of $\frac n{a_p}$.
*
*The total number of divisors in this sequence is indeed $a_1+\cdots+a_p=q$.
*Each batch of divisors sums to $a_j \cdot \frac n{a_j} = n$, and there are $p$ batches of divisors; so the total sum is $np$.
(Side note: if you iterate this process twice, you get back the concatenation of $n$ copies of the original sequence.)
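The construction is easy to check mechanically. A small Python sketch (my own names), using for instance $n=12$ with the choices $(3,4,6)$, so $p=3$ and $q=13$:

```python
def expand(n, divisors):
    """The construction above: for each chosen divisor a, take a copies of n // a."""
    out = []
    for a in divisors:
        assert n % a == 0, "every chosen element must divide n"
        out.extend([n // a] * a)
    return out

# n = 12 with the p = 3 choices (3, 4, 6); their sum is q = 13
seq = expand(12, [3, 4, 6])
```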
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1558921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why does $p$ have to be moderate in the Poisson approximation to binomial random variable? So the proof that a binomial rv with large $n$ approximates a poisson rv with $\lambda = np$ (given below) doesn't seem to use the fact that $p$ is moderate/small, so why does wikipedia and my textbook (Ross) state this as a condition?
Proof:
If $\lambda = np$, then $ P(X = i) = {n \choose i} p^i (1 - p)^{n-i} $
$$ = \frac{n(n-1) \cdots (n - i + 1)}{i!} \left(\frac{\lambda}{n}\right)^i \frac{(1 - \frac{\lambda}{n})^n}{(1 - \frac{\lambda}{n})^i} $$
and since $ \lim_{n\to \infty} \frac{n(n-1) \cdots (n - i + 1)}{n^i} = 1$
and $ \lim_{n\to \infty} (1 - \frac{\lambda}{n})^i = 1$
$$ \lim_{n\to \infty} P(X = i) = \frac{\lambda^i}{i!}e^{-\lambda}$$
|
Suppose for example that $p=1/2$ and $n$ is, say, $100$. Let $i=50$. Then $\frac{n(n-1)(n-2)\cdots(n-i+1)}{n^i}$ is quite far from $1$, so the argument quoted no longer works.
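The offending factor can be computed directly. A small Python sketch (my own names) comparing a typical $i$ for moderate $p$ (here $i=np=50$) with a typical $i$ for small $p$:

```python
def falling_ratio(n, i):
    """n(n-1)...(n-i+1) / n^i -- the factor the proof needs to be near 1."""
    r = 1.0
    for k in range(i):
        r *= (n - k) / n
    return r

moderate_p = falling_ratio(100, 50)  # typical i when p = 1/2: nowhere near 1
small_p = falling_ratio(100, 2)      # typical i when p = 0.01: close to 1
```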
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Orthogonal complement of the column space of a matrix Let $H =\operatorname{Col}(A)$, where $$A =\begin{pmatrix}
1&2\\
2&4\\
3& 1\end{pmatrix}$$
Find $H^\perp$, the orthogonal complement
of $H$.
$H$ is the same thing as $\operatorname{Col}(A)$, and as I understand it, the orthogonal complement means the span of the vectors that are orthogonal to the matrix, but I don't understand how to solve for this. Nor am I completely clear on how a vector can be orthogonal to a matrix, as I only know how to find the dot product between vectors with only $1$ row.
Thanks.
|
In general, for any matrix $A \in \mathbb{C}^{m\times n}$, the answer may be obtained, using those relations:
$$ \text{im}(A^*) =\ker(A)^{\perp} ~~\text{and}~ ~\ker(A^*) = \text{im}(A)^{\perp}$$ furthermore
$$\mathbb{C}^{m} = \ker(A^*) \oplus \text{im}(A) ~~\text{and}~ ~ \mathbb{C}^{n} = \ker(A) \oplus \text{im}(A^*)$$ (this is sometimes called Fredholm alternative)
Where
*
*$\ker(\cdot)$ is the kernel of a matrix.
*$\text{im}(\cdot)$ is the image of a matrix.
*$A^*$ is the conjugate transpose. When dealing with real matrices only, this becomes the usual transpose.
*$\oplus$ - direct sum.
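For the concrete $A$ above, row-reducing $A^T$ by hand gives $H^{\perp}=\ker(A^T)=\operatorname{span}\{(-2,1,0)\}$. A short numpy sketch (my own names; it relies on $\operatorname{rank}(A)=2$) recovers the same line from the SVD of $A^T$:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [3., 1.]])

# H^perp = ker(A^T): since rank(A) = 2, the last right-singular
# vector of A^T spans the (3 - 2)-dimensional null space
_, s, vt = np.linalg.svd(A.T)
v = vt[2]          # a unit vector proportional to (-2, 1, 0)
```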
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Double Angle identity??? The question asks to fully solve for $$\left(\sin{\pi \over 8}+\cos{\pi \over 8}\right)^2$$
My question is, is this a double angle formula? And if so, how would I go about to solve it?
I interpreted it this way; $$\left(\sin{\pi \over 8}+\cos{\pi \over 8}\right)^2$$
$$=2\sin{\pi \over 4}+\left(1-2\sin{\pi \over 4}\right)$$
Have I done this right so far? I feel I have not.
|
It is related to a double-angle identity. The relevant identities you need are:
$$\sin^2\theta + \cos^2\theta = 1$$
and
$$\sin 2\theta = 2\sin\theta\cos\theta$$
You will also need to expand your binomial, using the basic algebraic identity
$$(A+B)^2 = A^2 + 2AB + B^2$$.
So, start by expanding the binomial; then use the two trig identities to simplify.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 0
}
|
How to show that det(A)≤1?
Let $A = (a_{ij})_n$ where $a_{ij} \ge 0$ for $i,j=1,2,\ldots,n$ and $\sum_{j=1}^n a_{ij} \le 1$ for $i = 1,2,\ldots,n$.
Show that $|\det(A)| \le 1$.
Should I use the definition of matrix:
$$\det(A)=\sum \textrm{sgn}(\sigma)a_{1,\sigma(1)}a_{2,\sigma(2)}...a_{n,\sigma(n)}$$
I don't understand what $a_{i,\sigma(i)}$ is, where $i=1,2,\ldots,n$. Or is there another way to solve it?
|
Just take the modulus of the determinant expression you wrote. Note that each term on the right under the sum can be bounded by using G.M. $\le$ A.M.
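As a numeric spot-check of the statement (not a proof), here is a sketch with my own names that samples random nonnegative matrices whose row sums are at most $1$ and verifies $|\det(A)|\le 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_substochastic(n):
    """Nonnegative n x n matrix whose i-th row sums to scale[i] <= 1."""
    M = rng.random((n, n))
    scale = rng.random(n)
    return M * (scale / M.sum(axis=1))[:, None]

dets = [abs(np.linalg.det(random_substochastic(int(rng.integers(2, 7)))))
        for _ in range(500)]
```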
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
How to 'get rid of' limit so I can finish proof? Suppose $\sup_{x \in \mathbb{R}} f'(x) \le M$.
I am trying to show that this is true if and only if $$\frac{f(x) - f(y)}{x - y} \le M$$
for all $x, y \in \mathbb{R}$.
Proof
$\text{sup}_{x \in \mathbb{R}} f'(x) \le M$
$f'(x) \le M$ for all $x \in \mathbb{R}$
$\lim_{y \to x} \frac{f(y) - f(x)}{y - x} \le M$
$\lim_{y \to x} \frac{f(x) - f(y)}{x - y} \le M$
I can see geometrically why this property holds, but how do I get rid of the limit here? Or am I approaching it wrong in general?
|
Well, first of all, we have to presume $f$ is continuous and differentiable on $\mathbb R$; this statement isn't true otherwise.
1) Suppose $\frac{f(x) - f(y)}{x - y} > M$ for some $x, y \in \mathbb R$.
By the mean value theorem, there exist a $c; x <c < y$ where $f'(c) = \frac{f(x) - f(y)}{x - y}$.
So $f'(c) > M$.
So $\sup f'(x) \le M \implies f'(c) \le M$ for all $c \in \mathbb R \implies$ $\frac{f(x) - f(y)}{x - y} \le M$ for all $x, y \in \mathbb R$.
2) Suppose $\frac{f(x) - f(y)}{x - y} \le M$ for all $x, y \in \mathbb R$.
Then $\lim_{y \rightarrow x}\frac{f(x) - f(y)}{x - y} = f'(x) \le M$ for all $x \in \mathbb R$. So {$f'(x)|x \in \mathbb R$} is bounded above by M so $\sup_{x \in \mathbb{R}} f'(x) \le M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to solve the quadratic form I am a physicist and I have a problem solving this
\begin{equation}
Q(x)=\frac{1}{2}(x,Ax)+(b,x)+c
\end{equation}
In a book it says that:
"The minimum of Q lies at $\bar{x}=-A^{-1}b$ and
\begin{equation}
Q(x)=Q(\bar{x})+\frac{1}{2}((x-\bar{x}),A(x-\bar{x}))
\end{equation}
How do I get to this? And what is $Q(\bar{x})$?
|
You have (using the symmetry of $A$ for the cross terms)
$$
\frac{1}{2}((x-\bar{x}),A(x-\bar{x}))=\frac{1}{2}(x,Ax)-(x,A\bar{x})+\frac{1}{2}(\bar{x},A\bar{x})
$$
so that
$$
\tag{1}
\frac{1}{2}((x-\bar{x}),A(x-\bar{x}))+Q(\bar x)=\frac{1}{2}(x,Ax)-(x,A\bar{x})+(\bar{x},A\bar{x})+(b,\bar x)+c.
$$
On the other hand:
$$
(\bar x,A\bar x)+(b,\bar x)=(A^{-1}b,b)-(b,A^{-1}b)=0
\quad\hbox{and}\quad
(x,A\bar{x})=-(x,b),
$$
and plugging these into (1) yields the desired result.
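A quick numerical check of the identity (numpy, my own names). Note that the algebra above implicitly uses the symmetry of $A$ — the inner-product manipulations require $(x,Ay)=(Ax,y)$ — so the sketch uses a symmetric positive definite $A$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
c = 1.7

def Q(x):
    """Q(x) = (1/2)(x, Ax) + (b, x) + c."""
    return 0.5 * x @ A @ x + b @ x + c

xbar = -np.linalg.solve(A, b)        # the claimed minimiser -A^{-1} b
x = rng.standard_normal(n)
lhs = Q(x)
rhs = Q(xbar) + 0.5 * (x - xbar) @ A @ (x - xbar)
```

Since $A$ is positive definite, the second form also makes it obvious that $\bar x$ is the minimum.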
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Augmented Algebras Recently I started to study Operads. My reference is Algebraic Operads of Jean-Louis Loday and Bruno Vallette. In this book they define augmented algebra of the following form:
a $\mathbb{K}$-algebra $A$ is augmented when there is a morphism of algebras $\epsilon: A\rightarrow \mathbb{K}$, called the augmentation map.
My problem starts when they claim that if $A$ is augmented then $A$ is canonically isomorphic, as vector space, to $\mathbb{K}1_A\oplus \ker\epsilon$.
Well, I know that the Splitting Lemma gives us an isomorphism between $A$ and $\mathbb{K}1_A\oplus \ker\epsilon$, since $\epsilon$ is surjective and every exact sequence of vector spaces splits. But the proof that I know of this result depends on choosing bases of the vector spaces, so it does not provide a canonical isomorphism.
So is the book correct? If it is, how do I prove that there is a canonical isomorphism between $A$ and $\mathbb{K}1_A\oplus \ker\epsilon$?
|
Define $\phi \colon A \to \mathbf K1_A \oplus \ker \epsilon$ by
$$ \phi(a) = \bigl(\epsilon(a)1_A, a - \epsilon(a)1_A\bigr) $$
then $\phi$ is obviously linear and one-to-one. And if $(r1_A, a) \in \mathbf K 1_A \oplus \ker \epsilon$ is given, then
$$ \epsilon(r1_A + a) = r + \epsilon(a) = r $$
hence $$ \phi(r1_A + a) = (r1_A, a) $$
so $\phi$ is onto. Hence, $\phi$ is an isomorphism (and no choices involved in its construction).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1559719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|