| Q | A | meta |
|---|---|---|
Negate the following statements formally so that **no negation symbol remains**:
(i) $ \ \ \forall y \exists x (y>0 \to x \leq 0 ) \ $
(ii) $ \ \forall x \forall y \exists z(x <z \leq y ) \ $
Answer:
My approach is as follows:
(i)
The negation of the statement in (i) without a negation symbol is
$ \exists y \ \ s.t. \ \ \forall x (x>0 \to y \leq 0) \ $
(ii)
Given
$ \forall x \forall y \ \exists z \ ( x <z \leq y ) \ \sim \ \forall x \forall y \ \exists z \ ( x <z \wedge z \leq y ) $
The negation is given as
$ \exists x \exists y \ \ s.t. \ \ \forall z \ ( x \geq z \wedge z > y ) $
Am I right?
Any help would be appreciated.
|
Your answer to (i) is wrong. The correct negation of $A\rightarrow B$ is $A\wedge \neg B$ so in your case, it should be
$\exists y \forall x ((y>0) \wedge (x>0))$
which is not equivalent to the answer you have of $\exists y \forall x (x>0 \rightarrow y\leq 0)$. Your sentence is equivalent to $\exists y \forall x ( y\leq 0)\vee(x\leq 0)$, using that $A\rightarrow B$ is equivalent to $B\vee \neg A$.
Your answer to part (ii) is correct.
(EDIT): I misread your answer to (ii), confusing the conjunction with the disjunction. It is wrong as written, but if you replace that conjunction with a disjunction, it will be correct.
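As a quick sanity check (my addition, not part of the original answer), the propositional fact driving both negations can be verified by brute force over truth values:

```python
from itertools import product

# Truth-table check of the key identity used above:
# not (A -> B) is equivalent to A and (not B).
for A, B in product([False, True], repeat=2):
    implies = (not A) or B          # material implication A -> B
    assert (not implies) == (A and not B)
print("not(A -> B) == A and (not B) holds for all truth values")
```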
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2409371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Is Scrabble's method of determining turn order fair? At least the way my family plays, turn order is determined by drawing tiles and seeing who has the letter closest to A (blanks taking precedence). If I recall correctly, there are 100 unevenly distributed letters.
Intuitively, it seems like it would be unfair, though I can't come up with a way to prove it. Obviously the distribution matters: my thoughts are if there are more tiles in the first half of the alphabet (including blanks), then the starting player holds an advantage since they're more likely to get a tile earlier in the alphabet than the next (and vice versa if there are fewer tiles in the first half). That doesn't seem too right, though. I'm probably just forgetting some basic prob stats.
I imagine this is likely a duplicate, but I couldn't find anything relating to it (maybe since I've been searching for Scrabble). My apologies if it is.
Please feel free to edit in appropriate tags.
|
Look at a toy problem. Suppose there are two players and three tiles: $A_{1}$, $A_{2}$, and $B$. The subscripts on the $A$'s indicate there are two $A$ tiles. Look at all the possible outcomes: the first letter is what the first player draws, the second letter is what the second player draws, and the third letter is the remaining tile.
*$A_{1}A_{2}B$
*$A_{1}BA_{2}$
*$A_{2}A_{1}B$
*$A_{2}BA_{1}$
*$BA_{1}A_{2}$
*$BA_{2}A_{1}$
The first and third outcomes are draws. The first player wins the second and fourth outcomes. The second player wins the fifth and sixth outcomes. In this example: if the first player draws a good tile, an $A$, the first player can do no worse than a draw; if the first player draws a bad tile, a $B$, the second player is certain to win.
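If it helps, here is a small Python sketch (my addition, not the answerer's) that tallies the same six draw orders mechanically; the same brute-force approach scales to the real 100-tile distribution:

```python
from fractions import Fraction
from itertools import permutations

# Brute-force tally of the toy problem's six draw orders.
tiles = ["A1", "A2", "B"]
tally = {"first wins": 0, "second wins": 0, "draw": 0}
for order in permutations(tiles):
    first, second = order[0][0], order[1][0]  # letters drawn by the two players
    if first == second:
        tally["draw"] += 1
    elif first < second:                      # letter closer to 'A' goes first
        tally["first wins"] += 1
    else:
        tally["second wins"] += 1
print({k: Fraction(v, 6) for k, v in tally.items()})
# {'first wins': 1/3, 'second wins': 1/3, 'draw': 1/3}
```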
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2409496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 6,
"answer_id": 1
}
|
Hamiltonian from Lagrangian $L= \frac{m}{2}(\dot{r}^2+r^2\dot{\theta}^2)+ \frac{k\cos(\theta)}{r^2}$ I'm doing the first exercises with Lagrangians and Hamiltonians.
Let:
$$L= \frac{m}{2}(\dot{r}^2+r^2\dot{\theta}^2)+ \frac{k\cos(\theta)}{r^2}$$
$$p_1=m\dot{r}$$
$$p_2=mr^2\dot{\theta}$$
$$H=\frac{p^2}{2m}-\frac{k\cos(\theta)}{r^2}$$
I do not understand why the Hamiltonian is:
$$H=\frac{1}{2m}\left(p_1^2+\frac{p_2^2}{r^2}\right)-\frac{k\cos(\theta)}{r^2}$$
and not
$$H=\frac{1}{2m}(p_1^2+ p_2^2)-\frac{k\cos(\theta)}{r^2}$$
|
The answer given by Harry49 is correct but I think that you should also understand the more general approach to the problem. It is not always true that the Hamiltonian $H$ is equal to $T+V$; there are some features that the Lagrangian must obey and one should be careful before stating this equality. In the case above this is a true statement but in general what one will have is that
$$H := \sum_i\dot{q}_i(q_i,p_i,t)\,p_i - L(q_i,\dot{q}_i(q_i,p_i,t),t)\tag{1}$$
$$p_i := \frac{\partial L}{\partial \dot{q}_i}$$
So, we have the generalized coordinates $r$ and $\theta$ such that there will be generalized momentum for each
$$p_r := \frac{\partial L}{\partial \dot{r}} = m\dot{r}$$
$$p_{\theta} := \frac{\partial L}{\partial \dot{\theta}} = mr^2\dot{\theta}$$
Then we note that there are bijections between the generalized velocities and the generalized momenta, so that we can write $\dot{r} = \dot{r}(p_r)$ and $\dot{\theta} = \dot\theta(p_\theta, r)$. We then substitute these into $(1)$, in order to get
$$H = \frac{p_r^2}{m}+\frac{p_\theta^2}{mr^2} - \left( \frac{m}{2}\left(\frac{p_r^2}{m^2}+r^2\frac{p_\theta^2}{m^2r^4}\right)+ \frac{k\cos(\theta)}{r^2} \right)$$
$$H = \underbrace{\frac{p_r^2}{m}-\frac{p_r^2}{2m}}_{p_r^2/2m} + \underbrace{\frac{p_\theta ^2}{mr^2} - \frac{p_\theta^2}{2mr^2}}_{p_\theta^2/2mr^2} - \frac{k\cos(\theta)}{r^2}$$
$$H = \frac{p_r^2 }{2m}+\frac{p_\theta^2}{2mr^2} - \frac{k\cos(\theta)}{r^2}$$
So we get the $r^2$ in the denominator. There are reasons for this: when you pass to generalized coordinates that are polar coordinates in order to simplify your treatment, you need to maintain the dimensions of the generalized velocities, and because $\dot\theta$ has dimensions of $s^{-1}$ you pick up the factor of $r$. There are also geometric reasons related to it. The $r^2$ in the denominator is just a consequence of changing to Hamiltonian mechanics while maintaining all these constraints and the geometrical/physical description of the motion.
The first part is indeed related to kinetic energy. This kinetic energy has a radial component and a polar component.
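If you want to check the Legendre transform $(1)$ mechanically, here is a minimal sympy sketch (my own illustration; the symbol names are mine, not from the exercise):

```python
import sympy as sp

m, k, r, theta, rdot, thetadot, pr, pth = sp.symbols(
    'm k r theta rdot thetadot p_r p_theta', positive=True)

# Lagrangian L = (m/2)(rdot^2 + r^2 thetadot^2) + k cos(theta)/r^2
L = sp.Rational(1, 2)*m*(rdot**2 + r**2*thetadot**2) + k*sp.cos(theta)/r**2

# Generalized momenta p_i = dL/d(qdot_i)
p_r_expr = sp.diff(L, rdot)        # m*rdot
p_th_expr = sp.diff(L, thetadot)   # m*r**2*thetadot

# Invert the bijections: velocities as functions of the momenta
sol = sp.solve([sp.Eq(pr, p_r_expr), sp.Eq(pth, p_th_expr)],
               [rdot, thetadot], dict=True)[0]

# Legendre transform H = sum qdot_i p_i - L
H = (sol[rdot]*pr + sol[thetadot]*pth - L).subs(sol)
print(sp.simplify(H))  # p_r**2/(2m) + p_theta**2/(2 m r**2) - k cos(theta)/r**2
```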
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2409631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
solution verification for biased dice I have the following problem for which I'm not sure my solution is correct:
A die is constructed in such a way that 1 dot occurs twice as often as each of the other outcomes. The probabilities of the other outcomes are mutually equal. The die is thrown twice.
Calculate the probability that the number of dots on the second throw is greater than the number of dots on the first.
My solution:
Let x be the probability for 1, and y the probability for anything else.
$$
\left\{
\begin{array}{c}
x=2y\\
x+5y=1
\end{array}
\right.
$$
I get that $x=\frac{2}{7}$ and $y=\frac{1}{7}$. I have four different scenarios for the dots - $(1, 1), (1, i), (i, j), (i, 1)$, where $2 \le i \le 6$ and $2 \le j \le 6$. I have denoted those cases $H_1, H_2, H_3 $ and $H_4$ respectively. For the probability of the desired event I'm using the formula for total probability:
$$P(A)=\sum_{i=1}^4P(H_i)P(A|H_i)=\frac{2}{7}\frac{2}{7}0+\frac{2}{7}\frac{5}{7}1+\frac{5}{7}\frac{5}{7}(\frac{10}{49})+\frac{5}{7}\frac{2}{7}0=\frac{740}{49^2} \approx 0.30$$
Now, is this correct and are there other ways to solve this problem?
|
I don't think the term $\frac57\frac57\frac{10}{49}$ is right. $\frac57\frac57$ is the probability that both are greater than $1$, but if that happens the probability that $i<j$ is quite a bit bigger than $\frac{10}{49}$.
The right way to approach this is that the probability of the second die showing more dots is equal to the probability of the first die showing more dots, so the probability you want is just half the probability that the two dice show different values. So work out the probability they show the same value, subtract from $1$, and halve.
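For what it's worth, a few lines of Python (my addition) confirm both the direct enumeration and the symmetry shortcut, giving $20/49 \approx 0.408$:

```python
from fractions import Fraction

# Face probabilities: 1 has probability 2/7, faces 2..6 each have 1/7.
p = {1: Fraction(2, 7), **{face: Fraction(1, 7) for face in range(2, 7)}}

# Direct enumeration of P(second throw shows more dots than the first).
direct = sum(p[i] * p[j] for i in p for j in p if j > i)

# Symmetry shortcut from the answer: halve the probability of differing values.
p_same = sum(q * q for q in p.values())
shortcut = (1 - p_same) / 2

print(direct, shortcut)  # 20/49 20/49
```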
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2409878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to expand formula for covariance matrix? I have seen an expression for the covariance of not of just two random variables but the whole covariance matrix $\mathbf{K}$.
$$\mathbf{K} = E\bigg((\mathbf{x} - E(\mathbf{x}))(\mathbf{x} - E(\mathbf{x}))^{T}\bigg) $$ where $\mathbf{x}$ is a vector of random variables $(\mathbf{x_{1}}, .., \mathbf{x_{n}})$.
Presumably, in a more statistical sense the same is true for averages: let's denote them by $\langle \square \rangle = \frac{1}{n} \sum_{i=1}^{n}\big(\square\big)$.
$$\mathbf{K} = \bigg\langle(\mathbf{x} - \langle\mathbf{x}\rangle)(\mathbf{x} - \langle\mathbf{x}\rangle)^{T}\bigg\rangle $$ where $\mathbf{x}$ is a vector of vectors $(\mathbf{x_{1}}, .., \mathbf{x_{n}})$, e.g. $\mathbf{x_{1}} = \begin{bmatrix}1 \\ 2 \\ 3\end{bmatrix}$.
Can this be expanded to see that individual elements of $\mathbf{K}$ indeed make up covariates?
Instead of calculating each individual covariate (computationally) to build $\mathbf{K}$ it would be nice to have a single analytical expression.
|
The covariance between, say $x_j, x_k$ is given by
\begin{align}
\sigma_{jk} &= \text{Cov}(x_j, x_k) \\
&= \mathbb{E}[(x_j-\mu_j)(x_k-\mu_k)]\\
&= \mathbb{E}(x_j x_k)-\mu_j \mu_k \\
\end{align}
Clearly, when $k=j$ we obtain the variance.
$$\sigma_{jj} = \mathbb{E}[(x_j-\mu_j)^2]$$
For $k$-variables, set the covariance matrix as
$$
\bf{\Sigma} = (\sigma_{ij})=\begin{bmatrix}
\sigma_{11} & \sigma_{12} & \sigma_{13} & \dots & \sigma_{1j} \\
\sigma_{21} & \sigma_{22} & \sigma_{23} & \dots & \sigma_{2j} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\sigma_{i1} & \sigma_{i2} & \sigma_{i3} & \dots & \sigma_{ij}
\end{bmatrix}
$$
Clearly, as in the above definition of the variance, the entries of this matrix are found by taking expectations of the products $(x_i-\mu_i)(x_j-\mu_j)$; the diagonal gives the variances. It is left to you to see that the off-diagonal elements follow the same pattern.
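As a computational illustration (my addition, not part of the original answer), the single matrix expression can be checked against the entrywise covariances with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # 1000 samples of a 3-dimensional vector x

mu = X.mean(axis=0)
centered = X - mu

# Single expression: K = <(x - <x>)(x - <x>)^T>, averaging over the samples.
K = centered.T @ centered / len(X)

# Agrees with the entrywise covariances (ddof=0 matches the 1/n convention).
print(np.allclose(K, np.cov(X, rowvar=False, ddof=0)))  # True
```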
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2409989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Combinatorics: n people visit k exhibitions Edit: Excuse me, the numerator of the given solution should've been a rising factorial. That said, I still don't understand where it comes from.
The question is as follows :
n friends visit k exhibitions, each person visits only 1 exhibition. Find the number of possibilities if
b) Only the number of friends who goes to each exhibition matters. Neither the order nor who goes where matters.
Now, I have the solution which would be $$\frac{[n+1]^{k-1}}{(k-1)!} $$
But I don't really understand why that is so.
If I understand that correctly, that would be a stars and bars problem, but I don't seem to grasp the concept of it.
Any help would be greatly appreciated.
|
Arranging $n$ stars and $k-1$ bars is a way of modeling this situation. Each arrangement of those $n+k-1$ symbols gives us a set of numbers of people to go to each exhibition. The result of that counting problem is:
$$\binom{n+k-1}{k-1}=\frac{(n+k-1)!}{n!(k-1)!}=\frac{(n+1)(n+2)\cdots(n+k-1)}{(k-1)!},$$
which is exactly the given solution, with the rising factorial $[n+1]^{k-1}$ in the numerator.
Do we have a condition that each exhibition has to be attended by at least one friend? Do we even know that $n\geq k$?
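To see concretely that the stars-and-bars count and the rising-factorial formula agree, here is a small brute-force check in Python (my addition, with illustrative values of $n$ and $k$):

```python
from itertools import product
from math import comb, factorial

def rising(a, m):
    """Rising factorial a (a+1) ... (a+m-1)."""
    result = 1
    for i in range(m):
        result *= a + i
    return result

n, k = 5, 3  # 5 friends, 3 exhibitions (illustrative values)

# Brute force: count the tuples of attendance numbers summing to n.
brute = sum(1 for counts in product(range(n + 1), repeat=k) if sum(counts) == n)

print(brute, comb(n + k - 1, k - 1), rising(n + 1, k - 1) // factorial(k - 1))
# 21 21 21 -- all three agree
```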
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that two summations are equivalent
Show that
$$
\sum_{i=1}^n \left(x_i - \bar{x}\right) \left(y_i - \bar{y}\right)
= \left(\sum_{i=1}^n x_i y_i\right) - n \bar{x} \bar{y}.
$$
My work is attached: [image of my attempt].
I'm stuck on what I should do next.
Any guidance in the right direction would be great!
|
You are correct. Recall that by definition $\overline{x}=\frac{1}{n}\sum_{i=1}^nx_i$ and $\overline{y}=\frac{1}{n}\sum_{i=1}^ny_i$.
Therefore by linearity,
$$\sum_{i=1}^n\overline{y}x_i=\overline{y}\sum_{i=1}^nx_i=n\overline{x}\overline{y},\quad
\sum_{i=1}^n\overline{x}y_i=\overline{x}\sum_{i=1}^ny_i=n\overline{x}\overline{y},\quad
\sum_{i=1}^n\overline{x}\overline{y}=\overline{x}\overline{y}\sum_{i=1}^n 1=n\overline{x}\overline{y}.$$
Can you take it from here?
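If you'd like a quick numerical sanity check of the identity itself before finishing the algebra (my addition, not needed for the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=10), rng.normal(size=10)
n = len(x)

lhs = np.sum((x - x.mean()) * (y - y.mean()))
rhs = np.sum(x * y) - n * x.mean() * y.mean()
print(np.isclose(lhs, rhs))  # True
```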
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Proving that $\{x\in\Bbb{R}\mid 1+x+x^2 = 0\} = \varnothing$ without the quadratic formula and without calculus I'm asked to prove that $\{x\in\Bbb{R}\mid 1+x+x^2 = 0\} = \varnothing$ in an algebra textbook.
The formula for the real roots of a second degree polynomial is not introduced yet. And the book is written without assuming any prior calculus knowledge so I can't prove this by finding the minimum and the limits as x approaches $\infty \text{ and} -\infty $.
So there has to be a simple algebraic proof involving neither the quadratic formula nor calculus but I'm stuck.
Here are some things I thought:
Method 1:
$1+x+x^2 = 0 \iff 1+x+x^2+x = x$
$\iff x^2+2x+1 = x$
$\iff (x+1)^2 = x $
And here maybe prove that there is no x such that $(x+1)^2 = x$ ???
Method 2:
$1+x+x^2 = 0$
$\iff x^2+1 = -x$
By the trichotomy law only one of these propositions hold: $x=0$ or
$x>0$ or $x<0$.
Assuming $x=0$:
$x^2+1= 0^2+1 = 0 +1 = 1$
$-x = - 0 = 0$
And $1\neq 0$
Assuming $x>0$:
$x>0 \implies -x < 0$
And $x^2+1 \ge 1 \text{ } \forall x$
With this method I have trouble proving the case $x<0$:
I thought maybe something like this could help but I'm not sure:
$x<0 \implies -x=|x|$
$x^2 = |x|^2$
And then prove that there is no x such that $|x|^2 + 1 = |x|$??
Can anyone please help me? Remember: No calculus or quadratic formula allowed.
|
$x=0$ is not a root, so divide by $x \ne 0$ and write the equation as:
$$
x+\frac{1}{x} = -1
$$
This requires $x$ to be negative for the LHS to be negative, but then $y=-x$ is positive and $\displaystyle y+\frac{1}{y} \ge 2$ by AM-GM, so $\displaystyle x+\frac{1}{x} \le -2 \lt -1\,$, therefore there are no real solutions.
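A quick numerical illustration of the AM-GM step (my addition):

```python
import numpy as np

# For x < 0, x + 1/x stays at or below -2, so it can never equal -1.
xs = -np.logspace(-3, 3, 1001)        # negative values spanning many magnitudes
print((xs + 1 / xs).max())            # roughly -2, nowhere near -1
```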
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 14,
"answer_id": 12
}
|
A tricky integration problem Given: $$f(x)=\int_{x}^{0}\frac{\cos(xt)}t\, dt.$$ What is $f'(x)?$
|
This is an improper integral: as $t$ approaches $0$, the numerator approaches $1$ while the denominator approaches $0$.
You must check that the limit doesn't diverge!
$$
f(x)=\lim_{c\rightarrow 0^{\pm}}\int_x^c\frac{\cos(xt)}{t}dt
$$
where you use $\pm$ on $0$ depending on the sign of $x$.
Therefore, near $0$, you're integrating something like $\int_0^\varepsilon\frac{dt}{t}$.
This, however, diverges, so $f(x)$ diverges. How do you take a derivative?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Baby Rudin Theorem 2.27c I feel like I'm missing something very simple here, but I'm confused about how Rudin proved Theorem 2.27(c):
If $X$ is a metric space and $E\subset X$, then $\overline{E}\subset F$ for every closed set $F\subset X$ such that $E\subset F$. Note: $\overline{E}$ denotes the closure of $E$; in other words, $\overline{E} = E \cup E'$, where $E'$ is the set of limit points of $E$.
Proof: If $F$ is closed and $F \supset E$, then $F\supset F'$, hence $F\supset E'$. Thus $F \supset \overline{E}$.
What I'm confused about is how we know $F \supset E'$ from the previous facts?
|
If $x$ is a limit point of $E$ then $x = \lim x_n$ for some sequence $x_n \in E \setminus \{x\}$. If $E \subseteq F$ then $x_n \in F \setminus \{x\}$ so we can also say that $x$ is a limit point of $F$. Therefore
$$ E' \subseteq F' \subseteq F. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 1
}
|
net convergence implies bounded? I just saw the following theorem:
Theorem Let $\alpha:[a,b] \to \mathbb{R}$ be a mapping. If the Riemann-Stieltjes integral $$I(f) := \int_a^b f(t) \, d\alpha(t)$$ exists for all continuous functions $f:[a,b] \to \mathbb{R}$, then $\alpha$ is of bounded variation.
in this answer.
but I'm confused by this step in the proof:
Since, by assumption, $I^{\Pi}(f) \to I(f)$ as $|\Pi| \to 0$ for all $f \in C[a,b]$, we have
$$\sup_{\Pi} |I^{\Pi}(f)| \leq c_f < \infty$$
Does he use the fact that if $a_n$ converges, then $\sup_n a_n<\infty$? But I have heard from someone that the convergence of Riemann sums is a kind of "net convergence"; does this kind of convergence have the same property as ordinary convergence?
|
You can take limits along one sequence of partitions with norms (i.e. the maximum of the lengths of the subintervals) tending to zero. There is no need to use nets here, since you are not proving the existence of limits of Riemann–Stieltjes sums.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving a nonlinear ODE $$y'y'' = ky^2$$
I need a closed-form expression, or at least an almost-closed form such as an inverse of an integral characterization. What could be the properties of its solutions?
|
Multiply both sides by $y'$ and integrate to get $y'^3=ky^3+C$. This integrates again to give a very ugly hypergeometric function (according to Wolfram Alpha). So just hope your boundary conditions make $C=0$, giving An aedonist's solution.
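A sympy sketch of the $C=0$ case (my addition; the solution it produces is a simple exponential):

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
y = sp.Function('y')

# C = 0 case: y'**3 = k*y**3, i.e. y' = k**(1/3) * y.
sol = sp.dsolve(sp.Eq(y(x).diff(x), k**sp.Rational(1, 3) * y(x)))
print(sol)  # y(x) = C1*exp(k**(1/3)*x)

# Sanity check against the original ODE y' y'' = k y**2.
f = sp.exp(k**sp.Rational(1, 3) * x)
print(sp.simplify(f.diff(x) * f.diff(x, 2) - k * f**2))  # 0
```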
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Sum of nonzero squares modulo p It is easy to prove that for prime $p$ every element of $\mathbb{Z} / p \mathbb{Z}$ can be written as the sum of two squares. An elementary proof is given here: Sum of two squares modulo p
How can we show that, provided further $p \geq 7$, any nonzero element of $\mathbb{Z}/ p \mathbb{Z}$ is the sum of two nonzero squares? I don't see how we could extend the counting argument used in the linked post to this case. Thanks
|
Adapting Mikhail Ivanov's argument to a slightly different but AFAICT equivalent question to fit here. Some of the elements appeared also in my answer to that question.
Every non-zero element of $\Bbb{Z}/p\Bbb{Z}$ is either a square or a non-square
If $a=b^2$ is a non-zero square, then, as $p>5$ we have
$$
a=b^2=(3b/5)^2+(4b/5)^2
$$
as a sum of two non-zero squares.
On the other hand, if $a$ is a non-zero non-square then $ab^2$ is a non-zero non-square for any $b\neq0$. Furthermore, we get all the non-squares in this way. If $a=x^2+y^2$ with $xy\neq0$, then $ab^2=(bx)^2+(by)^2$, so it suffices to show that we can write at least one of the non-squares as a sum of two non-zero squares. Let's pretend for one time's sake that $\Bbb{Z}/p\Bbb{Z}$ has an "order", so $0<1<2<\ldots<p-1$. Let $a$ be the smallest non-square in this order. Clearly $a>1$. It follows that $a-1=b^2$ is a square, where $b\neq0$. This implies that $a=1+b^2$ is a sum of two non-zero squares. Therefore so are all the other non-squares.
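A brute-force check of the statement for a few small primes $p \geq 7$ (my addition):

```python
def is_sum_of_two_nonzero_squares(a, p):
    squares = {pow(b, 2, p) for b in range(1, p)}   # non-zero squares mod p
    return any((a - s) % p in squares for s in squares)

# Verify the claim for a few primes p >= 7.
for p in (7, 11, 13, 17, 19, 23):
    assert all(is_sum_of_two_nonzero_squares(a, p) for a in range(1, p))
print("verified")
```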
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2410920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Complex structure through two involutions The standard algebraic definition of a complex structure $I$ is $I^2=-1$. On real pairs $(a,b)$ it is represented as $(a,b)\mapsto (-b,a)$. But what if we did not have negation of pairs, $-(a,b)=(-a,-b)\quad\Leftrightarrow\quad I^2=-1$, and had only the single real involution $a\mapsto -a$? Is there a known definition of $I$ as a composition of two independent involutions on pairs, namely the swap $(a,b)\mapsto (b,a)$ and the one-entry sign change $(a,b)\mapsto (-a,b)$? They both look more primitive and hence more fundamental.
|
I'm not sure exactly what you're asking. On the surface, it looks like you're excited about the factorization $\begin{bmatrix}-1&0\\0&1\end{bmatrix}\begin{bmatrix}0&1\\1&0\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}0&-1\\1&0\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}-b\\a\end{bmatrix}$.
Of course, there is an equally plausible decomposition using complex conjugation rather than negation of the real numbers $(a,b)\mapsto (a,-b)$ which would yield a factorization
$\begin{bmatrix}0&1\\1&0\end{bmatrix}\begin{bmatrix}1&0\\0&-1\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}0&-1\\1&0\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}-b\\a\end{bmatrix}$.
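For concreteness, a two-line numpy check (my addition) that the composition of the two involutions squares to $-I$:

```python
import numpy as np

S = np.array([[0, 1], [1, 0]])   # swap: (a, b) -> (b, a)
N = np.array([[-1, 0], [0, 1]])  # one-entry sign change: (a, b) -> (-a, b)

J = N @ S                        # composition: (a, b) -> (-b, a)
print(J)                         # [[ 0 -1], [ 1  0]]
print(J @ J)                     # [[-1  0], [ 0 -1]] = -I, a complex structure
```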
We can play these games all day with different involutions. The question is: do these factorizations, which look special (fundamental, primitive) to us, give us any insights?
Personally I don't see any. The "specialness", as far as I can see, is just an artifact of the basis we are working with. The product of these particular involutions is not fundamentally different from the product of any two other involutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is the span of any 3 n-dimensional, linearly independent vectors, $\mathbb R^3$? I am not sure if the span of a set of any 3 ($n$-dimensional) vectors that are linearly independent is $\mathbb R^3$.
I think, since any two 2-dimensional vectors that are independent always span $\mathbb R^2$, that if the dimension of those vectors is 3 or more, they should span a space that looks like $\mathbb R^3$, since 2 independent 3D vectors span a plane.
|
Any two $n$-dimensional vector spaces (over the same field) are isomorphic. It would be easy to write down an isomorphism: just send basis vectors to basis vectors, i.e. $\mathcal i:V\to W$ by $\mathcal i (v_i)=w_i$, where $\{v_1, \dots, v_n\}$ and $\{w_1, \dots, w_n\}$ are bases, and extend linearly.
Secondly, the span of $3$ linearly independent vectors is a $3$-dimensional vector space...
Therefore, any $n $-dimensional vector space (over $\mathbb R $) is isomorphic to $\mathbb R^n $. In particular, the span of $3$ linearly independent vectors (over $\mathbb R $)is isomorphic to $\mathbb R^3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Integration by parts to prove a function is constant a.e. Let $(a,b)$ be an interval on $\mathbb{R}$. Let $f \in L^1(a,b)$. Assume that $$
\int_a^b f(x)g'(x)\, dx =0
$$
for all $C^1$ functions $g$ with support compactly contained in $(a,b)$. Prove that there is a constant $c$ such that $f(x)=c$ for almost every $x \in (a,b)$.
My thought was to use integration by parts so as to have $$
\int_a^b g(x)df(x)=0
$$
but since $f(x)$ is only integrable, it does not seem to work.
Any help/hint is appreciated!
|
Hint: If you happen to know that
$$ \int_a^b f(x) h(x) dx = 0 , \ \ \forall h\in C_c((a,b)): \int_a^b h\; dx=1$$
implies that $f$ vanishes,
then you may reduce to this situation by considering the difference
of two such $h$'s. One of them will give rise to the constant $c$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Function between two measurable functions is measurable Let $f$ be a function on $\mathbb{R}^n$. Assume that for any $\epsilon>0$, there exist measurable functions $g, h \in L^1(\mathbb{R}^n)$ such that $g(x) \leq f(x) \leq h(x)$ for all $x \in \mathbb{R}^n$ and $$
\int_{\mathbb{R}^n} (h(x)-g(x))\, dx < \epsilon
$$
Prove that $f$ is measurable on $\mathbb{R}^n$ and $f \in L^1(\mathbb{R}^n)$.
My first thought was to show that $f(x)$ is a pointwise limit of measurable functions; but since all information we have about $f(x)$ is in $L^1$, it would be hard to consider pointwise limits.
My problems are:
(1) I am not sure how to prove $f$ is measurable. To establish measurability of $f(x)$, I need to go through the definition that $\{x: f(x)<c\}$ is measurable for all $c$. Now $\{x: f(x)<c\} = \{x: \exists h \, \text{measurable s.t.}\, f(x)\leq h(x)<c\}$, and the trouble is I am not sure how to express the latter as a countable union of measurable sets.
(2) I was wondering if the following proof for $f \in L^1$ is correct: Assuming $f$ is measurable, we want to show that $f \in L^1(\mathbb{R}^n)$. For any $n\in \mathbb{N}$, there are $h_n, g_n \in L^1$ with $h_n(x)\leq f(x) \leq g_n(x)$ for all $x$ and $\int |f(x)-h_n(x)|\, dx = \int f(x)-h_n(x)\, dx \leq \int g_n(x)-h_n(x)\, dx < 1/n$. Hence $\|f-h_n\|_{L^1} < 1/n$, and thus $f \in L^1$ by completeness of $L^1$.
In summary, I appreciate any help/hint on measurability of $f(x)$, and check on if the proof in (2) is correct. Thank you!
|
(1) Measurability of $f(x)$. Thanks to the hint of @Robert Israel
Let $g_n, h_n \in L^1$ be such that $h_n(x) \leq f(x) \leq g_n(x)$ for all $x$ and $\int g_n(x)-h_n(x)\, dx < 1/n$. Let $h(x) = \limsup h_n(x)$ and $g(x) = \liminf g_n(x)$, so $h(x) \leq f(x) \leq g(x)$ for all $x$, and $h, g$ are measurable. Moreover, by Fatou's Lemma, $\int g(x)-h(x)\, dx \leq \liminf \int g_n(x)-h_n(x)\, dx =0$, so $g(x)=h(x)$ a.e. Thus $g(x)=h(x)=f(x)$ a.e., so $f$ is measurable.
(2) $f \in L^1$. The original proof is correct, and an easier version is pointed out by @Bungo.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Find A and B in this limit Can you find $a$ and $b$? In how many ways can I find them?
$$\lim_{x\to0} \frac{a+\cos(bx)}{x^2}=-8$$
|
For
$$\lim_{x\to 0}\frac{a+\cos bx}{x^2}=-8,$$
note that, for small $x$, $\cos x \approx 1-\frac{x^2}{2}$. Therefore
$$\dfrac{a+\cos bx}{x^2} \approx \dfrac{a+1-\frac{(bx)^2}{2}}{x^2}.$$
If $a+1 \ne 0$, then $\dfrac{a+1-\frac{(bx)^2}{2}}{x^2}$ diverges as $x \to 0$. Therefore, for the limit to exist, we must have $a+1=0$, or $a = -1$.
The expression then becomes
$$\dfrac{a+1-\frac{(bx)^2}{2}}{x^2} =\dfrac{-\frac{(bx)^2}{2}}{x^2} =-\frac{b^2}{2}.$$
If this is $-8$, then $b^2 = 16$, so $b = \pm 4$; since cosine is even, both signs give the same function, so take $b=4$.
Therefore $a = -1,\ b=4$.
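A one-line sympy confirmation (my addition):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((-1 + sp.cos(4 * x)) / x**2, x, 0))  # -8, confirming a = -1, b = 4
```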
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Partial Fraction Decomposition with Complex Number $\frac{1}{z^2 - 2i}$. How do I decompose the fraction
$$\dfrac{1}{z^2 - 2i}$$
into partial fractions? I understand how to do partial fraction decomposition with real numbers, but I am unsure of how to do it with complex numbers. I attempted to find examples online, but all examples are with real numbers -- not complex.
I would greatly appreciate it if people could please take the time to demonstrate this.
|
Note that $z^2-2i=(z+\sqrt{2i})(z-\sqrt{2i})$ and $\sqrt{2i}=\sqrt{2}e^{i\pi/4}=1+i$. To simplify, let $b=1+i$, then $$\frac{1}{z^2-2i}=\frac{1}{(z+b)(z-b)}$$ From here it actually doesn't matter if you regard $b$ as real or complex, the process to find the partial fractions is the same as long as the terms are linear in $z$. So we let $$\frac{1}{(z+b)(z-b)}=\frac{A}{z+b}+\frac{B}{z-b}$$ for some $A,B\in \mathbb C$. Adding the two fractions on the right hand side we get that $$A(z-b)+B(z+b)=1$$ and so $$A+B=0$$
$$-bA+bB=1$$ which has solution $$A=-\frac{1}{2b}$$ $$B=\frac 1{2b}$$
Plugging in the original $b=1+i$ we have that $$\frac{1}{2b}=\frac12\frac 1{(1+i)}\frac{(1-i)}{(1-i)}=\frac 14(1-i)$$ Therefore $$\frac{1}{z^2-2i}=-\frac{\frac 14(1-i)}{z+1+i}+\frac{\frac 14(1-i)}{z-1-i}.$$ As you can see the process for computing the partial fraction coefficients with complex rationals is equivalent to that of real numbers.
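You can confirm the result symbolically as well (my addition): recombining the two partial fractions recovers the original expression.

```python
import sympy as sp

z = sp.symbols('z')
A = -(1 - sp.I) / 4
B = (1 - sp.I) / 4

# Recombine the partial fractions and compare with the original expression.
decomposition = A / (z + 1 + sp.I) + B / (z - 1 - sp.I)
print(sp.simplify(decomposition - 1 / (z**2 - 2*sp.I)))  # 0
```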
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Independent odds, am I (+ friend) seeing this wrong or is there a mistake in the practice exam? I found this exercise in a practice exam:
Any student has a 90% chance of entering a University. Two students
are applying. Assuming each student’s results are independent, what is
the probability that at least one of them will be successful in
entering the National University?
A. $0.50$
B. $0.65$
C. $0.88$
D. $0.90$
E. $0.96$
I think the answer is something different than the answers above, namely $0.99$.
$0.01 = (0.1 \times 0.1)$ is the chance of neither, so $1 - 0.01$ must be $0.99$ right? But it's not part of the possible answers.
Other way: $(0.9 \times 0.9) + (0.9 \times 0.1) + (0.1 \times 0.9) = 0.99$
Am I missing something here?
|
That's the standard way of doing it:
$P(\text{At Least One}) = 1 - P(\text{Both Fail}) = 1 - 0.1 \times 0.1 = 1 - 0.01 = 0.99$
And the not so standard way:
$P(1\text{ in}) + P(2\text{ in Without }1) = 0.9 + (0.9 \times 0.1) = .99$.
Or
$$\begin{align}
P(1\text{ in Without }2) + P(2\text{in Without }1) + P(1\text{ in And }2\text{ in})
&= 0.9 \times 0.1 + 0.9 \times 0.1 + 0.9 \times 0.9 \\
&= 0.09 + 0.09 + 0.81 = .99
\end{align}$$
No matter how we cut it, you are right. They are wrong.
(It's probably just a typo.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 4,
"answer_id": 0
}
|
Better way to reduce $17^{136}\bmod 21$? What I have done:
Note $17\equiv -4$ mod 21, and $(-4)^2 \equiv 5$ mod 21. So $17^{136} \equiv (-4)^{136} \equiv 5^{68}$ mod 21. Also note $5^2 \equiv 4$ mod 21 and $4^3 \equiv 1$ mod 21, so $5^{68} \equiv 4^{34} \equiv (4^3)^{11}\cdot4 \equiv 4$ mod 21. I feel this is rather complicated, and there should be a better way.
|
You could also reduce modulo each of the factors of $21$ and then use the Chinese Remainder Theorem to recover the result modulo $21$.
$$
17^{136} \equiv 2^{136} \equiv (2^2)^{68} \equiv 1^{68} \equiv 1 \pmod{3}
$$
and
$$
17^{136} \equiv 3^{136} \equiv 3^{6 \cdot 22 + 4} \equiv (3^6)^{22} \cdot 3^4 \equiv 1^{22} \cdot 3^4 \equiv 3^4 \equiv 9^2 \equiv 2^2 \equiv 4 \pmod{7} .
$$
Solving the system $x \equiv 1 \pmod{3}$ and $x \equiv 4 \pmod{7}$, we get $x \equiv 4 \pmod{21}$. Therefore $17^{136} \equiv 4 \pmod{21}$.
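A quick machine check of the computation (my addition), using Python's three-argument `pow`:

```python
# Final answer and the two CRT pieces.
print(pow(17, 136, 21))                   # 4
print(pow(17, 136, 3), pow(17, 136, 7))   # 1 4
```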
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Does it follow that $\sup\emptyset = 0$ if the domain is $\mathbb{R}_{\geq 0} \cup \{\infty\}$? I am currently working on a paper where I am taking $\min$s of $\max$s over sets of non-negative real numbers including positive infinity, i.e. my equations look something like $\min\max\{x_1,\ldots,x_n\}$ where $x_i \in \mathbb{R}_{\geq 0} \cup \{\infty\}$.
However, for technical reasons, I want it that $\min\emptyset = \infty$ and $\max\emptyset = 0$, so my idea is to use $\inf$ and $\sup$ instead of $\min$ and $\max$ (since for finite sets $\min$ is the same as $\inf$ and $\sup$ the same as $\max$).
So my question is: Does it follow that $\sup\emptyset = 0$ if the domain is $\mathbb{R}_{\geq 0} \cup \{\infty\}$?
|
Yes: Every number is an upper bound of the empty set, and $0$ is the least such number in your domain.
(By contrast, $\max\emptyset$ doesn't exist by the usual definitions. If you really wanted to define $\max\emptyset$, you would have to extend the definition of $\max$ in a way that risks being misleading.)
If your concern is that the equation $\sup\emptyset=0$ tacitly depends on the domain, you could clarify by abbreviating $[0,\infty]=\mathbb R_{\geq0}\cup\{\infty\}$ and then writing $\sup_{[0,\infty]}\emptyset=0$. This would distinguish the equation from the more familiar result $\sup_{[-\infty,\infty]}\emptyset=-\infty$. For intuitive explanations of the latter, see Infimum and supremum of the empty set and related questions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2411919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate a limit using l'Hospital rule Evaluate $$\lim_{x\to0} \frac{e^x-x-1}{3(e^x-\frac{x^2}{2}-x-1)^{\frac{2}{3}}}$$
I tried to apply l'Hospital rule in order to get the limit to be equal to
$$\lim_{x\to0}\frac{e^x-1}{2(e^x-\frac{x^2}{2}-x-1)^{-\frac{1}{3}}(e^x-x-1)}$$
but the new denominator has an indeterminate form itself and by repeatedly applying l'Hospital rule, it doesn't seem to help... This is where I got stuck.
|
You can use $e^x=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}+o(x^4)$:
$$\qquad{\lim_{x\to0} \frac{e^x-x-1}{3(e^x-\frac{x^2}{2}-x-1)^{\frac{2}{3}}}=\\
\lim_{x\to0} \frac{(1+x+\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}+o(x^4))-x-1}{3((1+x+\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}+o(x^4))-\frac{x^2}{2}-x-1)^{\frac{2}{3}}}=\\
\lim_{x\to0} \frac{\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}+o(x^4)}{3(\frac{x^3}{6}+\frac{x^4}{24}+o(x^4))^{\frac{2}{3}}}=\\
\lim_{x\to0} \frac{x^2(\frac{1}{2}+\frac{x}{6}+\frac{x^2}{24}+o(x^2))}{3(x^3(\frac{1}{6}+\frac{x}{24}+o(x)))^{\frac{2}{3}}}=\\
\lim_{x\to0} \frac{x^2(\frac{1}{2}+\frac{x}{6}+\frac{x^2}{24}+o(x^2))}{3x^{2}(\frac{1}{6}+\frac{x}{24}+o(x))^{\frac{2}{3}}}=\\
\lim_{x\to0} \frac{\frac{1}{2}+\frac{x}{6}+\frac{x^2}{24}+o(x^2)}{3(\frac{1}{6}+\frac{x}{24}+o(x))^{\frac{2}{3}}}=\\
\frac{\frac{1}{2}}{3(\frac{1}{6})^{\frac{2}{3}}}}
$$
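A symbolic cross-check (my addition; the value simplifies to $6^{-1/3}\approx 0.5503$):

```python
import sympy as sp

x = sp.symbols('x')
num = sp.exp(x) - x - 1
den = 3 * (sp.exp(x) - x**2/2 - x - 1)**sp.Rational(2, 3)
print(sp.limit(num / den, x, 0, '+'))   # 6**(-1/3), about 0.5503
```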
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Why is this deterministic variant of Miller-Rabin not working? I am using this paper as a reference.
The Miller-Rabin test, as classically formulated, is non-deterministic -- you pick a base $b$, check if your number $n$ is a $b$-strong probable prime ($b$-SPRP), and if it is, your number is probably prime (repeat until "confident.")
A deterministic variant, assuming your number $n$ is below some bound (say $n<2^{64}$), is to pick a small number of bases $b$, and check if $n$ is a $b$-SPRP relative to each of those bases. There seems to be a bit of a sport to finding very small sets of bases, so as to make this process as fast as possible.
In particular, the cited reference declares a theorem of Jaeschke and Sinclair, that
If $n < 2^{64}$ is a $b$-SPRP for $b\in\{2, 325, 9375, 28178, 450775,
9780504, 1795265022\}$, then $n$ is a prime.
It doesn't state any extra hypotheses on $n$, or on what it means to be a $b$-SPRP. However, the classical formulation of Miller-Rabin only talks about $n$ being a $b$-SPRP when $b\leq n-2$, whereas the theorem above seems to allow $n<b$.
In particular, I have found (purely by accident) that $n=13$ does not satisfy the above criterion, meaning that as stated it gives wrong answers, and I don't know why (so I can't predict more of them).
So the question: Is this a shortened form of a proper theorem, where I should only be checking the values of $b$ where $b\leq n-2$? Is this an error in the paper? Am I just crazy?
For the sake of completeness, the definition of $b$-SPRP I am using is the one given in the paper:
Factor $n-1$ as $2^sd$, where $s$ and $d$ are nonnegative integers and $d$ is odd. Then $n$ is a $b$-SPRP iff $b^d\equiv 1 \pmod{n}$ or, for some $r$ with $0\leq r < s$, $\left(b^d\right)^{2^r}\equiv -1 \pmod{n}$.
Not a duplicate of: Bases required for prime-testing with Miller-Rabin up to $2^{63}-1$ seems to lead to the same questions (they don't address the issue of when $n<b$; it's just irrelevant there) and uses bases so small it doesn't matter.
|
If you look at Best known SPRP base sets, you can see the remark "Depending on your Miller-Rabin implementation, you may need to take a ← a mod n." and "When the witness a equals 0, the test should return that n is prime." The latter is saying we skip that test when n divides the base.
This is especially critical for making sense of the smaller base sets which generally have very large bases.
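Here is a sketch in Python of the repaired test (my own illustration, not code from the paper or the linked page); the `a %= n` reduction and the zero-witness rule are exactly what rescue $n = 13$, since $325 = 25 \cdot 13$:

```python
def is_sprp(n, a):
    """Strong probable prime test to base a, reducing the base mod n first."""
    a %= n
    if a == 0:               # n divides the base: treat the test as passing
        return True
    d, s = n - 1, 0          # write n - 1 = 2**s * d with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime_64(n):
    """Deterministic primality test for n < 2**64 using the quoted base set."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):   # small trial divisions first
        if n % p == 0:
            return n == p
    bases = (2, 325, 9375, 28178, 450775, 9780504, 1795265022)
    return all(is_sprp(n, a) for a in bases)

print(is_prime_64(13))      # True, thanks to the zero-witness rule for base 325
print([n for n in range(2, 60) if is_prime_64(n)])
```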
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Suppose we roll a fair $6$ sided die repeatedly. Find the expected number of rolls required to see $3$ of the same number in succession.
Suppose we roll a fair six sided die repeatedly.
Find the expected number of rolls required to see $3$ of the same number in
succession
From the link below, I learned that $258$ rolls are expected to see 3 sixes appear in succession. So I'm thinking that for 3 of the same (any) number, the expected number of rolls would be $258/6 = 43$. But I'm unsure how to show this and whether it really is correct.
How many times to roll a die before getting two consecutive sixes?
|
From Did's answer here, the probability generating function $u_0(s)=\mathbb{E}(s^T)$ for
the number of trials $T$ needed to get three consecutive values the same is
$$u_0(s)={s^3\over 36-30s-5s^2}.$$ Differentiating this and setting $s=1$ in the
derivative shows that $\mathbb{E}(T)=43.$
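The differentiation is easy to check by machine (my addition):

```python
import sympy as sp

s = sp.symbols('s')
u0 = s**3 / (36 - 30*s - 5*s**2)
print(sp.diff(u0, s).subs(s, 1))  # 43
```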
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
Why alternating power series can be truncated to created upper and lower bounds for functions? I often see functions that can be represented by alternating power series like this
\begin{align}
f(x) =\sum_{i=0}^\infty(-1)^i a_i x^i,
\end{align}
being upper and lower bounded by a truncated power series. For example, the function above can be bounded as follows:
\begin{align}
a_0 - a_1 x\le f(x)\le a_0 - a_1 x + a_2 x^2
\end{align}
However, I am curious about why this is so. I think it has something to do with how fast the approximation error decreases.
|
There is a theorem about constant alternating series $\sum_{k=0}^\infty (-1)^k c_k$ with positive $c_k$ monotonically decreasing to $0$. Such series are automatically convergent to a finite sum $s\in{\mathbb R}$. Furthermore the even partial sums $s_{2m}:=\sum_{k=0}^{2m}(-1)^k c_k$ are all larger than $s$, and the odd partial sums $s_{2m+1}$ are all smaller than $s$. This can be expressed in the following way: The truncation error $s-s_n$ is smaller in absolute value than the first neglected term, and has the same sign as this term.
If you have a power series $f(x):=\sum_{k=0}^\infty (-1)^k a_kx^k$ with positive $a_k$ this principle can be applied for positive $x$-values small enough to guarantee that $a_{k+1}x^{k+1}<a_k x^k$, or $a_{k+1}x<a_k$, for all $k$.
Consider as an example the series
$$e^{-x}=\sum_{k=0}^\infty(-1)^k{x^k\over k!}\ .$$
If, e.g., $x=5$ then the stated condition is only fulfilled for $k\geq6$. It follows that the principle in question is only applicable for this $x$ if you truncate the series after the sixth term.
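To illustrate with the exponential example (my addition): once the terms decrease, consecutive partial sums bracket $e^{-x}$, which is precisely the truncation bound described above.

```python
import math

def partial_sum(x, n):
    """Partial sum of the alternating series for exp(-x) through the x**n term."""
    return sum((-1)**k * x**k / math.factorial(k) for k in range(n + 1))

x = 0.5  # small enough that the terms decrease from the very first one
for n in range(1, 6):
    lo, hi = sorted((partial_sum(x, n), partial_sum(x, n + 1)))
    print(lo <= math.exp(-x) <= hi)  # True each time
```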
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The max norm $\|\cdot\|_{\infty}$ generates a metric
I want to prove that the max norm $\|\cdot \|_{\infty}$ generates a metric on the
space of continuous functions $C([a,b])$.
Well, I am not sure how to do this properly. I just know that if I have two functions $f$ and $g$, then I can measure the distance between them by $\|f - g\|$, and for $x \in [a,b]$ I can find the biggest possible distance between $f(x)$ and $g(x)$ by $\max_{x \in [a,b]}|f(x) - g(x)|$. Is this the metric, or do I have to do more or something else?
|
You are, I think, trying to say:
The norm $\|\cdot\|_\infty$ generates a metric on $C([a,b])$ by $$d(f,g) = \|f-g\| = \max_{x \in [a,b]} \{| f(x) -g(x) |\}$$
This is true, as any norm $\|\cdot\|$ on a linear space defines a metric in this way (norm of the difference).
We use $\|-f\| = \|f\|$ for symmetry, $\|f\| = 0 \implies f=0$ to ensure distinct functions are at nonzero distance (a metric rather than a pseudometric), and $\|f+g\| \le \|f\| + \|g\|$ for the triangle inequality of that metric.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
A question about asymptotic notations with sums. I need to prove that $$ \sum_{k=0}^{n-2017} \binom{n}{k} \binom{2k}{k} \frac{ \sqrt{k}}{2^{k}} = \Theta(3^{n})$$
I'm pretty sure it's straightforward to prove that it's $\Omega(3^{n})$, but I'm not sure how to prove the $O(3^{n})$ part. Maybe by using the square root and a derivative.
|
*For the upper bound, note that
$$\begin{align}
\sum_{k=0}^{n-2017} \binom{n}{k} \binom{2k}{k} \frac{ \sqrt{k}}{2^{k}}
&\leq \sum_{k=0}^{n} \binom{n}{k} \binom{2k}{k} \frac{ \sqrt{k}}{2^{k}}
\leq \sum_{k=0}^{n} \binom{n}{k} \frac{2^{2k}}{\sqrt{k}} \frac{ \sqrt{k}}{2^{k}}
\\&= \sum_{k=0}^{n} \binom{n}{k} 2^k
= (1+2)^n = 3^n
\end{align}$$
giving the upper bound. We used at the beginning the fact that
$$
\binom{2k}{k} \leq \frac{2^{2k}}{\sqrt{3k+1}}\leq \frac{2^{2k}}{\sqrt{k}}.
$$
*For the lower bound, we have that
$$\begin{align}
\sum_{k=0}^{n-2017} \binom{n}{k} \binom{2k}{k} \frac{ \sqrt{k}}{2^{k}}
&
\geq \sum_{k=0}^{n-2017} \binom{n}{k} \frac{2^{2k}}{2\sqrt{k}} \frac{ \sqrt{k}}{2^{k}}
\\&= \frac{1}{2}\sum_{k=0}^{n-2017} \binom{n}{k} 2^k
\geq \frac{1}{2}\sum_{k=0}^{n} \binom{n}{k} 2^k - \frac{2017}{2}\binom{n}{n-2017} 2^n\\
&= \frac{1}{2}(1+2)^n - \frac{2017}{2} \binom{n}{2017}2^n
= \frac{3^n}{2} - \Theta(n^{2017}2^n) = \Theta(3^n)
\end{align}$$
giving the lower bound. There, we used the basic result for any constant $k$, $\binom{n}{k} = \Theta(n^k)$ when $n\to \infty$.
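The binomial identity doing the work in both directions is easy to spot-check (my addition):

```python
from math import comb

# The key identity used in both bounds: sum_k C(n,k) 2**k = 3**n.
n = 30
print(sum(comb(n, k) * 2**k for k in range(n + 1)) == 3**n)  # True
```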
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sylow's theorem proof Sylow's theorems state that $p$-Sylow subgroups exist for a group $G$ of order $p^km$, where $p$ is prime and does not divide $m$. My question is how to prove that at least one subgroup of order $p^n$ exists for every non-negative integer $n \le k$.
|
Actually, there exists a normal subgroup of order $p^n$ for any group of order $p^k$, where $n\leq k$. We know that the center of a nontrivial $p$-group is nontrivial, then you can use induction to prove the fact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Inequality for Fibonacci to find an upper bound of harmonic Fibonacci series I want to find a sharp upper bound for $$\sum_{n=1}^{\infty}\frac{1}{F_n},$$ where $F_n$ is the $n$th term of the Fibonacci sequence.
I wrote a Matlab program to find an upper bound: $\sum_{n=1}^{10^6}\frac{1}{F_n}<4$.
Now my questions are: (1) Is there an inequality to find this?
(2) Does the series have a closed form?
$${F_n} = \frac{{{\varphi ^n} - {{( - \varphi )}^{ - n}}}}{{\sqrt 5 }}\to \\\sum_{n=1}^{\infty}\frac{1}{F_n}=\sum_{n=1}^{\infty}\frac{\sqrt 5}{{{\varphi ^n} - {{( - \varphi )}^{ - n}}}}\\\leq \sum_{n=1}^{\infty}\frac{\sqrt 5}{{{(\frac{{1 + \sqrt 5 }}{2} ) ^n} }}=\frac{\sqrt5}{1-\frac{1}{\frac{{1 + \sqrt 5 }}{2}}}\approx12.18\\$$I am thankful for a hint or solution which can bring a sharper upper bound .
|
Using Wolfram Mathematica, the answer is: [the original answer is an image of Wolfram Mathematica code]
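Since the original code is only an image, here is a short Python equivalent (my addition) that estimates the sum, the reciprocal Fibonacci constant:

```python
# Numeric estimate of the reciprocal Fibonacci constant.
a, b = 1, 1
total = 0.0
for _ in range(200):    # terms shrink geometrically; 200 is far more than enough
    total += 1 / a
    a, b = b, a + b
print(total)            # about 3.3598856662, comfortably below 4
```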
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2412980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How to find the Fourier Transform of a function that is periodic in an interval only? I know how to find the Fourier series of a periodic function (periodic for all inputs, by definition). I also know how to find the Fourier transform of non-periodic functions. But which formula should I use to calculate the Fourier transform of a function that is periodic on a bounded interval only, as shown here?
In this function, the period is $r$ and the function exists from $0$ to $M$.
|
Let
$$
\mathcal F\{f(t)\}=F(\omega)=\int_{-\infty}^\infty f(t)\mathrm e^{-i\omega t}\mathrm d t=\int_{0}^Mf(t)\mathrm e^{-i\omega t}\mathrm d t
$$
Let $f_0(t)$ be the base function on the interval $[0,\, r]$, and let $M=nr$ for some integer $n\ge 1$. We have
$$
f(t)=\sum_{k=0}^{n-1}f_0(t-kr)
$$
and then
$$
\begin{align}
F(\omega)&=\int_{0}^Mf(t)\mathrm e^{-i\omega t}\mathrm d t=\int_{0}^{nr} \sum_{k=0}^{n-1}f_0(t-kr)\mathrm e^{-i\omega t}\mathrm d t\\
&=\sum_{k=0}^{n-1}\int_{0}^{nr}f_0(t-kr)\mathrm e^{-i\omega t}\mathrm d t=\sum_{k=0}^{n-1}\int_{0}^{r}f_0(t)\mathrm e^{-i\omega t}\mathrm e^{-i\omega kr}\mathrm d t\\
&=F_0(\omega)\sum_{k=0}^{n-1}\mathrm e^{-i\omega kr}=F_0(\omega)\frac{1-\mathrm e^{-i\omega nr}}{1-\mathrm e^{-i\omega r}}\\
&=F_0(\omega)\frac{\sin \left(\frac{n \omega r}{2}\right)}{\sin \left(\frac{\omega r}{2}\right)}\,\mathrm e^{-i\frac{\omega r}{2} (n-1)}
\end{align}
$$
where $F_0(\omega)=\mathcal{F}\{f_0(t)\}$
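A numeric check of the Dirichlet-kernel factor, including the phase (my addition, with arbitrary test values):

```python
import numpy as np

# Geometric-sum identity: sum_k exp(-i w k r) equals the sine ratio times phase.
n, r, w = 4, 1.3, 0.7   # arbitrary test values
lhs = sum(np.exp(-1j * w * k * r) for k in range(n))
rhs = (np.sin(n * w * r / 2) / np.sin(w * r / 2)
       * np.exp(-1j * w * r * (n - 1) / 2))
print(np.allclose(lhs, rhs))  # True
```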
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Understanding SVD Notation Given any $m \times n$ matrix $M$, one can write
$$
M = U\Sigma V^T
$$
is the Singular Value Decomposition, where $U$ and $V$ are orthonormal and $\Sigma$ is a diagonal matrix.
Now, the same $M$ can be written as:
$$M = \sum_{i=1}^r u_i c_i v_i^T\,,$$ where $u_i$ is the $i$th column of $U$, $v_i$ is the $i$th column of $V$ and $c_i$ is the $i$th diagonal entry of $\Sigma$.
I don't understand why the second representation is the same as first one?
In general, how could matrix multiplication be expressed as a product of columns? I have learnt that matrices are multiplied row by column; this is the only way even professors do it, so how can matrix multiplication be expressed as only involving column vectors?
Sorry if the question is too basic, but I am having a lot of trouble understanding how people use column vectors in matrix multiplications.
|
First, just for clarity: $U$ is $m\times m$, $V$ is $n\times n$, and $\Sigma$ is $m\times n$.
We have, according to the first decomposition, that for any $1\leq i\leq m$ and $1\leq j\leq n$,
$$
M_{i,j} = (U\Sigma V^T)_{ij} = \sum_{k=1}^n (U\Sigma)_{ik} (V^T)_{kj}
= \sum_{k=1}^n (U\Sigma)_{ik} V_{jk}
= \sum_{k=1}^n \sum_{\ell=1}^m U_{i\ell}\Sigma_{\ell k} V_{jk}
$$
Now, since $\Sigma$ is an $m\times n$ diagonal matrix, $\Sigma_{\ell k}$ will be $0$ if $k\neq \ell$, and equal to $c_k$ otherwise. (Note also that it will only be non-zero for $k\leq r\stackrel{\rm def}{=} \min(n,m)$, since after that there is no $c_k$). Therefore, the inner sum can be simplified out, and we get
$$
M_{i,j} = \sum_{k=1}^r U_{ik} c_k V_{jk} \tag{$\dagger$}
$$
Now, let us look at the $(i,j)$-th entry of the other expression: since $u_k v_k^T$ is a matrix and $c_k$ a scalar, we have
$$
\left(\sum_{k=1}^r u_k c_k v_k^T\right)_{i,j}
= \left(\sum_{k=1}^r c_k (u_k v_k^T)\right)_{i,j}
= \sum_{k=1}^r c_k (u_k v_k^T)_{i,j}
$$
But what is $(u_k v_k^T)_{i,j}$? It is the product of the $i$-th entry of the vector $u_k$ and the $j$-th entry of the vector $v_k$, i.e. by definition it is $U_{ik} V_{jk}$. So overall,
$$
\left(\sum_{k=1}^r u_k c_k v_k^T\right)_{i,j}
= \sum_{k=1}^r c_k U_{ik} V_{jk} \tag{$\ddagger$}
$$
and we get the same RHS as in $(\dagger)$, showing what we wanted.
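Seeing it numerically may help (my addition): the rank-one sum reproduces the matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 3))

U, c, Vt = np.linalg.svd(M)       # c holds the diagonal entries of Sigma
r = min(M.shape)

# Rebuild M as a sum of rank-one terms u_i c_i v_i^T.
M_rebuilt = sum(c[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
print(np.allclose(M, M_rebuilt))  # True
```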
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Can every domain be exhausted by compact *connected* subsets?
Let $D \subset \Bbb R^n$ be an open connected set. I would like to exhibit an increasing sequence of compact connected subsets of $D$ converging to $D$.
For example, for a ball of radius $r$ we might take a sequence of closed balls inside it of radius $r-\frac 1n$.
User SteamyRoot showed that the usual closed exhaustion that generalizes the above example, given by $D_n = \{ x\in D: \mathrm{dist}(x, \partial D) \geq \frac 1n \}$ is not always connected (by considering the shape of eye-glasses), even before modifying it to be bounded. Considering an infinite sequence of glasses connected to one another in a row that are smaller and smaller shows that this will not be connected no matter how small we choose $n$.
We also cannot do this by taking the union of all closed squares of side length $2^{-n}$ contained entirely within $D$. This is again not connected because of the same counterexample (infinite glasses), even though this example does have such an exhaustion.
|
Fix a point $x_0 \in D$. For every subset $A$ of $D$ containing $x_0$, denote by $A'$ the component of $A$ containing $x_0$. Assume now we have $D_m$, an increasing sequence of open subsets of $D$ containing $x_0$ and with union $D$. Then the $D_m'$ (the $x_0$ components) also cover $D$. Indeed, consider $x \in D$. There exists a path $\gamma$ from $x_0$ to $x$ contained in $D$. Since the $D_m$'s cover $D$ and $\gamma$ is compact, there exists $m$ so that $\gamma \subset D_m$. Then $x\in D_m'$.
Take now any increasing sequence of compacts $K_m$ whose interiors cover $D$ ( for instance
$K_m= \{ x \in \mathbb{R}^n \ | \ ||x|| \le m \textrm { and } d(x, \mathbb{R}^n \backslash D ) \ge \frac{1}{m} \}$). Then
$L_m = K_m'$ will be an exhaustion of $D$ with connected compacts.
$\bf{Added:}$ In the construction above, we can also take $L_m$ to be the closure of the $x_0$ connected component of $\mathring{K_m}$ (the interior of $K_m$). The advantage is that the interior of $L_m$ is connected and its closure is $L_m$, so $L_m$ is better behaved.
In general, if $L$ is a compact subset of the open set $D$, consider a covering of the full space with a lattice of $\epsilon$ cubes so that the diagonal of the cubes $\sqrt{n} \cdot \epsilon < d(L, \mathbb{R}^n \backslash D)$. Consider $\tilde L$ to be the union of all the cubes that intersect $L$. Then $\tilde L$ is a compact and $L \subset \tilde L \subset D$. If, moreover, $L$ is connected then $\tilde L$ is connected.
If $L$ has connected interior $\mathring{L}$ and is the closure of its interior, then the same holds for $\tilde{L}$. So we can get pretty well behaved compact exhaustions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Why does finer mesh mean worse condition number? Suppose I am working on the finite element approximation of a problem. My understanding is that the condition number of the resulting algebraic system becomes worse when the mesh becomes finer. What is the reason for this?
|
It is not universally true that the condition number becomes worse with a finer mesh, for example take $10u = f$. There's no good reason to solve this via FEM, but you can do it, and the condition number doesn't increase with mesh refinement. Less trivially, if the operator in question is bounded and strictly coercive, then the condition number of the operator is finite and as our mesh refines the condition number of the matrix should converge to the condition number of the operator.
The problem is that bounded and strictly coercive operators rarely model interesting physics. Hence the condition number of the operator is unbounded and as we create better approximations to it, the condition number of the approximating operator becomes worse.
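As a concrete illustration (my addition, not part of the original answer), the stiffness matrix of the 1-D Poisson problem, the standard example of an unbounded operator, shows the typical growth:

```python
import numpy as np

# Condition number of the 1-D Poisson stiffness matrix grows like h**(-2).
for n in (10, 100, 1000):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    print(n, np.linalg.cond(A))   # grows roughly like n**2
```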
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
All isometric embeddings that map $\mathbb{R}$ to an inner product vector space $\mathbb{E}$ are of the form Let $\mathbb{E}$ be a normed vector space with inner product.
All isometric embeddings $f:\mathbb{R} \to \mathbb{E}$ are of the form
$f(t)=u+tv$, with $u,v \in \mathbb{E}$ and $\|v\|=1$.
This was an assignment due yesterday.
I kind of used a similar solved problem from the textbook I am using [Espaços Métricos, E. L. Lima], but found out it proved nothing, especially not that ALL isometric embeddings like $f$ are of the same form.
While trying to find useful information about this I found this paper on arXiv https://arxiv.org/pdf/1202.0503.pdf
In page 6, Lemma A.1 states that
"In an inner product space $(V, \|k\|)$ a sphere and a straight line can
coincide in at most two points."
|
Let $f$ be such an isometry, $u=f(0)$, $v=f(1)-u$. As $\|f(x)-f(y)\|=|x-y|$ for all $x,y\in\Bbb R$, this implies $\|v\|=1$.
Let $t\in\Bbb R$ be arbitrary.
Let $w=f(t)-(u+tv)$. Then
$$\begin{align}t^2&=\|f(t)-f(0)\|^2\\&= \langle tv+w,tv+w\rangle\\&=t^2\|v\|^2+2t\langle v,w\rangle+\|w\|^2\\&=t^2+2t\langle v,w\rangle+\|w\|^2\end{align}$$
so that
$$\tag1 2t\langle v,w\rangle+\|w\|^2=0.$$
Also,
$$\begin{align}(t-1)^2&=\|f(t)-f(1)\|^2\\&= \langle (t-1)v+w,(t-1)v+w\rangle\\&=(t-1)^2\|v\|^2+2(t-1)\langle v,w\rangle+\|w\|^2\\&=(t-1)^2+2(t-1)\langle v,w\rangle+\|w\|^2\end{align}$$
so that
$$\tag2 2(t-1)\langle v,w\rangle+\|w\|^2=0 $$
Subtracting $(1)$ and $(2)$, we find $\langle v,w\rangle = 0$.
Plugging that back into $(1)$, we find $w=0$, i.e.,
$$ f(t)=u+tv.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If almost-periodic function is not identically zero, then it is not in L2 I have an $\mathbb{R}_+ \to \mathbb{R}$ function $f(t)$, which is a combination of sums and products of $\sin$ and $\cos$ functions of incommensurable frequencies. Thus $f(t)$ is a quasiperiodic function, or, more generally, almost-periodic function. The goal is to show that if there exists $t_0$ such that $f(t_0)\ne 0$, then $$\int_0^\infty{f^2(s)ds}=\infty.$$
My idea is to do it by contradiction. Assume that $$\int_0^\infty{f^2(s)ds}=C,$$ then $$\lim_{t\to\infty}{f(t)}=0.$$ Then for any $\epsilon>0$ there exists $t_\epsilon$ such that $|f(t)|<\epsilon$ for all $t>t_\epsilon$. Let us choose $\epsilon$ such that $|f(t_0)|>2\epsilon$. For almost-periodic functions it is known that for any $T$ there exists $\tau>T$ such that $|f(t_0)-f(\tau)|<\epsilon$, and hence $|f(\tau)|>\epsilon$. This yields a contradiction. Thus if an almost-periodic function is not identically zero, then it is not square-integrable.
Questions:
Q1. Is it correct that a combination of products and sums of periodic functions is almost-periodic?
Q2. Is the proof correct?
Q3. Most probably this is something very well-known. What is a good reference to cite?
|
Your proof is wrong. An $L^2$ function does not necessarily have limit $0$ at $\infty$.
However, what is true is that given $\epsilon > 0$, an almost periodic function $f$ has arbitrarily large "almost periods" $\tau$ such that
$|f(t+\tau) - f(t)| < \epsilon$ for all $t \in \mathbb R$. You can use this to show that if $\int_a^b |f(t)|^2\; dt = c > 0$, there are infinitely many disjoint intervals $[a_n, b_n]$ with $\int_{a_n}^{b_n} |f(t)|^2 \; dt > c/2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How can this "illegal geometry" problem be possible? Using 2 triangles each with base of 8 and height of 3, and 2 trapezoids with heights of 3 on top, 5 on bottom and height of 5, these four figures can create an area with 64 units squared. However, when rearranged as a rectangle with 13 x 5=65, one additional unit squared seemed to have been created. How is this possible?
|
This is a classic illusion based on the Fibonacci number identity
$$
13 \times 5 = 1 + 8 \times 8 .
$$
The "diagonal" of the rectangle isn't one. The slopes on each segment don't agree. There's one unit of area between the "diagonals".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Evaluate $\int_{0}^{1}2^{x^2+x}\mathrm dx$
Question : Evaluate - $$\int_{0}^{1}2^{x^2+x}\mathrm dx$$
My Attempt : First I tried to evaluate the indefinite integral of $2^{x^2+x}$ in order to put in the limits $0$ and $1$ later on, but couldn't integrate it. Then I checked on WA and came to know that its antiderivative is not elementary. Now I moved on to using properties of definite integration such as $$\int_a^b f(x) \mathrm dx=\int_a^b f(a+b-x) \mathrm dx$$
But it couldn't help either. Can you please give me hint to proceed on this question?
P.S. - This is a high school level problem and therefore its solution shouldn't involve any special functions, such as Gaussian Integral etc.
Edit : I asked my teacher this question and basically this was an approximation based question. This was a MCQ type question which has an option "None of the above" and it was the correct answer, since the other options were made in such a way that can be rejected by bounding this integral between 2 functions. For example we can use $$2^{x^2+x}<2^{2x} ~; ~x\in (0,1)$$ and thus can be sure that this integral is less than $3/\ln(4)$.
Thanks all for devoting your time in my question!
|
Hint:
Using $u=\frac{2x+1}{2}$ yields an imaginary error function.
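As a quick sanity check of the bound mentioned in the question's edit, here is a short numerical sketch (assuming SciPy is available):

import numpy as np
from scipy.integrate import quad

# Numerically integrate 2^(x^2 + x) over [0, 1]
value, error = quad(lambda x: 2.0 ** (x ** 2 + x), 0.0, 1.0)
bound = 3.0 / np.log(4.0)  # integral of the majorant 2^(2x) over [0, 1]

print(value)  # roughly 1.94
print(bound)  # roughly 2.16, consistent with the bound 3/ln(4)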
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2413891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Why are Desmos and W/A not plotting the graph correctly for a rational function? I had this question to plot this rational fraction function:
$$y=\frac{x-2}{x^2-4}$$
With asymptotes at $x=2,-2$
Now, I did immediately realise that this could be simplified to:
$$y=\frac{1}{x+2}$$
But, when one immediately simplifies it into this form, wouldn't one lose one of the asymptotes at $x=2$?
When I went to check online, on both Desmos and WolframAlpha, both gave the graph of $\frac{1}{x+2}$, with a vertical asymptote only at $x=-2$ and none at $x=2$.
The way I thought was correct would have vertical asymptotes at both $x=2$ and $x=-2$.
I further justify myself by subbing in x = 2 into the original formula, which produces a divide by zero case.
Could someone point me in the right direction or is Desmos/Wolfram at fault here?
|
Let $f(x)=\frac{x-2}{x^2-4}$. Let us prove that there is an asymptote at $x = -2$:
$$\lim_{x\to -2^+}f(x)=\lim_{x\to -2^+}\frac{x-2}{x^2-4} = \lim_{x\to -2^+}\frac{1}{x+2} = +\infty,$$
$$\lim_{x\to -2^-}f(x)=\lim_{x\to -2^-}\frac{x-2}{x^2-4} = \lim_{x\to -2^-}\frac{1}{x+2} = -\infty.$$
However, there is actually no asymptote at $x = 2$. Your mistake is that you didn't check the limit:
$$\lim_{x\to 2}f(x)=\lim_{x\to 2}\frac{x-2}{x^2-4} = \lim_{x\to 2}\frac{1}{x+2} = \frac 14.$$
As you can see, the limit is not $\pm\infty$, which would be needed for it to be an asymptote. Actually, $f$ can be extended continuously:
$$g(x):=\begin{cases}
f(x),& x\neq 2\\
\lim_{t\to 2}f(t),& x= 2\\
\end{cases}$$ and immediately it follows that $g(x) = \frac{1}{x+2}$.
This explains why the graph of $f$ looks like the graph of $g$; the only difference is that one point must be erased from the graph: $(2,\frac 14)$. If you want to emphasize it, draw the graph of $g$ and mark the removed point $(2,\frac 14)$ with an open circle.
If you want a similar example, plot the function $x\mapsto \frac{\sin x}x$ and observe that there is no asymptote at $x = 0$.
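If you want to check these limits symbolically, a minimal SymPy sketch:

import sympy as sp

x = sp.symbols('x')
f = (x - 2) / (x ** 2 - 4)

print(sp.limit(f, x, 2))         # 1/4: finite, so the singularity at x = 2 is removable
print(sp.limit(f, x, -2, '+'))   # oo:  vertical asymptote at x = -2
print(sp.limit(f, x, -2, '-'))   # -oo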
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Uniformly continuity of a real valued function $f$
Let $f:\mathbb{R}\to \mathbb{R}$ be a function given by $f(x)=\sum_{k=1}^\infty \frac{g(x-k)}{2^k}$
where $g:\mathbb{R}\to \mathbb{R}$ is a uniformly continuous function such that the series converges for each $x$ belongs to $\mathbb{R}$.
Then show that $f$ is uniformly continuous.
How I show this. Please help me to solve this.
|
If $g$ is uniformly continuous in $\mathbb{R}$ then given $\epsilon>0$ there exist $\delta>0$ such that if $|x-y|<\delta$ then $|g(x)-g(y)|<\epsilon$.
Hence, if $|x-y|<\delta$, then $|(x-k)-(y-k)|=|x-y|<\delta$ and
$$|f(x)-f(y)|\leq \sum_{k=1}^\infty \frac{|g(x-k)-g(y-k)|}{2^k}\leq
\sum_{k=1}^\infty \frac{\epsilon}{2^k}=\epsilon,$$
that is $f$ is uniformly continuous in $\mathbb{R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Factors and primes I wanted to know if anyone could explain how to work out this question.
$n$ is a (natural) number. 100 is the $LCM$ of 20 and $n$. Work out two different possible values for $n$.
|
$100$ is the LCM of $20$ and $n$. Hence, $100$ must be a multiple of $n$, so we only need to look at divisors of $100$ as possible values of $n$.
Furthermore, divisors of $20$ will lead to $20$ as LCM of $n$ and $20$ and not $100$, so they can be ruled out.
We are left with $n = 25$, $n = 50$ and $n = 100$.
Edit: I would like to note that in THIS case, all three possibilities are solutions, however, this is not generally true. Don't forget to check the possibilities that are left over.
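A quick brute-force confirmation (Python 3.9+, where math.lcm is available):

from math import lcm

print([n for n in range(1, 101) if lcm(20, n) == 100])  # [25, 50, 100]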
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find $\lim\limits_{n\rightarrow \infty} |\frac{a_n}{b_n}|$ Suppose that $\lim\limits_{n\rightarrow \infty} |\frac{a_{n+1}}{a_n}| = \frac{1}{\alpha}$, $\lim\limits_{n\rightarrow \infty} |\frac{b_{n+1}}{b_n}| = \frac{1}{\beta}$ and $\alpha > \beta$. Does it implies that $\lim\limits_{n\rightarrow \infty} |\frac{a_n}{b_n}| = 0 ?$
I think it is correct because the condition means that increasing rate of $b_n$ greater than increasing rate of $a_n$. Then $\lim\limits_{n\rightarrow \infty} |\frac{a_n}{b_n}| = 0 $ no matter what initial value $a_0$ and $b_0$ are given.
|
We show that $|b_n/a_n|\to 0$ if and only if $\alpha<\beta$; swapping the roles of $(a_n)$ and $(b_n)$, this answers the question in the affirmative: with $\alpha>\beta$, indeed $|a_n/b_n|\to 0$.
If. Indeed, to be meaningful, it means that $\alpha,\beta>0$. Fix $\varepsilon>0$, hence there exists $n_0=n_0(\varepsilon)>0$ such that
$$
|a_{n+1}|\ge \left(\frac{1}{\alpha}-\varepsilon\right)|a_n| \,\,\text{ and }\,\,|b_{n+1}|\le \left(\frac{1}{\beta}+\varepsilon\right)|b_n|
$$
for all $n\ge n_0$. This implies
$$
\left|\frac{b_n}{a_n}\right|\le \frac{\frac{1}{\beta}+\varepsilon}{\frac{1}{\alpha}-\varepsilon}\,\cdot \,\left|\frac{b_{n-1}}{a_{n-1}}\right|\le \cdots \le \left(\frac{\frac{1}{\beta}+\varepsilon}{\frac{1}{\alpha}-\varepsilon}\right)^{n-n_0}\,\cdot \,\left|\frac{b_{n_0}}{a_{n_0}}\right|
$$
In particular, if $\alpha<\beta$, set $\varepsilon:=\frac{1/\alpha-1/\beta}{3}$ then
$$
\frac{\frac{1}{\beta}+\varepsilon}{\frac{1}{\alpha}-\varepsilon}<1 \implies \lim_{n\to \infty}\left|\frac{b_n}{a_n}\right|=0.
$$
Only if. If $\alpha \ge \beta$ then set $a_n:=\alpha^{-n}$ and $b_n:=\beta^{-n}$ for all $n$. Hence
$$
\left|\frac{b_n}{a_n}\right|= \left(\frac{\alpha}{\beta}\right)^n\ge 1
$$
for all $n$, so it does not converge to $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Rolle's theorem with a supplementary condition I have to find the types of roots (i.e real or complex) of the equation $$ 11^x + 13^x+ 17^x -19^x = 0 \dots (1) $$
If $$ f(x) = 11^x + 13^x+ 17^x -19^x = 0 $$ , then obviously $ f'(x)= 0 $ has a 0 solution, and indeed every derivative of $f(x)$ has a 0 solution.
In this context a question arises in my mind : if all the conditions of Rolle's Theorem are satisfied for a function $g(x) $ in $[a,b]$, and in addition if $g'(c)=0$ ,then is it necessary that $c$ lies between $a$ and $b$ ?
If it's true, then we can conclude that $f(x)=0 $
has more than 2 real roots right?
Any insight ? Thank you.
|
if all the conditions of Rolle's Theorem are satisfied for a function $g(x) $ in $[a,b]$, and in addition if $g'(c)=0$ ,then is it necessary that $c$ lies between $a$ and $b$ ?
No, not at all. Consider for example, $g(x)=\sin(x)$ with $a=0$, $b=\pi$, and $c=\frac32\pi$.
Or (if you mean "strictly between $a$ and $b$"), consider $g(x)=\sin^2(x)$ with $a=0$ and $b=c=\pi$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability that no letter is in alphabetical order Given a random string of distinct letters, find the probability that none of the letters are in order. A letter is in order when every letter preceding it is of lower alphabetical value and every letter after it is higher.
having trouble with the combinatorial approach because there seems to be no easy way to avoid over-counting possibilities.
Example of none in order:
dbac
d: bac should precede it - not in order
b: a should precede it and cd should come after - not in order
a: bac should come after - not in order
c: ab should precede it, d should come after it - not in order
|
Let's replace letters with numbers.
Without loss of generality we can assume that they are the numbers $1,2,\cdots,n$.
We call $P(n)$ the sought number of permutations in which none of the numbers are ordered, according to your definition.
Then we shall have
$$ \bbox[lightyellow] {
\eqalign{
& P(n) = {\rm N}{\rm .}\,{\rm of}\,{\rm permutations}\;{\rm of}\left[ {1, \cdots ,n} \right]\;: \cr
& 1 = \prod\limits_{1\, \le \,k\, \le \,n} {\neg \left( {\prod\limits_{1\, \le \,j\, \le \,k - 1} {\left[ {x_{\,k - j} < x_{\,k} } \right]} \prod\limits_{1\, \le \,j\, \le \,n - k} {\left[ {x_{\,k} < x_{\,k + j} } \right]} } \right)} = \cr
& = \prod\limits_{1\, \le \,k\, \le \,n} {\left( {1 - \left( {\prod\limits_{1\, \le \,j\, \le \,k - 1} {\left[ {x_{\,j} < x_{\,k} } \right]} \prod\limits_{k + 1\, \le \,j\, \le \,n} {\left[ {x_{\,k} < x_{\,j} } \right]} } \right)} \right)} \cr}
} \tag{1}$$
where $[X]$ denotes the Iverson bracket.
Taking the complement of the above
$$ \bbox[lightyellow] {
\eqalign{
& Q(n) = n! - P(n) = {\rm N}{\rm .}\,{\rm of}\,{\rm permutations}\;{\rm of}\left[ {1, \cdots ,n} \right]\;: \cr
& 1 = \neg \prod\limits_{1\, \le \,k\, \le \,n} {\neg \left( {\prod\limits_{1\, \le \,j\, \le \,k - 1} {\left[ {x_{\,k - j} < x_{\,k} } \right]} \prod\limits_{1\, \le \,j\, \le \,n - k} {\left[ {x_{\,k} < x_{\,k + j} } \right]} } \right)} \quad \Rightarrow \cr
& \Rightarrow \quad 0 < \sum\limits_{1\, \le \,k\, \le \,n} {\left( {\prod\limits_{1\, \le \,j\, \le \,k - 1} {\left[ {x_{\,j} < x_{\,k} } \right]} \prod\limits_{k + 1\, \le \,j\, \le \,n} {\left[ {x_{\,k} < x_{\,j} } \right]} } \right)} \cr}
} \tag{2}$$
For a single product in the sum above to be greater than $0$, we need that all the terms lower than $x_k$ be before it, and all the higher ones come after it.
That means the permutation splits into two "separated" (non-overlapping) blocks.
In the matrix representation of the permutation it means that $x_k$ divides the matrix into two blocks which are permutations in their own. And since the permutation matrix has only a $1$ for each row and each column
then $x_k$ must be a fixed point.
We get the following situation: in the matrix representation, the permutation matrix splits into two diagonal blocks that meet at the fixed point $x_k$.
To count $Q(n)$ avoiding over-counting, we put that $x_k$ be the first fixed point. The block above is therefore accounted by $P(k-1)$,
while the block below is a general permutation. So
$$ \bbox[lightyellow] {
Q(n) = n! - P(n) = \sum\limits_{1\, \le \,k\, \le \,n} {\left( {n - k} \right)!\;P(k - 1)} = \sum\limits_{0\, \le \,j\, \le \,n - 1} {\left( {n - 1 - j} \right)!\;P(j)}
} \tag{3}$$
which means
$$ \bbox[lightyellow] {
P(n) = n! - \sum\limits_{0\, \le \,j\, \le \,n - 1} {\left( {n - 1 - j} \right)!P(j)}
} \tag{4}$$
This reproduces the sequence already indicated by Marko.
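For what it's worth, here is a small Python sketch that computes $P(n)$ from recurrence $(4)$ and cross-checks it against brute-force enumeration, encoding "in order" exactly as defined in the question:

from itertools import permutations
from math import factorial

def P(n, memo={0: 1}):
    # Recurrence (4): P(n) = n! - sum_{0 <= j <= n-1} (n-1-j)! P(j)
    if n not in memo:
        memo[n] = factorial(n) - sum(factorial(n - 1 - j) * P(j) for j in range(n))
    return memo[n]

def in_order(p, k):
    # position k is "in order": everything before is smaller, everything after is larger
    return all(x < p[k] for x in p[:k]) and all(x > p[k] for x in p[k + 1:])

def brute(n):
    return sum(1 for p in permutations(range(n))
               if not any(in_order(p, k) for k in range(n)))

for n in range(1, 8):
    assert P(n) == brute(n)
print([P(n) for n in range(1, 8)])  # [0, 1, 3, 14, 77, 497, 3676]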
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Matrix of the perpendicular Projection I am bit doubtful about my reasoning so I hope there's someone that will correct me if I'm wrong on the steps 1 to 3.
Given is:
$W = span(v_1, v_2)$
with:
$v_1 = \begin{bmatrix} 1\\0\\1\\0 \end{bmatrix}, v_2 = \begin{bmatrix} 1\\-1\\1\\-1 \end{bmatrix}$
Steps:
i)
I use the Gram-Schmidt process to find an orthonormal basis which, using $v_1$ as $b_1$(first vector), is
$W = span\{1/\sqrt{2}\begin{bmatrix}1\\0\\1\\0 \end{bmatrix};1/\sqrt{2}\begin{bmatrix}0\\-1\\0\\-1\end{bmatrix} \}$
ii)
I find the the Matrix of the orthogonal reflection relative to the canonical basis:
$M = 1 - 2n^Tn$, I could choose either of the two vector, and I choose $b_1$.
$M = I - 2(1/\sqrt{2})(1/\sqrt{2})\begin{bmatrix}1\\0\\1\\0 \end{bmatrix}\begin{bmatrix}1&0&1&0\end{bmatrix}$
$M = \begin{bmatrix}0&0&1&0\\0&1&0&0\\1&0&0&0\\0&0&0&1\end{bmatrix}$
iii)
I should compute the Matrix of the perpendicular projection relative to the canonical basis, $proj_w : V \to V$ on $W$.
I thought I should compute $<x,b_1>b_1 + <x,b_2>b_2$ with $b_j$ being a vector of the orthonormal basis found at step 1 and $x$ being $x =(x_1,x_2,x_3,x_4)$ and that should be it, but I'm very unsure.
|
Yes, your reasoning is correct for iii). The columns of the matrix are the images of the canonical basis, so the first column is
$$\langle e_1, b_1 \rangle b_1 + \langle e_1, b_2 \rangle b_2 = \frac{1}{\sqrt{2}} b_1$$
and similarly for $e_2, e_3, e_4$.
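A minimal NumPy sketch of the full projection matrix, using the orthonormal basis $b_1,b_2$ found in step i):

import numpy as np

b1 = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
b2 = np.array([0.0, -1.0, 0.0, -1.0]) / np.sqrt(2)
B = np.column_stack([b1, b2])

P = B @ B.T  # projection onto W: valid because the columns of B are orthonormal
print(P)     # first column is (1/2, 0, 1/2, 0) = (1/sqrt(2)) * b1, as above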
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Disjoint curves in a smooth manifold. I already asked the question here but it was in a too specific way, so I asked it again here for more visibility. How do we prove the following statement :
Let $M$ be a smooth connected manifold without boundary of dim $\geq 2$ and $(x_1, y_1,... , x_n,y_n )$ be $2n$ distinct points of $M$. Then there exist smooth curves $\gamma_i :[0,1] \rightarrow M$ such that $\gamma_i (0) =x_i$ and $\gamma_i(1) =y_i$ for all $i=1,...,n$ which don't intersect each other.
|
I would recommend trying to prove this by induction. Proving that a connected manifold is path-connected by a smooth path would be a very useful lemma.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Determine whether $\sum\limits_{n=1}^{\infty} (-1)^{n-1}(\frac{n}{n^2+1})$ is absolutely convergent, conditionally convergent, or divergent.
Determine whether the series is absolutely convergent, conditionally convergent, or divergent.
$$\sum_{n=1}^{\infty} (-1)^{n-1}\left(\frac{n}{n^2+1}\right)$$
Here's my work:
$b_n = (\dfrac{n}{n^2+1})$
$b_{n+1} = (\dfrac{n+1}{(n+1)^2+1})$
$\lim\limits_{n \to \infty}(\dfrac{n}{n^2+1}) = \lim\limits_{n \to \infty}(\dfrac{1}{n+1/n})=0$
Then I simplified $b_n - b_{n+1}$ in hopes of showing that the sum would be greater than or equal to $0$, but I failed (and erased my work so that's why I haven't included it).
I know the limit of $|b_n|$ is also 0, and I can use that for testing conditional convergence there, but I would run into the same problem for the second half of the test.
I'm having trouble wrapping my head around tests involving absolute values, or more specifically when I have to simplify them.
|
This definitely converges by the alternating series test. The AST asks that the unsigned terms decrease and have a limit of 0. In your case, the terms $\frac{n}{n^2+1}$ do exactly that, so it converges.
Now, which flavor of convergence?
If you take absolute values, the resulting series $\sum_n \frac{n}{n^2+1}$ diverges. You can probably get this quickest by limit comparison: terms are on the order of $1/n$. Also, the integral test here is pretty fast because you can see the logarithm.
To apply limit comparison, let's compare $\sum_n \frac{n}{n^2+1}$ to $\sum_n \frac{1}{n}$. Dividing a term in the first by a term in the second gives
$$
(\frac{n}{n^2+1})/(\frac{1}{n}) = \frac{n^2}{n^2+1}.
$$
Taking the limit gives $L=1$. Since $L>0$, both series "do the same thing." Since $\sum_n \frac{1}{n}$ diverges, so does $\sum_n \frac{n}{n^2+1}$.
Hence it converges conditionally because it converges, but the series of absolute values does not.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Suppose that 3 men and 3 women will be randomly seated at a round table having 6 evenly-spaced seats Suppose that $3$ men and $3$ women will be randomly seated at a round table having $6$ evenly-spaced seats (with each seat being directly opposite one other seat).
(a) What is the probability that all $3$ men will be seated next to one another?
(b) What is the probability that starting at the northernmost seat and going clockwise around the table, a 2nd man will be encountered before a 2nd woman?
(c) What is the probability that no two men are seated directly opposite of one another?
(d) Suppose that the $6$ people are named Ashley, Barney, Carly, David, Emma, and Fred. What is the probability that starting with Ashley and going clockwise, the 6 people are seated in alphabetical order?
Attempted Solution:
(a) There are $6!$ = $720$ ways to seat these $6$ people. There are 6 different ways you can seat all three men next to each other, if you consider them as a clump. Each clump of three has $3!$ = 6 different ways to be arranged. Thus, there is $6*3!*3!$ = $216$ different arrangements. This gives a probability of $216\over{720}$ = $.3$.
(b) I wasn't sure about this but I realized that in any situation, either a 2nd man is reached before a 2nd woman or a 2nd woman is reached before a 2nd man. This gives a probability of $.5$.
(c) After drawing a picture of the table, this appears to be the same situation as part (a), giving a probability of $.3.$ I just realized this assumption was incorrect. Now I am getting that there are 8 different arrangements when considering males and females as clumps, with $3!$ different ways of arranging each clump, giving $8*3!*3!$ = $288$. This gives a probability of $288\over{720}$ = $.4$.
(d) $6\over{6}$ * $1\over{5}$ * $1\over{4}$ * $1\over{3}$ * $1\over{2}$ * $1\over{1}$ = $.00833$.
Any corrections to my attempted solutions would be greatly appreciated.
|
All of your answers are correct. I will assume that only the relative order of the people matters, that the men are named Barney, David, and Fred, and that the women are named Ashley, Carly, and Emma.
What is the probability that all three men will be seated next to each other?
We seat Ashley. The remaining people can be seated in $5!$ ways as we proceed clockwise around the table.
For the favorable cases, we have two blocks of people to arrange. Since the blocks of men and women must be adjacent, this can be done in one way. The block of men can be arranged in $3!$ ways. The block of women can be arranged in $3!$ ways. Hence, the probability that all the men are seated next to each other is
$$\frac{3!3!}{5!} = \frac{3}{10}$$
What is the probability that starting at the northernmost seat and going clockwise around the table, a second man will be encountered before a second woman?
You made good use of symmetry. Nice solution.
What is the probability that no two men are seated directly opposite each other?
We count arrangements in which two men are seated directly opposite each other. There are $\binom{3}{2}$ ways to select the men. Once they are seated, there are $4$ ways to seat the third man relative to the already seated man whose name appears first alphabetically and $3!$ ways to seat the women as we proceed clockwise around the table from the third man. Hence, there are
$$\binom{3}{2}\binom{4}{1}3!$$
seating arrangements in which two men are opposite each other. Hence, the probability that no two men are opposite each other is
$$1 - \frac{\binom{3}{2}\binom{4}{1}3!}{5!} = 1 - \frac{3}{5} = \frac{2}{5}$$
Suppose that the six people are named Ashley, Barney, Carly, David, Emma, and Fred. What is the probability that starting with Ashley and going clockwise, the six people are seated in alphabetical order.
There is only one permissible seating arrangement. Hence, the probability that they are seated in alphabetical order is
$$\frac{1}{5!} = \frac{1}{120}$$
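All four answers can also be confirmed by brute force over the $6!$ labeled seatings. A small Python sketch for parts (a) and (c), where seats $0$–$5$ go clockwise, seat $i$ faces seat $i+3$, and labeling persons $0,1,2$ as the men is arbitrary:

from itertools import permutations

men = {0, 1, 2}  # persons 0, 1, 2 are the men
total = block = no_opp = 0
for s in permutations(range(6)):          # s[i] = person sitting in seat i
    total += 1
    seats = sorted(i for i in range(6) if s[i] in men)
    gaps = [(seats[(j + 1) % 3] - seats[j]) % 6 for j in range(3)]
    if max(gaps) == 4:                    # cyclic gaps {1,1,4} <=> the men form a block
        block += 1
    if not any(s[i] in men and s[i + 3] in men for i in range(3)):
        no_opp += 1

print(block / total)   # 0.3  (part a)
print(no_opp / total)  # 0.4  (part c)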
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2414941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$n \times n$ positive matrix with $a_{ij} a_{ji} = 1$ has an eigenvalue not less than $n$
$A$ is a real $n \times n$ matrix with positive elements $\{a_{ij}\}$. For all pairs $(i, j), a_{ij} a_{ji}=1$. Prove that $A$ has an eigenvalue not less than $n$.
|
Proof 1. By Perron-Frobenius theorem, $Av=\rho(A)v$ for some positive eigenvector $v$. Let $D=\operatorname{diag}(v)$ (the diagonal matrix whose diagonal is $v$), $e=(1,\ldots,1)^T$ and $B=D^{-1}AD$. Then $Be=\rho(A)e$. Since $B$ is also a positive matrix with $b_{ij}b_{ji}=1$ for all $i,j$, and $2\le b+\frac1b$ for every positive real number $b$, we have $n^2\le e^TBe=e^T\left(\rho(A)e\right)=n\rho(A)$ and the result follows.
Proof 2. For any (entrywise) nonnegative square matrix, we have (cf. Horn and Johnson, Topics in Matrix Analysis, 1/e, p.363, corollary 5.7.11)
$$
\rho\left[A^{(1/2)}\circ(A^T)^{(1/2)}\right]\le\rho(A),\tag{1}
$$
where the square roots in the above are taken entrywise. In our case, $A\circ A^T=E$, the matrix with all entries equal to one. Hence $(1)$ gives $\rho(A)\ge\rho(E)=n$. As $\rho(A)$ is an eigenvalue of $A$ (Perron-Frobenius theorem), the result follows.
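A quick numerical illustration of the statement (a randomly generated positive matrix with the stated reciprocal structure; the seed and size here are arbitrary; assuming NumPy):

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = np.ones((n, n))                  # a_ii^2 = 1 with a_ii > 0 forces a_ii = 1
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = np.exp(rng.normal())
        A[j, i] = 1.0 / A[i, j]      # enforce a_ij * a_ji = 1

rho = np.linalg.eigvals(A).real.max()
print(rho, rho >= n)                 # the Perron eigenvalue is at least n = 4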
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate limit of $\lim_{x \to \infty}\frac{x^x}{\left(x+2\right)^x}$ $\lim_{x \to \infty}\dfrac{x^x}{\left(x+2\right)^x}$
I tried using Taylor and L'H and wasn't able to land on an answer.
Any help would be appreciated!
|
HINT: write $$\frac{1}{\left(\left(1+\frac{2}{x}\right)^{2x}\right)^{1/2}}$$ and recall that $\left(1+\frac{2}{x}\right)^{2x}\to e^4$ as $x\to\infty$, so the limit is $e^{-2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
Are essential isolated singularities preserved under non-zero holomorphic functions?
Question. In univariate complex analysis, are essential isolated singularities preserved under non-zero holomorphic functions?
For example, if we've already proved that $e^{1/z}$ has an essential singularity at $0$, can we deduce that $e^{e^{1/z}}$ also has one at zero, without making any computations?
|
Let $f\colon D\longrightarrow\mathbb C$ be an analytic function and suppose that it has an essential singularity at some point $z_0$. Let $g$ be a non-constant entire function. Then $g\circ f$ also has an essential singularity at $z_0$. This is so because, by the Casorati-Weierstrass theorem, if $U$ is a neighborhood of $z_0$ such that $U\setminus\{z_0\}\subset D$, then $f\bigl(U\setminus\{z_0\}\bigr)$ is a dense subset of $\mathbb C$. On the other hand, it is an easy corollary of the Liouville theorem that the image of $g$ is dense. Therefore, $(g\circ f)\bigl(U\setminus\{z_0\}\bigr)$ is dense too. And it follows from this that $g\circ f$ also has an essential singularity at $z_0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
How do we calculate the chances of getting n fixed points in a permutation? I understand that attaining a derangement for n = 4 or more objects has a probability of about 1/e or about 37 %. But what about permutations with precisely n fixed points? For example, if I have exactly ten cards (numbered 1 to 10), shuffle them and then lay them out in a row, what are the chances of getting one fixed point, two fixed points, ..., 10 fixed points?
My attempt for (say) 4 fixed points is: (10!)(10C6)(1/e).
But I cannot convince myself this is correct.
|
Denote by $F(n,k)$ the number of permutations of $n$ elements with exactly $k$ fixed points. It satisfies the following relations.
*
*$F(n,n)=1$ given by the identity permutation.
*$F(n,k) = \binom{n}{k}F(n-k,0)$ given by choosing $k$ points to fix and then taking a permutation of the other elements with no fixed points.
*$\sum_{k=0}^nF(n,k) = n!$
This is enough to compute $F(n,k)$ recursively. Here's a short Python snippet computing $F(n,k)$ from these relations.

import math

n_fixed_points = {(0, 0): 1}  # memo table; F(0,0) = 1 for the empty permutation

def F(n, k):
    if (n, k) in n_fixed_points:
        return n_fixed_points[(n, k)]
    if k == n:
        f = 1                                  # relation 1: only the identity fixes all n points
    elif k > 0:
        f = math.comb(n, k) * F(n - k, 0)      # relation 2: choose the k fixed points
    else:
        f = math.factorial(n) - sum(math.comb(n, i) * F(n - i, 0)
                                    for i in range(1, n + 1))  # relation 3
    n_fixed_points[(n, k)] = f
    return f

print(F(10, 4))
It gives $F(10,4) = 55650$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
why $\sin^{-1}(x)+\cos^{-1}(x) = π/2$ How do i find the When $x$ is ranging from $-1$ to $1$?
I want know why $\sin^{-1}(x)+\cos^{-1}(x) = π/2$
I have already tried inverse-function.
thanks.
|
The "co" in "cosine" stands for "complement". It means, the sine of the complementary angle. Two angles are complementary if they add to a right angle, $\pi/2$ radians. Thus:
$$
\arcsin x + \arccos x = \frac{\pi}{2}
$$
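To turn the complement idea into a short formal proof: let $\theta=\arcsin x$, so $\theta\in[-\frac\pi2,\frac\pi2]$ and $\sin\theta=x$. Then
$$\cos\left(\frac\pi2-\theta\right)=\sin\theta=x, \qquad \frac\pi2-\theta\in[0,\pi],$$
and since $\arccos$ is the inverse of cosine on $[0,\pi]$, this gives $\arccos x=\frac\pi2-\theta=\frac\pi2-\arcsin x$.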
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
Differentiating an expression with f(x) in it $\frac{d}{dx}$ $\frac{x^4}{f(x)}$ at $x=1$ where $f(1)=1$ and $f'(1)=3$
I've tried simply differentiating the expression using the quotient rule, obtaining $\frac{4x^3*f(x)-x^4*f'(x)}{f(x)^2}$ But I'm not sure about where to go from here, I'm confused by the use of f(x) in the expression
|
Since $f$ is differentiable at $1$ and $f(1)\neq 0$, the quotient $x\mapsto x^4/f(x)$ is differentiable at $1$ as well. Hence by the quotient rule
$$\left .\frac{\mathrm d}{\mathrm d x }\frac{x^4}{f(x)}\right|_{x=1} = \left.\frac{4x^3 f(x)-x^4 f'(x)}{f(x)^2}\right|_{x=1}=\frac{4-3}{1}=1,$$
where we used $f(1)=1$ and $f'(1)=3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How can I write this in different ways? $\vert\bigcup_{i=0}^n A_i\vert$ I'm trying to write this identity in a more compressed way. For me, I think it would contribute a lot to my practice to see different ways of writing the same identity.
How can I write this in different ways?
$$\left\vert\bigcup_{i=1}^n A_i\right\vert = \sum_{i=1}^n \vert A_i\vert - \sum_{1\le i<j\le n} \vert A_i \cap A_j\vert + \sum_{1\le i<j<l\le n} \vert A_i \cap A_j \cap A_l\vert - \dots + (-1)^{n-1} \left\vert\bigcap_{i=1}^n A_i\right\vert$$
Thanks!
|
Two different ways:
$$ \left\lvert \bigcup_{i = 0}^n A_i \right\rvert = \sum_{\emptyset\ne S \subseteq\{0,\dots,n\}} (-1)^{|S|-1} \left\lvert \bigcap_{j \in S} A_j \right\rvert = \sum_{k = 1}^{n + 1} (-1)^{k-1} \sum_{0\le j_1 < \dots < j_k \le n} |A_{j_1} \cap \dots \cap A_{j_k}|. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why is p ∧ q ⇒ r true when p is true and q is false? I'm taking Intro to Logic on Coursera. One of the exercises has this:
Consider a truth assignment in which p is true, q is false, r is true.
Use this truth assignment to evaluate the following sentences.
The answer key says $p ∧ q ⇒ r$ is true, but I don't understand why. If I read it correctly, it says "If TRUE and FALSE then TRUE."
I thought TRUE and FALSE should imply FALSE because the two propositions are different. Can anyone explain?
|
If we have predicates $p$ and $q$ then $p\implies q$ is true when either $p$ is false or $q$ is true (or both). That is,
$$(p\implies q)\iff (\lnot p\lor q)$$
The implication will not hold only when $p$ is true and $q$ is false. That is, $p\implies q$ is false when $p\land \lnot q$ is true.
So in your example, $p$ is true and $q$ is false, so $p\land q$ is false. Thus, $p\land q\implies r$ is indeed true (since the "if" part of the implication is false). We see this as
$$(p\land q\implies r)\iff (\lnot(p\land q)\lor r)$$
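You can check the evaluation mechanically with the equivalence above (a one-line Python sketch):

p, q, r = True, False, True
print((not (p and q)) or r)   # True: the antecedent p and q is False, so the implication holds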
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2415924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Determining the value of required parameter for the equations to have a common root
Determine the value of $a$ such that $x^2-11x+a=0$ and $x^2-14x+2a=0$ may have a common root.
My attempt:
Let the common root be $\alpha$
On substituting $\alpha$ in both equations and then subtracting, $a = -3\alpha$
How do I continue from here? What are the other conditions for them to have common roots?
|
If you do the calculation correctly, $a=3\alpha$. So the first equation is $x^2-11x+3\alpha=0$, and it has root $\alpha$. Hence $\alpha^2-11\alpha+3\alpha=0$, that is $\alpha(\alpha-8)=0$, so $\alpha=0$ or $\alpha=8$. If the common root is $\alpha=8$ then $a=3\alpha=3\cdot 8=24$; if $\alpha=0$ then $a=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Linear algebra - changing basis and spanning the space let $|e_i\rangle$ and $|f_i\rangle$ be basis vectors, and matrix $\textbf{S}_{ij}$ that
$|e_j\rangle = \sum_i\textbf{S}_{ij}|f_i\rangle$
so that
$\textbf{a}^f=\textbf{Sa}^e$ where the superscript indicates the basis.
My textbook says that $\textbf{S}^{-1}$ exists since if $\textbf{S}$ were singular, the $|f_i\rangle$ wouldn't span the space, and I don't get this.
|
Any kind of vectors and matrices we see in undergraduate texts, like $S=\begin{pmatrix}1&1\\0&3\end{pmatrix}$, are closely related to orthonormal vectors in Euclidean spaces. That is, we use the fact that when a basis is given, any vectors in the span of the basis can be represented by the unique linear transformation of the basis. Thus, we use orthonormal vectors in Euclidean space to represent the numbers in a given matrix and vector. For example, the orthonormal basis of $\mathbb{R}^2$ is the set of vectors $e_1 = \begin{pmatrix}1\\0\end{pmatrix}$ and $e_2=\begin{pmatrix}0\\1\end{pmatrix}$. Thus, when we write a vector $a = \begin{pmatrix}7\\5 \end{pmatrix}$, we represent $a$ in terms of the orthonormal basis $\{e_i\}$ as defined above; there are $7e_1$s and $5e_2$s which are added to represent $a$.
As far as I understand, the problem you are dealing with is related to the concept "change of basis". Thus, the basis vectors $\{e_i\}$ and $\{f_i\}$ are spanning the same vector space $V$, and your matrix $S$ is a square matrix with $n\times n$ elements.
You can have numerous kinds of bases representing the same vector space. For example, in $\mathbb{R}^2$, two bundles of vectors $\{e_1,e_2\},\{f_1,f_2\}$ are spanning $\mathbb{R}^2$ when they are defined as
$$ e_1 = \begin{pmatrix}1\\0\end{pmatrix}\ \ e_2 = \begin{pmatrix}0\\ 1\end{pmatrix}\ \ f_1 = \begin{pmatrix}1\\1\end{pmatrix} \ \ f_2 = \begin{pmatrix}2\\ 1\end{pmatrix}$$
note that $f_1$ and $f_2$ are not even orthogonal. Thus, your problem boils down to representing each basis vector $e_j$ as a linear combination $\sum_{i} \alpha^j_i f_i$. And of course, this set of equations can be represented by a matrix $S$, which is just an ordinary $n \times n$ matrix that lets you see the linear operator in a numerical way.
Let us get back to the above example in $\mathbb{R}^2$. You can represent $e_1$ and $e_2$ in terms of $f_1$ and $f_2$ by a linear transformation. In this case there are two equations like the following:
$$\begin{eqnarray}e_1 = -f_1 + f_2\\
e_2 = 2f_1 - f_2\end{eqnarray}$$
With these equations in mind, note that every vectors and matrices you see in a usual form (numbers assigned on each rows and columns of elements) are in fact represented by a orthonormal basis. By using the above equations, you can find a way to represent vectors in a given space by a basis other than the orthonormal one.
In the example, I deliberately set $\{e_i\}$ as orthonormal vectors. Let us choose a vector $a^e = \begin{pmatrix}8\\6\end{pmatrix}$ from $\mathbb{R}^2$. Then the notation implies that
$$a^e = 8e_1 + 6e_2$$
However, by using the equations, you can get
$$8e_1 + 6e_2 = 8(-f_1 + f_2) + 6(2f_1 - f_2) = 4f_1 + 2f_2$$
That means $a^f$, which is the same vector with different basis, is denoted as
$$a^f = \begin{pmatrix}4\\2\end{pmatrix},\quad a^f = 4f_1 + 2f_2$$
Thus the matrix $S$ can be easily derived.
$$S = \begin{pmatrix}-1 & 2 \\ 1 & -1\end{pmatrix} \ \ \because a^f = Sa^e$$
When $S$ is singular, that means the set of equations are linearly dependent and this is a complete nonsense because $e_1$ and $e_2$ are linearly independent by the assumption.
You can try an arbitrary finite-dimensional vector space version of this example to solve your problem.
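Numerically, $S$ is just the inverse of the matrix whose columns are $f_1,f_2$ written in the $e$-basis. A small NumPy check of the example above:

import numpy as np

F = np.column_stack([[1, 1], [2, 1]])   # columns are f1, f2 in e-coordinates
S = np.linalg.inv(F)                    # converts e-coordinates to f-coordinates

a_e = np.array([8, 6])
print(S)         # [[-1.  2.], [ 1. -1.]]
print(S @ a_e)   # [4. 2.], i.e. a^f = 4 f1 + 2 f2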
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $x_{n^2}= 2x_{2n}+n^{1/n} - \sum\limits_{k=1}^{\infty}\frac{1}{2^k}$ and $\sum(x_{n+1}-x_n)$ converges, then $\lim x_n = 0$ Let $x_n$ be a real number sequence such as: $x_{n^2}= 2x_{2n}+n^{1/n} - \sum_{k=1}^{\infty}\frac{1}{2^k}$, for every $n=1,2,3...$ and the sum $\sum_{n=1}^{\infty}(x_{n+1}-x_n)$ converges. Show that $\lim x_n = 0$.
So, I began by saying that since the sum converges then, $(x_{n+1}-x_n) \to0$ and then I tried to find the $x_{(n+1)^2}-x_{n^2}$ difference and then take its limit saying that it must be equal to 0 (Not sure about that step). With that, I have: $\lim (2x_{2(n+1)}-2x_{2n}+(n+1)^{1/n+1} - n^{1/n}) = 0 \implies \lim (x_{2(n+1)} -x_{2n}) = 0 $ and i don't know what to do from here.
|
Note that:
*
*The series $\sum(x_{n+1}-x_n)$ converges and its $n$th partial sum is $x_{n+1}-x_1$, hence the sequence $(x_n)$ converges, call $\ell$ its limit.
*In the identity $x_{n^2}= 2x_{2n}+n^{1/n} - \sum\limits_{k=1}^{\infty}\frac{1}{2^k}$, the term $\sum\frac1{2^k}$ equals $1$ and $n^{1/n}\to1$ when $n\to\infty$.
*Since $x_{n^2}\to\ell$ and $x_{2n}\to\ell$, item 2. implies that $\ell=2\ell+1-1$.
*Because $\ell$ is finite, item 3. implies that $\ell=0$.
QED.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Showing the local maximum or minimum, while the function changes sign infinitely often Please I need a hand in solving this problem:
These 3 functions' values at $0$ are all $0$ and for $x\ne0$, $$f(x)=x^4\sin\frac{1}{x}, \, g(x)=x^4\left(2+\sin\frac{1}{x}\right), \, h(x)=x^4\left(-2+\sin\frac{1}{x}\right)$$
b- Show that $f$ has neither a local maximum nor a local minimum
at $0$, $g$ has a local minimum, and $h$ has a local
maximum.
The derivatives of these functions change sign infinitely often on both sides of $0$. I couldn't use the 1st nor the 2nd derivative test.
|
HINT: we have for 1) $$f'(x)=4x^3\sin\left(\frac{1}{x}\right)+x^4\cos\left(\frac{1}{x}\right)\cdot \left(-\frac{1}{x^2}\right)$$
and for 2)$$g'(x)=4x^3\left(2+\sin\left(\frac{1}{x}\right)\right)+x^4\cdot\cos\left(\frac{1}{x}\right)\left(-\frac{1}{x^2}\right)$$
and 3)$$h'(x)=4x^3\left(-2+\sin\left(\frac{1}{x}\right)\right)+x^4\cos\left(\frac{1}{x}\right)\left(-\frac{1}{x^2}\right)$$
additionally use that $$|f(x)|\le x^4$$
and note: in every neighbourhood of $0$, $f$ takes both positive and negative values, so $0$ is neither a local maximum nor a local minimum of $f$;
$g(x)\ge x^4>0=g(0)$ for $x\ne 0$, so $0$ is a local minimum of $g$;
$h(x)\le -x^4<0=h(0)$ for $x\ne 0$, so $0$ is a local maximum of $h$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Comparing big powers
Which of the following is the largest?
A. $1^{200}$
B. $2^{400}$
C.$4^{80}$
D. $6^{300}$
E. $10^{250}$
I'm stuck trying to solve this. Obviously A and C are wrong ($4^{80}$ is less than $2^{400}$ and 1 to any power is always 1). And cancelling $2^{200}$ from each of the remaining choices, I can also eliminate B. However, I don't really know how to compare D and E... Any hints or helps?
|
$$6^{300} = (6^6)^{50} \ \ ; \ \ 10^{250} = (10^5)^{50}$$
so it is enough to check what is larger between $6^6$ and $10^5$. Now,
$$6^6 = (6^3)^2 = 216^2 < 300^2 = 90000 < 100000 = 10^5$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Tangent space of closure of subgroup of Lie group I am working my way through Stillwell's Naive Lie Theory. I am looking for suggestions how to address question 8.2.4 - Show that
$\{\text{sequential tangents to }H\} = T_{\mathbf 1}\overline H$ where $H$ is an arbitrary subgroup of a matrix Lie group?
|
If $X$ is a sequential tangent vector to $H$ at $\mathbf 1$, then it is also a sequential tangent vector to $\overline H$ at $\mathbf 1$ and therefore $X\in T_{\mathbf 1}\overline H$. (In Stillwell's book, this is proved right after the definition of sequential tangent vector.)
Now, let $X\in T_{\mathbf 1}\overline H$. Then there is a sequence $(A_n)_{n\in\mathbb N}$ of points of $\overline H$ such that$$X=\lim_{n\to\infty}n(A_n-\mathbf{1}).$$For each $n\in\mathbb N$, let $X_n\in H$ be such that $\|A_n-X_n\|\leqslant\frac1{n^2}$. Then\begin{align}X&=\lim_{n\to\infty}n(A_n-\mathbf{1})\\&=\lim_{n\to\infty}n(X_n-\mathbf{1})+\lim_{n\to\infty}n(A_n-X_n)\\&=\lim_{n\to\infty}n(X_n-\mathbf{1})\\&=\lim_{n\to\infty}\frac{X_n-\mathbf{1}}{1/n}\end{align}and therefore $X$ is a sequential tangent vector to $H$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I prove, using Dirac notation, that $\langle A|A\rangle$ is a real number? Using bra-ket notation, how do I prove that the inner product of a ket with itself (ie, $\langle A | A \rangle$) is a real number?
I understand that the rule of inner products states that: $\langle B | A \rangle = \overline{\langle A | B \rangle}$.
Therefore $\langle A | A \rangle$ is equal to its own complex conjugate, and hence must have no imaginary part, but I'm having trouble writing the proof in a formal way.
|
If you want/need more detail you could set $\langle A|A \rangle = x + iy$ and then use $\overline{\langle A|A \rangle} = \langle A|A \rangle$ to conclude that $-y = y$, which is true only for $y=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
The second derivative of $\log\left(\sum\limits_{i=1}^ne^{x_i}\right)$ seems negative, but I have to prove the function is convex For the second derivative I got $$\frac{\partial^2}{\partial x_k x_j}\log \left(\sum_{i=1}^{n} e^{x_i}\right)=-\frac{e^{x_k}e^{x_j}}{\left(\sum_{i=1}^{n} e^{x_i}\right)^2},$$ where $j \neq k$, and $1 \le j,k \le n$.
This Hessian is negative, which can't allow $\log(\sum_{i=1}^{n}{e^{x_i}})$ to be convex, but I am asked to show that $\log(\sum_{i=1}^{n}{e^{x_i}})$ is convex. Obviously something is wrong here, so unless the second derivative I showed is actually positive, then I must have just gotten the wrong Hessian.
Someone helped me calculate that:
$$\frac{\partial^2}{\partial x_k^2}\log \left(\sum_{i=1}^{n} e^{x_i}\right)=\frac{e^{x_k}\left(\sum_{i=1}^{n} e^{x_i}-e^{x_k}\right)}{\left(\sum_{i=1}^{n} e^{x_i}\right)^2},$$
|
There is nothing wrong. What you have shown is not that the Hessian matrix $H$ is negative definite, but merely that the off-diagonal entries of $H$ are negative. The matrix $\pmatrix{1&-1\\ -1&1}$, for instance, has negative off-diagonal entries, but the matrix itself is positive semidefinite.
In your case, it only takes a little more work to show that the $i$-th diagonal entry of the Hessian matrix $H$ is given by
$$
\frac{e^{x_i}}{\sum_{i=1}^ne^{x_i}}-\left(\frac{e^{x_i}}{\sum_{i=1}^ne^{x_i}}\right)^2.
$$
Therefore $H$ is a (weakly) diagonally dominant matrix with a nonnegative diagonal, meaning that its eigenvalues have nonnegative real parts (Gershgorin disc theorem). As $H$ is also real symmetric, it has real eigenvalues. Hence all its eigenvalues of $H$ are nonnegative, i.e. $H$ is positive semidefinite.
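A compact numerical check: writing $p_i=e^{x_i}/\sum_j e^{x_j}$, the derivatives above say exactly that $H=\operatorname{diag}(p)-pp^T$. A quick NumPy sketch (the random point $x$ is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)
p = np.exp(x) / np.exp(x).sum()

H = np.diag(p) - np.outer(p, p)   # Hessian of log(sum(exp(x)))
print(np.linalg.eigvalsh(H))      # all eigenvalues >= 0 (up to rounding): PSD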
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2416837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
convex hull of a set of points equivalent to a set I am trying to prove the following problem:
Given a set of points $S = \{(x_i,t_i)_{i = 1}^K \}$ where $x_i \in R^n, t_i >0 ,\forall i = 1,...,K$ and $Y = \{y \in R^n: y = \frac{x}{t},(x,t) \in conv(S) \}$.
I have proved that $Y \subseteq conv(x_1/t_1,...,x_K/t_K)$. But I am stuck on proving that $conv(x_1/t_1,...,x_K/t_K) \subseteq Y$. Can anyone help me and give me some hint?
Great thanks!
|
To give a motivation as to my answer, let me first write out the proof that $Y\subseteq conv(x_1/t_1,\ldots,x_K/t_K)$. In this case, there exist $x\in\mathbb R^n,t>0$ and non-negative numbers $\lambda_1,\ldots,\lambda_K$ such that $y=\frac xt$ and
$$
\sum\limits_{i=1}^K\lambda_ix_i=x,\qquad\sum\limits_{i=1}^K\lambda_it_i=t,\qquad\sum\limits_{i=1}^K\lambda_i=1,
$$
by definition. Observe that
$$
y=\frac xt=\sum\limits_{i=1}^K\frac{\lambda_ix_i}t=\sum\limits_{i=1}^K\left(\frac{t_i}{t_i\sum\limits_{j=1}^K\lambda_jt_j}\right)\lambda_ix_i=\sum\limits_{i=1}^K\left(\frac{\lambda_it_i}{\sum\limits_{j=1}^K\lambda_jt_j}\right)\frac{x_i}{t_i},
$$
and since
$$
\sum\limits_{i=1}^K\frac{\lambda_it_i}{\sum\limits_{j=1}^K\lambda_jt_j}=1,
$$
it follows $y\in conv(x_1/t_1,\ldots,x_K/t_K)$.
Conversely, if $z\in conv(x_1/t_1,\ldots,x_K/t_K)$, then there exist non-negative numbers $\lambda_1,\ldots,\lambda_K$ such that $z=\sum\limits_{i=1}^K\lambda_i\frac{x_i}{t_i}$, and $\sum\limits_{i=1}^K\lambda_i=1$. The problem is resolved as soon as we have a list of non-negative numbers $\delta_1,\ldots,\delta_K$ such that $\sum\limits_{i=1}^K\delta_i=1$ and
$$
\lambda_i=\frac{\delta_i t_i}{\sum\limits_{j=1}^K\delta_jt_j},\quad\text{for each }i=1,2,\ldots,K.
$$
This is because, in this case, we can write
$$
z=\sum\limits_{i=1}^K\left(\frac{\delta_i t_i}{\sum\limits_{j=1}^K\delta_jt_j}\right)\frac{x_i}{t_i}=\frac{\sum\limits_{i=1}^K\delta_ix_i}{\sum\limits_{j=1}^K\delta_jt_j}=:\frac{x}t,
$$
and, by construction, $(x,t)\in conv(S)$. So the problem is to find the list of numbers $\delta_i$'s. However, this is a linear algebra problem ($K$ variables with $K+1$ equations, one of which is non-homogeneous), and one can construct the $\delta_i$'s directly from the $\lambda_i$'s. For instance, for any $c>0$, if we let
$$
\delta_i':=c\frac{\lambda_i}{t_i},\quad\text{for each }i=1,2,\ldots,K,
$$
then by multiplying through $t_i$ in each of the above definitions and then adding up the equations, we get:
$$
c=\sum\limits_{j=1}^K\delta_j't_j,
$$
and upon letting $\delta_i=\frac{\delta_i'}{\sum\limits_{j=1}^K\delta_j'}$, it follows the list $\delta_1,\ldots,\delta_K$ is of the desired form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why use $\forall$ instead of $\in$? So, here are two ways to say what I interpret as the same statement:
$f_i(x,y)\geq0 \hspace{0.85cm} \forall i \in \{0,1,2\}$
which implies that $f_0(x,y) \geq0$ and $f_1(x,y)\geq0$ and $f_2(x,y)\geq0$
but doesn't
$f_i(x,y)\geq0 \hspace{0.85cm}\{i\in\mathbb{Z}|i\in[0,2]\}$
imply the same thing?
Is there another reason why these different notations are used, besides the fact that the one consumes less space than the other? Apologies if one of these notations falls into a specific category of mathematics without my knowledge. I am not fully taught (evidently).
Any responses are appreciated.
|
Actually, in formal mathematics, the condition should come first:
$$ \forall i \in \{0,1,2\} \; f_i(x,y) \ge 0 $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is it possible to construct a strictly monotonic sequence of all rational numbers? I know that the set of all rational numbers is countable, and can be enumerated by a sequence, say $\{a_n\}$. But can we construct a monotonic $\{a_n\}_{n=1}^{\infty}$, e.g. with $a_k<a_{k+1}$? It doesn't seem plausible to me, because then $a_1$ would be the smallest rational number, which clearly can't be any finite number. Am I mistaken?
|
As stated, the answer is no, because the question uses the symbol $<$ which has the implied meaning: The usual ordering of $\mathbb{Q}$ where $\frac{a}{b}<\frac{c}{d}$ iff $ad < bc$ in $\mathbb{Z}$.
But.
As mentioned in another answer, $\mathbb{Q}$ can be well-ordered, i.e. one can define a different order $\prec$ with the property that every nonempty subset of $\mathbb{Q}$ contains a least element with respect to $\prec$. For this ordering, a monotone sequence containing all of the rationals is easy to construct: let $x_1$ be the smallest rational, let $x_2$ be the smallest element of $\mathbb{Q} \setminus \{x_1\}$, etc.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 5,
"answer_id": 2
}
|
Show that the limit of the resulting sequence is $\sqrt{2}$. Consider the sequence of rational numbers defined recursively by the following formula: $$(x,y) \mapsto (x^2 + 2y^2, 2xy)$$ and starting at the point $(x,y) = (2,1)$. Show that the limit of the ratio of the resulting sequence is $\sqrt{2}$.
Is there any easy way to look it down and solve it.
I was writing it in the form $$(x,y) \mapsto (x^2 + 2y^2, 2xy), \qquad x^2+2y^2\pm 2xy=(x\pm y)^2+y^2$$
Still it is not showing any easy calculation later.
|
Define $x_n = x_{n-1}^2+2y_{n-1}^2$, $y_n =2x_{n-1}y_{n-1}$ and $x_0 = 2$, $y_0=1$. It is obvious that $x_n$ and $y_n$ are positive for all $n$.
Assume that $\frac{x_n}{y_n}$ converges and denote it's limit by $L$. Then we have $$\frac{x_n}{y_n} = \frac{x_{n-1}^2+2y_{n-1}^2}{2x_{n-1}y_{n-1}} = \frac{\frac{x_{n-1}^2}{y_{n-1}^2}+2}{2\frac{x_{n-1}}{y_{n-1}}}$$ and by taking limit we get $$L = \frac{L^2+2}{2L} \implies 2L^2 = L^2 + 2 \implies L^2 = 2\implies L=\pm \sqrt 2.$$
Since, $\frac{x_n}{y_n}>0$ for all $n$, $L\geq 0$, and thus $L = \sqrt 2$.
To prove that $\frac{x_n}{y_n}$ converges, we can prove that it is decreasing and bounded below. Note that
$$\frac{x_n}{y_n}\geq \frac{x_{n+1}}{y_{n+1}} \iff \frac{x_n}{y_n}\geq \frac{x_n^2+2y_n^2}{2x_ny_n}\iff 2x_n^2\geq x_n^2+2y_n^2 \iff \frac{x_n}{y_n}\geq \sqrt 2,$$
so we can get both that the sequence is decreasing and that it is bounded below in one shot. That $\frac{x_n}{y_n}\geq \sqrt 2$ for all $n$ follows from $\frac{x_0}{y_0}=2\geq\sqrt 2$ together with AM–GM: for any $r>0$ we have $\frac{r^2+2}{2r}\geq\sqrt 2$, since $(r-\sqrt 2)^2\geq 0$.
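You can watch the convergence with exact integer arithmetic (a short Python sketch):

from fractions import Fraction

x, y = 2, 1
for _ in range(6):
    x, y = x * x + 2 * y * y, 2 * x * y
    print(float(Fraction(x, y)))
# 1.5, 1.4166..., 1.4142156..., then 1.4142135623... = sqrt(2)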
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Radius of convergence of $\displaystyle{\sum_{n=0}^{\infty}} {(n!)^3 \over (3n)!}z^{3n} $.
Find the radius of convergence of $\displaystyle{\sum_{n=0}^{\infty}} {(n!)^3 \over (3n)!}z^{3n} \ ?$
I applied Cauchy-Hadamard test and the result is coming $0$ (radius of convergence). To obtain the limit I also used Cauchy's first limit theorem. For a lot of messy calculation I didn't provide my work.
Please someone check whether I'm right or wrong.
Thank you..
|
If we represent the factorials everywhere as gamma functions and Pochhammer symbols, this is a generalized hypergeometric series whose argument is $z^3/27$, and a series of this type converges precisely when that argument has modulus less than $1$, which means $|z|<3$. More directly, the ratio test gives
$$\left|\frac{a_{n+1}z^{3(n+1)}}{a_n z^{3n}}\right|=\frac{(n+1)^3}{(3n+1)(3n+2)(3n+3)}\,|z|^3\ \longrightarrow\ \frac{|z|^3}{27},$$
so the series converges for $|z|<3$ and diverges for $|z|>3$: the radius of convergence is $3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Let $X$ and $Y$ be $N(0,1)$, show that $Z$ is $SN(\lambda)$. Let $X$ and $Y$ be $N(0,1)$ and let $Z$ be
*
*$Z = Y$ if $\lambda Y ≥ X$
*$Z = -Y$ if $\lambda Y < X$.
Show that $Z$ is $SN(\lambda)$, (skew normal distribution).
I've tried using the transformation theorem without success, and I'm realizing that I'm probably setting up the problem wrong. Grateful for all tips and/or solutions!
|
Start with the obvious decomposition:
$$
P(Z\leq z)=P(\lambda Y\geq X)P(Z\leq z|\lambda Y\geq X)+P(\lambda Y< X)P(Z\leq z|\lambda Y< X).
$$
Now:
$$
P(Z\leq z,\lambda Y\geq X)=P(Y\leq z,\lambda Y\geq X).
$$
$$
P(Z\leq z,\lambda Y< X)=P(-Y\leq z,\lambda Y< X).
$$
Since $X$ and $Y$ are both normally distributed, denote their densiy by the function $f$ with $\int_{-\infty}^{y}f(x)\mathrm dx=F(y)$.
\begin{split}
P(Y\leq z,\lambda Y\geq X)&=\int_{-\infty}^z f(y)\int_{-\infty}^{\lambda y}f(x)\mathrm dx\mathrm dy \\
&=\int_{-\infty}^z f(y)F(\lambda y)\mathrm dy .
\end{split}
Similarly:
\begin{split}
P(-Y\leq z,\lambda Y< X)&=\int_{-z}^\infty f(y)\int^{\infty}_{\lambda y}f(x)\mathrm dx\mathrm dy \\
&=\int_{-z}^\infty f(y)(1-F(\lambda y))\mathrm dy .
\end{split}
So:
$$
P(Z\leq z)=\int_{-z}^\infty f(y)(1-F(\lambda y))\mathrm dy +\int_{-\infty}^z f(y)F(\lambda y)\mathrm dy.
$$
Finding the density amounts to taking a derivative with respect to $z$:
$$
f_Z(z)=f(-z)(1-F(-\lambda z))+f(z)F(\lambda z).
$$
Using the evenness of $f$, we have $f(-z)=f(z)$ and
$$
1-F(-z)=\int_{-z}^\infty f(y)\mathrm dy =\int_{-\infty}^z f(y)\mathrm dy=F(z).
$$
This leads to the skew normal distribution:
$$
f_Z(z)=2f(z)F(\lambda z).
$$
Interestingly we have only used the evenness of $f$ and that's all you need.
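If you want to sanity-check the result, here is a Monte Carlo sketch (assuming NumPy/SciPy; the choice $\lambda=2$ is arbitrary) comparing the empirical CDF of $Z$ with $\int_{-\infty}^z 2f(t)F(\lambda t)\,\mathrm dt$:

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)
lam = 2.0
X = rng.standard_normal(10**6)
Y = rng.standard_normal(10**6)
Z = np.where(lam * Y >= X, Y, -Y)

for z in (-1.0, 0.0, 1.0):
    emp = (Z <= z).mean()
    theo, _ = quad(lambda t: 2 * norm.pdf(t) * norm.cdf(lam * t), -np.inf, z)
    print(z, round(emp, 3), round(theo, 3))  # the two columns agree to ~3 decimals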
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
System of differential equations $dx=\frac{dy}{y+z}=\frac{dz}{x+y+z}$ This is the first time I have seen system of differential equations in this form: $$dx=\frac{dy}{y+z}=\frac{dz}{x+y+z}$$
Can you please help me solve it because I don't even know where to start?
|
$$\frac{dx}{1}=\frac{dy}{y+z}=\frac{dz}{x+y+z}$$
This system looks like it arises from solving a PDE with the method of characteristics. The PDE should be :
$$\frac{\partial z(x,y)}{\partial x}+(y+z(x,y))\frac{\partial z(x,y)}{\partial y}=x+y+z(x,y)$$
$\underline{\text{If this supposition is true}}$,
unfortunately the boundary conditions are missing in the wording of the question.
$$\frac{dy}{y+z}=\frac{dz}{x+y+z}=\frac{dz-dy}{(x+y+z)-(y+z)}=\frac{dz-dy}{x}$$
A first family of characteristics comes from $\quad \frac{dx}{1}=\frac{dz-dy}{x} \quad\to\quad z-y-\frac{x^2}{2}=c_1$
A second family of characteristics comes from $\quad \frac{dx}{1}=\frac{dy}{y+z}=\frac{dy}{y+(y+\frac{x^2}{2}+c_1)}=\frac{dy}{2y+\frac{x^2}{2}+c_1}$
$\frac{dy}{dx}=2y+\frac{x^2}{2}+c_1 \quad\to\quad y=-\frac{c_1}{2}-\frac{x^2}{4}-\frac{x}{4}-\frac{1}{8}+c_2e^{2x}$
$y+\frac{z-y-\frac{x^2}{2}}{2}+\frac{x^2}{4}+\frac{x}{4}+\frac{1}{8}=c_2e^{2x} \quad\to\quad (4z+4y+2x+1)e^{-2x}=8c_2$
The general solution of the PDE is expressed in the form of an implicit equation :
$$\Phi\left((2z-2y-x^2)\:,\:(4z+4y+2x+1)e^{-2x} \right)=0$$
where $\Phi$ is any function of two variables (to be determined according to some boundary conditions).
$\underline{\text{If the above supposition is false}}$ :
Then, $z$ is function of $x$ only, that is $z(x)$ instead of $z(x,y)$.
The system becomes :
$$1=\frac{y'}{y+z}=\frac{z'}{x+y+z}$$
Following the same calculation, the result is :
$z(x)=y+\frac{x^2}{2}+c_1$
$ y(x)=-\frac{c_1}{2}-\frac{x^2}{4}-\frac{x}{4}-\frac{1}{8}+c_2e^{2x}$
$$\begin{cases}
y(x)=-\frac{c_1}{2}-\frac{x^2}{4}-\frac{x}{4}-\frac{1}{8}+c_2e^{2x} \\
z(x)=\frac{c_1}{2}+\frac{x^2}{4}-\frac{x}{4}-\frac{1}{8}+c_2e^{2x}
\end{cases}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$X^n= \begin{pmatrix}3&6\\ 2&4\end{pmatrix}$, How many solutions are there if n is odd? $X^n= \begin{pmatrix}3&6\\ 2&4\end{pmatrix}$, $n \in N^*$
How many solutions are there if n is odd?
From the powers of $\begin{pmatrix}3&6\\ 2&4\end{pmatrix}$ I got that $X=\begin{pmatrix}\frac{3}{\sqrt[n]{7^{n-1}}}&\frac{6}{\sqrt[n]{7^{n-1}}}\\ \frac{2}{\sqrt[n]{7^{n-1}}}&\frac{4}{\sqrt[n]{7^{n-1}}}\end{pmatrix}$. But I'm not sure whether this is the only solution... Could I get some hints on how to get this done? Thank you
|
$\det(X^n)=(\det X)^n=\det \begin{pmatrix}3&6\\ 2&4\end{pmatrix} = 0$, so $\det X=0$, and $X$ is singular.
$X$ cannot be the zero matrix, so it has two real eigenvalues: $0$ and $a\neq 0$. $X$ is diagonalizable: there is some non-singular $P$ such that $X=P\begin{pmatrix}0&0\\ 0&a\end{pmatrix}P^{-1}$, yielding $X^n=P\begin{pmatrix}0&0\\ 0&a^n\end{pmatrix}P^{-1}$.
Then $trace(X^n)=7=a^n$. Since $n$ is odd, we have $a=7^{1/n}$, hence $trace(X)=7^{1/n}$.
By Cayley-Hamilton, $X^2-7^{1/n}X=0$, that is $X^2=7^{1/n}X$. It's easy to prove by induction that for all $m\geq 1$, $X^m=7^{(m-1)/n}X$.
With $m=n$, $\begin{pmatrix}3&6\\ 2&4\end{pmatrix}=X^n=7^{(n-1)/n}X$, so that $$X=\frac{1}{7^{(n-1)/n}}\begin{pmatrix}3&6\\ 2&4\end{pmatrix}$$
The only solution is the one found by the OP.
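A quick numerical confirmation for, say, $n=3$ (assuming NumPy):

import numpy as np

A = np.array([[3.0, 6.0], [2.0, 4.0]])
n = 3
X = A / 7 ** ((n - 1) / n)
print(np.linalg.matrix_power(X, n))  # recovers [[3, 6], [2, 4]] up to rounding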
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How can I find the minimizer of the following optimization problem? I want to know whether the following optimization problem falls under "convex optimization". The problem at hand is minimizing the following objective function, i.e.,
$$
\left\{\min_{\boldsymbol{\beta}\in \mathbb{R}^{d}} \sum_{i=1}^{n} \left(y_{i}-\boldsymbol{\beta}^{T}\mathbf{A}\boldsymbol{\beta}\right)^{2}: y_{i}\in \mathbb{R}, \mathbf{A}\in \mathbb{R}^{d\times d} \right\}
$$
where $\mathbf{A}$ is symmetric but not positive definite. I wish to find the minimizer $\boldsymbol{\beta}$ of the above optimization problem. At first glance, I thought because $f(\boldsymbol{\beta}) = \boldsymbol{\beta}^{T}\mathbf{A}\boldsymbol{\beta}$ is quadratic and $g(x) = x^{2}$ is also quadratic which are both convex, the composition $g \circ f$ will also be convex. How can I find the minimizer $\boldsymbol{\beta}$?
|
This particular instance should be trivial to solve. Let $x = \beta^TA\beta$. Solve the scalar least-squares problem that arises in $x$. Denote that solution $x^{\star}$. Let $v$ be any vector such that $x^{\star}$ and $v^TAv$ have the same sign (e.g., if positive let $v$ be an eigenvector associated to a positive eigenvalue). Now scale that vector suitably and pick $\beta = \sqrt{\tfrac{|x^{\star}|}{|v^TAv|}}\,v$, so that $\beta^TA\beta = x^{\star}$.
I assume $A$ is indefinite and thus has both negative and positive eigenvalues. If not, you would have to add constraints $x\geq 0$ or $x\leq 0$ in the first problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does $\operatorname{tr} (A)=0$ imply $\operatorname{tr} (A^3)=0$? Let $A_{(2n+1)\times(2n+1)}$ be a symmetric matrix of rank $2n$. Then does $\operatorname{tr}A=0$ imply $\operatorname{tr}A^3=0$? If not, under what condition?
|
The answer is NO. For a counterexample, let
$$A=\operatorname{diag}(1,3,-2,-2,0)$$
We have $\operatorname{Tr}(A)=0$ and $\operatorname{Tr}(A^3)=12\ne0$.
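A one-line check with NumPy, just for reassurance:

```python
import numpy as np

A = np.diag([1.0, 3.0, -2.0, -2.0, 0.0])       # symmetric, (2n+1)x(2n+1) with n = 2
print(np.linalg.matrix_rank(A))                 # 4 = 2n
print(np.trace(A))                              # 0.0
print(np.trace(np.linalg.matrix_power(A, 3)))   # 12.0, not 0
```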
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2417965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Minimize $ {L}_{p} $ Norm Regularized with a Linear Term (Conjugate Function of the Norm Function) The problem is given as follows:
$$ \min_{x} {a}^{T} x + \lambda \left\| x \right\|_{p} $$
Namely, minimizing an $ {L}_{p} $ norm term regularized by a linear term.
The above form occurs repeatedly on the dual forms of convex optimization problems as it is related to the Conjugate Function.
I will add my solution as a community-wiki answer.
Please add your own solution or validate the community one.
I will accept someone else's solution as the answer.
|
I'll take $\lambda = 1$ for simplicity. Let $f(x) = \| x \|_p$. Note that
\begin{equation}
\inf_x \, a^T x + \|x \|_p = - \sup_x \, \langle - a, x \rangle - \| x \|_p = - f^*(-a).
\end{equation}
The conjugate of a norm is the indicator function for the dual norm unit ball.
Moreover, the dual norm of the $p$-norm is the $q$-norm, where $\frac1q = 1 - \frac1p$, i.e. $q = \frac{p}{p-1}$. Thus,
$$
f^*(-a) = \begin{cases} 0 & \quad \text{if } \| a \|_q \leq 1, \\
\infty & \quad \text{otherwise.}
\end{cases}
$$
It follows that
$$
\inf_x \, a^T x + \|x \|_p = \begin{cases} 0 & \quad \text{if } \| a \|_q \leq 1, \\
-\infty & \quad \text{otherwise.}
\end{cases}
$$
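To make the dichotomy concrete, here is a small numerical sketch for $p=2$ (so $q=2$ as well); the two vectors $a$ are hypothetical. Along the ray $x=-t\,a/\|a\|_2$ the objective equals $t(1-\|a\|_2)$, so it stays nonnegative when $\|a\|_2\le1$ and decreases without bound otherwise:

```python
import numpy as np

def obj(x, a, p=2):
    return a @ x + np.linalg.norm(x, ord=p)

for a in (np.array([0.3, -0.4]),   # ||a||_2 = 0.5 <= 1: infimum is 0
          np.array([2.0, 1.0])):   # ||a||_2 > 1: infimum is -infinity
    for t in (1.0, 10.0, 100.0):
        x = -t * a / np.linalg.norm(a)  # most favourable direction for the linear term
        print(np.linalg.norm(a), t, obj(x, a))
```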
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Roots of polynomial and unit circle Let $$p(x)=a_0+a_1x+a_2x^2+a_3x ^3+\dots+a_kx^k$$
Is it possible to know, just by looking at the coefficients $a_0,a_1,\dots,a_k$ (with $a_i\in\mathbb{R}$), whether the polynomial $p(x)$ has roots outside the unit circle, for degrees $k=2,3,4$?
EDIT: I'm asking about both complex and real roots, and I want to know if there is a way to check whether all the roots will be outside of the unit circle, or to somehow verify that at least one of the roots will be inside the unit circle.
|
Firstly, to clarify - are you allowing complex roots or only real roots?
I do not know of an if-and-only-if test. However, there are a couple of things that will help in some circumstances. Divide through by $a_k$ so the polynomial is monic; then:
*
*up to sign, $a_0$ is the product of the roots, so if $|a_0| > 1$, then there must be a root outside the unit circle.
*up to sign, $a_{k-1}$ is the sum of the roots, so if $|a_{k-1}| > k$, then there must be a root outside the unit circle.
If this is a real life problem, then plugging your polynomial into a computer program is the way to go. If this is a question for mathematical interest, then I would be very interested to hear if someone knows a solution!
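Along those lines, here is a minimal numerical sketch (the cubic's coefficients are hypothetical): after making the polynomial monic, `np.roots` checks the root moduli directly, while the $|a_0|>1$ test above serves as a cheap screen:

```python
import numpy as np

coeffs = [1.0, -0.3, 0.2, 2.5]    # a monic cubic, highest degree first
roots = np.roots(coeffs)

print(abs(coeffs[-1]) > 1)         # |a_0| > 1 forces a root outside the unit circle
print(np.abs(roots))               # the moduli themselves
print(np.all(np.abs(roots) > 1))   # are *all* roots outside the unit circle?
```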
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
How are these three rewritten equations equal to each other?
This question has been driving me insane for about a day now. I cannot figure out how these are equal, and I understand the laws of logarithms decently. Regardless, when I try to show equality in a calculator between these three functions expressed in the gray, I get different answers every time. So does this boil down to a calculation error on my end, or is there more going on here that I can't seem to wrap my head around?
|
The fact that they appear equal in the image is a bit of a misrepresentation for me. Following the help of @Bernard, the expression actually is not $e^{\log(x)}\log(x)$ but rather $x^{\log(x)} = e^{\log(x)\,\ln(x)}$, since $x=e^{\ln x}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Infinite 'hex': is a win always achievable? In the game hex, at least one player always wins because they can form a chain of hexagons across the board. This led me to wonder, what happens if we generalise to infinitely many points?
Specifically, if every point in a unit square (including boundaries) is coloured red or blue, does there necessarily exist a continuous function $f: [0,1] \to [0,1]\times [0,1]$ such that $f(x)$ is either
a) Always red$\space\space$ and $f(0)=(0,a), f(1)=(1,b)$ for some a,b
b) Always blue and $f(0)=(a,0), f(1)=(b,1)$ for some a,b
Furthermore, if there exists a function such that (a) is true, then does that necessarily mean there does not exist a function such that (b) is true?
(In the example, red wins with the path shown and blue loses)
My intuition tells me that this is true, but I have no idea how to begin proving it. My best idea was to colour the regions to the left and right of the square red. Then anything connected to this red region is marked green. If the other side is connected to this then we are done. Otherwise, take the points along the boundary of this green region. They must be blue otherwise there exists a point closer to the region that is blue (by definition of the green region). Hence this boundary reaches all the way down to the bottom and we are done. But I'm not sure if this green region is well-defined or anything and have no idea how to show that it is.
(Also, I've got no idea what tag(s) to put on this, sorry)
|
Color $(x,y)\in[0,1]^2$ red if $x=0$ or $y=1/2\cdot\sin(1/x)$. Color everything else blue. There are no paths of either color connecting its respective edges.
Note that the red path does not "reach" the line $x=0$. See also this post and the counterexample in the answers: "Intermediate Value Theorem" for curves
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 3,
"answer_id": 0
}
|
If $x^y y^x z^z=c$, then find $\frac {\partial^2z}{\partial x \,\partial y}$ at $x=y=z$. If $x^y y^x z^z=c$, then find $\dfrac {\partial^2z}{\partial x \,\partial y}$ at $x=y=z$.
I tried taking $\log$ but that doesn't help.
Any hints will be appreciated. Thanks.
This is what I have tried:
|
Here is the solution. Taking logarithms gives $y\ln x + x\ln y + z\ln z = \ln c$, with $z=z(x,y)$. Implicit differentiation yields
$$\frac{\partial z}{\partial x} = -\frac{\frac yx + \ln y}{1+\ln z}, \qquad \frac{\partial z}{\partial y} = -\frac{\ln x + \frac xy}{1+\ln z},$$
so at $x=y=z$ both partials equal $-1$. Differentiating the first expression with respect to $y$ (remembering that $z$ depends on $y$),
$$\frac{\partial^2 z}{\partial x\,\partial y} = -\frac{\left(\frac1x+\frac1y\right)(1+\ln z) - \left(\frac yx+\ln y\right)\frac{z_y}{z}}{(1+\ln z)^2}.$$
At $x=y=z$ this becomes
$$\frac{\partial^2 z}{\partial x\,\partial y} = -\frac{\frac2x(1+\ln x) + \frac1x(1+\ln x)}{(1+\ln x)^2} = -\frac{3}{x(1+\ln x)}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Permanent generating function identity for $\exp{\mathbf{x^{T}}A\mathbf{y}}$ In this paper, there's an identity that I can't prove to my satisfaction, (there's a similar statement in here)
which is that, given a permanent of a $(\mathbf{k,l})$-replicated matrix $A$, (written $A^{(\mathbf{k,l})}$),
$$\sum_{\mathbf{k,l}\geq 0}\mathrm{per}(A^{(\mathbf{k,l})})\frac{\mathbf{x}^{\mathbf{k}}\mathbf{y}^{\mathbf{l}}}{\mathbf{k}!\mathbf{l}!}=\exp{\mathbf{x^{T}}A\mathbf{y}}$$
Notation: $\mathbf{x^{k}}=\prod_{i=1}^{n}x_{i}^{k_{i}}$, $\mathbf{k}!=\prod_{i=1}^{n}k_{i}!$, and $A^{(\mathbf{k,l})}$ is the block matrix with the entry $a_{i,j}$ repeated $k_{i}\times l_{j}$ times.
This is related to MacMahon's Master Theorem where
$$
\sum_{\mathbf{k}\geq 0}\mathrm{per}(A^{(\mathbf{k,k})})\frac{\mathbf{x}^{\mathbf{k}}}{\mathbf{k}!}=\det (1-XA)^{-1}
$$
with $X_{ij}=x_{i}\delta_{ij}$ and $A=A^{(\mathbf{1,1})}$
|
Here is a double-counting argument. The coefficient of $\mathbf{x}^{\mathbf{k}}\mathbf{y}^{\mathbf{l}}$ on either side can be written as
$$\frac{1}{m!^2}\sum_{K,L,\pi} a_{K(1),L(\pi(1))}\dots a_{K(m),L(\pi(m))}$$
where $m=\sum k_i=\sum l_j$, and the sum is over:
*
*$K:[m]\to [n]$ with $|K^{-1}(\{i\})|=k_i$ for each $i$
*$L:[m]\to [n]$ with $|L^{-1}(\{j\})|=l_j$ for each $j$
*permutations $\pi:[m]\to[m]$
To see this is equal to the left-hand-side coefficient, note that the sum over $\pi$ for any fixed $(K,L)$ is $\mathrm{per}(A^{(\mathbf{k,l})})$; the $\sum_K$ gives a factor of $m!/\mathbf k!$ and likewise the $\sum_L$ gives a factor of $m!/\mathbf l!$. To see this is equal to the right-hand-side coefficient, recognise the sum over $(K,L)$ for any fixed $\pi$ as the coefficient of $\mathbf{x}^{\mathbf{k}}\mathbf{y}^{\mathbf{l}}$ in $(\sum_{i,j=1}^n a_{i,j}x_iy_j)^m$, with the $\sum_\pi$ giving a factor of $m!$.
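For anyone who wants to see the identity in action, here is a small SymPy sketch: a brute-force check for $n=2$ and one hypothetical choice of $\mathbf{k},\mathbf{l}$, using the sum-over-permutations definition of the permanent and comparing against the degree-$m$ term of $\exp(\mathbf{x}^T A\mathbf{y})$:

```python
import itertools
from math import prod
import sympy as sp

def permanent(M):
    m = M.shape[0]
    return sp.expand(sum(prod(M[i, s[i]] for i in range(m))
                         for s in itertools.permutations(range(m))))

def replicate(A, k, l):
    # entry A[i, j] repeated k[i] x l[j] times
    rows = [i for i, ki in enumerate(k) for _ in range(ki)]
    cols = [j for j, lj in enumerate(l) for _ in range(lj)]
    return sp.Matrix([[A[i, j] for j in cols] for i in rows])

a, b, c, d, x1, x2, y1, y2 = sp.symbols('a b c d x1 x2 y1 y2')
A = sp.Matrix([[a, b], [c, d]])
k, l = (2, 1), (1, 2)                               # m = 3 on both sides

m = sum(k)
bilinear = (sp.Matrix([x1, x2]).T * A * sp.Matrix([y1, y2]))[0]
series = sp.expand(bilinear**m / sp.factorial(m))   # degree-m term of exp(x^T A y)
coeff = series.coeff(x1, k[0]).coeff(x2, k[1]).coeff(y1, l[0]).coeff(y2, l[1])

fact = prod(sp.factorial(t) for t in k + l)         # k! l!
print(sp.simplify(permanent(replicate(A, k, l)) / fact - coeff) == 0)  # True
```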
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Graph expression The manual for the LaTeX package TikZ v. 3.0.1a states (section 19.7 "Graph Operators, Color Classes, and Group Expressions", p. 280):
TikZ's graph command employs a powerful mechanism for adding edges between nodes and sets of nodes. To a graph theorist, this mechanism may be known as a graph expression: A graph is specified by starting with small graphs and then applying operators to them that form larger graphs and that connect and recolor colored subsets of the graph's node in different ways.
I searched Google for "graph expression" and couldn't find anything relevant. Where would a graph theorist know the term graph expression from? I'd appreciate references to where this term is defined, explained and used.
|
This book - Recent Trends in Algebraic Development Techniques: 19th International Workshop, WADT 2008 - defines graph expression and discusses operators in that context. Link
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Dual space & orthogonal $F_1^{\perp}+F_2^{\perp}\subset(F_1 \cap F_2)^{\perp}$
$E$ is a finite-dimensional vector space over a field $K$ and $E^∗$ its dual.
$F_1$ and $F_2$ is a subspace of $E$
I would like to prove that $F_1^{\perp}+F_2^{\perp}\subset(F_1 \cap F_2)^{\perp}$
I asked this question 12 hours ago, but with no answer and no comment, so I preferred to delete the last one and rewrite it correctly.
Is my proof correct?
Let $F_1^{\perp}:=\bigg\{\gamma\in E^*, \quad \forall x\in F_1, \;\gamma(x)=0\bigg\}$ and $F_2^{\perp}:=\bigg\{\psi\in E^*, \quad \forall x\in F_2, \;\psi(x)=0\bigg\}$
$\varphi \in F_1^{\perp}+F_2^{\perp}\iff\exists (\gamma,\psi)\in \left(F_1^{\perp}\times F_2^{\perp}\right), \forall x \in E \quad \varphi(x)=(\gamma+\psi)(x)$
We can deduce as well $\exists \varphi'\in F_1^{\perp}+F_2^{\perp}$ such that $\varphi'(x)=(\gamma-\psi)(x)$
Now let : $F :=\bigg\{x\in E\;| \quad \forall \phi \in (F_1^{\perp}+F_2^{\perp}), \;\phi(x)=0\bigg\}$ and any $\varphi \in F_1^{\perp}+F_2^{\perp}$
Thus
$\forall x\in F, \left\lbrace\begin{array}{l} \varphi(x)=0\\ \varphi'(x)=0\end{array}\right. \iff \left\lbrace\begin{array}{l} \gamma(x)+\psi(x)=0\\\gamma(x)-\psi(x)=0\end{array}\right.\iff \left\lbrace\begin{array}{l} 2\gamma(x)=0 \quad \color {red}{(1)}\\2\psi(x)=0 \quad \color {red}{(2)}\end{array}\right. $
Since $(1)\land (2)$ we deduce that $ \forall \varphi \in F_1^{\perp}+F_2^{\perp},\;\forall x\in F,\quad \varphi(x)=0 \implies x\in F_1\cap F_2$
We can conclude $\varphi \in (F_1\cap F_2)^{\perp}\quad \square$
|
You can shorten this a lot. There are a great deal of extraneous constructions that miss the main idea. Since you originally reference a dual space, I assume you are asking about dual spaces, not orthogonal complements and have answered accordingly; be careful, as they are not the same thing.
Let $\phi\in F_1^0+F_2^0$, where $S^0 = \{\rho\in E^* \ | \ \forall x\in S, \ \rho(x)=0 \}$ denotes the annihilator of $S$. (It looks like this is perhaps what you mean by your $S^\perp$ notation.) Then $\phi = \gamma + \psi \ $ for some $\gamma\in F_1^0, \psi\in F_2^0$. Now if $x\in F_1\cap F_2$, then $x\in F_1$ and $x\in F_2$, so
$$\phi(x) = (\gamma + \psi)(x) =\gamma(x) + \psi(x) = 0 + 0 = 0.$$
Hence, $\phi\in(F_1\cap F_2)^0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help needed in understanding Heron's Formula So i just started learning Trigonometry seriously and something doesn't feel right to me, either I'm missing something or not but.
Let's assume we have a triangle
and there are two ways to find the area.
One is using the standard $$A = \frac{1}{2}bh$$ and by using the example image above we get $A = 10625$
but if I use the other formula, in this case, Heron's Formula
\begin{align*}
s & = \frac{a+b+c}{2}\\
A & = \sqrt{s(s-a)(s-b)(s-c)}
\end{align*}
the area becomes $A = 10620.09$.
They're both very close to each other, which has me thinking maybe I missed something.
So my question is, why are the areas different?
|
You shouldn't consider the areas different, because the relative difference is smaller than $0.05\%$.
But you can a priori expect an error in the height of up to $0.5\%$, since some number was truncated to an integer. Hence, without a more careful error analysis, you can't be conclusive.
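To illustrate the size of such truncation effects, here is a sketch with a hypothetical triangle (not the one in the OP's figure): Heron with exact sides is exact, but computing $\frac12 bh$ with a height rounded to an integer produces a gap of the same order as the one observed:

```python
import math

a, b, c = 150.0, 160.0, 170.0   # hypothetical side lengths

s = (a + b + c) / 2
area_heron = math.sqrt(s * (s - a) * (s - b) * (s - c))

h_exact = 2 * area_heron / c    # exact height onto base c
h_rounded = round(h_exact)      # what a figure label might show

print(area_heron)               # ~10998.18
print(c * h_exact / 2)          # identical to Heron
print(c * h_rounded / 2)        # ~10965: off by ~0.3% from rounding the height alone
```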
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2418977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Use the quadratic formula to solve trigonometric trinomial Solve $$-\sin^2\theta + 2\cos\theta +\cos^2\theta = 0$$
Using the quadratic formula.
This is what you should get $$\theta = \cos^{-1}\biggl(\frac{-1+\sqrt{3}}{2}\biggr)$$
How do you set this up and solve?
|
As @Mathmore said in the comments, substitute $\sin^2\theta=1-\cos^2\theta$ to obtain
$$-(1-\cos^2\theta)+2\cos \theta+\cos^2 \theta=0$$
$$2\cos^2\theta+2\cos \theta-1=0$$
Substitute $t=\cos \theta$. This is a quadratic equation
$$2t^2+2t-1=0$$
with solutions
$$t=\frac{1}{2}(-1\pm \sqrt 3)$$
Note however that $t=\cos \theta \in[-1,1]$; that is, $t\neq\frac{1}{2}(-1-\sqrt3)\approx-1.366\not\in[-1,1]$
So the solutions are
$$\cos \theta=\frac{1}{2}(\sqrt 3-1)$$
$$\theta=\pm\arccos(\frac{1}{2}(\sqrt 3-1))+2\pi n \quad n\in\mathbb{Z}$$
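A quick numerical check, in plain Python, that the retained root really solves the original equation:

```python
import math

theta = math.acos((math.sqrt(3) - 1) / 2)
lhs = -math.sin(theta) ** 2 + 2 * math.cos(theta) + math.cos(theta) ** 2
print(theta, lhs)   # lhs is 0 up to floating-point rounding
```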
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2419124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why can't the ratio test be used for geometric series? The ratio test says that, for $a_k\neq 0$, if
$$\lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|=L$$
exists, then if $0\leq L <1$, then $\sum_k a_k$ converges. If $L>1$, it diverges.
The notes I'm reading say that it's inadmissible to use the ratio test to test for convergence of a geometric series. I can't see why this should be the case.
Say we have some geometric series $\sum_kar^k$. Then
$$\lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|=\lim_{k\to\infty}\frac{\left|ar^{k+1}\right|}{\left|ar^k\right|}=|r|.$$
So the ratio test tells us that the geometric series converges for $|r|<1$, and diverges for $|r|>1$, which is exactly what we get by using the formula
$$\sum_{k=0}^n ar^k=a\left(\frac{1-r^{n+1}}{1-r}\right).$$
What is an example that demonstrates why the ratio test is inadmissible for a geometric series?
|
The ratio test is not inadmissible for geometric series. Its hypotheses do not exclude geometric series, therefore it applies, and its proof must support this.
One common proof structure would be:
Theorem A: Geometric series converges. Proof: direct argument.
Theorem B: Ratio test with usual hypotheses. Proof: show that this is implied by theorem A as in the answer by Xander Henderson.
Of course, the proof of Theorem A cannot use Theorem B, otherwise we have a circular argument. Undoubtedly this is what your notes are trying to say.
However, once we have a valid proof of Theorem B, it certainly applies to geometric series:
Theorem C: Geometric series converges. Proof: theorem B.
This proof of Theorem C may seem absurdly indirect: why wouldn't we just cite Theorem A? Well, consider:
Theorem D: some other theorem whose hypotheses imply those of the ratio test. Proof: Theorem B.
It would be very annoying, and more importantly unnecessary, to instead write:
Theorem D: some other theorem whose hypotheses imply those of the ratio test. Proof: If the series is geometric, then see Theorem A. Otherwise Theorem B applies.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2419255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Largest set in which $(x, y)\mapsto \sqrt{xe^y - ye^x}$ is defined I have to find the largest subset of $\mathbb{R}^2$ on which the function
$g(x,y) = \sqrt{xe^y - ye^x}$
is defined. In order to do it I have to study
$xe^y - ye^x \ge 0$
but I can't find out a solution. Can someone help me?
|
$$g^2(x,y)=G(x,y)=xe^y-ye^x$$ is clearly well defined when $$G(x,y)=xe^y-ye^x=0,$$ which is a curve formed by the union of the diagonal of $\mathbb R^2$ (i.e. $x=y$) and the set of $(x,y)$ such that $x\ne y$ satisfying the relation $$\frac xy=\frac{e^x}{e^y}.$$ Looking at the plot of the curve, this second set is a kind of positive branch of a hyperbola of equation $xy=a\gt0$ (but obviously not properly a hyperbola). Furthermore, it is important to remark that the double point where the diagonal cuts this branch is $(x,y)=(1,1)$, which is easily calculated from
$G(x,y)=G_x(x,y)=G_y(x,y)=0$.
Now, the second quadrant is totally excluded (we exclude the coordinate axes from the discussion) because
$$ x\lt0\text{ and }y\gt0\Rightarrow G(x,y)=-|x|e^{|y|}- |y|e^{-|x|}\lt0$$ and the fourth quadrant is totally included because
$$x\gt0\text{ and }y\lt0\Rightarrow G(x,y)=|x|e^{-|y|}+\ |y|e^{|x|}\gt0$$
Now, for the third quadrant, where $x$ and $y$ are negative, we have
$$|x|\gt|y|\Rightarrow -|x|e^{-|y|}+|y|e^{-|x|}=\frac{|y|e^{|y|}-|x|e^{|x|}}{e^{|x|+|y|}}\lt0$$
$$|x|\lt |y|\Rightarrow-|x|e^{-|y|}+|y|e^{-|x|}=\frac{|y|e^{|y|}-|x|e^{|x|}}{e^{|x|+|y|}}\gt0$$
For the first quadrant, where $x$ and $y$ are positive, the discussion is less straightforward and we must consider the curve $G(x,y)=0$ and its double point $(1,1)$.
From the graph of the curve $G (x, y) = 0$ seen at the beginning, it follows that for all $x\gt0$, there is $y_x\gt0$ such that $G(x, y_x)=0$. We have
$$x\le y\le y_x\text{ for } x\le 1\\x\ge y\ge y_x\text{ for } 1\le x $$
In both cases one has $G(x,x)=G(x,y_x)=0$. By continuity, and because for $x$ fixed the function $f_x(y)=G(x,y)$ must have a minimum in the bounded intervals $[x,y_x]$ when $x\le1$ and $[y_x,x]$ when $x\gt1$ (see the explanation enclosed by ► and ◄ below), we deduce that
$G(x,y)\lt0$ when
$x\lt y\lt y_x$ and $x\lt 1$ and when $x\gt y\gt y_x$ and $x\gt1$.
► Let $x$ be fixed, say $x=a$ positive, and consider the function $f_a(y)=ae^y-ye^a$. Because $f'_a(y)=ae^y-e^a=0$ gives $y=a-\ln(a)$ and $f''_a(y)=ae^y\gt0$, it is clear that $f_a$ has a minimum at $y=a-\ln(a)$ ◄
Finally the shape of the searched set is $A\cup B\cup C$ where
$A=$the lower half of the third quadrant.
$B=$ the fourth quadrant.
$C$ contained in the first quadrant and $C=C_1\cup C_2$ where
$$C_1=\{(x,y):\space 0\le x\le1\text{ and }x\ge y\}\cup\{(x,y):\space 0\le x\le1\text{ and } y\ge y_x\}$$
$$C_2=\{(x,y): x\ge1\text{ and }y\le y_x\}\cup\{(x,y): x\ge1\text{ and }x\le y\}$$
NOTE.-It seems impossible to disregard the branch similar to a hyperbola branch of the curve $G (x, y) = 0$. Recall that $G (x, y_x) = 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2419476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proof verification: $x^3 + px - q = 0$ has three real roots iff $4p^3 < -27q^2$ I'm trying to prove the problem in the title. I'm not sure if my proof is clear, or even correct. It seems rather long, so I am not sure. Any advice would be great, especially on parts I might need to expand.
Let $f(x) = x^3 + px - q = 0$, then $f'(x) = 3x^2 + p$. Clearly, if $p \geq 0$ then $f'(x) > 0, \forall x$. Since $f(x) < 0$ as x approaches $-\infty$ (can I just state this?) and since $f(x) > 0$ as x approaches $\infty$ and $f'(x) >0, \forall x$ then by the Intermediate Value Theorem there exists only one solution on the interval $(-\infty, \infty)$ if $p \geq 0$.
Therefore, $f(x)$ [Edited: can only have] has three real roots iff $p < 0$. If $p <0$, then $f'(x) = 0$ for $x = \pm \sqrt{\frac{-p}{3}}$. Now, since $f(x) < 0, x \rightarrow -\infty$ and $f'(x) > 0$ for $x < -\sqrt{\frac{-p}{3}}$ then by the IVT there exists one real solution on the interval $\left (-\infty, -\sqrt{\frac{-p}{3}}\right)$ iff $f(-\sqrt{\frac{-p}{3}}) > 0$. Likewise, since $f'(x) < 0$ on $\left (-\sqrt{\frac{-p}{3}}, \sqrt{\frac{-p}{3}}\right)$ then by the IVT there exists only one solution on that interval iff $f(-\sqrt{\frac{-p}{3}}) > 0$ and $f(\sqrt{\frac{-p}{3}}) < 0$. Lastly, since $f(x) > 0, x \rightarrow \infty$ and $f'(x) > 0$ for $x > \sqrt{\frac{-p}{3}}$ then there exists only one solution on the interval $\left(\sqrt{\frac{-p}{3}}, \infty\right)$ iff $f(\sqrt{\frac{-p}{3}}) < 0$.
Since $f(x)$ has three roots iff there exists the one such root in each interval mentioned above, we require:
$f(\sqrt{\frac{-p}{3}}) < 0$ and $f(-\sqrt{\frac{-p}{3}}) > 0 \implies 2\left(\frac{-p}{3}\right)^{\frac{3}{2}} + q > 0,\ 2\left(\frac{-p}{3}\right)^{\frac{3}{2}} - q > 0.$ Multiplying the inequalities we get:
$4\left(\frac{-p}{3}\right)^{3} - q^2 > 0 \implies -\frac{4p^3}{27} - q^2 > 0 \implies 4p^3 < -27q^2.$
|
I think your statement is wrong.
Try, $p=-3$ and $q=-2$.
We have $$x^3+px-q=x^3-3x+2=(x-1)^2(x+2),$$
which says that the equation $$x^3+px-q=0$$
has three real roots, but $4p^3<-27q^2$ gives $-108<-108$, which is wrong.
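A quick check of this boundary case with NumPy (for `np.roots`, the coefficients of $x^3+px-q$ are $[1, 0, p, -q]$):

```python
import numpy as np

p, q = -3, -2
print(np.roots([1, 0, p, -q]))  # -2 and a double root at 1 (ordering may vary)
print(4 * p**3, -27 * q**2)     # -108 and -108: the strict inequality fails
```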
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2419714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Is the set $f=\{(x^3,x):x\in\mathbb R\}$ a function from $\mathbb R$ to $\mathbb R$? I need to find out if $f$ is a function or not. That is, whether or not the first coordinate of the ordered pair occurs only once in $f$. If yes, then it is a function.
My answer is yes, $f$ is a function because $f=x^{1/3}$ has a different value for every $x$.
Is this enough or do I need more explanation?
|
Let $y = g(x):= x^3$, $x \in \mathbb{R}$.
Domain$_g = \mathbb{R}; $ Range$_g = \mathbb{R}$.
$g$ is injective and surjective, i.e. bijective.
Injective: $g$ is strictly monotonic.
Surjective: $g$ is continuous on all of $\mathbb{R}$ and takes arbitrarily large positive and negative values, so the Intermediate Value Theorem applies.
$\Rightarrow :$
An inverse function $g^{-1}$ exists, continuous and strictly monotonic.
$g^{-1}(y) = y^{1/3},$ $y \in \mathbb{R}$.
Any similarity to your function $f?$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2419809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Subgroup of integers I know that the only possible subgroups of $\mathbb Z$ are of the form $m\mathbb Z$. But how can I prove that these are the only possible subgroups?
|
Prove the following two intermediate results:
*
*If a subgroup of $\Bbb Z$ contains some number $m$, then it contains $m\Bbb Z$
*If a subgroup of $\Bbb Z$ contains two numbers $m,n$, then it contains $\gcd(m,n)$
Now let $k$ be the smallest positive integer in your subgroup. If there is an element $a\notin k\Bbb Z$ in the subgroup, reach a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2419875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why is the Riemann distance better for PSD matrices than the Euclidean distance? Can you explain to me why the Riemann distance is better for positive semidefinite matrices (for example covariance matrices) than the Euclidean distance?
Here is the Riemannian distance:
$$
d\left(Σ_A,Σ_B\right)=\sqrt{ \sum_i{\ln^2{λ_i (Σ_A,Σ_B)}}}
$$
Where $$\lambda_i(Σ_A,Σ_B)$$ is the generalized eigenvalue of $Σ_A$ and $Σ_B$ (see https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix#Generalized_eigenvalue_problem).
I do not have, or little knowledge in geodesics but it seems to be the reason why it is better.
Any clarification would be very helpful
Thank you very much!
|
I can see several advantages of the Riemann distance over the Euclidean distance for covariance matrices.
First, the Riemann distance is scale-free: if you change units for both $\Sigma_A$ and $\Sigma_B$, you would expect the distance to remain unchanged, i.e., $$d(\Sigma_A, \Sigma_B) = d(k\Sigma_A, k\Sigma_B),$$ which is not true for the Euclidean distance.
Second, you would expect discrepancies among diagonal elements to have a different impact than discrepancies among off-diagonal elements, which again is not the case for the Euclidean distance.
Third, the Riemann distance respects the positive semidefinite constraint. Consider an example where $\Sigma_A - dA$ is semidefinite while $\Sigma_A + dA$ is not: you would not expect both to have the same distance to $\Sigma_A$, yet that is exactly what the Euclidean distance gives. A numerical illustration of the first point follows.
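Here is that illustration: a minimal NumPy/SciPy sketch with hypothetical random matrices, assuming strictly positive definite inputs so the generalized eigenvalue problem is well posed:

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemann_dist(S_a, S_b):
    lam = eigvalsh(S_a, S_b)             # generalized eigenvalues: S_a v = lam S_b v
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(0)
M, N = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
S_a = M @ M.T + 1e-3 * np.eye(4)         # random SPD matrices
S_b = N @ N.T + 1e-3 * np.eye(4)

k = 7.5                                  # a change of units
print(riemann_dist(S_a, S_b), riemann_dist(k * S_a, k * S_b))        # equal
print(np.linalg.norm(S_a - S_b), np.linalg.norm(k * S_a - k * S_b))  # scale by k
```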
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Integral and limit with geometric factor I would like to calculate the following limit $$\lim_{x \to x_0}{\frac{log\left(\frac{1}{x-x_0}\int_{x_0}^{x}{\frac{\sqrt{f(x)f(x_0)}}{f(u)}du}\right)}{\left(\int_{x_0}^{x}{\frac{1}{f(u)}du}\right)^2}}$$
where $f$ is strictly positive and infinitely differentiable.
We can define the functions $$h(x)=\int_{x_0}^{x}{\frac{1}{f(u)}du}$$
and $$h_1(x)=\sqrt{f(x)f(x_0)}h(x)$$
Thus the limit can be rewritten as
$$\lim_{x \to x_0}{\frac{log(\frac{h_1(x)-h_1(x_0)}{x-x_0})}{x-x_0}}\frac{x-x_0}{h_1(x)^2-h_1(x_0)^2}f(x)f(x_0)$$
Finally define $$h_2(x)=log(\frac{h_1(x)-h_1(x_0)}{x-x_0})$$
Thus,
$$\lim_{x \to x_0}{\frac{h_2(x)-h_2(x_0)}{x-x_0}}\frac{x-x_0}{h_1(x)^2-h_1(x_0)^2}f(x)f(x_0)$$
where $h_2(x_0)$ is obtained by extending $h_2$ by continuity around $x_0$, which is $0$
I would like to conclude using the definition of the derivative as a slope, but something is not adding up, as I get an undefined expression.
|
Let
$$
f(x)=\frac{f(x_0)}{g(x-x_0)}
$$
Then we have
$$
\begin{align}
g(x)&=\frac{f(x_0)}{f(x+x_0)}\\
g'(x)&=-\frac{f(x_0)\,f'(x+x_0)}{f(x+x_0)^2}\\
g''(x)&=f(x_0)\frac{2f'(x+x_0)^2-f(x+x_0)\,f''(x+x_0)}{f(x+x_0)^3}
\end{align}
$$
Furthermore,
$$
\begin{align}
&\lim_{x\to x_0}\frac{\log\left(\frac1{x-x_0}\int_{x_0}^x\frac{\sqrt{f(x)f(x_0)}}{f(u)}\,\mathrm{d}u\right)}{\left(\int_{x_0}^x\frac1{f(u)}\,\mathrm{d}u\right)^2}\\
&=f(x_0)^2\lim_{x\to0}\frac{\log\left(\frac1{\sqrt{g(x)}}\frac1x\int_0^xg(u)\,\mathrm{d}u\right)}{\left(\int_0^xg(u)\,\mathrm{d}u\right)^2}\\
&=f(x_0)^2\lim_{x\to0}\frac{\log\left(\frac1{\sqrt{g(x)}}\frac1x\int_0^x\left(1+g'(0)u+\frac{g''(0)}2u^2+O\!\left(u^3\right)\right)\,\mathrm{d}u\right)}{\left(\int_0^x\left(1+O\!\left(u\right)\right)\,\mathrm{d}u\right)^2}\\
&=f(x_0)^2\lim_{x\to0}\frac{\log\left(\frac{1+\frac{g'(0)}2x+\frac{g''(0)}6x^2+O\left(x^3\right)}{1+\frac{g'(0)}2x+\left(\frac{g''(0)}4-\frac{g'(0)^2}8\right)x^2+O\left(x^3\right)}\right)}{\left(x+O\!\left(x^2\right)\right)^2}\\
&=f(x_0)^2\lim_{x\to0}\frac{\left(\frac{g'(0)^2}8-\frac{g''(0)}{12}\right)x^2+O\!\left(x^3\right)}{x^2+O\!\left(x^3\right)}\\[6pt]
&=f(x_0)^2\left(\color{#C00}{\frac{g'(0)^2}8}\color{#090}{-\frac{g''(0)}{12}}\right)\\[6pt]
&=f(x_0)^2\left(\color{#C00}{\frac{f'(x_0)^2}{8\,f(x_0)^2}}\color{#090}{-\frac{2f'(x_0)^2}{12\,f(x_0)^2}+\frac{f''(x_0)}{12\,f(x_0)}}\right)\\[6pt]
&=\frac{-f'(x_0)^2+2f''(x_0)\,f(x_0)}{24}
\end{align}
$$
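As a sanity check of the closed form, here is a small numerical sketch with the hypothetical choice $f(u)=e^u$, $x_0=0$, for which $\frac{-f'(x_0)^2+2f''(x_0)\,f(x_0)}{24}=\frac{1}{24}$:

```python
import numpy as np
from scipy.integrate import quad

f, x0 = np.exp, 0.0   # hypothetical test case; the closed form predicts 1/24

def expression(x):
    inner, _ = quad(lambda u: np.sqrt(f(x) * f(x0)) / f(u), x0, x)
    outer, _ = quad(lambda u: 1.0 / f(u), x0, x)
    return np.log(inner / (x - x0)) / outer ** 2

for x in (0.1, 0.01, 0.001):
    print(x, expression(x))   # approaches 1/24 = 0.041666...
```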
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that $\lim_{n \to \infty} \frac{\sqrt {n^2 +2}}{4n+1}=\frac14$?
How to prove that $\lim_{n \to \infty} \frac{\sqrt {n^2 +2}}{4n+1}=\frac14$?
I started my proof with Suppose $\epsilon > 0$ and $m>?$ because I plan to do scratch work and fill in.
I started with our convergence definition, i.e. $\lvert a_n - L \rvert < \epsilon$
So $\lvert \frac{\sqrt {n^2 +2}}{4n+1} - \frac {1}{4} \rvert$ simplifies to $\frac {4\sqrt {n^2 +2} -4n-1}{16n+4}$
Now $\frac {4\sqrt {n^2 +2} -4n-1}{16n+4} < \epsilon$ is
simplified to $\frac {4\sqrt {n^2 +2}}{16n} < \epsilon$. Then I would square everything to remove the square root and simplify fractions, but I end up with $n> \sqrt{\frac{1}{8\left(\epsilon^2-\frac{1}{16}\right)}}$
We can't assume $\epsilon > \frac{1}{4}$ so somewhere I went wrong. Any help would be appreciated.
|
Let $\epsilon>0$
$$\left|\frac {4\sqrt {n^2 +2} -4n-1}{16n+4}\right|\leq\frac{4n+8-4n-1}{16n+4}=\frac{7}{16n+4} \leq \frac{7}{16n}$$
We have that $\frac{7}{16n} \to 0$
Thus there exists $n_0 \in \mathbb{N}$ such that $\frac{7}{16n}< \epsilon, \forall n \geq n_0$
So $\frac{1}{n}<\frac{16\epsilon}{7} \Rightarrow n> \frac{7}{16\epsilon}$
Take $n_0=[\frac{7}{16\epsilon}]+1$ and we have that $$\forall n\geq n_0=[\frac{7}{16\epsilon}]+1 \Rightarrow \frac{7}{16n}<\epsilon \Rightarrow \left|\frac {4\sqrt {n^2 +2} -4n-1}{16n+4}\right| < \epsilon $$
Note that $[x]$ is the integer part of $x$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
}
|
Interchanging limit and supremum A sequence of functions is bounded, $f_k(x) \le c\ \forall\ k, \forall\ x \ge 0,\ c \in \mathbb{R}$, and each $f_k$ is decreasing for $x \ge 0$. Is it possible to show the inequality $$\lim_{x \to \infty} \sup_{k \ge 1} f_k(x) \le \sup_{k \ge 1} \lim_{x \to \infty} f_k(x) \text{ ?}$$
|
Here is a counter-example: let $$f_k(x) = 1 - \left( \frac {x}{1+x}\right)^k, \quad x\ge 0$$
Then:
*
*since $g : [0,\infty) \to [0,1) : x \mapsto \frac {x}{1+x}$ is increasing, and $h : [0,1) \to [0,1) : x \mapsto x^k$ is also increasing, so is their composition $h\circ g$. Therefore $f_k : [0,\infty) \to (0,1] : x \mapsto 1 - h\circ g(x)$ is decreasing and bounded below by $0$.
*For all $k, \lim_{x \to \infty} f_k(x) = 0$, so $\sup_k \lim_{x \to \infty} f_k(x) = 0$.
*For all $x, \sup_k f_k(x) = 1$, so $\lim_{x \to \infty} \sup_k f_k(x) = 1$.
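A quick numerical look at the two bullet points above (NumPy; the particular values of $x$ and $k$ are arbitrary):

```python
import numpy as np

def f(k, x):
    return 1 - (x / (1 + x)) ** k

x = 1000.0
print(f(2.0 ** np.arange(25), x).max())  # sup over k approaches 1 for any fixed x

for x in (10.0, 100.0, 1000.0):
    print(f(5, x))                       # for fixed k, f_k(x) -> 0 as x grows
```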
Original post (where I thought the sequence was decreasing, not the functions):
If the sequence of functions is decreasing, then for all $k, x, f_1(x) \ge f_k(x)$. Therefore $\sup_{k \ge 1} f_k(x) = f_1(x)$. And $$\lim_{x\to\infty} f_1(x) \ge \lim_{x\to\infty} f_k(x)$$ when they converge (or diverge to $\pm\infty$). So again $$\lim_{x\to\infty} f_1(x) \ge \sup_{k \ge 1} \lim_{x\to\infty} f_k(x)$$
But since the LHS is also one of the values the supremum is being taken over, we must have
$$\lim_{x\to\infty} f_1(x) = \sup_{k \ge 1} \lim_{x\to\infty} f_k(x)$$
Thus
$$\lim_{x\to\infty} \sup_{k \ge 1} f_k(x) = \sup_{k \ge 1} \lim_{x\to\infty} f_k(x)$$
Being bounded below by a constant is not necessary, but convergence of the limits is.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove identity holds Just wondering if there is any other way to show that, for each positive integer $n$,
$$2\left(\sqrt{n} - 1\right) < 1 + \frac{1}{\sqrt{2}} +\cdots+\frac{1}{\sqrt{n}} < 2\sqrt{n}$$
holds, other than by mathematical induction.
|
Note that for every $k\geq1$
$$\frac{1}{\sqrt{k+1}}<\int_k^{k+1}\frac{dx}{\sqrt{x}}<\frac{1}{\sqrt{k}}$$
and likewise $\frac{1}{\sqrt{k}}<\int_{k-1}^{k}\frac{dx}{\sqrt{x}}$. Adding these relations for $k=1,\dots,n$ we get
$$\int_1^{n+1}\frac{dx}{\sqrt{x}}<1 + \frac{1}{\sqrt{2}} +\cdots+\frac{1}{\sqrt{n}}<\int_0^n\frac{dx}{\sqrt{x}}$$
Evaluating the integrals gives $2\sqrt{n+1}-2$ on the left and $2\sqrt{n}$ on the right; since $2\sqrt{n+1}-2>2(\sqrt{n}-1)$, we get what we wanted.
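A quick numerical confirmation of the resulting bounds $2(\sqrt{n}-1)<\sum_{k=1}^n 1/\sqrt{k}<2\sqrt{n}$:

```python
import math

for n in (1, 10, 100, 10**4):
    s = sum(1 / math.sqrt(k) for k in range(1, n + 1))
    print(n, 2 * (math.sqrt(n) - 1) < s < 2 * math.sqrt(n))  # True for each n
```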
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
}
|
I was wondering how to evaluate $\lim\limits_{x \to0} x\sum 1/n$ I was wondering how to evaluate
$$\lim_{x\to 0}x\sum_{n=1}^\infty \frac 1 n$$
Edit
so that it's clear I mean
$$\lim_{x\to 0}x\cdot\sum_{n=1}^\infty \frac 1 n$$
which is equivalent to
$$\lim_{x\to 0}\sum_{n=1}^\infty \frac x n$$
|
$$\lim_{x\to 0}x\sum_{n=1}^\infty\frac1n=\left(\sum_{n=1}^\infty\frac1n\right)\lim_{x\to 0}x$$
since the series does not depend on $x$. Hence, the above expression is undefined, as
$$\sum_{n=1}^\infty\frac1n$$
is divergent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
To prove that $B\subset A\rightarrow A=A\cup B$ is it necessary to consider the cases $A\cup B=\emptyset$ and $A\cup B\neq \emptyset$? From what I know, to prove $B\subset A\rightarrow A=A\cup B$ it suffices to prove that $A\subset (A\cup B)$ and $(A\cup B)\subset A$.
Proving that $A\subset (A\cup B)$ is trivial, because $A\cup B=\{x:x\in A\vee x\in B\}$.
My teacher in proving that $(A\cup B)\subset A$ separated the proof into two parts: $A\cup B=\emptyset$ and $A\cup B\neq \emptyset$. But I didn't understand why the need to separate this proof in two cases since
$x\in (A\cup B)\to x\in A \vee x\in B$
We know, by hypothesis, that $B\subset A$, so we can conclude that $(x\in A\vee x\in B)\leftrightarrow(x\in A)$ is a tautology (using a truth table). So saying $x\in A\vee x\in B$ is equivalent to saying $x\in A$, which implies
$x\in (A\cup B)\to x\in A \vee x\in B\equiv x\in A\Rightarrow (A\cup B)\subset A$ (note: the implication $p\to q$ is true whenever $p$ is false).
In my opinion in the proof that $(A\cup B)\subset A$ there was no need to consider two cases, but as my teacher has great knowledge about set theory I feel that I am making a mistake by not considering the two cases. That is why I ask for help in this doubt.
Obs.: If I used some logical symbol wrongly let me know, please, since I still know few things about propositional logic.
|
You are right, and I see no errors. We can also observe that $B\subset A\iff \forall x\;(x\in B\implies x\in A).$ Therefore if $B\subset A$ then for all $x$ we have $$(x\in A\cup B)\implies (x\in A\lor x\in B)\implies$$ $$\implies (x\in A\lor x\in A)\implies(x\in A)\implies$$ $$\implies(x\in A \lor x\in B)\implies(x\in A\cup B).$$
From this we infer that for all $x$ we have $x\in A\cup B\iff x\in A$.
This is in my style, which is almost never brief.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2420922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If $X$ is first countable, is $f(X)$ also first countable? I'm trying to understand the proof of this theorem: $X$ is first countable and $f:X\rightarrow Y$ is a function. Then $f$ is continuous at $x\in X$ iff for every sequence $(x_n)$ which converges to $x$, $f(x_n)$ converges to $f(x)$.
Fix $x\in X$. To prove the "only if" statement, it suffices to show that $f(\overline{A})\subseteq\overline{f(A)}$ for each $A\subset X$ such that $x\in\overline{A}$.
Since $x\in\overline{A}$, by a previously proven lemma, there is a sequence $(x_n)$ in $A$ which converges to $x$. By assumption, the sequence $f(x_n)$ in $Y$ converges to $f(x)$. Since $f(x_n)\in f(A)$ for each $n\in\mathbb{N}$, we have $f(x)\in\overline{f(A)}$.
The last line of the proof seems to imply that if $X$ is first countable, then so is $f(X)$. But why is this true? I think it is only true if $f$ is continuous at $x$, but obviously we can't assume that.
|
Firstly: $f[X]$ is not necessarily first countable, e.g. take $X = \mathbb{R}$ in the usual topology and let $\sim$ be the equivalence relation with classes $\{x\} , x \notin \mathbb{Z}$ and $\mathbb{Z}$ (We identify the integers to a point). Then the quotient space $Y = X/\sim$ in the quotient topology, induced by the standard map $q$ mapping $x$ to its class, is a continuous image (of $q$) which is not first countable at the class/point $[\mathbb{Z}]$.
Following your proof, which is quite correct: we assume $f$ preserves sequence limits. Then we want to show that for any $A \subseteq X$, $f[\overline{A}] \subseteq \overline{f[A]}$, which is indeed one of the characterisations of continuity.
So let $f(x) \in f[\overline{A}]$, so $x \in \overline{A}$, which implies that there is a sequence $x_n$ from $A$ that converges to $x$ (this holds in first countable spaces! this is where we use it). Then $f(x_n) \to f(x)$ by the assumption on $f$, and as all $f(x_n) \in f[A]$, $f(x) \in \overline{f[A]}$ as required.
The last fact is true in all spaces: if $x_n \in B$ and $x_n \to x$ in the space, then $x \in \overline{B}$: let $O$ be any open neighbourhood of $x$. Then $O$ contains all $x_n$ for $n \ge N$ for some $N$. But in particular: $x_N \in O \cap B \neq \emptyset$, so $x \in \overline{B}$. But the reverse (that being in the closure means there is a sequence from the set that converges to it) uses (part of) first countability.
Also, the direction $f$ continuous implies $f$ preserves all sequence limits, holds for all spaces $X,Y$ and $f$ between them. Only the sufficiency needs some assumption on $X$, but no assumptions on $Y$ (or $f[X]$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How do I find the closed form of this integral $\int_0^2\frac{\ln x}{x^3-2x+4}dx$? How do I find the closed form of this integral:
$$I=\int_0^2\frac{\ln x}{x^3-2x+4}dx$$
First, I have a partial fraction of it:
$$\frac{1}{x^3-2x+4}=\frac{1}{(x+2)(x^2-2x+2)}=\frac{A}{x+2}+\frac{Bx+C}{x^2-2x+2}$$
$$A=\frac{1}{(x^3-2x+4)'}|_{x=-2}=\frac{1}{(3x^2-2)}|_{x=-2}=\frac{1}{10}$$
$$Bx+C=\frac{1}{x+2}|_{x^2-2x=-2}=\frac{1}{(x+2)}\frac{(x-4)}{(x-4)}|_{x^2-2x=-2}=$$
$$=\frac{(x-4)}{(x^2-2x-8)}|_{x^2-2x=-2}=\frac{(x-4)}{(-2-8)}|_{x^2-2x=-2}=-\frac{1}{10}(x-4)$$
Thus:
$$\frac{1}{x^3-2x+4}=\frac{1}{10}\left(\frac{1}{x+2}-\frac{x-4}{x^2-2x+2}\right)$$
$$I=\frac{1}{10}\left(\int_0^2\frac{\ln x}{x+2}dx-\int_0^2\frac{(x-4)\ln x}{x^2-2x+2}dx\right)$$
What should I do next?
|
To address your question of how to handle loops when integrating by parts, let $I=\int e^x\sin x \ dx$. Both functions are transcendental. We'll try using $u_1=e^x$ and $dv_1=\sin x \ dx$. These give $du_1=e^x \ dx$ and $v_1=-\cos x$. Thus, $$I=-e^x\cos x+\int e^x\cos x \ dx.$$
Now we have another integral. We'll try by parts again, and it's important we keep the same arrangement as last time (i.e. we need $u_2=du_1$ and $dv_2=v_1$; in this case we used the exponential as $u$ and the trigonometric factor as $v$, though we could have done it the other way as long as we were consistent), otherwise we will just undo our last step. So, $u_2=e^x$ and $dv_2=\cos x \ dx$. These give $du_2=e^x \ dx$ and $v_2=\sin x$. Thus,
$$
\begin{align}
I&=-e^x\cos x+e^x\sin x-\int e^x\sin x \ dx \\
&=e^x(\sin x-\cos x)-I \\
2I&=e^x(\sin x-\cos x) \\
I&=\frac{1}{2}e^x(\sin x-\cos x).
\end{align}
$$
The important step is realising that when you get back to where you started, you can perform algebra to solve for your result.
That said, I do not guarantee this is what's needed here, just thought it might be worth a try and then you asked. So here you go.
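For what it's worth, SymPy confirms the antiderivative obtained from the loop:

```python
import sympy as sp

x = sp.symbols('x')
I = sp.integrate(sp.exp(x) * sp.sin(x), x)
print(I)                                                         # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
print(sp.simplify(I - sp.exp(x) * (sp.sin(x) - sp.cos(x)) / 2))  # 0
```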
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Understanding a sentence in a Galois Theory paper about polynomials I've been trying to understand a sentence in a math paper for some time. I'll be straightforward:
(*) $$x^n + a_{n-1}x^{n-1} +...+a_1x + a_0 = 0$$
(...)
If we denote the roots of (*) by $x_1, x_2, ... , x_n$ so that
$$(x-x_1)(x-x_2)...(x-x_n) = x^n + a_{n-1}x^{n-1} +...+a_1x + a_0$$
Then $a_0, ... , a_{n-1}$ are polynomial functions of $x_1, x_2, ... , x_n$ called elementary symmetric functions:
$$a_0 = (-1)^nx_1x_2...x_n , a_{n-1} = -(x_1 + x_2 + ... + x_n)$$
This last statement between the two lines is the one I have some issues with.
*
*When the author claims that $a_0, \dots , a_{n-1}$ are functions of $x_1, \dots , x_n$, does he simply mean it in a way that $x_1, \dots , x_n$ define what the elements $a_0, \dots , a_{n-1}$ are?
*And more importantly, where does this last equation come from? The only way I can think of making $a_0$ a function of $x_1, \dots , x_n$ and $a_1, \dots , a_n$ is to use (*) to write
$$a_0 = (-1)[x^n + a_{n-1}x^{n-1} +...+a_1x]$$
, but I can't seem to relate this last equality to the one the author wrote.
I would truly appreciate any help/thoughts!
|
For $1$, what darij said.
Just expand $(x-x_1)(x-x_2)\cdots (x-x_n)$. For small $n$, we have:
$$(x-x_1)(x-x_2) = x^2 - (x_1 + x_2) x + x_1 x_2 \\(x-x_1)(x-x_2)(x-x_3) = x^3 - (x_1 + x_2 + x_3)x^2 + (x_1 x_2 + x_1 x_3 + x_2 x_3)x - x_1 x_2 x_3$$
and the pattern reveals itself as the author claims:
$$(x-x_1)\cdots(x-x_n) = x^n - (x_1 + \cdots + x_n) x^{n-1} + \cdots + (-1)^n x_1 \cdots x_n$$
now compare these two expressions of $x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0$.
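If you want to see the expansion without doing it by hand, a short SymPy check for $n=3$:

```python
import sympy as sp

x, x1, x2, x3 = sp.symbols('x x1 x2 x3')
poly = sp.expand((x - x1) * (x - x2) * (x - x3))
print(sp.collect(poly, x))
# x**3 - (x1 + x2 + x3)*x**2 + (x1*x2 + x1*x3 + x2*x3)*x - x1*x2*x3  (up to ordering)
```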
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Matrix Linear Least Squares Problem with Diagonal Matrix Constraint How could one solve the following least-squares problem with Frobenius Norm and diagonal matrix constraint?
$$\hat{S} := \arg \min_{S} \left\| Y - XUSV^T \right\|_{F}^{2}$$
where $S$ is a diagonal matrix and $U,V$ are column-orthogonal matrices. Is there any fast algorithm?
|
The closed-form solution to this problem is
$$S = {\rm Diag}\bigg(\Big(I\odot U^TX^TXU\Big)^{-1}{\rm diag}\Big(U^TX^TYV\Big)\bigg) \\
$$
which was derived as follows.
For typing convenience, define the matrices
$$\eqalign{
S &= {\rm Diag}(s) \\
A &= XUSV^T-Y \\
}$$
Write the problem in terms of these new variables, then calculate its gradient.
$$\eqalign{
\phi &= \|A\|^2_F = A:A \\
d\phi &= 2A:dA \\
&= 2A:XU\,dS\,V^T \\
&= 2U^TX^TAV:dS \\
&= 2U^TX^TAV:{\rm Diag}(ds) \\
&= 2\,{\rm diag}\Big(U^TX^TAV\Big):ds \\
&= 2\,{\rm diag}\Big(U^TX^T(XUSV^T-Y)V\Big):ds \\
\frac{\partial\phi}{\partial s}
&= 2\,{\rm diag}\Big(U^TX^T(XUSV^T-Y)V\Big) \\
}$$
Set the gradient to zero and solve for the optimal vector.
$$\eqalign{
{\rm diag}\Big(U^TX^TYV\Big)
&= {\rm diag}\Big(U^TX^TXU\;{\rm Diag}(s)\;V^TV\Big) \\
&= \Big(V^TV\odot U^TX^TXU\Big)\,s \\
&= \Big(I\odot U^TX^TXU\Big)\,s \\
s &= \Big(I\odot U^TX^TXU\Big)^{-1}{\rm diag}\Big(U^TX^TYV\Big) \\
S &= {\rm Diag}\bigg(\Big(I\odot U^TX^TXU\Big)^{-1}{\rm diag}\Big(U^TX^TYV\Big)\bigg) \\
}$$
In some of the steps above, the symbol $(\odot)$ denotes the elementwise/Hadamard product and $(:)$ denotes the trace/Frobenius product, i.e.
$$\eqalign{ A:B = {\rm Tr}(A^TB)}$$
Finally, the ${\rm diag}()$ function returns the main diagonal of a matrix as a column vector, while the ${\rm Diag}()$ function creates a diagonal matrix from its vector argument.
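Here is a quick numerical check of the closed form, a NumPy sketch with hypothetical random data; note that $(I\odot U^TX^TXU)^{-1}$ simply divides by the diagonal entries:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 8, 5, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, r))
U, _ = np.linalg.qr(rng.standard_normal((p, r)))  # column-orthogonal U
V, _ = np.linalg.qr(rng.standard_normal((r, r)))  # column-orthogonal V

# s_i = [U^T X^T Y V]_ii / [U^T X^T X U]_ii
s = np.diag(U.T @ X.T @ Y @ V) / np.diag(U.T @ X.T @ X @ U)

def obj(t):
    return np.linalg.norm(Y - X @ U @ np.diag(t) @ V.T, 'fro') ** 2

best = obj(s)   # no diagonal perturbation should do better
print(all(obj(s + 1e-3 * rng.standard_normal(r)) >= best for _ in range(100)))
```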
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|