| Q | A | meta |
|---|---|---|
Equation for linear subspaces Let $Z$ be a vector space over some field $K$ and $U, V, W \subseteq Z$ linear subspaces. I am trying to prove or disprove the statement $$(U+W)\cap V = U\cap V + W\cap V,$$
where $X + Y := \{x+y\,|\, x\in X, \,y \in Y\}$.
Since I could not come up with counter-examples (I tried with vectors from $\mathbb R^3$, maybe there are examples that are more advanced I did not come up with), I started trying to prove it.
I tried to start with the "$\subseteq$"-direction:
Let $x \in (U + W)\cap V.$ Then $x\in U+W$ and $x\in V$. We want to show that $x\in U\cap V + W\cap V$, i.e. find $u \in U\cap V$ and $w\in W \cap V$ with $x = u + w$. But here I already don't know how to proceed since I don't know how to find such $u, w$. Any help is appreciated.
EDIT: The "$\supseteq $"-direction should be true; I already proved that.
|
Take $Z=\Bbb{R}^2$ and $U$, $V$, $W$ to be three distinct one-dimensional subspaces. Then any one of $U$, $V$, and $W$ is contained in the sum of the other two, so, in particular, $V\cap(U+W)=V$. However, the intersection of any two of these subspaces is $\{0\}$, so $V\cap U+V\cap W=\{0\}+\{0\}=\{0\}$.
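To make the construction concrete (these particular lines are just one illustrative choice), take
$$U=\operatorname{span}\{(1,0)\},\qquad W=\operatorname{span}\{(0,1)\},\qquad V=\operatorname{span}\{(1,1)\}.$$
Then $U+W=\Bbb R^2$, so $V\cap(U+W)=V\neq\{0\}$, while $U\cap V=W\cap V=\{0\}$, so $U\cap V+W\cap V=\{0\}$.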
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why is $\frac{1}{4}\int_{-1}^{1}\int_{-1}^{1}e^{it(x+y)}(1+xy(x^{2}-y^{2})) dxdy =\frac{\sin^{2}(t)}{t^{2}}$
Why is
$\displaystyle{{1 \over 4}\int_{-1}^{1}\int_{-1}^{1}
\mathrm{e}^{\mathrm{i}t\left(x + y\right)}\
\left[1 + xy\left(x^{2} - y^{2}\right)\right]\,\mathrm{d}x\,\mathrm{d}y
=
{\sin^{2}\left(t\right) \over t^{2}}}$?
How do I solve this integral? The identity must be right, as WolframAlpha gives the same result.
But calculating
$\displaystyle{\int_{-1}^{1}\mathrm{e}^{\mathrm{i}t\left(x + y\right)}
$\left[1 + xy\left(x^{2} - y^{2}\right)\right]\mathrm{d}x}$ first will probably give me a really complicated term, so I guess there must be some kind of tricky substitution or identity that I can't see right now.
|
The square $[-1,1]^2$ is symmetric with respect to its diagonals, hence the given integral equals the same integral with the variables $x$ and $y$ exchanged. By cancellation it follows that
$$ \iint_{(-1,1)^2}e^{it(x+y)}(1+xy(x^2-y^2))\,dx\,dy = \iint_{(-1,1)^2}e^{it(x+y)}\cdot 1\,dx\,dy $$
and by Fubini's theorem the last integral is the square of $\int_{-1}^{1}e^{itz}\,dz=\frac{2\sin t}{t}$, i.e. $\frac{4\sin^2 t}{t^2}$.
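A quick numerical sanity check of the claimed identity (a sketch; the value of $t$ and the use of scipy are my own choices, and only the real part is integrated since the imaginary part cancels by symmetry):

```python
import numpy as np
from scipy.integrate import dblquad

t = 1.7  # arbitrary test value
# Real part of the integrand; the imaginary part integrates to zero over the symmetric square.
real_part = lambda x, y: np.cos(t * (x + y)) * (1 + x * y * (x**2 - y**2))
val, _ = dblquad(real_part, -1, 1, -1, 1)
print(0.25 * val, (np.sin(t) / t) ** 2)   # both ≈ 0.340
```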
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What does "even" mean? And "odd"? I'm actually an international student and I'm not very confident with specific mathematical terms. I was doing some exercises when I came across the words even and odd. What do they mean exactly?
Here's the context:
Which of the following relations are even?
I. something
II. something
III. something
Results:
(A) only I (B) only I and II (C) ... (D) ... (E) ...
Which of the following relations are odd?
I. something
II. something
III. something
Results:
(A) only I (B) only I and II (C) ... (D) ... (E) ...
Thank you very much!
|
A (binary) relation $R \subseteq X \times Y$ is said to be even if whenever $(x,y) \in R$, then $(-x,y) \in R$ as well. A binary relation is called odd if whenever $(x,y) \in R$, then $(-x,-y) \in R$ as well.
Wasn't sure either at first, but the term is generalized from the usage in the study of functions (for a source see here).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How is the fundamental theorem of calculus used in a proof about the mild representation formula for the solution to an abstract ODE that I'm reading? Let
*
*$E$ be a $\mathbb R$-Banach space
*$S:[0,\infty)\to E$ be a $C^0$-semigroup
*$T>0$
*$f\in C^0([0,T],E)$
Since $S$ is a $C^0$-semigroup, $$[0,\infty)\to E\;,\;\;\;t\mapsto S(t)x\tag1$$ is continuous for all $x\in E$. However, unless $S$ is even uniformly continuous, this shouldn't imply the continuity of $$[0,t]\to E\;,\;\;\;s\mapsto S(t-s)f(s)\tag2$$ for all $t\in(0,T]$.
However, how is then the fundamental theorem of calculus used in equation (12.28) of the book An Introduction to Partial Differential Equations by Michael Renardy and Robert C. Rogers?
Excerpt of the mentioned book (the authors use the symbol $T$ for both the maximal time and the semigroup $S$):
|
I think everything is okay. Since $S$ is uniformly bounded in operator norm on any finite interval $[0,T]$, say by $M$, we have
\begin{align}
& \|S(t-s)f(s)-S(t-s')f(s')\| \\
&= \|\{S(t-s)-S(t-s')\}f(s)+S(t-s')\{f(s)-f(s')\}\| \\
&\le \|\{S(t-s)-S(t-s')\}f(s)\|+M\|f(s)-f(s')\|.
\end{align}
Hence,
$$
\lim_{s'\rightarrow s}S(t-s')f(s')=S(t-s)f(s).
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving an alternate quadratic formula It is well known that the quadratic formula for $ax^2+bx+c=0$ is given by$$x=\dfrac {-b\pm\sqrt{b^2-4ac}}{2a}\tag1$$
Where $a\ne0$. However, I read somewhere else that given $ax^2+bx+c=0$, we have another solution for $x$ as$$x=\dfrac {-2c}{b\pm\sqrt{b^2-4ac}}\tag2$$
Where $c\ne0$. In fact, $(2)$ gives solutions for $0x^2+bx+c=0$!
Question:
*
*How would you prove $(2)$?
*Why is $(2)$ somewhat similar to $\dfrac 1{(1)}$ but with $2a$ replaced with $-2c$?
|
You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$
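A quick numerical illustration of this (the coefficients below are arbitrary test values):

```python
import math

a, b, c = 2.0, -7.0, 3.0                 # arbitrary quadratic with c != 0
disc = math.sqrt(b * b - 4 * a * c)
roots_1 = {(-b + disc) / (2 * a), (-b - disc) / (2 * a)}   # formula (1)
roots_2 = {-2 * c / (b + disc), -2 * c / (b - disc)}       # formula (2)
print(roots_1, roots_2)                  # the same two roots, {0.5, 3.0}
```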
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
}
|
Hypergeometric Function on the Unit Circle The Gauss's hypergeometric function is given by the series
$$_2F_1\left(a,b;c;z\right)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{z^{n}}{n!}, \qquad \left | z \right |<1.$$
But the function admits an analytic continuation on and beyond the unit circle. My question is : how to express the function for :
$$z=e^{it}\;\;\;\; t\in \mathbb{R}$$
|
For example, there is an identity
$$ _2F_1(a,b;c;z)=(1-z)^{-a}{}_2F_1\left(a,c-b;c;\frac{z}{z-1}\right).$$
The hypergeometric function on the right is given by a power series in $\frac{z}{z-1}$ which converges in the half-plane $\Re z<\frac12$. The latter contains a large part of the unit circle, namely, $\frac{\pi}{3}<\arg z<\frac{5\pi}{3}$. The complementary part can be analogously obtained by transforming the argument into $1-z$.
If, however, your hope is that the hypergeometric function simplifies in some way on the unit circle - alas, it doesn't.
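For what it's worth, the quoted transformation can be checked numerically with mpmath, which continues $_2F_1$ analytically to the unit circle (the parameters and the point $z$ below are arbitrary choices with $\Re z<\frac12$):

```python
import mpmath as mp

a, b, c = mp.mpf('0.3'), mp.mpf('1.2'), mp.mpf('2.5')   # arbitrary parameters
z = mp.exp(2j)                                          # |z| = 1, Re z = cos 2 < 1/2
lhs = mp.hyp2f1(a, b, c, z)
rhs = (1 - z) ** (-a) * mp.hyp2f1(a, c - b, c, z / (z - 1))
print(lhs, rhs)                                         # should agree to working precision
```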
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What happens to an absolute value graph if $|x|$ has a coefficient I skipped Algebra I in school, and we have Mathematics midterms next week. While going through our review packet, I noticed graphing absolute values, something I had never seen before.
I've figured out the basics: $|x+n|$ translates the graph $n$ units along the x axis, $|x|+d$ translates the graph $d$ units along the y axis, and $-|x|$ flips the graph so it opens downward.
What happens, however, if we have $a|x|$, or $|ax|$? Is there an easy short hand way to draw this, or do I have to make a chart of the points and graph them one by one?
|
For starters,
$$|ax|=|a||x|=\begin{cases}+|a|x;&x\ge0\\-|a|x;&x<0\end{cases}\implies\text{slope is }\pm |a|$$
which is just a taller or shorter $V$ shaped graph.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Help find this limit I have the limit below
$$\lim_{(x,y)\rightarrow(0,0)}\frac{x^3-2y^3}{x^2+2y^2}$$
I know that the limit must be zero if it exists, since approaching along the line $y=mx$ for any constant $m$ gives $0$, but I don't know how to prove it. I want to use the squeeze theorem but I don't know what function to use for it.
|
Approach with ellipses $x=r \cos (\theta)$ and $y=\frac{1}{\sqrt{2}}r \sin (\theta)$ with $r \to 0^+$ regardless of $\theta$.
This gives,
$$\lim_{r \to 0^+} \frac{r^3 \cos^3(\theta)-\frac{1}{\sqrt{2}}r^3 \sin^3 (\theta)}{r^2}$$
$$=\lim_{r \to 0^+} r(\cos^3 (\theta)-\frac{1}{\sqrt{2}}\sin^3 (\theta))$$
$$=0$$
Or note,
$$|\frac{x^3}{x^2+2y^2}| \leq |\frac{x^3}{x^2}|$$
$$=|x| \to 0$$
Similarly we have,
$$|\frac{-2y^3}{x^2+2y^2 }| \leq |\frac{2y^3}{2y^2}|$$
$$=|y| \to 0$$
Now use $|a+b| \leq |a|+|b|$ to conclude with squeeze.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
independence between one R.V. and its sum with an independent R.V. If I have two independent, poisson-distributed random variables $X$ and $Y$, and their sum, $ {X+Y}=Z $, is $X$ independent of $Z$? I know that $X$ would be independent of any function of numerous individual random variables of which it is independent, but I'm running into a problem applying this reasoning to a function that involves X itself.
|
One simple proof that they are dependent involves showing that the conditional distribution of Z given X is not the same as the unconditional distribution. That is,
$P(Z \leq z | X=x) = P(x + Y \leq z) = P(Y \leq z-x) \neq P(Z \leq z)$.
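A quick simulation also makes the dependence visible (the rates and sample size are arbitrary; a nonzero correlation already rules out independence):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=200_000)
y = rng.poisson(5.0, size=200_000)
z = x + y
print(np.corrcoef(x, z)[0, 1])   # ≈ 0.61, far from 0, so X and Z are not independent
```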
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
About the determinant bundle Let $p:X\rightarrow Y$ be a finite cover of smooth curves, and consider a family $E$ of vector bundles over $X$ parametrized by a scheme $T$.
Is it true that the determinants of the cohomology bundles of $E$ and $p_*E$ are the same?
Thanks
|
Let $f:X \times T \to T$ and $g: Y \times T \to T$ be the projections, and write $p$ also for the induced map $p\times\mathrm{id}_T:X\times T\to Y\times T$. Then $f = g \circ p$, and since $p$ is finite it has no higher direct images. It follows that
$$
R^if_*E \cong R^ig_*(p_*E),
$$
hence a fortiori
$$
\det(R^if_*E) \cong \det(R^ig_*(p_*E)).
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Radius of convergence of sum of two series How can we find the interval of convergence of the following series:
$$\sum_{n=1}^{\infty} x^n +\frac{1}{({x^n}{2^n})}.$$
|
Hint: Both expressions are geometric series. The first converges for $|x|<1$ and the second converges for $|x/2|<1$. Use the intersection of both regions to get your radius of convergence.
In order to get this result from the ratio test: $x^n+x^n/2^n=(1+1/2^n)x^n.$ The coefficient is $(1+1/2^n)$. Now apply the ratio test.
$R=\lim_{n\to\infty}\left|\frac{a_n}{a_{n+1}}\right|=\lim_{n\to\infty}\left|\frac{1+1/2^n}{1+1/2^{n+1}}\right|=1.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2105929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the value of this infinite sum Consider the series
$$\sum_{n=1}^{\infty}\frac{n^2-n+1}{n!}$$
From ratio test it is clear that this series is covergent. What is its value ?
|
We have $$\sum_{n=1}^{\infty} \frac {n^2-n+1}{n!} =\sum_{n=1}^{\infty} \frac {n^2}{n!} + \sum_{n=1}^{\infty}\frac {-n}{n!} +\sum_{n=1}^{\infty} \frac {1}{n!} = (2e)+ (- e) + (e-1) = 2e-1$$ Hope it helps.
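A quick partial-sum check of the closed form (30 terms is more than enough here):

```python
import math

s = sum((n * n - n + 1) / math.factorial(n) for n in range(1, 31))
print(s, 2 * math.e - 1)   # both ≈ 4.43656
```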
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
Can ZF-AoI prove that all models of ZF-AoI are at least countable? Do we have:
$${\sf ZF-AoI}\vdash\forall X,(X\text{ is a model of }{\sf ZF-AoI})\implies(X\text{ is at least countable})$$
AoI is the axiom of infinity.
I know that some models of $\sf ZF-AoI$ have no at-least-countable sets, but I'm fairly certain that those models contain no models of $\sf ZF-AoI$, making the statement vacuously true in those models.
|
Yes, it does.
Working in a model $V$ of ZF-AoI, suppose $M$ is a set model of ZF-AoI. Then consider the set $X\subset M$ of $M$-ordinals, ordered by $\in^M$. This is a linear order.
Of course, it may not be well-founded, but it has a well-founded part. Let $$Y=\{x\in X: \mbox{the suborder of $M$-ordinals $<x$ is well-founded}\}.$$ This is easily checked to be a well-ordering itself with no greatest element. Now, by Replacement, $Y$ initial-segment-embeds into the ordinals of our background model $V$; let $\alpha$ be the supremum of the image of this embedding. Now it's not hard to show that $\alpha$ (that is, the set of ordinals below $\alpha$) is an inductive set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
is there such a thing as a one-way definition? A definition always looks something like this:
We say $P$ if and only if $Q$.
Is there an example of a definition where the biconditional is replaced with "if" (and we mean just "if")? What about in the other direction?
|
Short answer: Every definition is a bi-conditional, but one direction is vacuous so we omit it for the purposes of logical aesthetic.
Long answer: Think about this in terms of what a definition is which is an assignment of meaning to a string of symbols. A string of symbols, say "$xyz$" has no inherent meaning unless we assign it one. So to say something like "A < something > is "$xyz$" if and only if < condition > " makes sense in both directions, but, because it is the first instance of "$xyz$" one direction in the statement ("< condition > $\implies$ "$xyz$" ") is vacuous because at that instance "$xyz$" means nothing in particular.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Use this hint to show that any decimal expansion which is eventually periodic An infinite decimal $x = a_0.a_1a_2 ...$ is eventually periodic if there are positive integers $n$ and $k$
such that $a_{i+k} = a_i$ for all $i > n$.
Show that any decimal expansion which is eventually periodic represents a rational number.
HINT: Compute $10^{n+k}x−10^{n}x$.
I don't know how use this hint.
|
Think of a power of ten as shifting the digits. For example, consider the decimal $7.8333333\ldots$. Here, taking $n = 1$ and $k = 1$ demonstrates that this is an eventually periodic decimal. $10^{1+1}x - 10^1x = 783.333\ldots - 78.333\ldots = 705$. Now, the interesting thing is that $705$ is an integer! So we have that $100x - 10x = 705$. But $100x - 10x = 90x$. So $90x = 705$, so $x = \frac{705}{90}$ and is therefore rational.
This happened because, magically, $10^{1+1}x-10^1x$ turned out to be a whole number. Was that a random coincidence? Or will it happen every time? (hint, hint).
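The same arithmetic, done with exact fractions in Python (purely illustrative):

```python
from fractions import Fraction

# From 100x - 10x = 705 we get x = 705 / (10**2 - 10**1).
x = Fraction(705, 10 ** 2 - 10 ** 1)
print(x, float(x))   # 47/6  7.8333...
```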
As a general rule, when given a hint like this, try it out on a specific example, and see if you can spot how it's useful in that one case. Don't worry about the general case until you can do it for one specific number. Then see if you can prove that whatever tricks you used for that case always work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Discuss the validity of Rolle's Theorem Guys this is the question and I tried to solve its first part but I am not sure if it is correct or not and also I am unable to solve the second part, kindly help me.
Question: Part (1): Discuss the validity of Rolle's Theorem for the function $f(x)=4x^2-20x+29$ over the interval $[1,4]$
Part (2): Find $c$, if possible.
My Attempt for Part (1): I calculated $f(1)$ and $f(4)$; both were equal to $13$, hence in my view Rolle's Theorem is valid for this function.
Kindly correct me if I am wrong, and also kindly tell me how Part (2) of this question can be solved. Thanks in advance.
|
$f$ is a polynomial, hence continuous on $[1,4]$ and differentiable on $(1,4)$, and $f(1)=f(4)=13$, so we can apply Rolle's Theorem. We have $f'(x)=8x-20$, hence $f'(\frac{5}{2})=0$, and $\frac{5}{2}\in(1,4)$, so $c=\frac{5}{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show property of determinant Let $$A = \begin{pmatrix}1&a_1+\hat a_1 &a_2 + \hat a_2\\1& b_1 + \hat b_1 & b_2 + \hat b_2 \\ 1 & c_1 + \hat c_1 & c_2 + \hat c_2\end{pmatrix}$$ be a real $3\times3$ matrix. I need help proving $$\det A = \det\begin{pmatrix}1&a_1 &a_2\\1& b_1 & b_2 \\ 1 & c_1 & c_2 \end{pmatrix} + \det\begin{pmatrix}1&a_1 &a_2\\1& \hat b_1 & \hat b_2 \\ 1 & \hat c_1 & \hat c_2 \end{pmatrix} +\det\begin{pmatrix}1& \hat a_1 & \hat a_2\\1& b_1 & b_2 \\ 1 & \hat c_1 & \hat c_2 \end{pmatrix} +\det\begin{pmatrix}1&\hat a_1 &\hat a_2\\1& \hat b_1 & \hat b_2 \\ 1 & c_1 & c_2 \end{pmatrix}$$
I am only allowed to use "properties" of the determinant (i.e. multilinearity in each argument etc.), calculating both sides and comparing is obviously not allowed.
I first used the multilinearity in each row to get
$$\det A = \det\begin{pmatrix}1&a_1 &a_2\\1& b_1 & b_2 \\ 1 & c_1 & c_2 \end{pmatrix} + \det\begin{pmatrix}1&a_1 &\hat a_2\\1& b_1 & \hat b_2 \\ 1 & c_1 & \hat c_2 \end{pmatrix} +\det\begin{pmatrix}1& \hat a_1 & a_2\\1&\hat b_1 & b_2 \\ 1 & \hat c_1 & c_2 \end{pmatrix} +\det\begin{pmatrix}1&\hat a_1 &\hat a_2\\1& \hat b_1 & \hat b_2 \\ 1 &\hat c_1 &\hat c_2 \end{pmatrix}$$
but from here I don't know what properties I could use to get the result.
|
HINT: Do the exact same thing you did but use multilinearity of the determinant as a function of column vectors instead of row vectors.
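If you want a quick sanity check of the target identity before proving it, here is a numerical spot check with random entries (my own sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
a, ah, b, bh, c, ch = rng.standard_normal((6, 2))   # (a1,a2), (â1,â2), (b1,b2), ...

def D(r1, r2, r3):
    # det of the 3x3 matrix with first column all ones and the given pairs in the last two columns
    return np.linalg.det(np.column_stack(([1, 1, 1], np.vstack((r1, r2, r3)))))

lhs = D(a + ah, b + bh, c + ch)
rhs = D(a, b, c) + D(a, bh, ch) + D(ah, b, ch) + D(ah, bh, c)
print(np.isclose(lhs, rhs))   # True
```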
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How would you write this in Sigma notation I stumbled upon this expression:$$p_nq_mx^{m+n}+(p_{n-1}q_m+p_nq_{m-1})x^{m+n-1}+(p_{n-2}q_m+p_{n-1}q_{m-1}+p_nq_{m-2})x^{m+n-2}+\ldots+p_0q_0\tag1$$
And I'm wondering if there is an easier way to represent $(1)$ using the sum notation Sigma: $\sum$.
|
$$\sum_{i=0}^{n}\sum_{j=0}^{m} p_iq_jx^{i+j}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2106741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Using vectors show that $AL$ bisects $BC$. A line $EF$ drawn parallel to the base $BC$ of a
$∆ ABC$ meets $AB$ & $AC$ in $F$ & $E$ respectively. $BE$ & $CF$
meet in $L$. Using vectors we have to show that $AL$ bisects $BC$.
I tried as
Let $B$ be the origin, let $A$ have position vector $\vec a$, and let $C$ have position vector $\vec c$.
Now let $\frac{AF}{AB} = m$.
Then vectors $\vec f$ and $\vec e$ are respectively $(1-m)\vec a$ and $(1+m)\vec a-m\vec c$.
But now how do I proceed?
|
Then vectors $\vec f$ and $\vec e$ are respectively $(1-m)\vec a$ and $(1+m)\vec a-m\vec c$.
It should be $\vec e=(1\color{red}{-}m)\vec a\color{red}{+}m\vec c$.
There exist real numbers $s,t$ such that
$$\vec{l}=s\vec e=(1-m)s\vec a+ms\vec c\quad\text{and}\quad \vec l=t\vec f+(1-t)\vec c=(1-m)t\vec a+(1-t)\vec c$$
to have
$$(1-m)s=(1-m)t\quad\text{and}\quad ms=1-t$$
so
$$s=t=\frac{1}{m+1}$$
Therefore, we get
$$\vec l=\frac{1-m}{m+1}\vec a+\frac{m}{m+1}\vec c$$
which can be written as
$$\vec{AL}=\frac{m}{m+1}(\vec{AB}+\vec{AC})$$
The claim follows from this.
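For readers who want a sanity check, here is a coordinate computation with an arbitrary triangle and an arbitrary ratio $m=AF/AB$ (the specific numbers are my own choices), confirming that $AL$ passes through the midpoint of $BC$:

```python
import numpy as np

A, B, C = np.array([0.0, 3.0]), np.array([-2.0, 0.0]), np.array([4.0, 1.0])
m = 0.4
F = A + m * (B - A)          # F on AB with AF/AB = m
E = A + m * (C - A)          # E on AC, since EF is parallel to BC

def intersect(P, Q, R, S):   # intersection of line PQ with line RS
    t, _ = np.linalg.solve(np.column_stack((Q - P, R - S)), R - P)
    return P + t * (Q - P)

L = intersect(B, E, C, F)    # L = BE ∩ CF
M = (B + C) / 2              # midpoint of BC
u, v = L - A, M - A
print(abs(u[0] * v[1] - u[1] * v[0]) < 1e-12)   # True: A, L, M are collinear
```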
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Rearranging trigonometry formula How do I rearrange this trig formula to make y the subject? I am having trouble to take the $y$ out of the cosine.
$x = \cos(a+y)\times50$
Any help is appreciated.
|
$$x = \cos(a+y)\times50$$
Divide both sides by $50$:
$$\frac{x}{50} =\frac{ \cos(a+y)\times50}{50}$$
$$\frac{x}{50}=\cos(a+y)$$
Now take the inverse cosine:
$$a+y=\cos^{-1}\left(\frac{x}{50}\right)$$
Subtract $a$ from both sides:
$$y+a-a=\cos^{-1}\left(\frac{x}{50}\right)-a$$
This means that
$$y=\cos^{-1}\left(\frac{x}{50}\right)-a\:.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show that axiom of replacement implies from the Axiom of Specification [Proof Verification]
Axiom $3.6$ (Replacement). Let $A$ be a set. For any object $x \in A$ and any
object $y$, suppose we have a statement $P(x, y)$ pertaining to $x$ and $y$,
such that for each $x\in A$ there is at most one $y$ for which $P(x,y)$ is
true. Then there exists a set $\{y: P(x, y) \text{ is true for some } x \in A\}$
such that for any object $z$, $$ z\in \{y : P(x, y)\text{ is true for some } x \in A\} \iff P(x,z)\text{ is true for some } x \in A.$$
Axiom $3.5$ (Specification). Let $A$ be a set, and for each $x\in A$, let $P(x)$ be a property pertaining to $x$ (i.e., $P(x)$ is either a
true statement or a false statement). Then there exists a set, called
$\{x \in A : P(x) \text{ is true}\}$ (or simply $\{x \in A : P(x)\}$ for short), whose elements are precisely the elements $x$ in $A$ for which $P(x)$ is true. In other words, for any object $y$, $$y \in \{x \in A: P(x)\text{ is true}\} \iff (y \in A \text{ and } P(y)\text{ is true}).$$
I have to show that $3.6\implies 3.5.$
Proof: By $(3.6)$ we can assume the following set $$\{x:P(x,x)\text{ is true for some }x\in A\}.$$ Let $Q(x)=P(x,x)$ then we get the set $$\{x\in A:Q(x) \text{ is true for some }x\},$$
which is what $(3.5)$ wants. Is this proof correct?
PS. I have read other answers to this question on MSE, but none of them use this formulation of the axiom and I guess use more formal notation. I am learning from Tao's Analysis book and so I've not been introduced to such notation.
|
Your argument isn't quite correct. But it's on the way.
You are given a property $P$ and you want to separate a subset of $A$ which is specified by the property $P$.
For this, you need to find a property $Q(x,y)$ for which if $x\in A$, then there is at most a single $y$ such that $Q(x,y)$ holds, and you need to choose $Q$ in such a way that you get exactly the subset of $A$ specified by $P$.
The obvious solution, of course, is $Q(x,y)$ to be $x=y\land P(x)$. I will leave you to prove that it satisfies the conditions needed to apply Replacement on $Q$, and of course $\{y\mid Q(x,y)\text{ holds for some }x\in A\}$ is exactly—by the definition of $Q$—the same $\{x\in A\mid x=x\land P(x)\}$ which in turn is exactly $\{x\in A\mid P(x)\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Quotient rule of derivatives The quotient rule for derivatives is: $(\frac{f}{g})^{\prime}$ = $(\frac{f^{\prime}g - g^{\prime}f}{g^2})$, but when I compute the derivative of $\frac{1}{(1-x)}$, it gives $\frac{1}{(1-x)^2}$, which is right; however, taking the second derivative of this gives me $\frac{2}{(1-x)^4}$, but the right answer is $\frac{2}{(1-x)^3}$. Where do I make a mistake?
|
You get this because you can cancel out one $(1-x)$ in the denominator.
$\Big(\frac{1}{(1-x)^2}\Big)'=\frac{0\cdot(1-x)^2-1\cdot((1-x)^2)'}{((1-x)^2)^2}=\frac{2(1-x)}{(1-x)^4}=\frac{2}{(1-x)^3}$.
If you just follow the formula you had carefully, with $f=1$ and $g=(1-x)^2$, this is what you get.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Why is $9 \times 11{\dots}12 = 100{\dots}08$? While I was working on Luhn algorithm implementation, I discovered something unusual.
$$ 9 \times 2 = 18 $$
$$ 9 \times 12 = 108 $$
$$ 9 \times 112 = 1008 $$
$$ 9 \times 1112 = 10008 $$
Hope you can observe the pattern here.
How to prove this?
What is its significance?
|
The repunit, $R_k = \overbrace{111\ldots 111}^{k \text{ ones}}$ , can be written as $R_k = \dfrac{10^k-1}{9}$
Your nice pattern corresponds to $9\times (R_k+1) = (10^k-1)+9 = 10^k+8$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
}
|
Derivatives, bounds and surjectivity Let $f: \mathbb{R} \to \mathbb{R}$ be a differentiable function.
We know that:
$\lim_{x\to-\infty}\ f'(x) = \lim_{x\to+\infty}\ f'(x) = \frac1 2$
Prove that such function is surjective.
Ok my thoughts so far are:
Let's suppose that such a function has an upper bound. Then let $s=\sup(f)$ be such upper bound. Since it is continuous, there has to be a neighbourhood of $+\infty$ in which the derivative remains positive, and we can take it big enough to include $s$ (?). Then we can conclude that in such neighbourhood the function is increasing, which goes against the fact that s is an upper bound.
Therefore the function cannot have an upper bound. A similar reasoning can be done for the lower bound, and we can conclude that the function is surjective.
Pretty sure it's completely wrong, but still a try.
|
We may choose $M>0$ and $N<0$ such that if $x\in (-\infty, N)\cup (M,\infty),$ then $f'(x)>0. $
Now, consider separately the intervals $(-\infty, N),\ [N,M],\ (M,\infty). $
On the first and third of these, $f$ is increasing and continuous; moreover, since $f'\to\frac12$ at $\pm\infty$, we have $f(x)\to-\infty$ as $x\to-\infty$ and $f(x)\to+\infty$ as $x\to+\infty$. Hence $f$ covers $(-\infty,f(N)]$ and $[f(M),\infty)$.
Wlog $f(M)\ge f(N)$. Then the IVT implies that $f([N,M])$ contains $[f(N),f(M)],\ $
so $f$ is surjective, as required.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Solve $2^x\cdot 6^{x-2}=5^{2x}\cdot 7^{1-x}$ I have to solve this equation, $2^x\cdot 6^{x-2}=5^{2x}\cdot 7^{1-x}$. Now, I started by taking logs on both sides which gives me this funny looking equation
$x\log{2}+(x-2)\log(2\cdot3)=2x\log(\frac{10}{2})+(1-x)\log{7}$
I have been stuck on this step for a while now and can't see how I can go further from here. Is there a way out?
|
It's $$x\ln2+(x-2)\ln6=2x\ln5+(1-x)\ln7,$$ which gives
$$x=\frac{2\ln6+\ln7}{\ln2+\ln6-2\ln5+\ln7}$$ or
$$x=\log_{\frac{84}{25}}252$$
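A quick numerical check of the closed form (direct substitution into the original equation):

```python
import math

x = math.log(252) / math.log(84 / 25)
lhs = 2 ** x * 6 ** (x - 2)
rhs = 5 ** (2 * x) * 7 ** (1 - x)
print(x, math.isclose(lhs, rhs))   # x ≈ 4.56, True
```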
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Tiles Combinatorics I have the following question:
Let there be a road of length $n$.
There are 3 types of tiles, of lengths 1, 2, 3.
We'll define $a(n)$ as the number of tilings of a road of length $n$ in which a tile of length 1 and a tile of length 2 are never adjacent.
I need to find a recurrence relation for $a(n)$.
I've manually calculated $a(n)$ for $0\leq n \leq6$ and I got $a(0)=1, a(1)=1, a(2)=2, a(3)=2, a(4)=4, a(5)=6, a(6)=9$, but I can't find the general relation.
Thanks for the help.
|
Disclaimer: This is too long to be a comment and can be viewed as a partial answer; I can't finish it yet, but since it may help someone else, I am providing what I have so far.
Let me define 3 sequences with easier to find recurrence relations:
$u(n) $ gives the number of roads of length $n $ that end with a tile of length $1$;
$d(n) $ gives the number of roads of length $n $ that end with a tile of length $2$;
$t(n) $ gives the number of roads of length $n $ that end with a tile of length $3$.
All those roads are restricted to the condition imposed by the OP: no 1-tile is adjacent to a 2-tile.
The following should be immediate:
$$a(n) = u(n) + d(n) + t(n)\\
u(n) = u(n-1) + t(n-1)\\
d(n) = d(n-2) + t(n-2)\\
t(n) = u(n-3) + d(n-3) + t(n-3) $$
Also set $u,d,t$ to $0$ for $n<0$. For the base cases, direct counting gives $u(1)=1$, $d(1)=t(1)=0$; $u(2)=d(2)=1$, $t(2)=0$; and $u(3)=t(3)=1$, $d(3)=0$. With these, the relations reproduce the values computed by the OP (see the sketch below).
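Here is a small sketch of the recurrences with those base cases, checked against the values in the question (the implementation details are mine):

```python
def a(n):
    u = {1: 1, 2: 1, 3: 1}   # roads of length m ending in a 1-tile
    d = {1: 0, 2: 1, 3: 0}   # roads of length m ending in a 2-tile
    t = {1: 0, 2: 0, 3: 1}   # roads of length m ending in a 3-tile
    for m in range(4, n + 1):
        u[m] = u[m - 1] + t[m - 1]
        d[m] = d[m - 2] + t[m - 2]
        t[m] = u[m - 3] + d[m - 3] + t[m - 3]
    return u[n] + d[n] + t[n]

print([a(n) for n in range(1, 7)])   # [1, 2, 2, 4, 6, 9]
```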
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to calculate $\lim\limits_{i\to\infty} \frac{5^i - 2^{3i+1}}{7^{i-2}+8^i}$? I'm having trouble finding the answer to the limit
$$\lim\limits_{i\to\infty} \dfrac{5^i - 2^{3i+1}}{7^{i-2}+8^i}$$
I get the answer 98, which to me seems to be wrong. Can someone help me?
Thanks in advance.
|
Thanks everyone
got it
$a_i = \frac{5^i - 2^{3i+1}}{7^{i-2}+8^i}$
$$=\lim_{i\to\infty}\frac{5^i-2\cdot 8^i}{7^{-2}\cdot 7^i+8^i}\cdot\frac{1/8^i}{1/8^i}=\lim_{i\to\infty}\frac{(5/8)^i-2}{7^{-2}(7/8)^i+1}=\frac{0-2}{0+1}=-2$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
finding out the value of the expression Let $P(x)=x^5+x^2+1$ have roots $x_i,i=1,2,3,4,5$. Let $g(x)=x^2-2$,then the question is to find the value of $$\prod_{i=1}^5 g(x_i)-30g(\prod_{i=1}^5 x_i)$$.
It is clear that $30g(\prod_{i=1}^5 x_i)=-30$. I tried to substitute the values of the $g(x_i)$ and then tried to simplify
it, but could not proceed. I know that this is not an elegant way to approach the problem and there must be some trick involved in it. Any hint would be highly appreciated. Thanks.
|
Factoring P gives:
$$P(x)=\prod_{i=1}^5(x-x_i)=-\prod_{i=1}^5(x_i-x)$$
So $P(\sqrt 2)=-\prod_{i=1}^5(x_i-\sqrt 2)$ and $P(-\sqrt 2)=-\prod_{i=1}^5(x_i+\sqrt 2)$
So
$$P(\sqrt 2)P(-\sqrt 2)=\prod_{i=1}^5(x_i-\sqrt 2)\prod_{i=1}^5(x_i+\sqrt 2)=\prod_{i=1}^5(x_i^2-2)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2107841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What do the Hurwitz quaternions have to do with the Hurwitz quaternion order? The Hurwitz quaternions are the ring formed by the elements of the form $w+xi+yj+zij$ where $i^2=j^2=-1$, $ij=-ji$, and where $w,x,y,z$ are either all integers or all half-integers. These form a maximal order of the quaternion algebra $\Big(\frac{-1,-1}{\mathbb{Q}}\Big)$.
The Hurwitz quaternion order, on the other hand, is defined as follows (according to Wikipedia). Let $\rho$ be the primitive seventh root of unity and let $K$ be the maximal real subfield of $\mathbb{Q}(\rho)$. Let $\eta=2\cos(\frac{2\pi}{7})$ (so that $\mathbb{Z}[\eta]$ is the ring of integers of $K$) and consider the quaternion algebra $\Big(\frac{\eta,\eta}{K}\Big)$ (where $i^2=j^2=\eta$). Then let $\tau=1+\eta+\eta^2$ and $j'=\frac{1+\eta i+\tau j}{2}$, and the Hurwitz quaternion order is the maximal order $\mathbb{Z}[\eta][i,j,j']$ in $\Big(\frac{\eta,\eta}{K}\Big)$.
It seems that the Hurwitz quaternion order should be some sort of generalization of the Hurwitz quaternions but there are a lot of decisions here that seem arbitrary to me. What is the motivation for the similar nomenclature? What is special about the order $\mathbb{Z}[\eta][i,j,j']$ in $\Big(\frac{\eta,\eta}{K}\Big)$ and what does it have in common with the Hurwitz quaternions in $\Big(\frac{-1,-1}{\mathbb{Q}}\Big)$?
|
It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\mathbb H$. Go figure.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
probability of getting 2 or 5 in two throws of a die So I know the probability rule of addition.
Getting 2 or 5 in two throws should be
$P(2)+ P(5)$. $P(2) = 1/6, P(5) = 1/6$, so combined it should be $1/3$.
I tried to visualize but not able to do so correctly.
11,12,13,14,15,16, 21,22,23,24,25,26,31,32, ....6,6
total of $36$ possibilities.
12,15,21,22,23,24,25,26,31,35,42,45,51,52,53,54,55,56,61,65
out of which $20$ possibilities, so the probability should be $20/36$ which is not $1/3$.
Where am I going wrong?
|
Both of your methods have some mistakes.
Method 1 -
Getting 2 on the first die and any number other than 2 or 5 on the second. But the 2 can be on the second die instead, so multiply these cases by 2.
$2 \left(\frac16 \times \frac46\right)$
= $\left(\frac13 \times \frac23\right)$
= $\left(\frac29\right)$
Similarly, getting 5 on the first die and any number other than 2 or 5 on the second. But the 5 can be on the second die instead, so multiply these cases by 2.
$2 \left(\frac16 \times \frac46\right)$
= $\left(\frac13 \times \frac23\right)$
= $\left(\frac29\right)$
Case with 2 on both dice or 5 on both dice.
$\left(\frac16 \times \frac16 + \frac16 \times \frac16\right)$
= $\left(\frac1{36} + \frac1{36}\right)$
= $\left(\frac2{36}\right)$
Case with 2 on first die and 5 on second or vice versa.
$\frac16 \times \frac16 + \frac16 \times \frac16$
= $\left(\frac1{36} + \frac1{36}\right)$
= $\left(\frac2{36}\right)$
Combining these,
$\left(\frac29 + \frac29 + \frac2{36} + \frac2{36}\right)$
= $\left(\frac{20}{36}\right)$
Method 2 -
21, 22, 23, 24, 25, 26, 12, 32, 42, 52, 62, 15, 35, 45, 55, 65, 51, 53, 54, 56 = 20 cases.
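The 20 cases can also be enumerated directly (a quick check of the count, giving $20/36=5/9$):

```python
from itertools import product

hits = [r for r in product(range(1, 7), repeat=2) if 2 in r or 5 in r]
print(len(hits), len(hits) / 36)   # 20  0.555...
```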
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
vector times cross product
We have vectors $x, y, z$ where $z = x \times y$.
What is $x \cdot z$?
From my intuition, the cross product is perpendicular to both vectors, so dot product should be 0?
|
Correct. If $x$ is not collinear to $y$, $z$ defines the direction of a normal vector to the plane that both $x$ and $y$ lie in. So $z$ is normal to both. In the case where $x$ and $y$ are collinear (one is a scalar multiple of the other), then the cross product between them is the null vector, so the assertion is trivially true.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Does $d (x,y)= (x-y)^2$ define a metric on the set of real numbers? It is a question from the functional analysis book by Erwin Kreyszig.
|
In order for it to be a metric it must follow these properties by definition,
$$d(x,y) \geq 0$$
$$d(x,y)=0 \iff x=y$$
$$d(x,y)=d(y,x)$$
$$d(x,z) \leq d(x,y)+d(y,z)$$
Does it?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to show that $\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)?$ How do we show that?
$$\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)\tag1$$
$(1)$ it a bit difficult to start with
$$\int_{0}^{\pi}(1+2x)\cdot{\sin(x)[1-\sin^2(x)]\over 1+\cos^2(x)}\mathrm dx\tag2$$
Setting $u=\cos(x)$
$du=-\sin(x)dx$
$$\int_{-1}^{1}(1+2x)\cdot{(u^2)\over 1+u^2}\mathrm du\tag3$$
$$\int_{-1}^{1}(1+2\arccos(u))\cdot{(u^2)\over 1+u^2}\mathrm du\tag4$$
$du=\sec^2(v)dv$
$$\int_{-\pi/4}^{\pi/4}(1+2\arccos(\tan(v)))\tan^2(v)\mathrm dv\tag5$$
$$\int_{-\pi/4}^{\pi/4}\tan^2(v)+2\tan^2(v)\arccos(\tan(v))\mathrm dv=I_1+I_2\tag6$$
$$I_1=\int_{-\pi/4}^{\pi/4}\tan^2(v)\mathrm dv=2-{\pi\over2}\tag7$$
As for $I_2$, I am not sure how to do it.
|
Using the fact that
$$
\int_0^\pi xf(\sin x)\,dx=\frac\pi2\int_0^\pi f(\sin x)\,dx
$$
and that $\cos^2 x=1-\sin^2x$ (so that this applies), you get that your integral equals
$$
(1+\pi)\int_0^\pi \frac{\sin^3 x}{1+\cos^2x}\,dx
$$
Writing
$$
\frac{\sin^3 x}{1+\cos^2x}=\frac{(1-\cos^2x)\sin x}{1+\cos^2x}=\frac{2}{1+\cos^2x}\sin x-\sin x
$$
we find that your integral equals
$$
(1+\pi)\bigl[-2\arctan(\cos x)+\cos x\bigr]_0^\pi=(1+\pi)(\pi-2).
$$
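A numerical confirmation of $(1)$ (using scipy's quad; the choice of tool is mine):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (1 + 2 * x) * np.sin(x) ** 3 / (1 + np.cos(x) ** 2)
val, _ = quad(f, 0, np.pi)
print(val, (np.pi + 1) * (np.pi - 2))   # both ≈ 4.728
```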
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
}
|
Chessboard, Tiling, Maths A square-shaped garden is divided into an n x n square grid by footpaths (like a chessboard).
The owner wishes to have all the sections covered by grass; however, the grass grows in a peculiar way. If at least two of the neighbouring squares (i.e. squares that share an edge) of a given square become fully covered with grass, then so will the given square.
What is the minimum number of squares that need to be planted initially so that the grass will eventually extend to the whole garden?
|
Take the $2\times2$ case and plant grass in the two diagonal squares; this eventually covers all $4$ squares. Now add $5$ squares along two adjacent edges of this square to make it a $3\times3$ square. None of the newly added squares grows grass on its own, so you need to plant one more of them; planting the new corner square (so that the planted squares are again the diagonal) makes the grass spread to the whole $3\times3$ square. This logic can be extended to higher $n$, so $n$ planted squares suffice. (For the matching lower bound, note that the total perimeter of the grassy region never increases when a square grows, since the new square shares at least two edges with the existing grass; the final perimeter is $4n$ while the initial perimeter is at most $4$ times the number of planted squares, so at least $n$ squares must be planted.) Thus the minimum number of squares that need to be planted initially is $n$.
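A small simulation of the growth rule, confirming that planting the $n$ diagonal squares fills the whole board (the board size and the encoding are my own choices):

```python
import numpy as np

n = 8
g = np.zeros((n, n), dtype=bool)
g[np.arange(n), np.arange(n)] = True          # plant the main diagonal
changed = True
while changed:
    padded = np.pad(g, 1)
    neighbours = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:])
    new = g | (neighbours >= 2)               # a square grows if >= 2 neighbours are grassy
    changed = bool((new != g).any())
    g = new
print(g.all())   # True: the whole garden ends up covered
```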
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Incircle problem Triangle ABC has incircle $ \beta$ which meets BC at D. A diameter of the incircle has endpoints E and D. A line joining A and E meets BC at F. Given that $DC \gt BC$. Prove that $BD =FC$
I couldn't find any synthetic geometry methods, so I resorted to coordinate geometry. I took the incircle as a circle with radius 1 and centre (0,1). $B \equiv (-x,0) , C \equiv (y,0)$
I found $A \equiv (\frac{x-y}{xy-1} , \frac{2xy}{xy-1} )$
Then on extending AE we get $ F \equiv (y-x,0)$ and hence it is proved that $\overline{BD} = \overline{FC}$.
I hope someone could provide a proof with Euclidean geometry, which is more intuitive than coordinate bashing.
Note: Please help in putting a suitable title.
|
First, we focus on the segment $FC$. Here $B'C'$ denotes the tangent line to the incircle at $E$ (which is parallel to $BC$, since $E$ is diametrically opposite the tangency point $D$), meeting $AB$ at $B'$ and $AC$ at $C'$, and $T$ denotes the point where the incircle touches line $AC$. Since $B'C'\, || \, BC$, by Thales' intercept theorem, the triangles $\Delta\, AB'C'$ and $\Delta\, ABC$ are similar and therefore
$$\frac{AB}{AB'} = \frac{BC}{B'C'} = \frac{AC}{AC'} = m$$
By the same fact, that $EC'\, || \, FC$, and the same theorem:
$$\frac{FC}{EC'} = \frac{AC}{AC'} = m$$
so $FC = m\, EC'$.
By the tangency of the incircle $k$ of triangle $\Delta \, ABC$ to the (extended) sides of triangle $\Delta\, AB'C'$,
$$EC' = TC' = TA - AC' = p' - AC'$$ where $p'$ is half of the perimeter of $\Delta\, AB'C'$. This implies
$$FC = m\, EC' = m\, (p' - AC') = m\,p' - m\, AC' = p - AC$$ where $p$ is half of the perimeter of $\Delta\, ABC$.
Now we focus on the segment $BD$. Since $k$ is the incircle of $\Delta \, ABC$,
$$BD = p - AC$$ Therefore
$$FC = p - AC = BD$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
How many ways to select at least one book of each subject? Suppose there are $6$ books on Maths, $3$ books on English and $2$ books on Science. How many ways are there to select at least one book of each subject, assuming the books of the same subject are different?
My try:
I have not solved it, but I just need to check my logic.
Number of ways:-
Total ways to select $3$ books - (Total ways to select $3$ books on Maths + Total ways to select $3$ books on English + Total ways to select $3$ books on Science + Total ways to select $3$ books on Maths and English + Total ways to select $3$ books on Maths and science + Total ways to select $3$ books on Science and English)
Is this logic right ?
|
I don't think that works, as you are not including one subject in all 3 cases.
I think an easier way is to compute:
Total cases - (cases where books from only 1 subject are selected + cases where books from only 2 subjects are selected)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to formalise: There are no geniuses but Newton was a genius How would you formalise: "There are no geniuses but Newton was a genius"?
I thought it could be:
$$\neg\forall x(Px) \land Pa$$
and also $$\neg\exists x (Px) \land Pa$$These both seem to make sense and formalise the sentence, however they are not identical statements. The way it is worded is really confusing!
|
One way to formalize this would be $\bot$, since that statement is a contradiction ... Unless of course the point of the sentence is that there are no geniuses now, though we have had geniuses in the past, like Newton.
One way to formalize that would be to use a predicate $G(x,t)$ which says that $x$ is a genius at time $t$. So then:
$\neg \exists x G(x,t_{now}) \land \exists t (t<t_{now} \land G(newton,t))$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Probability with balls and a box - complementary event of "exactly" There are $12$ balls in a box: $b$ blue, $y$ yellow and $3$ red balls. Three balls are randomly chosen. If the probability of choosing one blue, one yellow and one red is $3/11$, find the number of yellow balls in a box.
Attempt:
If $t$ is the total number of balls in a box, then:
$t=3+b+y$
$A$: "We choose exactly one blue, yellow and red ball."
$$P(A)=\frac{b}{t}\cdot\frac{y}{t-1}\cdot\frac{3}{t-2}=\frac{3}{11}$$
Substituting $t$ gives
$$P(A)=\frac{b}{3+b+y}\cdot\frac{y}{2+b+y}\cdot\frac{3}{1+b+y}=\frac{3}{11}$$
Now we have one equation with two unknowns, so we need to define another event.
Because we already know $P(A)$, that new event should be the complementary event of $P(A)$.
How to define that event and how to evaluate it?
|
Remember that you are given $t=12$.
Also,
$$\frac{b}{12}*\frac{y}{11}*\frac{3}{10}=\frac{3}{11}$$ is not correct, as the order in which you draw the balls doesn't matter. There are 6 total ways to draw three distinctly colored balls, so your equation should be:
$$6*\frac{b}{12}*\frac{y}{11}*\frac{3}{10}=\frac{3}{11}$$
This leads to:
$$by=20$$
Now, since we also know that $b+y=9$, we can conclude there are either 4 or 5 yellow balls.
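A brute-force check of the case $b=5$, $y=4$ (the other case, $b=4$, $y=5$, works the same way):

```python
from itertools import combinations
from fractions import Fraction

balls = ['b'] * 5 + ['y'] * 4 + ['r'] * 3
draws = list(combinations(range(12), 3))
fav = sum(1 for d in draws if {balls[i] for i in d} == {'b', 'y', 'r'})
print(Fraction(fav, len(draws)))   # 3/11
```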
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2108979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Show that three complex numbers $z_1, z_2, z_3$ are collinear iff $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$ I need to show that $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0 \iff z_1,z_2,$ and $z_3$ are collinear.
I know that $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$ implies that $\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1 \in \mathbb{R}$, but I am not sure how to argue in either direction. Please help. Thank you
|
Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one.
To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not.
And indeed, we'll see that
$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$
Observe that
\begin{align*}
\operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big)
&= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt]
&= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt]
&= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0}
\end{align*}
where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,
\begin{align*}
\operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big)
&= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) \\[7pt]
&= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\
&= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big)
\end{align*}
Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.
Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_2, w_3$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$.
But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
}
|
nonlinear ODE shooting method using Newton $y'' = 2y'-2y+2$
with $y(0)=1 $ and $ y(\frac{\pi}{2})=2$
I have to solve this using shooting method (Newton).
The first thing I need to do is replace the right boundary condition with a specified slope at the left boundary; I am told in the question to take $y'(0) = 0$ as the initial guess. Then the note tells me to use Newton's method on $y(\frac{\pi}{2};0)-2$?
I am quite confused as to how to proceed and do this question. I cannot find any similar examples.
Thanks.
|
I'll write out the idea of the shooting method in your problem, although the idea is pretty general. Any solution to $y''=2y'-2y+2,y(0)=1,y(\pi/2)=2$ is also a solution to $y''=2y'-2y+2,y(0)=1,y'(0)=s$ for some unknown number $s$. Denoting the solutions to this family of IVPs by $y(x;s)$, we can define $F(s)=y(\pi/2;s)$. $F$ can be approximately numerically evaluated using an IVP solver.
We then want to solve the equation $F(s)=2$, which can be done using a numerical method for 1D root finding, such as bisection, Newton's method, or the secant method. Newton's method is not easy to implement in this situation, because it is not easy to compute $F'(s)$. But bisection and the secant method are both easy to implement in this situation.
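Here is a short sketch of that procedure in Python, using scipy's IVP solver together with a bracketing root finder (the tolerances and the bracket are arbitrary choices; one can check that the exact solution is $y=1+e^{x-\pi/2}\sin x$, so the true slope is $e^{-\pi/2}$):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def F(s):
    # y(pi/2; s) - 2, where y solves y'' = 2y' - 2y + 2 with y(0) = 1, y'(0) = s
    sol = solve_ivp(lambda x, Y: [Y[1], 2 * Y[1] - 2 * Y[0] + 2],
                    (0, np.pi / 2), [1.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 2.0

s = brentq(F, -5.0, 5.0)                 # shooting: solve F(s) = 0
print(s, np.exp(-np.pi / 2))             # both ≈ 0.2079
```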
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
problem on convergence of series $\sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a$ Finding the set of all positive values of $a$ for which the series
$$
\sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a
$$ converges.
How will the convergence depend on the value of $a$, i.e. the power of the term?
After expanding the $\arctan$ term I get a general term of the form $\left[\frac{1}{3n^3}-\frac{1}{5n^5}+\cdots\right]^a$. Now how does it depend on $a$?
|
Hint. One may use a Taylor series expansion, as $n \to \infty$,
$$
\arctan \frac1n=\frac1n+O\left(\frac1{n^3}\right)
$$ giving, as $n \to \infty$,
$$
\left(\frac1n-\arctan \frac1n\right)^a=O\left(\frac1{n^{3a}}\right)
$$ I hope you can take it from here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
determine whether the vector is in the image of A
I am having a really hard time trying to understand what "the image of A" means and how to start such a question.
I was thinking of creating a matrix with b1 as the augmented part of the matrix.
|
The hint is in the text of your exercise. The image of $A$ is the abbreviation for the image of a linear map defined by the matrix $A$, i.e. $T:\Bbb R^3\to\Bbb R^3$ given by $T(x)=Ax$. To check whether a given vector $y$ lies in this image, you need to find the vector $x$ s.t. $y=Ax$. This leads to the system of linear equations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find all $x$ such that $x^6 = (x+1)^6$.
Find all $x$ such that $$x^6=(x+1)^6.$$
So far, I have found the real solution $x= -\frac{1}{2}$, and the complex solution $x = -\sqrt[3]{-1}$.
Are there more, and if so, what would be the most efficient way to find all the solutions to this problem? I am struggling to find the rest of the solutions.
|
Hint We may factor this as:
$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$
So then we have a linear and two quadratics which you should be able to solve.
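A symbolic check of the full solution set (sympy will factor the degree-5 polynomial for us):

```python
import sympy as sp

x = sp.symbols('x')
sols = sp.solve(sp.expand((x + 1) ** 6 - x ** 6), x)
print(len(sols), sols)   # 5 solutions; the only real one is -1/2
```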
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
What does the p in p-value stand for? Just to clarify, I know more or less how the p-value works, and that the topic of how to properly use the p-value for statistics has already been addressed on this site.
I was really just wondering what it actually stood for, as I couldn't find an answer elsewhere on the internet.
|
The p stands for probability. A p-value is the probability that we get a sample like the one you tested by random chance alone. Thus, a low p-value tells you that it is extremely unlikely for a sample like the one you have to occur based on random chance.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How can I solve $u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$ with the method of characteristics. I am trying to solve the following PDE: $u_{xt} + uu_{xx} = -\frac{1}{2}u_x^2$, with initial condition: $u(x,0) = u_0(x) \in C^{\infty}$ using the method of characteristics.
I am a beginner with the method of characteristics and PDE in general. Here is what I have so far.
Define $\gamma(x,t)$ as the characteristic curves.
$\frac{\partial}{\partial t} u_x(\gamma(x,t),t) = u_{xt} + u_{xx}\gamma_t(x,t) = - \frac{1}{2}u_x^2$
Set $u_t = u_x$
$\Rightarrow \frac{\partial}{\partial t} u_x(\gamma(x,t),t)= (u_t)_x + u_{xx}\gamma_t(x,t)$
$ = u_{xx} + u_{xx}\gamma_t = - \frac{1}{2}u_x^2$
From this I get $\gamma_t = -\frac{1}{2}\frac{u_x^2}{u_{xx}} - 1$
However, I am not sure this is the right approach and do not fully understand how to use the method of characteristics when the solution $u(x,t)$ is constant on the characteristic curves.
Any help is much appreciated.
Edit: I made some progress by using $v=u_x$ and getting $\frac{dv}{dt} = \frac{-1}{2} v^2$ and $\frac{\partial x}{\partial{t}} = 1$. Then separating the first ODE, I get $\frac{2}{v} = t + c$. However, I am not sure if my solution after integrating with respect to $x$ and using the initial condition is correct. I end up with $u(x,t) = \frac{2}{t+c}x + c_1$, $u(x,0) = \frac{2}{c}x + c_1$.
|
Not so sure:
$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$
$2u_{xt} + 2uu_{xx} + u_x^2 = 0$
Let's try a separated ansatz
$u(x,t) = X(x)\,T(t)$
Then:
$u_{x} = X' T$
$u_{t} = X T'$
$u_{xt} = X' T'$
$u_{xx} = X'' T$
Now:
$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$
$2 X' T' + 2 X T\, X'' T + (X')^2 T^2 = 0$
We divide by $X' T^2$:
$2 T' / T^2 + 2 X X'' / X' + X' = 0$
Please verify.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
Sum of series $\sum \limits_{k=1}^{\infty}\frac{\sin^3 3^k}{3^k}$ Calculate the following sum: $$\sum \limits_{k=1}^{\infty}\dfrac{\sin^3 3^k}{3^k}$$
Unfortunately I have no idea how to handle with this problem.
Could anyone show it solution?
|
Using
$$\sin(3a)=3\sin a-4\sin^3a \to \color{red}{\sin^3(a)=\frac14\Big(3\sin a-\sin(3a)\Big)} $$
so
\begin{eqnarray}
\sum_{k=1}^{\infty}\frac{\sin^3(3^k)}{3^k}
&=&
\frac14\sum_{k=1}^{\infty}\frac{3\sin(3^k)-\sin(3\cdot 3^k)}{3^k}\\
&=&
\frac14\sum_{k=1}^{\infty}\frac{\sin(3^k)}{3^{k-1}}-\frac{\sin(3^{k+1})}{3^{k}}\\
&=&
\frac14\sum_{k=1}^{\infty}f(k)-f(k+1)\\
&=&\frac14\Big(\frac{\sin3}{3^{1-1}}-\lim_{n \to \infty}\frac{\sin(3^{n+1})}{3^n}\Big)\\
&=&\frac{\sin(3)}{4}
\end{eqnarray}
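A partial-sum check against the closed form (terms beyond $k\approx 20$ are negligible):

```python
import math

s = sum(math.sin(3 ** k) ** 3 / 3 ** k for k in range(1, 25))
print(s, math.sin(3) / 4)   # both ≈ 0.03528
```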
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2109942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 0
}
|
Textbook on absolute continuity Can someone please recommend a textbook that gives a substantial treatment of absolute continuity and is accessible (written for students; sticks to the real numbers instead of turning to more abstract generalizations)?
|
The Wikipedia page on absolute continuity frequently uses Royden, H.L. (1988), Real Analysis (third ed.) as a reference. I think newer editions of this book are co-authored by Fitzpatrick, as Google searching the reference brought me to this pdf. See page 119 in the book numbering, page 130 in the pdf. I hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2110071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Maclaurin series of $f(g(x))$ I was doing some exercises about Maclaurin expansions when I noticed something: I remember the series formulas of some common functions with $x$ as the argument, but when I had to calculate the expansion of the same function with $x^2$ as the argument, for example, I would always recalculate the series from scratch.
Then I started to realise that I could have just substituted $x$ with $x^2$. So is it wrong to say that, given a polynomial (or power series) $P(x)$ which represents the Maclaurin series of a function $f(x)$, the Maclaurin series of $f(g(x))$ is equal to $P(g(x))$ when $g(x)$ approaches $0$?
If it's not completely wrong can you give me some hints in order to understand when it's correct?
|
The most important point here is that each function has a unique Taylor Series at a each point in its domain. You might find different ways to write the same series, but in the end the forms are really the same. In your case, this means that both a substitution and a direct calculation will give the valid Taylor Series (though the series might look a bit different).
Moreover, as pointed out prior, you will have to be careful about where your new series converges when you substitute. Always ensure you are in the disk of convergence for your new series when you substitute.
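A small sympy illustration with $f=\exp$ and $g(x)=x^2$ (my own example):

```python
import sympy as sp

x = sp.symbols('x')
direct = sp.series(sp.exp(x**2), x, 0, 8).removeO()                  # series computed from scratch
substituted = sp.series(sp.exp(x), x, 0, 4).removeO().subs(x, x**2)  # substitute x -> x**2
print(sp.expand(direct - substituted) == 0)   # True: the two expansions coincide
```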
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2110193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Finding dimension of a vector space, Hoffman and Kunze problem page 190 Let A be an $n\times n$ diagonal matrix with characteristic polynomial:
$$\prod_{i=1}^{k}(x-c_{i})^{d_{i}}$$
where $c_{1},...,c_{k}$ are distinct.
Let $V$ be the space of all $n\times n$ matrices $B$ such that $AB=BA$.
Prove that the dimension of $V$ is $$\sum_{i=1}^{k}d_{i}^{2}.$$
I have no idea how to proceed. Thanks in advance.
|
Note that $A$ is diagonal so we have the direct sum decomposition
$$ \mathbb{F}^n = \bigoplus_{i=1}^k \ker(A - c_i I) $$
where each $V_i := \ker(A - c_i I)$ is a $d_i$-dimensional eigenspace of $A$ associated to the eigenvalue $c_i$. If $BA = AB$ then each $\ker(A - c_i I)$ is $B$-invariant (so $BV_i \subseteq V_i$). On the other hand, if $BV_i \subseteq V_i$ then using the direct sum decomposition it is easy to see that $AB = BA$.
Hence, the subspace $C_A$ you are looking for is $\{ B \in M_n(\mathbb{F}) \, | \, BV_i \subseteq V_i \}$. You can always conjugate $A$ by a permutation matrix to make sure that the diagonal entries of $A$ come as
$$ \underbrace{c_1, \dots, c_1}_{d_1 \text{ times}}, \dots, \underbrace{c_k, \dots, c_k}_{d_k \text{ times}} $$
and this won't change the dimension of the space $C_A$. Assuming this is the case, $C_A$ consists of block-diagonal matrices $\operatorname{diag}(B_1,\dots,B_k)$ where each $B_i$ is an arbitrary $d_i \times d_i$ matrix. Hence,
$$ \dim C_A = \sum_{i=1}^k d_i^2. $$
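A quick numerical illustration (a numpy sketch; the matrix $A$ below is just an example with $d_1=2$, $d_2=1$): the commutant is the null space of the linear map $B\mapsto AB-BA$, so its dimension can be computed via the Kronecker product.

```python
import numpy as np

# A is diagonal with eigenvalue multiplicities d_1 = 2, d_2 = 1,
# so the predicted dimension of {B : AB = BA} is 2^2 + 1^2 = 5.
A = np.diag([1.0, 1.0, 2.0])
n = A.shape[0]

# vec(AB - BA) = (I kron A - A^T kron I) vec(B); the commutant is its null space.
M = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
dim = n * n - np.linalg.matrix_rank(M)
print(dim)  # 5
```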
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2110280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\binom{n}{r}=\binom{n}{n-r}$ by using the Binomial Formula Problem: Use the Binomial Formula to show that if $n$ and $r$ are integers with $0 \leq r \leq n$, then $\binom{n}{r}=\binom{n}{n-r}$.
My attempt: I am using the general binomial expansion formula to establish the following.
$(n+r)^{n}=n^{n}+nn^{n-1}r + ...$
But am not sure where to go from here. Should I do a proof by induction with base case $n=0$? I have a feeling induction is not necessary and this problem is easier than I think...
Edit: This question is different than the proposed duplicate because the "duplicate" does not use the binomial theorem method, which I must use. It uses a different method.
|
Hint:
The polynomials $(x+y)^n$ and $(y+x)^n$ are the same. Expand both and equate their coefficients.
Full answer:
The polynomials $(x+y)^n$ and $(y+x)^n$ are equal. If we apply the Binomial Formula to both of them, we obtain
$$\sum_{k=0}^n\binom nk x^ky^{n-k}=\sum_{j=0}^n\binom nj y^jx^{n-j}$$
Fix a degree $r$ for the variable $y$. The term that has this degree in the LHS is
$$\binom n {n-r}x^{n-r}y^r$$
and in RHS we find
$$\binom nr y^rx^{n-r}$$
This implies $$\binom nr=\binom n{n-r}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2110388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Prove or disprove: $a\mid(bc)$ if and only if $a\mid b$ and $a\mid c$
Prove $a\mid(bc)$ if and only if $a\mid b$ and $a\mid c$.
My attempt is proving the converse first so if $a|b$ and $a|c$ then $a|bc$
So since $a\mid b$ and $a\mid c$, then $b=ax$ and $c=ay$ for some integers $x$ and $y$.
So $bc=a(xy)$ therefore $a|bc$. Now the forward direction if $a|bc$ then $bc=az$ for some integer $z$. Letting $z=xy$ implies that $bc=(ax)(ay)$ so $b=ax$ and $c=ay$ thus $a|b$ and $a|c$. I'm not confident with the forward direction.
|
Counter-Example to prove the $\Rightarrow$ statement is not true for every value $a, b, c \in \mathbb Z$.
Put $a = 6,\; b=3,\; c = 4$
$a \mid (bc),\;\;$ but $\;a\not \mid b\;$ and $\;a \not \mid c$.
On the other hand, if $a\mid b$ and $a\mid c$, then $a\mid (bc)$.
Like you've shown: "So since $a|b$ and $a|c$ then $b=ax$ and $c=ay$ for some integers $x$ and $y$."
From there we have $bc= (ax)(ay).\,$ So $bc=a^2(xy)$ therefore $a|bc$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2110579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Pons Asinorum solution: why is this so hard? Background
You will recognize the image below as a variation on Euclid's famous Pons Asinorum (Donkey's bridge) problem. I ran across it on a friends facebook page except that specific values had been given for x and y. I could not solve it on my own, and I ended up looking up a solution online (there are several). I realized that the solutions I was finding were constructing equilateral triangles by knowing x and y and I would never get there because I was seeking a more general solution to the problem as shown below.
This lead me to two Questions:
1) How to express the angle α in terms of x and y when x and y are not known in advance?
2) I made a number of attempts at forming a system of equations from the rules for similar triangles, and I always ended up with inadequate systems (that is ones that could not be reduced to a single solution) What is it about the nature of this problem that so resists an algebraic solution using similar triangles?
|
Recall first of all that in a triangle with base $a$ and base angles $\beta$, $\gamma$, the altitude can be expressed as $a/(\cot\beta+\cot\gamma)$. This can be easily proved by the sine rule, for instance.
Consider now our triangle below and set $AB=l$. By the above rule we get then
$$
DH={l\over\cot(80°)+\cot(80°-x)}={l\over\tan(10°)+\tan(10°+x)},
\quad
EG={l\over\tan(10°)+\tan(10°+y)}.
$$
In addition we have
$$
EF=GH=l-EG\tan10°-DH\tan10°,
$$
and from triangle $DEF$ one gets, after some algebra:
$$
\tan(\alpha+10°+x)={EF\over DH-EG}={\tan^2 10°-\tan(10°+x)\tan(10°+y)\over\tan(10°+x)-\tan(10°+y)}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2110684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Newton's Law of Cooling Differential Equation I know and understand how to solve Newton's Law of Cooling, but came across a book that did the following and is slightly confusing me. It states the following:
Newton's Law of Cooling:
$\frac{dT}{dt} = k(T_{\infty} -T)$, where it calls $T_{\infty} = $ surrounding temperature.
It says the solution approaches $T_{\infty}$. Include that constant on the left side to make the solution clear: $\frac{d(T - T_{\infty})}{dt} = k(T_{\infty} - T)$. The solution ends up being $T - T_{\infty} = e^{-kt}(T_0 - T_{\infty})$.
What allows us (or how to derive) just to replace $\frac{dT}{dt}$ with $\frac{d(T - T_{\infty})}{dt}$?
|
What allows it is the assumption that $T_{\infty}$ is constant.
To explain what they did in detail: let's introduce a new function of $t$:
$$
F(t) = T_{\infty} - T(t).
$$
This gives $T' = -F'$ (the $'$ denotes differentiation w.r.t. $t$), and so Newton's equation can now be written in terms of $F$:
$$
-F' = T' = k (T_{\infty} - T) = k F.
$$
Consequently,
$$
-F' = k F.
$$
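Filling in the last step (a short completion, still assuming $T_\infty$ is constant): the linear equation $F'=-kF$ integrates to
$$F(t)=F(0)e^{-kt},\qquad\text{i.e.}\qquad T_\infty-T(t)=\big(T_\infty-T_0\big)e^{-kt},$$
which is the solution $T(t)-T_\infty=(T_0-T_\infty)e^{-kt}$ quoted in the question.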
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is a polynomial that has the roots: 3 and 5-i and also crosses the origin with integer coefficients? question
What is a polynomial that has the roots: 3 and 5-i and also crosses the origin with integer coefficients?
My thoughts
on my first instinct, i wrote this:
$(x-(5-i))(x+(5-i))(x-3)(x)$
But then i've realized that when you factor out the complex parts, you dont get integer coefficients so I was really confused if this is even possible
|
If you want integer coefficients (which in particular means real coefficients), you need the imaginary roots to come in conjugate pairs. To cross the origin you need a factor $x$. The simplest choice is then $$x(x-3)(x-5+i)(x-5-i)=x^4 - 13 x^3 + 56 x^2 - 78 x$$
You can multiply this by any polynomial with integer coefficients that you like. You are close but did not get the conjugate right.
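A quick symbolic check of the expansion (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
I = sp.I

# Expanding the product with the conjugate pair 5 +/- i gives integer coefficients.
p = sp.expand(x * (x - 3) * (x - 5 + I) * (x - 5 - I))
print(p)  # x**4 - 13*x**3 + 56*x**2 - 78*x
```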
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to prove: $2^\frac{3}{2}<\pi$ without writing the explicit values of $\sqrt{2}$ and $\pi$ How to prove: $2^\frac{3}{2}<\pi$ without writing the explicit values of $\sqrt{2}$ and $\pi$.
I am trying by calculus but don't know how to use here in this problem. Any idea?
|
It's pretty easy to see that $\pi > 3$, by inscribing a circle inside a regular hexagon. Then squaring both sides gives $\pi^2>9>8$. Taking square roots again gives $\pi > \sqrt{8}=2^\frac{3}{2}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
}
|
Does this simple proof-by-contradiction, also require contrapositive? Simple exercise 6.2 in Hammack's Book of Proof. "Use proof by contradiction to prove"
"Suppose $n$ is an integer. If $n^2$ is odd, then $n$ is odd"
So my approach was:
Suppose instead, IF $n^2$ is odd THEN $n$ is even
Alternatively, then you have the contrapositive, IF $n$ is not even ($n$ is odd), then $n^2$ is not odd ($n^2$ is even).
$n = 2k+1$ where $k$ is an integer. (definition of odd)
$n^2 = (2k+1)^2$
$n^2 = 4k^2 + 4k + 1$
$n^2 = 2(2k^2 + 2k) + 1$
$n^2 = 2q + 1$ where $q = 2k^2 + 2k$
therefore $n^2$ is odd by definition of odd.
Therefore we have a contradiction. Contradictory contrapositive proposition said $n^2$ is not odd, but the derivation says $n^2$ is odd. Therefore the contradictory contrapositive is false, therefore the original proposition is true.
Not sure if this was the efficient/correct way to prove this using Proof-By-Contradiction.
|
To prove
$$
n^2\text{ is odd}\implies n\text{ is odd}\tag{1}
$$
by contradiction, you need to prove that
$$
n^2\text{ is odd}\wedge n\text{ is even}\tag{2}
$$
is false. That is, you need to suppose that $n^2$ is odd and that $n$ is even and obtain a contradiction from those two statements.
This method of proof becomes clearer when the implication
$$
n^2\text{ is odd}\implies n\text{ is odd}
$$
is written in a logically equivalent way as
$$
\neg((n^2\text{ is odd})\wedge\neg(n\text{ is odd}))\tag{3}
$$
The proof by contradiction assumes the negation of the statement and obtains a known contradiction from it. In this case, you see that the negation of $(3)$ is $(2)$.
You propose to show
$$
n^2\text{ is odd}\implies n\text{ is even}\tag{4}
$$
is false in order to show $(1)$. That is incorrect.
For example, one could prove that
$$
x>0\implies\sin(x)\geq0
$$
is false and yet
$$
x>0\implies\sin(x)<0
$$
is also false.
In fact, what you did is show the converse of $(1)$. That is, you showed
$$
n\text{ is odd}\implies n^2\text{ is odd}
$$
In this case, in order to prove $(1)$, a proof of its contrapositive is the simplest way to go. Indeed, if $n=2k$ is even, then $n^2=(2k)^2=2(2k^2)$ is even. Here there is no real difference between the proof by contradiction and the proof by contrapositive: the hypothesis that $n^2$ is odd in $(2)$ doesn't need to be used.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
$\sum_{n=1}^{\infty}a_n$ converges, with $a_n >0$ for all $n$, then $\sum_{n=1}^{\infty} \frac{a_n}{n}$ converges I want to prove that if $\sum_{n=1}^{\infty}a_n$ converges, with $a_n >0$ for all $n$, then $\sum_{n=1}^{\infty} \frac{a_n}{n}$ converges.
My book gives a proof where shows that $S_k=\sum_{n=1}^{k} \frac{a_n}{n}$ is monotonically increasing and bounded, so converges by the completeness axiom.
However can I prove it by the comparison test?
I know that $\forall n\in\mathbb{N}$ $$0\leq \frac{a_n}{n} \leq a_n$$ and also, I know that $\sum_{n=1}^{\infty}a_n$ converges, so by the comparison tests, also $\sum_{n=1}^{\infty} \frac{a_n}{n}$ converges.
Is this acceptable?
|
Since $\sum a_n$ converges, then $a_n\to0$, thus for sufficiently large $n$ we have $0<a_n<1$, multiplying by $a_n$ we get $0<a_n^2<a_n$. Using the comparison test, $\sum a_n^2$ converges. Note that for every $n\in\mathbb{N}$ we have
$$\left(|a_n|-\frac{1}{n}\right)^2\geq0\Longrightarrow 2\frac{|a_n|}{n}\leq a_n^2+\frac{1}{n^2}.$$
Then we can use the comparison test to assert the convergence of $\sum\dfrac{|a_n|}{n}$. Once again, by the same test, we have that $\sum\dfrac{a_n}{n}$ converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How the Derivative of Distance is Different from Speed If the velocity vector is given by $\frac{d\overrightarrow{{r}}}{dt}$ and physically, the speed observed is just the magnitude of the velocity vector at any time $t$, then what is the physical interpretation of the derivative of the scalar distance with respect to time (i.e. the $\frac{d|\overrightarrow{{r}}|}{dt}$)? Expressing the displacement $\overrightarrow{r}$ in rectangular coordinates, I verified that $\frac{d|\overrightarrow{{r}}|}{dt}$ which is the derivative of the scalar distance is not identical to $|\frac{d\overrightarrow{{r}}}{dt}|$ which is the "speed" of the moving body. So, again, what is the physical or geometric interpretation of $\frac{d|\overrightarrow{{r}}|}{dt}$?
|
The difference between a position $\vec r$ and a distance $|\vec r|$ is that the distance is taken with respect to an implicit reference point, the origin of the coordinates. As this is an arbitrary point in space, it cannot have a physical meaning unless there is something special related to it (for instance if this is the center of mass of the system, or a point where a non-moving object is located). The velocity $\frac{\mathrm d\vec r}{\mathrm dt}$ is completely independent from the location of the origin while the derivative of the distance, $\frac{\mathrm d\,|\vec r|}{\mathrm dt}$ is not.
In polar coordinates, $\vec r=r\,\hat u_r(\theta)$, where $\hat u_r(\theta)$ is the unit radial vector and $\theta$ is the direction. Cartesian coordinates are given by
$x=r\cos\theta$ and $y=r\sin\theta$. We have $|\vec r|=r$. Derivating $\vec r$ with respect to time, we get
$$\frac{\mathrm d\,\vec r}{\mathrm dt}=\left(\frac{\mathrm dr}{\mathrm dt}\right)\hat u_r+r\left(\frac{\mathrm d\theta}{\mathrm dt}\right)\hat u_\theta.$$
This formula proves that @Paul's answer is right if he defines displacement as the vector from the origin to the object. It also shows that the speed $\left|\frac{\mathrm d\vec r}{\mathrm dt}\right|$ is never smaller than $\frac{\mathrm d\,|\vec r|}{\mathrm dt}$.
Consider an object moving at constant speed $s$ and chose an origin at a distance $b$ from its trajectory. We can call $t_0$ the time at which the object is at the shortest distance from the origin. The distance between the origin and the object is
$$ |\vec r(t)|=\sqrt{b^2+s^2(t-t_0)^2}$$
and its derivative is
$$ \frac{\mathrm d\,|\vec r|}{\mathrm dt}=\frac{s^2(t-t_0)}{\sqrt{b^2+s^2(t-t_0)^2}}.$$
We observe that this depends on $b$ and $t_0$, meaning that if we choose another origin we change the value of $\frac{\mathrm d\,|\vec r|}{\mathrm dt}$ while the speed remains equal to $s$.
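A small numerical illustration of the last point (a Python sketch; the values of $s$, $b$ and $t_0$ below are arbitrary):

```python
import numpy as np

# Straight-line motion r(t) = (s*(t - t0), b): speed s, offset b from the origin.
s, b, t0 = 2.0, 1.0, 0.5
t = np.linspace(0.0, 3.0, 7)

speed = np.full_like(t, s)                                     # |dr/dt|
d_dist = s**2 * (t - t0) / np.sqrt(b**2 + s**2 * (t - t0)**2)  # d|r|/dt

print(np.all(d_dist <= speed + 1e-12))  # True: d|r|/dt never exceeds the speed
```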
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Calculate the probability of winning
Suppose you are playing a game where you flip a coin to determine who plays first. You know that when you play first, you win the game 60% of the time and when you play the game second, you lose 52% of the time. A Find the probability that you win the game?
Let $A = \{ \text{Play first}\}$ and $\overline{A} = \text{Play second}$, let $B = \{ \text{win} \}$
We want $P(B)$.
I know that $P(B | \overline{A}) = 0.48$ and $P(B | A) = 0.6$
Actual problem:
I get that $P(B) = P(B | A)P(A) + P(B | \overline{A}) P(\overline{A}) = 0.6P(A) + 0.48P(\overline{A})$
I'm not sure how to move ahead. Can someone give me a hint?
|
You flip a coin to decide who plays first and the probability of getting a favourable outcome in the coin toss is $0.5$. Hope it helps.
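Spelling out the arithmetic the hint points to (assuming the coin is fair, so $P(A)=P(\overline A)=\tfrac12$):
$$P(B)=0.6\cdot\tfrac12+0.48\cdot\tfrac12=0.54.$$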
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Converting between explicit, implicit and parametric function
*
*Given an explicit function $y = \sin(x) + \cos(x)$, how to convert it to the respective parametric functions $x = f_1(t)$, $y = f_2(t)$?
*Given parametric functions $x = \sin(t)$ and $y = \cos(t)$, how to obtain the respective implicit function $f(x,y) = 0$?
*Given parametric functions $x = 1+2t$ and $y = 3+4t$, how to obtain the respective implicit function $f(x,y) = 0$?
|
*
*You can choose for example $x$ as parameter, which leads to :
$$\cases{x=t\cr y=\sin(t)+\cos(t)}$$
*It is well known that $\sin^2(t)+\cos^2(t)=1$ for all $t\in\mathbb{R}$. So this curve (I pretend to ignore which curve it is ...) is included in the one with implicit equation :
$$x^2+y^2=1$$
and it should be verified that the reverse inclusion is true (provided that $t$ can take any real value, or at least any value in some $[a,a+2\pi)$).
*You have to "eliminate" $t$ between those two equations. The first one gives you : $t=\frac{x-1}{2}$. Putting that in the second one leads to :
$$y=3+2(x-1)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Solve the equation: $\sin 3x=2\cos^3x$ Solve the equation :
$$\sin 3x=2\cos^3x$$
my try :
$\sin 3x=3\sin x-4\sin^3x$
$\cos^2x=1-\sin^2x$
so:
$$3\sin x-4\sin^3x=2((1-\sin^2x)(\cos x))$$
then ?
|
First of all, learn the formula for $\sin 3x$ and remember it, so that whenever you see such a question you can choose the right approach.
This may help you learn it:
https://youtu.be/He4JXYBwTj4
Now to the answer:
$$\sin 3x = 2\cos^3 x$$
$$3\sin x-4\sin^3 x=2\cos^3 x$$
(Should you expand everything in terms of cubic powers here? No — that only makes a mess, as in your attempt, and is a lot of work for a small result. Rather than expanding, divide both sides by $\cos^3 x$.)
$$3\tan x\sec^2 x-4\tan^3 x=2$$
Let $\tan x = t$. So,
$$3t(1+t^2)-4t^3=2$$
Then solve this cubic and get
$$\tan x = 1 \quad\text{or}\quad \tan x=-2.$$
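Spelling out the omitted algebra: $3t(1+t^2)-4t^3=2$ rearranges to $t^3-3t+2=0$, which factors as $(t-1)^2(t+2)=0$, giving the two values above.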
Hope it helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2111959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Combinatorics: throwing a dice three times to get an even number. Suppose you throw a six face dice three times, how many times will be the sum of the faces even?
I approached it this way:
You either get all three times even face, or twice odd and once even.
As there are only 3 faces that are even, you have $3^3$ possibilities.
Then, for the second situation, you have $3$ choices for the even face and twice $3$ choices for the odd ones. Thus giving again $3^3$ possibilities.
Overall, there will be $3^3 + 3^3$ possibilities, yet my textbook shows $3^3 + 3^4$ possibilities. What's wrong with my reasoning?
|
P(all even)$=\left(\dfrac{1}{2}\right)\left(\dfrac{1}{2}\right)\left(\dfrac{1}{2}\right)=\dfrac{1}{8}$
P(one even and two odd)$=\dbinom{3}{2}\left(\dfrac{1}{2}\right)\left(\dfrac{1}{2}\right)\left(\dfrac{1}{2}\right)=\dfrac{3}{8}$
Required probability $=\dfrac{1}{8}+\dfrac{3}{8}=\dfrac{1}{2}$
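A brute-force check of the textbook's count (a quick Python sketch):

```python
from itertools import product

# Count outcomes of three dice whose face sum is even.
count = sum(1 for roll in product(range(1, 7), repeat=3) if sum(roll) % 2 == 0)
print(count)        # 108
print(3**3 + 3**4)  # 108 = (all even) + (one even, two odd, times the 3 positions)
```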
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Explain why $(a−b)^2 = a^2 −b^2$ if and only if $b = 0$ or $b = a$. This is a question out of "Precalculus: A Prelude to Calculus" second edition by Sheldon Axler. on page 19 problem number 54.
The problem is Explain why $(a−b)^2 = a^2 −b^2 $ if and only if $b = 0$ or $b = a$.
So I started by expanding $(a−b)^2$ to $(a−b)^2 = (a-b)(a-b) = a^2 -2ab +b^2$. To Prove that $(a−b)^2 = a^2 −b^2 $ if b = 0 I substituted b with zero both in the expanded expression and the original simplified and I got $(a−b)^2 = (a-0)^2 = (a-0)(a-0) = a^2 - a(0)-a(0)+0^2 = a^2$ and the same with $a^2 -2ab +b^2$ which resulted in $a^2 - 2a(0) + 0^2 = 2a$ or if I do not substite the $b^2$ I end up with $a^2 + b^2$. That's what I got when I try to prove the expression true for $b=0$.
As for the part where $b=a$, $(a−b)^2 = (a-b)(a-b) = a^2-2ab+b^2$, if a and b are equal, let $a=b=x$ and I substite $a^2-2ab+b^2 = x^2-2(x)(x) + x^2 = x^2-2x^2+x^2 = 1-2+1=0$ I do not see where any of this can be reduced to $a^2-b^2$ unless that equals zero......I do see where it holds but I do not see how would a solution writting out look.After typing this it seems a lot clearer but I just can't see how to phrase a "solution".
P.S: This is my first time asking a question here so whatever I did wrong I am sorry in advance and appreciate the feedback.
|
It might just be easier to use that $a^2-b^2=(a-b)(a+b)$.
So if $a-b=0$ then $(a-b)^2=(a-b)(a+b)$, and if $a-b\neq 0$ then $(a-b)^2=(a-b)(a+b)$ if and only if $a-b=a+b$.
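Spelling this out as a worked step: in the second case, dividing by $a-b\neq 0$ turns $(a-b)^2=(a-b)(a+b)$ into $a-b=a+b$, i.e. $2b=0$, i.e. $b=0$; together with the first case $a-b=0$ (that is, $b=a$) this gives exactly the stated equivalence.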
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 0
}
|
Parabolas, What does "b" do? Given a quadratic,
$ax^2+bx+c$.
I know c is the intercept, and the sign of $a$ tells us wether it is a positive "u" shape, or negative, an upside down u. But what about b? Is my following observation correct;
If b>0 the min or max is to the left of the y axis
If b<0 the min or max is to the right of the y axis?
Are these statements true?
|
If $b=0$ the graph of $y=ax^2+bx+c$ is symmetric with respect to the $y$ axis, if $b \ne 0$ it is symmetric with respect to the stright line $x=\frac{-b}{2a}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
solving a strange Diophantine equation ${\sqrt{n}}^\sqrt{n} -11 =m!^2$
Does anyone know how to solve Diophantine equation: $${\sqrt{n}}^\sqrt{n}-11 =m!^2.$$
I tried to substitute $\sqrt{n}=k$ then equation becomes $$k^k-11=m!^2\\\implies k^k=m!^2+11=(m!-1)(m!+1)+12$$ which means suppose $m\geq2$ then $\gcd{(m!-1,m!+1)}=1$. Does this give any hint? I could think upto here only. Please help.
|
Let
$$k^k-11 = (m!)^2.$$
If $k>=11$ then $$121\not|\, LHS,\quad 121\,|\, RHS,\quad LHS\not= RHS.$$
On the other hand, $k$ is odd and $k^k > 11$, so
$$k\in\{3,5,7,9\}.$$
Note than:
$$\sqrt{3^3-11} = 4 \not= m!,$$
$$\sqrt{5^5-11} = \sqrt{3114}\in(55,56),\quad \sqrt{5^5-11}\not\in\mathbb N,$$
$$\sqrt{7^7-11} = \sqrt{823532}\in(907,908),\quad \sqrt{7^7-11}\not\in\mathbb N,$$
$$\sqrt{9^9-11} = \sqrt{387420478}= \sqrt{19683^2-11}\in(19682,19683),\quad \sqrt{9^9-11}\not\in\mathbb N.$$
So the Diophantine equation in question has no solutions.
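A small Python sketch confirming the four remaining cases directly:

```python
import math

# For the small odd k that remain, check that k^k - 11 is not the square of a factorial.
factorial_squares = {math.factorial(m) ** 2 for m in range(1, 20)}
for k in (3, 5, 7, 9):
    print(k, (k**k - 11) in factorial_squares)  # all False
```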
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\lim_{n\to\infty}\frac{6n^4+n^3+3}{2n^4-n+1}=3$ How to prove, using the definition of limit of a sequence, that:
$$\lim_{n\to\infty}\frac{6n^4+n^3+3}{2n^4-n+1}=3$$
Subtracting 3 and taking the absolute value of the function I have:
$$<\frac{n^3+3n}{2n^4-n}$$
But it's hard to get forward...
|
$$\left|\frac{6n^4+n^3+3}{2n^4-n+1}-3\right|<\epsilon\Leftrightarrow \left|\frac{n^3+3n}{2n^4-n+1}\right|<\epsilon.$$ Now, $4n^3\ge n^3+3n$ and $2n^4-n+1\ge n^4$ for all $n$ positive integer. So $$\left|\frac{n^3+3n}{2n^4-n+1}\right|\le \frac{4n^3}{n^4}=\frac{4}{n},$$ and choosing $n_0=\lfloor 4/\epsilon \rfloor+1$ we have $\dfrac{4}{n}<\epsilon$ if $n\ge n_0,$ as a consequence $$\left|\frac{6n^4+n^3+3}{2n^4-n+1}-3\right|<\epsilon\text{ if }n\ge \left\lfloor \frac{4}{\epsilon} \right\rfloor+1.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Two subsets of the plane?
Aren't planes described by $(x,y,z)$?
I know that the point (2,0) satisfies a, and the point (1.5,4) satisfies b. But how would I turn these into equations and geometric descriptions?
I'm stuck.
|
Hint: Using the distance formula, a point $(x,y)$ will satisfy a) exactly when
$$
\sqrt{(x-4)^2 + y^2} = 2\sqrt{(x-1)^2 + y^2} = 1
$$
and it will satisfy b) if
$$
|x+1| = \sqrt{(x-2)^2 + (y-4)^2}
$$
However, both of these equations can be simplified.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Calculating the $100th$ term of a series I came across a series
$1,5,13,25,41,61.......$
I have to calculate the $100^{th}$ term of this series. How do I do it?
Is there any formula to calculate the $n^{th}$ term of a series?
|
The difference of successive terms forms an Arithmetic progression: $4,8,\cdots $.
We can write the terms of our sequence as: $$0\times 4+1,1\times 4+1, 3\times 4+1, 6\times 4+1, 10\times 4+1, \cdots $$ $$=\frac {0\times 1}{2}\times 4+1, \frac {1\times 2}{2}\times 4+1, \frac {2\times 3}{2}\times 4+1, \frac {3\times 4}{2}\times 4+1, \cdots $$
We can see a pattern emerging. Thus the $n$th term of the sequence is $$\frac {n (n-1)}{2}\times 4+1 =4\binom {n}{2} +1$$ The hundredth term is thus $\boxed {4\binom {100}{2}+1=19801}$. Hope it helps.
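A one-line check of the closed form (a Python sketch):

```python
from math import comb

# Rebuild the sequence from 4*C(n,2) + 1 and compare with the given terms.
terms = [4 * comb(n, 2) + 1 for n in range(1, 7)]
print(terms)                 # [1, 5, 13, 25, 41, 61]
print(4 * comb(100, 2) + 1)  # 19801
```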
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2112893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Find coefficient of generating function f(x). Find coefficient of generating function.
$ f(x) = \frac{2x}{1-x^{2}} +x$
MY WAY OF SOLVING SIMILAR PROBLEM:
1) $ g(x) = \frac{2x}{1-x^{2}}$
2) partial fraction $g(x) = \frac{A}{1-x} + \frac{B}{1+x} $
3) $ g(x) = \sum\limits_{n=0}^\infty Ax^{n} + \sum\limits_{n=0}^\infty B (-1)^nx^{n} = \sum\limits_{n=0}^\infty (A+(-1)^nB)x^{n} $ -solution
But what can I do with $f(x)$? I can't use my method because:
$f(x) = \frac{2x+x(1-x^2)}{1-x^2} $
$\frac{-x^3 +3x}{1-x^2} = \frac{A}{1-x} + \frac{B}{1+x}$
$ -x^3+3x = A(1+x) + B(1-x) $
$-x^3 = 0 \cdot x^3 $
$ -1 =0 $
|
There is no reason to add up the right hand terms, since the term $x$ is simple and convenient. The other term can be expanded using the geometric series expansion
\begin{align*}
\frac{1}{1-y}=\sum_{n=0}^\infty y^n\qquad\qquad |y|<1
\end{align*}
with $y=x^2$.
We obtain
\begin{align*}
f(x)&=\frac{2x}{1-x^2}+x\\
&=2x\sum_{n=0}^\infty x^{2n}+x\\
&=3x+2\sum_{n=1}^\infty x^{2n+1}
\end{align*}
We conclude the coefficient $[x^n]$ of $f(x)$ is
\begin{align*}
[x^n]f(x)=
\begin{cases}
3&n=1\\
2&n>1,\ n\text{ odd}\\
0&n\geq 0,\ n\text{ even}
\end{cases}
\end{align*}
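A quick series check (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x / (1 - x**2) + x

# Coefficients of f(x) around 0: 3 at n=1, 2 at odd n>1, 0 at even n.
s = sp.series(f, x, 0, 10).removeO()
print([s.coeff(x, n) for n in range(10)])  # [0, 3, 0, 2, 0, 2, 0, 2, 0, 2]
```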
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the difference between variable, argument and parameter? I'm sure that these terms should be different since there exists a difference between parameter and argument in computer science but I'm not sure about their differences in math.
|
Variables : A variable is a quantity that may change within the context of a mathematical problem or experiment. Typically, we use a single letter to represent a variable. The letters $~x,~ y~$ and $~z~$ are common generic symbols used for variables. Sometimes, we will choose a letter that reminds us of the quantity it represents, such as $~t~$ for time, $~v~$ for voltage etc.
Parameters : A parameter is a quantity that influences the output or behavior of a mathematical object but is viewed as being held constant.
Arguments : The word argument is used in several differing contexts in mathematics. The most common usage refers to the argument of a function, but is also commonly used to refer to the complex argument or elliptic argument.
An argument of a function $~f(x_1,...,x_n)~$ is one of the $~n~$ parameters on which the function's value depends. For example, the $~\sin x~$ is a one-argument function, the binomial coefficient $~\binom{n}{m}~$ is a two-argument function, and the hypergeometric function $~_2F_1(a,b;c;z)~$ is a four-argument function.
Note: In general, mathematical functions may have a number of arguments. Arguments that are typically varied when plotting, performing mathematical operations, etc., are termed variables, while those that are not explicitly varied in situations of interest are termed parameters. In some contexts, one can imagine performing multiple experiments, where the variables are changing through each experiment, but the parameters are held fixed during each experiment and only change between experiments. One place parameters appear is within functions.
Examples :
Ex -$\bf(1)~:~$ A function might be a generic quadratic function such as $$~f(x)=ax^2+bx+c~.$$
Here, the variable $~x~$ is regarded as the input to the function. The symbols $~a,~ b ~$and $~c~$ are parameters that determine the behavior of the function $~f~$. For each value of the parameters, we get a different function.
Ex -$\bf(2)~:~$In the standard equation of an ellipse
$$\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1~,$$
$x~$ and $~y~$ are generally considered variables and $~a~$ and $~b~$ are considered parameters.
The decision on which arguments to consider variables and which to consider parameters may be historical or may be based on the application under consideration. However, the nature of a mathematical function may change depending on which choice is made.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Why μ of partial recursive functions requires values of least zero's predecessors be defined The definition of partial recursive function says that given a partial recursive function $G(x,y)$, a new partial recursive function can be generated as$$F(x)\simeq\mu y[G(x,y)=0]$$
$F(x)$ is equal to the least $y$ such that $G(x,y)=0$ and for all $y'$ less than $y$, $G(x,y')$ is defined and is not zero, otherwise $F(x)$ is undefined. I don't think the $\forall y'<yG(x,y')\neq\bot$ requirement is needed. Suppose $G$ is implemented by a register machine program $P$. In searching for the least zero, one can use dovetailing technique by executing the first instruction of $P(x,0)$, then the first two instructions of $P(x,0)$ and $P(x,1)$ and so on, until $0$ is reached. Am I right?
|
The problem is that when you find the first $y$ such that $P(x, y) = 0$ using dovetailing technique, it doesn't mean that there is no $z < y$ with $P(x, z) = 0$. There are two cases:
*
*such $z < y$ exists, but the computation for $P(x, z)$ just takes more time than for $P(x, y)$ (and the only way to "check" this is to wait for the convergence of $P(x, t)$ for all $t < y$);
*there is no such $z$, and $y$ is indeed the least such number.
There is the halting problem inside. Let's show that there is a partial recursive $G(x, y)$ such that $F(x) = \min(y)[G(x, y) = 0]$ defined as "the least $y$ such that $G(x, y) = 0$" is not recursive.
Consider the following function
$$
G(x, y) = \begin{cases}
0,& \text{if } y = 1 \text{ or } (y = 0 \text{ and } \varphi_x(x)\!\downarrow),\\
\uparrow,& \text{otherwise.}
\end{cases}
$$
Clearly,
$$F(x) = \begin{cases}
0,& \text{if } \varphi_x(x)\!\downarrow,\\
1,& \text{otherwise},
\end{cases}$$
which is the characteristic function of the halting set $K = \{x \mid \varphi_x(x)\!\downarrow\}$. Given that $K$ is not recursive, we have that $F(x)$ is not recursive too.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find two numbers whose $AM+...$ Find two numbers whose $AM + GM =25$ and $AM:GM=5:3$.
My Attempt;
Given,
$\frac {AM}{GM}=\frac {5}{3} = k (let) $
$AM=5k$, $GM=3k$.
Also,
$AM+GM=25$
$5k+3k=25$
$8k=25$
$k=\frac {25}{8}$.
Am I going right? Or, is there any other simple alternative.?
|
we have $$AM=\frac{5}{3}GM$$ from here we get with the first equation:
$$\frac{5}{3}GM+GM=25$$ thus we have $$\frac{8}{3}GM=25$$ and $$GM=\frac{75}{8}$$
from here we get $$ab=\left(\frac{75}{8}\right)^2$$ and $$a+b=\frac{375}{12}$$ you can solve one equation for one of the unknowns and plug it into the other one
ok with $$b=\frac{375}{12}-a$$ we get
$$a\left(\frac{375}{12}-a\right)=\left(\frac{75}{8}\right)^2$$
this is equivalent to $$0=a^2-\frac{375}{12}a+\left(\frac{75}{8}\right)^2$$
solving this we get $$a=\frac{225}{8}$$ or $$a=\frac{25}{8}$$
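As a check of the two numbers: with $a=\frac{225}{8}$ and $b=\frac{25}{8}$ we get $AM=\frac{a+b}{2}=\frac{125}{8}$ and $GM=\sqrt{ab}=\frac{75}{8}$, so $AM+GM=25$ and $AM:GM=5:3$ as required.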
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
$f(x)= \int e^x \left(\frac{x^4+2}{(1+x^2)^{5/2}}\right)dx$ I have been unable to solve this integral $$f(x)= \int e^x \left(\frac{x^4+2}{(1+x^2)^{5/2}}\right)dx$$
I tried to solve the expression by trying to make the expression come into the form of $$e^x(g(x)+g'(x))dx$$ but I have been unable to carry out any manipulation to make the expression come into this form. Kindly help me in solving this integration.
P.S. One of the tricks that my book used was to divide the expressions and then solving the integral. Kindly suggest some other method which deals completely with algebraic manipulation.
|
we have $$g(x)+g'(x)=\frac{x^4+2}{(1+x^2)^{5/2}}$$ solving this equation we get
$$g(x)=\frac{x^2+x+1}{(x^2+1)^{3/2}}$$
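A symbolic verification that this $g$ works (a sympy sketch), which also shows the antiderivative is $e^x g(x)+C$:

```python
import sympy as sp

x = sp.symbols('x')
g = (x**2 + x + 1) / (x**2 + 1)**sp.Rational(3, 2)
integrand = (x**4 + 2) / (1 + x**2)**sp.Rational(5, 2)

# g + g' should reduce to the integrand, so the antiderivative is e^x * g(x) + C.
print(sp.simplify(g + sp.diff(g, x) - integrand))  # 0
```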
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Pumping water out of a truncated cone using integration. So the issue I'm stuck with, is that I can do a cone, but I have no idea where to start with a cone that is truncated.
I have a truncated cone whose base has a radius of 3 meters and whose top has a radius of 4 meters. Its height is 4 meters, and I want to pump the water out through a pipe that extends 1 meter above the truncated cone. I need to calculate the work required to pump out all the water.
I did a cylinder earlier and a cone, but I just can't seem to figure out this truncated cone.
Any advice on where to start with a problem like this?
|
In this case, you need to realise that one can obtain a cone by simply revolving/rotating a linear curve about the x-axis (or y-axis). You can then use the following integral. In general, the volume given by rotating the function $f(x)$ over an interval $[a,b]$ is given by:
$V(x)=\pi \int_a^b f(x)^2 \ dx$.
Combined with having a look at the following picture, I am sure you can calculate the volume of your truncated cone.
(from: http://www.nabla.hr/DIASurfXFig.gif)
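For the work itself, here is a sketch of the standard slicing setup (assuming the frustum is completely full, with height $y$ measured up from the base, water density $\rho$ and gravitational acceleration $g$): the slice at height $y$ has radius $r(y)=3+\frac{y}{4}$ and must be lifted a distance $(4-y)+1=5-y$ to the top of the pipe, so
$$W=\int_0^4 \rho g\,\pi\left(3+\frac{y}{4}\right)^2(5-y)\,dy.$$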
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Does every compact subset of Euclidean space have a centroid? Let $A$ be a compact subset of $\mathbb{R}^n$. Consider the set of hyperplanes in $\mathbb{R}^n$ that separate $A$ into two pieces with equal measure (Lebesgue). Now take the intersection over all such planes. Call a point $x$ a centroid of $A$ if it lies in this intersection. It seems like this definition is equivalent to the usual definitions for centroid, but if not please let me know. My main question is: Is the resulting intersection always non-empty?
I'm mostly interested in the cases $n=2$ and $n=3$, but it seems like the answer will probably be the same for all $n > 1$.
For example, consider the sphere $S^2 \subset \mathbb{R}^3$. Any plane through the origin will cut $S^2$ into two sets of equal measure, and no other plane will, so there is a unique centroid.
EDIT: This post may be related, but I don't think it answers my question. https://mathoverflow.net/questions/248206/a-question-about-the-centroids-of-compact-subsets-of-euclidean-spaces
EDIT 2: I realized that I'm asking the wrong question, since the answer to the above is clearly no, as demonstrated in the comments. My revised question is: Given a compact set $A$, does there exists a point $x$ such that any hyperplane through $x$ separates $A$ into two sets of equal measure? In the example of a union of two disjoint disks given below, the centroid of the set would be such a point. Does such a point always exist?
|
Not every line through the centroid of a triangle splits the triangle in pieces with the same area:
and not every line splitting the triangle in halves goes through a common point: $GH$ is a "splitting segment" iff $CG\cdot CH=\frac{1}{2}CB\cdot CA$, but here
the red splitting segments are not concurrent since $\frac{1}{\sqrt{2}}\neq\frac{2}{3}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Martingale representation using Ito If we consider a process $$X_T=e^{\int_0^TtdW_t}$$it holds that it can be expressed by $$X_T=\mathbb{E}[X_T]+\int_0^Th(t)dW_t$$but how do we derive this $h(t)$?
I calculated $\mathbb{E}[X_T]=e^{t^3/6}$, and defined a new process $$Z_T=e^{-T^3/6}X_T \qquad Z_0=1$$with the hope that I could then use the martingale representation theorem, but $Z_T$ is not a martingale, since by Itô we get $$dZ_T=-\frac{1}{2}T^2e^{-T^3/6}X_TdT+Te^{-T^3/6}dW_T$$which contains a drift term. How could we solve this?
|
Your formula is wrong: you wrote $T\,dW_T$, which is $d\log(X_T)$, not $dX_T$.
We have $$dZ_T=-\frac{1}{2}T^2e^{-T^3/6}X_TdT+e^{-T^3/6}dX_T$$
Do this instead:
$$X_T=e^{Y_T}$$ with $$dY_T=TdW_T$$
Apply Ito's lemma on $X$
We have $$dX_T=e^{Y_T}dY_T+\frac{1}{2}e^{Y_T}d<Y_T,Y_T>=X_TdY_T+\frac{1}{2}X_Td<Y_T,Y_T>$$
Because
$$d<Y_T,Y_T>=T^2dT$$
We have
$$dX_T=X_TdY_T+\frac{1}{2}X_TT^2dT$$
Using the first equation ,
$$dZ_T=-\frac{1}{2}T^2e^{-T^3/6}X_TdT+e^{-T^3/6}(X_TdY_T+\frac{1}{2}X_TT^2dT)=e^{-T^3/6}X_TdY_T=TZ_TdW_T$$
which is what you want
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2113994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Commutator of a matrix to the power of k The question asks me to show that $$[A,B^k] = \Sigma_{r=1}^k B^{r-1}[A,B]B^{k-r}$$ (A, B are nxn matrices) but I can't get even close. I suspect there's some definition of $B^k$ that I don't know but is required. I've tried expanding the RHS, to get
$$\Sigma_{r=1}^k B^{r-1}(AB-BA)B^{k-r}$$
$$= \Sigma_{r=1}^k B^{r-1}ABB^{k-r} - B^{r-1}BAB^{k-r}$$
So starting from the LHS, what can I do to $B^k$? Substituting in the diagonalised matrix such that $B^k = P\Lambda^k P^{-1}$ didn't get me anywhere and I'm not sure where the sum comes into it. The only sums I've seen in matrix calculations come from exp(B), but I don't think that's related. Any help or hints are much appreciated!
|
You are close.
\begin{align}
\sum_{r=1}^k B^{r-1}(AB-BA)B^{k-r}&=\sum_{r=1}^k (B^{r-1}AB^{k-r+1} - B^{r}AB^{k-r})
\\
&=\sum_{r=1}^k B^{r-1}AB^{k-r+1}-\sum_{r=1}^k B^{r}AB^{k-r}
\\
&=\sum_{r=0}^{k-1} B^{r}AB^{k-r}-\sum_{r=1}^k B^{r}AB^{k-r}
\\
&=AB^{k}+\sum_{r=1}^{k-1} B^{r}AB^{k-r}-\sum_{r=1}^{k-1} B^{r}AB^{k-r}-B^kA
\\
&=AB^{k}-B^kA
\\
&=[A,B^k]
\end{align}
As @Andreas commented, this holds not only for matrices, but in a general ring.
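A quick random-matrix check of the identity (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
k = 5

def comm(X, Y):
    return X @ Y - Y @ X

# [A, B^k] versus sum_{r=1}^{k} B^(r-1) [A, B] B^(k-r)
lhs = comm(A, np.linalg.matrix_power(B, k))
rhs = sum(np.linalg.matrix_power(B, r - 1) @ comm(A, B) @ np.linalg.matrix_power(B, k - r)
          for r in range(1, k + 1))
print(np.allclose(lhs, rhs))  # True
```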
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Method to find solutions for $\{nz\}\in (x,x+\epsilon)$ for an irrational value $z$. (approximation with the fractional part) Suppose that we are given an irrational number $z$, a value $x\in[0,1)$ and soe $\epsilon> 0$. We want to find an integer $n$ such that $\{nz\}\in (x,x+\epsilon)$ . (Note that by $\{\alpha\}$ I mean the fractional part of $\alpha$ in this context).
Such an $n$ clearly must exist as $\{nz\}$ is dense in $[0,1]$. But I have not been able to find a good method to come up with an $n$. I thought about putting $z$ as a periodic fraction but I am not sure how to proceed.
Some code in c or a similar language would be greatly appreciated.
|
For a treatment without continued fractions, see Diophantine Approximations by Niven.
Well, I did an example with $x + \frac{\epsilon}{2}= 1/2$ and $z=\pi.$ I wanted to get
$$ n \pi - m - \frac{1}{2} = n \pi - \frac{2m+1}{2} $$ small, or
$$ \pi - \frac{2m+1}{2n} $$ very small.
The first convergent to $\pi$ with odd numerator and even denominator was
$333/106,$ and we do get
$$ 53 \pi \approx 166.50441 $$
which is pretty good.
Needs work
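Since the question also asks for code, here is a naive brute-force sketch in Python (a direct search, not the continued-fraction method from Niven; the function name and the search limit are just illustrative choices):

```python
import math

def find_n(z, x, eps, limit=10**7):
    """Smallest n >= 1 with frac(n*z) in the open interval (x, x + eps)."""
    for n in range(1, limit):
        frac = (n * z) % 1.0
        if x < frac < x + eps:
            return n
    return None

n = find_n(math.pi, 0.5, 1e-4)
print(n, (n * math.pi) % 1.0)
```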
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Normality of $\mathbb{C}[x, y]/(y^2-x^3+x)$ Let $R:=\mathbb{C}[x, y]/(y^2-x^3+x)$. I want to determine if $R$ is a normal ring.
The field of fractions of $R$ is $K=\mathbb{C}(x)[y]/(y^2-x^3+x)$. I think $R$ is normal, so I want to show that $R$ is integrally closed in $K$. I've noted that $R$ is integral over $\mathbb{C}[x]$, so $R$ is normal iff the integral closure of $\mathbb{C}[x]$ in $K$ is $R$, but this is not very useful. I've also tried to use Serre's criterion for normality, but this is not very useful too.
Any other ideas?
|
Let $f=y^2-x^3+x$. The conditions $0= \partial_x f =1-3x^2$ and $0=\partial_y f = 2y$ imply $y=0$ and $x^2 =\tfrac13$. However, $f$ does not vanish at either of these two points. Therefore, the curve defined by $f$ is nonsingular and in particular, it is normal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A vector calculus problem..
I am a beginner in vector calculus, and it would be great if someone could guide me through this problem. I don't know how to proceed; I am studying from online sources only, so I have no teacher. Thanks.
|
First parametrize your surface $S$. One way to parametrize is given by spherical coordinates:
$$\phi:[0,\frac{\pi}{2}]\times[0,\frac{\pi}{2}]\to\mathbb{R}^3,\quad \phi(s,t)=(\cos s\cos t, \cos s\sin t,\sin s).$$
Here the domain is $[0,\frac{\pi}{2}]\times[0,\frac{\pi}{2}]$ because we want all three coordinates of $\phi(s,t)$ to be non-negative, as $S$ lies in the first octant.
In this case we have $x=x(s,t)=\cos s\cos t.$ Then find the cross product of the partial derivatives of $\phi$:
$$\frac{\partial\phi}{\partial s}\times\frac{\partial\phi}{\partial t},$$
Then by definition of surface integral
$$\int x dS=\int_{[0,\frac{\pi}{2}]\times[0,\frac{\pi}{2}]}x(s,t)\left\|\frac{\partial\phi}{\partial s}\times\frac{\partial\phi}{\partial t}\right\|dsdt.$$
I left the detailed calculations for you to fill in.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Absolute value of a matrix Let $A=(a_{ij})$ be an infinite matrix. Consider $|A|=(A^*A)^{1/2}$ and $A'=(|a_{ij}|)$.
Is there any relation between $|A|$ and $A'$?
|
The notation $|A|$ for $(A^*A)^{\frac{1}{2}}$ is due to an analogy of the polar decomposition of the matrix $A=U|A|$ where $U$ is a partial isometry to
the polar decomposition of a complex number $z=e^{i\arg(z)}|z|$.
There is no obvious connection to $A^{'}$.
The notation should not be interpreted as an absolute value. One does not have in general
$|A+B|\leq |A|+|B|$ in the sense that the difference is a positive matrix.
If $A$ represents a Hilbert-Schmidt operator there is the following connection between $|A|$ and $A^{'}$:
$\sum_{1\leq i,j<\infty}|a_{ij}|^2=\sum_{j=1}^{\infty}\sigma_j^2(A)$
where $\sigma_j(A)$ are the singular numbers of $A$, i.e. the eigenvalues of $|A|$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$\tan {\frac{A}{2}} + \tan {\frac{B}{2}} +\tan{\frac{C}{2}} \geq 4 - \sqrt {3} $
In a triangle $ABC$ with one angle exceeding $\frac {2}{3} \pi$, prove that
$\tan {\frac{A}{2}} + \tan {\frac{B}{2}} + \tan{\frac{C}{2}} \geq 4 - \sqrt {3} $
I tried expanding that half angle, applying AM-GM on various sets, using Sine rule and Napier's Analogy, but without success.
Can anyone provide a hint ?
Also, how does the left hand side of the inequality behave when the condition of one angle exceeding $\frac {2} {3 }\pi$ is removed?
Thanks in advance :) .
|
Let $\gamma\geq\frac{2\pi}{3}$ and $\tan\frac{\gamma}{4}=x$.
Hence, $\frac{1}{\sqrt3}\leq x<1$ and since $\tan$ is a convex function on $\left[0,\frac{\pi}{2}\right)$, by Jensen we obtain:
$$\tan\frac{\alpha}{2}+\tan\frac{\beta}{2}\geq2\tan\frac{\alpha+\beta}{4}=2\tan\left(\frac{\pi}{4}-\frac{\gamma}{4}\right)=\frac{2(1-x)}{1+x}.$$
Thus, it remains to prove that
$$\frac{2(1-x)}{1+x}+\frac{2x}{1-x^2}\geq4-\sqrt3$$ or
$$(\sqrt3x-1)\left(x+\frac{1}{4+3\sqrt3}\right)\geq0,$$
which is obvious.
Done!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
The ratio of their $n$-th term. The sum of $n$ terms of two arithmetic series are in the ratio of $(7n+ 1) : (4n+ 27)$. We have to find the ratio of their $n$-th term.
I tried to find the ratio by using the formula of summation of A.P.
But it becomes too long due to many variables that is $a_1,a_2,d_1,d_2$
|
It is actually quite simple. Let $a_1$ and $a_1'$ denote the first terms of the first and second progressions with their common differences $d$ and $d'$ respectively. We thus get $$\frac{S_1}{S_2} = \frac {0.5n (2a_1 +(n-1)d)}{0.5n (2a_1' +(n-1)d')} = \frac {2a_1+(n-1)d}{2a_1' +(n-1)d'} = \frac {7n+1}{4n+27} $$
The ratio of the $n$th term of the two AP's can be thus calculated as $$\frac{a_n}{a_n'} = \frac {a_1 +(n-1)d}{a_1'+(n-1)d'} = \frac {2a_1 +((2n-1)-1)d}{2a_1' + ((2n-1)-1)d'} $$ $$=\frac {S_{2n-1}}{S_{2n-1}'} = \frac {14n-6}{8n+23} $$ Hope it helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2114910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
Twice differentiable function to infinity Let $f: \mathbb R \to \mathbb R$ be twice differentiable function, to which both $f'(x) > 0$ and $f''(x) > 0$ for all $x \in \mathbb R$. Show that $\lim_{x\to\infty}$ $f(x) = \infty$.
Tried using the definitions of differentiation but got nowhere.
|
Suppose that $f'(0)=a>0$. Since $f''(x)>0$, then $f'(x)>a$ for all $x>0$ (otherwise by MVT there is a point where $f''(x)\le 0$). Then $f(x)\ge ax+f(0)$ for all $x\ge 0$ - to argue this just consider $g(x)=f(x)-ax-f(0)$ and draw out a contradiction assuming $g<0$ for some $x$. Then since it's plainly true that $ax+f(0)\to \infty$ ($a>0$) you have the result
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Given 3 orthogonal vectors how to calculate the ellipsoid induced by them? Given 3 orthogonal vectors, how can I define their inscribing ellipsoid?
$$ax^2 + by^2 + cz^2= 1$$
or
$$(\mathbf{x-v})^\mathrm{T}\! A\, (\mathbf{x-v}) = 1$$
Meaning, given 3 principal orthogonal directions how to craft the ellipsoid that inscribes them in the two form above.
|
Let's say $v_1,v_2,v_3$ are your principal directions.
If you're using the equation $(x-v)^t A (x-v) = 1$ then $v$ is the center of the ellipsoid, and $A$ is a SPD-matrix that has $v_i$ as eigenvectors. The eigenvalues of $A$ are the inverse squared lengths of the principal directions. So if we want an ellipsoid with $0$ as center the equation simplifies to $x^t A x = 1$.
Let $w_i = \frac{1}{||v_i||} v_i$ be the normalized principal directions, and let $W = [w_1 | w_2 | w_3]$. Then $W^t W = I$ i.e. $W$ is orthogonal.
Surely now $AW = WD$ for some diagonal matrix $D = diag(d_1,d_2,d_3)$ since the columns of $W$ are eigenvectors of $A$, i.e. $A = WDW^t$. Now we just need to find the $d_i$.
We obviously want $1= v_i^t A v_i$ since $v_i$ are on the ellipsoid.
Let $e_i$ be the $i$-th unit vector i.e. $e_1 = (1,0,0)^t, e_2 = (0,1,0)^t, e_3 = (0,0,1)^t$.
So $1 = v_i^t Av_i = v_i^t WDW^t v_i = ||v_i ||e_i^t D e_i ||v_i|| = d_i \cdot ||v_i||^2$.
This holds since $v_i^t W = v_i^t [w_1|w_2|w_3] = v_i^t \left[\frac{1}{||v_1||} v_1 |\frac{1}{||v_2||} v_2 |\frac{1}{||v_3||} v_3 \right] = e_i \frac{v_i^t v_i}{||v_i||} = e_i \frac{||v_i||^2}{||v_i||} = e_i ||v_i||$.
So we have to choose $d_i = \frac{1}{||v_i||^2}$. With that choice of $W$ and $D$ the matrix $A$ satisfies $1=v^t A v$ for all points $v$ on the ellipsoid.
Only if the $v_i$ are multiples of $e_i$ can we write this equation as $ax^2+by^2+cz^2=1$; for general $v_i$ and $v=(x,y,z)^t$ the equation $v^t A v=1$ becomes $ax^2+by^2+cz^2+dxy+exz+fyz=1$.
This is easy to see: Let $v_i$ be multiples of $e_i$ then $W = I$ so $A = D$. Then let $v = (x,y,z)^t$ and we get $1= v^t A v = v^t D v = d_1 x^2 + d_2 y^2 + d_3 z^2$.
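A small numpy sketch of the construction (the three example principal directions below are arbitrary):

```python
import numpy as np

# Build an example orthogonal triple of principal directions with lengths 2, 3, 5.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
lengths = np.array([2.0, 3.0, 5.0])
vs = [lengths[i] * Q[:, i] for i in range(3)]

W = np.column_stack([v / np.linalg.norm(v) for v in vs])
D = np.diag([1.0 / np.linalg.norm(v)**2 for v in vs])
A = W @ D @ W.T

# Each principal vector should lie on the ellipsoid x^T A x = 1.
print([round(float(v @ A @ v), 12) for v in vs])  # [1.0, 1.0, 1.0]
```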
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Creating a $2\times n$ rectangle out of two block type The question:
I want to pave a $2\times n$ rectangle with blocks of two types, A and B, as illustrated
long edges are length $2$ and short edges are length $1$. I want to know in how many ways this can be done. Reflections of combinations DO count as separate combinations.
(a) Find a linear recursive equation for $X_n$, the number of pavings of a $2\times n$ rectangle.
I have no idea what to do except start by placing blocks on the left-side of the $2\times n$ rectangle, but I don't know what else I can do.
|
You have $X_n$ as the number of ways to pave a $2 \times n$ rectangle. At the right hand end you might have a type B piece, a vertical type A piece, or two horizontal type A pieces. If you take off the A piece(s) you are left with a tiled rectangle. If you take off a B piece you are left with a tiled rectangle with one extra square, so define $Y_n$ as the number of ways to tile a $2 \times n$ rectangle plus the top square of the next column, which is the same as the number of ways to tile the rectangle plus the bottom square of the next column. One of these rectangles plus square can either have a B piece attached to a rectangle, or a horizontal A piece sticking out. This should suggest a set of coupled recurrences for $X_n,Y_n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Are $\ell^p$ spaces a special case of $L^p$ spaces? My professor said that $\ell^p$ spaces are $L^p$ spaces with a discrete measure. But how can that be true if that the inclusion of the spaces is in different directions in the two cases? In the first case, $\ell^p \subset \ell^q$ if $p < q$, while $L^q \subset L^q$ if $p < q$!
|
The proof of the inclusion $L^q(\mu) \subset L^p(\mu)$ for $p<q$ you saw probably assumed that $\mu$ is a finite measure. This inclusion does not hold in general (consider $\mu$ the Lebesgue measure on $\mathbb{R}$).
What your professor was hinting towards was the fact that $\ell^p=L^p(\mu)$, with $\mu$ the measure on $\mathbb{N}$ (with $\sigma$-algebra the power set of $\mathbb{N}$, hence the term discrete measure) given by
$$\mu(A)=|A|,$$
where $|A|$ denotes the number of elements of $A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Truth of statements about numbers Given the following statements:
*
*$\forall\, x,y \in \Bbb Q \quad \exists\, z \in \Bbb Q $ $\;$ $ : \left(x<z<y\right) \vee \left(x>z>y\right)$.
*$\forall \, x \in \Bbb R \quad \exists\,y\in \Bbb R : y^2= x$
*$\forall \, x \in \Bbb R^+ \quad \exists\,y\in \Bbb R : y^2= x$
*$\forall \, x \in \Bbb Z : | x | > 0$
*$\forall\, x,y \in \Bbb Q : \left(x<y \rightarrow \exists\, z \in \Bbb Q: x<z<y \right)$
*$\forall \, x \in \Bbb N \quad \exists\, y \in \Bbb N : x>y$
*$\forall \, x \in \Bbb R \quad \exists\, y \in \Bbb R : y^2=|x|$
*$\forall \, x \in \Bbb Z \quad \exists\, y \in \Bbb Z: x \lt y \lt x+1 $
*$\exists\,x \in \Bbb N \quad \forall\, y \in \Bbb N : x\le y$
*$\forall \, x,y \in \Bbb N \quad \exists \, z \in \Bbb N: x+z=y$
*$\forall\,x\in\Bbb R\quad\exists\,y\in\Bbb R:x\lt y \lt x+1$
*$\forall\,a,b\in\Bbb Q\quad\exists\,x\in\Bbb Q:ax=b$
*$\forall\,x\in\Bbb Z\quad\exists\,y\in\Bbb Z : x\gt y$
*$\forall\,x,y\in\Bbb Z\quad\exists\,z\in\Bbb Z:x+z=y$
*$\forall\,a,b\in\Bbb Z\quad\exists\,x\in\Bbb Z:ax=b$
*$\forall\,m\in\Bbb Z\quad\exists\,q\in\Bbb Q:m\lt q\lt m+1$
list which are true.
Only $2,6$ and $8$ are false, correct?
|
1 isn't true as I've pointed out in the comments, as $x=y$ gives a counterexample;
2 of course isn't true;
3 is true;
4 is obviously wrong;
5 is the correct version of 1, it's true;
6 is obviously wrong;
7 is true;
8 is wrong;
9 is true;
10 is wrong;
11 is right;
12 is wrong, take $a=0, b \neq 0$;
13 is true;
14 is true;
15 is wrong, there are the counterexamples of 12, and even more;
16 is true;
So the list of false statements is more than OP thought, it's actually 1, 2, 4, 6,8,10,12,15
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
How many solutions does $1=x^π$ have? I was wondering how many solutions there are to $1 = x^\text{irrational number}$, since the cube root of 1 has 3 solutions and the 4th root has 4 etc and since the number of solutions to $x = x^{a/b}$ is b (where $a$ and $b$ share no factors), how many would $x^π=1$ have? Infinity, none or something else?
|
$$1^\pi=e^{\pi(\ln1+2k\pi i)}=e^{i2k\pi^2}\hspace{1cm}k\in\mathbb{Z}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
The map $f(z) = \frac{z-a}{1-\bar{a}z}$ preserves unit circle and open unit ball. Given $f(z) = \dfrac{z-a}{1-\bar{a}z}$, with $|a|<1$.
I showed that if $|z|=1$, then $|f(z)|=1$; if $|z|<1$, then $|f(z)|<1$.
However, I am stuck at showing that the map $f$ is "onto".
Is there any elementary way of showing that this map is onto?
I looked at similar questions here, but they are only showing that $f$ is "into".
Thank you very much!
|
Note that
$$f(z)=\frac{z-a}{1-\overline{a}z}$$
has inverse
$$g(z)=\frac{z+a}{1+\overline{a}z}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Plot phase plane for system of differential equations I'm in need of some help with matlab code. I'm working on a problem which gives the following system:
$$x'=x^2 - x - y$$
$$y'=x-y$$
We are asked to solve the system numerically starting with $(x(0), y(0))=(-0.3,-0.3)$ for $t \in [0,10]$. Additionally, we are asked to plot the solution in a phase plane and also as a function of time.
My initial reaction is to try and use the ode45 function, then plot the $x$ and $y$ components as functions of time. The thing that I'm really having trouble with is plotting the phase plane...
Any help/links/advice is greatly appreciated!
|
The phase portrait of a system of two first-order ODEs can be obtained in a similar manner as described in this post, e.g. using Matlab's quiver function. Otherwise, one can plot several trajectories $(x (t), y (t))$ obtained by numerical integration (here with ode45) and having different initial conditions.
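For illustration, here is an equivalent sketch in Python (scipy + matplotlib) of the same workflow — numerical integration of the trajectory plus a direction field; translating it to Matlab's ode45 and quiver is straightforward. The plotting window and step sizes below are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def rhs(t, u):
    x, y = u
    return [x**2 - x - y, x - y]

# Trajectory from (x(0), y(0)) = (-0.3, -0.3) on t in [0, 10].
sol = solve_ivp(rhs, (0, 10), [-0.3, -0.3], dense_output=True, max_step=0.01)
t = np.linspace(0, 10, 500)
x, y = sol.sol(t)

# Direction field for the phase plane.
X, Y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
U, V = X**2 - X - Y, X - Y

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.quiver(X, Y, U, V, angles='xy')
ax1.plot(x, y)                   # solution curve in the phase plane
ax2.plot(t, x, label='x(t)')
ax2.plot(t, y, label='y(t)')     # components as functions of time
ax2.legend()
plt.show()
```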
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $F(x) = \int_{1}^{\,x}{f(t)}\,dt$, where $f(t) = \int_{1}^{\,t^2}{\frac{\sqrt{9+u^4}}{u}}\,du$, find $F''(2).$ If $\displaystyle F(x) = \int_{1}^{\,x}{f(t)}\,dt$, where $\displaystyle f(t) = \int_{1}^{\,t^2}{\frac{\sqrt{9+u^4}}{u}}\,du$, find $F''(2).$
I used FTC to get
If $\displaystyle F(x) =\int_{1}^{\,x}{\frac{\sqrt{9+x^8}}{x}}\,dx $
Then I tried to use FTC again to find $F'(x)$ but then I got lost cause it's just the same thing over again. So then I decided that i'll just plug in 2 to the function and got 8.14 but I know this is incorrect. Any ideas?
|
The FTC says that $F'(x)=f(x)=\int_{1}^{x^2}{\frac{\sqrt{9+u^4}}{u}}\,du$. Now use the FTC again along with the chain rule. To do that note that $f(x)=g(h(x))$ where $g(x):=\int_{1}^{x}{\frac{\sqrt{9+u^4}}{u}}\,du$ and $h(x):=x^2$. Hence $F''(x)=f'(x)=g'(h(x))h'(x)=\frac{\sqrt{9+(x^2)^4}}{x^2}\cdot2x$. Evaluating at $2$ gives $\sqrt{265}$.
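Spelling out that last bit of arithmetic:
$$F''(2)=\frac{\sqrt{9+(2^{2})^{4}}}{2^{2}}\cdot 2\cdot 2=\frac{\sqrt{9+256}}{4}\cdot 4=\sqrt{265}\approx 16.28.$$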
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2115884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How can I solve this integral with complex number $$\int_{|z-1| = 1}\frac{1}{(1-z^2)}dz$$
I tried to do this by residue calculus
$$I=2\pi i\operatorname{Res}(f(z),1)$$
but I coudn't get the answer..
I would be grateful if you could give a clue.
Additional question)
$$\int_{|z| = 3}\frac{z}{(1-z^2)}dz$$
Is $$I=2\pi i\,\bigl(\operatorname{Res}(f,0)+\operatorname{Res}(f,1)\bigr)$$ and the answer is $0$, right?
|
HINT:
Note that
$$\frac{1}{1-z^2}=\frac{1/2}{1-z}+ \frac{1/2}{1+z}$$
The poles are at $z=\pm 1$, with residue $-1/2$ at $z=1$ and $+1/2$ at $z=-1$. Now, which poles, if any, are contained in $|z-1|<1$? Which are contained in $|z|<3$?
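Carrying the hint through (assuming the standard counterclockwise orientation): the contour $|z-1|=1$ encloses only the pole at $z=1$, since $z=-1$ lies at distance $2$ from the centre, so
$$\int_{|z-1|=1}\frac{dz}{1-z^{2}}=2\pi i\operatorname{Res}\!\left(\frac{1}{1-z^{2}},\,1\right)=2\pi i\cdot\left(-\tfrac12\right)=-\pi i.$$
For the additional question, $\dfrac{z}{1-z^{2}}$ has poles only at $z=\pm1$ (there is no pole at $z=0$), both inside $|z|=3$, and each residue equals $-\tfrac12$, so
$$\int_{|z|=3}\frac{z}{1-z^{2}}\,dz=2\pi i\left(-\tfrac12-\tfrac12\right)=-2\pi i,$$
not $0$.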
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proving a well-known inequality using S.O.S Using $AM-GM$ inequality, it is easy to show for $a,b,c>0$, $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3.$$
However, I can't seem to find an S.O.S form for $a,b,c$
$$f(a,b,c) = \frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}S_A(b-c)^2 \ge 0.$$
Update:
Please note that I'm looking for an S.O.S form for $a, b, c$, or a proof that there is no S.O.S form for $a, b, c$. Substituting other variables may help to solve the problem using the S.O.S method, but those are S.O.S forms for some other variables, not $a, b, c$.
|
Here is another SOS (Shortest)
$$ab^2+bc^2+ca^2-3abc=\frac{a(c-a)^{2}(b^{2}+ac+cb)+b(2ab+c^{2}-3ac)^{2}}{4ab+(c-a)^{2}}\geqslant 0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
}
|
Find $\sqrt{1.1}$ using Taylor series of the function $\sqrt{x+1}$ in $x^{}_0 = 1$ with error smaller than $10^{-4}$ I should find $\sqrt{1.1}$ using Taylor series of the function $\sqrt{x+1}$ in $x^{}_0=1$ with error smaller than $10^{-4}$.
The first derivatives are
$$f'(x)=\frac{1}{2\sqrt{x+1}}$$
$$f''(x)=\frac{-1}{4\sqrt{x+1}^ 3}$$
$$f'''(x)=\frac{3}{8\sqrt{x+1}^5}$$
Applying $x^{}_0$ we have:
$$f(1)=\sqrt{2}$$
$$f'(1)=\frac{1}{2\sqrt{2}}$$
$$f''(1)=\frac{-1}{4\sqrt{2}^ 3}$$
$$f'''(1)=\frac{3}{8\sqrt{2}^5}$$
And we can build the Taylor polynomial
$$T(x)=\sqrt2 + \frac{1}{2\sqrt{2}}(x+1)+\frac{-1}{2!·4\sqrt{2}^3}(x+1)^2+\frac{3}{3!·8\sqrt{2}^5}(x+1)^3+R(\xi)$$
Is everything right until here?
What I don't understand is how I can check that $R(\xi) < 10^{-4}$.
|
Taylor's theorem tells us that the remainder after the degree-$n$ term is $R_n=\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}$, for some $c$ between $a$ and $x$. Now you should be able to find an upper bound on that derivative over the interval, which gives you an upper bound on the error.
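For instance (an illustrative sketch, assuming the expansion is taken at $a=0$ so that $\sqrt{1.1}=f(0.1)$ — note this differs from the $x_0=1$ in the statement above): $f'''(x)=\frac38(x+1)^{-5/2}$ is largest on $[0,0.1]$ at $x=0$, so
$$|R_2|\le\frac{\max_{c\in[0,0.1]}|f'''(c)|}{3!}(0.1)^{3}=\frac{3/8}{6}\cdot 10^{-3}=6.25\cdot 10^{-5}<10^{-4},$$
so the quadratic $1+\frac{x}{2}-\frac{x^{2}}{8}$ already approximates $\sqrt{1.1}$ to the required accuracy at $x=0.1$.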
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Show that $\text{arg}(f(z))$ is a constant $\Rightarrow$ $f(z)$ is constant in $D$. The full question is as follows
Let $f(z)$ be an analytic function in a region $D$ and $f(z) \neq 0$ in $D$. Show that $\text{arg}(f(z))$ is a constant $\Rightarrow$ $f(z)$ is constant in $D$.
My approach would be to use Cauchy Riemann in terms of polar coordiantes.
In polar coordinates the Cauchy-Riemann equations become $$\dfrac{du}{dr}=\dfrac{1}{r}\dfrac{dv}{d\theta} ~~,~~ \dfrac{dv}{dr} = -\dfrac{1}{r}\dfrac{du}{d\theta}$$
The derivative in polar version at a point $z$ whose polar coordinates are $(r,\theta)$ is then $$f^{'}(z) = e^{-i\theta}(\dfrac{du}{dr}+i\dfrac{dv}{dr}) = \dfrac{1}{r}e^{-i\theta}(\dfrac{dv}{d\theta}-i\dfrac{du}{d\theta})$$
So how do I go on from here?
Since $\arg(f(z))$ is equivalent to the $\theta$ in question, can I just say that $v_\theta = u_\theta = 0$?
Any help would be appreciated.
|
I think your answer is essentially correct. Write
$$f'(z)=\dfrac{1}{r}e^{-i\theta}\left(\dfrac{dv}{d\theta}-i\dfrac{du}{d\theta}\right)=\dfrac{1}{r}e^{-i\theta}\left(-i\dfrac{df}{d\theta}\right)=\dfrac{-i}{r}e^{-i\theta}\dfrac{df}{d\theta}.$$
Since $\arg f$ is constant, one concludes that $f'(z)=0$ on $D$, and thus $f$ is constant on $D$.
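One way to make that last step explicit (a sketch in the same polar notation): since $f\neq 0$ on $D$, write $f=\rho e^{i\alpha}$ with $\rho=|f|>0$ and $\alpha=\arg f$ constant. Comparing the two expressions for the derivative,
$$f'(z)=e^{-i\theta}\frac{df}{dr}=\frac{-i}{r}e^{-i\theta}\frac{df}{d\theta}
\quad\Longrightarrow\quad
\rho_{r}\,e^{i\alpha}=\frac{-i}{r}\,\rho_{\theta}\,e^{i\alpha}
\quad\Longrightarrow\quad
\rho_{r}=\frac{-i}{r}\,\rho_{\theta}.$$
The left-hand side is real and the right-hand side purely imaginary, so $\rho_{r}=\rho_{\theta}=0$; hence $f'\equiv 0$ and $f$ is constant on the region $D$.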
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
If $ \sin B=3 \sin (2A+B)$, prove that $2\tan A+\tan (A+B)=0$
Given $\sin B=3\sin(2A+B)$, prove $ 2\tan A+\tan(A+B)=0$.
My book uses componendo and dividendo approach to do this which I feel is bit unintuitive. I tried to do this by using identity for $\sin(x+y)=\sin x\cos y+\cos x\sin y$ but could not reach to answer. How do I do this?
|
Knowing where to use componendo and dividendo is just a result of building one's intuition through practice.
You'll notice that, if I write $\theta = B$ and $\phi = 2A + B$, then
$$
\frac{\phi - \theta}{2} = A
$$
and
$$
\frac{\phi + \theta}{2} = A + B
$$
which are exactly the angles that you expect in the result.
If you now take the sines to the same side, in the form of a fraction, and then apply componendo and dividendo, this allows you to transform the sums of sines into products, with the arguments of the resulting sines and cosines being the angles you expect.
You can get around using componendo and dividendo, but that'll make the solution longer.
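A sketch of the computation described above (assuming the relevant denominators are nonzero): with $\theta=B$ and $\phi=2A+B$ the hypothesis reads $\sin\theta=3\sin\phi$, so by componendo and dividendo
$$\frac{\sin\theta+\sin\phi}{\sin\theta-\sin\phi}=\frac{3+1}{3-1}=2.$$
Using $\sin\theta+\sin\phi=2\sin\frac{\theta+\phi}{2}\cos\frac{\theta-\phi}{2}$ and $\sin\theta-\sin\phi=2\cos\frac{\theta+\phi}{2}\sin\frac{\theta-\phi}{2}$, with $\frac{\theta+\phi}{2}=A+B$ and $\frac{\theta-\phi}{2}=-A$,
$$2=\frac{\sin(A+B)\cos A}{-\cos(A+B)\sin A}=-\frac{\tan(A+B)}{\tan A},$$
hence $\tan(A+B)=-2\tan A$, i.e. $2\tan A+\tan(A+B)=0$.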
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Does there exist a measurable $A \subseteq [0,1]$ with $\lambda(A \cap [0,a]) = a/2$ $\forall a \in [0,1]$? Does there exist a Borel set $A \subseteq [0,1]$ such that, for any $a \in [0,1]$, the Lebesgue measure of the set $A \cap [0,a]$ is $a/2$?
Thanks
|
If so this would force $\lambda(A \cap [a,b]) = \dfrac{b-a}2$ for every interval $[a,b] \subset [0,1]$.
Whenever $\{I_k\}$ is a cover of $A$ by closed subintervals of $[0,1]$ this would imply $$\lambda (A) = \lambda \left( A \cap \cup I_k \right) \le \sum_k \lambda (A \cap I_k) \le \frac 12 \sum_k \ell(I_k)$$
and by taking the infimum over all such coverings, $\lambda(A) \le \dfrac 12 \lambda(A)$, which forces $\lambda(A) = 0$. But the hypothesis with $a=1$ would give $\lambda(A) = \lambda(A\cap[0,1]) = \tfrac12$, a contradiction.
tl;dr NO
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to evaluate $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$ Is there an easy way to evaluate the integral $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$?
I know that I can plug in the exponential function and use the linearity of the integral. However, this would lead to 16 summands, which I really don't want to calculate separately.
|
HINT: We have the following identities
$\cos(A+ B) = \cos A \cos B - \sin A \sin B$ and
$\cos(A-B) = \cos A \cos B + \sin A \sin B$
$2\cos A \cos B = \cos(A+B) + \cos (A-B)$
$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$
Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.
Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)]$.
Now you can finish with your usual integration formulas.
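Completing the calculation: $\int_0^\pi\cos(kx)\,dx=\frac{\sin k\pi}{k}=0$ for every nonzero integer $k$, so only the constant term contributes and
$$\int_0^\pi\cos x\cos 2x\cos 3x\cos 4x\,dx=\frac18\int_0^\pi 1\,dx=\frac{\pi}{8}.$$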
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
tom Dieck's universal definition of a tangent space In page 362 of tom Dieck's Algebraic Topology, the author gives a definition for the tangent space of a premanifold (locally ringed space locally isomorphic to open subset of Euclidean space) which I understand as follows:
Let $(X,\mathcal O _X)$ be a premanifold. A tangent space at $p$ consists of a pair $(\mathrm T_pX,\jmath)$ satisfying the following data.
*$\mathrm T_pX$ is a vector space.
*For each chart $\bf x$ of $X$ about $p$, $\jmath_\mathbf{x}:\mathrm T_pX\to \mathbb R ^n$ is a linear isomorphism.
*$\jmath_\mathbf{y}\circ \jmath_{\mathbf{x}}^{-1}=\mathrm d_{\mathbf x p}(\mathbf y\circ \mathbf x^{-1}):\mathbb R ^n \cong \mathbb R^n$.
These properties imply that if $(\mathrm T_pX,\jmath)$ and $(\mathrm T_p^\prime X,\jmath ^\prime)$ are two tangent spaces of $X$ at $p$ then the composite $\jmath_\mathbf{x}^{-1}\circ \jmath_\mathbf{x}^\prime:\mathrm T^\prime_pX\cong \mathrm T_pX$ does not depend on the chart $\mathbf x$. From this the author concludes the tangent space is unique up to unique isomorphism "by the universal property".
What are the categories involved and in what precise sense is the tangent space an initial/terminal object?
|
It seems that he's just proving that the category of tangent spaces at $p$ is a contractible groupoid: take the category of all tangent spaces at $p$, with morphisms the linear maps factoring through the $\jmath$ maps for some chart. Then you've shown there exists a unique morphism between any two tangent spaces, and all morphisms are isomorphisms; in particular, every object of this category is both initial and terminal, which is the precise sense in which the tangent space is unique up to unique isomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2116824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|