Is it possible to get the largest conceivable score in Hearts? I should explain the necessary rules: in the four-player game of Hearts, the object is to get as few points as possible. The points you receive are determined by the cards you pick up through a hand: hearts are worth $1$ point each, and the queen of spades is worth $13$ points. Players can thus receive any score between 0 and 25 on a given hand. However, if on a hand a player obtains all 13 hearts and the queen of spades, then that player receives $0$ points and all other players receive $26$ points, an act referred to as "shooting the moon".
The game ends when a player exceeds $100$ points, at which point the player with the fewest points is the winner. In this way, the best conceivable score is
$$
\begin{matrix}0&99&99&99\end{matrix} \to \begin{matrix}0&125&125&125\end{matrix}
$$
My question is, is this score possible? And, more generally, how can one determine whether any given score in Hearts is possible?
As best I can tell, we must determine whether $\left[\begin{matrix}99\\99\\99\end{matrix}\right]$ is a linear combination of all the possible final scores.
I would lean towards the idea that this is not possible; it is simple to produce even multiples of 13:
$$
\begin{matrix}0&0&13&13 \\ 0&13&26&13 \\ 0&26&26&26\end{matrix}
$$
But I cannot think of a good way to produce any given score.
|
After each hand, the sum of all the scores must be a multiple of 26.
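A quick way to see the force of this invariant is to check the proposed scores against it. A minimal Python sketch (the helper `reachable` encodes only this necessary condition, not a full reachability test):

```python
# Sketch: each hand distributes either 26 points (normal hand) or
# 3 * 26 = 78 points (shooting the moon), so after any number of hands
# the total of all four scores is a multiple of 26.
hand_totals = {26, 78}
assert all(t % 26 == 0 for t in hand_totals)

def reachable(scores):
    """Necessary condition only: the sum of all scores must be 0 mod 26."""
    return sum(scores) % 26 == 0

# Both the pre-moon position and the proposed "best conceivable" final
# position fail the test:
print(reachable((0, 99, 99, 99)))     # False (sum 297, 297 % 26 == 11)
print(reachable((0, 125, 125, 125)))  # False (sum 375, 375 % 26 == 11)
print(reachable((0, 26, 26, 26)))     # True  (e.g. one moon shot)
```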
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
The Largest Gaps in the History of Mathematics Edit: Based on the useful comments below, I edited the original post to seek other important historical gaps in mathematics.
Mathematics is full of historical gaps.
The first type of gap consists of statements that were proved long after mathematicians conjectured them to be true.
The second type concerns equivalent statements and equivalence theorems (if-and-only-if theorems). Detecting the existence of an "equivalence" relation between two mathematical statements is sometimes not easy. In some cases one direction of a particular equivalence proof (the "if" direction) is as immediate as a simple (or even trivial) observation, but it takes many more years for the greatest mathematicians of the world to prove the other side (the "only if" direction).
Here I'm looking for examples of these types of long historical gaps between statements and proofs, in order to find a possibly eye-opening insight into the kind of critical, mysterious points that make the gap between seeing the truth of a mathematical statement and seeing its proof so large.
Question 1: What are examples of conjectures which were proved long after they were stated?
Question 2: What are examples of equivalence theorems with a large historical gap between their "if direction" and "only if direction" proofs?
Remark: Please don't forget to add references, names and dates.
|
The Kepler conjecture:
No arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close packing arrangements.
The proof was overwhelmingly complex; only recently has a formal proof been completed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 2
}
|
Show that the function 1/t is not in L2 (0,1] Need some help getting started with this problem:
$$f(t) = \frac{1}{t}$$
Show that $f(t)$ is not in $L_2(0,1]$, but that it is in the weighted Hilbert space $L_{2,w}(0,1]$, where the inner product is given by $$\langle x,y\rangle = \int x(t)\overline{y(t)}\,w(t)\,dt$$ with $w(t)=t^2$.
I am thinking that I have to show that the function is continuous over the interval.
|
SOME HELP: $\int 1/t^2\,dt$ is divergent on $(0,1]$.
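Both halves of the hint can be seen numerically with a crude midpoint rule. This is a sketch, not a proof; the exact values are $1/\varepsilon - 1$ for the plain integral over $(\varepsilon,1]$ and $1-\varepsilon$ for the weighted one:

```python
def riemann(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f2  = lambda t: (1.0 / t) ** 2            # |f(t)|^2, plain L2 density
f2w = lambda t: (1.0 / t) ** 2 * t ** 2   # |f(t)|^2 w(t), with w(t) = t^2

# The plain integral blows up like 1/eps as the lower limit shrinks...
print([riemann(f2, eps, 1.0) for eps in (1e-1, 1e-2, 1e-3)])
# ...while the weighted integral stays near 1 (exactly 1 - eps).
print([riemann(f2w, eps, 1.0) for eps in (1e-1, 1e-2, 1e-3)])
```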
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Separable Subset of the Real Line Consider $\mathbb{R}$ with the usual topology. Let $A$ be an infinite subset of $\mathbb{R}$. Prove that $A$ is separable.
So, if $A$ is to be separable, it must contain a countably dense subset $H$. I need to show that this $H$ exists to prove that $A$ is separable. If I don't know more information about $A$, I am not sure how to do this.
|
Since $\mathbb{R}$ is second countable, it has a countable basis. Intersecting each basis element with $A$ gives a basis of $A$ itself, so $A$ is also second countable. The standard proof applies: for each basis element $B$ with $B\cap A$ nonempty, choose some $x_B\in B\cap A$, and let $D$ be the set of all such $x_B$. This $D$ is countable, dense, and a subset of $A$. To check that it is dense: any nonempty open set $U\subset A$ contains some basis element $B$, and hence the point $x_B$, so $U\cap D$ is nonempty.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
For a measure zero set $A$, the union $A\cup B$ has zero measure if and only if $B$ does Definition: A set $A$ has measure $0$ iff $\forall \epsilon > 0, \exists$ system of intervals $(I_\tau): A \subseteq \cup_\tau (I_\tau), 0 \leq \sum_\tau (\operatorname{length}(I_\tau)) < \epsilon$.
Using this definition, prove that the following is true: Let $A$ be a set with measure $0$. Then the set $B$ has measure $0$ iff the set $A \cup B$ has measure $0$.
My thoughts: It seems obvious and trivial, which is actually the difficulty of it for me.
The backward direction is easy, I think. If $A \cup B$ is contained in a system of intervals with total length less than $\epsilon$ (for some given $\epsilon$), then clearly $B$ is too (since $B \subseteq A \cup B$). But this is true for all $\epsilon > 0$, so we are done.
In the forward direction, I would attempt the following: for all $\eta > 0$, $A$ and $B$ can each be covered by systems of intervals of total length less than $\eta$, by assumption. If $A$ and $B$ are utterly/maximally disjoint, then $A \cup B$ can be covered by the union of their respective interval systems, which may also be disjoint (or not). If these systems are disjoint, then their union has total length less than $2 \eta$ (how do I prove it?). So, if we want the system of intervals covering $A \cup B$ (which clearly can be the union of the systems covering $A$ and $B$ separately and respectively) to have total length less than some given $\epsilon > 0$, we demand that $\eta < \frac{\epsilon}{2}$. But this works for all $\epsilon > 0$ because $0 < \eta < \frac{\epsilon}{2}$ was arbitrary too (each of $A$ and $B$ may be covered by satisfactory systems of intervals for any such $\eta$, by the definition of "measure $0$" and by assumption). Done?
|
Yes, this proof works. The notes I would have towards it are:
It should be fairly obvious that the total length of the union of two systems of intervals is exactly the sum of the total lengths of the two systems: after all, both arise from absolutely convergent sums, so summing the results is the same as summing both sequences together.
Moreover, you don't need to fuss about whether a system of intervals is disjoint or not; we're only concerned about the sum of the lengths of the intervals, so we're not actually speaking about what set the intervals cover, but directly about the set of intervals. So, for instance, we'd probably say that the system $\{(0,1),(0,2)\}$ has length $1+2=3$ despite the fact that the union of those intervals is $(0,2)$, with length only $2$. Since the measure is defined as the infimum of the length of all systems of intervals covering the set, we would never actually worry about clearly suboptimal cases like this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How can this English sentence be translated into a logical expression? (Translating "unless")
You cannot ride the roller coaster if you are under 4 feet tall unless you are older than 16 years old.
Let:
*$P$ stands for "you can ride the roller coaster"
*$Q$ stands for "you are under 4 feet tall"
*$R$ stands for "you are older than 16 years old"
Is this logical expression correctly translated?
$$P \rightarrow (Q \wedge R)$$
|
The suggestion of $P\to (Q \wedge R)$ would say that anyone who rides the roller coaster is under $4$ feet tall and older than $16$ years old, which is not what the sentence means. I would say the meaning of the given sentence is that you need to satisfy at least one of the height and age conditions.
I think the sentence means: in order to ride the roller coaster, you must be at least $4$ feet tall (that is, $\neg Q$), or you must be older than $16$ years old.
Symbolically (using your $P, Q, R$), this would be $P\to (\neg Q\vee R)$. In contrapositive form (which tells you what keeps you from riding the roller coaster): $(Q\wedge \neg R)\to \neg P$. (If you are under $4$ feet tall and not older than $16$, then you can't ride the roller coaster.)
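Translations like this are easy to machine-check with a truth table. A small Python sketch (plain booleans stand in for the propositions; note that $Q$ is "you are under 4 feet tall", so the height requirement to ride is $\neg Q$):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b.
    return (not a) or b

# "Requirement" form: P -> (not Q or R).
# "What keeps you off the ride" form: (Q and not R) -> not P.
# An implication and its contrapositive agree on every assignment:
for P, Q, R in product([False, True], repeat=3):
    assert implies(P, (not Q) or R) == implies(Q and (not R), not P)
print("the two forms agree on all 8 assignments")
```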
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Probability that a student guesses an answer (for multiple independent instances)? I read this question recently: Probability that a student knows the answer
Putting this in reverse, if the student guessed the answer then the probability would be:
\begin{eqnarray*}
A &=& \mbox{Student knows the correct answer} \\
C &=& \mbox{Student answered correctly.} \\
\end{eqnarray*}
GIVEN: Probability that the student knows the answer is 2/3
GIVEN: Probability that the student guesses an answer and is correct is 1/4
Finding $P(A^C \mid C)$ using Bayes' theorem:
\begin{eqnarray*}
P(A^C \mid C) &=& \dfrac{P(C \mid A^C )P(A^C )}{P(C \mid A)P(A) + P(C \mid A^c)P(A^c)} \\
&& \\
&=& \dfrac{\frac{1}{3} \times \frac{1}{4}}{1 \times \frac{2}{3} + \frac{1}{4} \times \frac{1}{3}} \\
&& \\
&=& \dfrac{1}{9}
\end{eqnarray*}
So my question is: if the student took a 2nd or $n$th test and it had the same question, how would you calculate the probability that the student guessed the answer on all the tests, assuming he got the answer right every time?
I'm assuming that the student doesn't know the results (or even remember the previous test) so he's basically just getting lucky doing the same guess and being correct each time he does the test question. So since each test taken is independent of each other, does that mean that everything can essentially be taken to the power of 2 for two tests or to the $nth$ power for $n$ number of tests taken?
\begin{eqnarray*}
D &=& \mbox{Student knows the correct answer for n questions} \\
E &=& \mbox{Student answered correctly for n questions} \\
\end{eqnarray*}
$$P(D)=P(A)^n ?$$
$$P(E)=P(C)^n ?$$
Then you could follow through that the probability that the student is guessing the answer, after we witnessed them being all correct, can be calculated as:
$$P(D^C \mid E)=P(A^C \mid C)^n=(\dfrac{1}{9})^n$$
Right?
EDIT: Ignore $(D)$ and $(E)$ for now (if that is helpful) and do in terms of $(A)$ and $(C)$ for two tests (or more). Here's some helpful visualization:
So the blue circle shows the probability path I'm trying to find out.
Rephrasing a little better: What is the probability that the student is guessing the answer for 2 tests (or $n$ tests) after witnessing the event C (that he answered correctly)?
|
Unfortunately, it's not that simple. You have to use Bayes' Theorem again. Firstly, since all answers are made independently of each other, given that the student is guessing, the probability of answering all $n$ questions correctly is $P(E \mid D^c) = \left(\frac{1}{4}\right)^n$. So,
\begin{eqnarray*}
P(D^c \mid E) &=& \dfrac{P(E \mid D^c)P(D^c)}{P(E \mid D^c)P(D^c) + P(E \mid D)P(D)} \\
&& \\
&=& \dfrac{\left(\frac{1}{4}\right)^n \times \frac{1}{3}}{\left(\frac{1}{4}\right)^n \times \frac{1}{3} + 1 \times \frac{2}{3}} \\
&& \\
&=& \dfrac{1}{1 + 2\times 4^n}.
\end{eqnarray*}
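The closed form can be double-checked with exact rational arithmetic. A short Python sketch (the function name is mine):

```python
from fractions import Fraction

def p_guessing_given_all_correct(n):
    # Prior: P(knows) = 2/3, P(guessing) = 1/3. Over n independent attempts,
    # P(all correct | guessing) = (1/4)^n and P(all correct | knows) = 1.
    p_know, p_guess = Fraction(2, 3), Fraction(1, 3)
    like_guess = Fraction(1, 4) ** n
    return (like_guess * p_guess) / (like_guess * p_guess + p_know)

print(p_guessing_given_all_correct(1))   # 1/9, the single-test answer above
print(p_guessing_given_all_correct(2))   # 1/33 = 1/(1 + 2*4^2)
assert all(p_guessing_given_all_correct(n) == Fraction(1, 1 + 2 * 4 ** n)
           for n in range(1, 8))
```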
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1003911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Can a collection of random processes be not random? A friend and I were having a debate about randomness and at one point, I said that it was possible to have a collection of random processes which were not random when "put together." He disagreed.
So, I put the question here more concretely and with more detail.
Suppose I have a large number of random processes. Is it possible for the collection of those processes to be non-random, and also, is it possible for a part of that collection to be non-random?
Thanks for the help
|
You are correct. The key is using dependence. For example, let $X$ be distributed as a continuous uniform random variable on the interval $[0,1].$ Let $Y=1-X.$
Then define $Z=X+Y.$ Now $Z$ is a constant, but composed of two random components.
There are more practical examples. Imagine a closed-loop system where components move among several states randomly. The sum of all components is fixed and non-random, but the number in each state is a random variable. You can also have one or more states that are not random, satisfying your last version.
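The first example can be simulated directly. A minimal Python sketch, using exact rationals on a discrete stand-in for the uniform distribution so the equality below is not a floating-point accident:

```python
from fractions import Fraction
import random

# X uniform on {0, 1/1000, ..., 1}: a discrete stand-in for U[0, 1].
random.seed(1)
for _ in range(5):
    x = Fraction(random.randrange(1001), 1000)
    y = 1 - x          # Y is itself "uniform", but totally dependent on X
    z = x + y          # the sum is deterministic
    assert z == 1
print("X + Y == 1 in every trial")
```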
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is it possible to write a sum as an integral to solve it? I was wondering, for example,
Can:
$$ \sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)}$$
Be written as an Integral? To solve it. I am NOT talking about a method for using tricks with integrals.
But actually writing an integral form. Like
$$\displaystyle \sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)} = \int_{a}^{b} g(x) \space dx$$
What are some general tricks for finding infinite sum series?
|
We can indeed write the sum as an integral, after a little research. Consider, as a worked example:
Find: $\psi(1/2)$
By definition:
$$\psi(z+1) = -\gamma + \sum_{n=1}^{\infty} \frac{z}{n(n+z)}$$
The required $z$ is $z = -\frac{1}{2}$
so let $z = -\frac{1}{2}$
$$\psi(1/2) = -\gamma + \sum_{n=1}^{\infty} \frac{-1}{2n(n - \frac{1}{2})}$$
Simplify this:
$$\psi(1/2) = -\gamma - \sum_{n=1}^{\infty} \frac{1}{n(2n - 1)}$$
The sum seems difficult, but really isn't. We can telescope, or use:
$$\frac{1}{1-x} = \sum_{n=1}^{\infty} x^{n-1}$$
Let $x \rightarrow x^2$
$$\frac{1}{1-x^2} = \sum_{n=1}^{\infty} x^{2n-2}$$
Integrate once:
$$\tanh^{-1}(x) = \sum_{n=1}^{\infty} \frac{x^{2n-1}}{2n-1}$$
Integrate again:
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(2n-1)(n)} = 2\int \tanh^{-1}(x) dx$$
From the tables, $\int \tanh^{-1}(x)\,dx = x\tanh^{-1}(x) + \frac{1}{2}\log(1-x^2)$, so
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(2n-1)(n)} = \log(1 - x^2) + 2x\tanh^{-1}(x)$$
Take the limit as $x \to 1$
$$\sum_{n=1}^{\infty} \frac{1}{(2n-1)(n)} = \log(4)$$
$$\psi(1/2) = -\gamma - \sum_{n=1}^{\infty} \frac{1}{(2n-1)(n)}$$
$$\psi(\frac{1}{2}) = -\gamma - \log(4)$$
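The key numerical fact, $\sum_{n\ge 1} \frac{1}{n(2n-1)} = \log 4$, is easy to sanity-check with partial sums; a quick Python sketch:

```python
import math

# Partial sums of  sum_{n >= 1} 1/(n(2n-1))  approach log 4 = 2 log 2;
# the tail behaves like 1/(2N), so 200000 terms gives several digits.
s = sum(1.0 / (n * (2 * n - 1)) for n in range(1, 200_001))
print(s, math.log(4.0))
```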
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57",
"answer_count": 7,
"answer_id": 5
}
|
Examples of mathematical discoveries which were kept as a secret There could be several personal, social, philosophical and even political reasons to keep a mathematical discovery as a secret.
For example, it is entirely conceivable that if some mathematician found a proof of $P=NP$, the government would not allow him to publish it like an ordinary theorem in a well-known public journal, because of the high importance and possible uses of such a proof in breaking security codes, which would give an undeniable upper hand to state intelligence services with respect to other countries. For social reasons, too, publishing such a proof publicly would not be suitable, because many hackers and companies might use it to access confidential information, which could cause total chaos in the community and the economy.
The example shows that in principle it is possible for there to be very significant, brilliant mathematical proofs by genius mathematicians that we are not even aware of. But in some cases these "secrets" unfold by accident, or simply because they lost their importance as time passed and the situation changed.
Question: What are examples of mathematical discoveries which were kept secret when they were discovered and then became known after a while, for whatever reason?
|
In the 1920s Alfred Tarski found proofs that if a system of logic had either {CpCqp, CpCqCCpCqrr} or {CpCqp, CpCqCCpCqrCsr} as theses, then it has a basis consisting of a single thesis. In other words, if both of {CpCqp, CpCqCCpCqrr} or both of {CpCqp, CpCqCCpCqrCsr} belong to some system S, then S has a single formula which can serve as the sole axiom for proving all other theses of the system. The proof never got published, but seemed known to several authors for a while. Then it got lost. A few years later, proofs of Tarski's results were found again.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "163",
"answer_count": 22,
"answer_id": 9
}
|
Solving equations with mod So, I'm trying to solve the following equation using regular algebra, and I don't think I'm doing it right: $3x+5 = 1\pmod {11}$
I know the result is $x = 6$, but when I do regular algebra like the following, I do not get 6:
$3x=1 - 5\pmod{11}$
$x = \dfrac{(1 \pmod{11} - 5)} 3$
So, I figured that since $1 \pmod{11} = 1$ the equation becomes
$x = \dfrac{-4} 3$
Which is not 6! I am totally lost here and would appreciate any help......
|
Division in $\mathbb{F}_{11}$ is not like division in $\mathbb{Q}$! If you want to find $3^{-1} \pmod{11}$, think: which element of $\mathbb{Z}_{11}$ multiplied by $3$ gives $1 \pmod{11}$? It's $4$. So $$3x \equiv -4 \pmod{11} \iff x \equiv -4\cdot 4 \pmod{11}$$
And $-16 \equiv 6 \pmod{11}$, because $11$ divides $-16 - 6 = -22$.
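For reference, modern Python can carry out this computation directly (three-argument `pow` computes modular inverses in Python 3.8+); a quick sketch:

```python
# pow(a, -1, m) computes the modular inverse of a mod m (Python 3.8+).
inv3 = pow(3, -1, 11)
print(inv3)                      # 4, since 3 * 4 = 12 ≡ 1 (mod 11)

# Solve 3x + 5 ≡ 1 (mod 11):  3x ≡ -4,  so  x ≡ -4 * inv3 (mod 11).
x = (-4 * inv3) % 11
print(x)                         # 6
assert (3 * x + 5) % 11 == 1
```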
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
prove that a space is not connected Let $S=\{(x,0)\} \cup\{(x,1/x):x>0\}$. Prove that $S$ is not a connected space (the topology on $S$ is the subspace topology)
My thoughts: Now in the first set $x$ is any real number, and I can't see that this set is open in $S$. I can't find a suitable intersection anyhow.
|
The set $\{(x,0) : x\in\mathbb R\}$ is open in $S$ because every point $(x,0)$ has an open neighborhood that does not intersect the graph of $y=1/x$. Just use $I\times\{0\}$ where $I$ is any open interval containing $x$.
Then do a similar thing with the set $\{(x,1/x): x>0\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Slight confusion on Hom functor for a group seen as a category Let $(G,\cdot)$ be a group and let $BG$ be the category of this group with one formal object $*$ and the elements of $G$ as morphisms.
Now take the covariant hom-functor $\text{Hom$(*,\_)$}:BG \to \mathbf{Set}$
$*\mapsto \text{Hom$(*,*)$} = [\text{the set of morphisms from $*$ to $*$ }] = \{g\in G\} = G$, seen as a set
Now to my question and confusion:
Let $g\in G$ then,
$$g \mapsto \text{Hom$(*,g)$} $$
This is
$$\text{Hom$(*,g)$}:\text{Hom$(*,*)$} \to \text{Hom$(*,*)$}$$
$$\_ \mapsto g\circ \_$$
If $\text{Hom$(*,*)$}$ is the group seen as a set, then what is $\text{Hom$(*,g)$}$? Is it an endomorphism, or even a bijection, of $G$, given by left multiplication by $g$? Can anyone please explain how this works?
|
In general, if $X$ is an object and $g : Y \to Z$ is a morphism, then $\hom(X,Y) \to \hom(X,Z)$ is defined by $h \mapsto g h$. This doesn't change if $X$ is the only object. Hence, your map is left multiplication with $g$ (this is a bijection, but not a homomorphism unless $g=1$). Hence, the functor $\hom(\star,-)$ is the left regular representation of $G$.
(Now it is a good exercise to deduce Cayley's Theorem from the Yoneda Lemma.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that: $\lim_{x\to 0}\frac{x}{\sin^2(x) + 1} = 0$ Prove
$$\displaystyle \lim_{x\to 0} \frac{x}{\sin^2(x) + 1} = 0$$
The proof:
Let $$|x| \le 1 \implies -1 \le x \le 1$$
$$\displaystyle \frac{|x|}{|\sin^2(x) + 1|} < \epsilon\text{ for }\displaystyle |x| < \delta$$
$$-1 \le x \le 1
\\\implies \sin(-1) \le \sin(x) \le \sin(1) \implies -\sin(1) \le \sin(x) \le \sin(1)
\\\implies \sin^2(1) \le \sin^2(x) \le \sin^2(1) \implies |\sin^2(x) + 1| = |\sin^2(1) + 1| \implies \displaystyle |\frac{1}{\sin^2(x) + 1}| = |\frac{1}{\sin^2(1) + 1}|$$
(1) $$|x| < \delta_1$$
(2) $$\displaystyle |\frac{1}{\sin^2(x) + 1}| = |\frac{1}{\sin^2(1) + 1}|$$
(3) $$\displaystyle \frac{|x|}{|\sin^2(x) + 1|} < \frac{\delta_1}{|\sin^2(1) + 1|}$$
(4) $$\displaystyle \frac{|\delta_1|}{|\sin^2(1) + 1|} = \epsilon \implies \delta_1 = (|\sin^2(1) + 1|)(\epsilon) $$
Finally, $\delta = \min(1, (|\sin^2(1) + 1|)(\epsilon))$. $\blacksquare$
Thoughts?
EDIT:
The original proof was indeed terrible, here's a new approach.
Let $|x| < 1 \implies -1 < x < 1$
$\sin^2(-1) + 1 < \sin^2(x) + 1 <\sin^2(1) + 2$
$\implies \displaystyle \frac{1}{\sin^2(-1) + 1} > \frac{1}{\sin^2(x) + 1} > \frac{1}{\sin^2(1) + 1}$
$\implies \displaystyle \frac{1}{\sin^2(-1) + 1} > \frac{1}{\sin^2(x) + 1} \implies \frac{1}{|\sin^2(-1) + 1|} > \frac{1}{|\sin^2(x) + 1|} \implies \frac{1}{|\sin^2(x) + 1|} < \frac{1} {|\sin^2(-1) + 1|} $
$(1) |x| < \delta_1$
$(2) \displaystyle \frac{1}{|\sin^2(x) + 1|} < \frac{1} {|\sin^2(-1) + 1|}$
$(3) \displaystyle \frac{|x|}{|\sin^2(x) + 1|} < \frac{\delta_1} {|\sin^2(-1) + 1|}$
Finally,
$\epsilon(\sin^2(-1) + 1) = \delta_1$
Therefore,
$\delta = \min(1,\epsilon \cdot (\sin^2(-1) + 1))$. $\blacksquare$
|
This is way too complicated, don't you think?
Why not just say that
$$
\left| \frac{x}{1+\sin^2 x}\right| \le |x| < \epsilon
$$
as soon as $|x|<\delta = \epsilon$?
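The single bound doing all the work here can be spot-checked; a tiny Python sketch:

```python
import math

# Since 1 + sin(x)^2 >= 1, dividing by it cannot increase magnitude:
# |x / (1 + sin(x)^2)| <= |x|, which is all the epsilon-delta argument needs.
for i in range(-1000, 1001):
    x = i / 100.0
    assert abs(x / (1.0 + math.sin(x) ** 2)) <= abs(x)
print("bound verified on 2001 sample points")
```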
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does an uncountable Golomb ruler exist? Does there exist an uncountable set $G\subset \mathbb{R}$ such that, for $a,b,c,d \in G$, if $a-b=c-d$ then $a=c$ and $b=d$?
|
I think so (assuming Choice). Let $G$ be a vector space basis of $\Bbb{R}$ over $\Bbb{Q}$. Then $G$ is, among other things, linearly independent over $\Bbb{Z}$ and uncountable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Eigenspace of finite abelian group Let $\rho: G\to {\rm GL}_n(\mathbb{C})$ be faithfull representation of finite abelian group $G$ and $V$ is the eigenspace of some $g\in G$.
Is it true that $V$ is also eigenspace for all $G$ (that is $\rho(g)v=\lambda_g v$ for all $v\in V$ and $g\in G$)?
|
No.
For a counterexample, take any faithful representation of a nontrivial group (the simplest is an action of $\Bbb Z_2$ on $\Bbb C^2$, say, by reflection through a line), and consider the identity element, which has the whole space as an eigenspace, while other elements can have a finer eigenspace decomposition.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Give a partition of ω I have a question which I'm deeply confused about. I was trying to do some problems my professor gave us so we could practice for the exam; one of them says:
Give a partition of $\omega$ into $\omega$ parts, each of them of cardinality $\omega$.
I know that $\omega=\{1,2,3,\dots,n,n+1,\dots\}$, but I thought that $|\omega|=\omega$.
Can somebody help me?
|
There are many ways to do it, since $|\omega\times\omega|=\omega$. Here’s one. Every $n\in\Bbb Z^+$ can be written uniquely in the form $n=2^km$, where $m$ is odd, and you can start by letting $S_k=\{2^km:m\in\omega\text{ is odd}\}$: $S_0$ is the set of odd positive integers, $S_1$ the set of even positive integers that are not divisible by $4$, and so on. That isn’t quite a partition of $\omega$, since it doesn’t cover $0$, so throw $0$ into one of the parts: let $P_0=\{0\}\cup S_0$, say, and $P_k=S_k$ for $k>0$, and $\{P_k:k\in\omega\}$ is then a partition of the kind that you want.
Another way is to use the pairing function, which is a bijection $\varphi:\omega\times\omega\to\omega$, and set $P_k=\varphi[\{k\}\times\omega]$ for each $k\in\omega$.
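The first construction is easy to code up; a short Python sketch (the helper name `part_index` is mine):

```python
def part_index(n):
    """Which part P_k the natural number n belongs to (n = 2^k * m, m odd);
    0 is placed in P_0 by convention, as in the answer."""
    if n == 0:
        return 0
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Check on an initial segment: the parts are disjoint and cover everything,
# and several distinct parts already appear (each is infinite in the limit).
N = 1000
parts = {}
for n in range(N):
    parts.setdefault(part_index(n), []).append(n)
print(sorted(parts)[:5])   # [0, 1, 2, 3, 4]
print(parts[1][:5])        # [2, 6, 10, 14, 18]
```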
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Proof of divergence/convergence of a series Consider the series :
$$\sum\limits_{n = 1}^\infty {\frac{{(n + 2)^3 n^\alpha }}{{\sqrt[3]{{n^2 + 4n + 7\,}}\sqrt {n + 1} }}} $$
where $\alpha \in \mathbb{R}$. I managed to determine that the series diverges when $\alpha \ge -\frac{13}{6}$, but what about the other cases? Can anyone help?
|
Hint : Divide denominator and numerator by $n^{(\frac{2}{3}+\frac{1}{2})}=n^\frac{7}{6}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1004905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Limit with arctan: $\lim_{x\rightarrow 0} \frac{x\sin 3x}{\arctan x^2}$ $$\lim_{x\rightarrow 0} \frac{x\sin 3x}{\arctan x^2}$$
NB! I haven't learnt about L'Hôpital's rule yet, so I'm still solving limits using common limits.
What I've done so far
$$\lim_{x\rightarrow0}\left[\frac{x}{\arctan x^2}\cdot 3x\cdot \frac{\sin 3x}{3x}\right] = 0\cdot1\cdot\lim_{x\rightarrow0}\left[\frac{x}{\arctan x^2}\right] = 0??$$
Obviously I'm wrong, but I thought I'd show what I tried.
|
You should put in
$$
\frac{x^2}{\arctan x^2}
$$
that has limit $1$:
$$
\lim_{x\to0}\frac{x\sin3x}{\arctan x^2}=
\lim_{x\to0}3\frac{\sin3x}{3x}\frac{x^2}{\arctan x^2}=\dots
$$
If you don't know the limit above, just substitute $t=\arctan x^2$, so $x^2=\tan t$ and the limit is
$$
\lim_{x\to0}\frac{x^2}{\arctan x^2}=\lim_{t\to0}\frac{\tan t}{t}
$$
that you should be able to manage.
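A quick numeric sanity check of the overall limit (the ratio should approach $3$):

```python
import math

# x*sin(3x)/arctan(x^2) -> 3 as x -> 0, matching 3 * (sin 3x)/(3x) * x^2/arctan(x^2).
for x in (1e-1, 1e-2, 1e-3):
    val = x * math.sin(3.0 * x) / math.atan(x * x)
    print(x, val)
```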
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Find $\int \ln(\tan(x))/(\sin(x) \cos(x))dx$ I was given this question in a review package, and it has me stumped:
I started off using the identity $\tan(x) = \sin(x) / \cos(x)$ and then used the fact that $\sin(x) \cos(x) = \frac{1}{2}\sin(2x)$ to try and simplify the denominator. I looked around for a basic $u$-substitution but couldn't find any. I broke $\ln(\sin(x)/\cos(x))$ into $\ln(\sin(x)) - \ln(\cos(x))$, thinking I could maybe split the integral in two and that would help, but to no avail. Using parts looks very messy and I'm pretty lost at this point; anyone know how to get me on the right track?
|
\begin{eqnarray}
\int\frac{\ln\tan x}{\sin x\cos x}\,dx&=&\int\frac{\ln\tan x}{\tan x}\sec^2x\,dx
=\int\frac{\ln\tan x}{\tan x}\,d\tan x\\
&=&\int\ln\tan x\,d\ln\tan x=\frac{1}{2}(\ln\tan x)^2+C
\end{eqnarray}
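The result can be checked by differentiating numerically; a quick Python sketch:

```python
import math

def integrand(x):
    return math.log(math.tan(x)) / (math.sin(x) * math.cos(x))

def antiderivative(x):
    return 0.5 * math.log(math.tan(x)) ** 2

# The central-difference derivative of the antiderivative should match
# the integrand on (0, pi/2).
h = 1e-6
for x in (0.3, 0.7, 1.2):
    approx = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert math.isclose(approx, integrand(x), rel_tol=1e-6)
print("d/dx [ (ln tan x)^2 / 2 ] matches ln(tan x)/(sin x cos x)")
```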
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
}
|
Solve $y'' - y' = yy'$ and find three Other Distinct Solutions Been stuck on this for a while. I need to solve the following differential equation by finding the constant solution y = c and three other distinct solutions.
$$y'' - y' = yy'$$
If someone could give me a complete step by step explanation, it would be greatly appreciated as I want to fully understand it.
|
I'll assume the independent variable is $x$, you can make the necessary changes if it's $t$ or anything else.
Let $z=y'$. Then
$$y''=\frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx}=z\frac{dz}{dy}$$
and substituting into the DE gives
$$z\frac{dz}{dy}-z=yz\ .$$
See if you can take it from here.
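To see where the hint leads (a spoiler sketch; the continuation and the constant $A$ are my own, not part of the answer): for $z\ne 0$, dividing by $z$ gives $\frac{dz}{dy} = 1+y$, so $y' = z = y + \frac{y^2}{2} + C$. For $C=\frac12$ this reads $y' = \frac{(y+1)^2}{2}$, which integrates to $y(x) = -1 + \frac{2}{A-x}$. A quick Python check that this family satisfies the original equation:

```python
import math

A = 3.0  # arbitrary constant in the candidate solution y = -1 + 2/(A - x)

def y(x):   return -1.0 + 2.0 / (A - x)
def yp(x):  return 2.0 / (A - x) ** 2      # y'
def ypp(x): return 4.0 / (A - x) ** 3      # y''

# Verify y'' - y' = y * y' at sample points (an exact identity, up to rounding).
for x in (0.0, 0.5, 1.0, 2.0):
    assert math.isclose(ypp(x) - yp(x), y(x) * yp(x), rel_tol=1e-9)
print("y = -1 + 2/(A - x) solves y'' - y' = y*y'")
```

Distinct choices of $A$, together with the constant solutions $y=c$ (for which both sides vanish), give as many distinct solutions as needed.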
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
What is the order of the alternating group $A_4$? When I write out all the elements of $S_4$, I count only 11 transpositions. But in my text, the order of $A_4$ is $12$. What am I missing?
$A_4=\{(12)(34),(13)(24),(14)(23),(123),(124),(132),(134),(142),(143),(234),(243)\}$
$|A_4|=11$
|
The order of $A_n$ is always half the order of $S_n$: consider the bijective map from the even permutations to the odd permutations given by $\varphi(\pi)=(12)\pi$. This is a bijection since its inverse is the map from the odd permutations to the even permutations, $\varphi^{-1}(\pi)=(12)\pi$. So $|A_4| = 24/2 = 12$; the element missing from your list is the identity permutation.
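The count can be confirmed by brute force; a short Python sketch that enumerates the even permutations of four symbols by inversion parity:

```python
from itertools import permutations

def parity(p):
    # Count inversions; a permutation is even iff its inversion count is even.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return inv % 2

evens = [p for p in permutations(range(4)) if parity(p) == 0]
print(len(evens))   # 12 — the identity (0, 1, 2, 3) is among them
```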
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Prove that $\lim_{x \to \infty} \frac{\log(1+e^x)}{x} = 1$ Show that
$$\lim_{x \to \infty} \frac{\log(1 + e^x)}{x} = 1$$
How do I prove this? Or how do we get this result? Here $\log$ is the natural logarithm.
|
Hint: Use that
$$\log(1 + e^x) = \log[e^x (1 + e^{-x})]$$
$$\cdots= \log e^x + \log(1 + e^{-x}) = x + \log(1 + e^{-x}).$$
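The identity makes the limit transparent, since $\log(1+e^{-x})\to 0$; a quick numeric check:

```python
import math

# log(1 + e^x) = x + log(1 + e^-x), and the second term vanishes as x grows,
# so log(1 + e^x)/x -> 1.
for x in (5.0, 10.0, 20.0):
    lhs = math.log(1.0 + math.exp(x))
    rhs = x + math.log(1.0 + math.exp(-x))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
    print(x, lhs / x)   # ratio tends to 1
```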
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
}
|
Determine "winner" in exponential contest If I have one light bulb that could be one of $2$ kinds, ($A$ and $B$ are the lifetimes of first and second type: $A\sim \exp(1)$ and $B\sim \exp(3)$), and each time a bulb dies, another bulb replaces it (with probability $0.5$ to be $A$ or $B$).
$X$ is the lifetime of the light bulb (not knowing which type it is).
The initial light bulb could be $A$ or $B$ with probability $0.5$.
Knowing that the bulb didn't die until time $t$, what is the probability that the light bulb type is the first type (with lifetime $A$)?
|
You have $\pi(A)=\pi(B)=0.5$. You want the posterior value $$\pi(A\mid\text{failure at }t)=\dfrac{\pi(A)f_A(t)}{\pi(A)f_A(t)+\pi(B)f_B(t)}.$$
The densities are $f_A=e^{-t}$ and $f_B=3e^{-3t}$ so $$\Pr(\text{type }A\mid\text{failure at }t) = \dfrac{0.5 \times e^{-t}}{0.5 \times e^{-t}+0.5 \times 3e^{-3t}}= \dfrac{1}{1+ 3e^{-2t}}.$$
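The two expressions for the posterior can be spot-checked numerically (a sketch, my addition): at $t=0$ the failure densities are in ratio $1:3$, so the posterior is $0.25$, and it increases towards $1$ as $t$ grows, since type-$A$ bulbs (mean lifetime $1$) outlive type-$B$ bulbs (mean lifetime $1/3$).

```python
import math

def posterior(t):
    # Bayes with exponential failure densities f_A = e^{-t}, f_B = 3 e^{-3t}
    num = 0.5 * math.exp(-t)
    den = 0.5 * math.exp(-t) + 0.5 * 3 * math.exp(-3 * t)
    return num / den

for t in (0.0, 0.5, 1.0, 3.0):
    simplified = 1 / (1 + 3 * math.exp(-2 * t))
    print(t, posterior(t), simplified)
```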
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Symmetric matrices with the eigenvalues comparable Let $A,B$ be $n\times n$ real symmetric matrices, with eigenvalues $\lambda_i$ and $\mu_i$ respectively, $i=1,\cdots,n$. Suppose that
$$\lambda_i\leq\mu_i,\forall\ i.$$
Show that there exists an orthogonal matrix $O$ such that
$$O^TBO-A$$
is non-negative definite.
I do want to show that for some orthogonal matrix $P$ such that $P^tAP$ commutes with $B$, but this idea could not be forwarded...
|
Let $P,Q$ be orthogonal with $P^tAP$ the diagonal matrix with diagonal $\lambda_1,\dots,\lambda_n$ and $Q^tBQ$ the diagonal matrix with diagonal $\mu_1,\dots,\mu_n$. Then $Q^tBQ-P^tAP$ is non-negative definite, so $PQ^tBQP^t-A$ is. Let $O=QP^t$.
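The construction can be checked numerically; here is a sketch with NumPy (my addition, with random spectra chosen so that $\mu_i\ge\lambda_i$ index by index): build $A$ and $B$ with prescribed spectra, form $O = QP^t$ as above, and verify that $O^tBO - A$ has no negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_orth(n):
    # orthogonal factor from a QR decomposition of a random matrix
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

n = 5
lam = np.sort(rng.normal(size=n))        # eigenvalues of A, ascending
mu = lam + rng.uniform(0, 2, size=n)     # mu_i >= lam_i

P, Q = rand_orth(n), rand_orth(n)
A = P @ np.diag(lam) @ P.T               # so P^t A P = diag(lam)
B = Q @ np.diag(mu) @ Q.T                # so Q^t B Q = diag(mu)

O = Q @ P.T
M = O.T @ B @ O - A                      # equals P diag(mu - lam) P^t
print(np.linalg.eigvalsh(M).min())
```

Indeed $O^tBO - A = P\,\mathrm{diag}(\mu-\lambda)\,P^t$, which is non-negative definite since $\mu_i-\lambda_i\ge 0$.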
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding a bound for $\sum_{n=k}^l \frac{z^n}{n}$ For $z\in\mathbb{C}$ such that $|z|=1$ but $z\neq1$ and $0<k<l$, I'm trying to prove that:
$$\left|\sum_{n=k}^l \frac{z^n}{n}\right| \leq \frac{4}{k|1-z|}$$
It's more of a game that slowly frustrates me... I've got
$$\left|\sum_{n=k}^l \frac{z^n}{n}\right|=\left|\frac{1}{l}\frac{1-z^{l+1}}{1-z}-\frac{1}{k}\frac{1-z^k}{1-z}-\sum_{n=k}^{l-1}\left(\frac{1-z^{n+1}}{1-z}\right)\left(\frac{1}{n+1}-\frac{1}{n}\right)\right|$$
Then I've tried to use the Triangle Inequality over and over again, but I never actually got to the point... Do you have any idea from this point?
|
Start from
$$|1-z|\cdot\left|\sum_{n=k}^l \frac{z^n}{n}\right|=\left|\sum_{n=k}^l\frac{z^n}n-\sum_{n=k+1}^{l+1}\frac{z^n}{n-1}\right|.$$
The term for $n=k$ and $n=l+1$ can be bounded by $1/k$. To conclude, notice that
$$\left|\sum_{n=k+1}^lz^n\left(\frac 1n-\frac 1{n-1}\right)\right|\leqslant \sum_{n=k+1}^l\left|z^n\left(\frac 1n-\frac 1{n-1}\right)\right|=\sum_{n=k+1}^l\left(\frac 1{n-1}-\frac 1n\right)=\frac 1{k}-\frac 1l\leqslant \frac 1k.$$
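A numerical spot-check of the resulting bound $\left|\sum_{n=k}^l z^n/n\right|\leq 4/(k|1-z|)$ over a few sample values (my addition; note the proof actually yields the sharper constant $3$ in place of $4$):

```python
import cmath

def lhs(z, k, l):
    return abs(sum(z**n / n for n in range(k, l + 1)))

for theta in (0.3, 1.0, 2.5):
    z = cmath.exp(1j * theta)          # |z| = 1, z != 1
    for k, l in ((1, 50), (5, 200), (20, 1000)):
        bound = 4 / (k * abs(1 - z))
        assert lhs(z, k, l) <= bound
print("bound holds on all samples")
```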
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
How can I measure a frustum inside a frustum? If I know the measurements of a frustum A, how can I find the measurements of frustum B if I only know B's bottom radius, slanted side angle and volume?
This problem arose after finding how deep the booze in my cocktail glass is, but when not filling the glass to the brim.
If it makes a difference, I am mostly interested in the height of each layered nested frustum inside the container frustum.
heightA = 80.00 mm
radiusBottomA = 40.00 mm
radiusTopA = 57.00 mm
volumeA = 597.24 mm³
slantA = 81.79 mm
bottomAngleA = 12.00 °
heightB = ??
radiusBottomB = 40.00 mm
radiusTopB = ??
volumeB = 500.00 mm³
slantB = ??
bottomAngleB = 12.00 °
The graphic is supposed to depict a 2D side-view of a frustum containing another frustum. They are supposed to be symmetrical and the proportions are not matching the indicated measurements because I have poor MS Paint skills.
|
So when you are finding the volume of a frustum, you are basically finding the volume of one cone and subtracting it from the volume of another cone. So, when doing this problem, just find the volume of the cone formed at a given height h and subtract it from the volume of the cone at the bottom with a radius of 40.
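Concretely (a sketch, my addition, using the question's numbers; the full-glass volume works out to about $597{,}270\ \text{mm}^3$, which suggests the stated $597.24$ was meant in cm³): the radius grows linearly with height, $r(h)=r_0+h\tan\alpha$, so the liquid volume is $V(h)=\frac{\pi h}{3}\bigl(r_0^2+r_0\,r(h)+r(h)^2\bigr)$, and the fill height for a given volume can be found by bisection, since $V$ is increasing in $h$.

```python
import math

r0 = 40.0                      # bottom radius, mm
alpha = math.radians(12.0)     # wall angle from the vertical
H = 80.0                       # glass height, mm

def volume(h):
    # frustum volume from the bottom up to liquid height h
    r1 = r0 + h * math.tan(alpha)
    return math.pi * h * (r0 * r0 + r0 * r1 + r1 * r1) / 3.0

target = 0.5 * volume(H)       # e.g. fill the glass to half its volume

lo, hi = 0.0, H
for _ in range(60):            # bisection on the monotone function volume(h)
    mid = 0.5 * (lo + hi)
    if volume(mid) < target:
        lo = mid
    else:
        hi = mid
print(lo, volume(lo), target)
```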
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How would chemical equations be balanced with matrices? For example, I have this equation:
$$\mathrm{KMnO_4 + HCl = KCl + MnCl_2 + H_2O + Cl_2}$$
Then I get this:
$$a \cdot \mathrm{KMnO_4} + b \cdot \mathrm{HCl} = c \cdot \mathrm{KCl} + d \cdot \mathrm{MnCl_2} + e \cdot \mathrm{H_2O} + f \cdot \mathrm{Cl_2}$$
$$
\begin{align}
\mathrm{K}&: &a &= c \\
\mathrm{Mn}&: &a &= d \\
\mathrm{O}&: &4a &= e \\
\mathrm{H}&: &b &= 2e \\
\mathrm{Cl}&: &b &= c + 2d + 2f
\end{align}
$$
$$
\begin{bmatrix}
a&b&c&d&e&|&f\\
1&0&-1&0&0&|&0\\
1&0&0&-1&0&|&0\\
4&0&0&0&-1&|&0\\
0&1&0&0&-2&|&0\\
0&1&-1&-2&0&|&2
\end{bmatrix}
$$
How would I get the values of $a, b, c, d, e,$ and $f$ from here?
Side note: I'm following this.
|
K: a = c; Mn: a = d; O: 4a = e; H: b = 2e; Cl: b = c + 2d + 2f
How would I get the values of a, b, c, d, e, and f from here?
Well... Reading the equations in the order they were given and using a as a parameter, one gets successively c = a, d = a, e = 4a, b = 2e = 8a, and 2f = b - c - 2d = 8a - a - 2a = 5a.
This is solved by a = c = d = 2, e = 8, b = 16 and f = 5, thus, the balanced equation is
$$2\,\mathrm{KMnO_4} + 16\,\mathrm{HCl} \to 2\,\mathrm{KCl} + 2\,\mathrm{MnCl_2} + 8\,\mathrm{H_2O} + 5\,\mathrm{Cl_2}$$
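The back-substitution above is easy to mechanise; here is a sketch in Python (my addition) using exact rational arithmetic, scaling the parametric solution to the smallest integer coefficients:

```python
from fractions import Fraction
from math import lcm

a = Fraction(1)                  # free parameter
c, d = a, a                      # K and Mn balance
e = 4 * a                        # O balance
b = 2 * e                        # H balance
f = (b - c - 2 * d) / 2          # Cl balance

# clear denominators to get the smallest integer solution
scale = lcm(*(x.denominator for x in (a, b, c, d, e, f)))
coeffs = [int(x * scale) for x in (a, b, c, d, e, f)]
print(coeffs)  # [2, 16, 2, 2, 8, 5]
```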
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1005938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Smoothness of Fourier transform of a measure Is the Fourier transform of a finite Borel measure on $\mathbb{R}$ necessarily a smooth function?( $\widehat{\mu}(x)=\int_\mathbb{R}e^{-i\pi xy} d\mu(y)$)
|
No. It's continuous, but in general not smooth.
If we take for $\mu$ the measure given by the density
$$f(x) = \frac{1}{1+x^2}$$
with respect to the Lebesgue measure, we find that
$$\hat{\mu}(y) = \pi e^{-\lvert y\rvert},$$
which is not differentiable at $0$.
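This can be checked numerically (my addition; I use the convention $\hat\mu(y)=\int e^{-ixy}\,d\mu(x)$, for which the stated transform $\pi e^{-\lvert y\rvert}$ holds — the question's $e^{-i\pi xy}$ convention would only rescale $y$). The imaginary part of the integrand is odd, so only the cosine part survives:

```python
import math

def mu_hat(y, half_width=500.0, steps=100_000):
    # crude Riemann sum for the integral of cos(x y) / (1 + x^2) over [-W, W]
    h = 2 * half_width / steps
    return h * sum(
        math.cos((-half_width + k * h) * y) / (1 + (-half_width + k * h) ** 2)
        for k in range(steps + 1)
    )

for y in (0.5, 1.0, 2.0):
    print(y, mu_hat(y), math.pi * math.exp(-abs(y)))
```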
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can basis vectors have fractions? So I was diagonalizing a matrix in a book, and one of the basis vectors was [3/2, 1], after doing the problem, the answer in the book was different than mine. It came with an explanation, and in it the basis vector was [3,2]. They are the same thing, just multiples of each other, so I was curious, is it mandatory to take fractions out of a basis vector?
|
You are right and they are also right. This is because multiplication by non-zero constants does not affect the span of the basis nor the linear independence of the vectors in that basis. You should be able to see this quite obviously once you think about it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Ball and urn problem Let's say there are three types of balls, labeled $A$, $B$, and $C$, in an urn filled with infinite balls. The probability of drawing $A$ is $0.1586$, the probability of drawing $B$ is $0.81859$, and the probability of drawing $C$ is $0.02275$.
If you draw $6$ balls, what's the probability that you drew $3 A$s, $2 B$s, and $1 C$?
I know what the answer is but I don't know how to set up the problem appropriately. It's not just a simple counting problem because the balls have weighted probabilities... Any push in the right direction is appreciated!
|
Any sequence of the sort $ABACAB$ has the same probability to occur. So it comes to finding how many distinct words there are having $3$ times an $A$, $2$ times a $B$ and $1$ time a $C$.
You could start with $6$ open spots and then placing the $A$'s. That gives $\binom{6}{3}$ possibilities.
Then place the $B$'s. That gives $\binom{3}{2}$ and finally place $C$ on the single spot that is left.
There are $\binom{6}{3}\times\binom{3}{2}=60$ possibilities so the corresponding probability is $60\times p_A^3p_B^2p_C$
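The count and the resulting probability can be cross-checked by brute force over all $3^6$ ordered draw sequences (a sketch, my addition):

```python
from itertools import product
from math import comb

p = {"A": 0.1586, "B": 0.81859, "C": 0.02275}

ways = comb(6, 3) * comb(3, 2)          # place the A's, then the B's
prob = ways * p["A"] ** 3 * p["B"] ** 2 * p["C"]

# brute-force cross-check: sum the probability of every qualifying sequence
brute = 0.0
for seq in product("ABC", repeat=6):
    if seq.count("A") == 3 and seq.count("B") == 2:
        term = 1.0
        for ball in seq:
            term *= p[ball]
        brute += term

print(ways, prob)
```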
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Independence and uncorrelatedness between two normal random vectors. If $X$ and $Y$ are normal random vectors in $\mathbb R^n$ and in $\mathbb R^m$, and they are jointly normally distributed i.e. $(X,Y)$ is normally distributed in $\mathbb R^{n+m}$, then are the following equivalent
*
*$\operatorname{Cov}(X,Y)=0$;
*$X$ and $Y$ are independent.
Note that it is true when $n=m=1$. Thanks.
|
$\newcommand{\E}{\operatorname{E}}\newcommand{\var}{\operatorname{var}}\newcommand{\cov}{\operatorname{cov}}$
In comments you say you have a theorem that if two multivariate normal distributions have the same mean and the same variance, then they are the same distribution.
You have
$$
\E\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} \E X \\ \E Y \end{bmatrix}
$$
and
\begin{align}
\var \begin{bmatrix} X \\ Y \end{bmatrix} & = \E\left( \left(\begin{bmatrix} X \\ Y \end{bmatrix} -\E\begin{bmatrix} X \\ Y \end{bmatrix} \right) \left(\begin{bmatrix} X \\ Y \end{bmatrix} -\E\begin{bmatrix} X \\ Y \end{bmatrix} \right)^T \right) \\[10pt]
& = \begin{bmatrix} \var X & \cov(X,Y) \\ \cov(X,Y)^T & \var(Y) \end{bmatrix}.
\end{align}
The two off-diagonal matrices are $0$, by hypothesis.
Now consider another normal distribution of $(n+m)\times 1$ column vectors: The first $n$ components are distributed exactly as $X$ is distributed and the last $m$ components as $Y$, and they are independent. That multivariate normal distribution has the same mean (in $\mathbb R^{n+m}$) and the same variance (in $\mathbb R^{(n+m)\times(n+m)}$) as the distribution of $X$ and $Y$. Now apply the theorem mentioned in the first paragraph above.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability of passing this multiple choice exam
*
*A multiple choice exam has 175 questions.
*Each question has 4 possible answers.
*Only 1 answer out of the 4 possible answers is correct.
*The pass rate for the exam is 70% (123 questions must be answered correctly).
*We know for a fact that 100 questions were answered correctly.
Questions: What is the probability of passing the exam, if one were to guess on the remaining 75 questions? That is, pick at random one of the 4 answers for each of the 75 questions.
|
Hint : Use the formula for binomial distributed random variables.
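Following the hint (a sketch, my addition): with 100 questions already correct, passing requires at least $123-100=23$ correct guesses out of 75, where each guess succeeds with probability $1/4$. The exact binomial tail is:

```python
from math import comb

n, p, need = 75, 0.25, 23   # need 123 - 100 = 23 correct guesses
prob = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k)
    for k in range(need, n + 1)
)
print(prob)
```

The mean number of correct guesses is $75/4 = 18.75$ with standard deviation $3.75$, so $23$ sits a bit over one standard deviation above the mean.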
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Help with finding the $\lim_{x\to 0} \tan x \csc (2x)$ So I am trying to figure out the limit
$$\lim_{x\to 0} \tan x \csc (2x)$$
I am not sure what action needs to be done to solve this and would appreciate any help to solving this.
|
Note that $\csc (2x) = \frac{1}{\sin(2x)} = \frac{1}{2 \sin x \cos x}$, and $\tan x = \frac{\sin x}{\cos x}$.
So $$\lim_{x \to 0} \tan x \csc (2x) = \lim_{x \to 0} \frac{1}{2 \sin x \cos x} \frac{\sin x}{\cos x} = \lim_{x \to 0} \frac{1}{2 \cos^2 x} = \frac{1}{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Fundamental theorem of finitely generated abelian groups. If $G :=\langle x,y,z \ | \ 2x+3y+5z = 0\rangle$ then find what group $G$ is isomorphic to.
I think I'm supposed to use the fundamental theorem of finitely generated abelian groups, but I don't know if I am using it correctly below.
Since $2x = 0$ we know that $H_x<G$ is a subgroup isomorphic to $\mathbb{Z}_2$ and the same can be said where $H_y\cong \mathbb{Z}_3$ and $H_z\cong\mathbb{Z}_5$. Does this imply that $$G\cong \mathbb{Z}_2\oplus\mathbb{Z}_3\oplus\mathbb{Z}_5$$
Is the logic here correct? Or is there something else I should be doing.
More generally is $\langle x,y,z \ | \ lx+my+nz = 0\rangle \cong \mathbb{Z}_l\oplus\mathbb{Z}_m\oplus\mathbb{Z}_n$ if $l,m,n$ are coprime.
|
This is not a rigorous answer, but it provides some intuition.
Per my comment, we can get $(1,1,-1)$ as a generator of a subgroup isomorphic to $\mathbb{Z}$.
We can also get $(4,-1,-1)$ as another generator of a subgroup isomorphic to $\mathbb{Z}$, and it should be clear that these two generators are independent (i.e. they cannot generate the other).
We can also get $(1,-4,2)$ as another generator of a subgroup isomorphic to $\mathbb{Z}$, except we now have $(4,-1,-1)-3\times(1,1,-1)=(1,-4,2)$. So this third generator can be generated by the prior two.
This is nothing more than an application of the fact that if we have an equation in 3 variables, then if we know two of them we can solve for the third (this is where the 3 coefficients being relatively prime comes into play).
@Derek Holt's answer is telling you that you only have two independent free generators. His answer is the one to select, but perhaps this answer shows what's going on somewhat more concretely.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Inconsistency in two-sided hypothesis testing Suppose you have two sets of data with known population variances and want to test the null hypothesis that two means are equal, ie. $H_{0}: \mu_{1} = \mu_{2}$ against $H_{1}: \mu_{1} > \mu_{2}$. There's a certain way I want to think about it, which is the following:
\begin{align}
P(\mu_{1} > \mu_{2}) &= P(-(\mu_{1} - \mu_{2}) < 0) \\
&= P\left(\frac{\bar{x}_{1} - \bar{x}_{2} - (\mu_{1} - \mu_{2})}{\sigma_{\delta \bar{x}}}<\frac{\bar{x}_{1} - \bar{x}_{2}}{\sigma_{\delta \bar{x}}} \right) \\
&=P\left(z < \frac{\bar{x}_{1} - \bar{x}_{2}}{\sigma_{\delta \bar{x}}} \right)
\end{align}
To me, this 'derivation' makes it perfectly clear what's actually going on. You're actually calculating the probability that $H_{1}$ is true and not just blindly looking up some $z$-score. However, now suppose that $H_{1}: \mu_{1} \neq \mu_{2}$. The problem with this is that the method I just described doesn't seem to work. If I write
$$
P(\mu_{1} \neq \mu_{2}) = P(\mu_{1} < \mu_{2}) + P(\mu_{1} > \mu_{2})
$$
Then all that happens is $P(\mu_{1} \neq \mu_{2}) = 1$. I think I'm probably not interpreting the above equation correctly.
|
If it is probable that $\mu_1<\mu_2$, then $\mu_1\geq\mu_2$ is rejected; it is not possible for $\mu_1<\mu_2$ and $\mu_1>\mu_2$ to hold simultaneously.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Show the diffusion equation is a normalised distribution. The diffusion equation is defined to be $$P(x,t) = \dfrac{1}{\sqrt{4D\pi t}} \exp \left(-\dfrac{x^2}{4Dt}\right),$$ where $D$ is a physical constant.
Show that the reaction diffusion equation is a normalised distribution.
I take that this means that I need to show that $$\int_{-\infty}^\infty P(x,t) \,dx = 1$$ (the sum of probabilities is $1$), which I have shown, and that $$\int_{-\infty}^{\infty} x P(x,t) \,dx = 0$$ or that the expectation is zero.
However, previous parts of the question make us show that $$\int_{-\infty}^\infty x^2 e^{-\alpha x^2} \,dx = 1.$$
Have I got the definition of the expectation wrong? Or am I trying to show the wrong thing? If so, why?
I don't think I'll need help with calculations, but if I do, I'll ask again.
|
$$
u(x,t) = \frac{1}{\sqrt{4D \pi t}}e^{-x^2/(4Dt)}
$$
is the fundamental solution to the initial value problem,
$$
\frac{\partial u}{\partial t} = D\frac{\partial^2 u}{\partial x^2},\quad u(x,0) = \delta (x)\,.
$$
Note the following:
*
*For $\alpha>0$, it is well-known that
$$
\int\limits_{-\infty}^{\infty}e^{-\alpha x^2}\mathrm dx = \sqrt{\frac{\pi}{\alpha}}\,\,\mbox{ and }\,\,\int\limits_{-\infty}^{\infty}xe^{-\alpha x^2}\mathrm dx = 0.\tag{1}
$$
*The r.h.s of each equality in $(1)$ is a differentiable function of $\alpha$. In fact, by appealing to Leibniz's integral rule, we see that
$$
\int\limits_{-\infty}^{\infty}x^2e^{-\alpha x^2}\mathrm dx = -\int\limits_{-\infty}^{\infty}\left(\frac{\partial}{\partial \alpha}e^{-\alpha x^2}\right)\mathrm dx = -\frac{\partial}{\partial \alpha}\left(~\int\limits_{-\infty}^{\infty}e^{-\alpha x^2}\mathrm dx\right) = - \frac{\partial}{\partial \alpha}\left(\sqrt{\frac{\pi}{\alpha}}\right) = \frac{1}{2}\sqrt{\frac{\pi}{\alpha^3}}\,.
$$
Therefore,
$$
2\sqrt{\frac{\alpha^3}{\pi}}\int\limits_{-\infty}^{\infty}x^2e^{-\alpha x^2}\mathrm dx = 1,\,\,\color{red}{\mbox{and not}\int\limits_{-\infty}^{\infty}x^2e^{-\alpha x^2}\mathrm dx = 1}.\tag{2}
$$
*Furthermore, as you deduced, using $\alpha = \frac{1}{4Dt}$ in both $(1)$ and $(2)$ gives,
$$
\frac{1}{\sqrt{4D\pi t}}\int\limits_{-\infty}^{\infty}e^{-\frac{1}{4Dt} x^2}\mathrm dx = 1,\quad\frac{1}{\sqrt{4D\pi t}}\int\limits_{-\infty}^{\infty}xe^{-\frac{1}{4Dt} x^2}\mathrm dx = 0,\quad\frac{1}{\sqrt{4D\pi t}}\int\limits_{-\infty}^{\infty}x^2e^{-\frac{1}{4Dt} x^2}\mathrm dx = 2Dt.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1006921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Maximization of a ratio Edit: Removed solved in title, because I realize I need someone to check my work.
Ok, so the problem is a lot more straightforward than I originally approached it (which was a false statement -- so it was excluded).
Question:
Let R,S, x $\in$ N with x $\le$ R*S and $0 \lt$ R $\le$ S. Next, define B as a multiplicative factor of x - c with c $\ge 0$ and B $\le$ S such that $\frac{x - c}{B} = A \le R$ and A $\in$ N. What value of B maximizes A?
|
Consider the expression $\frac xB$. For B to be a multiplicative factor of x - c, c must equal the remainder when x is divided by B. Therefore, we can rewrite c as c = x - dB, where d is the unique natural number satisfying both dB $\le$ x and (d + 1)B $\gt$ x. Substituting,
A = $\frac{x - (x - dB)}{B}$ = d.
However, d depends on B, so find $B \in N$ with $B \le$ S such that $d_B \le R$ and $d_{B-1} \gt R$. Following these two sets of inequalities will maximize A.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Book recommend for topics of Integrals in multivariable calculus. I am an average student and have to study following topics on my own for the exam :
The measure of a bounded interval in $\mathbb R^n$ , the Riemann integral of a bounded function defined
on a compact interval in $\mathbb R^n$ , Sets of measure zero and Lebesgue’s criterion for existence of a
multiple Riemann Integral, Evaluation of a multiple integral by iterated integration.
Please can anyone suggest some good self-study book providing good insight into the above topics ..
|
I would say Mathematical Analysis II by Zorich fits the bill.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
}
|
Differentiate $f(x)=\int_x^{10}e^{-xy^2}dy$ with respect to $x$ I am trying to find $f'(x)$ when $0\leq x\leq 10$. I know I could use the formula given on this wikipedia page: http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign but I have been asked to justify all steps of the calculation so this isn't allowed.
I have been given a hint to let $I(a,b,c)=\int_a^bf(x,c)dx$ and then told to show that $f$ satisfies all conditions necessary for FTC1 and the theorem of differentiation of integrals depending on a parameter.
The problem I am having is translating $f(x)$ into something of the same form as $I(a,b,c)$. Can anyone help?
EDIT: I think I've done it now using the method described by @mvggz . Is this the final answer once the $u$ has been substituted back out:
$$ f'(x)=-\frac{1}{x} \int_x^{10} e^{-xy^2} dy + \frac{5}{x} e^{-100x}-\frac{3}{2}e^{-x^3}$$
|
There are indeed formulas for the differentiation of functions of the form $x\mapsto \int_{a(x)}^{b(x)}f(x,t)\,\mathrm{d}t$ under some mild conditions on the functions $a$, $b$ and $f$.
However, in our case, the function $f$ has a quite nice form. We can start from the substitution $xy^2=t^2$, hence $t=\sqrt x\cdot y$, which gives $\mathrm dt=\sqrt x\cdot\mathrm dy$. Therefore
$$f(x)=\int_x^{10}e^{-xy^2}\mathrm dy=\frac 1{\sqrt x}\int_{x^{3/2}}^{10 \sqrt x}e^{-t^2}\mathrm dt=\frac 1{\sqrt x}\int_{0}^{10 \sqrt x}e^{-t^2}\mathrm dt-
\frac 1{\sqrt x}\int_{0}^{x^{3/2}}e^{-t^2}\mathrm dt.$$
The derivative of the right-hand side is easier to compute using the fundamental theorem of calculus and the chain rule.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 3
}
|
Question on series $\frac {\Gamma'(z)}{\Gamma(z)}$ Prove that:
$$\frac {2\Gamma'(2z)}{\Gamma(2z)}-\frac {\Gamma'(z)}{\Gamma(z)}-\frac {\Gamma \prime(z+\frac{1}{2})}{\Gamma(z+\frac{1}{2})} =2 \log 2$$
But I obtained that this equals zero:
$$\frac {2\Gamma'(2z)}{\Gamma(2z)} - \frac {\Gamma'(z)}{\Gamma(z)} - \frac {\Gamma'(z+\frac{1}{2})}{\Gamma(z+\frac{1}{2})} = 0$$
What's the correct answer? $0$ or $2\log 2$?
Can anyone help?
|
Since
$$\Gamma(z)\cdot \Gamma(z+1/2)=2\sqrt{\pi}\cdot 4^{-z}\cdot \Gamma(2z), $$
by considering the logarithmic derivative of both sides we get:
$$\frac{\Gamma'}{\Gamma}(z)+\frac{\Gamma'}{\Gamma}(z+1/2)=2\frac{\Gamma'}{\Gamma}(2z)-2\log 2,$$
which rearranges to the claimed identity, so the correct value is $2\log 2$.
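The identity is easy to test numerically (my addition): approximate $\psi=\Gamma'/\Gamma$ by a central difference of `math.lgamma` and check that the combination equals the constant $2\log 2$ for several $z$:

```python
import math

def digamma(z, h=1e-6):
    # central difference of log-gamma; plenty accurate for a sanity check
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

for z in (0.7, 1.3, 2.5, 4.0):
    lhs = 2 * digamma(2 * z) - digamma(z) - digamma(z + 0.5)
    print(z, lhs, 2 * math.log(2))
```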
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Inequality proof by induction, what to do next in the step I have to prove that for $n = 1, 2...$ it holds: $2\sqrt{n+1} - 2 < 1 + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{n}}$
Base: For $n = 1$ holds, because $2\sqrt{2}-2 < 1$
Step: assume holds for $n_0$.
$2\sqrt{n+2} - 2 < 1 + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{n + 1}}$. But I do not know what to do next? How this can be proved?
|
For induction step, it's enough to prove $\frac{1}{\sqrt{n+1}}>2(\sqrt{n+2}-\sqrt{n+1})$.
$$2(\sqrt{n+2}-\sqrt{n+1})=\frac{2}{\sqrt{n+2}+\sqrt{n+1}}<\frac{2}{2\sqrt{n+1}}=\frac{1}{\sqrt{n+1}}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
Finding the positive integer numbers to get $\frac{\pi ^2}{9}$ As we know, there are many formulas of $\pi$ , one of them $$\frac{\pi ^2}{6}=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}......
$$
and this $$\frac{\pi ^2}{8}=\frac{1}{1^2}+\frac{1}{3^2}+\frac{1}{5^2}......$$
Now,find the positive integer numbers $(a_{0}, a_{1}, a_{2}....)$ to get $$\frac{\pi^2 }{9}=\frac{1}{a_{0}^2}+\frac{1}{a_{1}^2}+\frac{1}{a_{2}^2}....$$
|
Hint: $\frac 8 9 = \frac 1 2 + \frac 1 3 + \frac 1 {18}$ Now try using your second formula.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 4
}
|
Trigonometric equation, missing some solutions I'm missing part of the answer, and I'm not quite sure why. The given answer doesn't even seem to hold...
Solve for x: $$\tan 2x = 3 \tan x $$
First some simplifications:
$$\tan 2x = 3 \tan x $$
$$\tan 2x - 3 \tan x = 0$$
$$\frac{\sin 2x}{\cos 2x} - \frac{3 \sin x}{\cos x} = 0$$
$$\frac{2 \sin x \cos^2x - 3 \sin x \cos 2x}{\cos 2x \cos x} = 0$$
$$\frac{\sin x(2 \cos^2x - 3 \cos 2x)}{\cos 2x \cos x} = 0$$
$$\frac{\sin x(2 \cos^2x - 3 (\cos^2 x - \sin^2 x))}{\cos 2x \cos x} = 0$$
$$\frac{\sin x(2 \cos^2x - 3\cos^2 x + \sin^2 x)}{\cos 2x \cos x} = 0$$
$$\frac{\sin x(\sin^2 x - \cos^2 x)}{\cos 2x \cos x} = 0$$
$$\frac{\sin x(\sin^2 x - \cos^2 x)}{(\sin^2 x - \cos^2 x) \cos x} = 0$$
$$\frac{\sin x}{\cos x} = 0$$
Looks much simpler. Now solving for x: since $\frac{\sin x}{\cos x} = 0 $ when $\sin x = 0$, and $\sin x = 0$ at every half rotation, the answer must be $x = k\pi$.
Alas, according to my answer sheet, I'm missing two values: $\frac{\pi}{6} + k\pi$ and $\frac{5\pi}{6} + k\pi$. But since $\frac{\sin(\frac{\pi}{6})}{\cos(\frac{\pi}{6})} = \frac{\sqrt3}{3}$, I'm not sure where these answers come from.
Furthermore, this is the kind of mistake I'm making all over these exercises, I'd like to avoid that, but how can I be sure I have ALL the answers needed?
|
Setting $\tan x=t$
we have $$\frac{2t}{1-t^2}=3t\iff2t=3t(1-t^2)\iff t(2-3+3t^2)=0$$
If $t=0,\tan x=0, x=n\pi$ where $n$ is any integer
$2-3+3t^2=0\iff 3t^2=1\implies\cos2x=\dfrac{1-t^2}{1+t^2}=\dfrac12=\cos\dfrac\pi3$
$\implies2x=2m\pi\pm\dfrac\pi3$ where $m$ is any integer
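To address the worry about having all the solutions (my addition): one can simply plug each claimed solution family back into the original equation numerically.

```python
import math

def g(x):
    return math.tan(2 * x) - 3 * math.tan(x)

families = (0.0, math.pi / 6, 5 * math.pi / 6)   # x = k*pi, k*pi + pi/6, k*pi + 5*pi/6
for k in range(-3, 4):
    for base in families:
        x = base + k * math.pi
        assert abs(g(x)) < 1e-8
print("all three solution families satisfy tan 2x = 3 tan x")
```

Incidentally, the step in the question that lost solutions is the simplification $2\cos^2x - 3\cos 2x$: it equals $3\sin^2 x-\cos^2 x$, not $\sin^2x-\cos^2x$, and setting $3\sin^2x-\cos^2x=0$ (i.e. $\tan^2 x=1/3$) is precisely what produces the missing $x=\pm\pi/6+k\pi$.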
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Expected number of rolls A fair m-sided dice is rolled and summed until the sum is at least N. What is the expected number of rolls? In other words what is the number of rolls if we roll a m-sided dice and the sum of rolls become at least N.
|
If $f(N)$ is the expected number of rolls, by conditioning on the first roll we have
$f(N) = 1 + m^{-1} \sum_{j=1}^m f(N-j)$ for $N > 0$, with $f(N) = 0$ for $N \le 0$. The generating function is $$g(z) = \sum_n f(n) z^n = \dfrac{mz}{m - (m+1)z + z^{m+1}}$$
EDIT: If you're interested in the asymptotic behaviour of $f(N)$ as $N \to \infty$ for fixed $m$, you want to look at the smallest root of the denominator, which is $z=1$. We have $$g(z) = \dfrac{2}{(m+1)(z-1)^2} - \dfrac{2(m-4)}{3(m+1)(z-1)} + h(z)$$
where $h(z)$ is analytic in a neighbourhood of the unit disk. Corresponding to this we get
$$ f(N) = \dfrac{2 N}{m+1} + \dfrac{2(m-1)}{3(m+1)} + O(c^{-N}) \ \text{as}\ N \to \infty $$
for some $c > 1$ (depending on $m$).
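The recursion and the asymptotic formula can be compared directly (a sketch, my addition, for a six-sided die $m=6$):

```python
m = 6
N_max = 200
f = [0.0] * (N_max + 1)
for N in range(1, N_max + 1):
    # f(N) = 1 + (1/m) * sum_{j=1}^{m} f(N - j), with f(N) = 0 for N <= 0
    f[N] = 1 + sum(f[N - j] for j in range(1, m + 1) if N - j > 0) / m

asym = 2 * N_max / (m + 1) + 2 * (m - 1) / (3 * (m + 1))
print(f[N_max], asym)
```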
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Equality of exponential functions from geometric series I'm currently trying to understand why the first and second line of this equation
are in fact equal. This is taken from "Introduction to the Physics of Waves" by Tim Freegarde from a chapter about diffraction gratings. The notation is somewhat ambiguous (I think), but from the following lines it becomes clear that $\sin \vartheta /2$ is to be read as $\frac{\sin\vartheta}{2}$. But my question remains: what steps are necessary to go from the first line to the second line?
|
Hint: Multiply the numerator and denominator by the factor $-e^{ikd \sin(\vartheta)/2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Study of a function on interval $[0,1]$ Let $f(x)$ be a function defined on the interval $[0,1]$ such that
$$x \mapsto \dfrac{x^2}{2-x^2}$$
Show that, for all $x \in [0,1[,\ 0\leq f(x)\leq x <1.$
Attempt:
Let $$g(x)=\dfrac{f(x)}{x} $$
$$g'(x)=\dfrac{2+x^2}{(2-x^{2})^2}\geq 0 $$
i'm stuck here
Thanks for your help
|
$$0\leqslant x<1\Rightarrow 0\leqslant x^2\leqslant x<1\\2-x^2>1\\\frac{1}{2-x^2}<1\\\text{multiply by }x^2\\x^2\cdot\frac{1}{2-x^2}\leqslant x^2\cdot 1\\\frac{x^2}{2-x^2}\leqslant x^2\leqslant x <1\\$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Finding zeroes of a complex function over a lattice Question:
Let $L = \mu\mathbb{Z}[i]$ be a lattice in $\mathbb{C}$, where $\mathbb{Z}[i] = \{n+mi:n,m\in\mathbb{Z}\}$ and $\mu \in \mathbb{R}_{+}$.
Let $$\mathrm{G}_k = \sum_{\substack{\omega \in L \\ \omega \neq 0}} \dfrac{1}{\omega^k} $$
be the corresponding Eisenstein series for $k \in \mathbb{Z}_{>0}$ and let
$$ p_L(z) = \dfrac{1}{z^2} + \sum_{\substack{\omega \in L \\ \omega \neq 0}} \left[\dfrac{1}{(z-\omega)^2} - \dfrac{1}{\omega^2}\right] $$ be its Weierstrass $p$-function.
Find the zeros of $p_L$.
Answer: It's easy to show that $L=iL$ and $L$ is closed under complex conjugation.
Now $L=iL \Rightarrow \left(\omega \in L \iff i\omega \in L\right) \Rightarrow \displaystyle\sum_{\substack{\omega \in L \\ \omega \neq 0}} \dfrac{1}{\omega^2} = 0 $
$L$ closed under complex conjugation $\Rightarrow \left(\omega \in L \iff \bar{\omega} \in L\right) \Rightarrow p_L(x) \in \mathbb{R}, \forall x \in \mathbb{R}, x \notin L$
So $p_L$ reduces to
$$ p_L(x) = \dfrac{1}{x^2} + \sum_{\substack{\omega \in L \\ \omega \neq 0}} \dfrac{1}{(x-\omega)^2}$$.
How would I find the zeros of $p_L$ from here?
I've tried resolving the sum by splitting into the different $\omega$s ($i\omega$,$\bar{\omega}$,$-\omega$) but that doesn't seem to go anywhere.
|
For an explicit formula of the zeroes of the Weierstrass $\wp$-function $P_L(z)$ see the article of Eichler and Zagier here. We have $\tau=i$ for $\mathbb{Z}[i]$.
Theorem: The zeroes of $P_L(z)$ are given by
$$
z=m+\frac{1}{2}+ni\pm \Biggl(\frac{\log(5+2\sqrt{6})}{2\pi i}+144\pi i\sqrt{6} \int_i^{i\infty}(t-i)\frac{\Delta(t)}{E_6(t)^{\frac{3}{2}}}dt\Biggr),
$$
where $m,n\in \mathbb{Z}$, $E_6(t)$ and $\Delta(t)$ denote the normalized Eisenstein series of weight $6$ and unique normalised cusp form of weight $12$ on $SL_2(\mathbb{Z})$, respectively.
The authors give two proofs, one using modular forms, the other one elliptic integrals.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Help proving $\partial (A \cup B) = \partial A\cup\partial B$? I know this is a duplicate but the other two haven't helped me much.
Fist attempt: Tried proving through double inclusion, but wasn't sure of how to convey being an element of one implied being an element of the other in either direction, although I suspect from left to right would be the easier of the two.
Second attempt: Tried proving equality directly, using the fact that the boundary of a set is equal to the set difference of its closure and interior, but struggled proving closure of union is a union of closures, or the interior of a union is a union of interiors.
I'm getting pretty frustrated and any help would be greatly appreciated!
|
It is false.
Consider $\Bbb R$ with the usual topology, $A=[0,2]$, $B=[1,3]$.
$$\partial(A\cup B)=\{0,3\}$$
$$\partial A\cup\partial B=\{0,1,2,3\}$$
Just note that part of the boundary of $A$ can lie in the interior of $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1007949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
An annoying Pell-like equation related to a binary quadratic form problem Let $A,B,C,D$ be integers such that $AD-BC= 1 $ and $ A+D = -1 $.
Show by elementary means that the Diophantine equation
$$\bigl[2Bx + (D-A) y\bigr] ^ 2 + 3y^2 = 4|B|$$
has an integer solution (that is, a solution $(x,y)\in\mathbb Z^2$). If possible, find an explicit solution (involving $A,B,C,D$, of course).
Motivation: I arrived at this equation after trying to find explicitly the matrix $g$ suggested by Will Jagy in his answer to this question of mine. Concretely, if $\gamma=\binom{A\ \ B}{C\ \ D}$, then $\gamma$ has order $3$ in $\operatorname{SL_2}(\mathbb Z)$. By indirect methods it can be shown that $\gamma$ is conjugate in $\operatorname{SL_2}(\mathbb Z)$ to one of the matrices $P$ or $P^{-1}$, where $P=\binom{\ \ \,0\quad1}{-1\ \ -1}$ (see studiosus' answer to the same question). Unfortunately this argument is rather sophisticated, to my knowledge, and besides I think that a direct argument is possible.
Because of this I tried to find an explicit matrix $g=\binom{x\ \ y}{z\ \ w}\in\operatorname{SL_2}(\mathbb Z)$ such that $gP=\gamma g$ or $gP^{-1}=\gamma g$. The matrix equalities lead to a system of $4$ linear equations in the unknowns $x,y,z,w$, which can be easily solved. Plugging these solutions $(x,y,z,w)$ (recall that we are considering the two possibilities of conjugation, to $P$ or $P^{-1}$) into the equation $xw-yz=1$ yields $Bx^2+(D-A)xy+(-C)y^2=\pm1$. Completing the square and using the equalities $AD-BC=1$ and $A+D=-1$ we obtain the required equation. I tried to solve it explicitly, with no success.
|
You have the binary quadratic form
$$ \color{red}{ f(x,y) = B x^2 + (D-A)xy - C y^2} $$ in your last paragraph. The discriminant is $$ \Delta = (D-A)^2 + 4 B C. $$
You also have $AD-BC = 1$ and $A+D = -1.$ So, $BC - AD = -1$ and
$$ A^2 + 2 AD + D^2 = 1, $$
$$ 4BC - 4AD = -4, $$
$$ A^2 - 2 AD + D^2 +4BC = -3, $$
$$ \Delta = (A - D)^2 +4BC = -3. $$
If we had $\gcd(A-D,B,C) > 1$ we would have a square factor of $\Delta,$ so that is out, the coefficients of $f(x,y)$ are relatively prime (as a triple, not necessarily in pairs). Next, $-3$ is not a square, so we cannot have $B=0$ or $C=0.$
Finally $f$ is definite. If, say, $B > 0,$ it is positive definite. With discriminant $-3,$ it is then equivalent in $SL_2 \mathbb Z$ to
$$g(u,v) = u^2 + u v + v^2$$ and both $g$ and $f$ integrally represent $1.$
If $B < 0,$ then $f$ is negative definite. With discriminant $-3,$ it is then equivalent in $SL_2 \mathbb Z$ to
$$h(u,v) = -u^2 - u v - v^2$$ and both $h$ and $f$ integrally represent $-1.$
That is enough for what you asked. I should point out that the reduction algorithm for positive binary forms, which closely resembles the Euclidean algorithm for finding GCDs, is essentially the algorithm for finding the matrix $W$ in $SL_2 \mathbb Z$ that is conjugate to your original and has, for example, the smallest value of
$(w_{22} - w_{11})^2.$ I recommend Buell, Binary Quadratic Forms, if I have not already. I was going to answer with a reduction that minimized $(A-D)^2,$ then looked again at your question and realized there was a shortcut. Note that many books do reduction for positive definite binary forms, and give short lists for discriminants of small absolute value.
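As a sanity check, here is a small Python sketch (names mine) verifying the discriminant computation and the representation of $1$ for the concrete order-$3$ matrix $P=\binom{\ \ 0\quad 1}{-1\ \ -1}$ from the question:

```python
from itertools import product

# P = [[0, 1], [-1, -1]] has order 3 in SL_2(Z): A + D = -1, AD - BC = 1
A, B, C, D = 0, 1, -1, -1
assert A + D == -1 and A * D - B * C == 1

# f(x, y) = B x^2 + (D - A) x y - C y^2, here x^2 - x y + y^2
f = lambda x, y: B * x ** 2 + (D - A) * x * y - C * y ** 2
assert (D - A) ** 2 + 4 * B * C == -3       # discriminant is -3

# B > 0, so f is positive definite and should represent 1
reps = [(x, y) for x, y in product(range(-3, 4), repeat=2) if f(x, y) == 1]
assert (1, 0) in reps
```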
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
The group $\mathbb{Z}_{10}$ has precisely $4$ subgroups True/False: The group $\mathbb{Z}_{10}$ has precisely $4$ subgroups.
Solution:
True: $10$ has exactly $4$ divisors ($1,2,5,10$), and since $\mathbb{Z}_{10}$ is cyclic it has exactly one subgroup for each divisor, hence $4$ subgroups.
|
You are correct.
If you define $\mathbb{Z}_{10} = \mathbb{Z}/ 10\mathbb{Z}$, then this follows from the correspondence theorem: the subgroups of $\mathbb{Z}_{10}$ correspond to the subgroups of $\mathbb{Z}$ that contain $10\mathbb{Z}$, and there is exactly one such subgroup for each divisor of $10$.
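A brute-force check (my own sketch) confirms the count by testing every subset of $\mathbb{Z}_{10}$ for closure:

```python
from itertools import combinations

n = 10

def is_subgroup(s):
    # contains the identity and is closed under addition mod n;
    # in a finite setting this forces inverses as well
    return 0 in s and all((a + b) % n in s for a in s for b in s)

subgroups = [set(c)
             for r in range(1, n + 1)
             for c in combinations(range(n), r)
             if is_subgroup(c)]

assert len(subgroups) == 4
assert sorted(len(s) for s in subgroups) == [1, 2, 5, 10]  # one per divisor
```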
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Linear independence of $\sin^2(x)$ and $\cos^2(x)$ The Wronskian for $\sin^2x, \cos^2x$ is
\begin{align}
& \left| \begin{array}{cc} \sin^2 x & \cos^2 x \\ 2\sin x\cos x & -2\cos x\sin x \end{array} \right| \\[8pt]
= {} & -2\sin^2x \cos x \sin x - 2 \cos^2 x \sin x \cos x,
\end{align}
with $x = \frac{\pi}{6},$ this is
$$
-\frac{\sqrt{3}}{2}.
$$
Does this mean $\sin^2x, \cos^2x$ are linearly independent on the interval from $(-∞, ∞)$?
|
It suffices to show that the Wronskian is not zero for a single value of $x$. We have: $$W(x) = \begin{vmatrix} \sin^2x & \cos^2x \\ 2\sin x \cos x & -2 \sin x \cos x\end{vmatrix} = -2\sin^3x \cos x - 2\sin x \cos^3 x$$
$$W(x) = -2\sin x \cos x\,(\sin^2x + \cos^2x) = -2\sin x \cos x = -\sin(2x)$$
Then, $W(\pi/4) = -1 \neq 0$, so the functions are linearly independent.
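A small numerical check (my own sketch, using central finite differences) confirms $W(x)=-\sin(2x)$:

```python
import math

f = lambda t: math.sin(t) ** 2
g = lambda t: math.cos(t) ** 2

def wronskian(x, h=1e-6):
    # W(f, g) = f g' - f' g, derivatives by central differences
    df = (f(x + h) - f(x - h)) / (2 * h)
    dg = (g(x + h) - g(x - h)) / (2 * h)
    return f(x) * dg - df * g(x)

for x in (0.3, 0.9, math.pi / 4):
    assert abs(wronskian(x) + math.sin(2 * x)) < 1e-6
assert abs(wronskian(math.pi / 4) + 1) < 1e-6   # W(pi/4) = -1, nonzero
```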
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Humorous integration example? I was just reading though an introductory calculus book and it has the note:
NOTE When integrating quotients, do not integrate the numerator and denominator separately. This is no more valid in integration than it is in differentiation.
Now that's fair enough to point out and it gives a nice example too. But out of curiosity...
Are there examples of functions $f ,g: \Bbb R \rightarrow \Bbb R $ whereby:
$\int \frac {f}{g} =\frac {\int f}{\int g}$.
Say for clarity you have a choice of the constants in the antiderivatives, and $f \not\equiv 0$.
I imagine it might possibly be easier if you choose definite integrals, just none spring to mind!
Maybe there's a link to a similar question on here?
|
It is easy to verify that, in the version suggested by @MPW, for $f=e^{ax}$, $g=e^{bx}$, we have
$$
a=\frac{b}{b-1}.
$$
For $b=2$ we have the answer obtained by Sam.
Edit:
There is a sketch, that this is near general solution.
We denote the integral by $F$; up to a constant it is some function. Then we have
$$
F(fg)=F(f)F(g).
$$
It is known that the solutions (up to constants again) are of the form $F(g)=g^a$, $a\in\bf R$.
It means that $g(x)=ag^{a-1}(x)g'(x)$. It is a Bernoulli equation, with solutions of the form $Ce^{bx}$.
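A quick numerical check of the relation (my own sketch; I am assuming MPW's version is the product rule $\int fg=\int f\cdot\int g$, which is consistent with $a=\frac{b}{b-1}$, i.e. $a+b=ab$, with all integration constants taken to be $0$):

```python
import math

b = 3.0
a = b / (b - 1)                      # the relation above, here a = 1.5
assert abs((a + b) - a * b) < 1e-12  # equivalent form: a + b = ab

F = lambda x: math.exp(a * x) / a    # antiderivatives, constants chosen 0
G = lambda x: math.exp(b * x) / b

# check d/dx [F(x) G(x)] = e^{ax} e^{bx} at a few points
h = 1e-6
for x in (-0.5, 0.0, 0.7):
    lhs = (F(x + h) * G(x + h) - F(x - h) * G(x - h)) / (2 * h)
    rhs = math.exp(a * x) * math.exp(b * x)
    assert abs(lhs - rhs) < 1e-4 * max(1.0, rhs)
```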
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 4
}
|
Complex analysis, showing a function is constant Let $\Omega$ be the right half plane excluding the imgainary axis and $f\in H(\Omega)$ such that $|f(z)|<1$ for all $z\in\Omega$. If there exists $\alpha\in(-\frac{\pi}{2},\frac{\pi}{2})$ such that $$\lim_{r\rightarrow\infty}\frac{\log|f(re^{i\alpha})|}{r}=-\infty$$ prove that $f=0$.
The hint is to define $g_n(z)=f(z)e^{nz}$; then by a previous exercise $|g_n|<1$ for all $z\in\Omega$.
|
I can only give a partial proof.
Let $g_n(z)=f(z)e^{nz}$, $n=1,2,\ldots$, and $g_n(z)\in H(\Pi)$.
Assume $g_n(z)\in C(\bar\Pi)$.
Let $K_1=\{re^{i\theta}\mid\theta\in(\alpha,\pi/2)\}$, $K_2=\{re^{i\theta}\mid-\theta\in(\alpha,\pi/2)\}$ and $K_3=\Pi\setminus(K_1\cup K_2)$. By the assumption $\lim_{r\rightarrow\infty}\frac{\log|f(re^{i\alpha})|}{r}=-\infty$, $g_n(z)$ is bounded on each sector $K_i$. Since $g_n(z)$ doesn't grow too fast, by Phragmen-Lindelöf's principle for sectors, $g_n(z)$ is bounded in $\Pi$.
By Phragmen-Lindelöf's principle again, this time for the right half plane, $|g_n(z)|\le1$. Since it is true for all $n$, we must have $f=0$.
It remains to show that the assumption $g_n(z)\in C(\bar\Pi)$ is true, or it is redundant.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that no hamiltonian cycle exists in the graph Show that the graph below has a hamiltonian path but no hamiltonian cycle.
You can find more than one hamiltonian path such as $(b,a,c,f,e,g,d)$.
Actually, I tried many times to find a hamiltonian cycle, but I couldn't find one. The problem is how to prove that no hamiltonian cycle exists. I need a logical proof. I can't say "I tried many times but I couldn't find a hamiltonian cycle" because it's not a reasonable answer.
|
Notice that if you delete the edge joining $g$ and $f$, then you get the complete bipartite graph $K_{3,4}$ with sides $\{g,a,f\}$ and $\{b,d,c,e\}$.
It may help now to redraw the graph in the usual way $K_{3,4}$ is drawn. In any case, you now need to prove that there is no Hamilton cycle in $K_{3,4}$ and no Hamilton path from $g$ to $f$. Use the fact that the two sides have different numbers of vertices.
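Assuming the graph really is $K_{3,4}$ with sides $\{g,a,f\}$ and $\{b,c,d,e\}$ plus the extra edge $gf$, as described above, a brute-force search (my own sketch) confirms the claim directly:

```python
from itertools import permutations

side1, side2 = "gaf", "bcde"
edges = {frozenset((u, v)) for u in side1 for v in side2}  # K_{3,4}
edges.add(frozenset("gf"))              # plus the edge joining g and f
vertices = side1 + side2

def is_path(order):
    return all(frozenset(p) in edges for p in zip(order, order[1:]))

paths = [p for p in permutations(vertices) if is_path(p)]
cycles = [p for p in paths if frozenset((p[0], p[-1])) in edges]

assert is_path(tuple("bacfegd"))   # the Hamiltonian path from the question
assert paths and not cycles        # Hamiltonian path exists, cycle does not
```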
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
what is the sum of square of all elements in $U(n)$? I know that $\sum\limits_{a\in U(n)} a=\frac{n\varphi(n)}{2}$ where $U(n):=\{1\leq r\leq n: (r, n)=1\}$ is a multiplicative group. And I know how to prove this result.
What I would like to know is whether $\sum\limits_{a\in U(n)} a^2$ can be found in closed form.
what I tried is the following:
Let $S=\sum\limits_{a\in U(n)} a^2$. Now $(n, a)=1$ shows that $(n, n-a)=1$ which again under the fact $(a, b)=1, (a, c)=1\Rightarrow (a, bc)=1$, shows that $(n, (n-a)^2)=1$. Hence $\{(n-a_1)^2, \cdots, (n-a_{\varphi(n)})^2\}$ is nothing but a permutation of the original set $\{a_1^2, \cdots, a_{\varphi(n)}^2\}$ in some order. Hence $S=\sum\limits_{a\in U(n)} (n-a)^2$.
In other words we must have:
\begin{align*}
S=&\sum\limits_{a\in U(n)} (n^2-2an+a^2)\\
=&n^2 \sum\limits_{a\in U(n)} 1-2n\sum\limits_{a\in U(n)} a+S\\
=&n^2\varphi(n) -2n\times \frac{n\varphi(n)}{2}+S\\
=&S
\end{align*}
and no result is obtained.
What to do ? Please help me.
Thanks
EDIT:
Following the link provided below by Robert Israel, the formula reads: if $ n = p_1^{a_1}p_2^{a_2}\cdots p_r^{a_r}$ then $S = n^2\frac{\varphi(n)}{3}+(-1)^r p_1p_2\cdots p_r\frac{\varphi(n)}{6}$. But how can one establish this?
|
See OEIS sequence A053818 and references there.
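As a sanity check, the formula quoted in the question's edit can be verified by brute force (my own sketch; `Fraction` is used because the two terms are not separately integers):

```python
from fractions import Fraction
from math import gcd, prod

def S(n):                       # the sum in question, by brute force
    return sum(a * a for a in range(1, n + 1) if gcd(a, n) == 1)

def prime_factors(n):
    # distinct prime factors by trial division
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def formula(n):
    ps = prime_factors(n)
    phi = sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)
    return (Fraction(n * n * phi, 3)
            + Fraction((-1) ** len(ps) * prod(ps) * phi, 6))

for n in range(2, 200):
    assert formula(n) == S(n)
```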
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
How to find the value of $\det{(A)}$ if $A\cdot\begin{bmatrix}1 \end{bmatrix}$ is known Let $A$ be an $n\times n$ matrix, let $A^{*}$ be the adjugate matrix of $A$, and let $p,q>0$ be given numbers such that the following conditions hold:
$$A\cdot\begin{bmatrix}
1\\
1\\
\vdots\\
1
\end{bmatrix}=\begin{bmatrix}
p\\
p\\
\vdots\\
p
\end{bmatrix},\quad\text{and}\quad A^{*}\cdot\begin{bmatrix}
1\\
1\\
\vdots\\
1
\end{bmatrix}=\begin{bmatrix}
q\\
q\\
\vdots\\
q
\end{bmatrix}$$
and $A^{-1}$ exists. Find $\det{(A)}$.
My idea: I know that $p$ is an eigenvalue of $A$ (with eigenvector $[1,\dots,1]^T$), and that $q$ is an eigenvalue of $A^{*}$,
so
$$AX=pX,\qquad A^{*}Y=qY.$$
|
There is a theorem which states that
$$A \cdot A^*= A^* \cdot A= \det(A) \cdot I$$
Where $I$ is the identity matrix.
So, applying $A \cdot A^*$ to the vector $[1, \dots, 1]^T$ you get that the determinant of $A$ is $pq$.
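As an illustration (my own sketch, with a sample $2\times2$ matrix satisfying the hypotheses with $p=3$, $q=1$):

```python
def det(M):
    # cofactor expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def adjugate(M):
    # transpose of the cofactor matrix
    n = len(M)
    cof = [[(-1) ** (i + j)
            * det([row[:j] + row[j + 1:]
                   for k, row in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

A = [[2, 1], [1, 2]]                        # row sums 3, so p = 3
adjA = adjugate(A)                          # [[2, -1], [-1, 2]], row sums 1
assert all(sum(row) == 3 for row in A)
assert all(sum(row) == 1 for row in adjA)   # q = 1
assert det(A) == 3 * 1                      # det(A) = p q
```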
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving that $\frac{\pi ^2}{p}\neq \sum_{n=1}^{\infty }\frac{1}{a_{n}^2}$ we have many formulas for $\pi ^2$, such as $$\frac{\pi ^2}{6}=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\cdots$$
and $$\frac{\pi ^2}{8}=\frac{1}{1^2}+\frac{1}{3^2}+\frac{1}{5^2}+\cdots$$ and similarly for $\frac{\pi ^2}{9}$, $\frac{\pi ^2}{12}$, $\frac{\pi ^2}{16}$, $\frac{\pi ^2}{24}$, and so on. Notice that the number in the denominator is never prime. Now we want to prove that
$$\frac{\pi ^2}{p}\neq \sum_{n=1}^{\infty }\frac{1}{a_{n}^2}$$ where $p$ is a prime number and the $a_{n}$ are positive integers.
|
I ran the greedy algorithm from the previous question for the first 1000 primes with the added caveat that the terms in the sequence have to be distinct. It appears that your statement is only true for the primes 2, 3, 5, 11, 13 and false for the rest! I've only used 40 terms in the algorithm for each of the primes. For those that work, the accuracy is quite good (hundreds of digits for the larger primes). For example if $A_p$ stands for the terms for prime p then
$$A_7 = \{1, 2, 3, 5, 11, 42, 991, 59880, 21165404, 81090759408,\ldots \}$$
and if $S_p(n)$ stands for the partial sum we have
$$S_7(10) = \frac{9409086091684410250487330038662170074145032522104397}{6673378178614856104850078838130823167971682291718400} $$
with
$$\frac{\pi^2}{7}- S_7(10) \approx 3.0292 \cdot 10^{-34}$$
EDIT: I ran the algorithm again but this time with the integers from 1 to 100 because I suspected the problem is not just with the primes themselves. The algorithm failed to converge within 15 terms for the numbers
$$E = \{1, 2, 3, 4, 5, 10, 11, 12, 13, 14, 15\}$$
The algorithm is way off for these integers. I'm willing to conjecture that for every integer $k\notin E$ there exists a sequence of positive integers $a_{n,k}$ such that
$$\frac{\pi^2}{k} = \sum_{n = 1}^{\infty} \frac{1}{a_{n,k}^2}$$
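For reference, here is a minimal version of the greedy algorithm described above (my own sketch; plain double precision, so only the first handful of terms are reliable):

```python
from math import ceil, pi, sqrt

def greedy_terms(target, count):
    """Greedily pick distinct positive integers a_1 < a_2 < ... with
    the sum of 1/a_k^2 approaching `target`."""
    terms, rem, prev = [], target, 0
    for _ in range(count):
        a = max(prev + 1, ceil(sqrt(1 / rem)))
        while 1 / a ** 2 > rem:     # guard against rounding at the edge
            a += 1
        terms.append(a)
        rem -= 1 / a ** 2
        prev = a
    return terms, rem

terms, rem = greedy_terms(pi ** 2 / 7, 7)
assert terms == [1, 2, 3, 5, 11, 42, 991]   # matches A_7 above
assert rem < 1e-9
```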
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
Examples of the Geometric Realization of a Semi-Simplicial Complex I am reading The Geometric Realization of a Semi-Simplicial Complex by J. Milnor and here are the definitions:
I find it difficult to visualize without specific examples. Can anyone help to provide some typical examples of geometric realization of semi-simplicial complexes?
|
Let $K_0=\{a,b\}$ and let $K_1=\{X,Y\}$ with $\partial_0(X)=\partial_1(Y)=a$ and $\partial_1(X)=\partial_0(Y)=b$. This is a $\Delta$-complex.
The space $\bar{K}$ is a disjoint union of two discrete points given by $a$ and $b$, and two disjoint intervals given by $X$ and $Y$. The equivalence relation just says that we join the beginning of the $X$ interval and the end of the $Y$ interval to the point $a$, and the end of the $X$ interval and the beginning of the $Y$ interval to the point $b$. Draw a picture of this and you will see that the geometric realisation $|K|=\bar{K}/{\sim}$ is homeomorphic to a circle.
If we instead want to represent the circle by a simplicial complex $K$, then we need at least three $1$-simplices. Let $K_0=\{a,b,c\}$ and let $K_1 = \{\{a,b\},\{b,c\},\{c,a\}\}$ with $$\partial_0(\{a,b\})=\partial_1(\{c,a\})=a\\ \partial_0(\{b,c\}) = \partial_1(\{a,b\})=b\\ \partial_0(\{c,a\}) = \partial_1(\{b,c\})=c$$
and this time we have three edges which are glued together in the geometric realisation according to the ordering of their boundary faces given by the image of the $\partial_i$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to integrate the dilogarithms? $\def\Li{\operatorname{Li}}$
How can you integrate $\Li_2$? I tried from $0 \to 1$
$\displaystyle \int_{0}^{1} \Li_2(z) \,dz = \sum_{n=1}^{\infty} \frac{1}{n^2(n+1)}$
$$\frac{An + B}{n^2} + \frac{D}{n+1} = \frac{1}{n^2(n+1)}$$
$$(An + B)(n+1) + D(n^2) = 1$$
Let $n = -1, \implies D = 1$
Let $n = 0, \implies B = 1$
Let $n = 1, \implies A = -1$
$$\frac{-n + 1}{n^2} + \frac{1}{n+1} = \frac{1}{n^2(n+1)}$$
$$= \sum_{n=1}^{\infty} \frac{-n + 1}{n^2} + \frac{1}{n+1} = \sum_{n=1}^{\infty} \frac{1}{n^2(n+1)} = \sum_{n=1}^{\infty} \frac{1}{n^2} - \frac{1}{n} + \frac{1}{n+1} $$
The $1/n$ term is the problem: it gives the harmonic series, which diverges.
|
Maybe you should look at your decomposition as
$$\frac1{n^2 (n+1)} = \frac1{n^2} - \frac1{n (n+1)}$$
The sum over the second term is easy, given that the indefinite sum is telescoping, i.e.,
$$\sum_{n=1}^N \frac1{n (n+1)} = 1-\frac1{N+1}$$
We take the limit as $N \to \infty$ and then we may view this as the infinite sum. (Otherwise, as you say, there are convergence issues.)
Thus the sum in question is $$\frac{\pi^2}{6}-1$$
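A quick numerical check of this value using the partial sums of the series (my own sketch):

```python
import math

# partial sum of sum 1/(n^2 (n+1)); should approach pi^2/6 - 1
N = 10 ** 6
s = sum(1.0 / (n * n * (n + 1)) for n in range(1, N + 1))
assert abs(s - (math.pi ** 2 / 6 - 1)) < 1e-8
```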
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1008949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
}
|
$L^{2}$ convergent, subsequence, directed family of points I have a question about convergence.
Let $(E,\mathcal{B},m)$ be a measure space. I think the following assertion is very famous:
Let $f_{n},f \in L^{2}(E;m)\quad(n=1,2,\cdots)$. If $f_{n}\to f $ in $L^{2}(E;m)$ then there exists subsequence $(f_{n_{k}})_{k=1}^{\infty}$ of $(f_{n})_{n=1}^{\infty}$ such that $f_{n_{k}}\to f$ $m$-a.e.
By the way, let $(f_{t})_{t>0} \subset L^{2}(E,m),\,f \in L^{2}(E;m)$. We suppose that $f_{t} \to f \,(t \to 0) $ in $L^{2}(E;m)$.
In the situation above, can we conclude $f_{t} \to f$ $m$-a.e.?
|
No. All you can conclude is that for every sequence $\{t_n\}$, $t_n\to0$, there is a subsequence $\{t_{n_k}\}$ such that $f_{t_{n_k}}\to f$ $m$-a.e.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Analogy to the purpose of Taylor series I want to know an analogy for the purpose of Taylor series. I did a Google search of the web and videos: they all talk about what Taylor series are and give examples of them, but no analogies. I am not a math geek, and this is my attempt to re-learn calculus in a better way, in order to understand physics and linear algebra.
Having an analogy would indeed help me see its use in real life. Learning seems lacking if the concept can't be applied. I would appreciate a more layman's-terms explanation at this (for me) challenging point.
Have read this post as of now: What are the practical applications of Taylor series?. That post (the answers, comments) indeed increases the bar of my expectation for a satisfying answer to my question.
|
I can't think of any good analogies.
Perhaps the best way to get comfortable with Taylor Series is to look at some interactive examples:
http://demonstrations.wolfram.com/GraphsOfTaylorPolynomials/
Starting with $f(x) = e^x$, step up the highest number of terms from 1 to 10 to see how higher polynomials do an increasingly better job of approximating the function around the point of expansion.
Dare I say it, one of the reasons to learn mathematics is that it gives you new ways of thinking about things. Taylor series and the idea of an $n$-th degree approximation is itself an analogy or metaphor used many places in the sciences: physical, biological and social.
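The behaviour in that demonstration can also be seen numerically (my own sketch):

```python
import math

def taylor_exp(x, n):
    # degree-n Taylor polynomial of e^x about 0
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 1.0
errors = [abs(math.exp(x) - taylor_exp(x, n)) for n in range(1, 11)]
# each extra term improves the approximation near the expansion point
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 1e-7
```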
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
How to find the determinant of this $(2n+2)$ x $(2n+2)$ matrix? I need to calculate the determinant of the following matrix:$$\begin{bmatrix}0&0&-2x_1& \cdots &-2x_n&0& \cdots &0\\0&0&0& \cdots&0&-2x_1& \cdots&-2x_n\\-2x_1&0&-1& \cdots&0&1& \cdots&0\\ \vdots&\vdots &\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\-2x_n&0&0&\cdots&-1&0&\cdots&1\\0&-2x_1&1&\cdots&0&-1&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\0&-2x_n&0&\cdots&1&0&\cdots&-1
\end{bmatrix}$$
Note that all the elements on the diagonals indicated by the dots are $1$ or $-1$.
Let $H_j$ be the upper-leftmost $j \times j$ submatrix of $A$. I need a method to evaluate all the determinants for which $5\le j\le2n+2$. Could someone give me some advice?
|
Write w.l.o.g. your matrix in the form
$$
A:=\begin{bmatrix}
0 & 0 & x^T & 0 \\
0 & 0 & 0 & x^T \\
x & 0 & -I & I \\
0 & x & I & -I
\end{bmatrix}.
$$
Take any nonzero vector $z$ orthogonal to $x$ ($z^Tx=0$), there is a whole $(n-1)$-dimensional subspace of them. Now
$$
A\begin{bmatrix}
0 \\ 0 \\ z \\ z
\end{bmatrix}=0.
$$
Hence for $n>1$, $A$ is singular and $\det(A)=0$.
For $n=1$, it is easy to verify that $\det(A)=x^4$.
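A quick check of both claims (my own sketch, with a naive cofactor determinant, so only small $n$):

```python
def det(M):
    # cofactor expansion along the first row; fine for these small sizes
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def build(xs):
    # the block matrix [[0,0,x^T,0],[0,0,0,x^T],[x,0,-I,I],[0,x,I,-I]]
    n = len(xs)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    rows = [[0, 0] + list(xs) + [0] * n,
            [0, 0] + [0] * n + list(xs)]
    rows += [[xs[i], 0] + [-I[i][j] for j in range(n)] + I[i]
             for i in range(n)]
    rows += [[0, xs[i]] + I[i] + [-I[i][j] for j in range(n)]
             for i in range(n)]
    return rows

assert det(build([3])) == 3 ** 4        # n = 1: det = x^4 = 81
assert det(build([1, 2])) == 0          # n = 2: singular, as claimed
```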
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the topological properties of $\mathbb R $ that makes it uncountable (as compared to $\mathbb Q $)? What is the topological properties of $\mathbb R $ that makes it uncountable (as compared to $\mathbb Q $)?
Further, what axioms (or properties) of $\mathbb R $ do these topological properties depend on? (I suppose completeness, and of course also the ordering, since this is what generates the usual topology...)
There is a proof in Munkres' Topology that a nonempty compact Hausdorff space which has no isolated points is uncountable. Obviously, this is satisfied by closed intervals in $\mathbb R$, but in $\mathbb Q$ compact subsets must have isolated points (this follows from an argument using Baire's category theorem). And can this be shown to follow from completeness?
So can one say that the uncountability of the real numbers hinges on the fact that we have compact sets that are perfect, whereas the set of rationals has none?
Is it possible to nail down this distinction even further? That is to say, that compactness and the perfectness of closed sets depend on some other topological property.
Thanks in advance!
|
One topological property of $\mathbb{R}$ that makes it uncountable (more precisely, of size at least continuum) is connectedness. Any Tychonoff (or even functionally Hausdorff) connected space with at least two points has size at least continuum. The connectedness comes from completeness; actually, order completeness together with the absence of isolated points is enough. Any densely linearly ordered topological space with at least two points whose order is complete has size at least continuum.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Median of truncated / limited normal distribution People's weights are normally distributed, $\mathcal{N}(0,\,1)$.
$\mu$, $\sigma$, and $\sigma^2$ are known.
How can I calculate the median weight if everyone who weighs less than some amount $x$ is removed / ignored?
I would appreciate some hints on what would be the best way to begin to solve this problem.
|
Let $F$ denote the CDF of the uncensored weight, then the median $m_x$ of the weight censored below the value $x$ solves $F(m_x)=\frac12(1+F(x))$.
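This is easy to compute numerically; here is a sketch using Python's standard-library `statistics.NormalDist` (the function name is mine):

```python
from statistics import NormalDist

def truncated_median(x, mu=0.0, sigma=1.0):
    """Median of N(mu, sigma^2) conditioned on being at least x."""
    d = NormalDist(mu, sigma)
    return d.inv_cdf(0.5 * (1 + d.cdf(x)))

# truncating at the mean: the median becomes the 75th percentile
assert abs(truncated_median(0.0) - 0.6744897501960817) < 1e-7
# truncating far below the mean changes essentially nothing
assert abs(truncated_median(-10.0)) < 1e-6
```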
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is irrational times rational always irrational? Is an irrational number times a rational number always irrational?
If the rational number is zero, then the result will be rational. So can we conclude that in general, we can't decide, and it depends on the rational number?
|
If $a$ is irrational and $b\ne0$ is rational, then $a\,b$ is irrational. Proof: if $a\,b$ were equal to a rational $r$, then we would have $a=r/b$ rational.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 3
}
|
$\int \frac{2^{\sin \left(\sqrt{x}\right)} \cos \left(\sqrt{x}\right)}{\sqrt{x}} \, dx$ I have been asked to integrate:
$$\int \frac{2^{\sin \left(\sqrt{x}\right)} \cos \left(\sqrt{x}\right)}{\sqrt{x}} \, dx$$
For such a small integral you don't have to write everything down, but to show where I am struggling I have provided a step-by-step approach:
$$u=\sin \left(\sqrt{x}\right)$$
$$2 \text{du}=\frac{\cos \left(\sqrt{x}\right)}{\sqrt{x}}dx$$
$$2 \int 2^u \, du$$
Now this is where I get stuck, because I do not see why the answer to this should be:
$$\frac{2^{u+1}}{\log (2)}$$ Is there a systematic approach to solving this, and if not, how do you reason?
Please note it's not the substitution I am struggling with.
|
$$I=\int\frac{2^{\sin\sqrt{x}}\cos\sqrt{x}}{\sqrt{x}}dx$$
$u=\sin\sqrt{x}\Rightarrow \frac{du}{dx}=\frac1{2\sqrt{x}}\cos\sqrt{x}$ and so: $dx=\frac{2\sqrt{x}}{\cos\sqrt{x}}du$ which converts our integral into:
$$I=2\int2^{u}du$$
now to integrate this notice that:
$$2^{u}=e^{\ln(2^{u})}=e^{u\ln(2)}$$
now if we make the substitution: $v=u\ln(2)\Rightarrow du=\frac{dv}{\ln(2)}$ and so:
$$I=\frac{2}{\ln 2}\int e^vdv$$
now this should be easy, just remember the constant of integration and then back-substitute :)
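A quick numerical check that the resulting antiderivative $\frac{2^{\sin\sqrt{x}+1}}{\ln 2}$ is correct after back-substitution (my own sketch, via central differences):

```python
import math

integrand = lambda x: (2 ** math.sin(math.sqrt(x))
                       * math.cos(math.sqrt(x)) / math.sqrt(x))
# candidate antiderivative obtained by back-substituting u = sin(sqrt x)
antideriv = lambda x: 2 ** (math.sin(math.sqrt(x)) + 1) / math.log(2)

h = 1e-6
for x in (0.5, 1.3, 2.0):
    numeric = (antideriv(x + h) - antideriv(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-5
```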
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Could the Riemann hypothesis be provably unprovable? I don't know much about foundations and logic, so I ask forgiveness if my question is just plain stupid.
Assume we have a statement of the form:
There exist no $x\in X$ such that $P(x)$.
where $X$ is some kind of set (or class, or similar stuff) and $P$ is a set of properties. An example of such a statement could be the Riemann hypothesis:
There exists no $x\in\mathbb{C}$ with $\Re(x)\neq\frac{1}{2}$ that is not a negative even integer and satisfies $\zeta(x)=0$.
Can such a statement be provably unprovable?
Intuitively, I would say no, because to show that it is unprovable we should show that we cannot find $x\in X$ such that $P(x)$ (else finding such an $x$ would be a proof that the statement is false), but doing so would prove the statement to be true.
Is this correct, or am I missing something?
Edit: Please read the question correctly: it is not properly a question on the RH, but more a question on logic!
|
You can check out the answers to this related MO question:
"Can the Riemann hypothesis be undecidable?"
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
}
|
how to find out if something in general (scalar, function, vector) is in the span of a set I'm pretty sure that if a vector is in the span of a set of vectors, then it can be written as a linear combination of the vectors in the set, which you can find out by setting up a system of equations. But what if you have an arbitrary object and want to find out if it is in the span of a set?
such as this:
$S = \{\cos^2x, \sin^2x, \tan^2x\}$
a) $1$
b) $\sec^2x$
c) $\cos2x$
d) $0$
which of these is in the span of this set? I want to say that $1$ can't be written as a linear combination of the three functions, but I don't know how to find out concretely. I also want to say $0$ can be, because you can have $k_1,k_2,k_3 = 0$, but am I allowed to have all scalars be $0$? I'm also not sure how to concretely figure out whether the other functions, or a value in general, are in the span of this set. What is the most reliable way to find out whether something is in the span of a set or not?
|
You do it the same way as with Euclidean vectors -- you check if you can represent a given function (/vector) by a linear combination of your basis:
That is, you try to solve:
$a)\ 1=a\cos^2(x)+b\sin^2(x)+c\tan^2(x)$
$b)\ \sec^2(x)=a\cos^2(x)+b\sin^2(x)+c\tan^2(x)$
$c)\ \cos(2x)=a\cos^2(x)+b\sin^2(x)+c\tan^2(x)$
$d)\ 0=a\cos^2(x)+b\sin^2(x)+c\tan^2(x)$
NOTE: the $a$'s, $b$'s, and $c$'s need not be the same for each problem.
If there is at least one solution $(a,b,c)$ to any of these, then that function is in the space spanned by $\cos^2(x)$, $\sin^2(x)$, and $\tan^2(x)$.
You should also note that some sort of $0$ (scalar, function, n-tuple, etc) will ALWAYS be in your space precisely because you can always have $a=b=c=0$.
So let's do $(a)$:
Remember that we have the Pythagorean identity: $\sin^2(x) + \cos^2(x) =1$. So clearly one solution is $1=(1)\cos^2(x)+(1)\sin^2(x)+(0)\tan^2(x)$, AKA $(a,b,c) = (1,1,0)$. Because we were able to find this solution, it means that $1 \in \text{span}(\cos^2(x),\sin^2(x),\tan^2(x))$.
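Numerically, one can sample three $x$-values and solve the resulting $3\times3$ linear system (my own sketch; the sample points are arbitrary):

```python
import math

basis = [lambda x: math.cos(x) ** 2,
         lambda x: math.sin(x) ** 2,
         lambda x: math.tan(x) ** 2]

def solve3(samples, target):
    # Gauss-Jordan elimination on the 3x3 system from three sample points
    M = [[f(x) for f in basis] + [target(x)] for x in samples]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))   # partial pivot
        M[i], M[p] = M[p], M[i]
        M[i] = [v / M[i][i] for v in M[i]]
        for r in range(3):
            if r != i:
                M[r] = [v - M[r][i] * w for v, w in zip(M[r], M[i])]
    return [row[3] for row in M]

coeffs = solve3([0.3, 0.7, 1.1], lambda x: 1.0)
# the Pythagorean identity predicts (a, b, c) = (1, 1, 0)
assert all(abs(c - e) < 1e-8 for c, e in zip(coeffs, (1.0, 1.0, 0.0)))
for x in (0.2, 0.9):   # check at points not used in the solve
    assert abs(sum(c * f(x) for c, f in zip(coeffs, basis)) - 1.0) < 1e-8
```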
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What are the theorems in mathematics which can be proved using completely different ideas? I would like to know about theorems which admit several proofs based on completely different techniques.
Motivation:
When I read the book Proofs from THE BOOK, I saw that there were many proofs of the same theorem using completely different fields of mathematics.
(There were six proofs of the infinitude of primes based on different ideas, even using topology.)
I wonder how mathematicians come up with these kinds of proofs.
If you know such theorems and proofs please share them with me. Thank you.
|
A large rectangle is tiled with smaller rectangles. Each of the smaller rectangles has at least one integer side. Must the large rectangle have at least one integer side?
What if you replace 'integer' with 'rational' or 'algebraic'?
Here are fourteen proofs.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1009922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 13,
"answer_id": 8
}
|
Divergent Sequence for Wau So I just "learned" about the number Wau from Vi Hart's video. It's amusing, to be sure, but the actual "definition" she presents got me thinking.
We can formalize the construction in this way: set $x_0=\frac2{1+3}=\frac12$ and then
$$x_{n+1} = \frac{2}{\frac12x_n+\frac32x_n} = \frac{2}{2x_n}=\frac{1}{x_n}.$$
Clearly, this sequence does not converge. However, if we assume it does converge to $x$, then $x=\frac1x$ and the only sensible solution to this is $x=1$.
I know that there are treatments of divergent series, and there are methods (for example, Cesàro summation) of assigning meaningful finite quantities to them, and I was wondering if the same is true for sequences. In particular, is there some more legitimate sense in which this sequence should be assigned the value $1$ than the simple "hope it converges" idea?
I tried to turn it into a series and then perform Cesàro summation; but as one would expect, this gives the arithmetic average of the distinct terms:
$$x_n = 2 - \frac32\sum_{k=0}^n (-1)^k \xrightarrow{\text{Cesàro}} 2-\frac34 = \frac54$$
|
It may not be a correct intuition, but my idea is to perturb the iteration as follows: let $\epsilon > 0$ and define
$$ x_{0} = \tfrac{1}{2}, \qquad x_{n+1} = \frac{1}{x_{n}} + \epsilon. \tag{1}$$
Then it follows that $x_{n}$ converges to the fixed point $\frac{\epsilon+\sqrt{\epsilon^{2}+4}}{2}$ (the positive root of $x^{2}-\epsilon x-1=0$), which indeed converges to $1$ as $\epsilon \to 0^{+}$.
If we assume that this iteration procedure is given by some physics model, then it may make sense to think of this perturbation. But this interpretation has a pitfall, in that (1) turns out to be very unstable if $\epsilon < 0$. Thus even if this viewpoint makes sense, we need great care when we try to understand the meaning of the perturbation term $\epsilon$.
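A quick numerical experiment with the perturbed iteration (1) (my own sketch); the map $x\mapsto \frac1x+\epsilon$ has an attracting fixed point at the positive root of $x^{2}-\epsilon x-1=0$, which tends to $1$ as $\epsilon\to0^{+}$:

```python
def iterate(eps, steps, x0=0.5):
    x = x0
    for _ in range(steps):
        x = 1 / x + eps
    return x

for eps in (0.5, 0.1, 0.05):
    fixed = (eps + (eps * eps + 4) ** 0.5) / 2   # root of x^2 - eps x - 1
    assert abs(iterate(eps, 2000) - fixed) < 1e-9

# convergence slows as eps shrinks, but the limits approach 1
assert abs(iterate(0.01, 20000) - 1) < 0.01
```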
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the geometric interpretation of general solution Here are the linear equations:
$$2x+z=0$$
$$-x+3y+z=0$$
$$-x+y+z=0$$
I have found that the general solution is,
$$t
\begin{bmatrix}
\frac{-1}2\\
\frac{-1}2 \\
1 \\
\end{bmatrix}
$$
The question asks me to find the geometric interpretation of the general solution.
But I have no idea how. I think my limited knowledge of geometric interpretations is not helping me, so some explanation would help me a lot.
Thank you for any help!
|
What you have is the parametric equation for a line. Do you see why?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If the normalizer of a subgroup in a group is equal to the subgroup then is the subgroup abelian? If $H$ is a proper subgroup of $G$ such that $H=N_G(H)$ ( the normalizer of $H$ in $G$ ) , then is it true that $H$ is abelian ?
|
Since no one has posted a simple counterexample yet, here it goes:
Consider $G=S_4$ and $H=S_3$ considered as a subgroup of $G$ (i.e. as the stabilizer of one of the $4$ points $G$ naturally acts on). Then there are $4$ conjugates of $H$ (the stabilizers of the $4$ points are all conjugate) but also $4=[G:H]$ hence $N_G(H)=H$ and $H$ is obviously not abelian.
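This is small enough to confirm by brute force; here is a Python sketch (permutations represented as tuples, with the stabilizer of the point $3$ playing the role of $H=S_3$):

```python
from itertools import permutations

# S4 as tuples p, where p[i] is the image of i; composition (p*q)(i) = p[q[i]]
G = list(permutations(range(4)))
compose = lambda p, q: tuple(p[q[i]] for i in range(4))
inverse = lambda p: tuple(sorted(range(4), key=lambda i: p[i]))

H = {p for p in G if p[3] == 3}   # copy of S3: the stabilizer of the point 3
conj = lambda g: {compose(compose(g, h), inverse(g)) for h in H}

normalizer = [g for g in G if conj(g) == H]
print(len(normalizer), len(H))               # 6 6: N_G(H) = H
print(len({frozenset(conj(g)) for g in G}))  # 4 distinct conjugates of H
```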
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 1
}
|
positive fractions, denominator 4, difference equals quotient (4,2) are the only positive integers whose difference is equal to their quotient. Find the sum of two positive fractions, in their lowest terms, whose denominators are 4 that also share this same property.
|
Let's work with $n$ instead of $4$; the fractions are $\frac an$ and $\frac bn$ with $a>b$. Setting the difference equal to the quotient, $\frac{a-b}{n}=\frac ab$, we obtain $a=\frac{b^2}{b-n}=b+n+\frac{n^2}{b-n}$, hence $b-n$ must divide $n^2$. If $\frac bn$ is in lowest terms, this is only possible if $b=n+1$: were $b-n$ a divisor of $n^2$ greater than $1$, it would have a prime divisor $k$; from $k\mid n^2$ we get $k\mid n$, hence $k\mid b=n+(b-n)$, and $\frac bn$ would not be in lowest terms.
Thus the fractions satisfying the conditions are $\frac{n+1}{n}$ and $\frac{(n+1)^2}{n}$.
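A brute-force search (a Python sketch; the search bounds are arbitrary but cover all possible solutions) confirms that for $n=4$ the only such pair is $\frac{25}{4}$ and $\frac{5}{4}$, with sum $\frac{15}{2}$:

```python
from fractions import Fraction
from math import gcd

n = 4
hits = []
for b in range(1, 100):
    for a in range(b + 1, 1000):
        if gcd(a, n) == 1 and gcd(b, n) == 1:     # both in lowest terms
            fa, fb = Fraction(a, n), Fraction(b, n)
            if fa - fb == fa / fb:                # difference == quotient
                hits.append((fa, fb))
print(hits)           # [(Fraction(25, 4), Fraction(5, 4))]
print(sum(hits[0]))   # 15/2
```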
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Subtle Twist on the Monty Hall Problem---Does It Make a Difference? In the Monty Hall problem, when the host picks a door and reveals an goat, does it make any difference if he did not know which door the real car was behind, and he just happened to pick a door with a goat?
|
OK, I'll turn my comment into a full answer:
Let $C$ be the event where you choose the car and let $G$ be the event that Monty Hall shows a goat. Then the probability that the other door has the car, given that Monty shows you a goat, is:
$P(\neg C|G)=\frac{P(\neg C)P(G|\neg C)}{P(\neg C)P(G|\neg C) + P(C)P(G|C)}=\frac{\frac{2}{3}\times\frac{1}{2}}{\frac{2}{3}\times\frac{1}{2} + \frac{1}{3}\times1} = \frac{1}{2}$, which is different from the original problem.
Intuitively, this is because Monty Hall is not really providing you with any additional information by showing you a goat (he obviously does provide information if he reveals the car).
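A quick Monte Carlo sketch (in Python, with my own door numbering) of this ignorant-host variant: conditioning on the rounds where the host happens to reveal a goat, switching wins only about half the time:

```python
import random

def trial(rng):
    car = rng.randrange(3)
    pick = rng.randrange(3)
    monty = rng.choice([d for d in range(3) if d != pick])  # ignorant host
    if monty == car:
        return None                   # host revealed the car: discard round
    other = next(d for d in range(3) if d not in (pick, monty))
    return other == car               # does switching win?

rng = random.Random(0)
results = [r for r in (trial(rng) for _ in range(100_000)) if r is not None]
print(sum(results) / len(results))    # ~ 0.5, not the 2/3 of the classic game
```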
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
Is the angle between a vector and a line defined?
Is the angle between a vector and a line defined?
The angle between two lines $a,b$ is defined as the smallest of the angles created.
The angle between two vectors $\vec{a},\vec{b}$ is the smallest angle one of them has to be rotated by so that the directions of $\vec{a},\vec{b}$ are the same.
I have not yet found a definition of the angle between a vector and a line, which makes me wonder if, and if so, how, it is defined.
|
You could just take a vector pointing along the direction of the line and use the angle between the original vector and this vector.
Of course, you would need to choose which way your new vector points: this will affect the angle you get. I would expect you to choose the orientation that minimizes the angle.
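In code, this convention (taking the orientation that minimizes the angle) amounts to putting an absolute value around the dot product; a small NumPy sketch (the function name is mine):

```python
import numpy as np

def angle_vector_line(v, d):
    """Angle (degrees) between vector v and the line with direction d,
    using the orientation of d that minimizes the angle."""
    c = abs(v @ d) / (np.linalg.norm(v) * np.linalg.norm(d))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

print(angle_vector_line(np.array([1.0, 0.0]), np.array([-1.0, 1.0])))  # ~ 45
```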
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Properties possessed by $H , G/H$ but not G i) Does there exist a group $G$ with a normal subgroup $H$ such that $H , G/H$ are abelian but $G$ is not ?
ii) Does there exist a group $G$ with a normal subgroup $H$ such that $H , G/H$ are cyclic but $G$ is not ?
|
Sure, here's an answer to both. Consider the quaternions, $Q_{8}$, and the normal subgroup $H = \langle i \rangle$. Then $Q_{8}/H \cong \mathbb{Z}_{2}$, but $Q_{8}$ is neither cyclic nor abelian.
Here's another more general example. Consider $D_{2n} = \langle r, s \mid r^{n}=s^{2} =1, rs=sr^{-1}\rangle$. The subgroup $H = \langle r \rangle$ is always normal, since $H$ has index $2$ in $G$, and $H$ is always cyclic. Furthermore, $D_{2n}/H \cong \mathbb{Z}_{2}$, and so this again answers both questions, since $D_{2n}$ is not abelian for $n\geq 3$.
You can see this will also generalize to any group with a cyclic subgroup of index $2$.
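The dihedral example is easy to check by machine; here is a Python sketch encoding $D_{2n}$ as pairs $(k,f)$ (rotation exponent, flip), using the relation $sr=r^{-1}s$:

```python
# D_2n as pairs (k, f): rotation r^k, flip s^f, with s*r = r^(-1)*s, i.e.
# (k1, f1) * (k2, f2) = ((k1 + (-1)**f1 * k2) mod n, f1 XOR f2)
n = 4
G = [(k, f) for k in range(n) for f in (0, 1)]
mul = lambda a, b: ((a[0] + (-1) ** a[1] * b[0]) % n, a[1] ^ b[1])

H = [(k, 0) for k in range(n)]     # the rotations: cyclic of order n
is_abelian = lambda S: all(mul(a, b) == mul(b, a) for a in S for b in S)

print(is_abelian(H), is_abelian(G))   # True False: H cyclic, D_8 not abelian
print(len(G) // len(H))               # 2: the quotient has order 2, hence Z_2
```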
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Finding multiple functions with same $f_{even}$ but different $f_{odd}$? A function can be decomposed as $f(x) = f_{even}(x) + f_{odd}(x)$ where $f_{even}(x)=\dfrac{f(x)+f(-x)}{2}$ and $f_{odd}(x)=\dfrac{f(x)-f(-x)}{2}$.
If we know only $f_{even}$, how can we find different values for $f_{odd}$ that work (we can't just plug in any function)? A graphical method works too.
|
There is a one-to-one correspondence between $A\times B=\{\text{even}\}\times \{\text{odd}\}$ and $C=\{\text{functions}\}$, given by
$$
F(a,b) = a+b
\\
G(f) = \left( x\mapsto \frac{f(x) + f(-x)}2,\; x\mapsto \frac{f(x) - f(-x)}2
\right)
$$
In linear algebra terms, $C = A\oplus B$.
In other words, $f_{even}$ carries no information at all about $f_{odd}$.
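Concretely: two functions with the same even part but freely chosen odd parts (a NumPy sketch; the sample functions are arbitrary):

```python
import numpy as np

xs = np.linspace(-2, 2, 9)
f1 = lambda x: np.cosh(x) + np.sin(x)   # even part cosh(x), odd part sin(x)
f2 = lambda x: np.cosh(x) + x**3        # same even part, different odd part

even = lambda f: (f(xs) + f(-xs)) / 2
odd = lambda f: (f(xs) - f(-xs)) / 2

print(np.allclose(even(f1), even(f2)))  # True: identical even parts
print(np.allclose(odd(f1), odd(f2)))    # False: the odd parts differ freely
```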
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Some questions about synthetic differential geometry I've been trying to read Kock's text on synthetic differential geometry but I am getting a bit confused. For example, what does it mean to "interpret set theory in a topos"? What is a model of a theory? Why does Kock use [[ ]] rather than { } for sets? Does it serve to indicate that these sets are not "classical"?
As a side question, are there any drawbacks to synthetic differential geometry compared to the usual approach? Are there any aspects of classical differential geometry that cannot be done synthetically, or require more effort and machinery? Can physical theories like general relativity be expressed synthetically? If so, does this make it easier or more difficult to perform calculations and simulations based on the synthetic formulation?
With regards to my background, I'm educated in "classical" differential geometry at the level of John Lee's series, I know a bit of general relativity from O'Neill, I'm familiar with elementary category theory at the level of Simmons' book, and I know the definition of a topos, but I don't know any categorical logic or model theory.
|
Any elementary topos comes with an internal language which allows you to formally import constructive logic and some set-theoretical notions into it. This enables you to manipulate objects (and arrows) inside it as though they were concrete sets. This is no coincidence: the notion of elementary topos was distilled by Lawvere as he worked to categorically axiomatize the category of sets. The double square brackets are indeed used to emphasize that the 'sets' need not be objects in the category of sets.
As far as provability goes, many classical results in differential geometry are elegantly expressible but not provable in the synthetic context. This paper discusses "constructing intuitionistic models of general relativity in suitable toposes".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Solve easy sums with Binomial Coefficient How do we get to the following results:
$$\sum_{i=0}^n 2^{-i} {n \choose i} = \left(\frac{3}{2}\right)^n$$
and
$$\sum_{i=0}^n 2^{-3i} {n \choose i} = \left(\frac{9}{8}\right)^n.$$
I guess I could prove it by induction. But is there an easy way to derive it?
Thank you very much.
|
For the first one, consider the binomial expansion of $(1+\frac{1}{2})^n$: the expansion is exactly the left-hand side, while evaluating the power gives the right-hand side $\left(\frac{3}{2}\right)^n$.
For the second, substitute $\frac{1}{8}$ instead, and note that $2^{-3i} = \left(\frac{1}{8}\right)^i$.
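Both identities can be verified exactly with rational arithmetic (a short Python sketch):

```python
from fractions import Fraction
from math import comb

for n in range(10):
    lhs1 = sum(Fraction(comb(n, i), 2**i) for i in range(n + 1))
    lhs2 = sum(Fraction(comb(n, i), 8**i) for i in range(n + 1))
    assert lhs1 == Fraction(3, 2) ** n
    assert lhs2 == Fraction(9, 8) ** n
print("both identities hold exactly for n = 0..9")
```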
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1010955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
showing an operator is normal If I have that H is an inner product space with inner product $( . , . )$ over the complex numbers, and $T∈L(H,H)$. Let $R=T+T^*$, $S=T-T^*$ .
If I suppose T is normal, how do I show that :
1) $T^*$ is normal and
2) $R∘S=T∘T-T^*∘T^*$
I'm having trouble even getting started on this problem. I appreciate any and all help. Thanks!
|
Definitions:

* $T$ normal $\iff TT^* = T^*T$
* $T^*$ normal $\iff T^*(T^*)^* = (T^*)^*T^*$

Furthermore we have the following properties of the adjoint:

* $(T^*)^* = T$
* $(R + S)^* = R^* + S^*$

Can you take it from there?
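Both claims can also be sanity-checked numerically; a NumPy sketch (my construction) builds a random normal matrix $T=UDU^*$ and tests the identities:

```python
import numpy as np

rng = np.random.default_rng(0)
# build a normal matrix T = U D U^* via a random unitary U
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)                      # Q from QR is unitary
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
T = U @ D @ U.conj().T
Ts = T.conj().T

R, S = T + Ts, T - Ts
print(np.allclose(T @ Ts, Ts @ T))          # True: T is normal (so is T*)
print(np.allclose(R @ S, T @ T - Ts @ Ts))  # True: R S = T^2 - (T^*)^2
```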
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
If the pot can contain twice as much in volume, is it twice as heavy? Suppose that I have two pots that look like similar cylinders (e.g. Mason jars). I know that one of them can contain twice as much volume as the other. If both are empty, is the bigger one twice as heavy as the other?
Intuitively I would say that it is less than twice as heavy, but I am not sure. Mathematically, I do a cross multiplication and conclude that the bigger one is twice as heavy, but it seems wrong to me. How to solve this problem?
Note: This is for an actual real life problem. I filled up the bigger pot and sealed it but I forgot to weigh it before and now I want to know how many grams of food it contains.
EDIT: The width of the walls seems to be the same. Can we find the weight of the bigger one given the weight of the smaller one?
|
No, it is not twice as heavy. The weight of the pot is determined by its mass which, for a fixed wall thickness, is proportional to its surface area rather than its volume.
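For geometrically similar jars with (roughly) equal wall thickness, the numbers work out as follows (a back-of-the-envelope Python sketch):

```python
# similar cylinders, same wall thickness: volume scales like s^3,
# shell mass (surface area) like s^2
s = 2 ** (1 / 3)                 # linear scale factor for twice the volume
surface_ratio = s ** 2
print(surface_ratio)             # ~ 1.587: the bigger empty jar weighs ~1.59x
```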
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Can anyone find out an counter example to this unitary statement? If $A$ and $B$ are unitary, then $A+B$ is not unitary.
I think this statement is true .
I tried to find out counter example but I failed.
|
The $0 \times 0$ matrix is unitary, thus it is a counterexample.
Excluding degenerate matrices, $1 \times 1$ matrices are the simplest; they're usually a good place to look for counterexamples if you aren't trying to exploit noncommutativity.
A $1 \times 1$ unitary matrix is simply a complex number of absolute value $1$. So the question becomes, can you find three complex numbers of absolute value $1$ such that $x+y=z$?
(if you have trouble with this, try thinking what this would mean geometrically)
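Geometrically, $e^{i\pi/3}$ and $e^{-i\pi/3}$ sum to $1$: three unit vectors forming an equilateral triangle. A one-line Python check:

```python
import cmath

# 1x1 unitary matrices are the unit-modulus complex numbers; these two
# unit vectors sum to 1, which is again unit-modulus:
a = cmath.exp(1j * cmath.pi / 3)
b = cmath.exp(-1j * cmath.pi / 3)
for z in (a, b, a + b):
    print(abs(z))        # ~ 1.0 each: a, b and a + b are all "unitary"
```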
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving the harmonic series, multiplied by a factor of 1/n, decreases monotonically to zero. I would like to show that $\left(1 + \frac 12 + \dots + \frac 1n\right) \cdot \left(\frac 1n\right)$ decreases monotonically to zero.
I have seen one method: to first show that the difference $\left(1 + \frac 12 + \dots + \frac 1n\right) - \log(n)$ decreases monotonically to the Euler-Mascheroni constant, but I was wondering if there is another (perhaps cleaner or more instructive) way of proving the limit is zero.
Any help would be greatly appreciated.
Thanks,
|
This is the arithmetic mean of the numbers $1, 1/2,\ldots,1/n$. If you append a number that is smaller than all the others, namely $1/(n+1)$, the mean decreases.
The limit is $0$ because
$$\frac{H_n}n\le\frac{1+\log n}{n}\to0,$$
using the standard bound $H_n\le 1+\int_1^n\frac{dx}{x}=1+\log n$.
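A quick Python check of both the monotone decrease and the standard bound $H_n\le 1+\log n$:

```python
from math import log

prev = float("inf")
h = 0.0
for n in range(1, 10001):
    h += 1 / n                    # h = H_n
    assert h / n < prev           # the sequence H_n / n strictly decreases
    assert h <= 1 + log(n)        # standard bound: H_n <= 1 + log(n)
    prev = h / n
print(prev)                       # H_10000 / 10000, roughly 0.00098
```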
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why does dividing by zero give us no answer whatsoever? I've heard about this and I know that division can be used in one way like this:
For example, if I want to do $30$ divided by $3$, how many times can I subtract $3$ from $30$ to get to $0$? Well, I can do it this way: $30-3=27-3=24-3=21-3=18-3=15-3=12-3=9-3=6-3=3-3=0$As you just saw, I subtract $3$ from $30$ 10 times to get to $0$. So, $30$ divided by $3$ is $10$. Well, what if I divide by a negative number or what if I divide a negative number by a positive number? Well, I do know how to solve $15$ divided by $-5$ this way. Just do it like in the mentioned way, but $15$ will get bigger if I subtract $-5$. So, what should I do. That's easy; do it backwards by subtracting the opposite of $-5$, or $5$: $15-5=10-5=5-5=0$. As you can see, I just did it backwards, and because I did it backwards, I need to subtract the number of times I subtract $5$, since it's the opposite of $-5$. So, I subtracted $-5$ -3 times. So, $15$ divided by $-5$ is $-3$.
As you have seen, I have used subtraction while doing division. Let's cut to the chase now. What if we divided any number by $0$? Well, this would happen: If I want to solve $10$ divided by $0$, I would just subtract $0$ from $10$ until I make the $10$ a $0$. So, $10-0=10-0=10-0=10-0=10-0=10-0=10-0=10...$ and it just keeps going on and on forever. So, this must explain why any number divided by zero has no answer at all. Also, for right here, right now, I can say it's infinity since it just goes on and on. So, is this why any number divided by $0$ never has an answer? Good answers at the bottom down there, if possible!
|
It is an interesting attempt, but you cannot conclude that any number $A$ divided by $0$ is infinity. We know that $-0=0$, so by the same reasoning $A$ divided by $0$ would have to equal $-\infty$ at the same time. Moreover, your attempt gives $0/0=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Categorial description of the subposet of $\prod_{i \in I}P_i$ of all $x \in \prod_{i \in I}P_i$ with $\{i \in I \mid x_i = \bot_{P_i}\}$ cofinite By a grounded poset, I mean a poset $P$ with a least element $\bot_P$.
Definition. Whenever $P$ is a family of grounded posets, write $\bigotimes_{i \in I}P_i$ for the subposet of $\prod_{i \in I}P_i$ consisting of all $x \in \prod_{i \in I}P_i$ such that $\{i \in I \mid x_i = \bot_{P_i}\}$ is cofinite.
Example. Let
* $\mathbf{N}$ denote the set $\{0,1,2,\ldots\}$ of natural numbers
* $\mathbb{N}$ denote the poset of natural numbers equipped with the usual order
* $\mathbb{N}^\times$ denote the poset of non-zero natural numbers equipped with the divisibility order
Then there is an order isomorphism $f : \bigotimes_{n \in \mathbf{N}}\mathbb{N} \rightarrow \mathbb{N}^\times$ given as follows.
$$x \mapsto \prod_{i \in \mathbf{N}}p_i^{x_i}$$
(where $p_i$ is the $i$th prime number.)
Question. Is there a sleek category-theoretic description of $\bigotimes_{i \in I}P_i$?
|
Your $\bigotimes_{i \in I} P_i$ is the colimit of all finite products of some of the $P_i$; in symbols, $\bigotimes_{i \in I} P_i =\mathop{\mathrm{colim}}_{F \subset I, |F|<\infty} \prod_{i \in F} P_i$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
If $y=-e^x\cos2x$, show that $\frac{d^2y}{dx^2}=5e^x\sin(2x+\tan^{-1}(\frac{3}{4}))$
If $y=-e^x\cos2x$, show that $\frac{d^2y}{dx^2}=5e^x\sin(2x+\alpha)$ where $\alpha=\tan^{-1}(\frac{3}{4})$.
I've managed to figure out that
$$
\frac{d^2y}{dx^2}=e^x(4\sin2x+3\cos2x)
$$
But I'm not sure how I can massage it into the form above. Wolfram Alpha lists it as an alternative form but doesn't show how you get there - what do I need to do and which trig identities are relevant here?
|
$4\sin2x+3\cos2x=5(\frac{4}{5}\sin2x+\frac{3}{5}\cos2x)$
Note we want $\frac{4}{5}\sin2x+\frac{3}{5}\cos2x=\sin(2x+\alpha)$ for some $\alpha$.
By compound angle formula, we know $$\sin(2x+\alpha)=\sin2x\cos(\alpha)+\cos2x\sin(\alpha)$$
So in order to fulfill the requirement, we only need to set $$\cos(\alpha)=\frac{4}{5},\sin(\alpha)=\frac{3}{5}$$
Such $\alpha$ exists.
Hence $5(\frac{4}{5}\sin2x+\frac{3}{5}\cos2x)=5\sin(2x+\alpha)$ with $\cos(\alpha)=\frac{4}{5},\sin(\alpha)=\frac{3}{5}$, that is $\tan(\alpha)=\frac{3}{4}$.
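A numerical confirmation of the identity (a NumPy sketch over a sample of points):

```python
import numpy as np

alpha = np.arctan(3 / 4)
xs = np.linspace(-3, 3, 201)
lhs = np.exp(xs) * (4 * np.sin(2 * xs) + 3 * np.cos(2 * xs))
rhs = 5 * np.exp(xs) * np.sin(2 * xs + alpha)
print(np.allclose(lhs, rhs))   # True
```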
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Group homomorphism, the uniqueness of k for g' = gk Group homomorphism is $ \Phi: G \rightarrow H $
Show, that for all $ h \in H $ and all $ g,g' \in \Phi^{-1}(\{h\}) $ there exists a unique $k \in \ker(\Phi) $, so that $g'=gk$.
$$
\forall h \in H, \forall g,g' \in \Phi^{-1}(\{h\})\exists! k \in \ker(\Phi): g'=gk
$$
I do not want a solution for this problem, just maybe a hint or a first step into the right direction, because I really don't know where to start.
|
Notice that $k=g^{-1}g'$ satisfies $g'=gk$, and $\Phi(k)=\Phi(g)^{-1}\Phi(g')=h^{-1}h=e$, so $k\in\ker(\Phi)$. Uniqueness follows by cancellation: $gk=gk'$ implies $k=k'$.
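A concrete check in a small example (a Python sketch using the hypothetical homomorphism $\Phi:\mathbb Z_{12}\to\mathbb Z_4$, $g\mapsto g\bmod 4$; the groups are written additively, so $g'=gk$ becomes $g'=g+k$):

```python
# G = Z_12, Phi(g) = g mod 4, ker(Phi) = {0, 4, 8}
G = range(12)
phi = lambda g: g % 4
ker = [g for g in G if phi(g) == 0]

for h in range(4):
    fiber = [g for g in G if phi(g) == h]        # Phi^{-1}({h})
    for g in fiber:
        for g2 in fiber:
            ks = [k for k in ker if (g + k) % 12 == g2]
            assert len(ks) == 1                  # exactly one k with g' = g + k
print("a unique k exists for every g, g' in every fiber")
```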
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1011960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Give all values of h for which the matrix A fails to be invertible Can someone please help me out with this? I know the answer is h = 8 and I know the determinant is 21h - 168 and I even know the steps to find those answers. For some reason this is giving me fits and I must be making silly mathematical mistakes that I keep missing. First I have been simply using cofactor expansion and cannot come up with the correct answer. Then I used row reduction where I had a seven and two zeros in the first column.
I came up with 21h - 28 at one point and 119h + 28 at one point.
Thanks to anyone taking a look at this. I know this is a pretty simple problem but I'm not getting something right. I am studying for a test and feel like I am understanding everything pretty well, but this one problem is making me crazy!
First I use two row reductions. R2 becomes R1(-2)+R2 and R3 becomes R1(-1)+R3
So I get {{7,-5,3},{0,3,-5},{0,-3,h-3}}
|
\begin{align*}
\operatorname{det}\begin{bmatrix} 7 & -5 & 3 \\ 14 & -7 & 1 \\ 7 & -8 & h \end{bmatrix}
&= 3\operatorname{det}\begin{bmatrix} 14 & -7 \\ 7 & -8 \end{bmatrix} - 1\operatorname{det}\begin{bmatrix} 7 & -5 \\ 7 & -8 \end{bmatrix} + h\operatorname{det}\begin{bmatrix} 7 & -5 \\ 14 & -7\end{bmatrix} \\
&= 3(14(-8) + 7(7)) - (7(-8) + 7(5)) + h(7(-7)+14(5))\end{align*}
From here, I suggest factoring $7$ out from every term. If you agree with what I have so far, then your problem is arithmetic, not linear algebra.
An easier way is to row-reduce to find $h$ such that the homogeneous system corresponding to the matrix has nontrivial kernel:
$$ \begin{bmatrix} 7 & -5 & 3 \\ 14 & -7 & 1 \\ 7 & -8 & h \end{bmatrix} \longrightarrow \begin{bmatrix} 7 & -5 & 3 \\ 0 & 3 & -5 \\ 0 & -3 & h-3 \end{bmatrix}\longrightarrow \begin{bmatrix} 7 & -5 & 3 \\ 0 & 3 & -5\\0 & 0 & h-8 \end{bmatrix}$$
The determinant is zero exactly when $h=8.$
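The computation is easy to double-check symbolically (a SymPy sketch):

```python
import sympy as sp

h = sp.symbols('h')
A = sp.Matrix([[7, -5, 3], [14, -7, 1], [7, -8, h]])
d = sp.expand(A.det())
print(d)                 # 21*h - 168
print(sp.solve(d, h))    # [8]
```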
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Reference textbook about proof techniques I am looking for some good recommended reference textbooks about proof techniques.
Someone told me "G. Polya - How to solve it" is kind of standard, but quite old.
I am looking for a book that handles both classical (manual work) proofs and modern proof techniques using proof assistants or automated theorem provers. Or two separate books each handling just one of these topics.
|
I personally recommend Problem Solving Strategies
and Putnam and Beyond.
These are math-olympiad oriented. But I think they are excellent sources and they include loads of examples that are certainly not your average textbook problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
An example for conditional expectation A factory has produced n robots, each of which is faulty with probability $\phi$. To each robot a test is applied which detects the faulty (if present) with probability $\delta$. Let X be the number of faulty robots, and Y the number detected as faulty.
Assuming the usual independence, determine the value of $\mathbb E(X|Y)$.
Please explain me the result in detail or give me a good hint pls, since I am very new to this concept (of conditional expectation).
Thanks!
|
Denote with $F$ the event that a robot is faulty, with $P(F)=\phi$, and with $T$ the event that it was tested faulty, with $$P(T|F)=\delta\qquad\text{and}\qquad P(T|F')=0$$ Thus, according to the law of total probability, the probability that a random robot is tested faulty is $$P(T)=P(T|F)P(F)+P(T|F')P(F')=\delta\cdot\phi+0=\delta\cdot\phi$$ Now, if a robot was not tested faulty, then the probability that it is nevertheless faulty is $$P(F|T')=\frac{P(T'|F)P(F)}{P(T')}=\frac{(1-\delta)\phi}{1-\delta\phi}$$ Assume now that $Y$ robots were found faulty; then $$E[X|Y]=Y+(n-Y)\cdot\frac{(1-\delta)\phi}{1-\delta\phi}=\frac{n\phi(1-\delta)+(1-\phi)Y}{1-\delta\phi}$$ since, by the above calculations, if there were $Y$ robots tested faulty, then these $Y$ are certainly faulty, and each of the remaining $n-Y$ robots that were not tested faulty is faulty with probability $\frac{(1-\delta)\phi}{1-\delta\phi}$.
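A Monte Carlo sanity check of the formula (a Python sketch with arbitrary parameter values $n=50$, $\phi=0.3$, $\delta=0.6$, conditioning on the rounds where $Y=10$):

```python
import random

n, phi, delta = 50, 0.3, 0.6
rng = random.Random(1)

by_y = {}
for _ in range(50_000):
    faulty = [rng.random() < phi for _ in range(n)]
    detected = sum(f and rng.random() < delta for f in faulty)
    by_y.setdefault(detected, []).append(sum(faulty))

y = 10
emp = sum(by_y[y]) / len(by_y[y])
theory = (n * phi * (1 - delta) + (1 - phi) * y) / (1 - delta * phi)
print(emp, theory)   # both approximately 15.85
```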
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Clarification: Proof of the quotient rule for sequences My Problem
I am currently looking for a proof for the quotient rule for sequences:
$a_n$ and $b_n$ are two sequences with the limits $a,b$. So:
When
$ a_n \rightarrow a$
and
$ b_n \rightarrow b$
Then:
$\frac {a_n}{b_n} \rightarrow \frac{a}{b}$
Awesome stuff, but how do I prove it?
Solving attempts
I found a proof in my textbook, but I have a hard time understanding it. It goes as follows.
To prove the quotient rule, we have to acknowledge: $b \ne 0$.Then
$b_n \rightarrow b > 0 $
Thus, for $\epsilon = \frac{|b|}{2}$, there is an $n_0$ such that $|b_n|>|b|-\epsilon= \frac{|b|}{2}$ for every $n>n_0$. For those $n$ (this is the part I don't get) you can say:
$$| \frac {1}{b_n}-\frac {1}{b}|=|\frac{b-b_n}{b_nb}| \le \frac {2}{|b|^2}|b-b_n|$$
Because of the factor rule (the analogue of the quotient rule, just for factors), the right side of the inequality goes to $0$. Because of rule 22.3 you can conclude that $\frac{1}{b_n} \rightarrow \frac{1}{b}$, and if you apply the product rule again, also that
$\frac{a_n}{b_n} \rightarrow \frac {a}{b}$.
Rule 22.3:
$\alpha_n$ is a null sequence. If the inequality $|a_n- a|\le \alpha_n$ is valid from a certain point on (with finitely many exceptions), then $a_n \rightarrow a$.
I understand the beginning of the proof, but not how we get to the right side of the inequality. I mean, I understand why the thing on the right is a null sequence, but how does this prove our point?
If somebody could clarify or point me to another proof of this rule, maybe one that is easier to understand, I would be very grateful.
|
If, for any real $\epsilon > 0$, there exists an $n \in \mathbb{Z}$ such that $\forall i > n, |a_i - c| < \epsilon $, then $c$ is defined as the limit of the sequence of $a_i$.
Assuming the limit of sequence $a_i$ is $a > 0$, and the limit of $b_i$ is $b > 0$, we want to prove the limit of $\dfrac{a_i}{b_i} = \dfrac{a}{b}$.
If we apply the limit definitions to $a_i$ and $b_i$ at $\delta < \frac{b}{2}$, how different can $\dfrac{a_i}{b_i}$ be from $\dfrac{a}{b}$?
We apply the limit definition and get an $n_a$ such that $\forall i \ge n_a, |a_i - a| < \delta$, and an $n_b$ such that $\forall i \ge n_b, |b_i - b| < \delta$. Let $n' = \max(n_a, n_b)$. We know that for all $i > n'$, both $|a_i - a| < \delta$ and $|b_i - b| < \delta$. We can also say this as $a - \delta \le a_i \le a + \delta$ and $b - \delta \le b_i \le b + \delta$.
So $\dfrac{a - \delta}{b + \delta} \le \dfrac{a_i}{b_i} \le \dfrac{a+\delta}{b-\delta}$. How far do these minimum and maximum bounds stray from $\dfrac{a}{b}$?
$$\begin{align}
\frac{a}{b} - \frac{a-\delta}{b+\delta} & = \frac{a(b+\delta) - (a-\delta)b}{b(b+\delta)}\\
& = \frac{ab + a\delta - ab + \delta b}{b(b+\delta)} \\
& = \frac{(a+b)\delta}{b(b+\delta)}\\
& < \frac{a+b}{b^2}\delta
\end{align}$$
$$\begin{align}
\frac{a + \delta}{b - \delta} - \frac{a}{b} & = \frac{(a + \delta)b - a(b - \delta)}{b(b - \delta)}\\
& = \frac{ab + \delta b - ab + a\delta}{b(b - \delta)}\\
& = \frac{a + b}{b(b - \delta)}\delta\\
& < \frac{a + b}{b\frac{b}{2}}\delta && \text{remember $\delta < \frac{b}{2}$}\\
\frac{a + \delta}{b - \delta} - \frac{a}{b} & < \frac{a + b}{b^2}2\delta
\end{align}
$$
So now we can prove the limit of $\dfrac{a_i}{b_i}$. When we are given an $\epsilon > 0$, we can use the limits of $a_i$ and $b_i$ for the convenient $\delta = \min(\frac{b}{2}, \frac{b^2}{2(a+b)}\epsilon)$ to get $n_a$ and $n_b$. Then we determine $n' = \max(n_a,n_b)$. Then (omitting the trivial $\frac{b}{2} < \frac{b^2}{2(a+b)}\epsilon $ case) we know that for all $i > n'$,
$$\begin{align}
|\frac{a_i}{b_i} - \frac{a}{b}| & \le \frac{2(a+b)}{b^2}\delta \\
& \le \frac{2(a+b)}{b^2}\frac{b^2}{2(a+b)}\epsilon \\
& \le \epsilon
\end{align}$$
And so we fulfill the definition of the limit.
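The statement itself is easy to illustrate numerically (a Python sketch with arbitrary sample sequences $a_n=2+1/n\to2$ and $b_n=3+1/n\to3$):

```python
# the quotient a_n / b_n approaches a / b = 2/3
for n in (10, 100, 1000, 10000):
    a_n = 2 + 1 / n
    b_n = 3 + 1 / n
    print(n, abs(a_n / b_n - 2 / 3))   # the error shrinks toward 0
```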
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does a vector space need to be closed? What part of the definition of a vector space (see here) requires it to be closed under addition and multiplication by a scalar in the field? I would understand if we defined a vector space as a group of vectors rather then a set but we don't, also non of the axioms require this to be a condition?
|
Because addition and multiplication (by a scalar) are, by definition, functions from $V \times V$ to $V$ and from $K \times V$ to $V$, respectively; closure is built into the requirement that these operations take values in $V$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that a positive polynomial function can be written as the squares of two polynomial functions
Let $f(x)$ be a polynomial function with real coefficients such that $f(x)\geq 0 \;\forall x\in\Bbb R$. Prove that there exist polynomials $A(x),B(x)$ with real coefficients such that $f(x)=A^2(x)+B^2(x)\;\forall x\in\Bbb R$
I don't know how to approach this, apart from some cases of specific polynomials that turned out really ugly. Any hints to point me to the right direction?
|
The survey article by Bruce Reznick called Some Concrete Aspects of Hilbert's 17th Problem includes your case in the paragraph on results from before 1900.
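For a concrete instance of this classical result: one standard route is to factor $f$ over $\mathbb C$, write $f=|Q|^2$ for a polynomial $Q$ with complex coefficients, and take $A=\operatorname{Re}Q$, $B=\operatorname{Im}Q$. A SymPy sketch for the sample polynomial $f(x)=x^2+x+1$ (my example):

```python
import sympy as sp

x = sp.symbols('x')
# f(x) = x^2 + x + 1 > 0 on R; a complex root gives Q(x) = x + 1/2 + i*sqrt(3)/2,
# and f = |Q|^2 = (Re Q)^2 + (Im Q)^2:
A = x + sp.Rational(1, 2)
B = sp.sqrt(3) / 2
print(sp.expand(A**2 + B**2))   # x**2 + x + 1
```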
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 3,
"answer_id": 1
}
|
Limit cycle of dynamical system $x'=xy^2-x-y$, $y'=y^3+x-y$ Consider a planar ODE system $z'=f(z)$ with $z=(x,y)$,
$$
f(x,y)=(xy^2-x-y,y^3+x-y).
$$
Using polar coordinates, one can get
$$
r'=r(r^2\sin^2\theta-1),\quad \theta'=1.
$$
With Mathematica one can get
As one can see from the figure, there is a limit cycle for the system. I tried to apply the Poincaré–Bendixson theorem to show the existence of the limit cycle by using some negatively invariant annular region. But there seems to be no hope of finding such a region. How should I proceed?
|
Using the change of variables $(u,v)=(x+y,y)$, the $(x,y)$-differential system is equivalent to the $(u,v)$-differential system $$u'=uv^2-2v,\qquad v'=v^3+u-2v$$ In particular, $$(u^2+2v^2)'=2(uu'+2vv')=2v^2(u^2+2v^2-4)$$
This shows that the ellipse $(E)$ of equation $u^2+2v^2=4$ is invariant by the dynamics. In the $(x,y)$-plane, the equation of $(E)$ is $$x^2+2xy+3y^2=4.$$ Starting from every point inside $(E)$, one converges to $(x_\infty,y_\infty)=(0,0)$. Starting from every point outside $(E)$, one diverges in the sense that $x(t)^2+y(t)^2\to+\infty$.
Finally, starting from every point on $(E)$, one cycles on $(E)$ counterclockwise with time period $$\int_0^{2\pi}\frac{\mathrm dt}{\sqrt2-\sin(2t)}=2\pi.$$ To evaluate the period, recall that, for every $a\gt1$, $$\int_0^{2\pi}\frac{\mathrm dt}{a+\sin(t)}=\frac{2\pi}{\sqrt{a^2-1}}.$$
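These claims can be verified numerically; a self-contained Python sketch (plain fixed-step RK4, step count chosen ad hoc) integrates the original $(x,y)$-system for time $2\pi$ starting on $(E)$:

```python
import math

def f(x, y):
    return x * y**2 - x - y, y**3 + x - y

# classic RK4, starting on the ellipse x^2 + 2xy + 3y^2 = 4 at (2, 0)
x, y = 2.0, 0.0
N = 50_000
h = 2 * math.pi / N                  # integrate over one claimed period t = 2*pi
for _ in range(N):
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

print(x**2 + 2 * x * y + 3 * y**2)   # stays ~ 4: the ellipse is invariant
print(x, y)                          # back near (2, 0): the period is 2*pi
```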
Edit: Some simulations of the systems $x'=x(y^2−c)−y$, $y'=y(y^2−c)+x$ for some $c\gt0$. The cycle of each such system is the ellipse $(E_c)$ with equation $$x^2+2cxy+(1+2c^2)y^2=2c(1+c^2).$$
For $c=4$: streamplot[{x(y^2-4)-y,y(y^2-4)+x},{x,-20,+20},{y,-5,+5}]
For $c=.2$: streamplot[{x(y^2-.2)-y,y(y^2-.2)+x},{x,-2,+2},{y,-2,+2}]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
}
|
Is there a real number $r$ such that $\sum\limits_{k=0}^{\infty}\frac{p_k}{r^k}=e$? Let $p_n$ denote the sequence of prime numbers, with $p_0=2$.
I'm looking for a real number $r$ such that $\sum\limits_{k=0}^{\infty}\frac{p_k}{r^k}=e$.
It's easy to show that $r>5$, with $\frac{2}{5^0}+\frac{3}{5^1}+\frac{5}{5^2}=2.8>e$.
I suppose it shouldn't be too hard to show that $r<6$.
So we know that $5<r<6$ (my observation shows that $r\approx5.7747052$).
But does it prove that there must be some value of $r\in\mathbb{R}$ such that $\sum\limits_{k=0}^{\infty}\frac{p_k}{r^k}=e$?
|
Write $f(x) = \sum_k p_k x^k$. Assume first that the series for $f(1/4)$ converges. Then the function $f(x)$ is defined and continuous on $[0,1/4)$. You've shown $f(1/5) > e$, and clearly $f(0) = 2$. By the intermediate value theorem, there must be some $c \in (0,1/5)$ such that $f(c) = e$. You can take $r = 1/c$.
It remains to show that $f(1/4)$ is finite. The convergence of this series will follow from the root test if we can establish, for instance, that $p_n < 3^n$. I imagine that there is an easier way to show this, but at minimum this follows by induction from Bertrand's postulate.
I don't believe the general arguments you gave are sufficient unless the series is shown to converge at some point $d$ for which $f(d) \geq e$, which will inevitably involve some estimate of the growth of $p_n$.
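One can also locate the root numerically; here is a self-contained Python sketch (sieve bound and term count chosen ad hoc) that bisects $f(c)=e$ on $[0,1/5]$ and recovers the observed $r\approx5.7747$:

```python
import math

def primes(limit):                  # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, ok in enumerate(sieve) if ok]

P = primes(2000)                    # ~300 terms; p_k * c**k is negligible beyond

def f(c):                           # f(c) = sum_k p_k c^k, with p_0 = 2
    return sum(p * c**k for k, p in enumerate(P))

lo, hi = 0.0, 0.2                   # f is increasing and f(0) = 2 < e < f(0.2)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if f(mid) > math.e else (mid, hi)
print(1 / lo)                       # ~ 5.7747, matching the observed value of r
```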
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1012940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Approximation of $f\in L_p$ with simple function $f_n\in L_p$ Let us use the definition of Lebesgue integral on $X,\mu(X)<\infty$ as the limit$$\int_X fd\mu:=\lim_{n\to\infty}\int_Xf_nd\mu=\lim_{n\to\infty}\sum_{k=1}^\infty y_{n,k}\mu(A_{n,k})$$where $\{f_n\}$ is a sequence of simple, i.e. taking countably (infinitely or finitely) many values $y_{n,k}$ for $k=1,2,\ldots$, functions $f_n:X\to\mathbb{C}$ uniformly converging to $f$, and $\{y_{n,k}\}=f_n(A_{n,k})$ where $\forall i\ne j\quad A_{n,i}\cap A_{n,j}=\emptyset$.
I know that if any sequence $\{f_n\}\subset L_p(X,\mu)$, $p\geq 1$ uniformly converges to $f$ then it also converges with respect to the norm $\|\cdot\|_p$ to the same limit, which is therefore an element of $L_p(X,\mu)$.
I read in Kolmogorov-Fomin's Elements of the Theory of Functions and Functional Analysis (1963 Graylock edition, p.85) that hence an arbitrary function $f\in L_2$ can be approximated [with respect to the norm $\|\cdot\|_2$, I suspect] arbitrarily well by simple functions belonging to $L_2$.
I do not understand how the last statement is deduced. If it were true, I would find the general case for $L_p$, $p\geq 1$, even more interesting. I understand that if $f\in L_p\subset L_1$ then for all $\varepsilon>0$ there exists a simple function $f_n\in L_1$ such that $\forall x\in A\quad |f(x)-f_n(x)|<\varepsilon$ and then, for all $p\geq 1$, $\|f-f_n\|_p<\varepsilon$, but how to find $f_n\in L_p$ (if $p=2$ or in general $p>1$)?
Can anybody explain such a statement? Thank you so much!!!
|
In many sources, simple functions are those measurable functions that have a finite set of values. But here a countably infinite set of values is allowed. This allows one to easily approximate any real-valued measurable function $f$ uniformly by simple functions (a complex-valued $f$ is handled by approximating its real and imaginary parts separately): for example, let
$$
f_n(x) = \frac{\lfloor n f(x)\rfloor }{n}
$$
and observe that $f-\frac{1}{n}\le f_n\le f$ everywhere. On a finite measure space, this implies $f_n\to f$ in $L^p$ norm for every $p\in [1,\infty]$.
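A quick numerical illustration of the uniform bound (a Python sketch; the test function and grid are arbitrary choices):

```python
import numpy as np

def simple_approx(f_vals, n):
    """Pointwise values of the simple function f_n = floor(n f) / n."""
    return np.floor(n * f_vals) / n

x = np.linspace(0.0, 1.0, 10_000)
f = np.sin(5 * x) + x**2          # an arbitrary bounded measurable function

for n in (1, 10, 100, 1000):
    err = np.max(np.abs(f - simple_approx(f, n)))
    assert err <= 1 / n + 1e-12   # uniform bound: f - 1/n <= f_n <= f
    print(n, err)
```

Because the error is uniform, on a finite measure space $\|f-f_n\|_p \le \mu(X)^{1/p}/n \to 0$ for every $p\in[1,\infty]$.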
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1013057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solve to find $y(x)$ of the $\frac{1}{\sum_{n=0}^{\infty }y^n}-\sum_{n=0}^{\infty }x^n=0$ Solve the equation to find $y$ as a function of $x$, with no $n$ appearing: $$\frac{1}{\sum_{n=0}^{\infty }y^n}-\sum_{n=0}^{\infty }x^n=0$$
|
Assuming $|x| < 1$ and $|y| < 1$, the two geometric series simplify to:
$$\frac{1}{\frac{1}{1 - y}} = \frac{1}{1 - x}.$$
Consequently:
$$(1 - x)(1 - y) = 1.$$
Solving for $y$ in terms of $x$, you get:
$$1 - y = \frac{1}{1 - x}$$
which implies that
$$y(x) = 1 - \frac{1}{1 - x} = \frac{x}{x - 1}.$$
Note that $|y(x)| < 1$, so that $|x/(x - 1)| < 1$. This implies that:
$$|x| < |x - 1|,$$
which further implies that
$$x^2 < (x - 1)^2 = x^2 - 2x + 1.$$
Consequently,
$$x < \frac{1}{2}.$$
Combined with the standing assumption $|x| < 1$, the solution is therefore valid for $-1 < x < \frac{1}{2}$.
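The solution can be sanity-checked numerically against truncated partial sums (a Python sketch; the sample point $x = 0.3$ and the truncation at 200 terms are arbitrary choices):

```python
def y_of_x(x):
    """The solution y(x) = x / (x - 1), valid for -1 < x < 1/2."""
    return x / (x - 1)

def geom_sum(t, terms=200):
    """Truncated geometric series: sum of t**n for n = 0 .. terms-1."""
    return sum(t**k for k in range(terms))

x = 0.3
y = y_of_x(x)                          # y = -3/7
residual = 1 / geom_sum(y) - geom_sum(x)
print(y, residual)                     # residual is essentially zero
```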
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1013138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to find coordinates of reflected point? How can I find the coordinates of a point reflected over a line that may not necessarily be any of the axis?
Example Question:
If P is a reflection (image) of point (3, -3) in the line $2y = x+1$, find the coordinates of Point P.
I know the answer is $(-1,5)$ by drawing a graph but other than that, I cannot provide any prior workings because I don't know how to start...
|
Similar answer to @Vrisk, but a bit faster
Consider the line $L\colon Ax+By+C=0$ and find the image of the point $(u,v)$, assuming $Au + Bv + C \neq 0$ (i.e. the point does not lie on the line).
The line has unit normal vector $$ \hat{n} = \frac{<A,B>}{\sqrt{A^2 +B^2} } $$
Now, consider the signed perpendicular distance of this point from the line:
$$ d = \frac{Au +Bv +C}{\sqrt{A^2 +B^2}}$$
Depending on the sign of the above quantity, we can tell on which side of the line the point lies (see here).
If we translate the point by $-2d\,\hat{n}$, that is, by twice the signed distance along the unit normal (toward and past the line), we reach the image point. Hence, the image coordinates $(I)$ are given by the vector:
$$I =<u,v>- \frac{2(Au+Bv+C)}{A^2 +B^2} <A,B>$$
Or more concisely:
$$I= <u,v> -2d \hat{n}$$
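The formula translates directly into code; a minimal Python sketch, checked against the example in the question:

```python
def reflect(u, v, A, B, C):
    """Reflect the point (u, v) across the line A*x + B*y + C = 0."""
    d = (A * u + B * v + C) / (A**2 + B**2)   # signed distance / sqrt(A^2+B^2)
    return (u - 2 * d * A, v - 2 * d * B)

# The question's example: reflect (3, -3) in 2y = x + 1,
# rewritten as x - 2y + 1 = 0, i.e. A = 1, B = -2, C = 1.
print(reflect(3, -3, 1, -2, 1))  # → (-1.0, 5.0)
```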
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1013230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 7,
"answer_id": 4
}
|
How to find $P(S_1 \cap S_2^C | K_i)$ given $P(S_1 \cap S_2 | K_i)$ From medical investigations it is known that the symptoms $S_1$ and $S_2$ can appear with three different diseases $K_1, K_2, K_3$. The conditional probabilities $a_{i,j}=P(S_j|K_i), i \in \{1,2,3\}, j \in \{1,2\}$ are given by the following matrix.
$$ A= (a_{i,j}) = \left(
\begin{array}{cc}
0.8 & 0.3 \\
0.2 & 0.9 \\
0.4 & 0.6 \\
\end{array}
\right)$$
In the first part of the question I already calculated $P(S_j)$ and $P(K_i|S_j)$. For the second part of the question, I'm given the conditional probabilities $P(S_1 \cap S_2 |K_i)$ by the following vector $(0.2, 0.1, 0.3)$. Assuming that a patient shows symptoms $S_1$, but not $S_2$, what is the probability that he suffers from $K_1, K_2$ and $K_3$?
So I'm looking for $P(K_i|S_1 \cap S_2^C)$. I can get $P(S_1 \cap S_2^C)$ using $P(S_1 \cap S_2^C) = P(S_1) - P(S_1 \cap S_2)$, but I have problems getting the joint distribution $P(S_1 \cap S_2, K_i)$.
First I was trying to show that $S_1$ and $S_2$ are independent and use that to derive the joint probability, but they are not. Alternatively, is it true that the formula $P(S_1 \cap S_2^C) = P(S_1) - P(S_1 \cap S_2)$ remains true when conditioning on K, so that I have $P(S_1 \cap S_2^C | K_i) = P(S_1 | K_i) - P(S_1 \cap S_2 | K_i)$? Or can anybody help me by providing an alternative way to solve this?
|
The formula you mention remains true under conditioning, so it can be used:
$$P\left(S_{1}\mid K_{i}\right)=\frac{P\left(S_{1}\cap K_{i}\right)}{P\left(K_{i}\right)}=\frac{P\left(S_{1}\cap S_{2}\cap K_{i}\right)}{P\left(K_{i}\right)}+\frac{P\left(S_{1}\cap S_{2}^{c}\cap K_{i}\right)}{P\left(K_{i}\right)}=P\left(S_{1}\cap S_{2}\mid K_{i}\right)+P\left(S_{1}\cap S_{2}^{c}\mid K_{i}\right)$$
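To push through to the posterior one still needs the priors $P(K_i)$, which are computed in the first part of the exercise and not shown here; the Python sketch below assumes uniform priors purely for illustration:

```python
# ASSUMPTION: uniform priors P(K_i) = 1/3 -- not given in this excerpt;
# substitute the actual priors from the first part of the exercise.
priors = [1/3, 1/3, 1/3]

p_s1_given_k = [0.8, 0.2, 0.4]        # first column of A: P(S1 | K_i)
p_s1s2_given_k = [0.2, 0.1, 0.3]      # given vector: P(S1 ∩ S2 | K_i)

# P(S1 ∩ S2^c | K_i) = P(S1 | K_i) - P(S1 ∩ S2 | K_i)
likelihood = [a - b for a, b in zip(p_s1_given_k, p_s1s2_given_k)]

# Bayes: P(K_i | S1 ∩ S2^c) is proportional to P(S1 ∩ S2^c | K_i) * P(K_i)
joint = [l * p for l, p in zip(likelihood, priors)]
posterior = [j / sum(joint) for j in joint]
print(posterior)  # approximately [0.75, 0.125, 0.125] under these priors
```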
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1013347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|