| Q | A | meta |
|---|---|---|
Connectivity and Euler characteristic for surfaces I learn the concept of connectivity from Hilbert's Geometry and the Imagination as follows:
A polyhedron is said to have connectivity $h$ (or to be $h$-tuply connected) if $h-1$, but not $h$, chains of edges can be found on it in a certain order that do not cut the surface in two, where it is stipulated that the first chain is closed and that every subsequent chain connects two points lying on the preceding chains.
Then he claimed the relationship between connectivity of a polyhedron and the Euler characteristic:
$$\chi=V-E+F=3-h$$
In the next section, he defines connectivity for general surfaces (you can find it on Google Books), including surfaces with boundary and surfaces without boundary. However, it seems that $\chi=3-h$ doesn't hold in the general case. I want to see why the relation works for a polyhedron, but I couldn't find the same concept via Google. I need a reference for that concept. Thanks!
|
It may be hard to find because I'm not sure that this term is used very much; at the very least, I have never really heard it used. In fact, using it we find the unusual descriptions that
* $S^{2}$ is 1-connected
* $T^2 = S^1 \times S^{1}$ is 3-connected
* More generally, a Riemann surface of genus $g$ is $(2g + 1)$-connected.
Anyhow, as to how one can see this:
Loosely speaking, choose a "triangulation" of your surface (which has no boundary) with only one vertex, chosen so that the $h-1$ curves we will cut along are the 1-cells. If we do so, there will be only one face on the surface (after you perform all of the cuts, the result is a polygon with ${2h-2}$ edges; if there were more than one face, you could cut further). If we then put this together in the formula for the Euler characteristic, we have that
$$
V - E + F = 1 - (h-1) + 1 = 3 - h
$$
as claimed.
If this seems a bit weird, I recommend following through this argument with a torus. It's not too hard to pick the appropriate "triangulation", from which you can see the end result.
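This bookkeeping can be checked in a couple of lines; a minimal sketch, assuming the one-vertex cell structure on a closed orientable genus-$g$ surface (so $V=1$, $E=h-1=2g$, $F=1$, and $h=2g+1$ as above):

```python
# One-vertex cell structure on a closed orientable genus-g surface:
# V = 1 vertex, E = 2g edge loops, F = 1 face, connectivity h = 2g + 1.
for g in range(10):
    V, E, F = 1, 2 * g, 1
    h = 2 * g + 1
    assert V - E + F == 3 - h  # chi = 2 - 2g = 3 - h
```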
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that a counterexample exists without knowing one I strive to find a statement $S(n)$ with $n \in \mathbb{N}$ that can be proven not to hold for all $n$, despite the fact that no one knows a counterexample, i.e. it holds true for all $n$ ever tested so far. Any help?
|
Define $k$ to be 42 if the Riemann Hypothesis is true, and 108 if it is false.
Now consider $S(n) \equiv n\ne k$.
Alternatively consider $S(n)$ to be "There is a two-symbol Turing machine with 100 states that runs for at least $n$ steps when started on an empty tape, but eventually terminates".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/710950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 12,
"answer_id": 2
}
|
Rubik's cube: number of alternative solutions If a cube is in a configuration that requires 20 moves to solve, is that sequence unique, or are there multiple sequences that arrive at a solution? That is: are there two or more sequences that have only the start and finish position in common?
|
Of course there can be multiple possible solutions.
Before it was proved (via computer search) that every position requires 20 moves or fewer, the Superflip was shown to require exactly 20 moves to solve. (See also here.) One sequence of moves which solves the Superflip is
U R2 F B R B2 R U2 L B2 R U' D' R2 F R' L B2 U2 F2
However, the Superflip is symmetric in both rotations of the cube and in reflections of the cube. Therefore, one can modify the above sequence of moves to solve the position in a variety of other ways.
For example:
U' L2 F' B' L' B2 L' U2 R' B2 L' U D L2 F' L R' B2 U2 F2 (reflection)
R D2 F B D B2 D R2 U B2 D R' L' D2 F D' U B2 R2 F2 (rotation)
Also, the Superflip when executed twice, results in the solved cube. So the reverse sequence works also:
F2 U2 B2 L' R F' R2 D U R' B2 L' U2 R' B2 R' B' F' R2 U' (reverse)
In summary, employing various kinds of symmetry you can easily generate multiple sequences for solving the same position.
Notes
* Reflecting the cube across a plane parallel to faces R and L interchanges U with U', D with D', F with F', B with B', R with L', and L with R'. This was used to generate the reflection sequence.
* Rotating the cube about the axis perpendicular to F and B is just a cycle on the faces R, U, L, D.
* Reversing a sequence of moves is done by writing the sequence in reverse order and interchanging clockwise (no ') with counterclockwise (').
* I have verified by computer that the above sequences of moves all generate / solve the Superflip.
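The reversal rule described in the notes is mechanical enough to script; a minimal sketch (the function names are mine, not from the original post):

```python
def invert_move(m: str) -> str:
    """Invert a single face turn: U -> U', U' -> U, U2 -> U2."""
    if m.endswith("2"):
        return m
    return m[:-1] if m.endswith("'") else m + "'"

def reverse_sequence(seq: str) -> str:
    """Write the moves in reverse order and invert each one."""
    return " ".join(invert_move(m) for m in reversed(seq.split()))

superflip = "U R2 F B R B2 R U2 L B2 R U' D' R2 F R' L B2 U2 F2"
print(reverse_sequence(superflip))
# F2 U2 B2 L' R F' R2 D U R' B2 L' U2 R' B2 R' B' F' R2 U'
```

Applied to the Superflip sequence above, this reproduces the reverse sequence listed earlier.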
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is this a poorly worded probability question? Unsolvable? The question says: "For a recent year, 0.99 of the incarcerated population is adults and 0.07 is female. If an incarcerated person is picked at random, find the probability that the person is female given they are an adult."
I've been thinking about this for more than 4 hours and it just doesn't seem solvable to me.
We need the intersection (the percentage of those that are both female and adult) to use the following formula, but there is no way to find it.
${\mathbb{P(}F|A)= \mathbb{P}(F\cap A)}/{\mathbb{P}(A)}$
I tried to solve a similar but simpler problem that I made up such as:
In a population of 10 people, 8 of them(80%) are adults, and 4 of the total population are females(40%); what is the percentage of female adults?
I tried to visualize it as a set: $\{1, 3, 6, 8, 10, 12, 14, 16, 18, 20\}$ and ask: if 80% of the set elements are even and 40% of the set elements are divisible by 3, what is the percentage of even numbers that are divisible by 3 in the set?
Is the original question really flawed? Or has my brain stopped working because I can't think clearly anymore after staring at that question?
|
On the assumption of independence the answer is obviously $0.07$.
That assumption is not necessarily reasonable. So indeed the question is poorly worded.
Imagine as an extreme case that no female children are put in jail. Then the probability that a jailed person is female, given that the person is an adult, is $\frac{0.07}{0.99}$.
As an opposite extreme case, suppose that no male children are put in jail. Then the probability is $\frac{0.06}{0.99}$. This is because the females account for all the child prisoners, leaving the proportion $0.06$ of the total that are both adult and female.
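The two extremes can be reproduced with a few lines of arithmetic (variable names are mine; the probabilities are the ones given in the question):

```python
p_adult = 0.99   # P(adult)
p_female = 0.07  # P(female)
p_child = 1 - p_adult

# Extreme 1: no female children in jail, so P(female and adult) = 0.07.
upper = p_female / p_adult
# Extreme 2: no male children in jail, so females account for all children
# and P(female and adult) = 0.07 - 0.01 = 0.06.
lower = (p_female - p_child) / p_adult

print(round(lower, 4), round(upper, 4))
# 0.0606 0.0707
```

Any value between these two bounds is consistent with the data given, which is the sense in which the question is underdetermined.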
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Has this theorem (on existence of inverses) an analogue for unbounded operators? Let $S,T:X\to X$ be bounded linear operators, where $X$ is a Banach space. It is a consequence of the Banach fixed point theorem that if $T$ is invertible and $\|T-S\|\|T^{-1}\|<1$, then $S$ is invertible.
I would like to know whether there is a similar result for unbounded operators that are defined only on a dense subset of $X$. More precisely, let $T,S:Y\subset X\to X$ be linear operators, where $X$ is a Banach space and $Y$ is dense in $X$ (you may suppose $X$ is a Hilbert space if you want). Suppose that:
(1) $T$ is invertible;
(2) $T^{-1}$ and $T-S$ are bounded;
(3) $\|T-S\|\|T^{-1}\|<1$.
Is $S$ invertible?
Thanks.
|
From the inequality $\lVert T-S\rVert \lVert T^{-1}\rVert < 1$, it follows that
$$R = (I - T^{-1}(T-S))$$
is an invertible bounded operator (invertible by the Neumann series, since $\|T^{-1}(T-S)\| \le \|T^{-1}\|\,\|T-S\| < 1$). Since $TR = T(I - T^{-1}(T-S)) = T - (T-S) = S$, we get the representation $S^{-1} = R^{-1}T^{-1}$ in the bounded case. Now we can verify that $R^{-1}T^{-1}$ is also the inverse of $S$ when $S$ is unbounded.
Since $T-S$ is bounded, we have $D(T) = D(S)$, and the identity $S = T - (T-S) = TR$ holds. Thus for every $x\in X$, we have
$$SR^{-1}T^{-1}x = TRR^{-1}T^{-1}x = TT^{-1}x = x,$$
and for $x \in D(S) = D(T)$,
$$R^{-1}T^{-1}Sx = R^{-1}T^{-1} TRx = R^{-1}Rx = x.$$
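In finite dimensions every operator is bounded, so the following does not exercise the unbounded case, but it does check the algebraic identity $S^{-1}=R^{-1}T^{-1}$ numerically; a sketch using a randomly perturbed matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible T
S = T - 0.01 * rng.standard_normal((n, n))          # small perturbation of T

Tinv = np.linalg.inv(T)
# Check the smallness condition ||T - S|| * ||T^{-1}|| < 1.
assert np.linalg.norm(T - S, 2) * np.linalg.norm(Tinv, 2) < 1

R = np.eye(n) - Tinv @ (T - S)   # R = I - T^{-1}(T - S), so S = T R
Sinv = np.linalg.inv(R) @ Tinv   # candidate inverse S^{-1} = R^{-1} T^{-1}

assert np.allclose(Sinv @ S, np.eye(n))
assert np.allclose(S @ Sinv, np.eye(n))
```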
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Upper bound for $\Vert f \Vert^{2}$, where $f: [0,1] \to \mathbb{R}$ continuously differentiable. Let $f: [0,1] \to \mathbb{R}$ be continuously differentiable with $f(0)=0$. Prove that $$\Vert f \Vert^{2} \leq \int_{0}^{1} (f'(x))^{2}dx$$
Here $\Vert f \Vert$ is given by $\sup\{|f(t)|: t \in [0,1]\}$.
I'm just a bit unclear how to proceed.
|
Let $x_0 \in [0,1]$ be a point where $|f|$ attains its supremum. Since $f(0)=0$,
$$|f(x_0)| = \left|\int_0^{x_0} f'(t)\,dt\right| \le \int_0^{x_0} |f'(t)|\,dt \le \sqrt{x_0}\left(\int_0^{x_0} f'(t)^2\,dt\right)^{1/2} \le \left(\int_0^1 f'(t)^2\,dt\right)^{1/2},$$
where the middle step is the Cauchy–Schwarz inequality (equivalently, convexity of the squaring function via Jensen's inequality) and the last step uses $x_0 \le 1$. Squaring both sides gives the claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Is there a topological characterisation of non-Archimedean local fields? A local field is a locally compact field with a non-discrete topology. They are classified as:
* Archimedean, char $0$: the real line or the complex plane
* Non-Archimedean, char $0$: finite extensions of the $p$-adic rationals
* Non-Archimedean, char $p$: Laurent series over a finite field
This is shown via the natural absolute value built from the field using the Haar measure of the additive structure (which is unique up to scaling), that is, $|a|:=\mu(aK)/\mu(K)$ for any set $K$ of finite positive measure. This is well-defined since scalar factors cancel.
Is there a topological characterisation of the Archimedean property here? The characterisation I've seen in wikipedia uses the ring structure:
it is a field that is complete with respect to a discrete valuation and whose residue field is finite.
|
Archimedean local fields are connected; non-Archimedean local fields are totally disconnected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Looking for the lowest number divisible by 1 to A. What would the math equation be for finding the lowest number divisible by every integer from 1 to A? I know the factorial gives a number divisible by 1 to A, but that doesn't give me the lowest number.
Example of what I'm talking about:
the lowest number divisible by 1,2,3,4,5,6,7,8 = 840
Example of factorial (What I'm not talking about):
8! = 40,320
Note:
A is anything that's more than 1 and is whole.
|
Look at the LCM. Note $\operatorname{lcm}(a,b) = \frac{ab}{\gcd(a, b)}$.
If among $1,\dots,A$ you have a prime $p$ and a higher power $p^{n}$ of it, you simply discard $p$ and keep $p^{n}$ instead, since $p^{n}\mid x \implies p\mid x$, but the converse is not true.
Note as well that you can ignore $1$ in your lcm calculations.
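Putting this together, a minimal sketch of the computation (function names are mine):

```python
from functools import reduce
from math import gcd

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

def lcm_up_to(a: int) -> int:
    """Lowest number divisible by every integer from 1 to a."""
    return reduce(lcm, range(2, a + 1), 1)

print(lcm_up_to(8))  # 840
```

This reproduces the example from the question: $\operatorname{lcm}(1,\dots,8) = 2^3 \cdot 3 \cdot 5 \cdot 7 = 840$, far smaller than $8! = 40{,}320$.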
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
poisson distribution and the cdf $Y(t)$ is the number of events occurring in $[0,t]$, where for each $t> 0$, $Y(t)\sim\operatorname{Poi}(\lambda t)$, and $X$ measures the time taken for the $r$th event to occur.
Am I right in saying that the event $(X \le t) = (Y(t) \ge t)$?
Also, how can I write the cdf of $X$ as the sum of poisson probabilities using the above?
|
The $r$th event occurs by time $t$ if and only if the number of events up to time $t$ is at least $r$.
So $[X\le t] = [Y(t)\ge r]$, and hence the cdf of $X$ is
$$F_X(t) = P(X\le t) = P(Y(t)\ge r) = \sum_{k=r}^{\infty} e^{-\lambda t}\frac{(\lambda t)^k}{k!} = 1 - \sum_{k=0}^{r-1} e^{-\lambda t}\frac{(\lambda t)^k}{k!}.$$
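As a numerical sanity check of this equivalence, one can compare the Poisson tail with the CDF of the waiting time $X$, whose density is the Erlang density $\lambda^r t^{r-1}e^{-\lambda t}/(r-1)!$ (a standard fact, used here as an assumption; the parameter values below are arbitrary):

```python
import math

lam, r, t = 2.0, 3, 1.5

# P(Y(t) >= r): Poisson tail with mean lam * t.
poisson_tail = 1 - sum(
    math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) for k in range(r)
)

# P(X <= t): midpoint-rule integration of the Erlang(r, lam) density.
n = 100_000
h = t / n
erlang_cdf = sum(
    lam**r * s ** (r - 1) * math.exp(-lam * s) / math.factorial(r - 1) * h
    for s in ((i + 0.5) * h for i in range(n))
)

assert abs(poisson_tail - erlang_cdf) < 1e-6
```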
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Linear Algebra Self Study I'm currently a high school student with a love for math. I have taken Plane and Coordinate Geometry, both Algebra I and II, Trigonometry, and am halfway done with Calc A.
I want to major in quantum physics, and feel that a background with linear algebra would help. As there are no courses available at my school, I must self study.
What seems most promising is the MIT OCW course along with the accompanying textbook. Would there be a better book/online resource for teaching myself? Thanks.
|
I'm in a similar situation, and I am learning linear algebra from Axler's text, "Linear Algebra Done Right." The problem sets are very nice and I really like the book; it's very easy to understand and the explanations are lucid.
The MIT OCW course uses Strang's text, I believe, which I'm not familiar with.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Epsilon-Delta More Confusion
Use Epsilon-Delta to prove:
$$
\lim_{x \to 1} (x^2 + 3) = 4
$$
So, we need to find a $\delta$ s.t.
$$
0 < |x - 1| < \delta \; \implies \; |(x^2 + 3) - 4| < \epsilon
$$
We simplify
$$|(x^2 + 3) - 4| < \epsilon$$ to get $$|x^2 - 1| < \epsilon$$
This is where I'm stuck.
How do I find a delta in terms of epsilon now?
|
You can factor $x^2-1$. From there, the condition $|x-1|<\delta$ implies that $|x+1|<\delta +2$. Can you use this to find an expression for $\delta$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Proof: An event is independent of every other event iff its probability is 0 or 1. As stated in the title, I need to prove that an event is independent of all other events iff its probability is $0$ or $1$. One direction is pretty simple: if I assume the event has probability $0$ or $1$, the conclusion is immediate.
I'm having trouble formulating the other direction, i.e. if I assume there is an event which is independent of all others, I need to show its probability is $0$ or $1$. I understand that if such an event exists, its occurrence never affects the probability of any other event; so if I assume by contradiction that its probability is not $0$ or $1$,
an observer cannot be sure that such an event is independent of all others, because he will not be sure of when it will happen.
Frankly, what I wrote doesn't sound exactly right to me, but I can't seem to formulate a mathematical proof; I'm not sure how to represent the fact that the event is independent of all other events mathematically.
Hopefully you could give me a hint on where to start at least, thanks!
|
I'll retract this as an answer but leave it here for those similarly confused
Not true it seems to me.
Consider the probability of getting heads or tails (H, T) on the toss of a coin and also getting a number (1 to 6) on the throw of a die. $p(H) = 1/2$ and $p(2) = 1/6$. But these events are surely independent?
From the subsequent answer and comments I assume that I should be considering $\Omega $ as a probability space consisting of the combined events. So (H, 2) is not independent from all the other outcomes - correct ?
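For reference, the standard argument for the nontrivial direction (not part of the retracted answer above) is that such an event must in particular be independent of itself:

```latex
P(A) = P(A \cap A) = P(A)\,P(A)
\quad\Longrightarrow\quad P(A)^2 = P(A)
\quad\Longrightarrow\quad P(A) \in \{0, 1\}.
```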
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/711964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Riemann Sphere/Surfaces Pre-Requisites I have recently developed a large interest in everything to do with Riemann Sphere/Surfaces. I wish to understand the topic quite well but I know that I will need to read a good number of books on topology and complex and real analysis.
Can you recommend any good books that will allow me to move on to Riemann surfaces? I am a theoretical physics student, so my knowledge of maths isn't as detailed as that of a typical maths student, but I'm not letting that stop me.
Thanks.
|
Try
J. Jost, Compact Riemann Surfaces, third edition, Springer Universitext. (I think it is a very good book.)
http://www.zbmath.org/?q=an:05044797
and
S. Donaldson, Riemann Surfaces. (This is beautiful, but it is more "concentrated".)
http://www.zbmath.org/?q=an:05900831
You may also find this fantastic book about the topology of manifolds interesting:
Milnor, Topology from the Differentiable Viewpoint.
http://www.zbmath.org/?q=an:01950480
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Book on first-order modal logic Is there a book on the metatheory of first-order modal logic, or do I just need to take FOL as a base and use the standard translation?
|
D. M. Gabbay & V. B. Shehtman & D. P. Skvortsov. Quantification in Nonclassical Logic (2009) (Series: Studies in Logic and the Foundations of Mathematics, Volume 153. Elsevier)
It covers a lot of material on first-order modal and first-order intuitionistic logic, e.g. Kripke semantics, algebraic semantics, completeness, etc. Don't be misled by the titles of the chapters in the book: "Chapter 1. Basic propositional logic" and "Chapter 2. Basic predicate logic" both cover modal and (super-)intuitionistic logics (propositional ones in Ch. 1 and predicate ones in Ch. 2).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Order of operations when using evaluation bar Suppose we have the function
\begin{align*}
f(x) = \sin(x)
\end{align*}
with first derivative
\begin{align*}
\frac{d}{dx}f(x) = \cos(x).
\end{align*}
If we evaluate $f'(x)$ at $x=0$, the result depends on whether you evaluate $f(0)$ or differentiate $f(x)$ first.
\begin{align*}
\displaystyle \frac{d}{dx}f(x)\mid_{x = 0} = \cos(x)\mid_{x = 0} = 1\\
\displaystyle \frac{d}{dx}f(x)\mid_{x = 0} = \frac{d}{dx}f(0) = \frac{d}{dx}0 = 0
\end{align*}
First question: Does this mean the following two statements are not equivalent?
\begin{align*}
\displaystyle \left(\frac{d}{dx}f(x)\right)\mid_{x = 0}\\
\displaystyle \frac{d}{dx}\left(f(x)\mid_{x = 0}\right)
\end{align*}
Second question: if so, which of the following is true, and why?
\begin{align*}
\displaystyle \frac{d}{dx}f(x)\mid_{x = 0} =
\left(\frac{d}{dx}f(x)\right)\mid_{x = 0}
\end{align*}
or
\begin{align*}
\displaystyle \frac{d}{dx}f(x)\mid_{x = 0} = \frac{d}{dx}\left(f(x)\mid_{x = 0}\right)
\end{align*}
|
If the following were true:
$$\frac{d}{dx}f(x)\mid_{x = n} = \frac{d}{dx}\left(f(x)\mid_{x = n}\right)$$
Then it would always be 0. Why? Because once you evaluate a function of $x$ at a particular value of $x$, it is no longer a function of $x$; it's a value (or, a constant function). The example ($\sin$, $\cos$) you gave is a little misleading. Consider instead the function $f(x)=x^2$ and the point $x=3$:
$$\frac{d}{dx}f(x)\mid_{x = 3} = \frac{d}{dx}\left(f(x)\mid_{x = 3}\right)\\
f(x)\mid_{x = 3}=9\\
\frac{d}{dx}9=0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove $f(x) = x^2$ is continuous at $x=4$ I want to show that $f(x) = x^2$ is continuous at $x=4$
and here's how the proof goes:
$\forall\epsilon>0$, $\exists\delta>0$ s.t $\forall x$, $|x-4|<\delta$ $\implies |f(x)-16|<\epsilon$
So working backwards we get:
$$|f(x)-16|<\epsilon ⇔ |x^2 - 16| < \epsilon$$
$$⇔ |x+4||x-4| < \epsilon$$
Ideally we're trying to get it into this form: $|x-4|<\delta$
Since we can't divide $\epsilon$ by $|x+4|$ we need to control that term by assuming a priori that $\delta \le 1$
Here's where I have few questions:
* How do you control a term?
* What are we ultimately trying to get by controlling the term by assuming a priori that $\delta \le 1$? (How is it going to help us proceed?)
If anyone can answer the above question as if I'm completely new to the idea of continuity and $\epsilon-\delta$ that would be really appreciated.
I think if I'm able to understand how to control a term and the purpose behind it (how it's going to help us in the end), then I might be able to finish the proof on my own.
Note: I don't want the complete proof for this question, but rather if someone can answer those questions above.
|
If you assume $\delta < 1$, then you know that:
$$\begin{align*}
|x-4|<\delta \\
\Rightarrow |x-4|<1 \\
\Rightarrow -1 < x-4 < 1 \\
\Rightarrow 3 < x < 5
\end{align*} $$
But then, we can determine what this means about $|x+4|$:
$$\begin{align*}
&\Rightarrow 7 < x+4 < 9 \\
&\Rightarrow |x+4| < 9
\end{align*} $$
So this means that if we assume $\delta<1$, we have:
$$|f(x)−16| = |x+4||x-4| < 9|x-4|$$
And if you let $\delta = \min\left(1,\frac{\epsilon}{9}\right)$, then:
$$|f(x)-16| < 9|x-4| < 9\left(\frac{\epsilon}{9}\right) = \epsilon$$
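The final choice $\delta=\min(1,\epsilon/9)$ can be spot-checked numerically; a small sketch (not part of the proof itself):

```python
def delta(eps: float) -> float:
    """The delta from the argument above: min(1, eps / 9)."""
    return min(1.0, eps / 9.0)

for eps in (0.001, 0.1, 1.0, 10.0):
    d = delta(eps)
    # Sample x with |x - 4| < delta and confirm |x^2 - 16| < eps.
    for i in range(1, 1000):
        x = 4 - d + 2 * d * i / 1000
        assert abs(x * x - 16) < eps
```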
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
complex analysis: If $f$ is analytic and $\operatorname{Re}f(z) = \operatorname{Re}f(z+1)$ then $Im\;f(z) - Im\;f(z+1)$ is a constant I am having trouble deciphering the reason behind a line in a complex analysis textbook (Complex made Simple by Ullrich, page 360 5 lines down in Proof of Theorem B, for those who are interested).
Basically it says that since $f(z)$ is a holomorphic function with $\operatorname{Re} f(z+1) = \operatorname{Re}f(z)$, for all $z$:
$$\begin{align}
f(z+1) - f(z) &= \operatorname{Re}f(z+1) + i\cdot\operatorname{Im}f(z+1) - \operatorname{Re}f(z) - i\cdot\operatorname{Im}f(z)\\
&= i\cdot\operatorname{Im}f(z+1) - i\cdot\operatorname{Im}f(z)\\
&= i\cdot \text{constant}.
\end{align}$$
I do not understand why we must have that for all $z$ $\quad\operatorname{Im}f(z+1) - \operatorname{Im}f(z) = $ a constant.
Can anyone help?
Thanks
|
Consider $f(z) = \cos 2 \pi z$: then $f(z + 1) = f(z)$, while $\Im(f(z))$ is not constant. Note, however, that the claim concerns the difference: here $\Im f(z+1) - \Im f(z) = 0$, which is constant. In fact, whenever the domain is connected, $g(z) := f(z+1) - f(z)$ is holomorphic with $\operatorname{Re} g \equiv 0$, so $g$ is constant by the Cauchy–Riemann equations (a holomorphic function with constant real part is constant).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Prove $f(x) = \sqrt{x}$ is continuous at $x = 4$ Show that $f(x) = \sqrt{x}$ is continuous at $x = 4$
So my textbook has a proof for this and this is their scratch work:
$\forall\epsilon>0$ $\exists\delta>0$ s.t $\forall x$ $0<|x-4|<\delta \implies |\sqrt{x}-2|<\epsilon$
Working backwards:
$$|\sqrt{x}-2|<\epsilon \iff \frac{1}{\sqrt{x}+2}|x-4| < \epsilon$$
then they stated that 'let us assume that $\delta\le4$'
$$|x-4|<\delta \implies |x-4|<4 ...$$
My question is: why did they assume that $\delta\le4$ (there was no explanation in the text) instead of the usual choice $\delta\le1$?
|
If $\delta \gt 4$ then there are negative $x$ which satisfy $|x-4|\lt \delta$ but for which there is no real $\sqrt{x}$.
Using "let us assume $\delta \le 1$" would in fact have worked here, but would not have worked if the original question had been "Prove $f(x)=\sqrt{x}$ is continuous at $x=\frac12$." So it is reasonable to assume an upper bound for $\delta$ which relates to the actual question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Normal Distribution in I am so confused with this problem:
The middle 95% of adults have an IQ between 60 and 140. Assume that IQ for adults is normally distributed.
a. What is the average IQ for adults? The standard deviation?
I got the average by subtracting the given values and then multiplying by 95%. But I don't know how to get the standard deviation because the population size isn't given. Any ideas, anyone?
|
Actually the average IQ is 100 and its standard deviation is 15.
Intelligence tests are scored in such a way that the resulting IQ distribution conforms to these properties.
http://en.wikipedia.org/wiki/Intelligence_quotient
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
In how many ways can a teacher divide a group of seven students into two teams each containing at least one student? Can someone please help me with this?
In how many ways can a teacher divide a group of seven students into two teams each containing at least one student? two students? What about when seven is replaced with a positive integer n≥4?
I thought about using combinations.But not sure how to go from there.
|
well, the basic breakdown goes:
1-6, 2-5, 3-4
For the 1-6, there are 7 choices for the student that is by himself, and the other 6 are dictated by the one by himself, so there are 7 ways to break them up into 2 groups with one student in one and 6 in the other.
For 2-5, you have 7 options for the first person in the group of 2, and then 6 options for the second person, and again -- the other 5 people are the other team. 7*6 means there are 42 ways to break it up into 2 and 5.
For 3-4, you have 7 options, then 6 options, then 5 options for the group of 3, the other 4 people are automatically on the other team. 7*6*5=210 ways to to break it up into 3 and 4.
Hope that helps.
Edit: What I did above has repeats that I did not consider, so adding a a different, hopefully more useful explanation.
Consider A, B, C, D, E, F, G as the seven people in your group. If you break the group into a group of 1 and a group of 6, you have seven possibilities (any one of the seven people could be in the group all by themselves).
If you break the group into 2 and 5, as shown above, there are 42 ways to pick 2 people out of a group of 7, but some of those ways are repeats. Specifically, (A,B) = (B,A). In fact, every pair of students has a repeat (just picking them in reverse order), so really only half of the 42 methods are different from each other. So, we only get 21 different ways to break them up into a group of 2 and a group of 5.
If you break them up into a group 3 and 4, there are 210 ways to do this, but again, we get repeats. Consider picking (A,B,C). This is the same as picking (A,C,B), (B,A,C), (B,C,A), (C,A,B), (C,B,A) (this is arranging 3 objects, so we have 3 choices for first place, 2 for the second, and one choice for the last object). So there are six ways to pick the same group. In other words, only $\dfrac{1}{6}$ of the groups are unique. 210/6 = 35 different ways to break them up into a group of 3 and a group of 4.
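The corrected counts are exactly the binomial coefficients $\binom{7}{1}, \binom{7}{2}, \binom{7}{3}$; as a check, and as a pointer toward the general question, the total for $n$ students comes out to $2^{n-1}-1$ (this closed form is my observation, not stated in the answer above):

```python
from math import comb

assert comb(7, 1) == 7    # one student vs. six
assert comb(7, 2) == 21   # two vs. five
assert comb(7, 3) == 35   # three vs. four

# Total nonempty, unordered splits of 7 students into two teams:
total = comb(7, 1) + comb(7, 2) + comb(7, 3)
assert total == 63 == 2 ** (7 - 1) - 1
```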
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
A definite integral $$\int_0^1\sqrt{\left(3-3t^2\right)^2+\left(6t\right)^2}\,dt$$
I am trying to take this integral. I know the answer is 4.
But I am having trouble taking the integral itself.
I've tried expanding and then simplifying. I've tried $u$-substitution. I just can't find the correct way to take the integral.
Any help would be appreciated. Sorry if the layout doesn't look right.
|
Hint : By doing some manipulation and expansion, notice that
$$\begin{align}
(3-3t^2)^2 + (6t)^2 &= 3^2(1-2t^2 +t^4 +4t^2)\\
&= 9(1+ 2t^2 + t^4) \\
&=9(1+t^2)^2\end{align}$$
Now, just take the square root of this and integrate the result.
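Since $\sqrt{9(1+t^2)^2}=3(1+t^2)$ on $[0,1]$, the integral is $3\left[t+\frac{t^3}{3}\right]_0^1=4$, matching the expected answer. A quick numerical confirmation on the original integrand (a sketch):

```python
import math

# Midpoint rule on the original integrand sqrt((3 - 3t^2)^2 + (6t)^2).
n = 100_000
h = 1.0 / n
total = sum(
    math.sqrt((3 - 3 * t * t) ** 2 + (6 * t) ** 2) * h
    for t in ((i + 0.5) * h for i in range(n))
)
assert abs(total - 4.0) < 1e-8
```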
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Using a compass and straightedge, what is the shortest way to divide a line segment into $n$ equal parts? Sometimes I help my next door neighbor's daughter with her homework. Today she had to trisect a line segment using a compass and straightedge. Admittedly, I had to look this up on the internet, and I found this helpful site. There they claim that the fewest number of "steps" necessary to trisect the segment is $4$, where by one "step" we mean any stroke of the pencil, either with the compass or straightedge.
Immediately, this got me thinking about the length of other optimal constructions, which has led to the question of this post:
What is the minimum number of steps necessary to construct a segment of length $\frac{1}{n}$ given a segment of length $1$?
If $s(n)$ is the quantity in question, then this other helpful site shows that $s(n)\le n+6$. However, $s(2)=3$ and $s(3)=4$, so the bound is not sharp. Also, we can see that $s(mn)\le s(m)+s(n)$ by creating a segment of length $\frac{1}{mn}$ from one of length $\frac{1}{n}$.
Finally, at the bottom of the first site, they hint at one method of construction, which involves drawing larger and larger circles. Assuming their hint leads to an optimal construction (which would need to be proved), I believe that the first eight values of $s(n)$ starting with $n=1$ are:
$$0,3,4,5,5,5,5,6$$
This returns nothing on OEIS. (The above numbers assume that the initial segment of length $1$ is marked off on a very long ray, else we'd have to add one for $n\ge3$ to lengthen the segment appropriately).
Any ideas?
|
Let the starting segment be $AB$.
As one can see from the first link, the starting configuration is a segment lying on a line.
In any case, one can add $1$ line at the start to reach this starting configuration.
Consider odd $n$.
Let the coordinates of the starting points be $A(-1,0)$, $B(0,0)$.
If point $C$ has coordinates $C(n,0)$, then (see Figure 1) the coordinates of the other points are:
$D\left(\dfrac{1}{2n},\dfrac{\sqrt{4n^2-1}}{2n}\right)$;
$\qquad$
$E(1,0)$;
$\qquad$
$P\left(-1+\dfrac{1}{n},0\right)$.
Figure 1:
Consider even $n$.
Let the coordinates of the starting points be $A(0,0)$, $B(1,0)$.
If point $C$ has coordinates $C\left(\dfrac{n}{2},0\right)$, then (see Figure 2) the coordinates of the other points are:
$D\left(\dfrac{1}{n},\dfrac{\sqrt{n^2-1}}{n}\right)$;
$\qquad$
$E\left(\dfrac{1}{n},-\dfrac{\sqrt{n^2-1}}{n}\right)$;
$\qquad$
$P\left(\dfrac{1}{n},0\right)$.
Figure 2:
Main idea: for a given $m$, draw the point $C(m,0)$ in as few steps as possible.
As I have checked (up to $m=2048$), it is possible to draw the point $C(m,0)$ in
$$
1+\lfloor \log_2 m \rfloor
$$
steps, where $\lfloor \cdot \rfloor$ is floor rounding function.
According to the described construction, an upper bound on the total number of steps is
$$
3+\lfloor \log_2 n \rfloor, \mbox{ if } n \mbox{ is odd} ;
$$
$$
2+\lfloor \log_2 n \rfloor, \mbox{ if } n \mbox{ is even}.
$$
Upper bounds on the number of steps (starting with $n=1$) are:
$$
0; ~~ 3, 4; ~~ 4, 5, 4, 5; ~~ 5,6,5,6,5,6,5,6; ~~ 6,7,6,7, ...
$$
Examples:
$n=11$: build point $C(11,0)$ and follow figure 1 (total $6$ steps).
$n=12$: build point $C(6,0)$ and follow figure 2 (total $5$ steps).
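The stated bounds are easy to tabulate; a small sketch of the two formulas (with $n=1$ taking $0$ steps):

```python
import math

def step_bound(n: int) -> int:
    """Upper bound on construction steps: 3 + floor(log2 n) for odd n,
    2 + floor(log2 n) for even n, and 0 for n = 1."""
    if n == 1:
        return 0
    b = int(math.log2(n))
    return (3 if n % 2 else 2) + b

# Matches the sequence 0; 3, 4; 4, 5, 4, 5; ... listed above.
assert [step_bound(n) for n in range(1, 9)] == [0, 3, 4, 4, 5, 4, 5, 5]
```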
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
}
|
Minimizing the value of this integral $ I \equiv \int_{0}^{\pi/2}\left\vert\,\cos\left(x\right) - kx^{2}\,\right\vert \,{\rm d}x $
Given the integral
$$
I \equiv \int_{0}^{\pi/2}\left\vert\,\cos\left(x\right) - kx^{2}\,\right\vert
\,{\rm d}x
$$
Find the value of $k$ so that $I$ is minimum.
How do I start?
|
We have to minimize the function
$$I(p):=\int_0^{\pi/2}\bigl|\cos x -p x^2\bigr|\ dx\qquad(p\in{\mathbb R})\ .$$
When $p<0$ then obviously $I(p)>I(0)=1$. When $p>0$ then the parabola $y=p\,x^2$ intersects the curve $y=\cos x$ at a point $x=t$ with $0<t<{\pi\over2}$ depending on $p$. We now let $t>0$ be our new parameter and then have
$$p={\cos t\over t^2}\ .$$
This means that we now have to minimize the function
$$\eqalign{g(t)&:=I(\cos t/ t^2)\cr &=\int_0^t (\cos x-x^2\cos t/t^2)\ dx+
\int_t^{\pi/2} (x^2\cos t /t^2-\cos x)\ dx\cr
&=2\sin t+{\pi^3-16t^3\over 24 t^2}\cos t -1\qquad(0<t<\pi/2)\ .\cr}$$
When the function $g$ is not monotone, this necessitates solving the transcendental equation $g'(t)=0$. Therefore we shall look at the problem numerically using Mathematica.
It turns out that $g$ has a minimum at $t_*\doteq1.25$, and one obtains $g(t_*)\doteq0.89592$.
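The same minimum can be located with a crude grid search in Python (a sketch standing in for the Mathematica computation; the grid resolution is an arbitrary choice):

```python
import math

def g(t):
    # g(t) = 2 sin t + (pi^3 - 16 t^3) / (24 t^2) cos t - 1, as derived above
    return 2 * math.sin(t) + (math.pi**3 - 16 * t**3) / (24 * t**2) * math.cos(t) - 1

# grid search for the minimum on (0, pi/2)
ts = [1e-4 + i * (math.pi / 2 - 2e-4) / 200000 for i in range(200001)]
t_star = min(ts, key=g)
print(round(t_star, 2), round(g(t_star), 4))  # 1.25 0.8959
```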
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/712996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
Modular Arithmetic - Quadratic Solutions Problem I've just been given the following question in my crypto class, and I think I'm fairly sorted for it, but I was just wondering whether there might be any extra solutions to the ones I've worked out.
Compute all solutions of $x^2 + 4x - 21 \equiv 0\,\bmod\,33$
First, I factorised this equation to give $(x + 7)(x - 3)$, which gives me the solutions $-7$ and $3$. However, under the conditions of modular arithmetic, I know that adding or subtracting $33$ as many times as we like will also provide an answer of zero.
E.g., let's try $x = 26$: $$(26 + 7)(26 - 3) = 33 \times 23 \equiv 0\,\bmod\, 33$$
Thus, it becomes fairly obvious to see that solutions to this equation will take the form $[3]$ and $[26]$, where the square brackets denote congruence classes.
I was given the hint in class that we should try to make the brackets equal to the factors of $33$, i.e., try to get to $11 \times 3$, for example. But I really can't see how this would work.
Any further input would be great, thank you!!
|
$(x+7)(x-3)\equiv0\mod 33$
Now, as you noticed, $x=-7, 3$ are obvious solutions. Also, any number congruent to $-7$ or $3$ modulo $33$ will be a solution; for instance $26$, as it is congruent to $-7$ modulo $33$, or $36$, congruent to $3$ modulo $33$.
But there are other possible solutions, congruent to neither $-7$ nor $3$ modulo $33$.
If $x+7 \equiv 0 \pmod{11}$ and $x-3 \equiv 0 \pmod{3}$, then $x$ is a solution, since one factor of the product is divisible by $11$ and the other by $3$, so the product is divisible by $33$.
Similarly, solving $x+7 \equiv 0 \pmod{3}$ and $x-3 \equiv 0 \pmod{11}$ gives another solution.
Both of these systems have integer solutions, namely $x \equiv 15$ and $x \equiv 14 \pmod{33}$ respectively.
Thus every solution $x$ must be congruent to one of $-7, 3, 15, 14$ modulo $33$.
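Since the modulus is small, the full solution set is easy to confirm by brute force (a quick sketch, independent of the argument above):

```python
# check every residue modulo 33 against x^2 + 4x - 21 = 0 (mod 33)
solutions = [x for x in range(33) if (x * x + 4 * x - 21) % 33 == 0]
print(solutions)  # [3, 14, 15, 26]
```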
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Can we apply squeeze in that way? Claim:
if $a_n\leq b_n\leq c_n$ for all $n\in \mathbb{N}$ and $\displaystyle\sum\limits_{n=0}^{\infty} a_n,\displaystyle\sum\limits_{n=0}^{\infty} c_n$ are convergent then$\displaystyle\sum\limits_{n=0}^{\infty} b_n$ is convergent.
I think it is a wrong statement but I could not find any counterexample.
If you find a counterexample or prove it,I would be thankful.
|
$$\sum_{i=k}^l a_i \le \sum_{i=k}^l b_i \le \sum_{i=k}^l c_i$$
This implies that
$$|\sum_{i=k}^l b_i| \le\max \{ |\sum_{i=k}^l a_i|, |\sum_{i=k}^l c_i|\}$$
Now $\sum a_i$ and $\sum c_i$ exist, hence the right-hand side becomes arbitrarily small (Cauchy criterion) if $k,l$ are chosen large enough; hence the partial sums $B_k:=\sum_{i=0}^k b_i$ form a Cauchy sequence as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate a twice differentiable limit Evaluate $$\lim_{h \rightarrow 0} \frac{f(x+h) -2f(x) + f(x-h) } { h^2}$$
if $f$ is a twice differentiable function.
I'm not sure how to understand this problem. If I differentiate the numerator I get $f'(x+h) - 2f'(x) + f'(x-h)$ but that doesn't seem to take me anywhere?
|
Hint
Rewrite
$$ [f(x+h) -2f(x) + f(x-h)] =[f(x+h)-f(x)] -[f(x)-f(x-h)]$$
Divide each portion by $h$. What does that become? Do it again to arrive at the answer.
I am sure you can take it from here.
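For completeness, a sketch of where the hint leads (assuming only that $f$ is twice differentiable at $x$, so only the definition of $f''(x)$ is used in the last step):

```latex
\lim_{h \to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}
  = \lim_{h \to 0} \frac{f'(x+h) - f'(x-h)}{2h}
  = \lim_{h \to 0} \frac{1}{2}\left[\frac{f'(x+h) - f'(x)}{h}
      + \frac{f'(x) - f'(x-h)}{h}\right]
  = f''(x).
```

Note the first equality applies L'Hôpital's rule once in the variable $h$; no continuity of $f''$ is needed.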
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Understanding the fundamental theorem of calculus
I'm having problems understanding why $$ \frac{d}{dx} \int_{a}^{x} f(t)\,dt = f(x)$$
I'm somewhat weirded out by the fact that there is a $dt$ at the end of $$F(x) = \int_a^x f(t)\,dt$$ too.
We are differentiating with respect to $x$...I understand that $ \frac{d}{dx} \int f(x) \, dx = f(x)$ but why is the $t$ in the definition?
|
http://en.wikipedia.org/wiki/Free_variables_and_bound_variables
The expression
$$
\sum_{i=1}^3 \cos(i^2 k^3)
$$
means
$$
\cos(1^2k^3) + \cos(2^2k^3)+\cos(3^2k^3),
$$
and that's the same as
$$
\sum_{j=1}^3 \cos(j^2 k^3),
$$
i.e. $i$ and $j$ are "bound variables", whereas $k$ is a "free variable".
The $t$ in
$$
\int_a^x f(t)\,dt
$$
is a bound variable, like $i$ and $j$ above, and $x$ is a free variable, like $k$ above. The value of the sums above depends on $k$, but not on anything called $i$ or $j$, and similarly the value of the integral above depends on $x$, but not on anything called $t$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
How to prove whether a given matroid is a Gammoid? Statement : Given a matroid in some representation say $(E,I)$. How do we prove it is a gammoid?
For example, to prove a matroid is transversal, we try to create a bipartite graph. If we are unable to (i.e., if we get some contradiction), then it is not transversal.
Similarly in cotransversal, we create a directed graph, and show all independent sets are linked to the fixed base using disjoint paths.
But if a matroid is a general gammoid, how can we prove it?
My answer attempt: one way, I think, is to show it is a contraction of some transversal matroid (or a restriction of a cotransversal matroid). But how do we find that transversal (or cotransversal) matroid then?
|
According to Oxley, deciding whether a given matroid is a gammoid is still open. There is a way to check whether a given matroid is a strict gammoid, though, and you can find it in Oxley as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is $\epsilon^2/\epsilon^2=1$ or $0/0$? Is it possible in the system of dual numbers ($a+\epsilon b$; $\epsilon^2=0$) to calculate $\epsilon/\epsilon =1$? How then does one deal with $\epsilon^2/\epsilon^2=1$ versus $\epsilon^2/\epsilon^2=0/0$?
The same question for infinitesimal calculus using hyperreal numbers, where $\epsilon \neq 0$ but $\epsilon^2=0$?
I probably did not use the correct formulation w.r.t. hyperreal numbers. I meant the axiom (?) in smooth infinitesimal analysis where it is assumed: $\epsilon \neq 0$ but $\epsilon^2=0$.
I am not quite sure how this analysis is related to nonstandard-analysis and hypercomplex numbers. I came across this topic in the book: A Primer of infinitesimal analysis (John L. Bell).
|
In the dual numbers, ${\mathbb R}[\epsilon]$ ($={\mathbb R}[X]/(X^2)$), $\epsilon$ is not invertible, so the expression $\epsilon / \epsilon$ ($= \epsilon \epsilon^{-1})$ is undefined.
In hyperreals, as Asaf Karagila mentions in the comments, $\epsilon^2 \neq 0$. There you do have $\epsilon / \epsilon = \epsilon^2 / \epsilon^2 = 1$ (as the hyperreals are a field and $\epsilon$ is a non-zero element).
I had a very quick look at the book by Bell. That's not only using infinitesimals, but also a different kind of logic (no law of excluded middle!). That's not for the faint-of-heart :-): for a given $x$, the statement "$x = 0 \lor x \neq 0$" is not necessarily true in that setting.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
What values of a is the set of vectors linearly dependent? The question is: "determine conditions on the scalars so that the set of vectors is linearly dependent".
$$ v_1 = \begin{bmatrix} 1 \\ 2 \\ 1\\ \end{bmatrix}, v_2 = \begin{bmatrix} 1 \\ a \\ 3 \\ \end{bmatrix}, v_3 = \begin{bmatrix} 0 \\ 2 \\ b \\ \end{bmatrix}
$$
When I reduce the matrix I get
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & a-2 & 0 \\ 0 & 0 & b - \frac{4}{(a-2)} \end{bmatrix}$$
If the matrix is linearly independent then shouldn't $a-2 = 0$ and $b - \frac{4}{(a-2)} = 0$? So, I said the solution is when $a-2 \neq 0 $ and $b - \frac{4}{(a-2)} \neq 0$. The textbooks says the answer is when $ b(a-2) = 4 $. I understand how they got to $ b(a-2) = 4 $ but why is it equals instead of not equals?
|
The Determinant Test is appropriate here, since you have three vectors from $\mathbb{R}^{3}$. The set of vectors is linearly dependent if and only if $\det(M) = 0$, where $M = [v_{1}\; v_{2}\; v_{3}]$.
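Expanding that determinant along the first row recovers the textbook condition; a quick sketch checking a few values (the sample pairs are arbitrary):

```python
# det of the matrix with columns v1=(1,2,1), v2=(1,a,3), v3=(0,2,b),
# expanded along the first row; it simplifies to b*(a-2) - 4
def det(a, b):
    return 1 * (a * b - 2 * 3) - 1 * (2 * b - 2 * 1) + 0

for a, b in ((4, 2), (6, 1), (3, 4)):  # each pair satisfies b*(a-2) == 4
    assert det(a, b) == 0               # ... and so gives a dependent set
print(det(4, 2), det(5, 5))  # 0 11
```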
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Minimum number of operations (divide by 2/3 or subtract 1) to reduce $n$ to $1$ This question is inspired by a Stack Overflow question which involves the task to find an algorithm to solve the following problem:
Given a natural number $n$, what is the least number of moves you need to reduce it to $1$? Valid moves are:
*
*subtract $1$
*divide by $2$, applicable if $n \equiv 0 \pmod{2}$
*divide by $3$, applicable if $n \equiv 0 \pmod{3}$
For example, you can reduce 10 in 3 steps: $10 \rightarrow^{-1} 9 \rightarrow^{/3} 3 \rightarrow^{/3} 1$.
Let's define $f(n)$ as the answer for number $n$. Then we have $f(1) = 0$ and for $n > 1$:
$f(n) = 1 + \min \{ f(n-1), (n \mod 2) + f(\lfloor\frac{n}{2}\rfloor), (n\mod 3) + f(\lfloor\frac{n}{3}\rfloor) \}$
$n$ is restricted to $10^9$ in the original source, which makes it easy to solve in $O(n)$ using dynamic programming or a breadth-first search, but that isn't really interesting.
Initially I thought that the tricky range for $n$ would only be small (below $10^6$ or so) and for larger $n$ we could apply some simple greedy algorithm that prefers division by 3 or 4, even if we need to subtract 1 first. I tried to test some identities that could lead to such a heuristic:
*
*$f(n) = 1 + f(n - 1) \ \ \forall n: n \equiv 1,5 \pmod{6}$ (that's easy to prove, because there's only one valid move)
*$f(n) = \min \{ f(\frac{n}{2}), f(\frac{n}{3}) \} \ \ \forall n: n \equiv 0 \pmod{6}$
*$f(3n) \geq f(n)$
*$f(n) = f(\frac{n}{3}) \ \ \forall n: n \equiv 0 \pmod{3^3}$
*$f(n) = f(\frac{n}{3}) \ \ \forall n: n \equiv 0 \pmod{3} \textbf{ and } n \not\equiv 0 \pmod{2}$
*...
But all but the first three have turned out not to be correct, and those are not very helpful because you still have a branching factor of 2. You can use the third inequality to prune during a depth- or breadth-first search, but I also can't prove that this yields a "good" algorithm, $O(\log^c n)$ or something.
I understand that it might have something to do with the exponents of 2 and 3 in the prime factorization of $n$, but I can't put my finger on it, since you always have the possibility to get to any equivalency class modulo 2 or 3 within at most 2 steps and change up everything.
Do you have any ideas on how to formalize this or prove useful properties of the $f$ function? I'm not only looking for approaches that necessarily lead to an algorithm for larger $n$, also for general insights that have escaped me so far.
|
We will use a very straightforward method that yields a solution in $o(N)$ time, though still not polynomial in the size of the input.
We will use the following recursive function to calculate the result:
$$m(N) = 1 + \min\left( N \bmod 2 + m\left( \left\lfloor \tfrac{N}{2} \right\rfloor \right),\ N \bmod 3 + m\left( \left\lfloor \tfrac{N}{3} \right\rfloor \right) \right)$$
$$m(1) = 0$$
The correctness of the above method is obvious, we pretty much try every possible way to reduce the number to 1.
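A direct memoized translation of this recursion into Python might look like the following sketch (the floors are made explicit; the $n \le 1$ guard is an implementation detail, since $\lfloor 2/3 \rfloor = 0$):

```python
from functools import lru_cache

# m(N) = 1 + min(N mod 2 + m(floor(N/2)), N mod 3 + m(floor(N/3))), m(1) = 0
@lru_cache(maxsize=None)
def m(n: int) -> int:
    if n <= 1:
        return 0  # n = 0 only arises as floor(2/3) and never wins the min there
    return 1 + min(n % 2 + m(n // 2), n % 3 + m(n // 3))

print(m(10))  # 3, matching the example 10 -> 9 -> 3 -> 1
```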
The total number of operations for the above method is given by the following recurrence relation:
$$ T(N) = T(\frac{N}{2}) + T(\frac{N}{3}) + O(1) $$
To analyze this relation, we cannot use the master theorem since this relation is not in the appropriate form. We have to use the Akra-Bazzi method. In short, what the method says is that if we have a recurrence relation of the following form:
$$ T(x) = g(x) + \sum_{i=1}^{k} a_i T( b_i x + c_i ) $$
Then we can find the asymptotic behaviour of $T(x)$ by first determining the constant $p \in \mathbb{R}$ such that $\sum_{i=1}^{k} a_i b_i^p = 1$ and then evaluating the following integral:
$$I(x) = \int_1^x \frac{g(u)}{u^{p+1}} du$$
Then we know that $ T(x) \in \Theta( x^p ( 1 + I(x) ) ) $.
We will now apply this method to our problem. We know that $g(x) \in O(1)$ and furthermore $a_1=a_2=1, b_1 = \frac{1}{2}, b_2 = \frac{1}{3}$. Evaluating the integral gives us:
$$ I(x) = \int_1^x \frac{g(u)}{u^{p+1}} du = \int_1^x u^{-(p+1)}du = \frac{-x^{-p}}{p} + \frac{1}{p} = \frac{1 - x^{-p}}{p} $$
Finally by substituting we get: $ T(x) \in \Theta( x^p( 1 + I(x) ) ) = \Theta( x^p ) $.
What remains to be done is to find the value of $p$. We solve the equation $2^{-p} + 3^{-p} = 1$ by numerically computing its root (we can use either the Newton-Raphson or the bisection method), obtaining $p = 0.78788...$
Therefore, the complexity of our algorithm is of order $O( N^{0.79} )$.
For further details for the Akra-Bazzi method you can check here: https://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 2
}
|
find flux,using Cartesian and spherical coordinates Find the flux of the vector field $\overrightarrow{F}=-y \hat{i}+ x \hat{j}$ of the surface that consists of the first octant of the sphere $x^2+y^2+z^2=a^2(x,y,z \geq 0).$
Using the Cartesian coordinates,I did the following:
$$ \hat{n}=\frac{\nabla{G}}{|\nabla{G}|}=\frac{x\hat{i}+y\hat{j}+z\hat{k}}{a} $$
$$\int_C{\overrightarrow{F} \cdot \hat{n}}ds=\int_C{(-y\hat{i}+x\hat{j})\frac{x\hat{i}+y\hat{j}+z\hat{k}}{a}}ds=\int_C{\frac{-xy+xy}{a}}ds=\int_C{0}ds=0 $$
Using the spherical coordinates,I did the following:
$$x =a \cos{\theta} \sin{\phi}$$
$$y=a \sin{\theta} \sin{\phi}$$
$$z=a \cos{\phi} $$
$$0 \leq \theta \leq \frac{\pi}{2}$$
$$ 0 \leq \phi \leq \frac{\pi}{2} $$
$$ \hat{n}=\frac{\nabla{G}}{| \nabla{G}|}=...=\cos{\theta} \sin{\phi} \hat{i}+\sin{\theta} \sin{\phi} \hat{j}+\cos{\phi} \hat{k}$$
$$\overrightarrow{F}= -a \sin{\theta} \sin{\phi} \hat{i}+a \cos{\theta} \sin{\phi} \hat{j}$$
$$ ds=r^2 \sin{\phi}d \theta d \phi $$
$$\overrightarrow{F} \cdot \hat{n}=0 $$
So Flux $= \int_0^{\frac{\pi}{2}} \int_0^{\frac{\pi}{2}} 0 \cdot r^2 \sin{\phi} \, d\theta \, d\phi = 0$
Are both ways right or have I done something wrong?
EDIT: And something else.... Is $ds$ equal to: $\frac{|\nabla{f}|}{|\nabla{f} \cdot \hat{k}|}$ ?
|
Both of your methods are correct, and the flux through the sphere being $0$ is actually what we would expect, as your field is purely rotational and therefore the field vectors all point along the surface of the sphere. We can see this by observing:
$$\nabla \cdot \vec{F}=\frac{\partial(-y)}{\partial x}+\frac{\partial x}{\partial y}=0$$
And:
$$\nabla \times \vec{F}=\begin{vmatrix}\boldsymbol{\hat{\imath}} & \boldsymbol{\hat{\jmath}} & \boldsymbol{\hat{k}} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ -y & x & 0\end{vmatrix}=2\boldsymbol{\hat{k}}$$
And so we can see that the field behaves purely rotationally; a vector plot of the field (omitted here) makes this even clearer.
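These identities are easy to spot-check numerically with central differences (a rough sketch; the sample point and step size are arbitrary choices):

```python
# numerical check that div F = 0 and (curl F)_z = 2 for F = (-y, x, 0)
h = 1e-6

def F(x, y):
    return (-y, x)  # the planar part of the field

x0, y0 = 0.3, -0.7  # arbitrary sample point
div = ((F(x0 + h, y0)[0] - F(x0 - h, y0)[0])
       + (F(x0, y0 + h)[1] - F(x0, y0 - h)[1])) / (2 * h)
curl_z = ((F(x0 + h, y0)[1] - F(x0 - h, y0)[1])
          - (F(x0, y0 + h)[0] - F(x0, y0 - h)[0])) / (2 * h)
print(round(div, 6), round(curl_z, 6))  # 0.0 2.0
```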
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convex function, sets and which of the following are true? (NBHM-$2014$) Let $f:]a,b[ \to\Bbb R$ be a given function. Which of the following statements are true?
a. If $f$ is convex in $]a,b[$, then the set $\tau=\{(x,y) \in\Bbb R^2| x\in ]a,b[, y\ge f(x)\}$ is a convex set.
b. If $f$ is convex in $]a,b[$, then the set $\tau=\{(x,y) \in\Bbb R^2| x\in ]a,b[, y\le f(x)\}$ is a convex set.
c. If $f$ is convex in $]a,b[$,then $|f|$ is also convex in $]a,b[$.
|
a. True.
c. False. Example: take $f(x) = -1 + x^2$ on $[-1, 1]$. $f''(x) = 2 > 0$ on this interval, so $f$ is convex there. But $g(x) = |f(x)| = |x^2 - 1|$ is not convex on this interval; look at the graph of $g$.
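The failure of convexity in part (c) can be confirmed with a single midpoint test (a quick sketch):

```python
# convexity requires g((x1+x2)/2) <= (g(x1) + g(x2))/2; here it fails
g = lambda x: abs(x * x - 1)
x1, x2 = -1.0, 1.0
print(g((x1 + x2) / 2), (g(x1) + g(x2)) / 2)  # 1.0 0.0
```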
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Let $k$ be a division ring, then the ring of upper triangular matrices over $k$ is hereditary I'm reading Ring Theory by Louis H. Rowen, and he claims that the ring of upper triangular matrices over a division ring is hereditary (it's on page 196, Example 2.8.13 of the book).
I think it should be pretty much straight-forward, despite the fact that I'm not clear how it must be true.
Here's my thought on the problem:
I know that a left (resp, right) hereditary ring is a ring such that all of its left (resp, right) ideals are projective.
Now, let $k$ be a division ring, and consider the ring of upper triangular matrices over $k$, $U_n(k) = \left(\begin{array}{ccccc} k & k & k & \cdots & k \\
0 & k & k & \cdots & k \\
0 & 0 & k & \cdots & k \\
\vdots & \vdots &\vdots &\ddots &\vdots \\
0 & 0 & 0 & \cdots & k \end{array} \right)$, and I'll now try to find the form of its left ideals (i.e, how they look like). Let $I_n(k) = \left(\begin{array}{ccccc} I_{11} & I_{12} & I_{13} & \cdots & I_{1n} \\
0 & I_{22} & I_{23} & \cdots & I_{2n} \\
0 & 0 & I_{33} & \cdots & I_{3n} \\
\vdots & \vdots &\vdots &\ddots &\vdots \\
0 & 0 & 0 & \cdots & I_{nn} \end{array} \right)$ be any left ideal of $U_n(k)$. Since $k$ is a division ring, its only ideals are $0$ and $k$ itself. Using the fact that $U_n(k) I_n(k) \subset I_n(k)$, I managed to arrive at the following facts:
*
*Each $I_{ij}$ must be an ideal of $k$, hence either $0$, or $k$.
*Consider the column $j$ of $I_n(k)$, $\left(\begin{array}{c} I_{1j}\\
I_{2j}\\
I_{3j} \\
\vdots \\
I_{jj} \\
0 \\
\vdots \\
0 \end{array} \right)$, using the fact that $U_n(k) I_n(k) \subset I_n(k)$, we must have the descending chain $I_{1j} \supset I_{2j} \supset \dots \supset I_{jj}$
So, basically, the following form is one possible left ideal of $U_n(k)$: $\left(\begin{array}{ccccccc} k & k & k & 0 & k & \cdots & 0 \\
0 & k & k & 0 & k & \cdots & 0 \\
0 & 0 & 0 & 0 & k & \cdots & 0 \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 \\
\vdots &\vdots &\vdots & \vdots &\vdots &\ddots &\vdots \\
0 &0 &0 & 0 & 0 & \cdots & 0 \end{array} \right)$
Now, I don't think the above should be a $U_n(k)-$projective module at all, since I don't think it is a direct factor of any free $U_n(k)-$ modules.
Where have I gone wrong? :(
Thank you so much guys,
And have a good day,
|
For each $1\leq i \leq n$, the set $C_i = U_n(k)e_{i,i}$ is a left ideal, which is projective, since $\bigoplus C_i \cong U_n(k)$. Suppose $J$ is any left ideal in $U_n(k)$; let us show that we can write it as $J'\oplus C_i$ for some $i$, hence by the previous argument plus induction it will be projective.
Let $i$ be the maximal index such that $e_{i,i}J\neq 0$ and choose some $0\neq v \in e_{i,i}J$. Note that by multiplying $J$ from the right by a matrix $A=I+\alpha e_{i,j}$ for $j>i$ we get an isomorphic left ideal and the index $i$ is still maximal such that $e_{i,i}JA\neq 0$. You can use such multiplications to transform $v$ into $\beta e_{i,j}$ for some $j\geq i$, and wlog assume that $\beta=1$.
The subideal $U_n(k)v$ is now just $Je_{j,j}\cong C_i$, and it is also easy to see that $J'=J(I-e_{j,j})$ is also a subideal so that $J\cong J'\oplus C_i$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/713970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
primitive root of residue modulo p I was trying to prove that for the set $\{1,2,....,p-1\}$ modulo p there are exactly $\phi(p-1)$ generators.Here p is prime.Also the operation is multiplication.
My Try:
So I first assumed that if there exists a generator $a$, then among the set $\{a^1,a^2,\ldots,a^{p-1}\}$ all those powers whose exponents are coprime to $p-1$ are also generators. But I am having difficulty proving the existence of such an $a$. If anyone can help it would be great. Thanks. It would be better if the proof avoids the fundamental theorem of algebra.
|
Assuming $p$ is prime then $\mathbb{Z}/p\mathbb{Z}$ is the finite field $\mathbb{F}_p$ and the set you are interested in is the multiplicative group $\mathbb{F}_p^*$. In this context what you are looking for is a proof that the multiplicative group is cyclic. And with this formulation you can find a lot of answers, for example this collection of proofs: https://mathoverflow.net/questions/54735/collecting-proofs-that-finite-multiplicative-subgroups-of-fields-are-cyclic
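For small primes the count $\phi(p-1)$ can be verified by brute force (a quick sketch, of course not a proof):

```python
from math import gcd

def is_generator(g, p):
    # g generates the multiplicative group mod p iff its powers hit all p-1 residues
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for p in (5, 7, 11, 13):
    assert sum(is_generator(g, p) for g in range(1, p)) == phi(p - 1)
print("exactly phi(p-1) generators for p = 5, 7, 11, 13")
```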
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is infinity to the power zero I have this notation:
$$\lim_{k\to\infty} k^{1/k}$$
Is it correct to say that the limit is $1$, or is there some other result?
|
$$\lim_{n\to\infty}n^{^\tfrac1n}=\lim_{n\to\infty}\Big(e^{\ln n}\Big)^{^{\tfrac1n}}=\lim_{n\to\infty}e^{^\tfrac{\ln n}n}=e^{^{\displaystyle{\lim_{n\to\infty}}\tfrac{\ln n}n}}=e^{^{\displaystyle{\lim_{t\to\infty}}\dfrac t{e^t}}}=e^0=1.$$
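A quick numerical check of the same limit:

```python
# n ** (1/n) approaches 1 as n grows
for n in (10, 10**3, 10**6, 10**9):
    print(n, n ** (1.0 / n))
```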
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Number of $3$ letters words from $\bf{PROPOSAL}$ in which vowels are in the middle
The number of different words of three letters which can be formed from the word $\bf{"PROPOSAL"}$, if a vowel is always in the middle are?
My try: We have $3$ vowels $\bf{O,O,A}$ and $5$ consonants $\bf{P,P,R,S,L}$. Now we have to form $3$-letter words in which a vowel is in the middle.
First we will select $2$ consonants from $5$ consonants. This can be done in $\dbinom{5}{2}$ ways. The middle vowel can be selected in $\dbinom{3}{1}$ ways.
So the total number of ways of selecting $3$ letter words is $\dbinom{5}{2}\cdot \dbinom{3}{1}$.
Now I did not understand how can I solve after that.
Help required. Thanks
|
I am not very sure about the methodology. Please let me know if the logic is faulty anywhere.
We have $3$ slots.$$---$$
The middle one has to be a vowel. There are only two ways in which it can be filled: O, A.
Let us put O in the middle.$$-\rm O-$$
Consider the remaining letters: O, A, $\bf P_1$, $\bf P_2$, R, S, L. Since we will be getting repeated words when we use $\bf P_1$ or $\bf P_2$ once in the word, we consider both these letters as a single letter P. So, the letters now are O, A, P, R, S, L. We have to select any two of them (this can be done in $^6C_2$ ways) and arrange them (this can be done in $2!$ ways). So, total such words formed are $(^6C_2)(2!)=30$. Now there will be one word (P O P) where we will need both the P's. So we add that word. Total words=$31$.
Now, we put A in between.$$-\rm A-$$
Again proceeding as above, we have the letters O, P, R, S, L. Total words formed from them are $(^5C_2)(2!)=20$. We add two words (O A O) and (P A P). Thus, total words formed here=$22$
Hence, in all, total words that can be formed are $22+31=53$.
(P.S: Please edit if anything wrong.)
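The count $53$ can be confirmed by brute-force enumeration (a sketch; the set removes the duplicate words produced by the repeated P's and O's):

```python
from itertools import permutations

# ordered 3-letter selections with a vowel in the middle, deduplicated
words = {w for w in permutations("PROPOSAL", 3) if w[1] in "OA"}
print(len(words))  # 53
```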
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Converting from radius of convergence to interval of convergence Using the root test I have determined that $$\sum n^{-n} x^n$$ has a radius of convergence of infinity and $$\sum n^{n} x^n$$ has a radius of convergence of $0$. Does this mean that the respective intervals of convergence are $(-\infty,\infty)$ and $\emptyset$? Do I still have to evaluate the endpoints, and if so, how?
|
You're done for the first one; there are no endpoints to evaluate. The second one has interval of convergence either $\emptyset$ or $[0,0]$; you need to determine whether $x=0$ leads to a convergent series.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\Gamma(p) \cdot \Gamma(1-p)=\frac{\pi}{\sin (p\pi)}$ for $p \in (0,\: 1)$ Prove that $$\Gamma(p)\cdot \Gamma(1-p)=\frac{\pi}{\sin (p\pi)},\: \forall p \in (0,\: 1),$$
where $$\Gamma (p)=\int_{0}^{\infty} x^{p-1} e^{-x}dx.$$
Here's what I tried:
We have
$$B(p, q)=\int_{0}^{1} x^{p-1} (1-x)^{q-1}dx=\frac{\Gamma(p)\cdot \Gamma(q)}{\Gamma(p+q)}$$
Hence
$$B(p, 1-p)=\frac{\Gamma(p)\cdot \Gamma(1-p)}{\Gamma(1)}=\Gamma(p)\cdot \Gamma(1-p)=\int_{0}^{1} x^{p-1} (1-x)^{-p}dx$$
But from here I don't know how to proceed.
|
$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left( #1 \right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
The usual argument is as follows:
$\ds{\Gamma\pars{z}\Gamma\pars{1 - z}\sin\pars{\pi z}}$ is analytic in
${\mathbb C}$ and is bounded, so by Liouville's theorem it is a constant. Setting $\ds{z =\half}$ we can discover that constant value:
$$
\Gamma\pars{z}\Gamma\pars{1 - z}\sin\pars{\pi z}=
\Gamma\pars{\half}\Gamma\pars{1 - \half}\sin\pars{\pi\,\half}=
\Gamma^{2}\pars{\half} = \pi
$$
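For completeness, one can also continue directly from the Beta integral in the question (a sketch, not part of the answer above): the substitution $x = t/(1+t)$ gives

```latex
\Gamma(p)\,\Gamma(1-p)
  = \int_{0}^{1} x^{p-1}(1-x)^{-p}\,dx
  = \int_{0}^{\infty} \frac{t^{p-1}}{1+t}\,dt
  = \frac{\pi}{\sin(p\pi)},
```

where the last integral is the classical evaluation by a keyhole contour (or by residues), valid for $0<p<1$.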
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 0
}
|
Find the point on the y-axis which is equidistant from the points $(6, 2)$ and $ (2, 10)$. Find the point on the y-axis which is equidistant from the points $(6, 2) $ and $ (2, 10)$.
Please help, there are no examples of this kind of problem in my book! I don't know how to solve it.
|
Find the locus of all points equidistant from $(6, 2)$ and $(2, 10)$: this is the perpendicular bisector of the segment joining them, i.e. the line through the midpoint of $(6, 2)$ and $(2, 10)$ that is perpendicular to the segment. All points of that line are equidistant from both points.
You are looking for the point that satisfies two conditions: (1) it is equidistant from both points, and (2) it lies on the $y$-axis (the line $x=0$). Therefore, the point you are looking for is the intersection of the two lines.
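A quick computational check of this approach (a sketch; the closed-form line for $y$ comes from expanding $|PA|^2 = |PB|^2$ with $P=(0,y)$):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A, B = (6, 2), (2, 10)
# 36 + (y-2)^2 = 4 + (y-10)^2  ->  16y = 64  ->  y = 4
y = ((B[0]**2 + B[1]**2) - (A[0]**2 + A[1]**2)) / (2 * (B[1] - A[1]))
P = (0, y)
print(P, dist(P, A), dist(P, B))  # (0, 4.0) with two equal distances
```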
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Integrating $\int{\frac{\sqrt{1-x^2}}{(x+\sqrt{1-x^2})^2} dx}$ I am a little bit lost with integral: $$\int{\frac{\sqrt{1-x^2}}{(x+\sqrt{1-x^2})^2} dx}$$
I have already worked on in and done substitution $x = \sin(t)$:
This brings me to: $$\int{\frac{\cos(t)^2}{(\sin(t)+\cos(t))^2}dt}$$
Simplifying the denominator using $(\sin t + \cos t)^2 = 1 + \sin(2t)$ brings me to:
$$\int{\frac{\cos(t)^2}{\sin(2t)+1}dt}$$
I can split this fraction into two integrals by doing $\cos(t)^2 = 1-\sin(t)^2$ but this doesn't help me to solve the integral further.
Please, can you show me how to continue to "break it" :-)
Best thanks!
|
Instead of doing a trig substitution, expand the denominator and simplify the integrand.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Real roots plot of the modified bessel function Could anyone point me a program so i can calculate the roots of
$$ K_{ia}(2 \pi)=0 $$
here $ K_{ia}(x) $ is the modified Bessel function of second kind with (pure complex)index 'k' :D
My conjecture of exponential potential means that the solutions are $ s=2a $ with
$$ \zeta (1/2+is)=0. $$
|
I can quantify somewhat Raymond's suggestion that the zeros of $K_{ia}(2\pi)$ are much more regular than the zeros of $\zeta(1/2+i2a)$. The calculations below are rough and I didn't verify the details, so this is perhaps more of a comment than an answer.
The Bessel function in question has the integral representation
$$
K_{ia}(2\pi) = \int_0^\infty e^{-2\pi \cosh t} \cos(at)\,dt.
$$
Naively applying the saddle point method to this integral indicates that, for large $a$,
$$
\begin{align}
K_{ia}(2\pi) &\approx \frac{e^{-a\pi/2} \sqrt{\pi}}{(a^2-4\pi^2)^{1/4}} \Bigl( \sin f(a) + \cos f(a) \Bigr) \\
&=\frac{e^{-a\pi/2} \sqrt{2\pi}}{(a^2-4\pi^2)^{1/4}} \cos\left( f(a) - \frac{\pi}{4} \right),
\end{align}
$$
where
$$
f(a) = \frac{a}{2} \log\left(\frac{a^2-2\pi^2+a\sqrt{a^2-4\pi^2}}{2\pi^2}\right) - \sqrt{a^2-4\pi^2}.
$$
Here's a plot of $K_{ia}(2\pi)$ in blue versus this approximation in red (both scaled by a factor of $e^{a\pi/2}\sqrt{a}$ as in Raymond's graph).
The approximation has zeros whenever
$$
f(a) = \frac{3\pi}{4} + n\pi, \tag{$*$}
$$
and since
$$
f(a) \approx a\log(a/\pi) - a
$$
for large $a$ we expect that solutions of $(*)$ for large $n$ satisfy
$$
a \approx \frac{3\pi/4+n\pi}{W\Bigl((3\pi/4+n\pi)/e\Bigr)},
$$
where $W$ is the Lambert W function.
Here's a plot of $e^{a\pi/2} \sqrt{a} K_{ia}(2\pi)$ in blue with these approximate zeros in red.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to detect an asymptote So I'm trying to write a program to draw graphs that are entered by the user. The way I draw them is by finding y values at $x=a$ number of $x$ values across the graph and then connecting them by lines. However, some graphs (like $\tan(x)$) have asymptotes, so the bottom and top values are connected. Is there any mathematical way to detect whether there is a asymptote at a point or not?
|
There is a limit test at a point $x=a$: if $\lim_{x\to a}f(x)=\pm\infty$ (or either one-sided limit is $\pm\infty$), then $x=a$ is a vertical asymptote.
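For a plotting program, a practical alternative to symbolic limits is a sampling heuristic: break the polyline wherever consecutive samples are non-finite or jump by more than a threshold. A rough sketch (the threshold and sample count are tunable assumptions, and steep but continuous functions can trigger false splits):

```python
import math

def sample_segments(f, x0, x1, n=1000, jump=100.0):
    """Sample f on [x0, x1]; split the polyline wherever f is undefined,
    non-finite, or consecutive y-values differ by more than `jump`.
    A plotting heuristic, not a rigorous asymptote test."""
    step = (x1 - x0) / n
    segments, current, prev_y = [], [], None
    for i in range(n + 1):
        x = x0 + i * step
        try:
            y = f(x)
        except (ValueError, ZeroDivisionError):
            y = None
        bad = y is None or not math.isfinite(y)
        if bad or (prev_y is not None and abs(y - prev_y) > jump):
            if current:
                segments.append(current)
            current = []
        if not bad:
            current.append((x, y))
        prev_y = None if bad else y
    if current:
        segments.append(current)
    return segments

segments = sample_segments(math.tan, -4.0, 4.0)
print(len(segments) >= 3)  # True: tan's branches are separated at its asymptotes
```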
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
With regards to vector spaces, what does it mean to be 'closed under addition?' My linear algebra book uses this term in the definition of a vector space with no prior explanation whatsoever. It proceeds to then use this term to explain the proofs.
Is there something painfully obvious I'm missing about this terminology and is this something I should already be familiar with?
The proof uses $u + v$ is in $V$
|
Consider the collection of points that literally lie on the $x$-axis or on the $y$-axis. We could still use Cartesian vector addition to add two such things together, like $(2,0)+(0,3)=(2,3)$, but we end up with a result that is not part of the original set. So this set is not closed under this kind of addition.
If addition is defined at all on a set, to be closed on that set, the result of an addition needs to still be in that set.
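This failure of closure can be checked mechanically (a tiny sketch; the helper name `on_axes` is mine):

```python
def on_axes(v):
    # membership test for the set: points lying on the x-axis or the y-axis
    x, y = v
    return x == 0 or y == 0

u, w = (2, 0), (0, 3)
s = (u[0] + w[0], u[1] + w[1])   # Cartesian vector addition

assert on_axes(u) and on_axes(w)
assert not on_axes(s)            # (2, 3) left the set: not closed under addition
```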
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 4
}
|
Modeling a chemical reaction with differential equations The problem says:
Two chemicals $A$ and $B$ are combined to form a chemical $C$. The rate, or velocity, of the reaction is proportional to the product of the instantaneous amounts of $A$ and $B$ not converted to chemical $C$. Initially, there are $40$ grams of $A$ and $50$ grams of $B$, and for each gram of $B$, $2$ grams of $A$ is used. It is observed that $10$ grams of $C$ is formed in $5$ minutes. How much is formed in $20$ minutes? What is the limiting amount of $C$ after a long time? How much of chemicals $A$ and $B$ remain after a long time?
$\ A_0$ = 40 g,
$\ B_0$ = 50 g.
Well, first of all, $\alpha $ = $\ A_0 \frac{M+N}{M} $ and $\beta $ = $\ B_0 \frac{M+N}{N} $
and then our differential equation must be:
$\frac{dX}{dt} = k(\alpha-X)(\beta-X) $ which can be easily solved. In order to create $\ x$ part of the chemical C we will need 2 parts of $\ A$ and one part of $\ B$. This lead me to believe that $\ M$ = 2 and $\ N$ = 1
By calculating $\alpha$ = $\ 40 \frac{2+1}{2}$ = 60 and $\beta $=$\ 50 \frac{2+1}{1}$ = 150. The differential equation must become $\frac{dX}{dt} = k(\ 60-X)(\ 150-X) $ right?
I separate the variables and solved the equation
$$
\int \frac{dx}{(60-x)(150-x)}\, = \int kdt
$$
$$
\ln \frac{150-x}{60-x} = 90kt+C_1
$$
By using X(0)=0,
$$
\frac{150-x}{60-x} = Ce^{90k0}, C=\frac{5}{2}
$$
and using X(5)=10 and solving for k,
$$
\frac{150-10}{60-10} = \frac{5}{2}e^{450k}, \quad k = 2.5184\times 10^{-4}
$$ and this is different from the value of $k$ in the solution manual, which is $k = 1.259\times 10^{-4}$, and the differential equation is also different; they obtain
$\frac{dX}{dt} = k(\ 120-2X)(\ 150-X)$
And I'm wondering why! I assume my mistake is in the values for $\ M$ and $\ N$. Can you give me a hand with this?
|
Solving our equation for x: $$
\frac{150-x}{60-x}=C_1e^{90kt}
$$
We obtain:
$$
X(t)=\frac{60 C_1e^{90kt}-150}{C_1e^{90kt}-1}
$$
And if we substitute $k=2.5184\times 10^{-4}$ and $C_1 = \frac{5}{2}$ this would be:
$$
X(t)=\frac{150 e^{0.0226656t}-150}{\frac{5}{2}e^{0.0226656t}-1}
$$
Now we're able to answer Part A:
$$
X(20)=\frac{150 e^{0.0226656*20}-150}{\frac{5}{2}e^{0.0226656*20}-1}=29.32
$$
Part B:
$$
\lim_{t \to \infty} X(t) = \lim_{t \to \infty} \frac{150 e^{0.0226656t}-150}{\frac{5}{2}e^{0.0226656t}-1} = 60 \text{ g}
$$
This is the limiting (maximum possible) amount of C.
And finally Part C:
Chemical A remaining after a long time:
$$
A = A_0-\frac{M}{M+N}(X) = 40-\frac{2}{3}(60) = 0 g
$$
And Chemical B after a long time:
$$
B = B_0-\frac{N}{M+N}(X) = 50-\frac{1}{3}(60) = 30 g
$$
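The numbers above can be reproduced in a few lines (a sketch using the constants derived in this answer):

```python
import math

k = 2.5184e-4        # rate constant fitted from X(5) = 10
C1 = 5 / 2           # integration constant from X(0) = 0

def X(t):
    # X(t) = (60*C1*e^(90kt) - 150) / (C1*e^(90kt) - 1)
    e = C1 * math.exp(90 * k * t)
    return (60 * e - 150) / (e - 1)
```

Evaluating `X(20)` recovers Part A, and `X(t)` for large `t` approaches the limiting amount of 60 g from Part B.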
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/714941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
tangent line vs secant line According to definition, secant lines intersect the curve on two different points say $P,Q$ while tangent lines intersect only at one point. Also according definition with $P$ fixed and $Q$ variable as $Q$ approaches $P$ along the curve direction of secant approaches that of tangent.
Now my question is if is curve like sine curve then can we find a tangent line on arbitrary point if so i guess it will contradict the definition of tangent line that it only intersect only a single point of the curve
Please help
Ahsan
|
The definition of tangent is not that it just intersects at one point. It has to do with precisely the way the line touches the curve at that point, and nothing to do with what happens anywhere else.
If you zoom in closer and closer to the point of tangency, and as you get closer, the curve and the line become indistinguishable, then it's a tangent line. It doesn't matter how many times it might contact the curve at other points, as long as it matches at the point we're interested in.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Solving this recurrence relation Hi all I'm preparing for a midterm and the following appeared as a practice problem that I'm not quite sure how to solve. It asks to find a tight bound on the recurrence using induction
$$
{\rm T}\left(n\right)
={\rm T}\left(\left\lfloor{n \over 2}\right\rfloor\right)
+{\rm T}\left(\left\lfloor{n \over 4}\right\rfloor\right)
+{\rm T}\left(\left\lfloor{n \over 8}\right\rfloor\right)
+n
$$
I'm aware a similar question has been asked here before, but that thread dates back to a year or so ago and I never really understood the rationale behind the answer given (see here: Need some help with this recurrence equation). My guess is that it is in $\Theta(n)$, but I'm not sure how to get a more precise relation than that. I've tried expanding out the recurrence a bit, but I'm not seeing any obvious pattern.
|
The solution is clearly increasing. Use the change of variables $n = 2^k$ and $T(2^k) = a_k$, so that:
$$
a_k = a_{k - 1} + a_{k - 2} + a_{k - 3} + 2^k
$$
This is the same as:
$$
a_{k + 3} = a_{k + 2} + a_{k + 1} + a_k + 8 \cdot 2^k
$$
Define $A(z) = \sum_{k \ge 0} a_k z^k$, multiply the recurrence by $z^k$ and sum over $k \ge 0$ to get, after recognizing some sums:
$$
\frac{A(z) - a_0 - a_1 z - a_2 z^2}{z^3}
= \frac{A(z) - a_0 - a_1 z}{z^2}
+ \frac{A(z) - a_0}{z}
+ A(z) + 8 \cdot \frac{1}{1 - 2 z}
$$
Set $a_0 = a_1 = a_2 = 1$ so that:
$$
A(z) = \frac{1 - 2 z - z^2 + 10 z^3}{(1 - 2 z) (1 - z - z^2 - z^3)}
$$
By Descartes' rule of signs, $p(z) = 1 - z - z^2 - z^3$ has at most one positive zero, and we know $p(0) = 1$ and $p(1) = -2$ (the positive root is approximately $0.543689$, but in any case it lies strictly between $1/2$ and $1$). Considering $p(-z) = 1 + z - z^2 + z^3$, the same rule gives at most two negative zeros. But we can write:
$$
p(z) = 1 - \frac{z (z^3 - 1)}{z - 1}
$$
For negative $z$ the fraction $\frac{z (z^3 - 1)}{z - 1}$ is negative, so $p(z) > 1$ there: $p$ has no negative zeros.
A simple bound on the zeros of $a_0 + \dotsb + a_n z^n$ is that they are in the circle of radius $\rho$ around the origin, where:
$$
\rho = \min\left\{ n \left\lvert \frac{a_0}{a_1} \right\rvert, \sqrt[n]{\left\lvert \frac{a_0}{a_n} \right\rvert} \right\}
$$
This gives $\rho = 1$ in our case. By Bender's theorem (Bender, "Asymptotic Methods in Enumeration", SIAM Review 16:4 (oct 1974), pp 485-515), which states that if $A(z) = \sum_{n \ge 0} a_n z^n, B(z) = \sum_{n \ge 0} b_n z^n$, with convergence radii $\alpha \ge \beta > 0$, $C(z) = A(z) \cdot B(z) = \sum_{n \ge 0} c_n z^n$, with $\lim_{n \to \infty} \frac{b_{n - 1}}{b_n} = b$, $A(b) \ne 0$ then $c_n \sim A(b) b_n$.
Apply this with $A(z) = (1 - 2 z - z^2 + 10 z^3) / (1 - z - z^2 - z^3)$ and $B(z) = (1 - 2 z)^{-1}$. The limit is $b = 1/2$ and $A(1/2) = 8$, so the theorem tells us that:
$$
a_k \sim 8 \cdot 2^k
$$
Translating back, $T(n) \sim 8n$.
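The asymptotics $a_k \sim 8 \cdot 2^k$ can be checked directly from the recurrence; convergence is slow, since the positive root of $1 - z - z^2 - z^3$ (about $0.5437$) lies close to the dominant singularity at $1/2$:

```python
def a_sequence(kmax):
    # a_0 = a_1 = a_2 = 1 and a_k = a_{k-1} + a_{k-2} + a_{k-3} + 2^k
    seq = [1, 1, 1]
    for k in range(3, kmax + 1):
        seq.append(seq[-1] + seq[-2] + seq[-3] + 2 ** k)
    return seq

seq = a_sequence(60)
ratio = seq[60] / (8 * 2 ** 60)   # should be close to 1
```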
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
$N$ balls and $M$ boxes, probability of last $i$ boxes are empty I encountered this problem. There are $M$ boxes and $N$ balls. Balls are thrown into the boxes at random, each ball landing in any given box with probability $\frac1M$, independently. The boxes are numbered $1, 2, 3, \ldots, M$.
What is the probability that the last $i$ boxes are empty, for $i = 1, 2, 3, \ldots, M-1$?
I appreciate any insight on the problem.
|
There are $M^N$ functions from the set of balls to the set of boxes, all equally likely. The number of functions that miss $i$ specific boxes is $(M-i)^N$.
Equivalently, the probability that the first ball misses $i$ specified boxes is $\frac{M-i}{M}$. By independence, the probability they all do is $\left(\frac{M-i}{M}\right)^N$.
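A quick Monte Carlo simulation agrees with this formula (a sketch; the parameter values and function name are arbitrary):

```python
import random

def prob_last_i_empty(M, N, i, trials=200_000, seed=1):
    """Estimate P(the last i boxes stay empty) by throwing N balls
    uniformly into M boxes, many times."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.randrange(M) < M - i for _ in range(N))  # every ball misses the last i boxes
        for _ in range(trials)
    )
    return hits / trials

estimate = prob_last_i_empty(M=5, N=3, i=2)
exact = ((5 - 2) / 5) ** 3       # ((M-i)/M)^N
```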
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Completeness proof? First of all, this is not a question about a specific problem, but more about a general technique. When I face a problem such as "show that a metric space $(M,d)$ is complete", the first thing I do is to say: if a metric space is complete, then every Cauchy sequence $(x_n)$ where $x_n\in M$ for all $n$ converges in $M$. From there, I am clueless as to how to proceed: are we allowed to assume that $(x_n)$ at least converges to something that may or may not be in $M$?
|
In practice, you often construct your limit object explicitly. This can be done by using the completeness of a well-known space, generally $\mathbb{R}$ itself. You probably know the following examples, but the principle is very general.
Consider $C_b$, the space of continuous and bounded functions from $\mathbb{R}$ to itself, with the norm of uniform convergence $\lVert \cdot \rVert_\infty$. Given a Cauchy sequence $(f_n)$, you know that $(f_n(x))$ is a real Cauchy sequence for each $x$, which allows you to define $f(x)$ as its limit (using that $\mathbb{R}$ is complete). Thus you have defined a function $f$, and you can then show that your Cauchy sequence converges to it.
Another example that doesn't use the completeness of $\mathbb{R}$. To show that the space $L^p$ of measurable functions with finite $L^p$-norm is complete, you can proceed the following way. Given a Cauchy sequence for the $L^p$ norm, you show that a subsequence converges almost surely, which defines clearly your limit object, and then show that the whole sequence converges.
Very often, you will have to follow that plan: construct your limit, and show that your Cauchy sequence actually converges to it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Is this node proper or improper We have the Jacobian matrix (note: $c > b$ and $a,b,c>0$): $$J\left(\frac{a}{b}, 0\right) = \begin{pmatrix} -a & -\frac { ca }{ b } \\ 0 & a-\frac { ca }{ b } \end{pmatrix}$$ which is lower triangular so has eigenvalues $\lambda_{1} = -a < 0, \lambda_{2} = a-\frac{ca}{b}$. Since $c > b$ we have that $\lambda_{2} < 0$. So this is a stable node, since $\lambda_{1}, \lambda_{2} < 0$ it is an improper node.
Is this correct?
|
We have the eigenvalues as:
$$\lambda_{1,2} = \left\{-a,\frac{a (b-c)}{b}\right\}$$
We have:
*
*For the first eigenvalue, $a \gt 0 \implies \lambda_1 \lt 0$.
*For the second eigenvalue, we have cases:
*Case 1: $c \lt b$, $\lambda_2 \gt 0$. This is a saddle.
*Case 2: $c \gt b \gt 0$ (the stated assumption), $\lambda_2 \lt 0$. Eigenvalues are real, unequal, both negative; we have an improper node (asymptotically stable).
Note: We can have an improper node when we have real, unequal, both positive (unstable)/negative (asymptotically stable). We can have a proper or improper node when we have real and equal, both positive (unstable) / negative (asymptotically stable).
For example, if we have a real, double eigenvalue (positive or negative) with only one linearly independent eigenvector, the critical point is called an improper or degenerate node.
Here is a nice summary.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability of multiple events: continued I have 5 independent events. How do I calculate the probability of event A occurring plus 1 other? How do I calculate the probability of event A occurring plus 2 others? What about the occurrence of A & B plus one other, but not the rest? I have multiple variations of this I'm trying to figure out.
Event A: 15%
Event B: 30%
Event C: 15%
Event D: 10%
Event E: 10%
|
The key here is to leverage independence. The first thing to note is that $A$ is independent of the event "at least one of $B,C,D,E$ occurs", so that
$$
P(A\cap\{\text{at least one of }B,C,D,E\})=P(A)\cdot P(\text{at least one of }B,C,D,E).
$$
So, how can you compute the probability that at least one of $B,C,D,E$ occurs? Well, for starters,
$$
P(\text{at least one of }B,C,D,E)=1-P(\bar{B}\cap\bar{C}\cap\bar{D}\cap\bar{E}),
$$
where $\bar{X}$ denotes the event that $X$ does not occur. But this probability is not bad to compute: the fact that $B,C,D,E$ are independent implies that $\bar{B},\bar{C},\bar{D},\bar{E}$ are independent as well, so that
$$
\begin{align*}
P(\bar{B}\cap\bar{C}\cap\bar{D}\cap\bar{E})&=P(\bar{B})\cdot P(\bar{C})\cdot P(\bar{D})\cdot P(\bar{E})\\
&=0.7\cdot0.85\cdot0.9\cdot0.9\\
&=0.48195.
\end{align*}
$$
Thus the probability that at least one of the events occurs is
$$
P(\text{at least one of }B,C,D,E)=1-0.48195=0.51805.
$$
I'll let you finish it up from here.
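Every variation in the question can be computed the same way, by enumerating the $2^5$ joint outcomes and using independence (a sketch; the `prob` helper is my own naming, with the event probabilities given above):

```python
from itertools import product

p = {"A": 0.15, "B": 0.30, "C": 0.15, "D": 0.10, "E": 0.10}

def prob(predicate):
    """P(predicate) for five independent events: sum the weight of
    every outcome (True/False per event) satisfying the predicate."""
    total = 0.0
    for values in product([True, False], repeat=len(p)):
        outcome = dict(zip(p, values))
        weight = 1.0
        for event, happened in outcome.items():
            weight *= p[event] if happened else 1 - p[event]
        if predicate(outcome):
            total += weight
    return total

# "A and at least one of the others"
p_A_plus_one = prob(lambda o: o["A"] and any(o[e] for e in "BCDE"))
```

Changing the predicate handles "A plus exactly 2 others", "A & B plus one other, but not the rest", and so on.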
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
recursive formulas I'm studying linear algebra. I don't know how to deal with recursive formulas, for example
*
*fibonacci
*when you should find a big determinant of size $n$ (I think the Vandermonde determinant is computed via a recursive formula there, or is it induction?)
*how do you go from a recursive formula to an explicit one?
Bad question, but I do need some guidance here.
|
(From a bit of a computer science perspective)
A recursive formula has two parts: a terminal condition, and a recursive call.
For instance, let's say Fibonacci numbers. Let's say that $f(n)$ will calculate the $n$-th Fibonacci number. Our domain is the natural numbers.
The function is therefore piecewise:
$$
f(n) =
\begin{cases}
1, & \text{if }n\text{ = 1 or $n$ = 2} \\
f(n-1) + f(n-2), & \text{if }n\text{ > 2}
\end{cases}
$$
The terminal condition is when $n = 1$ or $n=2$. In either case, the function returns a hard value.
The recursive call is the other part. We are defining the function (for certain values) in terms of itself. If $n$ is large enough, this will cause more recursive calls. Eventually, one of the recursive calls will be at $n=1$ or $n = 2$, so it will return a hard value. And then the recursive call that made that recursive call has a hard value, and so on.
Here's an example with $n =5$:\begin{align}
f(5) = f(3) + f(4)\\
= f(2) + f(3) + f(2) + f(1)\\
= 1 + f(1) + f(2) + 1 + 1\\
= 1+1+1+1+1\\
=\boxed{5}\\
\end{align}
Also, you could have a recursive formula for the determinant of an $n \times n$ matrix. Think about what the terminal condition would be: for which values of $n$ do you know a simple way to compute the determinant? Undoubtedly for $n=1$ and $n=2$, but maybe even $n=3$? And then think about how to recursively define the determinant of a larger matrix based on the determinants of smaller matrices (hint: cofactor expansion).
However, a warning: A recursive formula for determinants is very inefficient, and will take extremely long times to compute (even by a computer) for larger values of $n$.
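The piecewise definition above translates directly into code; memoization (a standard addition, not part of the definition) avoids the exponential blow-up of naive recursion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # terminal condition
    if n == 1 or n == 2:
        return 1
    # recursive call: the function is defined in terms of itself
    return f(n - 1) + f(n - 2)
```

For example, `f(5)` unwinds exactly as in the hand computation above and returns 5.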
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Isomorphism between this subgroup of complex numbers and all finitely generated abelian groups ? Every cyclic (abelian) group of infinite order is isomorphic to $G=(\mathbb{Z},+)$. Is there a corresponding set of groups $S_G=\{G\}$ such that every finitely generated abelian group of infinite order is isomorphic to at least one group in $S_G$?
More precisely,
*
*Let $A$ be the set of all finitely generated abelian groups of infinite order, then is there an $S_G$ so that $\exists$ a mapping $f: A \to S_G$ such that $f(A_1)=G\Leftrightarrow A_1 \approx G$?
*Can $S_G,f$ be chosen such that $f$ is an isomorphism?
Here's an idea I had for a possible $S_G$:
$I_N=\{1,...,N\}$, with $m$ an integer and $p_i$ the $i$th prime, I posit that each group, with multiplication as binary operation, is of the form $$G_{m,N,M}=\left\{ re^{i \theta}\;\middle|\; r=\prod_{i \in I_N}p_i^{a_i} ,\ \theta \in \left\{\tfrac{1}{m}\prod_{i \in I_M}p_i^{b_i} \bmod 1\right\}\right\}$$
Given that $m,N,M$ are free to vary, I think $S_G$ is sufficiently big for an $f$ to exist. Is this provable, and can a very similar $G_{m,N,M}$ be constructed with fewer free parameters (i.e., here, $m,N,M$)?
|
Take
$$
S_G=\left\{\mathbb Z^r\times \prod_{i=1}^n \mathbb Z/p_i^{a_i}\mathbb Z:n,r,a_i\in\mathbb Z^+,p_i\text{ prime}\right\}.
$$
This is using the classification theorem for finitely generated abelian groups.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proof that $f(x)=x^{1/n}$ is continuous. Here's what I've done:
According to the definition, a function is continuous at $c$ if, for any $\epsilon>0$, there exists a $\delta>0$ so that, if $|x-c| < \delta$, then $|f(x)-f(c)| < \epsilon$.
$$\begin{split}
|f(x)-f(c)| < \epsilon
& \Leftrightarrow \left|x^{1/n}-c^{1/n}\right| < \epsilon \\
& \Leftrightarrow c^{1/n}-\epsilon < x^{1/n} < c^{1/n}+\epsilon \\
& \Leftrightarrow \left(c^{1/n}-\epsilon\right)^n < x < \left(c^{1/n}+\epsilon\right)^n,
\end{split}
$$
(which we can call $a < x < b$).
Thus, if we make $\delta = \min\{c-a,b-c\}$, then
$$|x-c| < \delta \Rightarrow |f(x)-f(c)| < \epsilon$$
Which proves its continuity. Have I done anything wrong?
|
I think you want to take $\delta = \min\{(c^{1/n}+\epsilon)^n-c, c-(c^{1/n}-\epsilon)^n\}$ for $c\ne0$. Otherwise just take $\delta=\epsilon^n$.
EDIT: Looks like you made the correction as I was posting.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/715909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Limits involving logarithm and argument in the complex plane
*
*$\operatorname{Log}((2/n) + 2i)$ as $n \to \infty$
*$\operatorname{Log}(2 + (2i/n))$ as $n \to \infty$
*$\operatorname{Arg}((1+i)/n)$ as $n \to \infty$
*$(\operatorname{Arg}(1+i))/(n)$ as $n \to \infty$
For the Log questions, I am getting $(i\pi)/2 + \log(2)$ for the first problem, then for the second I am getting only $\log(2)$. Because the Log's in the questions are capitalized, I think I may have to add on $2\pi i k$ to each of the answers. Is that correct?
for the last two problems (the Arg problems) I got zero for both because as n goes to infinity the n is the denominator for each so I thought they probably each go to zero. but also, the Arg is capitalized here as well, so I am getting the feeling I am doing these wrong. Can anybody help? Thanks!
|
The meaning of capitalized names such as $\operatorname{Log}$ varies by source. I assume that $ \operatorname{Log}$ has been defined so that it's continuous at $2i$ and at $2$; this is the case for the common definitions I'm familiar with. Check your definition. Then
*
*$\operatorname{Log}((2/n) + 2i) \to \operatorname{Log}(2i)$ as $n \to \infty$
*$\operatorname{Log}( 2 + 2i/n) \to \operatorname{Log}(2)$ as $n \to \infty$
*$\operatorname{Arg}((1+i)/n) = \operatorname{Arg}(1+i)$ for all $n$; this is a constant sequence. Argument of a complex number is not affected by scaling.
*$(\operatorname{Arg}(1+i))/n \to 0$ since numerator does not depend on $n$, while the denominator grows indefinitely.
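These four limits can be confirmed numerically with Python's `cmath`, whose `log` and `phase` use the principal branch with argument in $(-\pi,\pi]$ (check that this matches your course's definition of $\operatorname{Log}$ and $\operatorname{Arg}$):

```python
import cmath

n = 10 ** 8   # a stand-in for "large n"

# 1. Log(2/n + 2i) -> Log(2i) = log 2 + i*pi/2
assert abs(cmath.log(2 / n + 2j) - (cmath.log(2.0) + 1j * cmath.pi / 2)) < 1e-6
# 2. Log(2 + 2i/n) -> Log(2) = log 2
assert abs(cmath.log(2 + 2j / n) - cmath.log(2.0)) < 1e-6
# 3. Arg((1+i)/n) = Arg(1+i) = pi/4 for every n > 0: scaling by a positive real
assert abs(cmath.phase((1 + 1j) / n) - cmath.pi / 4) < 1e-12
# 4. Arg(1+i)/n -> 0
assert abs(cmath.phase(1 + 1j) / n) < 1e-6
```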
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $A \cong \mathbb{C}^n$ with A a commutative algebra
Let $A$ be a commutative $\mathbb{C}$-algebra of finite dimension. If $A$ has no nilpotent elements other than $0$, is it true that $A \cong \mathbb{C}^n$?
The question arose in my mind; I thought that the finite dimension tells us that the scheme is Artinian (geometrically of dimension $0$).
I think the scheme is just a union of $n$ points, but I have not managed to prove it.
Someone can enlighten me please ?
Thanks
|
An artinian ring $A$ has only finitely many prime ideals, which are all maximal. Thus, by the Chinese remainder theorem,
$$A / \mathfrak{n} \cong A / \mathfrak{m}_1 \times \cdots \times A / \mathfrak{m}_r$$
where $\mathfrak{m}_1, \ldots, \mathfrak{m}_r$ are the distinct prime/maximal ideals of $A$ and $\mathfrak{n}$ is the nilradical/Jacobson radical.
In particular, if the only nilpotent element of $A$ is $0$, then $A$ is a product of finitely many fields. Moreover, if $A$ is a finite $\mathbb{C}$-algebra, then each $A / \mathfrak{m}_i$ must also be a finite $\mathbb{C}$-algebra, hence, must be (isomorphic to) $\mathbb{C}$ itself.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Combining results with Chinese Remainder Theorem? $9x^2 + 27x + 27 \equiv 0 \pmod{21}$
What is the "correct" way to solve this using the Chinese Remainder Theorem? How do I correctly solve this modulo $3$ and modulo $7$ without brute force?
|
First, modulo $3$, your congruence reduces to $0\equiv 0$, because all coefficients are multiples of $3$. Therefore there are three solutions: $x\equiv 0,1,2 \pmod 3$.
Working modulo $7$ the congruence becomes $2x^2+6x+6\equiv 0$, or $x^2+3x+3\equiv 0$, since we can multiply both sides by the inverse of $2$. To solve this, we use the quadratic formula, noting that in our case $\frac{1}{2a}=4$. Thus $x\equiv 4(-3\pm\sqrt{9-12})\equiv 4(4\pm\sqrt{4})\equiv 4(4\pm2)\equiv 3$ or $1$.
Now, you have three solutions modulo $3$ and two solutions modulo $7$, so that's six combinations to feed into the Chinese Remainder Theorem.
$x\equiv_3 0, x\equiv_7 1 \implies x\equiv_{21} 15 \\
x\equiv_3 1, x\equiv_7 1 \implies x\equiv_{21} 1 \\
x\equiv_3 2, x\equiv_7 1 \implies x\equiv_{21} 8 \\
x\equiv_3 0, x\equiv_7 3 \implies x\equiv_{21} 3 \\
x\equiv_3 1, x\equiv_7 3 \implies x\equiv_{21} 10 \\
x\equiv_3 2, x\equiv_7 3 \implies x\equiv_{21} 17 \\$
As noted in the comments above, since the solution is everything modulo $3$, it makes more sense in this case to just solve the problem modulo $7$. However, you wanted to see the gory details, and the above method generalizes just fine.
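A brute-force sweep over all residues modulo $21$ confirms these six solutions:

```python
# all solutions of 9x^2 + 27x + 27 ≡ 0 (mod 21) among 0..20
solutions = [x for x in range(21) if (9 * x * x + 27 * x + 27) % 21 == 0]

# every solution satisfies x ≡ 1 or 3 (mod 7), with any residue mod 3
mod7 = sorted({x % 7 for x in solutions})
```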
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Question about $T_n = n + (n-1)(n-2)(n-3)(n-4)$
The formula $T_n = n + (n-1)(n-2)(n-3)(n-4)$ will produce an arithmetic sequence for $n < 5$ but not for $n \ge 5$. Explain why.
I think it is because if $n$ is less than five, the product term is equal to zero and the common difference is one. If $n$ is greater than or equal to five, the product term is nonzero and varies with $n$, so there is no longer a common difference.
Is this correct?
|
Your reasoning is correct. You can generalize it to say that
$$T_n = (an + b) + (n-1)(n-2)\cdots(n-k)$$
is an arithmetic progression for $1 \le n \le k$, for some arbitrary integer $k \ge 1$. This is because similarly, the product evaluates to $0$ for these values of $n$, to leave
$$T_n = an + b$$
This remainder essentially defines an arithmetic progression with common difference $a$.
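A quick computation of consecutive differences shows exactly where the progression breaks:

```python
def T(n):
    return n + (n - 1) * (n - 2) * (n - 3) * (n - 4)

# T(n+1) - T(n) for n = 1..6: constant (= 1) only while the product term vanishes
diffs = [T(n + 1) - T(n) for n in range(1, 7)]
```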
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
A Flat Tire Excuse I have this multi-part question on an assignment that I don't understand. Hopefully someone can help.
There's a story that 4 students missed their final and asked their professor for a make-up exam claiming a flat tire as their excuse. The professor agreed and put them in separate rooms. The first question of the test was easy and only worth a few points. The second question comprised all of the remaining points and asked "which tire went flat? RF, RR, LF, LR?"
*
*Which hypothesis set is true?
HS1: Ho: Students told the truth. HA: Students lied OR
HS2: Ho: Students lied. HA: Students told the truth
I think that HS2 is the correct answer but I'm not certain as I feel like it could be either depending on how you look at it.
*
*What is the rejection region of your test?
I know this relies on question 1 but I'm not sure how to move forward in general not to mention that I'm not positive of my answer to question 1.
*
*What is the Type 1 error rate alpha of the rejection region you defined?
*After collecting the students answers, how do you define the p-value of such answers for the test?
I'm really so confused about all of this. So I'm really hoping someone can help.
|
Hm ... how about this:
H0: flat tire
HA: students lied
If H0 holds then the probability of differing answers is exactly 0 (assuming the students have good memories).
1) Hence, reject H0 if there are any differing answers; the p-value of such an outcome is exactly 0.
2) Keep H0 if the answers agree. Assuming lying students each guess a tire uniformly and independently, they all agree with probability $4 \cdot (1/4)^4 = 1/64$, so the power of the test is $1 - 1/64$.
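Enumerating all $4^4 = 256$ possible answer combinations (under the added assumption, not stated in the problem, that lying students guess uniformly and independently) gives the agreement probability:

```python
from itertools import product

tires = range(4)                           # RF, RR, LF, LR
outcomes = list(product(tires, repeat=4))  # one answer per student
agree = [o for o in outcomes if len(set(o)) == 1]

p_agree_if_lying = len(agree) / len(outcomes)   # 4 agreeing outcomes out of 256
power = 1 - p_agree_if_lying
```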
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why is $\pi_1(\Bbb{R}^n,x_0)$ the trivial group in $\Bbb{R}^n$? My Algebraic Topology book says
Let $\Bbb{R}^n$ denote Euclidean n-space. Then $\pi_1(\Bbb{R}^n,x_0)$ is the trivial subgroup (the group consisting of the identity alone).
I wonder why that is. I can imagine infinite continuous "loops" in $\Bbb{R}^3$ that start and end at $x_0$.
Thanks in advance!
|
The problem with your last sentence is that $\pi_1(X,x_0)$ is not the set of loops based at $x_0$, but the set of homotopy classes of loops based at $x_0$.
Can you see why every loop in $\mathbb R^n$ based at $x_0\in \mathbb R^n$ is homotopic to the constant loop at $x_0$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Proving Vector Subspaces Question 1:
The set $\mathbb R^3$ of all column vectors of length three, with real entries, is a vector space. Is the subset $$B=\left\{\begin{bmatrix}x\\y\\z\end{bmatrix} \in \mathbb R^3 \;\middle|\; xy+yz= 0\right\}$$ a subspace of $\mathbb R^3$?
Justify your answer.
Attempted answer:
$(0)$: The zero vector is in $B$, since $0\cdot0 + 0\cdot0 = 0$; hence $B$ is non-empty.
$(1)$ A1: Let $u = \begin{bmatrix}0&0&1\end{bmatrix}^T$ and $v = \begin{bmatrix}0&1&0\end{bmatrix}^T$, which are both in $B$, but $u +v = \begin{bmatrix}0&1&1\end{bmatrix}^T$ is not in $B$, since $0\cdot1 +1\cdot1 \neq 0$. Hence $B$ is not closed under addition.
$(2)$ S1: I know it's irrelevant since $B$ is not closed under A1... but is this correct: let $u = \begin{bmatrix}a&b&c\end{bmatrix}^T \in B$; then $k\cdot u = \begin{bmatrix}ka&kb&kc\end{bmatrix}^T\implies (ka)(kb) + (kb)(kc) = k^2(ab) + k^2(bc) = k^2(ab+bc) = 0$, hence $ku$ is clearly in $B$?
Question 2
Show that the set of all twice differentiable functions $f:\mathbb R\to \mathbb R$ satisfying the differential equation $\sin(x)f''(x) + x^2f(x) = 0$ is a vector space with respect to the usual operations of addition of functions and multiplication by scalars. Note that $f''$ is the second derivative of $f$.
Attempted answer: The zero vector of the function space $F$ is the zero function $0(x) = 0$, and we have $0''(x) = 0'(x) = 0$. The set of solutions of the equation is a subset of $F$, and it contains the zero function, since $\sin(x)\cdot 0''(x) + x^2\cdot 0(x) = 0+0 = 0$; so the solution set is non-empty.
|
So for question 2:
A1: If $f$ and $g$ are solutions, then $\sin(x)(f+g)''(x) + x^2 (f+g)(x) = \sin(x) [f''(x) + g''(x)] + x^2 f(x)+x^2 g(x) = \bigl[\sin(x) f''(x) + x^2 f(x)\bigr] + \bigl[\sin(x) g''(x) + x^2 g(x)\bigr] = 0 + 0 = 0$, so clearly $f +g \in F$.
S1: $\sin(x)(\alpha f)''(x) + x^2(\alpha f)(x) = \alpha \bigl[\sin(x) f''(x) + x^2 f(x)\bigr] = \alpha \cdot 0 = 0$, so $\alpha f$ is again a solution, and the solution set is closed under scalar multiplication.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Minimum number of different clues in a Sudoku I wonder if there are proper $9\times9$ Sudokus having $7$ or less different clues. I know that $17$ is the minimum number of clues. In most Sudokus there are $1$ to $4$ clues of every number. Sometimes I found a Sudoku with only $8$ different clues.
In this example the number $9$ is missing, but the Sudoku was very well solvable.
Is it possible to have a $9\times9$ Sudoku with less than $8$ different clues?
|
Does the mere interchangeability of 2 or more missing values really result in more than one solution?
In a standard sudoku these 'values' are only symbols and bear no arithmetical value or meaning. By that reasoning, not even a standard sudoku with 8 different clues could have a unique solution, because you would be free to fill the 9 free cells with any individual symbol at all (a number, or an icon of a clover, pet animal or ladybug).
I think that as long
*
*as the 8, 7 or fewer given symbols allow only one arrangement among themselves
(and with enough givens they will), and
*the remaining available cells can be filled in only one way to accommodate the 1, 2 or more missing (arbitrary) symbols,
that ought to be regarded as a unique solution, and thus the puzzle in question as well-posed.
I am not entirely sure if this is possible to construct, but think it's rather likely.
Thank you for taking up this interesting question.
edit:
As an example, I took the unique solution to a trivial standard sudoku, removed 2 of the symbols and searched for the ways to share the free cells.
A puzzle with only 7 different values given:
-------------------------
| 1 4 7 | 2 8 | 3 6 |
| 2 8 | 3 6 | 4 7 1 |
| 3 6 | 4 7 1 | 8 2 |
-------------------------
| 4 7 1 | 8 2 | 6 3 |
| 8 2 | 6 3 | 7 1 4 |
| 6 3 | 7 1 4 | 8 2 |
-------------------------
| 7 1 4 | 8 2 | 3 6 |
| 8 2 | 3 6 | 1 4 7 |
| 3 6 | 1 4 7 | 2 8 |
-------------------------
And the only possible distribution of the missing values (named arbitrarily):
------------------------- -------------------------
| | | A | | | B | |
| | A | | | B | | |
| A | | | | | | B |
------------------------- -------------------------
| | | A | | | B | |
| | A | | | B | | |
| A | | | | | | B |
------------------------- -------------------------
| | | A | | | B | |
| | A | | | B | | |
| A | | | | | | B |
------------------------- -------------------------
Every symbol in a standard sudoku's solution is placed according to one out of 46656 distribution patterns.
My understanding of a 'unique solution' could be expressed like:
There exists exactly one set of 9 patterns that can be combined without overlapping and where each pattern covers one of the (given or missing) symbols.
Returning to the OP's question, my answer would be 'yes', sudoku puzzles may have less than 8 different values given and still result in a unique solution.
Which brings up some other questions to be maybe explored in the future:
*
*How much further can the number of different values be decreased?
*How much could an individual value's count be decreased?
*And how are these parameters related?
I really wouldn't know how to find a proof in any form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
Proof by induction: $n! > n^2$ for $n\ge 4$
Basis step:
if $n=4$
$4\cdot3\cdot2\cdot1 > 4^2$
$24 > 16$
I don't know how to do the inductive step.
|
Inductive step:
$$(n+1)! = (n+1)\,n! > (n+1)n^2$$ by the inductive hypothesis $n! > n^2$.
But clearly, $n^2 > n + 1$ for every integer $n \ge 2$. You could prove this rigorously by showing that the parabola $y = x^2 - x - 1$ lies strictly above the $x$ axis for $x \ge 2$.
Hence,
$$(n+1)! > (n+1)^2$$
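As a quick numerical sanity check of the claim (not a substitute for the induction), one can compare $n!$ with $n^2$ directly:

```python
import math

# n! > n^2 holds for the base case n = 4 and every larger n we test
for n in range(4, 50):
    assert math.factorial(n) > n ** 2
```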
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Determine if R is an equivalence relation I've got this question: Consider the relationship $R$ between ordered pairs of natural numbers such that $(a, b)$ is related to $(c, d)$ (denoted by $(a, b) R (c, d)$) if and only if $ad = bc$.
Discuss whether $R$ is an equivalence relation.
I'm pretty new to set theory and am wondering if someone can explain how you would go about evaluating if $R$ is an equivalence relation (I really want to understand how to work out the answer, not just the answer, thanks!)
|
*
*Check reflexivity: Is it the case that for all $(a, b)\in \mathbb
N\times \mathbb N$, it is true that $(a, b) R (a, b)$? That is, is it
true that for all such $(a, b)$, $ab = ba$?
*Check symmetry: Is it the case that for all $(a, b), (c, d) \in \mathbb N\times \mathbb N,$ that If $(a, b) R (c, d)$, then $(c, d) R (a, b)$? This means that if $ad = bc,$ is it true that $cb = da$?
*Check transitivity: Is it the case that for all $(a, b), (c, d), (e, f) \in \mathbb N\times \mathbb N,$ that If $(a, b) R (c, d)$ and $(c, d) R (e, f)$, then it follows that $(a, b) R (e, f)?$ We need to show that $$(ad = bc \text{ and } cf = de) \implies af = be$$ This is slightly (very slightly) tricky, but just a little algebra gives the desired result: Suppose we know that $(a, b) R (c, d)$ and $(c, d) R (e, f)$. Then by the definition of $R$, we know that $ad = bc$ and $cf = de$. Then $adcf = bcde$, and so canceling the factors $c, d$ on each side of the equation gives us $af = be$, as desired, since this means $(a, b) R (e, f)$. Hence transitivity holds.
If all three properties hold for $R$ (if you can answer yes to all of the above questions), then $R$ is an equivalence relation.
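For intuition (not a proof, since $\mathbb N\times\mathbb N$ is infinite), the three checks can be run on a finite sample; `related` is an illustrative helper name:

```python
from itertools import product

def related(p, q):
    # (a, b) R (c, d)  iff  a*d == b*c
    (a, b), (c, d) = p, q
    return a * d == b * c

pairs = list(product(range(1, 6), repeat=2))  # a finite sample of pairs

# reflexivity: a*b == b*a
assert all(related(p, p) for p in pairs)
# symmetry: ad == bc implies cb == da
assert all(related(q, p) for p in pairs for q in pairs if related(p, q))
# transitivity on the sample
assert all(related(p, r)
           for p in pairs for q in pairs for r in pairs
           if related(p, q) and related(q, r))
```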
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Application of Composition of Functions: Real world examples? Do you know of a real world example where you'd combine two functions into a composite function? I see this topic in Algebra 2 textbooks, but rarely see actual applications of it. It's usually plug and chug where you take f(g(4)) and run it through both functions. This leads to the idea of creating a composite function f(g(x)). But it's somewhat academic, and it's not like you're saving time b/c you need to run 50 different numbers through both functions.
While on this topic, where is this topic used in later math? In Precalculus, you can determine the domain of the composite function. In Calc, composition is used to describe the ideas behind the Chain Rule. In Calc, you break down a function into the 2 components to show it's continuous. (If the components are continuous, so is the composite function) Any other main areas?
Thanks!
|
A first example from algorithms: you have a list, composed of a head (an element) and a tail (a list). A composition of functions can return the second element of the list, let's say, $L$:
$ Head(Tail(L)) $
This is a simple example from my field of study; I don't know if that's what you're looking for.
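A minimal Python sketch of that composition (slices stand in for the head/tail structure; the names are illustrative):

```python
def head(lst):
    return lst[0]       # the first element

def tail(lst):
    return lst[1:]      # everything after the head

def second(lst):
    # the composition Head(Tail(L)) returns the second element
    return head(tail(lst))

assert second([10, 20, 30]) == 20
```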
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/716907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 1
}
|
The Affine Tangent Cone I'm failing to see how exactly the tangent cone at a singular point on a curve picks out all the different tangent lines through this singular point (say, the origin in $\mathbb{A}^2$).
Could someone explain this, or at least redirect me to a source I could read about? I tried to look online, but at most places they are just taking this as a known fact!
Thanks!
|
Initial remark. Suppose someone gives you a smooth function of two variables, and asks you to calculate the Taylor series at $(0,0)$. If we happen to notice that the first derivatives vanish at the origin, we will call it a critical point.
The idea of the tangent cone is that we are doing exactly this: taking the Taylor series of the simplest such function, a polynomial. And if the first derivatives vanish, we call $(0,0)$ a singular point.
We have a curve $X=V(f)\subset \mathbb A^2$ passing through the origin.
Even if the origin is a nonsingular point of the curve, you do have a tangent cone at the origin: it is also called the tangent line! (and this is precisely encoded in the corresponding first terms of the "Taylor series" of $f$).
However, the tangent cone is constructed starting from the leading form of $f\in k[x,y]$, which is the homogeneous form $\tilde f$ of smallest degree appearing in the decomposition of $f$; it is again in two variables. For instance, the leading form of $f=2x+y-8y^2x$ is $\tilde f=2x+y$, the term of smallest degree. The tangent cone is the zero set $V(\tilde f)$. So the origin is nonsingular if and only if $f$ has leading form of degree one. In that case, $V(\tilde f)$ is exactly the tangent line.
In general, the leading form will be a product of linear factors, each appearing with some exponent. So the scheme $V(\tilde f)$ is a union of lines, where some of them are possibly nonreduced.
If $f$ starts, say, with degree $2$, then for instance $(0,0)$ will be a node in case the (degree $2$) leading form is a product of two distinct linear forms, i.e. something of the kind $$(ax+by)\cdot (cx+dy).$$ If, instead, the leading form has the shape $(ax+by)^2$, something else happens (example: the cusp $f=y^2-x^3$, whose tangent cone is the double line $V(y^2)$).
Reference. All this is beautifully explained in Mumford's The red book of varieties and schemes.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
How is $\{\emptyset,\{\emptyset,\emptyset\}\} = \{\{\emptyset\},\emptyset,\{\emptyset\}\}$? I'm slightly confused as to how
$$\{\emptyset,\{\emptyset,\emptyset\}\} = \{\{\emptyset\},\emptyset,\{\emptyset\}\}$$
can be equal. I thought two sets were equal if and only if "$A$" and "$B$" have exactly the same elements. In this case, we have one element which is in both sets, but then two elements that aren't in the other! Can someone please explain where I am going wrong in my definition?
Thanks a bunch!
|
Maybe it is better to use that $\{ \emptyset, \emptyset\} = \{\emptyset\} \cup \{\emptyset\} = \{\emptyset\}$: listing an element twice does not add a new element. Both sides then reduce to the same set $\{\emptyset, \{\emptyset\}\}$.
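Since repeated elements don't change a set, both expressions collapse to $\{\emptyset,\{\emptyset\}\}$; Python's `frozenset` (which discards duplicates) illustrates this:

```python
E = frozenset()  # the empty set

left = frozenset({E, frozenset({E, E})})                # {∅, {∅, ∅}}
right = frozenset({frozenset({E}), E, frozenset({E})})  # {{∅}, ∅, {∅}}

# both collapse to the same two-element set {∅, {∅}}
assert left == right == frozenset({E, frozenset({E})})
assert len(left) == 2
```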
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Proof that certain number is an integer Let $k$ be an integer and let
$$
n=\sqrt[3]{k+\sqrt{k^2-1}}+\sqrt[3]{k-\sqrt{k^2-1}}+1
$$
Prove that $n^3-3n^2$ is an integer.
(I have started posting any problem I get stuck on and subsequently find a good solution to here on math.se, primarily to get new solutions which might be even better, and to have my solutions checked.)
|
Since $$a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-bc-ca)$$
$$a+b+c=0\implies a^3+b^3+c^3=3abc$$
Let $a=\sqrt[3]{k+\sqrt{k^2-1}}$, $b=\sqrt[3]{k-\sqrt{k^2-1}}$, $c=1-n$
Clearly we have $a+b+c=0$
$$a^3+b^3+c^3=3abc\tag{1}$$
$$ab=\sqrt[3]{k+\sqrt{k^2-1}}\cdot\sqrt[3]{k-\sqrt{k^2-1}}$$
$$ab=\sqrt[3]{k^2-(\sqrt{k^2-1})^2}$$
$$ab=\sqrt[3]1 = 1$$
Substituting back in $(1)$
$$k+\sqrt{k^2-1}+k-\sqrt{k^2-1}+(1-n)^3=3(1-n)$$
$$\require{cancel}{2k+1-\cancel{3n}+3n^2-n^3 = 3-\cancel{3n}}$$
$$n^3-3n^2=2k-2$$
which is obviously an integer.
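A floating-point spot check of the final identity for a few values of $k\ge 1$ (where both radicands stay nonnegative):

```python
# verify n^3 - 3n^2 = 2k - 2 numerically
for k in range(1, 10):
    a = (k + (k * k - 1) ** 0.5) ** (1 / 3)
    b = (k - (k * k - 1) ** 0.5) ** (1 / 3)
    n = a + b + 1
    assert abs((n ** 3 - 3 * n ** 2) - (2 * k - 2)) < 1e-9
```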
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$\mathbb{Z}[i]/(1+i) \cong \mathbb{Z}/\mathbb{2Z}$ When I first looked at this problem, I thought that $\mathbb{Z}[i]/(1+i) \cong \mathbb{Z}/5\mathbb{Z}$, but apparently the correct answer is $\mathbb{Z}[i]/(1+i) \cong \mathbb{Z}/2\mathbb{Z}$.
Here's where I'm confused: saying that $\mathbb{Z}[i]/(1+i) \cong \mathbb{Z}/2\mathbb{Z}$ is saying that there is one element in $\mathbb{Z}[i]$ that is divisible by $1+i$, which is the $0$ element in $\mathbb{Z}[i]/(1+i)$, and that there's only ONE other element in ${\mathbb Z}[i]$ that is not divisible by $1+i$, thus it is isomorphic to $\{0,1\}$. I'm just not seeing how this isomorphism makes any sense, whatsoever...
A complete proof would be helpful, but I guess I'm more looking for intuition.
|
There is a natural ring image of $\,\Bbb Z\,$ in $\,R = \Bbb Z[i]/(1\!+\!i)\,$ by mapping integer $\,n\,$ to $\ n \pmod{1\!+\!i}$ by composing the natural maps $\,\Bbb Z\to \Bbb Z[i]\to \Bbb Z[i]/(1+i).\,$ This map $\, h\color{#0a0}{ \ {\rm is\ surjective\ (onto)}}$ since $\,{\rm mod}\ 1\!+\!i\!:\ \,1\!+\!i\equiv 0\,\Rightarrow\,i\equiv -1\,\Rightarrow\, a+bi\equiv a-b\in\Bbb Z.\,$ Finally, let's compute the kernel $\,I\,$ of the ring homomorphism $\,h.\,$ By rationalizing a denominator, $\,I = \color{#c00}{\ker h = 2\,\Bbb Z}\,$ as follows
$$ n\in I\iff (1\!+\!i)\mid n\ \ {\rm in}\ \ \Bbb Z[i]\iff \dfrac{n}{1\!+\!i}\in \Bbb Z[i]\iff \dfrac{n(1\!-\!i)}{2}\in\Bbb Z[i]\iff \color{#c00}{2\mid n}$$
Therefore, applying the First Isomorphism Theorem, $\, \color{#0a0}{R = {\rm Im}\ h} \,\cong\, \Bbb Z/\color{#c00}{\ker h} \,=\, \Bbb Z/\color{#c00}{2\,\Bbb Z}.$
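The map in the proof sends $a+bi$ to $a-b \pmod 2$; a few spot checks (with an illustrative helper name):

```python
def residue(a, b):
    # the class of a + b*i in Z[i]/(1+i): since i ≡ -1 (mod 1+i),
    # a + b*i ≡ a - b, and 1+i divides an integer n iff 2 divides n
    return (a - b) % 2

assert residue(3, 1) == 0  # 3 + i = (1+i)(2-i), so it maps to 0
assert residue(1, 1) == 0  # 1 + i itself maps to 0
assert residue(1, 0) == 1  # the unit 1 maps to 1
assert residue(0, 1) == 1  # i ≡ -1 ≡ 1 (mod 2)
```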
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Why is variance squared?
The mean absolute deviation is:
$$\dfrac{\sum_{i=1}^{n}|x_i-\bar x|}{n}$$
The variance is: $$\dfrac{\sum_{i=1}^{n}(x_i-\bar x)^2}{n-1}$$
*
*So the mean deviation and the variance are measuring the same thing, yet variance requires squaring the difference. Why? Squaring always gives a non-negative value, but the absolute value is also a non-negative value.
*Why isn't it $|x_i-\bar x|^2$, then? Squaring just enlarges, why do we need to do this?
A similar question is here, but mine is a little different.
Thanks.
|
The long and the short is that the squared deviation has a unique, easily obtainable minimizer (the arithmetic mean), and an inherent connection to the normal distribution. The absolute deviation, on the other hand, can admit multiple non-unique, potentially laborious to obtain minimizers (medians). For a simple illustration of this, observe that the set $\{0,1\}$ admits for a value $x$ the total absolute deviation ($L_1$ norm) $$|x-0|+|x-1|=\begin{cases}1-2x,&x\le0
\\1,&0<x\le1
\\2x-1,&1<x\end{cases}$$
which can be seen to be a piecewise linear/constant function minimized to $1$ by all $x$ in $[0,1]$. Instances with more points may be even more pathological and not admit a simple method of optimization. On the other hand, the total squared deviation ($L_2$ norm) of the same set would be $(x-0)^2+(x-1)^2=2x^{2}-2x+1$, a quadratic function with a unique minimizer of $x=0.5$, easily obtainable by setting its derivative to zero.
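A grid search over candidate centers for the set $\{0,1\}$ makes the contrast concrete (a small sketch; the tolerance is only for floating-point comparison):

```python
xs = [0.0, 1.0]
grid = [i / 1000 for i in range(-500, 1501)]  # candidate centers in [-0.5, 1.5]

def abs_dev(c):
    return sum(abs(x - c) for x in xs)

def sq_dev(c):
    return sum((x - c) ** 2 for x in xs)

# the absolute deviation is minimized by EVERY c in [0, 1] (value 1) ...
m = min(abs_dev(c) for c in grid)
flat = [c for c in grid if abs(abs_dev(c) - m) < 1e-12]
assert min(flat) == 0.0 and max(flat) == 1.0

# ... while the squared deviation has the unique minimizer c = 0.5, the mean
assert [c for c in grid if abs(sq_dev(c) - 0.5) < 1e-12] == [0.5]
```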
The connection of squared deviations to the normal distribution is highly attractive, first for the distribution's ubiquitous applicability to real world phenomena (hence the name), for instance, for dispersed measurements taken from populations or for errors in measurements. Second, the connection is attractive due to the normal distribution's enormously convenient theoretical properties, for instance, since normal distributions are symmetric about their means, have easily obtainable centers and dispersions, are closed under summation, and so on. Furthermore, from a practical point of view, there is extensive theoretical groundwork already established for the normal distribution, which is opportune to lean on.
These characteristics can ultimately be seen as consequences of the various convenient mathematical properties of $x^2$ lacked by $|x|$, e.g. differentiability everywhere (facilitating minimization), that the set of quadratic functions is closed under summation (the sum of two quadratics is another quadratic), and so on.
So this is not to say that absolute deviations are not used or less applicable than squared deviations. On the contrary. Instead, they are, in many relevant ways, less convenient to apply.
See also, Stats SE Q118.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 12,
"answer_id": 11
}
|
Is a brute force method considered a proof? Say we have some finite set and some statement about it, say "All elements of the finite set $X$ satisfy condition $Y$".
If we let a computer check every single member of $X$ and conclude that the condition $Y$ holds for all of them, can we call this a proof? Or is it possibly something else?
|
Using a computer to brute-force can be the first step to a proof. The next step is to prove that the program is correct!
A few ways you might do this are:
*
*Have the program output a proof for each member of the set. We can then check these proofs without having to trust the program at all. We could even send them all through an automated proof checker, which of course would also need to be proven correct! This may be worth doing, since proof checkers are generally simpler (easier to prove correct) and more general than proof finders; you might output proofs in a format understood by an existing proof checker.
*Prove that the program is correct for each member of the set. This might defeat the point of using a program in the first place!
*Prove that the program is correct for all possible inputs. This can be a good strategy, since the program only needs to simplify the problem, it doesn't need to solve it. For example, we might prove that our program returns "TRUE" if our property holds and "FALSE" if it doesn't; we specifically don't have to prove the more difficult part about which one it will actually return. To do that, we just run it.
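To make the first strategy concrete, here is a sketch: a brute-force check of Goldbach's conjecture on a small finite range that also emits, for each even number, a certificate (a pair of primes) which can be re-checked independently of the search code:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    # certificate: a pair (p, q) of primes with p + q == n
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# brute-force the finite claim and keep the certificates
witnesses = {n: goldbach_witness(n) for n in range(4, 201, 2)}
assert all(w is not None for w in witnesses.values())
# the certificates can be re-verified without trusting the search loop
assert all(is_prime(p) and is_prime(q) and p + q == n
           for n, (p, q) in witnesses.items())
```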
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 5,
"answer_id": 1
}
|
easy inequality to prove Prove that $\log_2(x)+\frac{1-x}{x} > 0$
I think the answer is easy but I've no clue how to go about it.
|
Exponentiate both sides, reading the logarithm as the natural one (for base $2$ the inequality actually fails, e.g. at $x=0.6$) and taking $x>0$, $x\neq 1$: the claim is equivalent to showing $xe^{\frac{1}{x}-1} > 1$. But $e^t > 1+t$ for every $t \neq 0$ (for $t>0$ this follows from $e^t = 1+t+\cdots$; for $t<0$ because $e^t$ lies strictly above its tangent line at $0$), and with $t = \frac{1}{x}-1$ this gives $xe^{\frac{1}{x}-1} > x\left(1+\frac{1}{x}-1\right) = 1$.
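A quick numerical check of the equivalent form $xe^{1/x-1} > 1$ at points on both sides of $x=1$:

```python
import math

# sample x > 0 with x != 1; the margin shrinks toward 0 as x -> 1
for x in [0.05, 0.3, 0.6, 0.99, 1.01, 2.0, 10.0, 100.0]:
    assert x * math.exp(1 / x - 1) > 1
```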
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How do I reduce (2i+2)/(1-i) with step-by-step please? I need a step by step answer on how to do this. What I've been doing is converting the top to $2e^{i(\pi/4)}$ and the bottom to $\sqrt2e^{i(-\pi/4)}$. I know the answer is $2e^{i(\pi/2)}$ and the angle makes sense but obviously I'm doing something wrong with the coefficients. I suspect maybe only the real part goes into calculating the amplitude but I can't be sure.
|
Try multiplying the numerator and denominator by $1+i$. This will give you $\frac{(2i+2)(1+i)}{1^2+1^2}$. Then, FOIL the numerator and note $i^2=-1$. (As for the polar approach: the modulus of $2+2i$ is $2\sqrt2$, not $2$; the quotient of moduli is $\frac{2\sqrt2}{\sqrt2} = 2$ and the arguments subtract, $\frac\pi4 - \left(-\frac\pi4\right) = \frac\pi2$, giving $2e^{i\pi/2}$ as expected.)
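Python's complex arithmetic confirms the reduction:

```python
import cmath

z = (2j + 2) / (1 - 1j)
assert abs(z - 2j) < 1e-12                         # the reduced form is 2i
assert abs(abs(z) - 2) < 1e-12                     # modulus 2
assert abs(cmath.phase(z) - cmath.pi / 2) < 1e-12  # argument pi/2
```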
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
modern analysis: integrals and continuity Let $$f(x) = \sum_{n=1}^\infty n e^{-nx}$$ Where is $f$ continuous? Compute $$\int_1^2 f(x)\, dx$$
I am having trouble proving where $f$ is continuous. For the second part, so far I have been able to compute the integral... although I basically had to move the summation from inside the integral to outside the integral, and I am not sure why I am allowed to do that.
|
For $x$ with real part greater than $0$ (in particular for real $x>0$), the series sums to $f(x)=\frac{e^x}{(e^x-1)^2}$: start from the geometric series $\sum_{n\ge 1} e^{-nx} = \frac{1}{e^x-1}$ and differentiate term by term with respect to $x$ (picking up a minus sign). The closed form is clearly continuous there, so it remains to justify the relation, e.g. by uniform convergence of the series on $[\delta,\infty)$ for every $\delta>0$.
When the real part of $x$ is at most $0$, the terms $n e^{-nx}$ do not tend to $0$, so the series diverges (the limit test).
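A numerical cross-check of the closed form, and of integrating the series term by term (truncated at 200 terms, far beyond double precision here):

```python
import math

x = 1.5
partial = sum(n * math.exp(-n * x) for n in range(1, 200))
closed = math.exp(x) / (math.exp(x) - 1) ** 2
assert abs(partial - closed) < 1e-12

# term by term: each ∫_1^2 n e^{-nx} dx = e^{-n} - e^{-2n}, and the total
# agrees with F(2) - F(1) for the antiderivative F(t) = -1/(e^t - 1)
F = lambda t: -1 / (math.exp(t) - 1)
termwise = sum(math.exp(-n) - math.exp(-2 * n) for n in range(1, 200))
assert abs((F(2) - F(1)) - termwise) < 1e-12
```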
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Gluing Lemma when A and B aren't both closed or open. Gluing Lemma:
Let $X = A \cup B$ and let $f: A \rightarrow Y$ and $g: B \rightarrow Y$ be continuous, with $A,B$ closed. Suppose also that $\left.f\right|_{A \cap B} = \left.g\right|_{A \cap B}$. Then the map $h: X \rightarrow Y$ with $\left.h\right|_A = f$ and $\left.h\right|_B = g$ is continuous.
I'm looking for an example of maps and sets when this fails if (without loss of generality) A is not closed and B is closed.
Thanks in advance.
|
The relevant point here is that $A\cap B$ can be empty, and yet the functions may still need to agree at the boundary of one set because it is a limit point of the other. Let $A=[0,1]$ and $B=(1,2)$. Then $A\cup B=[0,2)$. Let $f(x)=x$ and $g(x)=-x$. Then $h(x)$ is discontinuous at $1$. But the criterion that they agree on the intersection holds vacuously, since the intersection is empty. Compare this with what happens with similar endpoints if both $A$ and $B$ are open (or both closed).
For an example with a nonempty intersection, consider $S^1$, the circle in $\mathbb{C}$. I will write $[\alpha,\beta]$ to mean the closed arc starting at $\alpha$ and going counterclockwise to $\beta$, and use $(\,)$ to denote open arcs.
Let $$A=(0,i),\quad B=[i,e^{\frac{i\pi}{4}}]$$
So $A$ is an open quarter circle and $B$ is a closed seven-eighths of a circle such that they share an endpoint. Notice that the two arcs overlap. This however does not force continuity, because we still have the same issue at $i$ that we had at $1$ in the previous problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Minimizing a convex cost function I'm reviewing basic techniques in optimization and I'm stuck on the following. We aim to minimize the cost function
$$f(x_1,x_2) = \frac{1}{2n} \sum_{k=1}^n \left(\cos\left(\frac{\pi k}{n}\right) x_1 + \sin\left(\frac{\pi k}{n}\right) x_2\right)^2.$$
I'd like to show some basic properties, specifically what its Lipschitz constant is, whether or not it is strongly convex, and where it obtains its minimum.
In finding the Lipschitz constant,
$$\frac{\partial f}{\partial x_1} = \frac{1}{n} \sum_{k=1}^n \left(\cos\left(\frac{\pi k}{n}\right) x_1 + \sin\left(\frac{\pi k}{n}\right) x_2\right)\cos\left(\frac{\pi k}{n}\right),$$$$\frac{\partial f}{\partial x_2} = \frac{1}{n} \sum_{k=1}^n \left(\cos\left(\frac{\pi k}{n}\right) x_1 + \sin\left(\frac{\pi k}{n}\right) x_2\right)\sin\left(\frac{\pi k}{n}\right),$$
but for arbitrarily large $x_1$ and $x_2$, doesn't this imply that the derivatives are unbounded, and thus that the function is not Lipschitz?
I'm having a similar problem in computing the minimum of the function: I set one of the above equations to zero, solve for $x_1$, plug it into the other, and solving for $x_2$ simply gives $x_2 (\cdots) = 0$, where $(\cdots)$ is some jumble of product sums but nevertheless a constant, and thus meaningless in determining the value of $x_2$.
|
Looking at your derivatives is quite interesting since
$$\sum _{k=1}^n \cos\left(\frac{\pi k}{n}\right) \cos \left(\frac{\pi k}{n}\right) =\frac{n} {2}$$
$$\sum _{k=1}^n \cos\left(\frac{\pi k}{n}\right) \sin \left(\frac{\pi k}{n}\right) =0$$
$$\sum _{k=1}^n \sin\left(\frac{\pi k}{n}\right) \sin \left(\frac{\pi k}{n}\right) =\frac{n} {2}$$ This then leads to
$$\frac{\partial f}{\partial x_1} = \frac{x_1}{2} $$
$$\frac{\partial f}{\partial x_2} = \frac{x_2}{2} $$ So, the derivatives are zero for $x_1=0$ and $x_2=0$ and this is the only solution.
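The three trigonometric sums are easy to confirm numerically for a few values of $n$:

```python
import math

for n in [3, 5, 8, 13]:
    cc = sum(math.cos(math.pi * k / n) ** 2 for k in range(1, n + 1))
    cs = sum(math.cos(math.pi * k / n) * math.sin(math.pi * k / n)
             for k in range(1, n + 1))
    ss = sum(math.sin(math.pi * k / n) ** 2 for k in range(1, n + 1))
    assert abs(cc - n / 2) < 1e-12   # sum of cos^2 is n/2
    assert abs(cs) < 1e-12           # the cross term vanishes
    assert abs(ss - n / 2) < 1e-12   # sum of sin^2 is n/2
```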
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Lagrange polynomials sum to one I've been stuck on this problem for a few weeks now. Any help?
Prove:
$\sum_{i=0}^{n}\prod_{j=0,j\neq i}^{n}\frac{x-x_j}{x_i-x_j}=1$
The sum of the Lagrange basis polynomials (over all $n+1$ nodes $x_0,\dots,x_n$) should be one; otherwise affine combinations with these would make no sense.
EDIT:
Can anybody prove this by actually working out the sum and product? The other proofs make no sense to me. Imagine explaining this to someone who has never heard of lagrange.
|
HINT: Throw in the $f(x_i)$: the sum $\sum_{i}f(x_i)\prod_{j\neq i}\frac{x-x_j}{x_i-x_j}$ is the unique polynomial of degree at most $n$ interpolating $f$ at the $n+1$ nodes. What happens if the $f(x_i)$ are all $1$?
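The identity itself (summing the basis polynomials over all the nodes) is easy to verify numerically for arbitrary nodes and evaluation points:

```python
def lagrange_basis_sum(xs, x):
    # sum of all Lagrange basis polynomials over the nodes xs, at the point x
    total = 0.0
    for i, xi in enumerate(xs):
        term = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

nodes = [0.0, 1.0, 2.5, 4.0]
for x in [-1.0, 0.3, 3.7, 10.0]:
    assert abs(lagrange_basis_sum(nodes, x) - 1.0) < 1e-9
```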
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/717991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Flatness of a manifold (or a connection) Suppose we have an $n$-dimensional manifold $S$ (with a global coordinate system) with a metric $g$ and a connection $\nabla$ with connection coefficients (Christoffel symbols) $\Gamma_{i,j}^k$ given. Suppose that the $\nabla$-geodesic connecting any two points of the manifold completely lies in $S$. Can we then say that $S$ must be flat with respect to the given connection? I am not able to straightaway show that $(\Gamma_{i,j}^k)_p = 0$ at all points $p$ of $S$.
|
I think you are misunderstanding what a flat (affine) connection is: It is a connection on a manifold $M$ such that at each point of $M$ there exists a coordinate system with zero Christoffel symbols (vanishing depends heavily on which coordinates you use). Equivalently, a connection is flat if it has zero curvature. Equivalently, it is flat if parallel transports along contractible loops are identity maps, etc. This will be explained in any Riemannian geometry textbook; my favorite is do Carmo's "Riemannian Geometry" (chapters 0 through 4). Or use Petersen's "Riemannian Geometry".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Show that, if $g(x)=\frac{1}{1+x^2}$ and $x_{n+1}:=g(x_n)$ then $x_n$ converges
Why does the sequence $\{x_n\}$ converge ?
If $x_{n+1}:=g(x_n)$, where $g(x)=\frac{1}{1+x^2}$
(We have a starting point in $[0.5,1]$)
The sequence is bounded by $1$ independent of the starting point (Is it necessary that $x_0\in[0.5,1]$ ?)
We have to show that the sequence is Cauchy
I compare $g(x_{n+1})$ with $g(x_n)$
$|g(x_{n+1})-g(x_n)|=|\frac{1}{1+(\frac{1}{1+x_n^2})^2}-\frac{1}{1+x_n^2}|=|\frac{x_n^4+2x_n^2+1}{x_n^4+2x_n^2+2}-\frac{1}{1+x_n^2}|$
Would this lead to an impasse, or how to continue ?
|
See related approaches and techniques (I), (II).
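Numerically, the iteration is contracting on $[0.5,1]$ (there $|g'(x)| = 2x/(1+x^2)^2 < 1$), and the iterates settle at the real root of $x(1+x^2)=1$; a quick sketch:

```python
g = lambda x: 1 / (1 + x * x)

for x0 in [0.5, 0.75, 1.0]:
    x = x0
    for _ in range(200):
        x = g(x)
    # x is (numerically) the fixed point: g(x) = x, i.e. x(1 + x^2) = 1
    assert abs(x - g(x)) < 1e-12
    assert abs(x * (1 + x * x) - 1) < 1e-12
```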
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
The effects of requiring a recursive vs. a recursively enumerable axiomatization in the incompleteness theorem I believe that the (paraphrased) original statement of Gödels first incompleteness theorem (including Rosser's trick) is
If T is a sufficiently strong recursive axiomatization of the natural numbers (e.g. the peano axioms), $T$ is either incomplete or inconsistent.
and I'm pretty sure this is the version we proved in a course on the incompleteness theorem (which was years ago, though). Some sources (e.g. Wikipedia), however, relax that and require only a recursively enumerable axiomatization.
The effect of that relaxation is that $\textrm{Proves}(p,s)$, meaning that $p$ is the Gödel code of a proof of the statement with Gödel code $s$, is no longer decidable, but only semi-decidable. Or so I think, at least - with a recursive set of axioms, it's easy to validate whether a sequence of statements constitutes a proof, but if the axioms are only recursively enumerable, verifying that some sequence is not a proof is not generally possible.
However, it seems that this doesn't matter much, since $\textrm{Provable}(s) = \exists p \, \textrm{Proves}(p,s)$ is semi-decidable by PA in both cases, which is sufficient for the rest of the proof I guess.
My question is two-fold. First, is the reasoning above about why this relaxation is valid correct? And second, what are the consequences of this relaxation?
|
*
*Just for the record, the original version of Gödel's incompleteness theorem was about theories where the class of axioms and the rules of inference are rekursiv, which for Gödel at the time meant, as we would now put it, primitive recursive. [Recall, Gödel was initially writing in 1930/1931, a few years before the notion of a general recursive function had been stably nailed down.]
*If a theory's theorems are recursively enumerable, then by Craig's theorem it is primitively recursively re-axiomatizable -- see Wikipedia. So there seems to be no real "relaxation" involved in re-stating Gödel's result in terms of recursively enumerable theories.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Why is $f'(x)$ the annihilator of $dx$? Let $B=A[x]$ be an integral extension of a Dedekind ring $A$ where $x$ has minimal (monic) polynomial $f(x)$. Then the module of Kahler differentials $\Omega_A^1 (B)$ is generated by $dx$. Why is its annihilator $f'(x)$?
$\Omega_A^1 (B)=I/I^2$, where $I:=\{b\otimes_A 1 -1\otimes_A b| b\in B\}$. So the annihilator of $dx = x\otimes 1 - 1\otimes x$ should be $\{h\in B| hx\otimes 1 -h\otimes x \in I^2\}$. How do I proceed from here?
|
Let us write $B = A[t]/(f(t))$ where $f(t)$ is some monic polynomial with coefficients in $A$. Then (you should prove!) that the Kahler differentials are precisely $\Omega_{B/A} = B[dt]/ (f'(t)dt)$. It is now clear the annihilator of this $B$-module is precisely $(f'(t))$.
The fact I am using is this: Let $B$ be an $A$-algebra. Then $\Omega^1_{B/A} \cong I/I^2$ where $I$ is the kernel of the multiplication map $f : B\otimes_A B \to B$. Here is a sketch proof. Define $d : B \to I/I^2$ by $db = b\otimes 1 - 1\otimes b$. Let $\varphi : B \to M$ be an $A$-derivation where $M$ is a $B$-module. It is enough to show there is a unique map $\varphi' : I/I^2 \to M$ making everything commute. Now prove (exercise) that $I/I^2$ as an $A$-module is generated by symbols $b\otimes 1 - 1 \otimes b$. For any such symbol $b\otimes 1 - 1 \otimes b \in I/I^2$, lift to an element $b \in B$ and define $\varphi'(b\otimes 1 - 1 \otimes b) = \varphi(b)$. Show this is well-defined, independent of the choice of lift.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is there a name for this type of integer? An integer $n$ such that $\exists$ at least one prime $p$ such that, $p|n$ but $p^2$ does not divide $n$.
i.e. : an integer with at least one prime that has a single power in the prime factorization.
Do these numbers have a special name, and have they been studied?
|
These are sometimes called weak numbers, presumably because the naturals that are not weak are precisely the powerful naturals. However, the weak terminology does not appear to be anywhere near as widely used as the powerful terminology (e.g. Granville and Ribenboim use "not powerful"). I also recall seeing other names besides weak (e.g. impotent).
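A small sketch of the dichotomy by trial division (the predicate name is illustrative): a natural is "weak" exactly when it is not powerful.

```python
def is_weak(n):
    # True iff some prime divides n exactly once
    d, m = 2, n
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            if e == 1:
                return True
        d += 1
    return m > 1  # any factor left over is a prime with exponent 1

assert is_weak(12) and is_weak(2)      # 12 = 2^2 * 3, and primes are weak
assert not is_weak(72)                 # 72 = 2^3 * 3^2 is powerful
# the non-weak naturals in [2, 30] are exactly the powerful ones
assert [n for n in range(2, 31) if not is_weak(n)] == [4, 8, 9, 16, 25, 27]
```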
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Ramanujan's 'well known' integral, $\int\limits_{-\pi/2}^{\pi/2} (\cos x)^m e^{in x}dx$. $$\int_{-\pi/2}^{\pi/2} (\cos x)^m e^{in x}dx=\frac{\pi}{2^m} \frac{\Gamma(1+m)}{\Gamma \left(1+ \frac{m+n}{2}\right)\Gamma \left(1+ \frac{m-n}{2}\right)}$$
Appearing at the start of Ramanujan's paper 'A Class of Definite Integrals', the above is, cited as 'well known', particularly dispiriting as I can't prove it. Anyone know of a nice proof?
$\Re(m)>-1$ and I assume that $m \in \mathbb{C}, n \in \mathbb{R}$.
|
Assume that $n >m>-1$.
Then $$ \begin{align} \int_{-\pi /2}^{\pi /2} (\cos x)^{m} e^{inx} \, dx &= \int_{- \pi/2}^{\pi /2} \left( \frac{e^{ix}+e^{-ix}}{2} \right)^{m} e^{inx} \ dx \\ &= \frac{1}{i 2^{m}} \int_{C} (z+z^{-1})^{m} z^{n-1} \, dz \\ &= \frac{1}{i2^{m}} \int_{C} \left(z^{2}+1 \right)^{m} z^{n-m-1} \, dz \\ &= \frac{1}{i2^{m}} \int_{C} f(z) \, dz \end{align}$$
where $C$ is the right half of the unit circle traversed counterclockwise with quarter-circle indentations around the branch points at $z=-i$ and $z=i$.
Have the branch cut for $f(z)$ running down the imaginary axis from $z=i$, and define $f(z)$ to be real-valued on the positive real axis.
Now close the contour with a vertical line segment just to the right of $[-i,i]$ with a half-circle indentation around the branch point at $z=0$.
Just to the right of the branch cut and above the origin,
$$f(z) = |z^{2}+1|^{m} |z|^{n-m-1} e^{i \pi /2(n-m-1)} .$$
While just to the right of the branch cut and below the origin and above $z=-i$,
$$f(z) =|z^{2}+1|^{m} |z|^{n-m-1} e^{-i \pi /2(n-m-1)} .$$
And under the assumption that $n>m>-1$, the contributions from all three indentations vanish in the limit.
For example, around $z=0$,
$$ \Big| \int_{\pi/2}^{- \pi/2} f(re^{it}) \ i r e^{it} \, dt \Big| \le \pi \ (r^{2}+1)^{m} r^{n-m}$$
which vanishes as $r \to 0$ since $n>m$.
Then going around the contour, we get
$$ \int_{C} f(z) \, dz + e^{i \pi /2 (n-m-1)}\int_{1}^{0} \left| (te^{i \pi /2})^{2} +1 \right|^{m} |te^{ i \pi /2}|^{n-m-1} e^{ i \pi /2} \, dt $$
$$+ \ e^{-i \pi /2 (n-m-1)}\int_{0}^{1} \left| (te^{-i \pi /2})^{2} +1 \right|^{m} |te^{ -i \pi /2}|^{n-m-1} e^{ -i \pi /2} \, dt = 0 $$
which implies
$$ \begin{align} \int_{C} f(z) \ dz &= e^{ i \pi /2 (n-m)} \int_{0}^{1} (1-t^{2})^m t^{n-m-1} \, dt - e^{- i \pi /2 (n-m)} \int_{0}^{1} (1-t^{2})^{m} t^{n-m-1} \, dt \\ &= 2 i \sin \left( \frac{\pi}{2} (n-m) \right) \int_{0}^{1} (1-t^{2})^{m} t^{n-m-1} \, dt \\ &= i \sin \left( \frac{\pi}{2} (n-m) \right) \int_{0}^{1} (1-u)^{m} u^{n/2-m/2-1} \, du \\ &= i \sin \left( \frac{\pi}{2} (n-m) \right) B \left( \frac{n}{2} - \frac{m}{2}, m+1 \right) \\ &= i \sin \left( \frac{\pi}{2} (n-m) \right) \frac{\Gamma(\frac{n}{2} - \frac{m}{2}) \Gamma(m+1)}{\Gamma(\frac{m}{2}+\frac{n}{2} + 1)} .\end{align}$$
Then using the reflection formula for the gamma function, we get
$$ \int_{C} f(z) \ dz= i \pi \, \frac{\Gamma(m+1)}{ \Gamma(1- \frac{n}{2} + \frac{m}{2})\Gamma(\frac{m}{2}+\frac{n}{2}+1)} .$$
Therefore,
$$ \begin{align} \int_{-\pi /2}^{\pi /2} (\cos x)^{m} e^{inx} \, dx &= \frac{1}{i2^{m}} \, i \pi \, \frac{\Gamma(m+1)}{ \Gamma(1- \frac{n}{2} + \frac{m}{2})\Gamma(\frac{m}{2}+\frac{n}{2}+1)} \\ &=\frac{\pi}{2^m} \frac{\Gamma(1+m)}{\Gamma \left(1+ \frac{m+n}{2}\right)\Gamma \left(1+ \frac{m-n}{2}\right)} . \end{align} $$
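As a quick numerical sanity check of the closed form (my addition; $m=1$, $n=2$ satisfies the assumption $n>m>-1$ used in the derivation):

```python
from math import cos, gamma, pi

def lhs(m, n, steps=100_000):
    # Trapezoidal approximation of the real part of the integral; the
    # imaginary part vanishes by symmetry since cos(x)^m is even.
    h = pi / steps
    total = 0.0
    for i in range(steps + 1):
        x = -pi / 2 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * cos(x) ** m * cos(n * x)
    return total * h

def rhs(m, n):
    return pi / 2 ** m * gamma(1 + m) / (gamma(1 + (m + n) / 2) * gamma(1 + (m - n) / 2))

print(lhs(1, 2), rhs(1, 2))  # both ≈ 2/3
```

For $m=1$, $n=2$ the integral is $\int_{-\pi/2}^{\pi/2}\cos x\cos 2x\,dx = 2/3$, matching the formula.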
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
}
|
Square Idempotent matrix: efficient algorithms for finding eigenvectors Given a square idempotent $N \times N$ matrix $A$ with large $N$, and a priori knowledge of the rank $K$, what is the most efficient way to compute the $K$ eigenvectors corresponding to the $K$ non-zero eigenvalues?
Information:
*
*Matrix is idempotent, therefore all eigenvalues are $1$ or $0$.
*Matrix is not symmetric.
*$K \ll N$.
*I'd like to avoid numerical computation of the eigenvalues, as I already know them, i.e., there are $K$ eigenvalues of magnitude $1$ and $N-K$ eigenvalues of magnitude $0$.
If the matrix was symmetric, an eigendecomposition would give $A = Q\Lambda Q^T$, and $Q$ would be $K$ orthonormal columns and $N-K$ zero columns. Since it's not symmetric, I believe this will not be the case, so I'd settle for the closest such matrix.
Context: essentially think of my matrix $A$ as a small perturbation around a symmetric idempotent matrix of the same size. I need the $N \times K$ matrix $B$ which is closest to giving $BB^T = A$. This must be done numerically, so really, the advice that would be ideal is "use LAPACK routine XYZ with parameter ABC to avoid computing the eigenvalues". Unfortunately, I can't seem to find any such routines which don't compute the eigenvalues as part of the process.
|
Since the matrix is idempotent $A^2=A$ the eigenvectors corresponding to the eigenvalue $1$ are exactly the elements of the image of the linear transformation described by $A$. Hence you could choose a basis of the column space of $A$ (or row space if you are thinking of rows...) and be done. Alas I do not know if your matrix is small enough for this to be a feasible solution.
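A tiny illustration with a hypothetical $2\times 2$ oblique projection (not the asker's matrix): the columns of $A$ already are eigenvectors for eigenvalue $1$, so a basis of the column space needs no eigensolver.

```python
# A non-symmetric idempotent matrix (an oblique projection): A·A = A, rank 1.
A = [[1.0, 1.0],
     [0.0, 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Idempotency check: applying A twice equals applying it once.
for e in ([1.0, 0.0], [0.0, 1.0]):
    assert matvec(A, matvec(A, e)) == matvec(A, e)

# Any basis of the column space consists of eigenvectors for eigenvalue 1.
b = [row[0] for row in A]   # first column of A spans the column space here
print(matvec(A, b))         # A b = b, so b is an eigenvector with eigenvalue 1
```

For large $N$, a rank-revealing QR with column pivoting (e.g. LAPACK's `dgeqp3`) applied to $A$ would extract $K$ independent columns without ever computing eigenvalues.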
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Probability of winning a head on a coin The problem I am asking is generated from this problem:
Carla and Dave each toss a coin twice. The one who tosses the greater number of heads wins a prize. Suppose that Dave has a fair coin $[P_d(H)=.5]$, while Carla has a coin for which the probability of heads on a single toss is $.4$ $[P_c(H)=.4]$.
Here is the question I am asking:
In the experiment of that problem, Dave tosses a fair coin twice while Carla also tosses a coin twice. For Carla's coin, the probability of a head on a single toss is $0.4$. What is the probability that Dave will win the prize, given that the experiment is repeated whenever a tie occurs?
I know the probability for $P(C_1)=P_c(HT)+P_c(TH)=2(.4)(.6)=.48$ and $P(D_2)=P_d(HH)=(.5)^2=.25$
I know that we can probably use the sum of Geometric series:
$$
\sum\limits_{x=1}^\infty ar^{x-1}= \frac{a}{1 - r}
$$
Can someone please help me to solve this problem? I am not sure about this.
|
The easy way is to recognize that you can ignore the event of a tie. If Dave's chance of winning on one turn is $d$ and Carla's chance of winning on one turn is $c$, then Dave's chance of winning overall is $\frac d{c+d}$ and Carla's is $\frac c{c+d}$. You can show this by summing the geometric series as you suggest. The chance of nobody winning on a given turn is $1-c-d$, so Dave's chance of winning is $d + (1-c-d)d+ (1-c-d)^2d+\dots$ where the factors of $(1-c-d)$ come from turns nobody won. Now sum the series and you will get $\frac d{c+d}$.
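For the concrete numbers in the question, a short exact computation (my addition, using rational arithmetic):

```python
from fractions import Fraction

def two_toss(p):
    # Head-count distribution for two independent tosses of a p-coin.
    q = 1 - p
    return {0: q * q, 1: 2 * p * q, 2: p * p}

dave = two_toss(Fraction(1, 2))     # fair coin
carla = two_toss(Fraction(2, 5))    # P(heads) = 0.4

d = sum(dave[i] * carla[j] for i in dave for j in carla if i > j)   # Dave wins the round
c = sum(dave[i] * carla[j] for i in dave for j in carla if j > i)   # Carla wins the round

print(d, c, d / (c + d))   # 39/100, 6/25, and Dave's overall chance 13/21
```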
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What is the sum of $1^4 + 2^4 + 3^4+ \dots + n^4$ and the derivation for that expression What is the sum of $1^4 + 2^4 + 3^4+ \dots + n^4$ and the derivation for that expression using sums $\sum$ and double sums $\sum$$\sum$?
|
Here is my favorite method which works for any polynomial summand and you only need to remember two basic facts, one from calculus one and one about polynomials. First, since summations are analogous to integration, we have that
$$\int x^k \approx x^{k+1} \Rightarrow \sum x^k \approx x^{k+1}.$$
For your problem, let us define
$$f(n)=\sum_{i=1}^n i^4$$
and since the summand is a polynomial of degree four, the sum $f(n)$ must be a polynomial of degree five. If you don't believe me, then just compute the first seven or more values of $f(n)$ and take the sixth differences: all of the terms will be zero (analogous to the sixth derivative of a fifth degree polynomial being zero).
Then using the (second) fact that a polynomial of degree five can be uniquely determined by six points, use points
$$(1,f(1)),(2,f(2)),(3,f(3)),(4,f(4)),(5,f(5)),(6,f(6))$$
and compute the unique interpolating polynomial and you get
$$f(n)=\frac{6n^5+15n^4+10n^3-n}{30}.$$
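The interpolation step can be carried out and checked exactly (a sketch with rational arithmetic):

```python
from fractions import Fraction

def f_raw(n):
    return sum(i**4 for i in range(1, n + 1))

# Lagrange interpolation through the six points (1, f(1)), ..., (6, f(6)),
# evaluated at an arbitrary n — this is the unique degree-5 polynomial.
def f_interp(n):
    xs = range(1, 7)
    total = Fraction(0)
    for j in xs:
        term = Fraction(f_raw(j))
        for i in xs:
            if i != j:
                term *= Fraction(n - i, j - i)
        total += term
    return total

for n in range(1, 30):
    assert f_interp(n) == f_raw(n) == Fraction(6*n**5 + 15*n**4 + 10*n**3 - n, 30)
print(f_interp(10))  # 25333
```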
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/718939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 3
}
|
Show that if $f(x)= \sum\limits_{i=0}^n a_i x^i$ and $a_0+\frac{a_1}{2}+\ldots+\frac{a_n}{n+1}=0$, then there is an $x \in (0,1)$ with $f(x)=0$ Show that if $f(x)= \sum\limits_{i=0}^n a_i x^i$ and $a_0+\dfrac{a_1}{2}+\ldots+\dfrac{a_n}{n+1}=0$, then there is an $x \in (0,1)$ with $f(x)=0.$
|
You can use the zero point theorem (the intermediate value theorem)!
if $a_0>0$ and $\dfrac{a_1}{2}+\ldots+\dfrac{a_n}{n+1}<0$
then the condition you write will turn out to be a critical condition:
$a_0+\dfrac{a_1}{2}+\ldots+\dfrac{a_n}{n+1}=0$
since $f(0)=a_0>0$ and $f(1)=$$-na_0-(n-1)\dfrac{a_1}{2}-\ldots-\dfrac{a_n}{n+1}<0$$\Longrightarrow$$\frac{n-1}{n}\frac{a_{1}}{2}+\frac{n-2}{n}\frac{a_{2}}{3}+......+\frac{1}{n}\frac{a_{n-1}}{n}\le$$\dfrac{a_1}{2}+\ldots+\dfrac{a_n}{n+1}={-a_0}$
a contradiction holds, which implies that $f(0)\cdot f(1)\le 0$; then you can use the zero point theorem!
Is it helpful?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving that $\sum_{k=0}^n\frac{1}{n\choose k}=\frac{n+1}{2^{n+1}}\sum_{k=1}^{n+1}\frac{2^k}{k}$ I want to prove for any positive integer $n$, the following equation holds:
$$\sum_{k=0}^n\frac{1}{n\choose k}=\frac{n+1}{2^{n+1}}\sum_{k=1}^{n+1}\frac{2^k}{k}$$
I tried to expand $2^k$ as $\sum_{i=0}^k{k\choose i}$ and interchange summation, also tried let $f(x)=\sum_{k=1}^{n+1}\frac{x^k}{k}$ and compute $f'(x)$. But it seems I'm not on the right path.
|
Note that successive terms in the sum on the left turn out to have a simple expression for their sum: when $k \ne 0$, we have
$$\frac{1}{\binom{n}{k-1}} + \frac1{\binom{n}{k}} = \frac{\binom{n}{k}+\binom{n}{k-1}}{\binom{n}{k-1}\binom{n}{k}} = \frac{\binom{n+1}{k}}{\binom{n}{k-1}\binom{n}{k}} = \frac{\frac{n+1}{k}\binom{n}{k-1}}{\binom{n}{k-1}\binom{n}{k}} = \frac{\frac{n+1}{k}}{\binom{n}{k}}$$
where we've used Pascal's rule and the "absorption" identity that $\binom{n}{k} = \frac{n}{k} \binom{n-1}{k-1}$ for $k \neq 0$. Applying absorption again further gives the above expression to be equal to
$$\frac{\frac{n+1}{k}}{\binom{n}{k}} = \frac{\frac{n+1}{k}}{\frac{n}{k}\binom{n-1}{k-1}} = \frac{n+1}{n} \frac{1}{\binom{n-1}{k-1}}$$
This gives a strategy for evaluating the sum on the left:
$$\begin{align}
2\sum_{k=0}^n \frac{1}{\binom{n}{k}}
&= \sum_{k=1}^{n} \left( \frac{1}{\binom{n}{k-1}} + \frac1{\binom{n}{k}}\right) + 2 \\
&= 2 + \frac{n+1}{n} \sum_{k=1}^{n} \frac{1}{\binom{n-1}{k-1}}
\end{align}$$
or, calling the left-hand-side sum as $L_n = \sum_{k=0}^n \frac{1}{\binom{n}{k}}$, we have
$$2 L_n = 2 + \frac{n+1}{n} L_{n-1}$$
$$L_n = \frac{n+1}{2n}L_{n-1} + 1$$
Calling the right-hand-side term $R_n$, we have
$$\begin{align}
\frac{2^{n+1}}{n+1} R_n &= \sum_{k=1}^{n+1}\frac{2^k}{k} \\
&= \frac{2^n}{n} R_{n-1} + \frac{2^{n+1}}{n+1}
\end{align}$$
thus
$$R_n = \frac{n+1}{2n}R_{n-1} + 1$$
and both the LHS $L_n$ and RHS $R_n$ satisfy the same recurrence and have the same initial values (check for $n=1$, say), so they are equal.
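Both sides can be verified exactly in rational arithmetic for small $n$:

```python
from fractions import Fraction
from math import comb

def L(n):
    # Left-hand side: sum of reciprocals of binomial coefficients.
    return sum(Fraction(1, comb(n, k)) for k in range(n + 1))

def R(n):
    # Right-hand side: (n+1)/2^(n+1) times sum of 2^k/k.
    return Fraction(n + 1, 2**(n + 1)) * sum(Fraction(2**k, k) for k in range(1, n + 2))

for n in range(1, 20):
    assert L(n) == R(n)
print(L(4))  # 8/3
```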
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
}
|
Is a matrix that is symmetric and has all positive eigenvalues always positive definite? I know a symmetric matrix is positive definite if and only if every eigenvalue is positive. However, is a matrix that is symmetric and has all positive eigenvalues always positive definite? More specifically, I have the following matrix:
$$\begin{bmatrix}3& -1 \\-1 & 3 \end{bmatrix}$$
Its eigenvalues are $4$ and $2$, and it is symmetric. Is it positive definite? Thanks.
|
Yes in the real case.
First, suppose $A$ is positive definite: $\forall x\in V,\ x^TAx > 0$. This holds in particular for every eigenvector $x_i$ of $A$. Hence $0<x_i^TAx_i=x_i^T(\alpha_i x_i)=\alpha_i\|x_i\|^2$, so $\alpha_i> 0$.
Conversely, suppose $\alpha_i> 0$ for all $i$. $A$ is real and symmetric, so by the spectral theorem $V$ has an orthonormal basis of eigenvectors of $A$. In this basis, $\forall y\in V,\ y=\sum a_ix_i$, and therefore $y^TAy=\sum a_i^2\alpha_i> 0$ for $y \neq 0$. Q.E.D.
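For the specific matrix in the question, a quick check by hand (or with a few lines of code), using trace and determinant for the eigenvalues:

```python
from math import sqrt

# The matrix [[3, -1], [-1, 3]]: eigenvalues of a 2x2 from trace and determinant.
tr, det = 3 + 3, 3 * 3 - (-1) * (-1)
disc = sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)   # 4.0 2.0

# Positive definiteness directly: 3x² - 2xy + 3y² = 2x² + 2y² + (x - y)² > 0 for (x, y) ≠ 0.
q = lambda x, y: 3 * x * x - 2 * x * y + 3 * y * y
assert all(q(x, y) > 0 for x in range(-5, 6) for y in range(-5, 6) if (x, y) != (0, 0))
```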
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
A urn containing $n$ balls, numbered $1,2,...,n$, and $k$ balls are chosen at random without replacement. I have a homework question with specific $n$ and $k$ given for the below question, but I would rather understand how this works for any given $n$ or $k$ to build my intuition for these questions.
A urn containing $n$ balls, numbered $1,2,...,n$, and $k$ balls are chosen at random without replacement. Let $X$ be the largest-numbered ball removed. Determine the probability function of $X$.
$X$ takes values from $S$
My thoughts: In the number set from $1,2,...,k-1$, all of these numbers have zero probability of being the greatest number. We then have probability of not drawing a specific ball(say ball $x\in S$) $\frac{n-1}{n}*\frac{n-2}{n-1},...,\frac{n-k}{n-k+1}$ = $Pr(Q)$. Then the probability thus of drawing that specific ball is merely $1 - Pr(Q)$
The probability of each number being the greatest goes up towards number $n$. So $1 - Pr(Q)$ should actually be the probability $n$ is the greatest number drawn. But that doesn't workout mentally, because it would seem the probability that ball numbered $n-1$ is the largest number drawn, would be $1 - pr(Q)$ take away the probability that ball numbered $n$ is drawn, which has the same probability, and therefore the probability of getting ball numbered $n-1$ would be zero.
Any tips would be greatly appreciated!
|
The largest, as you pointed out, is one of $k,k+1,\dots,n$.
There are $\binom{n}{k}$ equally likely ways to choose $k$ balls.
Let us count the number of "favourables," that is, the number of ways to choose $k$ balls with $m$ the largest.
So we must choose $k-1$ balls from the balls $1,2,\dots,m-1$ to keep $m$ company. Or perhaps to be bullied by $m$. There are $\binom{m-1}{k-1}$ ways to do this.
Now you should be able to write down an expression for $\Pr(X=m)$.
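Writing out the expression the last line points to (spoiling the final step slightly), $\Pr(X=m)=\binom{m-1}{k-1}\big/\binom{n}{k}$ for $m=k,\dots,n$, here is a brute-force check for small hypothetical values of $n$ and $k$:

```python
from itertools import combinations
from math import comb

n, k = 10, 4   # hypothetical small values for a brute-force check
for m in range(k, n + 1):
    formula = comb(m - 1, k - 1) / comb(n, k)
    brute = sum(max(s) == m for s in combinations(range(1, n + 1), k)) / comb(n, k)
    assert abs(formula - brute) < 1e-12

# The probabilities sum to 1 (a hockey-stick identity in disguise).
assert sum(comb(m - 1, k - 1) for m in range(k, n + 1)) == comb(n, k)
print("all checks passed")
```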
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that if R is an integral domain then Torsion of M is a submodule of R-module M I can't quite get this one. I can show that two non-zero elements m,n of M have a non-zero product but they belong to the R-module, not the integral domain so I don't know if it's necessarily true that their product be nonzero.
|
Your proof should go like this: if $m_1,m_2$ are torsion elements, then there exist two elements $a_1,a_2$ of $R$, $a_1\neq 0, a_2 \neq 0$, such that
$$
a_1m_1=0 \qquad a_2 m_2=0
$$
We have to show that for any $r_1,r_2 \in R$ the element $m:=r_1m_1+r_2m_2 \in M$ is a torsion element.
First note that $a_1a_2 \neq 0$ because $R$ is a domain,
and this element annihilates the linear combination ($R$ commutative):
$$
a_1a_2 (r_1m_1+r_2m_2) = a_1a_2r_1m_1+a_1a_2r_2m_2 = a_2r_1(a_1m_1)+a_1r_2(a_2m_2)=0+0=0
$$
So you have found a non-zero element ($a:=a_1a_2$) such that $am=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is $f$ differentiable at $(x,y)$? I am working on a practice question, and I am not sure if what I have done would be considered, 'complete justification'. I would greatly appreciate some feedback or helpful advice on how it could be better etc. The question is here:
Let $f: \mathbb {R}^2 \to \mathbb{R} $ be a function defined by:
$$ \
f(x,y) =
\begin{cases}
\frac{\sin(x^2 + y^2)}{x^2 + y^2} & \text{if } (x,y) \ne (0,0) \\
1 & \text{if } (x,y) = (0,0)
\end{cases}
$$
Is $f$ diff'ble at $(x,y) = (0,0)$?
Here is what I have:
By definition, $f(x,y)$ is diff'ble at $(0,0)$ if
$$ \lim_{(x,y) \to (0,0)} \frac{f(x,y) - [f(0,0) + f_{x}(0,0)(x-0) + f_{y}(0,0)(y-0)]}{\sqrt{x^2 + y^2}} = 0\tag{*}$$
Since $f(x,y)$ is piecewise, $f_{x}(0,0)$ and $f_{y}(0,0)$ is derived from 1st principles:
So, $$\begin{align}f_{x}(0,0) &= \lim_{h \to 0} \frac{\frac{\sin((0+h)^2 + (0)^2)}{(0+h)^2 + (0)^2} - f(0,0)}{h} \\
&= \lim_{h \to 0} \frac{\frac{\sin(h^2)}{h^2} - 1}{h}\\
&= \lim_{h \to 0} \frac{\sin(h^2) - h^2}{h^3} \\
&= \lim_{h \to 0} \frac{2h \cos(h^2) - 2h}{3h^2}\end{align} $$ by L'Hopital's rule; applying it twice more, I can see that the limit is $0$. Similarly, $f_y(0,0)=0$ is derived the same way.
Then from $(*)$, I need:
$$ \lim_{(x,y) \to (0,0)} \frac{\frac{\sin(x^2 + y^2)}{x^2 + y^2} - 1}{\sqrt{x^2 + y^2}} = 0 $$
From here I haven't had luck trying to get this to $0$. So instead, I try to make it easier.
Since $f_x(0,0)$ and $f_y(0,0)$ exist we must evaluate whether $f_x(x,y)$ is continuous at $(x,y)=(0,0)$ i.e. if
$$\lim_{(x,y) \to (0,0)} \frac{\sin(x^2 + y^2)}{x^2 + y^2} = 1 $$ then $f(x,y)$ is differentiable at $(0,0)$. So letting $u = x^2 + y^2$,
$$\lim_{(x,y) \to (0,0)} \frac{\sin(x^2 + y^2)}{x^2 + y^2} = \lim_{ (x,y) \to (0,0)} \frac{\sin(u)}{u} = 1 $$ by L'Hopital once more.
Therefore, $f$ is differentiable at (0,0).
I feel like I am perhaps not completely justifying this, because I take a short cut - but is it valid? Should I perhaps use a epsilon-delta proof? Or is there a way to work with that tricky limit I have in $(*)$?
Many thanks for the help in advance!
|
You've already found
$$\lim \limits_{(x,y)\to (0,0)}\left(\dfrac{f(x,y) - [f(0,0) + f_{x}(0,0)(x-0) + f_{y}(0,0)(y-0)]}{\sqrt{x^2 + y^2}}\right)=\lim \limits_{(x,y)\to (0,0)}\left( \dfrac{\frac{\sin(x^2 + y^2)}{x^2 + y^2} - 1}{\sqrt{x^2 + y^2}}\right).$$
To evaluate $\lim \limits_{(x,y)\to (0,0)}\left( \dfrac{\frac{\sin(x^2 + y^2)}{x^2 + y^2} - 1}{\sqrt{x^2 + y^2}}\right)$ simplify the denominator and change the variables with $x=\rho \cos(\theta)$ and $y=\rho\sin(\theta)$ to get
$$\begin{align}\lim \limits_{(x,y)\to (0,0)}\left( \dfrac{\frac{\sin(x^2 + y^2)}{x^2 + y^2} - 1}{\sqrt{x^2 + y^2}}\right)&=\lim \limits_{(x,y) \to (0,0)}\left(\dfrac{\sin(x^2+y^2)-(x^2+y^2)}{(x^2+y^2)^{3/2}}\right)\\
&=\lim \limits_{\rho \to 0}\left(\dfrac{\sin\left(\rho^2\right)-\rho ^2}{|\rho| ^3}\right)\\
&=\lim \limits_{\rho \to 0}\left(\dfrac{\rho ^2+O\left(\rho^6\right)-\rho^2}{|\rho|^3}\right)\\
&=0.\end{align}$$
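Numerically the ratio behaves like $-\rho^3/6$ (from $\sin t = t - t^3/6 + o(t^3)$), so it indeed tends to $0$:

```python
from math import sin

# The ratio (sin(rho^2) - rho^2)/rho^3 should shrink like -rho^3/6 as rho -> 0.
for rho in (0.5, 0.1, 0.05):
    ratio = (sin(rho**2) - rho**2) / rho**3
    print(rho, ratio, -rho**3 / 6)
```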
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Integrate the following function:
Evaluate:
$$\int \frac{1}{ \cos^4x+ \sin^4x}dx$$
Tried making numerator $\sin^2x+\cos^2x$
making numerator $(\sin^2x+\cos^2x)^2-2\sin^2x\cos^2x$
Dividing throughout by $cos^4x$
Thank you in advance
|
Another way:
$$I=\int\frac{dx}{\cos^4x+\sin^4x}=\int\frac{(1+\tan^2x)\sec^2x dx}{1+\tan^4x}$$
Setting $\displaystyle\tan x=u,$
$$I=\int\frac{(1+u^2)\,du}{1+u^4}=\int\frac{1+\dfrac1{u^2}}{u^2+\dfrac1{u^2}}\,du$$
$$=\int\frac{1+\dfrac1{u^2}}{\left(u-\dfrac1u\right)^2+2}du$$
Set $\displaystyle u-\dfrac1u=v$
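Completing the hint (my own completion, not part of the original answer): the substitution gives $\int \frac{dv}{v^2+2} = \frac{1}{\sqrt2}\arctan\frac{v}{\sqrt2} + C$, so $I = \frac{1}{\sqrt2}\arctan\frac{\tan x - \cot x}{\sqrt2} + C$. A numeric check that this antiderivative differentiates back to the integrand:

```python
from math import atan, cos, sin, sqrt, tan

def F(x):
    # Candidate antiderivative from the v-substitution, valid on (0, pi/2).
    v = tan(x) - 1 / tan(x)
    return atan(v / sqrt(2)) / sqrt(2)

integrand = lambda x: 1 / (cos(x)**4 + sin(x)**4)

h = 1e-6
for x in (0.3, 0.7, 1.2):                  # sample points in (0, pi/2)
    num_deriv = (F(x + h) - F(x - h)) / (2 * h)
    print(x, num_deriv, integrand(x))      # central difference of F matches the integrand
```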
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Visual explanation of the following statement: Can somebody fill me in on a visual explanation for the following:
If there exist integers $x, y$ such that $x^2 + y^2 = c$, then there also exist integers $w, z$ such that $w^2 + z^2 = 2c$
I know why it is true (ex. take $w = x-y, z = x+y$), but I would think there is a visual explanation hiding somewhere because of squared terms (we can make squares!!)
|
It will take me forever to post the diagram so here is a description.
Draw the circle with centre $(0,0)$ and radius $\sqrt c\,$. Locate the point $(x,y)$ on this circle: by assumption, $x$ and $y$ are integers. Draw the tangent to this circle starting at $(x,y)$ and having length $\sqrt c\,$. This will give a point distant $\sqrt{2c}$ from the origin (because we have a right angled isosceles triangle), and the point will have integer coordinates because it is obtained from $(x,y)$ by a displacement of $(y,-x)$ or $(-y,x)$, depending which way we drew the tangent.
Update: see another answer for the picture. Thanks Oleg!
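The algebra behind the tangent construction can also be checked on lattice points: $(x,y)\mapsto(x-y,\,x+y)$ sends the circle of radius $\sqrt c$ to the circle of radius $\sqrt{2c}$.

```python
# If (x, y) is an integer point on x^2 + y^2 = c, the tangent construction
# lands on (x - y, x + y), an integer point on w^2 + z^2 = 2c.
for c in range(1, 200):
    points = [(x, y) for x in range(-15, 16) for y in range(-15, 16) if x * x + y * y == c]
    for x, y in points:
        w, z = x - y, x + y
        assert w * w + z * z == 2 * c
print("verified for c < 200")   # e.g. (3, 4) on c = 25 maps to (-1, 7) on 2c = 50
```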
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 6,
"answer_id": 2
}
|
Calculate percentage given value, minim and maximum It's my first time on Math.stack; be gentle.
I have slider with a range between -1 and 1.
If my slider is at 0 I'd expect it to be at 0%
If it were at either -1 or 1 I'd expect it to be 100%
However it must take into account those won't always be the max & min
When I've got a minimum value of -0.1896362 and maximum value of 0.1383057 I get a bit confused
This is what I've got so far (This is wrong):
percentage = ((slider-minimum)/(maximum-minimum)) *100
I've read this post which is similar to my problem, but the negative numbers are messing things up.
|
Alright, so let's take $u$ to be the upper bound, and let $l$ be the lower bound. When you go to the right, the percentage swept from $x=0$ to some $x$ on the right is:
$$\frac{x}{u}\times 100\%$$
Similarly, on the left you'll just use your lower bound. You don't even need absolute value because the negatives will cancel:
$$\frac{x}{l}\times 100\%$$
Let me know if that's what you meant.
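A small implementation along these lines (function and variable names are hypothetical), handling asymmetric bounds like $l=-0.1896362$, $u=0.1383057$:

```python
def percentage(slider, lower, upper):
    # Assumes lower < 0 < upper; measure distance from 0 toward whichever bound applies.
    if slider >= 0:
        return slider / upper * 100.0
    return slider / lower * 100.0   # both values negative, so the sign cancels

print(percentage(0.0, -0.1896362, 0.1383057))          # 0.0
print(percentage(0.1383057, -0.1896362, 0.1383057))    # 100.0
print(percentage(-0.1896362, -0.1896362, 0.1383057))   # 100.0
```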
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to integrate $\displaystyle 1-e^{-1/x^2}$?
How to integrate $\displaystyle 1-e^{-1/x^2}$ ?
as hint is given: $\displaystyle\int_{\mathbb R}e^{-x^2/2}=\sqrt{2\pi}$
If i substitute $u=\dfrac{1}{x}$, it doesn't bring anything:
$\,\displaystyle\int\limits_{-\infty}^{\infty}\left(1-e^{-1/x^2}\right)dx=\int\limits_{-\infty}^{0}\left(1-e^{-1/x^2}\right)dx+\int\limits_{0}^{\infty}\left(1-e^{-1/x^2}\right)dx\overset{?}=2\int\limits_{0}^{\infty}\left(1-\frac{e^{-u^2}}{-u^2}\right)du$
$2\displaystyle\int\limits_{0}^{\infty}\left(1-\frac{e^{-u^2}}{-u^2}\right)du=?$
How to continue ?
$\textbf{The original exercise was}$:
If a probability has a density $f(x)=C(1-e^{-1/x^2})$ then determine the value of constant $C$
Since $\displaystyle\int f\overset{!}=1$, I thought first to calculate the expression above.
($\textbf{ATTENTION:}$ Question edited from integrating $e^{-1/x^2}$ to integrating
$1-e^{-1/x^2}$)
|
$\int\left(1-e^{-\frac{1}{x^2}}\right)dx$
$=\int\left(1-\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{-2n}}{n!}\right)dx$
$=\int-\sum\limits_{n=1}^\infty\dfrac{(-1)^nx^{-2n}}{n!}dx$
$=-\sum\limits_{n=1}^\infty\dfrac{(-1)^nx^{1-2n}}{n!(1-2n)}+c$
$=\sum\limits_{n=1}^\infty\dfrac{(-1)^n}{n!(2n-1)x^{2n-1}}+c$
$\because\int_{-\infty}^\infty\left(1-e^{-\frac{1}{x^2}}\right)dx$
$=\int_{-\infty}^0\left(1-e^{-\frac{1}{x^2}}\right)dx+\int_0^\infty\left(1-e^{-\frac{1}{x^2}}\right)dx$
$=\int_\infty^0\left(1-e^{-\frac{1}{(-x)^2}}\right)d(-x)+\int_0^\infty\left(1-e^{-\frac{1}{x^2}}\right)dx$
$=\int_0^\infty\left(1-e^{-\frac{1}{x^2}}\right)dx+\int_0^\infty\left(1-e^{-\frac{1}{x^2}}\right)dx$
$=2\int_0^\infty\left(1-e^{-\frac{1}{x^2}}\right)dx$
$=2\int_\infty^0\left(1-e^{-x^2}\right)d\left(\dfrac{1}{x}\right)$
$=2\left[\dfrac{1-e^{-x^2}}{x}\right]_\infty^0-2\int_\infty^0\dfrac{1}{x}d\left(1-e^{-x^2}\right)$
$=4\int_0^\infty e^{-x^2}~dx$
$=2\sqrt\pi$
$\therefore C=\dfrac{1}{2\sqrt\pi}$
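A numerical check of the result (my addition): midpoint rule on $[0,B]$ plus the tail $\int_B^\infty(1-e^{-1/x^2})\,dx\approx 1/B$, since $1-e^{-1/x^2}\sim 1/x^2$ for large $x$.

```python
from math import exp, pi, sqrt

def g(x):
    # Integrand, with the removable value g(0) = 1.
    return 1.0 if x == 0 else 1.0 - exp(-1.0 / (x * x))

B, N = 100.0, 200_000
h = B / N
half_line = sum(g((i + 0.5) * h) for i in range(N)) * h + 1.0 / B  # tail estimate 1/B
print(2 * half_line, 2 * sqrt(pi))   # both ≈ 3.5449
```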
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/719963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 3
}
|
What is the relation between this binary number with no two 1 side by side and fibonacci sequence? I saw this pattern of binary numbers with constraints first number should be 1 , and two 1's cannot be side by side.
Now as an example
1 = 1
10 = 1
100,101 = 2
1000,1001,1010 = 3
10000,10001, 10010, 10100, 10101 = 5
Strangely, I see that the number of such binary strings with $n$ digits is the $n$th Fibonacci number, at least for the first 5 Fibonacci numbers. How can we show that this is true for the $n$th number, and why does this happen?
|
Suppose we make an $n$-digit string with no consecutive $1$s. Then it either ends with a $0$ or a $1$.
If it ends with a $0$, we can add (from the front) any $(n-1)$ digit string with no consecutive $1$s. There are $a_{n-1}$ of these.
If it ends with a $1$, then the previous digit must be a $0$ because there are no consecutive $1$s. But before this $1$ we can add any $(n-2)$-digit string. There are therefore $a_{n-2}$ $n$-digit strings with no consecutive $1$'s which start with $10$.
Hence $a_n = a_{n-1} + a_{n-2}$. This is exactly the Fibonacci sequence!
In fact, this is the very explanation (with some modification) Derek Holton gave in his wonderful book - A Second Step to Mathematical Olympiad Problems, for a similar problem.
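Brute force confirms both the counts and the recurrence:

```python
from itertools import product

def count(n):
    # n-digit strings starting with 1 and containing no "11".
    return sum(1 for rest in product("01", repeat=n - 1)
               if "11" not in "1" + "".join(rest))

fib = [1, 1]
while len(fib) < 15:
    fib.append(fib[-1] + fib[-2])

print([count(n) for n in range(1, 9)])   # [1, 1, 2, 3, 5, 8, 13, 21]
assert all(count(n) == fib[n - 1] for n in range(1, 15))
```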
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Bifurcation values for logistic map To find the bifurcation values for $$x_{i+1}=f(x_i) = rx_i(1-x_i)$$first I set $rx(1-x) = 0$ and found the x values and then used the x values to find $r = 0$ and $r = 1$.
Do you think what I did here is correct? If not, can you help me find the mistakes here?
|
To study bifurcations of maps, begin by looking for fixed points. The logistic equation is
$$x_{i+1} = f(x_i) = rx_i(1-x_i)$$
so a fixed point satisfies
$$x = rx(1-x) \Rightarrow x(rx - r+1) = 0$$
which implies that there are fixed points at $x=0$ and $x=1-1/r$. To analyze bifurcations as $r$ varies, consider the linearization of the map around these fixed points $x_i=x^*+y_i$
$$y_{i+1} = f'(x^*)y_{i}$$
Now $f'(x)=r(1-2x)$ so linearizing about $x=0$ gives
$$y_{i+1} = ry_i$$
so there is a clear loss of stability when $r=1$ and a period-doubling bifurcation when $r=-1$.
Linearizing about $x=1-1/r$ gives
$$y_{i+1} = (2-r)y_i$$
so there is a bifurcation from instability to stability as $r$ passes through 1 from below, and then a period doubling bifurcation as $r$ passes through $3$.
That gives you three bifurcations, at $r=-1$, $r=1$ and $r=3$. To find further bifurcations you must analyze the second iteration of the map,
$$x_{i+2} = f(f(x_i))$$
and search for fixed points, which correspond to period-2 orbits. This isn't as hard as it sounds - you end up having to solve a quartic equation, but you already know two of the solutions, as any fixed point trivially satisfies this equation. That gives you further period-doubling bifurcations, from 2-cycles to 4-cycles. Finding the location of the next set of period-doubling bifurcations (from 4-cycles to 8-cycles) is hard.
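You can watch the first period-doubling numerically (a sketch: iterate past transients and record the attractor):

```python
def attractor(r, transients=2000, keep=8, x0=0.3):
    # Iterate the logistic map past transients, then record the (rounded) attractor.
    x = x0
    for _ in range(transients):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

print(attractor(2.8))   # one point: the stable fixed point 1 - 1/2.8 ≈ 0.642857
print(attractor(3.2))   # two points: a stable 2-cycle, born at the bifurcation r = 3
```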
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
What are some alternative ways of describing n-dimensional surfaces using control points other than Bezier surfaces? I'm interested in problems involving geometric constraints and curve subdivision. I noticed that most of these problems describe the curves/surfaces using the Bezier form. I wanted to know if there are alternative ways of expressing an n-dimensional surface using control points that isn't a type of spline or Bezier curve.
|
Bezier curves are just polynomials. From a mathematical point of view, the $m+1$ Bernstein polynomials of degree $m$ are just a basis for the vector space $\mathbb{P}_m$ of all polynomials of degree $m$. So, of course, you can use other bases of $\mathbb{P}_m$, instead. This won't give you new types of curves and surfaces, just a different way of looking at them. Two other common choices are the "power" basis $\{1, u, u^2, \ldots, u^m\}$ and various Lagrange bases. Occasionally Legendre or Chebyshev polynomials. The Bernstein basis has some very attractive qualities. For example, it's very stable, numerically, it forms a partition of unity, so you get the convex hull property, and the coefficients (i.e. "control points") make some sense geometrically.
Also, regardless of what basis you use, polynomials are very attractive: easy to differentiate and integrate, easy to bound, useful for approximation, and generally well understood. Using anything else is going to be like swimming upstream, by comparison. Even using rational functions (as in rational Bezier curves) makes things quite a bit more difficult.
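To illustrate the "same curve, different basis" point: a cubic evaluated by de Casteljau's algorithm in Bernstein form agrees with its expanded power-basis coefficients (the control values below are made up):

```python
def de_casteljau(ctrl, t):
    # Evaluate a Bezier curve in Bernstein form by repeated linear interpolation.
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def power_basis(ctrl):
    # Power-basis coefficients of the same cubic sum b_i B_{i,3}(t).
    b0, b1, b2, b3 = ctrl
    return [b0,
            3 * (b1 - b0),
            3 * (b2 - 2 * b1 + b0),
            b3 - 3 * b2 + 3 * b1 - b0]

ctrl = [0.0, 2.0, -1.0, 3.0]
c = power_basis(ctrl)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    p = ((c[3] * t + c[2]) * t + c[1]) * t + c[0]   # Horner evaluation
    assert abs(de_casteljau(ctrl, t) - p) < 1e-12
print(c)  # [0.0, 6.0, -15.0, 12.0]
```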
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Interesting question about functions I saw the following question and I would like to share. I don't know the answer.
Suppose that the function $f:\Bbb{N}\to\Bbb{N}$ has the property $f(f(n))<f(n+1)$ for any $n\in\Bbb{N}$. Prove that $f(n)=n$.
|
Consider the set $A=\{ f ( f (1)), f (2), f ( f (2)), f (3), f ( f (3)),\ldots, f (n), f ( f (n)), . . .\}$.
That is the set of all numbers appearing in the inequality $f ( f (n)) < f (n + 1)$.
This set has a smallest element, which cannot be of the form $f (n + 1)$
because then it would be larger than $f ( f (n))$.
Thus it is of the form $f ( f (n))$.
The same argument shows that for this $n$, $f (n) = 1$.
If $n$ itself were greater than $1$, we
would get $1 = f (n) > f ( f (n − 1))$, which is impossible. Hence, $f (1) = 1$ and $f (n) > 1$ for $n > 1$.
Considering the restriction $f : \{n \ge 2\} \to \{n \ge 2\}$, the same argument shows that $f (2) = 2$ and $f (n) > 2$ for $n > 2$. By induction, one
shows that $f (k) = k$, and $f (n) > k$ for $n > k$, thus the unique solution to the problem is $f(n)=n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|