H: What is the difference between “true in ZFC” and “true”?
Let $P$ be a claim. There are two questions.
Is it true in ZFC that $P$?
And the other is:
Is it true that $P$?
What is the difference between these two questions?
AI: That depends entirely on what theoretical assumptions are silently being made (e.g., by being implicit in the context) in the second question. If the person asking the second question is working in $\mathsf{ZFC}$, then there is no difference. If the working theory is $\mathsf{ZF}+\neg\mathsf{AC}$, on the other hand, the two questions are very different. And the background theory need not be based on $\mathsf{ZF}$ at all; some alternatives are listed here.
|
H: Optimization Shape Question
What is the shape of the rectangle with maximum area, given that the rectangle has a fixed perimeter equal to $S$?
Now, I have no clue how to solve it. I was told to use the area formula and perimeter formula and find the derivative.
AI: \begin{align}
S&=2l+2w\\
A&=l\times w\\
&=l\times\frac{S-2l}{2}\\
A=f(l)&=\frac{l(S-2l)}{2}\\
\end{align}
What do you do now if you want to optimize the function?
\begin{align}f^\prime(l)&=\frac{S-4l}{2}=0\\S&=4l\\\implies 4l&=2l+2w\\\implies l&=w\\\end{align}
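As a quick numerical sanity check (my own addition, using the area $A=lw$, the perimeter constraint $S=2l+2w$, and a sample value $S=20$), a grid search over the feasible lengths recovers the square:

```python
# Grid search: for a fixed perimeter S, which rectangle maximizes the area?
S = 20.0
n = 100_000
best_l, best_area = None, -1.0
for i in range(1, n):
    l = (S / 2) * i / n          # l ranges over (0, S/2)
    w = (S - 2 * l) / 2          # width forced by S = 2l + 2w
    if l * w > best_area:
        best_area, best_l = l * w, l
print(best_l, best_area)         # the optimum is l = w = S/4 = 5, with area 25
```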
|
H: Name for a certain class of groups that contains all the abelian groups
I came across this type of group. Is there a name for groups that satisfy this condition:
$$\forall x,y\in G[xyx^{-1}\in \langle y\rangle]$$
As mentioned in the title, it is easy to see that all the abelian groups satisfy this condition.
Thank you
AI: If $G$ is a group such that for all $x,y \in G$, $xyx^{-1} \in \langle y \rangle$, then every cyclic subgroup is normal. Conversely if every cyclic subgroup is normal, then $G$ satisfies the condition.
More commonly this is expressed as every subgroup is normal: if every cyclic subgroup is normal, and $H \leq G$ is any subgroup, then $H$ is the product of the cyclic subgroups $\langle y \rangle$ for $y \in H$, so $H$ too is normal. The converse is clear.
A group in which every subgroup is normal is called a Dedekind group. A non-abelian Dedekind group is of the form $Q_8 \times P \times V$ where $P$ is a periodic abelian group with no elements of even order and $V$ is an abelian group all of whose elements have order dividing 2. These were classified by Dedekind (1897) in the finite case and Baer (1933) in general. A modern proof is theorem 5.3.7 on page 139 of Robinson (1982).
Dedekind, R.;
“Ueber Gruppen, deren sämmtliche Theiler Normaltheiler sind.”
Math. Ann. 48 (1897), no. 4, 548–561.
MR1510943
JFM28.0129.03
DOI:10.1007/BF01447922
Baer, R.
“Situation der Untergruppen und Struktur der Gruppe.”
[J] Sitzungsberichte Heidelberg (1933), Nr. 2, 12-17 (1933).
JFM59.0143.02
Robinson, Derek J. S.
“A course in the theory of groups.”
Graduate Texts in Mathematics, 80.
Springer-Verlag, New York, 1982. xviii+481 pp.
ISBN: 0-387-90600-2
MR648604
DOI:10.1007/978-1-4684-0128-8
DOI:10.1007/978-1-4419-8594-1
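As a brute-force illustration (my own addition, independent of the cited papers), one can check computationally that the smallest non-abelian Dedekind group $Q_8$ really has every subgroup normal:

```python
from itertools import combinations

# Quaternion multiplication on 4-tuples (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):                       # for unit quaternions, inverse = conjugate
    a, b, c, d = q
    return (a, -b, -c, -d)

# The eight elements of Q8: {+-1, +-i, +-j, +-k}.
Q8 = [(s if pos == 0 else 0, s if pos == 1 else 0,
       s if pos == 2 else 0, s if pos == 3 else 0)
      for pos in range(4) for s in (1, -1)]

# A nonempty finite subset closed under multiplication is a subgroup.
def is_closed(c):
    s = set(c)
    return all(qmul(x, y) in s for x in s for y in s)

subgroups = [set(c) for r in range(1, 9) for c in combinations(Q8, r)
             if is_closed(c)]

# Q8 is non-abelian: i*j = k but j*i = -k.
assert qmul(Q8[2], Q8[4]) != qmul(Q8[4], Q8[2])

# Every subgroup is normal: g h g^{-1} stays in H for all g in Q8, h in H.
normal = all(qmul(qmul(g, h), qinv(g)) in H
             for H in subgroups for g in Q8 for h in H)
print(len(subgroups), normal)      # Q8 has exactly 6 subgroups, all normal
```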
|
H: Optimization Word Problem
A farmer is building a new cylindrical silo with a flat roof and an earthen floor that will hold $20{,}000\ \mathrm{m}^3$ of corn. What dimensions should the farmer use for his silo if he wants to use the least material for construction?
I believe I have to use the surface area and area equations. Can anybody lead me in the right direction?
AI: \begin{align}
V&=20000\\
\pi r^2h&=20000\\
h&=\frac{20000}{\pi r^2}\\
A&=2\pi rh+\pi r^2\\
A&=2\pi r\frac{20000}{\pi r^2}+\pi r^2\\
A(r)&=\frac{40000}{ r}+\pi r^2\\
\end{align}
What do you do to optimize this function?
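Continuing the hint numerically (my own addition): setting $A'(r)=-40000/r^2+2\pi r=0$ gives $r^3=20000/\pi$, and a grid search over $A(r)$ agrees; note that the optimal height turns out to equal the optimal radius:

```python
import math

# Minimize A(r) = 40000/r + pi*r^2 by grid search and compare with calculus.
def A(r):
    return 40000.0 / r + math.pi * r * r

r_star = (20000.0 / math.pi) ** (1.0 / 3.0)   # root of A'(r) = -40000/r^2 + 2*pi*r
rs = [0.01 * i for i in range(1, 5000)]        # grid over r in (0, 50)
r_best = min(rs, key=A)

h_star = 20000.0 / (math.pi * r_star ** 2)     # height from the volume constraint
print(r_star, r_best, h_star)                  # note h equals r at the optimum
```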
|
H: Showing a function is contractive
This seems too simple a question, and thus I am doubting myself...
Show that the function $\dfrac{1}{2}x$ on $1\leq x \leq 5$ is contractive.
\begin{align}
|F(x) - F(y)| =& \left|\dfrac{1}{2}x -\dfrac{1}{2}y\right| \\
=& \dfrac{1}{2}|x -y| \\
\leq& \dfrac{1}{2}|x-y|
\end{align}
This should then be true for all $x,y \in \mathbb{R}$ correct?
Now, what I don't know is how to find the best value of $\lambda$ (I assume best means least value for $\lambda$) in the contractive mapping definition:
\begin{equation}
|F(x)-F(y)| \leq \lambda|x-y|
\end{equation}
Let me know what you think, and any suggestions that you have.
AI: Since $|F(x)-F(y)|$ is always exactly $\frac12|x-y|$, it’s clear that $|F(x)-F(y)|\le\lambda|x-y|$ if and only if $\lambda\ge\frac12$.
|
H: Solving for $n$ in a geometric progression
Given the general term of geometric sequence: $a_n = \dfrac{x}{2^n}$
I would like to solve for the value of n that makes $a_n =1$.
My work so far:
\begin{align*}
a_n &= \frac{x}{2^n}\\
2^n &= \frac{x}{a_n}\\
n &=\log_2\left(\frac{x}{a_n}\right)
\end{align*}
Then I let $a_n = 1$ thus:
$$n=\log_2(x)$$
However for the specific problem I'm working on $n$ has to be an integer so I applied the ceiling function to both sides of the equation:
$$\lceil n \rceil = \lceil \log_2(x) \rceil \,\,$$
At this point the value of $\lceil n \rceil$ should be the value I'm looking for. The important part of this problem is finding the smallest value of the form $2^a$ that satisfies $2^a \ge x$. I believe that $2^{\lceil n \rceil}$ is the smallest number of the form $2^a$ that satisfies the inequality however I am unsure of how to more rigorously prove it. Although I think it might have something to do with the definition of the ceiling function.
Please point out any errors in my work so far. Also any suggestions on how to prove that $2^{\lceil n \rceil}$ is the smallest value that satisfies the inequality would be fantastic!
Thanks for any assistance!
AI: What you’ve done is fine, and the proof isn’t hard.
Let $k=\lceil\log_2x\rceil$. By definition $k-1<\log_2x\le k$, so $2^{k-1}<x\le 2^k$, and it follows immediately (a) that $2^k\ge x$ and (b) that no smaller integer has that property.
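A brute-force check of the claim (my addition), cross-checked against an exact integer computation via `int.bit_length`:

```python
import math

# For each x, 2**ceil(log2(x)) should be the least power of 2 that is >= x.
for x in range(1, 2049):
    k = math.ceil(math.log2(x))
    assert 2 ** k >= x                           # (a): it is at least x
    if k > 0:
        assert 2 ** (k - 1) < x                  # (b): nothing smaller works
    assert 2 ** k == 1 << (x - 1).bit_length()   # exact integer cross-check
print("checked x = 1 .. 2048")
```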
|
H: Fields $k$ with subrings whose quotient field is $k$
Let $k$ be a given field, any field. Is there a subring $A$ of $k$ such that $k$ is the quotient field of $A$?
Let's restrict ourselves to fields I know anything about: subfields of $\mathbb{C}$, finite fields ($\mathbb{Z}/(p)$ and $(\mathbb{Z}/(p))[x]/(f)$ where $f$ is irreducible of degree $n$), and function fields of any characteristic.
Without thinking too much about it, it seems that such a subring exists if $k$ is a number field, if $k$ is a function field of any characteristic, but not so for finite fields. Is it perhaps the case that infinite fields always have such a subring? If so there would for example exist a subring $A \subset \mathbb{C}$ for which that would be true. I can't imagine what ring that would be (the theory of transcendental extensions might hold the key to answering that, but I don't know any of that stuff).
Anyway that's all I have to contribute, I'm just curious, so if anyone could list (types of) fields with such a subring, no motivation necessary, I'd appreciate it.
AI: Let $k$ have characteristic zero. If $k$ is algebraic over $\mathbf{Q}$, then you can take the ring of algebraic integers of $k$. Otherwise, choose a transcendence basis $S$ for $k$ over $\mathbf{Q}$. Then $k$ is an algebraic extension of $\mathbf{Q}(S)$. Now let $A$ be the integral closure in $k$ of $\mathbf{Q}[S]$. Then the field of fractions of $A$ will be $k$. Note that since $S\neq\emptyset$, $\mathbf{Q}[S]$ is not a field, so the integral closure $A$ can't be a field either, which means that $A$ is not equal to $k$.
The same argument works for a field of characteristic $p$ as long as it has positive transcendence degree over its prime field.
The only other case is where $k$ is algebraic over $\mathbf{F}_p$. Then you are out of luck. Indeed, if $A$ is any subring of $k$, then $A$ is integral over $\mathbf{F}_p$, hence is itself a field. So the field of fractions of any subring of $k$ is itself.
|
H: Whats the formula for the amount to scale up an image during rotation to not see the edges
I'm trying to figure out a formula... for how much a picture (rectangle) would have to be scaled up during a rotation (at any rotation amount) so that you don't see the edge of the picture in the square of the bounding box.
If the bounding box is square it might be a simpler formula. But when the box is rectangle it seems like it gets harder to figure out. Although it might be a different formula if the width>height, than if the height>width. The graphic above shows 30 degrees which took 170%. I would think 45 degrees would be the worst case scenario then from there to 90 it would go down again.
To clarify, I am looking for a formula to calculate a scale factor to use given an angle a height and a width. We can assume the size and aspect of the bounding box to be equal to the size and aspect of the photo. So the input is Angle, Height, and Width.. the output is Scale Factor.
Or actually, maybe it can be figured out using the Angle and an Aspect Ratio.. where aspect ratio is width/height.
In practice this will be done in code... so it doesn't have to be a single formula. It can be different formulas for different scenarios, such as if $w>h$ or $w=h$ or $w<h$.
AI: The formula you want in this case is as follows:
$$
k = \cos\theta+\frac{W}{H}\sin\theta
$$
Where: $H$ is the height, $W$ is the width, $\theta$ is the angle of rotation (from $0$ to $90$ degrees), and $k$ is the scaling factor.
Interestingly, this means that the "worst case" (largest scaling factor) corresponds to
$$
\theta = \tan^{-1}\left(\frac{W}{H}\right)
$$
In fact, I overlooked what happens when $W<H$; we should instead have
$$
k =
\begin{cases}
\cos\theta+\frac{W}{H}\sin\theta & W\geq H\\
\cos\theta+\frac{H}{W}\sin\theta & W < H
\end{cases}
$$
For the second case, we could draw the above and still have overlap. The more mathematically concise (but computationally expensive) way to write this would be
$$k = \cos\theta + \max\left\{\frac{W}{H},\frac{H}{W}\right\}\sin\theta$$
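A small Python sketch of the formula (my addition), with a corner-coverage test: after scaling by $k$, the rotated rectangle must contain all four corners of the original box:

```python
import math

def scale_factor(theta, W, H):
    # k = cos(theta) + max(W/H, H/W) * sin(theta), theta in [0, pi/2]
    return math.cos(theta) + max(W / H, H / W) * math.sin(theta)

def covers(theta, W, H, k):
    # Express each corner of the original box in the rotated rectangle's
    # coordinate frame; it is covered iff |u| <= kW/2 and |v| <= kH/2.
    c, s = math.cos(theta), math.sin(theta)
    for sx in (-1, 1):
        for sy in (-1, 1):
            x, y = sx * W / 2, sy * H / 2
            u, v = x * c + y * s, -x * s + y * c   # rotate the corner by -theta
            if abs(u) > k * W / 2 + 1e-9 or abs(v) > k * H / 2 + 1e-9:
                return False
    return True

# Spot-check several aspect ratios and angles.
for W, H in [(4, 3), (3, 4), (1, 1), (16, 9)]:
    for deg in range(0, 91, 5):
        theta = math.radians(deg)
        assert covers(theta, W, H, scale_factor(theta, W, H))
print("coverage verified")
```

At the worst angle, where $\tan\theta=\max(W/H,\,H/W)$, the factor simplifies to $\sqrt{1+\max(W/H,H/W)^2}$ (e.g. $5/3$ for a 4:3 image).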
|
H: MacLaurin powerseries and interval of convergence
Given the function $f(x) = \dfrac{5}{6x^2-x-1}$,
(a) Expand into MacLaurin powerseries the function $f$ up to order $3$.
(b) Find the interval of convergence of it.
(a) I will use the type of $$\frac{1}{0!}f(0) + \frac{1}{1!}f^{(1)}(0)x + \frac{1}{2!}f^{(2)}(0)x^2 + \frac{1}{3!}f^{(3)}(0)x^3 = -5+5x-35x^2+65x^3$$
(b) I will test the $\lim_{n\to \infty} |a_{(n+1)}/a_{n}| = 1$. Now, given the number "1", I don't know how to continue. Can someone give me some tips?
Thank you
AI: Hint: Use partial fractions to put $5/(6x^2-x-1)$ in nice form. Note that the denominator factors as $(3x+1)(2x-1)$. So you will be looking for an expression of shape $\frac{A}{2x-1}+\frac{B}{3x+1}$.
Note that for example the expansion of $\frac{1}{1+2x}$ is easy, since $\frac{1}{1-t}=1+t+t^2+\cdots$.
You can then combine the two expansions to get an explicit formula for the coefficient of $x^n$ in the power series expansion of your original function.
You can do a ratio test on these, not too messy.
But there is an easier way. One of the two component series has radius of convergence $1/2$, the other $1/3$. The radius of convergence of the combination is the radius of convergence of the bad one, which is $1/3$.
Remark: Partial fractions are good for other things than integration!
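Carrying out the hint explicitly (my own computation, so worth double-checking): the partial fractions are $\frac{2}{2x-1}-\frac{3}{3x+1}$, giving $a_n=-2\cdot 2^n-3(-3)^n$; a quick script confirms the first coefficients and the ratio-test limit of $3$ (hence radius $1/3$):

```python
# Coefficients from 5/(6x^2 - x - 1) = 2/(2x-1) - 3/(3x+1)
#                                    = -2*sum (2x)^n - 3*sum (-3x)^n.
def a(n):
    return -2 * 2 ** n - 3 * (-3) ** n

print([a(n) for n in range(4)])       # [-5, 5, -35, 65], as in the question
print(abs(a(20) / a(19)))             # ratio tends to 3 => radius 1/3
```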
|
H: Faster way to compute the probability of $x$-th permutation being something
Say we have $12$ males and $24$ females
and we have to calculate the probability of the $2$nd person of being a male
we would do something like this
$(M)(M) + (F)(M)$
but what if we asked to calculate the probability of the $24$th person of being a male?
Is there a fast way to compute that?
AI: Yes, there is an easy way. It is $\dfrac{12}{36}$.
The problem has not been fully described, but presumably we put these people in a row at random. All permutations are equally likely. So Adam, and Bob, and Charlie, and so on each have probability $\frac{1}{36}$ of being in any particular position, like the $24$-th, or the first.
Also, the probability that males occupy positions $5$ and $9$ and $17$ is the same as the probability they occupy positions $1$, $2$, and $3$.
We do have to be careful though. Given that the first person is male, the probability the $17$-th is male is $\frac{11}{35}$. Symmetrically, given that the $17$-th person is male, the probability the first is male is also $\frac{11}{35}$.
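A Monte Carlo sanity check of the symmetry argument (my addition): with $12$ males and $24$ females in a random order, the $24$th person is male with probability $12/36=1/3$:

```python
import random

random.seed(0)
people = ["M"] * 12 + ["F"] * 24
trials = 200_000
hits = 0
for _ in range(trials):
    random.shuffle(people)
    if people[23] == "M":          # position 24 (0-indexed 23)
        hits += 1
print(hits / trials)               # close to 1/3
```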
|
H: Find the length of the curve between two endpoints
Problem:
Find the length of the curve $\vec r(t) = \sqrt 2\,t\,\hat i + \sqrt 2\,t\,\hat j + (1 - t^2)\,\hat k$ from $(0, 0, 1)$ to $(\sqrt 2, \sqrt 2, 0)$
Solution Attempt:
First find the arc length function for this vector by using $s(t) = \int ||r'(t)|| dt$
Where $r'(t) = \sqrt 2 \hat i + \sqrt 2 \hat j - 2t \hat k$ and $||r'(t)|| = 2\sqrt{1 + t^2}$
So $s(t) = \int||r'(t)||dt = \sec x \tan x + \log(\sec x + \tan x)$
This is where I get stuck. How do I evaluate this for $t$ given that the endpoints are $(0, 0, 1)$ to $(\sqrt 2, \sqrt 2, 0)$?
AI: Presumably you made the substitution $t=\tan x$. Then $\sec x=\sqrt{1+t^2}$, so
$$s(t)=t\sqrt{1+t^2}+\ln\left(t+\sqrt{1+t^2}\right)\;.$$
By inspection $\vec r(0)=\langle 0,0,1\rangle$ and $\vec r(1)=\langle\sqrt2,\sqrt2,0\rangle$, so you just want
$$\left[t\sqrt{1+t^2}+\ln\left(t+\sqrt{1+t^2}\right)\right]_0^1\;.$$
Alternatively, you could avoid substituting back by noting that $\tan 0=0$ and $\tan\frac{\pi}4=1$, so that you want
$$\left[\sec x\tan x+\ln\left(\sec x+\tan x\right)\right]_0^{\pi/4}\;.$$
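A numerical cross-check of the arc length (my addition), comparing Simpson's rule on $\lVert r'(t)\rVert=2\sqrt{1+t^2}$ with the closed form $\sqrt2+\ln(1+\sqrt2)$:

```python
import math

def speed(t):
    return 2 * math.sqrt(1 + t * t)

# Composite Simpson's rule on [0, 1] with n (even) subintervals.
n = 1000
h = 1.0 / n
simpson = speed(0) + speed(1)
for i in range(1, n):
    simpson += (4 if i % 2 else 2) * speed(i * h)
simpson *= h / 3

closed_form = math.sqrt(2) + math.log(1 + math.sqrt(2))
print(simpson, closed_form)        # both about 2.2956
```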
|
H: Question about the proof that a countable union of countable sets is countable
Can someone explain a bit about the "Hence"?
2.12 Theorem
Let $\{E_n\}$, where $n=1,2,3,...$, be a sequence of countable sets, and put
$S=\underset{n=1}{\overset{\infty }{\cup }}E_n$.
Then $S$ is countable.
Proof
Let every set $E_n$ be arranged in a sequence $\{x_{nk}\},\ k=1,2,3,\ldots,$
and consider the infinite array
$$\begin{array}{cccc}
x_{11} & x_{12} & x_{13} & \cdots\\
x_{21} & x_{22} & x_{23} & \cdots\\
x_{31} & x_{32} & x_{33} & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{array}\tag{16}$$
in which the elements of $E_n$ form the $n$th row.
The array contains all elements of $S$. As indicated by the arrows, these elements can be arranged in a sequence
(17) $x_{11};x_{21},x_{12};x_{31},x_{22},x_{13};x_{41},x_{32},x_{23},x_{14};\text{...}$
Sentence 1 If any two of the sets $E_n$ have elements in common, these will appear more than once in (17).
Sentence 2 Hence there is a subset $T$ of the set of all positive integers such that $S\sim T$, which shows that $S$ is at most countable. Since $E_1\subset S$, and $E_1$ is infinite, $S$ is infinite, and thus countable.
Problem:
What's the relation between sentence 1 and sentence 2?
AI: It could have been stated a bit more clearly; let me expand it a bit. $(17)$ is to be understood as exhibiting this correspondence with the set of positive integers:
$$\begin{array}{rcccccccccccc}
\Bbb Z^+:&1&2&3&4&5&6&7&8&9&10&11&\ldots\\
S:&x_{11}&x_{21}&x_{12}&x_{31}&x_{22}&x_{13}&x_{41}&x_{32}&x_{23}&x_{14}&x_{51}&\ldots
\end{array}$$
The problem is that some of the entries in the $S$ row may be identical; e.g., it might be that $x_{12}=x_{23}$ and $x_{11}=x_{22}=x_{32}$. In that case the correspondence isn’t a bijection between $\Bbb Z^+$ and $S$. However, we can strike out any duplicates in the bottom line, leaving something like this, for instance:
$$\begin{array}{rcccccccccccc}
T:&1&2&3&4&\bullet&6&7&\bullet&\bullet&10&11&\ldots\\
S:&x_{11}&x_{21}&x_{12}&x_{31}&\bullet&x_{13}&x_{41}&\bullet&\bullet&x_{14}&x_{51}&\ldots
\end{array}$$
(I’ve replaced the deleted items with bullets for greater visual clarity; in fact the deleted items are simply gone.) We now have a bijection between the two sets that remain. On the bottom line we still have the set $S$, since we struck out only extra copies. On the top line we have some subset $T$ of $\Bbb Z^+$.
The Hence in Sentence $2$ actually refers to the fact that although we may not have a bijection of $S$ with $\Bbb Z^+$, we do have one with a subset $T$ of $\Bbb Z^+$. It’s the existence of $T$ and the fact that it may only be a subset of $\Bbb Z^+$ that follow from the previous sentence, as indicated by hence.
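The striking-out procedure is easy to imitate on a finite toy version of the array (my illustration; the real rows are infinite):

```python
# Three overlapping "countable" sets, as finite rows for illustration.
E = [[0, 2, 4, 6], [0, 3, 6, 9], [1, 2, 3, 4]]

# Enumerate diagonally: x11; x21, x12; x31, x22, x13; ... as in (17).
diagonal = []
for d in range(len(E) + len(E[0]) - 1):
    for n in range(d, -1, -1):
        k = d - n
        if n < len(E) and k < len(E[0]):
            diagonal.append(E[n][k])

# Strike out duplicates; the surviving positions form the subset T.
T, seen = [], set()
for position, x in enumerate(diagonal, start=1):
    if x not in seen:
        seen.add(x)
        T.append(position)

S = set().union(*map(set, E))
print(sorted(seen) == sorted(S), len(T) == len(S))   # a bijection S <-> T
```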
|
H: How do you make less mistakes in math?
How do you make less mistakes in math? Do you try to be more alert, do you take your time more, or what? Usually I don't make that many mistakes, but sometimes (like now) I do math as I imagine I would do it if I was ever drunk. I just did a couple of problems and I'm confusing addition with multiplication, $\lt$ with $\le$ , and other stuff like this. I make most of my mistakes when I think about how to approach/solve a more open-ended or abstract problem, not when I actually write it down on paper.
AI: I think that checking your work frequently is one of the best ways to deal with this. You can also check your work with varying degrees of formality: you can carefully go back through every computation, but you can also just look back and forth between your current line of work and the previous one to check that things seem to line up. This more informal form of checking is a good one to cultivate, because it doesn't take too much time, and can catch a lot of mistakes.
One example I use a lot: when multiplying out two expressions in parentheses, each with a lot of terms, I make sure that in the expanded expression I have the right number of terms; e.g. multiplying $($two terms added together$)($three terms added together$)$ will give $($six terms added together$)$, and it's pretty quick to check that you have six terms after multiplying out (quicker than computing the expansion all over again).
Related to this, another thing to try to practice is to read what you actually wrote (or what is actually written in the question), rather than what you think, or expect, to see there. (This is the basic problem with all proof-reading!) Concentration is important here, obviously, but simply being aware of the issue helps.
I think if you find that you really have trouble with concentration/alertness at a particular moment, taking a break and coming back to you work can save time in the long-term. Again, trying to cultivate a sense of what your current alertness and concentration level is can help. In general, just trying to be self-aware as you're working is helpful, and is something that you can get better at with practice.
|
H: Integral representation of cosh x
On Wolfram MathWorld, there's apparently an integral representation of $\cosh x$ that I'm unfamiliar with. I'm trying to prove it, but I can't figure it out. It goes \begin{equation}\cosh x=\frac{\sqrt{\pi}}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} \frac{ds}{\sqrt{s}}\,e^{s+\frac{1}{4}\frac{x^{2}}{s}},\qquad \gamma >0\end{equation}
The contour is taken along a vertical line with positive real part. I thought at first glance to use the residue theorem but it seems to be of no use here.
AI: Expanding the difficult part of the exponential in a power series, the integral equals
$$ I = \sqrt\pi \sum_{k\geq0} \frac{(x^2/4)^k}{k!} \frac{1}{2\pi i}\int_{\Re s=\gamma} s^{-k-1/2}e^{s}\,ds. $$
The integral here is the inverse Laplace transform of the function $s^{-k-1/2}$ evaluated at the point $t=1$, given by
$$ \mathcal{L}^{-1}[s^{-k-1/2}](t) = \frac1{2\pi i}\int_{\Re s=\gamma}s^{-k-1/2}e^{st}\,ds. $$
So we can look it up (http://mathworld.wolfram.com/LaplaceTransform.html):
$$ \frac1{2\pi i}\int_{\Re s=\gamma}s^{-k-1/2}e^{s}\,ds = \frac{1}{\Gamma(k+1/2)}, $$
which also satisfies
$$ \frac{\Gamma(1/2)}{\Gamma(k+1/2)} = \frac{1}{\frac12\times\frac32\times\cdots\times(k-\frac12)}, $$
where $\Gamma(1/2)=\sqrt\pi$. Simplifying, we get
$$ \sum_{k\geq0} \frac{(x^2/4)^k}{k!} \frac{\sqrt\pi}{\Gamma(k+1/2)} = \sum_{k\geq0}\frac{x^{2k}}{(2k)!} = \cosh x. $$
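The final two identities can be checked numerically (my addition):

```python
import math

# sum_k (x^2/4)^k / k! * sqrt(pi)/Gamma(k+1/2) should equal cosh x.
x = 1.7
series = sum((x * x / 4) ** k / math.factorial(k)
             * math.sqrt(math.pi) / math.gamma(k + 0.5)
             for k in range(30))
print(series, math.cosh(x))        # the two values agree

# Term by term: sqrt(pi) / (4^k * k! * Gamma(k+1/2)) equals 1/(2k)!.
for k in range(10):
    lhs = math.sqrt(math.pi) / (4 ** k * math.factorial(k) * math.gamma(k + 0.5))
    assert math.isclose(lhs, 1 / math.factorial(2 * k), rel_tol=1e-9)
```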
|
H: Graph theory, maybe related to planar graphs
There are girls and boys in the same school; each girl knows 10 boys and each boy knows 10 girls. Prove that the number of boys and the number of girls in the school is the same ("knows" is symmetric: if girl G knows boy B, then B knows G).
I think this question is related to the pigeonhole principle, so I was trying to use mathematical induction to prove it. But it is more likely about graph theory; can anyone help me with this?
AI: Draw a bipartite graph, with boys on one side and girls on the other.
Draw an edge between a girl and a boy if and only if they know each other.
Then each vertex has degree 10.
Question: How many edges are there?
Hint: Count them from the "boy" vertices, then again from the "girl" vertices...
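For concreteness (my own construction, not part of the problem), here is a 10-regular bipartite "knows" relation on which the two edge counts can be compared:

```python
# A circulant 10-regular bipartite graph: girl g knows boys g, g+1, ..., g+9 mod n.
n = 25
girls = range(n)
boys = range(n)
knows = {(g, (g + j) % n) for g in girls for j in range(10)}

deg_girl = {g: sum(1 for (a, b) in knows if a == g) for g in girls}
deg_boy = {b: sum(1 for (a, b2) in knows if b2 == b) for b in boys}
assert all(d == 10 for d in deg_girl.values())
assert all(d == 10 for d in deg_boy.values())

# Counting edges two ways: 10 * (#girls) = |E| = 10 * (#boys), forcing #girls = #boys.
print(len(knows))
```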
|
H: A sequence raised to a sequence
I was wondering if the following is true...
If $(a_{n})$ is a sequence of positive terms converging to $a$ and $(b_{n})$ is a real sequence converging to $b$, then the sequence $(a_{n}^{b_{n}})$ converges to $a^{b}$.
This is what I did
$$ a_{n}^{b_{n}} = e^{b_{n}\ln{a_{n}}}$$
Now as the logarithm function is continuous, we have, for a fixed $m$
$$ \lim\limits_{n\to \infty}a_{n}^{b_{m}} = e^{b_{m}\ln{a}} $$
Now as exponential is continuous, we have our result.
Is every step here correct? I am skeptical because this is an important result and the proof is easy (if it is correct) and yet it is not found in real analysis books.
EDIT: We have $(1 + 1/n) \to 1$ and $n\to \infty$ and $(1 + 1/n)^{n} \to e$. Does this qualify as a counterexample? Is the result only true for finite $a$ and $b$?
AI: The proof is not quite correct. You can't decide which variable to keep fixed and which one to compute a limit on (as your edit shows, one can obtain lots of nonsense by doing so). Instead, simply argue by continuity of $\ln$ that $\ln (a_n)$ converges to $\ln (a)$ (assuming here $a\ne 0$), and then it follows that $b_n\cdot \ln(a_n)$ converges to $b\ln(a)$. Then the computation is complete.
The proof is easy but it uses the continuity of the exponential function and of $\ln$, as well as the fact that $a^b=e^{b\ln a}$. None of these is particularly trivial. Notice that you can define exponentials from first principles, only using the algebraic definitions up to when the exponent is a rational number (all positive). Then you can define arbitrary exponents by continuity. This will produce an elementary a rigorous definition of exponents, including of the exponential function itself. If this is done, then $a^b$ is defined to be the limit $a_n^{b_n}$ where $a_n$ is a sequence of rationals converging to $a$, and $b_n$ is a sequence of rationals converging to $b$. It is then elementary and not too hard to show that $a^b$ is independent of the choice of sequences.
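A numerical illustration of both halves of the answer (my addition): with both limits finite the result holds, while the $(1+1/n)^n$ example violates the hypotheses because its exponent diverges:

```python
import math

# a_n -> 2 and b_n -> 3, so a_n**b_n -> 2**3 = 8.
for n in [10, 100, 1000, 10000]:
    a_n = 2 + 1 / n
    b_n = 3 - 1 / n
    print(n, a_n ** b_n)

# Here the base tends to 1 but the exponent is unbounded: the limit is e, not 1.
print((1 + 1 / 10000) ** 10000, math.e)
```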
|
H: How many faces does the $n$-dimensional cube $I^n$ have?
I am taking an online Coursera Calculus course, and this question popped up as one of the challenge problems, reproduced below. I had a difficult time understanding the answer, which was that $I^n$ has $2n$ faces.
This problem concerns the boundary operator $\partial$ from the bonus material.
Denote by $I$ the closed unit interval $[0,1]$. Then, as observed,
$\partial I=\{0\}\cup\{1\}$ is the union of two points. Let's get a little creative.
Denote by $I^n$ the "$n$-cube", that is, the Cartesian product of $n$
intervals:
$$I^n = I \times I \times \cdots \times I$$
This is a well-defined and perfectly reasonable $n$-dimensional cube.
(Just because you can't visualize doesn't mean it can't exist!) Note
that $I^1=I$ and $I^0$ is a single point (a zero-dimensional cube!). As a
step towards building a "calculus of spaces", let us write $\partial I^1=2I^0=2$
as a way of saying that the boundary of an interval consists of two
points and that $I^0=1$.
The boundary of an n-dimensional cube consists of a certain number of
$(n−1)$-dimensional cubes (called "faces"). For example, a square ($I^2$)
has four faces. Using what you know about derivatives, answer this:
how many faces does $I^n$ have?
AI: There are lots of ways to solve this problem. The best is to find one for yourself. If you don't have a clue, start with the cases which are easy to understand: a line segment has two vertices, a square has four sides, and a cube has six faces. Now try to understand the connection between the dimension, $n$, and the number of faces, $2n$.
But here is one solution: a face is determined by choosing one coordinate $i$, with $1 \leq i \leq n$ and taking either all elements with $i$th coordinate $0$ or all elements with $i$th coordinate $1$. (Thus for instance the top face of the unit cube $[0,1]^3$ is $\{ (x,y,1) \ | \ 0 \leq x,y \leq 1 \}$.) So there are $2n$ faces altogether.
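The coordinate-freezing description can be checked by brute force (my addition):

```python
from itertools import product

# Enumerate the faces of [0,1]^n as vertex sets with one coordinate frozen
# to 0 or 1, and check that the count is 2n.
def faces(n):
    vertices = list(product((0, 1), repeat=n))
    face_set = set()
    for i in range(n):
        for value in (0, 1):
            face_set.add(frozenset(v for v in vertices if v[i] == value))
    return face_set

for n in range(1, 7):
    assert len(faces(n)) == 2 * n
print([len(faces(n)) for n in range(1, 7)])   # 2, 4, 6, 8, 10, 12
```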
|
H: The union of a countable set of countable sets?
Let $A$ be a countable set, and let $B_n$ be the set of all $n$-tuples $\left(a_1,\ldots,a_n\right)$, where each $a_k\in A$.
$B_n$ is the union of a countable set of countable sets.
This question is maybe about the English.
Is my rephrasing right?
$B_n$ is a countable set as the union of countable sets.
I think the quoted sentence has some problems in grammar or otherwise.
Of course, that's only my judgment, and I'm not a native speaker and not so confident in that judgment, so I asked this.
AI: Here are the two sentences in question:
$B_n$ is the union of a countable set of countable sets.
$B_n$ is a countable set as the union of countable sets.
Sentence (1) is just fine, and sentence (2) is not a correct paraphrase of sentence (1).
Sentence (2) says that $B_n$ is a countable set; sentence (1) does not say this.
Sentence (1) says that $B_n$ is the union of countably many countable sets; sentence (2) merely says that $B_n$ is the union of some arbitrary family of countable sets, not necessarily a countable family.
|
H: What does it mean to "identify" points of a topological space?
I was recently reading about circle rotations (a basic example in dynamical systems) and got confused by some notation. It said consider the unit circle $S^{1} = [0,1]/{\sim}$, where $\sim$ indicates that $0$ and $1$ are identified.
What does "identify" mean? and how is the set $[0,1]/{\sim}$ different from the set $[0,1]$? Thanks!
AI: It means first that $\sim$ is the following equivalence relation: for $x,y\in[0,1]$, $x\sim y$ iff $x=y$ or $\{x,y\}=\{0,1\}$. (It’s easy to check that this is an equivalence relation on $[0,1]$.) As a set $[0,1]/{\sim}$ is the set of equivalence classes of $\sim$; these equivalence classes are the singletons $\{x\}$ for $x\in(0,1)$ and the doubleton $\{0,1\}$. Finally, it says that we’re endowing $[0,1]/{\sim}$ with the quotient topology, defined as follows:
if $q:[0,1]\to[0,1]/{\sim}$ is the map that takes a point $x\in[0,1]$ to its $\sim$-equivalence class, then $U\subseteq[0,1]/{\sim}$ is open in $[0,1]/{\sim}$ iff $q^{-1}[U]$ is open in $[0,1]$.
It turns out that this has a rather simple intuitive meaning in this case: the effect of all this is to treat $0$ and $1$ as if they were the same point, a single point whose open nbhds are obtained by ‘gluing together’ open nbhds of $0$ and $1$ in $[0,1]$. That is, the construction effectively just bends the unit interval around into a circle and identifies the endpoints in the sense that it makes them identically the same point, and the result is the circle $S^1$.
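A finite caricature of the construction (my illustration; the real quotient is over all of $[0,1]$): grouping a grid of points by $\sim$ merges only $0$ and $1$:

```python
from fractions import Fraction

points = [Fraction(k, 10) for k in range(11)]          # 0, 1/10, ..., 1

def related(x, y):
    return x == y or {x, y} == {Fraction(0), Fraction(1)}

# Collect the equivalence classes of ~ on the grid.
classes = []
for x in points:
    for c in classes:
        if related(x, next(iter(c))):
            c.add(x)
            break
    else:
        classes.append({x})

print(len(classes))                # 10 classes from 11 points: {0,1} is one class
```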
|
H: Trigonometry Equations.
Solve for $0 \leq A \leq 360^\circ$, giving solutions correct to the nearest minute where necessary,
a) $\cos^2 A -8\sin A \cos A +3=0$
Can someone please explain how to solve this? I've tried myself with no luck. Thanks!
AI: The double angle identities mentioned by Avatar give a good approach. But there are alternatives. For example, we can rewrite the equation as $\cos^2 A+3=8\sin A\cos A$. Square both sides and use $\sin^2 A=1-\cos^2 A$. After rearranging, we get
$$65\cos^4 A -58\cos^2 A +9=0.$$
We get awfully lucky, the expression on the left factors nicely, and we get
$$(13\cos^2 A-9)(5\cos^2 A -1)=0.$$
There is a small complication. We squared, and therefore may have introduced extraneous (spurious) roots. So any answer that we get has to be checked to see whether it really works.
As a bit of further help, note from the original equation that $\sin A\cos A$ cannot be negative, so $A$ can only be in the first quadrant or the third.
The rest is calculator work.
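The "calculator work", with the extraneous-root check made explicit (my sketch):

```python
import math

# Keep only roots of the *original* equation, discarding the extraneous
# candidates introduced by squaring.
def original(A):
    return math.cos(A) ** 2 - 8 * math.sin(A) * math.cos(A) + 3

candidates = []
for c2 in (9 / 13, 1 / 5):                 # cos^2 A from the two factors
    base = math.degrees(math.acos(math.sqrt(c2)))
    for A in (base, 180 - base, 180 + base, 360 - base):
        candidates.append(A % 360)

genuine = sorted(A for A in candidates
                 if abs(original(math.radians(A))) < 1e-9)
print(genuine)    # only first- and third-quadrant candidates survive
```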
|
H: Question about Lemma 7.7.1 from Hirschhorn's Model Categories and Their Localizations
The Lemma states the following.
Let M be a model category. If $g:X\rightarrow Y$ is a weak equivalence between cofibrant objects in M, then there is a functorial factorization of $g$ as $g=ji$ where $i$ is a trivial cofibration and $j$ is a trivial fibration that has a right inverse that is a trivial cofibration.
Can someone clarify what "functorial" means here, and how to show it? It seems to me that this part is not proven in the proof of the Lemma.
AI: The meaning of "functorial factorization" is clarified well on page 7 of the following article:
http://arxiv.org/pdf/1204.5427.pdf
What you require is the existence of two factorization functors $R, L : \mathrm{Mor}(M) \to \mathrm{Mor}(M)$ which can be used to factor any morphism, and are well behaved with respect to domains and codomains. That is: $\mathrm{dom} (L) = \mathrm{dom}, \mathrm{cod}(L)=\mathrm{dom}(R), \mathrm{cod}(R) = \mathrm{cod}$ as functors. For example, $\mathrm{dom}$ is a functor $\mathrm{Mor}(M) \to M$, defined in the only way that makes sense; $\mathrm{dom}(R)$ denotes the composite functor $\mathrm{dom} \circ R$, with a little abuse of notation. Here $\mathrm{Mor}(M)$ is the category of morphisms in $M$ and commutative squares between them. In the article I linked it is called $M^2$.
Functorial factorizations are given in any model category, as an axiom. Some authors, such as Hovey in his book "Model Categories", include the functorial factorization in the model structure itself, not merely requiring their existence.
|
H: Parametric equations, Exponential Function.
Consider the curve defined by the parametric equations $x=t^2 +t-1$ and $y=te^{2t}$
i) Show that $dy/dx =e^{2t}$
ii) Hence show that the tangent to the curve at the point on the curve where $t= -1$ passes through the origin.
I'm sorry to bug you guys, but I'm clueless, and it would help me if someone could walk me through this so I can handle similar questions. Thanks!
AI: $$\frac{dy}{dt}=e^{2t}+2te^{2t}=e^{2t}(1+2t)$$
$$\frac{dx}{dt}=2t+1 $$
$$\frac{dy}{dx}=\frac{\frac{dy}{dt}}{\frac{dx}{dt}}=\frac{e^{2t}(1+2t)}{2t+1}=e^{2t}$$
At $t=-1,x=-1, y=-e^{-2}$
So, the equation of the tangent will be $$\frac{y-(-e^{-2})}{x-(-1)}=\left.\frac{dy}{dx}\right|_{t=-1}=e^{-2}\implies y=x\cdot e^{-2},$$ which clearly passes through the origin $(0,0)$
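A numeric sanity check (my addition) that the slope at $t=-1$ is $e^{-2}$ and the tangent hits the origin:

```python
import math

def x(t): return t * t + t - 1
def y(t): return t * math.exp(2 * t)

# Central-difference estimate of dy/dx at t = -1.
t0, h = -1.0, 1e-6
slope = (y(t0 + h) - y(t0 - h)) / (x(t0 + h) - x(t0 - h))
print(slope, math.exp(2 * t0))           # both about e^-2

# Tangent line: Y - y(t0) = slope * (X - x(t0)); evaluate it at X = 0.
Y_at_origin = y(t0) + slope * (0 - x(t0))
print(Y_at_origin)                       # about 0
```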
|
H: Would a number that bounds the convergent sequence always be a bound for its limit?
Given a sequence of real numbers $\left\{a_n\right\}$ such that $a_n \to a$ and $a_n \leq c$ for all $n$, does it follow that $a\leq c$?
I'm just perplexed by a line in the proof of the Intermediate Value Theorem in 4th Ed. of Royden. In the last paragraph it goes like this:
Therefore $\left\{ a_n \right\} \to x_0$. By the continuity of $f$ at
$x_0$, $\left\{ f(a_n) \right\} \to f(x_0)$. Since $f(a_n)\leq c$ for
all n, and the set $(-\infty, c]$ is closed, $f(x_0)\leq c$.
I just don't know where the closed $(-\infty,c]$ fits in, in the proof.
AI: The claim is a special case of the following fact:
Fact: If $\langle x_n:n\in\Bbb N\rangle$ is a sequence of real numbers, $\lim\limits_{n\to\infty}x_n=y$, $C$ is a closed set of real numbers, and $x_n\in C$ for all $n\in\Bbb N$, then $y\in C$.
Proof: Suppose that $y\notin C$. Then because $C$ is closed, there is an $\epsilon>0$ such that $$(y-\epsilon,y+\epsilon)\cap C=\varnothing\;.\tag{1}$$ On the other hand, $\lim\limits_{n\to\infty}x_n=y$, so there is an $m_\epsilon\in\Bbb N$ such that $|x_n-y|<\epsilon$ for each $n\ge m_\epsilon$. Pick any integer $n\ge m_\epsilon$, then $|x_n-y|<\epsilon$, so $x_n\in(y-\epsilon,y+\epsilon)$. And by hypothesis $x_n\in C$, so $x_n\in(y-\epsilon,y+\epsilon)\cap C=\varnothing$, contradicting $(1)$. Thus, $y\in C$ after all. $\dashv$
In your case $x_n=f(a_n)$, $y=f(x_0)$, and $C=(\leftarrow,c]$. To see why it’s important that $(\leftarrow,c]$ is closed, consider the following variant:
Conjecture: If $\langle x_n:n\in\Bbb N\rangle$ is a sequence of real numbers, $\lim_{n\to\infty}x_n=y$, $C\subseteq\Bbb R$, and $x_n\in C$ for all $n\in\Bbb N$, then $y\in C$.
To see that this conjecture isn’t always true, let $x_n=-\frac1{2^n}$ for each $n\in\Bbb N$, let $y=0$, and let $C=(\leftarrow,0)$, the set of negative real numbers. Certainly $x_n\in C$ for each $n\in\Bbb N$, and certainly $\lim\limits_{n\to\infty}\left(-\frac1{2^n}\right)=0$, but $0\notin C$.
The conjecture fails in this example precisely because $y$, the limit of the sequence, is a limit point of $C$ that is not itself in $C$. When $C$ is closed, this cannot happen: by definition a closed set contains all of its limit points. When $C$ is not closed, there is at least one limit point of $C$ that is not in $C$, and if such a point is the limit of the sequence, the conjecture will fail in that case, just as in the example here.
|
H: How could I see that a strongly monotonically monolithic space $X$ is first countable?
Let $A$ be a subset of $X$. A family $\mathcal B$ of subsets of $X$
is called an external base of $A$ in $X$ if all elements of
$\mathcal B$ are open in $X$ and for any $x\in A$ and any open set
$U$ in $X$ with $x\in U$, there exists some $B\in \mathcal B$ such
that $x\in B \subset U$.
We say that a space $X$ is strongly
monotonically monolithic if for each $A \subset X$ we can assign an
external base $\mathcal{O}(A)$
to the set $\overline{A}$ satisfying the following conditions:
a) $\left| \mathcal{O}(A) \right| \le \max\{|A|,\omega\}$;
b) if $A \subset B $, then $\mathcal{O}(A) \subset
\mathcal{O}(B)$;
c) if $\alpha$ is an ordinal and we have a family $\{A_\beta: \beta
< \alpha \}$ of subsets of $X$ such that $\beta < \beta' $ implies
$A_{\beta} \subset A_{\beta'}$, then $\mathcal{O}(\bigcup_{\beta <
\alpha} A_\beta) = \bigcup_{\beta < \alpha} \mathcal{O}(A_\beta)$.
How could I see that a strongly
monotonically monolithic space $X$ is first countable?
AI: I assume that $X$ is $T_1$. Fix a point $p\in X$, and let $A=\{p\}$. Let $\mathcal{O}(A)$ be the promised external base for $\operatorname{cl}A=A$. Then $|\mathcal{O}(A)|=\omega$, and for each open nbhd $U$ of $p$ in $X$ we have a $B\in\mathcal{O}(A)$ such that $p\in B\subseteq U$. But that just means that $\mathcal{O}(A)$ is a local base at $p$, so $\chi(p,X)=\omega$. And $p$ was arbitrary, so $\chi(X)=\omega$.
|
H: How to understand the words "More algebraically"
How to understand the words More algebraically
When seeing one proof about the theorem that rational set is countable, I saw this
More algebraically , but less clearly, an explicit one-to-one onto map
$f:N\times N\to N$
is
$f(m,n)=\frac{(n+m-2)(n+m-1)}{2}+m$.
So why not "more analytically"? Is the concept of a function more algebraic than analytic?
Also, it would be good to know how $f(m,n)$ is constructed, to make it clearer.
AI: It would help to know what is being contrasted with this more algebraic argument. I suspect, however, that it’s a pictorial argument similar to the one shown in this earlier question. If so, the author’s point is that while the picture makes it very clear that there is a bijection $f$ from $\Bbb N\times\Bbb N$ to $\Bbb N$ and would even let us calculate any $f(m,n)$ by drawing a big enough picture, it doesn’t actually give us an algebraic formula for $f(m,n)$.
Now he’s giving us that algebraic formula. It’s not terribly hard to prove that it’s a formula for the bijection of the pictorial argument. It’s also not hard to prove without any reference to the pictorial argument that this function really is a bijection from $\Bbb N\times\Bbb N$ to $\Bbb N$. But it would be hard to come up with that formula out of thin air. If he had just presented the formula and proved that it’s a bijection from $\Bbb N\times\Bbb N$ to $\Bbb N$, you might well have said, ‘Fine, I see that it works, but where on earth did it come from?!’
It’s more algebraic because it actually gives an algebraic expression for $f$; it’s less clear because you actually have to work a bit to prove that it’s a bijection from $\Bbb N\times\Bbb N$ to $\Bbb N$; the pictorial argument gives you an obvious bijection, even though it isn’t immediately obvious how to express it with an algebraic formula.
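To see concretely that the formula enumerates $\Bbb N\times\Bbb N$ diagonal by diagonal, here is a small check (my own sketch, not part of the original answer; the name `f` just transcribes the formula):

```python
# Transcription of f(m, n) = (n + m - 2)(n + m - 1)/2 + m for m, n >= 1,
# checked to be a bijection on the first K anti-diagonals.

def f(m, n):
    return (n + m - 2) * (n + m - 1) // 2 + m

# Pairs on the d-th anti-diagonal satisfy m + n - 2 = d; walking the
# diagonals d = 0, 1, ..., K-1 should hit 1, 2, ..., K(K+1)/2 in order.
K = 50
values = [f(m, d + 2 - m) for d in range(K) for m in range(1, d + 2)]
assert values == list(range(1, K * (K + 1) // 2 + 1))
print(values[:6])  # [1, 2, 3, 4, 5, 6]
```

The fact that the output is simply $1,2,3,\dots$ in order is exactly the pictorial diagonal argument turned into algebra: the term $\frac{(n+m-2)(n+m-1)}{2}$ counts all pairs on earlier diagonals, and $m$ indexes the position on the current one.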
|
H: How does knowing a function as even or odd help in integration ??
So, I am learning Fourier Series and it involves integration. I am not too good at integration.
Now, the resource I use is videos by Dr. Chris Tisdell. In the opening video he says that knowing whether the function as even or odd will greatly simplify the integration process.
So, I have two questions:
1. What is even/odd function?
2. How will that simplify integration ?
AI: Odd and even functions
Even and odd are terms used to describe particularly well-behaved functions.
An even function is symmetric about the $y$-axis. That is, if we reflect the graph of the function in the $y$-axis, then it doesn’t change. Formally, we say that $\,f$ is even if, for all $x$ and $-x$ in the domain of $\,f$, we have
$$f(-x)=f(x)$$
Examples of even functions are $\,f(x)=x^2$ and $\,f(x)=\cos x$.
An odd function has rotational symmetry of order two about the origin. That is, if we rotate the graph of the function $180^\circ$ about the origin, then it doesn’t change. Formally, we say that $\,f$ is odd if, for all $x$ and $-x$ in the domain of $\,f$, we have
$$f(-x)=-f(x)$$
Examples of odd functions are $\,f(x)=x^3$ and $\,f(x)=\sin x$.
Integration
When calculating Fourier series, you often consider integrals of the form
$$I=\int_{-a}^a f(x)\,\mathrm{d}x$$
If $\,f$ is odd or even, then sometimes you can make this simpler. We can rewrite that integral in the following way:
\begin{align*}
I=\int_{-a}^a f(x)\,\mathrm{d}x
&= \int_{-a}^0 f(x)\,\mathrm{d}x + \int_0^a f(x)\,\mathrm{d}x \\
&= \int_0^a f(-x)\,\mathrm{d}x + \int_0^a f(x)\,\mathrm{d}x
\end{align*}
For an even function, we have $f(-x)=f(x)$, whence
$$I = 2\int_0^a f(x)\,\mathrm{d}x$$
For an odd function, we have $f(-x)=-f(x)$, whence
$$I = \int_0^a (-f(x)+f(x))\,\mathrm{d}x = 0$$
That’s what it means to simplify the integration: the integral of an odd or even function over the interval $[-a,a]$ can be put into a nicer form (and sometimes we can see that it vanishes without ever computing an integral).
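As a numeric illustration (my own sketch, not from the lecture), a crude midpoint-rule integrator confirms both simplifications:

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a = 2.0
# Even function cos(x): integral over [-a, a] is twice that over [0, a].
assert abs(midpoint(math.cos, -a, a) - 2 * midpoint(math.cos, 0.0, a)) < 1e-6
# Odd function x**3: integral over [-a, a] vanishes.
assert abs(midpoint(lambda x: x**3, -a, a)) < 1e-6
```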
|
H: Why is $\sin^2x + \cos^2x = 1$ important?
To start off, I understand the proof behind this identity, and I can visualize it in my head with the unit circle.
But I read this quote:
They only need to remember three facts – that $\sin 30^\circ = ½$ , that $\tan 45^\circ =1$, and that $\sin^2x + \cos^2x =1$ . Just about everything else they need to know about trigonometry can be derived from these.
and I realized I don't have a complete understanding on practical use of the identity. Therefore I am looking for an explanation and some practical examples on why it is so important.
Thanks for your help!
AI: The "special angles" in trig are $0^\circ$, $30^\circ$, $45^\circ$, $60^\circ$, and $90^\circ$ (and their counterparts in quadrants II, III, and IV).
Presumably, the sines and cosines at $0^\circ$ and $90^\circ$ are obvious to the quoted author, so the quotation boils-down what's needed to remember values at the remaining angles to three facts.
"$\sin^2\theta+\cos^2\theta=1$" allows for conversion between sines and cosines. Always handy.
"$\sin 30^\circ = 1/2$" immediately gives $\cos 60^\circ =1/2$. (Complementary angles are cool like that, since "cosine" means "complementary sine".) With the help of the Pythagorean identity above, we also get $\cos 30^\circ = \sin 60^\circ = \sqrt{3}/2$. (If you're going to commit any of these to memory, it makes sense to choose the simple fraction, $1/2$. Personally, I think remembering "$\cos 60^\circ = 1/2$" is easier, because I can "see" it more easily in dropping a perpendicular from the top vertex of an equilateral triangle.)
"$\tan 45^\circ=1$" encodes information about the remaining special angle, saying that (in Quadrant I) $\sin 45^\circ = \cos 45^\circ$. Again with the help of the Pythagorean identity, we have $1 = \sin^2 45^\circ + \cos^2 45^\circ = 2 \sin^2 45^\circ$, so that $\sin 45^\circ = \cos 45^\circ = 1/\sqrt{2}$.
So, the three facts help to recover a total of seven facts. And then getting the remaining secants, tangents, cosecants, and cotangents is a simple matter of using various ratios. The sine-cosine Pythagorean identity also gives rise to the others, by dividing-through by $\cos^2\theta$ or $\sin^2\theta$ ...
$$\begin{align}
\frac{\sin^2\theta}{\cos^2\theta} + \frac{\cos^2\theta}{\cos^2\theta} = \frac{1}{\cos^2\theta} \quad &\to \quad \tan^2\theta + 1 = \sec^2\theta \\[6pt]
\frac{\sin^2\theta}{\sin^2\theta} + \frac{\cos^2\theta}{\sin^2\theta} = \frac{1}{\sin^2\theta} \quad &\to \quad 1 + \cot^2\theta = \csc^2\theta
\end{align}$$
... reducing pressure to memorize all three relations.
There are other (and better?) ways to remember this stuff, but the basic point is that there's a good deal of informational redundancy in trig. While it may seem like there are a zillion different things to memorize, it doesn't take long to realize that one can focus on a (very) few key facts and re-derive the rest on demand.
|
H: Basic question about field homomorphism
I have a question about the definition on this page.
I feel like conditions 1 and 2 imply 3, because a group homomorphism maps identity to identity.
Am I missing something? The proof of the lemma on that page uses condition 3 to assert that $1_F$ is not in the kernel of $\psi$, but I fail to see why this is not already implied by the first two conditions.
Finally, I would like to prove the lemma without the use of ideals. Could someone give me a hint as to how to show that the kernel of $\psi$ is trivial?
AI: Consider the map $f:\Bbb R\to\Bbb R$ such that $f(x)=0$ for all $x\in\Bbb R$: $f$ satisfies (1) and (2) but not (3).
To prove that if $\psi:F\to K$ is a field homomorphism, then $\ker\psi$ is trivial, let $x\in F\setminus\{0_F\}$; we want to show that $\psi(x)\ne 0_K$. Suppose that it is, and consider $\psi(x\cdot x^{-1})$; on the one hand it’s $\psi(1_F)=1_K\ne 0_K$, but on the other hand it’s ... ?
|
H: Please let me know how to do this step by step. I had tried , but no solution yet....
A store has an introductory sale on 12 types of candy bars. A customer may choose one bar of any five different types and will be charged no more than \$1.75. Show that although different choices may cost different amounts, there must be at least two different ways to choose so that the cost will be the same for both choices.
AI: You find the number of possible choices (the number of "pigeons") and the number of possible prices (the number of "pigeonholes"), and see which one is greater.
There are $_{12}C_5 = 792$ choices (choose 5 from a set of 12 with order not mattering). The number of possible total prices is at most 176, in one-cent increments from \$0.00 up to \$1.75. So no assignment of prices to the items can yield unique total prices for every selection of choices, by the Pigeonhole Principle.
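A quick check of the two counts (my own snippet, not part of the original answer):

```python
from math import comb

choices = comb(12, 5)   # ways to pick 5 different types out of 12
prices = 176            # cent totals from $0.00 through $1.75 inclusive
assert choices == 792 and choices > prices
print(choices, prices)  # 792 176
```

Since 792 "pigeons" (selections) must share 176 "pigeonholes" (possible totals), two selections share a total.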
|
H: Find volume of the body $V = \{ z = \sqrt{6-x^2-y^2}, z = x^2 + y^2 \}$
Find volume of the body $V = \{ z = \sqrt{6-x^2-y^2}, z = x^2 + y^2 \}$
Now what I said is:
$$V = \iint_{D} \left(\sqrt{6-x^2-y^2} - x^2 - y^2\right)\, dx\, dy.$$
But when I wanted to get what $D$ is, I intersected the two $z$ functions to get $\sqrt{6-x^2-y^2} = x^2+y^2$.
But what is this $D$? It is certainly not a circle.
AI: An idea: the intersection of
$$z=x^2+y^2\;,\;\;z=\sqrt{6-x^2-y^2}\implies z=\sqrt{6-z}\implies 0=z^2+z-6=(z+3)(z-2)$$
Since $\,z=-3\,$ is absurd (why?), we then have $\;2=z=x^2+y^2\;$ and you can now easily get your limits in the $xy$-plane by means of this canonical circle.
|
H: Is it always true that $\det(A^2+B^2)\geq0$?
Let $A$ and $B$ be real square matrices of the same size. Is it true that
$$\det(A^2+B^2)\geq0\,?$$
If $AB=BA$ then the answer is positive:
$$\det(A^2+B^2)=\det(A+iB)\det(A-iB)=\det(A+iB)\overline{\det(A+iB)}\geq0.$$
AI: If $A= \left( \begin{matrix} 1 &1 \\ 0 &1 \end{matrix} \right)$ and $B=\left( \begin{matrix} 1&0 \\ n&1 \end{matrix} \right)$, then $\det(A^2+B^2)=4(1-n)$.
|
H: Quadratic Bezier curves representation as implicit quadratic equation
A quadratic bezier curve from points P1=(x1, y1) to P3=(x3, y3) with control point P2=(x2, y2) can be represented as parametric quadratic curve P(t) where t is in [0, 1].
$$P(t) = (P_1t + P_2(1 - t))t + (P_2t + P_3(1 - t))(1 - t)
= P_1t^2 + 2P_2(1 - t)t + P_3(1 - t)^2$$
We can extend the range of t to be all real numbers, from minus infinity to plus infinity, and then we will get a nice curve dividing the plane in two.
I read somewhere that this infinite curve is either a straight line (if P1, P2 and P3 are collinear) or a parabola. Could you please confirm that?
Since division of plane by a parabola or a line can be expressed by quadratic equation of form:
Point P=(x, y) is
on one side if F(P) > 0
on the other if F(P) < 0
on the curve if F(P) == 0
where F(P) = A * x^2 + B * y^2 + C * x * y + D * x + E * y + F
How can we, starting from points P1, P2, P3 calculate the numbers A, B, C,D, E, F?
I think something similar is done in this wolfram demonstration http://demonstrations.wolfram.com/FocusAndDirectrixInAQuadraticBezierCurve/
Thank you
AI: Kindly ask Wolfram Alpha to
Collect[Expand[Eliminate[{
x1*t^2 + x2*t*(1-t)*2 + x3*(1-t)^2 == x,
y1*t^2 + y2*t*(1-t)*2 + y3*(1-t)^2 == y
}, t]], {x, y}]
and it will tell you the resulting implicit equation, which you can reformulate to something close to
\begin{align*}F(x, y) =&\phantom{+}
x^2 (y_1^2-4 y_1 y_2+2 y_1 y_3+4 y_2^2-4 y_2 y_3+y_3^2) \\
&+xy (-2 x_1 y_1+4 x_1 y_2-2 x_1 y_3+4 x_2 y_1-8 x_2 y_2+4 x_2 y_3-2 x_3 y_1+4 x_3 y_2-2 x_3 y_3) \\
&+x (2 x_1 y_1 y_3-4 x_1 y_2^2+4 x_1 y_2 y_3-2 x_1 y_3^2+4 x_2 y_1 y_2-8 x_2 y_1 y_3+4 x_2 y_2 y_3-2 x_3 y_1^2+4 x_3 y_1 y_2+2 x_3 y_1 y_3-4 x_3 y_2^2) \\
&- y^2 (-x_1^2+4 x_1 x_2-2 x_1 x_3-4 x_2^2+4 x_2 x_3-x_3^2) \\
&- y (2 x_1^2 y_3-4 x_1 x_2 y_2-4 x_1 x_2 y_3-2 x_1 x_3 y_1+8 x_1 x_3 y_2-2 x_1 x_3 y_3+4 x_2^2 y_1+4 x_2^2 y_3-4 x_2 x_3 y_1-4 x_2 x_3 y_2+2 x_3^2 y_1) \\
&+(x_1^2 y_3^2-4 x_1 x_2 y_2 y_3-2 x_1 x_3 y_1 y_3+4 x_1 x_3 y_2^2+4 x_2^2 y_1 y_3-4 x_2 x_3 y_1 y_2+x_3^2 y_1^2)
\end{align*}
So the parameters of your conic will be
\begin{align*}
A =&
y_1^{2} - 4 \, y_1 y_2 + 2 \, y_1 y_3 + 4 \, y_2^{2} - 4 \, y_2 y_3 +
y_3^{2}
\\
B =&
x_1^{2} - 4 \, x_1 x_2 + 2 \, x_1 x_3 + 4 \, x_2^{2} - 4 \, x_2 x_3 +
x_3^{2}
\\
C =&
-2 \, x_1 y_1 + 4 \, x_1 y_2 - 2 \, x_1 y_3 + 4 \, x_2 y_1 - 8 \, x_2
y_2 + 4 \, x_2 y_3 - 2 \, x_3 y_1 + 4 \, x_3 y_2 - 2 \, x_3 y_3
\\
D =&
2 \, x_1 y_1 y_3 - 4 \, x_1 y_2^{2} + 4 \, x_1 y_2 y_3 - 2 \, x_1
y_3^{2} + 4 \, x_2 y_1 y_2 - 8 \, x_2 y_1 y_3
\\ &+\, 4 \, x_2 y_2 y_3 - 2 \,
x_3 y_1^{2} + 4 \, x_3 y_1 y_2 + 2 \, x_3 y_1 y_3 - 4 \, x_3 y_2^{2}
\\
E =&
-2 \, x_1^{2} y_3 + 4 \, x_1 x_2 y_2 + 4 \, x_1 x_2 y_3 + 2 \, x_1 x_3
y_1 - 8 \, x_1 x_3 y_2 + 2 \, x_1 x_3 y_3
\\ &-\, 4 \, x_2^{2} y_1 - 4 \,
x_2^{2} y_3 + 4 \, x_2 x_3 y_1 + 4 \, x_2 x_3 y_2 - 2 \, x_3^{2} y_1
\\
F =&
x_1^{2} y_3^{2} - 4 \, x_1 x_2 y_2 y_3 - 2 \, x_1 x_3 y_1 y_3 + 4 \, x_1
x_3 y_2^{2} + 4 \, x_2^{2} y_1 y_3 - 4 \, x_2 x_3 y_1 y_2 + x_3^{2}
y_1^{2}
\end{align*}
The above was computed using sage, where I could get the output into the form I needed more easily. This should agree with the first formula, but if it does not, this here is the more reliable one. You can check that $4AB-C^2=0$ which proves that this conic is indeed a parabola.
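A sanity check of the coefficients (my own transcription into Python; `conic_coeffs` and `bezier` are names I introduce here, not from the answer):

```python
# Evaluate the conic A x^2 + B y^2 + C xy + D x + E y + F on the curve
# P(t) = P1 t^2 + 2 P2 t(1-t) + P3 (1-t)^2 and check it vanishes.

def conic_coeffs(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = y1**2 - 4*y1*y2 + 2*y1*y3 + 4*y2**2 - 4*y2*y3 + y3**2
    B = x1**2 - 4*x1*x2 + 2*x1*x3 + 4*x2**2 - 4*x2*x3 + x3**2
    C = (-2*x1*y1 + 4*x1*y2 - 2*x1*y3 + 4*x2*y1 - 8*x2*y2 + 4*x2*y3
         - 2*x3*y1 + 4*x3*y2 - 2*x3*y3)
    D = (2*x1*y1*y3 - 4*x1*y2**2 + 4*x1*y2*y3 - 2*x1*y3**2 + 4*x2*y1*y2
         - 8*x2*y1*y3 + 4*x2*y2*y3 - 2*x3*y1**2 + 4*x3*y1*y2
         + 2*x3*y1*y3 - 4*x3*y2**2)
    E = (-2*x1**2*y3 + 4*x1*x2*y2 + 4*x1*x2*y3 + 2*x1*x3*y1 - 8*x1*x3*y2
         + 2*x1*x3*y3 - 4*x2**2*y1 - 4*x2**2*y3 + 4*x2*x3*y1
         + 4*x2*x3*y2 - 2*x3**2*y1)
    F = (x1**2*y3**2 - 4*x1*x2*y2*y3 - 2*x1*x3*y1*y3 + 4*x1*x3*y2**2
         + 4*x2**2*y1*y3 - 4*x2*x3*y1*y2 + x3**2*y1**2)
    return A, B, C, D, E, F

def bezier(p1, p2, p3, t):
    s = 1 - t
    return (p1[0]*t*t + 2*p2[0]*t*s + p3[0]*s*s,
            p1[1]*t*t + 2*p2[1]*t*s + p3[1]*s*s)

p1, p2, p3 = (0, 0), (0, 1), (2, 0)
A, B, C, D, E, F = conic_coeffs(p1, p2, p3)
assert 4*A*B - C*C == 0                      # degenerate discriminant: parabola
for t in [-1.0, 0.0, 0.25, 0.5, 1.0, 2.5]:  # also outside [0, 1]
    x, y = bezier(p1, p2, p3, t)
    assert abs(A*x*x + B*y*y + C*x*y + D*x + E*y + F) < 1e-9
```

Note that the conic vanishes on the curve for parameter values outside $[0,1]$ as well, which is consistent with the "infinite curve" reading of the question.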
|
H: Proof for the total derivative of a function
I'd be interested to understand why is the total derivative of a function $f(t,x,y)$, where $x = x(t)$ and $y=y(t)$ defined as:
$$\frac{df}{dt} = \frac{\partial f}{\partial t}\frac{dt}{dt} + \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}$$
This formula seems intuitive to me, but if I would need to prove the general case analytically, how would I do it? :)
Thank you for any help! :)
AI: In computing the partial derivative $\frac{\partial f}{\partial x_i}$ of a function $f(x_1,\dots,x_n)$ w.r.t. to a variable $x_i$, one assumes that the other variables do not vary in the computational process. This is a consequence of the very definition of partial derivative. More generally, if the variables $x_2,\dots,x_n$ are functions of the variable $x_1$, then the partial derivative $\frac{\partial f}{\partial x_1}$ does not catch the overall change of $f$ while $x_1$ varies.
One needs to introduce another measure of such change, i.e. the total derivative
$$\frac{df}{dx_1}:=\frac{\partial f}{\partial x_1}+\sum_{i=2}^n \frac{\partial f}{\partial x_i}\frac{d x_i}{d x_1}.$$
From its definition (this is the point: I take it as a definition, although you can prove it using the chain rule on $f(x_1,x_2(x_1),\dots,x_n(x_1))$) it is clear that the total derivative takes into account all changes when the variable $x_1$ varies.
To arrive at a geometrical interpretation of the total derivative, let us multiply both sides of the above expression with the infinitesimal increment "$dx_1$", arriving at
$$df(x):=\frac{df}{dx_1}dx_1=\frac{\partial f}{\partial x_1}dx_1+\sum_{i=2}^n \frac{\partial f}{\partial x_i} d x_i~~(*).$$
We can interpret $df(x)$ in $(*)$ as the total differential of $f$ at $x=(x_1,\dots,x_n)$ as
$$df(x)=\frac{\partial f}{\partial x_1}dx_1+\sum_{i=2}^n \frac{\partial f}{\partial x_i} d x_i=\sum_{i=1}^n \frac{\partial f}{\partial x_i} d x_i=\langle \nabla f(x), dx \rangle $$
where $dx=(dx_1,\dots,dx_n)$ represents an infinitesimal increment.
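A finite-difference check of the total-derivative formula (an example of my own choosing, not from the question): take $f(t,x,y)=t\,x+y^2$ with $x(t)=t^2$, $y(t)=\sin t$, so that $\frac{df}{dt}=x+t\frac{dx}{dt}+2y\frac{dy}{dt}$.

```python
import math

def F(t):
    # f evaluated along the path: f(t, t**2, sin t) = t**3 + sin(t)**2
    return t * t**2 + math.sin(t)**2

def total_derivative(t):
    x, y = t**2, math.sin(t)
    # df/dt = df/dt(partial) + df/dx * dx/dt + df/dy * dy/dt
    return x + t * (2 * t) + 2 * y * math.cos(t)

t, h = 0.7, 1e-6
numeric = (F(t + h) - F(t - h)) / (2 * h)   # central difference
assert abs(numeric - total_derivative(t)) < 1e-6
```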
|
H: Closed form for $a_{n+1} = (a_n)^2+\frac{1}{4}$
I've been given the following sequence:
\begin{align*}
&a_0 = 0; \\
&a_{n+1} = (a_n)^2+\frac{1}{4}.
\end{align*}
I also have to prove that whatever I come up with is correct, but that will likely be the easy part.
Here are the first few values:
\begin{align}
&a_0 = 0 \\
&a_1 = \frac{1}{4}\\
&a_2 = \frac{5}{16}\\
&a_3 = \frac{89}{256} \\
&a_4 = \frac{24305}{65536}
\end{align}
I've managed to determine that the denominators are of the form $2^{2^n}$. I've tested up to one million terms of this sequence and it appears that $\lim_{n\rightarrow\infty}a_n = \frac{1}{2}$. I spent a while trying to find something of the form $a_n = \frac{P(n)}{Q(n)}$. I haven't had any luck with this, so I started looking into some sums. I've found that
\begin{align*}
a_2 = \frac{1}{4} + \frac{1}{16} = \frac{5}{16}
\end{align*}
and,
\begin{align*}
a_3 = \frac{1}{4} + \frac{1}{16} + \frac{1}{32} + \frac{1}{256} = \frac{89}{256}
\end{align*}
But now,
\begin{align*}
a_4 = \frac{1}{4} + \frac{1}{16} + \frac{1}{32} + \frac{1}{64} + \frac{1}{128} + \frac{1}{512} + \frac{1}{1024} + \frac{1}{2048} + \frac{1}{4096} + \frac{1}{65536}= \frac{24305}{65536}.
\end{align*}
So it seems that there is some type of sum involving negative powers of 2 going on, but it isn't clear to me that there is even a pattern here. Any hints/help would be appreciated!
AI: I just checked OEIS sequence A167424 for which $$f(1)=1/2 \\ f(n+1)=[f(n)^2+1]/2.$$ If this $f$ is divided by 2 you get your sequence $a_1,a_2,\cdots,$
since then it gives correctly $a_1=1/4$ and $a_{n+1}=a_n^2+1/4.$ The recursion on the $a_n=f(n)/2$ follows on dividing through the recursion $f(n+1)=[f(n)^2+1]/2$ by 2 to obtain
$$\frac{f(n+1)}{2}=\left( \frac{f(n)}{2}\right) ^2 +\frac{1}{4}.$$
So anything like closed forms etc. should be extractable from the OEIS page, if it's there.
Side note: The Mandelbrot set intersects the real axis in the interval $[-2,1/4]$. Since it consists of those points $c$ such that the orbit of $0$ under the iteration of $f(x)=x^2+c$ is bounded, we see that your sequence is precisely the (bounded) orbit of the rightmost real point of the Mandelbrot set. (In this sense it seems not surprising there isn't a closed form, as typically Mandelbrot iterations bounce around the set unpredictably. There are periodic points, but it seems $1/4$ is not one of them.) Just thought this connection might be of interest.
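The orbit can be reproduced exactly with rational arithmetic (an illustrative check of the stated values, not a closed form):

```python
from fractions import Fraction

a = Fraction(0)
terms = []
for _ in range(4):
    a = a * a + Fraction(1, 4)
    terms.append(a)

assert terms == [Fraction(1, 4), Fraction(5, 16),
                 Fraction(89, 256), Fraction(24305, 65536)]
# Denominators are 2**(2**n), and the terms creep up toward 1/2:
assert all(t.denominator == 2**(2**n) for n, t in enumerate(terms, start=1))
print(float(terms[-1]))  # roughly 0.3709, still short of 0.5
```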
|
H: Help understanding a derivation in a book
I'm reading about Pattern recognition and when I read the appendix on my book I came across with the following derivation:
$J(\theta)$ is a cost function with parameter $\theta = (\theta_1, ..., \theta_d)$. If $J(\theta) = c$ then:
$$dc = 0 = \frac{\partial J(\theta)^T}{\partial \theta}d\theta \Rightarrow \frac{\partial J(\theta)}{\partial \theta} \perp d\theta$$
This maybe a very easy question, but the derivation above confuses me...could someone write it out more explicitly what the author did :) I attached a picture showing more information taken from the book. I have highlighted the area I found confusing...what happened in the red area?
Thnx for guidance :)
AI: Imagine the variable $\mathbf{\theta}$ depending on some parameter $t$. Then on a level curve,
$$J(\mathbf{\theta}(t))=c$$
we have
$$\frac{d}{dt} J(\mathbf{\theta}(t)) = \frac{dc}{dt}$$
The RHS is zero because $c$ is a constant. The LHS is, by the chain rule,
$$\left (\frac{dJ}{d\mathbf{\theta}}\right)^T \frac{d\mathbf{\theta}}{dt} = 0$$
The transpose arises because the chain rule dictates it. Recall that the $d/d\mathbf{\theta}$ operator is a gradient operator; work the derivatives out component by component. For any two vectors $\mathbf{a}$ and $\mathbf{b}$, the dot product may be written as $\mathbf{a}^T \mathbf{b}$. Therefore, when $\mathbf{a}^T \mathbf{b} = 0$, we can say that $\mathbf{a} \perp \mathbf{b}$.
"Multiply" through by $dt$ to get the equation you wanted. (Why is it possible to do this? It is OK if you consider the derivative as the limit of the ratio of two very small quantities; in this case, a small quantity is $dt$.)
|
H: Get the number of digits from a number
I'm looking for a function $f$ that would give me the following results :
For any $x$ such that $x \in \mathbb{N}^*$, i.e. $x>0$:
$f(x) = 1 $ when $1\leq x < 10$
$f(x) = 2 $ when $10\leq x < 100$
$f(x) = 3 $ when $100\leq x < 1000$
and so on ...
So far , I have tried the floor function with no success.
AI: $$\lfloor \log_{10}(n) \rfloor + 1$$
|
H: Probability of getting green
If a bag contains x black balls, y green balls and z yellow balls , now a ball is drawn :
1. It's yellow -> try again
2. it's green -> stop
3. it's black -> stop
What is the probability of getting a green ball?
I am a newbie so please forgive me if the question is too amateurish . Thank you !
AI: First of all, the probability of getting a blue ball, as asked in the topic of this question, is not very big, since there are no blue balls in the bag.
As far as getting a green ball goes: Suppose that you've picked $m$ yellow balls before you pick a ball that's either black or green. Since there are $x+y$ green or black balls left, the chance of getting a black ball is $x/(x+y)$, and the chance of getting a green is $y/(x+y)$. In particular, the probability you are looking for is independent of $m$.
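A Monte Carlo check (my own sketch, with a concrete bag I chose: 3 black, 5 green, 4 yellow) that the answer is $y/(x+y)$ regardless of how many yellows are passed over:

```python
import random

random.seed(0)
x, y, z = 3, 5, 4
bag = ["black"] * x + ["green"] * y + ["yellow"] * z

def draw_until_stop():
    balls = bag[:]
    random.shuffle(balls)
    for b in balls:          # yellows are skipped ("try again")
        if b != "yellow":
            return b

trials = 100_000
greens = sum(draw_until_stop() == "green" for _ in range(trials))
assert abs(greens / trials - y / (x + y)) < 0.01   # y/(x+y) = 0.625
```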
|
H: Why does $\int\limits_0^1 {\dfrac{{x - 1}}{{\ln x}}} \;\text{d}x=\ln2$?
I have found that
$$\int\limits_0^1 {\dfrac{{x - 1}}{{\ln x}}} \;\text{d}x=\ln2$$
but I can't prove it.
Any hint?
Thank you in advance
AI: $$t>-1:$$
$$\begin{aligned}f(t)=\int_0^1 \frac{x^t-1}{\ln x}\,dx\;\Rightarrow\; f'(t) &=\int_0^1 x^t\,dx =\frac{1}{t+1}\end{aligned}$$
Since $f(0)=0$,
$$f(1)=\int_0^1 f'(t)\,dt=\int_0^1 \frac{dt}{t+1}=\ln 2$$
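A crude numerical confirmation (my own sketch; the midpoint rule keeps the sample points strictly inside $(0,1)$, avoiding the endpoints where the formula needs limits):

```python
import math

n = 200_000
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h               # strictly inside (0, 1)
    total += (x - 1.0) / math.log(x)
total *= h
assert abs(total - math.log(2)) < 1e-4
```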
|
H: If $f_n\to f$ uniformly on $[a,b]$ and $f$ is continuous on $[a,b]$, then $f_n$ is continuous on $[a,b]$
Yesterday I wrote a test in calculus and had to answer the following question:
Prove or disprove: if $f_n\to f$ uniformly on $[a,b]$ and $f$ is continuous on $[a,b]$, then $\exists n_0\in\mathbb N$ s.t. $\forall n>n_0$, $f_n$ is continuous on $[a,b]$.
I proved it as follows. Let $\epsilon>0$. The convergence is uniform, so $\exists n_1\in\mathbb N$ s.t. $$\forall n>n_1,\forall x\in [a,b], |f_n(x)-f(x)|<\frac \epsilon 3.$$ Also, $f$ is continuous on $[a,b]$, so for every $x_0\in [a,b]$ there exists a $\delta_{f}$ s.t. $$|x-x_0|<\delta_f\Rightarrow|f(x)-f(x_0)|<\frac\epsilon 3.$$ Then by choosing $n_0=n_1+1,\delta_{f_n}=\delta_f>0$, if $\forall n>n_0, \forall x\in[a,b],|x-x_0|<\delta_{f_n}$ then $$|f_n(x)-f_n(x_0)|=|f_n(x)+f(x)-f(x)+f(x_0)-f(x_0)-f_n(x_0)|\le |f_n(x)-f(x)|+|f(x)-f(x_0)|+|f(x_0)-f_n(x_0)|<\frac \epsilon 3+\frac \epsilon 3+\frac \epsilon 3=\epsilon,$$ so $f_n$ is continuous in these terms. Another friend of mine claims he refuted the statement by taking $f_n(x)=\begin{cases}
0, & \text{if }x\neq0 \\
\frac 1 n, & \text{if }x=0
\end{cases}$ which converges uniformly to $0$ but is not continuous on $[0,1]$ for any $n$. Which one of us is right, and why is the other wrong?
AI: Try to apply your proof to the (valid) counterexample of your friend.
Can you show that $f_n$ is continuous at $x=0$? Let's assume $\epsilon=\frac1n$ is given. Can you find $\delta>0$ such that $|f_n(x)-f_n(0)|<\epsilon$ for all $x$ with $|x-0|<\delta$?
The problem when attempting to do so is: Your proof requires one to choose $n$ after $\epsilon$ is given. But to show continuity of $f_n$, we go the other way around: $n$ is given (first) to specify which function we investigate for continuity, then $x_0$ and then $\epsilon$. As you see, choosing $\epsilon <\frac1n$ for $f_n$ as given by your friend and $x_0=0$ breaks your proof.
|
H: Congruence in rings
Let $R$ be a commutative (and probably unitary, if you like) ring and $p$ a prime number. If $x_1,\ldots,x_n\in R$ are elements of $R$, then we have $(x_1+\cdots+x_n)^p\equiv x_1^p+\cdots+x_n^p$ mod $pR$. Why is this true? I tried to show that in $R/pR$ their congruence classes are equal, but without success.
AI: This follows by induction if we know it for the case $n=2$.
But the binomial theorem is true in any commutative ring, so:
$$(x_1+x_2)^p = \sum_{k=0}^p \binom{p}{k}x_1^kx_2^{p-k} = x_1^p + x_2^p + \sum_{k=1}^{p-1}\binom p k x_1^kx_2^{p-k}$$
Now all you have to prove is that for $1\leq k \leq p-1$, $\binom{p}{k}$ is divisible by $p$. This is fairly easy.
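Both facts are easy to spot-check computationally (my own snippet, checking the divisibility and the resulting "freshman's dream" $(x+y)^p \equiv x^p + y^p$ in $\Bbb Z/p\Bbb Z$):

```python
from math import comb

for p in [2, 3, 5, 7, 11, 13]:
    # p divides C(p, k) for 1 <= k <= p-1
    assert all(comb(p, k) % p == 0 for k in range(1, p))
    # hence (x + y)^p = x^p + y^p mod p for all residues x, y
    for x in range(p):
        for y in range(p):
            assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
```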
|
H: Question regarding positive definiteness decompositions
The definition of positive definteness that I'm working with is:
"A real n by n matrix $\mathbf{A}$ is positive definite if for all conformable, non-zero vectors $\mathbf{x}$ (n by 1), the following is satisfied: $\mathbf{x^T}\mathbf{A}\mathbf{x}>0$"
So I have the following theorem that a symmetric matrix $\mathbf{A}$ (n by n) is positive definite iff it can be decomposed as $\mathbf{A}=\mathbf{C}\mathbf{C^T}$ where $\mathbf{C}$ is a square matrix (n by n) of full rank.
However I also have the following statement: Consider the matrix $\mathbf{S} = \mathbf{B}\mathbf{B^T}$ where $\mathbf{S}$ is n by n and $\mathbf{B}$ is n by m, then the matrix $\mathbf{S}$ is positive definite if $\mathbf{B}$ is of full row rank.
Is this statement true or false? I thought I could just prove it from the theorem, but because the theorem only applies to square matrices and since $\mathbf{B}$ doesn't have to be square, I can't apply the theorem.
AI: True. That's a multiple duplicate but I can't find them...
Hint 1: every matrix of the form $BB^T$ is positive semidefinite.
Hint 2: for such matrices, $\ker BB^T=\ker B^T$.
Hint 3: $B$ has full row rank if and only if $B^T$ is injective.
|
H: Prove that $\gcd(a, b) = 1 ⇒ \gcd(a^2, b^2) = 1$
I have a question from a sample exam that I find difficult to solve:
Prove that if $\gcd(a, b) = 1 ⇒ \gcd(a^2, b^2) = 1$ .
I don't have any idea how to start. I'd like to get helped. thanks!
AI: The "trick" way is to solve $ax+by =1$ then cube both sides to get a solution to $a^2X+b^2Y=1$.
The reason I call this a "trick" is that the result generalizes to any UFD, but this proof does not.
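The cubing trick can be made completely explicit (my own snippet; `ext_gcd`, `X`, and `Y` are names I introduce, with the grouping $a^2(ax_0^3+3x_0^2by_0)+b^2(3ax_0y_0^2+by_0^3)=1$ read off from the binomial expansion):

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b = 35, 18
g, x0, y0 = ext_gcd(a, b)
assert g == 1 and a * x0 + b * y0 == 1
# Cube a*x0 + b*y0 = 1 and collect the a^2 and b^2 terms:
X = a * x0**3 + 3 * x0**2 * b * y0       # coefficient of a**2
Y = 3 * a * x0 * y0**2 + b * y0**3       # coefficient of b**2
assert a**2 * X + b**2 * Y == 1
assert gcd(a**2, b**2) == 1
```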
|
H: Angle in a triangle with bisectors
The line segments AD, BD and CD are bisectors in $\triangle ABC$ where D is the intersection point of the bisectors. $\angle BDC$ is $128^\circ$. How big is $\angle BAC$ ?
The answer is $76^\circ$. But how to get that?
AI: Hint: $1.$ Because $\angle BDC$ is $128^\circ$, $\angle DBC$ and $\angle DCB$ add up to ??
$2.$ The lines $BD$ and $CD$ are angle bisectors. Thus from ($1$) we conclude that $\angle ABC$ and $\angle ACB$ add up to ??
$3.$ Thus $\angle BAC$ is ??
|
H: Step in this proof unclear
I have a question about the following proof that is given here: Proofwiki
Why does the intersection of all $B_x$ only contain $\{x\}$? So how do I see that there is nothing else in this intersection? Does anybody of you know the exact argument?
AI: As David points out in the comments above, for any $y\in S$ with $y\ne x,$ we have that $S\setminus\{y\}$ is an open set (since its complement is finite) containing $x$. Since $\mathcal B_x$ is a neighborhood basis at $x$, then there is some $B\in\mathcal B_x$ such that $B\subseteq S\setminus\{y\},$ whence $y\notin B,$ and so $y\notin\bigcap\mathcal B_x.$
|
H: Finding the probability of drawing alternating balls
A bag contain $5$ white balls and $4$ black balls. One ball is drawn at a time .
What is the probability that the balls drawn will be alternately white and black ?
My try:
I thought $P(W)$ is the probability of a white ball and $P(B)$ that of a black ball.
These events are not independent; they are dependent.
So I used the formula
$P(\text{alternating}) = P(W)\cdot P(B\mid W)$
$= 5/9 \cdot 1/2$
$= 5/18$
But this is the wrong answer. Guide me where I am going wrong.
AI: It is not clear how many balls are drawn. We assume that all $9$ are.
There are $\binom{9}{4}$ equally likely positions that the black balls can be in.
Of these, precisely $1$ gives us alternating white and black.
So the required probability is $1/\binom{9}{4}$.
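Brute-force enumeration confirms the count (my own snippet): of the $\binom{9}{4}=126$ placements of the black balls among the nine positions, only black-at-positions $1,3,5,7$ (0-indexed) gives the pattern W B W B W B W B W.

```python
from itertools import combinations
from fractions import Fraction

placements = list(combinations(range(9), 4))   # positions of the 4 black balls
alternating = [c for c in placements if c == (1, 3, 5, 7)]
assert len(placements) == 126 and len(alternating) == 1
print(Fraction(len(alternating), len(placements)))  # 1/126
```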
|
H: filtered modules (LNAT, Davis & Kirk)
In Lecture Notes in Algebraic Topology by Davis & Kirk, on page 240, there is written:
Q1: Is convergence of the filtration assumed in the first underline? Otherwise $\forall p: F_p=A$ is a filtration of $A$ with $Gr(A)=0$ despite $A\neq0$.
Q2: What is the precise statement of second underline? If $R$ is any commutative unital ring and $Gr(A)_p$ is free (of finite rank?) for every $p$ and the filtration is bounded above, then $Gr(A)\cong A$?
I think finite rank is necessary, but not sufficient: if $K$ is a field and $A\!=\!K^{(\mathbb{R})}$ and $F_n\!=\!K^{(\mathbb{R}\setminus\{1,\ldots,n\})}$, then $Gr(A)= \bigoplus_{n\in\mathbb{N}}\! \frac{K^{(\mathbb{R}\setminus\{1,\ldots,n\})}}{ K^{(\mathbb{R}\setminus\{1,\ldots,n+1\})}} \cong\bigoplus_{n\in\mathbb{N}}\!K =K^{(\mathbb{N})}\ncong A$.
What is the right formulation and how can it be proved?
Q3: In the third underline, what precisely is the statement? Ok, so there are weird filtrations, e.g. the $\mathbb{Z}$-modules $M\!=\!\mathbb{Z}$ and $M'\!=\!\mathbb{Z}\!\oplus\!\mathbb{Z}_2$ are not isomorphic and have different $2$-torsion, but there are filtrations $0\!\leq\!2\mathbb{Z}\!\leq\!M$ and $0\!\leq\!\mathbb{Z}\!\oplus\!0\!\leq\!M'$ with $Gr\,M\cong\mathbb{Z}\!\oplus\!\mathbb{Z}_2\cong Gr\,M'$.
But what does it mean that from $Gr(A)_p$ we can determine $A$ up to extension? That $Gr(A)\cong A\oplus B$ for some $B$ for which there is an exact sequence $0\rightarrow?\rightarrow B\rightarrow?\rightarrow0$?
AI: For each graded piece you have a short exact sequence:
$0\rightarrow F_{p-1}\rightarrow F_p\rightarrow \operatorname{Gr}(A)_p\rightarrow 0.$
This gives an extension class in $\operatorname{Ext}^1(\operatorname{Gr}(A)_p,F_{p-1})$. Thus if $\operatorname{Gr}(A)_p$ is projective, then the whole Ext group is trivial, thus the short exact sequence splits and you get that $F_p=F_{p-1}\oplus \operatorname{Gr}(A)_p$.
If you glue these together (since $A=F_n$ for some $n$), you obtain:
$A=F_{n-1}\oplus \operatorname{Gr}(A)_n=F_{n-2}\oplus \operatorname{Gr}(A)_{n-1}\oplus \operatorname{Gr}(A)_n=\cdots$
|
H: What is the limit of $\frac {x^4 +y^4}{x^3 +y^3}$ as $(x,y) \to(0,0)$
What is the limit of $$\lim_{(x,y)\to(0,0)} \dfrac {x^4 +y^4}{x^3 +y^3}$$
if it exists?
I have tried to solve it by converting to the polar system $(x,y)=(r\cos a,r\sin a)$, among other approaches. However, I could neither find the limit nor show that the limit does not exist.
thanks in advance.
AI: The limit does not exist. You can derive multiple limiting values that depend on what path you choose to $(0,0)$. eg. if you choose to follow the path $y=kx$ then you get $(1+k^4)/(1+k^3)$.
Edit: Sorry, right idea, wrong execution (an x got lost somewhere).
Sami's got the same idea: if we head to the origin along $y = -x(1+kx^n)$ we get
$$\lim_{x\to 0} \dfrac{(x^4 + x^4 (1 + kx^n)^4)}{(x^3 - x^3(1 + kx^n)^3)}$$
$$=\lim_{x\to 0} \dfrac{2x^4 + o(x^4)}{-3kx^{n+3} + o(x^{n+3})}$$
$$=\lim_{x\to 0} (-2x^{1-n}/3k)$$
which is $0$ for $n<1$, any positive value we choose for $n=1$, and diverges off to $\infty$ for n>1.
|
H: Textbook confusion in 8-bit representation
A textbook I'm following says the following in the section 8-Bit Representation of a Number:
"Now consider the two’s complement of an integer n that satisfies the inequality $1 \leq n \leq
128$.
Then,
$$− 1 \geq −n \geq− 128$$
and,
$$2^8 − 1 \geq 2^8 − n \geq 2^8 - 128 $$
But, $2^8 - 128 = 256 - 128 = 128 = 2^7$.
And therefore, $2^7 \leq$ the two's complement of $n$ $< 2^8$." ---------- (1)
Everything is fine up to here, but now the book drops the bombshell: "It follows that the 8-bit two’s complement of an integer from 1 through 128 has a leading bit of 1."
Well, I can convince myself that such is the case, but I don't see how that follows from result (1) above. My reasoning is as follows:
A number less than or equal to 128 takes 7 bits. This means representations from 0000000 to 1111111 are possible. In the last number, taking two's complement naturally gives a leading 1, while in the rest the existence of at least one 0 'absorbs' the carry within the 7-bit string itself, leaving the original bit unchanged after addition of 1 (which means, the original 0, which is now 1).
I, however, don't see why result (1) points to this. For instance, consider the sequence $2^7 + 1 = 1000001$. This is greater than $2^7$, but the 8-bit representation does not contain a leading one.
Please help!
AI: You are just getting confused by an easy passage.
Recall $p^n$ in base $p$ is written as $1$ followed by $n$ zeros.
To always remember that, keep in mind this example from the familiar base $10$:
$10^2 = 100$ (exponent is 2 $\implies$ 2 zeros $\implies$ 3 digits).
So $2^7$ is written as $10000000$, a leading one followed by $7$ zeros: 8 bits.
Its 8-bit representation starts with a $1$.
And so does $2^7 + 1 = 10000001$
The textbook just takes it for granted that you know that, in base $p$, anything from $p^k$ (included) to $p^{k+1}$ (excluded) takes exactly $k+1$ digits, with a non-zero leading digit.
For example, in the familiar base 10, everything between $10^1$ (included) and $10^2$ (excluded), i.e. everything from 10 to 99, takes 2 digits including the starting non-zero digit.
So from $2^7 \leq k <2^8$ it does follow, naturally, that $k$ will be written, in base 2, with 8 digits starting with a non-zero digit.
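If it helps to see it concretely, here is a small Python sketch (the function name is mine) confirming that the 8-bit two's complement $2^8-n$ has a leading $1$ for every $n$ from $1$ through $128$:

```python
# 8-bit two's complement of n is 2**8 - n, kept to 8 bits.
def twos_complement_8bit(n):
    return (2**8 - n) & 0xFF

for n in range(1, 129):
    bits = format(twos_complement_8bit(n), "08b")
    assert bits[0] == "1", (n, bits)   # leading bit is always 1

print(format(twos_complement_8bit(1), "08b"))    # 11111111
print(format(twos_complement_8bit(128), "08b"))  # 10000000
```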
|
H: finding probability finding good fuse
There are $8$ good and $4$ bad fuses in a box. If $3$ are drawn at random, what is the probability that all will be good.
How can I solve this question when $3$ are drawn at random?
AI: there are exactly $\binom{8}{3}$ ways of choosing three good fuses while there are $\binom{8+4}{3}$ ways of choosing 3 fuses (from all possible fuses), hence your probability is $\frac{\binom{8}{3}}{\binom{8+4}{3}}$
or another approach - you can count it step-by-step, although that's usually more cumbersome:
probability of the first fuse being good is $\frac{8}{12}$, probability of the second being good given the first one we picked was good is $\frac{7}{11}$ and for the third one it's $\frac{6}{10}$ hence your answer is $\frac{8}{12} \cdot \frac{7}{11} \cdot \frac{6}{10}$
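Both computations are easy to check numerically; a minimal Python sketch (variable names are mine):

```python
from math import comb

# Approach 1: ratio of binomial coefficients.
p_binom = comb(8, 3) / comb(8 + 4, 3)

# Approach 2: step-by-step conditional probabilities.
p_steps = (8/12) * (7/11) * (6/10)

print(p_binom, p_steps)  # both equal 56/220 ≈ 0.2545
```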
|
H: The identity of a polynomial sum
I am wondering if there is a recursive formula to calculate
$$S=1^{k}+2^{k}+3^{k}+\dots+n^{k}$$
Where $k$ and $n$ are natural numbers.
AI: I believe this should do,
$$f[k][n]=f[k][n-1]+(f[k-1][n]-f[k-1][n-1])\cdot n$$
where $f[k][n] = 1^k+2^k+\dots+n^k$. so we get $n^{k-1} = (f[k-1][n]-f[k-1][n-1])$ and multiplying both side with $n$ we get $n^k=(f[k-1][n]-f[k-1][n-1])\cdot n$.
And $f[k][n-1]+n^k = f[k][n]$
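A short Python sketch of this recursion (the helper name is mine), checked against direct summation:

```python
def power_sums(k_max, n_max):
    # f[k][n] = 1^k + 2^k + ... + n^k, built from the recursion
    # f[k][n] = f[k][n-1] + (f[k-1][n] - f[k-1][n-1]) * n
    f = [[0] * (n_max + 1) for _ in range(k_max + 1)]
    f[0] = list(range(n_max + 1))          # base case: 1^0 + ... + n^0 = n
    for k in range(1, k_max + 1):
        for n in range(1, n_max + 1):
            f[k][n] = f[k][n - 1] + (f[k - 1][n] - f[k - 1][n - 1]) * n
    return f

f = power_sums(4, 10)
assert f[2][10] == sum(i**2 for i in range(1, 11))  # 385
assert f[3][4] == 1 + 8 + 27 + 64                   # 100
```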
|
H: Mathematical word for geometrical object?
Is there a mathematical word to designate the concept of a geometrical object like:
square
cube
tesseract
N-dimensional cube
circle
sphere
hypersphere
regular and non-regular polygons
regular and non-regular polyhedrons
...
What is the common word to designate all these objects ?
AI: They are all compact subsets of suitable $\mathbb R^n$. I would call them manifolds in a broad sense, considering both the cases "manifold with corners"-"manifold without corners" (but with boundary).
|
H: Number of integer distance grid points in a cubic grid
Given an $n$-dimensional integer lattice $\mathbb Z^n$, how does the number of grid points $N(k)$ at integer distance $\leq k$ from a given grid point scale asymptotically? I'm mostly interested in $n = 3$ (I'm a physicist :) ), but having also the results for $n = 2$ and $n\geq 4$ would complete the picture.
EDIT: guided by a comment that has now disappeared, I stumbled upon this reference that provides the result $N(k) = c k^{n-1}$ for $n \geq 5$.
AI: You can use 'Hardy-Littlewood Asymptotic Formula' but it is very tricky...
Let $r_{k,s}(N)$ denote the number of representations of $N$ as the sum of $s$ positive $k$th powers.
Then we have
$$r_{k,s}(N)=S(N)\Gamma\left(1+\frac{1}{k}\right)^{s}\Gamma\left(\frac{s}{k}\right)^{-1}N^{(s/k)-1}+O\left(N^{(s/k)-1-\delta}\right)$$
where $S(N)$ is uniformly bounded above and below by positive constants depending only on $k$ and $s$. $\delta$ also depends only on $k$ and $s$.
Now we put $k=2$ and $s=n$. Applying the Hardy-Littlewood asymptotic formula, we have
$$r_{2,n}(N)=S(N)\left(\frac{\sqrt{\pi}}{2}\right)^{n}\Gamma\left(\frac{n}{2}\right)^{-1}N^{(n/2)-1}+O\left(N^{(n/2)-1-\delta}\right)$$
where $S(N)$ is uniformly bounded and $\delta$ depends on $n$. Assume that $c_1<S(N)<c_2$ for positive constant $c_1$, $c_2$.
Now we have to calculate
$$N(k)=\sum_{N<k}r_{2,n}(N^{2}).$$
The result is
$$\sum_{N<k}\left(S(N^{2})\left(\frac{\sqrt{\pi}}{2}\right)^{n}\Gamma\left(\frac{n}{2}\right)^{-1}N^{n-2}+O\left(N^{n-2-2\delta}\right)\right).$$
This means you can simply estimate $N(k)$ with the inequality
$$\frac{c_{1}\sqrt{\pi}^{n}\Gamma(n/2)^{-1}}{(n-1)2^{n}}k^{n-1}+O\left(k^{n-1-2\delta}\right)<N(k)<\frac{c_{2}\sqrt{\pi}^{n}\Gamma(n/2)^{-1}}{(n-1)2^{n}}k^{n-1}+O\left(k^{n-1-2\delta}\right).$$
This means
$$C_1<\frac{N(k)}{k^{n-1}}<C_2$$
for some constant $C_1$ and $C_2$ depends on $n$.
Well, I think the result is too complicated to use in physics. Maybe you would not be satisfied with the answer. Sorry.
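If you just want numbers, the boundedness of $N(k)/k^{n-1}$ can be probed by brute force; here is a Python sketch for $n=3$, counting points at integer distance $1$ through $k$ from the origin (all names are mine):

```python
from math import isqrt

def integer_distance_points(k):
    """Count points of Z^3 whose distance to the origin is an integer in 1..k."""
    count = 0
    for x in range(-k, k + 1):
        for y in range(-k, k + 1):
            for z in range(-k, k + 1):
                s = x*x + y*y + z*z
                if s == 0:
                    continue
                r = isqrt(s)
                if r * r == s and r <= k:
                    count += 1
    return count

for k in (5, 10, 20):
    N = integer_distance_points(k)
    print(k, N, N / k**2)   # the ratio N(k)/k^(n-1) should stay bounded
```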
|
H: Prove that the maximum number of edges in a graph with no even cycles is floor(3(n-1)/2)
The question is in the title. I can see why the bound is sharp (for example, a lot of triangles sharing one common vertex if n is odd, or the same but with one spare edge hanging out if n is even). But I can't prove why the bound is the best possible.
I know that each edge must belong to at most one cycle. I was thinking that you could take any path or big odd cycle and turn it into a bunch of triangles wherever possible, but I think that might not be valid because a collection of greedy local optima isn't necessarily a global optimum. So what do I do?
I tried looking online for an answer before posting this and all I could find is this: Link
Is the reasoning faulty there? I couldn’t make sense of it after it talked about decomposing into cycles.
Thanks for any help
AI: This is not true if the graph is not simple: for example, take a single vertex with a loop; there is one edge, but your bound equals zero.
On the other hand, if the graph in question is simple then here are some hints:
WLOG you can assume that your graph is a single connected component.
Take some spanning tree of it, you will have $n-1$ edges there.
You are left with more than $\lfloor \frac{n-1}{2} \rfloor$ edges, but observe that you cannot add any vertices (the spanning tree touches all of them), therefore, any edge you add will produce a cycle (at least one).
No edge can be shared among cycles, as this would create an even cycle (this means that each edge you add will create a cycle, but it mustn't create two or more).
The best you can do is form triangles (the smallest possible odd cycles), but you can't obtain more than $\lfloor\frac{n-1}{2}\rfloor$ (for any of the remaining edges you need two from the spanning tree), which is why you can't add yet more edges.
I hope it helps ;-)
|
H: How can I construct a square using a compass and straight edge in only 8 moves?
I'm playing this addictive little compass and straight edge game:
http://www.sciencevsmagic.net/geo/
I've been able to beat most of the challenges, but I can't construct a square in 8 moves.
To clarify a move is:
Drawing a line
Drawing a circle
Extending a line is not a move.
Lines can only be drawn to and from the root points and any intersections. Circles can be drawn with their center at any existing node and their edge must touch another.
My method for constructing the square in 9 moves is as follows:
Draw circles from each point to the other, and the connecting line
Use the intersections of the circles' edges to drawn a line perpendicular to the first
Draw a smaller circle with its center at the intersection of the two lines and its edge touching the two root nodes.
Connect the four points on the small circle's edge to form a square.
The end result looks like this:
I can't find a way to refine this method to get to eight moves, and I've failed at finding another method. Is it even possible?
AI: Your solution fails because you give up a move creating the center point of your bounding circle. To complete the challenge in eight moves, you need to use one of the preexisting intersections as your center point, and then rely on the fact that extending a line is not considered a move to create the fourth vertex of the square.
The following diagram illustrates these steps, where step 4b is the non-move extension of the line:
|
H: An inequality on holomorphic functions
Let $D := \{z \in \mathbb{C}: |z| < 1\}$ and $f\colon D \rightarrow \mathbb{C}$ be holomorphic. Suppose $\lvert f(z)\rvert \leq 1$ on $D$, show that $$\frac{|f(0)| - |z|}{1 + |f(0)||z|} \leq |f(z)| \leq \frac{|f(0)| + |z|}{1 - |f(0)||z|} \ \forall z \in D.$$
I have tried using Cauchy Integral Formula to $f$ and expanding $f$ at $0$, but I have no idea why $f(0)$ appears in the inequality in the former case. Both answers and hints are appreciated and thanks in advance!
AI: Following the comment from bryanj, if $\overline{f(0)}f(z) \neq 1$ on $D$, consider $$g(z) := \frac{f(z) - f(0)}{1 - \overline{f(0)}f(z)},$$ then $g(0) = 0$, $g$ is holomorphic on $D$, and $|g(z)| \leq 1$ there (it is $f$ followed by a Möbius automorphism of the disk). Schwarz lemma then gives $$|g(z)| \leq |z|$$ and the result follows.
Otherwise $\overline{f(0)}f(z) = 1$ for some $z$; since $|f(0)| \leq 1$ and $|f(z)| \leq 1$, this forces $|f(0)| = |f(z)| = 1$, so by the maximum modulus principle $f$ is a constant of modulus $1$, and both inequalities hold trivially.
|
H: A set with members allowed to appear more than once
I'm looking for a definition for a set which its members could be appeared more than once!
for example: $$D=\{1,1,2,4,6,6\}$$
Could we call this a set?
AI: This object is variously called a "multiset" or a "bag". It is not, however, a set. Sets are characterized by the $\in$ relation:
$$A=B $$
is defined to mean that $$x\in A\iff x\in B$$
for every $x$. So your object is a set, but considered as a set, it is equal to the set $\{1,2,4,6\}$ and also to the set $\{1,1,2,2,4,4,6,6,6\}$, which I gather is not what you want. If you want your object to be different from $\{1,2,4,6\}$, you have to do something new.
Instead of inventing a new type of object, mathematicians often model it as a function mapping from the domain of elements to the non-negative integers: for each possible element $e$, the function says how many times element $e$ appears in the multiset. Your example $D$ would be modeled by the following function:
$$\begin{align}
D(1) & =2\\
D(2) & = 1\\
D(4) & = 1\\
D(6) & = 2 \\
D(n) & = 0\qquad\text{for any other $n$}
\end{align}$$
This paper, "Multiset theory", by Wayne D. Blizard, has an axiomatic treatment of multisets and a fairly detailed description of the considerations involved in dealing with multisets.
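This function-valued model is exactly how, for example, Python's standard library represents multisets (`collections.Counter` maps elements to multiplicities); a small sketch:

```python
from collections import Counter

D = Counter([1, 1, 2, 4, 6, 6])   # the multiset {1, 1, 2, 4, 6, 6}

print(D[1])   # 2  (multiplicity of 1)
print(D[3])   # 0  (elements not present have multiplicity 0)

# As *sets* these are equal, but as multisets they differ:
assert set([1, 1, 2, 4, 6, 6]) == set([1, 2, 4, 6])
assert Counter([1, 1, 2, 4, 6, 6]) != Counter([1, 2, 4, 6])
```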
|
H: Value of this infinite sum
I want to compute the value of
$$\sum_{n=1}^{\infty} \frac{1}{((2n)^2 - 1)^2}.$$
I have tried evaluating the first couple partial sums, but can't find any recurrence. I don't have too many tools in my toolbox to proceed... Can anyone see how I could go on here?
AI: Hint:
$$\frac{1}{((2n)^2 - 1)^2} = \frac{1}{4}\left( \frac{1}{2 n+1}+\frac{1}{(2 n+1)^2}-\frac{1}{2 n-1}+\frac{1}{(2 n-1)^2}\right)$$
Then, you can start with partial sums, breaking them in four parts and doing a change of indices on 2 of them to get something which would (hopefully — I didn't go further in the details) simplify.
Edit: In particular, provided I didn't do any mistake in the process, you can show that
$$
\sum_{n=1}^N \frac{1}{((2n)^2 - 1)^2} = \frac{1}{4(2N+1)} - \frac{1}{4(2N+1)^2} + \frac{1}{2}\sum_{n=1}^N \frac{1}{(2n+1)^2}$$
where the first two terms go to $0$, and the last one goes to $\frac{1}{2}\sum_{n=1}^\infty \frac{1}{(2n+1)^2}$, which can be computed exactly (knowing that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$ and breaking the latter series' partial sum into even and odd $n$'s).
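Carrying that out (since $\sum_{n=0}^\infty \frac1{(2n+1)^2} = \frac{\pi^2}{8}$), the value of the series should be $\frac12\left(\frac{\pi^2}{8}-1\right)=\frac{\pi^2}{16}-\frac12\approx 0.11685$, which a quick numerical check supports (plain Python):

```python
from math import pi

# Partial sum of 1/((2n)^2 - 1)^2 against the closed form pi^2/16 - 1/2.
partial = sum(1 / ((2 * n) ** 2 - 1) ** 2 for n in range(1, 100001))
closed_form = pi**2 / 16 - 0.5

print(partial, closed_form)   # both ≈ 0.11685
assert abs(partial - closed_form) < 1e-10
```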
|
H: Trigonometric Equations
Eliminate B from each pair of equations:
x=sinB -3cosB and y=sinB+2cosB
I've tried solving this simultaneously just as the textbook has guided me through, but it still doesn't work. My initial working out was moving sinB to the left hand side and everything else to the right and solve simultaneously. But that didn't work out, I must have done something wrong, can someone please help me solve this. Thanks!
AI: HINT:
Solve for $\sin B,\cos B$
Then use $\sin^2B+\cos^2B=1$
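One way to carry the hint through for this particular pair (a sketch of the algebra; do check it yourself):

```latex
% Solve the pair  x = \sin B - 3\cos B,\; y = \sin B + 2\cos B  for \sin B, \cos B:
y - x = 5\cos B \implies \cos B = \frac{y-x}{5},
\qquad 2x + 3y = 5\sin B \implies \sin B = \frac{2x+3y}{5}.
% Then \sin^2 B + \cos^2 B = 1 eliminates B:
\left(\frac{2x+3y}{5}\right)^2 + \left(\frac{y-x}{5}\right)^2 = 1
\iff (2x+3y)^2 + (y-x)^2 = 25.
```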
|
H: Proving that permutations $f$ and $f^{-1}$ are of the same type
Proving that permutations $f$ and $f^{-1}$ are of the same type.
Can someone give me like a hint to start proving this?
Thanks!
AI: Reversing every cycle of $p$ would result in $p^{-1}$, thus they're the same type.
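As a quick empirical check, here is a Python sketch (helper names are mine) comparing the cycle type of a permutation with that of its inverse:

```python
import random

def cycle_type(perm):
    """perm maps i -> perm[i] on {0, ..., n-1}; return the sorted cycle lengths."""
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        length, j = 0, i
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return sorted(lengths)

def inverse(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

p = list(range(10))
random.shuffle(p)
assert cycle_type(p) == cycle_type(inverse(p))
```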
|
H: distribution of $\cos(\omega_0 n)$ where n are integers?
Assume we have the sequence $\,x[n]=\cos(\omega_0 n),$ where $n$ are integers.
If we suppose these are realizations of a random variable, what would be the p.d.f. of that random variable?
AI: So the point of this answer is not so much to answer the question posed as it is to give the question a proper formulation. However, I find this might be a little too long for a comment, so here it goes.
So if $f(x)$ is the desired pdf, we'll say that
$$F(x) = \int_{-\infty}^{x} f(t)\,dt = \int_{-1}^{x} f(t)\,dt
\Rightarrow f(x)=F'(x)$$
is the associated cdf. The question we'd like to answer then is what is the closed form of $F(x)$, where $F(x)$ is the probability that $\cos\omega_0 n<x$, given that $\omega_0$ is not a rational multiple of $\pi$ and $n$ is uniformly selected from $\mathbb N$.
Here, then, is my attempt at an answer:
Conjecture: let
$C = \{z\in\mathbb C:|z|=1\}\subset\mathbb C$.
Then $e^{i \omega_0 n}$ is uniformly distributed over $C$.
Assuming the above to be the case, we note:
$$
\cos\omega_0 n < x \Rightarrow \\
\text{Re}\{\exp(i\omega_0 n)\} < x \Rightarrow\\
\arccos(x) <\text{Arg}(\exp(i\omega_0 n))< 2\pi-\arccos(x)
$$
It follows that
$$
F(x)=\frac{2\pi-2\arccos(x)}{2\pi}=1-\frac1{\pi}\arccos(x)
$$
It follows that the desired pdf should be
$$
f(x) =
\begin{cases}
\frac{1}{\pi\sqrt{1-x^2}} & -1\leq x\leq 1\\
0 & \text{otherwise}
\end{cases}
$$
|
H: average/weighted average within Excel
5 star 128
4 star 17
3 star 10
2 star 2
1 star 14
Average stars 4.42
How many more 5 stars would I need to get Average Stars to 4.45 or 4.47 etc.
So what I tried was using SUMPRODUCT in Excel but I am approaching this wrong. First of all I cannot achieve the current number 4.42 because of my incorrect approach.
I would like to be able to change the variables at my discretion and see the outcome
how would I continue?
AI: Define $S=128 \cdot 5 + 17 \cdot 4 + 10 \cdot 3 + 2 \cdot 2 + 14 \cdot 1$ and $N=128+17+10+2+14$. Your current average is $A=\frac SN=\frac {128 \cdot 5 + 17 \cdot 4 + 10 \cdot 3 + 2 \cdot 2 + 14 \cdot 1}{128+17+10+2+14}$. If you want your average to be $A'$ after the addition of $n$ fives, you need $A'=\frac {S+5n}{N+n}$. You can solve this for $n$, as all the other values are given: $A'(N+n)=S+5n$, so $n=\frac {A'N-S}{5-A'}$.
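The same computation in a few lines of Python, so you can check the spreadsheet against it (function and variable names are mine; exact rationals avoid rounding surprises):

```python
from fractions import Fraction
from math import ceil

counts = {5: 128, 4: 17, 3: 10, 2: 2, 1: 14}
S = sum(stars * c for stars, c in counts.items())   # 756
N = sum(counts.values())                            # 171

def fives_needed(target):
    """Whole number of extra 5-star ratings needed to reach `target` average."""
    target = Fraction(str(target))
    return ceil((target * N - S) / (5 - target))

print(round(S / N, 2))      # 4.42 (current average)
print(fives_needed(4.45))   # 9
print(fives_needed(4.47))   # 16
```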
|
H: Show that the matrix $A+E$ is invertible.
Let $A$ be an invertible matrix, and let $E$ be an upper triangular matrix with zeros on the diagonal. Assume that $AE=EA$. Show that the matrix $A+E$ is invertible. WLOG, we can assume $E$ is Jordan form. If $A$ is Jordan form, it's trivial. If $A$ is not Jordan form, how to use $AE=EA$ to transform $A$ to a Jordan form? Any suggestion? Thanks.
AI: $E^n=0$ and since $A,E$ commute you have
$$A^{2n+1}=A^{2n+1}+E^{2n+1}=(A+E)(A^{2n}-A^{2n-1}E+...+E^{2n})$$
Since $A^{2n+1}$ is invertible, it follows that $A+E$ is invertible.
P.S. I only used in the proof that $E$ is nilpotent and commutes with $A$, so more generally it holds (in any ring) that if $A$ is invertible, $E$ is nilpotent and $AE=EA$, then $A\pm E$ are invertible.
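For what it's worth, the identity can be checked numerically on a small example (pure Python; all helper names are mine): take a strictly upper triangular $E$ and $A = 2I + E$, which commutes with $E$ since it is a polynomial in $E$.

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def matpow(X, p):
    n = len(X)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = matmul(R, X)
    return R

n = 3
E = [[0, 1, 2], [0, 0, 3], [0, 0, 0]]                                   # strictly upper triangular
A = [[2 * int(i == j) + E[i][j] for j in range(n)] for i in range(n)]   # A = 2I + E

assert matmul(A, E) == matmul(E, A)                    # A and E commute
assert matpow(E, n) == [[0] * n] * n                   # E is nilpotent

# (A + E) * sum_{j=0}^{2n} (-1)^j A^(2n-j) E^j  should equal  A^(2n+1).
S = [[0] * n for _ in range(n)]
for j in range(2 * n + 1):
    term = matmul(matpow(A, 2 * n - j), matpow(E, j))
    S = matadd(S, [[(-1) ** j * t for t in row] for row in term])
assert matmul(matadd(A, E), S) == matpow(A, 2 * n + 1)
```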
|
H: Mappings from $S^n$ to $S^n$ with odd degree
Let $f:S^n\rightarrow S^n$ be of odd degree, i.e. $f^*(1)$ is odd where $f^*:H_n(S^n)\rightarrow H_n(S^n)$ is the induced map on homology. Prove that there exists an $x\in S^n$ with $f(-x)=-f(x)$.
I tried to imitate the proof of the Borsuk–Ulam theorem, but without success.
Even in the case of $S^1$, I can not see how this happens, mainly because I don't know how to turn the condition on homology to some more intuitive ones. Should I use alternative definition of degrees in this case?
AI: If $f(x)$ and $f(-x)$ are never opposite points, then we can deform $f$ to a new map $g$ whose value at both $x$ and $-x$ is the midpoint of the unique shortest arc joining $f(x)$ to $f(-x)$. But the map $g$ by construction factors through $RP^n$. Therefore its degree is necessarily even (for example by counting inverse images of a generic point).
|
H: Prove that the longest decreasing subsequence of $p$ consists of $d$ elements.
Let $p$ be a permutation and let $d$ be the smallest integer so that $p$ can be written as the union of $d$ increasing subsequences. Prove that the longest decreasing subsequence of $p$ consists of $d$ elements.
I just need a hint, please don't do the whole question for me. I'm just struggling on how to approach this question.
Thank you!
AI: Hint: Show that this is a special case of the Dilworth's Theorem.
|
H: Calculating $a_0$ in Fourier Series
I am using this YouTube video to learn Fourier Series. The question can be clearly seen in the picture. The instructor calculates $a_0$ as the area under the triangle which is fine. Nothing wrong with it, I get it.
However, I was trying to calculate is as a sum of integration as:
$$
a_0 = \frac{1}{2\pi}[\int_{-\pi}^{0}f(x)dx + \int_{0}^{\pi}f(x)dx]
$$
However, I am unable to get the right answer. I get my answer as $\frac{1}{4}$
Can someone please tell me what is going wrong?
AI: The first integral is zero (which follows from the definition of $f$). The second integral is $x^2/2$ (again using the definition of $f$) evaluated at $\pi$, so $\ldots$
|
H: Prove that $\sum _{k=0}^{n}{{(-1)}^{k}{\binom n k}}=0$
Prove that:
$$\sum _{k=0}^{n}{{(-1)}^{k}\binom n k}=0$$
I tried with induction and failed.
A solution explain would be greatly appreciated.
AI: The Binomial Theorem says $$(a+b)^n=\sum_{0\le r\le n}\binom nr a^{n-r}b^r$$
for real or complex $a, b$, and non-negative integer $n$.
Put $a=1,b=-1$
|
H: How can I find the value of this limit
$$\lim _{ { x }\to { 0 } }{ \frac { \sin x-\arctan x }{ {x }^{ 2 }\log(1+x) } }$$
Here $\log$ is the natural logarithm.
AI: Using Series Expansion
$$\frac{\sin x-\arctan x}{x^2\log(1+x)}$$
$$=\frac{x-\frac {x^3}{3!}+O(x^5)-\{x-\frac {x^3}3+O(x^5)\}}{x^3}\cdot\frac1{\frac{\ln(1+x)}x}$$
$$=\frac{x^3(\frac13-\frac16)+O(x^5)}{x^3}\cdot\frac1{\frac{\ln(1+x)}x}$$
$$=\{\frac16+O(x^2)\}\cdot\frac1{\frac{\ln(1+x)}x}\text{ if }x\ne0$$
As $x\to0$ with $x\ne0$, we have $\frac{\ln(1+x)}x\to1$, so the limit is $\frac16$.
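The value $\frac16$ can be sanity-checked in floating point (plain Python):

```python
from math import sin, atan, log

def f(x):
    return (sin(x) - atan(x)) / (x**2 * log(1 + x))

for x in (1e-1, 1e-2, 1e-3):
    print(x, f(x))          # approaches 1/6 ≈ 0.1667

assert abs(f(1e-3) - 1/6) < 1e-3
```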
|
H: Coefficients of this Taylor series are bounded
Let $f(z)$ be a function that's analytic in the open unit disk, and also in a region containing the closed unit disk with the exception of a few simple poles (i.e. the poles have degree 1) that lie outside the open unit disk. Show that the coefficients of the Taylor expansion of f(z) at 0 (we know its radius is at least the unit disk) are bounded.
I could use help with this. What I've tried: we can multiply $f(z)$ by $(z-c_1)(z-c_2)...$ where $c_1,...$ are the singularities to get an analytic function, and then express the series of $(z-c_1)(z-c_2)...f(z)$ in terms of $f(z)$'s series to get a constraint on the coefficients (since the analytic function's coefficients will go to 0). But this is tedious and the constraint doesn't seem good enough to show boundedness.
Update: Sorry, it looks like I confused a few people (as seen by the hints below). To clarify, $f$ is analytic in the open unit disk. We're not given analyticity in the closed unit disk.
I should also emphasize that the poles are of degree 1 at the most.
This isn't a homework question (just an exercise from a textbook), so feel free to post either hints or whole answers.
AI: The function $f$ can be written as
$$
f(z)=g(z)+\sum_{k=1}^N\frac{a_k}{z-z_k}
$$
for some $r>1$ and all $|z|<r$. Here $g$ is holomorphic on $|z|<r$ and $|z_k|\geq 1$ for $k\in\{1, \dotsc, N\}$. The boundedness of the coefficients follows directly from this decomposition: the Taylor coefficients of $g$ tend to $0$ (its radius of convergence is at least $r>1$), and each term $\frac{a_k}{z-z_k}=-\frac{a_k}{z_k}\sum_{m\geq 0}(z/z_k)^m$ has coefficients bounded by $|a_k|$ since $|z_k|\geq 1$.
|
H: Asymptotic equality and inequalities
If $f$ equals $g$ asymptotically, i.e., $f(x)/g(x) \to 1$ (as $x \to \infty$), and $h \leq f$, does that mean $h \leq g$ for sufficiently large $x$? It seems to be true because the relative error, $|f/g - 1|$, tends to zero. I, however, think it is not true in general, that is, we cannot always replace $f$ with $g$ in inequalities. Am I correct?
AI: It is false: Consider $f\colon x\mapsto x+1$, $h=f$ and $g=f-1$. You do have $\frac{f}{g}\to 1$ and $h \leq f$, but always $h > g$.
However, if you have $h$ multiplicatively bounded away from $f$ (that is, there exists $\epsilon > 0$, $A\in \mathbb R$ such that $h(x) \leq f(x)(1 - \epsilon)$ for all $x\geq A$), then what you want holds.
|
H: Finding probability at least one male child
A family has three children. what are the probabilities that it has no male child and at least one male child if we consider the order of birth.
My Try:
I didn't understand the complete question. It's confusing me while solving.
AI: There are different ways to solve the problem. A simple, straightforward way is to enumerate the possible options (since the problem size is small). Since we consider the order of birth to be important, the possible options are:
\begin{align}
\color{blue}F \color{blue}F \color{blue}F\\
\color{red}M \color{blue}F \color{blue}F\\
\color{blue}F \color{red}M \color{blue}F\\
\color{blue}F \color{blue}F \color{red}M\\
\color{red}M \color{red}M \color{blue}F\\
\color{red}M \color{blue}F \color{red}M\\
\color{blue}F \color{red}M \color{red}M\\
\color{red}M \color{red}M \color{red}M
\end{align}
As you can see, there are $8$ possible options in total. Hence, the probability that there is no male child is $\dfrac18$ and the probability that there is at least one male child is $\dfrac78$.
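Since the sample space is tiny, the enumeration is also easy to reproduce by machine (a small Python sketch):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product("MF", repeat=3))   # ordered birth sequences
total = len(outcomes)                      # 8

no_male = sum(1 for o in outcomes if "M" not in o)
at_least_one_male = sum(1 for o in outcomes if "M" in o)

print(Fraction(no_male, total))            # 1/8
print(Fraction(at_least_one_male, total))  # 7/8
```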
|
H: Problem about a process with bins of balls
A friend of mine gave me this problem for fun:
Given $\frac {n(n+1)}{2}$ balls, first we divide these balls arbitrarily into baskets; after that, we make another basket with one ball from each basket, and repeat this procedure indefinitely.
I want to prove that at some point this stabilizes with 1 ball in one basket, 2 balls in another basket, ..., n balls in another basket.
It seems easy to solve; he says we can use some concept of energy (???). I'm trying with some concepts from combinatorics without any success.
Thanks in advance.
AI: I believe what you are discussing is known as Bulgarian solitaire. This is a theorem of Jorgen Brandt, i.e., that the game ends as you have described when the number of balls (or cards) is a triangular number, i.e., of the form $n(n+1)/2$.
Here is a nice source to read over: (paywall)
Solution of the Bulgarian Solitaire Conjecture
Kiyoshi Igusa
Mathematics Magazine
Vol. 58, No. 5 (Nov., 1985), pp. 259-271
Published by: Mathematical Association of America
Article Stable URL: http://www.jstor.org/stable/2690174
Note this is also the subject of a problem in the June/July 2013 AMM: Problem 11712.
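If you want to see Brandt's theorem in action, the process is easy to simulate; here is a minimal Python sketch (names are mine) for $6 = 3\cdot 4/2$ balls:

```python
def step(baskets):
    """Take one ball from each basket and form a new basket from them."""
    new = len(baskets)
    survivors = [b - 1 for b in baskets if b > 1]   # empty baskets disappear
    return sorted(survivors + [new])

baskets = [4, 1, 1]          # any partition of 6 = 3*(3+1)/2
for _ in range(20):          # iterate; 20 steps is plenty for 6 balls
    baskets = step(baskets)

assert baskets == [1, 2, 3]  # the stable "staircase" configuration
```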
|
H: What "is" a matrix in the context of a vector space?
I'm familiar with the definition of a vector space $V$ over a field $F$
I'm also comfortable with the notion that a matrix "represents" a linear map from one vector space $V$ to another vector space $W$.
The Wiki article on Vector Spaces says this:
Matrices are a useful notion to encode linear maps.
But this conflicts with what I think a matrix is. Is a matrix a "thing" or "element" that we can write down and multiply by other matricies/vectors? Or is it simply a "representation" of a linear map between spaces? Does it depend on the context?
If it is a "thing" - then how do we define the multiplication of objects from the vector space with objects of a different kind that do not belong to the vector space?
We can write a meaningful equation such as:
$M \cdot \textbf{x} = \textbf{y}$
Where x and y are vectors, and M is a matrix.
I know how to perform the multiplication, but it seems wrong to me that $M$ is of a different "species" (apologies - I'm not familiar enough with abstract algebra to know the correct classification - set/group/etc.) than the thing we are multiplying it with.
Sorry if this is vague, I'm just hoping someone might be able to clear up this slight confusion.
Thanks!
AI: To understand the connection between matrix multiplication and linear maps, it is important to understand that a linear map is uniquely determined by what happens to a basis.
To be more specific, consider a linear map $\phi: V \to W $. For the sake of example, let's imagine that $V$ and $W$ are finite dimensional. Say $\dim_{\ F} (V) = n$ and $\dim_{\ F}(W) = m$. Now choose a basis $B=\{v_1, \ldots, v_n\}$ for $V$. I claim that the linear map $\phi$ is uniquely determined by $\phi(v_1), \ldots, \phi(v_n)$. Now for an arbitrary $v \in V$, write $v = c_1 v_1 + \ldots + c_n v_n$ for uniquely determined coefficients $c_i \in F$. Then $\phi(v) = \phi(c_1 v_1 + \ldots + c_n v_n)=c_1\phi(v_1)+ \ldots + c_n\phi(v_n)$, which follows automatically from the linearity of $\phi$.
If you examine the last equation, you should find that this information can be encoded by the multiplication of an $m\times n$ matrix with an $n \times 1$ vector. The columns of the matrix are $\phi(v_1), \ldots, \phi(v_n)$, these expressed in some basis for $W$, while the vector is the coordinates of $v$ in the basis $B$.
Thus, once we have chosen bases for our finite dimensional spaces, there is a perfect correspondence between these linear maps and $m \times n$ matrices. For each choice of bases for $V$ and $W$, we obtain an isomorphism of vector spaces $L(V, W) \cong M_{m \times n}(F)$, where $L(V, W)$ is the space of linear maps from $V$ to $W$ and $M_{m \times n}(F)$ (which we might also call $F^{m\times n}$) is the space of $m \times n$ matrices with entries in $F$. When we view a matrix as a linear map without any other context, we may as well imagine $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$ with the standard bases. In this picture, multiplication of matrices with suitable dimensions corresponds to composition of linear maps. In the special case $V = W$, we get square matrices, and the correspondences $L(V, V) \cong M_{n \times n}(F)$ are $F$-algebra isomorphisms.
In some sense, matrix multiplication is best understood as a window into the inner workings of a linear map. This is not the only way to view a matrix, but the connection between matrix operations and the operations available for linear maps is inescapable.
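As a tiny illustration of the correspondence (a sketch with plain lists; the helper name is mine): the matrix of $\phi$ has $\phi(e_j)$ as its $j$-th column, and matrix-vector multiplication reproduces $\phi$ via linearity.

```python
def apply_linear_map(columns, v):
    """columns[j] = phi(e_j) in the basis of W; v = coordinates of the input in V."""
    m = len(columns[0])
    return [sum(c * columns[j][i] for j, c in enumerate(v)) for i in range(m)]

# Example: phi: R^2 -> R^3 determined by phi(e1) = (1, 0, 2), phi(e2) = (0, 1, 1).
columns = [[1, 0, 2], [0, 1, 1]]

# phi(3*e1 + 5*e2) = 3*phi(e1) + 5*phi(e2) by linearity:
assert apply_linear_map(columns, [3, 5]) == [3, 5, 11]
```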
|
H: Propriedades do MDC (Properties of Greatest Common Divisor)
Estou com uma grande lista de exercícios de PROPRIEDADES DO MDC (MÁXIMO DIVISOR COMUM), e não estou conseguindo entender quais os passos que tenho que seguir nas demonstrações, e gostaria muito de aprender este conteúdo, alguém me ajuda em uma questão para ver se eu consigo entender as outras?
Questão:
Mostre que, se (a, b) = 1, a|c e b|c, então a · b|c
Added Translation from Portuguese
I have a large list of exercises PROPERTIES MDC (greatest common divisor), and I am not able to understand what steps you have to follow in the statements, and would love to learn this content, someone help me on a question to see if I I can understand the other?
question:
Show that if $(a, b) = 1, a | c$ and $b | c$, then $a · b | c$
AI: Another way is as follows. If $(a,b)=1$ then there are integers $\alpha, \beta$ such that $\alpha a + \beta b = 1$. Now $c = c\cdot 1 = c\alpha a + c \beta b$ But $c\alpha a$ is divisible by $ab$ (why?), as is $c \beta b$, so $c = c\alpha a + c\beta b$ is divisible by $ab$ too.
|
H: Solving Bessel integration
What would be the solution of the Bessel integral
$$b=k A(t)\int_0^{\infty} J_0 (k \rho)\, e^{-\rho^2/R^2}\, \rho \,d \rho\,?$$
Can I solve that by using this formulation?
$$c= \int_0^{\infty}J_0(t)\, e^{-pt}\, dt= \frac{1}{\sqrt{1+p^2}}$$
AI: No, the Laplace transform of a Bessel does not apply here. Rather, what you are looking at is the Bessel transform of a Gaussian, which is really just the expression of the 2D Fourier transform of a 2D Gaussian:
$$\int_0^{\infty} d\rho \, \rho J_0(k \rho) \, e^{-\rho^2/R^2} = \frac{1}{2 \pi} \int_{-\infty}^{\infty} dx \, \int_{-\infty}^{\infty} dy \, e^{i (k_x x + k_y y)}\, e^{-(x^2+y^2)/R^2}$$
You may verify this using the relation
$$\int_0^{2 \pi} d\phi \, e^{i k \rho \cos{(\phi-\theta)}} = 2 \pi \, J_0(k \rho)$$
where $\rho = \sqrt{x^2+y^2}$ and $\phi-\theta$ is the angle between the vectors $(k_x,k_y)$ and $(x,y)$.
Can you compute each individual FT of the 1D Gaussians? I get
$$\int_0^{\infty} d\rho \, \rho J_0(k \rho) \, e^{-\rho^2/R^2} = \frac{R^2}{2} e^{-R^2 k^2/4}$$
ADDENDUM
Here is the evaluation of the FT. You can separate into $x$ and $y$ integrals:
$$ \frac{1}{2 \pi} \int_{-\infty}^{\infty} dx \,\int_{-\infty}^{\infty} dy \, e^{i (k_x x + k_y y)}\, e^{-(x^2+y^2)/R^2} = \frac{1}{2 \pi} \int_{-\infty}^{\infty} dx \, e^{i k_x x} e^{-x^2/R^2} \, \int_{-\infty}^{\infty} dy \, e^{i k_y y} e^{-y^2/R^2}$$
The evaluation of each integral is identical in form, so let's look at the $x$ one. The idea is to complete the square and use the fact that $\int_{-\infty}^{\infty} dx \, e^{-x^2} = \sqrt{\pi}$.
$$\begin{align}\int_{-\infty}^{\infty} dx \, e^{i k_x x} e^{-x^2/R^2} &= \int_{-\infty}^{\infty} dx \, e^{-1/R^2 (x^2-i R^2 k_x x)}\\ &= \int_{-\infty}^{\infty} dx \, e^{-1/R^2 (x^2-i R^2 k_x x - R^4 k_x^2/4 + R^4 k_x^2/4)} \\ &= \int_{-\infty}^{\infty} dx \, e^{-1/R^2 (x^2-i R^2 k_x x - R^4 k_x^2/4)} e^{-R^4 k_x^2/(4 R^2)} \\ &= e^{-R^2 k_x^2/4} \int_{-\infty}^{\infty} dx \, e^{-(x-i R^2 k_x/2)^2/R^2}\end{align}$$
Now, it turns out that this last integral is independent of the imaginary shifted term in the exponential, and therefore has value $\sqrt{\pi} R$. Do the same for the $y$ integral, and the result follows.
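The closed form $\frac{R^2}{2} e^{-R^2 k^2/4}$ can be verified numerically without any special-function library, using the integral representation $J_0(z)=\frac1\pi\int_0^\pi\cos(z\sin\theta)\,d\theta$ and a midpoint rule (a rough sketch; all names and tolerances are mine):

```python
from math import cos, sin, exp, pi

def j0(z, m=400):
    """Bessel J_0 via its integral representation, midpoint rule."""
    h = pi / m
    return sum(cos(z * sin((i + 0.5) * h)) for i in range(m)) * h / pi

def hankel_of_gaussian(k, R, upper=12.0, m=2000):
    """Midpoint approximation of  integral_0^upper  rho J_0(k rho) e^{-rho^2/R^2} drho."""
    h = upper / m
    total = 0.0
    for i in range(m):
        rho = (i + 0.5) * h
        total += rho * j0(k * rho) * exp(-rho**2 / R**2)
    return total * h

k, R = 2.0, 1.0
numeric = hankel_of_gaussian(k, R)
closed = R**2 / 2 * exp(-R**2 * k**2 / 4)
print(numeric, closed)   # both ≈ 0.1839
assert abs(numeric - closed) < 1e-3
```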
|
H: How many dice needed to make rolling at least $3$ sixes probable ($p>0.5$)
I'm wondering how to solve a questions of the example form:
How many dice are needed to make rolling at least 3 sixes in a single throw probable (p>0.5)
I know how to solve the question by graphing out all of the binomial probabilities for successive numbers of dice (for $n\geq3$ dice, compute $P_n(\geq 3$ sixes$) = 1 - P_n(0$ sixes$) - P_n(1$ six$) - P_n(2$ sixes$)$ each time and find the least $n$ such that $P_n(\geq 3$ sixes$) \geq 0.5$).
Enumerating out the probabilities for different numbers of dice, the answer I get is 16.
But one could of course consider generalising the question to arbitrary-sided dice, different number of successes, different target probability, etc. I'm wondering if there a more direct method to solve the general form of such a problem?
(There are already lots of similar questions on here, but the ones I found tackle the more classical question of computing the probability of a specific number of successes for a specific number of trials, not the number of trials required to make [at least] a fixed number of successes probable.)
AI: Consider rolling a six to be a "failure", and model the chance of getting the third failure on the $n$th throw using a negative binomial distribution, shifted right by 3 (since the negative binomial distribution counts only successes). The CDF of the negative binomial distribution then represents the probability of having thrown at least three sixes by the time you've thrown $n+3$ dice. You can then solve for a CDF of 0.5, and round up.
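As a sanity check (function names are my own), a direct scan of the binomial tail — equivalently, the shifted negative binomial CDF described above — reproduces the value $16$ found by enumeration in the question:

```python
from math import comb

def p_at_least(k, n, p=1/6):
    """P(at least k successes in n independent trials with success probability p)."""
    return 1.0 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k))

def min_dice(k=3, target=0.5):
    """Smallest number of dice n with P(at least k sixes) > target."""
    n = k
    while p_at_least(k, n) <= target:
        n += 1
    return n

print(min_dice())  # confirms the value 16 found by enumeration
```

The same search works for arbitrary-sided dice (change `p`) and any success count or target probability.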
|
H: The Derivative of $y=x^2$ with respect to $\ln x$
Find the derivative of $y=x^2$ with respect to $\ln x.$
AI: The function $f(x)=x^2$ takes you from $x$ to $x^2$. However, you want a function that takes you from $\ln x$ to $x^2$. You can find this function as a composition of two functions.
To do this, first find a function that takes you from $\ln x$ to $x$:
$e^x$ (because $e^{\ln x}=x$),
and then apply the function $x^2$ to go from $x$ to $x^2$.
You can use the chain rule to find the derivative of this composite function with respect to $\ln x$ (Khan Academy has videos on the chain rule).
Let $f(u)=e^u$ and $g(x)=x^2$, and write $u=\ln x$; then $g(f(u))=e^{2u}=x^2$. By the chain rule, the derivative of $g(f(u))$ with respect to $u$ is $g'(f(u))\,f'(u)=2e^u\cdot e^u=2e^{2u}$, which, since $u=\ln x$, equals $2x^2$.
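A quick numerical sanity check (the function name is mine): writing $y$ as a function of $u=\ln x$ gives $y=e^{2u}$, so the derivative should be $2e^{2u}=2x^2$.

```python
from math import exp, log

def y_of_u(u):
    """y = x^2 rewritten as a function of u = ln x, i.e. y = e^{2u}."""
    return exp(2 * u)

x0 = 1.7
h = 1e-6
# central finite difference of y with respect to u = ln x, at u = ln(x0)
numeric = (y_of_u(log(x0) + h) - y_of_u(log(x0) - h)) / (2 * h)
print(numeric, 2 * x0 ** 2)   # both close to 5.78
```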
|
H: How to quickly tell that a quadratic only has a single root?
Hello good math wizards, I'm trying to figure out why the following equation has at most one root:
$$f (t) = \textbf{x} \cdot \textbf{x} + \textbf{x} \cdot t\textbf{y} + t\textbf{y} \cdot \textbf{x} + t\textbf{y} \cdot t\textbf{y} = \left \| \textbf{x} \right \|^2
+ 2(\textbf{x} \cdot \textbf{y})t + \left \| \textbf{y} \right \|^2t^2$$
where $\textit{t}$ is a real number, $\textbf{x}$ and $\textbf{y}$ are fixed vectors in $\mathbb{R}^n$, and $\textbf{y}\neq0$
The background of this question is that I am going through the text here, and in the section about the Cauchy-Schwarz Inequality the authors state "Hence f is a quadratic polynomial with at most one root." However, I don't understand why this quadratic has at most one root. Can someone help clarify why it should be immediately apparent that his equation has only one root?
Just FYI this isn't homework or anything; I'm just going through the text. Thanks in advance!
AI: Notice that
$$f(t) = ||\mathbf{x}+t\mathbf{y}||^2$$
Thus $f$ can have at most one root: $f(t)=0$ exactly when $\mathbf{x}+t\mathbf{y} = \mathbf{0}$, and since $\mathbf{y}\neq 0$ this happens for at most one value of $t$.
|
H: Is it possible to have multiple decimal points in a number?
Is it ever possible to have multiple decimal points in a number? If so, how?
For example is the value 1.1.2 possible?
This is a question posed purely out of curiosity.
AI: Mathematical notation, like other aspects of human language, is a human creation - we decide what things mean. The system that is most commonly used at the moment is decimal notation:
A "digit" is one of the following integers: $0,1,2,3,4,5,6,7,8,9$.
If $a_n,\ldots,a_0$ and $b_1,\ldots,b_m$ are digits, and $a_n\neq 0$, then the expression
$$\Large\color{red}{a_na_{n-1}\,\underset{\substack{\small\strut\,\uparrow\\\small\mathsf{ellipsis}}\,}{{\scriptsize\ldots}}\; a_1a_0\;\underset{\substack{\small\uparrow\,\strut\\\small\mathsf{decimal}\,\\ \small\mathsf{point}\,}}{.}\;b_1b_2\,\underset{\substack{\small\strut\,\uparrow\\\small\mathsf{ellipsis}}\,}{{\scriptsize\ldots}}\; b_m}$$
denotes the number
$$a_n10^n+a_{n-1}10^{n-1}+\cdots+a_110+a_0+b_110^{-1}+\cdots+b_m10^{-m}$$
The purpose of the single decimal point in this notation (ignore the ellipses) is to clarify which parts of the expression correspond to the powers of $10$ where the exponent is $\geq 0$, and those where the exponent is $<0$.
In this system of notation, there is no meaning of adding a second decimal point. That is not to say that it is impossible to come up with a meaning - perhaps there is some usage in mathematics where a second decimal point would be a convenient and clear way of indicating something, and if there were, perhaps it would then be adopted as a part of our system of notation. But currently, it does not mean anything to write $1.1.2$.
|
H: Creating a cover of closed balls for a compact set
No idea how to start thinking about this..any pointers would be appreciated! :]
Let $K \subseteq \mathbb R^n$ be a compact set.
Let us assume $K$ has a cover of $n$ open balls with radii $r_1,...r_n$.
Show that there exist $0<t_i<r_i$ for every $1\le i \le n$ such that $K$ can be covered by $n$ closed balls with the radii $t_1,...,t_n$.
AI: For $1 \leqslant i \leqslant n$, let $B_i = B_{r_i}(x_i) = \{x \in \mathbb{R}^d \colon \lVert x - x_i\rVert < r_i\}$.
Let $K_1 = K \setminus \bigcup\limits_{j = 2}^n B_j$. The balls $B_j$ are open, hence $K_1$ is a closed subset of $K$, therefore $K_1$ is compact. Let $s_1 := \sup \{\lVert x - x_1\rVert \colon x \in K_1\}$. Since $K_1$ is compact (and the norm/distance continuous), $K_1 = \varnothing$ or there exists a point $y_1 \in K_1$ with $\lVert y_1 - x_1\rVert = s_1$. Then $y_1 \in K$, and $y_1 \notin \bigcup\limits_{j = 2}^n B_j$, hence $y_1 \in B_1$, thus $s_1 < r_1$. Choose $0 < t_1 \in (s_1,\, r_1)$, and let $\tilde{B}_1 = B_{t_1}(x_1)$.
By definition of $s_1$ and the choice of $t_1$, $K_1 \subset \tilde{B}_1$ and therefore
$$K \subset \tilde{B}_1 \cup \bigcup_{j = 2}^n B_j.$$
Now let $K_2 = K \setminus \left(\tilde{B}_1 \cup \bigcup_{j = 3}^n B_j\right)$. With the same argument as above, we find a $0 < t_2 < r_2$ such that, setting $\tilde{B}_2 = B_{t_2}(x_2)$, we have $K \subset\bigcup\limits_{k = 1}^2 \tilde{B}_k \cup \bigcup\limits_{j=3}^n B_j$.
It should be clear how to continue, and after $n$ steps, we have numbers $t_j$ with $0 < t_j < r_j$, $1 \leqslant j \leqslant n$, such that
$$K \subset \bigcup_{k=1}^n \tilde{B}_k = \bigcup_{k=1}^n B_{t_k}(x_k) \subset \bigcup_{k=1}^n \overline{B_{t_k}(x_k)} \subset \bigcup_{j=1}^n B_j.$$
Such a shrinking of covers is possible in far more general situations than a finite cover of a compact set.
|
H: Incorrect notation in math?
Does math have an incorrect notation / syntax? I don't mean writing misaligned notation (google), but when you take something like a number to powers to powers to powers, $${{2^2}^2}^3$$ (I was told this is incorrect notation by a teacher). Is it really incorrect, or does it just need to be simplified with parentheses? Do people write maths like this?
a radical expression with the root being a radical expression? $$\sqrt[\sqrt{2^3}]{2}$$
AI: Your teacher is mistaken. There is a well-established and universal convention about the meaning of an expression like $$2^{2^{2^3}}$$it is always understood to mean $$2^{\left(2^\left(2^3\right)\right)} =2^{2^8} = 2^{256}$$ People can and do write expressions like these. For example this paper, "Analog of the Skewes Number for Twin Primes", by Marek Wolf, contains the expressions $$10^{10^{10^{10^3}}}\qquad\text{and}\qquad 10^{10^{529.7}}$$on the first page, with no further explanation. Similarly "Some Rapidly Growing Functions" by Craig Smoryński has $$10^{10^{10^{34}}} < e^{e^{e^{e^{4.369}}}}$$ and similar expressions. (I picked these two papers arbitrarily; they were the first two hits in Google Scholar for "Skewes' Number".)
There is a good reason for the convention about what $a^{b^c}$ means: $a^{b^c}$ could be understood as either $a^\left({b^c}\right)$ or as $\left(a^b\right)^c$. But
if it were understood as $\left(a^b\right)^c$, one would never need to write $a^{b^c}$, since it would be equal to $a^{bc}$. So it is always understood as
$a^\left({b^c}\right)$.
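As an aside, the same convention is baked into most programming languages that have an exponentiation operator; for instance, Python's `**` associates to the right (the illustration below is my own):

```python
# Python's ** is right-associative, matching the convention a^(b^c)
assert 2 ** 2 ** 3 == 2 ** (2 ** 3) == 256
assert (2 ** 2) ** 3 == 2 ** 6 == 64      # the other reading gives a different value
assert 2 ** 2 ** 2 ** 3 == 2 ** 256
print("right-associative, as expected")
```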
Nobody ever writes $$\sqrt[\sqrt{2^3}]2$$ even though its meaning is clear. Partly this is because it would have been difficult to typeset with old-fashioned metal type, so there is a tradition of expressing this differently. And partly it is because it looks bad.
Since by definition, $$\sqrt[a]b = b^{1/a},$$ one would almost always write something like $$(2^{1/2})^{1/2^{3/2}}$$ instead, at which point it would become clear that the expression could be simplified to $$2^{(1/2)(1/2^{3/2})} = 2^{1/2^{5/2}} = 2^{2^{-5/2}}.$$ Good notation enables and encourages this sort of simplification; bad notation obscures and impedes it.
|
H: Express $f(x) = x^2 \cos(2x) + \sin^2(x)$ as a power series
Express $f(x) = x^2 \cos(2x) + \sin^2(x)$ as a power series
What I know:
I know that $$x^2\cos(2x) = x^2 \cdot \sum_{n=0}^{\infty} {(-1)^n \cdot \frac{(2x)^{2n}}{(2n)!}} = \sum_{n=0}^{\infty} {(-1)^n \cdot \frac{2^{2n} \cdot x^{2n+2}}{(2n)!}}$$
But how do we find what $\sin^2(x)$ is in terms of a power series?
AI: HINT
$$ \sin^2 {x} = 1 - \cos^2 {x} = \dfrac{1 - \cos(2x)}{2}$$
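Carrying the hint through (this completion is mine, not part of the original hint): since $\cos(2x)=\sum_{n=0}^{\infty}(-1)^n\frac{(2x)^{2n}}{(2n)!}$,
$$\sin^2(x)=\frac{1-\cos(2x)}{2}=-\frac12\sum_{n=1}^{\infty}(-1)^n\frac{2^{2n}x^{2n}}{(2n)!}=\sum_{n=1}^{\infty}(-1)^{n+1}\frac{2^{2n-1}x^{2n}}{(2n)!}.$$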
|
H: What does “unconfirmed or conditional” mean on this site?
On the home page of the “Bounded gaps between primes” polymath project, there are listed bounds for $H$ in Zhang's proof of prime gaps.
What does “unconfirmed or conditional” mean on this page and what is the difference between results $H=12006$ and $H=5414?$
AI: I think "unconfirmed" means - "someone has produced an argument for this, but it hasn't been rigorously checked yet."
I think "conditional" means - "this estimate is based on some other assumption we think we can prove, but we haven't filled in all the details yet."
Both are rough translations. By far the best source for the detailed (a word I use advisedly) arguments is Terry Tao's excellent blog. As I write, it seems likely that many of the $?$ estimates will shortly be confirmed, as the arguments have now been completed and checked.
|
H: $f$ integrable but $f^2$ not integrable
At this point in Bartle, $X$ is a nonempty set, $\mathcal{X}$ is a $\sigma$-algebra of subsets of $X$, and $\mu$ is a measure on $\mathcal{X}$.
$f\in L(X,\mathcal{X},\mu)$ means:
$f:X\to R$ is measurable.
$\int f^+\,d\mu<+\infty$ and $\int f^-\,d\mu<+\infty$.
$\int f\,d\mu=\int f^+\,d\mu-\int f^-\,d\mu$.
Can someone share a couple of counterexamples:
$f\in L(X,\mathcal{X},\mu)$, but $f^2\not\in L(X,\mathcal{X},\mu)$
$f, g\in L(X,\mathcal{X},\mu)$, but $fg\not\in L(X,\mathcal{X},\mu)$
Thanks.
AI: Our space is the reals with the Lebesgue measure.
1)Let $f(x)= n$ if $n < x < n+ \frac{1}{n^3}$, for all positive integers $n$ and $f(x)=0$ otherwise. Thus $f(x)=n$ on an interval of length $\frac{1}{n^3}$.
Then
$$\int f =\sum \frac{n}{n^3} < \infty ,$$
$$\int f^2 =\sum \frac{n^2}{n^3} = \infty .$$
2) Take $f=g$ in the previous example.
|
H: Prove that $f_n(x) = 1 - \cos(\frac{x}{n})$ is uniformly convergent
Prove that $f_n(x) = 1 - \cos(\frac{x}{n})$ is uniformly convergent in $[0,2\pi]$.
So I said this, and my professor said it is wrong:
$|1-\cos(\frac{x}{n})| \leq 2, $ then we take some $\epsilon >2 $, and we have for every $n \in \mathbb{N}$ and $\forall x \in [0,2\pi]$ that $|1-\cos(\frac{x}{n})| < \epsilon$ and it is uniformly convergent.
What is wrong about what I did?
AI: As I said in my comment, your proof doesn't cover the $\epsilon \in (0,2)$ case; you need to prove the statement for every $\epsilon > 0$, which includes these values.
Hint For $n >4$ you can prove that
$$0 \le 1 - \cos\left(\frac{x}{n}\right) \leq 1 - \cos\left(\frac{2 \pi}{n}\right) \quad \forall x \in [0,2 \pi] \,.$$
$n>4$ is needed to make sure we are in the first quadrant. This follows immediately from the monotonicity of $\cos$ in the first quadrant.
[$n>2$ would actually suffice, but who cares :) ].
Now, use the definition of $\lim_n \cos(\frac{2 \pi}{n})=1$ and the fact that this bound does not depend on $x$.
Edit Alternately, by the double angle formulas and $\sin(y) <y$:
$$0 \leq 1 - \cos(\frac{x}{n}) = 2 \sin^2(\frac{x}{2n}) \leq 2 (\frac{x}{2n})^2 \leq 2 (\frac{2 \pi}{2n})^2$$
|
H: n-correlation function.
So I was thinking about generalizations of notions in statistics, like auto-correlation or cross-correlation (auto-correlation is a specific example of cross-correlation where we take the same process).
The problem is how do I generalize for example the notion of WSS (wide-sense-stationary)?
Take for example
$$R_{X,Y,Z}[n_1,n_2,n_3] = \mathbb{E}(X[n_1]Y[n_2]Z[n_3])$$
How would I generalize the notion of WSS, such that the cross correlation is a specific example of this function.
My naive notion of WSS in this case may be:
R is WSS when:
$$ R_{X,Y,Z}[n_1,n_2,n_3] = R_{X,Y}[n_1 -n_2] + R_{X,Z}[n_1 -n_3] + R_{Y,Z} [n_2 - n_3] $$
But I don't see how I get the WSS cross-correlation case when $Z = 1$ w.p. $1$.
Has this been done before? Has anyone tried to generalize this notion in statistics?
AI: When you have more than 2 r.v., things get more complicated. You don't have just one mixed moment anymore, but several: for example $E(X^3)$, $E(X^2 Y)$, $E(XYZ)$, etc. Thus “the” correlation depends on what power of your r.v.s you want to get involved: there are multiple covariances of mixed type to take into account. Also, in your proposed expression, you could argue that there may need to be an extra term $-R_{X,Y,Z}(n_1-n_2, n_1-n_3)$ for WSS processes, because the covariances are defined as integrations (similar to the sum formula in probability theory for multiple events).
There have been extensions for the case of Gaussian r.v., where Isserlis's formula (Gaussian moment theorem) can be used, which splits the moment for every even number of r.v.s into a sum of products of pairs (covariances), and all odd numbers giving zero. For more general distributions, I do not think it can be done because you will need to account for the skewness of the PDF.
|
H: Optimal and maximal codes
Is there an example of a code which is maximal but not optimal?
Definitions.
An $(n,M,d)$ code $C$ over $\mathbb{F}_q$ is maximal if $C$ is not contained in an $(n,M+1,d)$ code.
An $(n,M,d)$ code $C$ is optimal if $|C|=M=A_q(n,d)$, where $A_q(n,d)=\max \{M\mid\exists (n,M,d)\text{ code}\}$.
I know that each optimal code is maximal, but I do not really understand the difference between the two definitions.
Thank you for help,
Best regards
AI: One way of attempting to get an optimal code $\mathcal C$ is to begin with $\mathcal C$
consisting of $\mathbf 0 \in \mathbb F_q^n$ , and then iteratively include in
$\mathcal C$
an element of $\mathbb F_q^n-\mathcal C$ that is at distance $d$ or more from all of the
already chosen codewords in $\mathcal C$. Note that beginning with $\mathcal C = \{\mathbf 0\}$ is not required; we could begin with any other element of
$\mathbb F_q^n$ if we wish without affecting the results.
A maximal but non-optimal code $\mathcal C$ is defined to
be the result achieved when $|\mathcal C| < A_q(n,d)$ but
$\mathcal C$ cannot be enlarged any further in this manner -- that is,
all the elements of $\mathbb F_q^n - \mathcal C$ are at distance smaller than $d$
from one or more of the codewords already chosen to be in $\mathcal C$. Note that the
$\mathcal C$ that we have found is not a proper subset of a
larger code with the same minimum distance $d$. If we want to arrive
at an optimal code, we are blocked in this direction;
and need to backtrack, that is, replace some of the previously chosen codewords for
$\mathcal C$ by other choices that are also at distance $d$ or more from the codewords
in $\mathcal C$ and then explore other subsets of $\mathbb F_q^n$ to find a
code of cardinality $A_q(n,d)$.
If the value of $A_q(n,d)$ is known exactly, we can stop the search-and-include-and-backtrack-when-stuck process as soon as we find a $\mathcal C$ with $A_q(n,d)$ codewords in it, and this will be an optimal code, which, as you point out, is also maximal.
As an example, consider that with $q=2$, $n=3$, $d=2$, if we include codewords
in $\mathcal C$ in order $000,~ 011, 101,~ 110$, we get an optimal code but had
we begun with $000$ and then picked $111$ as our second choice, we would end up
with the maximal
but non-optimal code $\{000, 111\}$, all dressed up and nowhere to go, as the
atheist's tombstone said.
|
H: Triple integral over a region.
I'm trying to find the volume of $D=\{(x,y,z): \displaystyle\frac{x^2}{a^2} + \displaystyle\frac{y^2}{b^2} + \displaystyle\frac{z^2}{c^2} \leq 1\}$.
I use the change of variables $x=ap\cos\theta\sin\phi$, $y=bp\sin\theta\sin\phi$, $z=cp\cos\phi$ and i have that the region of integration after changing variables should satisfy $p^2\cos^2\theta\sin^2\phi + p^2\sin^2\theta\sin^2\phi + p^2\cos^2\phi$ = $p^2(\sin^2\phi + \cos^2\phi) = p^2 \leq 1$.
After that i'm stuck, how can i find the new region of integration? I know that the new region after the change of variables will be $E= \{p,\theta,\phi : 0\leq p \leq 1, 0 \leq \theta \leq 2\pi, 0 \leq \phi \leq \pi\}$ but i don't know "why" -and i use "why" because i can see that every point in $E$ satisfies $p^2 \leq 1$, what i don't know is how you get the set $E$ in the first place-.
AI: I would use the change of variables $x = au$, $y = bv$, and $z = cw$. The transformation $\mathbf r(u,v,w) = (x,y,z)$ transforms the unit ball $B$ into $D$ and
$dV = abc\, dudvdw$. Thus
$$
\mathrm{Vol} (D) = \iiint_D \, dxdydz = \iiint_B abc \, dudvdw = \frac{4abc\pi}{3}.
$$
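The closed form is easy to double-check numerically (a Monte Carlo sketch; the helper name and parameters are my own):

```python
import random
from math import pi

def ellipsoid_volume_mc(a, b, c, n=200_000, seed=0):
    """Monte Carlo estimate of Vol(x^2/a^2 + y^2/b^2 + z^2/c^2 <= 1),
    sampling uniformly from the bounding box [-a,a] x [-b,b] x [-c,c]."""
    rng = random.Random(seed)
    hits = sum(
        (rng.uniform(-a, a) / a) ** 2
        + (rng.uniform(-b, b) / b) ** 2
        + (rng.uniform(-c, c) / c) ** 2 <= 1
        for _ in range(n)
    )
    return 8 * a * b * c * hits / n   # box volume times hit fraction

a, b, c = 1.0, 2.0, 3.0
est, exact = ellipsoid_volume_mc(a, b, c), 4 * pi * a * b * c / 3
print(est, exact)
```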
|
H: Probability of winning.
There is a game with three types of cards.
Win cards: if you get one of these, you win.
Lose cards: if you get one of these, you lose.
Revive cards: if you get one of these, you draw another card.
Initially we have a deck of cards of all three types: p win cards, q lose cards, and r revive cards. The deck is well shuffled.
Then we take the first s cards of the deck (s < p + q) and discard them. What is the probability that I win the game now?
This is an excercise from my book. How do we approach this problem?
AI: The probability that the first win card comes before the first lose card is unchanged at $\dfrac{p}{p+q}$.
If that is not clear, imagine taking the discarded cards and putting them at the bottom of the deck; the deck is still uniformly shuffled, so the probability that the first win card precedes the first lose card is unchanged.
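The invariance is easy to see in simulation (a sketch with made-up parameters; the revive cards drop out exactly as described):

```python
import random

def play(p, q, r, s, rng):
    """One game: shuffle p win, q lose, r revive cards, discard the top s.
    A revive card just means 'draw again', so the outcome is decided by the
    first win/lose card remaining in the deck."""
    deck = ['W'] * p + ['L'] * q + ['R'] * r
    rng.shuffle(deck)
    for card in deck[s:]:
        if card != 'R':
            return card == 'W'
    raise RuntimeError("impossible while s < p + q")

def estimate(p, q, r, s, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(play(p, q, r, s, rng) for _ in range(trials)) / trials

print(estimate(5, 3, 4, 6), 5 / (5 + 3))   # both close to 0.625
```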
|
H: Summing Finitely Many Terms of Harmonic Series: $\sum_{k=a}^{b} \frac{1}{k}$
How do I calculate sum of a finite harmonic series of the following form?
$$\sum_{k=a}^{b} \frac{1}{k} = \frac{1}{a} + \frac{1}{a+1} + \frac{1}{a+2} + \cdots + \frac{1}{b}$$
Is there a general formula for this? How can we approach this if not?
AI: You can't find a general formula.
All you can do is the use the
standard asymptotic formula
for the harmonic sum
$$H_n = \sum_{k=1}^n \frac1k
= \ln n + \gamma
+\frac1{2n} -\frac1{12n^2} + \frac1{120n^4} + ...
$$
where
$\gamma \approx 0.5772156649$
is the
Euler–Mascheroni constant.
Your sum would be
$H_b - H_{a-1}$.
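A small sketch (names mine) comparing the exact value of $H_b - H_{a-1}$ with the asymptotic expansion above:

```python
from fractions import Fraction
from math import log

GAMMA = 0.5772156649015329   # Euler–Mascheroni constant

def H_exact(n):
    """Exact harmonic number H_n as a fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def H_asymptotic(n):
    """The expansion ln n + gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4)."""
    return log(n) + GAMMA + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

a, b = 50, 1000
exact = float(H_exact(b) - H_exact(a - 1))
approx = H_asymptotic(b) - H_asymptotic(a - 1)
print(exact, approx)   # agree to roughly 12 decimal places
```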
|
H: Irreducibility of a topological space
Let $X$ be a topological space and $X=X_1 \cup X_2$ with $X_1, X_2$ nonempty open irreducible subsets. Then $X$ is irreducible iff $X_1 \cap X_2 \ne \emptyset$.
The easy part: if it were $X_1 \cap X_2 = \emptyset$ then we would have
$$
X = (X \setminus X_1) \cup (X \setminus X_2)
$$
and this is impossible since $X$ is irreducible, so it can't be written as a union of two proper closed subsets.
The otherway gives me some problems. Suppose by contradiction $X=C_1 \cup C_2$ with $C_i$ proper closed subsets. Then... what can I do?
Could you please provide any hints, please? Thanks.
AI: HINT: If $X$ is not irreducible, there are non-empty open sets $U$ and $V$ such that $U\cap V=\varnothing$. Clearly $(U\cap X_i)\cap(V\cap X_i)=\varnothing$ for $i=1,2$, so without loss of generality $U\cap X_2=\varnothing=V\cap X_1$. (Why?) Now consider the open subsets $U\cap X_1$ (which is just $U$) and $X_1\cap X_2$ of $X_1$.
|
H: Volume of a rotated region $y = x^2 + 2$
I have no idea what the best method to do this is but I chose the shell method.
$y = x^2 + 2$
about $y = -2$
So I convert to f(y) since it rotates on a horizontal axis.
$x = \sqrt{y-2}$
I know the bound are 0 to 4 for either x or y, it doesn't really matter.
I know that the -2 term is lower than the lowest bound so that I will substract that term.
$$2\pi \int y(\sqrt{y-4} + 2)dy$$
From here I am stuck, I do not know how to integrate that. Did I make a mistake?
AI: Take $f(x) = y = x^2 + 2$.
If you are given the bounds for $x$ between $0$ and $4$, those bounds are not the same bounds as $y$, since we are not given $y = x$. Rather, the bounds for $y$ are $f(0)= y = 2$ to $f(4) = y =18$. I will proceed using those bounds for $y$.
Note also that $y = x^2 + 2$ corresponds to $x = \sqrt{y - 2}$.
So our desired integral is $$2\pi \int_2^{18} (y+2)\sqrt{y - 2} \,dy$$
You can use substitution: Let $$u = y - 2 \iff y = u+2 \implies dy = du$$
$$y+2 = (u+2)+2 = u+4$$
And our new bounds are then $u \in [0, 16]$, since, when $y = 2,\;u = 2 - 2 = 0$, and when $y = 18, \; u= 18-2 = 16$.
Substituting gives us $$\begin{align} 2\pi\int_0^{16} (u+4)u^{1/2} \,du \quad & = \quad 2\pi\int_0^{16} \left(u^{3/2} + 4u^{1/2}\right) \,du \\ \\ & = \quad 2\pi\left(\frac{2u^{5/2}}{5} + \frac{8u^{3/2}}{3}\right)\Big|_{u = 0}^{u =16}\end{align}$$
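A quick numerical cross-check of this antiderivative (midpoint quadrature; helper names are mine):

```python
from math import pi, sqrt

def shell_integrand(y):
    return (y + 2) * sqrt(y - 2)   # shell radius (y+2) times height sqrt(y-2)

def midpoint(f, lo, hi, n=100_000):
    """Composite midpoint-rule quadrature."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

numeric = 2 * pi * midpoint(shell_integrand, 2, 18)
closed_form = 2 * pi * (2 * 16**2.5 / 5 + 8 * 16**1.5 / 3)  # antiderivative at u = 16
print(numeric, closed_form)
```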
|
H: If $m \equiv n \pmod{A}$, then $s^m \equiv s^n \pmod{A}$?
I'm kind of stuck with the following assignment:
Prove: If $m \equiv n \pmod{A}$, then $s^m \equiv s^n \pmod{A}$
I tried $m = k_1 \times A + r$ ,
and $n = k_2 \times A + r$ ,
then $s^m = s^{k_1 \times A + r}$,
but not sure how to proceed ...
Really appreciate any hints. Thanks a lot.
AI: This is false as stated - consider $A=3$, $m=4$, $n=1$, and $s=2$. We have $$4\equiv 1\bmod 3$$ but $$16\not\equiv 2\bmod 3.$$
|
H: Name of the $(-1)^n$ function?
Does the function $f\left(n\right)=\left(-1\right)^n, n \in \mathbb{Z}$ used in a lot of mathematical formulas have a special name ?
EDIT: The context of this question is that I need a name for this function in a software.
AI: Not really. One just writes it out like that, for example, $$\sum (-1)^n\cdot a^n$$
not $$\sum \operatorname{some\;name}(n)\cdot a^n$$
Just imagine how cumbersome things would get!
ADD If you need a name for it, choose a sensible one and make it clear. I'd take $\text{alt}(n)$.
|
H: How to write boolean expressions as linear equations
I want to convert a set of boolean expressions to linear equations. In some cases, this is easy. For example, suppose $a, b, c$ $\in$ {0,1}. Then if the boolean expression is: $a$ $\ne$ b, I could use the linear equation $a + b = 1$.
To give a more complicated example, suppose I'm dealing with the boolean expression $a=b$ $\wedge$ $c$. I could describe this expression with: $-1$ $\le$ $2b+2c-4a$ $\le$ $3$.
Does that make sense?
Now how would I convert a=$b$ $\vee$ $c$? Any ideas?
Thanks for considering!
K
AI: You can translate $x \land y$ directly to $x+y=2$, and $x \lor y$ to $1 \le x+y \le 2.$
Also $\lnot x$ can be encoded as $x \le 0$: since $x \in \{0,1\}$, this forces $x = 0$. This gives a basis for translation into equations or inequalities.
For $a=b \land c$ I found the compact version $b + c - 1 \le 2a \le b + c$: it forces $a=1$ when $b=c=1$ and $a=0$ otherwise. (The $2$ on the left reflects the two variables on the right side; note that the bare equality $2a=b+c$ does not work, since no $a \in \{0,1\}$ satisfies it when exactly one of $b,c$ equals $1$.)
Fiddling around I also found for $a=b \lor c$ the inequality $2a-1 \le b+c \le 2a$, based on using the above inequality for $x \lor y.$
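These encodings are easy to verify by brute force over all $0/1$ assignments (the double-inequality form of AND, $b+c-1\le 2a\le b+c$, is the standard linear-programming encoding; the helper name is mine):

```python
from itertools import product

def check(encoding, boolean):
    """Does the arithmetic encoding agree with the boolean definition
    on every 0/1 assignment of (a, b, c)?"""
    return all(encoding(a, b, c) == boolean(a, b, c)
               for a, b, c in product((0, 1), repeat=3))

# a = b OR c   encoded as   2a - 1 <= b + c <= 2a
assert check(lambda a, b, c: 2*a - 1 <= b + c <= 2*a,
             lambda a, b, c: a == (b or c))

# a = b AND c  encoded as   b + c - 1 <= 2a <= b + c
assert check(lambda a, b, c: b + c - 1 <= 2*a <= b + c,
             lambda a, b, c: a == (b and c))

# the bare equality 2a = b + c is NOT an exact encoding of a = b AND c
assert not check(lambda a, b, c: 2*a == b + c,
                 lambda a, b, c: a == (b and c))
```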
|
H: How to construct a vector field under these conditions?
Task: Construct a vector field $v:\mathbb{R}^2\rightarrow \mathbb{R}^2$ such that all the circles that "touch" y-axis in the origin (i.e. derivative in $(0,0)$ is in form $(0,a)$ for some $a$) are field lines of this field, and this vector field has to be continuously differentiable.
My thoughts:
It could look like this (up to direction):
At first I consider the case $x>0$ (the right half of the plane). My idea was to parametrize each circle that "touches" y-axis as follows: $$\begin{align}x-r&=r\cdot \cos(t)\\y&=r\cdot \sin(t)\end{align}$$ and it implies
$$\begin{align}x&=r\cdot( \cos(t)+1)\\y&=r\cdot \sin(t)\end{align}$$
Also for each point $(x,y)$ in $\mathbb{R}^2$ ($x>0$) there is only one $r$ such that $(x-r)^2+y^2=r^2$:
$$\begin{align}
(x-r)^2+y^2&=r^2\\
x^2-2xr+r^2+y^2&=r^2\\
x^2-2xr+y^2&=0
\end{align}
$$
and thus $r=\frac{x^2+y^2}{2x}$.
If I want all those circles to be the field lines, I have to set the values of $v$ to be the derivatives of the curve: $$\frac{d}{dt}c(t)=\frac{d}{dt}(r\cdot(\cos(t)+1),r\cdot\sin(t))=(-r\sin(t),r\cos(t))$$
So the vector field $v$ could be $$v(x,y)=v(r\cdot(\cos(t)+1),r\cdot\sin(t))=(-r\sin(t),r\cos(t))=(-y,x-r)=(-y,x-\frac{x^2+y^2}{2x})=(-y,\frac{x^2-y^2}{2x})$$
This $v$ is actually the function plotted above. So the question is now whether this function is continuously differentiable or not. The critical point is when $x$ is close to $0$. If we take a look at partial derivative, then
$$\partial_xv=(0,\frac{x^2+y^2}{2x^2})$$
and it diverges when $x\rightarrow 0$, so it seems that this $v$ is not suitable, but I think the real $v$ should be somehow similar to the given $v$.
But these are only my considerations that for sure can be wrong.
I'd appreciate your help.
AI: The circles centered at a point $(0,a)$ passing through the origin have equations $(x-a)^2+y^2=a^2.$ There is a standard way to "go backwards" from this and get a differential equation. One takes the implicit derivative, solves for $a$, and plugs the result back into the circle equation. This gives
$$2xyy'+x^2-y^2=0.$$
Then the line field is obtained on solving for $y'$.
Note that the solved version
$$y'=\frac{y^2-x^2}{2xy}$$ makes it clear the line field is singular only on the axes.
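One can sanity-check this slope field numerically against the circles themselves (a small sketch; function names are mine):

```python
from math import sqrt

def slope_field(x, y):
    """y' = (y^2 - x^2) / (2 x y)"""
    return (y*y - x*x) / (2*x*y)

def circle_y(x, a):
    """Upper half of (x - a)^2 + y^2 = a^2, defined for 0 < x < 2a."""
    return sqrt(a*a - (x - a)**2)

h = 1e-6
for a in (0.5, 1.0, 3.0):
    for x in (0.3*a, a, 1.6*a):
        # finite-difference slope along the circle vs. the field value there
        fd_slope = (circle_y(x + h, a) - circle_y(x - h, a)) / (2*h)
        assert abs(fd_slope - slope_field(x, circle_y(x, a))) < 1e-4
print("field matches the circles")
```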
|
H: Are there infinitely many $n$ such that $n$ and $2n+1$ are both prime numbers?
It seems that there are many such $n$. For example, the following are such $n$'s under $4000$.
2 3 5 11 23 29 41 53 83 89 113 131 173 179 191 233 239 251 281 293 359 419 431 443 491 509 593 641 653 659 683 719 743 761 809 911 953 1013 1019 1031 1049 1103 1223 1229 1289 1409 1439 1451 1481 1499 1511 1559 1583 1601 1733 1811 1889 1901 1931 1973 2003 2039 2063 2069 2129 2141 2273 2339 2351 2393 2399 2459 2543 2549 2693 2699 2741 2753 2819 2903 2939 2963 2969 3023 3299 3329 3359 3389 3413 3449 3491 3539 3593 3623 3761 3779 3803 3821 3851 3863 3911
But my question is, is it known that there are infinitely many such $n$?
AI: Such primes are known as Sophie Germain primes. It is unknown whether there are infinitely many, but we nonetheless conjecture that up to $x$ there are asymptotically
$$
C \frac{x}{(\log x)^2}
$$
such primes, where $C$ is a certain absolute constant. Sophie Germain primes were first studied by Sophie Germain in connection with Fermat's Last Theorem. She proved that if $p$ is a Sophie Germain prime, then $x^p + y^p = z^p$ has no integer solutions with $p \nmid xyz$ (the first case of Fermat's Last Theorem).
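The list in the question is easy to reproduce with a sieve (a small sketch; helper names are mine):

```python
def primes_below(n):
    """Sieve of Eratosthenes: boolean primality table for 0..n-1."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

def sophie_germain_below(limit):
    """All p < limit with both p and 2p + 1 prime."""
    is_prime = primes_below(2 * limit + 2)
    return [p for p in range(2, limit) if is_prime[p] and is_prime[2*p + 1]]

sg = sophie_germain_below(4000)
print(sg[:10])   # matches the start of the list in the question
```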
|
H: Finding the inverse of a matrix using elementary matricies
Can somebody help me understand what exactly is being asked here? I understand how to construct elementary matrices from these row operations, but I'm unsure what the end goal is. Am I to assume that $Y$ is built from these row operations?
Let $Y$ be the $4\times 4$ matrix which applies the following operations, in the order given, when used as a left-multiplier:
divide $R_{2}$ by $3$,
add $2R_{1}$ to $R_{4}$,
swap $R_{2}$ and $R_{4}$, then
subtract $R_{4}$ from $R_{3}$.
Find $Y^{-1}$ without calculating $Y$.
If I were to venture a guess, I would say that it's implying that $E_{4}E_{3}E_{2}E_{1}=Y$, and therefore I need to find $Y^{-1}=E^{-1}_{1}E^{-1}_{2}E^{-1}_{3}E^{-1}_{4}$. But the wording of the question makes me not 100% sure.
I've found \begin{align*}
E_{4}E_{3}E_{2}E_{1}=\left[\begin{array}{rrrr}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & -1 \\
0 & 0 & 0 & 1
\end{array}\right]\left[\begin{array}{rrrr}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0
\end{array}\right]\left[\begin{array}{rrrr}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
2 & 0 & 0 & 1
\end{array}\right]\left[\begin{array}{rrrr}
1 & 0 & 0 & 0 \\
0 & \frac{1}{3} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}\right]
\end{align*}
so if my assumption is right, I'd just need to build the inverse of the matrices and multiply them in reverse order to get $Y^{-1}$
AI: Yes, that is correct. The inverse of an elementary operation matrix is the operation that reverses that elementary operation:
Add $R_4$ to $R_3$
Swap $R_2$ and $R_4$
Subtract $2R_1$ from $R_4$
Multiply $R_2$ by $3$
Apply these transformations to the identity matrix to find the inverse of $Y$.
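A sketch with exact fractions (helper names are mine) confirming that applying the inverse operations in reverse order really does yield $Y^{-1}$:

```python
from fractions import Fraction

def identity():
    return [[Fraction(int(i == j)) for j in range(4)] for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def scale(i, c):          # R_i <- c * R_i
    def op(M): M[i] = [c * x for x in M[i]]
    return op

def add_row(i, j, c):     # R_i <- R_i + c * R_j
    def op(M): M[i] = [a + c * b for a, b in zip(M[i], M[j])]
    return op

def swap(i, j):           # R_i <-> R_j
    def op(M): M[i], M[j] = M[j], M[i]
    return op

def apply_ops(ops):
    """The matrix that performs the given row operations, in order,
    when used as a left-multiplier (i.e. apply them to the identity)."""
    M = identity()
    for op in ops:
        op(M)
    return M

# Y: divide R2 by 3, add 2 R1 to R4, swap R2 and R4, subtract R4 from R3
Y = apply_ops([scale(1, Fraction(1, 3)), add_row(3, 0, 2),
               swap(1, 3), add_row(2, 3, -1)])
# Y^{-1}: the inverse operations, in reverse order
Yinv = apply_ops([add_row(2, 3, 1), swap(1, 3),
                  add_row(3, 0, -2), scale(1, Fraction(3))])
assert matmul(Y, Yinv) == identity() == matmul(Yinv, Y)
```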
|
H: Maximization of $x^2e^{-(x^4+y^2)}$
My textbook proposes to maximize $f(x, y)=x^2e^{-(x^4+y^2)}$. It begins by showing that it tends to $0$ as $r=\sqrt{x^2+y^2}$ tends to $\infty$.
First it remarks that $x^2e^{-x}\rightarrow 0$ as $x\rightarrow\infty$, and that $e^{-x^4}$ is eventually smaller than $e^{-x}$, therefore $x^2e^{-x^4}$ tends to $0$.
Next it points out that as $y^2\geq0$, we have $e^{-y^2}\leq1$. From this I can see we can conclude that $f(x, y)=x^2e^{-(x^4+y^2)}=x^2e^{-x^4}e^{-y^2}\leq x^2e^{-x^4}$
From this it immediately concludes that
$f(x, y)\rightarrow 0$ as $r\rightarrow\infty$. Why?
This is my first question. While I can sort of see the justification for this, how do you prove it rigorously? I can see that, for instance, with fixed $y$, the limit is verified (because then we're just letting $|x|$ tend to infinity). Inversely, with fixed $x$, I suppose the limit would be verified as well (but I don't think that follows from anything the book explicitly points out - I think we need that $e^{-y}\rightarrow0$). Even given that, how do we know it'll still tend to $0$ if $(x, y)$ becomes large along some other axis? I thought maybe I'd express $(x, y)$ as $tX$, where $t\geq0$ and $X$ is some vector, then letting $t\rightarrow\infty$. This results in an expression which I'm pretty sure does tend to $0$ (haven't worked through it in detail yet). But does that prove the desired proposition? And is there a simpler way of doing it that I'm missing (surely there must be or the book would have explained it in more detail)?
My second question is about what comes right after:
Hence any maximum occurs in a bounded region of the plane.
I'm actually having some difficulty understanding what this even means. That there exists a bounded region such that we can be sure no maximum occurs outside of it? If so, how? If $f$ does tend to $0$, then it's true that there's a bounded disc beyond which $f$ is smaller than, say, $1$. But maybe inside that region, $f$ is even smaller. Intuitively, I suppose that might mean there's a maximum on the boundary of that disc... not sure, though. And again, is there a simpler method that I'm missing?
AI: To see that $x^2e^{-(x^4+y^2)}\to0$ as $(x,y)\to0$, note that $e^{-(x^4+y^2)}\le1$ and $x^2\to0$ as $(x,y)\to0$.
For $x\ge3$, we have that
$$
\frac{(x+1)e^{-(x+1)}}{xe^{-x}}=\frac{x+1}{xe}\le\frac12
$$
Therefore, by induction, we have
$$
\frac{(x+n)e^{-(x+n)}}{xe^{-x}}\le\frac1{2^n}
$$
Thus, for $x\ge3$
$$
(x+n)e^{-(x+n)}\le\frac1{2^n}xe^{-x}
$$
which shows that
$$
\lim_{x\to\infty}xe^{-x}=0
$$
Substitute $x\mapsto2x^4$, divide by $2$, and take the square root to get
$$
\lim_{|x|\to\infty}x^2e^{-x^4}=0
$$
We also have
$$
\lim_{|y|\to\infty}e^{-y^2}=0
$$
Therefore, since $|x|\ge\frac{r}{\sqrt2}$ or $|y|\ge\frac{r}{\sqrt2}$
$$
\lim_{x^2+y^2\to\infty}x^2e^{-(x^4+y^2)}=0
$$
This says that
$$
\color{#C00000}{\lim_{r\to\infty}x^2e^{-(x^4+y^2)}=0}
$$
Note that
$$
x^2e^{-(x^4+y^2)}=x^2e^{-x^4}e^{-y^2}
$$
$e^{-y^2}\le1$ and is $1$ only when $y=0$.
For $x^2e^{-x^4}$, we can just take the derivative:
$$
\frac{\mathrm{d}}{\mathrm{d}x}x^2e^{-x^4}=(2x-4x^5)e^{-x^4}
$$
This vanishes precisely when $x^4=\frac12$ and that is at $\pm\frac1{\sqrt[4]{2}}$. At either of those points,
$$
x^2e^{-x^4}=\frac1{\sqrt2}e^{-1/2}
$$
Since $x^2e^{-x^4}$ tends to $0$ as $x\to0$ and $|x|\to\infty$, this must be the maximum.
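A brute-force numerical check (nothing here is part of the argument above) finds the same maximum value $\frac1{\sqrt2}e^{-1/2}\approx0.4289$ at $x=\pm\frac1{\sqrt[4]{2}}$, $y=0$:

```python
from math import exp

def f(x, y):
    return x * x * exp(-(x**4 + y * y))

# grid search on [-2, 2] x [-1, 1]; f is negligible outside this box
best = max((f(i * 0.002, j * 0.02), i * 0.002, j * 0.02)
           for i in range(-1000, 1001) for j in range(-50, 51))
analytic = 2 ** -0.5 * exp(-0.5)   # f(+/- 2^(-1/4), 0) = e^(-1/2)/sqrt(2)
print(best, analytic)
```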
|
H: Elements $a,b\in L$ of degree $3$ over $K$ such that $a+b$ has degree $6$ over $K$
Find a finite field extension $L/K$ and elements $a,b\in L$ of degree $3$ over $K$ such that $a+b$ has degree $6$ over $K$. Can you give me an example of such a field extension? Thanks.
AI: Hint: Let $K=\mathbb{Q}$ and let $L$ be the splitting field of $x^3-2$ over $K$. Can you determine what $L$ is, prove that $[L:K]=6$, and find appropriate elements $a,b\in L$?
|
H: Partial fraction $\int \frac{dx}{2x^2 - 3}$
$$\int \frac{dx}{2x^2 - 3}$$
I think I have to factor this but I don't know how to. If I can't factor it, is it valid to write
$$\int \frac{dx}{2x^2 - 3} \quad =\quad \int \frac {Ax+ B}{2x^2 - 3}\,dx$$
AI: $$\int \frac{dx}{2x^2 - 3} = \int \frac{dx}{2(x^2 - 3/2)}\,dx = 1/2\int \frac{dx}{x^2 - 3}\,dx$$
We can factor the denominator by writing it as a difference of squares:
$$1/2 \int \left(\frac{A}{x+\sqrt{3/2}}+ \frac{B}{x-\sqrt{3/2}} \right)\,dx$$
Now we just need to determine the constants (numerators): $A, B$.
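For completeness (this finishing step is mine, not part of the original answer), write $c=\sqrt{3/2}$. Clearing denominators in
$$\frac{1}{x^2-c^2}=\frac{A}{x+c}+\frac{B}{x-c}$$
gives $1 = A(x-c)+B(x+c)$; setting $x=c$ yields $B=\frac{1}{2c}$, and $x=-c$ yields $A=-\frac{1}{2c}$. Since $c=\sqrt{3/2}=\frac{\sqrt6}{2}$,
$$\int\frac{dx}{2x^2-3}=\frac12\cdot\frac{1}{2c}\ln\left|\frac{x-c}{x+c}\right|+C=\frac{1}{2\sqrt6}\ln\left|\frac{x-\sqrt{3/2}}{x+\sqrt{3/2}}\right|+C.$$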
|