Ways of writing sets My English is not very good, so I hope you understand.
I don't know how to say it in English, but we studied that there are two ways to write a set.
*
*The first way is simply to list the elements: $\{$you list the elements here, separated by commas$\}$.
*The second way is to give a special property that the elements share.
For example
If you have this set $\{2,3,\ldots,9\}$,
you would say that the special property is $\{x : 1<x<10,\ x \in \mathbb{N}\}$.
Now my Question is
How can I write the special property for the following sets:
*
*$\{3,5,6,9,10,12,15,18,20,21,24\}$
*$\{17,19,23,29,31,37,41,43,47\}$
*$\{1,2,3,5,8,13,21,34,55\}$
I would be grateful if you help
|
The solutions are as follows:
*
*1: $\{ x \in \mathbb{Z} \mid 3 \leq x \leq 24,\ x \equiv 0 \pmod 3 \text{ or } x \equiv 0 \pmod 5 \}$
*3: $\{x(n) \mid 2\leq n \leq 10,\ x(n) = x(n - 1) + x(n - 2),\ x(0) = 0,\ x(1) = 1\}$ — the Fibonacci numbers $x(2),\dots,x(10)$ (note the index must stop at $10$, since $x(11)=89$ is not in the set)
The second set consists of the primes between $17$ and $47$.
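The three defining properties can be checked mechanically. A small sketch in plain Python that rebuilds each set from its property (note the Fibonacci index range must stop at $n=10$, since $x(11)=89$ is not in the listed set):

```python
# Set 1: multiples of 3 or 5, from 3 up to 24
set1 = {x for x in range(3, 25) if x % 3 == 0 or x % 5 == 0}

# Set 2: primes between 17 and 47 (simple trial division)
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

set2 = {x for x in range(17, 48) if is_prime(x)}

# Set 3: Fibonacci terms x(n) for 2 <= n <= 10, with x(0) = 0, x(1) = 1
fib = [0, 1]
for n in range(2, 11):
    fib.append(fib[-1] + fib[-2])
set3 = set(fib[2:])
```

Each comprehension is a direct transcription of the set-builder description into code.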
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find a quadratic equation with integral coefficients whose roots are $\frac{α}{β}$ and $\frac{β}{α}$ The roots of the equation $2x^2-3x+6=0$ are α and β. Find a quadratic equation with integral coefficients whose roots are $\frac{α}{β}$ and $\frac{β}{α}$.
The answer is $4x^2+5x+4=0$
I don't know how to get to the answer. Could someone explain the steps?
|
By Vieta's formulas, $\alpha+\beta=\dfrac32$ and $\alpha\beta=\dfrac62=3$.
let $y=\dfrac\alpha\beta\iff y+1=\dfrac3{2\beta}\iff\beta=\dfrac3{2(y+1)}$
But as $\beta$ is a root of $$2x^2-3x+6=0$$
$$2\left(\dfrac3{2(y+1)}\right)^2-3\left(\dfrac3{2(y+1)}\right)+6=0$$
As $y+1\ne0,$ multiply both sides by $\dfrac{2(y+1)^2}3$ to find $$0=3-3(y+1)+4(y+1)^2=4y^2+5y+4$$
By symmetry, we can surmise that the same equation will be reached if we start with $y=\dfrac\beta\alpha$
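A quick numerical sanity check of the result (assuming NumPy is available): the roots of $2x^2-3x+6=0$ are complex, but $\alpha/\beta$ and $\beta/\alpha$ should still satisfy $4y^2+5y+4=0$.

```python
import numpy as np

# roots of 2x^2 - 3x + 6 = 0 (complex conjugates: discriminant 9 - 48 < 0)
alpha, beta = np.roots([2, -3, 6])

# plug alpha/beta and beta/alpha into the claimed quadratic 4y^2 + 5y + 4
residuals = [abs(4 * y**2 + 5 * y + 4) for y in (alpha / beta, beta / alpha)]
```

Both residuals come out at machine-precision zero, confirming the derivation.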
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
}
|
Solve PDE: $xu_x + yu_y + u_z = u , u(x,y,0)=h(x,y)$ Here is the problem: solve the quasilinear PDE
$xu_x + yu_y + u_z = u , u(x,y,0)=h(x,y)$.
Here is what I did: given $\Gamma: \langle x=s,\ y=s,\ z =0,\ u=h(s)\rangle$,
$dx/dt =x$, $dy/dt = y$, $dz/dt=1$ and $du/dt = u$
$x=se^t$, $y=se^t$, $z=t$, $u=h(s)e^t$. Now I am stuck: I do not know how to get $s$ and the solution. Please help. Thanks
|
The characteristic equations are given by:
$$ dx/x=dy/y=dz/1=du/u.$$
$dx/x=dy/y \implies \ln(x)=\ln(y)+\ln(c_1) \implies x/y = c_1$
$dy/y =dz/1 \implies \ln(y)=z+c_2 \implies c_2 = \ln(y)-z$
$dz/1=du/u \implies z=\ln(u)-\ln(c_3) \implies u = c_3e^{z}$
Now use that $c_3=F(c_1,c_2)$, hence:
$u=F(c_1,c_2)e^{z}=F(x/y,\ln(y)-z)e^{z}$
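One can spot-check the general solution symbolically (SymPy assumed available). Since $F$ is arbitrary, picking any concrete smooth $F$ — here $F(a,b)=\sin(a)+b^2$, a choice of mine, not from the original answer — should make the PDE residual vanish identically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# an arbitrary concrete F to spot-check u = F(x/y, ln(y) - z) * e^z
F = lambda a, b: sp.sin(a) + b**2
u = F(x / y, sp.log(y) - z) * sp.exp(z)

# residual of x*u_x + y*u_y + u_z = u
residual = sp.simplify(x * sp.diff(u, x) + y * sp.diff(u, y) + sp.diff(u, z) - u)
```

The chain-rule terms cancel exactly, so `residual` simplifies to zero.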
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If α and β are the roots of the equation $3x^2+5x+4=0$, find the value of $α^3+β^3$ If α and β are the roots of the equation $3x^2+5x+4=0$, find the value of $α^3+β^3$
How can I factorize the expression to use the rule of sum and product of roots?
The answer is $\frac{55}{27}$
|
Using the general form of a monic quadratic with roots $a$ and $b$, $x^2 -(a+b) x + ab = 0$ (divide the given equation by $3$ first),
we get the values $a+b = -\dfrac53$ and $ab = \dfrac43$.
Now, the expression $a^3+b^3$ can be reduced to $(a+b)^3 -3ab(a+b)$.
Substitute the value of $a+b$ and $ab$ in the above equation.
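Carrying out the substitution in exact arithmetic (plain Python, `fractions` from the standard library) reproduces the quoted answer $\frac{55}{27}$:

```python
from fractions import Fraction

# from 3x^2 + 5x + 4 = 0, divide by 3: x^2 + (5/3)x + 4/3 = 0,
# so a + b = -5/3 and ab = 4/3
s = Fraction(-5, 3)   # a + b
p = Fraction(4, 3)    # ab

# a^3 + b^3 = (a + b)^3 - 3ab(a + b)
value = s**3 - 3 * p * s
```

Here $(-\tfrac53)^3 = -\tfrac{125}{27}$ and $-3\cdot\tfrac43\cdot(-\tfrac53) = \tfrac{180}{27}$, giving $\tfrac{55}{27}$.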
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Constructing a divergent series Please would you help me with this question? I've been thinking about it for ages but I've made very little headway, so if possible a hint would be ideal.
Let $\sum_{n=1}^∞{x_n}$ be a divergent series, where $x_n > 0$ for all $n$. Show that there is a divergent series $\sum_{n=1}^∞{y_n}$ with $y_n > 0$ for all $n$, such that $(\frac{y_n}{x_n}) → 0.$
I have not been taught analysis formally, hence my lack of progress. I know to consider the series as a sequence of partial sums, and I tried to take the contrapositive of the statement but that just overcomplicated matters. I know I don't have many ideas to present but I have been trying this for days.
Thank you in advance.
|
Based on this result:
If the positive series $\sum a_n$ diverges and $s_n=\sum\limits_{k\leqslant n}a_k$ then $\sum \frac{a_n}{s_n}$ diverges as well
Let's take $S_n=\sum\limits_{k=1}^n x_k\to+\infty$; then $\displaystyle y_n=\frac{x_n}{S_n}$ meets your requirements.
*
*$x_n>0\implies S_n>0\implies y_n>0$
*$\displaystyle \frac{y_n}{x_n}=\frac 1{S_n}\to 0$
*$\sum\ y_n$ diverges
For instance the classical divergent series $x_n=\frac 1n\ $ gives $\ y_n\sim\frac 1{n\ln(n)}$
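A numeric illustration of that last example (plain Python): with $x_n = 1/n$, the partial sums of $y_n = x_n/S_n$ keep growing while the ratio $y_n/x_n = 1/S_n$ shrinks toward $0$.

```python
N = 200_000
S = T = 0.0                      # S_n = sum of x_k;  T_n = sum of y_k
checkpoints = {}
for n in range(1, N + 1):
    x = 1.0 / n                  # the classical divergent series x_n = 1/n
    S += x
    T += x / S                   # y_n = x_n / S_n
    if n in (100, N):
        checkpoints[n] = (T, 1.0 / S)   # (partial sum of y, ratio y_n / x_n)
```

Since $S_n \sim \ln n$, the ratio decays only logarithmically — slow, but enough.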
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Integral Representation of the Dottie Number I noticed that a lot of commonly-used mathematical constants that can't be expressed in closed-form can be expressed by integrals, such as
$$\pi=\int_{-\infty}^\infty \frac{dx}{x^2+1}$$
and
$$\frac{1}{1+\Omega}=\int_{-\infty}^\infty \frac{dx}{(e^x-x)^2+\pi^2}$$
I was wondering if anyone knows how to express the Dottie Number $\omega$, or the unique solution to the equation
$$\cos(\omega)=\omega$$
using an integral.
In general, what are some strategies for expressing constants as integrals? I'm also struggling to express the reciprocal fibonacci constant as an integral (but don't tell me how to do that one).
|
You seem to have provided an answer to your own question. Thank you. I will bookmark this post; it may help me in some of my own work. I'd like to add a little more information; it may suggest other ways to tackle your goal, perhaps direct you to a more concise solution.
The Dottie Number (D) also happens to be the solution of Kepler's Equation of Elliptical Motion (it satisfies the "equal area swept out in equal time" condition) at the quarter period for e = 1.
i.e.,
$$ E - e \sin(E) = M $$
$$ E - \sin(E) = \frac\pi 2 $$
(Aside #1: When e = 1, the conic is a parabola, suggesting D can be expressed in the form
$$ D = a b^2 $$
where $a$ and $b$ are constants yet to be determined.)
Taking the sine of both sides of $E = \frac\pi2 + \sin(E)$, Kepler's Equation reduces to
$$ \cos(\sin(E)) = \sin(E) $$
This is the defining equation of the Dottie Number, with $\sin(E) = D$.
I bring up this connection because Kepler's Equation can be expressed in terms of a Bessel Function of the First Kind ($J_k$):
$$ E = M + 2 \sum_{k=1}^\infty \frac{1}{k} J_k (ke) \sin(kM) $$
It's an infinite series, not an integral, but perhaps it helps you.
(Aside #2: The Dottie Number is just a special case of a more general equation:
$$ \cos(k x) = x $$
where $0 \le k \le 1$. The Dottie Number is the solution to the equation for $k = 1$. A plot of the solutions to this equation for $0 \le k \le 1$ is tantalizingly close to a graph of $\operatorname{sech}(0.817326346581\,k)$. Since $E$ is periodic and a function of itself, I suppose a solution should include sine and exponential terms, but I haven't found an exact relationship. Perhaps it involves fractional derivatives.)
(Aside #3: There used to be a blog solely about the Dottie Number. Have you come across it? It has a lot of good information about the Dottie Number, for example, Bertrand's Semicircle.)
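The Kepler connection is easy to verify numerically (plain Python): compute $D$ by fixed-point iteration of $\cos$, then check that $E = \frac\pi2 + D$ satisfies both $E - \sin E = \frac\pi2$ and $\sin E = D$.

```python
import math

# Dottie number via fixed-point iteration of cos (a contraction near D,
# since |sin D| < 1, so this converges to machine precision)
D = 1.0
for _ in range(200):
    D = math.cos(D)

# Kepler relation at e = 1, M = pi/2:  E - sin(E) = pi/2 with sin(E) = D,
# i.e. E = pi/2 + D, because sin(pi/2 + D) = cos(D) = D
E = math.pi / 2 + D
```

The iteration gives $D \approx 0.739085$ and $E \approx 2.3098$.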
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 0
}
|
Certain property of finite field $F_p$ Let $F_p$ be a finite field of order $p$ where $p$ is a prime number. Let $\{ \alpha_1, \ldots, \alpha_{p-1} \}$ be a multi-set with $\alpha_i \in F_p$ and each $\alpha_i$ non-zero.
I want to show that $$\sum_{i\in K} \alpha_i = -1$$ for some subset $K \subseteq \{1,\ldots, p-1\}$.
I am stuck at this. Any hint(s) would be appreciated.
|
*
*Since any element $\alpha \in \mathbf{F}_p^*$ generates $(\mathbf{F}_p,+)$, the only subset $S \subset \mathbf{F}_p$ such that $S= S \cup (\alpha+S) \bmod p$ is $\mathbf{F}_p$.
*If all the $\alpha_i$ are the same element $\alpha$, take $b \in \{1,\dots,p-1\}$ with $b \equiv -\alpha^{-1} \bmod p$, so that $\sum_{i=1}^b \alpha_i = b\alpha \equiv -1$.
*Otherwise wlog. $\alpha_1 \ne \alpha_2$, so that $S_2 = \{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$ contains $3$ elements, and $S_{i+1} = S_i \cup ( \alpha_{i+1}+S_i)$ contains at least $i+2$ elements (or is already all of $\mathbf{F}_p$, by the first bullet); therefore $S_{p-1}$ has at least $p$ elements, i.e. $S_{p-1}=\mathbf{F}_p$, and in particular contains $-1$.
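For small primes the claim can be verified exhaustively — a brute-force sketch (plain Python) that checks every multiset of $p-1$ nonzero residues for $p \in \{3,5,7\}$:

```python
from itertools import combinations, combinations_with_replacement

def has_subset_summing_to(multiset, target, p):
    """Is there a nonempty sub-multiset whose sum is target mod p?"""
    idx = range(len(multiset))
    for r in range(1, len(multiset) + 1):
        for K in combinations(idx, r):
            if sum(multiset[i] for i in K) % p == target:
                return True
    return False

# exhaustively check every multiset of p-1 nonzero residues (small p only)
results = {p: all(has_subset_summing_to(ms, p - 1, p)   # -1 ≡ p-1 (mod p)
                  for ms in combinations_with_replacement(range(1, p), p - 1))
           for p in (3, 5, 7)}
```

For $p=7$ this is $\binom{11}{6}=462$ multisets with $63$ nonempty subsets each — instant, and every one succeeds.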
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2446815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving that this series has a finite sum Consider the following series
$$\sum_{n=1}^{\infty}\dfrac{\log n}{n(n-1)}$$
I have tried to use the ratio test, but then I would get
$$\dfrac{(n-1)\log(n+1)}{(n+1)\log n}$$
And taking the limit as $n \to \infty$ would yield 1 so I don't think it would help.
|
Note that $\log(n)\le \sqrt{n}$ for all $n\ge 1$ (also note the sum must start at $n=2$, since the $n=1$ term is undefined). Hence, we have
$$\left|\frac{\log(n)}{n(n-1)}\right|\le \frac{1}{n^{1/2}(n-1)},$$
and the right-hand side behaves like $n^{-3/2}$.
Can you finish now?
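A quick numerical check of the comparison and of convergence (plain Python, starting the sum at $n=2$):

```python
import math

# partial sums of sum_{n>=2} log(n) / (n(n-1))
def partial(N):
    return sum(math.log(n) / (n * (n - 1)) for n in range(2, N + 1))

s4, s5 = partial(10**4), partial(10**5)

# the comparison bound log(n) <= sqrt(n) on an initial range
bound_ok = all(math.log(n) <= math.sqrt(n) for n in range(2, 1000))
```

The partial sums increase but the tail beyond $n=10^4$ contributes only about $10^{-3}$, consistent with a convergent $n^{-3/2}$-type tail.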
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
First-order sentence involving only a symmetric binary relation with only infinite models It's known that there are sentences of first-order logic which only have infinite models, even if our language consists only of a binary relation $R$. An example of such a sentence is
$$\forall x \exists y Rxy \wedge \forall x \forall y \forall z ((Rxy \wedge Ryz ) \to Rxz) \wedge \neg \exists x Rxx$$
What I'm curious about is whether there are still sentences with only infinite models when we mandate that our binary relation be symmetric?
To phrase the question formally, consider a first-order language $\mathcal{L} = \{R^{2}\}$ (without equality). Let $\phi$ denote the formula
$$\forall x \forall y (Rxy \to Ryx)$$
Is there a sentence $\psi$ of $\mathcal{L}$ such that $\phi \wedge \psi$ has an infinite model, but no finite models?
|
Let us define the formulas:
$$\kappa_3(x)=\exists u\exists v(Rxu\land Rxv\land Ruv)$$
$$\kappa_4(x)=\exists u\exists v\exists w(Rxu\land Rxv\land Rxw\land Ruv\land Ruw\land Rvw)$$
$$\alpha(x)=\neg\kappa_3(x)$$
$$\beta(x)=\kappa_3(x)\land\neg\kappa_4(x)$$
$$\gamma(x)=\kappa_4(x)$$
$$\sigma(x,y)=\alpha(x)\land\alpha(y)\land\exists u\exists v(Rxu\land Ruv\land Rvy\land\beta(u)\land\gamma(v))$$
Let $\psi$ be the conjunction of the sentences:
$$\forall x\forall y\forall z(\sigma(x,y)\land\sigma(y,z)\to\sigma(x,z))$$
$$\forall x\neg\sigma(x,x)$$
$$\exists x\alpha(x)$$
$$\forall x\exists y(\alpha(x)\to\sigma(x,y))$$
Plainly, $\psi$ has no finite model. On the other hand, it is a straightforward exercise to construct an infinite model of $\phi\land\psi.$
Intuition behind this example. The (irreflexive) models of $\phi$ are just (undirected) graphs. The problem is to construct an asymmetric relation on an undirected graph. Given a vertex $x$ let $f(x)$ denote the maximum number of vertices in a clique containing $x$. For vertices $x$ and $y$, define $x\lt y$ to mean that $f(x),f(y)\le2$ and there is a path $x,u,v,y$ with $f(u)=3$ and $f(v)\ge4$. Then we can write a first order sentence in the language of graph theory which says that the relation $\lt$ restricted to the set $\{x:f(x)\le2\}$ is an irreflexive transitive relation with no greatest element.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
If a $10$-element subset whose sum is 155 is removed from $\{1,2,...,30\}$, then it's always possible to split the remaining set into equal parts. In other words, I'd like to prove that, given the set $\{1,2,...,30\}$, if we designate a $10$-element subset whose sum is 155 (for example $\{1,2,3,4,5,26,27,28,29,30\}$), then the remaining 20 elements can always be split into two 10-element subsets whose sums are equal (in our example, $\{6,8,10,12,14,17,19,21,23,25\}$ and $\{7,9,11,13,15,16,18,20,22,24\}$).
So far all I have is that the sum $1+2+...+30 = 465 = 3\cdot 155$, so all three subsets in the problem will have a sum of $155$. After that, I can't think of anything. Could someone please give me a hint to push me in the right direction? It doesn't seem like a very complicated problem, so that would be more than enough.
|
So first pair up the numbers as $r, 31-r$ and note that any such pair can replace any other without changing the sum of a subset.
You are given a set of ten elements adding to $155$. Consider constructing a second set as made up of elements $31-r$ where $r$ is in the first set. The sum of elements in both is then $310=2\times 155$, but there may be elements in the second set which are also in the first. Show that you can use the properties of pairs (and that there are enough pairs) to eliminate duplicates from the second set without changing its sum. The remaining elements form the third set, will have the right sum (and will, by this method, consist of five pairs).
I have deleted the full answer because you wanted a hint.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Card guessing game There is a pile of $52$ cards, $13$ in each suit (diamonds, clubs, hearts, spades). The cards are turned over one at a time, and before each card is revealed the player must guess its suit. Suppose the player always guesses a suit that has the most cards remaining (if more than one suit is tied for the most, he guesses one of these). Show that he will make at least thirteen correct guesses.
Attempt: At first, the probability for each suit is $\frac{1}{4}$. If the first card is, say, a diamond then, for the second card, the probability of diamonds, clubs, hearts, spades are $\frac{12}{51}, \frac{13}{51}, \frac{13}{51}, \frac{13}{51}$. So the player should guess the suit that has most cards, but I don't know how to show that he will make at least thirteen correct guesses.
|
This is based on 5xum's answer, but, I think, with a clearer explanation...
Initially all four suits have an equal number of cards. Let us assume we are very unlucky and guess incorrectly until there is only one suit left with 13 cards. We will then keep guessing that suit until the first card of that suit is drawn, and we have one correct guess.
We then have at most 12 cards in any one suit, and we repeat the logic: if there are multiple suits with 12 cards, assume we guess incorrectly until only one suit has 12 cards left in it. We will then be guessing that suit until a card is drawn from it. We then have 2 correct guesses, and every suit has at most 11 cards left.
We keep repeating that logic until we are at 12 correct guesses and at most one card left in any suit. We then guess wrong until the last card, which we are guaranteed to guess correctly; this gives us our 13 correct guesses.
This is obviously the worst case scenario since we assumed incorrect guesses unless we were guaranteed a correct guess. Obviously we could have done a lot better if some of those guesses were correct.
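The bound is easy to stress-test by simulation (plain Python): play the majority-suit strategy on many random shuffles and count correct guesses. The theorem says the count never drops below 13.

```python
import random

def greedy_correct_guesses(deck):
    """Play the majority-suit strategy on one shuffled deck; return #correct."""
    remaining = {s: 13 for s in "DCHS"}
    correct = 0
    for card in deck:
        guess = max(remaining, key=remaining.get)  # a suit with most cards left
        if guess == card:
            correct += 1
        remaining[card] -= 1
    return correct

random.seed(0)
proto = list("DCHS") * 13
results = []
for _ in range(500):
    deck = proto[:]
    random.shuffle(deck)
    results.append(greedy_correct_guesses(deck))
```

In practice the strategy usually does noticeably better than 13; the worst case above is only attained by adversarial orderings.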
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
}
|
Separable algebras over a non-commutative ring
What is the 'correct' definition of a separable algebra over a non-commutative ring? Are there known results about such algebras? Examples?
Recall that one of the equivalent definitions of a separable algebra $A$ over a commutative ring $R$ says that $A$ is a projective $A \otimes_R A^{op}$-module.
Also recall that there are several ways to define an algebra over a non-commutative ring; see the following two questions: 1 and 2.
Remarks:
1) An example for commutative $R$ and $A$ is: $R=k[p,q]$, $A=k[x,y]$,
where $k$ is a field of characteristic zero and $p,q \in k[x,y]$ have an invertible Jacobian; see Theorem 38.
I wonder what can be said in the non-commutative analog, where $A$ is the first Weyl algebra (generated by $X$ and $Y$) and $R$ is its sub-algebra generated by the images of $X$ and $Y$ under an endomorphism.
2) If $R$ is non-commutative (and $A$ is non-commutative), then is there a problem with $A \otimes_R A^{op}$?
Defining the tensor product of a left $R$-module $A$ with a right $R$-module $B$ over a non-commutative $R$ seems ok, but perhaps what we get is not a ring, but only a group? (I may be wrong).
|
The standard definition is: an inclusion of algebras $R\subseteq A$ is a separable extension of algebras if the map $\mu:A\otimes_RA\to A$ induced by the multiplication of $A$ is split as a map of $A$-bimodules.
This is used in many places and has many applications. For example, let $G$ be a finite group, $k$ a field of characteristic $p$ dividing the order of $G$, and $P$ a Sylow $p$-subgroup of $G$. Then the extension of group algebras $kP\subseteq kG$ is separable. This is important, for example, in proving Higman's theorem, which says that $kG$ has finite representation type iff $P$ is cyclic, and in many other places.
You can read a bit about this in Pierce's book on associative algebras.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $f(n) \geq n$ Let $f$ be a function from $\mathbb{N}$ to $ \mathbb{N}$ such that $\forall n \in \mathbb{N}$, $f(f(n)) <f(n+1)$.
Prove that $\forall k \geq n$, $f(k) \geq n$.
I've put a lot of time and effort into this but unfortunately couldn't solve it.
I tried to prove a simpler version: $\forall n\geq 0$, $f(n) \geq n$.
For $n = 0$, $f(0) \geq 0$ holds because anything else is absurd.
We can use the same idea to prove that $f(1) \geq 1$ and so on, but each time you have to rule out $n$ separate cases. I tried to use induction, but, you know, I failed.
Can I get some help/hints? Thanks :D
|
I think that the simpler version is actually harder to prove than the original question. We can prove the original statement with induction to $n$:
Base case: For $n = 0$ it clearly holds, because $f(k) \geq 0$ for all $k \in \mathbb{N}$.
Inductive step: Suppose that the statement holds for $n = m$, i.e. $f(k) \geq m$ for all $k \geq m$. Let $l \geq m+1$, so $l-1 \geq m$ and hence $f(l-1) \geq m$. Now we can apply the inductive hypothesis also to $k = f(l-1)$ (which is $\geq m$), and we get $f(f(l-1)) \geq m$. It follows that $f(l) > f(f(l-1)) \geq m$, so $f(l) \geq m+1$. This is exactly the statement we wanted to prove for $n = m+1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
How many times must I toss a coin in order that the odds are more than 100 to 1 that I get at least one head? How many times must I toss a coin in order that the odds are more than 100 to 1 that I get at least one head?
I believe that it is 10 times: if the coin is flipped ten times, there are only ten outcomes with exactly one head out of 1024 total outcomes. Is this correct?
|
The chance of getting at least one head is $1 - (\frac{1}{2})^n$
This probability has to exceed $99 \%$, so we solve the boundary case:
$$1 - \Big(\frac{1}{2}\Big)^n = 0.99$$
$$\Big(\frac{1}{2}\Big)^n = 0.01$$
$$n \log\frac{1}{2} = \log0.01$$
$$n = \frac{\log0.01}{\log\frac{1}{2}}$$
$$n = 6.6438...$$
Obviously we can't make $6.6438...$ tosses, so we round up to the nearest integer: we need $7$ tosses to have a more than $99 \%$ chance of getting at least one head.
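The computation in two lines of Python. (As a side note: reading "odds of 100 to 1" strictly as $P(\text{no head}) < \frac{1}{101}$ gives the same answer, since $2^7 = 128 > 101 > 64 = 2^6$.)

```python
import math

# smallest n with 1 - (1/2)^n > 0.99
n = math.ceil(math.log(0.01) / math.log(0.5))

p6 = 1 - 0.5**6   # 0.984375   -- six tosses fall short
p7 = 1 - 0.5**7   # 0.9921875  -- seven tosses clear 99%
```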
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2447854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Maximal real subfield of a number field Let $L\subset \mathbb{C}$ be a number field such that $L / \mathbb{Q}$ is a Galois extension, then is it true that $[L:L\cap \mathbb{R}]\leq 2$?
Thanks very much!
|
Yes. If $c$ is complex conjugation, $c$ acts on $L$, as an automorphism of order $1$ or $2$, so its fixed field, $L\cap\Bbb R$, is a subfield of $L$ of index $1$ or $2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Simple humps of a continuous function Suppose $y=f(x)$ is a continuous function and $f(x)=f(x')$ with $x≠x'$. Can we always find a sub-interval of the interval $[x, x']$ where $f$ is a simple hump or trough? By a simple hump, I mean a curve that rises monotonically from a certain height $y=k$, reaches a maximum, and then falls monotonically back to $y=k$. A simple trough is the inverse of that.
|
No, we can't always do that. Take, for instance, the Weierstrass function, whose graph is a fractal, going up and down infinitely many times on any interval; in particular, it is monotone on no sub-interval.
Note that if your function were indeed a bump, meaning that it is first increasing and then decreasing, then it would necessarily be differentiable almost everywhere (since monotone functions are differentiable almost everywhere).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Image sets in Complex Analysis I was given the problem "what is the image set of the first quadrant in the $z$ plane under the mapping $w=z^4$?", but I have no idea how to even think about this.
The most I did was write $w=|z|^4 (\cos(4\theta) +i\sin(4\theta))$
But again, I'm not sure if that's helpful or not. How do we picture things like this?
|
HINT
The complex number $(r, \theta)$ is mapped to $\left(r^4, 4 \theta\right)$.
What is the range of $r$ and $\theta$? What is the range of $r^4$ and $4 \theta$? What is the resulting range for $\left(r^4, 4 \theta\right)$?
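To build intuition, one can sample the open quadrant numerically (plain Python): for $0 < \theta < \pi/2$ the angle $4\theta$ sweeps all of $(0, 2\pi)$, and $r^4$ covers $(0,\infty)$, so the image of the open quadrant is the whole plane minus the origin.

```python
import cmath, math

# sample directions in the open first quadrant: theta in (0, pi/2)
args = []
for k in range(1, 200):
    theta = (math.pi / 2) * k / 200
    w = cmath.rect(1.0, theta) ** 4          # |z| = 1 suffices for directions
    args.append(cmath.phase(w) % (2 * math.pi))   # 4*theta, reduced to [0, 2pi)

# which of 16 angular sectors around the circle get hit?
hit_bins = {int(a // (math.pi / 8)) for a in args}
```

Every sector is hit: the arguments of $w$ fill out the full circle.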
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is $N$ normal in $HN$ if $H$ is a subgroup and $N$ a normal subgroup of the group $G$? If $H$ is a subgroup of $G$ and $N$ is a normal subgroup of $G$, what is the relation between $N$ and the subgroup $HN$ with respect to normality? That is, must $N$ be normal in $HN$?
|
Given:
*
*$G$ is a group.
*$H\lt G$.
*$N$ is normal in $G$.
To Show: $N$ is normal in $H\circ N$.
Possible Proof:
First, $H\circ N$ = {$h\circ n \mid h\in H, n\in N$} is a subgroup of $G$: since $N$ is normal in $G$, we have $(h_1\circ n_1)\circ(h_2\circ n_2) = (h_1\circ h_2)\circ(h_2^{-1}\circ n_1\circ h_2)\circ n_2 \in H\circ N$ and $(h\circ n)^{-1} = h^{-1}\circ(h\circ n^{-1}\circ h^{-1}) \in H\circ N$.
Taking $h$ to be the identity shows $N\subseteq H\circ N$, so $N$ is a subgroup of $H\circ N$.
Now,
$N$ normal in $G$
$\iff g\circ N\circ g^{-1} = N, \forall g\in G$. [This comes from the basic properties of a normal subgroup.]
$\implies g\circ N\circ g^{-1} = N, \forall g\in H\circ N$. [Since $H\circ N\subseteq G$, as shown above.]
$\iff N$ is normal in $H\circ N$.
QED
PS: Please let me know if there are any errors in the proof. I myself wanted a proof for this theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Discovering Quadratic Reciprocity Is there anything similar to this (a page written by Fields Medalist Timothy Gowers) for quadratic reciprocity?
I mean, the link there explains how you can figure out the solution of the cubic equation by yourself, without needing a sudden flash of inspiration / genius genes. Is there a similar guide for quadratic reciprocity?
|
The first part of this paper http://www.math.ubc.ca/~belked/lecturenotes/620E/Frei%20-%20The%20Reciprocity%20Law%20from%20Euler%20to%20Eisenstein.pdf shows you how the QR law was discovered historically and shows the path all the way back to Diophantus.
It seems that the first question explicitly stated that is equivalent to a case of QR is:
$p$ is an odd prime of the form $x^2+ y^2$ with $x, y$ integers $\iff$ $p \equiv 1 \pmod 4$.
This leads to the more general question:
Given $N\in \mathbb{Z}$, describe the odd primes $p$ for which $p = x^2 + Ny^2$.
This was considered by Fermat and studied carefully by Euler, and it leads to the full QR theorem, whereas the question for the sum of two squares leads only to the evaluation of the quadratic character of $-1$ mod $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Is there a $2 \times 2$ real matrix $A$ such that $A^2=-4I$? Does there exist a $2 \times 2$ matrix $A$ with real entries such that $A^2=-4I$ where $I$ is the identity matrix?
Some initial thoughts related to this question:
*
*The problem would be easy for complex matrices: we could simply take the identity matrix multiplied by $2i$.
*There is another question on this site showing that this has no solution for $3\times3$ matrices, since the LHS has determinant $\det(A^2)=\det(A)^2$, the square of a real number, while the determinant of $-4I_3$ is negative. But the same argument does not work for $2\times2$ matrices, since $\det(-4I_2)=16$ is positive.
|
$A = \left [\begin{array}{cc}
0 & 2 \\
-2 & 0 \\
\end{array} \right ]$.
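This is $2J$, where $J$ is the rotation-by-$90°$ matrix that plays the role of $i$; a one-line NumPy check:

```python
import numpy as np

# A = 2 * (rotation by 90 degrees); A^2 should equal -4I
A = np.array([[0, 2],
              [-2, 0]])
square = A @ A
```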
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Alternatives to Fano's Axiom in Projective Space In Projective Geometry, Fano's Axiom says:
The three diagonal points of a complete quadrangle are never collinear.
I would like to prove this from more basic Axioms within three-dimensional Projective Space. The theorem of Desargues is non-trivial in plane geometry, but can be proven from basic axioms within three-dimensional geometry; could the same be true for Fano's Axiom? If not, are there nice (equivalent) alternatives?
Edit: by “more basic” I mean intuitively more basic, which makes it a somewhat subjective question of course.
|
You can't prove Fano's axiom from three-dimensional geometry, because the projective plane over the field $F_2$ with two elements does not satisfy Fano's axiom. Recall that the projective plane can be defined starting with 3-dimensional space over $F_2$ by a suitable equivalence relation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Closed form solution for $\int_{0}^{2\pi}|\cos^N x||\sin^M x|\cos^n x\sin^m x dx$ I am trying to calculate Fourier series coefficients (by hand) and the integrals I need to solve are of the following type
$$I(N,M,n,m)=\int_{0}^{2\pi}|\cos^N x||\sin^M x|\cos^n x\sin^m x dx,$$
in which $N,M,n,m \in \{0,1,2,3\}$. I tried to use WolframAlpha / Maple to come up with a general formula, because working through all $4^4$ cases by hand would be impractical, but neither gave a result.
It would be great if there was a way to obtain a simple closed form solution. If that is not possible is there a way to get very accurate approximation for $I(N,M,n,m)$?
|
Hint: An easy way is to split the range into the four quadrants; on each quadrant the integrand reduces, up to sign and a reflection such as $x\mapsto\pi-x$, to $\cos^{N+n}x\,\sin^{M+m}x$, so everything comes down to the classical identity
$$\int_0^{\pi/2}\cos^{a}x\,\sin^{b}x\,dx=\tfrac12 B\!\left(\tfrac{a+1}{2},\tfrac{b+1}{2}\right),$$
which is quite easy to prove as a consequence of the gamma/beta function integrals.
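For working through the $4^4$ cases, a direct numerical evaluator is also handy as a cross-check (NumPy assumed; midpoint rule, which is very accurate for periodic integrands). The sanity values below are standard integrals, not taken from the original post:

```python
import numpy as np

def I(N, M, n, m, samples=400_000):
    """Midpoint-rule evaluation of the integral over [0, 2*pi]."""
    x = (np.arange(samples) + 0.5) * (2 * np.pi / samples)
    f = (np.abs(np.cos(x))**N * np.abs(np.sin(x))**M
         * np.cos(x)**n * np.sin(x)**m)
    return f.sum() * (2 * np.pi / samples)

checks = {
    (0, 0, 0, 0): 2 * np.pi,   # integral of 1
    (0, 0, 2, 0): np.pi,       # integral of cos^2
    (1, 0, 0, 0): 4.0,         # integral of |cos|
    (0, 0, 1, 0): 0.0,         # plain cos integrates to 0 over a full period
}
```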
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2448835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Existence of a subsequence such that $\lim_{n\to \infty} \|x_{n_k}-x\|=\ell>0$ in a Banach space? Let $x_n$ be a sequence in a uniformly convex Banach space $E$ such that $x_n \to x$ weakly in $E$ and $\|x_n\|_E \to \|x\|_E$. Then $x_n \to x$ strongly in $E$.
To show this by contradiction a proof I'm reading states that first we suppose that (for the non-trivial $x\neq 0$ case)
$$
\limsup_{n\to \infty} \|x_n-x\|>0.
$$
Then there exists a subsequence $x_{n_k}$ such that
$$
\lim_{k\to \infty} \|x_{n_k}-x\|=\ell>0.
$$
My question is how can we say there exists such a subsequence that converges to a finite number? I don't see anything in the assumptions that gives us this?
|
The point at which you seem to have a question is not about Banach spaces; it's about sequences of real numbers. You have a sequence $\{a_n\}_{n=1}^\infty$ of real numbers for which
$$
\limsup_{n\to\infty} a_n >0
$$
and the question is: how does this imply that there is a subsequence $\{a_{n_k}\}_{k=1}^\infty$ for which
$$
\lim_{k\to\infty} a_{n_k} = \ell>0.
$$
If you have $\limsup_{n\to\infty} a_n = \ell>0,$ then how do you show there is a subsequence converging to $\ell\text{?}$
First approach:
Apply the definition of $\limsup:$
$$
\limsup_{n\to\infty} a_n = \inf\left\{ \sup\{a_n,a_{n+1},a_{n+2}, a_{n+3}, \ldots\} : n\in\{1,2,3,\ldots\} \right\} = \ell >0.
$$
Since $\ell$ is the greatest lower bound of the sequence
$$
\Big\{ \sup\{a_n,a_{n+1},a_{n+2}, \ldots\} \Big\}_{n=1}^\infty,
$$
every member of this sequence is $\geq\ell$, hence $>h$ for any $h<\ell.$ Thus we have
$$
\forall h<\ell\ \forall n\quad \sup\{a_n,a_{n+1},a_{n+2},\ldots\} > h.
$$
And if the supremum of the tail $\{a_n,a_{n+1},\ldots\}$ exceeds $h,$ then some member of the tail exceeds $h.$ Just let $n_k$ be some index $\ge n$ for which $a_{n_k}>h,$ choosing $h$ closer and closer to $\ell$ (the tail suprema decrease to $\ell,$ which keeps the chosen terms from staying above any $\ell+\varepsilon$).
Second approach: Relying on a definition of $\limsup,$ show that $\limsup_{n\to\infty} a_n = \ell$ if and only if $\ell$ is the largest of all limits of subsequences of $\{a_n\}_{n=1}^\infty.$ That implies there is some subsequence converging to $\ell.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Prove or give a counterexample
There exists a non-negative number $s$ such that for all non-negative numbers $t$, the inequality $s \geq t$ holds.
I don't really know if this statement is true or false. I can see it going either way, but can't really come up with a proof or a counterexample.
Thanks.
|
The quantifiers are in the order "first $s$, then $t$". Since $t$ may depend on $s$, we should think of $t$ as a function of $s$. Can we write constraints for $t$ in terms of $s$ that make the inequality true?
We find that $0 \leq t$ because $t$ is nonnegative and $s \geq t$ is our inequality, so we must have $t \in [0,s]$ for the inequality to hold.
Can we pick an $s$ so large that all possible choices of $t$ are in $[0,s]$? No. If we let $t = s + 1$ (or let $t$ be any function of $s$ that is always bigger than $s$), then $t \not\in [0,s]$, so $s \geq t$ fails.
Note: We needed to write $t$ as a function of $s$ in the above, so that we simultaneously showed each choice for $s$ does not work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Definite integral on $[0,\pi]$ How to calculate the following integral if $\varepsilon \in (0,1)$:
$$\int \limits_{0}^{\pi}\frac{d\varphi}{(1+\varepsilon\cos \varphi)^2}$$
|
Hint:
Use the substitution
$$
s=\tan{\frac{\varphi}{2}}, \quad \sin{\varphi}=\frac{2s}{s^2+1}, \quad \cos{\varphi} = \frac{1-s^2}{s^2+1}.
$$
(Apparently more details are given in this answer.)
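Working the substitution through by hand leads to the closed form $\int_0^\pi \frac{d\varphi}{(1+\varepsilon\cos\varphi)^2}=\frac{\pi}{(1-\varepsilon^2)^{3/2}}$; here is a quick numeric check of that value with composite Simpson's rule (the sample values of $\varepsilon$ are arbitrary):

```python
import math

def f(phi, eps):
    return 1.0 / (1.0 + eps * math.cos(phi)) ** 2

def simpson(eps, n=20000):
    # composite Simpson's rule on [0, pi] (n even)
    h = math.pi / n
    s = f(0.0, eps) + f(math.pi, eps)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h, eps)
    return s * h / 3

for eps in [0.1, 0.5, 0.9]:
    exact = math.pi / (1 - eps ** 2) ** 1.5
    assert abs(simpson(eps) - exact) < 1e-7
```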
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Converting riemann sum to definite integral Another Riemann sum I'm struggling to convert to a definite integral... $\lim_{n\to \infty}$ $\sum_{i=1}^n$ $\frac{6n}{9n^2+4i^2}$. Any ideas as to what my $x_i$ should be in this case?
|
Let $x_i=\dfrac{i}{n}$; then $x_1=\dfrac{1}{n}\to0$, $x_n=\dfrac{n}{n}\to1$ and $\Delta x=\dfrac1n$, so
$$\lim_{n\to\infty}\sum_{i=1}^n\dfrac{6}{9+4(\dfrac{i}{n})^2}\frac1n=\int_0^1\dfrac{6}{9+4x^2}dx$$
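For reference, computing the antiderivative by hand gives $\int \frac{6}{9+4x^2}\,dx=\arctan\frac{2x}{3}$, so the limit should be $\arctan\frac23$; a numeric sketch of the partial sums confirms this:

```python
import math

def riemann(n):
    # the n-th Riemann sum from the problem statement
    return sum(6 * n / (9 * n**2 + 4 * i**2) for i in range(1, n + 1))

# d/dx arctan(2x/3) = (2/3) / (1 + 4x^2/9) = 6 / (9 + 4x^2)
exact = math.atan(2 / 3)
assert abs(riemann(10**5) - exact) < 1e-5
```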
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Example of a semigroup S with no identity element and a subgroup G of S I need an example of a semigroup S without an identity element and a subgroup G of S.
I have found it easy to find/make semigroups without identities but then making a subgroup from it has not been fruitful. An example or a hint would be much appreciated.
|
Take your favorite group $G$. Let $S$ consist of $G$ together with two additional elements $a$ and $b$, and extend the multiplication in $G$ by defining $as = sa = bs = sb = b$ for all $s \in S$. You can confirm this operation is associative, and clearly it has no identity since any product with $a$ is $b$. But $G$ is still a subgroup of $S$.
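A quick computational check of this construction with a small concrete group (the choice $G=\mathbb Z_3$ is arbitrary):

```python
from itertools import product

# G = Z_3 under addition mod 3; S adds the two extra symbols 'a' and 'b'.
G = [0, 1, 2]
S = G + ['a', 'b']

def mul(x, y):
    if x in G and y in G:
        return (x + y) % 3   # the original group operation inside G
    return 'b'               # any product involving 'a' or 'b' equals 'b'

# the extended operation is associative ...
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(S, repeat=3))
# ... and S has no identity: e * 'a' = 'b' != 'a' for every candidate e
assert not any(all(mul(e, s) == s == mul(s, e) for s in S) for e in S)
```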
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Finding the polar cone of the given cone Given a closed convex cone $D$ in $\mathbb{R}^{n}$, the cone $K_{2} \in \mathbb{R}^{m}$ is defined by $$ K_{2} = \{ y = (y^{1}, y^{2}, \cdots , y^{m}): y^{i} \in \mathbb{R}^{n},\, i= 1, \cdots , m, \, y^{1} + y^{2} + \cdots + y^{m} \in D \} $$
I need to describe its polar cone $K_{2}^{\circ}$.
Recall that for a given cone $C$, its polar cone $C^{\circ}$ is defined to be the set of all $x$ such that $\langle x,y \rangle \leq 0$ for all $y \in C$.
So, if $y \in K_{2}$, then $$ y = \begin{pmatrix} y^{1}, & y^{2},& \cdots, & y^{m}\end{pmatrix} \\ = \begin{pmatrix}\begin{pmatrix} y_{1}^{1} & y_{2}^{1} & \cdots y_{n}^{1} \end{pmatrix}, \begin{pmatrix} y_{1}^{2} & y_{2}^{2} & \cdots y_{n}^{2} \end{pmatrix},\cdots ,\begin{pmatrix} y_{1}^{m} & y_{2}^{m} & \cdots y_{n}^{m} \end{pmatrix} \end{pmatrix}.$$
So, I need to find the set of all $x$ such that when I take the inner product of $x$ and $y$, I get a value $\leq 0$.
My first problem is that I'm not sure if I have even expressed a general set in $K_{2}$ correctly here. Secondly, I would think that perhaps I should take the inner product of a general $y$ with a general $x$, set the result $\leq 0$ and then try to solve for what the components of $x$ are, but as I am not sure even what a general $x$ should look like, I am at a loss as to how this should be done.
If this is not the correct approach to finding the polar of this cone, what is the correct approach? Beyond the inner product definition of a polar cone (which in terms of angles between things, means that $x$ and $y$ make an obtuse angle with each other), I don't know much about how to go about finding them.
I sincerely thank you for your time and patience!
|
Here's an initial observation, but not a full solution. Changing notation slightly,
$$K_2 = \{ Y = \begin{bmatrix} y_1 & \cdots & y_m \end{bmatrix} \in \mathbb R^{n \times m} \mid y_1 + \cdots + y_m \in D \}.$$
(Here $y_i$ is the $i$th column of the matrix $Y$.) A matrix $X \in \mathbb R^{m \times n}$ belongs to $K_2^\circ$ if and only if
$$\langle X, Y \rangle = \text{tr}(Y X^T) = \text{tr}(y_1 x_1^T + \cdots + y_m x_m^T) \leq 0 \text{ for all } Y \in K_2.$$ If $x_1 = \cdots = x_m = x \in D^\circ$, then
$$\langle X, Y \rangle = \text{tr}( (y_1+\cdots+y_m)^T x) = x^T(y_1 + \cdots + y_m) \leq 0,
$$
so $X \in K_2^\circ$.
This shows that $S = \{ \begin{bmatrix} x & \cdots & x \end{bmatrix} \mid x \in D^\circ\} \subset K_2^\circ$.
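A numeric sanity check of the inclusion $S\subseteq K_2^\circ$ for one concrete choice of $D$ (the nonnegative orthant, whose polar is the nonpositive orthant; the dimensions and sampling scheme are arbitrary, and both matrices are stored as $n\times m$ arrays with the entrywise trace inner product):

```python
import random

# D = nonnegative orthant in R^n, so D° is the nonpositive orthant.
n, m = 3, 4
random.seed(0)

def inner(X, Y):
    # trace inner product <X, Y> = sum_ij X_ij * Y_ij
    return sum(X[i][j] * Y[i][j] for i in range(n) for j in range(m))

for _ in range(200):
    # build Y with columns y_1..y_m whose sum lies in D (each row sum >= 0)
    Y = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
    for i in range(n):
        s = sum(Y[i])
        if s < 0:
            Y[i][m - 1] -= s        # shift last column so the row sum becomes 0
    x = [random.uniform(-1, 0) for _ in range(n)]       # a point of D°
    X = [[x[i]] * m for i in range(n)]                  # all m columns equal to x
    assert inner(X, Y) <= 1e-12
```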
We can conjecture that in fact $S = K_2^\circ$. We still need to show containment in the other direction.
By the way, sometimes once you have guessed what the polar cone is, it turns out to be easier to show that $S^\circ = K_2$. If it can be shown that $S^\circ = K_2$, which I suspect is straightforward, it will follow that $S = K_2^\circ$.
Let's attempt to show that $S^\circ \subset K_2$. So, suppose that $Y \in S^\circ$. From the definition of $S^\circ$, we have that $\langle Y, X \rangle \leq 0$ for all $X \in S$. Using the definition of $S$, we see that
$$
\tag{1}\langle Y, \begin{bmatrix} x & \cdots & x \end{bmatrix} \rangle \leq 0
$$
for all $x \in D^\circ$.
We are hoping to conclude that $Y \in K_2$. In other words, we are hoping to conclude that the columns of $Y$ sum to an element of $D$. Does this follow somehow from the fact that the inequality (1), i.e. $x^T(y_1+\cdots+y_m)\leq 0$, holds for all $x \in D^\circ$? (Hint: for a closed convex cone, $D^{\circ\circ}=D$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Number of functions $f : A \to A$ such that $f(f(x))=f(x)$ If $A=\left\{1,2,3,4,5\right\}$ Then
Find Number of functions $f : A \to A$ such that $f(f(x))=f(x)$
Case $1.$ if $f$ is injective then $f(f(x))=f(x)$ $\implies$ $f(x)=x$, hence there is only one injective function which is an identity function.
Case $2.$ When $f$ is many to one function
Let us assume that $f(1)=f(2)=f(3)=f(4)=f(5)=k$ Then clearly for any $k$ from the set $A$, $f(f(x))=f(x)$
hence there are five such functions.
Are there any other possibilities?
|
There are other possibilities. Here is one:
1 goes to 2
2 goes to 2
3 goes to 5
4 goes to 5
5 goes to 5
Expanding on this answer: The framework for these functions is to have some number (at least one) of fixed points, and have everything else map into the set of fixed points.
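Brute force confirms this framework for $|A|=5$: choosing a set of $k$ fixed points and sending the remaining $5-k$ elements into it gives $\sum_{k=1}^{5}\binom{5}{k}k^{5-k}=196$ such functions.

```python
from itertools import product
from math import comb

# brute force over all 5^5 maps f : {0,...,4} -> {0,...,4}
A = range(5)
count = sum(1 for f in product(A, repeat=5) if all(f[f[x]] == f[x] for x in A))

# choose a set of k fixed points, send the other 5 - k elements into it
formula = sum(comb(5, k) * k ** (5 - k) for k in range(1, 6))

assert count == formula == 196
```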
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
How to find minimum and maximum value of x, if x+y+z=4 and $x^2 + y^2 + z^2 = 6$? I just know that putting y=z, we will get 2 values of x. One will be the minimum and one will be the maximum. What is the logic behind it?
|
Eliminate $z$: with $z=4-x-y$ the two constraints reduce to the single equation
$$x^2+y^2+(4-x-y)^2-6=0,$$
which describes an ellipse in the $(x,y)$-plane. At an extremum of $x$ along this curve we have $dx/dy=0$, so differentiating implicitly with respect to $y$ gives
$$2y-2(4-x-y)=0,$$
i.e. $y=4-x-y=z$. This is the logic behind putting $y=z$. Substituting $y=z=\frac{4-x}{2}$ into $x^2+2y^2=6$ yields $3x^2-8x+4=0$, so $x=\frac23$ (the minimum) or $x=2$ (the maximum).
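To double-check the extreme values of $x$ numerically, one can parametrize the intersection circle explicitly (the center $(4/3,4/3,4/3)$, radius $\sqrt{2/3}$ and basis vectors below are worked out by hand and are not part of the problem statement):

```python
import math

# Circle = intersection of the plane x+y+z = 4 with the sphere of radius sqrt(6):
# center c*(1,1,1), radius sqrt(2/3), spanned by orthonormal u, v in the plane.
c = 4 / 3
r = math.sqrt(2 / 3)
u = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
v = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))

xs = []
steps = 100000
for k in range(steps):
    t = 2 * math.pi * k / steps
    p = [c + r * (math.cos(t) * u[j] + math.sin(t) * v[j]) for j in range(3)]
    assert abs(sum(p) - 4) < 1e-9                    # x + y + z = 4
    assert abs(sum(q * q for q in p) - 6) < 1e-9     # x^2 + y^2 + z^2 = 6
    xs.append(p[0])

assert abs(min(xs) - 2 / 3) < 1e-6 and abs(max(xs) - 2.0) < 1e-6
```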
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2449942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
}
|
Show that $\Delta \le \frac {\sqrt{abc(a+b+c)}}{4}$ If $\Delta$ is the area of a triangle with side lengths a, b, c, then show that: $\Delta \le \frac {\sqrt{abc(a+b+c)}}{4}$. Also show that equality occurs in the above inequality if and only if a = b = c.
I am not able to prove the inequality.
|
One has $\Delta = \frac{1}{4}\sqrt{(a+b+c)(b+c-a)(c+a-b)(a+b-c)}$ (See this link).
Moreover, one has $(a+b-c)(b+c-a) \leq \left(\frac{(a+b-c) + (b+c-a)}{2}\right)^2 = b^2$, and similarly $(b+c-a)(c+a-b) \leq c^2$ and $(c+a-b)(a+b-c) \leq a^2$.
Multiplying these three inequalities gives $[(b+c-a)(c+a-b)(a+b-c)]^2 \leq a^2b^2c^2$, i.e. $(b+c-a)(c+a-b)(a+b-c) \leq abc$.
Thus, $\Delta \leq \frac{1}{4}\sqrt{abc(a+b+c)}$. Equality in each of the three AM-GM steps forces $a+b-c=b+c-a=c+a-b$, i.e. $a=b=c$.
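A numeric spot check of the inequality and the equality case (the random sampling of triangles is an arbitrary choice):

```python
import math, random

def area(a, b, c):
    # Heron's formula
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

random.seed(1)
for _ in range(1000):
    a, b = random.uniform(1, 2), random.uniform(1, 2)
    c = random.uniform(abs(a - b) + 0.01, a + b - 0.01)   # valid triangle
    assert area(a, b, c) <= math.sqrt(a * b * c * (a + b + c)) / 4 + 1e-12

# equality for the equilateral triangle a = b = c = 1
assert abs(area(1, 1, 1) - math.sqrt(3) / 4) < 1e-12
```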
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
One dimensional martingale is recurrent? Consider a sequence of independent and identically distributed random variables $X_i$ with $P(X_i >1 )>0$ and $E(X_i) = 0$.
therefore $M_n = \sum_{i=1}^n X_i$ is a martingale.
I would like to prove that $M_n\geq 0$ infinitely often with probability $1$.
I thought about using upcrossing inequalities, but it seems that they only bound the number of upcrossing from above.
A second idea was to use a related idea, the result that any positive martingale converges almost surely, to deduce the result by contradiction.
The argument would go as follows:
Let $A = \big[\exists\, N,\; M_n<0 \text{ for } n \geq N_0\big]$, and therefore for $\omega \in A$ $M_n$ converges almost surely, but since $P(X_i>1)>0$ $M_n$ does not converge with probability $1$. therefore $P(A) = 0$
Is this argument correct? Is there another one that doesn't require Doob's upcrossing inequalities?
|
No, your reasoning does not work. The martingale convergence theorem requires $M_n(\omega) \geq 0$ for all $\omega \in \Omega$ (and not just $M_n(\omega) \geq 0$ for some $\omega \in \Omega$). To fix this gap in your reasoning you have to show that
$$\mathbb{P} \left( \limsup_{n \to \infty} M_n < 0 \right) \in \{0,1\}. \tag{1}$$
For this you can use Hewitt-Savage's 0-1-law.
Alternative proof (using $(1)$): Set
$$Z(\omega) := \limsup_{n \to \infty} M_n(\omega),\qquad \omega \in \Omega.$$
Suppose that $\mathbb{P}(Z <0)>0$, then it follows from $(1)$ that $\mathbb{P}(Z <0)=1$. Thus,
$$\mathbb{E}(Z)<0. \tag{2}$$
On the other hand, we have by Fatou's lemma
$$\mathbb{E}(Z) = \mathbb{E} \left( \limsup_{n \to \infty} M_n \right) \geq \limsup_{n \to \infty} \mathbb{E}(M_n) = 0.$$
Obviously, this is a contradiction to $(2)$, and therefore we conclude $\mathbb{P}(Z<0)=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
for $x^2+y^2=a^2$ show that $y''=-(a^2/y^3)$ For $x^2+y^2=a^2$ show that $y''=-(a^2/y^3)$
I got that
$y^2=a^2-x^2$
$y'=-x/y$
$y''=(-1-y'^2)/y$
But then I get stuck.
|
Implicit differentiation gives
$$
2x+2yy'=0 \tag{*}
$$
Differentiate again (after removing the common factor $2$):
$$
1+(y')^2+yy''=0 \tag{**}
$$
Now (*) implies $y'=-x/y$, so you can substitute in (**):
$$
1+\frac{x^2}{y^2}+yy''=0
$$
Isolate $y''$ and use $x^2+y^2=a^2$:
$$y''=-\frac{y^2+x^2}{y^3}=-\frac{a^2}{y^3}$$
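As a numeric cross-check, one can compare a second central difference of $y=\sqrt{a^2-x^2}$ against $-a^2/y^3$ (the value $a=2$ and the sample points are arbitrary):

```python
import math

a = 2.0   # arbitrary radius

def y(x):
    # upper branch of x^2 + y^2 = a^2
    return math.sqrt(a * a - x * x)

h = 1e-4
for x in [-1.0, 0.0, 0.5, 1.2]:
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # central second difference
    assert abs(y2 - (-a * a / y(x) ** 3)) < 1e-4
```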
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Does $\limsup_{x\to\infty}\int_x^{x+h}f'(y)dy = 0$, $\forall h>0$, imply $\lim_{x\to\infty}\int_x^{x+h}f'(y)dy = 0$, $\forall h>0$? I have an absolutely continuous function $f:[0,\infty)\to[0,\infty)$ that satisfies $\limsup_{x\to\infty}\int_x^{x+h}f'(y)dy = 0$ for all $h>0$. I need to check if it is true or false that $\lim_{x\to\infty}\int_x^{x+h}f'(y)dy = 0$ for all $h>0$. Any hint will be welcome.
|
Sketch: Choose $N\in \mathbb N$ such that $2^n +n^2+n < 2^{n+1}$ for $n\ge N.$ Define
$$g = \sum_{n=N}^{\infty}\left ( \frac{1}{n}\chi_{(2^n,2^n+n^2)} - \chi_{(2^n+n^2,2^n+n^2+n)}\right ).$$
Now define $f(x) = \int_0^x g.$ Then $f$ is a counterexample. I'll leave it here for now. Ask if you have questions.
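To see the mechanism numerically, here is a sketch that integrates this $g$ exactly over short windows (starting the sum at $N=5$, which satisfies $2^n+n^2+n<2^{n+1}$ for $n\ge5$): windows starting at $2^n$ give increments of size at most $h/n\to0$, while windows inside the $-1$ stretches give increment exactly $-h$, so the increments have limsup $0$ but no limit.

```python
# g = sum over n >= N of (1/n) on (2^n, 2^n + n^2) and -1 on (2^n + n^2, 2^n + n^2 + n)
N = 5

def integral_g(lo, hi):
    """Exact integral of the piecewise-constant g over [lo, hi]."""
    total = 0.0
    n = N
    while 2 ** n < hi:
        a, b, c = 2 ** n, 2 ** n + n * n, 2 ** n + n * n + n
        total += max(0.0, min(hi, b) - max(lo, a)) / n   # g = 1/n on (a, b)
        total -= max(0.0, min(hi, c) - max(lo, b))       # g = -1  on (b, c)
        n += 1
    return total

h = 1.0
for n in [10, 20, 30]:
    # windows at the start of a block: increment at most h/n -> 0
    assert abs(integral_g(2 ** n, 2 ** n + h)) <= h / n + 1e-9
    # windows inside the "-1" stretch: increment exactly -h, so no limit
    assert abs(integral_g(2 ** n + n * n, 2 ** n + n * n + h) + h) < 1e-9
```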
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finitely many conjugacy classes of finite subgroups of given order Let $G$ be a periodic locally soluble group with finite Sylow p-subgroups for all primes p.
It is know that in these conditions $G$ is residually finite. Moreover it can be proved that $G$ has only finitely many conjugacy classes of finite subgroups of given order.
Let $L$ be a subgroup of $G$ such that $L=HN$, where $H$ is a finite subgroup of $G$ and $N$ is a normal subgroup of $G$.
Why the index $|N_G(L):N_G(H)N|$ is finite?
I know this is true since this is stated at Lemma 1.6 of the paper
"Locally inner endomorphisms of SF-Groups" by Belyaev
|
You know that there are only finitely many conjugacy classes of subgroups of $HN$ that are isomorphic to $H$.
Supose $g_1,g_2 \in N_G(L)$ and $H^{g_1}$ and $H^{g_2}$ are in the same conjugacy class in $HN$. Since $g_1 \in N_G(L)$, we have $HN=H^{g_1}N$, so there exists $n \in N$ with $H^{g_1n}=H^{g_2}$, so $g_1ng_2^{-1} = g_1g_2^{-1}(g_2ng_2^{-1}) \in N_G(H)$ and hence, since $N$ is normal in $G$, $g_1 \in N_G(H)Ng_2$.
So the number of cosets of $N_G(H)N$ in $N_G(L)$ is at most equal to the number of conjugacy classes of subgroups of $HN$ that are isomorphic to $H$, which is finite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
On the foliation determined by isometric flow on homogeneous manifold Let $(M,g)$ be a non-compact homogeneous, 1-connected Riemannian manifold with $G := Isom(M,g)$ and let $v \in \Gamma (M,TM)$ be a $G$-invariant, nowhere-vanishing vector field, such that $||v|| = 1$ everywhere. Then $v$ determines a global flow $\phi_v: \mathbb R \times M \to M$ such that at each time $t \in \mathbb R$, $\phi_v^t := \phi_v(t, \; \_ \;)$ commutes with every element $g \in G$
The trajectories of $\phi_v$ form the leaves of a $1$-dimensional foliation $\mathfrak F$ of $M$. Now suppose further that for each $t \in \mathbb R$, we have that $\phi_v^t \in G$. Now I have the following questions:
1) Is $\mathfrak F$ always a regular foliation? (in other words, are the trajectories embedded submanifolds ?)
2) If so, does $Y :=M/ \mathfrak F$ always have the structure of a smooth manifold, such that the quotient map is a submersion
3) For $F \in \mathfrak F$ a leaf, is $F \hookrightarrow M \rightarrow Y $ always a fiber bundle ? (In particular, are all leaves diffeomorphic? )
|
Consider the 3-dimensional round sphere and its isometry group $O(4)$. This group contains the subgroup $U(2)$, whose center is isomorphic to $U(1)$ (scalar unitary matrices). Of course, $O(4)\ne U(2)$, but one can modify the constant curvature metric on $S^3$ making it a Berger sphere $B=S^3_{t,\epsilon}$, whose isometry group (for generic values of $\epsilon, t$) is $U(2)$, see this paper:
P.Gadea and J.Oubina, Homogeneous Riemannian Structures on
Berger 3-Spheres.
Now, take $C=B\times B$ with the product metric. Except for the $Z_2$-symmetry (swapping the factors), which one can eliminate by, say, rescaling the metric on one of the factors, the isometry group of $C$ is $U(2)\times U(2)$, hence, its center is the torus $T^2=S^1\times S^1$. The manifold $C$ is homogeneous, simply-connected and compact. But $T^2$ contains a subgroup $H$ isomorphic to ${\mathbb R}$ (actually, a continuum of such subgroups, but we just need one) whose orbits are not closed (their closures are tori); the action of this subgroup yields your flow. This is a counter-example in the setting of compact manifolds. To make a noncompact example, take $M=C\times {\mathbb R}$ with the product metric and use the same subgroup $H$ as before: The full isometry group of $M$ is $Isom({\mathbb R})\times U(2)\times U(2)$, so $H$ is still contained in its center.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Showing that $f(x,y)=\dfrac{x}{y}$ is continuous when $x>0$ and $y>0$. So I am hoping to show that $f(x,y)=\dfrac{x}{y}$ is continuous when $x>0$ and $y>0$.
I am not sure how to approach this problem. My idea was that taking $$\dfrac{\partial f}{\partial x}=\dfrac{1}{y}$$ and $$\dfrac{\partial f}{\partial y}=\dfrac{-x}{y^2}$$
My logic was that since both partials exists and are defined if $x>0$ and $y>0$, but I then discovered that existence of partial derivatives does not imply continuity.
I am curious if there are any clever ways to show continuity of this function? Note I am not concerned with the case that $x=0$ or $y=0$
|
For $f(x)$ and $g(y)$ continuous, at a point $(x_0,y_0)$ where $g(y_0)\neq0$ we have:
*
*$\forall \varepsilon>0,\exists \delta_1>0\mid |x-x_0|<\delta_1\implies |f(x)-f(x_0)|<\varepsilon$
*$\forall \varepsilon>0,\exists \delta_2>0\mid |y-y_0|<\delta_2\implies |g(y)-g(y_0)|<\varepsilon$
Since $g(y_0)\neq 0$ it is possible to choose $0<\varepsilon<\frac 12 |g(y_0)|$
So for $\delta=\min(\delta_1,\delta_2)$ we have
$\begin{array}{ll}
\displaystyle \left|\frac{f(x)}{g(y)}-\frac{f(x_0)}{g(y_0)}\right| &=\displaystyle\left|\frac{f(x)g(y_0)-f(x_0)g(y)}{g(y)g(y_0)}\right| =\displaystyle\left|\frac{g(y_0)(f(x)-f(x_0))-f(x_0)(g(y)-g(y_0))}{(g(y)-g(y_0))g(y_0)+g(y_0)^2}\right|\\\\
&\displaystyle<\frac{\varepsilon\left(|f(x_0)|+|g(y_0)|\right)}{\bigg||g(y_0)^2|-|g(y)-g(y_0)||g(y_0)|\bigg|}<\frac{\varepsilon\left(|f(x_0)|+|g(y_0)|\right)}{\frac 12|g(y_0)^2|}<k\,\varepsilon\end{array}$
with $k$ constant, thus the quotient is continuous in $(x_0,y_0)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove square root of 3 is irrational I have read several articles on math.stackexchange.com, and also this article: https://www.grc.nasa.gov/www/k-12/Numbers/Math/Mathematical_Thinking/irrationality_of_3.htm
I still can't quite understand why one of the numbers can't be even.
Especially this part:
"Since any choice of even values of a and b leads to a ratio a/b that can be reduced by canceling a common factor of 2, we must assume that a and b are odd, and that the ratio a/b is already reduced to smallest possible terms."
Isn't 2/3 or 3/2 smallest possible terms as well?
|
Your citation establishes that $a$ and $b$ must either both be even, or both be odd:
*
*$b$ is either even or odd (2 cases)
*
*If $b$ is odd, then $b^2$ is odd. Hence $3b^2$ is odd, being the product of two odd numbers. But $3b^2 = a^2$, and so $a^2$ is odd. And this means $a$ must be odd, since the square of an even number would be even.
*If $b$ is even, then $b^2$ is even, so $3b^2$ is even, so $a^2$ is even, so $a$ is even.
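And a brute-force sanity check of both facts, that no integers in a small range satisfy $a^2=3b^2$, and that squaring preserves parity (the search range is arbitrary):

```python
# No integers a, b in the searched range satisfy a^2 = 3 b^2 ...
assert not any(a * a == 3 * b * b for a in range(1, 1000) for b in range(1, 1000))
# ... and squaring preserves parity, the fact used in the case analysis
assert all((a % 2 == 0) == (a * a % 2 == 0) for a in range(1, 200))
```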
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2450949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Eigenbundles of an self-adjoint endomorphism of complex vector bundles Let $L\colon E\to E$ be an endomorphism of complex Hermitian vector bundle over a smooth manifold $M$. Suppose $L$ is self-adjoint and its eigenvalues are constant. How do I see that $E$ can be decomposed into a direct sum of eigenbundles.
For each fiber $E_x$ where $x\in M$, I can decompose $E_x$ into eigenspaces. However, how do I see that these eigenspaces form subbundles of $E$? What if the ranks of these eigenbundles are not locally constant and jump around?
My idea: If I can smoothly choose an eigensection $\sigma$ of $E$, i.e., $L(x)\sigma(x)=\lambda \sigma(x)$ for every $x\in M$. Then, I can consider the line bundle generated by $\sigma$ and its orthogonal complement $W$. Since $L$ necessarily maps $W$ into itself, I can choose another eigensection again. By induction, I am done. However, I don't know how to smoothly choose an eigensection at the beginning.
|
If I understand well, you assume that the eigenvalues $a_1,\dots,a_k$ of $L_x$ are independent of the point $x$. Then you can sort things out by using the the projection onto an eigenspace of a diagonalizable linear operator can be written as a polynomial in the operator. For each $i=1,\dots,k$, define
$$
P_i:=(\prod_{j\neq i}(a_i-a_j)^{-1})\prod_{j\neq i}(L-a_j\cdot id_E),
$$
where the second product is a composition of vector bundle homomorphisms $E\to E$. Clearly, this defines a vector bundle homomorphism $E\to E$. Using that $L_x$ is diagonalizable with eigenvalues $a_1,\dots,a_k$ you see that $(P_i)_x$ is the projection onto the $a_i$-eigenspace in $E_x$. In particular, in each point $x$, the ranks of the maps $(P_i)_x$ add up to the fiber dimension of $E$. Since locally, none of these ranks can drop, they have to be locally constant, so for each $i$, the image of $P_i$ is a smooth subbundle of $E$. (The same argument applies if the eigenvalues do depend on the point, as long as the stay pairwise different.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $A$ and $B$ have $n$ and $m$ elements respectively with $A\cap B=\emptyset$. Prove that $A\cup B$ has $m+n$ elements. If $A$ and $B$ have $n$ and $m$ elements respectively with $A \cap B=\emptyset$. Prove that $A \cup B$ has $m+n$ elements.
The solution in the book starts by letting $f$ be a bijection from $\{$1,..m$\}$ onto $A$ and $g$ a bijection from $\{$1,..n$\}$ onto $B$
Then proving the function
$h(i):=f(i)$ if $i=1,\ldots,m$, or $h(i):=g(i-m)$ if $i=m+1,\ldots,m+n$, is a bijection from $\{1,\ldots,m,m+1,\ldots,m+n\}$ onto $A\cup B$
But I'm stuck with the surjection part
|
Since $A$ has $m$ elements and $B$ has $n$ elements, there are bijections $f:\{1,\ldots,m\}\to A$ and $g:\{1,\ldots,n\}\to B$.
Now define $h:\{1,\ldots,m+n\}\to A\cup B$ by
$$
h(i) = \begin{cases}
f(i) & \text{if}\ i\in\{1,\ldots,m\}, \\
g(i-m) & \text{if}\ i\in\{m+1,\ldots,m+n\}.
\end{cases}
$$
First you should convince yourself that $h$ is well-defined.
To see that it is an injection, suppose $h(i)=h(j)$. Then either $i,j\in\{1,\ldots,m\}$ or $i,j\in\{m+1,\ldots,m+n\}$, because $A$ and $B$ are disjoint and $h$ maps $\{1,\ldots,m\}$ into $A$ and $\{m+1,\ldots,m+n\}$ into $B$. If $i,j\in\{1,\ldots,m\}$, then we have $f(i)=h(i)=h(j)=f(j)$, so the injectivity of $f$ gives $i=j$. Similarly we get $i=j$ if $i,j\in\{m+1,\ldots,m+n\}$.
For surjectivity, fix $x\in A\cup B$. Then either $x\in A$ or $x\in B$. If $x\in A$, we can use the surjectivity of $f$ to obtain $i\in\{1,\ldots,m\}$ such that $f(i)=x$. In this case $h(i)=f(i)=x$. On the other hand, if $x\in B$, then we can use the surjectivity of $g$ to obtain $i\in\{1,\ldots,n\}$ such that $g(i)=x$. Then $h(m+i)=g(i)=x$.
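Here is the same construction on tiny concrete sets (the sets themselves are my own example):

```python
# Disjoint sets: A has m = 3 elements, B has n = 2 elements.
A = ['a', 'b', 'c']
B = ['x', 'y']
m, n = len(A), len(B)

f = {i + 1: A[i] for i in range(m)}   # bijection {1,...,m} -> A
g = {i + 1: B[i] for i in range(n)}   # bijection {1,...,n} -> B

def h(i):
    return f[i] if i <= m else g[i - m]

image = [h(i) for i in range(1, m + n + 1)]
assert sorted(image) == sorted(A + B)   # h hits each element of A ∪ B exactly once
```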
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Volume of a bounded solid in R3 What is the volume of the solid in xyz-space bounded by \begin{align}
y = 2 - x^2 \\
y = x^2 \\
z = 0 \\
z = y + 3 ?
\end{align}
I have formatted the problem as follows:
$$\iiint 1 \,dx\,dy\,dx$$
\begin{align}
-1 ≤ x ≤ 1 \\
2 - x^2 ≤ y ≤ x^2 \\
0 ≤ z ≤ y + 3 \\
\end{align}
When I solve the triple integral, though, I get a value of zero. My guess is that my limits of integration are wrong, but I need a nudge in the right direction!
|
The limits of integration on $y$ are reversed: for $-1\le x\le 1$ one has $x^2 \le 2-x^2$, so the region is $x^2\le y\le 2-x^2$ and the volume is
$$
V=\int _{-1}^1\int_{x^2}^{2-x^2}\int_0^{y+3} dz\,dy\,dx.
$$
(Writing the $y$-limits in the other order flips the sign of the inner integral, which is one way to end up with a wrong value.) Since the inner integral $\int_{x^2}^{2-x^2}(y+3)\,dy=8-8x^2$ is even in $x$, this equals
$$
V=2\int _{0}^1\int_{x^2}^{2-x^2}\int_0^{y+3} dz\,dy\,dx=2\int_0^1(8-8x^2)\,dx=\frac{32}{3}.
$$
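A numeric cross-check of this volume (midpoint rule in $x$; the grid size is an arbitrary choice):

```python
# Midpoint rule in x; the inner integral of (y + 3) dy is done in closed form.
n = 2000
h = 2.0 / n
total = 0.0
for i in range(n):
    x = -1.0 + (i + 0.5) * h
    y0, y1 = x * x, 2 - x * x          # x^2 <= y <= 2 - x^2 on [-1, 1]
    total += ((y1**2 - y0**2) / 2 + 3 * (y1 - y0)) * h

assert abs(total - 32 / 3) < 1e-4
```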
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Real Analysis: density of rationals and reals My textbook in real analysis proves that the rationals are dense in the reals by using first the archemidean principle and then constructing a fairly long and contrived proof.
What I am wondering is: could you not first show that the sum of two rationals is rational and that a rational divided by a nonzero rational is rational (the sum can be done by contradiction, and ditto for the product of two rationals), and then just point out that the arithmetic average of any two unequal rational numbers is guaranteed to be in between them? Proving that step should be straightforward: assume wlog $x<y$; then
$y+x>2x$, so
$(x+y)/2>x$, and similarly $(x+y)/2<y$.
And then this is similar if $x$ and $y$ are in general real, provided that one uses the axiom that addition and division are closed operations.
Can someone point out why this is no rigorous if it isn't? Pardon the lack of latex... Still learning it! Cheers :)
|
You can do that if you know about decimal representations (yes, you know, but it looks like you are there building reals from the axioms).
If you were to define rationals as eventually periodic decimals and irrationals as non-periodic ones, then you would have, for irrational $\alpha=a_0.a_1a_2\ldots a_n\ldots$, that the rational $a_0.a_1a_2\ldots a_n999\ldots$ is $> \alpha$ but $< \alpha +1=(a_0+1).a_1a_2\ldots a_n\ldots$ (you should take into account here that two different decimal representations can represent the same number and avoid that).
With decimals, you can easily prove that between two rationals there is a rational and irrational and that between two irrationals there is a rational and irrational.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $\limsup\limits_{n\to\infty}a_n=+\infty$ and $\limsup\limits_{n\to\infty}b_n\in\mathbb{R}$, then $\limsup\limits_{n\to\infty}(a_n+b_n)=+\infty$? There is the following exercise in a book:
$\limsup \limits_{n \to \infty} (a_n + b_n) \leq \limsup \limits_{n \to \infty} a_n + \limsup \limits_{n \to \infty} b_n $
And the author(Kazuo Matsuzaka) writes as follows in his solution without proofs:
If $\limsup \limits_{n \to \infty} a_n = +\infty$ and $\limsup \limits_{n \to \infty} b_n = +\infty$, then $\limsup \limits_{n \to \infty} (a_n + b_n) = +\infty$.
If $\limsup \limits_{n \to \infty} a_n = +\infty$ and $\limsup \limits_{n \to \infty} b_n \in \mathbb{R}$, then $\limsup \limits_{n \to \infty} (a_n + b_n) = +\infty$.
Please tell me how to prove the above two statements if they are true.
Please tell me counter examples if the above two statements are false.
|
As stated, both claims are actually false, so they cannot be proved. The natural attempt via tail suprema breaks down because the supremum is only subadditive,
$$
\sup_{k\geq m}(a_k+b_k) \leq \sup_{k\geq m}a_k+\sup_{k\geq m}b_k,
$$
not additive, so one cannot bound $\sup_{k\geq m}(a_k+b_k)$ from below by $\sup_{k\geq m}a_k$.

Counterexample to the first statement: let $a_n=(-1)^n n$ and $b_n=-a_n$. The even-indexed terms of $(a_n)$ and the odd-indexed terms of $(b_n)$ tend to $+\infty$, so
$$
\limsup_{n\to\infty}a_n=\limsup_{n\to\infty}b_n=+\infty,
$$
but $a_n+b_n=0$ for all $n$, hence $\limsup_{n\to\infty}(a_n+b_n)=0$.

Counterexample to the second statement: let $a_n=n$ for even $n$ and $a_n=0$ for odd $n$, and let $b_n=-a_n$. Then $\limsup_{n\to\infty}a_n=+\infty$ and $\limsup_{n\to\infty}b_n=0\in\mathbb{R}$, yet again $a_n+b_n=0$ for all $n$, so $\limsup_{n\to\infty}(a_n+b_n)=0$.

Both statements do become true under an extra hypothesis, for instance if $(b_n)$ is bounded below: if $b_n\geq c$ for all $n$ and $a_{n_k}\to+\infty$ along a subsequence, then $a_{n_k}+b_{n_k}\geq a_{n_k}+c\to+\infty$, so $\limsup_{n\to\infty}(a_n+b_n)=+\infty$. Note that the inequality in the exercise is nevertheless trivially true in both of the stated cases, since its right-hand side is $+\infty$.
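One can also check the failure of the first claim numerically (the sequences are chosen purely for illustration): with $a_n=(-1)^n n$ and $b_n=-a_n$, the tail suprema of both sequences are unbounded, yet $a_n+b_n\equiv 0$.

```python
# Counterexample sequences: a_n = (-1)^n n, b_n = -a_n, for n = 1..N.
N = 10000
a = [(-1) ** n * n for n in range(1, N + 1)]
b = [-x for x in a]

# tail suprema of both sequences are large (both limsups are +infinity) ...
assert max(a[N - 100:]) >= 9000 and max(b[N - 100:]) >= 9000
# ... yet the sum is identically zero, so limsup (a_n + b_n) = 0
assert all(x + y == 0 for x, y in zip(a, b))
```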
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Endomorphisms: if $Im(g)$ is contained in $Im(f)$ then necessarily $g=f \phi$? Let $f$ and $g$ be endomorphisms of a vector space $V$.
If $Im(g)$ is contained in $Im(f)$ then does there necessarily exist an endomorphism $\phi$ of $V$ such that $g=f \phi$?
Do we also have that $Ker(g)$ contained in $Ker(f)$ implies the existence of an endomorphism $\phi$ such that $f=\phi g$?
|
Hint : The hypothesis already tells you that for all $v\in V$, $g(v)\in Im(f)$, i.e. there exists $w\in V$ such that $f(w)=g(v)$. Use this to construct first a function $\phi$ on a basis of $V$, and then extend that function to a linear map defined on $V$.
For the second part, first take a basis $\mathcal{B}_1$ of $\ker g$, and then extend it to a basis $\mathcal{B}$ of $V$. Then the complement $\mathcal{B}_2=\mathcal{B}\setminus \mathcal{B}_1$ is a linearly independent family, and $g(\mathcal{B}_2 )$ is a basis of the image of $g$. Extend it to a basis $\mathcal{B}'$ of $V$, and then you can simply define $\phi $ by putting $\phi(g(b))=f(b)$ for all $b\in \mathcal{B}_2$ and anything you want for the other vectors in your basis $\mathcal{B}'$. You can now check that $f=\phi\circ g$ by comparing the values on $\mathcal{B}$ : on $\mathcal{B}_1$ both are zero by hypothesis, on $\mathcal{B}_2$ they are equal by construction.
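A minimal concrete instance of the first part in $\mathbb R^2$ (the matrices are my own example): with $f$ the projection onto the $x$-axis and $Im(g)$ also contained in the $x$-axis, a $\phi$ with $g=f\phi$ can be written down directly.

```python
def matmul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f   = [[1, 0], [0, 0]]    # projection onto the x-axis
g   = [[2, 3], [0, 0]]    # image also contained in the x-axis
phi = [[2, 3], [0, 0]]    # chosen so that f(phi(v)) = g(v) for all v

assert matmul(f, phi) == g
```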
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Symplectic group $Sp(n)$ acts transitively on the unit Sphere $S^{4n-1}$ I'm trying to prove that the symplectic group $Sp(n)$ acts transitively on the sphere $S^{4n-1}$, and as a consequence $Sp(n)/Sp(n-1)$ is homeomorphic to $S^{4n-1}$. To me $Sp(n)$ is the group of $2n\times 2n$ unitary complex matrices satisfying $AJ=J\bar A$ where J is the matrix
\begin{bmatrix}
0 & -I_n\\
I_n & 0
\end{bmatrix}
It is clear to me that these matrices map the sphere into the sphere. To prove transitivity it is enough to show that any vector in the sphere can be mapped to for example the vector $(1,0,\dots,0)$ via multiplication with a symplectic matrix, and to prove that $Sp(n)/Sp(n-1)$ is homeomorphic to $S^{4n-1}$ it is enough to show that the stabilizer of $(1,0,\dots,0)$ is precisely $Sp(n-1)$.
Well, this is the part in where I'm stuck. I don't know how to construct a symplectic matrix mapping a vector in the sphere to $(1,0,\dots,0)$ or to deduce that $Sp(n-1)$ is the stabilizer of that point. I'd appreciate any help with this because everywhere I read these fact are presented as obvious.
[EDIT] Probably it is easier to send the vector $(1,0,\dots,0)$ to any other $x$. In that case I need to construct a symplectic matrix whose first column is $x$.
|
First, let's figure out a nice description of $Sp(n)$. For $A\in Sp(n)$, write it in the block form $A = \begin{bmatrix} B & C\\ D & E\end{bmatrix}$ where each block is $n\times n$. Then a simple calculation shows that $AJ = J\overline{A}$ iff $A$ has the form $A = \begin{bmatrix} B & -\overline{D}\\ D & \overline{B}\end{bmatrix}$.
Now, given $x = x_1\in \mathbb{C}^{2n}$ of unit length, we extend it to an orthonormal set as follows. Pick $x_2$ to be a unit length vector in the orthogonal complement to $\{x_1, Jx_1\}$, pick $x_3$ to be a unit length vector in the orthogonal complement to $\{x_1, Jx_1, x_2, Jx_2\}$, etc.
If we write $x_i = \begin{bmatrix} y_i \\ z_i\end{bmatrix}$, where both $y_i, z_i \in \mathbb{C}^n$, then the way we choose the $x_i$ guarantees that $\begin{bmatrix} y_i \\ z_i\end{bmatrix}$ is perpendicular to both $\begin{bmatrix} y_j \\ z_j\end{bmatrix}$ (when $j\neq i$) and $\begin{bmatrix} -z_j \\ y_j\end{bmatrix}$ (for any $j$).
Now, we choose $B$ and $D$ so that the block $2n\times n$ matrix $\begin{bmatrix} B\\ D\end{bmatrix} = \begin{bmatrix} x_1 & x_2 & ... & x_n\end{bmatrix}$.
I claim that $A = \begin{bmatrix} B & -\overline{D} \\ D & \overline{B}\end{bmatrix}$ is actually in $U(2n)$. First, since the blocks on the right are just rearrangements and conjugates of things on the left, it's clear that every column of $A$ has unit length. So, it is enough to show that the columns are pairwise perpendicular. This is obvious if both columns come from the left blocks or if both come from the right blocks. The fact that $x_i$ is perpendicular to $Jx_j$ shows it when one column comes from the left block and the other from the right block.
Now, simply note that $A\begin{bmatrix}1 \\ 0 \\ \vdots \\ 0\end{bmatrix} = x$.
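For $n=1$ (the quaternionic case $Sp(1)\cong S^3$) the construction can be checked directly on random unit vectors; the sampling is an arbitrary choice:

```python
import random

# n = 1 case: for a unit vector x = (y, z) in C^2, the matrix
# A = [[y, -conj(z)], [z, conj(y)]] is unitary, satisfies A J = J conj(A)
# with J = [[0, -1], [1, 0]], and sends e_1 to x.
random.seed(2)
for _ in range(100):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    norm = (abs(v[0]) ** 2 + abs(v[1]) ** 2) ** 0.5
    y, z = v[0] / norm, v[1] / norm
    A = [[y, -z.conjugate()], [z, y.conjugate()]]
    # the two columns are orthogonal (and unit length), so A is in U(2)
    dot = A[0][0] * A[0][1].conjugate() + A[1][0] * A[1][1].conjugate()
    assert abs(dot) < 1e-12
    # the symplectic condition A J = J conj(A)
    AJ  = [[A[0][1], -A[0][0]], [A[1][1], -A[1][0]]]
    JcA = [[-A[1][0].conjugate(), -A[1][1].conjugate()],
           [ A[0][0].conjugate(),  A[0][1].conjugate()]]
    assert all(abs(AJ[i][j] - JcA[i][j]) < 1e-12 for i in range(2) for j in range(2))
    # A e_1 = x
    assert A[0][0] == y and A[1][0] == z
```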
$ \ $
Now, let's compute the stabilizer at the point $p = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0\end{bmatrix}$. If $Ap = p$, then it follows that the first column of $A$ is $p$, so the first column of $D$ is all $0$s, as is the first column of $B$, except that the top entry in $B$ is $1$.
Because $A\in U(2n)$, if the top left entry is $1$ the rest of the entries in the top row must be $0$.
It follows that $B$ has the form $B = \begin{bmatrix} 1 & 0 \ldots 0 \\ \begin{array}{c} 0 \\ \vdots \\ 0\end{array} & B'\end{bmatrix}$ and $D$ has the form $D = \begin{bmatrix} 0 & 0\ldots 0\\ \begin{array}{c} 0\\ \vdots \\ 0\end{array} & D'\end{bmatrix}$ where $B'$ and $D'$ are both $(n-1)\times (n-1)$ matrices. If we set $A' = \begin{bmatrix} B' & -\overline{D'}\\ D' & \overline{B'}\end{bmatrix}$, it now follows easily that $A'\in Sp(n-1)\subseteq U(2(n-1))$. This shows that the stabilizer at $p$ is contained in $Sp(n-1)$, embedded into $Sp(n)$ as shown.
Conversely, since the first column of any matrix in $Sp(n-1)\subseteq Sp(n)$ is $\begin{bmatrix} 1 \\0 \\ \vdots \\ 0\end{bmatrix}$, $Sp(n-1)$ is a subset of the isotropy group at $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Convex function and conditional expectation I have the following question. I hope someone has encountered this or can point me in a direction.
Probability space $(\Omega,\mathcal{F}, \mathbb{P})$. Let $X:\Omega\to\mathbb{R}$ be a random variable and $f:\mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a function. Suppose we have a sub-sigma-algbra $\mathcal{G}\subset\mathcal{F}$.
If $\mathbb{E}[f(X, y)]$ is convex in y, is it true that
$$\mathbb{E}[f(\mathbb{E}[X : \mathcal{G}], y)]$$
is also convex in $y$?
If not, what conditions do we need to impose to ensure convexity?
Thank you.
|
A simple counterexample. Let $X\sim N(0,1)$ and $f(x,y)=(x^2-1)y^2$. It follows that
$$
E\left [f(X,y)\right ]=0,
$$
which is convex in $y$. If $\mathcal{G}=\{\varnothing, \Omega\}$,
$E(X|\mathcal{G})=EX=0$ and so
$$
E\left [f(E(X|\mathcal{G}),y)\right ]=-y^2,
$$
which is not convex. Unfortunately, I have no idea what assumptions should be imposed to ensure the convexity.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Tossing coins, one fair, one unfair: how to represent the probability tree? Suppose I have a fair coin and an unfair coin. The fair coin has head, tail, the unfair coin has both heads.
You pick one coin at random, toss it two times and observe the outcomes.
Which of the two figures below (top or bottom) is a better probability tree representation of these experiments?
I am raising the question because, in the first figure, it seems to draw the head twice is redundant, given that the probability of getting a head is certainty.
|
I think you may have interpreted it wrong: for a generic unfair coin, the probability of getting a head is merely greater than or less than one half, not necessarily equal to one half, and it may not be equal to one either. Your second tree emphasizes only one condition, namely an unfair coin that is all heads with probability $1$, which does not seem correct to me in general. This is just my thought; maybe you are also right.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2451867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Approximation for the sum of primes I have attempted to put together an approximation for the sum of primes.
I've used the much simplified $$\operatorname{li}(x)=\frac{x}{\log(x)-1}$$ combined with $$\frac{x}{2}$$ to give:
$$\frac{x^2}{2(\log(x)-1)}$$
The only thing is it is not accurate so:
1) I wonder if I've gone wrong? or does it get accurate with numbers $> 10000$?
2) Suggestions for better approximations but not depending on many iterations.
3) $$x/ \log x$$ pointed out by @mixedmath is the usual way to introduce the prime number theorem but wouldn't using the approximation to $\operatorname{li}(x)$ above be better with $x/2$?
|
I'm not sure why you're using $x/(\log x - 1)$ instead of $x/ \log x$ (which is the form appearing in the standard statement of the prime number theorem), but if you were to use $x/\log x$ you would actually get the correct asymptotic: $x^2 / 2 \log x$. This is a pretty slowly converging asymptotic: the error term is on the order of $O(x^2/\log^2 x)$.
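As a rough numeric check, here is a small sketch (my own, not part of the answer): it sums the primes up to $x$ with a sieve and compares with the asymptotic $x^2/(2\log x)$.

```python
# Sketch (not from the answer): compare the true sum of primes up to x
# with the asymptotic x**2 / (2*log(x)). The convergence is visibly slow.
import math

def prime_sum(x):
    """Sum of all primes <= x, via a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(i for i, is_p in enumerate(sieve) if is_p)

for x in (10 ** 3, 10 ** 4, 10 ** 5):
    actual = prime_sum(x)
    approx = x ** 2 / (2 * math.log(x))
    print(x, actual, round(approx), round(approx / actual, 3))
```

Even at $x=10^5$ the ratio is still a few percent away from $1$, consistent with a relative error of size $O(1/\log x)$.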
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why does $arg(z^{2})\neq 2arg(z)$? I was reading a text that i found about the argument of a complex number (http://scipp.ucsc.edu/~haber/ph116A/arg_11.pdf) but I dont truly understand the proof given in that text about why is it that $arg(z^{2})\neq 2arg(z)$, so if it is true, can you give me an idea of why does it happen, because most of the complex anaylisis textbooks say that this property is always true,$arg(z_{1}z_{2})=arg(z_{1})+arg(z_{2})$ if $z_{1},z_{2}$ are not zero, but if what I asked is true it means that it is false for $z_{1}=z_{2}$. I am really confused about this.
|
The property $\arg(z_1z_2)=\arg(z_1)+\arg(z_2)$ is not true in general.
Assuming $\arg$ takes its values on $[0,2\pi)$, then what happens when $z_1=z_2=e^{\frac{3}{2}\pi i}=-i$?
If by $\arg$ the multivalued function is meant, then it depends on how you define $2\arg(z)$.
More generally, one can define $A+B$ for two subsets of the complex numbers (or, more generally, subsets of an additive group) by
$$
A+B=\{a+b:a\in A,b\in B\}
$$
Now, generally,
$$
2A=\{2a:a\in A\}\ne A+A\tag{*}
$$
The inclusion $2A\subseteq A+A$ is easy to see. For the other, one needs that, for every $a,b\in A$, there exists $c\in A$ with $a+b=2c$. This can happen or not, depending on the set $A$. For instance, if $A=\mathbb{Q}$ is the set of rational numbers, then $2\mathbb{Q}=\mathbb{Q}+\mathbb{Q}$; but if $A=\mathbb{Z}$ is the set of integers, then $\mathbb{Z}+\mathbb{Z}=\mathbb{Z}$, but $2\mathbb{Z}$ is the set of even integers.
In case the set $A$ is described as
$$
A=\{a_0+nc:n\in\mathbb{Z}\}
$$
where $a_0$ and $c$ are fixed complex numbers (in your case $c=2\pi i$), the equality
$$
2A=A+A
$$
doesn't hold. Indeed
$$
2A=\{2a_0+2nc:n\in\mathbb{Z}\}
$$
whereas $(a_0+0c)+(a_0+1c)=2a_0+c\in A+A$ and
$$
2a_0+c=2a_0+2nc
$$
cannot be satisfied with integer $n$.
Of course this could be easily repaired by defining
$$
2A=A+A
$$
instead of the naïve definition (*).
Why your textbook is insisting on this is unknown to me, but as you see it has nothing to do with arg, and is rather a property of addition.
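For a concrete illustration, here is a small sketch (mine, not from the answer). Python's `cmath.phase` returns the principal value in $(-\pi,\pi]$, so we first normalise to the $[0,2\pi)$ convention used above; the failure for $z=-i$ is then visible (here $z^2=-1$ is entered directly to avoid floating-point signed-zero artifacts):

```python
# Sketch: with arg normalised to [0, 2*pi) as in the answer above,
# arg(z^2) = 2*arg(z) fails for z = -i.
import cmath, math

def arg(z):
    """Argument of z normalised to the interval [0, 2*pi)."""
    return cmath.phase(z) % (2 * math.pi)

z = complex(0, -1)       # z = -i, so arg(z) = 3*pi/2
z_sq = complex(-1, 0)    # z^2 = -1, so arg(z^2) = pi
print(arg(z_sq), 2 * arg(z))   # pi versus 3*pi: not equal, they differ by 2*pi
```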
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Two $p$-norms are not equivalent for different $p$ on $\ell_1$ Given $1\le p < r < \infty$, prove that $\|\cdot\|_p$ and $\|\cdot\|_r$ are not equivalent.
The approach I was trying is as follows:
Want to show that there do not exist $m,M>0$ such that $$m\|x\|_r\le\|x\|_p\le M\|x\|_r$$
for all sequences $(x_k)\in \ell_1$.
We know that every sequence in $\ell_1$ also lies in $\ell_p$ for $p\ge 1$. The idea, I think, is to show that we can always have a convergent sequence whose $p$-norm is bigger than its $r$-norm, regardless of how large our $M$ is. This would imply that there is no upper bound on the right-hand side of the above inequality.
The problem I'm having is how to formalize this. One might think of an upper bound for the right-hand side as follows:
* Define a sequence $M_n := \frac{\left(\sum\limits_{k=1}^\infty |x_k|^r \right)^{1/r}}{\left(\sum\limits_{k=1}^\infty |x_k|^p \right)^{1/p}}$, where $n:=\lceil\|x\|_1\rceil$, and then show that $M_n$ diverges.
But I don't think this is a nice way. So I'd appreciate some hints on this.
|
Let $x_n\in l_1$ where the first $n$ co-ordinates of $x_n$ are each equal to $1$ and the remaining co-ordinates are all $0.$ Let $1\leq p_1<p_2<\infty.$
For brevity let $q=(1/p_1+1/p_2)/2$ and $r=(1/p_1-1/p_2)/2 .$
Let $y_n=x_n/n^q.$ Then $$\|y_n\|_{p_1}=n^r\geq 1 \; \text { and }\; \|y_n\|_{p_2} =n^{-r}\to 0 \;\text { as }\; n\to \infty.$$ In the $l_{p_2}$ norm $(y_n)_n$ converges to the vector $0$ but in the $l_{p_1}$ norm $0\not \in Cl(\{y_n\}_n)$ so the topologies are different.
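The construction can be illustrated numerically. A small sketch (my own) for the case $p_1=1$, $p_2=2$, where $q=3/4$ and $r=1/4$:

```python
# Sketch for p1 = 1, p2 = 2: y_n has n coordinates, each equal to n**(-3/4),
# so ||y_n||_1 = n**(1/4) grows while ||y_n||_2 = n**(-1/4) tends to 0.
def norms(n):
    y = [n ** (-0.75)] * n
    norm1 = sum(abs(t) for t in y)
    norm2 = sum(t * t for t in y) ** 0.5
    return norm1, norm2

for n in (1, 16, 256, 4096):
    print(n, norms(n))
```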
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Nonlinear Programming by Wiley 3rd ed, 1.2 a,b (I don't understand obj function) The following is from page 30, chapter 1 of Wiley's Nonlinear Programming: Theory and Algorithms (third edition). This is for a graduate course in Optimization Theory. My problem is that I do not understand how they came up with the objective function from the given information.
[1.2] Suppose that the daily demand for product $j$ is $d_j$ for $j=1,2$. The demand should be met from inventory, and the latter is replenished from production whenever the inventory reaches zero. Here, the production time is assumed to be insignificant. During each production run, $Q_j$ units can be produced at a fixed setup cost of $k_j$ and a variable cost of $c_j Q_j$. Also, a variable inventory-holding cost of $h_j$ per unit per day is also incurred, based on the average inventory. Thus, the total cost associated with product $j$ during $T$ days is $T d_j k_j/Q_j + T c_j d_j + T Q_j h_j/2$. Adequate storage area for handling the maximum inventory $Q_j$ has to be reserved for each product $j$. Each unit of product $j$ needs $s_j$ square feet of storage space, and the total space available is $S$.
a. We wish to find optimal production quantities $Q_1$ and $Q_2$ to minimize the total cost. Construct a model for this problem.
b. Now suppose that shortages are permitted and that production need not start when inventory reaches a level of zero. During the period when inventory is zero, demand is not met and the sales are lost. The loss per unit thus incurred is $l_j$. On the other hand, if a sale is made, the profit per unit is $P_j$. Reformulate the mathematical model.
I have fooled around with this function a little, but have still yet to understand it. The subscripted quantities are clearly vectors. Something strange here is that the quantity $c_j Q_j$ is given meaning in the description but does not appear in the objective function. Why is the third term divided by 2? Is that because half of production is for product 1 and the other half is for product 2 (that doesn't make sense)?
I am not principally concerned with even answering the questions yet, I want to understand the premise first. Can anyone explain this, please?
|
I am mostly comfortable with the objective function now, and propose the following solution:
Let $C(Q_j)=T[d_j k_j/Q_j + c_j d_j + Q_j h_j/2]$, under the (possibly erroneous) assumption that only $Q_j$ varies and the rest of the unknowns are constant. Under the simplification that all other unknowns are constant, let $a=Td_j k_j$, $b=Th_j/2$, $c=T c_j d_j$. Then
$C(Q_j)=a/Q_j + bQ_j + c$.
Differentiating and setting the derivative equal to zero, we have
$C'(Q_j)=-a(Q_j)^{-2} + b = 0$
$\implies (Q_j)^{2} = a/b$
$\implies Q_j = \pm\sqrt{a/b}$
So then $Q_j = \pm \sqrt{\frac{Td_jk_j}{Th_j/2}}$
In this context, $Q_j \approx \sqrt{\frac{2d_jk_j}{h_j}}$.
Presumably, the information given after the objective function was provided for use in Part b. I'm still thinking about that solution.
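As a sanity check on the closed form, here is a sketch with made-up data (the numbers $T,d,k,c,h$ below are hypothetical, not from the exercise): a grid search over $Q$ should land on $Q^*=\sqrt{2d_jk_j/h_j}$.

```python
# Sketch: minimise C(Q) = T*(d*k/Q + c*d + h*Q/2) numerically and compare
# with the closed form Q* = sqrt(2*d*k/h) derived above. All data are made up.
import math

T, d, k, c, h = 1.0, 100.0, 50.0, 2.0, 2.0   # hypothetical values

def cost(Q):
    return T * (d * k / Q + c * d + h * Q / 2)

grid = [0.01 * i for i in range(1, 20000)]   # Q ranging over (0, 200)
best_Q = min(grid, key=cost)
closed_form = math.sqrt(2 * d * k / h)       # sqrt(5000) ~ 70.71
print(best_Q, closed_form)
```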
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A Hard Inequality Given that $x,y,z$ are positive real numbers such that $2x+4y+7z=2xyz$, find the minimum of $L=x+y+z$.
Does anybody have a solution that is purely algebraic?
I was only able to solve it with Lagrange multipliers.
Also, how would you show that the solution given by Lagrange multipliers is in fact a global solution?
Note: By a change of variables, this is equivalent to minimizing $$L=a+b+c-\frac{3}{2}$$ subject to $$2 a b c = a + 4 b + 2 a b + 7 c + a c - 9$$
where $a>0,b>\frac{1}{2},c>1$.
$L$ is minimized when $a=b=c=3$ and $L=7.5$.
Source: https://brilliant.org/problems/another-weird-inequality/
(I did not write this question)
|
For $x=3$, $y=2.5$ and $z=2$ we get the value $7.5$.
We'll prove that it's a minimal value.
Indeed, let $x=3a$, $y=2.5b$ and $z=2c$.
Thus, the condition gives
$$3a+5b+7c=15abc$$ and we need to prove that
$$6a+5b+4c\geq15$$ or
$$(6a+5b+4c)^2(3a+5b+7c)\geq15^3abc,$$ which is true by AM-GM:
$$(6a+5b+4c)^2(3a+5b+7c)\geq\left(15\sqrt[15]{a^6b^5c^4}\right)^2\cdot15\sqrt[15]{a^3b^5c^7}=15^3abc.$$
Done!
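A numeric check of the result (my own sketch, not part of the proof): for $2xy>7$ the constraint can be solved for $z=(2x+4y)/(2xy-7)$, and a grid scan over $(x,y)$ finds the minimum $7.5$ at $(3, 2.5, 2)$:

```python
# Sketch: scan feasible (x, y) with z = (2x+4y)/(2xy-7) > 0 and record
# the smallest value of x + y + z. The grid contains the point (3, 2.5).
def L(x, y):
    z = (2 * x + 4 * y) / (2 * x * y - 7)
    return x + y + z

best_val, best_x, best_y = min(
    (L(x, y), x, y)
    for x in [1 + 0.02 * i for i in range(201)]      # x in [1, 5]
    for y in [0.5 + 0.02 * j for j in range(201)]    # y in [0.5, 4.5]
    if 2 * x * y > 7
)
print(best_val, best_x, best_y)   # ~7.5 at x ~ 3, y ~ 2.5
```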
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Find all roots of $2x^3+16$ in $\mathbb C$
Find all roots of $p(x) = 2 x^3 + 16$ in $\mathbb C$.
I found my answers to be x = 2, -1+i$\sqrt{3}$, -1-i$\sqrt{3}$.
But when I put the expression into Symbolab, it gives me -2, 1+i$\sqrt{3}$, 1-i$\sqrt{3}$ as roots of p(x) in C.
Can someone explain where I went wrong?
This is how I did it;
First, I rewrite p(x) and get
$2x^3+16=0$
$2x^3=-16$
$x^3=-8$ ------ (1)
Express (1) in Euler form
$x=re^{\theta i}$
$x^3=r^3e^{3\theta i}$
Let w = -8
$ |w| = \sqrt{(-8)^2} = 8$
$\theta = tan^{-1}(0) = 0$
$w = 8e^{0\theta}$
Now I have
$r^3e^{3\theta i} = 8e^{0\theta}$
Equate the modulus and argument
$r^3 = 8$
$r = 2$
$3\theta = 0+2\pi k$ ------- for k $\in Z$
$\theta = {2\pi k \over 3}$
Now I have
$ x = 2(cos {2\pi k \over 3} + i sin {2\pi k \over 3})$
Calculating x by substituting k = 0,1,2
k = 0, x = 2
k = 1, x = 2(${-1 \over 2} + i {\sqrt{3} \over 2}$) = -1+i${\sqrt{3}}$
k = 2, x = 2(${-1 \over 2} - i {\sqrt{3} \over 2}$) = -1-i${\sqrt{3}}$
So all roots of p(x) in C are 2, -1-i${\sqrt{3}}$, -1+i${\sqrt{3}}$
I would really appreciate it if someone could help me out.
|
Another way you can calculate this is to use De Moivre's theorem, which states that if $z = r(\cos \theta + i\sin \theta)$, the $n$th roots of $z$ are $$r^{1/n}\left(\cos\frac{\theta+2\pi k}{n} + i \sin \frac{\theta+2\pi k}{n}\right)$$
From $(1)$, which is $x^3=-8$, we find that $z = 8(\cos \pi + i \sin \pi)$, so $r=8$, $\theta=\pi$ (since $-8$ lies on the negative real axis), and $n=3$ (we are finding the $3$rd roots of $z$). Substituting these values into the formula, we have:
First root: $8^{1/3}(\cos \frac{\pi+2\pi *0}{3} + i \sin \frac{\pi+2\pi *0}{n}) = 2(\cos \frac{\pi}{3} + i\sin \frac{\pi}{3}) = 1+i\sqrt{3}$
Second root: $8^{1/3}(\cos \frac{\pi+2\pi *1}{3} + i \sin \frac{\pi+2\pi *1}{3}) = 2(\cos\pi+i \sin \pi) = -2$
Third root: $8^{1/3}(\cos \frac{\pi+2\pi *2}{3} + i \sin \frac{\pi+2\pi *2}{3}) = 2(\cos \frac{5\pi}{3} + i\sin\frac{5\pi}{3}) = 1-i\sqrt{3}$
Use the fact that the $n$th roots of $z$ consist of exactly $n$ complex numbers. Have we found all the complex $3$rd roots of $-8$ yet?
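The three roots from De Moivre's formula can be checked numerically; a small sketch (mine, not part of the answer):

```python
# Sketch: the three cube roots of -8 from De Moivre's formula with
# r = 8, theta = pi, n = 3; each should satisfy 2*x**3 + 16 = 0.
import cmath, math

roots = [2 * cmath.exp(1j * (math.pi + 2 * math.pi * k) / 3) for k in range(3)]
for x in roots:
    print(x, abs(2 * x ** 3 + 16))   # residual should be ~0
```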
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Show determinant is non-negative Let $A,B \in M_{2}(\mathbb R)$ . Show that $\det((AB+BA)^4 + (AB-BA)^4)\geq 0$
My attempt: expression becomes $\det(2(M-N)^2+16MN)$ where $M=(AB)^2$ and $N=(BA)^2$.
Not sure how to continue from here.
Any hints appreciated.
|
Let $d=\det(AB-BA)$ and $\lambda_1,\lambda_2$ be the two eigenvalues of $AB+BA$. Since $X^2=-\det(X)I_2$ and in turn $X^4=\det(X)^2I_2$ for any traceless $2\times2$ matrix $X$, we get
$$
\det\left[(AB+BA)^4 + (AB-BA)^4\right]
=\det\left[(AB+BA)^4 + d^2I_2\right]
=(\lambda_1^4+d^2)(\lambda_2^4+d^2).
$$
As $AB+BA$ is real, either $\lambda_1$ and $\lambda_2$ are both real or they are complex conjugates to each other. In either case, the assertion follows immediately.
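A randomized sanity check of the claim (my own sketch; it complements rather than replaces the proof), with hand-rolled $2\times 2$ matrix arithmetic:

```python
# Sketch: for random 2x2 real A, B, det((AB+BA)^4 + (AB-BA)^4) >= 0
# up to floating-point noise.
import random

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def pow4(X):
    X2 = mul(X, X)
    return mul(X2, X2)

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

random.seed(0)
for _ in range(1000):
    A = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    B = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    AB, BA = mul(A, B), mul(B, A)
    d = det(add(pow4(add(AB, BA)), pow4(sub(AB, BA))))
    assert d >= -1e-6, d
print("1000 random trials: determinant always nonnegative")
```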
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Solving linear differential equation $y'+\frac{1}{3}sec(\frac{t}{3})y=4cos(\frac{t}{3})$ using integrating factor Given \begin{array}{l} y^{\prime} +\dfrac{1}{3}\,\sec\left(\dfrac{t}{3}\right) y=
4\, \cos\left(\dfrac{t}{3}\right) \\ y(0)=3 \end{array} where $ 0<\dfrac{t}{3}<\dfrac{\pi}{2}$, I must find the general solution $y(t)$. The problem I keep running into is when I am trying to find the integrating factor. I'm wondering if there is a way for me to simplify the integrating factor in such a way that I am able to get the differential equation into the form $(\mu(t)y(t))'=\mu(t)g(t)$ so that I can integrate both sides? Thanks!
|
compute $$\mu(t)=e^{\int\frac{1}{3}\sec(t/3)dt}=\frac{\sin(t/6)+\cos(t/6)}{\cos(t/6)-\sin(t/6)}$$
and multiply both sides with $$\mu(t)$$
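A quick numeric verification (my own sketch) that this $\mu$ really satisfies $\mu'(t)=\tfrac13\sec(t/3)\,\mu(t)$, using central differences at a few sample points inside $0<t/3<\pi/2$:

```python
# Sketch: check mu'(t) = (1/3)*sec(t/3)*mu(t) numerically at sample points.
import math

def mu(t):
    return (math.sin(t / 6) + math.cos(t / 6)) / (math.cos(t / 6) - math.sin(t / 6))

h = 1e-6
for t in (0.3, 0.8, 1.2):
    numeric = (mu(t + h) - mu(t - h)) / (2 * h)    # central difference
    exact = mu(t) / (3 * math.cos(t / 3))          # (1/3) sec(t/3) * mu(t)
    print(t, numeric, exact)
```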
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2452936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How do I find the limit of the function? How do I find the limit for the function $(1 + h)^{\frac{1}{h}}$ as $h$ goes to $0$? I do not know where to start. We just started using Logs.
|
Write $$(1+h)^{\frac{1}{h}}=e^{\frac{1}{h} \ln{(1+h)}}$$
So what is the limit $\frac{1}{h}\ln{(1+h)}$ as $h \to 0$?
Take the function $f(h)=\ln(1+h)$ and we have $f(0)=0$
Then $$\lim_{h \to 0} \frac{\ln(1+h)}{h}=\lim_{h \to 0}\frac{f(h)-f(0)}{h-0}=f'(0)=1$$
Thus the general limit is $e$
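A quick numeric illustration (my own sketch) of the limit:

```python
# Sketch: (1 + h)**(1/h) approaches e as h -> 0.
import math

for h in (1.0, 0.1, 0.01, 1e-4, 1e-6):
    print(h, (1 + h) ** (1 / h))
print("e =", math.e)
```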
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
A question about analytic geometry (vectors) Let $a , b, c$ be three vectors such that $a+3b+c = o$ where $o$ is $(0,0,0)$ vector. we also know $|a| = 3 , |b| = 4 , |c| =6$. we want to find $a.b + a.c + b.c$
I know we can solve it by saying $a+b+c = -2b$ and then squaring the sides. and at last we get the answer 3/2 . But my problem is that is it even possible for $|a| = 3 , |b| = 4 , |c| =6$ to be such that $a+3b+c = o$ ? I think it is not possible because of triangle inequality. If it is possible, how? and If it is not, why do we get an answer for it?
|
As Lord Shark of the Unknown pointed out in the comments, it's not possible to have three such vectors sum up to zero. But nevertheless the calculation is possible: it is just a series of logical passages which do not yet lead to a contradiction, but surely enough, if you continued to use these data in further deductions you might end up with some absurdity.
It's like saying, "let's assume that there exists a number $x$ such that $|x|>1$ and $2x=0$." You still can deduce from the first piece of data that $|2x|>2$ or that $|x|+3>4$...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
how is the minus sign understood in set theory, is it similar to the complement (\) function. Assume:
$A = \{1,2,3\},$
$B = \{2,3,4\}$
Is $A - B = \{1\}$, or is it $\{1\}$ plus the piece of $\{4\}$ that you 'owe', assuming in Venn Diagram you are subtracting a piece of $B$ from $A$ itself that do not contain the element '4'. (Is it fair to visualise sets in Venn Diagrams?)
Or do we just equate the minus ($-$) sign to the set-difference ($\setminus$) operation?
|
$A-B$ is alternative notation for $A \setminus B$. They both mean the elements that are in $A$ but not in $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Distributing balls in boxes There are five balls of identical sizes but different colors. One of the balls is red, one is blue, one is green and the other two are yellow. Moreover, there are three boxes which are numbered 1, 2 and 3. There are two more boxes but both of them are numbered 4. In how many different ways can we place the five balls in the given five boxes such that each box contains exactly one ball?
|
One way to do it is to "temporarily mark" the identical balls and boxes so that all balls and boxes are distinguishable. That is, we imagine that we put a sticker on one of the yellow balls and on one of the boxes labeled $4.$
With the stickers, we have five distinguishable balls in five boxes
and can apply the known formula for that case.
Now we ask what happens when we remove the stickers. We will "lose" some arrangements because arrangements that were formerly distinguishable are no longer distinguishable.
For example, for the arrangement
$$ (1,R), (2,B), (3,Y), (4,Y), (4,G), $$
without the stickers we cannot tell which of the yellow balls is in box $3$
and we cannot tell which "box $4$" holds the green ball.
For most of the distinguishable arrangements that remain after we remove the stickers, there were four arrangements with stickers;
we can count these by choosing which yellow ball has a sticker and which "box 4" has a sticker.
There are some arrangements where this four-to-one ratio is not true, however:
if both yellow balls are in boxes labeled $4,$ there are only two ways to distinguish arrangements using the two stickers: put the ball with the sticker in the box with the sticker, or put it in the other box.
So you can add up your final answer as follows:
from all the arrangements of five distinguishable balls in five
distinguishable boxes, take half of the arrangements that put both
yellow balls in the boxes labeled $4,$ and add one quarter of the other arrangements.
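The bookkeeping can be confirmed by brute force; a small sketch (mine, not part of the answer) that enumerates assignments of the balls $R,B,G,Y,Y$ to the boxes $1,2,3,4,4$, identifying arrangements that differ only by swapping the two yellows or the two boxes labeled $4$:

```python
# Sketch: count distinct assignments directly. Expected: 12/2 + 108/4 = 33.
from itertools import permutations

balls = ['R', 'B', 'G', 'Y', 'Y']
distinct = {
    (p[0], p[1], p[2], tuple(sorted(p[3:])))  # boxes 1,2,3, then the unordered pair in the 4s
    for p in permutations(balls)
}
print(len(distinct))   # 33
```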
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
The union of two open sets is open (In metric Spaces) Let $X$ a set not empty and $(X,d)$ a metric space. Prove he union of two open sets is open.
My proof:
Let $A_1,A_2$ open sets, we need to prove $A_1\cup A_2$ is open.
As $A_1,A_2$ are open set, then for all $a_1,a_2\in A_1,A_2$ respectively we have $r_1,r_2>0$ such that $B(a_1,r_1)\subset A_1$ and $B(a_2,r_2)\subset A_2$
Let $r=\frac{1}{2}min\{r_1,r_2\}$ and $x\in A_1\cup A_2$, then $x \in A_1$ or $x \in A_2$
This implies: $B(x,r)\subset A_1\cup A_2$
In conclusion, $A_1\cup A_2$ is open set.
Note: My definition of open is: $A$ is open if for all $x\in A$ there exists $r>0$ such that $B(x,r)\subset A$
What is your opinion about my proof? Do you think is a good proof? Is convincing?
|
Considering $a_1,a_2$ is confusing. In fact, the union of any collection of open sets is open. Let $A=\bigcup_t A_t$, where all $A_t$ are open. Let $x\in A$. So, there is at least one $t$ for which $x\in A_t$. Therefore for some $r>0$ we have $B(x,r)\subset A_t\subset\bigcup_t A_t=A$.
This proof works in any topological space, not necessarily metric. It is enough to replace a ball $B(x,r)$ with an open neighbourhood of $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Why is $\left\{ n\in \mathbb{N}:n^2 \right\}$ nonsense, $ $ but $\left\{ n^2: n\in \mathbb{N} \right\}$ correct? Why is $\left\{ n\in \mathbb{N}:n^2 \right\}$ nonsense, $ $ but $\left\{ n^2: n\in \mathbb{N} \right\}$ correct?
From my understanding, $\left\{ n\in \mathbb{N}:n^2 \right\}$ should be read as:
"The set of natural numbers such that each natural number is multiplied to itself."
What's wrong with this?
Why is it not equivalent to $\left\{ n^2: n\in \mathbb{N} \right\}$? $ $ which is read as:
"The set of $n^2$'s such that $n$ is a natural number."
|
To confirm what everybody else has said, after the colon, you need a sentence that may or may not be true. Since there’s no verb in what follows the colon, your formulation is bad grammar.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
Given a Markov chain $X \rightarrow Y \rightarrow Z$, under what condition $I(X;Y) = I(X;Z)$ A theorem (The Data Processing Inequality) states that
if $X \rightarrow Y \rightarrow Z$, then $I(X;Y) \geq I(X;Z)$
Question: I was wondering under what conditions $I(X;Y) = I(X;Z)$?
The proof:
Using chain rule of mutual information, we have
\begin{align*}
I(X;Y,Z) &= I(X;Y) + I(X;Z|Y)\\
&= I(X;Z) + I(X;Y|Z)
\end{align*}
Rewriting the above equalities, and using that the Markov chain $X \rightarrow Y \rightarrow Z$ gives $I(X;Z|Y) = 0$, we have
\begin{align*}
I(X;Y) + I(X;Z|Y) &= I(X;Z) + I(X;Y|Z)\\
I(X;Y) &= I(X;Z) + I(X;Y|Z)\\
I(X;Y) &\geq I(X;Z)
\end{align*}
Since $I(X;Y|Z)\geq 0$, obtaining $I(X;Y) = I(X;Z)$ requires $I(X;Y|Z)=0$,
\begin{align*}
I(X;Y|Z) &= 0\\
D_{\mathrm{KL}}[p(X,Y,Z) \| p(X|Z) p(Y|Z) p(Z)] &= 0\\
p(X,Y,Z) &= p(X|Z) p(Y|Z) p(Z)
\end{align*}
Would it be possible to further simplify it for the conditions?
|
As stated in Cover and Thomas's Elements of Information Theory 2e (in the discussion of Theorem 2.8.1, the data processing inequality), you have equality iff $X \to Z \to Y$ is also a Markov chain. (Think about why this is equivalent to $I(X;Y|Z) = 0$: under that Markov assumption the joint distribution of $X, Y$ given $Z$ factors, i.e., $X$ and $Y$ are conditionally independent given $Z$.)
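A toy numeric illustration (my own sketch, with made-up binary distributions): build a Markov chain joint $p(x,y,z)=p(x)\,p(y|x)\,p(z|y)$ and check $I(X;Y)\ge I(X;Z)$:

```python
# Sketch: discrete mutual information for a small Markov chain X -> Y -> Z.
# All distribution tables below are made up for illustration.
import math
from itertools import product

px  = {0: 0.4, 1: 0.6}
pyx = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p(y|x)
pzy = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}   # p(z|y)

joint = {(x, y, z): px[x] * pyx[x][y] * pzy[y][z]
         for x, y, z in product((0, 1), repeat=3)}

def marginal(keep):
    m = {}
    for (x, y, z), p in joint.items():
        key = tuple(v for v, k in zip((x, y, z), "xyz") if k in keep)
        m[key] = m.get(key, 0.0) + p
    return m

def mi(a, b):
    """I(A;B) in bits, for a, b among 'x', 'y', 'z'."""
    pab, pa, pb = marginal(a + b), marginal(a), marginal(b)
    return sum(p * math.log2(p / (pa[(k[0],)] * pb[(k[1],)]))
               for k, p in pab.items() if p > 0)

print(mi("x", "y"), mi("x", "z"))   # data processing: the first is the larger
```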
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Showing isomorphism between normal subgroups. This a fairly vague and basic example but i am struggling with it.
Suppose G is a group and it is the internal direct product of two subgroups H and K. $H\times K=G$ write down a explicit isomorphism $\phi :K \to G/H$ and prove that K is isomorphic to $G/H$.
Let $ k \in K $ then let $ \phi (k) = kH $ be the homomorphism.
Pf: consider $\phi (k_1) \phi (k_2) = k_1 H k_2 H$. Since $H$ is normal, $k_1 H k_2 H = k_1 k_2 H = \phi ( k_1 k_2 )$, so $ \phi $ is a homomorphism.
Show $ \phi $ is one to one
Assume $ \phi (k_1) = \phi (k_2)$. Then $ k_1H=k_2H$, so there exist $h_1, h_2 \in H$ with $ k_1h_1=k_2h_2$, i.e. $ k^{-1}_{2}k_1=h_2h^{-1}_{1}$
Since $h_2h^{-1}_{1} \in H$ it implys that $k^{-1}_{2}k_1 \in H $ but $H \cap K = \{e\} $ so $k^{-1}_{2}k_1 = e$ or $k_1=k_2$ so one to one.
Show $ \phi $ is onto: I want to show that for all $a \in G/H$, the map $ \phi : k \mapsto kH $ hits $a$.
|
$a\in G/H$ means $a= xH$ for some $x\in G$. Note that $G = H\times K$, so there exist $h\in H$, $k\in K$ such that $x =kh$. Then one has $\phi(k) = kH = khH = xH = a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Number of ways of arranging ten girls and three boys if the boys separate the girls into groups of sizes $3, 3, 2, 2$ Ten girls are to be divided into groups of sizes $3,3,2,2$. Also, there are $3$ boys. Number of ways of linear sitting arrangement such that between any two groups of girls, there is exactly one boy (no boy sits at either extreme end)?
MY SOLUTION:
$10$ girls can be divided into groups of sizes $3,3,2,2$ in
$$\frac{10!}{3!3!2!2!2!2!}$$
ways which gives me unique combination of groups.
I can then arrange these groups in $4!$ ways, and people within them in $3!3!2!2!$ ways.
Finally the $3$ boys in $3!$ ways.
Seems correct? It gives $$\frac{10! 4! 3!}{2! 2!}$$ ways.
|
For the $10$ girls we have $10!$ permutations and $3!$ for the boys; once the positions of the boys (equivalently, the order of the group sizes) are fixed, this gives $10! \times 3!$. Since the sizes $3,3,2,2$ can occur along the row in $4!/(2!\,2!)=6$ different orders, the total is $6 \cdot 10! \cdot 3!$, which agrees with the count above.
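As a sanity check, here is a brute-force sketch (my own, on a smaller made-up instance for tractability): $4$ girls split into groups of sizes $2,1,1$ by $2$ boys, boys not at the ends. The analogue of the count $\frac{10!\,4!\,3!}{2!\,2!}$ above is $(3!/2!)\cdot 4!\cdot 2!=144$:

```python
# Sketch: enumerate all seatings of girls A,B,C,D and boys x,y in a row,
# keeping those where no boy sits at an end and the boys cut the girls
# into groups of sizes {2,1,1}. Expected count: 3 * 4! * 2! = 144.
from itertools import permutations

girls, boys = "ABCD", "xy"
count = 0
for p in permutations(girls + boys):
    if p[0] in boys or p[-1] in boys:
        continue
    blocks, cur = [], 0
    for ch in p:
        if ch in boys:
            blocks.append(cur)
            cur = 0
        else:
            cur += 1
    blocks.append(cur)
    if sorted(blocks) == [1, 1, 2]:
        count += 1
print(count)   # 144
```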
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2453992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
What is the area of canvas required to make a conical surface tent with height $35~\text{m}$ and radius of base $84~\text{m}$?
A conical circus tent is to be made of canvas. The height of the tent is $35~\text{m}$ and the radius of the base is $84~\text{m}$. What is the area of canvas required?
I calculated slant height which came out to be $76.3$ and then I applied lateral surface area of cone formula which is $\frac{22}7\times76.3\times84$
But my answer is wrong. Right answer is $24024$.
This a gmat exam question.
|
The slant height is $a=\sqrt{h^2+r^2}=91m$
$$S=\pi r a \approx \frac{22}{7}\cdot 84 \cdot 91 = 24024\,m^2$$
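A quick numeric sketch (not part of the original answer), taking $\pi$ as $22/7$ since that is what makes the stated answer come out to exactly $24024$:

```python
# Sketch: slant height and lateral surface area of the tent.
h, r = 35, 84
slant = (h ** 2 + r ** 2) ** 0.5     # sqrt(1225 + 7056) = sqrt(8281) = 91
area = (22 / 7) * r * slant
print(slant, area)                   # 91.0 and ~24024
```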
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
what is the image of the set $ \ A=\{(x,y)| x^2+y^2 \leq 1 \} \ $ under the linear transformation what is the image of the set $ \ A=\{(x,y)| x^2+y^2 \leq 1 \} \ $ under the linear transformation $ \ T=\begin{pmatrix}1 & -1 \\ 1 & 1 \end{pmatrix} \ $
Answer:
From the given matrix , we can write as
$ T(x,y)=(x-y,x+y) \ $
But what would be the image region or set $ \ T(A) \ $ ?
|
Another way to look at this question is to look at the action of $T$ on the complex plane $\mathbb{C}$. If we consider $\mathbb{C}$ as a vector space over $\mathbb{R}$, it is a $2$-dimensional space with (standard) basis $(1, i)$. Then a complex number $x + iy$ in cartesian form maps to $\begin{pmatrix} x\\ y \end{pmatrix}$.
The action of $T$ corresponds to multiplication by $1 + i$, as,
$$T\begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} x - y\\ x + y \end{pmatrix} \sim (x - y) + i(x + y) = (x + iy)(1 + i)$$
So, we are looking at the set of complex numbers $(1 + i)z$ such that $|z| \le 1$. Note that
$$|(1 + i)z| = |1 + i| |z| = \sqrt{2}|z|,$$
so $|z| \le 1$ if and only if $|(1 + i)z| \le \sqrt{2}$. Thus the unit disk maps to the disk centred at $0$ with radius $\sqrt{2}$.
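A small numeric illustration (my own sketch): $T$ scales every vector by $\sqrt 2$ (and rotates it by $45^\circ$), so points on the unit circle land on the circle of radius $\sqrt 2$:

```python
# Sketch: |T(x, y)|^2 = (x-y)**2 + (x+y)**2 = 2*(x**2 + y**2),
# so unit vectors map to vectors of length sqrt(2).
import math

def T(x, y):
    return (x - y, x + y)

for k in range(8):
    t = 2 * math.pi * k / 8
    u, v = T(math.cos(t), math.sin(t))
    print(round(math.hypot(u, v), 12))   # always sqrt(2)
```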
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
What is $\Omega$ in the context of Poisson's equation? I have recently started a new course on PDE's and have already stumbled on an example that I'm struggling to understand:
The main aspect of this example that I am struggling to wrap my head around is not in the proof, it's in the way that $\Omega$ is used in the second and third lines.
Does $\Omega$ denote the solution space? i.e. Does $\Omega = (x,y,z)$ and $\partial \Omega = (\partial x , \partial y , \partial z)$ (if the problem is to be solved in 3 dimensions)?
Furthermore, if it is the case that $\Omega$ denotes the solution space for the problem, why is $\partial \Omega$ the boundary of the solution space?
|
For PDEs, it is very common to denote the domain of the PDE with $\Omega$.
Common examples are a circle/ball
$$
\Omega = B_1(0) := \{ x\in \mathbb R^n : \|x\| < 1 \}
$$
or a square
$$
\Omega = (0,1)\times (0,1)
$$
In this case $\partial\Omega$ denotes the boundary of the domain, and not partial derivatives!
For example, in the case of $\Omega=B_1(0)$
we have
$$
\partial\Omega = \partial B_1(0) := \{ x\in \mathbb R^n : \|x\| = 1 \}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Throw 10 dice, probability of having 6 for the first time at the $6$th throw
We throw a die $10$ times. What is the probability of having the number 6 appear for the first time at the $6$th throw?
I have tried $$\frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{1}{6} \cdot (6!-1)$$
but it's wrong.
$(6!-1)$ is the expression to say that there is only 1 way to order the 6
Should I consider the last 4 throws ?
any help ?
What is the probability that the first $3$ throws are distinct and the $4$th will be equal to one of the first $3$ ones?
Here I did : $$\frac{6}{6} \cdot \frac{5}{6} \cdot \frac {4}{6} \cdot \frac{1}{2}$$
I don't know if it's correct.
|
*
*For your first question:
$$\left(\frac56\right)^5\cdot\frac16$$
You want to get anything but six on the first five rolls and six on the sixth roll. After that, you do not care whether you get a six or not, so the probability of succeeding is one after the sixth throw. You don't need to use the factorial at all in this question.
If you really want to consider the four dice rolls after you got your first six on the sixth roll, you can look at this similar problem with coin flips (There are a lot fewer outcomes, which is easier to work with.)
Say you have an unfair coin so that you have a $\frac13$ chance of getting tails and a $\frac23$ chance of getting heads. What is the probability that you get your first tail on the fourth try out of six coin flips?
There are $2^6$ possible outcomes, but you only have the favorable outcomes of $HHHTHH$, $HHHTHT$, $HHHTTH$, and $HHHTTT$. Now, the probability of any of these happening is the sum of their probabilities since these are disjoint events. Our probability is therefore $$\left(\left(\frac23\right)^3\cdot\frac13\cdot\left(\frac23\right)^2\right)+ \left(\left(\frac23\right)^3\cdot\frac13\cdot\frac23\cdot\frac13\right)+\left(\left(\frac23\right)^3\cdot\frac13\cdot\frac13\cdot\frac23\right)+\left(
\left(\frac23\right)^3\cdot\frac13\cdot\left(\frac13\right)^2\right)$$
We can factor out the $\left(\frac23\right)^3\cdot\frac13$ from all of these, which is why I wrote the probabilities in that way. This leaves us with $$\left(\frac23\right)^3\cdot\frac13\cdot\left(\left(\frac23\right)^2+2\cdot\frac23\cdot\frac13+\left(\frac13\right)^2\right)$$
You should notice that the stuff in parentheses is equal to $\left(\frac23+\frac13\right)^2=1^2=1$, leaving us with $\left(\frac23\right)^3\cdot\frac13$, which is what would happen if you only looked at the first four coin flips. This should make sense for several reasons, chief among them being that the coin flips after the fourth have no effect on the coin flips before them. Likewise, once you have gotten your first six on the sixth roll, the probability contributed by the remaining rolls is one.
*You answered your second question correctly.
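As a sanity check on part 1 (an illustrative sketch added here, not part of the original answer): the exact probability is $\left(\frac56\right)^5\cdot\frac16\approx 0.0669$, and a quick Monte Carlo simulation of ten-roll sequences agrees.

```python
import random

def first_six_on_sixth(rolls):
    """True if the first 6 in the sequence occurs exactly at position 6 (1-indexed)."""
    return all(r != 6 for r in rolls[:5]) and rolls[5] == 6

exact = (5 / 6) ** 5 * (1 / 6)

random.seed(0)  # fixed seed so the experiment is reproducible
trials = 200_000
hits = 0
for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(10)]  # ten rolls; the last four are irrelevant
    if first_six_on_sixth(rolls):
        hits += 1

estimate = hits / trials
print(exact, estimate)  # both close to 0.0669
```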
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Sequence of Measurable Functions Converging Pointwise
Let $\{f_n\}$ be a sequence of measurable functions on $E$ that converges pointwise almost everywhere on $E$ to the function $f$. Then $f$ is measurable.
Proof:
Let $g : E \to \Bbb{R}$ be defined by $g(x) = \lim_{n \to \infty} f_n(x)$ for every $x \in E$. Then $f = g$ almost everywhere on $E$, and therefore it suffices to prove that $g$ is measurable. Let $C$ be some closed set in $E$. I am trying to show that $ g^{-1}(C) = \bigcap_{n=1}^\infty f_n^{-1}(C)$, but I don't think this is exactly right. I was able to prove that the RHS is contained in the LHS, as I will now show:
Let $x$ be in the set on the LHS. Then $x \in f_n^{-1}(C)$ for every $n \in \Bbb{N}$ or $f_n(x) \in C$ for every $n \in \Bbb{N}$. Since $C$ is closed, $g(x) = \lim_{n \to \infty} f_n(x) \in C$ or $x \in g^{-1}(C)$.
As for proving the other set inclusion, I have been quite unsuccessful. Is there any way of getting this strategy to work?
|
Proposition
Let $f_n:E \to \Bbb{R}$ be a sequence of measurable functions such that $f_n(x) \to f(x)$ pointwise $\forall x \in E$. Then $f$ is measurable.
Proof
If $f_n$ are measurable ,then $\limsup_n f_n,\liminf_n f_n$ are also measurable.
But $f_n(x)$ converges to $f(x)$ for all $x \in E$, hence $f(x)=\limsup_nf_n(x)=\liminf_nf_n(x)$.
Thus $f(x)$ is measurable.
Let $N$ be the set of points $x$ where $f_n(x)$ does not converge to $f(x)$.
Then by hypothesis $m(N)=0$.
Define the function: $g:E \to \Bbb{R}$ such that $$g(x)=\begin{cases}\
f(x) & E\setminus N\\
0 & x \in N\\
\end{cases}$$
Then from the proposition, $g$ is measurable: the measurable functions $f_n 1_{E \setminus N}$ converge to $g$ at every point of $E$.
Now we have that $g=f$ almost everywhere and we know that if a function $h$ is almost everywhere equal to a measurable function,then $h$ is measurable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
ln(1+x) maclaurin series I found the first four derivatives of
$ f(x) = ln(1+x) $
Then for all n > 1, $$ f^{(n)}(x) = \frac{(-1)^{n+1}(n-1)!}{(1+x)^n } $$
So, $$ f^{(n)}(0) = (-1)^{n+1}(n-1)! $$
By definition Maclaurin Series are defined as:
$$\sum_{n=0}^{\infty} \frac{f^n(0)}{n!}x^n$$
*Since the $ f^{(n)}(0) $ is only true when n > 1 then,
$$\sum_{n=1}^{\infty} \frac{f^{(n+1)}(0)}{(n+1)!}x^{n+1}$$
I continue by replacing each term and I get:
$$ ln(1+x) = - \sum_{n=1}^{\infty} \frac{(-1)^n}{n*(n+1)}x^{n+1}$$
I know that the answer should be :
$$ ln(1+x) = - \sum_{n=1}^{\infty} \frac{(-1)^n}{n}x^{n}$$
Where did I go wrong?
"*" : Unsure about the step.
I'm really bad at taylor series so if you have any good sites for resources, it would be very much appreciated.
Thank you for any help and answers.
|
If $f(x)=\log(1+x)$, then we have
$$\begin{align}
f^{(1)}(x)&=(1+x)^{-1}\\\\
f^{(2)}(x)&=-(1)(1+x)^{-2}\\\\
f^{(3)}(x)&=(-1)(-2)(1+x)^{-3}\\\\
\vdots \\\\
f^{(n)}(x)&=(-1)(-2)\cdots (-(n-1))(1+x)^{-n}\\\\
\end{align}$$
Therefore, we can write
$$\begin{align}
f(x)&=\log(1+x)\\\\
&=\sum_{n=0}^\infty f^{(n)}(0)\frac{x^n}{n!}\\\\
&=\sum_{n=1}^\infty (-1)^{n-1} (n-1)!\frac{x^n}{n!}\\\\
&=\sum_{n=1}^\infty (-1)^{n-1} \frac{x^n}{n}
\end{align}$$
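As a quick numerical check (an added sketch, not part of the original answer), the partial sums $\sum_{n=1}^{N}(-1)^{n-1}\frac{x^n}{n}$ do converge to $\log(1+x)$ for $|x|<1$:

```python
import math

def log1p_series(x, terms=50):
    """Partial sum of the Maclaurin series sum_{n>=1} (-1)^(n-1) x^n / n."""
    return sum((-1) ** (n - 1) * x ** n / n for n in range(1, terms + 1))

x = 0.5
print(log1p_series(x), math.log(1 + x))  # both ~0.405465
```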
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that there are infinitely many positive integers n so that 2^n starts with d Given the studies of rotational transformations and measurable transformations, let d be any positive integer. Show that there are infinitely many positive integers n so that $2^n$ starts with d.
|
The claim follows from the following Lemma in dynamics:
Let $\alpha$ be an element in $S^1$ of infinite order (i.e $\alpha^n\not=1$ for every $n$). Then the set $\{\alpha^n : n\in\mathbb{N}\}$ is dense in $S^1$.
How can we use the lemma? The number $2^n$ starts with $d$ if and only if there exists $k$ so that $d\cdot 10^k\leq 2^n<(d+1)\cdot 10^k$; as $\log$ is monotone, this is true if and only if $\log(d)+k\log(10)\leq n \log(2)< \log(d+1) +k\log(10)$. Dividing by $\log(10)$ we have:
The number $2^n$ starts with $d$ iff there exists $k$ so that $\frac{\log(d)}{\log(10)}+k\leq n \frac{\log(2)}{\log(10)}<\frac{\log(d+1)}{\log(10)}+k$.
This is true if and only if $e^{2\pi i n \frac{\log(2)}{\log(10)}}$ lies on the arc between $e^{2\pi i \frac{\log(d)}{\log(10)}}$ and $e^{2\pi i \frac{\log(d+1)}{\log(10)}}$.
Use the Lemma above and the proof is complete.
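A brute-force illustration of the claim (an informal sketch added here, not part of the proof): for a chosen leading digit $d$, scanning exponents keeps finding new powers of $2$ that start with it.

```python
def exponents_starting_with(d, n_max=500):
    """Exponents n <= n_max for which the decimal expansion of 2^n starts with d."""
    target = str(d)
    return [n for n in range(1, n_max + 1) if str(2 ** n).startswith(target)]

print(exponents_starting_with(7)[:5])   # [46, 56, 66, 76, 86]
print(len(exponents_starting_with(7)))  # many hits, consistent with a density of log10(8/7)
```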
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Union on the empty set and the set containing the empty set I'm trying to get a clearer sense of some of the consequences the axiom of unions has on the empty set. I understand that $\emptyset = \{\} \not= \{\emptyset\}$.
But assuming the following identities are correct, I don't understand why $\bigcup\emptyset = \bigcup \{\} = \bigcup \{\emptyset\}$.
It's likely that I'm floundering on some minutiae of set theory, but it's making me uncomfortable, and I'd like to know what I'm missing.
|
$z \in \bigcup A$ iff there exists $y \in A$ for which $z \in y$. No such $z$ exists for $A = \emptyset$ or $A = \{ \emptyset \}$.
Indeed, for the former, we have no $y$; for the latter, there is a $y$, but it's empty, so there's no $z$.
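The two cases can even be mimicked with finite sets in code (a toy illustration only, since ZFC sets are not Python sets): the union over the empty family and over the family containing only the empty set both come out empty.

```python
def big_union(family):
    """Union of a family of sets: {z : z in y for some y in family}."""
    result = set()
    for y in family:
        result |= y
    return result

print(big_union([]))                # union over the empty family: set()
print(big_union([set()]))           # union over {emptyset}: set()
print(big_union([{1, 2}, {2, 3}]))  # ordinary case: {1, 2, 3}
```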
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2454969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Solving $\frac{4x}{x+7}<x$ I know how to solve the problem. The reason I post the problem here is to see whether there is a quick approach, rather than a traditional method, to solve the problem.
The problem is: find $x$ that satisfy $\frac{4x}{x+7}<x$
I considered two cases: $x+7>0$ and $x+7<0$, and then went through details to find x. The process took me a few minutes. I wonder whether there is a simple way to find the answer.
|
Here is another way--whether or not it is simpler depends on the problem and your previous experience.
First solve the equality
$$\frac{4x}{x+7}=x$$
I'm sure you can do that fairly quickly, getting $x=0$ or $x=-3$. Then find the values of $x$ where one or both of the sides of the equation are undefined. In your problem, that is $x=-7$.
Those finitely many values of $x$ break up the real number line into a finite number of intervals, two of them (the left-most and the right-most) infinitely large. Check each of those intervals to see if they make your initial equality true or false. You are guaranteed that any point in an interval will give the same answer as any other point in that interval.
Your final answer is the union of the intervals that made the equality true.
In your case examine the intervals:
$(-\infty,-7)$: Inequality is False
$(-7, -3)$: Inequality is True
$(-3, 0)$: Inequality is False
$(0, \infty)$: Inequality is True
So your final answer is
$x\in (-7, -3) \cup (0, \infty)$
NOTE: My first version of the answer had the Trues and Falses reversed so the answer was wrong. I now realize just what I did wrong--thanks for the corrections in the comments!
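The interval test can be automated (a small sketch added here, not from the original answer): evaluate the inequality at one sample point per interval.

```python
def holds(x):
    """Does 4x/(x+7) < x hold at x? (x = -7 is excluded: the expression is undefined there.)"""
    return 4 * x / (x + 7) < x

samples = {(-float("inf"), -7): -8, (-7, -3): -5, (-3, 0): -1, (0, float("inf")): 1}
for interval, x in samples.items():
    print(interval, holds(x))
# False, True, False, True -> solution (-7, -3) union (0, infinity)
```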
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Distributing gifts so that everybody gets at least one So I was in class discussing the following problem:
We have $20$ different presents to distribute to $12$ children. It is not required that every child get something; it could even happen that we give all the presents to one child. In how many ways can we distribute the presents?
After some discussion we realized that the answer was $12^{20}$, because the problem could only be solved if we saw this from the perspective of the presents and not from the children. I thought it was very cool.
Then I went home and thought of a corollary to the problem: how many ways are there so that each child gets at least one present? I have been thinking for a week and I cannot solve it. I thought it was $12^{12} \times 12^8$, the first factor representing presents distributed so that each child gets at least one, and the second the remaining presents distributed without restriction. However, that number is bigger than the original number of ways, which makes no sense. How would you go about solving this?
|
If the presents were identical, no inclusion-exclusion would be needed: stars and bars would give
(k-1)C(n-1), thus 19 choose 11.
Since the presents here are distinct, however, you need the number of surjections from the $20$ presents onto the $12$ children, which by inclusion-exclusion is $\sum_{i=0}^{12}(-1)^i\binom{12}{i}(12-i)^{20}$.
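Note that $\binom{19}{11}$ counts distributions of *identical* presents (stars and bars). For distinct presents, "every child gets at least one" asks for the number of surjections from a $20$-set onto a $12$-set, which inclusion-exclusion gives as $\sum_{i=0}^{12}(-1)^i\binom{12}{i}(12-i)^{20}$. A small computational sketch (added here as a check, not part of the original answer):

```python
from math import comb

def surjections(n, k):
    """Number of surjections from an n-element set onto a k-element set,
    by inclusion-exclusion over the set of children who get nothing."""
    return sum((-1) ** i * comb(k, i) * (k - i) ** n for i in range(k + 1))

# sanity checks on small cases: 2^3 - 2 = 6 onto maps {1,2,3} -> {1,2}
print(surjections(3, 2))   # 6
print(surjections(20, 12)) # 20 distinct presents to 12 children, none empty-handed
```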
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Show that for any bounded shape, you can make a straight cut in any direction to create halves of equal area On an infinite frying pan, there is a bounded pancake. Prove that one can make a straight cut in any given direction (that is, parallel to a given line) that splits the pancake in halves of equal area.
I think that you can use intermediate value theorem, but I'm not quite sure where to start or how to apply the theorem
|
Here's the outline of a solution. For every nonzero vector $v \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$ consider the half-space defined as $H_{\lambda} = x \cdot v \geq \lambda$. For any bounded region $B$, let $B_{\lambda} = H_{\lambda} \cap B$. This corresponds to the part of $B$ on side of a cut that runs parallel to any vector perpendicular to $v$. Fix the vector $v$ and define the function $f(\lambda) = \mbox{Volume}(B_\lambda)$. Show that $f(\lambda)$ is continuous, that $\lim_{\lambda \to \infty} f(\lambda) = 0$, and that $\lim_{\lambda \to -\infty} f(\lambda) = \mbox{Volume}(B)$. Finally, apply the intermediate value theorem.
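To make the outline concrete (an illustrative sketch with a disk standing in for the pancake; not part of the original answer): for the unit disk and vertical cuts, $f(\lambda)=\operatorname{area}\{x\ge\lambda\}=\arccos\lambda-\lambda\sqrt{1-\lambda^2}$ decreases continuously from $\pi$ to $0$, and bisection locates the halving cut.

```python
import math

def right_area(lam):
    """Area of the unit disk lying in the half-plane x >= lam (circular segment)."""
    lam = max(-1.0, min(1.0, lam))
    return math.acos(lam) - lam * math.sqrt(1 - lam * lam)

target = math.pi / 2  # half the disk's area
lo, hi = -1.0, 1.0
for _ in range(60):   # bisection: right_area is continuous and decreasing
    mid = (lo + hi) / 2
    if right_area(mid) > target:
        lo = mid  # too much area on the right: move the cut rightwards
    else:
        hi = mid
print((lo + hi) / 2)  # ~0.0: the halving cut passes through the centre, as expected
```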
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
First-order set theory : What is the class of all sets in ZFC? What do we mean when we let the universal set be the class of all sets? How do I intuitively think about this? Do I just think of it as a collection of all sets? Also, is the Axiom of Regularity a part of ZFC?
I have to prove a statement in the class of all sets, but I'm not sure what I'm supposed to assume since I've no idea what a class is (besides the definition 'a collection which is not a set because it causes paradoxes i.e Russel's paradox)
|
The following is meant to be a clarification of Ross Millikan's post:
In most commonly used set theories, we define classes (sometimes known as virtual classes) as a collection of sets satisfying some property. More precisely: Let $\phi(x, y_1, \ldots, y_n)$ be a formula in the language of set theory and let $p_1, \ldots, p_n$ be sets (parameters). Then
$$
C = \{x \mid \phi(x, p_1, \ldots, p_n) \}
$$
is a (virtual) class and, at least in $\mathrm{ZFC}$, all classes are of this form - for varying $\phi$ and $p_1, \ldots, p_n$.
As an example consider $V$ - the class of all sets. $V$ is a class in the above sense since
$$
V = \{ x \mid x = x \}.
$$
As another example consider $P$ - the class of all pairs. $P$ is a class since
$$
P = \{ x \mid \exists y \exists z \colon x = (y,z) \}.
$$
We can use $P$ to form another class, the class $X$ of all pairs whose first coordinate is a natural number:
$$
\begin{align*}
X & = \{ x \mid x = (y,z) \in P \wedge y \in \mathbb N \} \\
&= \{ x \mid \exists y \exists z \colon x = (y,z) \wedge y \in \mathbb N \}.
\end{align*}
$$
It's also useful to note that given two classes $A,B$ the intersection of those two - call it $A \cap B$ - is a class. Here is why:
Fix formulas $\phi, \chi$ and parameters $p_1, \ldots, p_m, q_1, \ldots, q_n$ such that
$$
A = \{x \mid \phi(x, p_1, \ldots, p_m) \},
$$
$$
B = \{x \mid \chi(x, q_1, \ldots, q_n) \}.
$$
Then
$$
A \cap B = \{ x \mid \phi(x, p_1, \ldots, p_m) \wedge \chi(x, q_1, \ldots, q_n) \}.
$$
And, as you might imagine, there are many more ways in which we can combine known classes in order to generate new ones.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Why is base-10 decimal? A way to write a number in various bases is: $101_2$, $3\text{Fa}_{16}$, $519_{10}$.
The thing here, is that we apparently specify the base in decimal by default. This makes sense in everyday life, since we're not really doing base conversion when grocery shopping. But 10 is ambiguous when working with different bases. So why is $101_{10}$ not binary or ternary or octal when working specifically with base conversion?
How about $101_{10_2}$? Is $111_{111_3} = 183_{10} = 183$?
Related:
*
*Why do they call it base 10?
*Name for "decimals" in other bases?
*Base ten is called "decimal"; what's the name of numbers in base 15?
|
Yes, you could indicate the base of the base too, but eventually you need a base-indicator that is specified in some "default" base. We normally use base ten so if nothing else is stated that is what the base that is assumed.
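Programming languages face the same convention: Python's built-in `int(s, base)` takes the base as an ordinary integer (written in decimal by default in source code), but nothing stops you from computing that base from a numeral in another base. A quick illustration (added here, not part of the original answer), under one reading of the question's $101_{10_2}$ and $111_{111_3} = 183_{10}$ examples:

```python
print(int("101", 2))             # 5:   101 read in base 2
print(int("3Fa", 16))            # 1018
print(int("519", 10))            # 519
print(int("101", int("10", 2)))  # base given in binary: 10_2 = 2, so again 5
print(int("111", int("111", 3))) # base given in ternary: 111_3 = 13, and 111_13 = 183
```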
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Does there exist a real continuous function other than $f(x)=0$ such that $f(2x) = -2f(x)$? I have a gut feeling it doesn't exist but I'm not sure how to prove/disprove it.
My attempt: Suppose there exists $a \in \mathbb{R}\setminus\left\{0\right\}$ such that $f(a) \neq 0$ . Define $x_n = \frac{a}{2^n}$
$f(x_{n+1}) = \frac{-1}{2} f(x_n)$ and inductively $f(x_n) = (\frac{-1}{2})^n f(a)$
What can I do from here?
|
(Rewriting achille hui's comment as an answer.)
Yes, there are other functions satisfying that equation. One such function is $f(x) = x \sin(\pi \log_2(x))$.
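One can verify the identity numerically for $x>0$ (a quick sketch added here, not part of the original answer): since $\log_2(2x)=1+\log_2 x$ and $\sin(\pi+t)=-\sin t$, the relation $f(2x)=-2f(x)$ holds exactly.

```python
import math

def f(x):
    """f(x) = x * sin(pi * log2(x)), taken here for x > 0."""
    return x * math.sin(math.pi * math.log2(x))

for x in (0.3, 1.0, 1.7, 5.0):
    print(x, f(2 * x) + 2 * f(x))  # ~0 up to floating-point error
```

One can also check that setting $f(0)=0$ and $f(-x)=-f(x)$ extends this to a continuous function on all of $\mathbb{R}$, since $|f(x)|\le|x|$.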
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
3D application of basic trigonometry https://imgur.com/a/D3ANJ (can't be uploaded because of size)
I first thought that ON//BC, because B is due east of O and C is due north of B.
However, that results in me getting an incorrect value of OT(correct value = 39.3)
|
Let $OB=a$.
Thus, $$OT=a\tan40^{\circ}$$ and
$$OC\tan25^{\circ}=a\tan40^{\circ},$$
which gives
$$OC=\frac{a\tan40^{\circ}}{\tan25^{\circ}}$$ and by Pythagoras theorem we obtain:
$$70^2+a^2=\left(\frac{a\tan40^{\circ}}{\tan25^{\circ}}\right)^2$$ and from here we can find a value of $a$:
$$a=\frac{70}{\sqrt{\frac{\tan^240^{\circ}}{\tan^225^{\circ}}-1}}$$ and $OT$.
I got $OT=39.262...$
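Plugging in the numbers (a check added here, not in the original answer):

```python
import math

t40 = math.tan(math.radians(40))
t25 = math.tan(math.radians(25))

a = 70 / math.sqrt(t40 ** 2 / t25 ** 2 - 1)  # OB, from the Pythagorean relation
OT = a * t40
print(a, OT)  # OT is approximately 39.262
```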
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Conditional probability of multivariate gaussian I'm unsure regarding my (partial) solution/approach to the below problem. Any help/guidance regarding approach would be much appreciated.
Let $\mathbf{X} = (X_1, X_2)' \in N(\mu, \Lambda ) $ , where
$$\begin{align}
\mu &= \begin{pmatrix}
1 \\
1
\end{pmatrix}
\end{align}
$$
$$
\begin{align}
\Lambda &= \begin{pmatrix}
3 \quad 1\\
1 \quad 2
\end{pmatrix}
\end{align}
$$
We are tasked with computing: $P(X_1 \geq 2 \mid X_2 +3X_1=3)$
I here begin by doing a transformation,
$$ \mathbf{Y} = (Y_1, Y_2)', \qquad Y_1 = X_1, \qquad Y_2 = X_2 + 3X_1$$
We now are interested in the probability,
$$P(Y_1 \geq 2 \mid Y_2 = 3)$$
Since we can write that $\mathbf{Y = BX}$, it follows that,
$$\mathbf{Y} \in \mathcal{N}(\mathbf{B\mu}, \mathbf{B\Lambda B'})$$
where
$$\mathbf{B}= \begin{pmatrix}
1 \quad 0\\
3 \quad 1
\end{pmatrix} \rightarrow \quad \mathbf{B \mu} = \begin{pmatrix}
1 \\
4
\end{pmatrix}, \quad \mathbf{B\Lambda B'}= \begin{pmatrix}
1 \quad 0\\
3 \quad 1
\end{pmatrix} \begin{pmatrix}
3 \quad 1\\
1 \quad 2
\end{pmatrix} \begin{pmatrix}
1 \quad 3\\
0 \quad 1
\end{pmatrix} = \begin{pmatrix}
3 \quad 10\\
10 \; \; 35
\end{pmatrix}$$
We thereafter know that we can obtain the conditional density function by,
$$
f_{Y_1\mid Y_2 = 3} (y_1) = \frac{f_{Y_1,Y_2}(y_1, 3)}{f_{Y_2}(3)} \tag 1
$$
The p.d.f. of the bivariate normal distribution,
$$f_{Y_1, Y_2}(y_1, y_2) = \frac{1}{2\pi \sigma_1 \sigma_2 \sqrt{1-\rho^2}} e^{-\frac{1}{2(1-\rho^2)}\left(\frac{(y_1 - \mu_1)^2}{\sigma_1^2} - \frac{2 \rho (y_1 - \mu_1)(y_2 - \mu_2)}{\sigma_1 \sigma_2} + \frac{(y_2 - \mu_2)^2}{\sigma_2^2}\right)} $$
The marginal probability density of $Y_2$,
$$f_{Y_2}(y_2) = \frac{1}{\sqrt{2\pi} \sigma_2} e^{-(y_2 - \mu_2)^2 / (2\sigma_2^2)}$$
Given that,
$$\sigma_1 = \sqrt{3}, \quad \sigma_2 = \sqrt{35}, \quad \rho = \frac{10}{\sigma_1 \sigma_2 } = \frac{10}{\sqrt{105}} $$
we are ready to determine (1). However, the resulting expression, which I then need to integrate as follows,
$$
Pr(Y_1 \geq 2 \mid Y_2 = 3) = \int_2^\infty f_{Y_1\mid Y_2 = 3} (y_1) \, dy_1
$$
becomes quite ugly, making me unsure whether I've approached the problem in the wrong way?
Thanks in advance!
|
The covariance between $X_1 + \lambda (3 X_1 + X_2)$ and $3 X_1 + X_2$ is $10 + 35 \lambda$, therefore if we take $\lambda = -2/7$, we get
$$\operatorname{P}(X_1 \geq 2 \mid 3 X_1 + X_2 = 3) =
\operatorname{P} \left(
X_1 -\frac 2 7 (3 X_1 + X_2 - 3) \geq 2 \mid 3 X_1 + X_2 = 3 \right) = \\
\operatorname{P} \left( X_1 -\frac 2 7 (3 X_1 + X_2 - 3) \geq 2 \right),$$
and $X_1/7 -2 X_2/7 \sim \mathcal N(-1/7, 1/7)$.
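Numerically (a check added here, not part of the original answer), one can also use the standard conditional formulas for $(Y_1,Y_2)$ with mean $(1,4)$ and covariance matrix having entries $3, 10, 10, 35$: given $Y_2=3$, $Y_1$ is normal with mean $1+\frac{10}{35}(3-4)=\frac57$ and variance $3-\frac{10^2}{35}=\frac17$, which agrees with the decorrelation trick above.

```python
import math

def normal_sf(x, mu, var):
    """P(N(mu, var) >= x), via the complementary error function."""
    z = (x - mu) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))

# conditional law of Y1 given Y2 = 3
mu_c = 1 + (10 / 35) * (3 - 4)   # = 5/7
var_c = 3 - 10 ** 2 / 35         # = 1/7
p1 = normal_sf(2, mu_c, var_c)

# same probability via the decorrelated variable Z = X1/7 - 2 X2/7 ~ N(-1/7, 1/7):
# P(X1 >= 2 | 3 X1 + X2 = 3) = P(Z >= 2 - 6/7)
p2 = normal_sf(2 - 6 / 7, -1 / 7, 1 / 7)
print(p1, p2)  # both ~3.4e-4
```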
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2455972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Why is the quotient ring $R/R$ the zero ring $\{ 0\}$? There is already a similar question:
Factor rings $R/R$ and $R/0$
Of course, I read it. However, I still don't understand how and why the quotient ring $R/R$ is the zero ring. So, I ask for your help in checking my reasoning.
To me, it seems that $R/R=R$.
The definition of the quotient ring is $R/P=\{r+P|r \in R\}$.
So, $R/R=\{r+R|r \in R\}$. Doesn't $r+R$ give every element of $R$?
And if $R$ is a commutative ring with identity, does that change the result?
I think there is a problem in my logic, but I cannot see it.
I hope you can shed some light on this.
Thank you in advance.
|
The elements of the quotient ring $R/P$ are equivalence classes of elements in $R$. That is, two different elements $r_1, r_2 \in R$ produce the same class in $R/P$ if $r_1 - r_2 \in P$. This is equivalent to the definition you wrote above.
Now $R/0 = R$ because $r_1, r_2 \in R$ are in the same class in $R/0$ only if $r_1 - r_2 = 0$, which means that $r_1 = r_2$. As a result, every element is in its own class, and no two different elements become equal. Thus $R/0$ looks exactly the same as $R$.
However, for any two elements $r_1, r_2 \in R$ we know that $r_1 - r_2 \in R$. Thus in the quotient ring $R/R$, every element is in the same class, so there is only one class! This means we have a ring with only one element -- the zero ring.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
Alternate proof that $1+x+...+x^{p-1}$ is irreducible for prime $p$
For $p$ prime, $P(x)=1+x+...+x^{p-1}$ is irreducible in $\mathbb{Z}[x]$.
This is a classic problem to which there exists a clever solution which applies Eisenstein's criterion to $P(x+1)$.
However I believe I have another solution, but I wish to make sure I haven't made some stupid mistake:
We have $P(x)(x-1)=x^p-1$. For $f$ a polynomial in $\mathbb{Z}[x]$, let $\overline{f}$ denote its reduction mod $p$, which is a polynomial in $\mathbb{F}_p[x]$.
By Fermat, we have that $\overline{P(x)(x-1)}=x-1$ so $\overline{(P(x)-1)} \overline{(x-1)}=0$. But $\mathbb{F}_p[x]$ is an integral domain so $\overline{P(x)}=1$. Thus if $P=QR$ for nonconstant polynomials $Q,R$ in $\mathbb{Z}[x]$ then $\bar{Q}\bar{R}=1$. Hence $\bar{Q}$ and $\bar{R}$ are constant polynomials. Thus the leading coefficients of $Q$ and $R$ are divisible by $p$, which means the leading coefficient of $P$ is divisible by $p$, a contradiction.
|
This does not quite work; if it did, the same logic should hold for $x^{p^p}-1$; for instance, say $x^{27}-1$. This should imply $$
1+x+\dots+x^{26}
$$
is irreducible, but it is not, as can be checked by wolfram. The problem, as @Wojowu points out in the comments, is the step $\overline{P(x)(x-1)}=x-1$: Fermat's little theorem says $a^p=a$ for every element $a\in\mathbb F_p$, so $x^p$ and $x$ agree as functions on $\mathbb F_p$, but not as polynomials. In $\mathbb F_p[x]$ one in fact has $x^p-1=(x-1)^p$, so $\overline{P(x)}=(x-1)^{p-1}\neq 1$.
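One can exhibit a factor directly, confirming reducibility (a check added here, not part of the original answer): grouping the $27$ terms in threes gives $1+x+\dots+x^{26}=(1+x+x^2)(1+x^3+x^6+\dots+x^{24})$, which the convolution below verifies.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

cyclo3 = [1, 1, 1]                                      # 1 + x + x^2
cofactor = [1 if k % 3 == 0 else 0 for k in range(25)]  # 1 + x^3 + ... + x^24
print(poly_mul(cyclo3, cofactor) == [1] * 27)           # True: 1 + x + ... + x^26 factors
```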
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Optimal route consisting of rowing then walking Problem
You're in a boat on point A in the water, and you need to get to point B on land. Your rowing speed is 3km/h, and your walking speed 5km/h.
See figure:
Find the route that takes the least amount of time.
My idea
I started by marking an arbitrary route:
From here, I figure the total time is going to be $$T = \frac{R}{3\mathrm{km/h}} + \frac{W}{5\mathrm{km/h}}$$
Since this is a function of two variables, I'm stuck.
A general idea is to express $W$ in terms of $R$ to make it single-variable, and then apply the usual optimization tactics (with derivatives), but I'm having a hard time finding such an expression.
Any help appreciated!
EDIT - Alternative solution?
Since the distance from A to the right angle (RA) is traveled 3/5 times as fast as the distance between RA and B, could I just scale the former up?
That way, I get A-RA being a distance of $6\cdot\frac53 = 10\mathrm{km}$, which makes the hypotenuse $\sqrt{181}$ the shortest distance between A and B. And since we scaled it up, we can consider it traversable with walking speed rather than rowing speed!
Thoughts?
|
*
*a) the solution
The formula has already been indicated by wgrenard and AdamBL
$$
T = {1 \over 3}\sqrt {36 + \left( {9 - W} \right)^{\,2} } + {1 \over 5}W
$$
differentiating that
$$
{{dT} \over {dW}} = {{5W + 3\sqrt {36 + \left( {9 - W} \right)^{\,2} } - 45} \over {15\,\sqrt {36 + \left( {9 - W} \right)^{\,2} } }}
$$
and equating to $0$ gives
$$
\eqalign{
& {{dT} \over {dW}} = 0\quad \Rightarrow \quad 3\sqrt {36 + \left( {9 - W} \right)^{\,2} } = 45 - 5W\quad \Rightarrow \cr
& \Rightarrow \quad 9\left( {36 + \left( {9 - W} \right)^{\,2} } \right) = 25\left( {9 - W} \right)^{\,2} \quad \Rightarrow \cr
& \Rightarrow \quad 9 \cdot 36 = 16\left( {9 - W} \right)^{\,2} \quad \Rightarrow \quad W = 9 - \sqrt {{{9 \cdot 36} \over {16}}} = 9 - {9 \over 2} = {9 \over 2} \cr}
$$
which is a minimum, because the function is convex as already indicated.
Thus
$$
\left\{ \matrix{
W_m = 9/2 \hfill \cr
T_m = 17/5 \hfill \cr
R_m = 15/2 \hfill \cr} \right.
$$
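A quick numerical confirmation (an added sketch, not part of the original answer): ternary search on the convex $T(W)$ over $W\in[0,9]$ recovers $W_m = 9/2$ and $T_m = 17/5$.

```python
import math

def T(W):
    """Total time: row to a point W km short of B's foot, then walk W km."""
    return math.sqrt(36 + (9 - W) ** 2) / 3 + W / 5

lo, hi = 0.0, 9.0
for _ in range(200):  # ternary search is valid because T is convex
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if T(m1) < T(m2):
        hi = m2
    else:
        lo = m1
W = (lo + hi) / 2
print(W, T(W))  # ~4.5, ~3.4
```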
*
*b) Scaling
Your idea of scaling according to speed is quite entangling.
That means (if I understood properly) that you are transforming the triangle from space to time units.
But, by introducing different scaling factors for the two coordinates, you undermine the Euclidean norm,
which does not "transfer" between the two systems (if it is assumed valid in one, it must be modified in the other).
Consider for example the transformation sketched below.
From the mathematical point of view it is a linear scale transformation
$$
\left( {\matrix{ {y_1 } \cr {y_2 } \cr } } \right) =
\left( {\matrix{ {1/v_1 } & 0 \cr 0 & {1/v_2 } \cr } } \right)
\left( {\matrix{ {x_1 } \cr {x_2 } \cr } } \right)
$$
Now, with constant $v_1, \,v_2$, any path in $x$ will transform in the corresponding path in $y$
(going through corresponding points).
If the path is a curve parametrized through a common parameter $\lambda$, not influencing the $v_k$'s,
then, at any given value of $\lambda$ the point on the $x$ plane will transform into the corresponding
point in $y$ plane
$$
\left( {\matrix{ {y_{1}(\lambda) } \cr {y_{2}(\lambda) } \cr } } \right) =
\left( {\matrix{ {1/v_1 } & 0 \cr 0 & {1/v_2 } \cr } } \right)
\left( {\matrix{ {x{_1}(\lambda) } \cr {x_{2}(\lambda) } \cr } } \right)
$$
and the minimal path in one plane will be the corresponding minimal path in the other.
But we shall also have that
$$
\frac{d}{{d\lambda }}\left( {\matrix{ {y_{1}(\lambda) } \cr {y_{2}(\lambda) } \cr } } \right) =
\left( {\matrix{ {1/v_1 } & 0 \cr 0 & {1/v_2 } \cr } } \right)
\frac{d}{{d\lambda }}\left( {\matrix{ {x{_1}(\lambda) } \cr {x_{2}(\lambda) } \cr } } \right)
$$
that is that the "velocities" compose vectorially.
Therefore if $\lambda$ is the time, you shall go from $A$ to $C$ with a composition of a vertical
rowing speed and a horizontal walking speed (a "$\infty$-thlon"), which takes the same time as rowing $AH$ and walking $HC$.
When, instead, you just row on $AC$, then you shall change the above matrix - for that segment only - according to the $\angle AC$,
and of course you lose the minimal-to-minimal correspondence based on the Euclidean norm (straight line $A'B'$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
pointwise product of two characters of $G$ is a character of $G$ Let $\phi: G \to GL_{n}(\mathbb{C})$ and $\rho: G \to GL_{m}(\mathbb{C})$ be representations. Let $V = M_{mn}(\mathbb{C})$. Define the representations $\tau : G \to GL(V)$ by $\tau_{g}(A) = \rho_{g}A\phi_{g}^{T}$.
I know that $\chi_{\tau}(g) = \chi_{\rho}(g)\chi_{\psi}(g) \ \forall \ g \in G$. What is the best argument to show that the pointwise product of two characters of $G$ is a character of $G$?
|
Have you learned about the tensor product of two representations? The character of $\phi\otimes\rho$ is the pointwise product of the characters of $\phi$ and $\rho$.
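Under the usual identification of $M_{mn}(\mathbb C)$ with a tensor product, $\tau_g$ acts as a Kronecker product of the two representing matrices, and the character identity reflects the matrix fact $\operatorname{tr}(A\otimes B)=\operatorname{tr}(A)\operatorname{tr}(B)$. A small illustration (an added sketch, not from the original answer):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [
        [a * b for a in row_a for b in row_b]
        for row_a in A
        for row_b in B
    ]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 1, 0], [2, 1, 0], [0, 0, 3]]
print(trace(kron(A, B)), trace(A) * trace(B))  # both 20
```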
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Anyone have some information about this identity/identities for regular polygons? I was looking at some equilateral triangles and started drawing up some equations and I came across the following (it may take some time for my actual question to come):
Suppose that we put a point $D$ anywhere within the triangle. Then we draw lines to each vertex of the triangle, we can now see that we form $3$ triangles, namely $ABD, ACD$ and $BCD$. Further draw lines perpendicular from $D$ to each side of the triangle, the lines $DE, DG$ and $DF$.
We can also note that $AH$ is the height of the triangle (we'll call the height $h$). For simplicity we will also call the sides $AC = AB = BC = S$. We will also rename the perpendicular lines from $D$ to each side; $L_1, L_2$ and $L_3$ respectively (for our purposes it doesn't matter which line is which $L_n$).
We can easily see that everything done so far is valid no matter where we put the point $D$, because we can always draw the lines.
Now the definitions are done, so we can start looking at the area of $ABC$. This is just $\frac{bh}{2} = \frac{Sh}{2}$. The area can also be derived from adding together the area of triangles $ABD, ACD$ and $BCD$. The area for each small triangle is just $\frac{SL_n}{2}$.
Now set the two equations for the areas to equal each other: $$\frac{Sh}{2} = \frac{SL_1}{2} + \frac{SL_2}{2} + \frac{SL_3}{2}$$
$$h =L_1 + L_2 + L_3$$
Which is quite interesting. No matter where we put a point $D$, the perpendicular lengths from $D$ to the sides always sum up to the height of the triangle.
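This identity (known as Viviani's theorem) can be spot-checked numerically (an added sketch, not part of the original question): for the equilateral triangle with vertices $(0,0)$, $(2,0)$, $(1,\sqrt3)$, so side $S=2$ and height $h=\sqrt3$, the three perpendicular distances from any interior point sum to $h$.

```python
import math

def dist_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(bx - ax, by - ay)

V = [(0.0, 0.0), (2.0, 0.0), (1.0, math.sqrt(3))]
h = math.sqrt(3)

for p in [(1.0, 0.5), (0.8, 0.3), (1.0, h / 3)]:  # interior points (last is the centroid)
    total = sum(dist_to_line(p, V[i], V[(i + 1) % 3]) for i in range(3))
    print(total, h)  # equal up to floating-point error
```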
My next step was to extend this to all regular polygons, which led me to derive the following:
$$\frac{2A_n}{S} = \sum_{i=1}^{n} L_i$$
Where $A_n$ is the area of some regular polygon and the subscript $n$ denotes how many sides it has.
I was wondering if anyone has some material about this? Where it's e.g. extended to other shapes/higher dimensions etc. Thank you for your help.
|
Essentially the same proof shows a corresponding theorem for a regular tetrahedron -- the altitude is the sum of the length of the perpendicular dropped to each of the four sides (from any interior point). This is, in fact, one starting point for the notion of "barycentric coordinates", which might be a starting place for you to look to see what's already been done with ideas like these.
(BTW, the same proof and corresponding theorem holds in all dimensions. In dimension 1, it's not very interesting, though, and in dimensions higher than 3 it's hard to visualize.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\sigma(f)=f(\Omega)$ for $f\in C(\Omega)$, $\Omega$ a Compact, Hausdorff Topological Space Let $\Omega$ be a compact, Hausdorff, topological space; let $A=C(\Omega)$, the unital, Banach algebra of continuous functions from $\Omega$ to $\mathbb C$; let
$$\text{Inv}(A)=\{f\in A:g(\omega)f(\omega)=f(\omega)g(\omega)=1\text{ for some }g\in A\text{ and every }\omega\in\Omega\};$$
and let $\sigma(f)=\{\lambda\in\mathbb C:\lambda1-f\notin\text{Inv}(A)\}$.
I want to show that $\sigma(f)=f(\Omega)$.
Let $\lambda\notin\sigma(f)$. Then there is $g\in A$ such that $f=\lambda1-1/g$. Hence, $f(\omega)\neq\lambda$ for every $\omega\in\Omega$, so $\lambda\notin f(\Omega)$. Conversely, let $\lambda\notin f(\Omega)$. Then $\lambda-f(\omega)\neq0$ for every $\omega\in\Omega$ and thus has inverse $g=1/(\lambda1-f)\in C(\Omega)$. Hence, $\lambda\notin\sigma(f)$.
I did not use the fact that $\Omega$ is compact and Hausdorff. Why do we need that?
|
I don't think there's anything wrong with your argument.
It is just that $C(\Omega)$, for non-compact $\Omega$, is not an interesting algebra. In particular, it is not a Banach algebra, not with the natural norm, because you would need your functions to be bounded.
Also, on "arbitrary" $C(\Omega)$, the spectrum becomes uninteresting. For starters, you cannot guarantee that it will be bounded: a surjective function has spectrum equal to all of $\mathbb C$. So the spectral radius, and its relation to the norm, are completely absent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proving that a cyclic group is generated by a single element. I am currently reading "The Theory of Finite Groups" by Kurzweil and Stellmacher.
I am already stuck on page 4.
On page 3, a cyclic group is defined as:
The group G is cyclic if every element of G is a power of a fixed element g.
Then on page 4 a proof is given for:
1.1.2 Let $G = \langle g \rangle$ be a cyclic group of order $n$. Then $G = \{1, g, \cdots, g^{n-1}\}$
I don't understand the given proof. But I think this is partially due to the fact that I do not know why the author is proving 1.1.2 in the first place. For me, the definition and 1.1.2 look pretty much the same.
Could someone please explain to in which way they differ and why someone has to prove 1.1.2 given the definition.
|
By definition, if $G$ is cyclic with generator $g$, then every element of $G$ is a power of $g$. What 1.1.2 is saying is that if $G$ has order $n$, then more specifically every element of $G$ is equal to $g^m$ for some $m$ such that $0\leq m<n$. This is stronger than the definition, because of the restriction that $0\leq m<n$.
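To see 1.1.2 in action (a concrete illustration of my own, not from the book): the multiplicative group $(\mathbb Z/7\mathbb Z)^{\times}$ is cyclic of order $n=6$ with generator $g=3$, and every power of $g$ already appears among $g^0,\dots,g^{n-1}$ because $g^n=1$:

```python
p, g = 7, 3        # (Z/7Z)* is cyclic of order 6, generated by 3
n = p - 1          # group order

# The powers g^0 .. g^(n-1) already give the whole group:
group = {pow(g, m, p) for m in range(n)}
print(sorted(group))  # [1, 2, 3, 4, 5, 6]

# Any other power collapses into that list, since g^n = 1:
assert pow(g, 17, p) == pow(g, 17 % n, p)
```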
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2456979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Affine subspaces and parallel linear subspaces Let $\mathcal{X}$ be a real vector space and $C\subset \mathcal{X}$ an affine subspace of $\mathcal{X}$, i.e. $C\neq\emptyset$ and $C=\lambda C + (1-\lambda)C$ for all $\lambda\in\mathbb{R}$. In the text I am reading, they have defined the linear subspace parallel to $C$ to be $V=C-C=\{a-b : a\in C, b\in C\}$. What is the significance of subtracting the entire set $C$ compared to just one vector from $C$? Why not just take $V=C-c$ for some $c\in C$?
The example I was looking at was a line in $\mathbb{R}^2$, $C = \{(3,y) : y\in \mathbb{R}\}$. Doesn't $C-(3,0) = \{c-(3,0) : c\in C\}$ produce the same set as $C-C$ or am I missing some subtlety?
|
I am not sure if you believe me or not! The main reason people like to write the parallel subspace as $V=C-C$ is that it looks NICER: it does not single out a particular point $c\in C$.
But what you said is correct: both representations are equivalent, and proving it is a simple exercise.
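A small numerical illustration of the example from the question (my own check; finite samples, so only suggestive): both $C-(3,0)$ and $C-C$ land on the vertical axis $V=\{(0,y):y\in\mathbb R\}$:

```python
# Sample the line C = {(3, y)} at finitely many points.
ys = [k / 2 for k in range(-10, 11)]
C = [(3.0, y) for y in ys]

shift_one = {(a[0] - 3.0, a[1]) for a in C}                     # C - c, c = (3, 0)
shift_all = {(a[0] - b[0], a[1] - b[1]) for a in C for b in C}  # C - C

# Every sampled difference lies on the y-axis in both cases:
assert all(x == 0.0 for x, _ in shift_one)
assert all(x == 0.0 for x, _ in shift_all)
```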
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\sqrt[n]{n} > \sqrt[n+1]{n+1}$ without calculus? I'm stuck with this sample RMO question I came across:
Determine the largest number in the infinite sequence $\sqrt[1]{1}$, $\sqrt[2]{2}$, $\sqrt[3]{3}$, ..., $\sqrt[n]{n}$, ...
In the solution to this problem, I found the solver making the assumption,
$\sqrt[n]{n}>\sqrt[n+1]{n+1}$ for $n \geq 3$. How would you prove this?
Any help would be greatly appreciated.
EDIT: In this competition, you aren't allowed to use calculus. Non-calculus methods would be appreciated.
|
We wish to compare $\sqrt[n]n \lessgtr \sqrt[n+1]{n+1}$. Raise each side to the $n(n+1)$th power to get
$$ n^{n+1} \lessgtr (n+1)^n $$
and use the binomial theorem on the right-hand side:
$$ n\cdot n^n \lessgtr \underbrace{n^n+\binom n1 n^{n-1} + \binom n2 n^{n-2} + \cdots + \binom n{n-1} n^1}_{n\text{ terms}} + 1 $$
Because $\binom{n}{k}\le n^k$, each of the $n$ indicated terms is at most $n^n$. And when $n\ge 3$, the last term $\binom n{n-1}n^1 = n^2$ is so much smaller than $n^n$ that the final $1$ term is insufficient to make the RHS exceed $n\cdot n^n$.
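Both the integer inequality and the original maximization claim are easy to confirm computationally (an illustrative check of my own):

```python
# n^(n+1) > (n+1)^n for n >= 3, checked with exact integer arithmetic;
# this is equivalent to n^(1/n) > (n+1)^(1/(n+1)).
for n in range(3, 50):
    assert n ** (n + 1) > (n + 1) ** n

# The first few terms show the maximum of n^(1/n) occurs at n = 3:
vals = [n ** (1 / n) for n in range(1, 7)]
print(vals)
assert max(vals) == vals[2]  # index 2 corresponds to n = 3
```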
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
}
|
How do we conclude using De Morgan's laws that these two are equal? Question: (p∧q)→(p∨q)≡¬(p∧q)∨(p∨q)
Which steps should I take to derive the expression on the right from the expression on the left? In the book, it just shows this equivalence but doesn't explain how they actually got it. Since this example in the book shows up under the De Morgan's laws section, I rightfully considered that De Morgan's laws could help solve this problem.
If you need the full question just tell me.
|
If both $p$ and $q$ are true, then at least one of either $p$ or $q$ will be true.
Since this example in the book shows up under the De Morgan's laws section, I rightfully considered De Morgan's laws could help us to solve this problem.
Yes, but it is too soon. The step you have applies Material Implication.
*
*$\qquad(p\wedge q)\to (p\vee q)~~\equiv~~\neg (p\wedge q)\vee (p\vee q)$
This sets it up to apply deMorgan's Rule next.
*
*$\qquad\phantom{(p\wedge q)\to (p\vee q)} ~~\equiv~~(\neg p\vee \neg q)\vee (p\vee q)$
Finish with association and commutation, excluded middle (i.e. $\neg p\vee p \equiv \top$), and then annihilation.
*
*$\qquad\phantom{(p\wedge q)\to (p\vee q)}~~\equiv~~\top$
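A brute-force truth table (a small check of my own) confirms that every form above agrees and is a tautology:

```python
from itertools import product

for p, q in product([False, True], repeat=2):
    impl     = (p or q) if (p and q) else True   # (p∧q)→(p∨q)
    material = (not (p and q)) or (p or q)       # ¬(p∧q)∨(p∨q), material implication
    demorgan = ((not p) or (not q)) or (p or q)  # (¬p∨¬q)∨(p∨q), after De Morgan
    assert impl == material == demorgan == True  # all equivalent, all ⊤
```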
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Method to solve A = mod( Bn, C) How would I go by solving an equation of the general form.
A = mod ( Bn, C )
Solve for n knowing A, B and C
Where $B$ and $C$ are natural numbers and $A$ and $n$ are whole numbers. Also, the greatest common divisor of $B$ and $C$ is 1.
|
If $A \not \in [0, C-1]$ then clearly there is no solution. Otherwise, we have
$$A \equiv Bn \pmod C.$$
Since $(B,C) = 1$, the inverse $B^{-1}$ exists modulo $C$, and we can multiply it on both sides to get:
$$AB^{-1} \equiv n \pmod C.$$
So your problem is simply to find the inverse $B^{-1}$, which can be done using the extended Euclidean algorithm.
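Here is a sketch of that procedure (the numbers and helper names are my own, purely illustrative):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve(A, B, C):
    """Smallest n >= 0 with (B * n) % C == A, assuming gcd(B, C) == 1."""
    g, x, _ = ext_gcd(B, C)
    assert g == 1, "B and C must be coprime"
    return (A * x) % C  # n = A * B^(-1) (mod C)

# Example: solve 4 = (5 n) mod 7; the inverse of 5 mod 7 is 3, so n = 12 mod 7 = 5.
n = solve(4, 5, 7)
assert (5 * n) % 7 == 4
print(n)  # 5
```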
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Computing $\lim_{\varepsilon\to 0^{+}}\psi(\varepsilon)/\Gamma(\varepsilon)$ with asymptotic expansions I have the following limit of which I want to compute:
\begin{equation}
\lim_{\varepsilon\to 0^{+}} \frac{\psi(\varepsilon)}{\Gamma(\varepsilon)}.
\end{equation}
For $\varepsilon\approx 0$ and $\varepsilon\neq 0$ I have the following limiting forms
\begin{equation}
\tag{1}
\psi(\varepsilon)=-\frac{1}{\varepsilon}-\gamma+O(\varepsilon)
\end{equation}
and
\begin{equation}
\tag{2}
\frac{1}{\Gamma(\varepsilon)}=\varepsilon+O(\varepsilon^{2}).
\end{equation}
If I multiply $(1)$ and $(2)$ together we get
\begin{align}
\tag{3}
\frac{\psi(\varepsilon)}{\Gamma(\varepsilon)}
&=
-1-\frac{O(\varepsilon^{2})}{\varepsilon}
-\gamma\varepsilon-\gamma O(\varepsilon^{2})
+\varepsilon O(\varepsilon)+O(\varepsilon)O(\varepsilon^{2})\\
&=
-1-O(\varepsilon)
-\gamma\varepsilon-\gamma O(\varepsilon^{2})
+O(\varepsilon^{2})+O(\varepsilon^{3}).
\end{align}
In the limit, all of the terms with $\varepsilon$ approach zero such that we arrive at
\begin{equation}
\lim_{\varepsilon\to0^{+}} \frac{\psi(\varepsilon)}{\Gamma(\varepsilon)} =
-1.
\end{equation}
I have checked this answer against WolframAlpha which yields the same result. Despite getting the same result, I have doubts as to if this is a sound approach to computing the limit.
My question is this: Is the use of asymptotic expansions in this manner proper (i.e. is this a valid method to computing my limit)? Or does it just happen to work out in this example?
|
One could take the following method:
\begin{align}
\frac{\psi(x)}{\Gamma(x)} &= \frac{\Gamma(x) \, \psi(x)}{\Gamma^{2}(x)} = \frac{\Gamma'(x)}{\Gamma^{2}(x)} \\
&= - \frac{d}{dx} \left(\frac{1}{\Gamma(x)}\right).
\end{align}
Now make use of the Taylor series expansion of the inverse of the Gamma function, namely,
$$\frac{1}{\Gamma(x)} \approx x + \gamma \, x^{2} + \left(\frac{\gamma^{2}}{2} - \frac{\pi^{2}}{12}\right) \, x^{3} + \mathcal{O}(x^{4})$$
then
$$\frac{\psi(x)}{\Gamma(x)} \approx - \left(1 + 2 \gamma \, x + \left(\frac{3 \, \gamma^{2}}{2} - \frac{\pi^{2}}{4}\right) \, x^{2} + \mathcal{O}(x^{3}) \right).$$
Taking the limit as $x \to 0$ leads to
$$\lim_{x \to 0} \frac{\psi(x)}{\Gamma(x)} = -1.$$
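The limit can also be confirmed numerically (a check of my own, approximating $\psi$ by a central difference of $\log\Gamma$, which only needs the standard library):

```python
import math

def digamma(x, h=1e-6):
    # central-difference approximation of psi(x) = d/dx log Gamma(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    ratio = digamma(eps) / math.gamma(eps)
    print(eps, ratio)  # the ratio approaches -1 as eps -> 0+
```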
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What are some examples in which the introduction of nets helped understand a concrete topological space? I've had the chance to learn about nets, though every statement I was exposed to didn't seem to be useful in practice.
For example, the fact that $x \in \bar{A}$ iff there exists some net $(x_\alpha)_{\alpha \in J}$ in $A$ such that $x_\alpha$ converges to $x$ doesn't seem to be useful in practice, because if one can construct such a net, then one can also prove directly that every neighborhood of $x$ has a non-empty intersection with $A$.
Question: Are there examples of results about concrete spaces (or otherwise — spaces that have some specific property that is not described by the concept of nets) being obtained using nets that could not be obtained using sequences, where nets actually prove to be an efficient tool?
|
You can prove Tychonoff's theorem using nets (that's how it's done in Folland's Real Analysis, for instance), but that can't be done with sequences, even for concrete spaces.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Solving $\tau(n)+\phi(n)=n$ for $n\in\mathbb{N}_{\ge 1}$.
Let $\tau(n)$ denote the number of divisors of a positive integer $n$,
and let $\phi(n)$ be Euler's totient function, i.e. the number of
positive integers less than and coprime to $n$. I'd like to find all
$n$ such
$$ \tau(n)+\phi(n)=n.$$
*
*The case when $n=p$ for $p\in\mathbb{P}$ is easy. We have
$\tau(n)=2$, $\phi(n)=p-1$ and the equation is equivalent to $$
2+p-1=p+1=p,$$ which is impossible.
*The case of $n=p^k$, where $k\in\mathbb{N}_{\ge 2}$ is already more complicated. We have $\tau(p^k)=k+1$ and $\phi(p^k)=p^k(1-1/p)$, hence the equation is equivalent to
$$ k+1+p^k\left(1-\frac{1}{p}\right)=p^k,$$
which can be rearranged into
$$ k=p^{k-1}-1=p^{k-1}-1^{k-1}=(p-1)(p^{k-2}+p^{k-3}+\ldots+1).$$
For example, for $k=2$ this is $2=p-1$, and $p=3$. It's easy to check that $n=3^2=9$ satisfies the conditions. For $k=3$, we have $3=p^2-1$, and $p=2$. Hence $n=2^3=8$. $k=4$ gives $4=p^3-1$, and unfortunately there's no solution.
*When $n=p_1\cdot p_2^k$ with $p_1,p_2$ distinct primes, both $\tau$ and
$\phi$ are multiplicative, so
$\tau(p_1p_2^{k})=2(k+1)$, $\phi(p_1)=p_1-1$ and
$\phi(p_2^k)=p_2^k(1-1/p_2)$, and the equation transforms into
$$2(k+1)+(p_1-1)\,p_2^k\left(1-\frac{1}{p_2}\right)=p_1p_2^k.$$
For $k=1$, this is
$$4+(p_1-1)(p_2-1)=p_1p_2,$$ or
$$p_1+p_2=5,$$
which forces $\{p_1,p_2\}=\{2,3\}$ and gives the solution $n=6$, but does not suggest a general approach.
I have no clue about the approach to the general case. I'm not even sure my current approach is useful at all.
Any hints greatly appreciated.
|
$$n-\tau(n) = \sum_{a=2}^n 1_{a \,\nmid \, n}$$
$\sum_{a=2}^n 1_{a \,\nmid \, n} = \phi(n) = 1+\sum_{a=2}^n 1_{(a,n)=1}$ means there is exactly one $a \in [2,n]$ such that $(a,n)\ne 1$ and $a\nmid n$.
Take a prime $p \mid (a,n)$. Thus $n = p d$, and for any $b = p r$ with $r \in [2,d]$, $(r,d)=1$, we have $(b,n)\ne 1$ and $b\nmid n$, i.e. $\phi(d)-1$ possibilities. Therefore $\phi(d)-1 = 1$, and $d = 3$ or $4$ or $6$.
Hence $n= 3p$ or $4p$ or $6p$ and we obtain the complete list of solutions easily.
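A brute-force search (my own confirmation, not part of the original answer) recovers exactly the solutions this argument predicts:

```python
import math

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def phi(n):
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

solutions = [n for n in range(1, 1001) if tau(n) + phi(n) == n]
print(solutions)  # [6, 8, 9], i.e. n = 3*2, 4*2, 3*3
```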
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How are $z$ and $z^*$ independent? I have been told that a complex number $z$ and its conjugate $z^*$ are independent. Part of me understands this, since for two independent variables $x$ and $y$ we can always define new independent variables $x' = \alpha x + \beta y$ and $y' = \alpha x - \beta y$.
However, this contradiction is confusing me:
Suppose I assume $x$ and $y$ are real. Then if I know $z$, I know both $x$ and $y$, which sort of makes sense because $\mathbb C \cong \mathbb R^2$. For example, if you tell me $z = 4 + 5i$, then $z^*$ is uniquely determined to be $4 - 5i$. How can we then say $z$ and $z^*$ are independent? I cannot change $z$ without also changing $z^*$. I can, however, change $x$ without changing $y$.
|
It is true that there is a one-to-one map between $z$ and $z^*$, it's just the reflection about the $x$-axis of the complex plane. Therefore, it is certainly not true that $z$ and $z^*$ are independent. However, if we consider $z^*$ as a function of $z$, so that $z^* = f(z)$, then it turns out that $f(z)$ is not a "nice" function in a sense that it cannot be built out of basic arithmetic operations such as $+,-,\times,\div$ and, finally, it is not differentiable. This means that, in the complex setting, the complex conjugation becomes an additional, independent, "arithmetic operation". It is in this sense that $z$ and $z^*$ are independent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2457972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 2
}
|
Let $f : I \to \mathbb R$ be continuous. For any compact interval $J \subseteq f(I), \exists$ a compact interval $K \subseteq I$ with $f(K)=J.$
Let $f \colon I \to \mathbb R$ be continuous where $I$ is an interval. For any compact interval $J \subseteq f(I)$ there exists a compact interval $K \subseteq I$ such that $f(K)=J.$
My attempt:
Let $J=[f(x),f(y)]$ where $x,y \in I.$ Without loss of generality $x<y.$ Let $p=f(x), q=f(y).$
Let $$A=f^{-1}\{p\}\cap [x,y]$$ Then $A$ is closed and bounded, hence compact. Therefore, $r=\sup A \in A.$ Thus, $f(r)=f(x).$
Similarly, let $$B=f^{-1}\{q\}\cap [x,y]$$ Then $s=\inf B \in B$. Thus, $f(s)=f(y).$
Now I want to show that $f([r,s])=J.$ I understand intuitively and geometrically that if there is a point $w \in [r,s]$ such that $f(w)<f(x),$ then we'll get a point $w'$ in $(w,s)$ such that $f(w')=f(x)=p.$ This will contradict the definition of $r.$ However, I'm not able to make this precise.
Is there a rigorous argument to show $f([r,s])=J?$
|
WLOG $p\leq q.$ (Otherwise study the function $g(t)=-f(t)$...). We have $$f([r,s])\supset [f(r),f(s)]=[p,q]$$ by the IVP (intermediate value property), because $f$ is continuous.
If $t\in (r,s)$ and $f(t)<p\leq q=f(s)$ then by the IVP there exists $t'\in (t,s)$ with $f(t')=p,$ contrary to the def'n of $r.$
If $t\in (r,s)$ and $f(t)>q\geq p=f(r)$ then by the IVP there exists $t''\in (r,t)$ with $f(t'')=q$ contrary to the def'n of $s.$
So $f([r,s])\subset [p,q].$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2458102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Proving that every reduced residue class contains at least one prime I don't know if I expressed this clearly, but I want to know if the following is true, and also get some help proving it in case it is.
$\forall a,b \in \mathbb{N} , \gcd{(a,b)} = 1 \Rightarrow \exists p \equiv b \pmod{a}$
Where $p$ is a prime and $b < a$.
I know the converse is true, because a prime not dividing $a$ can only be congruent to a reduced residue class mod $a$.
If it is too trivial, some hints can suffice, if it is not, pointing to good material would be greatly appreciated.
Thanks in advance.
|
This is a consequence of Dirichlet's theorem on primes in arithmetic progressions. I don't know a way to prove your theorem without it.
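For a concrete illustration (modulus $a=10$ is my own choice), a short search finds the smallest prime in each reduced residue class:

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

a = 10
reduced = [b for b in range(1, a) if math.gcd(a, b) == 1]  # 1, 3, 7, 9
smallest = {}
for b in reduced:
    p = b
    while not is_prime(p):
        p += a          # walk along the arithmetic progression b, b+a, b+2a, ...
    smallest[b] = p
print(smallest)  # {1: 11, 3: 3, 7: 7, 9: 19}
```

Note that Dirichlet's theorem is what guarantees the `while` loop terminates for every reduced class.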
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2458224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
SAT II Geometry Find the missing side length
I'm thinking the answer is choice A but I want someone to back up my reasoning/check. So since DE and DF are perpendicular to sides AB and AC respectively that must make EDFA a rectangle. Therefore AF must be 4.5 and AE must be 7.5. Since they state AB = AC that must mean EB is 4.5 and CF is 7.5. Therefore CF rounded to the nearest whole number is 8
|
Here are my thoughts on this one. It is indeed clear that triangles $BED$ and $CDF$ are similar. If $BD = y$ then $DC$ is $24-y$. Using similarity we get the ratio $y/4.5 = (24-y)/7.5$, from which follows $y=9$ and so $CD=15$. With the Pythagorean theorem in the right triangle $DFC$ you find $CF=\sqrt{15^2-7.5^2}\approx 12.99$.
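The computation can be checked in a few lines (my own verification of the numbers):

```python
import math

# Similarity gives y / 4.5 = (24 - y) / 7.5, i.e. 7.5 y = 4.5 (24 - y):
y = 4.5 * 24 / (4.5 + 7.5)   # BD = 9
DC = 24 - y                  # 15
CF = math.sqrt(DC ** 2 - 7.5 ** 2)
print(y, DC, round(CF, 2))   # 9.0 15.0 12.99
```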
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2458313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Convexity of logistic loss How to prove that logistic loss is a convex function?
$$f(x) = \log(1 + e^{-x})?$$
I tried to derive it using first-order conditions, and also took the second derivative, though I see neither $f(y) \geq f(x) + f'(x)(y-x)$ nor positive definiteness (aka an always-positive second derivative in this case).
|
You can simplify the given function and then take the second-order derivative:
$$y=\ln(1 + e^{-x})=\ln\frac{e^x+1}{e^x}=\ln(e^x+1)-x.$$
$$y'=\frac{e^x}{e^x+1}-1,$$
$$y''=\frac{e^x(e^x+1)-e^{2x}}{(e^x+1)^2}=\frac{e^x}{(e^x+1)^2}>0.$$
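A quick numeric spot-check (illustrative only) of both the positive second derivative and the convexity inequality $f(tx+(1-t)y)\le t f(x)+(1-t)f(y)$:

```python
import math

f = lambda x: math.log(1 + math.exp(-x))

# y'' = e^x / (1 + e^x)^2 is positive at every sampled point:
for x in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    assert math.exp(x) / (1 + math.exp(x)) ** 2 > 0

# Convexity inequality at a sample triple (x, y, t):
x, y, t = -2.0, 3.0, 0.3
assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y)
```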
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2458438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Two points on curve that have common tangent line Find the two points on the curve $y=x^4-2x^2-x$ that have a common tangent line.
My solution: Suppose that these two points are $(p,f(p))$ and $(q,f(q))$, provided that $p \neq q$. Since they have a common tangent line, $y'(p)=y'(q),$ i.e. $4p^3-4p-1=4q^3-4q-1$, and after cancellation we get: $p^2+pq+q^2=1$.
Tangent lines to the curve at points $(p,f(p))$ and $(q,f(q))$ are $y=y(p)+y'(p)(x-p)$ and $y=y(q)+y'(q)(x-q)$, respectively. I have tried to put $x=q$ in the first and $x=p$ in the second equations, but my efforts were unsuccessful.
Can anyone explain me how to tackle that problem?
|
Complete the square :
your curve is $y = x^4 - 2x^2 - x = (x^2 - 1)^2 + (-x-1)$.
So the curve stays above the line $y = -x-1$, and is tangent to it when it touches it, that is when $x^2-1 = 0$, so that's when $x=1$ and $x=-1$.
The line $y=-x-1$ is tangent to the curve at those two points $(1,-2)$ and $(-1,0)$.
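This is easy to verify directly (a quick check of my own): the curve meets $y=-x-1$ at $x=\pm1$ with matching slope $-1$, and never dips below the line:

```python
f  = lambda x: x ** 4 - 2 * x ** 2 - x
fp = lambda x: 4 * x ** 3 - 4 * x - 1   # derivative of f
line = lambda x: -x - 1

for x in (1, -1):
    assert f(x) == line(x)   # the curve meets the line ...
    assert fp(x) == -1       # ... with the line's slope, so it is tangent there

# f(x) - line(x) = (x^2 - 1)^2 >= 0, so the curve never goes below the line:
assert all(f(k / 10) >= line(k / 10) for k in range(-30, 31))
```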
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2458594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|