| Q | A | meta |
|---|---|---|
Replacing a natural number containing a certain digit with the sum of two without that digit A question in Google Code Jam 2019 qualification round wanted a positive integer n which contains at least one digit 4 to be represented as a sum of two positive integers a and b, neither containing 4.
For my solution, I found that simply replacing all 4s in n with 3s to get a
always results in no 4s in b for all n < 100,000,000 (when it starts taking too long on my puny laptop). Can it be proven that this method always works for any integer n?
More generally, can I always replace digit d with d - 1, for n > 1?
| For 2 ≤ d ≤ 9, the method in my comment always works as b just contains 1 in the same position as any d and 0 otherwise. E.g. for d = 2, n = 25225222,
  25225222
- 15115111
----------
  10110111
For d = 0, put 9 in the same position as any 0 and 9 otherwise.
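The construction for 2 ≤ d ≤ 9 is easy to sanity-check in a few lines of Python (a sketch; `split_no_digit` is a name made up here):

```python
def split_no_digit(n, d):
    """Split n = a + b where neither a nor b contains digit d (2 <= d <= 9).

    a replaces every occurrence of d with d - 1; b = n - a then has digit 1
    exactly where n had digit d, and 0 elsewhere.
    """
    s = str(n)
    a = int(s.replace(str(d), str(d - 1)))
    return a, n - a

a, b = split_no_digit(25225222, 2)
print(a, b)  # 15115111 10110111, matching the subtraction above
```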
d = 1 is interesting. With this program:
def is_bad(x): return '1' in str(x)  # checking digit d = 1 (the output below is for d = 1)
for n in range(1, int(1e5) + 1):
    if is_bad(n):
        is_ok = False
        for a in range(1, n):
            b = n - a
            if not (is_bad(a) or is_bad(b)):
                is_ok = True
                break
        if not is_ok:
            print('%d has no solution!' % n)
        # else: print('%d = %d + %d' % (n, a, b))
I got
1 has no solution!
19 has no solution!
21 has no solution!
199 has no solution!
201 has no solution!
219 has no solution!
221 has no solution!
1999 has no solution!
2001 has no solution!
2019 has no solution!
2021 has no solution!
2199 has no solution!
2201 has no solution!
2219 has no solution!
2221 has no solution!
19999 has no solution!
20001 has no solution!
20019 has no solution!
20021 has no solution!
20199 has no solution!
20201 has no solution!
20219 has no solution!
20221 has no solution!
21999 has no solution!
22001 has no solution!
22019 has no solution!
22021 has no solution!
22199 has no solution!
22201 has no solution!
22219 has no solution!
22221 has no solution!
There are 2 ^ (m - 1) values of n with m digits with no solution.
For m ≥ 2: take each value from 0 to 2 ^ (m - 2) - 1, zero-pad its binary representation to m - 2 digits, prepend 1 and append 0, read the resulting digit string as a decimal number, multiply it by 2, and add ± 1.
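That construction can be sketched as follows and checked against the program output above (`no_solution_values` is a hypothetical helper name):

```python
def no_solution_values(m):
    """The 2**(m-1) m-digit values of n with no solution for d = 1."""
    if m == 1:
        return [1]
    out = []
    for k in range(2 ** (m - 2)):
        bits = format(k, 'b').zfill(m - 2) if m > 2 else ''
        core = 2 * int('1' + bits + '0')   # read the digit string as decimal, double it
        out.extend([core - 1, core + 1])
    return sorted(out)

print(no_solution_values(3))  # [199, 201, 219, 221]
```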
So it's possible to find a pair for all d except 1!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3178755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Integrating with different methods leads to different results? Acceleration of a particle is given as $a = 0.1(t-5)^2$. The particle is initially at rest.
Here is the way I initially attempted it. (I didn't use substitution as the question in the textbook appears before substitution is officially taught.)
$$a\:=\:0.1t^2-t+2.5$$
so
$$v = \frac{1}{30}t^3-0.5t^2+2.5t+c$$
$c$ evaluates to zero.
But my textbook uses a substitution and gets:
$$\frac{1}{30}\left(t-5\right)^3 +c$$
In this case, $c$ is nonzero when the initial condition at $t= 0$ is imposed, and so the end result looks different. Wouldn't this lead to different values in general?
| Note that you ended up with
$$
v(t) = \frac{1}{30} t^3 - \frac12 t^2 + \frac52 t
$$
But the book's answer must have $c = 5^3/30$, so the complete answer is
$$
\begin{split}
v(t) &= \frac{1}{30} (t-5)^3 + \frac{5^3}{30}\\
&= \frac{(t-5)^3+5^3}{30} \\
&= \frac{t^3 - 3\cdot 5 t^2 + 3 \cdot 5^2t - 5^3 + 5^3}{30} \\
&= \frac{t^3}{30} - \frac{t^2}{2} + \frac{5t}{2}\\
\end{split}
$$
which is the same answer as the one you ended up with.
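A quick numerical check that the two closed forms coincide (evaluating both at a few points):

```python
def v_expanded(t):
    # v(t) = t^3/30 - t^2/2 + 5t/2, the asker's form with c = 0
    return t**3 / 30 - t**2 / 2 + 5 * t / 2

def v_book(t):
    # the textbook's form with c = 5^3/30 fixed by v(0) = 0
    return (t - 5) ** 3 / 30 + 5 ** 3 / 30

for t in (0.0, 1.0, 2.5, 7.0):
    assert abs(v_expanded(t) - v_book(t)) < 1e-12
print("both forms agree")
```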
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why isn't $e^n$ equal to 1? We know $e^{2\pi i} = 1$, and that $(x^m)^n = x^{mn}$. This way, we can rewrite $e^{n}$ as some version of $(e^{2\pi i})^{\frac{n}{2\pi i}}$ for most n (right?).
But if this is true, then why isn't $e^3 = 1$, for example, if we can rewrite it as $(e^{2\pi i})^{\frac{3}{2\pi i}} = (1)^{\frac{3}{2\pi i}} = 1$ ? What am I missing here?
I just came upon this issue by accident while doing a problem, and I'm not sure how to best resolve it.
| Good question! The answer is that although the rule ${(x^b)^c} = x^{bc}$ holds when $b$ and $c$ are integers, it does not hold in general when they are not integers.
Consider the following simpler example. As you know, $(-1)^2 = 1$. Raising both sides to the power $\frac12$, we get $${((-1)^2)}^{1/2} = 1^{1/2},$$ which is still correct, but if we then applied the ${(x^b)^c} = x^{bc}$ rule to the left side we would obtain the contradiction $$(-1)^{2\cdot(1/2)} = (-1)^1 = -1 = 1.$$
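One can also watch the rule fail numerically. Python's `cmath` raises to complex powers via the principal branch of the logarithm, which is exactly where $(x^b)^c = x^{bc}$ breaks down for non-integer exponents (a sketch):

```python
import cmath

z = cmath.exp(2j * cmath.pi)        # numerically 1 (up to rounding)
lhs = z ** (3 / (2j * cmath.pi))    # (e^{2*pi*i})^{3/(2*pi*i)} via principal branch
rhs = cmath.exp(3)                  # e^3, about 20.0855
print(abs(lhs), abs(rhs))           # ~1.0 versus ~20.0855: the rule fails
```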
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Demand for operations research in industry I want to know how much demand there is for OR in the commercial world, and understand the relevance and prevalence of OR.
Is the airline industry the place where this technique is applied the most? What other areas would OR be in demand from your experience?
Also, is it mainly academics that perform OR for corporates or are consulting firms providing this service as well?
| Airlines do indeed rely on operations research. So does UPS, which has an in-house operations research group that has generated sizeable cost savings for them.
Rather than listing the various industries that use OR, I'll just recommend that you look at past winners of the INFORMS Franz Edelman Award. You might also want to look through a few issues of the INFORMS journal Interfaces, recently renamed the INFORMS Journal on Applied Analytics. Many of the articles in it document OR applications in industry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does every null set have a superset which is an $F_{\sigma}$ null set? Let $A$ be a Lebesgue null set in $\mathbb R$. Can we find a set $B$ with the following properties:
1) $A \subset B$
2) $B$ has measure $0$
3) $B$ is an $F_{\sigma}$ set (i.e. a countable union of closed sets).
I suppose so, because all the null sets I have ever seen have been constructed from finite or countable sets or from the Cantor set.
| A closed set of Lebesgue measure zero has empty interior. So a countable union of such sets, an $F_\sigma$ of measure zero, is meager (also called "first Baire category"), and so are all its subsets. But there are Lebesgue null sets that are not meager, for example the set of those numbers in $[0,1]$ whose binary expansion does not have asymptotically half zeros and half ones (i.e., those numbers whose binary expansions violate the strong law of large numbers).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Was there ever an axiom rendered a theorem? In the history of mathematics, are there notable examples of theorems which have been first considered axioms?
Alternatively, was there any statement first considered an axiom that later has been shown to be derived from other axiom(s), therefore rendering the statement a theorem?
| The most famous example I know is that of Hilbert's axiom II.4 for the linear ordering of points on a line, for Euclidean geometry, proven to be superfluous by E.H. Moore. See this wikipedia article, especially "Hilbert's discarded axiom". https://en.wikipedia.org/wiki/Hilbert%27s_axioms
In Moore's article linked there, it is stated that axiom I.4 is also superfluous.
http://www.ams.org/journals/tran/1902-003-01/S0002-9947-1902-1500592-8/S0002-9947-1902-1500592-8.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
} |
Change of variables in stochastic PDE I have the following stochastic partial differential equation (SPDE):
$d v = -\mu \frac{\partial v}{\partial x} dt + \frac{1}{2} \frac{\partial^2 v}{\partial x^2} dt - \sqrt{\rho} \frac{\partial v}{\partial x} d M_t$,
with $M_t$ standard Brownian motion and $\mu, 0 \leq \rho \leq 1$ real-valued parameters. I read that the solution (without boundary conditions) can be written as the solution of the PDE
$ \frac{\partial u}{\partial t} = \frac{1}{2} (1-\rho) \frac{\partial^2 u}{\partial x^2} - \mu \frac{\partial u}{\partial x}$
shifted by the current value of the Brownian driver
$v(t,x) = u(t, x-\sqrt{\rho} M_t)$
How can I show this explicitly?
NOTE: It might be useful to keep in mind that the SPDE describes the density of the particles
$ d X_t^i = \mu dt + \sqrt{1-\rho} d W_t^i + \sqrt{\rho} d M_t$,
with $W_t^i$ independent Brownian motions. $M_t$ somehow represents a "common noise".
NOTE: is there maybe a mistake in the SPDE? Should we have $\frac{1}{2} (1-\rho) \frac{\partial^2 v}{\partial x^2}$?
| Proving these kinds of phase shifts can indeed be difficult, as Itô calculus does not provide a chain rule as convenient as the one in deterministic calculus. There is however a trick to solve this problem. Choose a smooth test function $\zeta$ and define the functional
\begin{align}
\phi(v_t,M_t)=\langle v_t,\zeta\rangle_{H}=\langle u(t,\cdot),\zeta(\cdot+\sqrt{\rho}M_t)\rangle_{H},
\end{align}
where I assume your solution lives in some suitable Hilbert space $H$. Now using standard Ito calculus you can derive an SDE for $\phi$ (because you know the (S)PDE for $u$ and the SDE for $M_t$). Then, you can shift the $\sqrt\rho M_t$ back to $u$ (which results in $v$) and then drop the inner product with $\zeta$. The resulting SPDE should be equal to your original problem.
I hope this answer is still useful for you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\lim_{x\to 1}\frac{(x^n -1)(x^{n-1}-1)\cdots(x^{n-k+1}-1)}{(x-1)(x^2-1)\cdots(x^k-1)}$
Evaluate:$$
\lim_{x\to 1}\frac{(x^n -1)(x^{n-1}-1)\cdots(x^{n-k+1}-1)}{(x-1)(x^2-1)\cdots(x^k-1)}
$$
I'm trying to spot an error in my calculations. It is known that $x^n - 1$ may be factored out as $(x-1)(1+x+x^2+\cdots+x^{n-1})$. Using that fact consecutively for all the brackets one may obtain:
$$
k\ \text{times}\begin{cases}
x^n-1 = (x-1)(1+x+\cdots+x^{n-1})\\
x^{n-1}-1 = (x-1)(1+x+\cdots+x^{n-2})\\
\cdots\\
x^{n-k+1}-1 = (x-1)(1+x+\cdots+x^{n-k})
\end{cases}
$$
For the denominator:
$$
k\ \text{times}\begin{cases}
(x-1) = (x-1)\\
(x^2-1) = (x-1)(1+x)\\
\cdots \\
(x^k-1) = (x-1)(1+x+\cdots+x^{k-1})
\end{cases}
$$
So if we denote the expression under the limit as $f(x)$ we get:
$$
f(x) = \frac{(x-1)^k\prod\sum\cdots}{(x-1)^k\prod \sum\cdots}
$$
Now if we let $x\to1$ we get:
$$
\lim_{x\to1}f(x) = \frac{(n-1)(n-2)\cdots (n-k)}{1\cdot 2\cdot 3\cdots (k-1)} = \frac{(n-1)(n-2)\cdots (n-k)}{(k-1)!}
$$
But this doesn't match the keys section which has $n\choose k$ as an answer. I've checked several times but couldn't spot a mistake. Looks like I'm missing a $+1$ somewhere.
| A quick way to evaluate the limit:
First consider
\begin{align}
\lim_{x \to 1} \frac{x^{n-j+1} -1}{x^j -1} &= \lim_{x \to 1} \frac{(n-j+1) x^{n-j}}{j \, x^{j-1}} = \frac{n-j+1}{j}
\end{align}
now,
\begin{align}
\lim_{x\to 1}\frac{(x^n -1)(x^{n-1}-1)\cdots(x^{n-k+1}-1)}{(x-1)(x^2-1)\cdots(x^k-1)} &=
\lim_{x \to 1} \prod_{j=1}^{k} \frac{x^{n-j+1} -1}{x^j -1} \\
&= \prod_{j=1}^{k} \, \lim_{x \to 1} \frac{x^{n-j+1} -1}{x^j -1} \\
&= \prod_{j=1}^{k} \frac{n-j+1}{j} \\
&= \frac{(n)(n-1) \cdots (n-k+1)}{k!} = \frac{n!}{k! \, (n-k)!} \\
&= \binom{n}{k}.
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Ratio and Proportion quantitative aptitude exam for management entrance in India The following is a question from the Common Admission Test which is an aptitude exam conducted in India for admission into postgraduate management programmes of premier management institutes of the country.
The salaries of Ram and Shyam are in the ratio 6:7 while their expenditures are in the ratio 7:8 respectively. Which of the following can be the ratio of their savings?
A) 5:6
B) 6:7
C) 7:8
D) None of these
RatioProportionVariation
| HINT #1: Shyam makes $1/6$ more money but only spends $1/7$ more money, so what do you think the ratio of their savings can be?
HINT #2: Put Ram's money on the $x$-axis and Shyam's money on the $y$-axis. The salaries are represented by a vector $S=(6a, 7a)$ for some $a > 0$, and the expenditures are represented by a vector $E = (7b, 8b)$ for some $b > 0$. Draw the diagram, subtract the two vectors to arrive at the savings $V = S-E$. What can you conclude about the possible slope of $V$?
Can you finish from here, or do you need more hints?
Expanding on Hint #1: If Shyam makes $1/6$ more money and also spends $1/6$ more money, then clearly he will also save $1/6$ more money. I hope that is obvious. If not, try a few numbers, e.g. they make $6000, 7000,$ and spend $600, 700,$ and therefore save $5400, 6300.$ All three pairs are $6:7.$
If Shyam makes $1/6$ more money, but spends less than $1/6$ more money, then he will end up saving more than $1/6$ more money. This is less obvious, but follows from the previous paragraph. Again, e.g. they make $6000, 7000,$ but spend $700, 800,$ and save $5300, 6200.$ The last pair is $1:1.170$ which is higher than $6:7 = 1:1.167$.
Now consider the specific case of Shyam spending $1/7$ more money. This is less than $1/6$ more money, so he will save more than $1/6$ more money. This immediately rules out answers (B) and (C). But is the answer (A) or (D)? I.e. can the savings ratio reach $5:6 = 1: 1.2$?
If the expenditures $\ll$ the salaries, the savings ratio will remain close to (but slightly higher than) $6:7$. (I hope this is obvious, e.g. imagine them spending $0.07, 0.08.$) If the expenditures are very big, e.g. in the limit when Ram spends all his salary while Shyam doesn't, the ratio will become infinite (division by zero). So as the expenditures increase, any ratio above the $6:7$ ratio is reachable. Thus the answer is (A), i.e. $5:6$ is a possible ratio.
If you need more concrete reasoning, imagine they make $6000, 7000,$ and spend $7x, 8x$ for some value of $x$. Thus you are trying to solve:
$${7000 - 8x \over 6000 - 7x} = {6 \over 5} \implies 35000 - 40x = 36000 - 42x \implies 2x = 1000 \implies x=500$$
I.e. they spend $3500, 4000$ and save $2500, 3000$ which is indeed a $5:6$ ratio. But IMHO more importantly, you should be able to realize $5:6$ is reachable without actually solving for $x$.
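The final arithmetic is easy to verify in a couple of lines (a sketch of the worked example only):

```python
x = 500                      # expenditures are 7x and 8x
ram_save = 6000 - 7 * x      # 2500
shyam_save = 7000 - 8 * x    # 3000
print(ram_save, shyam_save)  # 2500 3000
assert ram_save * 6 == shyam_save * 5   # i.e. the savings ratio is 5:6
```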
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3179997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that there are at least two points on a manifold to which a vector is normal Let $M \subset \mathbb R^3$ be a $2$-dimensional manifold which is also a compact set. And let $v \in \mathbb R^3$ be a vector which satisfies $ ||v|| = 1$.
The task is to prove that there are at least two points $x,y \in M$ such that $v$ is a normal vector to their tangent spaces.
There was given a hint to look at the extremum points of the function $f(x) = <x,v>$ which means $f(x) = x_1v_1 + x_2v_2 + x_3v_3$. Since the functions is continuous and $M$ is compact then the function get a minimum and maximum on $M$. Let's say the $x$ is the maximum and $y$ is a the minimum. Moreover, I noticed that $\nabla f = (v_1,v_2,v_3) = v$.
However, I am not so sure how to continue from here. I am supposed to show that $v$ is normal to both $T_xM$ and $T_yM$ but I don't see how exactly. I still haven't used the fact that $M$ is a manifold so it probably uses that.
Help would be appreciated
| Following the hint you propose, $f(x)=\langle x, v\rangle$ has a maximum and a minimum. If they both coincide, it turns out that your $2$-manifold is a compact subset of a plane in $\mathbb{R}^3$, which would imply that it is a surface with boundary, contradicting your definition of submanifold as something that is locally a graph of a function with open domain in $\mathbb{R}^2$ (Invariance of Domain implicitly used).
Then if $x_0$ is the maximum of $f$ and $\gamma: (-\epsilon, \epsilon)\rightarrow M$ is a curve with $\gamma(0)=x_0$, $t=0$ is a critical point of $f\circ \gamma$ (this is just a directional derivative of $f$ at $x_0$). This means that $0=\nabla f(x_0)\cdot \gamma'(0)=v\cdot \gamma'(0)$. This proves that $v\perp T_{x_0}M$, since $T_{x_0}M$ is exactly the plane containing all vectors tangent to curves through $x_0$.
The exact same argument proves that $v\perp T_{y_0}M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the name of this method that can easily find $\lim_{x \to 0^+} \frac{x^x-1}{\ln(x)+x-1}$? I can easily find $\lim_{x \to 0^+} \frac{x^x-1}{\ln(x)+x-1}$ using these steps. However, I don't completely remember the rules my professor told me about it, and I want to know what it's name is so I can look it up.
$$\lim_{x \to 0^+} \frac{x^x-1}{\ln(x)+x-1} $$
$$= \lim_{x \to 0^+} (x^x-1)\cdot\lim_{x \to 0^+}(\frac{1}{\ln(x)+x-1}) $$
$$=0 \cdot 0$$
$$=0$$
It's confusing how $\lim_{x \to 0^+}(\frac{1}{\ln(x)+x-1}) = 0$ which puzzles me since $\ln(0)$ is undefined.
| The rule is: if $\lim_{x\to a} f(x) = k$ and $k$ is finite, and $\lim_{x\to a} g(x) = m$ and $m$ is finite, then $\lim_{x\to a} f(x)g(x) = km$ and $\lim_{x\to a} (f(x)+g(x))=m+k$, and if $m \ne 0$ then $\lim_{x\to a} \frac{f(x)}{g(x)} = \frac km$.
As to why $\lim_{x\to 0}\frac {1}{\ln x +x - 1} = 0$: there is a rule that if $\lim_{x\to a} f(x) = \pm \infty$ then $\lim_{x\to a} \frac 1{f(x)} = 0$.
We can extend these rules to cases where $\lim_{x\to a} f(x) = \infty$, but we must be careful that we actually know what we are talking about, or we might end up talking gibberish.
If $\lim_{x\to a}f(x) = \infty$ and $\lim_{x\to a} g(x) = k$ finite. Then $\lim_{x\to a} (f(x) \pm g(x))=\infty$.
If $k > 0$ then $\lim_{x\to a} f(x)g(x) = \infty$ and $\lim_{x\to a}\frac {f(x)}{g(x)} = \infty$. If $k < 0$ then $\lim_{x\to a} f(x)g(x) = -\infty$ and $\lim_{x\to a}\frac {f(x)}{g(x)} = -\infty$. If $k=0$ then $\lim_{x\to a}\frac {g(x)}{f(x)} = 0$ but $\lim_{x\to a} f(x)g(x)$ is indeterminate.
As to your note: in calculating $\lim_{x\to 0^+} \frac 1{\ln x + x-1}$ we have $\lim_{x\to 0^+} \ln x = -\infty$, so $\lim_{x\to 0^+} (\ln x + x - 1)= -\infty$ and $\lim_{x\to 0^+} \frac 1{\ln x + x-1}=0$.
As to your comment that $\ln 0$ is undefined... Of course it is! That's the entire reason we are taking limits. Notice $0^0$ is also undefined.
Note when we take limits $x \to a$, we are considering cases where $x$ is near $a$. We are NEVER considering the case where $x = a$.
$f(a)$ could be undefined, could be something utterly different, or could be completely irrelevant. We are looking at $f(x)$ when $x$ is near $a$. So nobody cares about what $f(a)$ does.
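The behaviour described above is easy to see numerically: both the reciprocal and the original quotient shrink toward $0$ as $x \to 0^+$ (a sketch):

```python
import math

for x in (1e-2, 1e-6, 1e-12):
    denom = math.log(x) + x - 1        # -> -infinity as x -> 0+
    numer = x ** x - 1                 # -> 0 as x -> 0+
    print(x, 1 / denom, numer / denom) # both columns tend toward 0
```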
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Is there always a way to generate nontrivial finite extensions of a field? Suppose I have a field $k,$ an algebraically closed field $L,$ and an embedding $k \hookrightarrow L.$ Then we know that for any algebraic extension of $k, E$ we can extend the embedding to $E \hookrightarrow L.$ However, if we take $L$ to be the algebraic closure of $k$ and $E$ to be a nontrivial finite extension of $L$ then $E$ is algebraic over $k$ and we have an embedding of $E \hookrightarrow L.$ So this would mean that $L$ and $E$ are infinite dimensional, otherwise, if $L$ and $E$ were finite, $E$ would have strictly greater dimension and such an embedding would not make sense. However, we have something like $\mathbb{C}$ that is a finite dimensional algebraic closure of $\mathbb{R}.$ So would this mean that there are no nontrivial finite extensions of algebraically closed fields in the first place?
To build a nontrivial finite extension of a field, we can take its polynomial ring and quotient by some irreducible polynomial, for example $\mathbb{R}[X]/(X^2 + 1).$ However, this does not work for algebraically closed fields. But I feel that we should be able to make nontrivial extensions for any field. Is this not the case?
| Your idea is almost a complete proof. You start off well:
Let $k$ be a field and $L$ an algebraic closure of $k$. Let $K$ be a finite extension of $L$. Then $K/L$ is algebraic, hence $K/k$ is algebraic, hence there exists an embedding $\iota:\ K\ \longrightarrow L$.
Now let $\alpha\in K$ and let $f\in L[X]$ be its minimal polynomial over $L$. Then $f$ is irreducible over $L$ and $\iota(\alpha)\in L$ is a root of $f$, so $f$ is linear. Hence $\alpha\in L$ and so $K=L$.
As for the question in your comment, the answer is no. If $k$ is a field then $k[x]/\mathfrak{m}$ is a finite and simple extension of $k$, but not every extension of fields is finite and simple.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A property of unique factorization domains I'm asked to prove the following statement:
Let $R$ be a unique factorization domain, let $a,b$ be coprime elements. Show that for any ideal $J$ we have $$(ab)+J=((a)+J )\cap ((b) + J).$$
When $J=0$ it's trivial, but when $J \ne 0$, I can't work it out, although it seems easy.
This statement arises naturally when dealing with primary decomposition.
Update: Thanks Andraw Hubery for pointing out that it is not in general true for UFD.(Although valid for PID)
| It's true for a principal ideal domain, since for $J=(x)$ this then becomes $$\mathrm{gcd}(ab,x)=\mathrm{gcd}(a,x)\cdot\mathrm{gcd}(b,x).$$
It's not true in general for unique factorisation domains. For example, let $R=K[x,y]$ be the polynomial ring over some field $K$, and take $a=x$, $b=y$ and $J=(x+y)$. Then
$$(xy)+J=(xy,x+y) \quad \textrm{and}\quad (x)+J=(x,y)=(y)+J,$$
and clearly $(xy,x+y)\neq(x,y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $\|u+tv\| \ge \|u\|$ for all $t$, prove that $u \cdot v=0$
Let $u, v \in \mathbb R^n$. Prove that if $$\|u+tv\| \ge \|u\|$$ for all $t \in \mathbb R$, then $u\cdot v=0$ (vectors $u$ and $v$ are perpendicular).
I tried writing $v$ as $(n+xu)$, where $u\cdot n=0$, and then try to prove that $x$ must be zero, but was unable to develop this solution.
| Although the original question was interested only in the case where the vector space of interest is $\Bbb R^n$, the existing answer has shown the result is more general than that, applying to any vector space over $\Bbb R$. In fact, we can generalise even further, a point I think is of some interest: if in a space over $\Bbb C$ the inequality runs over all complex $t$ (or $z$, as I'll call it to make the complexity manifest), the orthogonality still follows.
Suppose $V$ is a vector space over $\mathbb{C}$ and $u,\,v\in V$ such that $\forall z\in\mathbb{C}\left(\left\Vert u+zv\right\Vert \ge\left\Vert u\right\Vert \right)$; then $\left\langle u|v\right\rangle =0$. (If $v=0$ this is immediate, so assume $v\neq0$, which lets us divide by $\left\Vert v\right\Vert^{2}$ below.) We'll assume the inner product is antilinear in its right argument. Define $x:=\Re z,\,y:=\Im z$ and $a:=\Re\left\langle u|v\right\rangle ,\,b:=\Im\left\langle u|v\right\rangle$ so $$0\le\left\Vert u+zv\right\Vert ^{2}-\left\Vert u\right\Vert ^{2}=\left(x^{2}+y^{2}\right)\left\Vert v\right\Vert ^{2}-2\left(ax-by\right)$$for all $x,\,y\in\mathbb{R}$. The case $x=\frac{a}{\left\Vert v\right\Vert ^{2}},\,y=\frac{-b}{\left\Vert v\right\Vert ^{2}}$ gives $$\left(x^{2}+y^{2}\right)\left\Vert v\right\Vert ^{2}-2\left(ax-by\right)=-\frac{a^{2}+b^{2}}{\left\Vert v\right\Vert ^{2}},$$so $a=b=0$. Hence $\left\langle u|v\right\rangle =0$.
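For intuition, here is a small numeric illustration in $\mathbb{R}^2$ of the contrapositive: when $u\cdot v \neq 0$, a small step against the inner product shortens $u$ (names are made up for this sketch):

```python
import math

u = (3.0, 1.0)
v = (1.0, 0.0)                       # u . v = 3, not zero
t = -0.1                             # small step against the inner product

def norm(w):
    return math.hypot(w[0], w[1])

moved = (u[0] + t * v[0], u[1] + t * v[1])
print(norm(moved) < norm(u))         # True: ||u + t v|| < ||u||
```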
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
A question on the implicit function theorem. The question is given below:
Approximate by a second-degree polynomial the solution of $z^3 + 3xyz^2 - 5x^2y^2z + 14 = 0$, for $z$ as a function of $x,y,$ near (1, -1, 2).
Could anyone give me a hint for the solution please?
My ideas:
our desired function will map $(x,y)$ to $z$, so $n=2$ and $m = 1$. Further, $x_{0} = (1, -1)$ and $y_{0} = 2$, and $F(x,y,z) = z^3 + 3xyz^2 - 5x^2y^2z + 14.$ Clearly $F$ has continuous partial derivatives, and the derivative is $$DF(x,y,z)=[3yz^2 -10xy^2z \quad 3xz^2-10x^2yz \quad 3z^2+6xyz-5x^2y^2 ],$$
And hence,
$$DF(1,-1,2)=[-32 \quad 32 \quad -5 ],$$
Then,
$$DF(1,-1,2)\begin{bmatrix}
0 \\
0 \\
z
\end{bmatrix}= -5z,$$
But then what next?
| You don't need the derivative of $F$. The steps I suggest you take are the following:
*We know, from the implicit function theorem, that, near the point $(1, -1)$, there exists a function $z:\mathbb R^2\to \mathbb R$ such that $$z(x,y)^3 + 3xyz(x,y)^2 - 5x^2y^2z(x,y) + 14 = 0.$$ That is, in this step, we write $z$ as the function. We also know, of course, that $z(1,-1)=2$.
*Now, take the equation above, and take the derivative, with respect to $x$, of it. Be careful: $z$ is a function of $x$ now, but $y$ is not! So, $\frac{\partial z(x,y)^3}{\partial x} = 3z(x,y)^2\cdot \frac{\partial z(x,y)}{\partial x}$, for example.
*From the point above, you should get an equation from which you can extract the value of $\frac{\partial z}{\partial x}$ at $(x,y)=(1,-1)$.
*Do the same to get the partial derivative for $y$.
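The steps above can be checked numerically: solve $F=0$ for $z$ near $2$ with a few Newton iterations and difference the result (all names here are a sketch). The implicit function theorem predicts $\partial z/\partial x = -F_x/F_z = -32/5$ and $\partial z/\partial y = -F_y/F_z = 32/5$ at $(1,-1)$, consistent with the derivative $[-32 \;\; 32 \;\; -5]$ computed in the question.

```python
def F(x, y, z):
    return z**3 + 3*x*y*z**2 - 5*x**2*y**2*z + 14

def Fz(x, y, z):
    return 3*z**2 + 6*x*y*z - 5*x**2*y**2

def z_of(x, y):
    z = 2.0                      # start at the known point z(1, -1) = 2
    for _ in range(50):          # Newton iterations on z -> F(x, y, z)
        z -= F(x, y, z) / Fz(x, y, z)
    return z

h = 1e-6
zx = (z_of(1 + h, -1) - z_of(1 - h, -1)) / (2 * h)
zy = (z_of(1, -1 + h) - z_of(1, -1 - h)) / (2 * h)
print(zx, zy)   # close to -6.4 and 6.4
```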
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Complex character of a finite group $G$ We let $G$ be a finite group.
If $\chi$ is a complex character of $G$, we define $\overline{\chi}:G \to \mathbb{C}$ by $\overline{\chi}(g)=\overline{\chi(g)}$ for all $g \in G$, and define $\chi^{(2)}:G \to \mathbb{C}$ by $\chi^{(2)}(g) = \chi(g^2)$. We write $\chi_{S}$ and $\chi_{A}$ for the symmetric and alternating part of $\chi$. We note that $\chi_{S}$ and $\chi_{A}$ are characters of $G$ with $\chi^{2}=\chi_{S} + \chi_{A}$ and $\chi^{(2)}=\chi_{S} - \chi_{A}$.
First, I want to show that $\overline{\chi}$ is a character of $G$. Now, we can show that $\overline{\chi}(g)=\overline{\chi(g)}=\chi(g^{-1})$ for all $g \in G$ thus $\overline{\chi}(g)$ is a character. Is that OK?
Next, I want to show that $\chi$ is irreducible iff $\overline{\chi}$ is irreducible.
For $(\implies)$ we assume that $\overline{\chi}$ is not irreducible. Thus we must have a reducible representation $\rho:G \to GL(V)$. But then $\chi$ must also be reducible, w.r.t. to that reducible representation, which is a contradiction. $(\impliedby)$ we can show by the same argument. Tbh, it doesn't seem correct to me, I don't think that I really understand what could go wrong here.
Lastly, we let $\chi_{1}$ be the trivial character of $G$. If I understand it correctly, $\chi_{1}(g)=1$ for all $g \in G$. We want to show that $\langle \chi , \overline{\chi} \rangle= \langle \chi_{S},\chi_{1}\rangle + \langle \chi_{A}, \chi_{1} \rangle$.
We have:
$\langle \chi , \overline{\chi} \rangle = \frac{1}{|G|}
\displaystyle\sum_{g \in G} \chi(g)\overline{\chi(g)} = \frac{1}{|G|}
\displaystyle\sum_{g \in G} \chi(g)\chi(g^{-1})$
and now I am not sure where to go from here, I'd appreciate any hints.
| Recall that a character $\chi$ is irreducible if and only if $\langle \chi,\chi\rangle=1$. Note then that
$$\langle \overline{\chi},\overline{\chi}\rangle =\frac{1}{|G|}\sum_{g\in G}\overline{\chi}(g)\overline{\overline{\chi}(g)}=\frac{1}{|G|}\sum_{g\in G}\overline{\chi(g)}\chi(g)$$
but
$$\langle \chi,\chi\rangle=\frac{1}{|G|}\sum_{g\in G}\chi(g)\overline{\chi(g)}$$
Thus, we see that $\langle \chi,\chi\rangle=\langle\overline{\chi},\overline{\chi}\rangle$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3180890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Hadamard product tricks of particular entities Consider the following matrices: $\mathbf{Q}_{H}$ of order $\left( T\times n\right)$, $\mathbf{A}$ of order $\left( T\times T\right)$, and $\mathbf{\hat{u}}$ of order $\left( n\times 1\right)$. Denote by $\circ$ the Hadamard product and by $\left\vert \mathbf{\hat{u}}\right\vert ^{\circ -1}=\left\{ 1/\left\vert \hat{u}_{i}\right\vert \right\}$ the element-wise inverse of the vector $\mathbf{\hat{u}}$.
Observe that
$$
\mathbf{A}\left( \mathbf{Q}_{H}\mathbf{\hat{u}}\circ \left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert ^{\circ -1}\right) \neq \mathbf{A}\mathbf{Q}_{H}\mathbf{\hat{u}}\circ \left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert ^{\circ -1}
$$
thus
\begin{eqnarray*}
\left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert \circ \left( \mathbf{A}\left( \mathbf{Q}_{H}\mathbf{\hat{u}}\circ \left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert ^{\circ -1}\right) \right) &\neq&\left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert \circ \left( \mathbf{A}\mathbf{Q}_{H}\mathbf{\hat{u}}\circ \left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert ^{\circ -1}\right) \\
&\neq&\left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert \circ \mathbf{A}\mathbf{Q}_{H}\mathbf{\hat{u}}\circ \left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert ^{\circ -1} \\
&\neq&\mathbf{A}\mathbf{Q}_{H}\mathbf{\hat{u}}
\end{eqnarray*}
Is there a mathematical trick to get rid of the $\left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert$ in this entity
$$
\left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert \circ \left( \mathbf{A}\left( \mathbf{Q}_{H}\mathbf{\hat{u}}\circ \left\vert \mathbf{Q}_{H}\mathbf{\hat{u}}\right\vert ^{\circ -1}\right) \right)
$$
Or maybe another way of writing it that will help some of the proofs.
Thank you so much in advance
| Define the vectors
$$v = Qu, \quad s={\rm sign}(v), \quad b={\rm abs}(v)$$
where the functions are applied elementwise.
Assuming $\,v_k\ne0$, the elementwise division of the vector is
$$v\circ b^{\circ -1} = v\oslash b = s$$
So the main equation reduces to
$$\eqalign{
y = b\circ(As) = BA\,s \cr
}$$
where the Hadamard product was eliminated by introducing the diagonal matrix
$$B = {\rm Diag}(b)$$
If some of the elements $\,v_k=0$, you can proceed by defining the corresponding $\,s_k=0$.
Another interesting way of writing the result is
$$
C = bs^T, \qquad y = (C\circ A)\,e
$$
where $e$ is the all ones vector.
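A small numeric check of these identities in plain Python (no special library; $n=4$, random data, names made up here):

```python
import random

random.seed(1)
n = 4
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
v = [random.gauss(0, 1) for _ in range(n)]     # stands in for Q_H u-hat
s = [(x > 0) - (x < 0) for x in v]             # sign(v), elementwise
b = [abs(x) for x in v]                        # abs(v), elementwise

# v o b^{o -1} = s (assuming no v_k is zero)
assert all(abs(v[k] / b[k] - s[k]) < 1e-12 for k in range(n))

As = [sum(A[i][j] * s[j] for j in range(n)) for i in range(n)]
y1 = [b[i] * As[i] for i in range(n)]          # b o (A s) = Diag(b) A s
C = [[b[i] * s[j] for j in range(n)] for i in range(n)]   # C = b s^T
y2 = [sum(C[i][j] * A[i][j] for j in range(n)) for i in range(n)]  # (C o A) e
assert all(abs(y1[i] - y2[i]) < 1e-12 for i in range(n))
print("identities hold")
```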
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inversion of an almost identity matrix Say we have a square matrix like so
1 c c c ... c
c 1 c c ... c
...
c c c c ... 1
What would be the inverse of this matrix? Calling an inv function is expensive, especially for a very large matrix. I am almost certain there's a simple formula to quickly find this inversion since the inversion also has a very similar form of
x y y y ... y
y x y y ... y
...
y y y y ... x
Though I am not sure what the exact formula is, I appreciate any help.
| A few examples with WA seem to indicate that the inverse of that matrix has the same form, except that the diagonal element is $-((n-2)c+1)$ and we have to divide by the determinant, which seems to be $(n-1)c^2-(n-2)c-1=(c-1)((n-1)c+1)$.
Here is the inverse for $n=5$:
This is probably easy to prove directly or using that the original matrix is a circulant matrix.
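As a quick check of the conjectured formula, here is a small Python sketch in exact rational arithmetic (the values $n=5$, $c=1/3$ are arbitrary test choices, not from the question):

```python
from fractions import Fraction

def claimed_inverse(n, c):
    # same shape as the original matrix: diagonal entry -((n-2)c + 1),
    # off-diagonal entry c, everything divided by the determinant (c-1)((n-1)c+1)
    det = (c - 1) * ((n - 1) * c + 1)
    diag = -((n - 2) * c + 1) / det
    off = c / det
    return [[diag if i == j else off for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, c = 5, Fraction(1, 3)  # arbitrary test values
M = [[Fraction(1) if i == j else c for j in range(n)] for i in range(n)]
identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
product = matmul(M, claimed_inverse(n, c))
```

With exact `Fraction` arithmetic, `product` comes out as the identity matrix, confirming the formula for this instance.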
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
x^2+y^2=2, xy=1, how to find x and y I have problems when doing these equations when I don't know any variable's value.
Can someone please explain how to do this and possibly give some tips when it comes to solving these problems?
Well, I know x = 1 and y = 1, but what about this one:
a-b=3, a:b=3:2, find a^2-b^2
(I am not quite sure whether a:b=3:2 means a/b = 3/2, or whether it was meant some other way)
| Since $xy=1$, the first equation becomes $x^2+y^2=2xy$, which gives us $x^2-2xy+y^2=0\Leftrightarrow (x-y)^2=0$
This means it has to be $x=y$. From the condition $xy=1$ we deduce $x^2=1\Leftrightarrow x=y=\pm 1$, which gives the only two solutions $(1,1), (-1,-1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Expectation, Variance and Moment estimator of Beta Distribution I'm given a beta distributed random variable: $X \sim \text{Beta}_{(\theta, 1)} =: \mathbb{P}_\theta$, where $\theta \geq q$, and
$$\mathbb{f}_\theta(x) = \theta \cdot x^{\theta-1} \cdot \mathbb{1}_{[0,1]}(x)$$
I was asked to compute the expectation and variance of $X$ and came up with the following solution:
$$\mathbb{E}X = \int_\mathbb{R} x \cdot \mathbb{f}_\theta(x) \, \text{dx} = \theta \int_0^{1} x^{\theta} \, \text{dx} = \frac{\theta}{\theta + 1}$$
$$\mathbb{E}X^2 = \int_\mathbb{R} x^2 \cdot \mathbb{f}_\theta(x) \, \text{dx} = \theta \int_0^{1} x^{\theta+1} \, \text{dx} = \frac{\theta}{\theta + 2}$$
$$\text{Var}X = \mathbb{E}X^2 - (\mathbb{E}X)^2 = \frac{\theta}{(\theta+1)^2(\theta+2)}$$
Are these computations correct or did I end up making a mistake? Now let $X_1, \dots, X_n \sim \text{Beta}_{(\theta,1)}$ iid. How can I justify that the moment estimator of $\theta$ is given by $\hat\theta_n = \frac{\bar X_n}{1 - \bar X_n}$? Is this estimator consistent?
I'd say that the estimator is consistent. The Expectation and Variance of $X_1$ is finite. Now after the strong law of large numbers we get
$$\lim_{n \rightarrow \infty} \hat\theta_n = \frac{\mathbb{E}X_1}{1 - \mathbb{E}X_1} = \theta$$
Therefore the estimator should be consistent, but how can I show that the moment estimator of $\theta$ is given by $\hat\theta_n$? I'm familiar with the method of moments, but don't understand how to apply it in this case.
| The idea behind the Method of Moments estimator is the following.
Suppose we have $\underline{X}$ $=$ $(X_{1},...,X_{n})$ iid observations, distributed according to $f(\cdot \mid \boldsymbol{\theta})$, $\boldsymbol{\theta}$ $\in$ $\Theta$ $\subseteq$ $\mathbb{R}^{d}$.
Define:
$$\mu_{k} = E(X_{i}^{k}) = \mu_{k}(\theta_{1},...,\theta_{d})$$
the $k$-th moment of $X_{i}$ and
$$m_{k} = \frac{1}{n}\sum_{i = 1}^{n}X_{i}^{k}$$
the $k$-th sample moment. Then we estimate the vector of parameters $\boldsymbol{\theta}$ = $(\theta_{1},...,\theta_{d})$ solving the following system of equations:
\begin{cases} m_{1} = \mu_{1}(\theta_{1},...,\theta_{d})\\
\;\;\vdots\\
m_{d} = \mu_{d}(\theta_{1},...,\theta_{d})\end{cases}
leading to $(\hat{\theta_{1}},...,\hat{\theta_{d}})$ $=$ $\boldsymbol{\hat{\theta}}$, our method of moments estimator.
Now, consider a generic beta distribution:
$$f(x_{i} \mid \theta, \alpha) = \frac{\Gamma(\theta + \alpha)}{\Gamma(\theta)\Gamma(\alpha)}x_{i}^{\theta - 1}(1 - x_{i})^{\alpha - 1}I(0 < x_{i} < 1)$$
In this case, $\boldsymbol{\theta}$ $=$ $(\theta, \alpha)$, hence we need to consider the first two moments.
The first moment of $X_{i}$ is:
$$E(X_{i}) = \frac{\theta}{\theta + \alpha}$$
while the second moment can be proved to be:
$$E(X_{i}^{2}) = \frac{\theta(\theta + 1)}{(\theta + \alpha)(\theta + \alpha + 1)}$$
Now, at this point, for the first moment in the sample, we have:
$$m_{1} = \frac{1}{n}\sum_{i = 1}^{n}X_{i} = \overline{X_{n}}$$
while for the second moment in the sample we exploit the sample variance:
$$s^{2} = m_{2} - m_{1}^{2} \Rightarrow m_{2} = s^{2} + m_{1}^{2}$$
and, at this point, we apply the system of equations described before to this case, to be then solved for both $\theta$ and $\alpha$:
\begin{cases} m_{1} = \overline{X_{n}} = E(X_{i}) = \frac{\theta}{\theta + \alpha}\\
m_{2} = s^{2} + m_{1}^{2} = E(X_{i}^{2}) = \frac{\theta(\theta + 1)}{(\theta + \alpha)(\theta + \alpha + 1)}\end{cases}
which would yield the two Method of Moments estimators $\hat{\theta}$ and $\hat{\alpha}$, and this would be in the general case with both parameters of the beta unknown. In your case, the problem is simplified being $\alpha$ $=$ $1$ known, so that we have to estimate only $\theta$ using the first sample moment, that is we have only the first equation to solve with $\alpha$ $=$ $1$:
$$\overline{X_{n}} = \frac{\theta}{\theta + 1} \Rightarrow \hat{\theta} = \frac{\overline{X_{n}}}{1 - \overline{X_{n}}}$$
Hope it clarifies.
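As a sanity check of the resulting estimator (a Python sketch, not part of the derivation; the "true" $\theta=3$ and the sample size are made up): since $F(x)=x^\theta$ on $[0,1]$, one can sample by inverse CDF, $X=U^{1/\theta}$, and see that $\hat{\theta}$ recovers $\theta$.

```python
import random

def sample_beta_theta_1(theta, n, rng):
    # if X ~ Beta(theta, 1) then F(x) = x**theta on [0, 1],
    # so inverse-CDF sampling gives X = U**(1/theta) for U uniform on (0, 1)
    return [rng.random() ** (1.0 / theta) for _ in range(n)]

def theta_mom(xs):
    # the method-of-moments estimator: xbar / (1 - xbar)
    xbar = sum(xs) / len(xs)
    return xbar / (1.0 - xbar)

rng = random.Random(0)
theta_true = 3.0  # made-up "true" parameter
estimate = theta_mom(sample_beta_theta_1(theta_true, 200_000, rng))
```

With 200 000 samples the estimate lands very close to 3, which is also consistent with the estimator being consistent.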
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is correct approach for test of divergence/convergence for $\sum_{n=1}^{\infty} \frac{\sqrt{n^3+n+1}}{n^4}$ What is the correct approach for determining if the following series is convergent or divergent?
$$\sum_{n=1}^{\infty} \frac{\sqrt{n^3+n+1}}{n^4}$$
My thought process was to use the limit comparison test where:
$$\sum_{n=1}^{\infty} \frac{\sqrt{n^3+n+1}}{n^4} \leq \frac{1}{n^4}$$
According to the $p$-series test this series converges. Is this correct?
| As this is a series with positive terms, you may use equivalents: a polynomial function is asymptotically equivalent to its leading term, so
$$\frac{\sqrt{n^3+n+1}}{n^4}\sim_\infty\frac{\sqrt{n^3}}{n^4}=\frac 1{n^{5/2}},$$
which is a $p$-series ($p=5/2$) converging to $\zeta(5/2)$.
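A quick numerical illustration of the equivalence (a Python sketch; the choice $n=10^6$ is arbitrary): the ratio of the general term to $n^{-5/2}$ is already essentially $1$ for large $n$.

```python
def a_term(n):
    # the general term of the series
    return (n**3 + n + 1) ** 0.5 / n**4

def b_term(n):
    # the comparison term n^(-5/2)
    return n ** -2.5

ratio = a_term(10**6) / b_term(10**6)
```

The ratio equals $\sqrt{1 + 1/n^2 + 1/n^3}$, which tends to $1$, exactly as the limit comparison requires.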
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
How to show a function is not invertible locally? I have a silly question:
If a smooth function $f$ has a invertible derivative $Df(x_0)$ at a point $x_0$, then $f$ has a smooth inverse locally. This is what I learned in Inverse function Theorem. If we have a function $g$ such that $Dg(x_0)$ is not invertible, can we say that $g$ is not invertible locally?(Not necessarily smooth inverse)
Or in another words, what is the sufficient conditions to say a function is not invertible locally?
| A sufficient condition for $f$ not to have a local inverse at $x_0$ is that $x_0$ is a local extremum. For any $\delta$ small enough, $f(x_0+\delta), f(x_0 - \delta) \leq f(x_0)$, and hence we can order them as
$$f(x_0 \pm \delta) \leq f(x_0 \mp \delta) \leq f(x_0)$$
Hence, using the intermediate value theorem, we can show that $f$ is not invertible on $[x_0-\delta, x_0 + \delta]$. Since $\delta$ can be taken to be arbitrarily small, this shows $f$ cannot be locally invertible.
Using the second derivative test, we can state this condition in terms of derivatives: if $f'(x_0) = 0$ and $f''(x_0) \neq 0$, then $f$ fails to be locally invertible at $x_0$. However, if $f''(x_0) = 0$, the second derivative test fails, and $f$ may or may not be locally invertible (as the example $f(x) = x^3$ given in the comments shows).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Area of a rectangle inside a triangle with given coordinates
Given a triangle with vertices at points $(0, -a), (0, a), (b, 0)$, where $a > 0$, find the maximal area and the dimensions (base and height) of a rectangle that can be contained within the triangle.
I tried to find a function, differentiate it, and find the maximum. I think I kind of did it right, but since I'm not sure I want to ask for some advice.
I set up $x$ as the length of the base and $y$ as the height, but since $y$ is being divided by the $x$ axis in two parts, the function of the area is:
$$\mbox{Area}=2xy$$
and to find $y$ with the triangle, I saw that we can do a little triangle similar to the bigger triangle on the positive side, so
$$\frac{a}{b} = \frac{y}{b-x}$$
then
$$y=a-\frac{ax}{b}$$
plugging that into the area equation gives
$$\mbox{Area}=2x\left(a-\frac{ax}{b}\right)$$
so the derivative is
$$A'=2a-\frac{4ax}{b}$$
solving for $A'=0$ gives $x=b/2$, so finding the dimensions and the area won't be hard once you get the proper $x$, but I'm kinda doubtful because I didn't set any constraint for the base $x$.
| Let $A(0,a),$ $B(b,0),$ $C(0,-a)$ and $KLMN$ be our rectangle, $KL=x$, where $K\in BC$ and $N\in AB$.
Thus, since $\Delta ABC\sim\Delta NBK$, we obtain:
$$\frac{NK}{2a}=\frac{b-x}{b}$$ or
$$NK=\frac{2a(b-x)}{b}.$$
Id est, by AM-GM
$$S_{KLMN}=\frac{2a(b-x)x}{b}\leq\frac{2a\left(\frac{b-x+x}{2}\right)^2}{b}=\frac{ab}{2}.$$
The equality occurs for $b-x=x,$ which says that we got a maximal value.
Now, we see that in the optimal case $KL=\frac{b}{2}$ and $NK=a.$
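A quick numerical confirmation of the AM-GM result (a Python sketch; the triangle dimensions $a=3$, $b=4$ are made up): maximizing $S(x)=\frac{2a(b-x)x}{b}$ over a fine grid peaks at $x=b/2$ with area $ab/2$.

```python
def area(x, a, b):
    # area of the inscribed rectangle with width x and height 2a(b - x)/b
    return 2 * a * (b - x) * x / b

a0, b0 = 3.0, 4.0  # made-up triangle dimensions
grid = [b0 * i / 10**5 for i in range(10**5 + 1)]
best_x = max(grid, key=lambda x: area(x, a0, b0))
best_area = area(best_x, a0, b0)
```

For these values the maximizer is $x = b/2 = 2$ and the maximal area is $ab/2 = 6$, matching the formula.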
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3181816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is $\frac{\ln k}{\pi}$ irrational or rational? Here $k$ is a positive integer. I ask the question because I want to prove that $\{\frac{\ln n}{2\pi}\}$ (fractional part) is dense in $[0,1)$.
If we take $n=2^{k}$, it becomes $\{\frac{\ln 2}{2\pi}k\}$, so we need to prove $\frac{\ln 2}{\pi}$ is irrational. That is difficult for me. But we can also take $n=3^{k}$, so at least one of $\frac{\ln 2}{\pi}$ and $\frac{\ln 3}{\pi}$ must be irrational: if not, $\frac{\ln 3}{\ln 2}$ is rational. This means $3^{p}=2^{q}$ for some positive integers $p,q$ and causes a contradiction.
So is there a direct way to show $\frac{\ln k}{\pi}$ is irrational? Any reference will be appreciated.
| You don't know which of $\frac{\ln 2}{\pi}$ or $\frac{\ln 3}{\pi}$ are irrational. But you know that at least one of them is. That's enough. You are allowed to pick "the irrational one", even though you don't know which one it is, as long as you know it exists.
(Small note: If it happens that they are both irrational, which is the most likely result, then "the irrational one" doesn't make sense. So you have to phrase yourself in a way that makes sense no matter which ones are actually irrational.)
Alternatively, and if you want to be a bit more explicit, you don't actually need irrationality for what you want to prove. You can show that for any $N$, on the set $[e^{2\pi N}, e^{2\pi(N+1)})\cap \Bbb N$, the function $f$ given by
$$
f(n) = \left\{\frac{\ln n}{2\pi}\right\}
$$
is increasing, and the difference $f(n)-f(n-1) = \frac{\ln n - \ln(n-1)}{2\pi}$ between consecutive values becomes small as $N$ grows large. This means that picking $N$ large enough, you can get the values of $f$ as close together as you want. So the collection of all values of $f$, across all natural numbers $N$, must be dense.
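A small numerical illustration of this explicit argument (a Python sketch; the window index $N=2$ is an arbitrary choice): on $[e^{2\pi N}, e^{2\pi(N+1)})$, $f$ increases and the gap between consecutive values, $\frac{\ln n - \ln(n-1)}{2\pi}$, is already tiny.

```python
import math

def f(n):
    # fractional part of ln(n) / (2*pi)
    return (math.log(n) / (2 * math.pi)) % 1.0

N = 2                                         # arbitrary choice of the window
lo = math.ceil(math.e ** (2 * math.pi * N))   # first integer in [e^{2*pi*N}, e^{2*pi*(N+1)})
gap = (math.log(lo + 1) - math.log(lo)) / (2 * math.pi)
```

Already for $N=2$ (so $n \approx 2.9\times10^5$) the gap is below $10^{-5}$, and it only shrinks as $N$ grows, which is the denseness mechanism described above.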
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving condition for spanning vectors I have highlighted the bit I don't understand: what does it mean by "in the linear form"? And didn't the proof already state that 4 lines above?
| The linear form is as outlined in the last sentence of the first paragraph, namely the linear function of the entries of b: $l(b) = \sum_{i=1}^n \alpha_i b_i$ corresponding to a zero row.
The statement which is four lines above what you highlighted states that the coefficient of $b_i$ is $1$ at the start i.e., before we start the Gaussian elimination process (by switching rows, multiplying rows by scalars etc). The highlighted statement is saying that if the original $i$-th row (after at least one step in the Gaussian elimination process) is a zero row, it must have become a zero row by addition/s of multiples of one or more above rows. So the linear form on the right of the vertical line will be ($b_i$ + linear combination of the $b_j's$), where $j \neq i$. That is, $b_i$ plus a linear combination of the other entries of b. In this expression, the coefficient of $b_i$ is $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
how $\int_a ^ b |f'(x)|\,dx$ gives the length of the arc of the contour $f$ : $(f(x) : x \in [a , b])$ I got to know that $\int_a ^ b |f'(x)|\,dx$ gives the length of any contour, where $f(x)$ is a piece-wise differentiable function from $[a,b]$ to $\mathbb R^2$. I was reading about complex integrals. Can anyone please enlighten me how this works? I understand it for functions whose domain and range are $\mathbb R$.
| What is the length of a curve?
Let $a = x_0 < x_1 < x_2 < ... < x_n = b$ be a partition of $[a,b]$.
An approximation of the length can be
$Len(f) \approx \sum_{i=0}^{n-1}|f(x_{i+1})-f(x_i)|$
You can see that in the limit, when the partition is fine enough, you get
$Len(f) = \lim_{n\to\infty}\sum_{i=0}^{n-1}|f(x_{i+1})-f(x_i)|=\lim_{n\to\infty}\sum_{i=0}^{n-1}\Delta x_i\left|\frac{f(x_{i+1})-f(x_i)}{\Delta x_i}\right| = \int_a^b|f'(x)|\,dx$
That's the general idea.
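Here is a small numerical illustration of that idea (a Python sketch), using the unit circle $f(t)=(\cos t,\sin t)$ on $[0,2\pi]$, for which $|f'(t)|=1$ and so $\int_0^{2\pi}|f'(t)|\,dt = 2\pi$:

```python
import math

def f(t):
    # the unit circle, a smooth curve [0, 2*pi] -> R^2
    return (math.cos(t), math.sin(t))

def polygonal_length(curve, a, b, n):
    # sum of chord lengths over a uniform partition of [a, b] into n pieces
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(math.dist(curve(ts[i]), curve(ts[i + 1])) for i in range(n))

approx = polygonal_length(f, 0.0, 2 * math.pi, 100_000)
```

With $10^5$ chords the polygonal length agrees with $2\pi$ to well under $10^{-6}$.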
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $ \int_X |f_n|^2d\mu<\infty $ for all $ n\in\mathbb N $
Let $ (X,\mathcal M,\mu) $ be a measure space with $ \mu(X)<\infty $. Let $ (f_n)_{n=1}^\infty $ be a sequence of functions in $ L^1(X,\mathcal M,\mu) $, and let $ f\in L^1(X,\mathcal M,\mu) $. Suppose that
$$ \lim_{n\to\infty}\int_X |f_n-f|d\mu=0 .$$
Suppose also that $$ C:=\sup\left\{\int_X|f_n|^4d\mu:n\in\mathbb N\right\}<\infty .$$
Prove that
1. $\int_X |f|^4\,d\mu\le C$.
2. $\int_X |f_n|^2\,d\mu<\infty$ for all $n\in\mathbb N$, and $\int_X|f|^2\,d\mu<\infty$.
3. $\lim\limits_{n\to\infty}\int_X|f_n-f|^2\,d\mu=0$.
My attempt:
1. Since $$\lim_{n\to\infty}\int_X |f_n-f|d\mu=0, $$ we know that $ f_n $
converges to $ f $ in measure, i.e., for every $ \epsilon>0, $
$$ \lim_{n\to\infty}\mu(\{x\in X:|f(x)-f_n(x)|\ge\epsilon\})=0. $$ Hence we can find a subsequence $ f_{n_k} $ of $ f_n $ such that $$ f_{n_k}\to f\quad a.e.\ \text{as}\ k\to\infty. $$ Now it suffices to prove that $$ \liminf_{k\to\infty}\int_X |f_{n_k}|^4d\mu\le C=\sup\left\{\int_X|f_n|^4d\mu:n\in\mathbb N\right\} $$
which is obviously true and we have applied the Fatou's lemma: $$ \int_X |f|^4d\mu=\int_X \lim_{k\to\infty}|f_{n_k}|^4d\mu\le\liminf_{k\to\infty}\int_X |f_{n_k}|^4d\mu .$$ And we are done!
2. I am stuck on this one... How to deal with $ |f_n|^2 $? I am thinking of Hölder's inequalities, but nothing helps.
| $$\int |f_n|^2 d\mu\leq\sqrt{ \int d\mu}\cdot\sqrt{\int |f_n|^4 d\mu}$$
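This is the Cauchy-Schwarz inequality on the finite measure space (with $|f_n|^2 \cdot 1$). A discrete sanity check of the same inequality (a Python sketch; the counting-measure space and random values are made up for illustration):

```python
import math
import random

rng = random.Random(1)
# a finite measure space: 1000 points, each of measure 1, so mu(X) = 1000 < infinity
f_vals = [rng.uniform(-5, 5) for _ in range(1000)]

lhs_cs = sum(v ** 2 for v in f_vals)        # the integral of |f|^2
mu_X = len(f_vals)                          # total measure of the space
rhs_cs = math.sqrt(mu_X) * math.sqrt(sum(v ** 4 for v in f_vals))  # sqrt(mu(X)) * sqrt(integral of |f|^4)
```

The bound holds for any such finite collection, exactly as in the displayed inequality.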
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Number of solutions of this equation The number of solutions of the equation $a+2b=k$ is $\left\lfloor\frac{k}{2}\right\rfloor+1,$ where $a,b\in \mathbb{N}\cup\{0\}\mbox{ and fixed }k\in\mathbb{N}.$ More generally, what is the number of solutions of the equation $a_{1}+2a_{2}+3a_{3}+\ldots+(r-1)a_{r-1}+ra_{r}=k,$ where $a_{i}\in\mathbb{N}\cup\{0\}\quad 1\leq i\leq r \mbox{ and fixed }k\in\mathbb{N}?$
| The number of solutions of $\sum_{i=1}^r ia_i=k$ in the nonnegative integers is equal to the number of integer partitions of $k$ whose parts all have size at most $r$. The numbers $a_i$ represent the number of parts of size $i$.
The number of such partitions is sometimes denoted $p_{\le r}(k)$. There is no closed form for this $p_{\le r}(k)$. It can be shown using Schur's theorem that asymptotically,
$$
p_{\le r}(k)\sim \frac{k^{r-1}}{r!(r-1)!}\qquad \text{as }{k\to\infty}.
$$
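Although there is no closed form, $p_{\le r}(k)$ is easy to compute with a standard coin-change style dynamic program (a Python sketch; this is an illustration, not part of the original answer):

```python
def partitions_with_parts_at_most(r, k):
    # number of solutions of a_1 + 2*a_2 + ... + r*a_r = k in nonnegative
    # integers, i.e. partitions of k into parts of size at most r
    ways = [1] + [0] * k
    for part in range(1, r + 1):
        for total in range(part, k + 1):
            ways[total] += ways[total - part]
    return ways[k]
```

For $r=2$ this reproduces the question's count $\lfloor k/2\rfloor + 1$, and for $r \ge k$ it gives the ordinary partition number, e.g. $p(5)=7$.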
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What is a good way to prove that the function $\Bbb Z_n \rightarrow \Bbb Z_u \times\Bbb Z_v$ is well defined? Given a function of the type $\Bbb Z_n \rightarrow \Bbb Z_u \times\Bbb Z_v$ ($[x]_n\rightarrow([x]_u, [x]_v)$), where of course $n=u\cdot v$, how can I prove that it is a well-defined function?
| For $f:\mathbb{Z}_n\to\mathbb{Z}_u\times\mathbb{Z}_v$, $[x]_n\mapsto ([x]_u, [x]_v)$
It might go like this:
To prove that this function is well-defined, we have to check if it really does not matter which representative of the equivalence class we choose.
So let $[x]_n=[y]_n$. Then $x=nk+r$ and $y=nl+r$ with $0\leq r<n$.
Now $f([x]_n-[y]_n)=f([x-y]_n)=([x-y]_u, [x-y]_v)\stackrel{x-y=n(k-l)}{=}([n(k-l)]_u, [n(k-l)]_v)\stackrel{n=uv}{=}([0]_u,[0]_v),$ so $([x]_u,[x]_v)=([y]_u,[y]_v)$, i.e. $f([x]_n)=f([y]_n)$ and $f$ is well defined.
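A quick computational check of well-definedness (a Python sketch; the moduli $u=4$, $v=6$ are arbitrary — the key point is only that $u$ and $v$ both divide $n=uv$):

```python
def image(x, u, v):
    # the map [x]_n -> ([x]_u, [x]_v), computed on a representative x
    return (x % u, x % v)

u, v = 4, 6   # arbitrary moduli
n = u * v
# two representatives of the same class [x]_n must have the same image
well_defined = all(image(x, u, v) == image(x + 3 * n, u, v) for x in range(500))
```

Every representative $x + mn$ of the class $[x]_n$ maps to the same pair, because $n$ is a multiple of both $u$ and $v$.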
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $X_n \overset{p}{\rightarrow} 0$ as $n \rightarrow \infty$
Let $X_1,X_2 ,\ldots$ be random variables defined by the relations
$P(X_n = 0) = 1−\frac{1}{n}$, $P(X_n = 1) = \frac{1}{2n}$ and $P(X_n = −1) = \frac{1}{2n}$ , $n\ge 1$
Show that:
$X_n \overset{p}{\rightarrow} 0 \quad \text{ as } \quad n \rightarrow \infty$,
$X_n \overset{r}{\rightarrow} 0\quad \text{ as }\quad n \rightarrow \infty$ for any $r>0$
For the first one I did $$\lim_{n \to \infty}P(|X_n| \ge \epsilon)=\lim_{n \to \infty}\frac{1}{2n}=0$$ I used $\frac{1}{2n}$ since $X_n$ can't be greater than $1$.
But for the second one, I'm kind of stuck but I tried:
$$\lim_{n \to \infty}E(|X_n-0|^r)=\lim_{n \to \infty}E(X_n^r)=$$
| The random variable $\left\lvert X_n\right\rvert$ takes the value one with probability $1/n$ and $0$ with probability $1-1/n$ hence it is also the case for the random variable $\left\lvert X_n\right\rvert^r$. We deduce that $\mathbb E\left\lvert X_n\right\rvert^r=1/n$.
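This expectation can be checked directly from the pmf (a Python sketch, just spelling out the computation):

```python
def moment_r(n, r):
    # E|X_n|^r straight from the pmf: |X_n| equals 1 with probability
    # 1/(2n) + 1/(2n) = 1/n and equals 0 otherwise
    return (1 / (2 * n)) * 1 ** r + (1 / (2 * n)) * 1 ** r + (1 - 1 / n) * 0 ** r
```

So $\mathbb E\left\lvert X_n\right\rvert^r = 1/n \to 0$ for every fixed $r>0$, which gives convergence in $r$-th mean.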
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3182868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Maximum size of a subcritical birth-death process Beginning with a population of $n_0$ individuals, let each individual have a probability $p$ to survive until it replicates into two independent and identical individuals, where $p<\frac12$.
It follows that the population goes extinct in the long-run with probability 1, and the expected number of replications before extinction is $n_0p/(1-2p)$.
The population must reach some maximum size $N$ before going extinct, where $N\geq1$. What is the expected value of this maximum size $N$?
| I have written this $\texttt{R}$ code to compute a point estimate for the mean and variance of $N$ given fixed values for $p$ and $n_0$:
rm(list=ls())
N <- 10000               # number of simulated populations
maxpop <- numeric(N)     # maximum size reached in each simulation
p <- 0.3
n_0 <- 2
for(n in 1:N) {
  pop <- n_0
  maxpop[n] <- n_0
  while(pop > 0) {
    if(runif(1) > p) {   # death, with probability 1 - p
      pop <- pop - 1
    }
    else {               # replication, with probability p
      pop <- pop + 1
      maxpop[n] <- max(maxpop[n], pop)
    }
  }
}
print(mean(maxpop))
print(var(maxpop))
Perhaps it can be useful.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
CW complexes are the cofibrant objects in the Quillen model structure on Top? If $J$ is a class of maps in a category, the $J$-cellular maps are by definition transfinite compositions of pushouts of coproducts of maps in $J$.
Now if $J$ denotes the family of inclusions $S^{n-1}\hookrightarrow D^n$ for all $n$, then CW complexes are elements of Cell($J$).
However, not every map of Cell($J$) is a relative CW complex since for the latter we are only allowed to use the $n$-th inclusion $S^{n-1}\hookrightarrow D^n$ at the $n$-th step of a transfinite composition (we can only glue $n$-cells at step $n$).
The retracts of maps in Cell($J$) are exactly the cofibrations in a $(J, J_{ac})$-cofibrantly generated model structure. The Quillen model structure on Top (with Serre fibrations) is cofibrantly generated with generating cofibrations the family of inclusions $S^n\hookrightarrow D^n$.
Also, the CW complexes are exactly the cofibrant objects in this model structure (read on nlab https://ncatlab.org/nlab/show/CW+complex).
That must mean that CW complexes are exactly the retracts of cell complexes. This is false since there are cell complexes which are not CW complexes.
Where is the mistake? Are there more cofibrant objects in Top than just CW complexes?
| They are not all the cofibrant objects in the Quillen model structure on spaces, they are "among" the cofibrant objects- as is stated in the nlab article you linked in the question.
They are all of the cofibrant objects in the mixed model structure, see here, at least up to actual homotopy equivalence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
how to find zero solutions and stability? Show that the zero solution of $$ \large \ddot{x}+bx^2 \dot{x}+kx=0$$
is asymptotically stable if $b>0$ and unstable if $b<0$. Does this depend on the sign of $k$?
I know how to do this for a 1st order equation, but how do I find the stability of a 2nd order equation?
Can I convert it into system of 1st order equations as follows:
Let $y=x$ and $z=\dot x$. Then,
$\dot z=\ddot x=-bx^2 \dot x-kx=-by^2z-ky,$
i.e., $\dot z=-by^2z-ky, ......(1)$
and $ \dot y=z, .......(2)$
These are the two equations.
But how to find zero solutions and stability ?
Do we need to linearize this system again?
Help me
| Any constant solution obviously has $\dot x=0$, $\ddot x=0$ so that the equation $kx=0$ remains.
For $k<0$ the origin is a saddle point and hence unstable, so assume $k>0$.
For the stability consider
$$
\frac{d}{dt}\frac12(\dot x^2+x^2)=-bx^2\dot x^2
$$
which tells you in which direction the solutions cross the circles around the origin.
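To illustrate the Lyapunov argument numerically, here is a rough sketch (explicit Euler with made-up values $b=k=1$; not part of the original argument) showing the energy $\frac12(\dot x^2+x^2)$ decaying along a solution when $b>0$:

```python
def simulate(b, k, x0, v0, dt, steps):
    # explicit Euler for x'' + b*x^2*x' + k*x = 0, written as the first-order
    # system x' = v, v' = -b*x^2*v - k*x
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-b * x * x * v - k * x)
    return x, v

def energy(x, v):
    # the Lyapunov function (1/2)(v^2 + x^2)
    return 0.5 * (v * v + x * x)

x_end, v_end = simulate(b=1.0, k=1.0, x0=1.0, v0=0.0, dt=1e-3, steps=30_000)
```

Since $\frac{d}{dt}\frac12(\dot x^2+x^2)=-bx^2\dot x^2\le 0$ for $b>0$, the energy at the end of the run is well below its initial value, consistent with asymptotic stability; for $b<0$ the same quantity would grow.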
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a formalization of the link between geometry and analytical geometry? Geometry and algebra/calculus can be formalized by axioms.
Is there a global theory that combines both and establishes correspondences such as
* the equation of a straight line is $ax+by+c=0$,
* the length of a segment is $\sqrt{(x_b-x_a)^2+(y_b-y_a)^2}$,
* a rotation corresponds to an orthogonal transformation,
* the circumference of a unit circle is $2\pi$,
and so on. I mean not just in the numerical sense, but with an established correspondence between the equations and the geometric entities and measures as defined by Euclid.
As an application, can a geometric proof of the identity
$$\lim_{\theta\to0}\frac{\sin\theta}\theta=1$$ constitute an indisputable argument in terms of calculus?
| The "basics" are developed by Hilbert in The Foundations of Geometry (1899).
The book states the axioms for plane Euclidean geometry.
Hilbert defines the fundamental geometrical object: the segment.
Independently, he states the laws for real numbers.
Finally, Hilbert develops an "algebra of segments", i.e. defines the operations of sum and multiplication of segments, showing that they satisfy the previous laws.
With all this machinery in place :
To the system of segments already discussed, let us now add a second system. We will distinguish the segments of the new system from those of the former one by means of a special sign, and will call them “negative” segments in contradistinction to the “positive” segments already considered. If we introduce also the segment $O$, which is determined by a single point, and make other appropriate conventions, then all of the rules [previously] deduced for calculating with real numbers will hold equally well here for calculating with segments.
In a plane $\alpha$, we now take two straight lines cutting each other in $O$ at right angles as the fixed axes of rectangular co-ordinates, and lay off from $O$ upon these two straight lines the arbitrary segments $x$ and $y$. We lay off these segments upon the one side or upon the other side of $O$, according as they are positive or negative. At the extremities of $x$ and $y$, erect perpendiculars and determine the point $P$ of their intersection. The segments $x$ and $y$ are called the co-ordinates of $P$. Every point of the plane $\alpha$ is uniquely determined by its co-ordinates $x, y$, which may be positive, negative, or zero.
Let $l$ be a straight line in the plane $\alpha$, such that it shall pass through $O$ and also through a point $C$ having the co-ordinates $a, b$. If $x, y$ are the co-ordinates of any point on $l$, it follows at once from theorem 22 [ratio between corresponding sides of similar triangles] that
$a : b = x : y ,$
or
$bx − ay = 0 ,$
is the equation of the straight line $l$ .
See also : Gerard Venema, Foundations of Geometry (Pearson, 2011).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$L^2$ and Sobolev space In Raymond's book on Pseudodifferential Operators, page 18, he says, where $S'$ is the space of tempered distributions, we define the Sobolev space of exponent $s$ as
$u \in S'$ with $\lambda^s \hat{u} \in L^2$. This is equivalent to $\hat{u}$ is a function satisfying
$$ ||u||_s^2 =(2 \pi)^{-n} \int (1+|\xi|^2 )^s |\hat{u}(\xi)|^2 \,d \xi < \infty $$
$\lambda^s (\xi) = (1+|\xi|^2 )^\frac{s}{2}$.
My concern is if $u \in S'$, then $u:S \rightarrow \Bbb C$ is a continuous linear functional. How does this guarantee $\hat{u}$ is a "function"?
Let me make this more precise.
Let us denote $u \in S'$ by $L_u$ as it is a linear functional. So $L_u: S \rightarrow \Bbb C$ is a tempered dsitribution. We define $\hat{u}$ by
$$L_{\hat{u}} (\varphi) = L_{u}(\hat{\varphi}(-x)), \quad \forall \varphi \in S$$
So this is our $\hat{u}:=L_{\hat{u}}$. This is an element in $S'$.
So what exactly does it mean to say that $\lambda^s \hat{u} \in L^2$? I guess we embed $L^2$ into $S'$, i.e. there exists $\varphi \in L^2$ such that
$$\lambda^s \hat{u} (f) = \int f \bar{\varphi} $$
for all $f \in S$?
| It is just the definition of the $L^{2}$ norm. Just square $\lambda^{s}\hat {u}$ and integrate. I think the book defines the $L^{2}$ space by taking the reference measure to be Lebesgue measure divided by $(2\pi)^{n}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving $\int_0^{\infty}f(x)\,dx$ converges using Lagrange Let $f:[0,\infty) \to \Bbb R$ be a differentiable, positive function, i.e. $f(x) > 0$ for every $x \in [0, \infty)$. Assume there exists an $L$ with $0 < L < \infty$ such that $$\lim_{x\to \infty}\,[\ln(f(x))]'=-L.$$
Prove that $\int_0^{\infty}f(x)\,dx$ converges.
We were hinted that using Lagrange (the mean value theorem) would do the trick, but I fail to see how applying it to $g(x) =\ln(f(x))$, or using the limit definition, would help. Please help me solve this.
| Since $\lim\limits_{x\to \infty}\,[\ln(f(x))]' = \lim\limits_{x\to \infty}\, \dfrac{f'(x)}{f(x)}=-L < 0$,
$$\forall \varepsilon \in (0,L), \exists M > 0: \forall x \ge M, \frac{f'(x)}{f(x)} \le -L + \varepsilon < 0.$$
Fix any $\varepsilon \in (0,L)$. Multiply both sides by $f(x)$. (We can do so since $f(x) > 0$ for all $x \in [0,\infty)$.)
$${f'(x)} \le (-L + \varepsilon) f(x) < 0 \tag{*} \label{*}$$
From this, it's clear that $f$ is (strictly) decreasing on $[M,\infty)$.
Source: https://ltcconline.net/greenl/courses/107/Series/INTTEST.HTM
Therefore, the convergence of $\int_M^\infty f(x) dx$ is equivalent to the convergence of the infinite sum $\sum\limits_{n \ge M} f(n)$. (Since $f$ is differentiable on $[0,\infty)$, it's continuous on the closed and bounded interval $[0,M]$, so $\int_0^M f(x) dx$ is well-defined.)
To tidy things up, write $N = \min\{ n \in \Bbb{N} \mid n \ge M\}$ so that $\sum\limits_{n \ge M} f(n) = \sum\limits_{n=N}^\infty f(n)$.
Apply Grönwall's inequality (with $u(t) = f(t))$, $\beta(t) = -L + \varepsilon$ and $I = [N,\infty) \;(\subseteq [M,\infty))\;$) to see that
$$ f(t)\leq f(N)\exp {\biggl (}\int _{N}^{t}(-L + \varepsilon)\,{\mathrm {d}}s{\biggr )} = f(N) \exp((-L + \epsilon)(t - N)) \tag{#} \label{#}$$
for all $t \in [N,\infty)$.
Put $t = N+1$ in \eqref{#} to get the relation
$$f(N+1) \le f(N) e^{-(L-\epsilon)} \tag3 \label3.$$
To conclude that the series $\sum\limits_{n=N}^\infty f(n) < \infty$ by induction, we need \eqref{3} for all natural numbers $n \ge N$.
$$\forall n \ge N, f(n+1) \le f(n) e^{-(L-\epsilon)} \tag4 \label4.$$
To justify \eqref{4}, replace $I$, $N$ and $t$ with $[n,\infty)$, $n$ and $n+1$ respectively in \eqref{#}.
Now, we see that the sequence $(f(n))_{n \ge N}$ is in fact bounded by a geometric progression with common ratio $e^{-(L-\varepsilon)} \in (0,1)$.
$$0 \le \sum_{n=N}^\infty f(n) \le f(N) \sum_{n = 0}^\infty e^{-(L-\varepsilon)n} = \frac{f(N)}{1 - e^{-(L-\varepsilon)}} < \infty.$$
This shows that $\sum\limits_{n=N}^\infty f(n) < \infty$, so $\int_M^\infty f(x) dx < \infty$, and hence $\int_0^\infty f(x) dx < \infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can a sum of a finite number of exponentially growing numbers be calculated as a function of the growth rate and the number of growths? This is a question based on the exponential growth pattern of a game mechanic in World of Warcraft: The Heart of Azeroth item in the game has a level that can be increased by gathering Azerite, with each level requiring roughly 1.3 times as much Azerite as the one before. I'm trying to calculate how much Azerite I have left to gather to get from my current level (40) to the max level (50) when I want to start playing again, so effectively $$X + 1.3X + 1.3^2 X + 1.3^3 X + \ldots + 1.3^{10}X$$
I'm trying to figure out a formula that I can use so I don't have to manually add these numbers (even though deriving that formula would probably be slower than just adding the 10 numbers manually). Essentially, given a start Azerite requirement of X, a rate of increase of p and n numbers to sum, is there a simple formula to calculate this exponential sum without having to calculate and add each number? I assume there is one, but I haven't done any real math since high school so I have no idea where to start.
| You are looking at a geometric series
$$
x + px + p^2x + \ldots + xp^n = x\sum_{k=0}^n p^k = \frac{x\left(1-p^{n+1}\right)}{1-p}
$$
If $|p|<1$, $n$ is large and you are happy with an approximation, $p^{n+1}$ becomes very small, so your sum is approximately $$\frac{x}{1-p}.$$
For your example this does not approximate well since $p = 1.3 > 1$...
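Concretely, for the example in the question (a Python sketch; the starting requirement $X=100$ is a made-up placeholder), the closed form agrees with the term-by-term sum:

```python
# 11 terms, powers 0..10, as in the question's sum X + 1.3*X + ... + 1.3^10*X
X, p, n = 100.0, 1.3, 10   # X is a made-up starting cost

brute = sum(X * p**k for k in range(n + 1))
closed = X * (1 - p**(n + 1)) / (1 - p)
```

Plugging in your actual Azerite requirement for the first level in place of `X` gives the total remaining amount in one step.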
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3183834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $ab+bc+ca \geq 3k^2-1$, prove that: $a^3+b^3+c^3-3abc \geq 9k$.
If $ab+bc+ca \geq 3k^2-1$, prove that: $a^3+b^3+c^3-3abc \geq 9k$.
I recently came across a question in which we had to prove the above inequality using the given condition as mentioned above. Here $a,b,c$ are distinct positive integers and $k$ is also a positive integer. I absolutely have got no idea how to solve it or efficiently use the condition 'positive integers'. Furthermore, although the expression $a^3+b^3+c^3-3abc$ seems a bit familiar but I'm not able to understand how to make the condition useful.
Please help.
| Use that $$a^3+b^3+c^3-3abc=\left( a+b+c \right) \left( {a}^{2}-ab-ac+{b}^{2}-bc+{c}^{2}
\right)
$$
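The factorisation in the hint is easy to confirm numerically with random integers (a quick sketch):

```python
import random

for _ in range(100):
    a, b, c = (random.randint(1, 50) for _ in range(3))
    lhs = a**3 + b**3 + c**3 - 3*a*b*c
    rhs = (a + b + c) * (a*a + b*b + c*c - a*b - b*c - c*a)
    assert lhs == rhs   # a^3+b^3+c^3-3abc = (a+b+c)(a^2+b^2+c^2-ab-bc-ca)
```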
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to solve $x^{x^x}=(x^x)^x$? How can we solve the equation :
$x^{x^x}=(x^x)^x$
with $x \in \mathbb{R}_+^*$
Thanks for helping me :)
| We can check when the exponents are equal.
It is $x^{x^x}=(x^x)^x\Leftrightarrow x^{(x^x)}=x^{x^2}\Leftrightarrow x^x=x^2$
Now $x^x-x^2=0\Leftrightarrow x^2(x^{x-2}-1)=0$.
So $x^2=0$ or $x^{x-2}=1$.
Since $x\neq 0$ we have $x^{x-2}=1$ left, which holds if $x-2=0$. So $x=2$
And $2^{2^2}=2^{2\cdot 2}$
Edit: And $x=1$ is an obvious solution...
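A quick numerical check of both solutions (a sketch; it simply evaluates each side of the original equation):

```python
def lhs(x):
    return x ** (x ** x)   # x^(x^x)

def rhs(x):
    return (x ** x) ** x   # (x^x)^x = x^(x^2)

for x in (1.0, 2.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12   # both sides agree at the solutions

# a non-solution for comparison: 3^27 != 3^9
assert lhs(3.0) != rhs(3.0)
```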
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Prove that the rotation of a sum is equal to the product of rotations So the question starts off:
Prove $$\ e^{t_1+t_2} = e^{t_1}e^{t_2}$$
E(t) is a unique solution to $\dot{E} = E, E(0) = 1$.
Let $E_1(t) = E(t_1 + t)$ and $E_2(t) = E(t_1)E(t)$.
$$\dot E_1 (t) = \dot E (t_1 + t) = E(t_1+t) = E_1 (t); \quad E_1(0) = E(t_1)$$
$$\dot E_2 (t) = E(t_1)\dot E (t) = E(t_1)E(t) = E_2(t); E_2(0) = E(t_1)$$
Therefore(by uniqueness and existence theorem I think):
$$E_1(t) = E_2(t)$$
Which implies:
$$E(t_1+t) = E_1(t) = E_2(t) = E(t_1)E(t)$$
Then by setting $t = t_2$ we get our desired result.
Then the question asks to prove:
$$R(t_1+t_2) = R(t_1)R(t_2)$$
where $$R(t) = \begin{bmatrix}\cos(t)&-\sin(t)\\\sin(t)&\cos(t)\end{bmatrix} $$
Given: $R_1(t) = R(t+t_2)$ and $R_2(t) = R(t)R(t_2)$
Prove by a similar argument to the one above.
Any suggestions on how to approach this?
| If you want to repeat the argument, you can look at the equation $R''=-R$.
Another way is to notice that
$$
R(t)=e^{it}\begin{bmatrix}1/2&i/2\\ -i/2&1/2\end{bmatrix} + e^{-it}\begin{bmatrix} 1/2&-i/2\\ i/2&1/2\end{bmatrix}.
$$
Then
\begin{align}
R(s)R(t)&=e^{it}e^{is}\begin{bmatrix}1/2&i/2\\ -i/2&1/2\end{bmatrix}^2+e^{-it}e^{-is}\begin{bmatrix}1/2&-i/2\\ i/2&1/2\end{bmatrix}^2+2\operatorname{Re}e^{it}e^{-is}\begin{bmatrix}1/2&i/2\\ -i/2&1/2\end{bmatrix}\begin{bmatrix}1/2&-i/2\\ i/2&1/2\end{bmatrix}\\ \ \\
&=e^{i(t+s)}\begin{bmatrix}1/2&i/2\\ -i/2&1/2\end{bmatrix}+e^{-i(t+s)}\begin{bmatrix}1/2&-i/2\\ i/2&1/2\end{bmatrix}+2\operatorname{Re}e^{i(t-s)}\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\\ \ \\
&=\begin{bmatrix}
\operatorname{Re} e^{i(t+s)}&-\operatorname{Im} e^{i(t+s)}\\ \operatorname{Im} e^{i(t+s)}&\operatorname{Re} e^{i(t+s)}
\end{bmatrix}\\ \ \\
&=\begin{bmatrix} \cos(t+s) &-\sin(t+s)\\ \sin(t+s)&\cos(t+s)\end{bmatrix}\\ \ \\
&=R(t+s)
\end{align}
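Independently of either proof, the identity is easy to check numerically; here is a small sketch using plain $2\times2$ matrix multiplication:

```python
import math

def R(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s, t = 0.7, 1.9
P, Q = matmul(R(s), R(t)), R(s + t)   # R(s)R(t) vs R(s+t)
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))
```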
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
arranging 2 blue balls, 2 red balls and 1 green ball How many ways are there to arrange 2 blue balls, 2 red balls and 1 green ball?
My Answer: $$\frac{5!}{2! \times 2!}$$
If this is incorrect then please help me understand where I am wrong.
But if this is correct then please help me with this:
"A bag holds $4$ red marbles, $5$ blue marbles, and $2$ green marbles. If $5$ marbles are selected one after another without replacement, what is the probability of drawing $2$ red marbles, $2$ blue marbles, and $1$ green marble?"
Correct Answer:
$$\binom{5}{2}\binom{3}{2}\binom{1}{1}\left(\frac{4}{11}\right)\left(\frac{3}{10}\right)\left(\frac{5}{9}\right)\left(\frac{4}{8}\right)\left(\frac{2}{7}\right) = \frac{20}{77}$$
why does it have this?
$$\binom{5}{2}\binom{3}{2}\binom{1}{1}$$
instead of this:
$$\frac{5!}{2! \times 2!}$$
for more info, you can refer to my pervious question here:
Basic combinations logic doubt in probability
| Well,
$$
\color{blue}{\binom 52 \binom 32 \binom 11 = \frac{5!}{2! \times 2!} = 30}
$$
So it seems that what you are doing is also correct. However, your thought processes when computing the answer were different, so I will just highlight that.
*
*What was the person who wrote the "correct answer" thinking? He was thinking: let me first decide when the red balls were drawn, followed by when the blue balls were drawn, followed by when the green ball was drawn. So what he did was this: the first $\binom 52$ represents the two chosen spots in which the red balls were drawn. Then, these spots are gone, so from the remaining three spots, two were chosen for the blue balls, and then the one remaining spot for the green ball. This is why the answer is written in that order as well.
*What were you thinking? Probably : you assumed all the balls are different, then the number of orders in which they can appear is $5!$. Now, you remembered that two of them are blue and two of them are red, so for this you divided by $2!$ twice to account for that.
And your answers are the same, because both the ways of thinking about the problem are correct, and therefore will lead to the same answer!
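Both computations are easy to confirm with exact arithmetic (a sketch using Python's `fractions` and `math.comb`):

```python
from fractions import Fraction
from math import comb, factorial

orders = comb(5, 2) * comb(3, 2) * comb(1, 1)      # = 30 orderings
assert orders == factorial(5) // (factorial(2) * factorial(2))

p_one_order = (Fraction(4, 11) * Fraction(3, 10)   # two reds
               * Fraction(5, 9) * Fraction(4, 8)   # two blues
               * Fraction(2, 7))                   # one green
assert orders * p_one_order == Fraction(20, 77)
```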
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Immediate consequence of standard Schwarz-Pick lemma! How to interpret? Let $g:D(0,1)\to D(0,1)$ be analytic function and $g(0)=a\in[0,1)$;
suppose $$G(z)=\frac{z+a}{1+az},\text{for}\ z\in D(0,1).$$
we want to prove that:$$g(D(0,r))\subset G(D(0,r))\ \forall \ r\in(0,1).$$
In the origin paper, it says that this is an immediate consequence of the standard Schwarz-Pick lemma. My question is why this is right and how to interpret.
Here
$$D(0,1)=\{z \in \mathbb{C}||z|<1\}$$ $$D(0,r)=\{z \in \mathbb{C}||z|<r\}.$$
Standard Schwarz-Pick lemma: if $f:D(0,1)\to D(0,1)$ is analytic and
$\alpha \in D(0,1)$, then
$$\left|\frac{f(z)-f(\alpha)}{1-\overline{f(\alpha)}f(z)}\right|
\leq \left|\frac{z-\alpha}{1-\overline{\alpha}z}\right|, \forall \ z\in D(0,1).$$
Any hints and help will welcome!
| Let $|z|<r$. Define $H(z)=\frac {z-a} {1-az}$. Verify that $G=H^{-1}$. We have to show that $g(z)=G(\zeta)$ with $|\zeta| <r$. Define $\zeta$ as $H(g(z))$. Then $g(z)=G(\zeta)$. To show that $|\zeta| <r$ apply Schwarz Pick Lemma with $f$ changed to $g$ and $\alpha =0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing $x_n$ converges to $x$ I'm given a generic sequence $x_n$, and I know that $\lim_{n\to\infty} \frac{|x_{n+1}-x|}{|x_n-x|} < 1$ and exists. I'm trying to show that $x_n$ converges to $x$.
I tried a bunch of approaches, but the farthest I got was a proof by contradiction where I try to show that $\lim_{n\to\infty} |x_n-x|$ cannot be greater than 0, but instead must equal 0, resulting in $x_n$ converging to $x$.
Can I get some assistance?
| Hint:
It is equivalent to show that $|x_n-x|\to 0$. Now, since $\lim_{n\to\infty} \frac{|x_{n+1}-x|}{|x_n-x|} $ exists and is $<1$, there exist $k<1$ and $N_0$ such that
$$\frac{|x_{n+1}-x|}{|x_n-x|}\le k\quad\forall n \ge N_0.$$
Deduce that, for all $n\ge N_0$, one has $\;|x_n-x|\le k^{n-N_0}\,|x_{N_0}-x|$ (use induction on $n$) and conclude.
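To see the hint in action, here is a numerical sketch with the made-up sequence $x_n = 3 + 2^{-n}$ (a hypothetical example; its ratio $|x_{n+1}-x|/|x_n-x|$ is identically $1/2<1$, so the geometric bound forces $|x_n-x|\to0$):

```python
x = 3.0                      # hypothetical limit
def x_n(n):
    return x + 2.0 ** (-n)   # ratio |x_{n+1}-x| / |x_n-x| = 1/2 < 1

N0, k = 1, 0.5
for n in range(N0, 40):
    err = abs(x_n(n) - x)
    bound = k ** (n - N0) * abs(x_n(N0) - x)
    assert err <= bound + 1e-15   # the geometric bound from the hint
assert abs(x_n(50) - x) < 1e-12   # hence x_n -> x
```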
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Reference request: Laplace-Beltrami eigenfunction bases for Sobolev spaces I'm working on a smooth $(d-1)$-dimensional surface $M\subset \mathbb{R}^d$. Let $(\phi_k)_{k\in\mathbb{N}}$ be an orthonormal basis of $L^2(M)$ consisting of the eigenfunctions of the Laplace-Beltrami operator $-\triangle_{M}$, with corresponding eigenvalues $(\lambda_k)_{k\in\mathbb{N}}$.
Claim: the $(\phi_k)$ are $H^1$ orthogonal, where $H^s=W^{s,2}$ is an $L^2$ Sobolev space, and $\lVert\phi_k\rVert_{H^1}^2 = 1+\lambda_k$.
Proof: By a Green's formula on $M$ (a generalisation of the divergence theorem; note I'm assuming $\partial M=\emptyset$)
$$\int_M \nabla \phi_j \cdot \nabla \phi_k dx= -\int_M \phi_j \triangle \phi_k dx = \lambda_k \int_M \phi_j \phi_k dx= \lambda_k\delta_{jk}.$$
This generalises to $H^n$ for any positive integer $n$. I'm confident the $\phi_k$ are also $H^s$ orthogonal, with norms $\lVert \phi_k \rVert_{H^s}^2 \sim 1+\lambda_k^s$ (with the exact constants depending on how you define the norm) for any $s\in\mathbb{R}$, defining $H^s$ in some appropriate sense. Does someone have a reference for this? I've not done much differential geometry so ideally one which is fairly gentle in that regard!
| One definition of $H^s(M)$ is $$H^s(M)= \{u \in \mathcal{S}(M) : \sum_{j=1}^\infty \lambda_j^{2s} \lvert\langle u, \bar{\phi}_j \rangle \rvert^2\equiv \lVert u \rVert_{H^s(M)}^2<\infty\},$$ where $\mathcal{S}$ is the Schwartz space of distributions on $M$ (i.e. the dual of $C^\infty_c(M)$, which is just $C^\infty(M)$ if $M$ is a compact surface). The bar on $\phi_j$ is the complex conjugate, because strictly we should be using complex scalars and inner products.
For this definition of $H^s$, the given scaling of norms of the $\phi_j$ is obvious. So the real question is does the above definition coincide with other commonly used definitions?, to which the answer is yes, as shown in Lions & Magenes, Non-homogeneous boundary value problems and applications, volume I, chapter I, remark 7.6. [1]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3184941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Infinite sum of prime-counting function The function $\pi(n)$ is the number of primes less than or equal to $n$.
So, my question concerns the power series
$$\sum_{n = 2} ^ {\infty} {\frac {1}{\pi (n)} x^n } $$
What is the radius of convergence of this power series? (The sum starts at $n=2$ since $\pi(0)=\pi(1)=0$.)
| Hint: $\pi(n)\sim {n\over \log{n}}$ and $\pi(n+1)\le \pi(n)+1$
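Following the hint, $\pi(n)^{1/n}\to 1$, so by the Cauchy–Hadamard formula the radius of convergence is $1$. A brute-force numerical sketch (trial-division prime counting, so only practical for modest $n$):

```python
def prime_count(n):
    """pi(n) by trial division -- fine for small n."""
    count = 0
    for m in range(2, n + 1):
        if all(m % d for d in range(2, int(m ** 0.5) + 1)):
            count += 1
    return count

# |a_n|^(1/n) = (1/pi(n))^(1/n) -> 1, so the radius of convergence is 1
r = (1.0 / prime_count(5000)) ** (1.0 / 5000)
assert abs(r - 1.0) < 0.01
```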
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Uniqueness of solutions of diffusion equation with initial condition In his PDE, Walter A. Strauss claims that the diffusion equation on the whole real line has a unique solution, given an initial condition. However he only proves uniqueness given an initial-boundary condition for solutions on a finite interval (in section 2.3, using the maximum principle). Is this a gap or am I missing something obvious here? The passage I am referring to can be found on page 49 (section 2.4 Diffusion on the whole line) in the second edition.
| It seems that he is only claiming here that $u$ is a solution of (1), (2). He does not prove uniqueness in the book, and it should probably be understood that a uniqueness result (with some qualifier) must be supplied by external sources (As a commenter said, there is no uniqueness without qualifiers). Keep in mind that this is not meant to be a rigorous graduate text, and at times you encounter something like this in the book.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $\lim_{n\rightarrow \infty}\frac{1}{n}\int_{0}^{1}\ln(1+e^{nx})dx$ $$\lim_{n\rightarrow \infty}\frac{1}{n}\int_{0}^{1}\ln(1+e^{nx})dx$$
My try:
$$\frac{b-a}b\leq \ln b-\ln a\leq \frac{b-a}a \implies \frac{1}{1+e^{nx}}\leq \ln(1+e^{nx})-\ln e^{nx}\leq \frac1{e^{nx}}$$
Then I integrated and multiplied by $\frac{1}{n}$ and I got:
$$\frac{1}{n}\int _0^1\frac{1}{1+e^{nx}}dx\leq \frac{1}{n}\int _0^1[\ln(1+e^{nx})-\ln e^{nx}]dx\leq \frac{1}{n}\int _0^1 \frac{1}{e^{nx}}dx$$
How to continue? Is there an easier method to solve this limit?
| You may squeeze the integral as follows using
*
*$nx = \ln e^{nx} \leq \ln (1+ e^{nx})$ and
*$\ln (1+ e^{nx}) = nx + \ln \left(1 +\frac{1}{e^{nx}}\right)\stackrel{0\leq x \leq 1}{\leq} nx + \ln 2$
So, you get
$$\color{blue}{\frac{1}{2}} = \frac{1}{n}\int_0^1 nx \; dx \leq \frac{1}{n}\int_0^1 \ln (1+ e^{nx}) \; dx \leq \frac{1}{n}\int_0^1 nx \; dx + \frac{1}{n}\int_0^1 \ln 2 \; dx =\color{blue}{\frac{1}{2}} + \frac{\ln 2}{n}$$
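The squeeze can be checked numerically; here is a pure-Python sketch using the midpoint rule. The identity $\ln(1+e^{t})=t+\ln(1+e^{-t})$ for $t\geq0$ is used so the integrand can be evaluated without overflow for large $n$:

```python
import math

def avg_integral(n, steps=20000):
    """(1/n) * integral_0^1 ln(1 + e^(n x)) dx, midpoint rule."""
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps
        total += n * x + math.log1p(math.exp(-n * x))  # ln(1+e^(nx)), overflow-safe
    return total / steps / n

for n in (10, 100, 1000):
    val = avg_integral(n)
    assert val >= 0.5 - 1e-9                         # lower squeeze bound
    assert val <= 0.5 + math.log(2) / n + 1e-9       # upper squeeze bound
```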
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Gagliardo-Nirenberg inequality for fractional Sobolev spaces Wikipedia states two versions of the Gagliardo-Nirenberg inequality for nonfractional Sobolev spaces. I'm interested in generalizations to fractional (Slobodeckij) Sobolev spaces.
Such a generalization of the version for functions on $\mathbb{R}^n$ can be found e.g. here.
Unfortunately, I don't find such a generalization of the version for functions defined on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$. I'm pretty sure that the inequality still holds if one replaces the terms $\|D^j u\|_{L^p}$ and $\|D^m u\|_{L^r}$ by the corresponding Gagliardo semi-norms.
Does anyone know an article/book where such a generalization can be found?
| It is straightforward. You have $u$ defined on $\Omega$ and then the extension $\tilde u$ defined on $\mathbb{R}^n$. Now the Gagliardo semi-norm of $u$ can be estimated trivially by the Gagliardo semi-norm of $\tilde u$ (just because it's an extension). Since $\tilde u$ is defined on $\mathbb{R}^n$ you can now use the Gagliardo-Nirenberg inequality for such functions. Finally, use Theorem 5.4 of Hitchhiker's guide to bound the appearing norms of $\tilde u$ by norms of $u$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the inverse function theorem provide a path to interpretation of more general infinitesimal quotients? Let us first bring up the inverse function theorem: if $y=f(x)$, $f$ is continuously differentiable near some point $x=a$, and $f'(a)\neq 0$, then there exists in some neighborhood of $b = f(a)$ an inverse function $f^{-1}$ which is differentiable at $b$ and satisfies:
$$(f^{-1})'(b) = \frac{1}{f'(a)}$$
Could this bring a fruitful approach to define the reciprocal infinitesimal quotient below:
$$\frac{\partial f^{-1}(y)}{\partial y}(b) = \frac{\partial x}{\partial f(x)}(a)$$
Would it lead to anything meaningful, or will we run into trouble if we would try to do so?
For context : My mind wanders to some example $$\frac{\partial \sin(t)}{\partial \cos(t)}$$ which in polar coordinates $x = \cos(t), y = \sin(t)$ and trigonometric identity could be intuitively interpreted as the differential of the function describing the upper part of unit circle:
$$\frac{\partial \sqrt{1-x^2}}{\partial x}$$
Some far fetched but interesting try would be to extend to find meaning and interpretation to things like $$\frac{\partial g(t)}{\partial h(t)}$$
| If we stick with $\mathrm d$ rather than $\partial$, then there's little problem with handling single-variable in this sort of way. We define $\mathrm d (f(t))=f'(t)\mathrm dt$ (and similarly for any other variable), and then the algebra works out nicely for first-order derivatives.
For example:
$$\dfrac{\mathrm{d}\sin t}{\mathrm{d}\cos t}=\dfrac{\cos t\,\mathrm{d}t}{-\sin t\,\mathrm{d}t}=-\cot t\text{.}$$
And we also have, if $t\in[0,\pi]$ and we let $x=\cos t$:
$$\dfrac{\mathrm{d}\sin t}{\mathrm{d}\cos t}=\dfrac{\mathrm{d}\sqrt{1-x^{2}}}{\mathrm{d}x}=\dfrac{-2x\,\mathrm dx}{2\sqrt{1-x^{2}}\,\mathrm dx}=\dfrac{-x}{\sqrt{1-x^{2}}}=\dfrac{-\cos t}{\sin t}=-\cot t\text{.}$$
Now, there could be trouble with second-order derivatives, but there is also a fix for this (basically, don't write a second derivative as $\dfrac{\mathrm d^2y}{\mathrm dx\vphantom{)}^2}$ if $x$ and $y$ could depend on some other variable $t$). Details are discussed in the recently-popular-on-the-internet paper "Extending the Algebraic Manipulability of Differentials" by Bartlett and Khurshudyan. One version can be found on the arXiv at here and another is on the site for the journal "Dynamics of Continuous, Discrete and Impulsive Systems" here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A challenging system of coupled recursive sequences So I was runing through my old school drafts, and I've just come upon this challenging problem that years ago, one of my former math teacher had let for enthousiastic students to try.
Consider the real sequences $\left(a_n\right)_{n\in\mathbb{N}^*}$ and $\left(b_n\right)_{n\in\mathbb{N}^*}$, defined by :
$$\left|\begin{array}{lll}\displaystyle a_1=-1\\\displaystyle b_1=5\end{array}\right.$$
$$\text{and}$$
$$\forall n\geq2,\left|\begin{array}{lll}\displaystyle a_n=na_{n-1}+\frac{n+1}{n^2+1}b_{n-1}\\\displaystyle b_n=-\frac{n^2+2n+2}{n+2}a_{n-1}-\frac{n^4+3n^3+4n^2+2n}{n^3+2n^2+n+2}b_{n-1}\end{array}\right.$$
And the following questions :
*
*(The most difficult) Find closed form expressions for $\left(a_n\right)_{n\in\mathbb{N}^*}$ and $\left(b_n\right)_{n\in\mathbb{N}^*}$
*(A weaker result, thus easier) Find $\lim\limits_{n\to\infty}\frac{b_n}{n^2a_n}$.
I should say it upfront, there $are$ closed form expressions of $\left(a_n\right)_{n\in\mathbb{N}^*}$, $\left(b_n\right)_{n\in\mathbb{N}^*}$, and I remember my teacher confirmed that they did not involve any nasty summation or product.
I remember I was quite out of ideas at the time, and even now I don't really know how to proceed.
Now of course, coupled recursived sequences with constants coefficients such as
$$\left|\begin{array}{lll}\displaystyle a_n=\alpha a_{n-1}+\beta b_{n-1}\\\displaystyle b_n=\gamma a_{n-1}+\delta b_{n-1}\end{array}\right.$$
are swiftly resolved through some linear algebra, provided that the matrix $\begin{pmatrix}\alpha & \beta \\ \gamma & \delta\end{pmatrix}$ is diagonalizable.
But here, the coefficients are not constant. I tried to diagonalize the matrix nonetheless, but even though for any fixed $n\in\mathbb{N}$ it is still diagonalizable, the change of basis matrix (toward the eigenvectors basis) isn't constant with respect to $n$, either. So the straightforward diagonalization method is a dead end.
There might be a purely analytical proof, I don't know.
Any suggestions ?
| Hint.
Make
$$
B_n = \frac{n+2}{(n+1)^2+1}b_n
$$
NOTE
$$
\left(
\begin{array}{c}
a_n\\
B_n
\end{array}
\right) = \left(
\begin{array}{cc}
n & 1\\
-1& -n
\end{array}
\right)\left(
\begin{array}{c}
a_{n-1}\\
B_{n-1}
\end{array}
\right)
$$
Calling now
$$
M_n = \left(
\begin{array}{cc}
n & 1\\
-1& -n
\end{array}
\right)
$$
for $n$ even we have the curious behavior
$$
\prod_{k=0}^n M_k = \frac 12 n!\left(
\begin{array}{cc}
-1 & 1\\
-1& 1
\end{array}
\right)
$$
hence for $n$ even
$$
\left(
\begin{array}{c}
a_n\\
B_n
\end{array}
\right) = \frac 12 n!\left(
\begin{array}{cc}
-1 & 1\\
-1& 1
\end{array}
\right)\left(
\begin{array}{c}
a_0\\
B_0
\end{array}
\right)
$$
for $n$ odd is left to the reader.
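The claimed even-$n$ product formula is easy to verify numerically (a sketch; taking the product in the acting order $M_n M_{n-1}\cdots M_0$ is an assumption, since the displayed product does not specify the order):

```python
from math import factorial

def M(k):
    return [[k, 1], [-1, -k]]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

for n in (2, 4, 6, 8):          # even n
    P = [[1, 0], [0, 1]]
    for k in range(n + 1):
        P = matmul(M(k), P)     # left-multiply: P = M_n ... M_1 M_0
    c = factorial(n) // 2
    assert P == [[-c, c], [-c, c]]   # (1/2) n! [[-1,1],[-1,1]]
```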
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
A (nontrivial) nonlinear Poisson equation Let $U \subset \mathbb R^3$ be bounded, with smooth boundary. Does the equation
$$ -\Delta f = e^{-f} \text{ in } U, \quad f = 0 \text{ on } \partial U$$
have a solution? Is there a name for this equation? Any idea, hint, or reference is highly appreciated.
| I don't have a definitive answer, but this makes it a little more tractable. The differential operator is uniquely invertible and has a Green's function $G(x,y)$ that is positive. This allows us to reformulate our equation as $$f(x) = \int_U G(x,y)e^{-f(y)}dy = \Phi(f).$$ If you can show that the integral operator on the RHS is a contraction, then the solution exists and is unique by the fixed point theorem.
Let us use the $\infty$-norm for this analysis. Then we have
$$\|\Phi(f)-\Phi(g)\| \leq \int_U\|G(\cdot,y)\|\|e^{-f(y)}-e^{-g(y)}\|dy.$$
Now notice that any solution of this integral equation must be positive, so the exponentials are both bounded above by 1. Now we have
$$\|\Phi(f)-\Phi(g)\| \leq \int_U\|G(\cdot,y)\|dy.$$
If this value can be bounded above by 1, then there is a unique solution. However, this depends (only) on the domain. This value can also be bounded (in a different norm, so you must be more careful) by the reciprocal of the smallest eigenvalue of the Laplace operator on this domain, which (I believe) is generally inversely proportional to the volume of the region $U$. Therefore, we expect this problem to have a unique solution on small domains rather than large domains.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3185784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Lower bound given expectation and standard deviation. A random variable $X$ with integer values only has mean 3 and standard deviation 2. Under those assumptions, what is the best lower bound for $P[0\leq X \leq 6]$? By my calculations, it is $\frac{5}{9}$; however, it is not the right answer. Your thoughts please.
| You can WLOG assume symmetry about $3$ this way: Suppose $X$ is integer-valued and satisfies $E[X]=3, Var(X)=4$. Then $Y=6-X$ also satisfies those constraints, as does $$Z = \left\{ \begin{array}{ll}
X &\mbox{ with prob $1/2$} \\
Y & \mbox{ with prob $1/2$}
\end{array}
\right.$$ where the choice is decided by an independent coin flip. Then $Z$ has a symmetric probability mass function about $3$ and also satisfies:
$$P[|Z-3|>3]=P[|X-3|>3]$$
Assuming symmetry makes it easier to prove that you can restrict $X$ to the set $\{-1, ..., 7\}$ (since the cases $X\geq 8$ and $X\leq -2$ use up too much variance). Then you can get the best lower bound.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
LCM of even and odd integers Is the least common multiple of an even $2k$ and odd number $2l+1$ always the product of both numbers $2k(2l+1)$ ?
And also is the least common multiples of two odd numbers the product of both odd numbers?
Thank you.
| If we take $6$ and $15$ then their lcm is $30$ (not $90$), so your first question has answer "no".
If we take $3$ and $15$ then their lcm is $15$ (not $45$), so your second question has answer "no".
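The counterexamples, and the general relation $\operatorname{lcm}(a,b)=ab/\gcd(a,b)$ (which shows the product equals the lcm exactly when the numbers are coprime), can be checked directly:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

assert lcm(6, 15) == 30 and 6 * 15 == 90    # even/odd counterexample
assert lcm(3, 15) == 15 and 3 * 15 == 45    # odd/odd counterexample
assert lcm(4, 9) == 4 * 9                   # coprime: product IS the lcm
```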
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Sum $\sum_{k=1}^{\infty}\frac{1}{k!k^k}$ I came across this crazy sum, and I have no idea how to tackle it
$$\sum_{k=1}^{\infty}\frac{1}{k!k^k}.$$
I tried to approximate it using wolfram alpha widget,
and got something like $1.13134$.
However, I'd like to get an exact solution.
EDIT: i was thinking that maybe we could play around with the ceiling function a little bit.
Consider that:
$$\sum_{k=1}^{\infty}\frac{1}{k!k^k}.$$
is actually the area under the curve
$$f(k)=\frac{1}{\left\lceil{k^k}\right\rceil}\frac{1}{\left\lceil{k!}\right\rceil}$$
from $0$ to $\infty$. More formally, this can be expressed as:
$$I=\int_0^\infty \frac{1}{\left\lceil{k^k}\right\rceil}\frac{1}{\left\lceil{k!}\right\rceil} dk=\sum_{k=1}^{\infty}\frac{1}{k!k^k}.$$
If we multiply $I$ by:
$$L=\int_0^\infty{\left\lceil{k^k}\right\rceil}{\left\lceil{k!}\right\rceil} e^{-k}dk.$$
We get that:
$$I L=\int_0^\infty e^{-k} dk=1.$$
Hence:
$$\sum_{k=1}^{\infty}\frac{1}{k!k^k}\int_0^\infty{\left\lceil{k^k}\right\rceil}{\left\lceil{k!}\right\rceil} e^{-k}dk=1,$$
so if we could somehow evaluate $L$, then it might be possible to calculate the sum.
Unfortunately, I'm not sure if this reasoning works.
| I'm afraid I have to concur with @MariuszIwaniuk. I'll summarise what I found.
In a variant on the famous sophomore's dream calculation, note first that, since in terms of modified Bessel functions $\sum_{n\ge 0}\frac{y^n}{n!(n+1)!}=\frac{I_1(2\sqrt{y})}{\sqrt{y}}$, the substitution $u=-\ln x$ obtains $$\int_0^1\frac{I_1(2\sqrt{-x\ln x})}{\sqrt{-x\ln x}}dx=\sum_{n\ge 0}\frac{(-1)^n}{n!(n+1)!}\int_0^1x^n(\ln x)^ndx=\sum_{n\ge 0}\frac{\int_0^\infty u^ne^{-(n+1)u}du}{n!(n+1)!}=\sum_{k\ge 1}\frac{1}{k!k^k}.$$Now we just need to evaluate that integral... which seems equally impossible. I also had no luck identifying the value with the inverse symbolic calculator.
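Even without a closed form, the series converges extremely fast (super-factorially), so the numerical value quoted in the question is easy to reproduce (a sketch):

```python
from math import factorial

# partial sum of 1/(k! k^k); terms beyond k = 20 are negligible
s = sum(1.0 / (factorial(k) * k ** k) for k in range(1, 20))
assert abs(s - 1.13134) < 1e-4   # matches the value quoted in the question
```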
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Homotopy between unitary element and identity elements, Operator Theory Let $\mathcal{T}$ be the Toeplitz algebra. I.e. the $C^*$ algebra generated by the shift operator $S\in B(l^2(\Bbb N))$.
In page 6, line 8 of a proof we have a unitary element $u \in \mathcal{T} \otimes \mathcal{T}$, and it is claimed that $u$ is homotopic to the identity by a path of unitaries.
The claim seems to be quite general. Is there any reference/ similar result, which I could read to understand more about this?
| The unitaries you consider there are self-adjoint. In that particular case, you can write down an easy formula for the path. Let $u$ be a self-adjoint unitary in a C*-algebra. Define $h := (1-u)/2$. Then $e^{\pi i h} = u$ and the path $t \mapsto e^{\pi i th}$ connects $1$ to $u$.
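The formula can be illustrated concretely with a self-adjoint unitary matrix of my own choosing, say $u=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ (a sketch; $h=(1-u)/2$ is then a projection, and the matrix exponential is computed by its power series):

```python
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    """Matrix exponential of a 2x2 complex matrix via its power series."""
    result = [[1 + 0j, 0j], [0j, 1 + 0j]]
    term = [[1 + 0j, 0j], [0j, 1 + 0j]]
    for n in range(1, terms):
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        term = matmul(term, A)            # term = A^n / n!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

u = [[0j, 1 + 0j], [1 + 0j, 0j]]                      # self-adjoint unitary
h = [[0.5 + 0j, -0.5 + 0j], [-0.5 + 0j, 0.5 + 0j]]    # h = (1 - u)/2
e = expm([[1j * cmath.pi * h[i][j] for j in range(2)] for i in range(2)])
# e^(i pi h) recovers u, the endpoint of the path t -> e^(i pi t h)
assert all(abs(e[i][j] - u[i][j]) < 1e-9 for i in range(2) for j in range(2))
```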
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does one prove $C([0,1])\otimes A \cong C([0,1],A)$? $C([0,1])$ is the $C^*$ algebra of continuous complex-valued functions on $[0,1]$. $A$ is a $C^*$ algebra. Hence $C([0,1],A)$, the continuous functions from $[0,1]$ to $A$, is also a $C^*$ algebra.
We construct its tensor product in the category of $C^*$ algebras. This is unique as $C([0,1])$ is nuclear.
There is a canonical $*$-homomorphism, $$C[0,1] \otimes A \rightarrow C([0,1],A)$$
$$(f \otimes a) (x) = f(x)a $$ How does one show this is in fact an isomorphism?
| The map is $\pi:f\otimes a \longmapsto f a$. It's obvious that it is linear and multiplicative, and preserves adjoints, so it is a $*$-homomorphism. It is injective: if $\pi(\sum_j f_j\otimes a_j)=0$, we may choose the $a_j$ so that they are linearly independent. Then
$$
0=\pi(\sum_j f_j\otimes a_j)=\sum_j f_j a_j.
$$
Evaluating at any $t\in[0,1]$, we have $0=\sum_j f_j(t)a_j$, and the linear independence gives $f_j(t)=0$ for all $j$. As we can do this for any $t$, $f_j=0$ for all $j$, and so $\sum_jf_j\otimes a_j=0$. Thus $\pi$ is injective (in reality one also needs to check that the map is well-defined, see this answer for details).
It remains to show that $\pi$ is onto. Since it is an isometry, it is enough to show that its range is dense. There you use compactness of $[0,1]$ to show that the functions of the form $\sum_j f_j a_j$ are dense. Concretely, let $f\in C([0,1],A)$ and fix $\varepsilon>0$; then there exists a partition $0=t_0<t_1<\ldots<t_m=1$ such that $\|f(t_j)-f(t)\|<\varepsilon$ for $t\in[t_j,t_{j+1}]$. Let $a_j=f(t_j)$ for each $j$; let $g_j=1_{[t_j,t_{j+1}]}$, and let $f_j\in C[0,1]$ such that $\|f_j-g_j\|<\varepsilon/(m\|f\|)$. Then, with $k$ such that $t\in[t_k,t_{k+1}]$,
\begin{align}
\|f(t)-\sum_j f_j(t)a_j\|
&\leq \|f(t)-\sum_j g_j(t)a_j\|+\|\sum_j(g_j(t)-f_j(t))a_j\|\\
&\leq \|f(t)-\sum_j g_j(t)a_j\|+\varepsilon\\
&=\|f(t)-f(t_k)\|+\varepsilon\\ \ \\
&<2\varepsilon.
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Let $f:(0,\infty)\to(0,\infty)$ be a uniformly continuous function; is the following statement true?
Let $f:(0,\infty)\to(0,\infty)$ be a uniformly continuous function. Does it imply $$\lim_{x\to\infty} {f(x+{1\over x})\over f(x)}=1\;?$$
By uniform continuity , for any $\epsilon>0$, $\exists\delta>0$ such that $|f(x)-f(y)|<\epsilon$ as $|x-y|<\delta$.
So, $|f(x+{1\over x})-f(x)|<\epsilon$ as $|{1\over x}|<\delta$, i.e. $x>{1\over\delta}$.
Thus, $$\left| {f(x+{1\over x})\over f(x)}-1\right|<{\epsilon\over f(x)},\ \forall x>{1\over\delta}$$.
Now, by definition of the range set, $f$ is bounded below by $0$, but can we get a nonzero lower bound for the function $f$ on $({1\over\delta},\infty)$, i.e. can we get an $M>0$ such that $f(x)\ge M\ \forall x>{1\over\delta}$? If yes, then the proof can be completed easily.
Thanks for assistance in advance.
| The statement is not true. Any bounded continuous function $f:(0,\infty) \to (0,\infty)$ that extends continuously to $x=0$ and satisfies $f(x) \to 0$ as $x \to \infty$ is uniformly continuous.
Construct such a continuous function which is piecewise linear and where $f(x) = 1/n$ for $x = n $ and $f(x) = 2/n$ for $x = n + 1/n$ where $n \geqslant 2$ is an integer. The graph of the function looks like a sequence of declining sawtooth peaks.
Here we have $f(n+1/n)/f(n) \to 2$ as $n \to \infty$.
Thus, it is not the case that $f(x +1/x)/f(x) \to 1$ for $x \in (0,\infty)$ tending to $\infty$.
I'll leave any remaining details to you.
Addendum
$$f(x) = \begin{cases} 1/2,& 0 < x \leqslant 2\\1/n, &x = n, n\geqslant 2 \\ 2/n, & x = n + 1/n, n\geqslant 2\\ x - n + 1/n, & n < x < n +1/n, n \geqslant 2\\ 2/n + (1/(n+1)-2/n)(x - n - 1/n)/(1-1/n), &n + 1/n < x < n+1, n \geqslant 2 \end{cases}$$
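A sketch of this counterexample in code (piecewise-linear, with $f(n)=1/n$ and $f(n+1/n)=2/n$ for integers $n\geq2$, constant $1/2$ near the origin, and declining back down to $1/(n+1)$ at $x=n+1$):

```python
def f(x):
    # "declining sawtooth": f(n) = 1/n, f(n + 1/n) = 2/n, f(n + 1) = 1/(n+1)
    if x <= 2:
        return 0.5
    n = int(x)
    if x <= n + 1/n:
        return x - n + 1/n                  # rises from 1/n to 2/n
    t = (x - n - 1/n) / (1 - 1/n)
    return 2/n + (1/(n+1) - 2/n) * t        # falls back down to 1/(n+1)

ratios = [f(n + 1/n) / f(n) for n in range(2, 200)]
assert all(abs(r - 2) < 1e-9 for r in ratios)   # f(n+1/n)/f(n) -> 2, not 1
```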
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Robust & Linear Control Systems The characteristic polynomial of a control system is the following uncertain polynomial:
$$s^3 + a_2 s^2 + a_1 s + 3.5 $$
Where $a_1 \in [1.5,4.2]$, $a_2 \in [1.2,4.25]$ and $4.2 \leq a_1 + a_2 \leq 6.3$.
Is this uncertain polynomial stable?
How do I solve this problem, when I have got two values from $a_1$ and $a_2$.
| The Hurwitz matrix of the polynomial is:
$$ \begin{bmatrix} a_2 & 3.5 & 0 \\ 1 & a_1 & 0 \\ 0 & a_2 & 3.5 \end{bmatrix}$$
For stability, this matrix must satisfy the Hurwitz criterion, that is, all of its leading principal minors must be positive. The conditions are:
$$a_2 >0$$
$$a_2 a_1 >3.5 $$
If the second minor is positive, it follows that the third one is as well (because $3.5 > 0$). You can check that the polynomial is always stable for the values you have: under the constraint $a_1+a_2 \geq 4.2$, the minimum of $a_1 a_2$ over the given ranges is $3.0 \times 1.2 = 3.6 > 3.5$.
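A brute-force check of the two Hurwitz conditions over the admissible parameter set (a sketch; it samples a grid of $(a_1,a_2)$ values satisfying the constraints):

```python
def stable(a1, a2):
    # Hurwitz conditions for s^3 + a2 s^2 + a1 s + 3.5
    return a2 > 0 and a1 * a2 > 3.5

N = 200
ok = True
for i in range(N + 1):
    for j in range(N + 1):
        a1 = 1.5 + (4.2 - 1.5) * i / N
        a2 = 1.2 + (4.25 - 1.2) * j / N
        if 4.2 <= a1 + a2 <= 6.3:           # keep only admissible pairs
            ok = ok and stable(a1, a2)
assert ok   # stable over the whole sampled admissible set
```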
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Let $T_n=\{11(k+h)+10(n^k+n^h)\mid1\leq k,h \leq 10\}$ for $n\in\mathbb N$. Find all $n$ such that there is no $a\neq b\in T_n$ with $110 \mid (a-b)$. With every positive integer $n$, we have a set $T_n=\{11(k+h)+10(n^k+n^h) \mid 1\leq k,h \leq 10\}$. Find all $n$ such that there is no $a\neq b\in T_n$ with $110 \mid (a-b)$.
I tried dividing it into cases with $n\leq 11$, and then discarding the cases that do not work. Is there some other way to solve it?
| We can solve this without resorting to brute force.
Firstly, define, for positive integer $n$ and $1\leq k,h\leq10$, $f_n(k,h)=11(k+h)+10(n^k+n^h)$. Then the condition $\exists a,b\in T_n$ such that $a\ne b$ and $a\equiv b\pmod{110}$ is equivalent with the conditions below.
$$\begin{cases}
n^{k_1}+n^{h_1}&\equiv n^{k_2}+n^{h_2}\pmod{11}\\
k_1+h_1&\equiv k_2+h_2\pmod{10}\\
f_n(k_1,h_1)&\ne f_n(k_2,h_2)
\end{cases}.$$
If $(k_1,h_1,k_2,h_2)$ satisfies the above congruences, then call it a valid pair.
If $n^5\equiv1\pmod{11}$, then $n^1\equiv n^6\pmod{11}$ and $n^2\equiv n^7\pmod{11}$. Also $1+2\equiv6+7\pmod{10}$, so such an $n$ does not satisfy the condition. By the same token, if $n\equiv-1\pmod{11}$ then $n$ does not satisfy the condition. (Neither does any $n$ with $11\mid n$: then all powers $n^k\equiv0\pmod{11}$, and e.g. $(k_1,h_1)=(1,6)$, $(k_2,h_2)=(2,5)$ gives a valid pair.) Hence all that remain are those $n$ such that if $n^k\equiv1\pmod{11}$ then $10\mid k$, i.e. primitive roots of $11$, which are $\equiv2,6,7,8\pmod{11}$.
So let $n$ be a primitive root modulo $11$.
Then the condition $n^{k_1}+n^{h_1}\equiv n^{k_2}+n^{h_2}\pmod{11}$ becomes
$$n^{k_1}+n^{h_1}\equiv n^{k_2}+n^{k_1+h_1-k_2}\pmod{11}.$$
Hence $n^{k_2}$ satisfies the equation $$x^2-(n^{k_1}+n^{h_1})x+n^{k_1+h_1}\equiv0\pmod{11}.$$
But $x^2-(n^{k_1}+n^{h_1})x+n^{k_1+h_1}=(x-n^{k_1})(x-n^{h_1})$, thus $n^{k_2}$ is congruent modulo $11$ to either $n^{k_1}$ or $n^{h_1}$. Since $n$ is a primitive root modulo $11$, this implies that $k_2=k_1$ or $k_2=h_1$. This shows that $f_n(k_1,h_1)=f_n(k_2,h_2)$, and hence $(k_1,h_1,k_2,h_2)$ is not a valid pair.
Therefore indeed for $n\equiv2,6,7,8\pmod{11}$, there are no valid pairs, i.e. such $n$ are what we are looking for.
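Since the characterization is concrete, a short brute-force check (Python sketch) confirms it for small $n$: the $n$ with no two distinct elements of $T_n$ congruent modulo $110$ are exactly those with $n\equiv2,6,7,8\pmod{11}$.

```python
def has_no_collision(n):
    # group the elements of T_n by residue mod 110; n works iff every
    # residue class contains exactly one distinct value
    classes = {}
    for k in range(1, 11):
        for h in range(1, 11):
            v = 11 * (k + h) + 10 * (n ** k + n ** h)
            classes.setdefault(v % 110, set()).add(v)
    return all(len(s) == 1 for s in classes.values())

good = [n for n in range(1, 45) if has_no_collision(n)]
```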
Hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3186984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Pigeon Hole explanation I understand that the pigeonhole principle is supposedly quite a simple concept. However, could you please explain the reasoning behind the following answer? Thank you.
Question: A basket cannot contain more than $24$ apples. What is the minimum amount of baskets you must have, to ensure you have at least $5$ baskets with the same number of apples in them (all baskets have at least $1$ apple contained within).
Answer of this question being $97$ baskets.
| There are different versions of PigeonHole Principle, but they all basically amount to:
if the number of objects exceeds the number of containers, then with certainty some container receives more than one object; more generally, if more than $(k-1)n$ objects are placed into $n$ containers, some container receives at least $k$ of them.
The logic here is a direct application of that generalized principle. Think of the $24$ possible apple counts $1,2,\dots,24$ as the containers and the baskets as the objects. With $4\cdot 24 = 96$ baskets you can still avoid having $5$ baskets with the same count: take exactly $4$ baskets of each of the $24$ possible amounts. But one more basket must repeat an amount that already occurs $4$ times, forcing a collection of $5$ equal baskets. Hence $96$ baskets are not enough, while $97 = 4\cdot24+1$ baskets guarantee it.
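The counting behind the answer generalizes: with $m$ possible counts, forcing $k$ baskets with the same count takes $(k-1)m+1$ baskets. A minimal sketch:

```python
def min_to_force(k, m):
    # (k-1) baskets of each of the m possible counts avoid a k-fold repeat;
    # one more basket forces it (generalized pigeonhole principle)
    return (k - 1) * m + 1

print(min_to_force(5, 24))  # 97
```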
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3187132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Complex impedence between 2 terminals I have this problem to solve.
I have this so far. I am struggling with this so looking for some help please.
I know I need the imaginary part on one axis and the real part on the other.
I am struggling because there are more than 2 branches. I know:
$$ A + B = (4 + j1) + (2 + j3)
A + B = ( 4+2 ) + j(1+3)$$
So i added like this:
$$ j + 5 - 8.66j + 1 + j = -2.66j $$
But i guess this isn't correct?
This is for cartesian form.
Impedances sum in series; admittances (reciprocals of impedances) sum in parallel. Thus the total admittance in $\Omega^{-1}$ is $$\frac{1}{5j}+\frac{1}{5+8.66j}+\frac{1}{15}+\frac{1}{-10j}=-\frac{j}{5}+\frac{5-8.66j}{5^2+8.66^2}+\frac{1}{15}+\frac{j}{10}=a-bj$$with $$a:=\frac{5}{5^2+8.66^2}+\frac{1}{15},\,b=\frac{1}{5}+\frac{8.66}{5^2+8.66^2}-\frac{1}{10}=\frac{1}{10}+\frac{8.66}{5^2+8.66^2},$$while the total impedance in $\Omega$ is $$\frac{1}{a-bj}=\frac{a+bj}{a^2+b^2}.$$I'll leave the arithmetic to you.
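If it helps, the arithmetic is easy to verify with Python's built-in complex numbers (a quick sketch using the same branch values):

```python
# total admittance of the four parallel branches (in 1/ohm), then impedance
Y = 1 / 5j + 1 / (5 + 8.66j) + 1 / 15 + 1 / (-10j)
Z = 1 / Y

# the same quantity in the a - bj form
a = 5 / (5**2 + 8.66**2) + 1 / 15
b = 1 / 10 + 8.66 / (5**2 + 8.66**2)
```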
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3187306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
matrix raised to a matrix I wanted to know if it is possible to raise a matrix to a matrix, and I want to confirm whether I have done it right. I tested it out with numbers and notation. $$
x =\left [ \begin{matrix}
1 & 1 \\
0 & 1 \\
\end{matrix} \right ]
$$
$$
\left [ \begin{matrix}
2 & 1 \\
1 & 2 \\
\end{matrix} \right ] ^ x
$$
Now, I use $e^{\ln(x)} = x$, and I diagonalize my matrix to take an easy $\ln$.
$$
e^{\ln\left(
\left [ \begin{matrix}
2 & 1 \\
1 & 2 \\
\end{matrix} \right ] ^ x
\right)}
$$
$$
e^{x\cdot\ln\left(\operatorname{diag}\left(\left [ \begin{matrix}
2 & 1 \\
1 & 2 \\
\end{matrix} \right ]\right)\right)}
$$
my diag of my matrix is as follows:
$$
\left[\begin{matrix}
1 & 0\\
0 & 1\\
\end{matrix}\right]
$$
natural log of a diag matrix is as follows
$$
\left[\begin{matrix}
ln(1)=0 & 0\\
0 & ln(1)=0\\
\end{matrix}\right]
$$
$$
x
*
\left[\begin{matrix}
0 & 0\\
0 & 0\\
\end{matrix}\right]
=
\left[\begin{matrix}
0 & 0\\
0 & 0\\
\end{matrix}\right]
$$
then I have
$$
e^\left[\begin{matrix}
0 & 0\\
0 & 0\\
\end{matrix}\right]
$$
and if that weren't a zero matrix I'd apply the Taylor series.
Did I do that right? Would my next step be correct? I'd love to know, thanks in advance!!
| Defining the exponential function of a (complex) matrix $A$ is not too difficult, it is just $\exp A:=\sum_{n=0}^{\infty} \frac{A^n}{n!}$, which can be shown to converge. The same is therefore true for $\cos A$ and $\sin A$.
Working out what $\exp A$ is, is another question. In the "easy" case, when $A$ is diagonalizable we have $A=P^{-1}DP$, where $D$ is diagonal with entries $d_1,d_2,\dots,d_n$. Then, using $P^{-1}(X+Y)P=P^{-1}X P+P^{-1}Y P$ and $P^{-1}(X \cdot Y)P=P^{-1}X P\cdot P^{-1}Y P$ we can check that $\exp A=P^{-1}\Delta P$, where $\Delta$ is a diagonal matrix with entries $\exp d_1, \exp d_2,\dots,\exp d_n$.
(I think you are forgetting about the matrix $P$, which you must not do.)
Now let's talk about $\log A$. It is easier to talk about $\log(I+A)$, so let's do that. Now the power series for $\log(1+z)$ only converges for small $z$, so we can only proceed if $A$ is in some sense "small". But when it is, we can define $\log(I+A)$ as $\sum_{n=0}^{\infty} \frac{(-1)^{n}A^{n+1}}{n+1}$.
If we are in the happy situation when $A$ is diagonalisable, and its eigenvalues $d_i$ are small, we will get $\log (I+A)=P^{-1}\Lambda P$, where $\Lambda$ is diagonal with entries $\log (1+d_i)$.
Once all this machinery has been set up, you can go on to define - in some restricted circumstances - the matrix $X^Y$. It is going to be $\exp(Y\cdot \log X)$. The restrictions will include that $(X-I)$ and its eigenvalues are "small".
Again, working it out is in general not easy, but if the right things are diagonalisable then the process is diagonalise, apply the usual functions to the diagonal elements, and undo the diagonalisation.
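To make the recipe concrete, here is a numerical sketch (Python with NumPy; the series truncation length is an arbitrary choice) of "diagonalise, apply the scalar function, undo the diagonalisation", using the matrices from the question. The logarithm is taken through the eigendecomposition, which works here because the base matrix has positive eigenvalues $3$ and $1$, even though $X-I$ is not "small" enough for the series:

```python
import numpy as np

def funm_sym(A, f):
    # apply a scalar function f to a symmetric matrix via its eigendecomposition
    w, P = np.linalg.eigh(A)
    return P @ np.diag(f(w)) @ P.T

def expm_series(M, terms=40):
    # matrix exponential by truncated power series (M need not be symmetric)
    out, term = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

X = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 and 1
logX = funm_sym(X, np.log)               # matrix logarithm of X

Y = np.array([[1.0, 1.0], [0.0, 1.0]])
X_to_the_Y = expm_series(Y @ logX)       # X^Y := exp(Y log X)
```

As sanity checks, `expm_series(logX)` recovers $X$, and doubling the logarithm gives $X^2$.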
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3187631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding a subgroup $H$ such that a given relation is not an equivalence relation The question is as follows:
Find an example of a group $G$ with a subgroup $H$ so that
$$\{(x, y) | xx^{−1} y^{−1} \in H\}$$
is not an equivalence relation on $G$.
I've just been working on this problem set for hours now and I'm having a hard time coming up with an example for this question.
| Consider $G=V=\Bbb Z_2\times \Bbb Z_2$ given by the presentation $$\langle a, b\mid a^2, b^2, ab=ba\rangle$$ and the subgroup $H\cong \Bbb Z_2$ given by $\langle b\mid b^2\rangle$. Pick $a\in G\setminus H$. Then $a\not\sim a$.
Another way to see this is that the condition $xx^{-1}y^{-1}\in H$ is equivalent to $ey^{-1}=y^{-1}\in H$, which is in turn equivalent to $y\in H$ since $H$ is a subgroup of $G$. Thus it is sufficient to let $x\in G\setminus H$ in order for $x\not\sim x$; that is, for reflexivity to fail.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3187776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Topological semiconjugacy preserves topological transitivity Let $(X, f )$, where $X$ is a compact metric space and $f : X → X $ is a continuous function and let $(Y, g)$ where $Y$ is a compact metric space and $g : Y → Y$ is a continuous function. Suppose they are topologically semiconjugate, i.e., there is a continuous surjection $h : X → Y$ such that $h ◦ f = g ◦ h$, then show that the transitivity of $f$ implies the transitivity of $g$.
Definition of topological transitivity: Let $X$ be a metric space and $f : X → X$ continuous. $f$ is said to be topologically transitive if for every pair of nonempty open sets $U$ and $V$ in $X$, there is a positive integer $n$ such that $f^n(U) ∩ V ≠ ∅$.
I really don't know how to start this demonstration, a little help would be much appreciated. Thank you in advance for your answers!
| Let $U,V$ be non-empty open subsets of $Y$. Since $h$ is continuous and surjective, $h^{-1}(U)$ and $h^{-1}(V)$ are non-empty open subsets of $X$. Since $f$ is topologically transitive, there exists $n$ such that $f^n(h^{-1}(U))\cap h^{-1}(V)$ is not empty. Let $x\in h^{-1}(U)$ be such that $f^{n}(x)\in h^{-1}(V)$. Then $h(x)=y\in U$ and $g^n(y)=g^n(h(x))=h(f^n(x))$; since $f^n(x)\in h^{-1}(V)$, we deduce that $g^n(y)=h(f^n(x))\in V$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3188105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proving $\frac{1}{6}a+\frac{1}{3}b+\frac{1}{2}c \geq \frac{6abc}{3ab+bc+2ca}$ for positive $a$, $b$, $c$ I'm at the end of an inequality proof that started out complex and I was able to simplify it to:
$$\frac{1}{6}a+\frac{1}{3}b+\frac{1}{2}c \geq \frac{6abc}{3ab+bc+2ca} \quad\text{where}\quad a, b, c > 0$$
I'm able to plug in very small values close to 0 and the inequality holds, but I'm having trouble finding a way to prove it and how to start off this problem.
| A straightforward proof can be posed as follows, the inequality can be written as such
$$
(3ab+bc+2ca)\left(\frac{1}{6}a+\frac{1}{3}b+\frac{1}{2}c\right) \ge 6abc
$$
which is equivalent to
$$
3a^2b + 6ab^2 + 2ca^2 + 6c^2a + 2b^2c + 3bc^2 \ge 22 abc.
$$
Now define $$P(\alpha, \beta, \gamma) := \sum_{\text{sym}}a^{\alpha}b^{\beta}c^{\gamma},$$ e.g. $P(1, 1, 1) = 6abc$ and $P(2, 1, 0) = a^2b + ab^2 + bc^2 + b^2c + ac^2 + a^2c$. By Muirhead's inequality, $P(2, 1, 0) \ge P(1, 1, 1)$, so $2P(2,1,0)\ge 12abc$, and it remains to show that $a^2b + 4ab^2 + 4c^2a + bc^2 \ge 10abc$. By AM-GM,
$$
\underbrace{a^2b + bc^2}_{\ge 2abc} + \underbrace{4ab^2 + 4c^2a}_{\ge 8abc}\ge 10 abc,
$$
as required.
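As a quick numerical sanity check of the original inequality (not a proof, and the sampling range is arbitrary):

```python
import random

random.seed(0)
for _ in range(10000):
    a, b, c = (random.uniform(1e-3, 10.0) for _ in range(3))
    lhs = a / 6 + b / 3 + c / 2
    rhs = 6 * a * b * c / (3 * a * b + b * c + 2 * c * a)
    assert lhs >= rhs - 1e-12   # equality holds at a = b = c
```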
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3188234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Showing $\mathbb Z_4 \times \mathbb Z_2$ to be a group I've been posed the question:
Let $P$ be the pairs $(a,b)$ where $a \in \Bbb Z_4$, and $b \in \Bbb Z_2$
An operation, $*$, is defined by: $$(a,b)*(c,d)=(a+c \pmod 4, b+d \pmod 2)$$ for all $(a,b),(c,d)\in P$
How do I show that this is a group?
I know how to do this with multiplication tables by working through the axioms but I don't know how to apply these to this question, nor if that's the best approach
| Associativity follows from that of $(\Bbb Z_4, +_4)$ and of $(\Bbb Z_2, +_2)$.
The identity is $(0\pmod 4, 0\pmod 2)$. (Why?)
The inverse of $(a,b)$ under $*$ is given by $(-a, -b)$ since $$\begin{align}(a,b)*(-a, -b)&=(a+(-a)\pmod 4, b+(-b)\pmod 2)\\
&=(0\pmod 4, 0\pmod 2).
\end{align}$$
Closure follows from the closure of $\Bbb Z_4$ under $+_4$ and of $\Bbb Z_2$ under $+_2$.
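Since $P$ has only $8$ elements, all four axioms can also be verified exhaustively; a sketch in Python:

```python
from itertools import product

P = list(product(range(4), range(2)))      # pairs (a, b) with a in Z_4, b in Z_2

def op(x, y):
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2)

e = (0, 0)
assert all(op(x, y) in P for x, y in product(P, P))              # closure
assert all(op(e, x) == x == op(x, e) for x in P)                 # identity
assert all(any(op(x, y) == e for y in P) for x in P)             # inverses
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x, y, z in product(P, P, P))                      # associativity
```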
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3188388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Showing equivalence relations a=b/a=-b I have this equivalence relation where $S=\mathbb{R}$ and $a\sim b$
$ \iff a=b$ or $a=-b$
I know this is an equivalence relation and that it is also very simple, but I am just confused about how to test for reflexivity, symmetry and transitivity. How can I see what $a\sim a$ actually means? Do I check that substituting $b=a$ still works? But surely this would mean $a=-a$?
Sorry for the confusion but if anyone has the time to offer some insight I would greatly appreciate it!
|
Like how can I see what $a\sim a$ actually means?
$a\sim a$ is a statement. A statement can be either true or false. What that statement means is defined by what $\sim$ means, and in your case, for any arbitrary $a,b$, the statement "$a\sim b$" is the same as the statement "$a=b$ or $a=-b$". Therefore, the statement $a\sim a$ is the statement $a=a$ or $a=-a$.
Notice that there is an or, not an and, connecting the two substatements, one being $a=a$ and the other $a=-a$.
In order for a statement $A$ or $B$ to be true, only one of the statements $A$, $B$ needs to be true. So, following from this, we see that $a\sim a$ is a true statement if one of the statements "$a=a$", "$a=-a$" is true. Clearly, one of them ($a=a$) is true, and therefore, $a\sim a $ is also true. Regardless of whether the other, $a=-a$, is true or not.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3188541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Topological Algebraic Independence of power series Let $p$ be a prime number, let $x$ be a variable, and consider two power series over the ring $\mathbb{Z}_p$ of $p$-adic integers:
$a(x):=\underset{n\geq 1}{\sum}{\frac{p^n}{n!}x^n}=px+\frac{p^2}{2}x^2+\frac{p^3}{6}x^3+\cdots$
$b(x):=\underset{n\geq 1}{\sum}{\frac{p^n}{n!}x^{2n}}=px^2+\frac{p^2}{2}x^4+\frac{p^3}{6}x^6+\cdots$
My question is, can we find a power series $0\neq f(u,v)\in\mathbb{Z}_p[[u,v]]$ with coefficients in $\mathbb{Z}_p$, such that $f(a,b)=0$, or in other words are $a(x)$ and $b(x)$ topologically algebraically independent (TAI) over $\mathbb{Z}_p$?
It is not true that $a$ and $b$ are TAI over $\mathbb{Q}_p$, which we can observe simply by choosing a sequence of polynomials with rational coefficients which remove successively higher and higher powers of $x$. For example:
$0=a(x)^2-pb(x)-pa(x)b(x)-\frac{(7p-12)p}{12}b(x)^2+\cdots$
In fact, we can apply the same argument to say that any distinct pair of univariate power series over $\mathbb{Q}_p$ are not TAI over $\mathbb{Q}_p$.
Unfortunately, I can think of no way of ensuring that the coefficients of this power series lie in $\mathbb{Z}_p$, or even that the series can be scaled by a power of $p$ so that they will.
If anyone has any ideas or suggestions, I'd be very interested to hear them. Thanks.
| It’s late at night, and I hope I’m not getting egg all over my face here, in this argument tailored to your particular example.
First, I’m going to define $\log(x)=-\sum_{n\ge1}(-x)^n/n=x-x^2/2+x^3/3-\cdots$ and $\exp(x)=\sum_{n\ge1}x^n/n!$, so that this log and exp are inverse power series of each other, defined over $\Bbb Q_p$.
Next, your $a(x)$ is $\exp(px)$ and your $b(x)$ is $\exp(px^2)$, both of them landing in $\Bbb Z_p[[x]]$. Thus we can say, by taking logs, that $\log\bigl(a(x)\bigr)=px$ and $\log\bigl(b(x)\bigr)=px^2$, giving
$$
\bigl(\log\bigl(a(x)\bigr)\bigr)^2=p\log\bigl(b(x)\bigr)\,,
$$
a manifest statement of topological algebraic dependence, but over $\Bbb Q_p$. Now, $a(x)$ and $b(x)$ have all coefficients divisible by $p$, and $\log(px)\in\Bbb Z_p[[x]]$, so that the displayed series actually has $\Bbb Z_p$-coefficents.
This seems to be telling me that if you had only asked for the topological algebraic dependence over $\Bbb Z_p$ of $A(x)$ and $B(x)$ where $A(x)=a(x)/p$ and $B(x)=b(x)/p$, we would have it. But I’m not seeing the desired result at this point.
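The key identity $\log\bigl(a(x)\bigr)=px$ can be checked coefficientwise with exact rational arithmetic, truncating at some degree; a sketch with the illustrative choices $p=5$ and truncation degree $8$:

```python
from fractions import Fraction
from math import factorial

p, N = 5, 8  # illustrative prime and truncation degree

def mul(f, g):
    # multiply two power series (coefficient lists), truncated at degree N
    h = [Fraction(0)] * (N + 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j <= N:
                h[i + j] += fi * gj
    return h

# a(x) = sum_{n>=1} p^n x^n / n!
a = [Fraction(0)] + [Fraction(p**n, factorial(n)) for n in range(1, N + 1)]

# log(1 + a) = sum_{k>=1} (-1)^(k+1) a^k / k, truncated at degree N
log_a = [Fraction(0)] * (N + 1)
power = [Fraction(1)] + [Fraction(0)] * N
for k in range(1, N + 1):
    power = mul(power, a)
    for i in range(N + 1):
        log_a[i] += Fraction((-1) ** (k + 1), k) * power[i]

# the claim: log(a(x)) equals the series p*x
assert log_a == [Fraction(0), Fraction(p)] + [Fraction(0)] * (N - 1)
```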
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3188708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Proving two graphs have the same chromatic number Let $G= (V,E)$ be a graph, and let $G'= (V',E')$ be a copy of $G$. That is, for each $v ∈ V$ there is a corresponding $v' ∈ V'$ and for each edge $(u,v)∈E$ there is a corresponding edge $(u',v')∈E'$. Construct a graph $G\widehat{}$ by drawing an edge from each $v∈V$ to its corresponding $v'∈V'$. Prove that $\chi (G\widehat{}) =\chi (G)$.
My work: I sketched a few graphs such that they could be colored using two color and drew $G$ and $G'$ for each. By construction it's apparent that $G\widehat{}$ could also be colored using two colors. But i'm having a hard time extending this to more than two colors and actually writing down a proof, as a picture doesn't really count as proof.
| If I'm understanding your question correctly, the new graph $\hat{G}$ is just two copies of $G$ with the corresponding vertices connected, right? If so, first note $\chi(\hat G)\ge\chi(G)$, since $G$ is a subgraph of $\hat G$. For the other direction, just think about permuting the colors on the second copy. Explicitly, say you colored $V$ with the colors $\{1, 2, \ldots, k\}$, where $k=\chi(G)$. Then for $v'\in V'$, if the corresponding $v\in V$ has color $i$, color $v'$ with the color $(i \bmod k)+1$. Adjacent vertices within $V'$ still get different colors (the shift is a bijection on the color set), and each matching edge $vv'$ joins colors $i$ and $(i\bmod k)+1$, which differ as long as $k\ge2$, i.e. as long as $G$ has at least one edge. Hence $k$ colors suffice for $\hat G$.
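A small sketch (Python; the $5$-cycle with $\chi=3$ is an arbitrary example) checking that the shifted coloring is proper on all three kinds of edges of $\hat G$:

```python
# G = C5 with a proper 3-coloring; G-hat = two copies plus a perfect matching
n, k = 5, 3
G_edges = [(i, (i + 1) % n) for i in range(n)]
color = {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}
assert all(color[u] != color[v] for u, v in G_edges)     # coloring of G is proper

hat_edges = ([((u, 0), (v, 0)) for u, v in G_edges]      # first copy
             + [((u, 1), (v, 1)) for u, v in G_edges]    # second copy
             + [((v, 0), (v, 1)) for v in range(n)])     # matching

hat_color = {(v, 0): color[v] for v in range(n)}
hat_color.update({(v, 1): color[v] % k + 1 for v in range(n)})  # cyclic shift

assert all(hat_color[x] != hat_color[y] for x, y in hat_edges)  # proper, k colors
```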
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3188969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Compute matrix norm induced by weight l1 vector norm For a strictly positive collection of weights $\{w_{i}\}$, consider the weighted $l_{1}$ vector norm:
$$
||x||_{W} = \sum_{i}^{N} w_{i}|x_{i}|
$$
What is (or more accurately, how would you compute) the matrix norm induced by this vector norm? That is, what is $||A||_{W}$? (You may assume $A$ is square so that the weights are well-defined.)
Note that, when $w_{i} = 1$ for all $i$, you get the standard result
$$||A||_{1} = \max_{j} \sum_{i=1}^{N} |a_{ij}|.$$
For this reason, I feel like the correct answer should be
$$||A||_{W} = \max_{j} \sum_{i=1}^{N} w_{i}|a_{ij}|.$$
But I cannot arrive at this result using any of the standard tricks.
| Let $W$ be the diagonal matrix of weights. Notice then that,
$$
\|x\|_W
= \sum_{i=1}^{N} w_i |x_i|
= \sum_{i=1}^{N} |w_ix_i|
= \|Wx\|_1
$$
By definition,
$$
\|A\|_W = \sup_{\|x\|_W = 1} \|Ax\|_W
$$
Thus, letting $y=Wx$ (so that $x = W^{-1}y$),
$$
\|A\|_W
= \sup_{\|Wx\|_1 = 1} \|WAx\|_1
= \sup_{\|y\|_1 = 1} \|WAW^{-1}y\|_1
= \| WAW^{-1} \|_1
$$
Now,
$$
[WAW^{-1}]_{i,j} = (w_i/w_j) A_{i,j}
$$
so,
$$
\|WAW^{-1}\|_1
= \max_j \sum_{i=1}^{N} \frac{w_i}{w_j} |A_{i,j}|
= \max_j \frac{1}{w_j} \sum_{i=1}^{N} w_i |A_{i,j}|
$$
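A numerical sanity check (Python/NumPy sketch with an arbitrary random matrix and weights): random vectors never beat the closed form, and a standard basis vector attains it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w = rng.uniform(0.5, 2.0, n)          # strictly positive weights
A = rng.normal(size=(n, n))

closed_form = max(float(w @ np.abs(A[:, j])) / w[j] for j in range(n))

best = 0.0
for _ in range(5000):                  # random directions never exceed the bound
    x = rng.normal(size=n)
    best = max(best, float(w @ np.abs(A @ x)) / float(w @ np.abs(x)))
assert best <= closed_form + 1e-9

for j in range(n):                     # the sup is attained at some basis vector
    e = np.zeros(n); e[j] = 1.0
    best = max(best, float(w @ np.abs(A @ e)) / w[j])
assert abs(best - closed_form) < 1e-12
```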
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3189124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Using $P(A'|B')$ to find $P(A\cup B)$ We have $P(B) = \frac 3 5$ and $P(A'|B') = \frac 1 3$.
I started with $P(A\cap B)' = P(B)' \cdot P(A'|B') = \frac 2 5 \cdot \frac 1 3 = \frac 2 {15}$.
Then converted $P(A\cap B)'$ to $P(A'\cup B')$.
Is $P(A'\cup B') = P(A\cup B)'$, meaning we get $P(A\cup B) = 1 - \frac 2 {15} = \frac {13} {15}$?
| You have made two mistakes, but got the right answer! $P(A'\cap B')=P(A'|B') P(B')=\frac 1 3 (1-\frac 3 5)= \frac 2 {15}$. $P(A\cup B)=1-P(A' \cap B')=1-\frac 2 {15}=\frac {13} {15}$.
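The same chain of identities in exact arithmetic (a Python sketch):

```python
from fractions import Fraction

P_B = Fraction(3, 5)
P_Ac_given_Bc = Fraction(1, 3)

P_Ac_and_Bc = P_Ac_given_Bc * (1 - P_B)   # P(A' n B') = P(A'|B') P(B')
P_A_or_B = 1 - P_Ac_and_Bc                # De Morgan: (A u B)' = A' n B'
```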
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3189546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
preimage of a torsion-free subgroup
Let $\phi: G \to H$ be a surjective group homomorphism such that $\ker(\phi)$ is torsion-free. Let $B$ be a torsion-free subgroup of $H$. Show that $A = \phi^{-1}(B)$ is torsion-free.
I'm confused why we need the condition that $\ker\phi$ is torsion-free.
| Suppose $a\in A$ with $a\neq e$ is torsion, say $a^n=e$ for some $n\ge1$. Then $\varphi(a)^n=\varphi(a^n)=e$, so $\varphi(a)$ is a torsion element of $B$.
Now since $B$ is torsion-free, $\varphi(a)=e$, hence $a \in \ker \varphi$. But then $a$ is a nontrivial torsion element of $\ker\varphi$, contradicting the hypothesis that $\ker\varphi$ is torsion-free; this is exactly where that hypothesis is needed.
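To see why the kernel hypothesis is needed, note what goes wrong without it: take $G=\Bbb Z\times\Bbb Z_2$, let $\varphi$ be the projection onto $H=\Bbb Z$, and $B=\{0\}$. Then $B$ is torsion-free, but $A=\varphi^{-1}(B)=\{0\}\times\Bbb Z_2$ is not, precisely because $\ker\varphi$ contains torsion. A tiny sketch:

```python
# elements of G = Z x Z_2 as pairs, with coordinatewise group law
def add(x, y):
    return (x[0] + y[0], (x[1] + y[1]) % 2)

e = (0, 0)
a = (0, 1)   # lies in ker(phi) = {0} x Z_2 = phi^{-1}({0})
assert a != e and add(a, a) == e   # a nontrivial torsion element of the preimage
```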
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3189709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does the fundamental theorem of calculus work with the chain rule? I came across a problem on the fundamental theorem of calculus while studying integral calculus.
A problem:
$\frac{d}{dx}\int_{\pi}^{x^2}\cot^2t\ dt$
which was solved as:
Step I : Let $F(x) = \int_{\pi}^{x}\cot^2t\ dt$
⇒ $F'(x)= \frac{d}{dx}\int_{\pi}^{x}\cot^2t\ dt=\cot^2(x)$
Step II : $\frac{d}{dx}\int_{\pi}^{x^2}\cot^2t\ dt$ = $\frac{d}{dx}[F(x^2)]$ = $F'(x^2)\cdot\frac{d}{dx}(x^2)$
Step III : $F'(x^2)\cdot\frac{d}{dx}(x^2) = \cot^2(x^2)\cdot 2x$
I don't understand how they can solve
$F'(x^2)$ in the third step by putting $x^2$ directly into $F'(x)=\cot^2x$.
Reference : problem video
| I think you're confusing $F'(x^2)$ and $(F(x^2))'$. The first is the function $F'$ $\bf{evaluated}$ at $x^2$ and the second is the derivative of the function $x \mapsto F(x^2).$ These are two different things !
If you take the function $x \mapsto F(x) = 2x + 1$. Then $F'(x) = 2$ so $F'(x^2) = 2$ but $$(F(x^2))' = (x^2)' \cdot F'(x^2) = 2x \cdot 2 = 4x.$$
Similarly, for any differentiable function $h$, $h'(2)$ is not necessarily equal to $0$, even though the derivative of the constant $h(2)$ is: $$h'(2) \neq (h(2))' = 0.$$
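A quick numerical check of the result (Python sketch; the sample point $x=2$ and the step sizes are arbitrary choices). The symmetric difference quotient of the integral equals $\frac1{2h}\int_{(x-h)^2}^{(x+h)^2}\cot^2t\,dt$, which should match $2x\cot^2(x^2)$:

```python
from math import tan

def cot2(t):
    return 1.0 / tan(t) ** 2

def int_ab(a, b, steps=2000):
    # midpoint rule for the integral of cot^2 over [a, b]
    h = (b - a) / steps
    return h * sum(cot2(a + (i + 0.5) * h) for i in range(steps))

x, h = 2.0, 1e-4       # x^2 = 4 stays away from the poles of cot
numeric = int_ab((x - h) ** 2, (x + h) ** 2) / (2 * h)
exact = 2 * x * cot2(x * x)
assert abs(numeric - exact) < 1e-3
```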
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3189825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Fourier series of translated square I can't seem to find the correct Fourier series coefficients ($s_n$) of the following periodic function. I know how to get the Fourier series of the same one that is not vertically translated and has no negative values for any $t$ (the result is $A\,\mathrm{sinc}(n\pi)$), but this one I'm having trouble with. The function is periodic, and during its period $T$ is defined by:
$$
x(t) =
\begin{cases}
A, & t\in(0,\frac{T}{4}) \\
-A, & t\in(\frac{T}{4},\frac{3T}{4}) \\
A, & t\in(\frac{3T}{4},T)
\end{cases}
$$
How I tried solving this:
$$ s_n = \frac{1}{T}\int_{-T/2}^{T/2}{x(t)e^{-inw_0t}dt} = \frac{1}{T}\Bigg[\int_{-T/2}^{-T/4}{-Ae^{-inw_0t}dt} + \int_{-T/4}^{T/4}{Ae^{-inw_0t}dt} + \int_{T/4}^{T/2}{-Ae^{-inw_0t}dt}\Bigg]$$
Integrating I get:
$$ s_n = \frac{A}{inw_0T}\Bigg[ (e^{inw_0\frac{T}{2}}-e^{-inw_0\frac{T}{2}})-2(e^{inw_0\frac{T}{4}}-e^{-inw_0\frac{T}{4}})\Bigg]$$
Applying Euler's identity, $w_0 = \frac{2\pi}{T}$, and the definition of the $\mathrm{sinc}$ function, I finally get:
$$ s_n = Asinc(n\pi) - Asinc(\frac{n\pi}{2})$$
That differs from the answer in my textbook, which says the answer is only $A\,\mathrm{sinc}(\frac{n\pi}{2})$.
It would help if someone could shed some light on where I've gone wrong.
| Since it is said that $x$ is $T$-periodic, you have that
$x(t) = -A$ for $t\in(-T/2,-T/4)$, so you will have to re-evaluate the integrals.
Then, note that
$$\text{sinc}(n\pi) = 0$$
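Numerically integrating the coefficient formula reproduces the textbook result $s_n = A\,\mathrm{sinc}(n\pi/2)$ for $n\ne0$ (a Python sketch; $A=1$, $T=2\pi$ and the step count are arbitrary choices):

```python
import cmath, math

A, T = 1.0, 2 * math.pi
w0 = 2 * math.pi / T

def x(t):
    t %= T
    return A if (t < T / 4 or t > 3 * T / 4) else -A

def s(n, steps=50000):
    # s_n = (1/T) * integral over one period of x(t) e^{-i n w0 t} dt
    h = T / steps
    return sum(x((i + 0.5) * h) * cmath.exp(-1j * n * w0 * (i + 0.5) * h)
               for i in range(steps)) * h / T

for n in range(1, 6):
    expected = A * math.sin(n * math.pi / 2) / (n * math.pi / 2)
    assert abs(s(n) - expected) < 1e-3
```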
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3189978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Greatest Integer Function linear equation Given that $2[x]=x+2(x)$, $[x]$ if the Greatest Integer Function and $(x)$ is the fractional part of $x$, find the value (s) of $x$.
I tried replacing $(x)=x-[x]$ but got an equation in $x$ and $[x]$. How do I proceed?
| Let $x = n + r$ where $n \in \mathbb{Z}$ and $0 \le r \lt 1$. Then $[x] = n$ and $(x) = r$, so your equation becomes $2n = (n + r) + 2r \; \Rightarrow \; n = 3r$. Since $0 \le 3r \lt 3$, this gives $3$ choices for $n$ of $0, 1, 2$. You can then determine the matching values of $r$ and $x$.
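Carrying this through, $n=3r$ gives $r=n/3$ and $x=n+r=4n/3$, so the solutions are $x\in\{0,\,4/3,\,8/3\}$. An exhaustive check over a grid of rationals (Python sketch):

```python
from fractions import Fraction
from math import floor

def holds(x):
    n, r = floor(x), x - floor(x)      # x = [x] + (x)
    return 2 * n == x + 2 * r

sols = [x for k in range(-120, 121)
        for x in [Fraction(k, 12)] if holds(x)]
assert sols == [Fraction(0), Fraction(4, 3), Fraction(8, 3)]
```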
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
prime numbers and expressing non-prime numbers My textbook says that if $b$ is a non-prime number then it can be expressed as a product of prime numbers. But if $1$ isn't prime, how can it be expressed as a product of prime numbers?
| This is mainly just an extended comment on Peter Foreman's answer. The (relatively difficult) uniqueness aspect of the Fundamental Theorem of Arithmetic is not needed for the OP's question, just the (easier) existence aspect.
What's missing from the OP's textbook is the qualifier in the correct assertion that every non-prime number greater than $1$ can be expressed as a product of primes. This is the existence aspect of FTA, and it can be proved by strong induction: If $n\gt1$ is not a prime, then $n=ab$ for some pair of integers with $1\lt a,b$. Both $a$ and $b$ must be less than $n$ (otherwise their product would be more than $n$), so we can assume, by strong induction, that each of them can be written as a product of primes, hence so can their product, which is $n$.
Remark: "Strong" induction means that you don't just assume an assertion is true for $n-1$ and then prove it for $n$, you assume it's true for all positive integers $k\lt n$. In this case the assertion is "if $k\gt1$ and $k$ is non-prime, then $k$ can be written as a product of primes." Note that the base case, $k=1$, is vacuously true, because $1$ is not greater than $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Tensor product notation as a power I chance upon a notation while reading a paper which I do not quite understand.
Suppose that $\hat{J}$ is an operator on a tensor product of $N$ two-dimensional Hilbert spaces. Explicitly, it is given by
$\hat{J} = \frac{1}{\sqrt{2}}(\hat{I}^{\otimes N} + i\hat{\sigma}_{x}^{\otimes N})$
How should the two terms enclosed in the parentheses be understood?
| In general, when dealing with any product-like operation $\star$, you should understand $a^{\star n}$ as $$\underbrace{a \star a \star \dots \star a}_{n \text { times}}.$$ This might be written as simply $a^n$ in case where it's completely obvious which product is meant, but putting $\star$ in the exponent specifies which product it is.
I personally encounter this notation most often in graph theory, where (for example) $G^{\boxtimes n}$ is the $n$-fold strong product of graph $G$ with itself. But it shows up in many places, and tensor products are a common example.
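In matrix terms, the tensor power is an iterated Kronecker product, so the operator can be built explicitly for small $N$ (Python/NumPy sketch with the illustrative choice $N=3$); one can also check that $\hat J$ is unitary, since $(\hat\sigma_x^{\otimes N})^2 = \hat I^{\otimes N}$:

```python
import numpy as np

def kron_power(a, n):
    # n-fold Kronecker (tensor) power a (x) a (x) ... (x) a
    out = np.array([[1.0 + 0j]])
    for _ in range(n):
        out = np.kron(out, a)
    return out

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x

N = 3
J = (kron_power(I2, N) + 1j * kron_power(sx, N)) / np.sqrt(2)

assert J.shape == (2 ** N, 2 ** N)
assert np.allclose(J.conj().T @ J, np.eye(2 ** N))  # J is unitary
```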
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why are these two definitions of Martingales equivalent? I am recently reading books on Probability theory, in Durrett's book Probability: Theory and Examples, the definition is following:
If $X_{n}$ is sequence with
(1) $\mathbb{E}\left|X_{n}\right|<\infty$
(2) $X_{n} \text { is adapted to } \mathcal{F}_{n}$
(3) $\mathbb{E}\left(X_{n+1} | \mathcal{F}_{n}\right)=X_{n} \text { for all } n$
Then $X$ is said to be a martingale (with respect to $\mathcal{F}_n$ )
And, in Erhan Cinlar's book Probability and Stochastics, the definition is
A real-valued stochastic process $X=\left(X_{t}\right)_{t \in \mathbb{T}}$ ($\mathbb{T}$ is a subset of $\overline{\mathbb R}$) is called an $\mathcal{F}$-submartingale if $X$ is adapted to $\mathcal{F}$, each $X_{t}$ is integrable, and $$\mathbb{E}_{s}\left(X_{t}-X_{s}\right) \geq 0$$
whenever $s<t$. It is called an $\mathcal{F}$-supermartingale if $-X$ is an $\mathcal{F}$-submartingale, and an $\mathcal{F}$-martingale if it is both an $\mathcal{F}$-supermartingale and an $\mathcal{F}$-submartingale.
From first looks I don't see the connection immediately, can you please illuminate me on this?
| From Cinlar's definition, if $X_t$ is a martingale, it is both a supermartingale and a submartingale.
The submartingale property implies $E_s[X_t - X_s] \ge 0$, and the supermartingale property implies $E_s[-(X_t - X_s)] \ge 0$, and thus $E_s[X_t - X_s] = 0$. Since $X_s \in \mathcal{F}_s$, this can be rewritten as $E_s[X_t] = X_s$. (Presumably, $E_s[\cdot]$ in Cinlar's notation is $E[\cdot \mid \mathcal{F}_s]$ in Durrett's notation?)
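For a concrete finite check of the equivalence, take the fair $\pm1$ random walk and compute the conditional expectation by enumerating all equally likely futures (Python sketch): Durrett's $\mathbb{E}(X_t\mid\mathcal F_s)=X_s$ and Cinlar's $\mathbb{E}_s(X_t-X_s)=0$ (in both directions) say the same thing.

```python
from itertools import product
from fractions import Fraction

s, t = 2, 5   # condition on the first s of t steps
for past in product((-1, 1), repeat=s):
    X_s = sum(past)
    futures = list(product((-1, 1), repeat=t - s))
    # E[X_t | F_s]: average X_t over all equally likely futures
    E_Xt = Fraction(sum(X_s + sum(f) for f in futures), len(futures))
    assert E_Xt == X_s          # Durrett's form
    assert E_Xt - X_s == 0      # Cinlar's form, with equality both ways
```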
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Regarding steady state solution of $u_{t}= c^2 u_{xx}$ with $u_{x}(0,t) = c_{1}$ and $u(L,t) = c_{2}$? Suppose we have the one dimensional diffusion equation $u_{t} = c^2 u_{xx}$ with the boundary conditions $u(L,t) = c_{2}$ and $u_{x}(0,t) = c_{1}$. I do not recognize which type of conditions these are; sadly, they seem to be Robin conditions.
Suppose I proceed by assuming $u(x,t) = X(x) T(t)$ then $XT' = c^2 X'' T$, but I am thinking how to incorporate the conditions given into the PDE or the process which can give us the steady state solution?
EDIT:
I am still thinking about the solution; the question is attached as a pic.
The solution $u(x,t)$ in the solution below is independent of $t$, so it may be the steady state; but then how do I answer the 2nd part, as it hints at time dependence there:
| Let $u(x,t) = w(x) + v(x,t)$ where $w(x)$ is the steady-state. Then we have
$$ w''(x) = 0, \quad w'(0) = c_1, \quad w(L) = c_2 $$
which gives $w(x) = c_1(x-L) + c_2$
Now you can use separation of variables to find $v(x,t)$, which is homogeneous on the boundary (of "mixed" type).
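The steady state can be confirmed numerically by relaxing a finite-difference discretization of $w''=0$ with the boundary conditions $w'(0)=c_1$, $w(L)=c_2$ from the question (Python sketch; the grid size, iteration count, and values $c_1=0.7$, $c_2=2.0$, $L=1$ are arbitrary choices):

```python
c1, c2, L, n = 0.7, 2.0, 1.0, 20
dx = L / n
u = [0.0] * (n + 1)
for _ in range(10000):                  # Jacobi relaxation of u'' = 0
    new = u[:]
    new[0] = u[1] - c1 * dx             # one-sided stencil for u'(0) = c1
    for i in range(1, n):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])
    new[n] = c2                         # u(L) = c2
    u = new

w = [c1 * (i * dx - L) + c2 for i in range(n + 1)]   # the claimed steady state
assert max(abs(ui - wi) for ui, wi in zip(u, w)) < 1e-6
```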
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Matrix such that $A^n=aA$ Let $A\in M_n(\mathbb{C})$ be a matrix such that $A^n=aA$,where $a\in \mathbb{R}-\{0,1\}$.
I wanted to find $A$'s eigenvalues and I thought that they are the roots of the polynomial equation $x^n=ax$. Is this correct?
| If $A$ has eigenvalue $b$, and $v$ is a corresponding (non-zero) eigenvector, then $$0 = (A^n-aA)v = A^nv - aAv = b^nv-abv = (b^n-ab)v$$This means $b^n-ab = 0$, which does make $b$ a root to the polynomial equation $x^n = ax$.
Apart from that, there is not much we can say. $A$ could have one, some, or all of those roots as eigenvalues, in any combination and multiplicity (as long as the total multiplicity is $n$). For instance, by being a diagonal matrix with the desired combination of eigenvalues along the diagonal.
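For instance (Python/NumPy sketch with the illustrative values $n=4$, $a=2$): the nonzero roots of $x^n=ax$ are the $(n-1)$-st roots of $a$, and any diagonal matrix whose entries are drawn from these roots and $0$ satisfies the relation:

```python
import numpy as np

n, a = 4, 2.0
# nonzero roots of x^n = a x, i.e. x^{n-1} = a, together with the root 0
roots = [0j] + [a ** (1 / (n - 1)) * np.exp(2j * np.pi * k / (n - 1))
                for k in range(n - 1)]
A = np.diag(roots)                     # any combination/multiplicity would do
assert np.allclose(np.linalg.matrix_power(A, n), a * A)
```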
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Multiplicative Chernoff Bound for coins We flip 1000 fair coins and mark the results with $X_1, \ldots, X_{1000}$. A pair of adjacent coins is $X_i,X_{i+1}$; I count $X_{1000}$ and $X_1$ as adjacent. We refer to the number of pairs of adjacent coins showing both heads as $X$.
I want to use the multiplicative Chernoff bound to estimate $Pr[X \geq 300]$. But then $X$ must be a sum of independent Bernoulli-distributed random variables, and apparently the indicator for the pair $(X_1,X_2)$, for example, is not independent of the one for $(X_2,X_3)$. Can I somehow work around this problem and still use the multiplicative Chernoff bound?
| Let $Y$ be the number of pairs of consecutive heads of the form $(X_{2i},X_{2i+1})$, and let $Z$ be the number of pairs of the form $(X_{2i-1},X_{2i})$. Note $X=Y+Z$, so
$$
P(X\ge 300)\le P(Y\ge 150)+P(Z\ge 150)
$$
You can then apply the Chernoff bound to each of $Y$ and $Z$.
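As a sketch of the final computation (the particular form of the Chernoff bound is my assumption, not stated in the answer): each of $Y$ and $Z$ is a sum of 500 independent Bernoulli$(1/4)$ indicators, since its pairs use disjoint coins, so $\mu = 125$ and the standard multiplicative bound $P(Y\ge(1+\delta)\mu)\le\left(e^\delta/(1+\delta)^{1+\delta}\right)^\mu$ can be evaluated directly:

```python
import math

# Sketch: Y and Z are each sums of 500 independent Bernoulli(1/4)
# indicators (their pairs use disjoint coins), so mu = 125.
mu = 500 * 0.25
delta = 150 / mu - 1              # 150 = (1 + delta) * mu  =>  delta = 0.2
# Standard multiplicative Chernoff bound for P(Y >= (1+delta)*mu):
bound_each = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu
bound_total = 2 * bound_each      # union bound over Y and Z
print(bound_total)                # roughly 0.19
```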
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3190974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is wrong with my argument using of Collatz–Wielandt formula? If $A$ is a positive square matrix, then the Collatz–Wielandt implies that
$\min_{i=1,\ldots,n;\, y_i\neq 0}\frac{(Ay)_i}{y_i}\leq r\leq\max_{i=1,\ldots,n;\, y_i\neq 0}\frac{(Ay)_i}{y_i}$,
Where $r$ is the largest eigenvalue of $A$.
By replacing $y=e_j$ in the previous expression wouldn't we obtain that $a_{jj}\leq r \leq a_{jj}$ for each $j\in\{1,\ldots,n\}$? This cannot be true for a matrix $A$ that has different entries on the diagonal.
The previous inequality is from @Surb’s answer in here:
Lower and upper bound for the largest eigenvalue.
| No. In $e_j$, $j$ is not related to $i$. You would have
$$ \min_i a_{ji} \leq r \leq \max_i a_{ji} $$
(... or possibly "$a_{ij}$" in both places, since I don't know whether you are using row first or column first indexing for matrices). You are pulling the minimum and maximum entries from the $j^\text{th}$ column.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3191125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the length x such that the two distances in the triangle are the same I have been working on the following problem
Statement
Assume you have a right-angle triangle $\Delta ABC$ with legs (catheti) $a$, $b$ and hypotenuse $c = \sqrt{a^2 + b^2}$. Find or construct a point $D$ on the hypotenuse such that the distances satisfy $|CD| = |DE|$, where $E$ is positioned on $AB$ in such a way that $DE\parallel BC$ ($DE$ is parallel to $BC$).
Background
My background for wanting such a distance is that I want to create a semicircle from C onto the line $AB$. This can be made clearer in the image below
To make sure the angles are right, I needed the red and blue lines to be of the same length. This led to this problem.
Solution
Using similar triangles one arrives at the three equations
$$
\begin{align*}
\frac{\color{blue}{\text{blue}}}{a - x} & = \frac{b}{a} \\
\frac{\color{red}{\text{red}}}{x} & = \frac{c}{a} \\
\color{red}{\text{red}} & = \color{blue}{\text{blue}}
\end{align*}
$$
Where one easily can solve for $\color{blue}{\text{blue}}$, $\color{red}{\text{red}}$, $x$.
Question
I feel my solution is quite barbaric and that there is a better way to solve this problem. Is there another shorter, better, more intuitive solution? Or perhaps there exists a way to construct the point $D$ in a simpler manner?
| Not sure if this is less barbaric but using simple trig: $DE=(a-x)\tan A$, $DC=\frac{x}{\cos A}$ so the equation to solve is $$(a-x)\frac{b}{a}=\frac{x\sqrt{a^2+b^2}}{a}$$ or $$x=\frac{ab}{\sqrt{a^2+b^2}+b}$$
Just another idea to construct point $E$: since $\triangle{DCE}$ is isosceles, it's easy to find $\angle{ACE}=(90°-A)/2$
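The closed form can be sanity-checked numerically; here is a small sketch of my own (the 3-4-5 triangle is just an example), using exact rational arithmetic since $c$ is an integer there:

```python
from fractions import Fraction
from math import isqrt

# Check x = a*b / (c + b) with c = sqrt(a^2 + b^2) on a 3-4-5 triangle.
a, b = Fraction(3), Fraction(4)
c = Fraction(isqrt(int(a * a + b * b)))    # c = 5 exactly
x = a * b / (c + b)                        # x = 12/9 = 4/3
blue = (a - x) * b / a                     # DE, from blue/(a-x) = b/a
red = x * c / a                            # DC, from red/x  = c/a
assert blue == red == Fraction(20, 9)      # the two lengths agree
```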
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3191278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Is it possible to calculate $\int_{0}^{\pi}(a+\cos{\theta})^nd\theta$, where $a$ is a nonzero integer? I have tried to answer by taking change the variable $\theta$ to $\theta/2$, so the integration is now over unit circle, then I have taken $z=e^{i\theta}$. Now I tried to use residue formula for integration, but I failed.
| $\int_0^{\pi} (a+\cos x)^n \ dx = \frac 12 \int_0^{2\pi} (a+\cos x)^n \ dx\\
\cos x = \frac 12 (e^{ix} + e^{-ix})\\
\frac 1{2^{n+1}} \int_0^{2\pi} (2a+e^{ix} + e^{-ix})^n \ dx\\
z = e^{ix}\\
dx = \frac {1}{iz}\ dz$
$\frac 1{2^{n+1}i} \oint_{|z| = 1} \frac {1}{z}(2a+z + z^{-1})^n \ dz$
Now the trick. When we expand $(2a+z + z^{-1})^n$ we only care about the constant term. The rest of the terms will evaluate to $0.$
$(2a+z + z^{-1})^n = \sum_\limits{k=0}^n {n\choose k} (z+z^{-1})^k(2a)^{n-k}$
There is only a constant term for $(z+z^{-1})^k$ if $k$ is even, and it will equal ${k\choose \frac{k}{2}}$
$\frac{\pi}{2^n}\sum_\limits{k=0}^{\lfloor \frac n2\rfloor} {n\choose 2k}{2k\choose k}(2a)^{n-2k}$
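A quick way to gain confidence in the final formula is to compare it against direct numerical integration; this is my own sketch, not part of the answer:

```python
import math

def closed_form(n, a):
    # (pi / 2^n) * sum_k C(n, 2k) * C(2k, k) * (2a)^(n - 2k)
    return (math.pi / 2 ** n) * sum(
        math.comb(n, 2 * k) * math.comb(2 * k, k) * (2 * a) ** (n - 2 * k)
        for k in range(n // 2 + 1)
    )

def numeric(n, a, steps=200000):
    # midpoint rule for the integral of (a + cos t)^n over [0, pi]
    h = math.pi / steps
    return sum((a + math.cos((i + 0.5) * h)) ** n for i in range(steps)) * h

for n, a in [(2, 1), (3, 2), (5, 3)]:
    assert math.isclose(closed_form(n, a), numeric(n, a), rel_tol=1e-6)
```

For instance $n=3$, $a=2$ gives $11\pi$, which matches expanding $(2+\cos\theta)^3$ and integrating term by term.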
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3191402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Prove or disprove that the integral converges using Taylor's series $\int_0^\infty\sin(\frac{\sin x}{\sqrt x})dx$
I have an idea to do something like this:
$\int_0^\infty\sin(\frac{\sin x}{\sqrt x})dx = \int_0^\infty(\frac{\sin x}{\sqrt x} + O(\frac{1}{\sqrt x}))dx$ and this integral should converge, but I have a feeling that I did something wrong
| $$
\begin{align}
\int_0^\infty\sin\left(\frac{\sin(x)}{\sqrt{x}}\right)\,\mathrm{d}x
&=\int_0^\infty\left(\frac{\sin(x)}{\sqrt x}+O\!\left(\frac{x^{3/2}}{1+x^3}\right)\right)\,\mathrm{d}x\\
&=\sqrt{\frac\pi2}+O\!\left(\frac{2\pi}3\right)
\end{align}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3191643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Describe all integral solutions of the equation $x^2 + y^2 = 2z^2$ such that $x,y,z > 0$, gcd$(x,y,z) = 1$, and $x > y$. As the title states, the question tasks me with finding all the integral solutions of the equation under the specified constraints. I have an idea of where to start due to a somewhat similar problem in my notes, but I'm having trouble adapting it to this new equation, and knowing where I need to change my process. My professor has provided the hint: "Use the circle $^2 + ^2 = 2$ and the lines
passing through the point $(1,1)$." Here's what I have so far:
$x^2 + y^2 = 2z^2$
$\frac{x^2}{z^2} + \frac{y^2}{z^2} = 2$
Let $X = \frac{x}{z}$ and $Y = \frac{y}{z}$
$X^2 + Y^2 = 2$
So we now have a circle with an origin at $(0,0)$ and radius of $\sqrt{2}$. I then drew up the circle and the line passing through $(1,1)$.
So the slope $\lambda$ of this line would be: $\lambda = \frac{Y-1}{X-1}$. This is the part where I get lost, in class we went off on a tangent related to these problems and I'm having trouble knowing exactly how to proceed. I have an idea of what the final answer will look like. The form we found for $x^2 + y^2 = z^2$ was: $(x,y,z) = (a^2 - b^2, 2ab, a^2 + b^2)$ Any help would be greatly appreciated!
| Start with a primitive Pythagorean triple $x^2 + y^2 = z^2$; there is a recipe for these. Then
$$ (x+y)^2 + |x-y|^2 = 2 z^2 $$
And those are all of them.
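The recipe alluded to above can be sketched in a few lines (my own illustration, not part of the answer): generate primitive triples via the classical $(m^2-n^2,\,2mn,\,m^2+n^2)$ parametrization and map each to a solution of $x^2+y^2=2z^2$.

```python
from math import gcd

# Sketch: primitive Pythagorean triples (p, q, z) = (m^2-n^2, 2mn, m^2+n^2)
# map to solutions of x^2 + y^2 = 2 z^2 via (x, y) = (p+q, |p-q|).
solutions = []
for m in range(2, 8):
    for n in range(1, m):
        if (m - n) % 2 == 1 and gcd(m, n) == 1:   # primitive-triple recipe
            p, q, z = m * m - n * n, 2 * m * n, m * m + n * n
            x, y = p + q, abs(p - q)
            assert x * x + y * y == 2 * z * z
            assert gcd(gcd(x, y), z) == 1          # still primitive
            solutions.append((max(x, y), min(x, y), z))

print(solutions[:3])   # starts with (7, 1, 5): 49 + 1 = 2 * 25
```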
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3191780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Summation formula for this? I have found the following summation formula based on a recurrence. It supposes $n = 2^k$ where k is an integer. I've intuitively discovered that the following closed form may be true (following the constraint on n), but I'm not sure why.
$$\sum_{i=0}^{\lg n} \frac{n}{2^i} = n\sum_{i=0}^{\lg n} \frac{1}{2^i} = n\left(1+ \frac{1}{2} + \frac{1}{4} +\cdots+ \frac{1}{n}\right) = 2n-1$$
I've reasoned that the last line should be true because if I plug in $n=32$ the solution is $63$, and if we think about the numbers being added as $1$s in a long bit string, we will end up with $\lg n+1$ ones in a row. I'm wondering if there is a summation formula or inductive proof that can show that this is true? I'm just waving my hands thinking this must be true, but I can't be sure.
| Let $m$ be a nonnegative integer. Then $$\sum_{i=0}^m \frac{1}{2^i}$$ is simply a finite geometric series with common ratio $r = 1/2$. In general, $$\sum_{i=0}^m r^i = \begin{cases} \frac{r^{m+1} - 1}{r - 1}, & r \ne 1, \\ m+1, & r = 1, \end{cases} \tag{1}$$ from which your desired result follows immediately.
The proof of $(1)$ is straightforward and is typically discussed in high school algebra. One sees that the summation is telescoping for $r \ne 1$:
$$(r - 1) \sum_{i=0}^m r^i = \sum_{i=0}^m r^{i+1} - r^i = \sum_{i=0}^m r^{i+1} - \sum_{i=0}^m r^i = \sum_{i=1}^{m+1} r^i - \sum_{i=0}^m r^i = r^{m+1} - r^0 = r^{m+1} - 1.$$ And when $r = 1$, the summation is trivial.
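The claimed closed form is easy to check directly for powers of two (a small sketch of my own), using exact integer arithmetic:

```python
# Direct check of sum_{i=0}^{lg n} n / 2^i == 2n - 1 for n a power of two.
for k in range(0, 20):
    n = 2 ** k
    total = sum(n // 2 ** i for i in range(k + 1))   # each term is an integer
    assert total == 2 * n - 1
```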
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3191920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Solving $\frac{dy}{dx} = ae^{-bx} - cy(x)$ How would you solve an equation in the form
$$\frac{dy}{dx} = ae^{-bx} - cy(x) $$
where $a, b, c$ are just constants. My ultimate goal is to find $y(x)$ without the derivative in there.
My confusion comes from the fact that the right hand side has both $y(x)$ and $x$ itself in it. I tried using the integrating factor method and it gets me a similar form of solution that I want, but not completely. So is this the method I should be using or is there another that works for this type of equation?
| $$y'+cy = ae^{-bx} \tag 1$$
Solving with the variation of parameter method :
First, solve the associated homogeneous ODE
$$\quad y'+cy = 0 \tag 2$$
The solution is :
$$y=\lambda e^{-cx}$$
where $\lambda$ is a constant with respect to $x$.
Second, apply the method of variation of parameter. This means that the constant $\lambda$ is now considered as a function of $x$.
$y=\lambda(x) e^{-cx}$ is no longer solution of Eq.$(2)$, but will be solution of Eq.$(1)$ :
$y'=\lambda'e^{-cx}-c\lambda e^{-cx}\quad$ Putting it into Eq.$(1)$ :
$$y'+cy = ae^{-bx}=(\lambda'e^{-cx}-c\lambda e^{-cx})+c(\lambda e^{-cx})$$
$$ae^{-bx}=\lambda'e^{-cx}$$
$$\lambda'=ae^{(c-b)x}$$
$$\lambda=\frac{a}{c-b}e^{(c-b)x}+C \qquad (\text{assuming } b\neq c)$$
$y=\left(\frac{a}{c-b}e^{(c-b)x}+C \right) e^{-cx}$
$$y(x)=\frac{a}{c-b}e^{-bx}+Ce^{-cx}$$
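The solution can be verified by substituting it back into the ODE; this is my own sketch (the constants are arbitrary, and it assumes $b \neq c$), checking $y' + cy = ae^{-bx}$ with a central finite difference:

```python
import math

def y(x, a, b, c, C):
    # candidate solution from the variation-of-parameters computation (b != c)
    return a / (c - b) * math.exp(-b * x) + C * math.exp(-c * x)

a, b, c, C = 2.0, 0.5, 1.5, 3.0     # arbitrary test constants
h = 1e-6
for x in [0.0, 0.7, 2.0]:
    # central-difference approximation of y'(x)
    dy = (y(x + h, a, b, c, C) - y(x - h, a, b, c, C)) / (2 * h)
    assert math.isclose(dy + c * y(x, a, b, c, C),
                        a * math.exp(-b * x), rel_tol=1e-5)
```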
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Power sets : Do the relations between P(A) and P(B) always mirror the relations between the sets A and B? If I am correct it is true that:
(1) "$P(A)$ is included in $P(B)$" implies "$A$ is included in $B$".
(2) "$P(A) = P(B)$" implies "$A = B$".
Might I conclude from this that the power sets of two sets always have the same relations as these two sets have with one another?
Are there classical counterexamples to this (hasty) generalization?
I can think of this as a counterexample :
The fact that $A$ and $B$ are disjoint does NOT imply that $P(A)$ and $P(B)$ are disjoint.
Attempt to prove (1) using the theorem: "$\{ x \}$ belongs to $P(S)$" $\Longleftrightarrow$ "$x$ belongs to $S$".
Let's admit that : $P(A)$ is included in $P(B)$.
Now, suppose (in view of refutation) that $A$ is not included in $B$.
It means that there exists an x such that x belongs to $A$ but not to $B$. And consequently that there is an $x$ such that $\{ x \}$ belongs to $P(A)$ but not to $P(B)$. If this were true, there would be a set $S$ such that $S$ belongs to $P(A)$ but not to $P(B)$. This contradicts our hypothesis according to which $P(A)$ is included in $P(B)$.
Conclusion: "$P(A)$ is included in $P(B)$" implies "$A$ is included in $B$".
| Let's take these sets as an example:
$$
\begin{align}
A &= \{1\} \\
B &= \{2\} \\
P(A) &= \{\emptyset, \{1\}\} \\
P(B) &= \{\emptyset, \{2\}\}
\end{align}
$$
A relation is in this context a function that takes two sets and gives true or false.
A simple counterexample relation is:
$$f(X, Y) = X \text{ contains a set}$$
Then we have:
$$
\begin{align}
f(A, B) &= \text{false} \\
f(P(A), P(B)) &= \text{true} \\
\end{align}
$$
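The disjointness counterexample from the question can be checked concretely (a small sketch of my own): two disjoint sets always share the empty set in their power sets.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

A, B = {1}, {2}
assert A.isdisjoint(B)                          # A and B are disjoint...
assert not powerset(A).isdisjoint(powerset(B))  # ...but P(A), P(B) are not:
assert powerset(A) & powerset(B) == {frozenset()}   # they share the empty set
```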
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Example of a sequence needed that satisfies certain conditions Can you please provide examples of sequence $\{a_1,a_2,\cdots\}$ such that $\sum_{i=1}^{\infty}a_i\to \infty$ while $\sum_{i=1}^{\infty}(a_i)^2<\infty$ with each $a_i\in[0,1)$. Thank you.
| A classical example is the sequence
$$ a = (1, 1/2, 1/3,1/4...) .$$
It is a well known result that
$$ \sum_{k=1}^{\infty} a_k = \sum_{k=1}^{\infty} 1/k = \infty. $$
On the other hand, it holds that
$$ \sum_{k=1}^{\infty} a_k^2 = \sum_{k=1}^{\infty} 1/k^2 = \pi^2/6 \quad (\dagger). $$
Showing the equality in $(\dagger)$, however, is nontrivial and is known as the Basel problem.
If you want the members of the sequence to be in $(0,1)$, then you can let it start at $a_2$ by defining
$$ b = (1/2,1/3,...). $$
Obviously, $b$ is now another sequence fulfilling your requirements.
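Numerically, the contrast between the two series is easy to see (my own sketch, not part of the answer): the partial sums of $1/k$ grow like $\ln N$, while those of $1/k^2$ stay below $\pi^2/6$.

```python
import math

# Partial sums up to N: the harmonic series keeps growing (like log N),
# while the series of squares converges to pi^2/6 from below.
N = 10 ** 6
harmonic = sum(1.0 / k for k in range(1, N + 1))
squares = sum(1.0 / k ** 2 for k in range(1, N + 1))
assert harmonic > 14                              # ~ ln(1e6) + 0.5772
assert squares < math.pi ** 2 / 6 < squares + 1e-5  # tail is about 1/N
```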
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Special case of Bertrand Paradox or just a mistake? I've been working on a question and it seems I have obtained a paradoxical answer. Odds are I've just committed a mistake somewhere, however, I will elucidate the question and my solution just in case anyone is interested.
I want to know what is the average distance between two points on a circle of radius 1 where we consider only the boundary points.
My attempt is as follows:
Consider a segment of the diameter x which is uniformly distributed between 0 and 2. Then you can calculate the distance between the points (2,0) and the point determined by x just by elementary geometry as this picture shows:
Here in the picture, the green segment is the geometric mean and the orange one is the distance whose distribution we want to know.
Just by calculating the expected value, we obtain:
$E\left(\sqrt{(4-2X)}\right) = \int_{0}^{2} \sqrt{(4-2x)}\cdot\frac{1}{2} dx = 1.333.... = \frac{4}{3}$
Where $\sqrt{(4-2x)}$ is the transformation of the random variable and $\frac{1}{2}$ is the pdf of a uniform distribution $[0,2]$.
Also, if we derive the pdf of the transformation we obtain the same result:
$y = \sqrt{(4-2x)} , x = 2- \frac{y^2}{2}, \mid\frac{d}{dy}x\mid = y$
$g(y)=f(x)\cdot\mid\frac{d}{dy}x\mid = \frac{1}{2}\cdot y$
$E(Y)= \int_{0}^{2}y\cdot\frac{1}{2}\cdot y dy = 1.333.... = \frac{4}{3} $
I have seen a different approach somewhere else where the distribution of the angle is considered as a uniform distribution between 0 and $\pi$ and the final result was:
$1.27... = \frac{4}{\pi}$
That's pretty much the problem I found. Maybe I just did something wrong in some step, but it all makes sense to me. I know this is not exactly what we refer to as Bertrand's paradox, but it suggests something like that, because both problems deal with chords of a circle, and maybe my result is wrong because it does not hold under rotations of the circle or something like that (I read a little bit about Bertrand's paradox).
That's pretty much it. Also, sorry for my bad English, and maybe I'm also wrong about something pretty elementary, since I've just started learning about probability theory. It's also my first post, so I will try to improve my exposition and LaTeX use in the following ones.
| Thanks, Erick Wong for your feedback. After your answer, I calculated the distribution of the arc length subject to the uniform distribution of the point on the diameter. In fact: if we want to express the arc length $l$ as a function of $x$, $l = f(x)$ we obtain:
$l = \arccos(1-x), x = 1-\cos{l}, |\frac{d}{dl}(x)| = \sin{l}$
$l_{pdf} = x_{pdf} \cdot |\frac{d}{dl}x| = \frac{1}{2}\cdot \sin{l}$.
So the arc length does not distribute uniformly, we have "lost it", we might say. That's what was wrong.
For instance, if the arc length obeys a uniform distribution [0, $\pi$], then we can calculate the segment $s$ as a function of the arc length:
We know $l = f(x)$ and want to know $s = h(l)$. If we calculate $s = g(x)$ we're done:
From the image in the question I posted, $s = g(x)=\sqrt{2x}$ (or the opposite segment $\sqrt{4-2x}$); then $h = g \circ f^{-1}$, $s = \sqrt{2(1-\cos l)}$
$E(s) = \int_0^\pi{\sqrt{2(1-\cos l)}\frac{1}{\pi}dl}=1.273... = \frac{4}{\pi}$
Also the pdf:
$s = h(l) = \sqrt{2(1-\cos l)} , l=h^{-1}(s)= 2\cdot \arcsin(\frac{s}{2}), |\frac{d}{ds}h^{-1}|=\frac{2}{\sqrt{4-s^2}}$
$s_{pdf} = \frac{1}{\pi} \frac{2}{\sqrt{4-s^2}}$
$E(s) = \int_0^2{s\cdot \frac{1}{\pi} \frac{2}{\sqrt{4-s^2}}ds} = 1.273... = \frac{4}{\pi}$
So we're done. Also, the pdf of the segment suggests something related to the Cauchy distribution. Not exactly but certainly it has to do something with it. If we read the description of Cauchy distribution in Wolfram MathWorld:
"The Cauchy distribution, also called the Lorentzian distribution or Lorentz distribution, is a continuous distribution describing resonance behavior. It also describes the distribution of horizontal distances at which a line segment tilted at a random angle cuts the x-axis."
And that's it. A really fascinating problem that introduces some subtle ideas of probability theory. If anyone knows something else please give me feedback. I really think there's a nice connection with Cauchy distribution.
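Both expected values are easy to confirm by Monte Carlo simulation (my own sketch, with an arbitrary seed): sampling the point uniformly on the diameter gives mean chord length $4/3$, while sampling the arc length (angle) uniformly gives $4/\pi$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Uniform point on the diameter: chord length sqrt(4 - 2x), mean 4/3.
x = rng.uniform(0, 2, N)
mean_diam = np.sqrt(4 - 2 * x).mean()

# Uniform arc length (angle): chord length sqrt(2(1 - cos l)), mean 4/pi.
l = rng.uniform(0, np.pi, N)
mean_angle = np.sqrt(2 * (1 - np.cos(l))).mean()

assert abs(mean_diam - 4 / 3) < 0.01
assert abs(mean_angle - 4 / np.pi) < 0.01
```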
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Convergent series by the Root Test Determining the convergence of $$\sum_{n=1}^{\infty}\frac{n^3\left(\sqrt{2}+(-1)^n\right)^n}{3^n}$$
Applying the Root Test: $$r=\lim_{n\to\infty}\sqrt[n]{\frac{n^3\left(\sqrt{2}+(-1)^n\right)^n}{3^n}}=\lim\sqrt[n]{n^3\cdot\left(\frac{\sqrt{2}+(-1)^n}{3}\right)^n}=\left(\lim n^{\frac{3}{n}}\right)\left(\limsup\frac{\sqrt{2}+(-1)^n}{3}\right)=\left(\lim n^{\frac{1}{n}}\right)^3\cdot\frac{1}{3}\left(\limsup\left(\sqrt{2}+(-1)^n\right)\right)=\frac{1}{3}\limsup_{n\to\infty}\left(\sqrt{2}+(-1)^n\right)<\frac{1}{3}\lim_{n\to\infty}\left(3\right)=1$$
Therefore since $r<1$, the series converges. Is this correct?
| It does converge
but your proof is not correct.
What you should write is
$\left|\dfrac{\sqrt{2}+(-1)^n}{3}\right| \le \left|\dfrac{\sqrt{2}+1}{3}\right| \lt 1$
and this shows convergence.
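Numerically one can see where the $n$-th roots of the terms actually land (a sketch of my own, not part of the answer): they approach $(\sqrt2+1)/3 \approx 0.805$ along even $n$ and stay below $1$ for large $n$.

```python
import math

# n-th root of the term a_n = n^3 (sqrt(2)+(-1)^n)^n / 3^n
def nth_root_of_term(n):
    base = (math.sqrt(2) + (-1) ** n) / 3
    return n ** (3 / n) * abs(base)

for n in [100, 1000, 10000]:
    assert nth_root_of_term(n) < 1

# the even-n subsequence gives the limsup, (sqrt(2)+1)/3
assert math.isclose(nth_root_of_term(10 ** 6), (math.sqrt(2) + 1) / 3,
                    rel_tol=1e-4)
```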
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove the following ideal $I$ is not a principal ideal. Prove the ideal $I = \left<X^2,3\right> \space $of$ \space \mathbb{Z}[X]$ is not a principal ideal.
The solution I have been given is the following:
Assume for contradiction that $I$ were a principal ideal, i.e., $I = \left<f(X)\right>$ for some $f(X) \in \mathbb{Z}[X]$. This means that
$$f(X) = 3 \cdot g(X) + X^2 \cdot h(X), \tag{1}$$
$$3 = f(X) \cdot k(X), \tag{2}$$
$$X^2 = f(X) \cdot s(X), \tag{3}$$
for some polynomials $g(X), h(X), k(X)$ and $s(X)$ in $\mathbb{Z}[X]$. As $\mathbb{Z}$is an integral domain, the equation $(2)$ implies that $0 = \deg(3) = \deg f(X) + \deg k(X)$, whereby $\deg f(X) = 0$, i.e., $f(X)$ is a constant polynomial, say $f(X) = n \in \mathbb{Z}$. Next, the equation $(1)$ implies that $n = f(0) = 3g(0)$,
whereby $n \in 3 \mathbb{Z}$. Thus, all the coefficients of $n \cdot s(X)$ are in $3\mathbb{Z}$, contradicting the fact that this is supposed to equal $X^2$ by the equation $(3)$.
Could someone help me break this down a bit please. I am struggling to come to terms with the definition of a degree. These are the current definitions I have in simple terms:
Ideal - A subring which agrees with the relevant axioms
Principal ideal - If $I$ is generated by a set with only one element
Apologies if this seems quite vague. TIA
| I'll assume you are happy with the first line.
Equation (1) is saying $f$ must be a combination of the generators given. Similarly, equations (2) and (3) say the generators given must be recoverable from $f$. Thus, the equations together give equality.
The degree of a univariate polynomial in $x$ is simply the highest power of $x$ occurring (so with nonzero coefficient). This is a useful tool allowing us to make the deductions in the next paragraph. That is, if $g = fk$, then $\deg g = \deg f + \deg k$, as used in the case $\deg g = 0$ to force, in particular, $\deg f = 0$.
Finally, substituting $X = 0$ in equation 1 forces $n$ to be a multiple of 3, deriving a contradiction in equation 3.
Is it now clear? If not, what in particular could I explain better?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why do early math courses focus on the cross sections of a cone and not on other 3D objects? Conic sections seem to get special attention in early math classes.
My question is why do these cross sections of cones deserve more attention than those of, say, a rectangular prism, a cube, or some other 3D (or any dimensional) object?
I have a couple of guesses:
*
*Studying a particular "simple" example can provide insight into the general idea (i.e. cross sections of higher dimensional objects). And conic sections are deemed simple.
*The applications of ellipses, parabolas, and hyperbolas are just so vast that their graphs and properties deserve special studying (e.g. elliptical orbits).
I'd really appreciate some outside thoughts on this, even if it is just speculation. I've been giving cross sections some special study attention recently and have done a handful of google searches to try and understand why conic sections keep coming up (as can be seen in a lot of math curriculum).
Thank you!
|
My question is why do these cross sections of cones deserve more attention than those of, say, a rectangular prism, a cube, or some other 3D (or any dimensional) object?
Because there's nothing else. Look at the common 3D solids. The cross sections of a cuboid (and in fact any other polytope) are just a bunch of straight lines connected together, so this is just a piecewise linear graph, and piecewise linear graphs don't need to be motivated as cross-sections of three-dimensional objects. The cross sections of spheroids (ellipsoids) are ellipses, which come up in cross sections of cones anyway. The cross sections of a cylinder are either trapeziums or ellipses. These are much less interesting and rich than conic sections!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3193067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 4
} |
Why does $(x \to a) ≠ (x = a)$, but $f(x \to a) = f(x = a)$ I am really satisfied that $(x \to a) ≠ (x=a)$, and if that were not right, then the whole process of limits would amount to dividing by zero, and that is a crime.
Since $(x \to a) + h = (x=a)$ with $h ≠ 0$, why does $f(x \to a) = f(x = a)$?
NOTE: I am talking about continuous functions.
| Your question seems unclear, but perhaps emphasizing this distinction will help:
A function $f: \mathbb R \longrightarrow \mathbb R$ is continuous at $a \in \mathbb R$ if (and only if) for every $\varepsilon>0$ there exists $\delta>0$ such that $|f(x)-f(a)|<\varepsilon$ whenever $|x-a|<\delta$.
We write $\lim_{x \to a} f(x)=L$ if and only if for every $\varepsilon>0$ there exists $\delta>0$ such that $|f(x)-L|<\varepsilon$ whenever $0<|x-a|<\delta$.
Noticing the differences between the two formal definitions, we see that a function $f$ is continuous at $a \in \mathbb R$ if and only if $\lim_{x \to a} f(x)=f(a)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3193342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof verification: Prove $\sqrt{n}$ is irrational. Problem
Let $n$ be a positive integer and not a perfect square. Prove that $\sqrt{n}$ is irrational.
Proof
Consider proving by contradiction. If $\sqrt{n}$ is rational, then there exist two coprime integers $p,q$ such that $$\sqrt{n}=\frac{p}{q},$$ which implies $$p^2=nq^2.$$
Moreover, since $p, q$ are coprime, by Bézout's theorem, there exist two integers $a,b$ such that $$ap+bq=1.$$
Thus
$$p=ap^2+bpq=anq^2+bpq=(anq+bp)q,$$ which implies $$\sqrt{n}=\frac{p}{q}=anq+bp \in \mathbb{N^+},$$ which contradicts the assumption that $n$ is not a perfect square.
| Proof by contradiction is not needed. It suffices to "take the contrapositive". This is when you switch the antecedent and the conclusion of an implication and negate them. Formally, it looks like this:
$$ P \implies Q \text{ has the contrapositive } \neg Q \implies \neg P$$
It applies to the proof in the following way. Your argument shows that if $\sqrt{n}$ is rational, then it must be an integer. But if $\sqrt{n}$ is an integer, then $n$ must be a perfect square.
This means that if $n$ is not a perfect square, then $\sqrt{n}$ is not an integer, so $\sqrt{n}$ is not rational. Q.E.D.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3193554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
How do I evaluate the following combination of random variables? Is it martingale? I'm about to analyse the following expression
$$Z_n:=\prod_{k=1}^n \left(\frac{\frac{Y_k}{\prod_{i=1}^k X_i}}{\sum_{j=1}^k \frac{Y_j}{\prod_{i=1}^j X_i}} \right),$$
where the $Y_j$, for all $j\in \mathbb{N}$, are independent and $\Gamma(\beta,1)$-distributed random variables, and the $X_i$, for all $i \in \mathbb{N}$, are independent and identically Beta$(\alpha,\beta)$-distributed. I have a presumption that this expression is a martingale and tried to prove the martingale property. Let $\mathcal{F}_n=\sigma(X_i, Y_i : i \le n)$
$$\mathbb{E}[Z_{n+1}|\mathcal{F}_n]=Z_n\mathbb{E}\left[\frac{\frac{Y_{n+1}}{\prod_{i=1}^{n+1} X_i}}{\sum_{j=1}^{n+1} \frac{Y_j}{\prod_{i=1}^j X_i}} |\mathcal{F}_n \right]=Z_n \frac{1}{\prod_{i=1}^n X_i}\mathbb{E}\left[\frac{\frac{Y_{n+1}}{X_{n+1}}}{\sum_{j=1}^{n} \frac{Y_j}{\prod_{i=1}^j X_i}+\frac{Y_{n+1}}{\prod_{i=1}^{n+1} X_i} } |\mathcal{F}_n \right]=?$$
I could only calculate up to this step. Does somebody see how to proceed further? Maybe somebody has already had experience with this combination of random variables and could give me a hint about other options for evaluating this expression?
| I assume that the family $\left(X_i,Y_j,i,j\geqslant 1\right)$ is independent.
There exists a formula for the conditional expectation of a function of two independent vectors with respect to the first vector. Let $f\colon \mathbb R^{k}\times\mathbb R^\ell\to\mathbb R$ be a measurable function and $U$ and $V$ two independent random vectors with values in $\mathbb R^{k}$ and $\mathbb R^\ell$ respectively. Then
$$
\mathbb E\left[f\left(U,V\right)\mid U\right]=g\left(U\right),
$$
where the function $g\colon\mathbb R^k\to\mathbb R$ is defined by $g\left(u\right)=\mathbb E\left[f\left(u,V\right) \right]$.
So it seems that a first step in the question is to compute
$$
\mathbb E\left[
\frac{\frac{Y}{X}}{a+b\frac{Y}{X}}
\right]
$$
for fixed real numbers $a$ and $b$, where $Y$ has a $\Gamma(\beta,1)$-distribution and $X$ is independent of $Y$ and has a Beta distribution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3193698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Semigroups with no morphisms between them Given two monoids we always have a morphism from one to the other thanks to the presence of the identity element.
Are there examples of non-empty semigroups that have no morphisms from one to the other? The destination semigroup can't be finite, because if we have an idempotent present, we just map everything to it and get a constant morphism. Apparently, it can't contain an idempotent, period.
Put another way, the subcategory of finite semigroups is clearly strongly connected. Is the same true of the category of all semigroups?
Are there two such non-empty semigroups that don't have a morphism in either direction?
| No morphism in either direction
Choose two distinct primes $p,q\in\Bbb N_+$ and consider the additive semi-groups
$$\begin{align}
P&:=\Big\{\frac n{p^m}\mid n,m\in\Bbb N_+\Big\}\subseteq\Bbb Q_+,\\
Q&:=\Big\{\frac n{q^m}\mid n,m\in\Bbb N_+\Big\}\subseteq\Bbb Q_+.
\end{align}$$
Assume there is a morphism $\phi:P\to Q$ and let $a/q^b:=\phi(1)\in Q$ for some $a,b\in\Bbb N_+$. Further, for every $m\in\Bbb N_+$ let
$$\frac{a_m}{q^{b_m}}:=\phi\Big(\frac1{p^m}\Big)\in Q, \qquad\text{for some $a_m,b_m\in\Bbb N_+$}.$$
This means
$$\begin{align}
p^m\cdot \frac{a_m}{q^{b_m}}&=\phi\Big(p^m\cdot \frac1{p^m}\Big)\\
&=\phi(1)\\
&=\frac a{q^b},
\end{align}$$
which implies $$p^mq^b\cdot a_m=q^{b_m}\cdot a.$$
Since the left side is divisible by $p^m$, so must be the right side. Since $p$ and $q$ are distinct primes, $a$ must be divisible by $p^m$ for all $m\in\Bbb N_+$, which is a contradiction. Hence there cannot be such a morphism, and since the argument is symmetric, there is no such morphism in either direction.
$\square$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3193852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 0
} |
Does Fourier imply Laplace? Can we find a function $f(t)$ for which $$\int_{-\infty}^{+\infty}f(t)e^{-j\omega t}dt,$$ converges but $$\int_{-\infty}^{+\infty}f(t)e^{-st}dt,$$ does not ?
Here, $j^2=-1$, $\omega$ is a real number and $s$ is a complex number.
I am thinking that we can find such an $f(t)$, for example, when $Re(s)>0$.
| $f(t)=\frac 1 {1+t^{2}}$ is such a function: since $f$ is absolutely integrable, the Fourier integral converges for every real $\omega$, but for any $s$ with $\operatorname{Re}(s)\neq 0$ the factor $e^{-st}$ grows exponentially at one end of the real line while $f$ decays only polynomially, so the Laplace integral diverges.
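To see the divergence concretely (my own sketch, not part of the answer): for $s = 1$, the contribution of the Laplace integral from $[-T, 0]$ grows roughly like $e^T/T^2$ as $T$ increases.

```python
import math

def laplace_piece(s, T, steps=200000):
    # midpoint Riemann-sum approximation of the integral of
    # e^(-s*t) / (1 + t^2) over [-T, 0]
    h = T / steps
    return sum(math.exp(-s * t) / (1 + t * t)
               for t in (-T + (i + 0.5) * h for i in range(steps))) * h

# For s with positive real part, the t -> -infinity end blows up:
vals = [laplace_piece(1.0, T) for T in (10, 20, 30)]
assert vals[0] < vals[1] < vals[2]
assert vals[2] > 1e9          # grows roughly like e^T / T^2
```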
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3194078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Finding the floor of $\sqrt{d^2-1}$ If we are given an integer $d$, then I've seen it written that $\lfloor\sqrt{d^2-1}\rfloor=d-1$, but how is this found?
It's not immediately obvious, at least not to me. It makes me feel there must be an algorithm that can be used.
Would anyone be able to impart their knowledge on how to find the floor of such an expression?
I read it here : Show that the simple continued fraction of $\sqrt{d^2-1}$ is $[d-1; \overline{1, 2d-1}]$ for $d \geq 2$
| It is because $$(d-1)^2 \le d^2-1 < d^2,$$ so $$d-1 \le \sqrt{d^2-1} < d,$$ so $$\lfloor \sqrt{d^2-1}\rfloor = d-1.$$
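This is easy to confirm for many values of $d$ (a small sketch of my own), since Python's `math.isqrt` computes the exact integer floor of the square root:

```python
from math import isqrt

# isqrt(m) is exactly floor(sqrt(m)) for nonnegative integers,
# so we can check the identity directly.
for d in range(1, 10001):
    assert isqrt(d * d - 1) == d - 1
```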
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3194249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |